The New Need to Create a Secondary Perimeter to Detect Malware’s Presence

Malware – and specifically ransomware – regularly makes headlines as some business somewhere in the world reports having its data encrypted. Given how routine these incidents have become, companies need to acknowledge that their standard first-line defenses, such as cybersecurity and backup software, no longer completely suffice to detect malware. Companies need to take new steps to shore up these traditional defenses which, for many, will start with creating a secondary perimeter around their backup stores to detect the presence of malware.

The companies getting infected by malware are not what one would classify as “small.” By way of example, a story appeared earlier this week about an 800-bed hospital in Malvern, Australia, that had the medical records of 15,000 of its cardiology patients compromised and encrypted at the end of January 2019.

While I am unfamiliar with this hospital’s IT staff and procedures as well as the details of this incident, one can make two educated observations about its IT operations:

  • One, the hospital is sufficiently large that it likely had anti-virus software and firewalls in place that, in a perfect world, would have detected the malware and thwarted it.
  • Two, it probably did regular backups (nightly or weekly) of its production data. Even if the malware attack did succeed, it should have been able to use those backups to recover.

So, the questions become:

  1. Why is this hospital, or any company for that matter, still susceptible to something as theoretically preventable as a malware attack in the form of ransomware?
  2. Why could the hospital not use its backups to recover?

Again, sufficient details are not yet publicly available about this attack to know with certainty why these defenses failed or if they were even in place. If one or both of these defenses were not in place, then this hospital was susceptible to becoming a victim of this sort of attack. But even if both of these defenses were in place, or even if just one was, it begs asking, “Why did one or both of these defenses not suffice?”

The short answer is that both of these defenses remain susceptible to malware attacks, whether used separately or together. This deficiency does not necessarily originate with poorly designed anti-virus software, backup software or firewalls. Rather, malware’s rapid evolution and maturity challenge the ability of cybersecurity and backup software providers to keep pace with it.

A 2017 study published by G DATA security experts revealed they discovered a new malware strain about every four seconds.  This massive number of malware strains makes it improbable that anti-virus software and firewalls can alone identify every new strain of malware as it enters a company.  Malware’s rapid evolution can also result in variations of documented ransomware strains such as Locky, NotPetya, and WannaCry slipping through undetected.

Backup software is also under attack by malware. Strains of malware now exist that may remain dormant and undetected for some time. Once inside a company, this malware first infects production files over a period of days, weeks or even months before it detonates. During the malware’s incubation period, companies back up these infected production files. At the same time, as part of their normal backup operations, they delete their expiring backups.

After a few weeks or months of routine backup operations, all backups created during this time will contain infected production files. Then when the malware does detonate in the production environment, companies may get caught in a Zero-day Attack Loop.
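
To see the arithmetic behind this loop, consider the minimal sketch below, which checks whether any retained backup predates the infection. The retention period, infection date, and dwell time used here are hypothetical values chosen for illustration, not figures from any actual incident or product.

```python
from datetime import date, timedelta

# Hypothetical scenario: nightly backups kept for 30 days,
# malware infects files on day 0 and detonates 60 days later.
RETENTION_DAYS = 30
infection_date = date(2019, 1, 1)
detonation_date = infection_date + timedelta(days=60)

# Nightly backups taken over the whole period (90 days before infection onward).
backups = [infection_date - timedelta(days=90) + timedelta(days=i) for i in range(150)]

# At detonation time, only backups inside the retention window still exist.
retained = [b for b in backups
            if detonation_date - timedelta(days=RETENTION_DAYS) <= b <= detonation_date]

# A backup is "clean" only if it was taken before the infection began.
clean = [b for b in retained if b < infection_date]

print(f"Backups still retained: {len(retained)}")
print(f"Clean restore points:   {len(clean)}")  # 0 means the company is caught in the attack loop
```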

Using cybersecurity software on the perimeter of corporate IT infrastructures and backup software inside the IT infrastructure does help companies detect and prevent malware attacks as well as recover from them. However, the latest strains of malware reflect its continuing evolution and growing sophistication, which better equip them to bypass these existing corporate countermeasures, as evidenced by the attack on this hospital in Australia and others too numerous to mention around the world.

For these reasons, backup software that embeds artificial intelligence, machine learning, and, yes, even cybersecurity software is entering the marketplace. Using these products, companies can create a secondary defense perimeter around their data stores that provides another means to detect existing and new strains of malware and better positions them to recover successfully from malware attacks when they do occur.
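
As a rough illustration of what such a secondary perimeter might watch for, the hypothetical sketch below flags a backup job whose file-change rate spikes far above its historical norm. The data and threshold are invented for illustration and do not represent how any particular backup product’s analytics actually work.

```python
import statistics

# Hypothetical fraction of files changed in each nightly backup job.
change_rates = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.61]  # the last night spikes

history, latest = change_rates[:-1], change_rates[-1]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag the latest backup if it sits far outside the historical norm
# (a crude stand-in for the ML models such products might embed).
if latest > mean + 4 * stdev:
    print(f"ALERT: {latest:.0%} of files changed vs. a norm of {mean:.0%}; possible ransomware activity")
else:
    print("Backup change rate within normal range")
```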




Security Industry Turning to Big Data to Accelerate Analysis of Event and Log Data

Yesterday I broke away from my normal routine of analyzing enterprise data protection and data storage technologies to take a closer look at enterprise security. To do so, I stopped by the Omaha Tech Security Conference held at the local Hilton Omaha conference center and visited some of the vendors’ booths to learn more about their respective technologies. In so doing, it quickly became evident from my conversations with a number of security providers that they recognize their need to introduce Big Data analytics into their products to convert the data, events, and incidents that they record and log into meaningful analysis that organizations can consume and act upon.


In recent months my interest in the security industry has been piqued by a growing sense that corporate investment in security is evolving from a series of peaks and valleys into regular, ongoing investment. This change in attitude and approach to investing in enterprise security was confirmed by every security professional (reseller, vendor and end-user) with whom I have spoken over the past few weeks and at this conference in particular.

While many of the security professionals with whom I spoke quickly pointed out that investments in security software and/or hardware equate to neither a secure nor a compliant infrastructure, simply providing regular, ongoing funding for security software and hardware serves as an important first step toward building a secure, compliant enterprise.

It was then that those conversations turned to the topic of how organizations, once they have made the decision to budget for security on an annual basis, can:

  • Secure their data
  • Keep the “right” data in (i.e., within or behind corporate firewalls)
  • Keep the “wrong” people out
  • Keep the “right” data in the “right” hands at the “right” time.

Achieving any of these ideals has become much more elusive in recent years despite the advent and growth of security technologies such as data loss prevention, encryption, and security information and event management (SIEM). When I discussed how to best deliver on these requirements with some of the providers on site, they shared the following thoughts:

  1. SIEM appliances and software have lost some of their luster. One reseller explained to me that initially SIEM appliances and software held a great deal of appeal due to their ability to monitor information, events, and logs from multiple security solutions. More recently, that perception has changed. Even though SIEM appliances and software gather all of this data, someone still needs to analyze it, identify the threats, and act on them in a timely manner. Due to the massive amount of information and events gathered, this quickly becomes an overwhelming task.
  2. Data Loss Prevention (DLP) software suffers from a similar inability of organizations to quickly analyze the data. DLP software differs from SIEM software in that its goal is to prevent the distribution or release of an organization’s data to individuals, either inside or outside of the organization, who are not authorized to access or view it. While its objective is again simple to state, executing on it is a much more complex task. Like SIEM software, DLP software must process large amounts of data quickly and efficiently. This again requires individuals dedicated to the task of creating policies who then administer them, track the results, and tweak them as needed to get the desired results.

The difficulty of quickly, efficiently, and effectively performing meaningful analysis on these types of data took some of the shine off these technologies in the last couple of years. However, it also explains why providers have begun to introduce Big Data analytics into their solutions. While Big Data analytics will not absolve organizations of the need to create policies and analyze the security data they collect, it does introduce more automation and simplicity into the process. That better equips organizations to protect their infrastructure and should help them more confidently and more broadly adopt security technologies going forward.




DCIG 2014-15 Security Information and Event Management (SIEM) Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its DCIG 2014-15 Security Information and Event Management (SIEM) Appliance Buyer’s Guide. In this Buyer’s Guide, DCIG weights, scores and ranks 29 SIEM appliances from nine (9) different providers. Like all previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations of all sizes need when selecting a SIEM appliance to gain visibility into their security posture through usable and actionable information.


Data security is a part of the IT infrastructure that should take care of itself. Companies have enough to worry about without always looking over their shoulder to make sure no one is stealing vital information.

As most organizations recognize, this is NOT the case. Security specialists are rarely without work for the simple reason that almost every day a headline reads “International Company [you fill in the blank] Suffers Massive Data Breach.” Read deeper into those articles and a company representative is often quoted as saying something akin to, “The breach happened a couple days ago and we just caught it. We’re still trying to figure out how many of our customers were affected and who is responsible.”

The truth of the matter is that data security does not take care of itself. But SIEM solutions take the edge off of these concerns by acting as a constant watchdog that performs several services:

  • Logging information
  • Correlating data
  • Alerting security administrators as soon as a breach is detected
  • Providing a dashboard that gives a picture of what is happening in the environment at any given time

SIEM solutions and the dashboards they offer put a big dent in addressing that problem. When a breach is detected, an administrator can pull up a dashboard that breaks down every user session that has taken place by login name, location, applications launched, and more. Using these dashboards, the security administrator can make short work of finding those responsible.

This information is presented in easily accessible charts and lists that can be used for personal protection and also for forensic investigations should the need arise. Further, they may be shown to potential customers who are worried about security and want proof that the company is doing everything it can to protect and secure their data.
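
To make the correlation and alerting functions described above more concrete, here is a minimal, hypothetical sketch of a single correlation rule: repeated login failures from one source followed by a success raises an alert. The event format and threshold are assumptions for illustration, not taken from any SIEM appliance covered in this Guide.

```python
from collections import defaultdict

# Hypothetical, simplified login events as a SIEM might aggregate them.
events = [
    {"src": "10.0.0.7", "user": "admin", "result": "fail"},
    {"src": "10.0.0.7", "user": "admin", "result": "fail"},
    {"src": "10.0.0.7", "user": "admin", "result": "fail"},
    {"src": "10.0.0.7", "user": "admin", "result": "success"},
    {"src": "10.0.0.9", "user": "jdoe",  "result": "success"},
]

FAIL_THRESHOLD = 3
failures = defaultdict(int)

for e in events:
    key = (e["src"], e["user"])
    if e["result"] == "fail":
        failures[key] += 1
    elif failures[key] >= FAIL_THRESHOLD:
        # Correlation rule: N failures followed by a success from the same source.
        print(f"ALERT: possible brute-force login for {e['user']} from {e['src']}")
        failures[key] = 0
```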

Every SIEM appliance contained in the DCIG 2014-15 SIEM Appliance Buyer’s Guide performs the following primary functions:

  1. Data and log aggregation
  2. Data correlation
  3. Alerting
  4. Dashboarding
  5. Log and data retention and protection.

Many also perform three (3) secondary functions:

  1. Performing forensic analysis on the information
  2. Serving as an incident management tool
  3. Performing compliance monitoring

It is important that prospective buyers keep in mind their organization’s security requirements as they look to acquire one of these SIEM appliances. What might be the best SIEM solution for a large, international company might be too robust for a smaller organization. A company must be aware of its own security needs before investing time and money in a SIEM solution.

It is in this context that DCIG presents the DCIG 2014-15 SIEM Appliance Buyer’s Guide. As prior Buyer’s Guides have done, this Buyer’s Guide puts at the fingertips of organizations a comprehensive list of SIEM appliances and the features they offer in the form of detailed, standardized data sheets that can assist them in this important buying decision.

This DCIG 2014-15 SIEM Appliance Buyer’s Guide accomplishes the following objectives:

  • Provides an objective, third party evaluation of SIEM solutions that weights, scores and ranks their features from an end user’s viewpoint
  • Includes recommendations on how to best use this Buyer’s Guide
  • Scores and ranks the features on each SIEM appliance based upon criteria that matter most to end users so they can quickly know which products are the most appropriate for them to use and under what conditions
  • Provides data sheets for 29 SIEM appliances from nine (9) different providers so end users can do quick comparisons of the features that are supported and not supported on each product
  • Gives any organization the ability to request competitive bids from different providers of SIEM appliances to do apples-to-apples comparisons of these products

The DCIG 2014-15 SIEM Appliance Buyer’s Guide Top 10 solutions include (in alphabetical order):

  • BlackStratus MIDWAY
  • Hewlett-Packard ArcSight AE-7526
  • Hewlett-Packard ArcSight AE-7566
  • Hewlett-Packard ArcSight AE-7581
  • IBM Security QRadar SIEM 3105
  • IBM Security QRadar SIEM 3124
  • LogRhythm All-in-One (XM) 4300
  • LogRhythm All-in-One (XM) 6300
  • McAfee ETM-6000
  • TIBCO LogLogic MX4025

The LogRhythm All-in-One (XM) 6300 SIEM appliance achieved the Best-in-Class ranking in this inaugural DCIG 2014-15 SIEM Appliance Buyer’s Guide. Scoring at or near the top in every category (Hardware, Software, Management and Support) evaluated in this Buyer’s Guide, it represents the best of what SIEM appliances currently have to offer.

In doing its research for this Buyer’s Guide, DCIG uncovered some interesting statistics about SIEM appliances in general:

  • 100% include log management, application monitoring and audit report capabilities
  • 90% provide some type of RAID configuration
  • 79% of appliances provide the ability to export data as comma separated values (CSV)
  • 72% support distributed searches across multiple data stores
  • 58% can achieve maximum event processing rates of 2,500 EPS
  • 24% can provide logging or monitoring on more than 2000 systems
  • 21% offer 10 TB or more of storage capacity on their appliance

The DCIG 2014-15 SIEM Appliance Buyer’s Guide is immediately available through the DCIG analyst portal for subscribing users by following this link.




The Proper Roles that SIEM Appliances Should Fulfill in Organizations

Data security is a part of the IT infrastructure that should take care of itself. Companies have enough to worry about without always looking over their shoulder to make sure no one is stealing vital information.

As most organizations recognize, this is NOT the case. Security specialists are never without work for the simple reason that almost every day a headline reads “International Company [you fill in the blank] Suffers Massive Data Breach.” Read deeper into those articles and a company representative is often quoted as saying something akin to, “The breach happened a couple days ago and we just caught it. We’re still trying to figure out how many of our customers were affected and who is responsible.”

The truth of the matter is that data security does not take care of itself. But Security Information and Event Management (SIEM) solutions take the edge off these concerns by acting as a constant watchdog that performs several services:

  • Logging information
  • Correlating data
  • Alerting security administrators as soon as a breach is detected
  • Providing a dashboard to give an easily accessible picture of what is happening in the environment at any given time.

Simply put, SIEM solutions give organizations visibility into their security posture by providing usable and actionable information.

Large enterprise organizations are leading the charge in the adoption of SIEM appliances. Many of these organizations implemented SIEM solutions largely to meet the internal and external compliance requirements that come with their size, but a growing number of smaller organizations are adopting these solutions due to the sensitive information they handle.

The U.S. Commerce Department’s National Institute of Standards and Technology (NIST) released its Framework for Improving Critical Infrastructure Cybersecurity in early 2014, outlining five (5) ways organizations with critical systems can protect themselves and their data from a cyberattack. The five areas that this framework outlined include:

  1. Identify
  2. Protect
  3. Detect
  4. Respond
  5. Recover

Without a SIEM solution, step 3, detection, is nearly impossible. Without detection, response and recovery are entirely impossible.

The hard truth is that it is impossible to prevent all breaches. The next best thing to all-out prevention is a good protection system and a planned, swift course of action in the case of a breach. SIEM solutions play a large role in quickly detecting breaches and many can be customized to provide immediate responses upon detection of a breach.

SIEM solutions do not prevent breaches. They are not force fields. They do not attack intruders. Rather they provide immediate, real-time alerts to security administrators and the organizations for which they work. In addition, they provide a tangible, measurable picture of where a company stands in the once-vague area of data security.




2014 Mobile Data Management Buyer’s Guide Now Available

DCIG is pleased to announce the release of its 2014 Mobile Data Management (MDM) Buyer’s Guide, which weights, scores and ranks over 100 features. Like previous Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations need when selecting Mobile Data Management software to help meet the security, compliance and Bring-Your-Own-Device (BYOD) challenges of increasingly mobile enterprises.

DCIG invested hundreds of hours designing a survey that would capture the data that matters most to prospective mobile data management purchasers, gathering the relevant data, and then analyzing the results. The data collection survey included over 150 scored questions. 

The resulting data was categorized, standardized and distilled into summary scoring and ranking tables as well as a one-page data sheet for each product. This powerful combination of summary data and data sheets makes it easy to do quick, side-by-side comparisons of mobile data management vendors, enabling organizations to quickly get to a short list of products that may meet their requirements.

Recent statistics surrounding Mobile Data Management and the bring your own device (BYOD) movement are astounding. Consider these facts:

  • Sixty-one percent of small to mid-sized businesses (SMBs) have adopted a BYOD policy or initiative for employee-owned smartphones, tablets, or computers.
  • In 2011, worldwide mobile enterprise management software revenue totaled $444.6 million. This number is expected to grow at a compound annual growth rate (CAGR) of 31.8% over the forecast period, which would put total Mobile Enterprise Management (MEM) software revenue at roughly four times that amount by 2016.
  • In 2010-2011 companies such as MobileIron, AirWatch, Good Technology, Fiberlink and Zenprise each realized triple-digit growth.

The advent of multi-functioning mobile devices gave corporate America a whole new set of possibilities in the workplace.  Employers and employees alike realized the potential for growth and productivity gains from devices that went wherever they did.  Apple’s iPhone and iPad emerged, then the Android platform, all of which forced organizations to move past a corporate-liable policy and accept a BYOD strategy.

Though the concept of allowing private mobile devices at work may not be entirely new, how organizations have decided to deal with the onslaught of usage is.  Instead of managing the entire personal device, organizations simply want to control the trail of sensitive information.

Rather than issue corporate devices, organizations began to look into geo-fencing as a means to control data.  Other state-of-the-art solutions were needed to alleviate the risk of data leakage and augment security around the information going to and from these devices over the network.  Therefore, management of devices needed to be flexible.  Solutions needed to be more open and able to work with an on-premise infrastructure, the cloud, or a hybrid delivery model.

Research has found that the mobility BYOD affords could be a way to maintain company efficiency.  Despite this fact, and even with the use of BYOD on the upswing, twenty-six percent of businesses have yet to set up comprehensive Mobile Data Management strategies alongside their BYOD plans.

As cloud adoption continues to gain acceptance, so does the concern about the administrative and security features available for differing mobile operating systems.

DCIG understands these needs and has risen to the unique challenge of providing you and your organization with a comprehensive list of MDM providers and their competing features.  Our goal is to assist you in this all-important buying decision while removing much of the mystery around how MDM solutions are configured and the stress of selecting which ones are suitable for which purposes.

It is in this context that DCIG presents its 2014 Mobile Data Management Buyer’s Guide. As prior Buyer’s Guides have done, it puts at the fingertips of organizations a comprehensive list of Mobile Data Management providers. This Guide includes detailed, standardized data sheets that list out the features of each Mobile Data Management vendor so they can understand the benefits and drawbacks of each one and then make an informed buying decision.

The DCIG 2014 Mobile Data Management Buyer’s Guide Top 10 solutions include (in alphabetical order): Amtel Lifecycle Management, Excitor DME Mobile Device Manager, Fiberlink MAAS 360 Mobile Device Manager, Fixmo EMP, MobileIron Advanced Mobile Management, Motorola Services Platform V4, SAP Afaria, Sophos Mobile Control, Symantec Mobile Management Suite and Tangoe Mobile Device Management.

The DCIG 2014 Mobile Data Management Buyer’s Guide is immediately available. It may be downloaded for no charge with registration by following this link.




New Solutions to Antivirus are Pushing Defense-in-Depth to the Network Edge

Defense-in-depth is rarely discussed without including desktop antivirus, as antivirus software has been a cornerstone of corporate network protection since the advent of the computer virus. The danger with antivirus software is that within most organizations it represents the last line of defense, so any threat capable of breaching it has the ability to wreak havoc within the enterprise.

Although antivirus has long been known to be vulnerable, the rather sudden downturn in the effectiveness of this security control has caught most organizations off guard and left them looking for new alternatives for protecting their networks from an ever-increasing number of malware threats.

A recent study by Google noted that, at best, antivirus scanners caught only 25% of malicious content. This begs the question, “Why is traditional signature-based antivirus failing?”

The inherent issue with antivirus is its signature-based approach to identifying possibly malicious code. These signatures, once manageable by antivirus vendors, can no longer keep up with the massive volume of malware being produced daily.

A recent Sophos presentation noted that three years ago, new virus signature definitions numbered approximately 3,000 per day.  Today those totals are closer to 300,000 new virus signatures per day. Obviously this negates any ability for antivirus vendors to respond quickly to new virus threats. The increased lag time between identifying a virus in the wild and having a new signature definition has left most organizations vulnerable to these threats and looking for new alternatives.

The reason for this rapid shift is simple; hackers have perfected malware creation and antivirus evasion. Attackers can take a newly written virus or even an old virus, quickly build a new version, and run the virus through a scanner such as VirusTotal until it passes all antivirus scanners. The attackers now have a new and fresh virus that can bypass current signature-based technology.  

Organizations are increasingly looking to take antivirus away from the desktop as the primary protection against viruses, and instead are looking for reputation-based malware controls that live on the network edge. There is also a strong push for cloud-based aggregation of malware threat models to feed rapid identification and remediation of new malware threats. This aggregation leverages what is being seen across the globe and quickly uploads those reputational characteristics to block new threats before they become problems within the network.
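
What a reputation-based check at the network edge might look like, in greatly simplified form, is sketched below. The reputation_feed dictionary and its verdicts are hypothetical stand-ins for the cloud-aggregated threat intelligence such appliances would actually consult.

```python
import hashlib

# Hypothetical cloud-aggregated reputation feed keyed by SHA-256 file hash.
# A real edge appliance would query a constantly updated cloud service instead.
reputation_feed = {
    hashlib.sha256(b"known good installer").hexdigest(): "allow",
    hashlib.sha256(b"known malware sample").hexdigest(): "block",
}

def edge_verdict(attachment: bytes) -> str:
    """Decide at the network edge whether to deliver content to the desktop."""
    digest = hashlib.sha256(attachment).hexdigest()
    verdict = reputation_feed.get(digest)
    if verdict == "allow":
        return "deliver"
    if verdict == "block":
        return "quarantine"
    # Unknown content: hold for further analysis rather than trusting a signature match.
    return "sandbox"

print(edge_verdict(b"known malware sample"))    # quarantine
print(edge_verdict(b"never seen before file"))  # sandbox
```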

These new approaches are pushing antivirus appliances to the edge of the network, as part of the infrastructure, so that malware is caught before it reaches the desktop.  This approach was recently validated with Cisco’s $2.7B purchase of Sourcefire.

Sourcefire, best known for the popular IDS tool Snort, also has a robust anti-malware detection tool called Advanced Malware Protection (AMP).  Enterprises have to defend against increasingly sophisticated attacks and are looking at these defenses from the point of entry into the network. This new approach allows a more robust look at possible viruses, investigation of an infection, tracking of the virus’s path through the enterprise, and remediation, all within the appliance.

Although desktop antivirus still has a place in a defense-in-depth strategy, its relevance has diminished rapidly and substantially. This has left most organizations scrambling to find new alternatives to this age-old problem.

What is clear in all of this is that effective antivirus in the future will not be signature-based and will live at the point of entry on the network, where reputation and cloud-based databases will make fast determinations on whether an attachment, web link, or other content will be allowed to reach the desktop. This is a rapidly changing environment, and one in which more solutions will come to market as enterprises increasingly adopt this new defense-in-depth approach to antivirus.




Threat Detection is the Next Frontier in Data Security: Final Thoughts from Symantec Vision 2013

In the last few years security has shifted from being an issue that organizations deal with only when a crisis occurs to one they must now confront daily. This is putting pressure on organizations to stop relying on knee-jerk reactions as their means of ongoing security management and instead adopt a systematic approach to effectively deal with both external and internal threats. The problems that internal threats present, and why they are so difficult to detect, were openly discussed this past Wednesday during that morning’s keynote at Symantec Vision 2013.

I have worked for organizations of all sizes (small, medium and large) in both the public and private sectors. If there is anything I learned over that period of time, it is that security tended to (and still too often does) play second fiddle to other business priorities.

This approach is driven in large part by the sensationalized nature of security breaches. Incidents where companies such as Sony are hacked and credit card information is taken almost always resulted in other organizations scrambling to make sure they were not susceptible to the same type of threat and then taking action if they were.

But over time the resolve to stay ahead of potential threats tended to fade. After the first wave of action was taken, focus on security tended to shift into maintenance mode, with any future substantial security updates driven by another external crisis. Thus the cycle became: a crisis occurs, security is enhanced if needed, the crisis passes, the public forgets, the system (hopefully) survives, and other projects compete for the dollars needed to keep the corporate security infrastructure resilient.

Thankfully that mindset seems to have eroded in recent years – at least in larger enterprises. While security enhancements and purchases are still too often reactions to data breaches after they have already occurred, the Internet and the growing cyber security threat have forced larger organizations to adopt a proactive posture in managing their corporate infrastructures to ensure data breaches never occur.

Yet as anyone knows, security is only as good as the weakest link, or links as the case may be. Today those weak links are finding their way into even the most secure infrastructures, making them vulnerable to attack in ways that are not easy to detect. As individuals increase the number of mobile devices (phones, laptops, tablets and even thumb drives) that they use and bring to work, traditional methods that organizations use to protect their intranet “perimeter” are rendered useless.

These devices may be hostile in two ways. First, they may contain software that the individual does not even know presents a threat to his or her company. In these cases, it is incumbent upon the organization to have defenses in place to detect this malicious software and then protect the company from the havoc that it can potentially create.

More troubling are those individuals (employees or otherwise) who enter a premises with malicious intent. This could spell trouble in two possible ways. They could have a mobile wireless device that they use to hack the network. Alternatively, they may present themselves as a trusted individual carrying one of today’s small portable storage devices with tens or even hundreds of GBs of storage. In this case, they may not even need network access. Using such a device they can potentially copy and carry offsite unprecedented amounts of data without anyone ever knowing it occurred.

It is this new type of security threat that companies need to step up and address, as it requires them to detect the threat from inside the firewall. This is an angle of data security that enterprises are often ill-equipped to handle.  While every corporation expects attacks to come from the Internet (and Symantec even used the term ‘Cyberwar’ to describe this segment of the keynote), these internal attacks may present an even larger risk of data loss or compromise than those that now originate from the Internet.

More disconcerting, the data accessed or compromised by these attacks may never be detected or discovered. During the keynote presentation, Adventist Health System’s Corporate Data Security Officer, Sharon Finney, made the observation, “No offense to my DBAs but they present my biggest security risk since they have access to all data in an unencrypted format.”

Aggravating the situation in terms of detecting such data breaches, no benchmarks yet exist as to what constitutes “normal” patterns of data access. Even if an organization did detect that a large amount of data was being accessed and then copied from one storage device to another, the rules governing such data movement are yet to be defined. As a result, detecting such data access and movement may still not prevent a data breach from occurring since there is no sense of whether this activity is normal, especially if it is being initiated by a “trusted” source inside the corporate network.

Yet a third problem that Symantec has seen is the existence of malware that sits dormant inside of organizations for up to a year or more before carrying out its evil intentions. These are often almost impossible to detect because, as HP’s VP of Enterprise World Security Services Sam Chun points out, “Highly sophisticated organizations are developing this software.”

The world of security has changed significantly in the last few years. While most large organizations understand and recognize the threat that the Internet presents to their business and have put in place measures to counter those threats, they are still ill-prepared to deal with and manage the threats that originate from within. Yet it is these threats that may ultimately prove to be far more dangerous than anything that they have dealt with to date, and the tools that they need to adequately protect themselves against them are still in their infancy.




The Coming Identity Based Network Management Revolution; Interview with BlackRidge Technology CTO John Hayes, Part III

Since the advent of the TCP/IP protocol, network administrators have had a major blind spot: the ability to reliably determine the identity of an individual device or user. BlackRidge’s new Eclipse™ solution, built on BlackRidge’s patented Transport Access Control (TAC), uses client drivers or gateway appliances to insert unique identity information into every TCP packet. In this third and final post in our blog interview series, BlackRidge Technology CTO John Hayes and I discuss where BlackRidge is heading and the challenge of managing infrastructures from the perspective of devices rather than networks.

Ben: Let’s discuss your recent identity aware networking solution release. What is different in this release?

John: We’ve made a number of advances since our first release, which was targeted at government customers. The main advance is in scalability. We can now handle 10,000 Identities per physical or virtual gateway.

Next is manageability. We have a more comprehensive management interface, including a GUI. We have Active Directory and LDAP integration. In general, we have a better-defined integration with existing systems. We have also implemented log management so we can process the logs appropriately and feed into other systems.

Another interesting use case that we have found is in network segmentation. One of the issues with VLANs is a technical limitation within the VLAN tag itself: you can only support roughly 4,000 (4K) VLANs. Eclipse allows you to implement the functionality of VLANs without having to deal with the limitations of VLANs.

I would consider VLANs to be another topological limitation of networks. Think of it this way: VLANs were invented as a management control mechanism. They took one physical LAN and carved it up into a bunch of virtual LANs, and that works well. But you are still tied to the policies of both the underlying physical and the virtual LAN. If you can completely free yourself from that, it makes policy implementation easier.

Ben: How do you keep administrators from getting snowed under? This is a pretty revolutionary concept and when you are looking at managing individual devices it sounds complicated.

John: You are right. At first glance this can be viewed as a big hairball. We really are looking at this from the perspective of the identity of an individual device or user.

But you probably do not want to manage everything. If I had 10,000 users, I would not want to manage every user individually. What we normally do is determine a couple of common groups. Then you drop users into those groups.

You are going to be in the engineering group. You are going to be in the finance group. You are going to be in the sales group. What that does is allow you to say, here are the policy filters for sales, and anybody in the sales group uses that policy.

That means I do not have to write individual policies for everybody in that group. I can also modify the policies for sales, and the change applies to everybody in the group. That is how we are looking at it.

If you need to, you can have a group of one or set specific policies for a specific identity and it can be completely custom. Or I can say, you are going to follow the sales group policy, and then you are also going to have these other policies in addition.

But for most users, we would expect the administrator to say, “OK, you are going to fall into this group of users and just follow those policies.”
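
To illustrate the group-based model Hayes describes, here is a minimal, hypothetical sketch of policy resolution in which identities inherit their group’s policies, with optional per-identity additions layered on top. The group names and policy strings are invented examples, not BlackRidge’s actual configuration model.

```python
# Hypothetical group-to-policy mapping; identities inherit their group's policies.
group_policies = {
    "engineering": ["allow:source-code-repo", "allow:build-servers"],
    "finance":     ["allow:finance-server"],
    "sales":       ["allow:crm"],
}

# Optional per-identity additions layered on top of the group policy.
identity_overrides = {
    "alice": ["allow:finance-server"],  # e.g., one salesperson who also needs finance access
}

identity_groups = {"alice": "sales", "bob": "engineering"}

def resolve_policies(identity: str) -> list[str]:
    """Return the effective policy set: group policies plus any per-identity additions."""
    policies = list(group_policies.get(identity_groups.get(identity, ""), []))
    policies += identity_overrides.get(identity, [])
    return policies

print(resolve_policies("alice"))  # ['allow:crm', 'allow:finance-server']
print(resolve_policies("bob"))    # ['allow:source-code-repo', 'allow:build-servers']
```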

Ben: You also announced a virtual appliance, correct?

John:  Yes. The biggest challenge we have going into customers in many cases is they say, “We love your product, but I do not have any rack space for you.”

Having a virtual appliance gives us the ability to say, “OK, use your existing infrastructure. Just deploy us virtually and we can provide you the same features.”

Again, this is in response to major customer feedback. Power, cooling, floor and rack space are some of the most precious commodities in the data center. Operating virtually gives our customers the ability to take advantage of the core reasons they deployed a virtual environment in the first place.

Ben: This also helps organizations be more agile, is that the case?

John: Yes. Actually, one of the areas that we are working on for future releases is enhancing agility, not only of the clients and the gateways, but also in being able to track the resources that are being authenticated, or who the requests are being authenticated for.  This is an exciting path for us.

Ben: Speaking of the future, are you looking at any strategic partnerships?

John: Yes, we are. May has started off with a couple of very important announcements for us.  The first announcement came from Sypris Electronics.  Sypris announced they are integrating our TAC technology into their key management system, giving public and private sector customers a new level of network protection.

The second announcement came from McAfee.  McAfee announced that we have joined their SIA program. McAfee’s alliance program is enabling us to plug into an established framework for interoperable and compatible solutions within the security marketplace.  We look forward to building tighter relationships with some of the other companies in this program.

Today, I would say our partner activity falls into three categories.

  • On the client side, and I would also emphasize mobile here, it is very important for us to be able to get our clients out and accessible to the customer on as many platforms as possible. We are continuing to focus on that.  Mobile initiatives based on BYOD or the ‘Internet of Things’ are going to continue to keep this top of mind for the foreseeable future.
  • The second one is that although we are just announcing Eclipse as both a physical and a virtual appliance, there are other vendors building integrated security devices who are also expressing interest in the Eclipse functionality.  We are getting a fair bit of interest from OEM and channel partners right now.
  • Third, when our products are in operation, we see a lot of things. We are able to learn things. By generating events I can tell that, “Hey, we are getting a DOS attack from some place over there.” Obviously, this is not saying that we are going to expose identities or things of that sort. But we can use those identities internally as an additional point of reference.

Using those additional points of reference you can basically gain operational knowledge of what’s going on in the network. The ability to allow people to subscribe and to communicate to those events is another really interesting area that we are having some very interesting conversations with folks on. I think that’s the area that you should probably keep an eye on for partnerships in the future.

Ben: John, thank you. I think this has been a very enlightening interview.

John: Thank you Ben and the rest of the DCIG team! We look forward to keeping you updated as our product roadmap gets fulfilled.

In Part I of this executive interview series we examined the three practical use cases for network layer identification.

In Part II of this executive interview series we discussed why most current authentication schemes fail in headless environments and described Eclipse’s underlying technology, TAC.




Symantec Vision 2012 Exposes Attendees to the Real Threat of Today’s Constant Barrage of Attacks

The keynote given by Symantec’s CEO Enrique Salem this past Tuesday and the series of presentations that followed exposed every attendee at Symantec Vision 2012 to just how dangerous today’s internet world has really become. Yet the larger threat that every business faces is not a failure to put in place a solution to address these attacks. Rather, it is the danger that dealing with these threats will cause organizations to take their eyes off the ball and fail to focus on where their business needs to go next.

Every business realizes it needs to protect its data from loss, keep its production applications highly available and secure its perimeter from attack or theft. The challenges each business faces as it seeks to deliver on these objectives are:

  • What is the appropriate amount of data protection to put in place?
  • How available do their applications need to be? (three 9’s, four 9’s or five?)
  • What is the right level of security to put on the perimeter to keep the bad guys out while keeping our good data in?

If you were in attendance at Symantec Vision 2012 this week and heard some of the stats Symantec had to share, you began to realize just how difficult it is to achieve that balance. The paranoid among us might even think it is time for everyone to batten down the hatches, go into their bomb shelters and expect the apocalypse to strike at any moment.

Clearly that last statement is a bit melodramatic but there is an element of truth to it as I got a sense of what Symantec sees every day in terms of the number of attacks that it has to help businesses defend against. Here is just some of what Symantec had to share regarding what it sees in the security space:

  • The number of threats increased 81% in 2011 over 2010.
  • 1 million new pieces of malware are now written every single day.
  • Cybercrime attackers made over $100 billion in 2011 and may have cost businesses between 3 and 4 times that much.
  • It used to take a few minutes to discover a piece of malware. Now it may take months to detect it.
  • Malware can now kill power grids, open dams and sabotage nuclear reactors.
  • Threats are becoming more targeted towards individuals and the intellectual property they possess.
  • 97% of security events are now false positives as attackers look to get in and get out undetected making it difficult for a business to know it has been compromised.

Yet potentially the biggest threat that companies face is becoming so consumed with reacting to these threats that they fail to create and then execute on more strategic initiatives that keep their company moving forward.

To Symantec’s credit, it realized that it is not immune from this same problem so back in 2010 it tried to envision what the world was going to look like in a few years. Frankly, it did a remarkably good job of constructing that vision. The most poignant part of Salem’s keynote was when he shared how Symantec in May 2010 documented how it thought the world might look in 2012.

It envisioned:

  • A transformed data center where tasks once only done inside the data center could be done anywhere
  • People would use multi-purpose devices with both local and cloud storage
  • People would move from one device to another without waiting for data to be moved

In other words, Symantec was envisioning the world of bring your own device (BYOD) to work that was a major theme of Interop, another conference going on essentially across the street from Symantec Vision 2012.  It was as he shared these thoughts that he also put on the big screen behind him a picture that closely resembled a tablet. This was the vision Symantec had developed in 2010.

However, what was most impressive, and is a credit to Symantec, is that it did more than just put a picture on a drawing board two years ago and then forget about it. It treated that vision seriously and acted on it. As Salem said, “This is the year Symantec’s vision becomes a reality.”

Based on what I saw at Symantec Vision 2012, companies can learn a lot from Symantec. Yes, they can turn to Symantec to get data protection software, high availability software and security software to meet just about any level of need they may have from low to moderate to the most extreme.

Yet my primary takeaway from Symantec Vision 2012 was that the real threat posed by today’s barrage of attacks against businesses is how they can distract organizations from focusing on the business. Nothing is more distracting than when the information that you need to run your business on a day-to-day basis is suddenly not available and, in worst case scenarios, may actually be in the hands of someone else who may now use it against you or even use it to put you out of business.




The Three Practical Use Cases for Network Layer Identification; Interview with BlackRidge Technology CTO John Hayes Part I

Followers of my previous blog entries should recognize the next company in DCIG’s Executive Interview series.  I have previously discussed both the technical and operational impact of BlackRidge Technology’s patented breakthrough technology known as Transport Access Control (TAC).  Today, BlackRidge announces their first product, Eclipse, based on their TAC technology. I begin a discussion of this release, in the form of a multi-part interview series, with BlackRidge Technology’s CTO John Hayes.

In this first entry, Hayes and I discuss the need for network layer identification and some of the more popular use cases of the technology.

Ben: John, thank you for agreeing to talk with me today.  I’ve written about TAC, now known as Eclipse, a number of times but I think it would be good for us to review what your company does. What is network identity and why should I care?

John: Very simply, BlackRidge applies identity to networks. Identity itself can be something as simple as a user name and password. It can be more secure smart card credentials. It can be how you access a single sign on system.

But whatever form of identity you are using today, identity is interpreted and understood only at the application layer. By that I mean identity is used only by applications. Whether I am logging into a service on the web, an email server, or what have you, the application layer is aware of identity.

But all of the network below it, for instance TCP/IP and all the protocols that underpin the internet, are completely unaware of identity. This results in networks being managed based on topologies, and it’s very hard for organizations to do strong policy enforcement at the network layer.

For instance, you start off a company and the company grows. Like any organization it grows in fairly random ways. Ideally my company would grow along lines that would be easy to implement with a network topology, VLAN, subnets and things of that sort. In reality, there’s no resemblance of the organization to the network, and therefore I need to compromise my policies in order to make them work well within the network.

If you can apply identity to your networks, it gives you the ability to say, “OK, I can implement the policy I want, and I no longer have to compromise.”

What BlackRidge does is take identity and compress it down to a small enough element that it can be communicated and interpreted at the network layer to allow you to do this.

Ben: In real world terms, what does having this additional layer of identification get me? What problems can I solve?

John: There are three use cases that we see most often. Their applicability obviously depends on the customer needs.

The first one is what we call the absolute security use case. That is where we use identity to effectively remove resources from the network. Those resources only appear when they are being accessed by a recognized requestor, which has both the identity and the authority to access those resources. 

To all unidentified and unauthorized requestors, there are essentially no resources on the network. And when I say the requestors cannot see them, I mean they cannot scan, they cannot look around, they cannot access, they cannot coerce the resources to give up their user name and password. They cannot see the machine or resources there. This is of primary importance to people where security is the overriding concern.
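
Conceptually, this “resources disappear” behavior amounts to silently discarding any connection attempt that does not carry a recognized identity, so the protected resource never responds at all. The sketch below illustrates that idea in the simplest possible terms; the tokens and packet fields are hypothetical, and this is a conceptual sketch only, not how TAC is actually implemented.

```python
# Hypothetical set of identity tokens authorized to reach a protected resource.
authorized_tokens = {"token-eng-0427", "token-fin-1193"}

def handle_connection_attempt(packet: dict) -> str:
    """Gateway decision for the first packet of a session.

    Unidentified or unauthorized requestors get no response at all,
    so to them the resource does not appear to exist on the network.
    """
    token = packet.get("identity_token")
    if token in authorized_tokens:
        return "forward to protected resource"
    return "silently drop (no reset, no response)"

print(handle_connection_attempt({"src": "203.0.113.5"}))                                    # dropped
print(handle_connection_attempt({"src": "10.1.2.3", "identity_token": "token-eng-0427"}))  # forwarded
```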

The second use case is what we call the ROI use case. If I am an enterprise type organization, I have a public facing internet presence, and I am trying to provide a service to known users, whether they are employees or customers.

Today, we are talking to customers and what we are finding is that anywhere from 50 to 90 percent of the traffic those public facing interfaces see is unwanted. Now unwanted traffic can range from network scans to network reconnaissance. However it can also be attacks such as denial of service attacks, or it can be spam; in short, it can be all sorts of these things.

But what is significant is that in order to get the information I want from the people I want to communicate with, I need to over provision my front end pipes and security resources, anywhere from a factor of two to a factor of 10, just to get the communications I want out. That means my firewalls, my deep packet inspections, my intrusion protection system, all need to be scaled up correspondingly.

For instance, if I have a gigabit connection to the Internet, that’s a pretty big pipe. But if I’ve got a gigabit connection and I see 90 percent garbage, that means I have to put a gigabit’s worth of firewall, intrusion detection, deep packet inspection, and all of this other stuff out there in order to get 100 megabits of good data.

There are two different ways we would approach this problem. If I believe that all the traffic coming at me should be known, in other words it is going to be only my own employees and my customers, and we can provision and assign identities, we will simply filter out and drop all unauthenticated traffic. Now you take your resources and focus them on suspected good traffic instead of all of the traffic coming at you.

The second one is I’ve got a mix of anonymous traffic coming at me.  I still need to service it but I want to have different levels of service. What I mean by that is, my employees and my customers are known and I have an established relationship with them so they are going to get a certain quality of service.

This anonymous traffic, I can now separate and give them best effort service, as opposed to a predefined level of service that otherwise I would have to do. So in both cases customers can realize an ROI in the provisioning of front-end security services.

Now the last use case actually has to do with how I manage my networks. As we discussed earlier, my networks grow in certain ways as I add resources, and resources tend to get added organically. I also add clients. And then I also merge with organizations, or all sorts of other things happen that cause me to have networks that are not ideal to the way I’m doing things. So I might have a finance guy sitting next to an engineer sitting next to somebody in sales.

Now I sure do not want my sales guy and my engineer to be accessing my finance server. But if they’re all on the same VLAN or subnet because of the way the building is wired, I am stuck with that situation. Identity gives me the ability to segment those resources based on the policies I want, based on identity, instead of being tied down to traditional network policies.

This occurred very recently. We were talking to a customer that had just acquired a company. But part of the facilities they acquired with this new company were actually still with another company. They needed to carve out this piece of the facility and isolate it without moving everybody around and actually doing some fairly unnatural organizational acts. In other words, they could not do this with current technologies.

With BlackRidge, they are able to very clearly say, “OK, we are going to isolate these sets of users, and we are going to have them only access the resources they are supposed to see, which are going to be outside the company. Everything else inside is now protected.”

It’s just a much easier way to do it and it saves a lot of what are relatively non-technical things we’re working around. You can actually apply the policy you want without having to compromise based on other, in many cases non-technical, requirements. So those are really the three major use cases that we see.

In Part II of this executive interview series I will discuss the deployment and technical aspects of Eclipse and how BlackRidge continues to find new use cases for the product.

In Part III of this interview series I will discuss the management of Eclipse and how BlackRidge continues to find new use cases for this product.




Payload and Event Reporting by MetaFlows CEO Livio Ricciulli, Part III

MetaFlows is a network security monitoring tool implementing some unique capabilities in today’s ever-changing security environment.  It gives security administrators access not only to aggregated threat information for their own network, but also alerts them to potential global threats in their enterprise spaces.  I am finishing up my interview today with MetaFlows CEO Livio Ricciulli, looking at how MetaFlows is able to aggregate threat information while maintaining security in a cloud-based solution.

Joshua:  MetaFlows is managing security from the cloud, using a “Software as a Service”-based solution.  Our readers will probably be wondering:  Is MetaFlows shipping all of the internal log events, including the intrusion detection system-related events, to the cloud and storing them there as a part of that system?

Livio:  Yes, everything except for the payloads.  Everything that happens in the enterprise generates either a log or an intrusion detection system event through our agent.  But all these events are then anonymized.  Once the event data is sent to the cloud, it is stored in our private cloud space in a way similar to the way your bank account information is stored; nobody can see it.  It is private to the user, and only authorized users can see it.

The important point is that the payloads themselves do not go to the cloud. They are logged for forensic reasons only.  But the payloads themselves stay on the devices within the enterprise.

[Figure: MetaFlows system infrastructure diagram]

Joshua: So you are saying, if MetaFlows is looking at an HTTP GET or PUT request, MetaFlows examines and stores the source IP and target IP address, but not the data transferred?

Livio:    Exactly. MetaFlows only stores the event that happens, the time, source and destination IP, and the signature that triggered.  That data is stored in the cloud in that company’s private webspace.  
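
A hypothetical example of the kind of record that would leave the enterprise under this model is sketched below: only the event metadata is shipped to the private cloud space, while the payload stays with the local sensor. The field names and values are illustrative and are not MetaFlows’ actual schema.

```python
from dataclasses import dataclass

@dataclass
class CloudEvent:
    """Metadata that is sent to the private cloud space (no payload)."""
    timestamp: str
    src_ip: str
    dst_ip: str
    signature_id: int

# The full capture, including the payload, stays on the sensor inside the enterprise.
local_capture = {
    "timestamp": "2013-06-12T14:03:22Z",
    "src_ip": "192.0.2.10",
    "dst_ip": "198.51.100.5",
    "signature_id": 2013504,
    "payload": b"GET /index.html HTTP/1.1 ...",
}

# Only the metadata fields are copied into the record shipped off-site.
cloud_event = CloudEvent(**{k: v for k, v in local_capture.items() if k != "payload"})
print(cloud_event)
```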

Log management is a big deal, too.  Many organizations nowadays have a problem with compliance. They need to log everything happening no matter what.  If you have a distributed system with these types of requirements, you are in a kind of pickle because you need a way to concentrate all this data and manage logs from all these different systems and networks.

We have solved that problem with this model because now you can point all your logs to our system.  Then, using a browser-based dashboard, you can correlate your log data with IDS events. So our model has significant advantage with logging, too.

Joshua:   It seems like today, people are scratching their heads, especially in this virtualized world we live in, and asking “why do I need to have another appliance-based system?”
 
Livio:  That is true. One thing we found out is some customers still want to buy the appliance, but they want to buy it as a service rather than as a capital expenditure.  Most of the appliances we have sold are actually hardware and software solutions at a yearly charge. Customers can expense the entire amount rather than expense just a portion of it.  We have found that selling the appliance bundled with the software as a service at a yearly charge is very attractive to customers.

Joshua:   What you are saying is companies like the idea of purchasing these as an operating expense as opposed to a capital expense that has to be depreciated over a period of three to five years?

Livio:    Exactly.

This is the final installment of a three-part series with MetaFlows CEO Livio Ricciulli.  Here are the first two installments:

Network Security Monitoring delivered through a “Software as a Service” Model by MetaFlows CEO Livio Ricciulli, Part I

Network Security Performance Tuning by MetaFlows CEO Livio Ricciulli, Part II




Network Security Performance Tuning by MetaFlows CEO Livio Ricciulli, Part II

Network security monitoring is a constantly changing environment of both tools and methodologies.  Most solutions today, however, follow a lone “cowboy” mentality in which datacenter tools operate independently.  MetaFlows is changing that.  Today, I am continuing my interview with MetaFlows CEO Livio Ricciulli, discussing how their product optimizes network security monitoring and performance.

Joshua:  The MetaFlows product is delivering network and log monitoring facilities through a cloud-based “Software as a Service” model.  I am sure one of our readers’ concerns will be performance.  How does MetaFlows stack up compared to a solution where all the hardware and software is hosted within a company’s own network?

Livio:      We have put a lot of effort into optimizing the performance of our software.  In the past, if you had a fairly fast network, you had to buy appliances specifically designed for security, ranging in cost from $20,000 to $80,000 per gigabit per second. If you have multiple gigabits, they cost even more. These are specialized hardware devices that are highly optimized for security applications. So what MetaFlows did was take our product and tune it for very high performance on off-the-shelf hardware.  We reduced the cost of network security over that of deploying hardware appliances by using hardware that you can buy from any hardware vendor, and then creating a software library that parallelizes the processing.

For example, you can buy a $1,000 machine with a nice set of Intel processors and process up to a sustained 800 megabits of data per second with our software, which was unheard of before we added this capability.
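
The parallelization Ricciulli describes is proprietary, but the general idea of fanning packet analysis out across commodity CPU cores can be sketched as follows. This is a simplified, hypothetical illustration of the technique, not the MetaFlows library.

# Simplified sketch of parallelizing packet analysis across CPU cores on
# commodity hardware; an illustration of the idea, not MetaFlows' code.
from multiprocessing import Process, Queue, cpu_count

def inspect(pkt):
    pass                             # a real engine would run IDS rules here

def worker(queue):
    while True:
        pkt = queue.get()
        if pkt is None:              # sentinel: shut this worker down
            break
        inspect(pkt)

def dispatch(packets, workers=None):
    workers = workers or cpu_count()
    queues = [Queue() for _ in range(workers)]
    procs = [Process(target=worker, args=(q,)) for q in queues]
    for p in procs:
        p.start()
    for pkt in packets:
        # hash the flow 5-tuple so packets from one flow stay on one core
        queues[hash(pkt["flow"]) % workers].put(pkt)
    for q in queues:
        q.put(None)
    for p in procs:
        p.join()

if __name__ == "__main__":
    dispatch([{"flow": ("10.0.0.1", "192.0.2.2", 6, 1234, 80), "data": b""}])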

Joshua:  I want to talk briefly about your competition.  Who are your primary competitors?  Is Splunk one of them?

Livio:     I think our innovation is aggregating into one system a lot of functions that have traditionally been split up between different vendors.  Splunk does the log management portion. There are others doing the intrusion detection system function.  Then there are still others doing flow monitoring, like Arbor Networks.  What we have done is take pieces from all these different functions and aggregate them into one product, giving companies the best of all these tools in one software suite.  So I would say that Splunk is a competitor, but not a head-on competitor.

Joshua:  So, then, your biggest competitor might be operational issues within a company. The challenge is convincing organizations that running a disparate set of products works against their operational success.  Using the MetaFlows SaaS model, a company gets a best-in-class toolkit and then does not have to worry about building installation, management, and configuration expertise because MetaFlows does all of that for them.

Livio:    Exactly. We conceptualize the market today as four different solutions: low-end appliances, high-end appliances, open source solutions, and MetaFlows.  The low-end is typically just an implementation of a particular open source IDS.  It is low-end in the sense that it is not very sophisticated, and it costs $20,000, with an initial 20 percent markup for a subscription to signature updates.  To operate these appliances effectively, you need an expert on staff to interpret what is going on in the appliance and make sense of its output.

Next, there are the high-end appliances that are much more expensive. There is a higher subscription cost, but the administrator does not need to be an expert.  You will pay around $50,000 a year for them.

Then there is the open source route, where you put a lot of time and effort into building something on your own. But you still need an expert to administer it because you need to be able to update and manage it yourself.

What we have done is give you a solution at the cost of an open source product, with a minimal subscription cost of $99 per month per CPU.  You reduce administrative costs because you do not need as many people to install and run it.

Joshua:   It seems like you can help companies better position themselves by giving them better tools while at the same time lowering their costs.  They will be able to deliver higher quality threat monitoring and threat identification while moving closer to the cloud. Do you believe Mobile First applications and cloud storage will have a positive impact on the perception of your product, especially the global aggregation via the cloud?

Livio:    Our take on this idea is that it takes a cloud to secure a cloud.  This architecture really is the best way to merge traditional hard-asset monitoring and cloud-based monitoring. Now, instead of having to host the database, you can disperse your agents globally and have all of them point to one cloud-based system for storing events and logging everything that goes on in your network.

Last time, Ricciulli discussed how MetaFlows is delivering an innovative SaaS-based network security solution.  Next time, he will explain how MetaFlows deals with payload security and how they deliver threat information to the end-user.




Network Security Monitoring delivered through a “Software as a Service” Model by MetaFlows CEO Livio Ricciulli, Part I

Enterprise organizations face the daily challenge of ever-growing threats to their network and IT infrastructure.  Not only are these threats growing, but they are constantly changing as well, forcing companies to adapt by changing not only their tools but also their training.  Today, I’m talking with MetaFlows CEO Livio Ricciulli about how MetaFlows is addressing these problems by delivering network security monitoring using the “Software as a Service” model.

Joshua:    Livio, thanks for taking the time to talk with me today.  The network security monitoring space is filled with players today, but nobody is delivering a security product quite the way MetaFlows does.  Can you talk to me a little bit about what makes your company and product unique in such a crowded space like network security monitoring?

Livio:        We’re the only system in the world that can do network security monitoring using the software as a service model. What that means is that the user does not host any of the system, except a very small portion of it (the agent that detects the threats) on their own network. The database where the events are stored, and all the software that analyzes the events, creates reports, and does the analysis, reside on a web server. You’re probably familiar with this model from other “software as a service” applications.

Joshua:    Salesforce is a good example.

Livio:        Another example would be accounting with QuickBooks Online: you don’t host the data or the application. Everything goes through a browser.  We’re the first to provide a complete enterprise-ready, very sophisticated dashboard for monitoring the security of an enterprise entirely through a browser.  Users simply install an agent that runs in the network and feeds events to a system that resides in the cloud. Then, the user can basically forget about the agent. It just runs there all the time. Everything that you need can then be found in the browser-based dashboard.

This promotes online collaboration between multiple analysts because you can access the data through a browser from anywhere, in a way that’s as secure as online banking.

Joshua:    How does this affect budget and existing analyst workload?

Livio:        This solution does two things.  On the one hand, it makes network security monitoring cheaper because most of the processing and the software updates are taken care of centrally.  When we do an update, it is immediately available to all our users. This is common to other software as a service offerings, but in the realm of security, it’s an added value.

Another value-add is correlation.  Our customers aggregate all their event data to a single location – our cloud.  With that volume of data, MetaFlows can do a more effective job of correlating events across enterprises.

Joshua:    So can you cleanse the data for analysts, such that they focus on critical threats across all data centers?

Livio:         We’ve also developed our own algorithm to rank security threats.  This will help us to improve security as well because it will alert people with a similar security posture to pay more attention to certain events than others. This type of targeting is not possible using a traditional model where everybody stores their own events in their own database and they don’t share any information. This is very new. Nobody else is doing this as far as I know.
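
MetaFlows has not published its ranking algorithm, so the snippet below is purely a toy illustration of cross-enterprise ranking: signatures that fire across many distinct customer networks score higher than ones confined to a single network. The weighting and data shape are assumptions made for the example.

# Toy ranking heuristic: signatures seen across many distinct enterprises
# score higher than ones confined to one network. Purely illustrative;
# MetaFlows' actual algorithm is not public.
from collections import defaultdict

def rank_signatures(events):
    """events: iterable of (enterprise_id, signature_id) tuples."""
    seen_in = defaultdict(set)
    hits = defaultdict(int)
    for enterprise, sig in events:
        seen_in[sig].add(enterprise)
        hits[sig] += 1
    # breadth (distinct enterprises) is weighted more heavily than raw volume
    scores = {sig: 10 * len(seen_in[sig]) + hits[sig] for sig in hits}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_signatures([("acme", 900001), ("acme", 900001), ("globex", 900001), ("acme", 900002)]))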

Joshua:    Can you reduce the amount of time security teams spend implementing, configuring and maintaining on-premise software?

Livio:         MetaFlows supports off-the-shelf hardware, meaning you can download our agents for practically any hardware you can buy.  We also sell inexpensive appliances for those wanting a more traditional hardware-based solution.  In either case, you download an agent from our website that then does traffic monitoring in the enterprise.  It includes a suite of applications designed to give very broad detection capabilities, ranging from looking for bots (computers that have been subverted to act as Trojan horses) to a more generic intrusion detection system, where we look for events, like somebody using peer-to-peer file sharing, that are not a permitted use of the network.

The agent does the monitoring, and when an event is detected, the event data (not the payload, just the event) gets fed to our MetaFlows cloud in real time. These events then get stored, correlated, and archived in the cloud.  Then, users can interact with MetaFlows to look at events, correlate them, and get a picture of what is going on in their network, all through a browser.
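
The forwarding side of that flow can be pictured with the small sketch below: an agent loop that reads local alerts and posts only the event metadata to a cloud collector. The collector URL and the shape of the alert source are made up for illustration and are not MetaFlows' actual interfaces.

# Hypothetical agent loop: forward only event metadata to a cloud collector
# in near real time. The endpoint URL and alert source are illustrative.
import json
import urllib.request

COLLECTOR_URL = "https://collector.example.com/events"   # hypothetical endpoint

def forward(event):
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)      # production code would add batching, retries, auth

def run(alert_source):
    for alert in alert_source:       # e.g. a generator tailing the local IDS alert log
        event = {k: alert[k] for k in ("timestamp", "src_ip", "dst_ip", "signature_id")}
        forward(event)               # the raw payload never leaves the sensor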

Joshua:    Since this is security log data, can you include other log data from devices on a network?

Livio:         We also support log management.  We can store all the events logged by third-party devices on the network.  So, essentially, we can unify the IDS function and the log management function in one place. And this solution is turn-key: you don’t have to install, configure, or set up anything; just download it and run it.

In the next blog entry in this interview series with Ricciulli, he will explain how MetaFlows is optimizing network security monitoring and performance.




Think AES is Unbreakable? RSA Security’s Shamir Debunks that Notion

The 2008 Crypto Conference provided a lot to talk about this year. If you didn’t know a Crypto Conference existed, you aren’t alone, but it is where the best and brightest mathematicians gather to discuss cryptographic and cryptanalytic research. At this conference, however, Adi Shamir (the “S” in RSA, which stands for Rivest, Shamir and Adleman, and whose company RSA Security is now owned by EMC) gave a presentation on a new attack on encryption systems called the “cube attack.” The ramifications of this attack sent a collective shockwave across the data security sector. Since encryption is revered as our best alternative and last safe harbor from data exposure, any weakness shown in encryption algorithms can have a dramatic ripple effect on data security.

The presentation was general as to the details it revealed, but the recently published white paper, “Cube Attacks on Tweakable Black Box Polynomials” by Itai Dinur and Adi Shamir, provides an in-depth look at how the attack is carried out. While I would not presume to describe this type of attack better than the white paper itself, the attack provides an order-of-magnitude improvement in capturing the encryption key by solving such tweakable polynomials.  Tweakable polynomials contain both secret variables, or key bits, and public variables, or plaintext bits.
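
For readers who want the gist without reading the full paper, the core identity behind the attack can be sketched as follows. The notation loosely follows Dinur and Shamir; this is a compressed summary, not a substitute for the white paper.

Write a cipher's output bit as a polynomial p over GF(2) in the secret key bits x_1,...,x_n and the public bits v_1,...,v_m. For a chosen subset I of the public bits (the "cube"), the polynomial factors as

    p(x_1,\dots,x_n,v_1,\dots,v_m) = t_I \cdot p_{S(I)} + q(x_1,\dots,x_n,v_1,\dots,v_m)

where t_I is the product of the cube variables, the "superpoly" p_{S(I)} contains none of them, and every term of q is missing at least one cube variable. Summing p over all 2^{|I|} assignments of the cube variables cancels q modulo 2 and leaves

    \sum_{v_I \in \{0,1\}^{|I|}} p = p_{S(I)}.

When I is a maxterm, p_{S(I)} is linear in the key bits, so each cube yields one linear equation in the key; gather enough maxterms and the key can be recovered from a system of linear equations far faster than exhaustive search.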

The two most common types of encryption algorithms are block ciphers, such as AES, and stream ciphers, such as Trivium. Block ciphers encrypt data in predefined blocks, while stream ciphers encrypt data one bit at a time. Although more susceptible to attack, stream ciphers are widely used due to the dramatic performance gains they deliver over block ciphers in the encryption and decryption processes.

The cube attack shows a dramatic improvement in attacking algorithms built from low-degree polynomials, which is what stream ciphers are. Block cipher polynomials, by contrast, grow exponentially with the number of rounds.  A round is the specific sequence of transformations that plaintext data goes through during encryption, and the number of rounds in an algorithm depends on the key size; for example, AES with a 256-bit key has 14 rounds.  So, the cube attack would theoretically lack the ability to successfully attack block ciphers. That being said, there could still be reason to worry, for the following reasons:

  • Possible order-of-magnitude improvement over exhaustive and current attack techniques. Shamir theorizes in the paper that the cube attack could be coupled with other attacks, such as side-channel or meet-in-the-middle attacks, to reduce the order of magnitude of cracking block ciphers. This is particularly worrisome due to the wide deployment of AES in both private industry and government. If it is possible to lower the order of magnitude by coupling this attack with a separate attack, then we would have a real reason to worry, were the attack perfected and performed in a timely manner. Although Shamir will bring forth more information in a future publication on coupling the meet-in-the-middle attack with the cube attack, it seems reasonable that if such a reduction in the order of magnitude of attack is accomplished, it could seriously threaten the future security of widely deployed block ciphers.
  • Stream ciphers are all susceptible. Stream ciphers are low-degree polynomial algorithms and are particularly susceptible to the cube attack.  Trivium was used as an example since no previous attack against it had been better than an exhaustive attack. The cube attack showed a dramatic order-of-magnitude improvement over an exhaustive attack, and Shamir’s conclusion is that Trivium is easily breakable. Bottom line: there are now serious security concerns regarding the use of stream cipher algorithms.
  • LFSRs (Linear Feedback Shift Registers) are likely susceptible. LFSRs are widely used as random bit/number generators in stream ciphers. Everything digital that uses LFSR-based random generators is therefore susceptible, which could affect any number of current applications that employ LFSRs, such as Bluetooth, GSM and RFID.

The effects of the cube attack are still being worked through, and its true impact on the industry is still mostly unknown.  But if Shamir’s new attack method plays out (and I have no reason to believe it will not), the viability of stream ciphers is already seriously weakened just by the potential of such an attack.  Furthermore, if in the future it can be shown that combining the cube attack with other attacks dramatically lowers the order of magnitude of capturing a block cipher encryption key, industry’s last safe harbor might just be a data security bomb shelter.




FTC Issues Red Flag Rules Reminder; Ensuring IT is Ready as Unlimited Liability Looms on the Horizon

The Federal Trade Commission (FTC) recently issued a reminder to financial companies of the upcoming November 1, 2008 deadline to be in compliance with the identity theft prevention program requirement and the pursuant FTC “Red Flag Rules.” If this is news to you, you probably aren’t alone, but you should make yourself aware, as your company might be subject to this regulation.

Although this pending regulation is widely known within the banking industry, organizations outside of the financial industry might be caught unaware that they, too, could be subject to penalties if they are found to be out of compliance. Financial institutions and “creditors” are subject to the Red Flag regulation, but what companies might overlook is the definition of what constitutes a creditor. Companies need to ascertain whether they fall under this classification since, if they do, they need to comply with the FTC regulation. You may be subject to the rule if:

  • You are a creditor as defined by the FTC;
  • You are subject to the FCRA (Fair Credit Reporting Act); or
  • You provide covered accounts, i.e., accounts that allow multiple payments or transactions.

If you are unsure whether you fall within the scope of compliance, it would be wise to seek legal help to ensure you aren’t leaving yourself open to liability. Although there are no active plans to audit organizations, a negative event could trigger an investigation of your company.  Any negative event, such as a data breach or even a whistleblower, could open your company up to monetary penalties and civil litigation.  There are three areas of concern when discussing penalties:

  • Federal Trade Commission. The FTC is authorized to bring enforcement actions in federal court for violations and can impose penalties of up to $2,500 for each independent violation of the rule.
  • State Enforcement. States are authorized to bring actions on behalf of their residents, may recover up to $1,000 for each violation, and may recover attorney’s fees.
  • Civil Liability. This area is where companies stand to lose the most.  Not only will companies suffer untold damage to their reputation and subsequent customer churn, but each consumer may be entitled to recover actual damages sustained from a violation.  There is also the possibility of class action lawsuits, potentially resulting in massive damages.

So, what are the Red Flag Rules trying to protect, and how do they affect compliance and IT?  Basically, the Red Flags are relevant indicators of a possible risk of identity theft.  Federal regulators have described various patterns, practices, and specific forms of activity that are possible precursors to identity theft. They have then outlined broad categories and specific incidents with which both financial institutions and creditors must comply.

 

To comply with the Red Flag Rules, financial institutions and creditors must implement a program to identify, detect and respond to the indicators of identity theft. The program must be approved by the organization’s board of directors or an appointed committee, and it must be updated and monitored as risks change. This means a covered company should enact a program that detects, prevents, and mitigates identity theft; the program should include reasonable policies and procedures, assign specific oversight, train staff, and audit compliance in order to accomplish the following:

  • Identify Red Flags. Some Red Flag indicators are: the types of covered accounts offered; the methods used to open covered accounts; the methods used to access open accounts; and previous experiences with identity theft.
  • Detect Red Flags. This includes how to authenticate customers, monitor customer transactions and verify validity of change-of-address requests.
  • Respond to Red Flags. You must respond appropriately to prevent and mitigate identity theft.
  • Ensure the program is updated. You must update your program to reflect changes in the risks to your customers.

As companies have automated processes and brought services to the Internet, it has become a certainty that IT will play a large role in compliance with the Red Flag Rules.  Possible areas of concern for IT are:

  • Data Flow Analysis. Understand how data flows within and throughout your organization by doing a gap analysis to understand the risks to the company’s IT systems, and then mitigate those risks as they pertain to the Red Flag Rules.
  • Identity Verification. Verifying a person’s identity will entail going beyond current single-step password authentication in favor of knowledge-based authentication, as well as detecting suspicious authentication activity on customer accounts.
  • Multiple authentication requests coming from the same IP address. Understanding and monitoring fraud precursors such as this, while burdensome, is necessary to detect fraud (see the sketch after this list).
  • Transaction monitoring. Ensuring transactions are valid and information is not being exposed to unauthorized persons.
  • Phishing prevention. Phishing is an increasingly popular and unfortunately successful way of gaining personally identifiable information for customer account access.  It is necessary to understand what constitutes phishing and then take the steps necessary to mitigate the risk of this type of activity.
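
As a minimal illustration of the kind of fraud-precursor monitoring mentioned above, the sketch below flags source IPs that issue an unusual number of failed authentication attempts within a short window. The log format and the threshold of 10 attempts in 5 minutes are hypothetical choices for the example, not a regulatory requirement.

# Minimal sketch: flag IPs with many failed logins in a short window.
# Threshold and input format are illustrative assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
THRESHOLD = 10

def suspicious_ips(failed_logins):
    """failed_logins: iterable of (epoch_seconds, ip) tuples, in time order."""
    recent = defaultdict(deque)
    flagged = set()
    for ts, ip in failed_logins:
        attempts = recent[ip]
        attempts.append(ts)
        while attempts and ts - attempts[0] > WINDOW_SECONDS:
            attempts.popleft()       # drop attempts outside the sliding window
        if len(attempts) >= THRESHOLD:
            flagged.add(ip)
    return flagged

print(suspicious_ips([(i, "203.0.113.7") for i in range(12)]))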

Providing a way to track and respond to risk factors as they pertain to IT systems is a complicated issue, but one that could have a large impact on an organization if a negative event starts an investigation or the FTC audits your organization. With automated systems and electronic data, IT will play a central role in taking the steps necessary for compliance with the FTC Red Flag Rules. But, like all regulation, it can be burdensome to an already stretched IT team and budget. Failure to take appropriate steps to protect customers from identity theft can have a far-reaching impact on an organization in the form of customer churn and damaged reputation, not to mention the potentially unlimited civil liability from customer lawsuits.  The reminder from the FTC is a warning that time is running out and IT has its work cut out for it, but securing customer data is paramount to company success and customer confidence.
