Tuesday, April 29, 2014

Let's Talk Analyst to Analyst

Recently, I was contacted “analyst to analyst” by an employee in the professional services arm of a large security vendor who was looking for some inspiration and guidance. I won’t name the vendor, but it is one whose offerings haven’t exactly wowed me during the course of my career. We engaged in an informative and interesting email dialogue -- analyst to analyst -- in which we learned from each other. To some, this may seem a bit risky on the part of the professional services employee. In my experience, though, analysts from large security vendors regularly approach me for dialogue, for advice, or simply to share thoughts.

Throughout my career, I have found that analysts as a group want to move forward, make progress, solve problems, and improve the state of information security. This requires a collaborative environment and thinking outside the box. Whether or not analysts find a creative, thought-provoking, boundary-pushing environment within their own organizations, they will also seek it externally -- through direct analyst-to-analyst contact with their peers, discussion forums, conferences, and informal meetups. Analysts are an inquisitive and helpful bunch by nature -- always more than willing to understand a challenge someone else is facing and help that person work through it. After all, if we can help to make another organization more secure, we all win.

Informal analyst to analyst channels also accomplish things people aren’t often aware of. For example, almost everyone in the security community is aware that information sharing is an important undertaking -- one that is critical to a successful security operations program. Unfortunately, there are various organizational, bureaucratic, and other factors that can hinder timely information sharing. To work around this, analysts typically set up informal circles of trust in which they feel comfortable sharing non-attributable information, in accordance with the policies of the organizations they represent. These informal channels serve as a means to meet the operational demands of the mission while organizations work through the strategic endeavor of setting up formal information sharing channels.

Let’s continue to talk analyst to analyst. It’s a good thing.

Monday, April 28, 2014

IE Zero-Day as a Use Case

Recently, a new Internet Explorer zero-day exploit, discovered by FireEye, was discussed on the FireEye blog (http://www.fireeye.com/blog/uncategorized/2014/04/new-zero-day-exploit-targeting-internet-explorer-versions-9-through-11-identified-in-targeted-attacks.html). The exploit was also acknowledged by Microsoft. According to the FireEye blog: “The APT group responsible for this exploit has been the first group to have access to a select number of browser-based 0-day exploits (e.g. IE, Firefox, and Flash) in the past. They are extremely proficient at lateral movement and are difficult to track, as they typically do not reuse command and control infrastructure.”

All versions of Internet Explorer since IE6 appear to be vulnerable to this exploit, and together those versions represent more than 25% of the browser market worldwide. That amounts to a large number of vulnerable systems. The exploit and associated malware are still under investigation, but attackers do appear to be exploiting the vulnerability for the purpose of targeted attacks. This presents us with an interesting and illustrative security operations use case.

Many security professionals will arrive at work Monday (if not sooner) to some variant of the following question from management: “Does this affect us?”. Let’s break this high-level question down into more specific questions and take a look at how an organization might answer it.

First, the organization will need to answer the question, “Are we vulnerable to this exploit?”. Since this particular exploit affects all versions of Internet Explorer since IE6 and most organizations run Internet Explorer, the answer to this question is most likely: yes.

After determining that they are vulnerable, the organization will need to answer the question, “What is the relevant time window for this investigation?”. Unfortunately, as of the writing of this blog post, there are few details around any attacks for which this exploit may have been used. Because of this, there is no obvious time window to use here -- a challenge encountered fairly frequently in security operations. As a rule of thumb, the organization may want to choose a reasonable time basis for its investigation -- perhaps the past 30 days. This can be adjusted as appropriate based on any new information that becomes available.

Next, the organization will need to answer the question, “Has this exploit been used to attack us?”. The organization will need to check with its network or end-point alerting technology vendors to determine whether any signatures or other detection capabilities exist that may have detected the exploit as it traversed the wire. If so, those logs will need to be searched thoroughly. Since the community’s awareness of this exploit is so new, there may be nothing helpful here -- another challenge that sometimes occurs in security operations. If that is the case, the organization may need to work with peer and partner organizations to identify what the exploit might look like going across the wire. Network forensics can then be performed to find the exploit in the network traffic data itself, provided the traffic has been captured appropriately.

Independent of whether or not the organization is able to determine if the exploit has been seen on its network, the organization still needs to answer the question, “Do we have any compromised machines as a result of this?”. The organization will need to perform network forensics and search for relevant Indicators of Compromise (IOCs), such as the lateral movement and command and control (C2) activity mentioned in the FireEye blog post. These IOCs can provide valuable insight into whether or not an organization has been compromised by revealing signs of post-infection activity on its network. Because these attacks are still under investigation, organizations will need to stay in close contact with peer organizations, information sharing groups, partners, and vendors in order to obtain and leverage reliable IOCs as soon as they become available. Once reliable IOCs are obtained, the organization should search for them in the end-point data and network traffic data as appropriate.
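To make that search concrete, here is a minimal sketch in Python. It assumes a hypothetical proxy log export with timestamp, source host, and destination domain columns, a hypothetical list of C2 domains standing in for real IOCs, and the rule-of-thumb 30-day window discussed above; an actual investigation would use whatever IOC feeds and log stores the organization really has.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical C2 domains standing in for real IOCs; reliable indicators
# would come from vendors, partners, and information sharing groups.
C2_DOMAINS = {"bad-c2.example.com", "other-c2.example.net"}

# Assumed proxy log export with columns: timestamp, src_host, dest_domain.
LOG_FILE = "proxy_logs.csv"

# Rule-of-thumb 30-day investigation window, adjusted as new details emerge.
window_start = datetime.now() - timedelta(days=30)

hits = []
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        if ts >= window_start and row["dest_domain"].lower() in C2_DOMAINS:
            hits.append((ts, row["src_host"], row["dest_domain"]))

# Any hit marks a candidate compromised host that warrants deeper forensics.
for ts, host, domain in sorted(hits):
    print(f"{ts}  {host} -> {domain}")
```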

If an organization has determined that it has been attacked, the question then becomes, “How can we contain and remediate this attack?”. The organization should follow its incident response process as appropriate. Containment and remediation suggestions from the FireEye blog post and Microsoft may also be helpful here. Of course, regular updates will need to be communicated to management and other important stakeholders to ensure that all relevant parties are aware of the progress of the investigation.

Once all required questions have been answered, the organization should look for any lessons that can be learned. In my experience, some organizations will, unfortunately, have a difficult time answering management’s original question. The reasons why this is the case should be used to improve capabilities for the next incident. Some organizations will not have the appropriate network forensics technology in place to provide the data of record required to research this activity. Others will not have taken the time to build the required relationships with peer organizations, information sharing groups, partners, and vendors. Still others will not have the appropriate process or the right people with the requisite level of knowledge required for this investigation.

Whatever situation an organization finds itself in, my hope is that any attacks leveraging the new IE zero-day exploit are found quickly, contained, and remediated before too much damage has been done. This zero-day exploit provides us with a good, illustrative security operations use case that we can use to learn from, grow through, and improve our security operations programs.

Friday, April 25, 2014

Too Busy for Round Wheels

Recently, a cartoon circulated on LinkedIn that caught my eye. In the cartoon, two people struggle to move a cart with square wheels. A third person comes along offering round wheels, but is told “No thanks! We are too busy.”

Security operations is a stressful business. There is always more to do than there are resources available to do it. It’s too easy to get caught up in day-to-day activities and forget to come up for air. The tragedy is that, sometimes, we are too busy to see that the reason we get bogged down is that we need to adjust or improve our processes, approaches, methodologies, techniques, and/or technologies. Our industry is constantly evolving. Possibilities may exist today that did not exist even one or two years ago. A fresh perspective may provide insight into where and how efficiencies and improvements can be introduced.

I rarely come across a Security Operations Center (SOC) that isn’t struggling to keep up with its work queue. At the same time, I’ve never seen a SOC that wouldn’t benefit from taking a step back and assessing *why* it is overwhelmed. Are there any potential bottlenecks or inefficiencies that process or technology could address? Are there time-consuming tasks being performed that don’t provide much value? Are team members spending a disproportionate amount of time waiting for queries to return or otherwise fighting with the technology that’s supposed to be helping them?

In my view, a swamped SOC presents an opportunity -- a wake-up call. That is actually a good thing, provided the organization can seize the opportunity. Being overwhelmed indicates that it is a good use of time to take a step back, assess where time is being spent, evaluate the value of each of those activities, and determine if efficiencies can be introduced. The security operations community is a helpful one -- peer organizations and others in the industry are often more than willing to offer suggestions and helpful advice. The question is more whether an organization and its leadership are self-aware enough to seek advice, receptive to feedback, and prepared to listen and learn. In my experience, it is helpful to learn from the successes -- and failures -- of others.

I am also reminded of another picture I’ve seen recently on LinkedIn that contains the quote “The most dangerous phrase in the language is ‘we’ve always done it this way’.” There is a lot of truth in that.

Wednesday, April 23, 2014

Knee Jerk Reaction

One of the most unfortunate mistakes I’ve seen in enterprise security operations is the knee jerk reaction. When something goes wrong, there is overwhelming pressure to do something -- anything. In the absence of experience, this can sometimes lead to a dizzying array of reactive activities. For example, after a breach notification, one of the biggest mistakes I’ve seen organizations make is to begin running off in dozens of unfocused and uncoordinated directions. Although action clearly needs to be taken, it would be better to perform incident response in a structured, organized, and professional manner.

A better approach than the knee jerk reaction is to keep calm and be prepared. Have the people, process, and technology in place ahead of time to perform incident response rapidly, smoothly, and efficiently when the need arises.

People: Build and train a strong team of analytical, responsive, and professional individuals. Put the right leadership in place. Have an agreement in place for surge support/incident response support ahead of time, so that precious time isn’t wasted getting these agreements set up during a breach response.

Process: Have a mature incident response process at the strategic, tactical, and operational levels. Ensure that team members at all levels are intimately familiar with the process and able to communicate progression through the process during incident response.

Technology: Remember that incident response is first and foremost dependent on the ability to interrogate the data rapidly and assess damage quickly. Put the right technology in place for this purpose ahead of time -- covering both collection and analysis.

Rash decisions are seldom the right decisions. It’s more helpful to be prepared to perform incident response with the right people, process, and technology. That is the only way an organization can be in a position to calmly make rational, fact-based, and accurate decisions during a breach response.

Tuesday, April 22, 2014

Neil deGrasse Tyson, Carl Sagan, and the Art of Communication

Recently, I found inspiration in the premiere episode of the series “Cosmos”. Near the end of the episode, renowned astrophysicist Neil deGrasse Tyson (the host) pulled out Carl Sagan’s day planner from 1975. December 20th contained an entry marked “Neil Tyson” -- the day that a young, aspiring astrophysicist named Neil Tyson met Carl Sagan (who passed away in 1996) for the first time. Dr. Tyson recalled the conclusion of his visit with Carl Sagan:


At the end of the day, he drove me back to the bus station. The snow was falling harder. He wrote his phone number, his home phone number, on a scrap of paper. And he said, "If the bus can't get through, call me. Spend the night at my home, with my family."

I already knew I wanted to become a scientist, but that afternoon I learned from Carl the kind of person I wanted to become. He reached out to me and to countless others. Inspiring so many of us to study, teach, and do science. Science is a cooperative enterprise, spanning the generations.


Although Neil deGrasse Tyson is a modest person, he clearly possesses the ability he credits Carl Sagan with -- namely, the ability to communicate complicated scientific subjects to a general, non-scientific audience.

When I look at the information security profession, I see a profession that can learn a lot from Carl Sagan and Neil deGrasse Tyson. At a high level, one of the greatest challenges we face as a community is how to communicate complicated, deeply technical security subjects to a business audience and/or a general audience. This challenge plays out time and time again -- in board meetings, on sales calls, during partner discussions, in the media, in educational settings, and elsewhere. Essentially, we want to inspire people to study, teach, and do security, just as Carl Sagan inspired people to study, teach, and do science.

Unfortunately, there are some people in the security community who tend to react to a lack of security knowledge in a snarky, condescending way. Sadly, this behavior also sometimes governs discourse within the security community itself. Needless to say, this does not help us to communicate. This approach causes people to tune out or ignore us, and as a result, opportunities to advance the state of security are often missed.

I am reminded of the quote: “If you can’t explain it simply, you don’t understand it well enough”. We in the security community can take a lesson from this to help us advance our ideas and improve the state of security. Communication is an art, and we as a community could learn a powerful lesson from the communication successes of Carl Sagan and Neil deGrasse Tyson. I think this is something we can all keep in mind the next time we find ourselves in a position to communicate something.

Friday, April 18, 2014

Yet Another Breach

Yet another breach. Today it is Michaels Stores that is in the news for having suffered a breach. Tomorrow it will be someone else. We might as well coin the acronym (YAB) now. As we know, breaches happen. Every organization will be breached at one time or another. Because of this, the security community embraces the concept of incident response. In other words, do everything you can to protect your organization from attack, but know that attacks will still get through your defenses. When the attacks do occur, be prepared for incident response with the right people, process, and technology. This is the only way to ensure that breaches are detected promptly, incidents are handled swiftly, and damage is minimized.

A few points that I thought were noteworthy about the Michaels Stores breach:
  • There were actually two breaches -- one involving the theft of 3,000,000 credit card numbers at Michaels Stores, and one involving the theft of 400,000 credit card numbers at its Aaron Brothers subsidiary.
  • In each case, the attackers were on the network for more than eight months before the breach was detected. This may seem like a long time, but unfortunately, it is quite common.
  • Michaels Stores had said in January that it was investigating a breach, which implies that the incident response took roughly three months. This also may seem like a long time, but unfortunately, it too is quite common.
Even given the latest breach, the news is not all bad. Although the retail sector provides a financially attractive target for criminals, there are a few promising signs that things will soon improve:
  • A new retail ISAC (Information Sharing and Analysis Center) will soon be created to facilitate information sharing amongst retail organizations
  • The recent spate of breaches has raised awareness and caused many organizations (retail or otherwise) to thoroughly review their security operations programs and incident response preparedness
  • Many retailers are taking a proactive stance and standing up a formal Security Operations Center (SOC) with a rigorous incident response process
These trends indicate to me that, at a high level, we as a community are moving in the right direction. It’s important to remember that there is no silver bullet, and that no one project or piece of advice will address all of an organization’s issues. As always, the right people, process, and technology are the key to a successful security operations program and proper incident response preparedness.

Thursday, April 17, 2014

Five Ways to Chase Away Your Best Analysts

Recently, Fast Company published an article entitled “10 Ways to Lose Your Best Employees” (http://www.fastcompany.com/3019050/leadership-now/10-ways-to-lose-your-best-employees). The article provided an interesting perspective on how companies sometimes drive away their best talent. I thought it might be interesting to take a look at this concept from a security operations/incident response perspective. During the course of my career, I have seen organizations make mistakes that have cost them their best analysts. Hopefully this blog post will help organizations identify ways in which they can improve in order to retain their best talent. Here are my thoughts on “Five Ways to Chase Away Your Best Analysts”:
  1. Put a jerk or an idiot in charge: This concept is fairly universal, and was listed in the original Fast Company article as well. Studies have shown time and time again that the manager has the most direct effect on an employee’s happiness. Security operations is a serious business with serious consequences, and it is one that deserves a serious leader. Think twice before you crown a leader who can’t spell incident response, or who has no incident response or security operations experience. An analyst who needs to take time away from important work to give remedial security lessons to his or her “leader” is not going to be a happy analyst.
  2. Deliver technology that doesn’t work: In the heat of an incident response, key stakeholders need answers, and they need them fast. An experienced analyst knows how to interrogate the data to answer the tough questions of the day. Want to infuriate your best analysts? Provide them with technology that fights them and swims against the workflow, rather than technology that supports the mission. Another great way to bring about that resignation letter.
  3. Micro-manage incident response: During an incident response, management has the best intentions and wants to do what’s best for the organization. But management may be several years removed from the operational realities and best practices of the day. The role of management during incident response is to ask tough questions that need to be answered, and then to step back and let the analysts/incident responders go about doing the work required to answer those questions. More often than not, in my experience, management micro-manages incident response. This causes valuable analyst cycles to be wasted on pursuits that add little value or are potentially off task. Remember, there are a lot of ideas that may seem good in theory but that experience and practice have shown to be dead ends. The best analysts know this, and management can empower them by focusing them on high-level objectives and then letting them get to work.
  4. Value body heat over grey matter: It’s an unfortunate reality that office environments are sometimes political and require self-promotion. The best analysts are generally apolitical and spend most of their time hard at work, rather than tooting their own horn. Management can help them by presenting and representing their efforts and accomplishments to leadership. Want to step in and take credit for the hard work your best analysts are doing to make yourself look good? Kiss those analysts goodbye.
  5. Don’t match your actions to your words: Analysts are, not surprisingly, analytical by nature. Actions speak louder than words, and analysts can see through words that are not matched by action. If your security operations program is a priority, then make it so through action. Simply speaking to it as a priority without matching that talk with action will cause your best analysts to look elsewhere for a better fit.
Security operations and incident response are already a high priority or are quickly becoming a high priority for almost every organization. There is not enough experienced analytical talent to meet the demands of the field. Given this constraint, it’s perhaps helpful to understand the mistakes of others and to look at making your organization more attractive to scarce analytical talent.

Wednesday, April 16, 2014

Data Value vs. Data Volume

There is no shortage of big data talk these days. One concept that I have always hoped would get more attention is the concept of data value. Much of today’s big data discussion is dominated by the concepts of volume, velocity, and variety, but unfortunately, I haven’t seen much discussion around the concept of value.

When security organizations tackle the big data challenge, they primarily focus on two things:
  1. Gaining access to every data source that might be relevant to security operations
  2. Warehousing the data from all of those data sources
In my experience, organizations take this approach for two primary reasons:
  1. Historically, access to log data was scarce, creating a “let’s take everything we can get our hands on” culture
  2. There is not a great understanding of the value of each different data source to security operations, creating a “let’s collect everything so that we don’t miss anything” philosophy
Unfortunately, this creates new challenges that are particularly acute in the era of big data:
  1. The variety of data sources creates confusion, uncertainty, and inefficiency -- the first question during incident response is often “Where do I go to get the data I need?” rather than “What question do I need to ask of the data?”
  2. The volume and velocity of the data deluge the collection/warehouse system, resulting in an inability to retrieve the data in a timely manner when required
While it is true that a conservative, “collect everything” approach is good in the absence of anything better, I would suggest an alternative process that faces the challenges of collection and analysis head-on (a toy scoring sketch follows the list):
  1. Determine logging/visibility needs scientifically based on business needs, policy requirements, incident response process, and other guidelines
  2. Review the network architecture to identify the most efficient collection points
  3. Instrument the network appropriately where necessary/lacking visibility
  4. Identify the smallest subset of data sources that provide the required visibility and value with the least amount of volume
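One way to think about step 4 is as a coverage problem: pick the cheapest set of sources that still satisfies the required visibility. The sketch below is a toy illustration with made-up data sources, requirements, and daily volumes; it greedily selects the sources with the best coverage per gigabyte, which is a heuristic rather than a guaranteed-minimal answer.

```python
# Hypothetical inventory: each candidate data source lists the visibility
# requirements it can satisfy and an approximate daily volume in GB.
sources = {
    "proxy_logs":    {"covers": {"web_activity", "c2_detection"}, "gb_per_day": 40},
    "dns_logs":      {"covers": {"c2_detection"},                 "gb_per_day": 5},
    "netflow":       {"covers": {"lateral_movement", "exfil"},    "gb_per_day": 15},
    "full_pcap":     {"covers": {"web_activity", "c2_detection",
                                 "lateral_movement", "exfil"},    "gb_per_day": 900},
    "endpoint_logs": {"covers": {"execution", "persistence"},     "gb_per_day": 25},
}

# Visibility needs derived from business needs, policy, and the IR process.
required = {"web_activity", "c2_detection", "lateral_movement",
            "exfil", "execution", "persistence"}

# Greedy selection: repeatedly pick the source with the best
# coverage-per-gigabyte ratio until every requirement is met.
selected, remaining = [], set(required)
while remaining:
    name, info = max(
        sources.items(),
        key=lambda kv: len(kv[1]["covers"] & remaining) / kv[1]["gb_per_day"],
    )
    if not info["covers"] & remaining:
        raise RuntimeError("remaining requirements cannot be covered")
    selected.append(name)
    remaining -= info["covers"]
    del sources[name]

print("Collect:", selected)
```

With the made-up numbers above, the selection skips full packet capture entirely and meets the stated visibility needs with a small fraction of the volume, which is the point of weighing value against volume.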
This approach may seem radical at first glance, but those of us who have worked with log data in an incident response setting will see that this is really the only way that security operations programs can keep pace with the big data deluge. After all, if you can’t get a timely answer from the very data you insisted on collecting, was there really any value in collecting it? What goes in must come out easily, efficiently, and rapidly. Otherwise, there is simply no point in collecting it.

Monday, April 14, 2014

A Business Take on Heartbleed

I spent some time today reviewing blogs and articles from the past week. Not surprisingly, much of the content was dominated by Heartbleed, the OpenSSL vulnerability dating from 2011. There were many different perspectives taken on the Heartbleed vulnerability across a variety of different forums. From my perspective, one vantage point that seemed to be missing was the business take.

I thought it would be a useful contribution to the blogosphere to take the reader through the business perspective on Heartbleed. For the sake of this blog, let's put ourselves in the shoes of an organization that has recently learned of the vulnerability and seeks to assess its risk, damage, and liability. Though individual organizations may differ in their exact steps, the following process may help give the reader a general understanding of the approach taken in response to the Heartbleed vulnerability at a high level:
  1. The first step after learning of the Heartbleed vulnerability is to identify vulnerable servers within the organization. In other words, before we can respond, we must understand what we are responding to. If accurate and detailed server inventory information is available, the organization may be able to identify vulnerable servers directly from it. More likely, the organization will need to perform a vulnerability scan across all server networks to identify any vulnerable servers (a minimal sketch illustrating this and the following step appears after the list). This is predicated on the assumption that the organization knows where all of its servers are located, including any located with outsourced, hosting, or cloud providers. In practice, unfortunately, this turns out to be a fairly large assumption.
  2. Once vulnerable servers have been identified, the organization will need to determine for how long each server has been vulnerable. In the case of Heartbleed, each server may have been vulnerable for two years or longer. The length of time that each server was vulnerable provides us with important time bounds around our investigation.
  3. For each vulnerable server, a damage assessment will need to be done. In other words, the organization will need to understand if the vulnerability was actively exploited, and if so, what information was compromised. Further, the organization will need to understand if that information was used to gain access to additional information. For example, if passwords were stolen and subsequently used to log in to users’ accounts, that could potentially compromise all information available through that specific application for those users. In order to perform a damage assessment, network forensics will need to be performed. Network forensics allows us to study the network traffic data of record and understand the history of what occurred, when it occurred, and how it occurred. Hopefully the organization has:
    1. Its server networks properly instrumented for collection
    2. A period of retention long enough to allow the organization to properly assess the damage
    3. The capability to analyze all of that data
  4. Once the damage has been assessed, notifications may need to be made per legal or other requirements. Incident responders will need to work with management, executive, legal, and privacy stakeholders to understand what actions need to be taken. An accurate, factual, non-assumptive assessment of the damage will need to be relayed to the stakeholders so that they can determine what subsequent actions need to be taken.  In my experience, in the absence of factual information (e.g., where it is not possible to perform network forensics to obtain ground truth), most organizations take a conservative approach and assume everything that could have been compromised has been compromised.
  5. Obviously, patching (remediation) is a large part of the Heartbleed response. Patching can be performed in parallel to the damage assessment. Since we are dealing with live, production servers, the patching process will require a fair bit of coordination.
  6. Lessons learned are always an important part of the incident response process, and the Heartbleed response is no different. In my experience, many organizations will learn that:
    1. They have server networks that they were not previously aware of
    2. Some or all of their server networks were not properly instrumented for collection
    3. Their data retention period is not long enough to meet the needs of the Heartbleed response
    4. They do not have the analytical capability to properly perform network forensics
If any or all of these are the case, steps can be taken to ensure that the organization is better prepared for the next incident response.
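As a rough illustration of steps 1 and 2 above, here is a minimal sketch that assumes a hypothetical inventory or scanner export with host, OpenSSL version, and deployment date columns. It flags hosts running vulnerable OpenSSL builds (1.0.1 through 1.0.1f) and estimates how long each has been exposed; a real response would rely on an actual vulnerability scanner, and the result is only as good as the inventory data behind it.

```python
import csv
from datetime import date

# OpenSSL 1.0.1, the first release containing the heartbeat bug, shipped in
# March 2012; 1.0.1g (April 2014) contains the fix.
FIRST_VULNERABLE_RELEASE = date(2012, 3, 14)
VULNERABLE_VERSIONS = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                       "1.0.1d", "1.0.1e", "1.0.1f"}

# Assumed scanner/inventory export with columns: host, openssl_version, deployed.
SCAN_EXPORT = "openssl_inventory.csv"

vulnerable = []
with open(SCAN_EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        if row["openssl_version"] in VULNERABLE_VERSIONS:
            deployed = date.fromisoformat(row["deployed"])
            # The exposure window starts when the host first ran a vulnerable
            # build: its deployment date or the 1.0.1 release, whichever is later.
            window_start = max(deployed, FIRST_VULNERABLE_RELEASE)
            vulnerable.append((row["host"], row["openssl_version"], window_start))

for host, version, start in sorted(vulnerable, key=lambda v: v[2]):
    print(f"{host}: OpenSSL {version}, potentially exposed since {start}")
```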

Heartbleed provides us with a unique use case with which to peer through the incident response lens. Some organizations may have been better prepared for Heartbleed than others, but we can all learn from the lessons of Heartbleed. It's only a matter of time until the next Heartbleed comes along, and our goal as a community should be to learn from this experience and improve our preparedness as a whole.

Friday, April 11, 2014

The Money Shot

Wikipedia defines an Indicator of Compromise (IOC) as "an artifact observed on a network or in an operating system that with high confidence indicates a computer intrusion." Associated contextual information is usually included along with the artifact and helps an organization properly leverage the IOC. Context most often includes, among other things, the attack stage to which the indicator is relevant. Attack stages can be broken up into three main families, each of which contains one or more attack stages:
  • Pre-infection: reconnaissance, exploit, re-direct
  • Infection: payload delivery
  • Post-infection: command and control, update, drop, staging, exfiltration
It is well known that many organizations struggle with excessive amounts of false positives and low signal-to-noise ratios in their alert queues. There are several different angles from which an organization can approach this problem. One such approach, which can be used in combination with others, is to go for the "money shot".

At some point, when an organization wants to watch for and alert on a given attack, intrusion, or activity of concern, that organization will need to select one or more IOCs for this purpose. Going for the "money shot" involves selecting the highest fidelity, most reliable, least false-positive prone IOC or IOCs for a given attack, intrusion, or activity of concern. For example, if we look at a typical web-based re-direct attack, it may involve the following stages:
  1. Compromise of a legitimate third party site to re-direct to a malicious exploit site
  2. Exploitation of the system from the malicious exploit site
  3. Delivery of the malicious code
  4. Command and control, along with other post-infection activity
Although it is possible to use IOCs from all four of the above attack stages, using IOCs from the first three stages presents some challenges:
  1. Compromised legitimate third party sites likely number in the millions, meaning we would need millions of IOCs to identify just this one attack at this stage. Further, there is no guarantee that the attempted re-direct would succeed (e.g., if it were blocked by the proxy). An unsuccessful re-direct means that there was no attempt to exploit. In other words, for our purposes, a false positive.
  2. Exploits don't always succeed, and as such, alerting on attempted exploits can often generate thousands upon thousands of false positives.
  3. If we see a malicious payload being delivered, that is certainly of concern. But what if the malicious payload does not successfully transfer, install, execute, and/or persist? We have little insight into whether a system is infected, unless of course, we see command and control or other post-infection activity.
Command and control (C2) and other post-infection activity, on the other hand, occurs only after a successful infection. That means that if we can distill a high fidelity, reliable IOC for this attack stage, we can identify malicious code infections immediately after they happen with a very low false positive rate. Granted, it is not always possible to identify a reliable post-infection IOC. In those cases, the next best alternative should be sought, likely an IOC for payload delivery.
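As a minimal sketch of this selection logic, consider the snippet below. The stage names follow the families listed above, but the indicators and fidelity scores are made up for illustration; in practice, fidelity would come from whoever produced the intelligence and from the organization's own vetting.

```python
# Attack stages grouped by family, ordered from most to least preferred for
# alerting (post-infection indicators tend to be the least false-positive prone).
STAGE_PRIORITY = {
    "command_and_control": 0, "update": 0, "drop": 0,
    "staging": 0, "exfiltration": 0,                    # post-infection
    "payload_delivery": 1,                              # infection
    "reconnaissance": 2, "exploit": 2, "redirect": 2,   # pre-infection
}

# Hypothetical IOC records for a single campaign, with assumed fidelity scores.
iocs = [
    {"indicator": "203.0.113.7:443",          "stage": "command_and_control", "fidelity": 0.95},
    {"indicator": "payload.example-cdn.net",  "stage": "payload_delivery",    "fidelity": 0.80},
    {"indicator": "compromised-blog.example", "stage": "redirect",            "fidelity": 0.30},
    {"indicator": "GET /exploit.html",        "stage": "exploit",             "fidelity": 0.40},
]

def money_shot(iocs, min_fidelity=0.7):
    """Return the IOCs to alert on: the most-preferred stage family that has
    at least one indicator meeting the fidelity bar."""
    for priority in sorted(set(STAGE_PRIORITY.values())):
        candidates = [i for i in iocs
                      if STAGE_PRIORITY[i["stage"]] == priority
                      and i["fidelity"] >= min_fidelity]
        if candidates:
            return candidates
    return []  # nothing reliable enough; fall back to other approaches

for ioc in money_shot(iocs):
    print(f"Alert on {ioc['indicator']} ({ioc['stage']})")
```

Run against the sample data, only the C2 indicator survives the cut, which is exactly the "money shot" behavior described above.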

Some people may argue that detecting an attack after it has already happened is too late. To those people, I would argue that, in my experience, attempting to detect attacks as they happen most often leads to a deluge of false positives. This deluge almost always buries the team and prevents real attacks from being detected. If there are reliable ways to detect attacks as they happen, they should be used to block those attacks. However, it's important to remind ourselves that attacks will ultimately still get through those defenses. In those instances, the money shot has proven to be the best approach.

For each attack, there may be thousands or millions of noisy pre-infection IOCs. Conversely, there are usually a very small number of low noise post-infection IOCs. Whenever possible, go for the "money shot" to help reduce false positives and improve the signal-to-noise ratio.

Friday, April 4, 2014

Clouds on the Horizon

I had the pleasure of attending the Secure Cloud 2014 conference this week. The presentations and discussions were quite interesting, and I was eager to hear and understand the perspectives of the other conference attendees. As we are all aware, the move to the cloud is already underway. With the move comes various consequences and challenges relating to security operations and incident response. I wanted to share a few thoughts and observations from this week's conference in this post.
  • As one might expect, cloud providers focus first and foremost on business operations. Meeting the demands and requirements of SLAs weighs heavily on the minds of cloud providers. I did hear some encouraging dialogue around the idea that good security is good business. In other words, if customers are worried that their data will be stolen because of the risk of a breach at a given provider, they are more likely to change providers. This was encouraging to hear.
  • Current security efforts appear to be heavily focused on regulatory requirements, compliance, privacy, encryption, and protection of customer data from nation-state spying. I was surprised at how much of the dialogue during the conference centered on Edward Snowden, the NSA, and nation-state spying in general. Granted, this focus likely comes from the customers of the cloud providers, who are undoubtedly concerned about these issues. Nonetheless, there are many other topics important to the security discussion that deserve equal attention. If the cloud community becomes overly focused on nation-state spying, it risks succumbing to tunnel vision -- and that tunnel vision could prevent the community from progressing to a holistic approach to security that includes a robust security operations program.
  • Continuous security monitoring, security operations, incident response, and forensics remain a challenge for the cloud community. Awareness of these challenges is growing within the community, and I believe that the community will gradually move towards maturity in these areas. I must admit that I was quite surprised when the CSO of a cloud provider told me that after a breach, "what happened", "how it happened", and "what was taken" were not important questions to him. I am hopeful that this is the exception, rather than the rule, and that as awareness grows, this type of attitude and approach will change.
Cloud providers appear to be heading in the correct direction, and I applaud them for this. Those providers that understand the need to perform security operations and institute security operations programs proactively will fare better than those that only become concerned about security operations after a large breach or intrusion has occurred. In my estimation, it is only a matter of time before customers begin closely examining the security operations programs of cloud providers for maturity.

The good news, from my perspective, is that security operations is now something on the radar of cloud providers. It is also something that businesses will weigh as part of their decision regarding whether to move to the cloud, what to move to the cloud, when to move it, how to move it, and to which provider or providers to move it. It is most definitely encouraging and exciting to see the practices and wisdom of security operations moving into the cloud.