Friday, December 23, 2011
SOC/IRC Building
Time
Consider the example of the administrative assistant who sends emails and calendar invites (amidst performing a variety of other tasks) all day long. If we study the mail logs, there is nothing particularly interesting or unusual about this. But what if that same administrative assistant sends a bunch of emails and calendar invites between 02:00 and 03:00 on Sunday? Perhaps he/she is dedicated and catching up on work while dealing with a bout of insomnia. Or, perhaps he/she is about to become a pawn in a spear phishing campaign that will await targeted personnel when they arrive at work Monday morning...
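A minimal sketch of this kind of time-based check, assuming the mail logs have already been parsed into (timestamp, sender) pairs -- the hours, thresholds, and field layout here are purely illustrative:

from collections import Counter
from datetime import datetime

def off_hours(ts: datetime) -> bool:
    # Weekend, or a weekday outside of 06:00-20:00 local time
    return ts.weekday() >= 5 or not (6 <= ts.hour < 20)

def flag_off_hours_senders(events, min_events=10):
    # events: iterable of (timestamp, sender) pairs from the mail logs
    counts = Counter(sender for ts, sender in events if off_hours(ts))
    return [s for s, n in counts.items() if n >= min_events]

Each flagged sender is simply a jumping off point -- maybe insomnia, maybe a compromised account staging Monday morning's spear phish.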
Tuesday, November 22, 2011
Money Shot
Message Clarity
I've so often seen cases where the message is garbled or over-complicated, be it from a lack of knowledge, a lack of communication skills, or some other reason. This helps no one. I've often been told that one of my greatest strengths is being able to clearly and effectively communicate what I find through detailed analysis in an easy-to-understand manner. There is elegance in simplicity -- I firmly believe that. And an elegant, clear, concise, and simple message can often facilitate network security monitoring and security operations.
Tuesday, November 1, 2011
Properly Leveraging a SIEM
A SIEM can be a valuable foundation for an organization's security operations workflow -- but only if it is configured to present events of value to the analysts and incident handlers. In other words, if there is more noise than value-add, human nature is to tune out. There are a number of techniques one can employ to increase a SIEM's value-add by selectively and precisely engineering and tuning the rules, events, and views presented to the analyst. They mainly involve (surprise, surprise) studying, analyzing, and understanding the data on the network, and then purposely selecting the rules, events, and views that will return the most bang for the buck when presented to the analyst and/or incident handler.
I believe this to be an important point that is, unfortunately, often overlooked by organizations.
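One simple way to put numbers behind that tuning effort: measure, per rule, how often an alert actually leads to an investigation. A sketch, assuming your SIEM can export alerts with a rule name and an escalated/dismissed disposition (both field names are hypothetical):

from collections import defaultdict

def rule_value(alerts):
    fired = defaultdict(int)
    escalated = defaultdict(int)
    for a in alerts:
        fired[a["rule"]] += 1
        escalated[a["rule"]] += a["escalated"]  # 1 if an analyst ran with it
    # Low-ratio, high-volume rules are prime tuning (or retirement) candidates.
    return sorted((escalated[r] / fired[r], fired[r], r) for r in fired)

Rules at the bottom of that list are pure noise; rules at the top are the value-add worth presenting to the analyst.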
Friday, October 28, 2011
Outbound Denies
Monitoring denied outbound traffic yields two extremely interesting types of data from a network traffic analysis perspective:
1) Traffic that is proxy aware, but attempting to connect out to controlled/filtered/denied sites.
2) Traffic that is not proxy aware, and is thus dropped (no default route out allowed).
Both #1 and #2 can be caused by misconfigurations, and #1 can be caused by attempted drive-by re-directs that fail/are blocked (and thus do not result in successful compromise). In both those cases, the denied traffic is not caused by malicious code being successfully installed and executed. In other cases, however, #1 and #2 may indeed be caused by malicious code or some other type of rogue process.
It's another jumping off point that can be used to provide valuable insight into what may be occurring on the network. With the speed at which attackers maneuver, every little bit helps.
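For the curious, a sketch of how one might summarize the denied outbound traffic, assuming firewall log records parsed into dicts -- the field names and the internal address range are assumptions to adapt to your own environment:

from collections import Counter
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")  # illustrative internal range

def top_denied_talkers(records, n=20):
    counts = Counter(
        (r["src_ip"], r["dst_ip"], r["dst_port"])
        for r in records
        if r["action"] == "deny" and ip_address(r["src_ip"]) in INTERNAL
    )
    # A host repeatedly denied to the same destination is worth a closer look.
    return counts.most_common(n)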
Sunday, October 16, 2011
Giving Credit Where Credit is Due
Talk is Cheap
Thursday, October 6, 2011
Passive DNS
Passive DNS data is a lot of fun to experiment with and analyze, and it provides a good deal of value. Check it out!
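One example of the kind of pivot passive DNS enables: given a known-bad IP address, find every domain that has historically resolved to it. A sketch over a hypothetical flat export of (first_seen, domain, rrtype, rdata) records -- not any particular vendor's API:

def domains_for_ip(pdns_records, bad_ip):
    return sorted({
        domain
        for first_seen, domain, rrtype, rdata in pdns_records
        if rrtype == "A" and rdata == bad_ip
    })

Each domain that comes back is a candidate for blocking or for retro-searching your own logs.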
Thursday, September 22, 2011
Seek First to Understand
Where to Look?
Wednesday, September 14, 2011
Don't Forget About DNS
In some sense, the near-constant attention paid to HTTP-based threats is good -- there is still a tremendous amount of badness delivered via HTTP-based mechanisms of various sorts, and defending against it is very necessary. Looked at from a different perspective, though, that inordinate focus comes at the expense of other threats and opens organizations up to attacks from other angles. One can't fault organizations for this, but it is worth noticing.
In this post, I'd like to remind us not to forget about DNS. DNS is an incredibly flexible protocol that can move almost any amount of data in and out of organizations, sometimes undetected. For example, I've recently seen DNS TXT records, rather than HTTP, used for command and control of malicious code and even delivery of additional payload. Why would an attacker do this? Why not? When we learn of a malicious domain, what do we most often do? Yep, that's right -- we block HTTP-based communication with it (via proxy blocks, firewall blocks, or otherwise). But what do we most often do regarding DNS requests for those very same malicious domains? Yep, that's right -- nothing. So, can you blame the attackers for being creative in their exploitation and use of the DNS protocol?
Just another reason to stay diligent in the continual analysis of all types of network traffic -- not just HTTP-based traffic.
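As a concrete starting point, one could look for clients making unusually many, or unusually large, TXT lookups. A sketch, assuming DNS logs parsed into dicts -- the field names and thresholds are illustrative, not tuned values:

from collections import Counter

def suspicious_txt_clients(dns_events, min_queries=50, min_answer_len=100):
    counts = Counter(
        e["client_ip"]
        for e in dns_events
        if e["qtype"] == "TXT" and len(e.get("answer", "")) >= min_answer_len
    )
    return [(client, n) for client, n in counts.most_common() if n >= min_queries]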
Sender Policy Framework
To think about it through a concrete example, say I wanted to relay email and spoof the sender such that the email appears to be sent from someguy@example.com. If I'm attempting to relay email from a cable modem dynamic IP address, then I'm probably not a legit mail gateway for example.com. Implementing SPF checking instructs your mail server to perform this "reality check" -- comparing the connecting IP address against the list of authorized senders that example.com publishes in DNS -- before accepting the email. Seems straightforward, right? Exactly.
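To make the mechanics concrete, here is the core of that check boiled down, with an illustrative SPF record -- real SPF evaluation handles many more mechanisms (include:, mx, a, redirects), so treat this strictly as the idea:

from ipaddress import ip_address, ip_network

SPF_RECORD = "v=spf1 ip4:192.0.2.0/24 -all"  # published by example.com as a DNS TXT record

def spf_permits(record: str, sender_ip: str) -> bool:
    for term in record.split():
        if term.startswith("ip4:"):
            if ip_address(sender_ip) in ip_network(term[4:]):
                return True
        elif term == "-all":
            return False  # hard fail: no earlier mechanism matched
    return False

A connecting IP of 192.0.2.10 passes; the cable modem at, say, 203.0.113.50 fails, and the spoofed email is rejected.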
Taming Your Ingress/Egress Points
Thursday, September 8, 2011
Other Alphabets
Window of Opportunity
So how can we turn this tidbit into an interesting analytical technique? What about reviewing the list of blocks received from our proxy vendor and searching back a week or two in our proxy log data to see if any systems were infected before the block was pushed down? Pretty neat if you ask me, and a highly effective way to identify infected systems in the enterprise.
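A sketch of that retro-search, assuming a simple space-delimited proxy log with timestamp, client IP, and domain as the first three columns (adapt the parsing to whatever your proxy actually emits):

def retro_hits(proxy_log_lines, blocked_domains):
    blocked = set(blocked_domains)
    hits = []
    for line in proxy_log_lines:
        ts, client_ip, domain = line.split()[:3]  # assumed column order
        if domain in blocked:
            hits.append((ts, client_ip, domain))
    return hits  # candidate infections that predate the block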
Sunday, August 28, 2011
Common Challenges
Give and Take
Wednesday, August 10, 2011
Uber Data Source
The community appears to be receptive to the idea that we need to consider moving towards a consolidated uber data source that allows us to successfully monitor our networks and investigate incidents/events. In addition, there are a number of vendors beginning to move in this direction, which is great to see.
My talk should be posted on the GFIRST conference website (http://www.us-cert.gov/GFIRST) after the conclusion of the conference. I'd be interested in hearing thoughts and opinions regarding both the talk and the concept of the uber data source in general.
Chess Game
The best and brightest folks, those we need monitoring and securing our most critical government assets, aren't interested in playing chess. They want to be put to work on analytically challenging and motivating tasks. The chess game frustrates them, burns them out, and causes the best and brightest to leave the federal sector. I think this is extremely unfortunate, and I can only hope that one day those at the top of the political pyramid will realize this and change things for the better. Until then, I see this as a major challenge for the federal sector.
In the great game of politics and bureaucracy, it's unfortunately the American people who lose.
Friday, July 22, 2011
Free and Open Discourse
In the world of network traffic analysis and network security monitoring, free and open discourse is extremely important. Analytical techniques that are subject to discussion and critique will always be better than those that aren't. Quite simply put, analysts cannot thrive in a bubble, and neither can a robust network security monitoring program.
Unfortunately, there are some organizations that keep their analysts insulated, for whatever reason. These organizations tend to wall themselves off from the free exchange of ideas. In my experience, this hurts those organizations, as their analysts often fall behind the collective intelligence.
The good news is that there are always ideas waiting to be heard if someone is willing to listen.
Friday, July 8, 2011
What? When? Where? How?
What?
When?
Where?
How?
The other two question words in the English language, namely Who? and Why?, are best left for law enforcement to answer for a number of reasons. That's a bit beyond the scope of this blog, so I'll brush it aside for now.
The four questions of incident response can be elaborated a bit more as:
What happened? What type of incident has occurred? What damage has occurred?
When did the incident happen? When was the incident detected?
Where did the incident occur? Is it isolated or widespread? Where is the incident coming from?
How did the incident occur? How did the intruders get in (the infection vector)?
If an analyst keeps these four questions in mind, it's much easier to focus an incident investigation/analysis and ensure that the correct supporting evidence is maintained and that the correct information is reported.
It's an intuitive approach that has been proven to help analysts focus their attention on the most value-added activities. Hopefully you'll find it useful as well.
Friday, June 24, 2011
Spear Phishing
A simple analytical method to monitor for this is to watch mail logs or a PCAP solution for "From" addresses claiming to be from within your organization, but from mail gateway IP addresses or sender IP addresses that are outside of your organization. The data resulting from this is quite fascinating. Have a look!
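A sketch of that check, assuming mail log records parsed into dicts -- the domain, gateway ranges, and field names are stand-ins for your own:

from ipaddress import ip_address, ip_network

ORG_DOMAIN = "example.com"
ORG_NETWORKS = [ip_network("198.51.100.0/24")]  # your real mail gateways here

def suspect_spoofs(mail_events):
    return [
        e for e in mail_events
        if e["from_addr"].endswith("@" + ORG_DOMAIN)
        and not any(ip_address(e["sender_ip"]) in n for n in ORG_NETWORKS)
    ]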
GFIRST Presentation
I always enjoy GFIRST, as I seldom have the opportunity to be around so many like-minded analyst geeks at one time.
Monday, June 20, 2011
4G Hotspot
Loss of Visibility
Remember, even the best analyst can't identify security issues on a network if the data isn't there to support the analysis....
Friday, May 20, 2011
We Already Use Layer 7 Enriched Meta-Data and Don't Know It
Merits of Meta-Data
- Long term retention of data for auditing and forensics purposes without the need for large amounts of expensive disk space.
- The ability to see all the data without needing to sample, filter, or drop certain traffic.
- Rapid search capability over vast quantities of data collected over long periods of time.
Now, for sure there is information in the packet data that is helpful for identifying the true nature of malicious or suspicious traffic. I believe that meta-data based technologies and packet-based technologies can work together beautifully here. Meta-data allows one to craft incisive queries designed to interrogate the data so as to identify network traffic that requires further investigation. I call these jumping off points (also discussed in a previous blog post). From there, the packet data can be consulted to assist in the investigation (presuming that the retention window for the packet data has not already expired).
As the amount of traffic on our networks continues to grow, I believe that we as a community will need to get used to the network traffic analysis model/workflow described above. I sometimes refer to it as breadth, then depth. I believe it to be a model capable of scaling with the data volumes of the present and future.
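A sketch of what breadth, then depth might look like in practice, over hypothetical enriched-flow records (the field names and the pcap_store interface are invented for illustration):

def breadth_pass(flows):
    # Breadth: an incisive query over meta-data -- here, HTTP on a
    # non-standard port -- yields the jumping off points.
    return [
        f for f in flows
        if f.get("app_protocol") == "http" and f["dst_port"] not in (80, 8080)
    ]

def depth_pass(flow, pcap_store):
    # Depth: pull the matching packets, if still within the retention
    # window. pcap_store stands in for whatever packet index you have.
    return pcap_store.lookup(flow["src_ip"], flow["dst_ip"], flow["dst_port"])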
Seeing It All
Wednesday, May 4, 2011
Analyst Freedom
Thursday, April 14, 2011
Darknet
Since there are no legitimate services or servers on "dark" portions of the network, all the traffic destined for the darknet becomes suspect and therefore analytically interesting. For example, consider an aggregate analytic that looks at the top destination ports for inbound TCP traffic by number of sessions. If we were to look at traffic across the whole network, we would probably get something like this after running our analytic:
Port | Session Count
80 | #######
53 | #######
25 | #######
Not surprisingly, we see that our routine web, DNS (yes, even DNS over TCP), and SMTP traffic (all necessary and expected for normal business operations) would top the list. In this case, standard business traffic is the noise that hides the signal of the suspicious/malicious traffic. Could we somehow lower the noise to make the signal jump out at us? Yes -- through the power of the darknet!
What happens, however, if we look at the same aggregate analytic, but now restrict it to look only at traffic destined for our darknet? That's where we might get a more interesting result:
Port | Session Count
5678 | #####
Why would we have traffic inbound to TCP port 5678 (which I chose purely for illustrative purposes)? I don't know why, but I do know that what we now have is a jumping off point. Is someone attempting reconnaissance of our network? If they are, what are they looking for? Do we have other systems communicating with the outside world on that port that we never noticed before? These questions and others would need to be answered by network traffic analysis via deep-diving into the data with the end result of determining the true nature of the traffic.
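For completeness, a sketch of the aggregate analytic itself, restricted to darknet space -- the flow fields and the darknet range are illustrative:

from collections import Counter
from ipaddress import ip_address, ip_network

DARKNET = ip_network("10.66.0.0/16")  # an unused, unrouted internal block

def darknet_top_ports(flows, n=10):
    counts = Counter(
        f["dst_port"]
        for f in flows
        if f["proto"] == "tcp" and ip_address(f["dst_ip"]) in DARKNET
    )
    return counts.most_common(n)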
So, this is just a quick example of traffic analysis triggered via a darknet provided jumping off point. Hopefully it has helped to illustrate the broader power of the darknet. Darknets are truly an analytical goldmine.
Thursday, April 7, 2011
Analytical Platform
So how can we best enable analysts to create new analytical techniques? I believe that analysts need to be provided with an analytical platform that allows them the freedom to quickly and easily develop, test, and implement new analytical techniques without the hassles of data munging and data manipulation. In other words, the analytical platform should abstract the data, providing the analyst with an intuitive way to interact with the data. Additionally, the analytical platform should allow the analysts to seamlessly interact with the results of their analytical queries as they conduct their investigation.
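To make "abstract the data" a bit less abstract, here is one hypothetical shape such a platform's interface might take -- every name here is invented purely for illustration:

from collections import Counter

class FlowStore:
    def __init__(self, records):
        self.records = records

    def where(self, **criteria):
        # Chainable filter: store.where(proto="tcp").where(dst_port=5678)
        matched = [r for r in self.records
                   if all(r.get(k) == v for k, v in criteria.items())]
        return FlowStore(matched)

    def top(self, field, n=10):
        # Aggregate view over whatever survived the filters
        return Counter(r[field] for r in self.records).most_common(n)

The point is that the analyst expresses a question, not a parsing job.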
For many years, I have dreamed of such a capability. The good news is that there are now products and technologies coming onto the market that begin to address this need. Hear, hear!
Wednesday, April 6, 2011
Training
1) There isn't a great deal of literature/background reading on the topic
2) There aren't specialized training classes that a cyber security professional can enroll in to gain this skill set per se
3) It turns out that it's often quite hard to do analysis for a number of reasons (reference an earlier post entitled "Making Analysis About Analysis").
Regarding point 2, I'm hoping that the various cyber security training institutions/organizations out there will begin to form curricula around the topic of network monitoring/network traffic analysis. I see this as necessary, since those organizations have trained and will continue to train a large number of professionals in the field.
On point 3, I'm looking to technology for help. There are emerging products and technologies that will address it by providing an analytical platform upon which network monitoring/network traffic analysis techniques can be developed without all the frustrations of "fighting with the data" that are commonplace today.
There is some work that we as a community need to do here. I am optimistic that we will together rise to the challenge. The time has come to get to work.
Friday, April 1, 2011
ISSA Journal Article
"This article describes practical techniques for the cyber security professional to efficiently sift through the voluminous amounts of network data. These techniques leverage different views of the data to discern between patterns of normal and abnormal behavior and provide tangible jumping off points for deeper investigation."
If you are interested, give it a read and share your thoughts!
Thursday, March 31, 2011
80-20 Rule
There are some emerging technologies coming onto the scene now to get the uber analyst closer to that last 20%. At the same time, the broader cyber security community is awakening to the first 80% (in that the awareness of the need for network monitoring is rising). We live in interesting times for sure, and I'm excited to watch the evolution.
Tuesday, March 29, 2011
Sharing
My intent is to continue to share knowledge and techniques with the larger community. All indications are that the community is extremely interested in the topic.
Monday, March 21, 2011
Collection and Analysis
We mustn't forget the other half of the equation: analysis. I think if we as a community thought more about what we wanted to do analytically before we instrumented our networks, we would save ourselves a lot of pain down the line. Perhaps we will improve in this area in the coming years.
Wednesday, March 16, 2011
The Future
I issued the students this challenge: "Seek out, identify, and study the unknown unknowns and turn them into known knowns" (reference an earlier blog post regarding known knowns). I believe that this is the boiled down essence of our obligation as network monitoring professionals/analysts.
The future holds great potential for our field. I am realizing that the onus is on those of us currently in the field to capture the interest and energy of the brightest minds. The network monitoring field and broader cyber security field face many challenges, and in order to conquer them, we will need the best and the brightest.
Sunday, March 13, 2011
Jumping Off Points Revisited
I have previously blogged about the concept of jumping off points -- identifying subsets of data in a workflow that an analyst can run with. I often speak about this topic as well -- raw data can be overwhelming, and presenting it to an analyst along with a way forward can greatly improve productivity.
What I discovered last week was that this concept holds with other types of work as well. I was working on a paper with a co-worker. We were having trouble finding a way forward until we identified a jumping off point that we could work from. Once we figured out our jumping off point, everything else flowed. It was amazing.
I guess I shouldn't be surprised by now that a structured, well organized approach would yield good results.
Monday, March 7, 2011
Known Knowns
"Known Knowns" are network traffic data that we understand well and can firmly identify. Members of this class of network traffic can be categorized as either benign or malicious. Detection methods here can be automated and don't require much human analyst labor on a continuing basis. Unfortunately, this is the class of network traffic that we as a community spend the bulk of our time on. Why do I use the term unfortunately? More on that later.
"Known unknowns" are network traffic data that we have detected, but are puzzled by. We don't have a good, solid understanding of how to categorize this class of network data. One would think that because of this, we should spend a decent amount of time trying to figure out what exactly this traffic is. After all, if we don't know what it is, it could be malicious, right? Unfortunately, not enough time is put into this class of network traffic, and as a result, most organizations remain puzzled and/or turn a blind eye to the known unknowns. Why don't we work harder here? We're too focused on the known knowns.
"Unknown unknowns" are network traffic data that we have not yet detected, and as a result, we aren't aware of what this class of network traffic is (or isn't) doing on our network. This is the class of network data that contains most of the large breaches (and thus most of the collateral damage), as well as most of the truly interesting network traffic. Finding this traffic takes a skilled analyst, good tools, the right data, and a structured, well-organized approach to network monitoring. Ironically, this class would be extremely interesting to a skilled analyst, but due to the known known "rut" that we as a community are in, analysts don't really get a chance to touch this class.
So now I think you can understand why I think it's unfortunate that we as a community are so focused on the known knowns. We are so busy "detecting" that which we've detected time and time again, that we ignore the bulk of the rest of the network traffic out there. That's where we get in trouble repeatedly.
On the bright side, I do see the idea of taking an analytical approach to information security slowly spreading throughout the community. I think it's only a matter of time before one organization after another wakes up to the fact that their 1990s era signature-based approaches are only one part of the larger solution. With proper analysis and monitoring of network data and network traffic comes knowledge. And with knowledge comes the realization that what you don't know is often a lot scarier than what you do know.
Friday, March 4, 2011
Data Value
In thinking about why organizations end up in an overloaded/confused/complicated state, I've come up with two primary reasons:
1) No one data type by itself gives them what they need analytically/forensically/legally
2) There is great uncertainty of what data needs to be collected and maintained to ensure adequate “network knowledge”, so organizations err on the side of caution and collect everything.
To me, this seems quite wasteful. It's not only wasteful of computing resources (storage, instrumentation hardware, etc.), but it's also wasteful of precious analytical/monitoring/forensics cycles. With so few individuals skilled in how to properly monitor a network, the last thing we want to do is make doing so harder, more confusing, and further obfuscated.
The good news is, I think there is a way forward here. As I discussed in a previous post, enriching layer 4 meta-data (e.g., network flow data) with some of the layer 7 (application layer) meta-data can open up a world of analytical/monitoring/forensics possibilities. I believe that one could take the standard netflow (layer 4) meta-data fields and enrich them with some application layer (layer 7) meta-data to create an "uber" data source that would meet the network monitoring needs of most organizations. I'm not sure exactly what that "uber" data source would look like, but I know it would be much easier to collect, store, and analyze than the current state of the art. The idea would be to find the right balance between the extremes of netflow (extremely compact size, but no context) and full packet capture (full context, but extremely large size). The "uber" data source would be somewhat compact in size and have some context. Exactly how to tweak that dial should be the subject of further thought and dialogue.
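To make the idea tangible, here is one possible shape for such a record -- the layer 4 fields are the classic netflow set, and the layer 7 fields are illustrative choices, since which ones earn their storage cost is exactly the dial to be tuned:

from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrichedFlow:
    # Layer 4: the standard netflow-style fields
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str
    byte_count: int
    packet_count: int
    start_ts: float
    end_ts: float
    # Layer 7: a small, high-value enrichment (illustrative picks)
    app_protocol: Optional[str] = None   # e.g., "http", "dns", "smtp"
    http_host: Optional[str] = None
    http_user_agent: Optional[str] = None
    dns_qname: Optional[str] = None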
This is something I intend to continue thinking about, as I see great promise here.
Wednesday, February 23, 2011
Pow-wow Power
Today I was on-site working with a customer, which I always enjoy. For part of the day, we engaged in an analytical pow-wow -- a free exchange of analytical thoughts, ideas, and methods. The session was great fun, but also highly effective. We all learned from each other, and after an hour or two, we had some cool new analytical techniques we wanted to try out. These pow-wows are always interesting and never fail to be productive. Definitely something to keep in mind for planning purposes.