Wednesday, December 22, 2010
Time to Reflect
Things have slowed a bit around the workplace of late. It's kind of nice, as it gives me a chance to reflect. Yesterday, I was thinking about how far the network monitoring/network traffic analysis community has come in the past decade. Ten years ago, large portions of the information security community were content to implement signature-based detection methods and believe they were covered. Nowadays, people en masse are awakening to the reality that the traffic transiting a network needs to be examined with an analytical eye. This is a wonderful thing, as we die-hard analysts can now share our passion with an audience that is ready to hear the message.
Tuesday, October 26, 2010
Abusing Standards
This morning I presented at the Techno Forensics conference. I had a great audience and tried to share my thoughts on analysis and network forensics with them. I think the talk went well. The audience asked some great questions on abusing IP protocol standards, which is one of my favorite artifacts to look for analytically. See, that's the nice thing about network traffic analysis -- network traffic conforms (or should conform) to IETF standards. Looking for cases when it doesn't (for example, TCP packets of less than 48 bytes) can turn up some very interesting finds!
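As a quick illustration of hunting for standards abuse, here is a minimal Python sketch that flags packets claiming to be TCP but falling below the size threshold mentioned above. The records are hypothetical (ip_protocol, total_length) pairs, not output from any real capture tool; protocol number 6 is TCP.

```python
# Flag packets marked as TCP whose total length falls below the
# 48-byte threshold discussed in the talk. Records are hypothetical
# (ip_protocol, total_length) pairs.
SUSPECT_THRESHOLD = 48

def undersized_tcp(packets):
    """Return packets claiming to be TCP but suspiciously short."""
    return [p for p in packets if p[0] == 6 and p[1] < SUSPECT_THRESHOLD]

packets = [(6, 60), (6, 38), (17, 28), (6, 52)]
print(undersized_tcp(packets))  # [(6, 38)]
```

In practice you would feed this from flow records or packet headers, but the filtering logic is the same: know what the standard allows, then look for traffic that falls outside it.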
Wednesday, September 22, 2010
Truth
The past few days I've been pondering what truth is. This may seem like an odd thing to think about. If you really think about it though, how often do we know what absolute truth is in a situation? Unfortunately, not often. This is the case in both the analog and digital worlds. We are only as good as our data in the digital world. Over the course of my career, I've seen situations where the data tells a very bizarre story, only to be found later to have been collected in error.
Thursday, September 2, 2010
More Proxy Fun
This morning I met a friend of mine for coffee. He is a bright guy and also a talented cyber security analyst. We had a good discussion on a number of different topics. At one point, we got into a discussion of blind proxying of DNS requests with no logging (an earlier topic I had blogged on). He decided to check his proxy, which was a different one than the one I had blogged about earlier (I am keeping both anonymous here). Same issue. Yup -- the proxy just blindly forwards DNS requests with no logging. I am beginning to think that this is a fairly common behavior with proxies. Have you checked yours lately?
Monday, August 30, 2010
Success
The other day, I was having a conversation with someone who used some of my jumping off points on one of the large, enterprise networks they monitor. They were shocked that the jumping off points were able to identify some truly sketchy traffic on that network (serious compromises). They said to me, "your theory really works!". To which I replied, "it's not a theory -- it's been tested and proven repeatedly." Another believer.
Monday, August 16, 2010
Elegance in Brevity
It seems to be a common misconception that in order for a solution to be value-add and useful, it must be cumbersome and complex. I'm not sure why this is, as in practice, I've found this to be quite the opposite. There is elegance -- and usefulness -- in brevity. For example, many cyber security teams struggle with what information they should share/pass around to other teams to be good netizens and collaborators. Well, for starters, why not pass around malicious domain names and malicious code MD5 hashes? True, this is not the whole kit and caboodle and doesn't tell the whole story, but if we all shared even just those two pieces of data, wouldn't we be better off as a community?
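To show just how little machinery those two data points require, here is a sketch in Python. The domain and sample bytes are made up for illustration; a real indicator would carry the hash of an actual malicious file.

```python
import hashlib

# A hypothetical shared indicator: a malicious domain paired with
# the MD5 hash of a malware sample (placeholder bytes here).
sample_bytes = b"placeholder malware sample"
indicator = ("bad-domain.example.com", hashlib.md5(sample_bytes).hexdigest())
print(indicator)
```

Two strings per indicator. That's the whole exchange format, and it's enough for another team to start blocking and searching.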
Some interesting food for thought.
Monday, August 2, 2010
Spotting a True Anomaly
The other day, I was boarding an 8 AM flight with a cup of coffee (purchased after the security checkpoint) in my hand. If you've ever taken an 8 AM flight, you know that you're probably leaving for the airport around 6 AM. Wanting to take a cup of coffee on the flight with you is not such an anomaly. In other words, it's a very expected type of behavior. Nonetheless, TSA pulled me aside as I was waiting to board, in their words "because you want to board the plane with a cup of coffee, we will need to vapor test your coffee". They proceeded to hold what looked like litmus paper over the coffee. They then sprayed the paper with some clear liquid and pronounced my coffee free of harmful vapors. I was then free to board the plane.
There are a few issues with this logic:
1) One is allowed to purchase liquids after the security checkpoint and bring them on board the aircraft. This is an accepted behavior that is seen frequently and has been identified as legitimate by TSA authorities. Functionally, this is a white listed behavior. So why waste precious TSA personnel cycles on it?
2) If I wanted to mix something into the coffee to produce some sort of harmful vapor, I would wait until I was on the plane to do so. Why would I waste precious vapors before boarding?
3) I could just as easily order coffee on the plane, mix something into it, and have the same effect without TSA being able to vapor test my coffee. The TSA test is easily avoided.
So, you're probably asking yourself what relevance this has to this blog. In the above example, the TSA inspector (the analyst in this example) pulled me aside for what he considered an anomalous behavior. The problem is that my behavior was routine, legitimate, widely accepted, and easily explained. It wasn't a wise use of precious analyst cycles.
It works the same in the cyber realm. We need to make sure we optimize analyst workflow so that analysts spend most of their day chasing down true anomalies that have no easy explanation, aren't routine, aren't legitimate, and aren't widely accepted. There are a lot of ways to generate fruitless leads for analysts to chase down. The challenge is generating actionable, focused leads. That's where taking an analytical approach can help.
Moving back to the physical realm, I hope the good folks at TSA will take a good look at the value-add of some of these procedures. TSA personnel's time is valuable and limited. They should be focused on procedures with high value-add.
Thursday, July 22, 2010
Premonition
Analysis junkies like me get an intuitive feeling about particular ways to slice and dice network traffic data that are likely to produce interesting results. I had never heard this referred to formally until a meeting I was in this past week. My counterpart used the word "premonition". I like it. It's now in my digerati vocabulary.
Uncommon Protocols
IANA maintains a registry of 256 Internet Protocol numbers (0-255). The protocols assigned to these numbers vary widely in capability, purpose, and function. What protocols do you route on your network? For sure TCP, UDP, and ICMP. Perhaps a few others as well for one reason or another. Are you sure that you only route the protocols you think you do? Looking at the data is the only way to find out for sure.
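Checking this against the data can be as simple as tallying the protocol field of your flow records. Here is a minimal sketch in Python; the flow records and the "expected" protocol set are made up for illustration.

```python
from collections import Counter

# Hypothetical flow records: (protocol_number, src_ip, dst_ip).
flows = [
    (6, "10.0.0.5", "93.184.216.34"),   # TCP
    (17, "10.0.0.5", "8.8.8.8"),        # UDP
    (1, "10.0.0.9", "10.0.0.1"),        # ICMP
    (47, "10.0.0.7", "198.51.100.2"),   # GRE -- did we mean to route this?
]

EXPECTED = {1, 6, 17}  # the protocols we believe we route

counts = Counter(proto for proto, _, _ in flows)
unexpected = {proto: n for proto, n in counts.items() if proto not in EXPECTED}
print(unexpected)  # {47: 1} -- worth a closer look
```

Anything left in `unexpected` is a conversation to have with your network team.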
A colleague of mine was looking at a client network and discovered that the client was routing some pretty unusual protocols. The client was not aware of this and became quite concerned. Just another reason we should all be vigilant in monitoring our networks and studying what the data are telling us.
Thursday, July 8, 2010
Logging Update
I have good news regarding the logging issues I described in previous posts. I sat down with the client and the vendor, and we had a productive meeting together. We all agreed that logging of DNS queries ought to be part of the product. In fact, the vendor couldn't understand why it was ever overlooked/omitted by them in the first place. The vendor agreed to include this feature in the next release of the product (date of release still undetermined).
The good news here is that analyzing the data on the network revealed a shortcoming in a vendor solution that many organizations use (including yours perhaps). Most people probably rely on the logging of this product without having any reason to question it. My hope here is that the issue I identified will allow this vendor's entire customer base to better protect and defend their networks.
Today is a good day. The entire cyber security community will benefit because of this. Now that's cool.
Wednesday, June 30, 2010
Looking Outward
The Internet is a noisy and scary place. When analyzing their network traffic, how can an organization effectively and efficiently sift through all the noise? One method to help with this is to look outward. What do I mean by this? By looking outward, I mean focusing on traffic leaving your enterprise (headed to IP addresses that are external to your network). This works primarily for one reason: Although there may be a great deal of noise on the Internet and a great deal of noise internally, there shouldn't be cross-pollination between those two noisy realms. That cross-pollination would be indicative of something anomalous leaving your network (other than the routine/obvious types of outbound traffic that we would expect).
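A minimal sketch of the looking-outward filter, in Python. The RFC 1918 private ranges stand in for "internal" here; a real deployment would use the organization's actual address plan, and the flow records are hypothetical.

```python
import ipaddress

# RFC 1918 ranges stand in for "internal" in this sketch.
INTERNAL = [ipaddress.ip_network(n) for n in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL)

def outbound(flows):
    """Keep only flows from an internal source to an external destination."""
    return [(s, d) for s, d in flows if is_internal(s) and not is_internal(d)]

flows = [("10.1.2.3", "192.168.5.5"),   # internal-to-internal: noise
         ("10.1.2.3", "203.0.113.9")]   # internal-to-external: keep
print(outbound(flows))
```

Everything that survives the filter crossed the boundary between the two noisy realms, which is exactly the traffic worth an analyst's attention.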
There are some ways to further reduce the noise contained within outbound traffic, and I will blog about those in a future post. The bottom line is that if you can create a jumping off point with very little noise, it's going to be an efficient analytical technique.
Wednesday, June 23, 2010
Acquisition
On June 15th, NetflowData LLC was acquired by 21st Century Technologies, Inc. The acquisition creates an awesome combo, and I'll tell you why I think so. NetflowData LLC specialized in an analytical approach to information security. We took an objective look at network traffic data and used analytical techniques to ferret out odd and unusual traffic. 21st Century Technologies is a software company with a product solution named LYNXeon. LYNXeon specializes in deep graph analytics over large data sets. It's the perfect platform for us to marry our analytical skills to. Here's to the future. Cheers.
Friday, June 11, 2010
Inspiration
Inspiration can be found all around us. Sometimes it's just a matter of taking a moment to realize the treasures found around us. One example I often give is the relation of cars on a highway to network traffic analysis. When you drive down the highway, you expect to see many different types of cars around you. If you saw all the same type of car, you'd find it quite strange. Network traffic is the same. It should appear quite random -- all the packets should be different from one another. If we start seeing a bunch of traffic that correlates strongly with a bunch of other traffic (in other words, it looks the same), then that is anomalous. An interesting application of a principle from the analog world to the digital world.
Friday, June 4, 2010
Logging (Non-)Resolution
So, after a few weeks of going back and forth with the vendor on the logging issues I described in a previous post, we came to the conclusion that the product does not support logging of DNS requests. There are no plans to include this feature at this time, and there is no way to work around/override. So, where does that leave this client? Flying somewhat blind, unfortunately.
There is a valuable lesson here. We're only as good as our logging, and we can't assume that a device is logging properly. We have to use a scientific approach and look at what the data tell us before we can know what is actually going on. It's a painful lesson, but an important one in the quest to "Know Your Network".
Lesson learned.
Thursday, June 3, 2010
An International Language
Recently I had the privilege to travel abroad and do an exchange with cyber security analysts in other countries. The experience was wonderful -- and quite fascinating. What's amazing to me is that although we all come from different backgrounds and different experiences, we all want the same things. We want to be free to be creative and clever in defending and analyzing our networks. We want to safeguard information and intellectual property. We want to keep the attackers out, while not bringing undue hardship on legitimate users. And most of all, we want our respective leadership to "get it". Very interesting.
Friday, May 21, 2010
Aggregation
Aggregation is my friend. When I'm first introduced to a pile of data, be it logs, flow data, PCAP, etc., it can be overwhelming. With a client eagerly awaiting some results, what is an analyst to do? Enter aggregation. Aggregating data over multiple fields can help an analyst very quickly slice through data to get a big picture view and pull out events of interest to analyze further. It's also a great way to create jumping off points (reference an earlier post).
What are some of my favorite fields to aggregate over you may ask? In this post, I'll start with one of my favorites:
Source Port, Destination Port, Number of Bytes
Why do I find this particular aggregation so interesting? Let's go through it. For those who are familiar with the Internet Protocol (IP) suite, we know that servers typically communicate on a fixed port. For example, most web servers serve web pages on port 80. For this example, we will equate server port to destination port. In other words, we will assume that we are on the inside of our network looking out (in practice, this is actually a useful vantage point to take). Clients, on the other hand, choose a source (client) port in an incremental fashion. The exact method of picking the source port varies by operating system, but with a large enough sample, we can assume that the distribution of source ports is roughly uniform. In practice, network traffic is so voluminous that this is a relatively safe assumption.
So, where does that leave us? Well, for starters, we can exploit the roughly uniform distribution of source (client) ports to identify cases in which the source port did not appear to be chosen as expected. In other words, one or more source ports were "favored" for one reason or another. Typically, an automated/machine action will cause one or more source ports to be "favored", whereas a human action would not have this same effect. If we aggregate source port, destination port, and number of bytes, what we're in effect doing is picking out instances where a given byte size is sent repeatedly from the same source port(s). Pretty neat, eh?
As an added benefit, this aggregation is time agnostic. That means that it can catch the low and slow attacks just as well as it can catch the blatantly obvious. Love it.
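The favored-port pattern described above can be sketched with one Counter, here in Python over made-up flow records (the ports and byte counts are hypothetical):

```python
from collections import Counter

# Hypothetical flow records: (src_port, dst_port, byte_count).
flows = [
    (49152, 80, 512), (49153, 80, 1420), (49154, 80, 760),    # normal browsing
    (31337, 443, 128), (31337, 443, 128), (31337, 443, 128),  # repeated triple
]

agg = Counter(flows)
# A (src_port, dst_port, bytes) triple that repeats is the
# "favored source port" signal -- a machine, not a human, at work.
suspicious = [(key, n) for key, n in agg.items() if n >= 3]
print(suspicious)  # [((31337, 443, 128), 3)]
```

In normal browsing the source ports walk upward and the byte counts vary, so each triple appears once. A beacon repeats the same triple over and over, and because the Counter has no notion of time, a triple that repeats over weeks surfaces just as readily as one that repeats in a minute.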
Monday, May 17, 2010
Ad Hoc
I often see working groups or standing meetings set up explicitly for the purpose of sharing cyber security information. What amazes me is how quickly these devolve into regularly scheduled knowledge droughts. I'm convinced that the most efficient transfers of knowledge happen in an ad hoc manner. When one or more parties seek knowledge and one or more parties have knowledge, the transaction is smooth and effortless. Kind of like water flowing downstream. Working groups, on the other hand, remind me of salmon swimming upstream to spawn....
My colleagues and I routinely share actionable information. It's all built on trust. There are no MOAs, no MOUs, and no working groups.
A Dangerous Box
My colleagues and I recently discussed how people seem to stop thinking when they are looking at a computer screen. People seem to believe what "the system" says, even when it may be nonsensical (e.g., in the case of an obvious data error). I don't quite understand this behavior. "The system" is only as reliable as the data that are in it. In some cases, the data are flat out wrong. Some people can't grok that I suppose. That's a dangerous box to live inside, don't you think?
Wednesday, May 12, 2010
Only As Good As Your Logging
I'm helping a client work through an interesting issue this week. It seems that one of their network monitoring devices is not logging as one would expect. You know, the type of network monitoring device everyone buys precisely for the increased network visibility provided by quality logging? I will remain vague on the device here so as not to seem to favor one technology over another. It seems that the documentation on the device's logging ability could be a lot better, and on top of that, what the device logs (and doesn't log) seems to be somewhat arbitrary. How did I stumble upon this issue? I noticed the device making certain DNS queries, but couldn't find any corresponding log entries in the device's logs that would explain said DNS queries. Yikes!
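The cross-check that surfaced the problem amounts to a set difference: queries seen on the wire minus queries the logs can explain. A sketch in Python, with both domain lists made up for illustration:

```python
# Domains queried on the wire vs. domains that appear in the device's
# logs -- both sets are hypothetical stand-ins for real observations.
observed_on_wire = {"update.example.com", "cdn.example.net", "odd-host.example.org"}
found_in_logs = {"update.example.com", "cdn.example.net"}

unexplained = observed_on_wire - found_in_logs
print(unexplained)  # queries the logs can't account for
```

If that set is non-empty, either your visibility into the wire is wrong or your logging is incomplete. Either way, you've learned something your dashboards weren't going to tell you.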
So, herein lies the rub. I had set up several jumping off points that cued off this device's logs. I assumed that the device was logging properly and gave myself a big pat on the back for helping this client defend its network. Not so fast.... I guess one can never assume....
Still working on this one....stay tuned....
Sunday, May 9, 2010
Anecdotal Assumption
I had an interesting meeting this past week with some nice folks who run a fairly important network. During the meeting, we spent a fair bit of time discussing some of their concerns relating to the security of the network. I was surprised to find out that one of their biggest concerns is the network being attacked and brought down. Not because it can't happen (it certainly could), but rather because it's a big hypothetical. Compare that to what's probably already happening. I asked those on the other side of the conference room table how they were monitoring their network. Their response? "We aren't." Yikes. I'm not a betting man, but if I were, I'd probably bet that there are already nefarious elements on their network operating as they please (be it for profit or other means). Perhaps these nefarious elements are slowly taking bits and pieces of the network for themselves as it pleases them. Just slowly enough so as not to raise any eyebrows. How can my new friends find out for sure what's on their network? Take a look at it. Let the data tell you what's on the network and what the biggest threats to the network are.
I often hear people mention the whole "bring the network down" fear. Personally, I'm much more concerned with what's already on the network and is a real, tangible threat. Worried about some hypothetical future attack? Call your upstream providers. Make sure you have a good rapport with them, and that they know how to drop packets when packets need to be dropped. Preparations complete. Next threat please.
Friday, May 7, 2010
Jumping Off Points
People often ask me, "How do I find the bad stuff on my network"? Well, the answer to this is relatively straightforward. Knowing what belongs on your network is a great way to know what doesn't belong on your network. Easier said than done for sure. How do you come to know your network, or at least come close to knowing your network? Jumping off points are a key piece to answering this question. What is a jumping off point you ask? It's all about perspective. Most organizations have an incredible amount of network traffic data. It's far too much to dig through with a fork. The trick is to organize the data using a number of different methods. These methods produce different views or perspectives into the data. Often, this results in certain suspicious or malicious activity jumping out at an analyst, which leads to further investigation. The part that the analyst seizes on? That's called a jumping off point.
In my experience, most organizations give their analysts a fork and a giant pile of data and say "dig". Naturally, most of us can't get our heads around the data in this manner (largely because it's so immense). I've found that creating actionable jumping off points for analysts allows them to seize upon anomalous events and investigate them to resolution. In my opinion, a much more efficient way to roll.
The First Post
Here I stand about to enter the blogosphere. I created this blog to share my thoughts about and experience in applying mature analytical techniques to the domain of cyber security. My general philosophy is simple: Know Your Network. Throughout my career, I've observed many instances where individuals make assumptions, and worse yet, decisions about the (in)security of their network based on hearsay and anecdotal evidence. In science, factual evidence is required to support a hypothesis. I'd argue the same should be true in the field of cyber security.