In my experience, the most successful organizations are humble: smart enough to know what they don't know, and open enough to consider that others may know better than they do.
I can think of two recent examples of this:
1) I was recently on an email thread in which some individuals from the federal government were discussing the government's analytical capabilities. After much chest beating and boastfulness, someone wondered aloud whether the private sector might also have some interesting and unique analytical capabilities. I've worked in both the federal sector and the private sector, and that remark may very well take the prize for understatement of the year. If I were still in the federal sector, I would assume that others were better analytically until I found out otherwise, and moreover, I would try to learn from them. The attitude in government appears to be the opposite, and in my experience it is unjustified. It's a shame, really.
2) I recently witnessed a vendor pitch gone horribly wrong. Although the vendor was explicitly told several times what the customer was looking for, the vendor chose to decide for itself what it wanted to sell the customer. The result was downright embarrassing and painful to watch. A catastrophic miscalculation? Sure. But a little humility and a willingness to actually listen to the customer could have gone a long way toward avoiding what turned out to be a dead end and a waste of everyone's time.
A little humility can go a long way.
Wednesday, June 13, 2012
Friday, June 1, 2012
The Right Pivot is Everything
Every incisive query into network traffic data needs to be anchored, or keyed, on some field. This is, essentially, the pivot field. Pivoting on the right field is crucial -- I've seen inexperienced analysts spend days mired in data that is off-topic and non-convergent. In some cases, simply changing their pivot vantage point produces an answer, and convergence, in a matter of minutes.
For example, consider the simple case of a malicious binary download detected in proxy logs. If we want to understand what else the client endpoint was doing around the time of the download, we would pivot on source IP address and search a variety of different data sources keyed on source IP address during the given time period. If we want to quickly assess who else may have also downloaded the malicious binary, we would pivot on domain and/or URL.
Naturally, these are simple pivots, but the point is a good one. Take care to use the right pivot. Otherwise, the results may be confusing, divergent, and inconclusive.
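The two pivots above can be sketched in a few lines. This is a minimal illustration using hypothetical proxy-log records -- the field names (`src_ip`, `domain`, `url`) and values are my own assumptions, not a real log schema:

```python
# Hypothetical proxy-log records; field names are illustrative only.
proxy_logs = [
    {"ts": "2012-06-01T10:02:11", "src_ip": "10.1.2.3",
     "domain": "evil.example.com", "url": "/payload.exe"},
    {"ts": "2012-06-01T10:02:45", "src_ip": "10.1.2.3",
     "domain": "cdn.example.net", "url": "/lib.js"},
    {"ts": "2012-06-01T11:17:02", "src_ip": "10.9.8.7",
     "domain": "evil.example.com", "url": "/payload.exe"},
]

# Pivot 1: key on source IP -- what else was this endpoint doing?
endpoint_activity = [r for r in proxy_logs if r["src_ip"] == "10.1.2.3"]

# Pivot 2: key on domain -- who else downloaded from the malicious site?
other_victims = {r["src_ip"] for r in proxy_logs
                 if r["domain"] == "evil.example.com"}
```

The same three records answer two very different questions depending solely on which field anchors the query -- which is the whole point of choosing the pivot deliberately.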
Peer Inside The Tangled Web
Most enterprises these days do a reasonably good job of monitoring their edges, or at the very least of logging the network transactions that traverse them. What's interesting, though, is that most enterprises have almost no visibility into (or interest in, for that matter) what's happening in the interior of their networks. I have found great value in peering inside the tangled web that is an enterprise network.
There are several data sources that can prove extremely valuable for monitoring the interior of a network:
- Interior firewall logs
- DNS logs
- Network flow data (netflow)
First, let's consider the example where an enterprise monitors proxy logs and DPI (Deep Packet Inspection) at the edge. Let's say that a client endpoint was redirected via a drive-by attack (e.g., the Blackhole Exploit Kit), downloaded a malicious executable, and was subsequently infected. Further, let's say that the malicious executable is not proxy aware. If we miss the executable download (which happens somewhat regularly, even in an enterprise that is monitored 24x7), then our proxy and DPI will likely be of little help to us in detecting the artifacts of intrusion, for two main reasons:
- The malicious code is not proxy aware, and thus its callback/C&C (command and control) attempts will most likely be blocked by an interior firewall.
- The infected system will likely attempt domain name lookups for callback/C&C domain names. Even if these domain name requests resolve (they don't always, e.g., in cases where the domain name has been taken down), there will be no subsequent HTTP request (remember, it was blocked by an interior firewall). Because of this, there will be no detectable footprint in the proxy log. In the DPI data, the DNS request will be co-mingled with millions of other DNS requests and will show as coming from an enterprise DNS server. This makes detection and endpoint identification nearly impossible.
- Interior firewall logs will allow us to detect attempts to connect to callback/C&C sites that have been blocked/denied.
- DNS logs will allow us to identify endpoints requesting suspicious/malicious domain names.
- Netflow data will allow us to very quickly identify other systems that may be exhibiting the same suspicious/malicious behavior.
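The correlation those three data sources enable can be sketched concretely. This is a toy illustration under assumed schemas -- the record fields and addresses are hypothetical, not from any particular firewall or DNS product:

```python
# Hypothetical interior-firewall deny log: blocked outbound attempts
# from an infected, non-proxy-aware endpoint trying to reach its C&C.
fw_denies = [
    {"src_ip": "10.1.2.3", "dst_ip": "198.51.100.7", "dst_port": 80},
    {"src_ip": "10.1.2.3", "dst_ip": "198.51.100.7", "dst_port": 80},
]

# Hypothetical internal DNS query log: unlike edge DPI, this records
# which interior endpoint actually asked for each name.
dns_logs = [
    {"src_ip": "10.1.2.3", "qname": "c2.badguy.example",
     "answer": "198.51.100.7"},
    {"src_ip": "10.4.4.4", "qname": "mail.example.org",
     "answer": "203.0.113.9"},
]

# Destinations the interior firewall blocked are our suspects.
suspicious_ips = {d["dst_ip"] for d in fw_denies}

# Endpoints that resolved a name pointing at a blocked destination
# are likely infected -- exactly the footprint the proxy never saw.
infected = {q["src_ip"] for q in dns_logs
            if q["answer"] in suspicious_ips}
```

In practice the same join keys (source IP, destination IP, domain) would then be run against netflow data to find other interior systems exhibiting the same behavior.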
Hopefully this hard-earned advice is helpful to all.