It has always amazed me how a high profile incident seems to draw management and executives into the incident response environment. When a serious incident hits, everyone wants to hang out with the incident responders, even people one might never see day to day. In these types of situations, managing up is key. Management and executives have the best intentions, but they don't work with the data day to day, and their technical skills may be a bit dated. Suggestions on how to proceed, what analysis to do, and how to do it will fly at an incident responder faster than they can be processed. Unfortunately, many ideas that seem good in theory are not good in practice. The volume and variety of data make finding the right approach tricky, and many behaviors that look like they would indicate malicious activity don't. Trust your senior team members and manage up in the best way you can. Your team will be more productive because of it, and your senior team members will thank you for it.
Monday, October 22, 2012
Focus
When performing incident response, focus is extremely important. A significant incident can produce innumerable leads and avenues to investigate. Unfortunately, not all of these leads/avenues are productive ones. Choosing poorly can have the unintended consequence of locking up resources for days while producing very little value-added analysis. It is often difficult to know which direction or directions to go in analytically. In my experience, senior members of the incident response team, who base their recommendations on past experience, lessons learned, and day-to-day familiarity with the network and its data, have good advice to offer here. That being the case, I've always wondered why management ends up driving high profile incidents. It's a bit of a wonder if you think about it....
Wednesday, September 5, 2012
Over Classification
In theory, as a large enterprise and/or critical infrastructure provider, building a working relationship with the US government can provide valuable intelligence (in both directions). Where the theory often breaks down in practice, however, is that the government suffers from issues arising from severe over-classification of data. I understand that there are certain sensitivities and secrets that must remain closely guarded, but lists of malicious domain names do not fall within that realm. The attackers already know where they are attacking us from, so we are not keeping anything from them. They also already know that we know about them (it is very difficult to truly disguise network defense measures). Moreover, malicious domain names themselves are not really all that valuable anymore (reference earlier blog posts), and on top of that, are often re-purposed by multiple actors/groups. I frequently see domain names that used to mean one thing, but today mean something else or multiple different things.
So, my question remains, why guard these so tightly? Can't we all agree that withholding this intelligence hurts the overall security posture of the United States? Seems like the opposite of what we were aiming for.
Anomaly Detection
Most network security monitoring techniques use one of two approaches:
- Signature-based detection (i.e., "I know this specific activity is bad")
- Pattern-based detection (i.e., "I know this pattern of activity is bad")
Those are both well and good, but they leave a gaping hole. What is the answer to the question: "Is this previously unknown activity normal and expected, or is it weird and unexpected?"
The way to answer that question is through anomaly-based detection techniques. Unfortunately, at the present time, we as a community do not have many mature, production-ready approaches to anomaly-based detection, nor do we have many vendor options.
I am cautiously optimistic that in the coming years, we will begin to mature our capabilities in this area. It is sorely needed.
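To make the idea concrete, here is a minimal sketch of one simple flavor of anomaly detection: flagging hosts whose outbound connection counts drift far above their own historical baseline. The field names, the 3-sigma threshold, and the sample data are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of baseline-based anomaly detection over per-host daily
# outbound connection counts. Field names, thresholds, and sample data
# are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(history):
    """history: {host: [daily outbound connection counts]} -> {host: (mean, stdev)}"""
    return {host: (mean(counts), stdev(counts))
            for host, counts in history.items() if len(counts) >= 2}

def find_anomalies(baseline, today, sigmas=3.0):
    """Flag hosts whose activity today is far above their own historical norm."""
    anomalies = []
    for host, count in today.items():
        if host not in baseline:
            # A host with no history at all is itself worth a look
            anomalies.append((host, count, "no baseline"))
            continue
        mu, sd = baseline[host]
        if sd > 0 and (count - mu) / sd > sigmas:
            anomalies.append((host, count, f"{(count - mu) / sd:.1f} sigma above normal"))
    return anomalies

if __name__ == "__main__":
    history = {"10.1.1.5": [120, 130, 125, 118], "10.1.1.9": [40, 35, 42, 38]}
    today = {"10.1.1.5": 127, "10.1.1.9": 900, "10.1.1.77": 50}
    for host, count, why in find_anomalies(build_baseline(history), today):
        print(host, count, why)
```

Real deployments obviously need far richer features and baselines than a single count per host, which is exactly why mature offerings in this space are hard to come by.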
The Final Frontier
If you think about it, the final frontier for network monitoring is most likely the internal network. We as a community have become quite good at instrumenting the edge and somewhat proficient at monitoring it. Most of us, interestingly enough, have no idea what is going on inside our perimeter. This is something that requires serious thought and attention in my opinion. What lies beneath? That is the question that we should seek to answer.
Nary a Vendor Can Keep Pace
There was a time a few years back when a list of malicious domain names was one of the prized possessions of an incident response team. In previous blog postings, I've discussed how attackers have moved away from purely malicious domains and more towards using legitimate or even "disposable" domains coupled with specific URL patterns. One needs to work at staying on top of these as Indicators of Compromise (IoCs), as they change quite frequently. Given this, one would expect vendors to quickly pounce on this opportunity to serve their customer base by:
- Providing URL pattern based intelligence rather than just domain name based intelligence
- Allowing for mining of the vendor collected data using URL patterns
Surprisingly, there are few vendors that facilitate this type of approach. My hope is that in the near future, more vendors will rise to the challenge confronting us all.
Friday, July 6, 2012
Hacks
I've been told more than once that "there are an awful lot of hacks in the information security world". Sadly, this is to be expected, particularly in recent years as people have begun to see dollar signs when they hear the phrases "cyber security" or "information security". Unfortunately, the number of people who say they know how to perform network forensics/network traffic analysis is far greater than the number of people who actually know how to perform network forensics/network traffic analysis. Many people talk a good game with multiple certifications, all the right buzz words, a polished resume, and a smooth social networking profile. It's definitely a buyer beware market on the customer's end, particularly since there is more demand for the skill set than there are people with the skill set.
So what is a customer to do to avoid hiring a phony? Find someone, even just one person, whom you trust, and whose work is of the highest caliber, even if they're not working for you. Per my previous blog post "Peer Respect", the community of analysts is a close knit one. I'm sure any trusted member of the community would be happy to share an honest opinion if they have one.
Peer Respect
In the network forensics/network traffic analysis community, peer respect is huge. The community is, in essence, a large and open meritocracy, and its members are particularly respectful of analytical ability, new insights, and fresh thinking. When an analyst proves his or her merit, he or she gains a tremendous amount of respect from the community and is invited into a trusted circle. Trust is everything in this profession. We trust our peers with information that could cause us and our organizations grave damage if it fell into the wrong hands.
What do I value most in my professional life? The trust and respect of my peers.
Monday, July 2, 2012
You Can't Teach Analytical Skills
From time to time, I get asked to teach people how to be analysts. What I've found over the years is that there are those who are naturally analytical and can become solid, experienced analysts. There are also those who are not naturally analytical. Teach someone how to use a tool or tools to ask incisive questions of the data? Sure. Teach someone what makes one network traffic sample legitimate and another network traffic sample malicious? Sure. Teach someone how an attack pattern/intrusion vector works? Sure. Teach someone how to be an analyst? Nope. Can't be done. They either have analytical skills or they don't. My job is to lay the foundation and share my experience. If a person is analytically inclined, he or she will take off running. If not? Then, unfortunately, no amount of training will be able to make an analyst of the person.
End of an Era?
For many years, domain-based intelligence (e.g., lists of known malicious domain names) provided actionable information that could be leveraged to identify infected systems on enterprise networks. In its day, domain-based intelligence represented a considerable step forward over IP-based intelligence, which had proven to be quite prone to false positives. Of late, however, domain-based intelligence has itself fallen victim to a high rate of false positives. There are a number of reasons for this, but chief among them is the fact that attackers have moved from using entirely malicious domains to compromising small corners of legitimate domains. Because of this, URL patterns (e.g., a POST request for /res.php) have proven to be far more effective at identifying infected systems. To be sure, some entirely malicious domains are still in use. These domains are often randomly generated via algorithms that change daily, hourly, or even more frequently. Quite simply, the domains change faster than the intelligence lists can share them out. Could it be that we've reached the end of an era vis-à-vis domain-based intelligence? Has the era of URL pattern based intelligence begun? I know that I am leveraging URL patterns heavily, and I know that I am not alone in that.
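To make the contrast concrete, here is a minimal sketch of flagging proxy log entries by URL pattern (method plus path) rather than by domain name alone. The /res.php pattern comes from the example above; the log field names, the second pattern, and the sample records are illustrative assumptions.

```python
# Minimal sketch: flag proxy log entries by URL pattern (method + path regex)
# instead of by domain name alone. Field names and patterns are illustrative.
import re

URL_PATTERNS = [
    ("zeus-style POST check-in", "POST", re.compile(r"^/res\.php$")),
    ("suspicious gate",          "POST", re.compile(r"^/gate\.php$")),  # hypothetical second pattern
]

def match_url_patterns(proxy_records):
    """proxy_records: iterable of dicts with 'src_ip', 'host', 'method', 'path'."""
    hits = []
    for rec in proxy_records:
        for label, method, path_re in URL_PATTERNS:
            if rec["method"] == method and path_re.search(rec["path"]):
                hits.append((rec["src_ip"], rec["host"], rec["path"], label))
    return hits

if __name__ == "__main__":
    sample = [
        {"src_ip": "10.2.3.4", "host": "compromised-but-legit.example.com",
         "method": "POST", "path": "/res.php"},
        {"src_ip": "10.2.3.9", "host": "news.example.org",
         "method": "GET", "path": "/index.html"},
    ]
    for hit in match_url_patterns(sample):
        print(hit)
```

The point is that the pattern fires regardless of which domain, legitimate or otherwise, happens to be hosting it.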
Wednesday, June 13, 2012
Humility
In my experience, the most successful organizations are those that are humble. Successful organizations are smart enough to know what they don't know and are also humble enough to consider that others may know better than they do.
I can think of two recent examples of this:
1) I was recently on an email thread where some individuals from the federal government were discussing the federal government's analytical capabilities. After much chest beating and boastfulness, someone wondered aloud whether the private sector might also have some interesting and unique analytical capabilities. I've worked in both the federal sector and the private sector, and that statement may well take the prize for understatement of the year. If I were still in the federal sector, I would assume that others were better analytically until I found out otherwise, and moreover, I would try to learn from them. The attitude in the government appears to be the opposite and, in my experience, is unjustified. It's a shame really.
2) I recently witnessed a vendor pitch gone horribly wrong. Although the vendor was explicitly told several times what the customer was looking for, the vendor chose to decide for itself what it wanted to sell the customer. The result was downright embarrassing and painful to watch. A catastrophic miscalculation? Sure. But a little humility and willingness to actually listen to the customer could have gone a long way towards avoiding what turned out to be a dead end and waste of everyone's time.
A little humility can go a long way.
Friday, June 1, 2012
The Right Pivot is Everything
Every incisive query into network traffic data needs to be anchored, or keyed, on some field. This is, essentially, the pivot field. Pivoting on the right field is crucial -- I've seen inexperienced analysts spend days mired in data that is off-topic and non-convergent. In some cases, simply changing their pivot vantage point produces an answer/convergence in a matter of minutes.
For example, consider the simple case of a malicious binary download detected in proxy logs. If we want to understand what else the client endpoint was doing around the time of the download, we would pivot on source IP address and search a variety of different data sources keyed on source IP address during the given time period. If we want to quickly assess who else may have also downloaded the malicious binary, we would pivot on domain and/or URL.
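Sketched in code (and simplified down to a single proxy log), the two pivots described above look something like the following. The record fields ('src_ip', 'url', 'ts') are assumptions about a generic proxy log schema, and the sample data is made up.

```python
# Minimal sketch of the two pivots described above, over generic proxy log
# records. Field names ('src_ip', 'url', 'ts') are illustrative assumptions.
from datetime import datetime, timedelta

def pivot_on_source_ip(records, src_ip, around, window_minutes=30):
    """Everything one client did near the time of the download."""
    lo = around - timedelta(minutes=window_minutes)
    hi = around + timedelta(minutes=window_minutes)
    return [r for r in records if r["src_ip"] == src_ip and lo <= r["ts"] <= hi]

def pivot_on_url(records, url):
    """Every client that touched the malicious URL."""
    return sorted({r["src_ip"] for r in records if r["url"] == url})

if __name__ == "__main__":
    t0 = datetime(2012, 6, 1, 9, 15)
    records = [
        {"src_ip": "10.0.0.5", "url": "http://bad.example/res.php", "ts": t0},
        {"src_ip": "10.0.0.5", "url": "http://cdn.example/ad.js",   "ts": t0 + timedelta(minutes=2)},
        {"src_ip": "10.0.0.9", "url": "http://bad.example/res.php", "ts": t0 + timedelta(hours=3)},
    ]
    print(pivot_on_source_ip(records, "10.0.0.5", t0))          # what else did the victim do?
    print(pivot_on_url(records, "http://bad.example/res.php"))  # who else downloaded it?
```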
Naturally, these are simple pivots, but the point is a good one. Take care to use the right pivot. Otherwise, the results may be confusing, divergent, and inconclusive.
Peer Inside The Tangled Web
Most enterprises do a reasonably good job these days of monitoring their edges, or at the very least logging the network transactions traversing those edges. What's interesting, though, is that most enterprises have almost no visibility into (or, for that matter, interest in) what's happening in the interior of their networks. I have found there to be great value in peering inside the tangled web that is an enterprise network.
There are several data sources that can prove extremely valuable for monitoring the interior of a network:
- Interior firewall logs
- DNS logs
- Network flow data (netflow)
First, let's consider the example where an enterprise monitors proxy logs and DPI (Deep Packet Inspection) at the edge. Let's say that a client endpoint was redirected via a drive-by redirect attack (e.g., the Blackhole Exploit Kit), downloaded a malicious executable, and was subsequently infected by it. Further, let's say that the malicious executable is not proxy aware. If we miss the executable download (which happens somewhat regularly, even in an enterprise that is monitored 24x7), then our proxy and DPI will likely be of little help to us in detecting the artifacts of intrusion, for two main reasons:
- The malicious code is not proxy aware, and thus its callback/C&C (command and control) attempts will most likely be blocked by an interior firewall.
- The infected system will likely attempt domain name lookups for callback/C&C domain names. Even if these domain name requests resolve (they don't always resolve, e.g., in cases where the domain name has been taken down), there will be no subsequent HTTP request (remember, it was blocked by an interior firewall). Because of this, there will be no detectable footprint in the proxy log. In the DPI data, the DNS request will be co-mingled with the millions of other DNS requests and will show as coming from an enterprise DNS server. This makes detection and endpoint identification nearly impossible.
This is where the interior data sources earn their keep:
- Interior firewall logs will allow us to detect attempts to connect to callback/C&C sites that have been blocked/denied.
- DNS logs will allow us to identify endpoints requesting suspicious/malicious domain names.
- Netflow data will allow us to very quickly identify other systems that may be exhibiting the same suspicious/malicious behavior.
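As a minimal sketch of the kind of correlation these interior sources enable, the following joins interior DNS logs with interior firewall denies to surface candidate infected endpoints. The field names, the watch-list domain, and the sample records are all illustrative assumptions.

```python
# Minimal sketch: correlate interior DNS logs with interior firewall denies
# to surface endpoints that looked up a suspicious domain and then had an
# outbound connection blocked. All field names and values are illustrative.

SUSPICIOUS_DOMAINS = {"callback.bad-domain.example"}  # hypothetical watch list

def candidate_infections(dns_logs, firewall_logs):
    """dns_logs: dicts with 'client_ip', 'qname'; firewall_logs: dicts with
    'src_ip', 'action'. Returns clients that both resolved a watched domain
    and had outbound traffic denied by an interior firewall."""
    resolvers = {r["client_ip"] for r in dns_logs if r["qname"] in SUSPICIOUS_DOMAINS}
    denied = {f["src_ip"] for f in firewall_logs if f["action"] == "deny"}
    return sorted(resolvers & denied)

if __name__ == "__main__":
    dns_logs = [{"client_ip": "10.3.1.20", "qname": "callback.bad-domain.example"},
                {"client_ip": "10.3.1.44", "qname": "intranet.example.local"}]
    firewall_logs = [{"src_ip": "10.3.1.20", "action": "deny"},
                     {"src_ip": "10.3.1.44", "action": "permit"}]
    print(candidate_infections(dns_logs, firewall_logs))  # -> ['10.3.1.20']
```

From there, netflow data makes it quick work to find other endpoints exhibiting the same behavior.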
Hopefully this hard earned advice is helpful to all.
Wednesday, May 2, 2012
Best Practices
Around the information security community, you'll often hear people talking about "implementing best practices". While best practices provide helpful guidance at a high level, I'd argue they can't be "implemented". Implementing something involves wrestling with the realities, nuances, and imperfections of a real enterprise network. This pain is felt particularly acutely when developing reliable and actionable jumping off points for analysis. As I'm sure you're aware, the number of false positives caused by blindly implementing "best practices" is enormous and is enough to stifle any incident response workflow. What I've found over the course of my career is that the best and most reliable jumping off points are created through sweat equity that comes from an iterative cycle of intuition, data-driven refinement, and automation. No manual, white paper, or vendor can sell you the secret sauce.
So how does one get there? By using analysis (of real data) to navigate the difficult path from conception to implementation. Know your network.
Monday, April 16, 2012
Actions Speak Louder Than Words
We all know the popular phrase, "actions speak louder than words". What some of us may not realize is just how wise and true that statement really is. In my experience, I've seen people express words of respect for others, but I rarely witness people demonstrating respect for others through their actions. The difference is important and is one that people notice. Here's to action.
Sunday, March 25, 2012
Artifacts of Intrusion
As many of us in the network forensics/network traffic analysis field know, looking for the artifacts of infection/intrusion can often be much easier than catching the actual infection. Regardless of how one "latches on" to or otherwise becomes aware of an infected system, one should also take the time to perform due diligence in confirming that the system is indeed infected. Here are some points that can be helpful during this process:
1) Look for the infection vector. Was it web-based? Email-based? Some other means? Can information relating to the infection vector be used to harden the enterprise or otherwise improve the enterprise's security posture?
2) Check anti-virus or host-based IDS/IPS logs. Was the threat detected and remediated by either of those? Could there have been more than one threat, one or more of which was perhaps not caught by either of those?
3) Can a sample of the malicious code be isolated for analysis? Does analysis of the malicious code sample provide any information or intelligence (such as callbacks or other artifacts of intrusion) that can be fed back into the analysis process and/or used to harden the enterprise or otherwise improve the enterprise's security posture?
4) Look for the artifacts of intrusion in the network traffic data. Does the network traffic data provide evidence that the malicious code successfully infected the machine?
5) Based on the information gathered in steps 1-4, make an educated decision, rather than a happenstance decision, regarding remediation.
An Inspiring Phone Call
Last night, I received a phone call from a neighbor asking if he should renew his home anti-virus subscription. I typically shy away from giving people advice on this sort of question, but it did give me pause and caused me to do some thinking before I answered him.
I thought about how in most large enterprises that I advise, anti-virus is <25% effective at detecting malicious code threats. Worse yet, most anti-virus vendors have not only sold large enterprises software that is largely ineffective, but they have also sold those enterprises on a mantra/workflow/philosophy that is nearly useless. In other words, most anti-virus vendors will suggest that you review their logs regularly to identify systems infected with malicious code. I remember from graduate school that this is, by definition, the quintessential biased sample. In other words, since anti-virus is <25% effective at detecting and identifying threats, the enterprise is left with a gaping hole/blind spot of >75%. Stated another way, if you trust anti-virus logs to guide your analysis, workflow, and/or process, you will be left largely in the dark.
The "Know Your Network" philosophy discussed in this blog and elsewhere is really the only way to effectively monitor a large, enterprise network. Unfortunately, anti-virus vendors do not seem to be interested in improving through novel or cutting edge techniques. They are quite content to sell both home users and enterprises the 21st century equivalent of a bridge.
Of course, anti-virus logs are somewhat useful as a supporting/corroborating data source for investigations that begin with true network forensics/network traffic analysis, but not much else. In other words, anti-virus logs can tell you if the malicious code that you see a user downloading in the proxy logs or via deep packet inspection (DPI) data was somehow detected and cleaned by anti-virus. As an aside, even this is not 100% reliable, as it is not uncommon for malicious code infection vectors to try several different exploits and/or executable downloads. Just because anti-virus caught one of them doesn't mean it caught all of them. The analyst still needs to examine the proxy logs, firewall logs, DPI data, and/or any other relevant data looking for artifacts of infection.
So back to my neighbor. After thinking about this for a moment, I explained to him that the anti-virus vendor he was thinking of sending additional money to was really no more effective than some of the free products that are available. I advised him to hold onto his money. After all, if one ignores the hype and thinks about it logically, it only makes sense.
Sunday, March 4, 2012
Passive DNS Expansion
Recently, I learned that an analytical method I had experimented with a while back was being used by an organization to generate intelligence. It would have been nice to have been given proper credit, but regardless, it's still great that the method is in use. I've called the method "Passive DNS Expansion" for lack of a better term. The basic idea is that you use a combination of forward DNS resolution and passive DNS data to generate lists of malicious domain names that can be used for analysis and/or network defense purposes. It turns out to be a pretty handy way to generate some actionable intelligence. The basic algorithm is as follows:
1) Begin with a known malicious domain name
2) Forward resolve the known malicious domain name and obtain its current IP address
3) Leverage passive DNS data to identify the domain's IP address history
4) Search passive DNS data for the IP addresses to obtain a list of domain names associated with those IP addresses
5) Perform some cursory research on the domain names to determine whether or not they can be reliably used for analysis/network defense
I'd like to illustrate this with an example. For illustrative purposes, I picked the domain name at the top of the Zeus Tracker list this morning:
1) I begin with the domain name freetop[.]mobi, which I obtained from the Zeus Tracker list
2) The domain name forward resolves to 69[.]175[.]127[.]82
3) Passive DNS data shows only one IP address for this domain: 69[.]175[.]127[.]82
4) In this case, there is only one additional domain name associated with this IP address in passive DNS data: mymobilewap[.]info
5) A Google search for mymobilewap[.]info indicates that the domain name is most likely malicious and could probably be used reliably for network traffic analysis purposes.
I'm a big fan of this method. In the past, I've scripted these steps to automate the process with positive results. Feel free to give it a try and let me know your thoughts!
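As a rough sketch of how one might script these steps: forward resolution below uses the Python standard library, while the two passive DNS lookups are left as placeholders, since any real implementation depends entirely on which passive DNS provider and API you have access to (none is assumed here).

```python
# Rough sketch of the Passive DNS Expansion steps above. Forward resolution
# uses the standard library; the two passive DNS lookups are placeholders
# to be wired to whatever passive DNS source you have.
import socket

def forward_resolve(domain):
    """Step 2: obtain the domain's current IP address(es)."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(domain, None)}
    except socket.gaierror:
        return set()

def passive_dns_ips_for_domain(domain):
    """Step 3 (placeholder): the domain's historical IP addresses."""
    return set()  # wire in your passive DNS provider here

def passive_dns_domains_for_ip(ip):
    """Step 4 (placeholder): other domains seen resolving to this IP."""
    return set()  # wire in your passive DNS provider here

def expand(seed_domain):
    """Steps 1-4: seed domain -> candidate related domains.
    Step 5 (vetting the candidates) remains a human task."""
    ips = forward_resolve(seed_domain) | passive_dns_ips_for_domain(seed_domain)
    candidates = set()
    for ip in ips:
        candidates |= passive_dns_domains_for_ip(ip)
    candidates.discard(seed_domain)
    return sorted(candidates)

if __name__ == "__main__":
    print(expand("known-malicious.example"))  # substitute a vetted seed domain
```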
Sunday, February 26, 2012
Cyber Is Not A Hill
Of late, I've heard a lot from military minded people about running "cyber security and cyber operations" (whatever that means) like you would run a physical world military operation. In other words, I hear a lot of discussion about running a cyber operation like you would run an operation to take a hill (land), drop a bomb (air), or secure a strait (sea). This seems to be a foregone conclusion that is not up for debate. Unfortunately, I don't see how it's possible to run cyber security and cyber operations in this manner. In other words, the enemy in cyber is one that I can't see, can't react as fast as, and am not as smart as. I can't outspend this enemy, and the enemy has just as much "history" and "experience" as I do. The enemy does not have to follow bureaucratic processes and/or laws, and the enemy is free to recruit the cream of the crop without needing them to adhere to arcane standards that are irrelevant to the cyber landscape. So, all in all, how is a large, hulking bureaucracy designed for and experienced in other purposes supposed to fight this enemy?
They're not. Perhaps that's why I've seen a lot of discussion to date, but little progress. Everyone seems to be a cyber expert of late (experts follow the money, I suppose), but most of these so-called experts have never worked in the space, even for a short while. If cyber security is indeed to be treated like a battle, the enemy has already infiltrated us and is living within us. Talk is cheap. Action is rare, sorely needed, and often winds up stalled in the headstrong trade winds that dominate bureaucracies. I would urge those skilled in the physical world's battle strategies (these are often the people in leadership positions these days) to keep an open mind and choose progress and success over tradition and procedure. It may necessitate listening to people who have little or no military experience and may look or act differently than you would expect. It may also necessitate being open to the fact that we, as a society, may not know how to approach the cyber landscape, and that approaching it as a military operation may be entirely misguided. Otherwise, I fear we may end up in a bad place.
I've had experience consulting in both the public sector and the private sector. What's amazing to me is that although private sector organizations often start out with their capabilities behind those of similarly sized public sector organizations, they are soon able to catch up and surpass their public sector peers. It isn't voodoo or magic that is responsible for this transformation -- it's openness, discipline, competency, and most importantly, the choice of progress over politics, pomp, and circumstance.
Following The Trail Of DNS
As discussed previously on this blog, logging and subsequently monitoring/analyzing DNS traffic is extremely important. It sounds straightforward, but it can often be challenging to accomplish this in an enterprise. This is likely the reason why most enterprises I encounter, unfortunately, don't log DNS traffic.
Here are a few issues that enterprises often encounter:
1) The volume of DNS traffic in an enterprise is overwhelming. If you think about it, almost every network transaction requires at least one, and often multiple, domain name lookups to function properly. This generates an incredible amount of traffic when you consider that there may be several hundred thousand machines generating several billion network transactions per day on a typical enterprise network. Keeping even 30, 60, or 90 days' worth of this data can quickly inundate an organization.
2) DNS functions in a "nested" fashion within an enterprise. Many enterprises use Active Directory Domain Controllers, or some other nested means by which to resolve domain names. This creates multiple layers of "NATing"/"client abstraction" issues that both necessitate multiple layers of (often redundant) DNS logging and create the need to "follow the trail" of DNS transactions through the network. An easy fix for this issue is to mandate that all client endpoints resolve DNS in one logical place. Note that one logical place for logging purposes can translate into an array of physical systems for redundancy and availability reasons. I'm not trying to introduce a single point of failure here, but rather a single point of success.
3) Most DNS logging doesn't "match up" or sessionize requests with their corresponding responses (or lack thereof). This leads to two problems. The first is that nearly twice as much data is generated as is really necessary. Instead of just one line in the logging with the DNS request and its corresponding response(s) (or lack thereof), there are two or more lines in the logging. Most of the information in the additional lines is redundant, with only a small amount of it being new and/or adding value.
4) The second problem is that, because the information contained in the DNS logging is not sessionized, it is incumbent on the analyst to "match up" the requests with their corresponding response(s) (or lack thereof). It isn't always obvious how best to do this, and in some cases, the logging is such that it's almost impossible. This introduces a needless level of complexity and concern that often acts as a barrier to enterprises adopting the policy of logging DNS traffic.
Looking at the four challenges above, I believe that one solution could abate and/or solve many of them. To me, a sessionized, intelligently aggregated DNS logging capability would produce a compact, efficient data source for retention, analysis, and monitoring within an enterprise. To those of you familiar with my blog and/or my way of thinking, this may sound a lot like a specialized version of layer 7 enriched network flow data and/or the uber data source. Indeed it is. Indeed it is.
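As a minimal sketch of what that sessionization might look like, the following pairs each DNS query with its response(s), keyed on client, transaction ID, and query name, and emits one compact record per lookup, answered or not. The event layout and field names are illustrative assumptions about a generic DNS log.

```python
# Minimal sketch of sessionizing DNS logs: pair each query with its
# response(s), keyed on (client_ip, transaction_id, qname), and emit one
# compact record per lookup. Field names are illustrative assumptions.
from collections import defaultdict

def sessionize(dns_events):
    """dns_events: dicts with 'type' ('query'|'response'), 'client_ip',
    'txid', 'qname', and for responses an 'answers' list (possibly empty)."""
    sessions = {}
    answers = defaultdict(list)
    for ev in dns_events:
        key = (ev["client_ip"], ev["txid"], ev["qname"])
        if ev["type"] == "query":
            sessions[key] = {"client_ip": ev["client_ip"], "qname": ev["qname"],
                             "answered": False, "answers": []}
        else:
            answers[key].extend(ev.get("answers", []))
    for key, ans in answers.items():
        if key in sessions:
            sessions[key]["answered"] = True
            sessions[key]["answers"] = ans
    return list(sessions.values())  # one row per lookup, answered or not

if __name__ == "__main__":
    events = [
        {"type": "query",    "client_ip": "10.4.2.7", "txid": 4711, "qname": "bad.example"},
        {"type": "response", "client_ip": "10.4.2.7", "txid": 4711, "qname": "bad.example",
         "answers": ["203.0.113.10"]},
        {"type": "query",    "client_ip": "10.4.2.8", "txid": 42,   "qname": "takendown.example"},
    ]
    for row in sessionize(events):
        print(row)
```

One record per lookup, with the requesting client preserved, is exactly the compact footprint that makes long retention and fast analysis feasible.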
Thursday, January 26, 2012
Thinking About Vectors
I find it interesting how most clients ask me to help them write reports or alerts to monitor their enterprises for various threats. This is, of course, to be expected in my line of work. What I often ask them is, "What vectors into (and out of) the enterprise are you concerned with/do you want to monitor for?" Clients often find this to be a surprising question, but when you think about it, it isn't really all that surprising. The most advanced analytic, the fanciest report, or the coolest detection technique isn't worth much if it isn't relevant to the enterprise it's being applied to, right?
It's important to conceptualize and understand what it is you'd like to monitor for based on the vectors into (and out of) the enterprise you're concerned with. Once that is done, implementing those concepts is usually fairly straightforward. That's all well and good, but what if you don't know what vectors you ought to be concerned with? To that, I say, know your network! Study and analyze the data transiting the network and let it guide you towards an understanding of the vectors you might want to concern yourself with.
Wednesday, January 11, 2012
Boiling The Ocean
Boiling the ocean is one of my favorite phrases. As the phrase connotes, boiling the ocean is a process that will likely never converge to success, nor end. I am reminded of this phrase as I attend FloCon this week. The vast majority of people I work with professionally understand the need to make compromises and accept some imperfections in order to make progress operationally. In my experience, operational progress, though often imperfect, still leads to improved network security monitoring and security operations in infinitely more cases than a "boil the ocean" approach does. In other words, an 80% solution at least gets you 80% of what you want and need, while waiting for everything to be 100% perfect will always get you nowhere. There are a few in attendance at FloCon on whom the compromises that operational personnel must make are entirely lost. I can't think of a way to show them the other side, other than to put them in an operational environment for a year (or perhaps longer)....
Tuesday, January 10, 2012
And Then What?
I am at FloCon this week and enjoying the conference tremendously. I always enjoy FloCon, as it's a unique opportunity to catch up with peers in the community. It's also a great place to learn about different techniques and methods that people are using to analyze network security data.
There was a presentation this morning from US-CERT that discussed some interesting analytical work US-CERT is currently doing. The presentation described some of the architecture, systems, processes, and procedures that US-CERT is using to perform analysis of various types of data. The presentation was interesting, but it made me ask the question, "and then what?" All that analysis is great, but at the end of the day practitioners (like myself) need actionable intelligence and information that we can use to defend the networks we are responsible for. Unfortunately, we're not getting much in the way of actionable information and intelligence from US-CERT. From our national CERT, this is disappointing.
My intention here is not to pick on or harass the analysts who work hard in service of our nation day in and day out. Rather, I'm hoping that the leadership in our government, and particularly the leadership within DHS, will get a clue sometime soon. If I had a minute with that leadership, I would ask why they can't find some way to cut through the bureaucratic red tape and share information with a nation (and world) so desperate for it. After all, the security of our nation's most critical infrastructure depends on it, right?
Analysis is great, but I am reminded this morning that analysis is not for analysis' sake. Analysis should serve some productive end, namely producing actionable information and intelligence for those who so desperately need it. Come on DHS -- get with the program.