Of late, I've heard a lot from military-minded people about running "cyber security and cyber operations" (whatever that means) the way you would run a physical-world military operation. In other words, I hear a lot of discussion about running a cyber operation like you would run an operation to take a hill (land), drop a bomb (air), or secure a strait (sea). This seems to be treated as a foregone conclusion that is not up for debate. Unfortunately, I don't see how it's possible to run cyber security and cyber operations in this manner. The enemy in cyber is one that I can't see, can't react as fast as, and am not as smart as. I can't outspend this enemy, and the enemy has just as much "history" and "experience" as I do. The enemy does not have to follow bureaucratic processes or laws, and the enemy is free to recruit the cream of the crop without requiring them to adhere to arcane standards that are irrelevant to the cyber landscape. So, all in all, how is a large, hulking bureaucracy designed for and experienced in other purposes supposed to fight this enemy?
It isn't. Perhaps that's why I've seen a lot of discussion to date, but little progress. Everyone seems to be a cyber expert of late (experts follow the money, I suppose), but most of these so-called experts have never worked in the space, even for a short while. If cyber security is indeed to be treated like a battle, the enemy has already infiltrated us and is living among us. Talk is cheap. Action is rare, sorely needed, and often winds up stalled in the headstrong trade winds that dominate bureaucracies. I would urge those skilled in the physical world's battle strategies (these are often the people in leadership positions these days) to keep an open mind and to choose progress and success over tradition and procedure. It may necessitate listening to people who have little or no military experience and who may look or act differently than you would expect. It may also necessitate being open to the possibility that we, as a society, do not yet know how to approach the cyber landscape, and that approaching it as a military operation may be entirely misguided. Otherwise, I fear we may end up in a bad place.
I've had experience consulting in both the public sector and the private sector. What's amazing to me is that although private sector organizations often start out with capabilities behind those of similarly sized public sector organizations, they soon catch up to and surpass their public sector peers. It isn't voodoo or magic that is responsible for this transformation -- it's openness, discipline, competency, and most importantly, the choice of progress over politics, pomp, and circumstance.
Sunday, February 26, 2012
Following The Trail Of DNS
As discussed previously on this blog, logging and subsequently monitoring and analyzing DNS traffic is extremely important. It sounds straightforward, but it can often be challenging to accomplish in an enterprise. This is likely the reason most enterprises I encounter, unfortunately, don't log DNS traffic.
Here are a few issues that enterprises often encounter:
1) The volume of DNS traffic in an enterprise is overwhelming. If you think about it, almost every network transaction requires at least one, and often multiple, domain name lookups to function properly. This generates an incredible amount of traffic when you consider that there may be several hundred thousand machines generating several billion network transactions per day on a typical enterprise network. Keeping even 30, 60, or 90 days' worth of this data can quickly inundate an organization.
2) DNS functions in a "nested" fashion within an enterprise. Many enterprises use Active Directory Domain Controllers, or some other nested means by which to resolve domain names. This creates multiple layers of "NATing"/"client abstraction" issues that both necessitate multiple layers of (often redundant) DNS logging and create the need to "follow the trail" of DNS transactions through the network. An easy fix to this issue is to mandate that all client endpoints resolve DNS in one logical place. Note that one logical place for logging purposes can translate into an array of physical systems for redundancy and availability reasons. I'm not trying to introduce a single point of failure here, but rather a single point of success.
3) Most DNS logging doesn't "match up" or sessionize requests with their corresponding responses (or lack thereof). This leads to two problems. The first is that nearly twice as much data is generated as is really necessary. Instead of just one line in the logging containing the DNS request and its corresponding response(s) (or lack thereof), there are two or more lines. Most of the information in the additional lines is redundant, with only a small amount of it being new and/or adding value.
4) The second problem is that, because the information contained in the DNS logging is not sessionized, it is incumbent on the analyst to "match up" the requests with their corresponding response(s) (or lack thereof). It isn't always obvious how best to do this, and in some cases, the logging is such that it's almost impossible. This introduces a needless level of complexity and concern that often acts as a barrier to enterprises adopting a policy of logging DNS traffic.
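To make the sessionization idea concrete, here is a minimal Python sketch that pairs each DNS request with its corresponding response, or flags it as unanswered, by matching on client address, transaction ID, and query name. The log record format and field names here are hypothetical; real sources (BIND query logs, packet captures, and so on) will look different.

```python
def sessionize(records):
    """Group raw DNS log records into request/response sessions.

    Each record is a dict with 'ts', 'client', 'txid', 'qname', and 'type'
    ('query' or 'response'); responses also carry an 'answers' list.
    These field names are illustrative, not a real log schema.
    """
    pending = {}   # (client, txid, qname) -> session awaiting a response
    sessions = []
    for rec in records:
        key = (rec['client'], rec['txid'], rec['qname'])
        if rec['type'] == 'query':
            # One compact session per request; a response attaches to it later.
            session = {'ts': rec['ts'], 'client': rec['client'],
                       'qname': rec['qname'], 'answers': None}
            pending[key] = session
            sessions.append(session)
        elif key in pending:
            # Responses with no matching request are silently dropped here.
            pending.pop(key)['answers'] = rec.get('answers', [])
    return sessions  # entries with answers=None are unanswered requests

logs = [
    {'ts': 1, 'client': '10.0.0.5', 'txid': 0x1a2b, 'qname': 'example.com',
     'type': 'query'},
    {'ts': 2, 'client': '10.0.0.5', 'txid': 0x1a2b, 'qname': 'example.com',
     'type': 'response', 'answers': ['93.184.216.34']},
    {'ts': 3, 'client': '10.0.0.9', 'txid': 0x3c4d, 'qname': 'no-reply.test',
     'type': 'query'},
]
for s in sessionize(logs):
    print(s['qname'], s['answers'])
```

Note that this collapses two or more raw log lines into one record per lookup, and makes unanswered requests (often interesting in their own right) directly visible instead of requiring the analyst to reconstruct them.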
If I look at the four challenges explained above, I believe that one solution could abate or outright solve many of the issues they present. To me, a sessionized, intelligently aggregated DNS logging capability could produce a compact, efficient data source for retention, analysis, and monitoring within an enterprise. To those of you familiar with my blog and/or my way of thinking, this may sound a lot like a specialized version of layer 7 enriched network flow data and/or the uber data source. Indeed it is.
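As a rough illustration of what "intelligently aggregated" might mean, the following sketch collapses repeated identical sessions into single counted rows with first/last-seen timestamps. The session fields are the same hypothetical ones as above, and the output schema is illustrative only; the point is that a client resolving the same name a thousand times a day becomes one compact row rather than a thousand log lines.

```python
def aggregate(sessions):
    """Collapse identical (client, qname, answers) sessions into counted rows.

    Input sessions are dicts with 'ts', 'client', 'qname', 'answers'
    (a list, or None for an unanswered request). Illustrative schema.
    """
    rows = {}
    for s in sessions:
        key = (s['client'], s['qname'], tuple(s.get('answers') or ()))
        if key not in rows:
            rows[key] = {'client': s['client'], 'qname': s['qname'],
                         'answers': list(key[2]), 'count': 0,
                         'first_seen': s['ts'], 'last_seen': s['ts']}
        row = rows[key]
        row['count'] += 1
        row['first_seen'] = min(row['first_seen'], s['ts'])
        row['last_seen'] = max(row['last_seen'], s['ts'])
    return list(rows.values())

sessions = [
    {'ts': 10, 'client': '10.0.0.5', 'qname': 'example.com',
     'answers': ['93.184.216.34']},
    {'ts': 20, 'client': '10.0.0.5', 'qname': 'example.com',
     'answers': ['93.184.216.34']},
    {'ts': 30, 'client': '10.0.0.9', 'qname': 'evil.test', 'answers': None},
]
for row in aggregate(sessions):
    print(row['qname'], row['count'])
```

Even in this toy form, the retention math works in our favor: repeated lookups compress to a count plus a time window, while rare or unanswered lookups, which are often the analytically interesting ones, remain fully visible.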