Sunday, February 26, 2012

Following The Trail Of DNS

As discussed previously on this blog, logging and subsequently monitoring and analyzing DNS traffic is extremely important.  It sounds straightforward, but accomplishing it in an enterprise can be challenging, which is likely why most enterprises I encounter, unfortunately, don't log DNS traffic.

Here are a few issues that enterprises often encounter:

1) The volume of DNS traffic in an enterprise is overwhelming.  Almost every network transaction requires at least one, and often multiple, domain name lookups to function properly.  This generates an incredible amount of traffic when you consider that a typical enterprise network may have several hundred thousand machines generating several billion network transactions per day.  Keeping even 30, 60, or 90 days' worth of this data can quickly inundate an organization.
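To make the volume concrete, here is a back-of-envelope estimate.  Every figure below is an illustrative assumption (transaction counts, lookups per transaction, and bytes per record will vary widely between enterprises), not a measurement:

```python
# Rough estimate of DNS log storage for a large enterprise.
# All constants are illustrative assumptions, not measured values.

TRANSACTIONS_PER_DAY = 2_000_000_000   # "several billion" network transactions
LOOKUPS_PER_TRANSACTION = 1.5          # at least one, often multiple lookups
BYTES_PER_LOG_RECORD = 150             # timestamp, client, qname, type, rcode, answers
RETENTION_DAYS = 90

daily_records = TRANSACTIONS_PER_DAY * LOOKUPS_PER_TRANSACTION
daily_bytes = daily_records * BYTES_PER_LOG_RECORD
total_tib = daily_bytes * RETENTION_DAYS / 1024**4

print(f"{daily_records:,.0f} records/day, ~{total_tib:.1f} TiB for {RETENTION_DAYS} days")
```

Even with these modest per-record assumptions, the retention bill lands in the tens of tebibytes, which is why raw, unaggregated DNS logging so often gets abandoned.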

2) DNS functions in a "nested" fashion within an enterprise.  Many enterprises use Active Directory Domain Controllers, or some other nested means of resolving domain names.  This creates multiple layers of "NATing"/"client abstraction" issues that both necessitate multiple (often redundant) layers of DNS logging and create the need to "follow the trail" of DNS transactions through the network.  An easy fix is to mandate that all client endpoints resolve DNS in one logical place.  Note that one logical place for logging purposes can translate into an array of physical systems for redundancy and availability reasons.  I'm not trying to introduce a single point of failure here, but rather a single point of success.
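To illustrate the "follow the trail" problem, here is a hypothetical sketch that correlates a client's query to its Domain Controller with the DC's onward query to the upstream resolver.  The log layouts, field names, IPs, and the two-second correlation window are all invented for illustration; real correlation is messier:

```python
# Sketch: pair a client->DC query with the DC->upstream query it triggered,
# matching on query name within a short time window.  Log formats are invented.
from datetime import datetime, timedelta

client_log = [  # client -> Domain Controller queries: (time, client_ip, qname)
    (datetime(2012, 2, 26, 10, 0, 1), "10.1.2.3", "evil.example.com"),
]
dc_log = [      # DC -> upstream resolver queries: (time, dc_ip, qname)
    (datetime(2012, 2, 26, 10, 0, 1, 200000), "10.0.0.53", "evil.example.com"),
]

WINDOW = timedelta(seconds=2)  # assumed max delay between resolution hops

def follow_trail(client_log, dc_log):
    """Attribute each upstream lookup back to the client that caused it."""
    trails = []
    for c_time, client_ip, qname in client_log:
        for d_time, dc_ip, d_qname in dc_log:
            if d_qname == qname and timedelta(0) <= d_time - c_time <= WINDOW:
                trails.append((client_ip, dc_ip, qname))
                break
    return trails

print(follow_trail(client_log, dc_log))
```

Without the client-side log, the upstream log attributes the lookup only to the DC's IP, which is exactly the "client abstraction" problem: the layer that matters for incident response (which endpoint asked?) is invisible.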

3) Most DNS logging doesn't "match up" or sessionize requests with their corresponding responses (or lack thereof).  This leads to two problems.  The first is volume: nearly twice as much data is generated as is really necessary.  Instead of a single log line containing the DNS request and its corresponding response(s) (or lack thereof), there are two or more lines, most of whose information is redundant, with only a small amount being new or adding value.

4) The second problem is that, because the information in the DNS logging is not sessionized, it is incumbent on the analyst to "match up" the requests with their corresponding response(s) (or lack thereof).  It isn't always obvious how best to do this, and in some cases the logging makes it almost impossible.  This introduces a needless level of complexity and concern that often acts as a barrier to enterprises adopting a policy of logging DNS traffic.
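One way such sessionization could work, sketched under assumed record layouts (the field names and sample records below are hypothetical; real resolver logs vary), is to key each response back to its request on the client address plus the DNS transaction ID, and emit one merged record per request, explicitly flagging requests that never got an answer:

```python
# Sketch: join DNS requests with their responses (or lack thereof)
# keyed on (transaction ID, client address).  Record layout is invented.

requests = [
    {"txid": 0x1A2B, "client": "10.1.2.3", "qname": "example.com", "ts": 1.0},
    {"txid": 0x3C4D, "client": "10.1.2.4", "qname": "no-reply.test", "ts": 2.0},
]
responses = [
    {"txid": 0x1A2B, "client": "10.1.2.3", "rcode": "NOERROR",
     "answers": ["93.184.216.34"], "ts": 1.1},
]

def sessionize(requests, responses):
    """Emit one record per request, with response fields merged in."""
    by_key = {(r["txid"], r["client"]): r for r in responses}
    sessions = []
    for req in requests:
        resp = by_key.get((req["txid"], req["client"]))
        sessions.append({
            "client": req["client"],
            "qname": req["qname"],
            "rcode": resp["rcode"] if resp else None,  # None = no response seen
            "answers": resp["answers"] if resp else [],
        })
    return sessions

for s in sessionize(requests, responses):
    print(s)
```

Note what this buys you: the redundant fields appear once instead of twice, and the unanswered query (often the most interesting case for an analyst) becomes an explicit record rather than an absence you have to hunt for.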

Looking at the four challenges above, I believe one solution could abate or solve many of them.  To me, a sessionized, intelligently aggregated DNS logging capability would produce a compact, efficient data source for retention, analysis, and monitoring within an enterprise.  To those of you familiar with my blog and/or my way of thinking, this may sound a lot like a specialized version of layer 7 enriched network flow data and/or the uber data source.  Indeed it is.  Indeed it is.
