Sunday, February 26, 2012

Cyber Is Not A Hill

Of late, I've heard a lot from military-minded people about running "cyber security and cyber operations" (whatever that means) like you would run a physical-world military operation.  In other words, I hear a lot of discussion about running a cyber operation like you would run an operation to take a hill (land), drop a bomb (air), or secure a strait (sea).  This seems to be a foregone conclusion that is not up for debate.  Unfortunately, I don't see how cyber security and cyber operations can be run in this manner.  The enemy in cyber is one that I can't see, can't react as fast as, and am not as smart as.  I can't outspend this enemy, and the enemy has just as much "history" and "experience" as I do.  The enemy does not have to follow bureaucratic processes and/or laws, and the enemy is free to recruit the cream of the crop without requiring them to adhere to arcane standards that are irrelevant to the cyber landscape.  So, all in all, how is a large, hulking bureaucracy designed for and experienced in other purposes supposed to fight this enemy?

They're not.  Perhaps that's why I've seen a lot of discussion to date, but little progress.  Everyone seems to be a cyber expert of late (experts follow the money, I suppose), but most of these so-called experts have never worked in the space, even for a short while.  If cyber security is indeed to be treated like a battle, the enemy has already infiltrated us and is living within us.  Talk is cheap.  Action is rare, sorely needed, and often winds up stalled in the headwinds that so often dominate bureaucracies.  I would urge those skilled in the physical world's battle strategies (these are often the people in leadership positions these days) to keep an open mind and choose progress and success over tradition and procedure.  It may necessitate listening to people who have little or no military experience and may look or act differently than you would expect.  It may also necessitate being open to the fact that we, as a society, may not know how to approach the cyber landscape, and that approaching it as a military operation may be entirely misguided.  Otherwise, I fear we may end up in a bad place.

I've had experience consulting in both the public sector and the private sector.  What's amazing to me is that although private sector organizations often start out with their capabilities behind those of similarly sized public sector organizations, they are soon able to catch up and surpass their public sector peers.  It isn't voodoo or magic that is responsible for this transformation -- it's openness, discipline, competency, and most importantly, the choice of progress over politics, pomp, and circumstance.

Following The Trail Of DNS

As discussed previously on this blog, logging and subsequently monitoring/analyzing DNS traffic is extremely important.  It sounds straightforward, but it can often be challenging to accomplish this in an enterprise.  This is likely the reason why most enterprises I encounter, unfortunately, don't log DNS traffic.

Here are a few issues that enterprises often encounter:

1) The volume of DNS traffic in an enterprise is overwhelming.  If you think about it, almost every network transaction requires at least one, and often multiple, domain name lookups to function properly.  This generates an incredible amount of traffic when you consider that there may be several hundred thousand machines generating several billion network transactions per day on a typical enterprise network.  Keeping even 30, 60, or 90 days' worth of this data can quickly inundate an organization.

2) DNS functions in a "nested" fashion within an enterprise.  Many enterprises use Active Directory Domain Controllers, or some other nested means by which to resolve domain names.  This creates multiple layers of "NATing"/"client abstraction" issues that both necessitate multiple layers of DNS logging (often redundant) and create the need to "follow the trail" of DNS transactions through the network.  An easy fix for this issue is to mandate that all client endpoints resolve DNS in one logical place.  Note that one logical place for logging purposes can translate into an array of physical systems for redundancy and availability reasons.  I'm not trying to introduce a single point of failure here, but rather a single point of success.

3) Most DNS logging doesn't "match up" or sessionize requests with their corresponding responses (or lack thereof).  This leads to two problems.  The first is volume: nearly twice as much data is generated as is really necessary.  Instead of just one line in the logging containing the DNS request and its corresponding response(s) (or lack thereof), there are two or more lines, and most of the information in the additional lines is redundant, with only a small amount of it being new and/or adding value.

4) Because the information contained in the DNS logging is not sessionized, it is incumbent on the analyst to "match up" the requests with their corresponding response(s) (or lack thereof).  It isn't always obvious how best to do this, and in some cases, the logging makes it almost impossible.  This introduces a needless level of complexity and concern that often acts as a barrier to enterprises adopting the policy of logging DNS traffic.
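
To put the volume concern in point 1 into rough numbers, here is a back-of-envelope estimate.  Every figure in it is an illustrative assumption on my part, not a measurement from any real network:

```python
# Back-of-envelope DNS log storage estimate; all figures are
# illustrative assumptions, not measurements from a real network.
queries_per_day = 2_000_000_000   # "several billion" transactions per day
bytes_per_record = 200            # rough size of one raw DNS log line
retention_days = 90               # the longest retention window mentioned

total_tb = queries_per_day * bytes_per_record * retention_days / 1e12
print(f"~{total_tb:.0f} TB of raw DNS logs")  # ~36 TB
```

Even with these modest per-record assumptions, a 90-day window lands in the tens of terabytes, which is exactly why unsessionized, redundant logging becomes untenable so quickly.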

If I look at the four challenges explained above, I believe that one solution could abate or solve many of them.  To me, a sessionized, intelligently aggregated DNS logging capability could produce a compact, efficient data source for retention, analysis, and monitoring within an enterprise.  To those of you familiar with my blog and/or my way of thinking, this may sound a lot like a specialized version of layer 7 enriched network flow data and/or the uber data source.  Indeed it is.  Indeed it is.
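
As a concrete sketch of what sessionizing could look like, the snippet below pairs parsed DNS requests with their corresponding responses (or records the lack of one) into single records, keyed on transaction ID, client address, and query name.  The field names and sample records are hypothetical and do not reflect any particular resolver's log format:

```python
# Hypothetical parsed DNS log records; field names are illustrative.
requests = [
    {"txid": 0x1A2B, "client": "10.0.0.5", "qname": "example.com", "ts": 100.00},
    {"txid": 0x3C4D, "client": "10.0.0.9", "qname": "no-answer.test", "ts": 101.00},
]
responses = [
    {"txid": 0x1A2B, "client": "10.0.0.5", "qname": "example.com",
     "answers": ["93.184.216.34"], "rcode": "NOERROR", "ts": 100.02},
]

def sessionize(requests, responses):
    """Pair each request with its response (if any) into one record."""
    by_key = {(r["txid"], r["client"], r["qname"]): r for r in responses}
    sessions = []
    for req in requests:
        resp = by_key.get((req["txid"], req["client"], req["qname"]))
        sessions.append({
            "client": req["client"],
            "qname": req["qname"],
            "answers": resp["answers"] if resp else [],
            "rcode": resp["rcode"] if resp else "NO_RESPONSE",
            "latency": round(resp["ts"] - req["ts"], 3) if resp else None,
        })
    return sessions

for s in sessionize(requests, responses):
    print(s)
```

One record per transaction, with unanswered requests (often the most interesting ones) preserved explicitly rather than lost -- roughly halving the volume while keeping all of the signal.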

Thursday, January 26, 2012

Thinking About Vectors

I find it interesting how most clients ask me to help them write reports or alerts to monitor their enterprises for various threats. This is, of course, to be expected in my line of work. What I often ask them is "What vectors into (and out of) the enterprise are you concerned with/do you want to monitor for?" Clients often find this to be a surprising question, but when you think about it, it isn't really all that surprising. The most advanced analytic, the fanciest report, or the coolest detection technique isn't worth much if it isn't relevant to the enterprise it's being applied to, right? It's important to conceptualize and understand what it is you'd like to monitor for based on the vectors into (and out of) the enterprise you're concerned with. Once that is done, implementing those concepts is usually fairly straightforward. That's all well and good, but what if you don't know what vectors you ought to be concerned with? To that, I say, know your network! Study and analyze the data transiting the network and let it guide you toward an understanding of the vectors you might want to concern yourself with.

Wednesday, January 11, 2012

Boiling The Ocean

Boiling the ocean is one of my favorite phrases. As the phrase connotes, boiling the ocean is a process that will likely never converge to success, nor end. I am reminded of this phrase as I attend FloCon this week. The vast majority of people I work with professionally understand the need to make compromises and accept some imperfections in order to make progress operationally. In my experience, operational progress, though often imperfect, still leads to improved network security monitoring and security operations in infinitely more cases than taking a "boil the ocean" approach. In other words, an 80% solution at least gets you 80% of what you want and need, while waiting for everything to be 100% perfect will always get you nowhere. The compromises that operational personnel must make are, however, lost on a few of those in attendance at FloCon. I can't think of a way to show them the other side, other than to put them in an operational environment for a year (or perhaps longer)....

Tuesday, January 10, 2012

And Then What?

I am at FloCon this week and enjoying the conference tremendously. I always enjoy FloCon, as it's a unique opportunity to catch up with peers in the community. It's also a great place to learn about different techniques and methods that people are using to analyze network security data. There was a presentation this morning from US-CERT that discussed some interesting analytical work US-CERT is currently doing. The presentation described some of the architecture, systems, processes, and procedures that US-CERT is using to perform analysis of various types of data. The presentation was interesting, but it made me ask the question, "and then what?". All that analysis is great, but at the end of the day practitioners (like myself) need actionable intelligence and information that we can use to defend the networks we are responsible for. Unfortunately, we're not getting much in the way of actionable information and intelligence from US-CERT. As our national CERT, this is disappointing. My intention here is not to pick on or harass the analysts who work hard in service of our nation day in and day out. Rather, I'm hoping that the leadership in our government, and particularly the leadership within DHS, will get a clue sometime soon. If I had a minute with the leadership, I would ask them why they can't find some way to cut through the bureaucratic red tape and share information with a nation (and world) so desperate for it. After all, the security of our nation's most critical infrastructure depends on it, right? Analysis is great, but I am reminded this morning that analysis is not for analysis' sake. Analysis should serve some productive end, namely producing actionable information and intelligence for those who so desperately need it. Come on, DHS -- get with the program.

Friday, December 23, 2011

SOC/IRC Building

Over the last decade, I've had the privilege to help build multiple different Security Operations Centers (SOCs)/Incident Response Centers (IRCs).  This is a line of work that I'm truly passionate about and have had a good amount of success in.  The good news is that this skill appears to be moving from a niche line of work to a more mainstream endeavor.  I see this as a tremendous positive for the world -- proper network security monitoring and a successful SOC/IRC are an integral part of helping organizations combat the security threats of today.  Onward!

Time

Time is an extremely interesting concept analytically.  It's a dimension that's often overlooked when performing network traffic analysis.  On this blog, I've discussed the concept of looking for anomalous or unexpected traffic/behavior on an enterprise network quite a bit.  But what about traffic that may be completely normal/expected at 14:00 on a weekday, but not at 02:00 on a Sunday?  By considering the dimension of time analytically, one can spot otherwise normal traffic that is abnormal because of the time window in which it occurs.

Consider the example of the administrative assistant who sends emails and calendar invites (amidst performing a variety of other tasks) all day long.  If we study the mail logs, there is nothing particularly interesting or unusual about this.  But what if that same administrative assistant sends a bunch of emails and calendar invites between 02:00 and 03:00 on Sunday?  Perhaps he/she is dedicated and catching up on work while dealing with a bout of insomnia.  Or, perhaps he/she is about to become a pawn in a spear phishing campaign that will await targeted personnel when they arrive to work Monday morning....
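
One simple sketch of folding time into the analysis: flag otherwise-normal events that fall outside a weekday business-hours baseline.  The sample log records and the 07:00-19:00 baseline below are assumptions for illustration, not a prescription:

```python
from datetime import datetime

# Illustrative mail-log events: (sender, timestamp). Both the data and
# the weekday 07:00-19:00 baseline are assumptions for this sketch.
events = [
    ("assistant@corp.example", "2011-12-20 14:05:00"),  # Tuesday afternoon
    ("assistant@corp.example", "2011-12-18 02:30:00"),  # Sunday, 02:30
]

def out_of_window(ts, start_hour=7, end_hour=19):
    """Return True if the event falls outside weekday business hours."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    is_weekend = dt.weekday() >= 5          # Saturday=5, Sunday=6
    after_hours = not (start_hour <= dt.hour < end_hour)
    return is_weekend or after_hours

flagged = [(who, ts) for who, ts in events if out_of_window(ts)]
print(flagged)  # only the Sunday 02:30 activity is flagged
```

A real deployment would learn each sender's baseline from history rather than hard-code one, but even this crude cut surfaces the Sunday-morning burst that the weekday traffic would otherwise bury.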