Taking a step back and truly understanding what you're looking at when analyzing network traffic is extremely important. Quite a few people can look for and analyze a well-defined, known threat. But what happens when attackers change tactics, or a new type of attack is encountered for the first time? It requires the analyst to take a step back, think deeply, and truly understand what is going on. This is a rare skill, but an invaluable one. This mindset is what we as a community need more of if we are to rise to the challenges we confront on a continual basis. Long live the deep thinker.
Thursday, September 22, 2011
Where to Look?
When I work with clients to build their Security Operations Centers (SOCs)/Incident Response Centers (IRCs), I often see a common challenge. As I've mentioned previously, most organizations spend a good deal of time instrumenting their networks to collect data. Unfortunately, they don't often give enough thought to how one might analyze all that data. In other words, the questions of "Where do we put all this data?" and "What types of questions do we want to ask of all this data?" should be asked at the same time the instrumentation of the network is being planned. As you might expect, this is almost never the case. As a result, organizations often end up with large amounts of data scattered across various locations. Some data types overlap heavily with others, which results in redundancy, waste, and long query times (due to excessive volumes of data). Other data types may have (potentially large) gaps, which results in the inability to ask certain (sometimes crucial) questions of the data.
What's striking is that the first question an analyst usually needs to ask is "Where do I go to get the data to answer my question?", rather than "What is the answer that the data provides to my question?". It's unfortunate. The good news is that better coordination between the collection side of the enterprise and the analysis side of the enterprise can result in incredible gains analytically. Something to keep in mind when building a SOC/IRC for sure.
Wednesday, September 14, 2011
Don't Forget About DNS
As a community, we've paid a good deal of attention to HTTP-based threats. These include malicious downloads, callbacks, command and control, exfiltration of data, and other threats. To combat them, we've deployed proxies, firewalls, and myriad other technologies. Many of the threat feeds these days are dominated by, if not entirely focused on, HTTP-based threats.
In some sense, the near-constant attention paid to HTTP-based threats is good. Looked at from a different perspective, though, it opens organizations up to attacks from other angles. We often pay an inordinate amount of attention to HTTP-based threats at the expense of other threats. One can't fault organizations for this -- there is still a tremendous amount of badness delivered via HTTP-based mechanisms of various sorts, and defending against it is very necessary.
In this post, I'd like to remind us not to forget about DNS. DNS is an incredibly flexible protocol that can move almost any amount of data in and out of organizations, sometimes undetected. For example, I've recently seen DNS TXT records, rather than HTTP, used for command and control of malicious code and even delivery of additional payloads. Why would an attacker do this? Why not? When we learn of a malicious domain, what do we most often do? Yep, that's right -- we block HTTP-based communication with it (via proxy blocks, firewall blocks, or otherwise). But what do we most often do about DNS requests for those very same malicious domains? Yep, that's right -- nothing. So, can you blame the attackers for being creative in their exploitation and use of the DNS protocol?
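To make that concrete, here is a minimal sketch of a first-pass hunt for TXT-based command and control over exported DNS logs. Everything about it is an assumption for illustration: it expects a simple tab-separated layout of query name, record type, and answer data (adjust the field positions to match your own sensor), and the length/entropy thresholds are placeholders to tune against your own baseline:

    import math
    import sys
    from collections import Counter

    def shannon_entropy(s):
        # Bits per character; base64/encrypted payloads tend to score high.
        if not s:
            return 0.0
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    MAX_TXT_LEN = 100   # illustrative thresholds -- tune against benign traffic
    MAX_ENTROPY = 4.0

    for line in sys.stdin:
        # Assumed log layout: query_name <TAB> record_type <TAB> answer_data
        fields = line.rstrip("\n").split("\t")
        if len(fields) != 3:
            continue
        name, rtype, answer = fields
        if rtype != "TXT":
            continue
        if len(answer) > MAX_TXT_LEN or shannon_entropy(answer) > MAX_ENTROPY:
            print("suspicious TXT answer for %s: %s..." % (name, answer[:60]))

Crude, yes, but legitimate TXT records (SPF, domain verification, and the like) are surprisingly uniform, so even a filter this simple can surface interesting outliers.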
Just another reason to stay diligent in the continual analysis of all types of network traffic -- not just HTTP-based traffic.
Sender Policy Framework
As I'm sure you know, many organizations face email spoofing/spam/phishing/spear phishing as one of their major infection vectors these days. Sender Policy Framework (SPF), defined in RFC 4408, can help tremendously in combating this infection vector. SPF uses a DNS TXT record to specify which IP range(s) are permitted to send email on behalf of a given domain. Its implementation is optional, but SPF's elegance is in its simplicity, and I would encourage organizations to consider implementing it if they haven't already.
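For reference, an SPF policy is just a TXT record on the domain, along the lines of "v=spf1 ip4:192.0.2.0/24 mx -all" ("these addresses and our MX hosts may send for us; everyone else hard-fails"). As a quick sketch, here is one way to pull a domain's policy from Python using the third-party dnspython library (the domain is only a placeholder):

    # Requires the third-party dnspython package (pip install dnspython).
    # Note: resolve() is the dnspython 2.x name; older versions call it query().
    import dns.resolver

    def get_spf_record(domain):
        # SPF policies are published as TXT records on the domain itself.
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode("ascii", "replace")
            if txt.startswith("v=spf1"):
                return txt
        return None

    print(get_spf_record("example.com"))  # placeholder domain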
To think about it through a concrete example, say I wanted to relay email and spoof the sender such that the email appears to be sent from someguy@example.com. If I'm attempting to relay email from a cable modem dynamic IP address, then I'm probably not a legit mail gateway for example.com. Implementing SPF instructs your mail server to perform this "reality check" before accepting the email. Seems straightforward, right? Exactly.
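As a sketch of that "reality check" itself, here is a deliberately naive evaluator that handles only the ip4: mechanism. Real SPF evaluation also covers a, mx, include, redirect, and more, so in practice you'd lean on your mail server's SPF support or a full library rather than anything like this:

    import ipaddress

    def naive_spf_check(sender_ip, spf_record):
        # Only the ip4: mechanism is evaluated here; real SPF has many more.
        ip = ipaddress.ip_address(sender_ip)
        for term in spf_record.split():
            if term.startswith("ip4:") and ip in ipaddress.ip_network(term[4:], strict=False):
                return "pass"
        # No mechanism matched; "-all" means the domain says to hard-fail.
        return "fail" if spf_record.split()[-1] == "-all" else "neutral"

    policy = "v=spf1 ip4:192.0.2.0/24 -all"         # illustrative policy
    print(naive_spf_check("198.51.100.7", policy))  # fail -- e.g., a cable modem IP
    print(naive_spf_check("192.0.2.25", policy))    # pass -- inside the permitted range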
Taming Your Ingress/Egress Points
Many organizations have legacy ingress/egress points that route traffic to and from the Internet. In some cases, these ingress/egress points may have been "forgotten" and, as a result, are not being properly monitored. A well-run Security Operations Center (SOC)/Incident Response Center (IRC) can be highly effective and can greatly improve the security posture of an organization, but only if all ingress/egress points are well known and properly instrumented. Otherwise, it's like trying to defend the network based on data that simply isn't there. Pretty hard to do.
Thursday, September 8, 2011
Other Alphabets
Westerners often forget that there are a decent number of alphabets in use that don't use Latin characters. In 2009, internationalized domain names (IDNs) using non-Latin characters were approved, allowing a much broader array of characters from which to build domain names. The non-Latin characters are translated for our ASCII-only DNS system using an encoding scheme known as Punycode. What does this mean for analysts? There are a lot more domain names in use on our networks than we might realize. Unfortunately, because of this, some of our "Latin-centric" analytical methods may miss certain traffic that we may want to inspect more closely. It's something to be aware of. Don't be a "Latin-centric" analyst. :-)
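The translation is easy to see for yourself, since Python's standard library speaks Punycode via its built-in "idna" codec (the domain below is just an illustrative example):

    # Python's built-in "idna" codec performs the Punycode translation.
    unicode_name = "bücher.example"
    ascii_name = unicode_name.encode("idna").decode("ascii")
    print(ascii_name)                                 # xn--bcher-kva.example

    # And back again -- handy when reviewing proxy or DNS logs:
    print(ascii_name.encode("ascii").decode("idna"))  # bücher.example

Analytically, the encoded form is the one your sensors actually see, so a quick win is simply flagging any domain label that begins with "xn--" for a second look.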
Window of Opportunity
As I'm sure most of you reading this blog know, drive-by web redirects are a major malicious code infection vector for organizations these days. Many proxy vendors make a continual, noble effort to stay on top of domains hosting malicious code and push blocks down to their customers' proxy devices. This is actually highly effective at preventing a large number of malicious code infections in enterprises. What's interesting analytically, though, is that there is usually a 24-48 hour window between when a domain begins hosting malicious code and when the proxy vendors are able to push the blocks down to their customers. That time period is a window of opportunity for the attackers, and it's often enough time to infect countless systems.
So how can we turn this tidbit into an interesting analytical technique? What about reviewing the list of blocks received from our proxy vendor and searching back a week or two in our proxy log data to see if any systems were infected before the block was pushed down? Pretty neat if you ask me, and a highly effective way to identify infected systems in the enterprise.
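Here's a minimal sketch of that retrospective search. It assumes the vendor's new blocks arrive as a plain file of domains, one per line, and that your proxy logs are line-oriented text with the destination host in a fixed whitespace-separated field -- both assumptions will vary by product, so treat the parsing as a placeholder:

    import sys

    # Assumed input: newly blocked domains, one per line (file name is illustrative).
    with open("new_blocks.txt") as f:
        blocked = {line.strip().lower() for line in f if line.strip()}

    # Stream a week or two of archived proxy logs on stdin, e.g.:
    #   zcat proxy-*.log.gz | python retro_search.py
    for line in sys.stdin:
        fields = line.split()
        if len(fields) < 4:
            continue
        host = fields[3].lower()  # field position is a guess -- adjust for your proxy
        # Match the blocked domain itself as well as any of its subdomains.
        if host in blocked or any(host.endswith("." + d) for d in blocked):
            print(line.rstrip())

Any hit dated before the block landed is a system that talked to a known-bad domain inside that 24-48 hour window -- a prime candidate for a closer look.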