Thursday, June 6, 2013

Content Development

I often speak about how streamlining the CIRC/SOC workflow and producing high-reliability, high-fidelity, low-noise alerts are key elements of an organization's success in this arena.  The question I'm often asked next is "Where do the alerts come from?".

Well, that's a very good question, and the answer can vary.  One thing is certain, though: the process that leads to the alerts -- content development -- is extremely important.  Start by assessing your organization's risk and exposure, and then determine what you would like to look for (conceptually) to monitor and address that risk.  Once you've defined what you want to look for, study the data carefully so that it guides the organization toward alerting that identifies activity of concern with low noise.

It's an art, rather than a science, but allowing the data, risk/exposure, and operational requirements to guide the content development process will produce better results than any other approach I've seen.
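The process above can be made concrete with a small sketch.  Everything here is a hypothetical illustration -- the monitoring concept (large outbound transfers to unrecognized hosts), the log field names, the allowlist, and the byte threshold are all assumptions standing in for what your own data study would produce:

```python
# Hypothetical sketch: turning a monitoring concept into a low-noise alert.
# The concept here is "flag large outbound transfers to hosts we don't
# recognize"; the allowlist and threshold would come from studying your data.

def build_alert(events, allowlist, min_bytes=10_000_000):
    """Return events matching the concept, after noise reduction."""
    alerts = []
    for ev in events:
        if ev["direction"] != "outbound":
            continue
        if ev["dest"] in allowlist:   # known-good bulk destinations (tuned from baseline study)
            continue
        if ev["bytes"] < min_bytes:   # threshold tuned to cut low-value noise
            continue
        alerts.append(ev)
    return alerts

events = [
    {"direction": "outbound", "dest": "backup.corp.example", "bytes": 50_000_000},
    {"direction": "outbound", "dest": "203.0.113.7",         "bytes": 25_000_000},
    {"direction": "inbound",  "dest": "10.0.0.5",            "bytes": 99_000_000},
]
hits = build_alert(events, allowlist={"backup.corp.example"})
print([h["dest"] for h in hits])  # → ['203.0.113.7']
```

The interesting part isn't the code, it's where the allowlist and threshold came from: the baseline study of your own data.  Skip that step and the same rule drowns the analysts in noise.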

Collecting, Vetting, Retaining, and Leveraging Intelligence

If you've done a good job building bridges and relationships for your incident response center or SOC, you will receive intelligence/indicators of compromise from time to time.  Once you receive them, what do you do with them?  If you search back in your logs a few weeks or months to see if you've got any hits, that's a good start.  But what if an attack hits tomorrow, after you've already run your search and (likely) "discarded" that intelligence?

That's where a robust intelligence analysis function and process can help.  Joining information sharing groups, purchasing intelligence from vendors, working collaboratively with peer organizations, and building bridges and trusted relationships can all net you decent intelligence.  Once you collect it, it should be vetted.  In other words, given the context (which is extremely important) of a particular piece of intelligence (e.g., is it a payload delivery site, C2 site, malicious email sender, etc.), is it reliable as an indicator of compromise?  Does it produce a large number of false positives, or is the noise relatively tame (making it more useful/reliable as an indicator of compromise)?
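The vetting question -- does this indicator fire reliably, or mostly on noise? -- can be framed as a simple false-positive-rate check.  This is a minimal sketch, not a real triage workflow; the labeling scheme and the 10% threshold are assumptions you would tune to your own tolerance:

```python
# Hedged sketch: vetting an indicator by its historical false-positive rate.
# Each past hit is labeled True (confirmed malicious) or False (benign).
# The 10% threshold is an illustrative assumption, not a standard.

def vet_indicator(hit_labels, fp_threshold=0.10):
    """Decide whether an indicator is reliable enough to alert on."""
    if not hit_labels:
        return False  # no history yet: don't trust it blindly
    fp_rate = hit_labels.count(False) / len(hit_labels)
    return fp_rate <= fp_threshold

# A C2 address: 20 hits, 19 confirmed malicious -> reliable.
print(vet_indicator([True] * 19 + [False]))   # → True

# A shared-hosting IP: 10 hits, 9 benign -> too noisy to alert on.
print(vet_indicator([False] * 9 + [True]))    # → False
```

Context still matters more than the arithmetic: the same IP might be a reliable indicator as a C2 destination but worthless as a "malicious email sender," so vetting verdicts should be stored per-context, not per-value.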

Once an indicator has been vetted and deemed reliable, it should be retained.  I've seen a number of organizations use some sort of an intelligence repository to retain the vetted, reliable, high fidelity, actionable intelligence they have.  Once it's retained, of course, it should be fully leveraged.  This includes writing alerts that regularly check the intelligence repository and run its contents against recent logging.
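The retain-and-leverage loop can be sketched in a few lines.  This is an illustrative toy, assuming a sqlite table and whitespace-delimited log lines -- in practice the repository might be a threat-intel platform and the sweep a scheduled SIEM job:

```python
# Minimal sketch: retain vetted indicators, then sweep them against recent
# logs on a recurring schedule. Schema and log format are assumptions.
import sqlite3

def init_repo(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS indicators (
        value   TEXT PRIMARY KEY,
        context TEXT)""")  # e.g. 'c2', 'payload-delivery', 'malicious-sender'

def retain(conn, value, context):
    conn.execute("INSERT OR REPLACE INTO indicators VALUES (?, ?)",
                 (value, context))

def sweep(conn, recent_logs):
    """Run the whole repository against recent logs -- the 'leverage' step.
    Scheduled (cron, SIEM job), this catches an attack that arrives a month
    after the intelligence did, not just one already in the logs."""
    iocs = {v: c for v, c in
            conn.execute("SELECT value, context FROM indicators")}
    return [(line, iocs[field])
            for line in recent_logs
            for field in line.split() if field in iocs]

conn = sqlite3.connect(":memory:")
init_repo(conn)
retain(conn, "203.0.113.7", "c2")
logs = ["2013-06-06 10:01 allow 10.0.0.5 203.0.113.7 443"]
print(sweep(conn, logs))  # → [('2013-06-06 10:01 allow 10.0.0.5 203.0.113.7 443', 'c2')]
```

The point of the sketch is the `sweep` function running on a schedule: it's what turns a one-time retroactive search into durable coverage.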

I'm sure that this sounds conceptually simple, but it's amazing how many organizations don't properly retain and leverage the intelligence they receive.  Take a look within your own organization -- if you can better retain and leverage intelligence, it will serve you well in the long run!