Friday, March 30, 2012

The Need for Analysis in Intelligence-Driven Defense

I recently ran across a very interesting paper by Dan Guido titled "A Case Study of Intelligence-Driven Defense".  Dan's research points out the fallacy of the current implementation of the Defender's Paradigm, and how attackers, however unknowingly, are exploiting this paradigm.  Throughout the paper, Dan makes his very valid points based on analysis of data, rather than on vague statements and speculation.

In the paper, Dan takes the stance, in part, that:

- Over the past year, MITRE's CVE identified and tracked more than 8,000 vulnerabilities
- Organizations expend considerable resources (man-power, money, etc.)
- In 2010, only 13 vulnerabilities "were exploited to install SpyEye, Zeus, Gozi, Clampi and other info-stealing Trojans in massive exploitation campaigns".

As such, his point is that rather than focusing on compliance and having to address all 8K+ vulnerabilities, a more effective use of resources would be to focus on those vulnerabilities that are actually being included in the crimeware packs used for mass malware distribution.  Based on the information Dan includes in the paper, this approach would work against targeted attacks by many of the "advanced" adversaries, as well.
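To make the prioritization concrete, here's a minimal sketch of the idea in Python; the "exploited in the wild" set is the intel input, and the CVE entries below (particularly "CVE-2010-9999") are illustrative stand-ins, not a real triage list:

```python
# Sketch: prioritize patching by what's actually being exploited,
# rather than trying to address every tracked vulnerability.
# The entries below are illustrative only; CVE-2010-9999 is made up.

all_vulns = {
    "CVE-2010-0840": {"product": "Java", "cvss": 9.3},
    "CVE-2010-0188": {"product": "Adobe Reader", "cvss": 9.3},
    "CVE-2010-1240": {"product": "Adobe Reader", "cvss": 9.3},
    "CVE-2010-9999": {"product": "ExampleApp", "cvss": 5.0},
}

# Intel feed: vulnerabilities observed in crimeware packs / mass campaigns
exploited_in_wild = {"CVE-2010-0840", "CVE-2010-0188", "CVE-2010-1240"}

# Patch these first; everything else falls lower in the queue
priority = sorted(cve for cve in all_vulns if cve in exploited_in_wild)
print(priority)  # ['CVE-2010-0188', 'CVE-2010-0840', 'CVE-2010-1240']
```

The point isn't the code, obviously...it's that the filter (the intel) is small, cheap to apply, and cuts the workload from thousands of items to a handful.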

Dan goes on to say:

"Analysis of attacker data, and a focus on vulnerabilities exploited rather than vulnerabilities discovered, might yield more effective defenses."

This sounds very Deming-esque, and I have to agree.  Admittedly, Dan's paper focuses on mass malware, but in many ways, I think that the approach Dan advocates can be used across a number of other data breach and compromise issues.  For example, many of the folks who are working the PCI forensic audits are very likely still seeing a lot of the same issues across the board...the same or similar malware placed on systems that are compromised using some of the same techniques.  So, Dan's approach to intel-driven defense can be applied to other aspects of DFIR besides mass malware infections in an equally effective manner.

Through the analysis he described in his paper, Dan was able to demonstrate how focusing on a few, low-cost (or even free), centrally-managed updates could have significantly reduced the attack surface of an organization and limited, inhibited, or even stopped mass malware infections.  The same approach could clearly be applied to many of the more targeted attacks, as well.

Where the current approach to infosec defense falls short is the lack of adequate detection, response, and analysis.

Detection - Look at the recent reports available from TrustWave, Verizon, and Mandiant, and consider the percentage of their respective customers for whom "detection" consisted of third-party notification.  Simply determining that an organization is infected or compromised at all can be difficult.  Detection often requires that monitoring be put in place, and that can seem daunting and expensive to those who don't already have it.  However, tools like Carbon Black (Cb) can provide a great deal of ROI, as Cb is not just a security monitoring tool.  When used as part of a security monitoring infrastructure, Cb retains copies of binaries (many of the binaries downloaded to infected systems are run and then deleted), as well as information about the processes themselves...information that persists after the process has exited and the binary has been deleted as part of the attack.  The next version of Cb will record Registry modifications and network initiations as well, which means that the entire monitored infrastructure can be searched for other infected systems, all from a central location and without adding anything to the systems themselves.
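As a rough sketch of what that central searchability buys you...note that this is NOT the actual Cb API; the record format below is a hypothetical stand-in for centrally-collected process metadata:

```python
# Sketch: once process metadata is collected centrally, sweeping the
# entire environment for an indicator becomes a simple query.
# The records and hashes below are hypothetical, not real Cb data.

records = [
    {"host": "host-01", "process": "svchost.exe", "md5": "aaa111"},
    {"host": "host-02", "process": "a.exe",       "md5": "d41d8cd9"},
    {"host": "host-03", "process": "a.exe",       "md5": "d41d8cd9"},
]

# Hash of a binary recovered from one known-infected system
ioc_md5 = "d41d8cd9"

# Which other systems ran the same binary, even if it's since been deleted?
infected = sorted({r["host"] for r in records if r["md5"] == ioc_md5})
print(infected)  # ['host-02', 'host-03']
```

Contrast that one query with imaging and examining every suspect system individually.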

Response - What happens most often when systems are found to be infected?  The predominant response methodology appears to be that systems suspected to be compromised or infected are taken offline, wiped, and the operating system and data are reinstalled.  As such, critical information (and intelligence) is lost.

Analysis - Analysis of mass malware is often impossible because a sample isn't collected.  If a sample is available, critical information about "in the wild" artifacts is not documented, and AV vendor write-ups based on the provided samples will only give a partial view of the malware's capabilities, and will very often include self-inflicted artifacts.  I've found malware on compromised systems for which, absent any accompanying intelligence, the analyst would have had a very difficult time performing any useful analysis.

In addition, most often the systems themselves are not subject to analysis...memory is not collected, nor is an image acquired from the system.  In most cases, this simply isn't part of the response process, and even when it is, these actions aren't taken because the necessary training and/or tools aren't available, or they are available but simply aren't on hand at the time.  Military members are familiar with the term "immediate actions"...actions that are drilled into members so that they become part of "muscle memory" and can be performed effectively while under stress.  A similar "no-excuses" approach needs to be applied as part of a top-down security posture that is driven by senior management.

Another important aspect of analysis is sharing.  The cycle of developing usable, actionable intelligence depends on analysis being performed.  That cycle can move much faster if the analysis methodologies are shared (and open to review and improvement), and if the results of the analysis are also shared.  As Dan points out, the cycle of developing the crimeware packs for mass malware infections is highly specialized and compartmentalized, and clearly has an economic/monetary stimulus.  That is, someone who's really good at one specific step in the chain (e.g., locating vulnerabilities and developing exploits, or pulling the exploits together into a nice, easy-to-use package) performs that function, and then passes the result along to the next step, usually for a fee.  As such, there's an economic motivation to provide a quality product and remain relevant.  Ultimately, the final product in the chain is being deployed against infrastructures with little monitoring in place (as evidenced by the prevalence of external third-party notification...).  When the IT staff (who are, by definition, generalists) are notified of an issue, the default reaction is to take the offending systems offline, wipe them, and get them back into service.  As such, analysis is not being done and a great deal of valuable intelligence is being lost.

Why is that?  Is it because keeping systems up and running, and getting systems back online as quickly as possible are the primary goals of infected organizations?  Perhaps this is the case.  But what if there were some way to perform the necessary analysis in a timely manner, either because your staff has the training to do so, or because you've collected the necessary information and have a partner or trusted adviser who can perform that analysis?  How valuable would it be to you if that partner could then provide not only the results of the analysis in a timely manner, but also provide additional insight or intelligence to help you defend your organization due to partnership with law enforcement and other intel-sharing organizations?

Consider this...massive botnets have been taken down (Kelihos, Zeus, etc.) when small, dedicated groups have worked together, with a focused approach, to achieve a common goal.  This approach has been proven to be highly effective...but it doesn't have to be restricted to just the folks involved in those groups.  It has simply been a matter of a couple of people saying, "we can do this", and then doing it.

The state of DFIR affairs is not going to get better unless the Defender's Paradigm is changed.  Information needs to be collected and analyzed, and from that, better defenses can be put in place in a cost-effective manner.  From the defender's perspective, there may seem to be chaos on the security landscape, but that's because we're not operating from an intelligence-driven perspective...there are too many unknowns, and those unknowns are there because the right resources haven't been focused on the problem.

Here are Shawn Henry's (former exec. assistant director at the FBI) thoughts on sharing intel:
“We have to share threat data. My threat data will help enable you to be more secure. It will help you be predictive and will allow you to be proactive. Despite our best efforts we need to assume that we will get breached, hence we need to ensure our organisations have consequence management in its systems that allow us to minimise any damage.” 

Dan's Exploit Intel Project video


clint said...

Secunia had an interesting paper called "How to Secure a Moving Target with Limited Resources" that is similar to what Dan Guido is saying. Basically they said patch XYZ apps first because they are exploited the most.

I've been using Cb Enterprise for a while now. I think it will get better over time but I haven't found it that useful so far. Its interface could be better (slow, not very flexible for queries). I'm hoping the next version will be enough to get me more excited about it though (registry, network, plugins). At least the price is right and their support people are excellent.

H. Carvey said...

Here is the link to the paper that clint mentioned.

Clint, have you considered accessing the Cb server database directly, via Perl or Python?

I haven't found it that useful so far.

I'm curious...have you sat down with the Cb guys and talked to them about what you'd need to make this useful to you?

H. Carvey said...

If you have limited resources available, taking a prioritized, targeted approach to protecting your assets can be an effective approach. Tools such as Carbon Black can be used to determine not only the most prevalent and most used applications within your infrastructure, but also the versions of those applications, giving you a better, more granular view of your potential exposure.

You can also turn the analysis around; from an intel-based perspective, who within your organization would most likely be targeted? Use Cb to dump a list of applications run on those users' systems, and determine your exposure.

Monitoring tools such as Cb can also be used to determine the scope of an incident within your organization. The Cb guys have given webinars where they've walked through a three-stage downloader incident, and walked all the back up the process tree to Firefox having kicked off Java, even getting the files created on the system as a result of a successful exploit. You can then run a search for file or process names across your enterprise, determining other systems that may have been similarly infected.
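To sketch the exposure check I described above...again, this is not the Cb API, just a hypothetical per-host application inventory you might export from such a tool, cross-referenced against the short list of apps currently targeted by crimeware packs (all names and versions below are made up for illustration):

```python
# Sketch: which hosts are running application versions that intel says
# are currently being targeted?  Inventory data here is hypothetical.

inventory = {
    "host-01": {"Java 6u20", "Adobe Reader 9.3", "Firefox 3.6"},
    "host-02": {"Java 6u29", "Chrome 17"},
    "host-03": {"Adobe Reader 9.3", "Flash 10.1"},
}

# From intel: app versions observed being exploited in mass campaigns
targeted = {"Java 6u20", "Adobe Reader 9.3", "Flash 10.1"}

# Per-host exposure: the intersection of what's installed and what's targeted
exposed = {host: apps & targeted
           for host, apps in inventory.items() if apps & targeted}

for host, apps in sorted(exposed.items()):
    print(host, sorted(apps))
```

Note that host-02 drops out entirely...a newer Java version, no targeted Reader or Flash...which is exactly the granular view of exposure I'm talking about.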

-Sketchymoose said...

Great post-- I liken it to "Why are you spending so much time patching up the walls when the door is swinging wide open?"

I think Charlie Miller said it best... why would any person use a 0-day right off the bat when they could easily subvert a user's machine with a well known Java Exploit or similar. Using a 0-day runs the risk of intel being captured on it, thus making it ineffectual.

Sharing is such a huge part of this field, we can all learn from each other. Shame that from a business perspective many do not like to share, and it comes down to a business decision.

H. Carvey said...

Thanks for the comment.

I look at the SOP for most organizations...take possibly affected systems offline and "clean" them, then put them back into service...and wonder how much intel has been missed.

Like many things within the industry, I do feel that things need to change with respect to sharing. Part of the problem is that there's a perception that it's a one-way street...join an organization led or attended by federal law enforcement, and the "intel-sharing" is one-way.

I also feel that, in most cases, the business decision against intel sharing occurs well before intel is even collected; that's certainly been the case in the IR work that I've done.

clint said...

Clint, have you considered accessing the Cb server database directly, via Perl or Python?

We haven't gone too far down the direct database access route since there will be a new official plugin API.

I'm curious...have you sat down with the Cb guys and talked to them about what you'd need to make this useful to you?

We have had various emails with Cb support. They have been very responsive and receptive to our ideas. They've even offered to do some of the work for us. Ultimately we decided to wait for the next release rather than have them spend time on our specific needs. We'd rather have them improve the product for everyone than just our needs. I think once the next version is out, we will revisit some of the issues we had and see what the new plugin API offers. The next release has a few features that we think will be more useful.