Thursday, July 15, 2010

Thoughts and Comments

Exploit Artifacts
There was an interesting blog post on the MMPC site recently, regarding an increase in attacks against the Help and Support Center vulnerability. While the post talks about an increase in attacks, as detected through the use of signatures, one thing I'm not seeing is any discussion of the artifacts of the exploit. I'm not talking about a secondary or tertiary download...those can change, and I don't want people to think, "hey, if you're infected with this malware, you were hit with the hcp:// exploit...". Look at it from a compartmentalized perspective...exploit A succeeds and leads to malware B being downloaded onto the system. If malware B can be anything...how do we go about determining the initial infection vector? After all, isn't that what customers ask us? Okay, raise your hand if your typical answer is something like, "...we were unable to determine that...".

I spoke briefly to Troy Larson at the recent SANS Forensic Summit about this, and he expressed some interest in developing some kind of...something...to address this issue. I tend to think that there's a great benefit to this sort of thing, and that a number of folks would benefit from this, including LE.

Awards
The Forensic 4Cast Awards were...uh...awarded during the recent SANS Forensic Summit, and it appears that WFA 2/e received the award for "Best Forensics Book". Thanks to everyone who voted for the book...I greatly appreciate it!

Summit Follow-up
Chris and Richard posted their thoughts on the recent SANS Forensic Summit. In his blog post, Richard said:

I heard Harlan Carvey say something like "we need to provide fewer Lego pieces and more buildings." Correct me if I misheard Harlan. I think his point was this: there is a tendency for speakers, especially technical thought and practice leaders like Harlan, to present material and expect the audience to take the next few logical steps to apply the lessons in practice. It's like "I found this in the registry! Q.E.D." I think as more people become involved in forensics and IR, we forever depart the realm of experts and enter more of a mass-market environment where more hand-holding is required?

Yes, Richard, you heard right...but please indulge me while I explain my thinking here...

In the space of a little more than a month, I attended four events...the TSK/Open Source conference, the FIRST conference, a seminar that I presented, and the SANS Summit. In each of these cases, while someone was speaking (myself included) I noticed a lot of blank stares. In fact, at the Summit, Carole Newell stated at one point during Troy Larson's presentation that she had no idea what he was talking about. I think that we all need to get a bit better at sharing information and making it easier for the right folks to get (understand) what's going on.

First, I don't think for an instant that, from a business perspective, the field of incident responders and analysts is saturated. There will always be more victims (i.e., folks needing help) than there are folks or organizations qualified and able to really assist them.

Second, one of the biggest issues I've seen during my time as a responder is that regardless of how many "experts" speak at conferences or directly to organizations, those organizations that do get hit/compromised are woefully unprepared. Hey, in addition to having smoke alarms, I keep fire extinguishers in specific areas of my house because I see the need for immediate, emergency response. I'm not going to wait for the fire department to arrive, because even with their immediate response, I can do something to contain the damage and losses. My point is that if we can get the folks who are on-site on-board, maybe we'll see fewer intrusions and data breaches that (a) come as third-party notifications weeks after the fact, and (b) arrive with no preserved data available when we (responders) show up.

If we, as "experts", were to do a better job of bringing this stuff home and making it understandable, achievable and useful to others, maybe we'd see the needle move just a little bit in the good guys' favor when it comes to investigating intrusions and targeted malware. I think we'd get better responsiveness from the folks already on-site (the real _first_ responders) and ultimately be able to do a better job of addressing incidents and breaches overall.

Tools
As part of a recent forensic challenge, Wesley McGrew created pcapline.py to help answer the questions of the challenge. Rather than focusing on the tool itself, what I found interesting was that Wesley was faced with a problem/challenge, and chose to create something to help him solve it...and then provided it to others. This is a great example of the "I have a problem and created something to solve it...and others might see the same problem as well" approach to analysis.

Speaking of tools, Jesse is looking to update md5deep, and has posted some comments about the new design of the tool. More than anything else, I got a lot out of just reading the post and thinking about what he was saying. Admittedly, I haven't had to do much hashing in a while, but when I was doing PCI forensic assessments, this was a requirement. I remember looking at the list and thinking to myself that there had to be a better way to do this stuff...we'd get lists of file names, and lists of hashes...many times, separate lists. "Here, search for this hash..."...but nothing else. No file name, path or size, and no context whatsoever. Why does this matter? Well, there were also time constraints on how long you had before you had to get your report in, so anything that would intelligently speed up the "analysis" without sacrificing accuracy would be helpful.

I also think that Jesse is one of the few real innovators in the community, and has some pretty strong arguments (whether he's aware of it or not) for moving to the new format he mentioned as the default output for md5deep. It's much faster to check the size of the file first...like he says, if the file size is different, you're gonna get a different hash. As disk sizes increase, and our databases of hashes increase, we're going to have to find smart ways to conduct our searches and analysis, and I think that Jesse's got a couple listed right there in his post.
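The size-first idea is easy to sketch. This is a rough illustration of the concept, not md5deep's actual implementation: index the known-file set by size, and only hash a file when at least one known file shares its size.

```python
import hashlib
import os

def md5_of(path, chunk_size=65536):
    """MD5 of a file's contents, read in chunks to handle large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known(path, known):
    """known: dict mapping file size (in bytes) -> set of MD5 hex digests.

    The size check comes first: a single stat() call rules out most
    non-matching files without ever reading their contents.
    """
    size = os.path.getsize(path)
    hashes_for_size = known.get(size)
    if not hashes_for_size:  # different size guarantees a different hash
        return False
    return md5_of(path) in hashes_for_size
```

Carrying the size alongside the hash also restores some of the context (name, path, size) that those bare hash lists were missing.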

New Attack?
Many times when I've been working an engagement, the customer wants to know if they were specifically targeted...did the intruder specifically target them based on data and/or resources, or was the malware specifically designed for their infrastructure? When you think about it, these are valid concerns. Brian Krebs posted recently on an issue where that's already been determined to be the case...evidently, there's an issue with how Explorer on Windows 7 processes shortcut files, and the malware in question apparently targets specific SCADA systems.

At this point, I don't know which is more concerning...that someone knows enough about Siemens systems to write malware for them, or that they know that SCADA systems are now running Windows 7...or both?

When I read about this issue, my first thoughts went back to the Exploit Artifacts section above...what does this "look like" to a forensic examiner?

Hidden Files
Not new at all, but here's a good post from the AggressiveVirusDefense blog that provides a number of techniques that you can use to look for "hidden" files. Sometimes "hidden" really isn't...it's just a matter of perception.

DLL Search Order as a Persistence Mechanism
During the SANS Forensic Summit, Nick Harbour mentioned the use of MS's DLL Search Order as a persistence mechanism. Now he's got a very good post up on the M-unition blog...I won't bother trying to summarize it, as it won't do the post justice. Just check it out. It's an excellent read.

11 comments:

Unknown said...

Your post got me thinking that someone smarter than me might be able to create something similar to what Mitre has done with CVE, CWE, etc. Perhaps a CEE (Common Exploit Enumeration) database to document and categorize exploits and their artifacts. That may or may not be the right approach for maintaining and organizing the data, but it seems to me that the number of true exploits to enumerate would be small compared to the number of secondary and tertiary malware samples that get downloaded. Having this kind of database available would help determine root causes as you suggest, and would have the added benefit (I think) of reducing the amount of time needed to determine "yes, I've been pwned." The real challenge, I suspect, lies in collecting the exploit samples and documenting their artifacts on the various platforms they affect. Thanks for the great post!

H. Carvey said...

Gregory,

My thoughts, exactly!

Stefan said...

Harlan,

just a minor addition: the 'shortcut attack' not only works against Windows 7 but at least against XP SP3 as well, if not any version of Windows.

Cheers,
Stefan.

H. Carvey said...

Stefan,

Do you have any more info on this? I'd love to be able to round out what Brian said in his post...

H. Carvey said...

Gregory,

There are a couple of ways to go about what you suggest...

One is to spend a great deal of time thinking about the right framework and the right structure to build, and then get to the point where (a) someone else has already done something, and (b) what you have doesn't make sense to the folks who need to use it.

The other way is to start doing something. Matt recently released the ForensicsArtifacts site, but I asked what could have been done using what's already available at the ForensicsWiki site.

I'm not saying that we should plow ahead without putting thought into something like this...but I am saying that if this is something useful to folks, we do need to do something.

H. Carvey said...

Here's some info on the malware discovered.

So, secondary artifacts of the exploit would be, in this case, the signed drivers (mrxnet.sys, mrxcls.sys), as well as the artifacts of the USB device that was plugged in. In fact, the entry in the setupapi.log (or equiv. on Windows 7) for the USB device, the LastWrite time on the subkeys beneath the DeviceClasses key and the original creation dates on the two driver files should all be relatively close in a timeline.

Questions I have at this point include: if this was discovered, what is the persistence mechanism once the malware is on the system? Driver files are added to the system, so are services added as well, or are we simply to assume that to be the case?

I applaud VirusBlokAda for what they've provided, but like most AV sites, the information often falls short of being useful to forensic analysts.
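The correlation described in the comment above (the setupapi.log entry, the DeviceClasses subkey LastWrite times, and the driver files' creation dates all falling close together) can be sketched as a simple clustering check over timeline entries. The artifact names and timestamps below are purely illustrative, not from a real case:

```python
from datetime import datetime, timedelta

def cluster_within(events, window=timedelta(minutes=5)):
    """events: dict of artifact description -> datetime.

    Returns True if every timestamp falls within `window` of the
    earliest one, i.e., the artifacts plausibly stem from one event.
    """
    times = sorted(events.values())
    return times[-1] - times[0] <= window

# Hypothetical timeline entries for a USB-borne infection:
usb_events = {
    "setupapi.log: USB device install": datetime(2010, 7, 1, 14, 2, 10),
    "DeviceClasses subkey LastWrite":   datetime(2010, 7, 1, 14, 2, 12),
    "mrxnet.sys creation date":         datetime(2010, 7, 1, 14, 3, 5),
    "mrxcls.sys creation date":         datetime(2010, 7, 1, 14, 3, 6),
}
```

A grouping like this doesn't prove causation, but it tells the analyst which timeline entries deserve a closer look as a single infection event.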

Unknown said...

Is the Mitre Malware Attribute Enumeration and Characterization (MAEC) standard on anyone's radar? I think it's in very early days at the moment.

Stefan said...

Harlan,

Do you have any more info on this?

Yes, I have :-)

However, these articles do not talk about which Windows versions are affected. That is something we tried in our lab.

Cheers,
Stefan.

H. Carvey said...

Stefan,

Very interesting, thanks!

du212 said...

Did you ever come across 1stgen-artifacts from the HCP vulnerability ???

I saw this over at metasploit:
https://www.metasploit.com/redmine/projects/framework/repository/revisions/9495/diff/modules/exploits/windows/browser/ms10_xxx_helpctr_xss_cmd_exec.rb

It'd be interesting to test this and verify the hcp:// URL entries in IE history.
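One crude way to run that test: scan the raw bytes of the history file for the hcp:// scheme string. This is a hypothetical sketch, not a parser for the index.dat format, so treat any hits as leads to verify rather than parsed history records:

```python
def find_hcp_entries(path):
    """Scan a file's raw bytes for hcp:// URL strings."""
    with open(path, "rb") as f:
        data = f.read()
    hits = []
    idx = data.find(b"hcp://")
    while idx != -1:
        # Crude extraction: take bytes up to the next NUL terminator,
        # capping the length if no terminator is found.
        end = data.find(b"\x00", idx)
        if end == -1:
            end = min(idx + 256, len(data))
        hits.append(data[idx:end].decode("ascii", "replace"))
        idx = data.find(b"hcp://", end)
    return hits
```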

H. Carvey said...

du212,

what did you find?