Monday, September 27, 2010

Updates

I know, I haven't posted in a while...but then, I haven't really had a lot to post about, as I've been busy completing one project (book) and getting kicked off on another one...

Speaking
I've got a couple of upcoming speaking engagements in the next couple of months...

WACCI - I think I'm giving keynote 2 on Tues; I'm strongly considering using a discussion-style approach (as in "no PPT"), and broaching the subject of engaging and sharing between LE and the private sector in some manner. This is a bit of a sticky wicket, as one cannot simply pontificate on the need for LE to reach out to the private sector to engage and share; not being LE myself, I'm afraid that I'd have little credibility in the matter.

PFIC2010 - I'll be heading to Utah in Nov for a couple of days, to present on "Windows System Forensics". Yeah, I know...lots of room there for me to mess this one up! ;-)

Windows Registry Forensics
I recently finished work on the Windows Registry Forensics book, including sending in the DVD master.

Here's a short description of the chapters:
Ch 1 - This chapter addresses some of what I consider to be the basic concepts of "analysis"; this book is about Registry analysis and as such, I thought it was important to lay out what I see as constituting Registry analysis. This chapter also goes into the purpose and structure of the Registry, in order to build a foundation for the analyst for the rest of the book.

Ch 2 - This chapter addresses the tools that can be used to perform analysis of the Registry, in both live and post-mortem situations. This is kind of important, I think, because there are issues such as malware infections that really overwhelm both IT admins and responders; identifying Registry artifacts of an infection can allow you to comb the infrastructure using native or custom tools for other infected systems.

Note: This chapter does not address any commercial forensic analysis applications, for the simple reason that I did not have access to any of them, beyond Technology Pathways' ProDiscover.

Ch 3 - This chapter addresses Registry analysis from a system-wide perspective; that is, it addresses the files that comprise the system hives, and what information of value to an analyst can be extracted from them. For example, I discuss what information is available from the SAM hive, and also include some tools that you can use to determine if a user has a password, and how you can crack it.

Ch 4 - This chapter addresses analyzing the user's hives.

With chapters 3 and 4, I took a bit of a different approach than I have in the past. Rather than listing key and value after key and value, I provided examples of how the information (key name, key LastWrite time, values, data, etc.) could be and have been used during incident response or during an examination.

The DVD includes a copy of the RegRipper tools (including the Plugin Browser), as well as all of the latest plugins that I've written, as of mid-Sept, 2010.

Open Source
Having finished the Registry book, I'm working diligently on my chapters of Digital Forensics with Open Source Tools, which I'm co-authoring with Cory Altheide (it was Cory's idea, and he's handling the lion's share of the writing).

Projects
I've run across a couple of interesting issues during analysis recently that I wanted to document, and the best way I found to do so was to create what I'm referring to as a forensic system scanner, based on tools such as RegRipper and Nessus. The idea is that rather than memorizing the various things I've looked for or found, I can now mount an acquired image as a drive letter, and then run a number of plugins across the image. Much like RegRipper, the plugins can parse the Registry, but what I've done now is included checks for the file system, as well.

I'm designing/writing this scanner to be run against a mounted (via ImDisk, etc.) image, or against a system accessed via F-Response. Given this, it will also work with other mounting methods, such as Windows 7's ability to mount .vhd files. If you have a method for mounting .vmdk files, you could also use that. At this point, I've got a couple/half dozen or so working plugins and things seem to be moving smoothly in that regard. One plugin looks for .tmp files with 'MZ' signatures in the user profiles "Local Settings\Temp" directories...it gets the information about the profile paths from the ProfileList key. That plugin checks for .exe files with 'MZ' signatures, as well. In both cases, if it finds any such files, it also computes an MD5 hash for each file.
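
Just to give a sense of what one of these plugins might look like, here's a rough sketch in Perl (this is NOT the actual scanner code...the drive letter, the Software hive path, and the XP-style "Local Settings\Temp" layout are all assumptions). It uses Parse::Win32Registry to pull the profile paths from the ProfileList key:

#!/usr/bin/perl
# Sketch only: locate user Temp dirs via the ProfileList key in the Software
# hive of a mounted image, then flag .tmp/.exe files with an 'MZ' signature
# and hash them. Drive letter, hive path, and the XP-style "Local Settings\Temp"
# layout are assumptions.
use strict;
use warnings;
use Parse::Win32Registry;
use Digest::MD5;

my $drive = shift || "G:";    # mounted image (ImDisk, F-Response, etc.)
my $soft  = Parse::Win32Registry->new("$drive\\Windows\\system32\\config\\software")
    or die "Cannot open Software hive\n";
my $pl = $soft->get_root_key->get_subkey('Microsoft\Windows NT\CurrentVersion\ProfileList')
    or die "ProfileList key not found\n";

foreach my $sk ($pl->get_list_of_subkeys) {
    my $val = $sk->get_value('ProfileImagePath') or next;
    my $profile = $val->get_data;
    $profile =~ s/^%SystemDrive%/$drive/i;        # re-root the path to the mounted image
    my $temp = "$profile\\Local Settings\\Temp";
    next unless -d $temp;
    opendir(my $td, $temp) or next;
    foreach my $file (grep { /\.(tmp|exe)$/i } readdir($td)) {
        my $path = "$temp\\$file";
        next unless -f $path;
        open(my $fh, '<', $path) or next;
        binmode($fh);
        read($fh, my $sig, 2);
        if (defined($sig) && $sig eq "MZ") {
            seek($fh, 0, 0);
            printf "%s  MD5: %s\n", $path, Digest::MD5->new->addfile($fh)->hexdigest;
        }
        close($fh);
    }
    closedir($td);
}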

Another plugin checks for a Zeus/Zbot infection. This plugin checks the UserInit value in the Software hive for the addition of "sdra64.exe", as well as checks the system32 directory within the image for the existence of the file. This plugin can be expanded to include any of the other file names that have been used, as well.
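
A sketch of how that check might be coded (again, not the actual plugin...the mounted drive letter and hive location are assumptions):

#!/usr/bin/perl
# Sketch: check for a classic Zeus/Zbot indicator - "sdra64.exe" appended to
# the Userinit value in the Software hive - plus the file itself in system32.
use strict;
use warnings;
use Parse::Win32Registry;

my $drive = shift || "G:";
my $soft  = Parse::Win32Registry->new("$drive\\Windows\\system32\\config\\software")
    or die "Cannot open Software hive\n";
my $winlogon = $soft->get_root_key->get_subkey('Microsoft\Windows NT\CurrentVersion\Winlogon');

if ($winlogon && (my $val = $winlogon->get_value('Userinit'))) {
    my $data = $val->get_data;
    print "Userinit = $data\n";
    print "ALERT: sdra64.exe found in Userinit value\n" if ($data =~ m/sdra64\.exe/i);
}

my $file = "$drive\\Windows\\system32\\sdra64.exe";
print "ALERT: $file exists on the file system\n" if (-e $file);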

Chris Pogue has mentioned seeing the Perfect Key Logger in a number of exams; it is pretty simple and straightforward to write a plugin that checks for this, as well.

As with RegRipper, I can run either individual plugins, or lists of plugins, which I've taken to calling "profiles".

My overall thinking is that there can be a lot of things to look for out there, and if I can codify a check, I don't have to keep trying to remember all of them. God knows I'll get through an exam, deliver the final report, and think, "wow, I forgot to check for this...".

This scanner is NOT meant to be completely comprehensive, nor is it meant to replace deep analysis. My goal here is to produce something that can be used as part of an in-processing procedure; get an image, verify it, copy it, then run the scanner against the working copy of the image. The report can then be provided along with either the image, or with selected files extracted from the image, to an analyst. This is intended to be more of a sweeper, but also a force multiplier...think of it; someone writes a check or plugin, and provides it to their team. Now, everyone on the team has the ability to run the same check, even though they don't all see the same thing on every engagement. With a central repository of well-documented plugins, we now have retention of corporate knowledge from each analyst.

Addendum: I was reading this MMPC post regarding Camec.A (they should have entitled the post "Getting a Brazilian"), and it occurred to me that this would be a great example of using the scanner I mentioned above. Specifically, you could easily write a plugin to check for this issue; one way to write it would be to simply check for the DLL files mentioned in the post, and if you find them, get and compare their hashes. Another way to write the plugin would be to add a Registry query for BHOs (RegRipper already has such a plugin), and then check for the files.
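
For the second approach, a rough sketch might look something like this...enumerate the BHO CLSIDs from the Software hive, resolve each one to its InprocServer32 DLL path, and hash whatever is actually on the image (the drive letter and the re-rooting of the path are assumptions, not the actual plugin):

#!/usr/bin/perl
# Sketch: enumerate Browser Helper Objects from the Software hive of a mounted
# image, resolve each CLSID to its InprocServer32 DLL path, and hash any DLL
# found on the image.
use strict;
use warnings;
use Parse::Win32Registry;
use Digest::MD5;

my $drive = shift || "G:";
my $soft  = Parse::Win32Registry->new("$drive\\Windows\\system32\\config\\software")
    or die "Cannot open Software hive\n";
my $root = $soft->get_root_key;
my $bho  = $root->get_subkey('Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects')
    or die "No BHO key found\n";

foreach my $clsid_key ($bho->get_list_of_subkeys) {
    my $clsid = $clsid_key->get_name;
    my $inproc = $root->get_subkey("Classes\\CLSID\\$clsid\\InprocServer32") or next;
    my $dll_val = $inproc->get_value('') or next;    # default value holds the DLL path
    my $dll = $dll_val->get_data;
    print "BHO $clsid -> $dll\n";
    (my $local = $dll) =~ s/^[A-Za-z]:/$drive/;      # re-root the path to the mounted image
    if (-f $local) {
        open(my $fh, '<', $local) or next;
        binmode($fh);
        printf "  MD5: %s\n", Digest::MD5->new->addfile($fh)->hexdigest;
        close($fh);
    }
}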

Again, once the plugin is written and added to the tool's plugins directory, every system that this is run against would be checked; the analyst running the scan would not need to know (or have memorized) the specific indicators of the malware.

Tuesday, September 07, 2010

Another WFA 2/e Review

I received the following comments from Allen Wheeler (a Systems Engineer with Troll Systems) today via email, and with his permission, I wanted to share his comments with you:

I just wanted to tell you how impressed I am with how thoroughly you cover the methods covered in your book. Usually when I pick up a book like this I’m prepared for more questions than when I started but that’s not the case with Windows Forensic Analysis. It’s elegantly worded and easy to follow which is hard to find when it comes to technical books. I read for about 8 hours straight yesterday on a trip back from the east coast and I still didn’t want to put it down. As a long time Windows user, you’ve helped fill in a lot of blanks and I feel like I’ll have a lot more flexibility and power by employing many of your tools and techniques as a Systems Engineer who spends a lot of time in this OS. Thank you!

Allen also posted a review of WFA 2/e on Amazon. Thanks, Allen, for your words, and for feeling led to reach out and share your thoughts, not just with me, but with others!

Tuesday, August 24, 2010

It's those darned DLLs again...

There's been a lot (and I mean a LOT) of traffic on the interwebs lately regarding a vulnerability associated with how DLLs are loaded on Windows systems. In short, the idea is that due to how applications look for DLLs that aren't explicitly defined by their path, and how the current working directory can be defined (which includes how an application is launched), the application can end up loading a malicious DLL that is in a non-traditional directory.

First, some background/references:
DLL Search Order Vuln (circa 2000)
MSDN DLL Search Order article
ACROS Security's recent announcement
Recent Mandiant blog post (author: Nick Harbour)
SANS ISC blog post (author: Bojan Zdrnja)
ComputerWorld article
Register article (UK)
Rapid7 blog post (author: HD Moore)
Metasploit blog post (author: HD Moore)
SecDev blog post (author: Thierry Zoller)
MS Research and Defense blog
MS Security Advisory 2269637
MS KB Article 2264107 (includes a Registry change)

This was originally something that was pointed out 10 years ago...the new twist is that it has been found to work on remote resources. MS has issued instructions and guidance on how to properly use the appropriate APIs to ensure the safe loading of libraries/DLLs. However, this, combined with the findings from ACROS and others (see the paper referenced in HD's Rapid7 blog post), pretty much tells us that we have a problem, Houston.

The DLL search order vulnerability is a class of vulnerability that can be implemented/exploited through several avenues. Basically, some mechanism needs to tell an application that it needs to load a DLL (import table, Registry entry, etc.) through an implicit (i.e., not explicitly defined) path. The application will then start looking for the DLL in the current working directory...depending on how this exploit is implemented, this may be the directory where the application resides, or it may be a remote folder accessible via SMB or WebDAV. Therefore, the application starts looking for the DLL, and as there is no validation, the first DLL with the specified name is loaded. One thing to keep in mind is that the DLL cannot break required functionality needed by the application, or else the application itself may cease to work properly.

Okay, so what does all of this mean to forensic analysts? So some user clicks on a link that they aren't supposed to and accesses a specific file, and at the same time a malicious DLL gets loaded, and the system becomes infected. Most often, forensic analysts end up addressing the aftermath of that initial infection, finding indications of secondary or perhaps tertiary downloads. But what question do customers and regulatory bodies all ask?

How was the system originally infected?

What they're asking us to do is perform a root cause analysis and determine the initial infection vector (IIV).

Some folks may feel that the method by which the DLL is designated to load (import table, file extension association, etc.) is irrelevant and inconsequential, and when it comes to a successful exploit, they'd be right. However, when attempting to determine the root cause, it's very important. Take Nick's post on the Mandiant blog, for example...we (me, and the team I work with) had seen this same issue, where Explorer.exe loaded ntshrui.dll, but not the one in the C:\Windows\system32 directory. Rather, the timeline I put together for the system showed the user logging in and several DLLs being loaded from the system32 directory, then C:\Windows\ntshrui.dll was loaded. The question then became, why was Explorer.exe loading this DLL? It turned out that when Explorer.exe loads, it checks the Registry for approved shell extensions, and then starts loading those DLLs. A good number of these approved shell extensions are listed in the Registry with explicit paths, pointing directly to the DLL (in most cases, in the system32 directory). However, for some reason, some of the DLLs are listed with implicit paths...just the name of the DLL is provided and Explorer.exe is left to its own devices to go find that DLL. Ntshrui.dll is one such DLL, and it turns out that in a domain environment, that particular DLL provides functionality to the shell that most folks aren't likely to miss; therefore, there are no overt changes to the Explorer shell that a user would report on.

Yes, I have written a plugin that parses the Registry for approved shell extensions. It's not something that runs quickly, because the process requires that the GUIDs that are listed as the approved shell extensions then be mapped to the CLSID entries...but it works great. This isn't as nifty as the tools that Nick and HD mention in their blog posts, but it's another avenue to look to...list all of the approved shell extensions listed with implicit paths, and then see if any of those DLLs are resident in the C:\Windows directory.
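
To give an idea of the approach (this is a minimal sketch, not the plugin itself...the "implicit path" test here is simply "no directory separator in the data"), something along these lines will walk the Approved key and flag the entries of interest:

#!/usr/bin/perl
# Sketch: list approved shell extensions from a Software hive and flag those
# whose InprocServer32 entry is an implicit path (just a DLL name, no drive
# or directory) - the condition behind the ntshrui.dll search order issue.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <path to Software hive>\n";
my $soft = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $soft->get_root_key;
my $approved = $root->get_subkey('Microsoft\Windows\CurrentVersion\Shell Extensions\Approved')
    or die "Approved shell extensions key not found\n";

foreach my $val ($approved->get_list_of_values) {
    my $guid = $val->get_name;
    my $inproc = $root->get_subkey("Classes\\CLSID\\$guid\\InprocServer32") or next;
    my $dll_val = $inproc->get_value('') or next;    # default value = DLL path
    my $dll = $dll_val->get_data;
    # an implicit path has no directory separator and no environment variable
    if ($dll !~ m/\\/ && $dll !~ m/^%/) {
        printf "IMPLICIT  %s  %s (%s)\n", $guid, $dll, $val->get_data;
    }
}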

Again, there's a lot of great information out there and a lot to digest with respect to this issue, but the overarching issue that I'm most interested in now is, what does this mean to a forensic analyst, and how can we incorporate a thorough examination for this issue into our processes? Things that forensic analysts would want to look for during an exam would be the MRU lists for various applications...were any files recently accessed or opened from remote resources (thumb drives, remote folders, etc.)? If there's a memory dump available (or the output of handles.exe or listdlls.exe) check to see if there are any DLLs loaded by an application that have unusual paths.
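
As a rough example of that last check, assuming you've saved listdlls-style output (one loaded module path per line), a few lines of Perl can flag the paths that fall outside the usual locations...the pattern and the whitelist here are assumptions to be tuned per case:

#!/usr/bin/perl
# Sketch: comb the saved output of a tool such as listdlls.exe for loaded DLLs
# whose paths fall outside the usual locations. The regex for the output layout
# and the whitelist of "expected" paths are assumptions.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <listdlls output file>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
while (my $line = <$fh>) {
    chomp $line;
    # grab anything that looks like a full path to a DLL at the end of the line
    next unless ($line =~ m/([a-z]:\\.+\.dll)\s*$/i);
    my $dll = $1;
    next if ($dll =~ m/^[a-z]:\\windows\\(system32|syswow64|winsxs)\\/i);
    next if ($dll =~ m/^[a-z]:\\program files/i);
    print "Check: $dll\n";
}
close($fh);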

One of the issues you're going to run into is that you may not have access to the remote resource, and while you may see indications of a file being accessed, you won't have any direct indication that a DLL was in the same folder. Or, if the vulnerability was exploited using a local resource, AV applications most likely are not going to flag on the malicious DLL. In some cases, a timeline may provide you with a sequence of events...user launched an application, remote file was accessed, system became infected...but won't show you explicitly that the malicious DLL was loaded. This is why it is so important for forensic analysts and incident responders to be aware of the mechanisms that can cause an application to load a DLL.

One final note...if you're reading MS KB article 2264107, you'll see that at one point, "CWDIllegalInDllSearch" is referred to as a Registry key, then later as a value. Apparently, it's a value. Kind of important, guys. I'm just sayin'...

I've written a plugin that checks for the mentioned value, and updated another plugin that already checks the Image File Execution Options keys for Debugger values to include checking for this new value, as well.
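
Something along these lines (a sketch, not the actual plugin) will walk the Image File Execution Options subkeys in a Software hive and report both values; per my reading of the KB article, the system-wide CWDIllegalInDllSearch value lives in the System hive under Session Manager, which isn't covered here:

#!/usr/bin/perl
# Sketch: walk the Image File Execution Options subkeys in a Software hive and
# report any Debugger or CWDIllegalInDllSearch values found. The hive path is
# taken from the command line; output format is arbitrary.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <path to Software hive>\n";
my $soft = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $ifeo = $soft->get_root_key
    ->get_subkey('Microsoft\Windows NT\CurrentVersion\Image File Execution Options')
    or die "IFEO key not found\n";

foreach my $app ($ifeo->get_list_of_subkeys) {
    foreach my $name ('Debugger', 'CWDIllegalInDllSearch') {
        if (my $val = $app->get_value($name)) {
            printf "%-30s %-22s %s\n", $app->get_name, $name, $val->get_data_as_string;
        }
    }
}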

Monday, August 16, 2010

Links

Who You Gonna Call?
Tom Liston had a great post to the ISC blog a bit ago entitled Incident Reporting - Liston's "How To" Guide. This humorous bent on the situation simply oozed truthiness. Many moons ago, I was on the receiving end of the "abuse@" emails for a telecomm provider, and marveled at the emails threatening litigation as a result of...and get this...a single ICMP packet received by some home user's firewall. Seriously.

One thing I would add to that, particularly for larger organizations...seek a relationship with professional responders before (because one will...if one hasn't, you simply haven't been paying attention) an incident occurs, and work with them to understand how to best help yourself. What really turns most organizations off to responding to incidents in general is the overall impact to their organization...having people on-site, engaging their staff, taking them away from keeping the email server up...and there's the price tag, to boot. But the problem is that in almost every instance, every question asked by the responders is answered with "I don't know"...or with something incorrect that should have been "I don't know". Questions like, what first alerted you to the incident? Where does your sensitive data live in your infrastructure? How many egress points/ISPs do you have? Do you have any firewalls, and if so, can we see the logs? It's questions like these that the responders need to have answered so that they can scope the engagement (how many people and how much equipment to send), and determine how long it will take to complete the engagement. To be honest, we really don't want to have to engage your staff any more than we have to...we understand that as the engagement goes on, they become less and less interested, and less and less responsive.

Art Imitates Life
My wife and I watched Extraordinary Measures recently...we both like that feel-good, "based on a true story" kind of movie. After watching the movie, it occurred to me that there are a lot of parallels between what was portrayed in the movie and our industry...specifically, that there are a lot of tools and techniques that are discussed, but never see common usage. Through sites such as eEvidence, I've read some very academic papers that have described a search technique or some other apparently very useful technique (or tool), but we don't see these things come into common usage within the industry.

IIVs
H-Online has a nice CSI-style write up on investigating a "PDF timebomb". I know a lot of folks don't want to necessarily look at (I hesitate to say "read") some really dry technical stuff, particularly when it doesn't really talk about how a tool is used. A lot of available tools have a lot of nice features, but too many times I think a lot of us don't realize that espousing the virtues and capabilities of a tool does little for those who really don't get it...after all, we may have made a connection that others simply don't see right away. So things like this are very beneficial. Matt Shannon cracked the code on this and has published Mission Guides.

So what does this have to do with initial infection vectors, or IIVs? Well, malware and bad guys get on a system somehow, and H-Online's write-up does a good job of representing a "typical" case. I think that for the most part, many of us can find the malware on a system, but finding how it got there in the first place is still something of a mystery. After all, there are a number of ways that this can happen...browser drive-by, infected attachment through email, etc. We also have to consider the propagation mechanism employed by the malware...Ilomo uses psexec.exe and the WinInet API to move about the infrastructure. So, given the possibilities, how do we go about determining how the malware got on the system?

Book News - Translations
I received a nice package in the mail recently...apparently, WFA 2/e has been translated into French...the publisher was nice enough to send me some complimentary copies! Very cool! I also found out that I will be seeing copies of WFA 2/e in Korean sometime after the first of the year, and in a couple of weeks, I should be seeing WFA 1/e in Chinese!

Malware Communications
Nick posted recently on different means by which malware will communicate off of Windows systems, from a malware RE perspective. This is an interesting post...like I said, it's from an RE perspective. Nick's obviously very knowledgeable, and I think that what he talks about is very important for folks to understand...not just malware folks, but IR and even disk forensics types. I've talked before (in this blog and in my books) about the artifacts left behind through the use of WinInet and URLMON, and there are even ways to prevent these artifacts from appearing on disk. The use of COM to control IE for malicious purposes goes back (as far as I'm aware) to BlackHat 2002 when Roelof and Haroon talked about Setiri.

Evidence Cleaners
Speaking of disk analysis, Matt has an excellent post on the SANS Forensic blog that talks about using "evidence cleaners" as a means for identifying interesting artifacts. I think that this is a great way to go about identifying interesting artifacts of various tools...after all, if someone wants to "clean" it, it's probably interesting, right? I've said for years that the best places to go to find malware artifacts are (a) AV write-ups and (b) MS's own AutoRuns tool (which now works when run against offline hives).

As a side note, I had a case where an older version of WindowWasher had been run against a system a short time before acquisition, and was able to recover quite a bit of the deleted Registry information using regslack.

Tuesday, August 03, 2010

Artifacts: Direct, indirect

In reviewing some of the available materials on Stuxnet, I've started to see some things that got me thinking. Actually, what really got me thinking was what I wasn't seeing...

To begin with, as I mentioned here, the .sys files associated with Stuxnet are apparently signed device drivers. So, I see this, and my small military mind starts churning...device drivers usually have entries in the Services keys. Thanks to Stefan, that was confirmed here and here.

Now, the interesting thing about the ThreatExpert post is the mention of the Enum\Root\LEGACY_* keys, which brings us to the point of this post. The creation of these keys is not a direct result of the malware infection, but rather an artifact created by the operating system, as a result of the execution of the malware through this particular means.

We've seen this before, with the MUICache key, way back in 2005. AV vendors were stating that malware created entries in this key when executed, when in fact, this artifact was an indirect result of how the malware was launched in the testing environment. Some interesting things about the LEGACY_* keys are that:

1. The LastWrite time of these keys appears to closely correlate to the first time the malicious service was executed. I (and others, I can't take all of the credit here) have seen correlations between when the file was created on the system, Event Log entries showing that the service started (event ID 7035), and the LastWrite time of the LEGACY_* key for the service. This information can be very helpful in establishing a timeline, as well as indicating whether some sort of timestomping was used with respect to the malware file(s). A quick sketch of pulling these keys and their LastWrite times from a System hive follows this list.

2. The LEGACY_* key for the service doesn't appear to be deleted if the service itself is removed.
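
Here's that sketch...it reads the Select\Current value to pick the control set, then lists the LEGACY_* keys and their LastWrite times so they can be dropped into a timeline (the output format is arbitrary):

#!/usr/bin/perl
# Sketch: list Enum\Root\LEGACY_* keys from a System hive along with their
# LastWrite times, for correlation against file creation times and event ID
# 7035 entries in a timeline.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <path to System hive>\n";
my $sys  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $sys->get_root_key;

my $current = $root->get_subkey('Select')->get_value('Current')->get_data;
my $ccs     = sprintf("ControlSet%03d", $current);
my $enum    = $root->get_subkey("$ccs\\Enum\\Root") or die "No Enum\\Root key\n";

foreach my $key ($enum->get_list_of_subkeys) {
    next unless ($key->get_name =~ m/^LEGACY_/);
    printf "%s  %s\n", $key->get_timestamp_as_string, $key->get_name;
}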

So, in short, what we have is:

Direct - Artifacts created by the compromise/infection/installation process, such as files/Registry keys being created, deleted, or modified.

Indirect - Ancillary (secondary, tertiary) artifacts created by the operating system as a result of the installation/infection process running in the environment.

So, is this distinction important, and if so, how? How does it affect what I do?

Well, consider this...someone gets into a system or a system becomes infected with malware. There are things that can be done to hide the presence of the malware, the intrusion, or the overall infection process itself. This is particularly an issue if you consider Least Frequency of Occurrence (LFO) analysis (shoutz to Pete Silberman!), as we aren't looking for the needle in the haystack, we're looking for the hay that shouldn't be in the haystack. So, hide all of the direct artifacts, but how do you go about hiding all of the indirect artifacts, as well?

So, by example...someone throws a stone into a pool, and even if I don't see the stone being thrown, I know that something happened, because I see the ripples in the water. I can then look for a stone. So let's say that someone wants to be tricky, and instead of throwing a stone, throws a glass bead, something that is incredibly hard to see at the bottom of the pool. Well, I still know that something happened, because I saw the ripples. Even if they throw a stone attached to a string, and remove the stone, I still know something happened, thanks to the ripples...and can act/respond accordingly.

Also, indirect artifacts are another example of how Locard's Exchange Principle applies to what we do. Like Jesse said, malware wants to run and wants to remain persistent; in order to do so, it has to come in contact with the operating system, producing both direct and indirect artifacts. We may not find the direct artifacts right away (i.e., recent scans of acquired images using up-to-date AV do not find Ilomo...), but finding the indirect artifacts may lead us to the direct ones.

Another example of an indirect artifact is web browsing history and Temporary Internet Files associated with the "Default User" or LocalService accounts; these indicate the use of the WinInet API for off-system communications, but through an account with System-level privileges.

Thoughts?

Saturday, July 31, 2010

Ugh

Sorry, I just don't know what other title to use...I wasn't able to come up with something witty or pithy, because all I kept thinking was "ugh".

The "ugh" comes from a question (two, actually, that are quite similar) that appear over and over again in the lists and online forums (forii??)...

I have an image of a system that may have been compromised...how do I prove that data was exfiltrated/copied from the system?

Admit it...we've all seen it. Some may have actually asked this question.

Okay, this is the part where, instead of directly answering the question, I tend to approach the answer from the perspective of getting the person who asked the question to reason through the process to the answer themselves. A lot of people really hate this, I know...many simply want to know which button to click in their forensic application that will give them a list of all of the files that had been copied from the system (prior to the image being acquired).

So, my question to you is...with just an image of the supposed victim system, how would you expect to demonstrate that data was copied or exfiltrated from that system?

Well, there are a couple of things I've done in the past. One is to look for indications of collected data, usually in a file. I've seen this in batch files, VBS scripts, and SQL injection commands...commands are run and the output is collected to a file. From there, you may see additional commands in the web server logs that indicate that the file was accessed...be sure to check the web server response code and the bytes sent by the server, if they're recorded.

In other instances, I've found that the user had attached files to web-based email. Some artifacts left after accessing a GMail account indicated that a file was attached to an email and sent to another address. In several instances, this was a resume...an employee was actively looking for a job and interviewing for that position while on company time. Based on file names and sizes (which may or may not be available), used in conjunction with file last accessed times, we've been able to provide indications that files were sent off of the system.

What else? Well, there's P2P applications...you may get lucky, and the user will have installed one that clearly delineates which files are to be shared. Again, this may only be an indication...you may have to access the P2P network itself and see if the file (name, size, hash) is out there.

What about copying? Most analysts are aware of USB devices by now; however, there is still apparently considerable confusion over what indications of the use of such devices reside within an image. One typical scenario is that a user plugs such a device in and copies files to the device...how would you go about proving this? Remember, you only have the image acquired from the system. The short answer is simply that you can't. Yes, you can show when a device was plugged in (with caveats) and you may have file last access times to provide additional indications, but how do you definitively associate the two, and differentiate the file accesses from, say, a search, an AV scan, or other system activity?

I hope that this makes sense. My point is that contrary to what appears to be popular belief, Windows systems do not maintain a list of files copied off of the system, particularly not in the Registry. If your concern is data exfiltration (insider activity, employee takes data, intruder gets on the system and exfils data...), consider the possible scenarios and demonstrate why they would or wouldn't be plausible (i.e., exfil via P2P would not be plausible if no P2P apps are installed). Reason through the analysis process and provide clear explanations and documentation as to what you did, what you found, and justify your findings.

Thursday, July 29, 2010

Exploit Artifacts redux, plus

As a follow-up to my earlier post where I discussed Stuxnet, I wanted to take a moment to update some of what's come out on this issue in the past 12 days.

Claus wrote a great post that seems pretty comprehensive with respect to the information that's out there and available. He points to the original posts from VirusBlokAda, as well as two posts from F-Secure, the first of which points to the issue targeting SCADA systems. According to the second post:

...the Siemens SIMATIC WinCC database appears to use a hardcoded admin username and password combination that end users are told not to change.

Does that bother anyone?

Okay, all that aside, as always I'm most interested in how we (forensic analysts, incident responders) can use this information to our benefit, particularly during response/analysis activities. This PDF from VirusBlokAda, hosted by F-Secure, has some very good information on artifacts associated with this malware. For example, there are two digitally signed driver files, mrxnet.sys and mrxcls.sys (found in the system32\drivers directory), as well as several hidden (via attributes) LNK files and a couple of .tmp files. There are also two other files in the system32\inf directory; oem6c.pnf and oem7a.pnf, both of which reportedly contain encrypted data. This MS Threat Encyclopedia entry indicates that these files (and others) may be code that gets injected into lsass.exe. This entry points to online hosts reportedly contacted by the worm/downloader itself, so keep an eye on your DNS logs.

As the two .sys files are drivers, look for references to them both in the Services keys. This also means that entries appear in the Enum\Root keys (thanks to Stefan for the link to ThreatExpert).

This post from Bojan of the SANS ISC has some additional information, particularly that the LNK files themselves are specially-crafted so that the embedded MAC times within the LNK file are all set to 0 (ie, Jan 1, 1970). Outside of that, Bojan says that there is nothing special about the LNK files themselves...but still, that's something.

MS also has a work-around for disabling the display of icons for shortcuts.

So, in summary, what are we looking for? Run RegRipper and see if there are entries in the Services and Enum\Root (I have a legacy.pl plugin, or you can write your own) keys for the drivers. Within the file system (in a timeline), look for the driver files, as well as the .tmp and .pnf files. Should you find those, check the Registry and the appropriate log file (setupapi.log on XP) for a recently connected USB device.
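
If you'd rather script the check than eyeball the RegRipper output, a sketch like the following will look for the service keys (using the MRxCls/MRxNet names reported in the write-ups...verify against your own data) and the driver files on a mounted image:

#!/usr/bin/perl
# Sketch: check a System hive for Services entries named MRxCls or MRxNet, and
# check the mounted image for the corresponding .sys files. The drive letter,
# hive path, and service names are assumptions to verify per case.
use strict;
use warnings;
use Parse::Win32Registry;

my $drive = shift || "G:";
my $sys = Parse::Win32Registry->new("$drive\\Windows\\system32\\config\\system")
    or die "Cannot open System hive\n";
my $root = $sys->get_root_key;
my $ccs  = sprintf("ControlSet%03d",
    $root->get_subkey('Select')->get_value('Current')->get_data);

foreach my $svc ('MRxCls', 'MRxNet') {
    if (my $key = $root->get_subkey("$ccs\\Services\\$svc")) {
        printf "Service %s exists, LastWrite: %s\n", $svc, $key->get_timestamp_as_string;
    }
    my $file = "$drive\\Windows\\system32\\drivers\\" . lc($svc) . ".sys";
    print "File found: $file\n" if (-e $file);
}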

Speaking of artifacts, Rebhip.A looks like a lot of fun...

For those interested, here's some PoC code from Exploit-DB.com.

Addendum: Anyone write a plugin for Rebhip.A yet? Also, I almost missed Pete's post on Stuxnet memory analysis...

Updates

I haven't posted in a while, as I was on a mission trip with a team of wonderful folks involved in Compassion International. There wasn't a lot of connectivity where I was, and to be honest, it was good to get away from computers for a while.

However, in the meantime, things haven't stopped or slowed down in my absence. Matt has added support to F-Response for the Android platform. Also, within 24 hours of the release, a customer had posted a video showing F-Response for Android running on an HTC Desire. I have an Android phone...Backflip...but I have read about how the Android OS is rolling out on more than just phones. Andrew Hoog (viaForensics) has a site on Android Forensics.

The folks at the MMPC site posted about a key logger (Win32/Rebhip.A) recently. There's some information about artifacts that is very useful to forensic analysts and incident responders in the write-up for Rebhip.A. There are some very interesting/useful indicators at the site.

Det. Cindy Murphy has a really nice paper out on cell phone analysis...not Windows-specific, I know, but very much worth a mention. You can find info linked on Eric's blog, as well as a link to the paper on Jesse's blog. I've read through the paper, and though I don't do many/any cell phone exams, the paper is a very good read. If you have a moment, or will have some down time (traveling) soon, I'd highly recommend printing it out and reading it, as well as providing feedback and comments.

Claus has a couple of great posts, like this one on network forensics. As Claus mentions in his post (with respect to wirewatcher), network capture analysis is perhaps most powerful when used in conjunction with system analysis.

In addition, Claus also has a post about a mouse jiggler...comedic/lewd comments aside, I've been asked about something like this by LE in the past, so I thought I'd post a link to this one.

Finally (for Claus's site, not this post...), be sure to check out Claus's Security and Forensics Linkfest: Weekend Edition post, as he has a number of great gems linked in there. For example, there's a link to PlainSight, a recent update to Peter Nordahl-Hagen's tools, WinTaylor from Caine (great for CSI fans), as well as a tool from cqure.net to recover TightVNC passwords. There's more and I can't do it justice...go check it out.

Ken Pryor had a great post over on the SANS Forensic blog entitled I'm here! Now what? The post outlines places you can go for test images and data, to develop your skills. One site that I really like (and have used) is Lance's practicals (go to the blog, and search for "practical"), especially the first one. The first practical has some great examples of time manipulation and has provided a number of excellent examples for timeline analysis.

Do NOT go to Windows-ir.com

All,

I get a lot of comments about links to a domain I used to own called windows-ir.com. I no longer have anything to do with that domain, but I know that there are some older links on this blog that still point there.

Don't go there, folks. And please...no more comments about the site. I'm aware of it, but there's nothing I can do about the content there. If you're going in search of 4-yr-old content, feel free to do so, but please...no more comments about what's there.

Thanks.

Saturday, July 17, 2010

Exploit Artifacts redux

I posted yesterday, and included some discussion of exploit artifacts, the traces left by an exploit, prior to the secondary download. When a system is exploited, something happens...in many cases, that something is an arbitrary command being run and/or something being downloaded. However, the initial exploit will itself have artifacts...they may exist only in memory, but they will have artifacts.

Let's look at an example...from the MMPC blog post on Stuxnet:
What is unique about Stuxnet is that it utilizes a new method of propagation. Specifically, it takes advantage of specially-crafted shortcut files (also known as .lnk files) placed on USB drives to automatically execute malware as soon as the .lnk file is read by the operating system. In other words, simply browsing to the removable media drive using an application that displays shortcut icons (like Windows Explorer) runs the malware without any additional user interaction. We anticipate other malware authors taking advantage of this technique.

So the question at this point is, what is unique about the .lnk files that causes this to happen? A while back, there was an Excel vulnerability, wherein if you opened a specially-crafted Excel document, the vulnerability would be exploited, and some arbitrary action would happen. I had folks telling me that there ARE NO artifacts to this, when, in fact, there has to be a specially-crafted Excel document on the system that gets opened by the user. Follow me? So if that's the case, if I search the system for all Excel documents, and then look for those that were created on the system near the time that the incident was first noticed...what would I be looking for in the document that makes it specially-crafted, and therefore different from your normal Excel document?

So, we know that with Stuxnet, we can look for the two files in an acquired image (mrxcls.sys, mrxnet.sys) for indications of this exploit. But what do we look for in the malicious LNK file?

Why is this important? Well, when an incident occurs, most organizations want to know how it happened, and what happened (i.e., what data was exposed or compromised...). This is part of performing a root cause analysis, which is something that needs to be done...if you don't know the root cause of an incident and can't address it, it's likely to happen again. If you assume that the initial exploit is email-borne, and you invest in applying AV technologies to your email gateway, what effect will that have if it was really browser-borne, or the result of someone bringing in an infected thumb drive?

The MMPC site does provide some information as to the IIV for this issue:
In addition to these attack attempts, about 13% of the detections we’ve witnessed appear to be email exchange or downloads of sample files from hacker sites. Some of these detections have been picked up in packages that supposedly contain game cheats (judging by the name of the file).

Understanding the artifacts of an exploit can be extremely beneficial in determining and addressing the root cause of incidents. In the case of Stuxnet, this new exploit seems to be coupled with rootkit files that are signed with once-legitimate certificates. This coupling may be unique, but at the same time, we shouldn't assume that every time we find .sys files signed by RealTek, that the system had fallen victim to Stuxnet. Much like vulnerability databases, we need to develop a means by which forensic analysts can more easily determine the root cause of an infection or compromise; this is something that isn't so easily done.

Addendum
Another thing that I forgot to mention...we see in the MMPC post that the files are referred to, but nothing about the persistence mechanism. Do we assume that these files just sit there, or do we assume that they're listed as device drivers under the Services key? What about the directory that they're installed in? Do they get loaded into a "Program Files\RealTek" directory, or to system32, or to Temp? All of this makes a huge difference when it comes to IR and forensic analysis, and can be very helpful in resolving issues. Unfortunately, AV folks don't think like responders...as such, I really think that there's a need for an upgrade to vulnerability and exploit analysis, so that the information that would be most useful to someone who suspects that they're infected can respond accordingly.

Thursday, July 15, 2010

Thoughts and Comments

Exploit Artifacts
There was an interesting blog post on the MMPC site recently, regarding an increase in attacks against the Help and Support Center vulnerability. While the post talks about an increase in attacks through the use of signatures, one thing I'm not seeing is any discussion of the artifacts of the exploit. I'm not talking about a secondary or tertiary download...those can change, and I don't want people to think, "hey, if you're infected with this malware, you were hit with the hcp:// exploit...". Look at it from a compartmentalized perspective...exploit A succeeds and leads to malware B being downloaded onto the system. If malware B can be anything...how do we go about determining the Initial Infection Vector? After all, isn't that what customers ask us? Okay, raise your hand if your typical answer is something like, "...we were unable to determine that...".

I spoke briefly to Troy Larson at the recent SANS Forensic Summit about this, and he expressed some interest in developing some kind of...something...to address this issue. I tend to think that there's a great benefit to this sort of thing, and that a number of folks would benefit from this, including LE.

Awards
The Forensic 4Cast Awards were...uh...awarded during the recent SANS Forensic Summit, and it appears that WFA 2/e received the award for "Best Forensics Book". Thanks to everyone who voted for the book...I greatly appreciate it!

Summit Follow-up
Chris and Richard posted their thoughts on the recent SANS Forensic Summit. In his blog post, Richard said:

I heard Harlan Carvey say something like "we need to provide fewer Lego pieces and more buildings." Correct me if I misheard Harlan. I think his point was this: there is a tendency for speakers, especially technical thought and practice leaders like Harlan, to present material and expect the audience to take the next few logical steps to apply the lessons in practice. It's like "I found this in the registry! Q.E.D." I think as more people become involved in forensics and IR, we forever depart the realm of experts and enter more of a mass-market environment where more hand-holding is required?

Yes, Richard, you heard right...but please indulge me while I explain my thinking here...

In the space of a little more than a month, I attended four events...the TSK/Open Source conference, the FIRST conference, a seminar that I presented, and the SANS Summit. In each of these cases, while someone was speaking (myself included) I noticed a lot of blank stares. In fact, at the Summit, Carole Newell stated at one point during Troy Larson's presentation that she had no idea what he was talking about. I think that we all need to get a bit better at sharing information and making it easier for the right folks to get (understand) what's going on.

First, I don't think for an instant that, from a business perspective, the field of incident responders and analysts is saturated. There will always be more victims (i.e., folks needing help) than there are folks or organizations qualified and able to really assist them.

Second, one of the biggest issues I've seen during my time as a responder is that regardless of how many "experts" speak at conferences or directly to organizations, those organizations that do get hit/compromised are woefully unprepared. Hey, in addition to having smoke alarms, I keep fire extinguishers in specific areas of my house because I see the need for immediate, emergency response. I'm not going to wait for the fire department to arrive, because even with their immediate response, I can do something to contain the damage and losses. My point is that if we can get the folks who are on-site on-board, maybe we'll have fewer intrusions and data breaches that are (a) third-party notification weeks after the fact, and (b) actually have some preserved data available when we (responders) show up.

If we, as "experts", were to do a better job of bringing this stuff home and making it understandable, achievable and useful to others, maybe we'd see the needle move just a little bit in the good guy's favor when it comes to investigating intrusions and targeted malware. I think we'd get better responsiveness from the folks already on-site (the real _first_ responders) and ultimately be able to do a better job of addressing incidents and breaches overall.

Tools
As part of a recent forensic challenge, Wesley McGrew created pcapline.py to help answer the questions of the challenge. Rather than focusing on the tool itself, what I found interesting was that Wesley was faced with a problem/challenge, and chose to create something to help him solve it. And then provide it to others. This is a great example of the I have a problem and created something to solve it...and others might see the same problem as well approach to analysis.

Speaking of tools, Jesse is looking to update md5deep, and has posted some comments about the new design of the tool. More than anything else, I got a lot out of just reading the post and thinking about what he was saying. Admittedly, I haven't had to do much hashing in a while, but when I was doing PCI forensic assessments, this was a requirement. I remember looking at the list and thinking to myself that there had to be a better way to do this stuff...we'd get lists of file names, and lists of hashes...many times, separate lists. "Here, search for this hash..."...but nothing else. No file name, path or size, and no context whatsoever. Why does this matter? Well, there were also time constraints on how long you had before you had to get your report in, so anything that would intelligently speed up the "analysis" without sacrificing accuracy would be helpful.

I also think that Jesse is one of the few real innovators in the community, and has some pretty strong arguments (whether he's aware of it or not) for moving to the new format he mentioned as the default output for md5deep. It's much faster to check the size of the file first...like he says, if the file size is different, you're gonna get a different hash. As disk sizes increase, and our databases of hashes increase, we're going to have to find smart ways to conduct our searches and analysis, and I think that Jesse's got a couple listed right there in his post.
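
The "size first, hash second" idea is easy to sketch...given a known set of size/hash pairs (the ones below are made up for illustration), you only pay for the MD5 computation when the size already matches:

#!/usr/bin/perl
# Sketch of "check the size before you hash": only compute an MD5 when a file's
# size matches one in the known set. The %known table is made up for illustration.
use strict;
use warnings;
use File::Find;
use Digest::MD5;

my %known = (
    60416 => '0123456789abcdef0123456789abcdef',   # hypothetical size => hash pair
);

my $dir = shift || '.';
find(sub {
    return unless -f $_;
    my $size = -s _;
    return unless exists $known{$size};            # cheap check first
    open(my $fh, '<', $_) or return;
    binmode($fh);
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
    close($fh);
    print "MATCH: $File::Find::name ($size bytes, $md5)\n" if ($md5 eq $known{$size});
}, $dir);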

New Attack?
Many times when I've been working an engagement, the customer wants to know if they were specifically targeted...did the intruder specifically target them based on data and/or resources, or was the malware specifically designed for their infrastructure? When you think about it, these are valid concerns. Brian Krebs posted recently on an issue where that's already been determined to be the case...evidently, there's an issue with how Explorer on Windows 7 processes shortcut files, and the malware in question apparently targets specific SCADA systems.

At this point, I don't know which is more concerning...that someone knows enough about Siemens systems to write malware for them, or that they know that SCADA systems are now running Windows 7...or both?

When I read about this issue, my first thoughts went back to the Exploit Artifacts section above...what does this "look like" to a forensic examiner?

Hidden Files
Not new at all, but here's a good post from the AggressiveVirusDefense blog that provides a number of techniques that you can use to look for "hidden" files. Sometimes "hidden" really isn't...it's just a matter of perception.

DLL Search Order as a Persistence Mechanism
During the SANS Forensic Summit, Nick Harbour mentioned the use of MS's DLL Search Order as a persistence mechanism. Now he's got a very good post up on the M-unition blog...I won't bother trying to summarize it, as it won't do the post justice. Just check it out. It's an excellent read.

Friday, July 09, 2010

SANS Forensic Summit Take-Aways

I attended the SANS Forensic Summit yesterday...I won't be attending today due to meetings and work, but I wanted to provide some follow-up, thoughts, etc.

The day started off with the conference intro from Rob Lee, and then a keynote discussion from Chris Pogue of TrustWave and Major Carole Newell, Commander of the Headquarters Division of the Broken Arrow Police Dept. This was more of a discussion and less of a presentation, and focused on communications between private sector forensic consultants and (local) LE. Chris had volunteered to provide his services, pro bono, to the department, and Major Newell took him up on his offer, and they both talked about how successful that relationship has been. After all, Chris's work has helped put bad people in jail...and that's the overall goal, isn't it? Private sector analysts supporting LE has been a topic of discussion in several venues, and it was heartening to hear Maj Newell chime in and provide her opinion on the subject, validating the belief that this is something that needs to happen.

There were a number of excellent presentations and panels during the day. During the Malware Reverse Engineering panel, Nick Harbour of Mandiant mentioned seeing the MS DLL Search Order being employed as a malware persistence mechanism. I got a lot from Troy Larson's and Jesse Kornblum's presentations, and sat next to Mike Murr while he tweeted using the #forensicsummit tag to keep folks apprised of the latest comments, happenings, and shenanigans.

Having presented and been on a panel, it was a great opportunity to share my thoughts and experiences and get comments and feedback not only from other panelists, but also from the attendees.

One of the things I really like about this conference is the folks that it brings together. I got to reconnect with friends, and talk to respected peers that I haven't seen in a while (Chris Pogue, Matt Shannon, Jesse Kornblum, Troy Larson, Richard Bejtlich), or have never met face-to-face (Dave Nardoni, Lee Whitfield, Mark McKinnon). This provides a great opportunity for sharing and discussing what we're all seeing out there, as well as just catching up. Also, like I said, it's great to discuss things with other folks in the industry...I think that a lot of times, if we're only engaging with specific individuals time and again, we tend to lose sight of certain aspects of what we do, and what it means to others...other responders, as well as customers.

If someone asked me to name one thing that I would recommend as a change to the conference, that would be the venue. While some folks live and/or work close to downtown DC and it's easy to get to the hotel where the conference is held, there are a number of locations west of DC that are easily accessible from Dulles Airport (and folks from Arlington and Alexandria will be going against traffic to get there).

Other than that, I think the biggest takeaways, for me, were:

1. We need to share better. I thought I was one of the few who thought this, but from seeing the tweets on the conference and talking to folks who are there, it's a pretty common thread. Sharing between LE and the private sector is a challenge, but as Maj Newell said, it's one that everyone (except the bad guys) benefits from.

2. When giving presentations, I need to spend less time talking about what's cool and spend more time on a Mission Guide (a la Matt Shannon) approach to the material. Throwing legos on the table and expecting every analyst to 'get it' and build the same structure is a waste of time...the best way to demonstrate the usefulness and value of a tool or technique is to demonstrate how it's used.

Thanks to Rob and SANS for putting on another great conference!

Follow-ups
Foremost on Windows (Cygwin build)

Wednesday, July 07, 2010

More Timeline Stuff

I'll be at the SANS Forensic Summit tomorrow, giving a presentation on Registry and Timeline Analysis in the morning, and then participating on a panel in the afternoon. Over all, it looks like this will be another excellent conference, due to the folks attending, their presentations, and opportunities for networking.

I talk (and blog) a lot about timelines, as this is a very powerful technique that I, and others, have found to be very useful. I've given presentations on the subject (including a seminar last week), written articles about it, and used the technique to great effect on a number of investigations. In many instances, this technique has allowed me to "see" things that would not normally be readily apparent through a commercial forensic analysis tool, nor via any other technique.

One of the aspects of Windows systems is that there is a wide range of data sources that provide time-stamped events and indicators. I mean, the number of locations within a Windows system that provides this sort of information is simply incredible.

To meet my own needs, I've updated my toolkit to include a couple of additional tools. For one, I've created a script that directly parses the IE index.dat files, rather than going through a third-party tool (pasco, Web Historian, etc.). This just cuts down on the steps required, and the libmsiecf tools, mentioned in Cory's Going Commando presentation, do not appear to be readily available to run on Windows systems.

Parsing EVT files is relatively straightforward using tools such as evtparse.pl, and Andreas provides a set of Perl-based tools to parse EVTX (Event Logs from Vista and above) files. As an alternative, I wanted to write something that could easily parse the output of LogParser (free from MS), when run against EVT or EVTX files, using a command such as the following:

logparser -i:evt -o:csv "SELECT * FROM D:\Case\File\SysEvent.EVT" > output.csv

Keep in mind that LogParser uses the native API on the system to parse the EVT/EVTX files, so if you're going to parse EVTX files extracted from a Vista or Windows 2008 or Windows 7 system, you should do so on a Windows 7 system or VM. The output from the LogParser command is easily read and output to a TLN format, and the output from the script I wrote is identical to that of evtparse.pl. This can be very useful, as LogParser can be installed on and run from a DVD or thumb drive, and used in live IR (change "D:\Case\File\SysEvent.EVT" to "System" or "Application"), as well as run against files extracted from acquired images (or files accessible via a mounted image). However, keep in mind that LogParser uses the native API, so if sysevent.evt won't open in the Event Viewer because it is reportedly "corrupted" (which has been reported for EVT files from XP and 2003), then using evtparse.pl would be the preferable approach.
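
As an example of what that parsing can look like (a sketch, not the script I use...the field names should be verified against the header row of your own LogParser output, and be aware of how your output handles time zones), here's one way to get from the CSV output to the five-field TLN format:

#!/usr/bin/perl
# Sketch: convert LogParser -o:csv output from an EVT/EVTX file into the
# 5-field TLN format (time|source|host|user|description). Column names are
# read from the CSV header row; the timestamp format assumed here
# ("yyyy-MM-dd hh:mm:ss") should be verified against your own output.
use strict;
use warnings;
use Text::ParseWords;
use Time::Local;

my $file = shift or die "Usage: $0 <logparser csv output>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";

chomp(my $header = <$fh>);
my @cols = parse_line(',', 0, $header);
my %idx  = map { $cols[$_] => $_ } (0..$#cols);

while (my $line = <$fh>) {
    chomp $line;
    my @f = parse_line(',', 0, $line);
    next unless (@f > 1);
    my $ts = $f[$idx{TimeGenerated}];
    next unless ($ts =~ m/^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/);
    my $epoch = timegm($6, $5, $4, $3, $2 - 1, $1 - 1900);
    my $descr = sprintf "%s/%s;%s", $f[$idx{SourceName}], $f[$idx{EventID}],
                        $f[$idx{Strings}] || "";
    printf "%d|EVT|%s|%s|%s\n", $epoch, $f[$idx{ComputerName}] || "",
           $f[$idx{SID}] || "", $descr;
}
close($fh);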

The next tool I'm considering working on is one to parse the MFT and extract the time stamps from the $FILE_NAME attribute into TLN format. This would undoubtedly provide some insight into the truth about what happened on a system, particularly where some sort of timestomping activity has occurred (a la Clampi). This will take some work, as the full paths need to be reassembled, but it should be useful nonetheless.

Tuesday, July 06, 2010

Links

Malware in PDFs
As a responder and forensic analyst, one of the things I'm usually very interested in (in part, because customers want to know...) is determining how some malware (or someone) was first able to get on a system, or into an infrastructure...what was the Initial Infection Vector? I've posted about this before, and the SANS ISC had an interesting post yesterday, as well, regarding malware in PDF files. This is but one IIV.

Does this really matter beyond simply determining the IIV for malware or an intrusion? I'd say...yes, it does. But why is that? Well, consider this...this likely started small, with someone getting into the infrastructure, and then progressed from there.

PDF files are one way in...Brian Krebs pointed out another IIV recently, which apparently uses the hcp:// protocol to take advantage of an issue in the HextoNum function and allow an attacker to run arbitrary commands. MS's solution/workaround for the time being is to simply delete a Registry key. More information on exploiting the vulnerability can be seen here (the fact that the vulnerability is actively being exploited is mentioned here)...this is a very interesting read, and I would be interested to see what artifacts there may be to the use of an exploit as described in the post. Don's mentioned other artifacts associated with exploiting compiled HelpHTML files, in particular how CHM functionality can be turned into a malware dropper. But this is a bit different, so I'd be interested to see what analysts may be able to find out.

Also, if anyone knows of a tool or process for parsing hh.dat files, please let me know.

Free Tools
For those interested, here's a list of free forensic tools at ForensicControl.com. I've seen where folks have looked for this sort of thing, and the disadvantage of having lists like this out there is that...well...they're out there, and not in one centralized location. I know some folks have really liked the list of network security tools posted at InSecure.org, and it doesn't take much to create something like that at other sites. For example, consider posting something on the ForensicsWiki.

Speaking of tools, Claus has a great post from the 4th that mentions some updates to various tools, including ImDisk, Network Monitor, and some nice remote control utilities. If you're analyzing Windows 2008 or Windows 7 systems, you might want to take a look at AppCrashView from Nirsoft...I've been able to find a good deal of corroborating data in Dr. Watson logs on Windows XP/2003 systems, and this looks like it might be just as useful, if not more so.

Shadow Analyzer
There's been some press lately about a tool called "Shadow Analyzer", developed by Lee Whitfield and Mark McKinnon, which is to be used to access files in Volume Shadow Copies. I also see that this has been talked about on the CyberCrime101 podcast...should be a good listen!

On that note, ShadowExplorer is at version 0.7.

Parsing NTFS Journal Files
Seth recently posted a Python script for parsing NTFS Journal Transaction Log files (i.e., $UsnJrnl:$J files). I don't know about others, but I've been doing a lot of parsing of NTFS-related files, whether it's the MFT itself, or running $LogFile through BinText.

I'm sure that one of the things that would help folks adopt tools like this, particularly those that require Python or Perl to be installed, is an explanation or examples of how the information can be useful to an examiner/analyst. So, if you do find that these tools are useful, post something that lets others know why/how you used them, and what you found that supported your examination goals.

Saturday, July 03, 2010

Skillz Follow-up

Based on some of the events of the week, and in light of a follow-up post from Eric, I wanted to follow up on my last blog post.

Earlier this week, I spent an entire day talking with a great group of folks on the other side of the country...a whole day dedicated to just talking about forensics! Some of what we talked about was malware (specifically, 'bot) related, and as part of that, we (of course) talked about some of the issues related to malware characteristics and timeline analysis. One of the aspects that ties all of these topics together is timestomping...when some malware gets installed, the file times (more precisely, the time stamps in the $STANDARD_INFORMATION attribute within the MFT) are purposely modified. MS provides documented APIs (GetFileTime/SetFileTime) that allow for this, and in some cases, the file times for the malware are copied from a legitimate system file.
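Just to illustrate how little "skill" this takes, here's a minimal sketch (placeholder file names, and not any particular malware's code) that copies the last-accessed and last-modified times from kernel32.dll onto another file using nothing more than Perl built-ins. The actual malware uses SetFileTime, which can also set the creation time, but the idea is the same...and note that neither approach touches the $FILE_NAME attribute times.

#!/usr/bin/perl
# copytimes.pl - minimal sketch: copy A/M times from a donor file to a target
use strict;

my $donor  = "C:\\Windows\\system32\\kernel32.dll";
my $target = shift || die "Usage: copytimes.pl <target file>\n";

my ($atime,$mtime) = (stat($donor))[8,9];       # last-accessed, last-modified
utime($atime,$mtime,$target) || die "Could not set times on $target: $!\n";
print "A/M times on $target now match $donor\n";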

So, tying this back to "skillz"...in my previous post, I'd mentioned modifying my own code to extract the information I wanted from the MFT. Interestingly, Eric's post addressed the issue of having the information available, and I tend to agree with the first comment to his post...too many times, GUI analysis tools get overcrowded, and there's just too much stuff on the screen to really make sense of things. Rather than having a commercial analysis app into which I can load an image and have it tell me everything that's wrong, I tend to rely on the analysis goals that I work out with the customer...even if it means that I use separate tools. I don't always need information from the MFT...but sometimes I might. So do I want to pay for a commercial application that's going to attempt to keep up with all of the possible wrong stuff that's out there, or can I use a core set of open-source tools that allow me to get the information I need, when I need it, because I know what I'm looking for?

So, what does the output of my code look like? Check out the image...does this make sense to folks? How about to folks familiar with the MFT? Sure, it makes sense to me...and because the code is open source, I can open the Perl script in an editor or Notepad and see what some of the information means. For example, on the first line, we see:

132315 FILE 2 1 0x38 4 1

What does that mean? The first number is the count, or number, of the MFT record. The word "FILE" is the signature...the other possible entry is "BAAD". The number "2" is the sequence number, and the "1" is the link count. Where did I get this? From the code, and its documentation. I can add further comments to the code, if need be, that describe what various pieces of information mean or refer to, or I can modify the output so that everything's explained right there in the output.
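For the curious, here's a minimal sketch (again, not my actual script) of where those header values come from in the raw record; the offsets are from the published FILE record layout:

# minimal sketch: basic header fields from one raw MFT record
sub parse_mft_header {
	my ($rec,$num) = @_;                              # raw record, record number
	my %hdr;
	$hdr{num}      = $num;
	$hdr{sig}      = substr($rec,0,4);                # "FILE" or "BAAD"
	$hdr{seq}      = unpack("v",substr($rec,16,2));   # sequence number
	$hdr{links}    = unpack("v",substr($rec,18,2));   # link count
	$hdr{attr_ofs} = unpack("v",substr($rec,20,2));   # offset to the first attribute
	$hdr{flags}    = unpack("v",substr($rec,22,2));   # 0x01 = in use, 0x02 = directory
	return \%hdr;
}

Print those fields on one line and you get output very much like the example above.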

Or, because this stuff is open-source, another option is to just move everything to CSV output, so that it can be opened as a spreadsheet.

Again, because this is open-source, another option is to add to the output the offset within the MFT where the entry is found. That's not strictly necessary, as the offset can be easily computed from the record number (the record number multiplied by the record size, typically 1024 bytes), and that may even be intuitively obvious to folks who understand the format of the MFT.

Now, back to the image. The "0x0010" attribute is the $STANDARD_INFORMATION attribute, and the "0x0030" attribute is the $FILE_NAME attribute. Note the differences in the time stamps. Yet another option available...again, because this is open-source code...is to convert the output, or just the $FILE_NAME attribute information, to the five-field TLN format.
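As a rough idea of what that conversion might look like (a sketch only...the system and file names are placeholders passed in by the caller), the four $FILE_NAME time stamps can be written out as five-field TLN lines, one per time stamp, using MACB notation in the description field:

# minimal sketch: emit $FILE_NAME times as Time|Source|System|User|Description
sub fn_to_tln {
	my ($sys,$file,$b,$m,$c,$a) = @_;   # created, modified, MFT modified, accessed
	print $m."|FILE|".$sys."||M... FN [".$file."]\n";
	print $a."|FILE|".$sys."||.A.. FN [".$file."]\n";
	print $c."|FILE|".$sys."||..C. FN [".$file."]\n";
	print $b."|FILE|".$sys."||...B FN [".$file."]\n";
}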

So, one way to approach the issue of analysis is to say, hey, I paid for this GUI application and it should include X, Y, and Z in the output. As you can imagine, after a while, you're going to have one crowded UI, or you're going to have so many layers that you're going to lose track of where everything is located, or how to access it. Another way to approach your analysis is to start with your goals, and go from there...identify what you need, and go get it. Does this mean that you have to be a programmer? Not at all. It just means that you have to have a personal or professional network of friends in the industry...a network that you contribute to, and can go to for information, assistance, etc.

Monday, June 28, 2010

Skillz

Remember that scene from Napoleon Dynamite where he talks about having "skillz"? Well, analysts have to have skillz, right?

I was working on a malware exam recently...samples had already been provided to another analyst for reverse engineering, and it was my job to analyze acquired images and answer a couple of questions. We knew the name of the malware, and when I was reviewing information about it at various sites (to prepare my analysis plan), I found that when the malware files are installed, their MAC times are copied from kernel32.dll. Okay, no problem, right? I'll just parse the MFT and get the time stamps from the $FILE_NAME attribute.

So I received the images and began my case in-processing. I then got to the point where I extracted the MFT from the image, and the first thing I did was run David Kovar's analyzemft.py against it. I got concerned after it had run for over an hour and all I had was a 9KB output file, so I hit Ctrl-C in the command prompt and killed the process. I then ran Mark Menz's MFTRipperBE against the file, and when I opened the output .csv file and ran a search for the file name, Excel told me that it couldn't find the string. I even tried opening the .csv file in an editor and ran the same search, with the same results. Nada.

Fortunately, as part of my in-processing, I had verified the file structure with FTK Imager, and then created a ProDiscover v6.5 project and navigated to the appropriate directory. From there, I could select the file within the Content View of the project and see the $FILE_NAME attribute times in the viewer.

I was a bit curious about the issue I'd had with the first two tools, so I ran my Perl code for parsing the MFT and found an issue with part of the processing. I don't know if this is the same issue that analyzemft.py encountered, but I made a couple of quick adjustments to my Perl script, and I was able to fairly quickly get the information I needed. I can see that the file has $STANDARD_INFORMATION and $FILE_NAME attributes, as well as a data attribute, that the file is allocated (from the flags), and that the MFT sequence number is 2. Pretty cool.

The points of this post are:

1. If you run a tool and do not find the output that you expect, there's likely a reason for it. Validate your findings with other tools or processes, and document what you do. I've said (and written in my books) that the absence of an artifact where you would expect to find one is itself an artifact.

2. Analysts need to have an understanding of what they're looking at and for, as well as some troubleshooting skills, particularly when it comes to running tools. Note that I did not say "programming" skills. Not everyone can, or wants to, program. However, if you don't have the skills, develop relationships with folks who do. But if you're going to ask someone for help, you need to be able to provide enough information that they can help you.

3. Have multiple tools available to validate your findings, should you need to do so. I ran several tools to get the same piece of information, a need I had documented in my analysis plan prior to receiving the data. One tool hung, another completed without providing the information, I got what I needed from the third, and I then validated it with a fourth. And to be honest, it didn't take me days to accomplish that.

4. The GUI tool that provided the information doesn't differentiate between "MFT Entry Modified" and "File Modified"...I just have two time stamps from the $FILE_NAME attribute called "Modified". So I tweaked my own code to print out the time stamps in MACB format, along with the offset of the MFT entry within the MFT itself. Now, everything I need is documented, so if need be, it can be validated by others.

Tuesday, June 22, 2010

Links and what not

Case Notes
Chris posted a while back on writing and maintaining case notes, using the Case Notes application. Taking notes and documenting what you're doing during an examination is something that is sorely overlooked many times.

This is perhaps one of the most overlooked aspects of what we (IR/DF folks) do...or perhaps more appropriately, need to be doing. Many times, so little of what we do is documented, and it needs to be, for a number of reasons. One that's mentioned many times is, how are you going to remember the details of what you did 6 months later? Which tool version did you use? Not to pick on EnCase, but have you been to the user forum? Spend a little time there and you'll see why the version of the tool makes a difference.

Another aspect that few really talk about is process improvement...how do you improve upon something if you're not documenting it? As forensics nerds, we really don't think about it too much, but there are a lot of folks out there who have processes and procedures...EMTs, for example. Let's say you're helping someone with some analysis, and you've worked out an analysis plan with them. Now, let's say that they didn't follow it...they told you that...but they can't remember what it was they did do. How do you validate the results? How can you tell that what they did was correct or sufficient?

A good example is when a customer suspects that they have malware on a system, and all you have to work with is an acquired image. How do you go about finding "the bad stuff"? Well, one way is to mount the image read-only and scan it with AV. Oh, but wait...did you check to see which AV product, if any, was already installed on the system? It might not be a good idea to run that same product, because apparently it didn't catch anything in the first place. So what did you run? Which version? When was it last updated? What else did you do? You will never reach 100% certainty, but with a documented process you can get close.

When you document what you do, one of the side effects is that you can improve upon that process. Hey, two months ago, here's what I did...that was the last time that I had a malware case. Okay, great...what've I learned...or better yet, what have other folks learned since then, and how can we improve this process? Another nice side effect is that if you document what you did, the report (another part of documentation that we all hate...) almost writes itself.

In short, if you didn't document what you did...it didn't happen.

Raw2vmdk
Raw2vmdk is a Java-based, platform-independent tool for mounting raw images as VMWare vmdk disks. This is similar to LiveView, and in fact, raw2vmdk reportedly uses some of the same Java classes as LiveView. However, raw2vmdk is a command-line tool; thanks to JD for taking the time to try it out and describe it in a blog post.

MFT $FILE_NAME Attributes
Eric posted to the Fistful of Dongles blog, asking about tools that can be used to extract/retrieve $FILE_NAME attributes from the MFT. I mentioned two tools in my comment that have been around and available for some time, and another commenter mentioned the use of EnScripts.

Tool Updates
Autoruns was updated on 15 June; perhaps the most notable update is the -z switch, which specifies an offline Windows system to scan. I really like Autoruns (and its CLI companion, autorunsc.exe), as it does a great job of going through and collecting information about entries in startup locations, particularly the Registry. The drawback of the tool is that there isn't much of an explanation as to why some areas are queried, leaving it up to the investigator to figure that out. This is pretty unfortunate, given the amount of expertise that goes into developing and maintaining a valuable tool like this, and it usually means that a great deal of the information that the tool collects is simply overlooked.

If you're interested in USB device histories on your system, check out usbhistory.exe. The web page provides a very good explanation of how the tool works and where it goes to get its information. Overall, if this is information that you're interested in, this would be a very good tool to add to a batch file that collects information from a live system.

USB Devices
Speaking of USB device history, there's a topic I keep seeing time and time again on the lists that has to do with when a USB device was last connected to a system. In almost all cases, the question that is posed indicates that there has been some issue determining when devices were last attached, as the subkeys beneath the Enum\USBStor key all have the same LastWrite times.

While there is clearly some process that is modifying these subkeys so that the LastWrite times are updated, these are not the keys we're interested in when it comes to determining when the devices were last attached to the system. I addressed this in WFA 2/e, starting on pg 209, and Rob Lee has covered it in his guides for profiling USB devices on various versions of Windows.
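As a quick illustration of where to look instead, here's a minimal sketch using Parse::Win32Registry (the same module RegRipper is built on) that lists the LastWrite times of the disk device interface subkeys under DeviceClasses in the System hive. The control set and GUID are hard-coded here from memory, so verify them against the references mentioned above before relying on this:

#!/usr/bin/perl
# devclass.pl - minimal sketch: LastWrite times of disk DeviceClasses subkeys
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: devclass.pl <System hive file>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive\n";
my $root = $reg->get_root_key();

# ControlSet001 is hard-coded for brevity; a production script would read
# the Select key to determine the current control set
my $path = "ControlSet001\\Control\\DeviceClasses\\{53f56307-b6bf-11d0-94f2-00a0c91efb8b}";
if (my $key = $root->get_subkey($path)) {
	foreach my $sub ($key->get_list_of_subkeys()) {
		print gmtime($sub->get_timestamp())." UTC  ".$sub->get_name()."\n";
	}
}
else {
	print "Key not found: ".$path."\n";
}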

Sunday, June 20, 2010

Who's on...uh, at...FIRST?

I attended the FIRST conference in Miami last week. My employer is not a member of FIRST, but we were a sponsor, and we hosted the "Geek Bar"...a nice room with two Wiis set up, a smoothie bar (also with coffee and tea), and places to sit and relax. One of my roles at the conference was to be at the Geek Bar to answer questions and help sign folks up for the NAP tour on Thursday, as well as mingle with folks at the conference. As such, I did not get to attend all of the presentations...some were going on during my shift at the Geek Bar, for example.

Note: Comments made in this blog are my own thoughts and opinions, and do not necessarily reflect or speak for my employer.

Dave Aitel's presentation hit the nail on the head...defenders are behind, and continue to learn from the attackers. Okay, not all defenders learn from the attackers...some do; others, well, not so much. Besides Dave's really cool presentation, I think that what he said was as important as what he didn't say. I mean, Dave was really kind of cheerful for what, on the surface, could be a "doom-and-gloom" message, but someone mentioned after the presentation that Dave did not provide a roadmap for fixing/correcting the situation. I'd suggest that the same things that have been said for the past 25 years...the same core principles...still apply; they simply need to be used. My big take-away from this presentation was that we cannot say that the tactics, techniques, and strategies used by defenders have failed, because in most cases, they haven't been implemented properly, or at all.

I really liked Heather Adkins' presentation regarding Aurora and Google, and I think that overall it was very well received. It was clear that she couldn't provide every bit of information associated with the incident, and I think she did a great job of heading off some questions by pointing out what was already out there publicly and could be easily searched for...via Google.

Vitaly Kamluk's (Kaspersky/Japan) presentation on botnets reiterated Dave's presentation a bit, albeit not in so many words. Essentially, part of the presentation was spent talking about the evolution of botnet infrastructures, going through one-to-many, many-to-one, C2, P2P, and a C2/P2P hybrid.

Unfortunately, I missed Steven Adair's presentation, something I wanted to see. However, I flew to Miami on the same flight as Steven, one row behind and to the side of his seat, so I got to see a lot of the presentation development in action! I mean, really...young guy opens up Wireshark on one of two laptops he's got open...who wouldn't watch?

Jason Larsen (a researcher from Idaho National Labs) gave a good talk on Home Area Networks (HANs). He mentioned that he'd found a way to modify the firmware on power meters to do such things as turn on the cellular networking capability of some of these devices. Imagine the havoc that would ensue if home power meters suddenly all started transmitting on cellular network frequencies. How about if the transmitting were on emergency services frequencies?

The Mandiant crew was in the hizz-ouse, and I saw Kris Harms at the mic not once, but twice! Once for the Mandiant webcast, and once for the presentation on Friday. I noticed that some parts of both of Mandiant's presentations were from previous presentations...talking to Kris, they were still seeing similar techniques, even as long as two years later. I didn't get a chance to discuss this with Kris much more, to dig into things like whether the customers against which these techniques were used had detected the incidents themselves, or whether the call to Mandiant was the result of notification by an external third party.

Richard Bejtlich took a different approach to his presentation...no PPT! If you've read his blog, you know that he's been talking about this recently, so I wasn't too terribly surprised (actually, I was very interested to see where it would go) when he started his time at the mic by asking members of the audience for questions. He'd provided a handout prior to kicking things off, and his presentation was very interesting because of how he spent the time.

There were also a number of presentations on cloud computing, including one by our own Robert Rounsavall, and another on Fri morning by Chris Day. It's interesting to see some folks get up and talk about "cloud computing" and how security, IR, and forensics need to be addressed, and then for others to get up and basically say, "hey, we have it figured out."

Take Aways from FIRST
My take aways from FIRST came from two sources...listening to and thinking about the presentations, and talking to other attendees about their thoughts on what they heard.

As Dave Aitel said, defenders are steps behind the attackers, and continue to learn from them. Kris Harms said that from what they've seen at Mandiant, considerable research is being put into malware persistence mechanisms...but when I brought this up with some attendees, particularly those involved in incident response in some way within their organizations, I got a lot of blank stares. A lot of what was said by some was affirmed by others, and in turn, it affirmed my experiences as well as those of other responders.

I guess the biggest take away is that there are two different focuses with respect to business. Organizations conducting business most often focus on the business itself, and not so much on securing the information that is their business. The bad guys have a business model as well, one that is also driven by revenue...they are out to get your information, or access to your systems, and they are often better at using your infrastructure than you are. The drive or motivation of a business is to do business, and at this point, security is such a culture change that it's no wonder so many victims find out about intrusions and data breaches after the fact, through third-party notification. The road map, the solutions, for addressing this have been around for a very long time, and nothing will change until organizations start adopting those security principles as part of their culture. Security is ineffective if it's "bolted on"...it has to be part of what businesses do...just like billing and collections, shipping and product fulfillment, etc. Incident responders have watched the evolution of intruders' tactics over the years, while organizations that fall victim to these attacks often remain stagnant and rooted in archaic cultures.

Overall, FIRST was a good experience, and a good chance to hear what others were experiencing and thinking, both in and out of the presentation forum.