Friday, October 29, 2010

Analysis Techniques

Now and again, it's a good idea to revisit old thoughts and ideas, to dredge them up again and expose them to the light of day. One of the things that I have spent a lot of time thinking about is the analysis techniques that I'm using during examinations, and in particular stepping back to see how those techniques stack up against some of the threats and issues that are being seen.

First, a caveat...a lot of what I think (and sometimes talk) about focuses on disk analysis. This is due to the realities of the work I (and others) do. As a responder, many times all I have access to is a disk image. Or, as is often the case now, I focus on disk analysis because I'm part of a team that includes folks who are way smarter and far more capable than I am in network, memory and malware analysis. So the disk stuff is a piece of the puzzle, and should be treated as such, even if it's the only piece you have.

Analysis
So, there are a couple of general techniques I tend to use in analysis, and I often start with timeline analysis. This is a great analysis technique to use because when you build a timeline from multiple data sources on and from within a system, you give yourself two things that you don't normally have through more traditional analysis techniques...context, and a greater relative level of confidence in your data. By context, we aren't simply seeing an event such as a file being created or changed...we're seeing other surrounding events that can (and often do) indicate what led to that event occurring. The multiple data sources included in a timeline provide a great deal of information; Event Logs may show us a user login, Registry contents may provide additional indications of that login, we may see indicators of web browsing activity, perhaps opening an email attachment, etc...all of which may provide us with the context of the event in which we are most interested.
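
Just to illustrate the idea (this is a rough sketch, not any particular tool), once each data source has been reduced to something like (time, source, host, user, description) tuples, merging them into a single timeline is straightforward; the sample events below are made up:

# Minimal sketch: merge already-parsed events from multiple sources into one
# timeline. Assumes each source has been reduced to (epoch_time, source, host,
# user, description) tuples; the sample events below are made up.

from datetime import datetime, timezone

filesystem_events = [
    (1288300000, "FILE", "SERVER01", "-", "MACB [\\Windows\\Temp\\a.exe]"),
]
eventlog_events = [
    (1288300120, "EVT", "SERVER01", "jdoe", "Security/528 - successful logon"),
]
registry_events = [
    (1288300180, "REG", "SERVER01", "jdoe", "UserAssist - a.exe run count updated"),
]

def build_timeline(*sources):
    """Combine event tuples from any number of sources, sorted by time."""
    merged = sorted(event for source in sources for event in source)
    for when, src, host, user, desc in merged:
        stamp = datetime.fromtimestamp(when, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
        print("|".join([stamp, src, host, user, desc]))

build_timeline(filesystem_events, eventlog_events, registry_events)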

As to the overall relative level of confidence in our data, we have to understand that each data source has a relative level of confidence associated with it. For example, from Chris's post, we know that the relative confidence level of the time stamps within the $STANDARD_INFORMATION attributes within the MFT (and file system) is (or should be) low. That's because these values are fairly easily changed, often through "time stomping", so that the MACB times (particularly the "B" time, or creation date of the file) do not fall within the initial timeframe of the incident. However, the time stamps within the $FILE_NAME attributes can provide us with a greater level of confidence in the data source (the MFT, in this case). By adding other data sources (Event Log, Registry, Prefetch file metadata, etc.), particularly data sources whose time stamps are not so easily modified (such as Registry key LastWrite times), we can elevate our relative confidence level in the data.
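
As a simple illustration of putting those relative confidence levels to work, here's a rough sketch that flags files whose $STANDARD_INFORMATION creation time predates the $FILE_NAME creation time...one indicator that the $SI times may have been stomped. The record dictionaries are made-up stand-ins for whatever your MFT parser produces:

# Sketch: compare $STANDARD_INFORMATION and $FILE_NAME creation times from
# parsed MFT records. The records below are made-up stand-ins for whatever
# your MFT parser actually produces.

from datetime import datetime

records = [
    {"name": r"\Windows\Temp\a.exe",
     "si_born": datetime(2008, 4, 14, 12, 0, 0),    # $SI creation (easily stomped)
     "fn_born": datetime(2010, 10, 27, 3, 22, 41)}, # $FN creation (harder to modify)
    {"name": r"\Windows\notepad.exe",
     "si_born": datetime(2008, 4, 14, 12, 0, 0),
     "fn_born": datetime(2008, 4, 14, 12, 0, 0)},
]

for rec in records:
    # If the $SI creation time predates the $FN creation time, the $SI value
    # may have been backdated ("time stomped") and deserves a closer look.
    if rec["si_born"] < rec["fn_born"]:
        print("Possible time stomping: %s ($SI born %s, $FN born %s)"
              % (rec["name"], rec["si_born"], rec["fn_born"]))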

Another aspect of this is that by adding multiple sources, we will begin to see patterns in the data and also begin to see where there are gaps in that data.

This is particularly important, as intrusions and malware are very often the least frequency of occurrence on a system. Credit for this phrase goes to Pete Silberman of Mandiant, and it's an extremely important concept to understand, particularly when it comes to timeline analysis. In short, many times analysts will look for large tool kits or spikes in event volume as an indication of compromise. However, most often, this simply is not the case...spikes in activity in a timeline will correspond to an operating system or application update, a Restore Point being created, etc. So, in short, intrusions and even malware have taken a turn toward minimization on systems, so looking for spikes in activity likely won't get you anywhere. This is not to say that if your FTP server is turned into a warez server, you won't see a spike in activity or event volume...rather, the overall effects of an incident are most likely minimized. A user clicks a link or an application is exploited, something is uploaded to the system, and data gets exfiltrated at some point...disk forensics artifacts are minimized, particularly if the data is exfiltrated without ever writing it to disk.
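
To make the least frequency of occurrence idea a bit more concrete, here's a small sketch that, rather than looking for spikes, simply counts how often each artifact appears and surfaces the rarest ones; the artifact list is a made-up example:

# Sketch: rather than looking for spikes, surface the artifacts that appear
# least often. The artifact list is a made-up example; in practice it might
# be file paths, Prefetch entries, service names, etc.

from collections import Counter

artifacts = [
    "svchost.exe", "svchost.exe", "svchost.exe", "explorer.exe",
    "explorer.exe", "wmiprvse.exe", "b4d_name.exe",
]

counts = Counter(artifacts)
for name, count in sorted(counts.items(), key=lambda item: item[1])[:5]:
    print("%d  %s" % (count, name))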

Timeline Creation
When creating a timeline, I tend to take what most analysts would consider a very manual approach, in that I do not use tools that simply sweep through an image and collect all possible timeline data. However, there is a method to my madness, which can be seen in part in Chris's Sniper Forensics presentation. I tend to take a targeted approach, adding the information that is necessary to complete the picture. For example, when analyzing a system that had been compromised via SQL injection, I included the file system metadata and only the web server logs that contained the SQL injection attack information. There was no need to include user information (Registry, index.dat, etc.); in fact, doing so would have added considerable noise to the timeline, and the extra data would have required significantly more effort to parse through and analyze in order to find what I was looking for.
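
In that SQL injection example, the targeted part of the work amounted to pulling only the relevant web server log entries into the timeline. A rough sketch of that sort of filtering (assuming IIS W3C-format logs; the indicator strings and log file name are made-up examples) might look like this:

# Sketch: pull only the web server log entries that contain SQL injection
# indicators into the timeline. Assumes IIS W3C-format logs; the indicator
# strings and the log file name are made-up examples.

indicators = ["xp_cmdshell", "cast(", "declare ", "varchar("]

def filter_sqli(log_path):
    fields = []
    with open(log_path) as log:
        for line in log:
            line = line.rstrip("\r\n")
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # column names follow "#Fields:"
                continue
            if line.startswith("#") or not fields:
                continue
            values = dict(zip(fields, line.split()))
            query = values.get("cs-uri-query", "-").lower()
            if any(ind in query for ind in indicators):
                # date and time are separate fields in the W3C format
                print("%s %s  %s  %s" % (values.get("date", "-"),
                                         values.get("time", "-"),
                                         values.get("c-ip", "-"),
                                         query))

filter_sqli("ex101027.log")   # made-up log file name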

Many times when creating a timeline, I don't want to see everything all at once. When examining a Windows system, there are so many possible data sources that filling in the timeline with the appropriate sources in an iterative manner is...for me...more productive and efficient than loading everything up into the timeline all at once, and parsing things out from there. If a system had a dozen user profiles, but only one or two are of interest, I'm not about to populate the timeline with the LastWrite times of all the keys in the other users' NTUSER.DAT hives. Also, when you get to more recent versions of Windows, specific keys in the USRCLASS.DAT hive become more important, and I don't want to add the LastWrite times of all of those keys when I'm more interested in some specific values from specific keys.
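
As an example of what I mean by being selective, the sketch below pulls LastWrite times only for a couple of specific keys from a single user's hive, rather than walking every key in every hive. It assumes the python-registry module is available, and the key paths are illustrative, not an exhaustive list:

# Sketch: pull LastWrite times only for specific keys of interest from a
# single user's hive. Assumes the python-registry module is available; the
# key paths below are illustrative, not an exhaustive list.

from Registry import Registry

keys_of_interest = [
    "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RecentDocs",
    "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
]

def key_lastwrites(hive_path, key_paths):
    reg = Registry.Registry(hive_path)
    for path in key_paths:
        try:
            key = reg.open(path)
        except Registry.RegistryKeyNotFoundException:
            continue
        # key.timestamp() returns the key's LastWrite time as a datetime
        print("%s  %s" % (key.timestamp(), path))

key_lastwrites("NTUSER.DAT", keys_of_interest)   # path to the hive of interest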

Part of the iterative approach in developing the timeline is looking at the data you have available and seeing gaps. Some gaps may be due to the fact that the data no longer exists...file last access/modified times may be updated to more recent times, Event Logs may "roll over", etc. Looking at gaps has a lot to do with your analysis goals and the questions you're trying to answer, and knowing what you would need to fill that gap. Many times, we may not have access to a direct artifact (such as an event record with ID 528, indicating a login...), but we may be able to fill that gap (at least partially) with indirect artifacts (i.e., UserAssist key entries, etc.).
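
One simple way to start seeing gaps, once you have a timeline, is to look at the spacing between events. This sketch (operating on made-up time/description pairs) flags any stretch longer than a chosen threshold so you can ask yourself what data source might fill it:

# Sketch: flag large gaps between consecutive timeline events so you can ask
# what data source might fill them. The events and threshold are made up.

from datetime import datetime, timezone

GAP_THRESHOLD = 6 * 3600    # six hours, in seconds

events = [                  # (epoch_time, description) - already sorted
    (1288300000, "a.exe created"),
    (1288300180, "UserAssist updated"),
    (1288345000, "Security Event Log cleared"),
]

for (t1, d1), (t2, d2) in zip(events, events[1:]):
    if t2 - t1 > GAP_THRESHOLD:
        start = datetime.fromtimestamp(t1, tz=timezone.utc)
        end = datetime.fromtimestamp(t2, tz=timezone.utc)
        print("Gap of %.1f hrs between %s (%s) and %s (%s)"
              % ((t2 - t1) / 3600.0, start, d1, end, d2))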

Feedback Loop
Timeline analysis provides a view of a system that, in most cases, an analyst wouldn't get any other way. This leads to discovering things that wouldn't otherwise be found, due in part to the view enabled via context. This context can also lead to finding those things much more quickly. Once you've done this, it's critical for future exams that you take what you have learned and roll it back into your analysis process. After all, what good would it be for anyone for you to simply let that valuable institutional knowledge disappear? Retaining that institutional knowledge allows you to fine-tune and hone your analysis process.

Consider folks that work on teams...say, there's a team of 12 analysts. One analyst finds something new after 16 hrs of analysis. If every analyst were to take 16 hrs to find the same thing, then by not sharing what you found, your team consumes an additional 176 (16 x 11) hrs. That's not terribly efficient, particularly when it could be obviated by a 30 minute phone call. However, if you share your findings with the rest of the team, they will know to look for this item. If you share it with them via a framework such as a forensic scanner (see the Projects section of this post), similar to RegRipper (for example), then looking for that same set of artifacts is no longer something they have to memorize and remember, as it takes just seconds to check for them via the scanner. All that's really needed is a slight modification to your image intake process; when the image comes in, make your working copy, verify the file system of the copy, and then run the forensic scanner. Using the results, you can determine whether or not things that you've seen already were found, removing the need to remember all of the different artifacts and freeing your analysts up for some in-depth analysis.
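
To be clear about what I mean by a forensic scanner, here's a minimal sketch of the general idea, not the actual project...each check an analyst has learned from a previous exam becomes a small plugin that gets run against the mounted working copy of an image. The checks and the mount point are made-up examples:

# Minimal sketch of a plugin-driven "forensic scanner": each check an analyst
# has learned from a previous exam becomes a small function that is run
# against the mounted working copy of an image. The checks and the mount
# point below are made-up examples.

import os

PLUGINS = []

def plugin(func):
    """Register a check so it runs on every image that comes in."""
    PLUGINS.append(func)
    return func

@plugin
def suspicious_temp_exe(mount_point):
    temp = os.path.join(mount_point, "Windows", "Temp")
    if not os.path.isdir(temp):
        return "Windows\\Temp not found"
    hits = [f for f in os.listdir(temp) if f.lower().endswith(".exe")]
    return "Executables in Windows\\Temp: %s" % (", ".join(hits) or "none")

@plugin
def hosts_file_size(mount_point):
    hosts = os.path.join(mount_point, "Windows", "System32", "drivers", "etc", "hosts")
    if not os.path.isfile(hosts):
        return "hosts file not found"
    return "hosts file size: %d bytes" % os.path.getsize(hosts)

def scan(mount_point):
    for check in PLUGINS:
        print("[%s] %s" % (check.__name__, check(mount_point)))

scan("/mnt/image_c")   # made-up mount point for the working copy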

So What?
So why does all this matter? Why is it important? The short answer is that things aren't going to get easier, and if we (analysts) don't take steps to improve what we do, we're going to quickly fall further behind. We need to find and use innovative analysis techniques that allow us to look at systems and the available data in different ways, adding context and increasing our relative level of confidence in the data, particularly as some data becomes less and less reliable. We also need to consider and explore previously un- or under-utilized data sources, such as the Registry.

Consider malware detection...some folks may think that you can't use RegRipper or tools like it for something like finding malware, but I would suggest that that simply isn't the case. Consider Conficker...while the different variants escaped detection via AV scanners, there were consistent Registry artifacts across the entire family. And it's not just me...folks like MHL (and his co-authors) have written RegRipper plugins (if you deal with malware at all, you MUST get a copy of their book...) to assist them in their analysis.
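
As a rough illustration only (the indicator strings below are placeholders, not actual Conficker artifacts), once a family's Registry artifacts are known, checking for them can be reduced to a simple search across the text output of a tool like RegRipper:

# Rough sketch: search the text output of a Registry parsing tool (RegRipper,
# or anything else that produces plain text) for known artifact strings. The
# indicator strings and file name below are placeholders, NOT actual
# Conficker artifacts.

indicators = [
    "badservice_name",          # placeholder service name
    "suspicious_value_data",    # placeholder value data
]

def check_report(report_path):
    hits = []
    with open(report_path, errors="replace") as report:
        for line_no, line in enumerate(report, 1):
            for ind in indicators:
                if ind.lower() in line.lower():
                    hits.append((line_no, ind, line.strip()))
    return hits

for line_no, ind, text in check_report("regripper_output.txt"):
    print("line %d: indicator '%s' -> %s" % (line_no, ind, text))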

The same thing can be said of intrusions...understanding Windows from a system perspective, and understanding how different artifacts are created or modified, can show us what we need to focus on, and what data sources we need to engage for both direct and indirect artifacts. But these techniques are not limited to just malware and intrusions...they can be similarly used to analyze other incidents, as well.

6 comments:

Dave Hull said...

Great post Harlan. What tool do you use for pulling Filename time stamps from the MFT? I've been experimenting with this and time stamp manipulation a bunch lately, and have several methods for gathering the information, but they are cumbersome. Extending The Sleuthkit's fls arguments to include a flag for pulling FN time stamps seems ideal as it is a tool many of us already have on hand.

H. Carvey said...

Dave,

Thanks for the comment.

What tool do you use for pulling Filename time stamps from the MFT?

I use the Perl script that I shared with Chris, per his post.

Extending The Sleuthkit's fls arguments...

Have you mentioned this to Brian?

Dave Hull said...

I'll check out the script this evening/weekend. Mark McKinnon sent me a tool that may be useful as well.

I did send a feature request to Brian and though I never heard back directly, I know Rob Lee talked to him about it at WACCI. Brian was concerned that adding such a flag would be a problem because there's no equivalent alternative time stamp to pull for other file systems, or so I gather; I haven't actually talked to Brian about it directly.

My thinking was that this flag would only work against NTFS file systems and would warn the user accordingly if they tried to use it against non-NTFS file systems.

I should have a blog post out in the next week that gives more details and makes a better argument for what I'm after.

H. Carvey said...

...no equivalent alternative time stamp to pull for other file systems...

That's likely true, which is why I use the script I wrote. You can also use Dave Kovar's analyzemft.py, but I ran into some issues, and wanted something with more of a debug option, so I wrote my own. It was also a great learning experience, and allows for further work.

I look forward to your blog post, but I have to say, I can also see Brian's perspective. I mean, there are a number of options already available...

Unknown said...

Harlan,
Great post! The forensic analyst has to become the sniper. As the amount of data requiring analysis continues to grow, knowing the score of the game and where you need to look is going to make the analyst more productive, and open more time for in-depth analysis, just as you stated. It all ties back to sharing and using a phased, frugal approach to forensics.

H. Carvey said...

Brad,

Exactly. Data storage is increasing, but I think that in a lot of cases, the actual data that requires analysis is becoming more and more minimal. However, your point is well taken...who has time to acquire an image from a 500GB drive for a compromise or malware infection?