Thursday, November 29, 2012

Forensic Scanner has moved

In order to be in line with other projects available through my employer, the Forensic Scanner has moved from Google Code to GitHub. When you get to the page, simply click the "Zip" button and the project will download as a Zip archive.

There has been no change to the Scanner itself.

Also, note that the license has changed to the Perl Artistic License.

Monday, November 26, 2012

The Next Big Thing

First off, this is not an end-of-year summary of 2012, nor where I'm going to lay out my predictions for 2013...because that's not really my thing.  What I'm more interested in addressing is, what is "The Next Big Thing" in DFIR?  Rather than making a prediction, I'm going to suggest where, IMHO, we should be going within our community/industry.

There is, of course, the CDFS, which provides leadership and advocacy for the DFIR profession.  If you want to be involved in a guiding force behind the direction of our profession, and driving The Next Big Thing, consider becoming involved through this group.

So what should be The Next Big Thing in DFIR?  In the time I've been in and around this profession, one thing I have seen is that a great deal of effort is still directed toward providing analysts a layer of abstraction in order to represent the data.  Commercial tools provide frameworks for looking at the available (acquired) data, as do collections of free tools.  Some tools or frameworks provide different capabilities, such as allowing the analyst to easily conduct keyword searches, or providing default viewers or parsers for some file types.  However, what most tools do not provide is an easy means for analysts to describe the valuable artifacts that they've found, nor an easy means to communicate intelligence gathered through examination and research to other analysts.

Some of what I see happening includes analysts going to training and/or a conference, hearing "experts" (don't get me wrong, many speakers are, in fact, experts in their field...) speak, and then returning to their desks with...what?  Not long ago, I was giving a presentation and the subject of analysis of shellbag artifacts came up.  I asked how many of the analysts in the room did shellbag analysis, and two raised their hands.  One of them stated that they had analyzed shellbag artifacts when they attended a SANS training course, but they hadn't done so since.  I then asked how many folks in the room conducted analysis where what the user did on the system was of primary interest in most of their exams, and almost everyone in the room raised their hands.  The only way I can explain the disparity between the two responses is that the tools used by most analysts provide a layer of abstraction to the data (acquired images) that they're viewing, and leave the identification of valuable (or even critical) artifacts, and the overall analysis process, up to the analyst.  A number of training courses provide information regarding analysis processes, but once analysts return from these courses, I'm not sure that there's a great deal of stimulus for them to incorporate what they just learned into what they do.  As such, I tend to believe that a great deal of extremely valuable intelligence is either missed or lost within our community.

I'm beginning to believe more and more that tools that simply provide a layer of abstraction to the data viewed by analysts are becoming a thing of the past.  Or, maybe it's more accurate to say that they should become a thing of the past.  The analysis process needs to be facilitated more, and the sharing of information and intelligence, among both the tools used and the analysts using them, needs to become more a part of our daily workflow.

Part of this belief may be because many of the tools available don't necessarily provide an easy means for analysts to share that process and intelligence.  What do I mean by that?  Take a look at some of the tools used by analysts today, and consider why those tools are used.  Now, think to yourself for a moment...how easy is it for one analyst using that tool to share any intelligence that they've found with another (any other) analyst?  Let's say that one analyst finds something of value during an exam, and it would behoove the entire team to have access to that artifact or intelligence.  Using the tool or framework available, how does the analyst then share the analysis or investigative process used, as well as the artifact found or intelligence gleaned?  Does the framework being used provide a suitable means for doing so?

Analysts aren't sharing intelligence for two reasons...they don't know how to describe it, and even if they do, there's no easy means for doing so within the framework that they're using.  They can't easily share information and intelligence between the tools they're using, nor with other analysts, even those using the same tools.

For a great example of what I'm referring to, take a look at Volatility.  This started out as a project that was delivering something not available via any other means, and the folks that make up the team continue to do just that.  The framework provides much more than just a layer of abstraction that allows analysts to dig into a memory dump or hibernation file...the team also provides plugins that serve to illustrate not just what's possible to retrieve from a memory dump, but also what they've found valuable, and how others can find these artifacts via a repeatable process.  Another excellent resource is MHL et al.'s book, The Malware Analyst's Cookbook, which provides a great deal of process information via the format, as well as intel via the various 'recipes'.

I kind of look at it this way...when I was in high school, we read Chaucer's Canterbury Tales, and each year the books were passed down from the previous year.  If you were lucky, you'd get a copy with some of the humorous or ribald sections highlighted...but what wasn't passed down was the understanding of what was leading us to read these passages in the first place.  Sure, there's a lot of neat and interesting stuff that analysts see on a regular basis, but what we aren't good at is sharing the really valuable stuff and the intel with other analysts.  If that's something that would be of use...one analyst being aware of what another analyst found...then as consumers we need to engage tool and process developers directly and consistently, let them know what our needs are, and start intelligently using those processes and tools that meet our needs.

Wednesday, November 21, 2012

Updates

Timeline Analysis
I recently taught another iteration of our Timeline Analysis course, and as is very often the case, I learned some things as a result.

First, the idea (in my case, thanks goes to Corey Harrell and Brett Shavers) of adding categories to timelines in order to increase the value of the timeline, as well as to bring a new level of efficiency to the analysis, is a very good one.  I'll discuss categories a bit more later in this post.

Second (and thanks goes out to Cory Altheide for this one), I'm reminded that timeline analysis provides the examiner with context to the events being observed, as well as a relative confidence in the data.  We get context because we see more than just a file being modified...we see other events around that event that provide indications as to what led to the file being modified.  Also, we know that some data is easily mutable, so seeing other events that are perhaps less mutable occurring "near" the event in question gives us confidence that the data we're looking at is, in fact, accurate.

Another thing to consider is that timelines help us reduce complexity in our analysis.  If we understand the nature of the artifacts and events that we observe in a timeline, and understand what creates or modifies those artifacts, we begin to see what is important in the timeline itself.  There is no magic formula for creating timelines...we may have too little data in a timeline (i.e., just a file being modified) or we may have too much data.  Knowing what various artifacts mean or indicate allows us to separate the wheat from the chaff, or separate what is important from the background noise on systems.

Categories
Adding category information to timelines can do a great deal to make analysis ssssooooo much easier!  For example, when adding Prefetch file metadata to a timeline, identifying the time stamps as being related to "Program Execution" can do a great deal to make analysis easier, particularly when it's included along with other data in the same category.  Also, as of Vista (and particularly so with Windows 7 and 2008 R2), there has been an increase in the number of event logs, and many of the event IDs that we're familiar with from Windows XP have changed.  As such, being able to identify the category of an event source/ID pair, via a short descriptor, makes analysis quicker and easier.
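To illustrate what I mean, here's a sketch in Perl (my illustration for this post, not code from any particular tool...the category mappings are just examples) of tagging entries in the five-field TLN format (Time|Source|System|User|Description) with a category descriptor:

#!/usr/bin/perl
# Sketch: tag TLN-format timeline entries with a category descriptor.
# The mappings below are illustrative; note that XP's successful logon
# event (Security/528) became Microsoft-Windows-Security-Auditing/4624
# as of Vista.
use strict;
use warnings;

my %source_cat = ('PREF' => '[Program Execution]');
my %event_cat  = (
    'Security/528'                             => '[Logon]',   # XP
    'Microsoft-Windows-Security-Auditing/4624' => '[Logon]',   # Vista+
);

while (<STDIN>) {
    chomp;
    my ($time, $src, $sys, $user, $desc) = split(/\|/, $_, 5);
    my $cat = $source_cat{$src} || '';
    # assumes event log entries lead off the description with "source/ID"
    if ($src eq 'EVT' && $desc =~ m!^(\S+?/\d+)! && exists($event_cat{$1})) {
        $cat = $event_cat{$1};
    }
    print join('|', $time, $src, $sys, $user, ($cat ? "$cat " : "").$desc), "\n";
}

Once the category descriptor is sitting right there in each entry, pulling out all of the "[Program Execution]" events becomes trivial.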

One thing that is very evident to me is that many artifacts will have a primary, as well as a secondary (or even tertiary) category.  For example, let's take a look at shortcut/LNK files.  Shortcuts found in a user's Recents folder are created via a specific activity performed by the user...most often, by the user double-clicking a file via the shell.  As such, the primary category that a shortcut file will belong to is something akin to "File Access", as the user actually accessed the file.  While it may be difficult to keep the context of how the artifact is created/modified in your mind while scrolling through thousands of lines of data, it is oh so much easier to simply provide the category right there along with the data.

Now, take a look at what happens when a user double-clicks a file...that file is opened in a particular application, correct?  As such, a secondary category for shortcut files (found in the user's Recents folder) might be "Program Execution".  The issue with this is that we would need to do some file association analysis to determine which application was used to open the file...we can't always assume that files ending in the ".txt" extension are going to be opened via Notepad.  File association analysis is pretty easy to do, so it's well worth doing.
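For example, here's a rough sketch of that file association lookup run against a SOFTWARE hive, using Parse::Win32Registry (the same module RegRipper is built on)...the ".txt" default and the paths are just for illustration:

#!/usr/bin/perl
# Sketch: file association analysis against a SOFTWARE hive
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <SOFTWARE hive> [ext]\n";
my $ext  = shift || ".txt";

my $reg  = Parse::Win32Registry->new($hive) or die "Not a valid hive file\n";
my $root = $reg->get_root_key();

# The extension's default value names the ProgID (e.g., "txtfile")...
my $ext_key = $root->get_subkey("Classes\\".$ext) or die "No key for $ext\n";
my $val     = $ext_key->get_value("") or die "No default value for $ext\n";
my $progid  = $val->get_data();

# ...and the ProgID's shell\open\command value holds the command line
my $cmd_key = $root->get_subkey("Classes\\".$progid."\\shell\\open\\command");
if ($cmd_key) {
    if (my $cmd = $cmd_key->get_value("")) {
        printf "%s -> %s -> %s\n", $ext, $progid, $cmd->get_data();
    }
}

Keep in mind that user-specific associations (found in the user's hives) can override the system-wide settings, so be sure to check those, as well.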

Not all artifacts are created alike, even if they have the same file extension...that is to say, some artifacts may have to have categories based on their context or location.  Consider shortcut files on the user's desktop...many times, these are either specifically created by the user, or are placed there as the result of the user installing an application.  For those desktop shortcuts that point to applications, they do not so much refer to "File Access", as they do to "Application Installation", or something similar.  After all, when applications are installed and create a shortcut on the desktop, that shortcut very often contains the command line "app.exe %1", and doesn't point to a .docx or .txt file that the user accessed or opened. 

Adding categories to your timeline can bring a great deal of power to your fingertips, in addition to reducing the complexity and difficulty of finding the needle(s) in the hay stack...or stack of needles, as the case may be.  However, this addition to timeline analysis is even more powerful when it's done with some thought and consideration given to the actual artifacts themselves.  Our example of LNK files clearly shows that we cannot simply group all LNK files in one category.  The power and flexibility to include categories for artifacts based on any number of conditions is provided in the Forensic Scanner.

RegRipper
Sorry that I didn't come up with a witty title for this section of the post, but I wanted to include something here.  I caught up to SketchyMoose's blog recently and found this post that included a mention of RegRipper.

In the post, SM mentions a plugin named 'findexes.pl'.  This is an interesting plugin that I created as a result of something Don Weber found during an exam when we were working together...that the bad guy was hiding PE files (or portions thereof) in Registry values!  That was pretty cool, so I wrote a plugin.  See how that works?  Don found it, shared the information, and then a plugin was created that could be run during other exams.

SM correctly states that the plugin is looking for "MZ" in the binary data, and says that it's looking for it at the beginning of the value.  I know it says that in the comments at the top of the plugin file, but if you look at the code itself, you'll see that it runs a grep(), looking for 'MZ' anywhere in the data.  As you can see from the blog post, the plugin not only lists the path to the value, but also the length of the binary data being examined...it's not likely that you're going to find executable code in 32 bytes of data, so it's a good visual check for deciding which values you want to zero in on.
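If you want to see the difference for yourself, here's a pared-down sketch of that check (my illustration for this post, not the plugin code verbatim), again using Parse::Win32Registry:

#!/usr/bin/perl
# Sketch: walk a hive and grep() each value's data for 'MZ'
use strict;
use warnings;
use Parse::Win32Registry;

my $reg = Parse::Win32Registry->new(shift) or die "Not a valid hive file\n";
traverse($reg->get_root_key());

sub traverse {
    my $key = shift;
    foreach my $v ($key->get_list_of_values()) {
        my $data = $v->get_data();
        next unless (defined($data));
        # grep() matches 'MZ' *anywhere* in the data, not just at offset 0
        if (grep(/MZ/, $data)) {
            printf "%s;%s  [%d bytes]\n", $key->get_path(), $v->get_name(),
                length($data);
        }
    }
    traverse($_) foreach ($key->get_list_of_subkeys());
}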

SM goes on to point out the results of the userinit.pl plugin...which is very interesting.  Notice that in the output of that plugin, there's a little note that indicates what 'normal' should look like...this is a question I get a lot when I give presentations on Registry or Timeline Analysis...what is 'normal', and what, in what I'm looking at, should jump out at me as 'suspicious'?  With this plugin, I've provided a little note that tells the analyst, hey, anything other than just "userinit.exe" is gonna be suspicious!
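For folks who want to bake that sort of "what's normal" check into their own code, here's a pared-down sketch of the kind of check the plugin performs (again, my illustration, not the plugin code itself), run against a SOFTWARE hive:

#!/usr/bin/perl
# Sketch: flag anything beyond the expected userinit.exe entry
use strict;
use warnings;
use Parse::Win32Registry;

my $reg  = Parse::Win32Registry->new(shift) or die "Not a valid hive file\n";
my $root = $reg->get_root_key();
my $key  = $root->get_subkey("Microsoft\\Windows NT\\CurrentVersion\\Winlogon")
    or die "Winlogon key not found\n";
my $val  = $key->get_value("Userinit") or die "Userinit value not found\n";
my $userinit = $val->get_data();
print "Userinit = $userinit\n";

# 'Normal' is just "C:\Windows\system32\userinit.exe," (note the trailing
# comma); additional comma-separated entries are a classic persistence trick
my @entries = grep { $_ !~ m/^\s*$/ } split(/,/, $userinit);
print "ALERT: additional entries found...take a closer look!\n"
    if (scalar(@entries) > 1);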

USB Stuff
SM also references a Hak5 episode by Chris Gerling, Jr., that discusses mapping USB storage devices found on Windows systems.  I thought I'd reference that here, in order to say, "...there are more things in heaven and earth than are dreamt of in your philosophy, Horatio!"  Okay, so what does quoting the Bard have to do with anything?  In her discussion of her dissertation entitled, Pitfalls of Interpreting Forensic Artifacts in the Registry, Jacky Fox follows a similar process for identifying USB storage devices connected to a Windows system.  However, the currently accepted process for doing this USB device identification has some...shortcomings...that I'll be addressing.  Strictly speaking, the process works, and works very well.  In fact, if you follow all the steps, you'll even be able to identify indications of USB thumb drives that the user may have tried to obfuscate or delete.  However, this process does not identify all of the devices that are presented to the user as storage.
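For reference, here's the first step of that commonly-taught process, sketched with Parse::Win32Registry against a SYSTEM hive (ControlSet001 is assumed here...check the Select key's Current value to see which control set was actually in use on the system):

#!/usr/bin/perl
# Sketch: enumerate USBSTOR device classes and unique instance IDs.
# As noted above, this alone does NOT identify every device that is
# presented to the user as storage.
use strict;
use warnings;
use Parse::Win32Registry;

my $reg  = Parse::Win32Registry->new(shift) or die "Not a valid hive file\n";
my $root = $reg->get_root_key();
my $usbstor = $root->get_subkey("ControlSet001\\Enum\\USBSTOR")
    or die "USBSTOR key not found\n";

foreach my $dev ($usbstor->get_list_of_subkeys()) {
    print $dev->get_name(), "\n";               # e.g., Disk&Ven_...&Prod_...
    foreach my $inst ($dev->get_list_of_subkeys()) {
        my $fn = $inst->get_value("FriendlyName");
        printf "  %s  %s\n", $inst->get_name(), # unique instance ID/serial #
            ($fn ? $fn->get_data() : "");
    }
}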

Please don't misunderstand me here...I'm not saying that either Chris or Jacky are wrong in the process that they use to identify USB storage devices.  Again, they both refer to using regularly accepted examination processes.  Chris refers to Windows Forensic Analysis 2/e, and Jacky has a lot of glowing and positive things to say about RegRipper in her dissertation (yes, I did read it...the whole thing...because that's how I roll!), and some of those resources are based on information that Rob Lee has developed and shared through SANS.  However, as time and research have progressed, new artifacts have been identified and need to be incorporated into our analysis processes.

Propagation
I ran across this listing for Win32/Phorpiex on the MS MMPC blog, and it included something pretty interesting...this malware includes a propagation mechanism that makes use of removable storage devices.

While this propagation mechanism seems pretty interesting, it's not nearly as interesting as it could be, because (as pointed out in the write up) when the user clicks on the shortcut for what they think is a folder, they don't actually see the folder opening.  As such, someone might look for an update to this propagation mechanism in the near future, if one isn't already in the wild.

What's interesting to me is that there's no effort taken to look at the binary contents of the shortcut/LNK files to determine if there's anything odd or misleading about them.  For example, most of the currently used tools only parse the LinkInfo block of the LNK file...not all tools parse the shell item ID list that comes before the LinkInfo block.  MS has done a great job of documenting the binary specification for LNK files, but commercial tools haven't caught up.
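Checking for a shell item ID list isn't hard, either...per the MS binary specification ([MS-SHLLINK]), the header is 76 bytes, and if the HasLinkTargetIDList flag is set, the ID list immediately follows the header.  Here's a minimal sketch based on that spec (my illustration, not production code):

#!/usr/bin/perl
# Sketch: detect and walk the shell item ID list in an LNK file
use strict;
use warnings;

my $file = shift or die "Usage: $0 <file.lnk>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
binmode($fh);

read($fh, my $hdr, 76) == 76 or die "Short read on header\n";
my $flags = unpack("V", substr($hdr, 20, 4));  # LinkFlags at offset 20

if ($flags & 0x01) {                           # HasLinkTargetIDList
    read($fh, my $buf, 2);
    my $idlist_size = unpack("v", $buf);
    printf "Shell item ID list present: %d bytes\n", $idlist_size;
    while (1) {
        read($fh, $buf, 2);
        my $item_size = unpack("v", $buf);
        last if ($item_size == 0);             # TerminalID ends the list
        read($fh, my $item, $item_size - 2);   # size field includes itself
        printf "  ItemID: %d bytes\n", $item_size;
    }
}
else {
    print "No shell item ID list in this file\n";
}
close($fh);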

In order to see where/how this is an issue, take a look at this CyanLab blog post.

Malware Infection Vectors
This blog post recently turned up on MMPC...I think that it's great because it illustrates how systems can be infected via drive-bys that exploit Java vulnerabilities.  However, I also think that blog posts like this aren't finishing the race, as it were...they start, get most of the way down the track, and then stop before showing what this exploit looks like on a system.  Getting and sharing this information would serve two purposes...collect intelligence that they (MS) and others could use, and help get everyone else closer to conducting root cause analyses after an incident.  I think that the primary reason that RCAs aren't being conducted is that most folks think that it takes too long or is too difficult.  I'll admit...the further away from the actual incident that you detect a compromised or infected system, the harder it can be to determine the root cause or infection vector.  However, understanding the root cause of an incident, and incorporating it back into your security processes, can go a long way toward helping you allocate resources toward protecting your assets, systems, and infrastructure.

If you want to see what this stuff might look like on a system, check out Corey's jIIr blog posts that are labeled "exploits".  Corey does a great job of exploiting systems and illustrating what that looks like on a system.

Wednesday, November 07, 2012

PFIC2012 slides

Several folks at PFIC 2012 asked that I make my slides from the Windows 7 Forensic Analysis and Timeline Analysis presentations available...so here they are.

I'll have to admit, I've become somewhat hesitant to post slides, not because I don't want to share the info, but because posting the slides from my presentations doesn't share the info...most of the information that is shared during a presentation isn't covered in the slides.