Wednesday, November 21, 2012

Updates

Timeline Analysis
I recently taught another iteration of our Timeline Analysis course, and as is very often the case, I learned some things as a result.

First, the idea (in my case, thanks goes to Corey Harrell and Brett Shavers) of adding categories to timelines in order to increase the value of the timeline, as well as to bring a new level of efficiency to the analysis, is a very good one.  I'll discuss categories a bit more later in this post.

Second (and thanks goes out to Cory Altheide for this one), I'm reminded that timeline analysis provides the examiner with context for the events being observed, as well as a relative level of confidence in the data.  We get context because we see more than just a file being modified...we see other events around that event that provide indications as to what led to the file being modified.  Also, we know that some data is easily mutable, so seeing other, perhaps less mutable, events occurring "near" the event in question gives us confidence that the data we're looking at is, in fact, accurate.

Another thing to consider is that timelines help us reduce complexity in our analysis.  If we understand the nature of the artifacts and events that we observe in a timeline, and understand what creates or modifies those artifacts, we begin to see what is important in the timeline itself.  There is no magic formula for creating timelines...we may have too little data in a timeline (i.e., just a file being modified) or we may have too much.  Knowing what various artifacts mean or indicate allows us to separate the wheat from the chaff...that is, to separate what is important from the background noise on systems.

Categories
Adding category information to timelines can do a great deal to make analysis ssssooooo much easier!  For example, when adding Prefetch file metadata to a timeline, identifying the time stamps as being related to "Program Execution" can do a great deal to make analysis easier, particularly when that data is included along with other data in the same category.  Also, as of Vista (and particularly so with Windows 7 and 2008 R2), there has been an increase in the number of event logs, and many of the event IDs that we're familiar with from Windows XP have changed.  As such, being able to identify the category of an event source/ID pair, via a short descriptor, makes analysis quicker and easier.
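To illustrate the idea, consider a lookup table that maps event source/ID pairs to category tags as events are added to a timeline.  This is just a rough sketch in Perl (the language RegRipper plugins are written in); the source/ID pairs and labels below are examples I've chosen for illustration, not any sort of complete mapping.

# sketch only: map a few event source/ID pairs to category tags
my %categories = (
    "Microsoft-Windows-Security-Auditing/4624" => "[Login]",
    "Microsoft-Windows-Security-Auditing/4688" => "[Program Execution]",
    "Service Control Manager/7045"             => "[Service Install]",
);

# return the category tag for a source/ID pair, or an empty string
sub tag_event {
    my ($source, $id) = @_;
    return $categories{$source."/".$id} || "";
}

# e.g., prefix a timeline entry with its category tag
print tag_event("Service Control Manager", 7045)." A service was installed\n";

The point isn't the specific pairs...it's that the analyst sees "[Program Execution]" (or whatever the tag may be) right there in the timeline entry, rather than having to remember what each source/ID pair means.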

One thing that is very evident to me is that many artifacts will have a primary, as well as a secondary (or even tertiary) category.  For example, let's take a look at shortcut/LNK files.  Shortcuts found in a user's Recents folder are created via a specific activity performed by the user...most often, by the user double-clicking a file via the shell.  As such, the primary category that a shortcut file will belong to is something akin to "File Access", as the user actually accessed the file.  While it may be difficult to keep the context of how the artifact is created/modified in your mind while scrolling through thousands of lines of data, it is oh so much easier to simply provide the category right there along with the data.

Now, take a look at what happens when a user double-clicks a file...that file is opened in a particular application, correct?  As such, a secondary category for shortcut files (found in the user's Recents folder) might be "Program Execution".  The issue with this is that we would need to do some file association analysis to determine which application was used to open the file...we can't always assume that files ending in the ".txt" extension are going to be opened via Notepad.  File association analysis is pretty easy to do, so it's well worth doing.
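If you haven't done file association analysis before, here's a rough sketch of what it can look like against a SOFTWARE hive, using the Parse::Win32Registry Perl module that RegRipper is built on.  The hive file name is an assumption (i.e., a hive exported from an image), and keep in mind that per-user associations (the FileExts key in the user's NTUSER.DAT hive) can override the system-wide settings shown here.

use Parse::Win32Registry;

# Sketch: look up the application associated with a file extension in
# an exported SOFTWARE hive; "SOFTWARE" is an assumed file name.
my $reg  = Parse::Win32Registry->new("SOFTWARE") or die "Can't open hive\n";
my $root = $reg->get_root_key;

my $ext = ".txt";
# the default value of Classes\.txt holds the ProgId (e.g., "txtfile")
my $class = $root->get_subkey("Classes\\".$ext);
die "No association for ".$ext."\n" unless $class;
my $progid = $class->get_value("")->get_data();

# the ProgId's shell\open\command default value holds the command line
my $cmd = $root->get_subkey("Classes\\".$progid."\\shell\\open\\command")
               ->get_value("")->get_data();
print $ext." -> ".$progid." -> ".$cmd."\n";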

Not all artifacts are created alike, even if they have the same file extension...that is to say, some artifacts may need to be categorized based on their context or location.  Consider shortcut files on the user's desktop...many times, these are either specifically created by the user, or are placed there as the result of the user installing an application.  Desktop shortcuts that point to applications do not so much indicate "File Access" as they do "Application Installation", or something similar.  After all, when applications are installed and create a shortcut on the desktop, that shortcut very often contains the command line "app.exe %1", and doesn't point to a .docx or .txt file that the user accessed or opened.
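As a sketch of what location-based categorization might look like...the path tests and category labels here are purely illustrative, and a real implementation would cover many more conditions:

# Sketch: choose a category for a shortcut based on where it was found
# and what it points to; the tests and labels are illustrative only.
sub lnk_category {
    my ($lnk_path, $target) = @_;
    return "[File Access]" if ($lnk_path =~ m/\\Recent\\/i);
    return "[Application Install]"
        if ($lnk_path =~ m/\\Desktop\\/i && $target =~ m/\.exe(\s+%1)?$/i);
    return "";
}

print lnk_category('C:\Users\harlan\Desktop\app.lnk', 'app.exe %1')."\n";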

Adding categories to your timeline can bring a great deal of power to your fingertips, in addition to reducing the complexity and difficulty of finding the needle(s) in the haystack...or stack of needles, as the case may be.  However, this addition to timeline analysis is even more powerful when it's done with some thought and consideration given to the actual artifacts themselves.  Our example of LNK files clearly shows that we cannot simply group all LNK files in one category.  The Forensic Scanner provides the power and flexibility to include categories for artifacts based on any number of conditions.

RegRipper
Sorry that I didn't come up with a witty title for this section of the post, but I wanted to include something here.  I caught up on SketchyMoose's blog recently and found this post, which included a mention of RegRipper.

In the post, SM mentions a plugin named 'findexes.pl'.  This is an interesting plugin that I created as a result of something Don Weber found during an exam when we were on the same team...that the bad guy was hiding PE files (or portions thereof) in Registry values!  That was pretty cool, so I wrote a plugin.  See how that works?  Don found it, shared the information, and then a plugin was created that could be run during other exams.

SM correctly states that the plugin is looking for "MZ" in the binary data, but says that it's looking for it at the beginning of the value.  I know it says that in the comments at the top of the plugin file, but if you look at the code itself, you'll see that it runs a grep(), looking for 'MZ' anywhere in the data.  As you can see from the blog post, the plugin not only lists the path to the value, but also the length of the binary data being examined...it's not likely that you're going to find executable code in 32 bytes of data, so it's a good visual check for deciding which values you want to zero in on.
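For illustration, the core of that approach boils down to something like the following...this is a stripped-down sketch using the Parse::Win32Registry module, not the actual plugin code, and it assumes the hive file is passed on the command line:

use Parse::Win32Registry;

# Sketch: walk a hive and flag values containing "MZ" anywhere in the
# data, printing the data length as a quick visual sanity check.
my $reg = Parse::Win32Registry->new($ARGV[0]) or die "Can't open hive\n";
walk($reg->get_root_key);

sub walk {
    my $key = shift;
    foreach my $val ($key->get_list_of_values()) {
        my $data = $val->get_data();
        next unless defined $data;
        if ($data =~ m/MZ/) {
            printf "%s\\%s  (%d bytes)\n",
                $key->get_path(), $val->get_name(), length($data);
        }
    }
    walk($_) foreach ($key->get_list_of_subkeys());
}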

SM goes on to point out the results of the userinit.pl plugin...which is very interesting.  Notice that in the output of that plugin, there's a little note that indicates what 'normal' should look like...this is a question I get a lot when I give presentations on Registry or Timeline Analysis...what is 'normal', and what about what I'm looking at should jump out at me as 'suspicious'?  With this plugin, I've provided a little note that tells the analyst, hey, anything other than just "userinit.exe" is gonna be suspicious!
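The check itself is straightforward; here's a sketch (again, not the actual plugin code...the 'normal' pattern is my own approximation, and the hive file name is an assumption):

use Parse::Win32Registry;

# Sketch: read the Userinit value from the Winlogon key in an exported
# SOFTWARE hive and flag anything beyond the default entry.
my $reg = Parse::Win32Registry->new("SOFTWARE") or die "Can't open hive\n";
my $key = $reg->get_root_key
              ->get_subkey("Microsoft\\Windows NT\\CurrentVersion\\Winlogon");
my $userinit = $key->get_value("Userinit")->get_data();
print "Userinit = ".$userinit."\n";

# normal is a single reference to userinit.exe (plus a trailing comma);
# anything else appended to the value is worth a very close look
print "Possibly suspicious!\n"
    unless ($userinit =~ m/^[^,]*userinit\.exe,?\s*$/i);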

USB Stuff
SM also references a Hak5 episode by Chris Gerling, Jr., that discusses mapping USB storage devices found on Windows systems.  I thought I'd reference that here, in order to say, "...there are more things in heaven and earth than are dreamt of in your philosophy, Horatio!"  Okay, so what does quoting the Bard have to do with anything?  In her discussion of her dissertation, entitled Pitfalls of Interpreting Forensic Artifacts in the Registry, Jacky Fox follows a similar process for identifying USB storage devices connected to a Windows system.  However, the currently accepted process for doing this USB device identification has some...shortcomings...that I'll be addressing.  Strictly speaking, the process works, and works very well.  In fact, if you follow all the steps, you'll even be able to identify indications of USB thumb drives that the user may have tried to obfuscate or delete.  However, this process does not identify all of the devices that are presented to the user as storage.
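For reference, the commonly accepted process usually starts with enumerating the USBSTOR key from the SYSTEM hive; a sketch of that first step (assuming an exported hive file, and assuming ControlSet001 is the current control set...a real examination would check the Select key) looks something like this.  Note that this is exactly the kind of partial view I'm referring to...not every device presented to the user as storage will appear under this key.

use Parse::Win32Registry;

# Sketch: list USB storage devices recorded under the USBSTOR key in
# an exported SYSTEM hive; ControlSet001 is assumed to be current.
my $reg = Parse::Win32Registry->new("SYSTEM") or die "Can't open hive\n";
my $usbstor = $reg->get_root_key->get_subkey("ControlSet001\\Enum\\USBSTOR");
die "No USBSTOR key found\n" unless $usbstor;

foreach my $dev ($usbstor->get_list_of_subkeys()) {
    print $dev->get_name()."\n";                    # device class ID
    foreach my $inst ($dev->get_list_of_subkeys()) {
        print "  ".$inst->get_name()."\n";          # unique instance ID
        print "  LastWrite: ".$inst->get_timestamp_as_string()."\n";
    }
}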

Please don't misunderstand me here...I'm not saying that either Chris or Jacky is wrong in the process that they use to identify USB storage devices.  Again, they both rely on regularly accepted examination processes.  Chris refers to Windows Forensic Analysis 2/e, and Jacky has a lot of glowing and positive things to say about RegRipper in her dissertation (yes, I did read it...the whole thing...because that's how I roll!), and some of those resources are based on information that Rob Lee has developed and shared through SANS.  However, as time and research have progressed, new artifacts have been identified and need to be incorporated into our analysis processes.

Propagation
I ran across this listing for Win32/Phorpiex on the MS MMPC blog, and it included something pretty interesting.  This malware includes a propagation mechanism that makes use of removable storage devices.

While this propagation mechanism seems pretty interesting, it's not nearly as interesting as it could be, because (as pointed out in the write-up) when the user clicks on the shortcut for what they think is a folder, they don't actually see the folder open.  As such, someone might look for an update to this propagation mechanism in the near future, if one isn't already in the wild.

What's interesting to me is that no effort seems to have been made to look at the binary contents of the shortcut/LNK files to determine if there's anything odd or misleading about them.  For example, most of the currently used tools only parse the LinkInfo block of the LNK file...not all tools parse the shell item ID list that comes before the LinkInfo block.  MS has done a great job of documenting the binary specification for LNK files, but commercial tools haven't caught up.
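To see how little it takes to even determine whether a shell item ID list is present, here's a minimal sketch based on the MS-SHLLINK specification (it reads the file name from the command line; parsing the individual shell items is, of course, considerably more involved):

# Sketch: per MS-SHLLINK, check whether a LNK file contains a shell
# item ID list ahead of the LinkInfo block.
open(my $fh, '<:raw', $ARGV[0]) or die "Can't open file: $!\n";
read($fh, my $hdr, 76);            # ShellLinkHeader is a fixed 76 (0x4C) bytes

my ($size, $flags) = unpack("V x16 V", $hdr);  # HeaderSize, skip CLSID, LinkFlags
die "Not a LNK file\n" unless ($size == 0x4C);

if ($flags & 0x01) {               # HasLinkTargetIDList flag
    read($fh, my $buf, 2);
    my $idlist_size = unpack("v", $buf);
    print "Shell item ID list present: ".$idlist_size." bytes\n";
}
else {
    print "No shell item ID list\n";
}
close($fh);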

In order to see where/how this is an issue, take a look at this CyanLab blog post.

Malware Infection Vectors
This blog post recently turned up on MMPC...I think that it's great because it illustrates how systems can be infected via drive-bys that exploit Java vulnerabilities.  However, I also think that blog posts like this aren't finishing the race, as it were...they start, get most of the way down the track, and then stop before they show what the exploit looks like on a system.  Getting and sharing this information would serve two purposes...collecting intelligence that they (MS) and others could use, and helping get everyone else closer to conducting root cause analyses after an incident.  I think that the primary reason that RCAs aren't being conducted is that most folks think that it takes too long or is too difficult.  I'll admit...the further away from the actual incident that you detect a compromised or infected system, the harder it can be to determine the root cause or infection vector.  However, understanding the root cause of an incident, and incorporating it back into your security processes, can go a long way toward helping you allocate resources to protect your assets, systems, and infrastructure.

If you want to see what this stuff might look like on a system, check out Corey's jIIr blog posts that are labeled "exploits".  Corey does a great job of exploiting systems and illustrating what that looks like on a system.



2 comments:

Rob Lee said...

If you would like to examine a partial shot at pairing context with artifacts, I suggest you take a look at the DFIR community-built "Evidence of" category poster we released this past year. While we couldn't fit everything on it, it does a decent job at listing some of the core contextual pairings.

https://blogs.sans.org/computer-forensics/files/2012/06/SANS-Digital-Forensics-and-Incident-Response-Poster-2012.pdf

H. Carvey said...

Thanks for the comment, Rob.

However, I'm not sure (and not clear on) how the poster illustrates "pairing context with artifacts". I can clearly see the pairings of categories to artifacts, but from my post, it's the actual contents of the timelines that provide the context.

Again, thanks.