Monday, March 12, 2018

New and Updated Plugins, Other Items

BAM Key and Process Execution, Updated Plugins
Recently, blog posts describing the "BAM Key" and its viability as a process execution artifact began to appear (port139 blog, padawan-4n6 blog).  The potential for this key had been mentioned by Alex Ionescu last summer, and it began to come to light within the community as of February 2018.  As such, I wrote two plugins for parsing this data, bam.pl and bam_tln.pl.
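
For anyone who wants to see the parsing logic for themselves, it's pretty straightforward; below is a minimal sketch in Python (the plugins themselves are Perl), assuming the python-registry module and the ControlSet001 path.  The value layout (the first 8 bytes of each value's data being a FILETIME) is based on the publicly shared testing, not on the plugin code itself.

import struct
from datetime import datetime, timedelta
from Registry import Registry

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since Jan 1, 1601 (UTC)
    return datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)

reg = Registry.Registry("SYSTEM")
# Assumes ControlSet001 is current; a complete parser would check Select\Current
bam = reg.open("ControlSet001\\Services\\bam\\UserSettings")

for sid_key in bam.subkeys():
    print("SID: %s" % sid_key.name())
    for val in sid_key.values():
        data = val.value()
        # Each executable path value holds a binary structure; the first 8 bytes
        # are a FILETIME.  Skip the housekeeping values (SequenceNumber, Version).
        if val.name() in ("SequenceNumber", "Version") or len(data) < 8:
            continue
        ft = struct.unpack("<Q", data[:8])[0]
        print("  %s  %s Z" % (val.name(), filetime_to_dt(ft)))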

There's also been a good bit of testing being conducted and shared with respect to when data in Registry transaction logs is committed to the hive files themselves.  In addition to the previously linked blog, there's also this blog post.

I recently ran across a couple of System hive files from Windows 10 systems for which the AppCompatCache data was not being parsed correctly.  As such, I updated the appcompatcache.pl and shimcache.pl plugins, as well as their corresponding *_tln.pl variants.
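
For context, the issue appears to be in the value data header on more recent Windows 10 builds.  Here's a rough sketch of the Windows 10 entry layout; the offsets and field sizes are assumptions drawn from other publicly available parsers and write-ups, not from the updated plugins themselves, so treat it as illustrative only.

import struct
from datetime import datetime, timedelta
from Registry import Registry

def parse_win10_cache(data):
    # First DWORD is the offset to the first entry; it changed from 0x30 to 0x34
    # on more recent Win10 builds, which is one way parsers get tripped up
    offset = struct.unpack_from("<I", data, 0)[0]
    while offset + 12 <= len(data):
        sig, _, entry_len = struct.unpack_from("<4sII", data, offset)
        if sig != b"10ts":
            break
        cursor = offset + 12
        path_len = struct.unpack_from("<H", data, cursor)[0]
        cursor += 2
        path = data[cursor:cursor + path_len].decode("utf-16-le")
        cursor += path_len
        ft = struct.unpack_from("<Q", data, cursor)[0]
        mtime = datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)
        yield path, mtime
        offset += 12 + entry_len

reg = Registry.Registry("SYSTEM")
key = reg.open("ControlSet001\\Control\\Session Manager\\AppCompatCache")
for path, mtime in parse_win10_cache(key.value("AppCompatCache").value()):
    print("%s Z  %s" % (mtime, path))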

Finally, Eric documented changes to the AmCache.hve file, but until recently, I hadn't seen any updated hive files.  Thankfully, Ali Hadi was kind enough to share some hives for testing, and I updated the amcache.pl and amcache_tln.pl plugins accordingly. However, these updates were solely to address the process execution artifacts, and I have not yet updated them to include the device data that Eric pointed out in his blog post.
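
For reference, the newer AmCache.hve hives store the per-file data under Root\InventoryApplicationFile rather than the older Root\File\{volume GUID} structure.  Below is a minimal sketch of pulling the path and hash from those entries, again assuming the python-registry module; the value names are taken from Eric's write-up, so consider them assumptions on my part.

from Registry import Registry

reg = Registry.Registry("AmCache.hve")
apps = reg.open("Root\\InventoryApplicationFile")

for entry in apps.subkeys():
    vals = dict((v.name(), v.value()) for v in entry.values())
    path = vals.get("LowerCaseLongPath", "")
    file_id = vals.get("FileId", "")
    # FileId is reported to be the file's SHA-1 hash with four leading zeroes
    sha1 = file_id[4:] if file_id else ""
    print("%s  %s  %s" % (entry.timestamp(), path, sha1))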

NOTE: Yes, I uploaded these plugins (a total of 8) to the GitHub repository.  Again, I do not modify the RegRipper profiles when I do so, so if you want to incorporate the plugins in your processes, you'll need to open Notepad and update the profiles yourself.
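
For anyone who hasn't edited one before, a profile is just a plain text file (no extension) in the plugins folder, listing one plugin per line; running rip.pl -r SYSTEM -f <profile> then runs each listed plugin, in order, against the hive.  Adding the new plugins to a hypothetical "system" profile might look something like this (alongside whatever entries are already there):

appcompatcache
shimcache
bam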

Part of the reason I made these updates is to perform testing of the various artifacts, specifically to see where the BAM key data falls within the spectrum of process execution artifacts.

The data that Ali shared with me included the AmCache.hve, the System hive (as well as other hives from the config folder), and the user's NTUSER.DAT hive.  From these sources, I created a micro-timeline using the AmCache process execution data, the AppCompatCache data, the BAM key data, and the user's UserAssist data, and the results were really quite fascinating.

Here's an example of some of the data that I observed.  I created my overall timeline, and then picked out one entry (for time2.exe) for a closer look.  I used the type command to isolate just the events I wanted from the events file, and created the timeline below:

Thu Feb 15 16:49:16 2018 Z
  AmCache      - Key LastWrite - f:\time2.exe (46f0f39db5c9cdc5fe123807bb356c87eb08c48e)

Thu Feb 15 16:49:14 2018 Z
  BAM             - \Device\HarddiskVolume4\Time2.exe (S-1-5-21-441239525-4047580167-3361022386-1001)

Thu Feb 15 16:49:10 2018 Z
  REG             forensics - [Program Execution] UserAssist - C:\Users\forensics\Desktop\Time2 - Shortcut.lnk (1)
  REG             forensics - [Program Execution] UserAssist - F:\Time2.exe (1)

Mon Nov  2 22:20:14 2009 Z
  REG             - M... AppCompatCache - F:\Time2.exe

This is pretty fascinating stuff.  We know the context of the time stamps for the AppCompatCache data; even though we understand the data to be populated as a result of process execution events, the time stamp associated with the data is the file system last modification time (specifically, from the $STANDARD_INFORMATION attribute).  The UserAssist data illustrates the user launching the application, and in the following 6 seconds, the BAM and AmCache entries are created.

Keep in mind that this is just one example, from one test.  In the data I've got, not all entries in the AppCompatCache data have corresponding entries in the BAM key.  For example, there's a file called "slacker.exe" that appears in the AppCompatCache and AmCache data, but there doesn't seem to be an entry in the BAM key. 

Over on the Troy 4n6 blog, Troy has a couple of great comments on testing.  Yes, they're specific to the P2P cases he mentions in the blog, but they apply equally across the rest of the DFIR community.

Processes and Checklists
Something I've done over the years is develop and maintain processes and checklists for the various types of analysis work I do.  For example, as an incident responder, I've received a lot of those "...we think this system may have been infected with malware..." cases, and I maintain processes and checklists for conducting that sort of analysis.

Now, there's a school of thought within the DFIR community that holds that having defined and documented processes stifles the creativity and innovation of the individual analyst.  I wholeheartedly disagree...in my experience, having a documented process means I don't forget things, and it leads directly to automating the process, so that I spend less time sifting through data and more time conducting actual analysis.
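
As a trivial example of what that automation can look like, here's a sketch of a checklist-driven wrapper around rip.pl; the hive names and plugin lists are placeholders, purely to illustrate the idea.

import subprocess

# Hypothetical checklist: which plugins get run against which hive
CHECKLIST = {
    "SYSTEM":     ["appcompatcache", "bam"],
    "NTUSER.DAT": ["userassist"],
}

for hive, plugins in CHECKLIST.items():
    for plugin in plugins:
        # rip.pl -r <hive> -p <plugin> parses the named hive with that plugin
        subprocess.run(["perl", "rip.pl", "-r", hive, "-p", plugin], check=True)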

Further, maintaining a documented process does not mean that the process is set in stone once it's written; instead, it's a living document that continues to be updated and developed as new things are learned.  As MS has developed not just new operating systems but also updated the currently available OSs, new artifacts (see the BAM key mentioned above) have been discovered.  And this is solely with respect to the OS, and doesn't take new applications (or new versions of current applications) into account.  Maintaining an ever-expanding list of Windows artifacts is neither as useful nor as viable as maintaining documented processes that illustrate how those artifacts are used in an investigation, so having documented processes is a key component of providing comprehensive and accurate analysis in a timely manner.

Word Metadata Verification
Phill's got a blog post up on documenting Word doc metadata bugs.  My take-aways from his post are:

1. Someone asked him for assistance in verifying what was thought to be a bug in a tool.  This isn't to point out that the first thing that was blamed was the tool...not at all.  Rather, it's to point out that someone said, hey, I'm seeing something odd, I'll see if others are seeing the same thing.  It isn't the first step I'd take, but it was still a good call. 

2. Phill documented and shared his testing methodology.  As someone who's written open source tools, I've gotten a lot of "...the tool doesn't work..." over the years, without any information regarding what was done, or how "doesn't work" was determined.  Sharing what you did and saw in a concise manner isn't an art...it's necessary, and it allows others to have a baseline for further testing.

3. Phill stated, "...I decided I was being lazy by not actually looking for the word count in the docx file itself...".  Sometimes, the easiest approach escapes us completely.  I've seen/done the same thing myself, and I've gotten to the point where, if I get odd errors from RegRipper or Volatility, I'll open the target file in a hex editor to make sure it's not all zeroes (yes, that has happened).  I guess that's the benefit of being familiar with file formats; like Phill said, he opened up the .docx file he was using for testing in an archive tool and pulled out what he needed.
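
Phill's approach is easy to reproduce; a .docx file is just a ZIP archive, and the word count that Word reports lives in the Words element of docProps/app.xml.  A quick sketch (the file name here is just a placeholder):

import zipfile
import xml.etree.ElementTree as ET

# Application-level metadata (word count, page count, etc.) lives in docProps/app.xml
NS = "{http://schemas.openxmlformats.org/officeDocument/2006/extended-properties}"

with zipfile.ZipFile("test.docx") as docx:
    app_xml = docx.read("docProps/app.xml")

root = ET.fromstring(app_xml)
print("Words: %s" % root.findtext(NS + "Words"))
print("Pages: %s" % root.findtext(NS + "Pages"))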
