Tuesday, July 22, 2014

File system ops, effects on MFT records

I recently conducted some testing of different actions on a Windows 7 system, with the specific purpose of identifying the artifacts those actions leave within the file system (in this case, the MFT and the USN change journal), particularly within individual records.  I wanted to see what each action "looks like" within the individual MFT records, as well as within the USN change journal, in hopes that things would pop out that could be used during forensic exams.  Once I completed my testing, I decided to share what I'd done and what I'd found, in hopes that others might find it useful.

Testing Platform: 32-bit Windows 7 Ultimate VM running in Virtual Box.

Tools: My own custom stuff.  I updated the MFT parser included with WFA 4/e, used usnj.pl to parse the USN change journal, and used parse.pl to translate the output of the change journal parser into a timeline.  This page at MS identifies the USN record v2 structure, and the reason codes, used by usnj.pl.
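
For reference, usnj.pl is essentially just walking fixed-offset fields in each record.  The snippet below is a minimal, stand-alone sketch (not usnj.pl itself) that decodes a single USN_RECORD_V2 structure held in $rec, using the offsets and a subset of the reason code values from the MS page mentioned above, and prints the file name, reason flags, and file/parent reference numbers in the same record/sequence notation you'll see in the journal output later in this post:

use strict;

# subset of the USN_RECORD_V2 reason codes; see the MS page for the full list
my %reasons = (0x00000001 => "Data_Overwrite",  0x00000002 => "Data_Extend",
               0x00000004 => "Data_Truncation", 0x00000020 => "Named_Data_Extend",
               0x00000100 => "File_Create",     0x00000200 => "File_Delete",
               0x00000800 => "Security_Change", 0x00001000 => "Rename_Old_Name",
               0x00002000 => "Rename_New_Name", 0x00200000 => "Stream_Change",
               0x80000000 => "Close");

# 64-bit file reference = 48-bit MFT record number + 16-bit sequence number
sub parseRef {
  my ($lo, $hi) = unpack("VV", shift);
  return (($hi & 0xFFFF) * (2**32) + $lo, ($hi >> 16) & 0xFFFF);
}

sub parseUsnV2 {
  my $rec = shift;
  my ($ref, $seq)           = parseRef(substr($rec, 8, 8));
  my ($pref, $pseq)         = parseRef(substr($rec, 16, 8));
  my ($t_lo, $t_hi)         = unpack("VV", substr($rec, 32, 8));  # FILETIME
  my ($reason)              = unpack("V", substr($rec, 40, 4));
  my ($name_len, $name_ofs) = unpack("vv", substr($rec, 56, 4));
  my $name = substr($rec, $name_ofs, $name_len);
  $name =~ s/\x00//g;                                  # quick Unicode-to-ASCII
  my $ts = ($t_hi * (2**32) + $t_lo) / 10000000 - 11644473600;  # -> Unix epoch
  my @r  = grep { $reason & $_ } sort { $a <=> $b } keys %reasons;
  return sprintf "%s Z  %s: %s  FileRef: %d/%d  ParentRef: %d/%d",
    scalar gmtime($ts), $name, join(",", @reasons{@r}), $ref, $seq, $pref, $pseq;
}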

Methodology:  I started by writing down and outlining all of the tests that I wanted to perform.  I had a total of 5 tests that I wanted to run in order to see what the effects of each individual action were on the MFT, and on individual records within the MFT.  I picked 5 different files within the VM, one for each test.  Once that was done, I added the VM to FTK Imager as an evidence item and extracted the MFT; this was my "before" sample.  Then, I launched the VM, performed all of the tests, logged out and shut down the VM, and extracted the MFT (my "after" sample) and the USN change journal.

All testing occurred on 17 July 2014.  In all of the tests, I've changed the font color for items of interest to red.

Test 1 - Renaming a file
This was a simple test, but something I hadn't specifically looked at before.  All I did with this one was open a command prompt, change to the directory in question, and issue the command, "ren eula.txt eula_30.txt".

Here's the record details from before the test was run:

44657      FILE Seq: 3    Links: 1   
[FILE],[BASE RECORD]
.\tools\Eula.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Jul 28 14:32:44 2006 Z
  FN: Eula.txt  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z

...and here are the record details after the test:

44657      FILE Seq: 3    Links: 1   
[FILE],[BASE RECORD]
.\tools\eula_30.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Thu Jul 17 20:38:52 2014 Z
    B: Fri Jul 28 14:32:44 2006 Z
  FN: eula_30.txt  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Jul 28 14:32:44 2006 Z

Again, this was an atomic action; that is to say, all I did with respect to this file was run the ren command.  The C (entry changed) time in the $STANDARD_INFORMATION attribute was updated to the time of the rename, which makes sense, as the rename modifies the MFT record itself.  I honestly have no idea why the last accessed (A) and creation (B) dates from the $STANDARD_INFORMATION attribute would be copied into the corresponding time stamps of the $FILE_NAME attribute for a rename operation.  However, notice that very little else about the record changed; the record number (from the DWORD at offset 0x2C within the record header), the sequence number, and the parent file reference number all remained the same, which is to be expected.
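
As a point of reference, the record number, sequence number, and link count come straight out of fixed offsets in the FILE record header; the parent file reference number comes from the $FILE_NAME attribute.  The snippet below is a minimal sketch (not the full parser from the book materials) showing where those header fields live within a 1024-byte record read from an extracted MFT; the file name "mft.raw" is just an assumption:

use strict;
# Read the first 1024-byte FILE record from an extracted MFT and report the
# header fields discussed above
open(my $fh, "<", "mft.raw") || die "mft.raw: $!";
binmode($fh);
read($fh, my $rec, 1024);

# "FILE" sig, seq number (0x10), link count (0x12), flags (0x16), record number (0x2C)
my ($sig, $seq, $links, $flags, $rec_num) = unpack("a4 x12 v v x2 v x20 V", $rec);

if ($sig eq "FILE") {
    printf "%d      FILE Seq: %d    Links: %d\n", $rec_num, $seq, $links;
    my @f = ("[FILE]");
    push(@f, "[DELETED]")   unless ($flags & 0x0001);   # in-use bit cleared = deleted
    push(@f, "[DIRECTORY]") if ($flags & 0x0002);
    print join(",", @f)."\n";
}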

Here are the changes recorded in the USN change journal:

eula_30.txt: Rename_New_Name  FileRef: 44657/3  ParentRef: 44361/32
eula_30.txt: Rename_New_Name,Close  FileRef: 44657/3  ParentRef: 44361/32
Eula.txt: Rename_Old_Name  FileRef: 44657/3  ParentRef: 44361/32

Now, these changes are not necessarily listed in the specific order in which they occurred...they're presented in a timeline, and all three occurred within the same second.  But it is interesting that there are Rename_Old_Name and Rename_New_Name identifiers for the actions that took place.  Perhaps because a good deal of the analysis work that I do comes from corporate environments, I've been seeing a lot of Windows 7 systems with VSCs disabled in the Registry; as such, I haven't had access to an older version of the MFT via a VSC in order to compare record contents, on a per-record basis.  By incorporating the USN change journal into my analysis, I can get some additional context with respect to what I'm seeing.

The use of the USN change journal can also be useful in identifying activity that occurs during a malware infection.  For example, in some cases, malware may create a downloader, use that to download another bit of malware, and then delete the original downloader.  The USN change journal can help you identify that activity, even if the MFT record for the original downloader has been reused and overwritten.
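
As an illustration of that kind of check, the sketch below is one way you might script it (an assumption on my part, based on the same one-line-per-record output format shown above; "usn.txt" is an assumed file name), flagging any file name that has both a File_Create and a File_Delete entry in the parsed journal output:

use strict;
my %seen;
open(my $fh, "<", "usn.txt") || die "usn.txt: $!";
while (<$fh>) {
    # expects lines like:  name: Reason,Reason  FileRef: x/y  ParentRef: a/b
    next unless (/^(.*?): (\S+)\s+FileRef:/);
    my ($name, $reasons) = (lc($1), $2);
    $seen{$name}{created} = 1 if ($reasons =~ /File_Create/);
    $seen{$name}{deleted} = 1 if ($reasons =~ /File_Delete/);
}
close($fh);
foreach my $f (sort keys %seen) {
    print $f.": File_Create and File_Delete entries found\n"
        if ($seen{$f}{created} && $seen{$f}{deleted});
}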

Test 2 - Adding an ADS to a file
For this test, I added an ADS to a file by typing echo "This is an ADS" > procmon.chm:ads.txt at the command prompt.  Now, this file is the ProcMon help file that is included when you download the ProcMon archive from SysInternals, and as such, it already had a Zone.Identifier ADS associated with the file.

The "before" record:

44401      FILE Seq: 11   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.chm
    M: Fri Nov  8 15:17:17 2013 Z
    A: Mon Nov 28 16:46:42 2011 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Mon Nov 28 16:46:42 2011 Z
  FN: procmon.chm  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:16 2013 Z
    A: Fri Nov  8 15:17:16 2013 Z
    C: Fri Nov  8 15:17:16 2013 Z
    B: Fri Nov  8 15:17:16 2013 Z
**ADS: Zone.Identifier

...and the "after" record:

44401      FILE Seq: 11   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.chm
    M: Thu Jul 17 20:39:22 2014 Z
    A: Mon Nov 28 16:46:42 2011 Z
    C: Thu Jul 17 20:39:22 2014 Z
    B: Mon Nov 28 16:46:42 2011 Z
  FN: procmon.chm  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:16 2013 Z
    A: Fri Nov  8 15:17:16 2013 Z
    C: Fri Nov  8 15:17:16 2013 Z
    B: Fri Nov  8 15:17:16 2013 Z
**ADS: ads.txt
**ADS: Zone.Identifier

In this case, you'll notice that only the M (modified) and C (MFT entry change) times in the $STANDARD_INFORMATION attribute have changed.  I would expect that the C (entry changed) time stamp would change, as the addition of an ADS constitutes a change to the MFT record itself, but the M (last modified) time stamp changed, also.

From the USN change journal:

procmon.chm: Stream_Change  FileRef: 44401/11  ParentRef: 44361/32
procmon.chm: Named_Data_Extend,Close,Stream_Change  FileRef: 44401/11  ParentRef: 44361/32
procmon.chm: Named_Data_Extend,Stream_Change  FileRef: 44401/11  ParentRef: 44361/32

So now, if an ADS is suspected, a good place to look for indications of when the ADS was added to a file (or folder) would be to parse the USN change journal and look for Stream_Change entries.  This can be valuable during an examination because an ADS does not have any unique time stamps associated with it within the MFT record; an ADS is simply another $DATA attribute within the record, and as such, does not have its own $STANDARD_INFORMATION or $FILE_NAME attribute.
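
A quick way to do that against the same sort of parsed output shown above is to simply filter on that reason code; for example (the output file name is an assumption):

perl -ne 'print if (/Stream_Change/);' usn.txt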

Test 3 - File system tunneling
In this test, I created a batch file named "tunnel.bat" in the C:\Tools folder, with the following contents:

del procmon.exe
echo "This is a test file" > procmon.exe

For this test, I ran the batch file, which deletes procmon.exe and then creates a new file named procmon.exe in the same folder, in relatively short order.  In fact, for file system tunneling to take effect, the entire process has to happen within 15 seconds (by default; the time can be changed, or file system tunneling itself disabled, via the Registry).  As we'll see, the entire process took place within a second.
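
The values that control this behavior live beneath the FileSystem key in the System hive; the value names below come from MS KB article 172190.  Here's a minimal sketch using Parse::Win32Registry (the same module RegRipper is built on) to check an extracted hive for them; the hive file name and the use of ControlSet001 are assumptions, so check the Select key to determine the control set that was actually in use:

use strict;
use Parse::Win32Registry qw(:REG_);

my $reg  = Parse::Win32Registry->new("system") || die "Could not open hive file\n";
my $root = $reg->get_root_key;
my $key  = $root->get_subkey("ControlSet001\\Control\\FileSystem");
if ($key) {
    print "LastWrite: ".$key->get_timestamp_as_string()."\n";
    foreach my $name ("MaximumTunnelEntries", "MaximumTunnelEntryAgeInSeconds") {
        if (my $val = $key->get_value($name)) {
            print $name." = ".$val->get_data()."\n";
        }
        else {
            print $name." value not found (defaults in effect)\n";
        }
    }
}

Per the KB article, MaximumTunnelEntries set to 0 turns tunneling off, and MaximumTunnelEntryAgeInSeconds changes the default 15-second caching window.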

The original MFT record appears as follows:

44631      FILE Seq: 4    Links: 1   
[FILE],[BASE RECORD]
.\tools\Procmon.exe
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri May 31 20:54:54 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: Procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z
**ADS: Zone.Identifier

After the test was run, the MFT record appeared as follows:

44631      FILE Seq: 5    Links: 1   
[FILE],[DELETED],[BASE RECORD]
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri May 31 20:54:54 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: Procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z
**ADS: Zone.Identifier

Here's the new file record for the file:

22977      FILE Seq: 12   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.exe
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z
[RESIDENT]

Notice that the only difference between the two 44631 records is the sequence number, and that the original file record is now marked "DELETED".  What this illustrates is that the MFT record itself is NOT reused during file system tunneling on NTFS, and that a new record is created during the operation.  This was something I'd wondered about for some time, and now I can see the effect of file system tunneling.

We can see in this case that the M, A, and C times for the new file are all from the date of the testing, and that the B (creation) date was carried over from the original file record.  Also, notice that the B time stamp in the $FILE_NAME attribute of the new file shows the same tunneled creation date...very interesting.

Also, because the file went from being a PE file to a short text string, the new file's data is now resident within the MFT record itself; I didn't include the hex dump of the file contents extracted from the record.

This blog post (from 2005) explains why tunneling exists at all.

From the USN change journal:

procmon.exe: Data_Extend,Close,File_Create  FileRef: 22977/12  ParentRef: 44361/32
procmon.exe: Data_Extend,File_Create  FileRef: 22977/12  ParentRef: 44361/32
procmon.exe: File_Create  FileRef: 22977/12  ParentRef: 44361/32
Procmon.exe: File_Delete,Close  FileRef: 44631/4  ParentRef: 44361/32

When I first read about file system tunneling, I was curious as to whether the original MFT record for the deleted file was simply reused, and this test clearly illustrates that is not the case.

Additional Resources:
- Here's a jIIr post from Corey Harrell in which he discusses the use of the USN change journal and file system tunneling
- Eric Huber's blog post on file system tunneling
- Blazer Catzen discussed some file system tunneling testing he'd done on David Cowen's Forensic Lunch podcast, and posted the presentation he'd put together on the subject.

Test 4 - Copy a file to another location in the same volume
In this test, I copied C:\Windows\Logs\IE9_NR_setup.log to C:\Users\IE9_NR_setup.log, using drag-n-drop via the Windows Explorer shell.

From "before" MFT:

96296      FILE Seq: 3    Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z

From the "after" MFT, the original file:

96296      FILE Seq: 3    Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z

...and the resulting file:

22987      FILE Seq: 12   Links: 2   
[FILE],[BASE RECORD]
.\Users\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z
  FN: IE9_NR~1.LOG  Parent Ref: 486/1
  Namespace: 2
    M: Thu Jul 17 20:41:39 2014 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z
  FN: IE9_NR_Setup.log  Parent Ref: 486/1
  Namespace: 1
    M: Thu Jul 17 20:41:39 2014 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z

Now, one question you might have is, if I dragged-and-dropped the file, shouldn't the record show indications of the file having been accessed?  Well, we have to remember that as of Vista, the NtfsDisableLastAccessUpdate value is enabled by default, meaning that "normal" user actions won't cause the last accessed (A) time stamps on files to be updated.

From the USN change journal:

IE9_NR_Setup.log: Data_Extend,Data_Overwrite,File_Create  FileRef: 22987/12  ParentRef: 486/1
IE9_NR_Setup.log: File_Create  FileRef: 22987/12  ParentRef: 486/1
IE9_NR_Setup.log: Data_Extend,File_Create  FileRef: 22987/12  ParentRef: 486/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation,Close  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
IE9_NR_Setup.log: Data_Extend,Data_Overwrite,Close,File_Create  FileRef: 22987/12  ParentRef: 486/1

From the USN change journal, we see a reference to the prefetch file for consent.exe being updated, indicating that consent.exe was run; this is the UAC consent dialog that appears when a drag-and-drop copy into a protected folder (in this case, C:\Users) requires the user's approval to complete.

Test 5 - Move a file to another location in the same volume
Moved C:\Windows\Logs\IE10_NR_setup.log to C:\Temp\IE10_NR_setup.log (drag-n-drop, via the Windows Explorer shell)

The "before" record:

16420      FILE Seq: 15   Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE10_NR_Setup.log
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_N~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z

...and the "after" record:

16420      FILE Seq: 15   Links: 2   
[FILE],[BASE RECORD]
.\temp\IE10_NR_Setup.log
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Thu Jul 17 20:41:58 2014 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_N~1.LOG  Parent Ref: 44311/7
  Namespace: 2
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_NR_Setup.log  Parent Ref: 44311/7
  Namespace: 1
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z

Okay, the file was moved (which, within the same volume, is handled as a rename rather than a copy-plus-delete, as the USN change journal entries below illustrate), but we might expect to see some changes in the time stamps...shouldn't we?  Well, in this case, we cannot tell if the $FILE_NAME attribute time stamps had been changed, because for this file, all of the time stamps, in all of the available attributes, were the same.  We do, however, see that the C (entry modified) time in the $STANDARD_INFORMATION attribute changed (as expected) and that the parent file reference number changed.

From the USN change journal:

IE10_NR_Setup.log: Security_Change  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_New_Name,Close  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_New_Name  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Security_Change,Close  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_Old_Name  FileRef: 16420/15  ParentRef: 1966/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation,Close  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1

Again, we see a reference to consent.exe having been launched.  I'm not entirely sure why the "Security_change" reason code in the USN change journal was generated for a move operation.

Both tests 4 and 5 validate what's described in MS KB article 299648, keeping in mind that the article only discusses time stamps from the $STANDARD_INFORMATION attribute.

Summary
Again, I ran these tests as a means for determining what different file operations look like in the MFT and USN change journal, and what the effects are on individual records.  This information can be helpful in a variety of investigation types, such as malware detection, and finding indications of historical activity and data (i.e., files that are no longer on the system).

Future Efforts
For the future, I'll need to look at copy and move file operations performed at the command line, using the copy and move commands, respectively.

Thursday, July 10, 2014

Random Stuff

Host-Based Digital Analysis
There are a lot of folks with different skill sets and specialties involved in targeted threat analysis and threat intel collection and dissemination, including researchers who specialize in network traffic analysis, malware reverse engineering, etc.

One of the benefits I find in host-based analysis is that the disk is one of the least volatile of the data sources.  Ever been asked to answer the "what data left our organization?" question definitively?  Most often, the answer is that if you didn't conduct full packet capture at the time the data was leaving, then you really have no way of knowing definitively.  Information in memory persists longer than what's on the wire, but if you're not there to collect memory within a reasonable time frame, you're likely going to miss the artifacts you're interested in, just the same.  While the contents of the disk won't tell you definitively what left that system, artifacts on disk persist far longer than those available via other sources.

With malware RE or dynamic analysis, you're getting a very limited view of what could have happened on the infected host, rather than looking at what did happen.  A malware RE analyst with only a sample to work with will be able to tell you what that sample was capable of, but won't be able to tell you what actually happened on the infected host.  They can tell you that the malware included the capability to perform screen captures and keystroke logging, but they can't tell you if either or both of those capabilities were actually used.

One of the aspects of targeted threat incidents is the longevity of these groups.  During one investigation I worked on a number of years ago, our team found that the original compromise had occurred via a phishing email opened by three specific employees, 21 months prior to our being called for assistance.  More recently, I've found evidence of the creation of and access to web shells going back a year prior to the activity that caught our attention in the first place.  Many of those who respond to these types of incidents will tell you that it is not at all unusual to find that the intruders had compromised the infrastructure several months (sometimes even a year or more) before the activity that got someone's attention (C2 comms, etc.) was generated, and it's often host-based analysis that will demonstrate that.

Also, what happens when these groups no longer use malware?  If malware isn't being used, then what will be monitored on the network and looked for in logs?  That's when host-based analysis becomes really important.  While quite a few analysts know how to use application prefetch/*.pf files in their analysis, what happens when the intruder accesses a server?  There is a great deal of information available within a system image that can provide insight into what the intruder was doing, what they were interested in, etc., if you know where to go to get it.  For example, I've seen intruders use Windows Explorer to access FTP sites, and the only place that artifacts of this activity appear is in the user's shellbags.

Used appropriately, host-based analysis can assist in scoping an incident, as well as be extremely valuable for collecting detailed information about an intruder's activities, even going back several months.

Now, some folks think that host-based analysis takes far too long to get answers and is not suitable for use in high-tempo environments.  However, when used appropriately, this aspect of analysis can provide some extremely valuable insights.  Like the other aspects of analysis (memory, network), host-based analysis can provide findings that are unique to it and not available via the others.  Full disk acquisition is not always required; nor is completely indexing the image, or running keyword searches across the entire image.  When done correctly, answers to critical questions can be retrieved from limited data sources, allowing the response team to take appropriate action based on those findings.

RTFM
I recently received a copy of RTFM that I'd purchased, and I have to say, I really like the layout of this book.  It is definitely a "field manual", something that can be taken on-site and used to look up common command line options for widely-used tools (particularly when there is no, or limited, external access), and something that an analyst can write their own notes and reminders in.  For example, the book includes some common WMIC and PowerShell commands to use to quickly collect information from a compromised system.  In a lot of ways, it reads like one of the O'Reilly Publishing "...in a Nutshell" books...just the raw facts, assuming a certain level of competency in the reader, and no fluff.

As anyone who has read my books knows, I have a number of checklists that I use (included in the book materials), and it occurred to me that they'd make a great field manual when pulled together in a similar format.  For example, I have a cheatsheet that I use for timeline creation...rather than printing it out over and over, I could put something like this into a field manual that I could reference when I need to, without having to have an Internet connection or look it up on my system.

I think that having a field manual that includes commonly used command line options is a great idea.  Also, it's hard to remember all of the different artifacts that can fall into different categories, such as 'program execution', or things to look for if you're interested in determining lateral movement within an infrastructure...particularly across different versions of Windows...and having a field manual would be very useful.  There are a number of useful tidbits in my blog that I cannot access if I don't have Internet access, and I can't remember everything (which is kind of why I write stuff into my blog in the first place).  Having a reference guide would be extremely beneficial, I think...and I already have a couple of great sources for this sort of information - my case notes, my blog, etc.

Actually, I think that a lot of us have a whole bunch of little tidbits that we don't write down and share, and don't occur to us during the heat of the moment (because analysis can often be a very high-energy event...), but would be extremely valuable if they were shared somehow.

I'm not one of those people with an eidetic memory who can remember file and Registry paths, particularly on different versions of Windows, unless I'm using them on a regular basis.  The same is true for things like tools available for different artifacts...different tools provide different information, and are useful in different circumstances.

Telling A Story
Chris Pogue recently published a post on the blog of his new employer, Nuix, describing how an investigator needs to be a storyteller.  Chris makes some very important points that many of us who work in this field likely see play out over and over, in particular the three points he lists right at about the middle of the post.  Chris's article is worth a read.  And congratulations to Chris on his new opportunity...I'm sure he'll do great.

Something to keep in mind, as well, is that when developing our story...when translating what we've done (log analysis, pcap capture and analysis, host-based analysis) into something that the C-suite executives can digest...we must all be very mindful to do so based on the facts that we've observed.  That is to say, we must be sure not to fill in gaps in the story with assumption or embellishment.  As "experts" (in the client's eyes), we were asked to provide answers...so when telling our story, expecting the client to just "get it", or giving them a reference to go look up or research, really isn't telling our story.  It's being lazy.  Our job is to take a myriad of highly technical facts and findings, and weave them into a story that allows the C-suite executives to make critical business decisions in a timely manner.  That means we need to be correct, accurate, and timely.  To paraphrase what I learned in the military, many times a good answer now is better than the best answer delivered too late.  We need to keep in mind that while we're looking at logs, network traffic, the output of Volatility plugins, or parsing host-based data, there's a C-suite executive who has to report to a compliance or regulatory board, to whom bits and bytes, flags and Registry values mean absolutely nothing.

All of this also means that we need to be open to exposure and criticism.  What Chris says in his article is admittedly easier said than done.  How often do we get feedback from clients?

Mentoring
So how do we get better at telling our story, particularly when each response engagement is as different from the others we've done as snowflakes? This leads us right into a thread over on Twitter where mentoring was part of the topic.  Our community is in dire need of mentoring.  Mentoring is a great way to go about improving what we do, because many times we're so busy and engaged in response and analysis that we don't have the time to step back and see the forest for the trees, as it were.  Sometimes it takes an outside influence to get us to see the need to change, or to show us a better way.  However, I do not get the impression that many of the folks in our community are open to mentoring, and that impression has very little to do with distance.

First, mentoring needs to be an active, give-and-take relationship, and my experience in the community at large (as an analyst, writer, presenter, etc.) has been that there is a great deal of passivity.  We rarely see thoughtful reviews of things such as books, presentations, and conferences in this community.  People don't want to share their thoughts, nor have their name associated with such things, and without that give-and-take, we're missing a great opportunity for overall improvement and advancement in this industry.

Second, mentoring opens the mentee to exposing what they're currently doing.  Very few in this community appear to want or seek out that kind of exposure, even if it's limited to just the mentor.  Years ago, I was part of a team and our manager instructed everyone to upload the reports that they'd sent to clients to a file share.  After several months, I accessed the file share to upload my most recent report, and found that the folder for that quarter was empty, even though I knew that other analysts had been working really hard and billing a great deal of hours.  Our manager conducted an audit and found that only a very few of us were following his instructions.  While there was never any explanation that I was aware of for other analysts not uploading their reports, my thought remains that they did not want to expose what they were doing.  As Chris mentioned in his article, he's been tasked with reviewing reports provided to clients by other firms.  When we were on the IBM ISS ERS team together, I can remember him reviewing two such reports.  I've been similarly tasked during my time in this field, and I've seen a wide range of what's been sent to clients.  I've taken those experiences and tried to incorporate them into how I write my reports; I covered a great deal of this in chapter 9 of Windows Forensic Analysis 4/e.

Monday, June 30, 2014

RegRipper

Just a reminder to everyone out there that the OFFICIAL download link for the most current version of RegRipper is available from the link found here, or here (i.e., at the [RegRipper download] link).

Some folks have reached out to me recently and said, "I have the most recent download...", and that's apparently not been the case.  I left the Google Code page for RegRipper populated in part because there is some information that I put in the Wiki pages that I still want to be able to access.

Just a note...if you think that the download link is broken, be sure to check to see if your infrastructure allows access to Dropbox.

If you want to see what's going to be new with RegRipper, be sure to vote for the presentation at OSDFCon.

Thursday, May 22, 2014

Book Writing: To Self-Publish, or Not

The CEIC Conference is going on as I write this, and Suzanne Widup's author panel went on yesterday.  I'm not at the conference, so like many others, I live vicariously through what gets Tweeted about the conference, as well as about specific portions of the conference, such as the panel.

I saw a question posted to Twitter, in which the tweeter asked, "for the panel, why not self-publish like RTFM?"

My initial thought was, you need to consider the members of the panel and the books they've written or co-authored; those titles really don't lend themselves too well to a format similar to RTFM, which, in some cases, is described as a collection of notes and tips bound into a book.  For example, I don't think I could see Hacking Exposed Computer Forensics in a format similar to RTFM.  As such, the question is essentially an apples-to-oranges comparison.  While self-publishing definitely has its place, it may not always be the best option for the material, nor for the author.  But that doesn't mean that there aren't publishing possibilities out there that would be very well suited to a format similar to RTFM.

I've addressed this topic before, but it is a good question, and certainly bears addressing.  Essentially, the choice of whether to go with a publisher or to self-publish comes down to how much time and effort you want to invest in getting the book published...at least, that's my perspective.  Other perspectives might be about what the author gets out of it, or how much someone has to pay for the book.  Writing a book is tough enough as it is...having someone there to do some of those things that need to be done (i.e., formatting, illustrations, copy editing, printing, etc.) in order for the book to be available to others means that the author can focus time and effort on writing the book, and not have to stop and figure out how to get something done, or find a resource.

A friend of mine told me that her husband publishes CDs for his band on his own, rather than going through a production firm.  That means that he does everything himself; in some cases this makes perfect sense.  In others, such as writing a DFIR book, maybe not so much.  Most of us don't like to write as it is; if you self-publish, would you have someone review your materials for grammar, consistency, and technical accuracy?  If so, would it be someone you pay for that service?  Where's that money going to come from?  How will you handle illustrations or figures?

As such, consider this... if self-publishing were the sole route available, we'd likely have far fewer books available in the DFIR field.  Or, maybe another way to put it is that if self-publishing really were that easy, we'd have more books.  In the years that I've been involved in writing books, I've seen a fairly good number of folks start down the road and not make it very far, for a variety of reasons.  In some cases, it's due to the realization that there's much more to writing a book than simply having an idea.  When the publisher comes back and gives you a bunch of forms to fill out, and requests a market analysis and a detailed outline, with a swag on a word count, the reality of the situation becomes readily apparent.  I've seen people stop there.  I've also seen one instance where the author got past the point of signing a contract, and the publisher came back later and modified the contract, almost doubling the word count for the final manuscript, but made no other changes to the contract, including the delivery date.  The author simply walked away.

I've read a number of the reviews for RTFM, and to be honest, the book sounds like a fantastic idea; it was apparently originally intended to be an accumulation of someone's notes to be passed on to their team.  In the right hands, something like that can be extremely useful, and I can relate; when I was in grad school in '95-ish, I taught myself Java programming and relied heavily on O'Reilly's "...in a Nutshell" books for tips and guidance.  I found them very useful, because I wasn't looking to have the basics of programming, or of Java specifically, explained to me...I just wanted the bare bones stuff, with no fluff.  Material that might be better suited to an RTFM-like format might be something like what's found here.

Self-publishing simply isn't for everyone...and the audience for a book like this is pretty limited.  I can see books like this being written for other tools, but I think that one of the strengths of RTFM is the base assumption that anyone purchasing the book is familiar with both operating at the command line, and with Linux.  While there are certain segments of the DFIR community that would strongly suggest that that's exactly how it should be, the fact is that this is far from reality.

Resources
Self-publishing a book: 25 things you need to know - I strongly suggest that you read them all
Lulu.com - self-publishing company
How to self-publish - a guide, with pictures

Saturday, May 17, 2014

Artifacts

I received a request right before WFA 4/e hit the streets...after the writing and editing was complete and while the printed book was being shipped...to "talk about anti-forensics".  Unfortunately, at this point, I still haven't heard any more than just that, but I've had more than a couple of instances where knowledge of artifacts and Windows structures has allowed me to gather valuable data for analysis, even when the bad guy took steps, however unknowingly, to remove other artifacts.  I say "unknowingly" because sometimes the steps taken may not specifically be intended to be "anti-forensic" in nature, but may still have that effect.

Something that I've found over the years is that even when steps are taken to remove indications of activity, there may still be artifacts available that can prove valuable to an analyst.  While the analyst may not be able to answer THE question that they have, there may be data available that will still provide insight into the case and allow other questions to be answered.  For example, if the intruder accessed a system via RDP and removed or obscured some valuable data source (i.e., cleared the Windows Event Log, etc.) and the question you have is, "where did they access this system from?", you might not be able to answer that question.  However, using other data, you would be able to show when they were active on the system, what they were doing at the time, and even demonstrate access to other systems.

To quote Blade: "When you understand the nature of a thing, you know what it's capable of."  I know, I know...but I really wanted to work that quote into this post.  ;-)  What I'll do now is take a look at some of the things I and others have seen, and provide some thoughts as to other data sources that would be of value.

FTP Via Windows Explorer
I've seen the native ftp.exe client used on systems in a variety of cases, and not only to exfiltrate data.  Back when I was doing PCI forensic analysis, we saw a good deal of SQL injection activity, some of which would use echo to create an FTP script on a system, and then launch that script using the -s switch with ftp.exe.  The use of ftp.exe to infiltrate or exfil data can leave artifacts in the Registry, and for workstation systems, there will be an application prefetch file created.  On XP systems, the last accessed time on the file will be updated, and there will very likely be a value created in the user's MUICache key for ftp.exe.

However, you can use Windows Explorer to connect to an FTP server.  My publisher used to have me do this in order to transfer chapters, and I've seen this used a number of times on various cases.  The interesting thing about this is that while it involves interaction via the GUI shell, it leaves far fewer artifacts than using the command line utility.  In fact, having looked at several cases where this technique is used, the only place that I've found artifacts of this activity is in the user's shellbag artifacts.  I've discussed these artifacts before, so I won't go into a great deal of detail here.  Suffice to say, shellbags can be a great resource, demonstrating access to network resources such as shares (even C$ shares), MTP devices (digital cameras and smartphones), FTP sites, etc., providing artifacts of activity that you might not find anywhere else on the system.

Clearing Windows Event Logs
Ever access a system and find out that the records in the Security and System Event Logs only go back a day or two?  One of the things I talk about in my books and presentations is that while it's easy to assume that the default configuration of the Event Logs caused them to roll over, it's also pretty trivial to check and see if there was some other reason for this, such as a user clearing the Event Logs.  If the logs were cleared, you'll likely see a record in the Security Event Log indicating as much, so look for the appropriate event ID (517 on XP/2003, or 1102 on Vista and later systems).  I've seen intruders do this, and I've seen admins who are responding to and troubleshooting an "incident" do this, as well.  Many times when the Event Log is cleared, you'll see a user accessing the Event Viewer (usually visible via the UserAssist data) just prior to that time.

When the Event Log is cleared, that doesn't mean that all data goes away.  You can try to recover Windows Event Log records using Willi Ballenthin's EVTXtract, or depending on what you're trying to illustrate, you can look to other data.  For example, I've had instances when the Windows Event Logs have been cleared, but I've been able to demonstrate a user's windows of activity over time using other sources of data, such as the Registry, VSCs, etc.

The Power of Mini-, Micro-, and Nano-Timelines
Daniel Garcia recently added a review of WFA 4/e to the Amazon page for the book (thanks again, Daniel, for taking the time to do that, I greatly appreciate it); in that review he mentioned mini-timelines.  Interestingly enough, I use this technique all the time.  Many times, I'll grab some information and start putting together a timeline from a small subset of data sources, in order to get an idea of what's going on, and then once I have that info, I'll kick off a heftier process and let that run while I'm analyzing what I have.  Or, as is often the case, the results of analyzing the mini-timeline will provide me with the direction for my next steps.  This allows me to see things that I might have missed had I included voluminous amounts of file system metadata, Windows Event Logs, etc., and goes back to the technique of using overlays that I mentioned over two years ago.  This technique has proved useful in a number of cases.  For example, if someone is in a data center acquiring data, I can send them a batch script (similar to auto_rip) that runs various tools (RegRipper, etc.) and have them ship me the output of the tools.  This allows me to start analysis while the bulk of the data is in transit, and when it shows up, I'm ready to start my focused analysis.  Or, they can acquire the data and once it's been verified, send me subsets of the data (Registry hive files, Windows Event Logs, etc.) in a secure archive, allowing me to begin analysis on a few KB of data while the full archive (several hundred GB of data) is en route.
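
As an example of the kind of script I'm referring to, here's a minimal sketch of a wrapper that runs the RegRipper CLI tool against a handful of extracted hives and drops all of the output into a single file; the hive paths, the output path, and the profile names are all assumptions for a hypothetical case:

use strict;
my %hives = ("f:/case/system"       => "system",
             "f:/case/software"     => "software",
             "f:/case/ntuser.dat"   => "ntuser",
             "f:/case/usrclass.dat" => "usrclass");

open(my $out, ">", "f:/case/regripper_out.txt") || die "$!";
foreach my $hive (sort keys %hives) {
    print $out "---- ".$hive." (".$hives{$hive}." profile) ----\n";
    print $out `rip.pl -r $hive -f $hives{$hive}`;   # RegRipper CLI: -r hive, -f profile
}
close($out);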

Not long ago, I collected the NTUSER.DAT, USRCLASS.DAT and index.dat files from three user profiles within an image.  These profiles were thought to be active during the time of the incident, so I parsed the Registry hives with RegRipper, and the index.dat files with a custom tool, and created a micro-timeline that showed me not just times of activity, but patterns of activity that I would have missed had I included all of the data (file system, WEVTX, Registry hives, other user profiles, etc.) available within the image.  The results of this analysis allowed me to then focus my analysis on the more inclusive timeline and develop a much clearer picture of the activity that was the focus of my interest.

Browser Analysis
When we hear 'browser analysis', most of us think about data sources such as index.dat files or SQLite databases, and tools like IEF.  But there are other potentially valuable data sources available to us, such as cookie files, bookmarks/favorites, and session recovery files.

If the user is using IE and you're interested in their activity during a specific point in time, you may have options available to you to get the information you're looking for.  For example, the TypedURLs key (and TypedURLsTime key, if they're using Windows 8) may prove fruitful, particularly when used in conjunction with VSCs.  If IE crashed (for whatever reason) while the user was browsing the web, you'll have the Travelog files available, and these can provide much more insight into what the user was doing than an index.dat record would.

The IE session restore files are structured storage/OLE format, and Yogesh has an EnScript available for parsing them.  I've used strings to get the data that I want, and MiTeC's Structured Storage Viewer to view the contents of individual streams within the file.  Python has a good module for parsing OLE files (I really haven't found anything that works as well in Perl, and have written some of my own stuff), and it shouldn't take too much effort to put a parser together for these files.  What's really fascinating about these files is that within a timeline, you may see where the user launched IE (UserAssist data), accessed a particular site (TypedURLs key, index.dat data), but at that point, you really can't tell too much about what they did, or what sort of interaction they had with the page, or pages, that they visited.  If you're lucky and there's a session saved in a Travelog file, then you can see what they were doing at the time of the crash.  I've seen commands sent to database servers via default stored procedures.  So, these files can be a rich source of data.

For other browsers, here's information on session restore functionality:
Chrome User Data Directory  (here's a tip for restoring the last session from the command line)
Firefox - Mozilla Session Restore

Summary
My point in all this is that while in most cases we really want to see all of the data, there are times when we either don't need everything, or as is often the case, everything simply isn't available and we have to make the best use of what we have.  For example, if I simply want to see when a user was active on a system, over time, I wouldn't need everything from the system, and I wouldn't need everything from the user profile.  All I'd need to get started are the two Registry hives, browser history files, and maybe the Jump Lists.  The total size of this data is much less than the full image, and it's even smaller if I can get someone on site to run the tools and just send me the data.

Friday, May 16, 2014

Updates

Exploit Artifacts
Corey is back with yet another of his amazing exploit artifacts blog posts!  This time around, the post has to do with Silverlight exploits from 2013; even so, this is something (providing exploit artifacts) that's been talked about for a long time, and Corey's one of the only ones (THE only one that I know of) who crosses the boundaries and self-imposed obstacles between vuln devs/pen testers and the IR community.  I know that there are others who are documenting artifacts associated with exploits with a particular focus on exploit kits, but Corey is the only one that I'm aware of that's doing as complete a job, and sharing that information publicly.

What I really like about Corey's approach is that he's targeting those areas most likely to yield results.  Looking into the USN change journal entries has been fruitful for many an investigator, particularly if they can get access to the system within a relatively short time after the incident occurs.

I also like the fact that Corey clearly documents what he did, as well as what he found.  He also points out a couple of "tidbits" that he found that require further examination and testing before they're discussed a bit more fully.

WFA 4/e Reviews
I saw recently that there is another review of WFA 4/e posted to Amazon; thanks to Daniel for sharing his thoughts!  I greatly appreciate all reviews, regardless of how they're perceived, and I really appreciate reviews like Daniel's because he took the time to actually read some of the new information that was added to this edition.

Volatility Plugin Manager
Via David Cowen's blog, it seems that there's a GUI plugin manager available for Volatility now, written by Andrew Nind.  This looks like a great idea, and I look forward to seeing what people think of it.

RegRipper Updates
I've been working on some updates to RegRipper, not so much based on community-wide input but more based on some discussions that I've had with a very few (like, 3 or 4) folks within the DFIR community.  There've been no comments (that I've seen) regarding the usefulness or value of the alerting function that was added to the tool last year.

Output Formats
Earlier this year, I exchanged some emails with Willi Ballenthin, as he had put the effort in to look at the code for RegRipper, and he came up with a means for allowing users to create their own output formats.

Ultimately, given the wide variation in available data and possible formats, I just thought that it would be easier to provide a switch to allow users to select the 'regular' default output format that is available now, or choose between CSV, TLN, or bodyfile output formats.  The switch will be incorporated into the CLI tool, and for the time being, nothing will change with the GUI...the default output format will be the 'regular' default output that we see now.  However, that may change in the future, once I get the necessary modifications completed.

The move to provide bodyfile output was based solely on discussions with two people, but I hope that others will find it useful.

Adding .csv output is something that has been asked for in the past, and the caveat for this output format is that it's going to be very different across the various plugins. As anyone who's looked at the output of the plugins knows, what RegRipper extracts from the Run key is different from what it extracts from the UserAssist subkeys.

One thing I've wanted to do for some time is consolidate plugin output formats within the plugin, rather than create new plugins for each output format.  I've found a number of plugins to be extremely valuable during targeted threat engagements, particularly when anti-forensics measures have been employed.  I've used this combination of plugins to develop mini- or even micro-/nano-timelines that have led to significant findings with respect to intruder activity and the scope of their reach, even when other potentially valuable resources were not available.  By combining the output of these plugins from various systems using the TLN format, I'm able to see a clear progression of intruder activity across multiple systems and user accounts, and achieve a significant level of visibility.  However, as things are now, if I modify one plugin, I have to be sure to incorporate that modification into the version of the plugin that produces TLN output, if there is one.  By incorporating the various output formats into a single plugin, the entire process of maintaining the plugins is simplified.
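
To illustrate what I mean, here's a rough sketch...not actual RegRipper plugin code, with the interface details simplified...of how a single reporting routine within a plugin could emit the same data in the 'regular', CSV, or five-field TLN (time|source|system|user|description) formats, based on a switch passed in from the CLI tool:

use strict;
sub report {
    my ($format, $time, $source, $system, $user, $descr) = @_;
    if ($format eq "tln") {                  # time|source|system|user|description
        print join("|", $time, $source, $system, $user, $descr)."\n";
    }
    elsif ($format eq "csv") {
        print join(",", scalar gmtime($time)." Z", $source, $user, $descr)."\n";
    }
    else {                                   # 'regular' default output
        print $descr."  [".scalar gmtime($time)." Z]\n";
    }
}

# Example: the same entry (epoch time, source, system, user, description),
# reported three different ways
my @entry = (1405629532, "REG", "WIN7-VM", "user1", "UserAssist - cmd.exe run");
report($_, @entry) foreach ("regular", "csv", "tln");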

Is there any value in incorporating the l2t_csv format, as well?

Incorporate Artifact Categories
I know that others have talked about artifact categories before, and that one variation was included in the original SANS DFIR poster that was published.  However, I've taken a slightly different approach to the identification of artifact categories, one that is more along the lines of what Corey discussed when he released auto_rip, and what I listed in the HowTo blog posts from last July.  With respect to the Registry specifically, I'm thinking that it would be very useful to group artifacts within categories, so that they're more easily understood and remembered.

Multiple Hives
Something that I've already started incorporating into plugins is combining functionality within a single plugin to query multiple hives.  Within RegRipper, this doesn't happen automatically; what this means is that there are some plugins that can be run against either the NTUSER.DAT or the Software hive.  Similarly, a new plugin I wrote recently (and haven't included in the plugin archive yet) to address Adam's latest autostart discovery can be run against either the NTUSER.DAT or the System hive.

To be clear, this does not mean that RegRipper will correlate data from across multiple hives...RegRipper's foundational design doesn't allow for that.  What it means is that one plugin can be run against several hives.

Artifacts
I ran across an instance recently where Yogesh Khatri's TraveLog research proved to be very beneficial.  Someone was using IE to perform lateral movement, and we weren't sure what they were actually doing...but IE had crashed, and we were able to "see" what was visible in each tab when the browser had its issue.  Very cool.  Thanks to Yogesh for sharing his research...you never know when you're going to be able to make good use of information like that, and if he hadn't shared it, it's unlikely that we would've even known to look at the files.

Tuesday, May 13, 2014

Links

OpenLiveView
Tim Vidas has posted OpenLV, an update to the popular LiveView tool that many of us have used before.  When conducting an investigation, there are a number of ways to access acquired images, such as via any number of analysis frameworks (DFF, ProDiscover, Autopsy, etc.) that provide a great deal of functionality for interacting with data.  There are tools for mounting an acquired image as a read-only volume (FTK Imager, etc.), but OpenLV allows you to boot the acquired image.  This can provide a great deal of visibility into the system, allowing the investigator to see what the intruder saw, interact with the system the way the intruder interacted with it, and even verify malware autostart functionality.

Be sure to check out the paper in the DFRWS proceedings, written by Tim, Matthew Geiger, and Brian Kaplan.

EVTXtract
The other day I was answering a question about Windows Event Log analysis, and I ran across Willi Ballenthin's tool, EVTXtract (PDF here).  This tool allows an analyst to recover deleted Windows Event Log records.  The Windows Event Log (.evtx) files follow a binary structure that's much different from the Event Log (.evt) files on Windows XP and 2003, but deleted records can apparently be recovered, at least in some cases.

ThunderBird Parser
Mari has shared her ThunderBird Parser.  Her blog post has some great information...she talks about what issue she faced and how she chose to address it by writing her own code.  Doing this not only helped her understand the underlying data on a much more intimate level, but it also opened that understanding up to other analysts.

Conferences
My conference attendance changed recently, and I am no longer a member of Suzanne Widup's author panel at the SANS DFIR Summit in Austin, TX.  I was really looking forward to speaking on the panel (I've written a book or two), and discussing various topics around writing DFIR books.  In fact, we'd already started addressing some questions in my blog, and I was looking forward to hearing and addressing others.

My not attending the summit has nothing whatsoever to do with any review of my book, and honestly, I'm more than a little shocked that someone would think that, let alone say it out loud to others.

Brian Carrier has opened up the call for papers for the OSDFCon, to be held in Herndon, VA, on 5 Nov. This has always been a great conference to attend (see here), and needs more practitioners to submit presentations.  In fact, I've recommended to Mari that she submit to the conference to give a presentation on the ThunderBird email parser, or any of the other tools she's written.  I've already submitted two presentation ideas.

I'm also looking for thoughts and ideas regarding other conferences whose CfPs I can submit to.  CEIC is out because it's already come and gone.  If anyone has any thoughts regarding a conference (or conferences) that is specific to DFIR, and includes topics on addressing targeted threats, I'd greatly appreciate it if you'd comment here or drop me an email.  Thanks.

Tuesday, May 06, 2014

New Stuff

RegRipper Plugins
Corey's busy this week attending Volatility training, but last night sent me a couple of RegRipper plugins he wrote, inspired by what he was learning in the training.  He'd also sent me a third one, which I got the okay to include right after I'd posted the newest release of RegRipper, so I'm including it now.

I've added processor_architecture.pl and winevt.pl, as well as an updated pagefile.pl, to the download archive.  However, I have not updated the appropriate profiles to include the two new plugins, nor have I changed the version number for the download.  Many thanks to Corey for sending those plugins in!  Keep 'em coming!

Malware Hiding Techniques
I've had another article posted on the Dell SecureWorks Research blog.  Part of the purpose of this post was to illustrate how sometimes we make assumptions about how malware (or other artifacts) may have ended up on the system; while there are times that the assumptions may be correct, when they aren't, the actual method of infection can be a game-changer.  The analysis that resulted in these findings was fascinating, to say the least.

After you've read the post, something else to consider from the examples is how they circumvent protections.  For the first example, the assumption many analysts have with respect to the deployed RAT is that it gets on systems as a result of a spearphishing attack.  As such, protections against this infection vector would include email filtering and user education; however, both of these protections are obviated if the user is capable of disabling protections (AV, etc.) and installing the RAT.  As mentioned in the article, network monitoring flagged the system based on C2 communications, and efforts to install endpoint detection technologies were...again...obviated by the user.

With the second example, the malware file itself used an interesting technique to hide itself from casual view on the system, which also worked equally well against some digital analysis techniques.  The carrier file was identified as Vercuser.B, which "cleared the way" for the Poison Ivy infection, by checking for various protection mechanisms (VM, running AV software, etc.).

Something else that isn't mentioned in the blog post is that I initially ran into some analysis roadblocks, or so I thought...but after reaching out to Jamie Levy for some input, she pointed me in the right direction and the analysis went really smoothly.  A good bit of what she helped me with was described in this blog post.  I didn't crowd-source this one because, to be honest, I didn't want to hear what a lot of folks thought; I wanted to hear what an expert knew.

My previous articles published to the Dell SecureWorks Research blog are here, and here.

WFA 4/e
There have been additional reviews of WFA 4/e posted on Amazon; again, thank you to everyone who's taken the time to share their thoughts...I greatly appreciate it.

There has been some discussion on social media regarding the edition number for this book.  While I understand the issue that was raised, I do not control what the publisher chooses to do with respect to numbering the editions.  I did, however, get several folks that I trust to look at my outline and planned updates, and give their opinions as to the proposed content.  The book was tech edited by someone knowledgeable and known within the DFIR community.  Further, I have asked for feedback on the third edition (as I have for my other books), and have gone to the community to ask for input regarding what they'd like to see in the next edition.  In both cases, there has been very little of either.  I did receive a request to talk about "anti-forensics", but after asking that person to elaborate and expand on it a bit, I have yet to hear back.

I have to say, I have asked Syngress about their color scheme for the books.  Digital Forensics with Open Source Tools, Windows Forensic Analysis 2/e and 3/e, and Windows Registry Forensics all have the same color scheme, and the same shade of green.  I've been to conferences and given presentations during which I've stated at the beginning (when a copy of one of my books appears on the "who am I" slide) that the books all have the same color scheme, and that it confuses people.  Then, at the end of the presentation, I ask a question, offering to give away a copy of one of my books to whoever gets the right answer...and inevitably, the winner immediately states that they already have the book, only to find out that they thought they did because they'd only looked at the color scheme.  That happened at the USA CyberCrime Conference just last week.  So, yes, it does confuse people.

My point is simply this...there's a great deal authors do not control when it comes to working with a publisher.  However, I have tried to address the content issue by reaching out to the community, particularly while developing the outline for the next book or edition, and I have received little input.  I tried to address one of the first questions I received regarding the content for this edition in this blog post, although that came after this book was actually published.

One thing that I hope folks consider doing before commenting or writing a review (good or bad) is actually reading the content of the book.  The two chapters at the end of the book are new material.  In the third edition, chapter 8 was "Application Analysis"; in this edition, it's "Correlating Artifacts", which includes information similar to what I posted to this blog in July 2013.  Chapter 9, "Reporting", is entirely new.

Friday, April 25, 2014

WFA 4/e Reviews

Brett Shavers has posted the first reviews (that I'm aware of) of WFA 4/e...one on Amazon, and a longer one on his WinFE blog.

Not so much a review, but Corey refers to the book in one of his recent blog posts, saying that he's still digesting it.  He also refers to the RecentFileCache.bcf file, as well as to a tool I wrote and provided with the book materials for parsing it.

I greatly appreciate the time and effort folks are putting into reading and digesting the book, and particularly those who are writing reviews.  Thank you, all.

Addendum, 29 Apr: Two more reviews are up on Amazon!

Wednesday, April 16, 2014

Follow up on TTPs post

David Bianco's "Pyramid of Pain"
As a follow-up to my previous post on TTPs, a couple of us (David Bianco, Jack Crook, etc.) took the discussion to G+.  Unfortunately, I did not set the conversation to public, so I wanted to recap the comments here and then take this back to G+ for open discussion.

First, if you're new to this discussion, start by reading my previous post, and then check out David's post on combining the "Kill Chain" with the Pyramid of Pain.  For another look at this, check out David's Enterprise Security Monitoring presentation from BSidesAugusta - he talks about the kill chain, PoP, and getting inside the adversary's OODA loop.  Pay particular attention to David's "bed of nails" slide in the presentation.

Second, I wanted to provide a synopsis of the discussion from G+.  Those involved included myself, David, Jack, and Ryan Stillions...David brought him into the conversation initially because Ryan had developed a concept of "Detection Maturity Level" that overlaps with David's Pyramid concept.  Nothing is available yet, and hopefully Ryan will blog on it soon.

To start off the discussion, I asked: if finding, understanding, and countering TTPs causes the adversary "pain", why is there so much emphasis within the community on finding indicators?  There was the thought that indicators are shared because that's what clients are looking for and asking for, implying that those providing 'threat intel' services follow client requests rather than driving them.  This goes back to maturity...in order to share TTPs, organizations have to be mature enough to (a) detect and find them, and (b) understand and employ them within their infrastructure.  There was another comment that indicators at the lowest levels of the PoP are focused on because there are more of them...a recent presentation at RSA 2014 mentioned "3000 indicators".  From a marketing perspective, that's much better than "TTPs for one group".

Ryan followed up with a comment that focusing on the lower levels of the PoP actually inflicts pain on the analysts (re: false positives), and he used the phrase "Cost of Context Reconstruction" (Ryan, start blogging, dude!!), which refers to the idea that the lower in the stack you operate, the longer it takes to re-establish situational context, arrive at conclusions, pivot, etc.

The discussion then moved to organizational maturity and people...skills, etc.  David recommended the blog post and video linked above, and I went off to get caught up.

The question was then posed asking whether attribution is important.  Ryan thought that would be a great panel question, and I agree...but I also think this is a great question to start thinking about now, not simply to mature and crystallize your own thoughts, but because when it is posed to a panel, a lot of folks are going to be hearing it for the first time.

The discussion then centered on the point that attribution can be important, depending upon the context (if you're in the intel or LE communities), but for most organizations whose maturity level has them at the lower levels of the Pyramid, attribution is a distraction.  What needs to be focused on at that point is moving further up the Pyramid and maturing the organization to the point where TTPs are understood, detected, and employed within the detection and response framework.

This then circled back to the "why", with "because that's what the client is asking for" thrown in as a possible response.  David brought up the concept of "provisional attribution" during the course of an incident, meaning that "this is what we know at the moment, but we may be wrong so it's subject to change at any time".

At that point, we got back to "hey, maybe we should open this up"...hence, this post.  So, that's where things stand.  To summarize:

Use the Pyramid of Pain to:
- Identify detection/skill gaps
- Determine organizational detection/response maturity (looking for a blog post from Ryan...)
- Combine with the Kill Chain to bring "pain" to the adversary

There was also the idea of actually having a panel discussion at a conference.  I think that's a great idea, but I also think it's limiting...shelving the discussion until a conference means no movement, and then, all of a sudden, there's a discussion that many folks are seeing for the first time, without having had time to catch up.  So, we'll take this back to G+ for the time being, simply because at this point, there really haven't been any better ideas for a forum for this sort of discussion.

Addendum: The G+ post with comments can be found here.

Monday, April 14, 2014

WFA 4/e

Okay, so Windows Forensic Analysis 4/e showed up in a couple of boxes on my doorstep tonight.  It's now a thing.  Cool.

As I write this, I'm working on finishing up the materials that go along with the book.  I got hung up on something, and then there was work...but the link will be posted very soon.

A question from "Dark Operator" on Twitter:

so it is a version per version of Windows or the latest will cover 7 and 8?

I know the cover says "for Windows 8", and I tried to incorporate as much info as I could about Windows 8 into the book by the time it went in for the final review before printing...which was back in February.  This edition includes all the Windows 7 information from the third edition, plus some new information (and some corrections), as well as some information for Windows 8.

The thing about questions like this is that Twitter really isn't the medium for them.  If you have a question or comment about the book contents, you can email me or comment here.  It's just that sometimes the answers to questions like that do not fit neatly into 140 characters or less.

Over the past couple of months, I've been asked to speak at a number of events, and when I ask what they'd like me to speak about, I generally get responses like, "...what's new in Windows 8?"  The simple answer is...a lot.  Also, most folks doing DFIR work may not be completely familiar with what information is available for Windows 7 systems, so what could I say about Windows 8 in an hour that would be useful to anyone?  Some things (Jump Lists, much of the Registry, etc.) are very similar in Windows 8 to what they are in Windows 7, but other things...parts of the Registry, in particular...are different enough to pose some challenges to a good number of analysts.

So, once again...I'll be posting the link to the materials that go along with the book very soon.  I post the materials online because people kept leaving their DVDs somewhere (at home, at work, with a friend, in their car...) and needed a means of getting the files.  Posting them online also allows me to update the materials as needed.

Questions?  Comments?  Leave 'em here, or email me.  Thanks so much.

Addendum: The book materials are posted here.