Remember that old tag line from the '80s? It's right up there with "Where's the beef?" However, my question is directed more toward forensic analysis, including anything collected during live response.
Where do you go for thoughts, input or validation regarding your live response process? Do you just grab a copy of Helix and run the WFT tools? Or are you concerned about doing that blindly (believe me, there are folks out there who aren't...), and want some kind of validation? (I'm not saying that WFT and toolkits like it are bad...in fact, that's not the case at all. What I am saying is that running the tools without an understanding of what they're doing is a bad thing.)
What about analysis of an image? Do you ever reach out and ask someone for insight into your analysis, just to see if you've got all of your bases covered? If so, where do you go? Is it a tight group of folks you know, and you only contact them via email, or do you reach out to a listserv like CFID, or go on lists like ForensicFocus?
Another good example is the Linux Documentation Project and the list of HowTo documents. These are great sources of information...albeit not specific to forensic analysis...and something I've used myself.
NIST provides Special Publications in PDF format, and Security Horizon is distributed in PDF. CyberSpeak is a podcast. IronGeek posts videos, mostly related to hacking. I included a couple of desktop video captures on the DVD with my book, showing how to use some of the tools.
While I agree that we don't need yet another resource to pile up on our desks and go unread, I do wonder at times why there isn't something out there specific to forensic analysis.
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Friday, December 28, 2007
Deleted Apps
Another question I've received, as well as seen in online forums, has to do with locating deleted applications.
As Windows performs some modicum of tracking of user activities, you may find references to applications that were launched in the UserAssist keys in the user's NTUSER.DAT file. Not only would you find references to launching the application or program itself, but I've seen where the user has clicked on the "Uninstall" shortcut that gets added to the Programs menu of the Start Menu. I've also seen in the UserAssist keys where a user has launched an installation program, run the installed application, and then clicked on the Uninstall shortcut for the application.
You may also find references to the user launching the "Add/Remove Programs" Control Panel applet.
If you're dealing with an XP system you may find that if the application was originally installed via an MSI package, a Restore Point was created when the application was installed...and one may have been created when the application was removed, as well. So, be sure to parse those rp.log files in the Restore Points.
An MFT entry that has been marked as deleted is great for showing that a file, or even several files, had been deleted. Analysis of the Recycle Bin/INFO2 file may show you something useful. But there are other ways to find more data, to include showing that at one time, the application had been used...such as by parsing the .pf files in the XP Prefetch directory, and performing Registry analysis.
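For anyone who wants to poke at this by hand, here's a rough Perl sketch (not one of the tools from the book's DVD; the script name is made up) that decodes the ROT13-encoded value names found beneath the UserAssist\{GUID}\Count keys:

#!/usr/bin/perl
# uadecode.pl - rough sketch, not one of the book's tools.  The value names
# beneath the UserAssist\{GUID}\Count keys are ROT13-encoded; this decodes
# names passed on the command line (or piped in via STDIN).
use strict;

my @names = @ARGV ? @ARGV : <STDIN>;
foreach my $name (@names) {
    chomp($name);
    ( my $decoded = $name ) =~ tr/A-Za-z/N-ZA-Mn-za-m/;
    print $name,"  =>  ",$decoded,"\n";
}

For example, feeding it the encoded name "HRZR_EHACNGU" gets you back "UEME_RUNPATH", the prefix you'll see on entries recording programs the user launched.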
Other Resources:
Intentional Erasure
The MAC Daddy
I received a question in my inbox today regarding locating a system's MAC address within an image of a system, and I thought I'd share the response I provided...
"The path to the key that tells you which NICs are/were in use on the system is:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkCards
Beneath this key, you will see a set of subkeys, each with different numbers;
on my system, I see "10", "12", and "2". Each of these keys contains values;
Description and ServiceName. The ServiceName value contains the GUID
for the interface.
Using the GUIDs, go to:
HKLM\SYSTEM\ControlSet00x\Services\Tcpip\Parameters
\Interfaces
*Be sure to use the ControlSet marked as "Current".
Beneath this key, you'll see subkeys with names that are GUIDs. You're
interested in the GUIDs you found beneath the previous key. Within each key,
you will find the values associated with that interface.
By default, Windows does not retain the MAC address in the Registry. I'm
aware that there are sites out there that say that it does, but they are incorrect...
at least, with regards to this key. If you *do* find an entry within the "Interfaces"
key above that contains a value such as "NetworkAddress", it is either specific
to the NIC/vendor, or it's being used to spoof the MAC address (this is a known
method).
Also check the following key for subkeys that contain a "NetworkAddress" value:
HKLM\SYSTEM\ControlSet001\Control\Class
\{4D36E972-E325-11CE-BFC1-08002bE10318}
Other places you can look for the MAC address:
*Sometimes* (not in all cases) if you find the following key, you may find a value
named "MAC", as well:
HKLM\SOFTWARE\Microsoft\Windows Genuine Advantage
Another place to look is Windows shortcut (*.lnk) files...Windows File Analyzer
is a GUI tool that parses directories worth of *.lnk files and one of the fields that
may be populated is the MAC address of the system."
I thought others might find this helpful as well...
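To put the above into practice on a live system, here's a rough Perl sketch using the Win32::TieRegistry module (my illustration, not the exact tool I use); it walks the NetworkCards key and then looks up each interface GUID beneath the Tcpip\Parameters\Interfaces key:

#!/usr/bin/perl
# nics.pl - rough sketch for a live system; not the exact tool I use.
# Enumerate the NetworkCards subkeys, pull each Description and ServiceName
# (the interface GUID), then look up that GUID beneath Tcpip\Parameters\Interfaces.
use strict;
use Win32::TieRegistry( Delimiter => "/" );

my $cards = $Registry->{"LMachine/SOFTWARE/Microsoft/Windows NT/CurrentVersion/NetworkCards/"}
    || die "Could not access the NetworkCards key.\n";

foreach my $num ( keys %$cards ) {
    next unless ( $num =~ m{/$} );                     # subkeys only
    my $card = $cards->{$num};
    my $desc = $card->{"/Description"};
    my $guid = $card->{"/ServiceName"};
    print $desc,"\n  GUID: ",$guid,"\n";

    # On a live system, CurrentControlSet *is* the ControlSet marked "Current"
    my $if = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Services/Tcpip/Parameters/Interfaces/".$guid."/"};
    next unless ($if);
    foreach my $val ( "DhcpIPAddress", "NetworkAddress" ) {
        my $data = $if->{"/".$val};
        print "  ",$val,": ",$data,"\n" if ( defined $data && $data ne "" );
    }
    print "\n";
}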
Sunday, December 23, 2007
Perl Scripting Book
It looks like my Perl Scripting book made it out just in time for Christmas!
Oddly enough, it only seems to be available on Amazon...I can't seem to locate it on the Syngress site, but I did find it on the Elsevier site (Elsevier owns Syngress).
Perl Scripting for IT Security is not a follow-on or companion to my previous book, Windows Forensic Analysis. Rather, it goes more into showing what can be done, and how it can be done, in the world of Incident Response and Computer Forensics Analysis using an open-source solution such as Perl. The book, in part, shows that with a little bit of knowledge and skill, we are no longer limited to viewing only what our commercial forensic analysis tools show us.
Addendum, 28 Dec: A box showed up on my doorstep today, with several copies of the book! Woot!
Sunday, December 02, 2007
Windows Shortcut (LNK) files
I saw a question in a forum recently regarding Windows Shortcut Files, and thought I would post something that included my findings from a quick test.
The question had to do with the file MAC times found embedded in the .lnk file, with respect to those found on the file system (FAT32 in this case). The question itself got me curious, so I ran a quick test of my own.
I located a .lnk file in my profile's (on my own system) Recent folder (NTFS partition) and ran the lslnk.pl script from my book against it, extracting (in part) the embedded file times:
MAC Times:
Creation Time = Fri Nov 16 20:33:21 2007 (UTC)
Modification Time = Fri Nov 16 22:47:50 2007 (UTC)
Access Time = Fri Nov 16 20:33:21 2007 (UTC)
I then double-clicked the Shortcut file itself (from within Windows Explorer), launching the .pdf file. I closed the .pdf file and used "dir /ta" to verify that the last access time on the shortcut file and on the .pdf file itself had been updated to the current time (in this case, 7:29am EST). I re-ran the lslnk.pl script against the original shortcut file, and extracted the same MAC times as seen above.
BTW...the creation date on the .pdf file in question is 11/16/2007 03:33 PM...which is identical to the creation date embedded in the shortcut file, with the time zone taken into account.
Based on extremely limited testing, it would appear that the MAC times in the shortcut file do not change following the initial creation of the shortcut file.
Can anyone confirm/verify this, on NTFS as well as FAT32?
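If you'd like to run the same check but don't have lslnk.pl handy, here's a bare-bones sketch of the idea (the script name is made up); the three embedded FILETIMEs (creation, access, write) live at offsets 0x1C, 0x24, and 0x2C of the shortcut header:

#!/usr/bin/perl
# lnktimes.pl - rough sketch, not the book's lslnk.pl.  Reads the fixed-size
# shortcut header and prints the three embedded FILETIMEs in UTC.
use strict;

my $file = shift || die "Usage: lnktimes.pl <file.lnk>\n";
open( my $fh, "<", $file ) || die "Cannot open $file: $!\n";
binmode($fh);
my $hdr;
read( $fh, $hdr, 0x4C ) >= 0x34 || die "File too short to be a shortcut.\n";
close($fh);

my @names = ( "Creation", "Modification", "Access" );
my @offs  = ( 0x1C, 0x2C, 0x24 );        # creation, write, access times
foreach my $i ( 0 .. 2 ) {
    my ( $lo, $hi ) = unpack( "VV", substr( $hdr, $offs[$i], 8 ) );
    # FILETIME = 100-ns intervals since 1 Jan 1601; convert to Unix epoch
    my $secs = int( ( $hi * 4294967296 + $lo ) / 10000000 - 11644473600 );
    print $names[$i]," Time = ",scalar gmtime($secs)," (UTC)\n";
}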
Saturday, December 01, 2007
Forensic Analysis
Often (well, often enough) we run into instances where it becomes evident that someone has an unrealistic expectation of what answers forensic analysis can provide.
I know that right now, most of you are thinking, "dude...no way!" But I say, truly, it is so. There's even a term for it...it's called the CSI Effect. There are even articles written about it (here, and here).
Let's look at an example of what I mean. Our hero, the forensic analyst and incident responder Dude Diligence, gets a call from a company, and the caller says that they've been "hacked" ("hacked" is the new "smurfed", by the way...a verb with many nuances and flavors...). Dude gets on site to find that even before he was called, a considerable amount of remediation was going on...systems were being accessed and "cleaned" (another one of those verbs with many nuances and flavors...) by administrators, with no real focus as to who was doing what, when, or why.
I'm sure that by now, some of you who are consultants like our hero are now weeping into your beers, sobbing, "dude..." under your breath...but our story continues.
Dude does the best he can to get the story of what happened, and what systems were affected. In the end, he acquires images of about half a dozen systems and returns to the Dude Lab to do analysis. Before leaving, however, he makes sure that he has a solid understanding of what questions need to be answered for the customer...specifically, was this a targeted attack, and was sensitive data (could be credit card numbers, could be PII, PHI, etc.) compromised.
To make a long story short, ultimately what Dude finds is that...he can't find anything. Systems had not been configured for any sort of viable logging (the system, as well as applications), and what logs were there had been removed from the system. Systems had been scanned with AV applications, etc., etc. Traces of the intruder's activity (if there was one) had been completely obliterated by the actions of those who "responded" to the incident. Even if Dude had determined that sensitive information was, in fact, on the system, he isn't able to provide a definitive response to the question of, does that information now, as a result of the intrusion/compromise, exist somewhere it shouldn't? Was it exposed?
Even under the best of circumstances, there are just some questions that forensic analysis cannot answer. One question that comes up time and time again, particularly in some of the online forensic forums, is, from an imaged system, can you tell what files were copied off of the system? Dude has found artifacts of files being uploaded to web-based email systems such as GMail, and found other artifacts, but what computer forensic artifacts are there if someone opens a Word or PDF document on their screen and copies information out of it onto a piece of paper, using a pen? How about if someone connects a removable storage device to the system and copies the files onto that device? Sure, there are artifacts of the device being connected to the system (particularly on Windows systems), but without actually imaging and analyzing that removable storage device, how would Dude determine what files were copied from the system to the device?
I've talked about alternative analysis techniques before, and the solutions I'm looking toward include, for example, how you may be able to show names of files that someone viewed, and possibly the dates, even if the user deleted and overwrote the files themselves, or viewed them from removable media, etc. There are lots of ways to get additional information through analysis of the Registry, Event Logs, and even of the contents of RAM captured from the system...but there are just some questions that computer forensics can not answer.
That being said, how does our hero Dude Diligence go about his dude-ly analysis? Well, to begin with, Dude is part-sysadmin. This means that he understands, or knows that he needs to understand, the interrelation between the different components of a system...be that the interrelation between the external router, firewall, and the compromised system within the network infrastructure, or the interrelation between the network, the apparently compromised host (i.e., the operating system), and the applications running on the host.
When it comes to analyzing intrusions, Dude doesn't have to be a pen-tester or "ethical hacker"...although it may help. Instead, Dude needs to shift his focus a bit and not so much concentrate on breaking into or compromising a system, but instead concentrate "around" it...what artifacts are left, and where, by various actions, such as binding a reverse shell to the system through the buffer overflow of an application or service? For example, when someone logs into a system (over the network via NetBIOS or ssh or some other application, or at the console), where would Dude expect to see artifacts? What does it mean to Dude if the artifacts are there? What if they're not there?
Remember Harlan's Corollary to Jesse's First Law of Computer Forensics?
This leads us back to the first statement in this post...there are some actions for which the artifacts are not available to forensic analysts like Dude when he's performing a post-mortem analysis, and there are some questions that simply cannot be answered by that analysis. There are some questions that can be answered, if Dude pursues the appropriate areas in his analysis...
Wednesday, November 21, 2007
Alternative Methods of Analysis
Do I need to say it again? The age of Nintendo Forensics is gone, long past.
Acquiring a system is no longer as simple as removing the hard drive, hooking it up to a write blocker, and imaging it. Storage capacity is increasing, and devices capable of storing data are diversifying and becoming more numerous (along with the data formats)...all of which are becoming more ubiquitous and commonplace. As the sophistication and complexity of devices and operating systems increase, the solution to the backlog of examinations requiring additional resources is training and education.
Training and education lead to greater subject-matter knowledge, allowing the investigator to ask better questions, and perhaps even make better requests for assistance. Having a better understanding of what is available to you and where to go to look leads to better data collection, and more thorough and efficient examinations. It also leads to solutions that might not be readily apparent to those that follow the "point and click execution" methodology.
Take this article from Police Chief Magazine, for example. 1stSgt Cohen goes so far as to specifically mention the collection of volatile data and the contents of RAM. IMHO, this is a HUGE step in the right direction. In this instance, a law enforcement officer is publicly recognizing the importance of volatile data in an investigation.
It's also clear from the article that training and education has led to the use of a "computer forensics field triage", which simply exemplifies the need for growth in this area. It's also clear from the article that a partnership between law enforcement, the NW3C and Purdue University has benefited all parties. It would appear that at some point in the game, the LEs were able to identify what they needed, and were able to request the necessary assistance from Purdue and NW3C...something known in consulting circles as "requirements analysis". At some point, the cops understood the importance of volatile memory, and thought, "we need this...now, how do we collect it in the proper manner?"
So what does this have to do with alternative methods of analysis? An increase in knowledge allows you to seek out alternative methods for your investigation.
For example, take the Trojan Defense. The "purist" approach to computer forensics...remove the hard drive from the system, acquire an image, and look for files...appears to have been less than successful, in at least one case in 2003. The effect of this decision may have set the stage for other similar decisions. So, let's say you've examined the image, searched for files, even mounted the image as a file system and hit it with multiple AV, anti-spyware, and hash-comparison tools, and still haven't found anything. Then let's assume you had collected volatile data...active process list, network connections, port-to-process mapping, etc. Parsing that data, wouldn't the case have been a bit more iron-clad? You'd be able to show that at the time the system was acquired, here are the processes that were running (along with their command line options, etc.), installed modules, network connections, etc.
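As a simple illustration of the kind of volatile data I'm talking about (a rough sketch, not the FSP or any particular tool), even something this small will snapshot the active process list, with command lines, via WMI:

#!/usr/bin/perl
# procs.pl - rough sketch; snapshot the active process list (with command
# lines) on a live system via WMI, using the standard Win32::OLE module.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    || die "Cannot connect to WMI: ".Win32::OLE->LastError()."\n";

foreach my $proc ( in $wmi->InstancesOf("Win32_Process") ) {
    printf "%-6s %-6s %-24s %s\n",
        $proc->{ParentProcessId},
        $proc->{ProcessId},
        $proc->{Name},
        ( defined $proc->{CommandLine} ? $proc->{CommandLine} : "" );
}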
At that point, the argument may have been that the Trojan included a rootkit component that was memory-resident and never wrote to the disk. Okay, so let's say that instead of running individual commands to collect specific elements of memory (or, better yet, before doing that...), you'd grabbed the contents of RAM? Tools for parsing the contents of RAM do not need to employ the MS API to do so, and can even locate an exited or unlinked process, and then extract the executable image file for that process from the RAM dump.
What if the issue had occurred in an environment with traffic monitoring...firewall, IDS and IPS logs may have come into play, not to mention traffic captures gathered by an alert admin or a highly-trained IR team? Then you'd have even more data to correlate...filter the network traffic based on the IP address of the system, isolate that traffic, etc.
The more you know about something, the better. The more you know about your car, for example, the better you are able to describe the issue to a mechanic. With even more knowledge, you may even be able to diagnose the issue and be able to provide something more descriptive than "it doesn't work". The same thing applies to IR, as well as to forensic analysis...greater knowledge leads to better response and collection, which leads to more thorough and efficient analysis.
So how do we get there? Well, someone figured it out, and joined forces with Purdue and NW3C. Another way to do this is through online collaboration, forums, etc.
Tuesday, November 20, 2007
Windows Memory Analysis
It's been a while since I've blogged on memory analysis, I know. This is in part due to my work schedule, but it also has a bit to do with how things have apparently cooled off in this area...there just doesn't seem to be the flurry of activity that there was in the past...
However, I could be wrong on that. I had received an email from someone telling me that certain tools mentioned in my book were not available (of those mentioned...nc.exe, BinText, and pmdump.exe, only BinText seems to be no longer available via the FoundStone site), so I began looking around to see if this was, in fact, the case. While looking for pmdump.exe, I noticed that Arne had released a tool called memimager.exe recently, which allows you to dump the contents of RAM using the NtSystemDebugControl API. I downloaded memimager.exe and ran it on my XP SP2 system, and then ran lsproc.pl (a modified version of my lsproc.pl for Windows 2000 systems) against it and found:
0 0 Idle
408 2860 cmd.exe
2860 3052 memimager.exe
408 3608 firefox.exe
408 120 aim6.exe
408 3576 realsched.exe
120 192 aolsoftware.exe(x)
1144 3904 svchost.exe
408 2768 hqtray.exe
408 1744 WLTRAY.EXE
408 2696 stsystra.exe
244 408 explorer.exe
Look familiar? Sure it does, albeit the above is only an extract of the output. Memimager.exe appears to work very similarly to the older version of George M. Garner, Jr's dd.exe (the one that accessed the PhysicalMemory object), particularly where areas of memory that could not be read were filled with 0's. I haven't tried memimager on a Windows 2003 (no SPs) system yet. However, it is important to note that Nigilant32 from Agile Risk Management is the only other freely available tool that I'm aware of that will allow you to dump the contents of PhysicalMemory from pre-Win2K3SP1 systems...it's included with Helix, but if you're a consultant thinking about using it, be sure to read the license agreement. If you're running Nigilant32 from the Helix CD, the AgileRM license agreement applies.
I also wanted to follow up and see what AAron's been up to over at Volatile Systems...his Volatility Framework is extremely promising! From there, I went to check out his blog, and saw a couple of interesting posts and links. AAron is definitely one to watch in this area of study, and he's coming out with some really innovative tools.
One of the links on AAron's blog went to something called "Push the Red Button"...this apparently isn't the same RedButton from MWC, Inc. (the RedButton GUI is visible in fig 2-5 on page 50 of my first book...you can download your own copy of the "old skool" RedButton to play with), but is very interesting. One blogpost that caught my eye had to do with carving Registry hive files from memory dumps. I've looked at this recently, albeit from a different perspective...I've written code to locate Registry keys, values, and Event Log records in memory dumps. The code is very alpha at this point, but what I've found appears fairly promising. Running such code across an entire memory dump doesn't provide a great deal of context for your data, so I would strongly suggest first extracting the contents of process memory (perhaps using lspm.pl, found on the DVD with my book), or using a tool such as pmdump.exe to extract the process memory itself during incident response activities. Other tools of note for more general file carving include Scalpel and Foremost.
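To illustrate the carving idea (a rough sketch, nothing like production code and not the alpha code I mentioned), locating possible hive files in a raw dump can start with nothing more than a scan for the "regf" signature that begins a hive:

#!/usr/bin/perl
# regscan.pl - rough sketch; scan a raw memory dump for the "regf" signature
# that starts a Registry hive file and print the offsets of any hits.
# Signatures that straddle a chunk boundary will be missed in this simple version.
use strict;

my $dump = shift || die "Usage: regscan.pl <memory dump>\n";
open( my $fh, "<", $dump ) || die "Cannot open $dump: $!\n";
binmode($fh);

my ( $data, $offset ) = ( "", 0 );
while ( my $bytes = read( $fh, $data, 0x100000 ) ) {     # 1MB chunks
    my $pos = 0;
    while ( ( $pos = index( $data, "regf", $pos ) ) > -1 ) {
        printf "Possible hive header at offset 0x%x\n", $offset + $pos;
        $pos++;
    }
    $offset += $bytes;
}
close($fh);

Establishing context around each hit still has to be done by hand (or with smarter code), which is why extracting process memory first is the better approach.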
So...more than anything else, it looks like it's getting to be a good time to update processes and tools. I mentioned an upcoming speaking engagement earlier, and I'm sure that there will be other opportunities to speak on Windows memory analysis in the future.
Sunday, November 18, 2007
Jesse's back!
Jesse blogged recently about being over at the F3 conference in the UK, and how he was impressed with the high SNR. Jesse shared with me that during a trivia contest, one of the teams chose the name, "The Harlan Carvey Book Club". Thanks, guys (and gals, as the case may be)! It's a nice thought and very flattering...though I don't ever expect to be as popular as Jesse or The Shat. ;-)
Saturday, November 17, 2007
Upcoming Speaking Engagement
Next month, I'll be in Hong Kong, speaking to the local police, as well as at the HTCIA-HK conference. The primary topic I've been asked to present is Registry analysis, with some live response/memory analysis, as well. The presentations vary from day-long to about 5 hrs, with a 45 min presentation scheduled for Thu afternoon - I'll likely be summing things up, with that presentation tentatively titled "Alternative Analysis Methods". My thought is that I will speak to the need for such things, and then summarize my previous three days of talks to present some of those methods.
Sunday, November 11, 2007
Pimp my...Registry analysis
There are some great tools out there for viewing the Registry in an acquired image. EnCase has this, as does ProDiscover (I tend to prefer ProDiscover's ability to parse and display the Registry...) and AccessData's Registry Viewer. Other tools have similar abilities, as well. But you know what? Most times, I don't want to view the Registry. Nope. Most times, I don't care about 90% of what's there. That's why I wrote most of the tools available on the DVD that ships with my book, and why I continue to write other, similar tools.
For example, if I want to get an idea of the user's activity on a system, one of the first places I go is to the SAM hive, and see if the user had a local account on the system. From there, I go to the user's hive file (NTUSER.DAT) located in their profile, and start pulling out specific bits of information...parsing the UserAssist keys, etc...anything that shows not only the user's activities on the system, but also a timeline of that activity. Thanks to folks like Didier Stevens, we all have a greater understanding of the contents of the UserAssist keys.
Now, the same sort of thing applies to the entire system. For example, one of the tools I wrote allows me to type in a single command, and I'll know all of the USB removable storage devices that had been attached to the system, and when they were last attached. Note: this is system-wide information, but we now know how to tie that activity to a specific user.
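By way of example (a rough sketch of the idea against a live system, not the compiled tool from the DVD), the devices themselves come straight out of the USBSTOR key; the "when last attached" piece comes from the key LastWrite times, which I've left out here:

#!/usr/bin/perl
# usbstor.pl - rough sketch; list USB removable storage devices recorded
# beneath the USBSTOR key on a live system.  Last-attached times come from
# the key LastWrite times (RegQueryInfoKey), which this sketch does not pull.
use strict;
use Win32::TieRegistry( Delimiter => "/" );

my $usb = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Enum/USBSTOR/"}
    || die "No USBSTOR key found.\n";

foreach my $dev ( keys %$usb ) {
    next unless ( $dev =~ m{/$} );
    ( my $device = $dev ) =~ s{/$}{};
    print $device,"\n";
    my $devkey = $usb->{$dev};
    foreach my $inst ( keys %$devkey ) {                # instance IDs (unique IDs/serial #s)
        next unless ( $inst =~ m{/$} );
        ( my $id = $inst ) =~ s{/$}{};
        my $friendly = $devkey->{$inst}->{"/FriendlyName"} || "";
        print "  ",$id,"  ",$friendly,"\n";
    }
    print "\n";
}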
On XP systems, we also have the Registry files in the Restore Points available for analysis. One great example of this is the LEO that wanted to know when a user had been moved from the Users to the Administrators group...by going back through the SAM hives maintained in the Restore Points, he was able to show approximately when that happened, and then tied that to other activity on the system, as well.
So...it's pretty clear that when it comes to Registry analysis, the RegEdit-style display of the Registry has limited usefulness. But it's also clear that there really isn't much of a commercial market for these kinds of tools. So what's the answer? Well, just like the folks who get their rides or cribs pimped out on TV, specialists bring a lot to the table. What needs to happen is greater communication of needs, and there are folks out there willing and able to fulfill that need.
Here's a good question to get discussion started...what's a good, easy-to-use and easy-to-access format for a guideline of what's available in the Registry (and where)? I included an Excel spreadsheet with my book...does this work for folks? Is the "compiled HTML" (i.e., *.chm) Windows Help format easier to use?
If you can't think of a good format, maybe the way to start is this...what information would you put into something like this, and how would you format or organize it?
Pimp my...live acquisition
Whenever you perform a live acquisition, what do you do? Document the system, write down things like the system model (e.g., Dell PowerEdge 2960), maybe write down any specific identifiers (such as the Dell service tag), and then acquire the system. But is this enough data? Are we missing things by not including the collection of other data in our live acquisition process?
What about collecting volatile data? I've had to perform live acquisitions of systems that had been rebooted multiple times since the incident was discovered, as well as systems that could not be acquired without booting them (SAS/SATA drives, etc.). Under those circumstances, maybe I wouldn't need to collect volatile data...after all, what data of interest would it contain...but maybe we should do so anyway.
How about collecting non-volatile data? Like the system name and IP address? Or the disk configuration? One of the tools available on the DVD that comes with my book lets you see the following information about hard drives attached to the system:
DeviceID : \\.\PHYSICALDRIVE0
Model : ST910021AS
Interface : IDE
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x41ab2316
Serial No : 3MH0B9G3
DeviceID : \\.\PHYSICALDRIVE1
Model : WDC WD12 00UE-00KVT0 USB Device
Interface : USB
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x96244465
Serial No :
Another tool lets you see the following:
Drive Type File System Path Free Space
----- ----- ----------- ----- ----------
C:\ Fixed NTFS 17.96 GB
D:\ Fixed NTFS 38.51 GB
E:\ CD-ROM 0.00
G:\ Fixed NTFS 42.24 GB
In the above output, notice that there are no network drives attached to the test system...no shares mapped. Had there been any, the "Type" would have been listed as "Network", and the path would have been displayed.
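For anyone curious what sits behind output like the above, both listings boil down to a couple of WMI queries; here's a rough approximation (not the exact tools from the DVD):

#!/usr/bin/perl
# drives.pl - rough sketch; collect basic physical disk and logical drive
# info via WMI during a live acquisition.  (On XP, the disk serial number
# actually comes from Win32_PhysicalMedia, which isn't queried here.)
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    || die "Cannot connect to WMI.\n";

foreach my $disk ( in $wmi->InstancesOf("Win32_DiskDrive") ) {
    print "DeviceID  : ",$disk->{DeviceID},"\n";
    print "Model     : ",$disk->{Model},"\n";
    print "Interface : ",$disk->{InterfaceType},"\n";
    printf "Signature : 0x%x\n\n", ( $disk->{Signature} || 0 );
}

my %types = ( 2 => "Removable", 3 => "Fixed", 4 => "Network", 5 => "CD-ROM" );
foreach my $vol ( in $wmi->InstancesOf("Win32_LogicalDisk") ) {
    printf "%-4s %-10s %-6s %.2f GB free\n",
        $vol->{DeviceID}."\\",
        ( $types{ $vol->{DriveType} } || "Unknown" ),
        ( $vol->{FileSystem} || "" ),
        ( $vol->{FreeSpace} || 0 ) / ( 1024 ** 3 );
}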
Does it make sense to acquire this sort of information when performing a live acquisition? If so, is this information sufficient...or, what other information, at a minimum, should be collected?
Are there conditions under which I would acquire certain info, but not others? For example, if the system had not been rebooted, would I dump the contents of physical memory (I'd say "yes" to that one)...however, if the system had been accessed by admins, scanned with AV, and rebooted several times, would it do me any good at that point to dump RAM or should I simply document the fact that I didn't, and why?
Would this information be collected prior to the live acquisition, or immediately following?
What are your thoughts...and why?
PS: If you can think of a pseudonym I can use...think of it as a "hacker handle" (that thing that Joey from Hackers kept whining about), but REALLY cool, like "Xzibit", let me know. Oh, yeah...Ovie Carroll needs one, too. ;-)
Thursday, November 08, 2007
Pimp my...forensics analysis
How often do you find yourself in the position where, when performing forensic analysis, you end up not having the tools you need (i.e., the tools you do have don't show you what you need, or don't provide you with useful output)? Many of the tools we use provide basic functionality, but there are very few tools that go beyond that and are capable of providing what we need over a large number of cases (or in some instances, even from examination to examination). This leads to one of the major challenges (IMHO) of the forensic community...having the right tool for the job. Oddly enough, there just isn't a great market for tools that do very specific things like parse binary files, extract data from the Registry, etc. The lack of such tools is very likely due to the volume of work (i.e., case load) that needs to be done, and to a lack of training...commercial GUI tools with buttons to push seem to be preferred over open-source command line tools, but only if the need is actually recognized. Do we always tell someone when we need something specific, or do we just put our heads down, push through on the investigation using the tools and skill sets that we have at hand, and never address the issue because of our work load?
With your forensic-analysis-tool-of-choice (FTK, EnCase, ProDiscover, etc.), many times you may still be left with the thought, "...that data isn't in a format I can easily understand or use...". Know what I mean? Ever been there? I'm sure you have...extract that Event Log file from an image and load it up into Event Viewer on your analysis system, only to be confronted with an error message telling you that the Event Log is "corrupted". What do you do? Boot the image with LiveView (version 0.6 is available, by the way) and log into it to view the Event Log? Got the password?
The answer to this dilemma is to take a page from Xzibit's book and "pimp my forensics analysis". That's right, we're going to customize or "trick out" our systems with the tools and stuff we need to do the job.
One way to get started on this is to take a look at my book [WARNING: Shameless self-promotion approaching...], Windows Forensic Analysis; specifically at some of the tools that ship on the accompanying DVD. All of the tools were written in Perl, but all but a few have been "compiled" into standalone EXEs so that you don't have to have Perl installed to run them, or know anything about Perl -- in fact, based on the emails I have received since the book was released in May 2007, the real limiting factor appears to be nothing more than a lack of familiarity with running command line (CLI) tools (re: over-dependence on pushing buttons). The tools were developed out of my own needs, and my hope is that as folks read the book, they too will recognize the value in parsing the Event Log files, Registry, etc., as well as the value of the tools provided.
Another resource is the upcoming Perl Scripting for Forensic Investigation and Security, to be published in the near future by Syngress/Elsevier.
What do these two resources provide? In a nutshell, customized and customizable tools. While a number of tools exist that will let you view the Registry in the familiar (via RegEdit) tree-style format, how many of those tools will translate arbitrary binary data stored in the Registry values? How many will do correlation, not only between multiple keys within a hive, but across hives, as well?
How many of the tools will allow you to parse the contents of a Windows 2000, XP, or 2003 Event Log file into an Excel spreadsheet, in a format that is easily sorted and searched? Or report on various event record statistics?
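As one small example of what I mean (a rough sketch, not a tool from the book or DVD), event records in the 2000/XP/2003 .evt format can be located by their "LfLe" magic number and dumped straight to CSV for a spreadsheet:

#!/usr/bin/perl
# evt2csv.pl - rough sketch; locate event records in a 2000/XP/2003 .evt file
# by the "LfLe" magic number and dump a few header fields as CSV.  The 48-byte
# file header carries the same magic, so hits with a length of 0x30 are skipped.
use strict;

my $file = shift || die "Usage: evt2csv.pl <file.evt>\n";
open( my $fh, "<", $file ) || die "Cannot open $file: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

print "RecordNumber,TimeGenerated (UTC),EventID,EventType\n";
my $pos = 0;
while ( ( $pos = index( $data, "LfLe", $pos ) ) > -1 ) {
    if ( $pos >= 4 ) {
        my $len = unpack( "V", substr( $data, $pos - 4, 4 ) );
        if ( $len != 0x30 ) {
            # fields following the magic: RecordNumber, TimeGenerated,
            # TimeWritten, EventID (DWORDs), then EventType (WORD)
            my ( $rec, $gen, $wrt, $id, $type ) =
                unpack( "V4 v", substr( $data, $pos + 4, 18 ) );
            printf "%u,%s,%u,%u\n", $rec, scalar gmtime($gen), ( $id & 0xFFFF ), $type;
        }
    }
    $pos += 4;
}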
How many of these tools provide the basis for growth and customization, in order to meet the needs of a future investigation? Some tools do...they provide base functionality, and allow the user to extend that functionality through scripting languages. Some are easier to learn than others, and some are more functional than others. But even these can be limiting sometimes.
The data is there, folks...what slows us down sometimes is either (a) not knowing that the data is there (and that a resource is a source of valuable data or evidence), or (b) not knowing how to get at that data and present it in a usable and understandable manner. Overcoming this is as simple as identifying what you need, and then reaching out to the Xzibits and Les Strouds of the forensic community to get the job done. Rest assured, you're not the only one looking for that functionality...
With your forensic-analysis-tool-of-choice (FTK, EnCase, ProDiscover, etc.), many times you may still be left with the thought, "...that data isn't in a format I can easily understand or use...". Know what I mean? Ever been there? I'm sure you have...extract that Event Log file from an image and load it up into Event Viewer on your analysis system, only to be confronted with an error message telling you that the Event Log is "corrupted". What do you do? Boot the image with LiveView (version 0.6 is available, by the way) and log into it to view the Event Log? Got the password?
The answer to this dilemma is to take a page from Xzibit's book and "pimp my forensics analysis". That's right, we're going to customize or "trick it out" our systems with the tools and stuff you need to do the job.
One way to get started on this is to take a look at my book [WARNING: Shameless self-promotion approaching...], Windows Forensic Analysis; specifically at some of the tools that ship on the accompanying DVD. All of the tools were written in Perl, but all but a few have been "compiled" into standalone EXEs so that you don't have to have Perl installed to run them, or know anything about Perl -- in fact, based on the emails I have received since the book was released in May 2007, the real limiting factor appears to be nothing more than a lack of familiarity with running command line (CLI) tools (re: over-dependence on pushing buttons). The tools were developed out of my own needs, and my hope is that as folks read the book, they too will recognize the value in parsing the Event Log files, Registry, etc., as well as the value of the tools provided.
Another resource is the upcoming Perl Scripting for Forensic Investigation and Security, to be published in the near future by Syngress/Elsevier.
What do these two resources provide? In a nutshell, customized and customizable tools. While a number of tools exist that will let you view the Registry in the familiar (via RegEdit) tree-style format, how many of those tools will translate arbitrary binary data stored in the Registry values? How many will do correlation, not only between multiple keys within a hive, but across hives, as well?
How many of the tools will allow you to parse the contents of a Windows 2000, XP, or 2003 Event Log file into an Excel spreadsheet, in a format that is easily sorted and searched? Or report on various event record statistics?
How many of these tools provide the basis for growth and customization, in order to meet the needs of a future investigation? Some tools do...they provide base functionality, and allow the user to extend that functionality through scripting languages. Some are easier to learn than others, and some are more functional than others. But even these can be limiting at times.
The data is there, folks...what slows us down sometimes is either (a) not knowing that the data is there (and that a resource is a source of valuable data or evidence), or (b) not knowing how to get at that data and present it in a usable and understandable manner. Overcoming this is as simple as identifying what you need, and then reaching out to the Xzibits and Les Strouds of the forensic community to get the job done. Rest assured, you're not the only one looking for that functionality...
Saturday, October 27, 2007
Some new things...
I've been offline and not posting for a while, I know...not much time to post with so much going on during my day job (but that's a Good Thing).
A couple of new things have popped up recently that I wanted to share with everyone. First, Didier Stevens has produced an update to his UserAssist program, for parsing the UserAssist Registry keys on a live system. This update parses the GUIDs, giving you even more information about the user's activities. This is something that I'll have to add to my own tools that parse the same keys, but during post-mortem analysis.
Second, Peter Burkholder over at Ellipsis has produced a patch for running my Forensic Server Project (FSP) on *nix-variant systems, to include MacOSX. I have said from the very beginning that this could be done, and Peter has gone and done it! Very cool!
Jesse Kornblum has released md5deep 2.0, which has some new features and bug fixes...check it out.
If I've missed anything, please drop me a line and let me know...
Saturday, September 15, 2007
Registry Analysis
You've probably noticed a huge gap between posts...sorry about that. Contrary to popular belief, I don't just sit around all day writing books. ;-) In addition to actually working, I like to think about ways to make my job easier, and to make the final product of my analysis better. Like many, I've known for some time that considerable, valuable data exists in the Registry...there's quite a bit there, whether you're looking for evidence of devices attached to the system, or of user activity. One of the things I've noticed is that there are a good number of tools available that allow you to do Registry data extraction...that is, pull data from the Registry, presenting the data found in a key or value. AccessData has their Registry Viewer, and tools such as EnCase and ProDiscover allow you to visualize the Registry data...however, all of these are just tools, and each has its own strengths and weaknesses.
One of the issues that confronts us today is knowing what we're looking at or looking for. Having a tool present data to us is nice, but if we don't know how that data is populated, then what good is the tool when some one-off condition is encountered? If the analyst does not understand how the artifact in question is created or modified, then what happens when the data that he or she expects to see is not present? Remember Jesse's First Law of Computer Forensics and my own subsequent corollary?
Why is this important? Well, for one, there's been a great deal of discussion about antiforensics lately, starting with a couple of articles in CIO and CSO magazines. "Antiforensics" and "counterforensics" (thanks to Richard for the definitions) are not new concepts at all...such techniques have been in use for quite some time. However, systems are becoming more and more complex and, at the same time, feature-rich. One of the benefits of Windows XP, for example, is that the feature-rich nature of the operating system goes some way toward offsetting its inherent antiforensic features.
So...let's try to come full circle...Registry analysis comes into play in assisting the investigator in determining what happened, and in many cases, when. Registry analysis can provide key or corroborating data. No tool out there will provide everything an investigator needs...it's up to the investigator to understand what's needed (ie, which Registry keys pertain to a particular P2P client, or which show the files most recently accessed with an image viewing utility?) and then understand how to get it. There are tools out there that do not have pretty GUIs and buttons you can push that will provide you with that information.
Conference Presentations
I noticed from Anton's blog that the DFRWS 2007 papers were posted and available. I agree with Anton, a couple of the presentations are interesting. I was aware of Andreas's work with the Vista Event Log format and with transforming those logs into plain text...cool stuff.
Rich Murphey had a paper entitled Automated Windows event log forensics, which I thought was interesting. In his paper, Rich points to "repairing" Event Log files that have been corrupted, using the method made available by Capt. Stephen Bunting. I can't say that I agree fully with this, as the Event Log files can be easily parsed directly from their binary format. Rich provides the beginnings of this in his paper, and more detailed information is available in my book. Rich also goes into extracting event records from unallocated space using Scalpel, but the signature he uses specifically targets the Event Log file header and not event records in general (again, check out Windows Forensic Analysis for details on this). Extracting event records (and other objects that contain timestamp information, such as Registry keys) from unallocated space, the pagefile, or a RAM dump takes a bit more work, but it's definitely an achievable goal. Knowing the structure of these objects, we can locate the "magic number", perform some checking to ensure that we have indeed found a valid object, and then extract the information. Oh...wait...that's how the brute force EProcess parsing tools work on RAM dumps! ;-)
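For anyone who wants to experiment with this, here's a rough Perl sketch of what carving event records by their "magic number" might look like. It assumes the EVENTLOGRECORD layout (a length DWORD, then the "LfLe" signature, then record number and two 32-bit Unix timestamps, with the length repeated as the last DWORD of the record); it's an illustration, not a production carver:
use strict;
my $file = shift or die "carve_evt.pl <binary file>\n";
open(my $fh, '<', $file) or die "$file: $!";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);
my $pos = 0;
while (($pos = index($data, "LfLe", $pos)) != -1) {
    my $start = $pos - 4;    # the record length DWORD precedes the signature
    if ($start >= 0) {
        my ($len, $sig, $rec, $gen, $wri) = unpack("V5", substr($data, $start, 20));
        # sanity checks: plausible size, and the record should end with the same length value
        if ($len >= 0x38 && $start + $len <= length($data)
            && unpack("V", substr($data, $start + $len - 4, 4)) == $len) {
            printf "Record %d at offset 0x%x, written %s UTC\n", $rec, $start, scalar gmtime($wri);
        }
    }
    $pos += 4;
}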
For those interested in time synchronization, check out A brief study of time...no, Stephen Hawking did not present this year - though, how cool would that be?! Not only is he one of the greatest minds of our time, but he's been portrayed on The Simpsons, and he's appeared in a quite funny excerpt from Star Trek:TNG! A complete set of media appearances can be seen here. For a brainiac, king-of-all-nerds kind of guy, he really rocks!
Going back to Rich's paper, the theme of DFRWS 2007 was "file carving", and the forensic challenge seems to have gone off fairly well. If you're at all interested in some of the cutting edge research in file carving, take a look at the challenge and results.
Monday, September 03, 2007
More on (the) UserAssist keys
Didier Stevens has continued some of his excellent work regarding the UserAssist keys in the Registry. This morning, he posted an entry that explains part of the value names that appear when you decode (ie, un-ROT-13) the names. He has added the capability of providing an explanation to his UserAssist tool.
When you decode the value names from beneath the UserAssist\{GUID}\Count keys, you see that the value names begin with "UEME_" and include names like "RUNPIDL" and "RUNCPL", to name just a few. Since research into these Registry entries began, no one has really known or explored what these refer to...until now. Didier has done an excellent (say "excellent" the way Mr. Burns...more data on Wikipedia...from "The Simpsons" does, while tenting your fingers...) job of digging into what they mean, as well as providing that explanation via his tool.
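For anyone who hasn't seen it before, the ROT-13 "encryption" on these value names is trivial to undo...in Perl it amounts to a single tr///. A quick illustration (the value name below is just a made-up example):
use strict;
# un-ROT-13 a UserAssist value name
my $name = 'HRZR_EHACNGU:P:\Jvaqbjf\flfgrz32\pzq.rkr';
(my $decoded = $name) =~ tr/A-Za-z/N-ZA-Mn-za-m/;
print "$decoded\n";    # prints UEME_RUNPATH:C:\Windows\system32\cmd.exe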
If you get a chance, please be sure to thank Didier for his work, and if you see him at a conference, buy him a beer!
Addendum, 5 Sept: Rich over at ForensicZone.com has an interesting web page posted about extracting UserAssist key value names from memory dumps. This is a very interesting move on Rich's part...I've been looking at memory dumps and finding the "magic numbers" for Registry keys and values, but I have yet (due to time constraints) to go as far as writing code to pull out the key/value structures. The interesting thing about this (I think...being the complete nerd that I am) is that if we dump the contents of physical memory and are then capable of parsing out the images used by each process as well as the memory used by each process, we can then (potentially) find Registry keys and values that we can associate with a specific process, but that have yet to be written to disk! In addition, we know from the Registry key structure that the keys (albeit not the values) have a timestamp associated with them, increasing their evidentiary value. Great catch, Rich! I hope you and Didier keep up the great work you've been doing!
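Just to sketch out the idea (and this is strictly a back-of-the-napkin sketch, not something I've validated against a real memory dump), hunting for key cells could be as simple as scanning for the "nk" node signature and checking whether the 8 bytes that follow the 2-byte signature and 2-byte flags look like a sane FILETIME:
use strict;
my $file = shift or die "scan_nk.pl <memory dump>\n";
open(my $fh, '<', $file) or die "$file: $!";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);
my $pos = 0;
while (($pos = index($data, "nk", $pos)) != -1) {
    last if $pos + 12 > length($data);
    # in a key cell, an 8-byte LastWrite FILETIME follows the signature and flags
    my ($lo, $hi) = unpack("VV", substr($data, $pos + 4, 8));
    my $t = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;
    # crude plausibility check to weed out the many false hits on a 2-byte signature
    if ($t > 946684800 && $t < 1199145600) {    # between 2000 and 2008
        printf "Possible key node at offset 0x%x, LastWrite %s UTC\n", $pos, scalar gmtime($t);
    }
    $pos += 2;
}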
Addendum, 7 Sept: Wow, when things get rolling, it's amazing! Didier and I have exchanged a couple of emails discussing various aspects of the UserAssist keys and some of the more esoteric settings that are out there, and according to some, have actually been used! Didier's a veritable fountain of energy and enthusiasm when it comes to researching this kind of thing, so keep an eye on his blog for more good things!
Wednesday, August 29, 2007
TechTalk Interview
Back on 10 June, I did an interview on TechTalk, and I saw today that it's posted here. There's some other news...CNN, weather, etc...before the actual interview.
Check it out, give it a listen. Also be sure to check out some of the other shows in the archives.
Tuesday, August 21, 2007
Vista IR
I recently started doing some testing of IR tools on Vista, using Vista Ultimate (32-bit) installed into a VMWare Workstation 6.0 virtual machine.
Part of my testing involved running some tools on Vista to see how they worked, and another part involved mounting the *.vmdk file for my Vista VM using the latest versions of VDK and VDKWin.
IR Tools
I started off by downloading (via IE7) a couple of tools...specifically Autorunsc.exe and Tcpvcon.exe. Both seemed to work quite well, the only real hiccup being the GUI EULA dialog that pops up if you run the tools without the "/accepteula" switch (either way, the tools create a Registry key...be sure you understand and document this as part of your IR methodology if you're using these tools). An interesting part of the tcpvcon output was the amount of IPv6 stuff visible.
My next steps are to test additional tools, as well as the use of WMI-based tools.
Extracting Files from a Vista VM
Mounting the Vista VM as a read-only file system/drive letter on my XP system went off without a hitch. I was already in the process of updating the VDK drivers and VDKWin GUI files, and on a whim I pointed the mount utility at the Vista VM *.vmdk file. I was pleasantly surprised to see the VM mounted as J:\. As expected, some of the directories (specifically, System Volume Information) could not be accessed...this is due to ACLs on those objects. However, I had fairly unrestricted access to the rest of the file system.
A friend mentioned to me recently that the offsets for the Last Run time and run count in Vista Prefetch files are different from those in XP. I extracted a Prefetch file from the Vista VM and opened it in UltraEdit to look for the offset of the last run time. I found what appeared to be a FILETIME object at offset 0x80, and modified my existing code to extract those 8 bytes. The result matched up quite nicely:
C:\Perl>vista_pref.pl d:\hacking\vista_autorunsc.exe-7bca361f.pf
Last Run = Mon Aug 20 22:50:43 2007 (UTC)
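The core of that modification is simple enough; a stripped-down sketch (assuming, of course, that offset 0x80 holds up across other Vista .pf files) looks something like this:
use strict;
my $file = shift or die "vista_pref.pl <.pf file>\n";
open(my $fh, '<', $file) or die "$file: $!";
binmode($fh);
seek($fh, 0x80, 0);
my $buf;
read($fh, $buf, 8);
close($fh);
# convert the 64-bit FILETIME (100ns ticks since 1601) to a Unix epoch time
my ($lo, $hi) = unpack("VV", $buf);
my $t = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;
print "Last Run = ", scalar gmtime($t), " (UTC)\n";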
I also tried running some of the Registry parsing tools (using the Parse::Win32Registry module by James Macfarlane) against files extracted from the Vista VM. I started with a Perl script that would parse the contents of the UserAssist key - here's an extract of the results:
C:\Perl\reg>pnu.pl d:\hacking\vista_ntuser.dat
LastWrite time = Mon Aug 20 22:53:02 2007 (UTC)
Mon Aug 20 22:53:02 2007 (UTC)
UEME_RUNPATH
UEME_RUNPIDL
UEME_RUNPIDL:%csidl2%\Accessories\Command Prompt.lnk
UEME_RUNPATH:C:\Windows\System32\cmd.exe
Tue Aug 14 18:47:55 2007 (UTC)
UEME_RUNPATH:C:\Windows\system32\control.exe
Wed Jul 11 20:37:27 2007 (UTC)
UEME_RUNPATH:C:\Windows\system32\Wuauclt.exe
That seems to work quite nicely! It looks like I won't have any trouble accessing the raw Registry files using James' module, at least not on 32-bit versions of Vista, so that's good news!
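For folks who want to try this themselves, the core of such a script is pretty short. This is a stripped-down sketch of the kind of thing I'm describing...the method names are from my reading of the module's docs, so double-check them against the version you install:
use strict;
use Parse::Win32Registry;
my $ntuser = shift or die "pnu.pl <NTUSER.DAT>\n";
my $reg    = Parse::Win32Registry->new($ntuser) or die "Not a Registry file\n";
my $ua     = $reg->get_root_key->get_subkey('Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist')
    or die "UserAssist key not found\n";
foreach my $guid ($ua->get_list_of_subkeys) {
    my $count = $guid->get_subkey('Count') or next;
    print "LastWrite time = ", scalar gmtime($count->get_timestamp), " (UTC)\n";
    foreach my $val ($count->get_list_of_values) {
        (my $name = $val->get_name) =~ tr/A-Za-z/N-ZA-Mn-za-m/;    # un-ROT-13 the value name
        print "  $name\n";
    }
}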
There's still more testing and analysis to do, but this is a good start!
Saturday, August 18, 2007
Copying Files
I've been party to or heard a good number of discussions lately regarding USB removable storage devices, and one of the topics that invariably comes up is how you can determine which files were copied from the system to a thumb drive, or vice versa.
In most instances, folks are working only with the image of a system, and do not have access to the thumb drive itself. They can easily find the information that tells them when a thumb drive was first connected, and when it was last connected...and then the next question is, what files were or may have been copied to the thumb drive (or from the thumb drive to the system)?
The fact is that Windows systems do not maintain a record of file copies or moves...there is simply no way for a forensic analyst to look at the image of a system and say which files were copied off of the system onto a thumb drive. In order to determine this, you'd need to have the thumb drive (or other media) itself, and be able to see that you had two files of the same or similar size (you can also compare the files with md5deep or ssdeep), one of which is on each piece of media. From there, you could then check the file MAC times and possibly make some conclusions regarding the direction of transfer.
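If you do have both pieces of media, the comparison itself is straightforward. A minimal sketch (nothing more than size, MD5, and last-written times on two exported files...the script name is just a placeholder) might look like this:
use strict;
use Digest::MD5;
# compare two exported files: size, MD5 hash, and modification times
my ($file1, $file2) = @ARGV;
die "compare.pl <file1> <file2>\n" unless $file1 && $file2;
foreach my $f ($file1, $file2) {
    open(my $fh, '<', $f) or die "$f: $!";
    binmode($fh);
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
    close($fh);
    my ($size, $mtime) = (stat($f))[7, 9];
    printf "%s  %d bytes  MD5 %s  modified %s UTC\n", $f, $size, $md5, scalar gmtime($mtime);
}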
Many times in a conversation on this topic, someone will bring up Windows shortcuts or LNK files. To be honest, I'm not really sure why this comes up, it just seems to be the case. Shortcuts can be created manually, of course, but with regard to files, they are created when a user double-clicks a file, such as a Word document, to open it. Repeated testing on my part (including testing done by others) has yet to turn up a method by which normal (as in "normal user activity") dragging-and-dropping of a file or use of the "copy" command will result in a Windows shortcut file being created.
Does anyone out there have any thoughts or input on tracking this kind of activity, having nothing more than a single system image to analyze? If so, I'd appreciate hearing from you.
Friday, August 17, 2007
IR "Best Practices"
I know it's been a while since I've posted, but work has been really keeping me busy...that's an excuse that we all use, but that's my story and I'm stickin' with it!
So, I've been talking to a number of different folks recently, having discussions during my travels to and fro about incident response and computer forensics. Many times, the issue of "best practices" has come up and that got me thinking...with no specific standards body governing computer forensics or incident response, who decides what "best practices" are? Is it FIRST? After all, they have "IR" in their name, and it does stand for "incident response". Is it the ACPO Guidelines that specify "best practices"?
Well, the easy answer (for me, anyway) is that "it depends". Funny, haha, I know. But in a sense, it does. "Best practices" depend a great deal on the political and cultural makeup of your organization; I've seen places where the incident response team has NO access to the systems, and must request data and information from the network operations staff, which has its own daily tasks and requirements.
But all that aside, we can focus on the technical details of responding to Windows systems. When we say "best practices", we can specify the information we need to get in order to properly assess the situation/incident, how to go about getting it (retrieve it locally, remotely, via live acquisition, via post-mortem exam, etc.), and in what order to retrieve that data (the ACPO Guidelines specify dumping the contents of physical memory last, rather than first...), etc.
For example, we know that during live response, anything we do is going to leave an artifact. So, why not specify what data you're going to collect and the method by which you're going to collect that data? Then determine the artifacts that are left through testing, and document your procedure and artifacts (hint: things like batch files and scripts can constitute "documentation", particularly when burned to CD), and implement that procedure via some means (ie, batch file, script, etc.).
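As a trivial illustration of what I mean by scripts-as-documentation, even a small wrapper that records what was run, when, and a hash of what it produced goes a long way. This is just a sketch...the tool names are placeholders, so adjust it to suit your own toolkit:
use strict;
use Digest::MD5;
# run each collection tool in order, logging start/end times and an MD5 of the output
my @tools = ('psloggedon.exe > loggedon.txt', 'tcpvcon.exe -a > tcpvcon.txt');
open(my $log, '>>', 'collection_log.txt') or die "collection_log.txt: $!";
foreach my $cmd (@tools) {
    print $log "START ", scalar gmtime(time), " UTC  $cmd\n";
    system($cmd);
    my ($out) = ($cmd =~ /> (\S+)$/);
    if ($out && open(my $fh, '<', $out)) {
        binmode($fh);
        print $log "  MD5($out) = ", Digest::MD5->new->addfile($fh)->hexdigest, "\n";
        close($fh);
    }
    print $log "END   ", scalar gmtime(time), " UTC\n";
}
close($log);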
One of the questions that invariably comes up during discussions of live response and "best practices" is, can data collected via live response be used as evidence in court? I would suggest that the answer is, "why not?" After all, an assault victim can be treated by EMTs, operated on by a surgeon, and cared for in a hospital, and the police can still collect evidence and locate and prosecute the perpetrator, right? It's all about documentation, folks.
Monday, July 16, 2007
Book Review
Donald Tabone posted a review of my book, Windows Forensic Analysis, here.
In his review, Mr. Tabone included this:
What I dislike about the book:
- No mention of Steganography techniques/tools which Chapter 5 could have benefited from.
Back in the fall of '06, while I was writing the book, I sent out emails to a couple of lists asking folks what they would like to see in a book that focuses on forensic analysis of Windows systems; I received very few responses, and if memory serves, only one mentioned steganography. In this book, I wanted to focus specifically on issues directly related to the forensic analysis of Windows systems, and I did not see where steganography fit into that category.
Interestingly enough, Mr. Tabone did not mention what it was he wanted to know about steganography. Articles have already been written on the subject (SecurityFocus, etc.) and there are sites with extensive lists of tools, for a variety of platforms.
I would like to extend a hearty "thanks" to Mr. Tabone for taking the time and effort to write a review of my book, and for posting it.
Incidentally, Hogfly posted a review of my book, as well. Be sure to also read the comments that follow the review.
Saturday, July 14, 2007
Thoughts on RAM acquisition
As a follow-on to the tool testing posts, I wanted to throw something else out there...specifically, I'll start with a comment I received, which said, in part:
Tool criteria include[sic] whether the data the tool has acquired actually existed.
This is a slightly different view of RAM acquisition (or "memory dumping") than I've seen before...perhaps the most common question/concern is more along the lines of, does exculpatory evidence get overwritten?
One of the issues here is that unlike a post-mortem acquisition of a hard drive (ie, the traditional "hook the drive up to a write-blocker, etc."), when acquiring or dumping RAM, one cannot use the same method and obtain the same results...reproducibility is an issue. Because you're acquiring the contents of physical memory from a running system, at any given point in time, something will be changing; processes process, threads execute, network connections time out, etc. So, similar to the live acquisition of a hard drive, you're going to have differences (remember, one of the aspects of cryptographic hash algorithms such as MD5 is that flipping a single bit will produce a different hash). I would suggest that the approach we should take to this is to accept it and document it.
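As an aside, if you want to see just how sensitive the hashes are, this little snippet makes the point...flip a single bit in a 4K buffer and the MD5 is completely different:
use strict;
use Digest::MD5 qw(md5_hex);
my $page1 = "A" x 4096;        # pretend this is a 4K memory page
my $page2 = $page1;
substr($page2, 0, 1) = chr(ord(substr($page2, 0, 1)) ^ 1);    # flip the lowest bit of the first byte
print md5_hex($page1), "\n";
print md5_hex($page2), "\n";   # bears no resemblance to the first hash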
That being said, what are some of the questions that we can anticipate addressing, and how would/should we answer them? I'll take a stab at a couple of basic questions (and responses), but I'd really like to see what others have to say:
1. Did you acquire this data using an accepted, validated process?
In order to respond to this question, we need to develop a process, in such a way as to validate it, and get it "accepted". Don't ask me by whom at this point...that's something we'll need to work on.
2. Did this process overwrite evidence, exculpatory or otherwise?
I really think that determining this is part of the validation process. In order to best answer this question, we have to look at the process that is used...are we using third-party software to do this, or are we using some other method? How does that method or process affect or impact the system we're working with?
3. Was this process subverted by malware running on the system?
This needs to be part of the validation process, as well, but also part of our analysis of the data we retrieved.
4. Did you add anything to this data once you had collected it, or modify it in any way?
This particular question is not so much a technical question (though we do have to determine if our tools impact the output file in any way) as it is a question for the responder or examiner.
As you can see, there's still a great deal of work to be done. However, please don't think for an instant that I'm suggesting that acquiring the contents of physical memory is the be-all and end-all of forensic analysis. It's a tool...a tool that when properly used can produce some very valuable results.
Thursday, July 12, 2007
Tool Testing Methodology, Memory
In my last post, I described what you'd need to do to set up a system in order to test the effects of a tool we'd use on a system for IR activities. I posted this as a way of filling in a gap left by the ACPO Guidelines, which says that we need to "profile" the "forensic footprint" of our tools. That post described tools we'd need to use to discover the footprints within the file system and Registry. I invite you, the reader, to comment on other tools that may be used, as well as provide your thoughts regarding how to use them...after all, the ACPO Guidelines also state that the person using these tools must be competent, and what better way to get there than through discussion and exchange of ideas?
One thing we haven't discussed, and there doesn't seem to be a great deal of discussion of, is the effects of the tools we use on memory. One big question that is asked is, what is the "impact" that our tools have on memory? This is important to understand, and I think one of the main drivers behind this is the idea that when IR activities are first introduced in a court of law, claims will be made that the responder overwrote or deleted potentially exculpatory data during the response process. So...understanding the effect of our tools will make us competent in their use, and we'll be able to address those (and other) issues.
When a process is created (see Windows Internals, by Russinovich and Solomon for the details, or go here), the EXE file is loaded into memory...the EXE is opened and a section object is created, followed by a process object and a thread object. So, memory pages (default size is 4K) are "consumed". Now, almost all EXEs (and I say "almost" because I haven't seen every EXE file) include an import table in their PE header, which describes all of the dynamic link libraries (DLLs) that the EXE accesses. MS provides API functions via DLLs, and EXEs access these DLLs rather than the author rewriting all the code used completely from scratch. So...if the necessary DLL isn't already in memory, then it has to be located and loaded...which in turn, means that more memory pages are "consumed".
So, knowing that these memory pages are used/written to, what is the possibility that important 'evidence' is overwritten? Well, for one thing, the memory manager will not overwrite pages that are actively being used. If it did, stuff would randomly disappear and stop working. For example, your copy of a document may disappear because you loaded Solitaire and a 4K page was randomly overwritten. We wouldn't like this, would we? Of course not! So, the memory manager will allocate memory pages to a process that are not currently active.
For an example of this, let's take a look at Forensic Discovery, by Dan Farmer and Wietse Venema...specifically, chapter 8, section 17:
As the size of the memory filling process grows, it accelerates the memory decay of cached files and of terminated anonymous process memory, and eventually the system will start to cannibalize memory from running processes, moving their writable pages to the swap space. That is, that's what we expected. Unfortunately even repeat runs of this program as root only changed about 3/4 of the main memory of various computers we tested the program on. Not only did it not consume all anonymous memory but it didn't have much of an affect on the kernel and file caches.
Now, keep in mind that the tests that were run were on *nix systems, but the concept is the same for Windows systems (note: previously in the chapter, tests run on Windows XP systems were described, as well).
So this illustrates my point...when a new process is loaded, memory that is actively being used does not get overwritten. If an application (Word, Excel, Notepad) is active in memory, and there's a document that is open in that application, that information won't be overwritten...at worst, the pages not currently being used will be swapped out to the pagefile. If a Trojan is active in memory, the memory pages used by the process, as well as the information specific to the process and thread(s) themselves will not be overwritten. The flip side of this is that what does get "consumed" are memory pages that are freed for use by the memory manager; research has shown that the contents of RAM can survive a reboot, and that even after a new process (or several processes) have been loaded and run, information about exited processes and threads still persists. So, pages used by previous processes may be overwritten, as will pages that contained information about threads, and even pages that had not been previously allocated. When we recover the contents of physical memory (ie, RAM) one of the useful things about our current tools is that we can locate a process, and then by walking the page directory and table entries, locate the memory pages used by that process. By extracting and assembling these pages, we can then search them for strings, and anything we locate as "evidence" will have context; we'll be able to associate a particular piece of information (ie, a string) with a specific process. The thing about pages that have been freed when a process has exited is that we may not be able to associate that page with a specific process; we may not be able to develop context to anything we find in that particular page.
Think of it this way...if I dump the contents of memory and run strings.exe against it, I will get a lot of strings...but what context will that have? I won't be able to associate any of the strings I locate in that memory dump with a specific process, using just strings.exe. However, if I parse out the process information, reassembling EXE files and memory used by each process, and then run strings.exe on the results, I will have a considerable amount of context...not only will I know which process was using the specific memory pages, but I will have timestamps associated with process and threads, etc.
Thoughts? I just made all this up, just now. ;-) Am I off base, crazy, a raving lunatic?
Tool Testing Methodology
As I mentioned earlier, the newly-released ACPO Guidelines state:
By profiling the forensic footprint of trusted volatile data forensic tools,
Profiling, eh? Forensic footprint, you say? The next logical step is...how do we do this? Pp. 46 - 48 of my book make a pretty good start at laying this all out.
First, you want to be sure to document the tool you're testing...where you found it, the file size, cryptographic hashes, any pertinent info from the PE header, etc.
Also, when testing, you want to identify your test platform (OS, tools used, etc.) so that the tests you run are understandable and repeatable. Does the OS matter? I'm sure some folks don't think so, but it does! Why is that? Well, for one, the various versions of Windows differ...for example, Windows XP performs application prefetching by default. This means that when you run your test, depending upon how you launch the tool you're testing, you may find a .pf file added to the Prefetch directory (assuming that the number of .pf files hasn't reached the 128 file limit).
So, what testing tools do you want to have in place on the testing platform? What tools do we need to identify the "forensic footprint"? Well, you'll need two classes of tools...snapshot 'diff' tools, and active monitoring tools. Snapshot 'diff' tools allow you to snapshot the system (file system, Registry) before the tool is run, and again afterward, and then 'diff' the two snapshots to see what changed. Tools such as InControl5 and RegShot can be used for this purpose.
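If you want to see what a snapshot 'diff' is doing under the hood, the concept is dead simple...record the state of the file system before and after, and report anything that changed. A bare-bones sketch of the file system side (sizes and last-written times only; a real tool would also hash files and cover the Registry) might be:
use strict;
use File::Find;
# take a snapshot of a directory tree: path => "size:mtime"
sub snapshot {
    my $dir = shift;
    my %snap;
    find(sub { $snap{$File::Find::name} = join(':', (stat($_))[7, 9]) if -f $_; }, $dir);
    return \%snap;
}
my $dir    = shift or die "snapshot.pl <directory>\n";
my $before = snapshot($dir);
print "Snapshot taken...run the tool being tested, then press <enter>.\n";
<STDIN>;
my $after  = snapshot($dir);
foreach my $f (keys %$after) {
    if (!exists $before->{$f})          { print "ADDED    $f\n"; }
    elsif ($before->{$f} ne $after->{$f}) { print "MODIFIED $f\n"; }
}
foreach my $f (keys %$before) {
    print "DELETED  $f\n" unless exists $after->{$f};
}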
For active monitoring tools, I'd suggest ProcessMonitor from MS SysInternals. This tool allows you to monitor file and Registry accesses in real-time, and then save that information for later analysis.
In order to monitor your system for network activity while the tool is run, I'd suggest installing PortReporter on your system as part of your initial setup. The MS KB article (click on "PortReporter") also includes links to the MS PortQry and PortQryUI tools, as well as the PortReporter Log Parser utility for parsing PortReporter logs.
As many of the tools used in IR activities are CLI tools and will execute and complete fairly quickly, I'd also suggest enabling the Process Tracking auditing on your test system, so that the event record for process creation will be recorded.
Okay, so all of this covers a "forensic footprint" from the perspective of the file system and the Registry...but what about memory? Good question! Let's leave that for another post...
Thoughts so far?
Wednesday, July 11, 2007
Are you Security Minded?
Kai Axford posted on his blog that he's been "terribly remiss in his forensic discussions". This originated with one of Kai's earlier posts on forensic resources; I'd commented, mentioning my own books as resources.
Kai...no need to apologize. Really. Re: TechEd...gotta get approval for that from my Other Boss. ;-)
Updates, etc.
Not posting anywhere close to regularly lately, I felt that a couple of updates and notices of new finds were in order...
First off, James Macfarlane has updated his Parse-Win32Registry module, fixing a couple of errors, adding a couple of useful scripts (regfind.pl and regdiff.pl...yes, that's the 'diff' you've been looking for...), and adding a couple of useful functions. Kudos to James, a huge thanks, and a hearty "job well done"! James asked me if I was still using the module...I think "abuse" would be a better term! ;-)
An RTF version of Forensic CaseNotes has been released. I use this tool in what I do...I've added a tab or two that is useful for what I need and do, and I also maintain my analysis and exhibit list using CaseNotes. Now, with RTF support, I can add "formatted text, graphics, photos, charts and tables". Very cool!
LonerVamp posted about some MAC changing and Wifi tools, and I got to thinking that I need to update my Perl scripts that use James' module to include looking for NICs with MACs specifically listed in the Registry. Also, I saw a nifty tool called WirelessKeyView listed...looks like something good to have on your tools CD, either as an admin doing some troubleshooting, or as a first responder.
Another useful tool to have on your CD is Windows File Analyzer, from MiTeC. This GUI tool is capable of parsing some of the more troublesome, yet useful files from a Windows system, such as Prefetch files, shortcut/LNK files, and index.dat files. So...what's in your...uh...CD?
LonerVamp also posted a link to MS KB 875357, Troubleshooting Windows Firewall settings in Windows XP SP2. You're probably thinking, "yeah? so?"...but look closely. From a forensic analysis perspective, take a look at what we have available to us here. For one, item 3 shows the user typing "wscui.cpl" into the Run box to open the Security Center applet...so if you're performing analysis and you find "wscui.cpl" listed in the RunMRU or UserAssist keys, what does that tell you?
What other useful tidbits do you find in the KB article that can be translated into useful forensic analysis techniques? Then, how would you go about automating that?
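As a starting point for automating it, here's a small sketch (hypothetical script name) that uses James' Parse-Win32Registry module to dump the RunMRU key from an offline NTUSER.DAT file...extending it to walk the UserAssist keys and ROT-13 the value names is left as an exercise:

#!/usr/bin/perl
# runmru.pl - sketch: dump the RunMRU values from an offline NTUSER.DAT hive
use strict;
use warnings;
use Parse::Win32Registry qw(iso8601);

my $hive = shift or die "Usage: $0 <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

my $key = $root->get_subkey(
    'Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU')
    or die "RunMRU key not found\n";

print "RunMRU  LastWrite: ", iso8601($key->get_timestamp), "\n";
foreach my $val ($key->get_list_of_values) {
    # the MRUList value records the order in which the entries were used
    printf "  %-8s %s\n", $val->get_name, $val->get_data;
}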
Another useful tool if you're doing any work with scripts (JavaScript, etc.) in HTML files, is Didier Stevens' ExtractScripts tool. The tool is written in Python, and takes an HTML file as an argument, and outputs each script found in the HTML file as a separate file. Very cool stuff!
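If you don't have Python handy, the same idea is easy enough to rough out in Perl with the HTML::Parser module...to be clear, this is my own quick approximation of the concept, not Didier's tool:

#!/usr/bin/perl
# getscripts.pl - rough approximation of the idea behind ExtractScripts (this is
# NOT Didier Stevens' tool): write each <script> block in an HTML file out to its
# own numbered file.
use strict;
use warnings;
use HTML::Parser;

my $file = shift or die "Usage: $0 <html file>\n";
my ($in_script, $count, $buf) = (0, 0, '');

my $p = HTML::Parser->new(
    api_version => 3,
    start_h => [ sub { $in_script = 1 if lc($_[0]) eq 'script' }, 'tagname' ],
    text_h  => [ sub { $buf .= $_[0] if $in_script }, 'text' ],
    end_h   => [ sub {
        return unless lc($_[0]) eq 'script' && $in_script;
        $count++;
        open(my $out, '>', "$file.script.$count") or die "Cannot write: $!";
        print $out $buf;
        close($out);
        ($in_script, $buf) = (0, '');
    }, 'tagname' ],
);
$p->parse_file($file) or die "Cannot parse $file: $!";
print "Extracted $count script(s) from $file\n";

It won't handle scripts buried in event handlers or otherwise obfuscated, which is a good reason to use Didier's tool rather than rolling your own.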
Some cool stuff...anyone got anything else they'd like to add?
ACPO Guidelines
The Association of Chief Police Officers (ACPO), in association with 7safe, has recently released their updated guide to collecting electronic evidence. While the entire document makes for an interesting read, I found pages 18 and 19, "Network forensics and volatile data" most interesting.
The section begins with a reference back to Principle 2 of the guidelines, which states:
In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.
Sounds good, right? We should also look at Principle 1, which states:
No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.
Also, Principle 3 states:
An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.
Okay, I'm at a loss here. Collecting volatile data inherently changes the state of the system, as well as the contents of the storage media (i.e., Prefetch files, Registry contents, pagefile, etc.), and the process used to collect the volatile data cannot be later used by a third party to "achieve the same result", as the state of the system at the time that the data is collected cannot be reproduced.
That being said, let's move on...page 18, in the "Network forensics and volatile data" section, includes the following:
By profiling the forensic footprint of trusted volatile data forensic tools, an investigator will be in a position to understand the impact of using such tools and will therefore consider this during the investigation and when presenting evidence.
It's interesting that this says "profiling the forensic footprint", but says nothing about error rates or statistics of any kind. I fully agree that this sort of thing needs to be done, but I would hope that it would be done and made available via a resource such as the ForensicWiki, so that not every examiner has to run every test of every tool.
Here's another interesting tidbit...
Considering a potential Trojan defence...
Exactly!
Continuing on through the document, I can't say that I agree with the order of the sequence for collecting volatile data...specifically, the binary dump of memory should really be first, not last. This way, you can collect the contents of physical memory in as near a pristine state as possible. I do have to question the use of the term "bootable" to describe the platform from which the tools should be run, as booting to this media would inherently destroy the very volatile data you're attempting to collect.
Going back to my concerns (the part where I said I was "at a loss") above, I found this near the end of the section:
By accessing the devices, data may be added, violating Principle 1 but, if the logging mechanism is researched prior to investigation, the forensic footprints added during investigation may be taken into consideration and therefore Principle 2 can be complied with.
Ah, there we go...so if we profile our trusted tools and document what their "forensic footprints" are, then we can identify our (the investigators') footprints on the storage media, much like a CSI following a specific route into and out of a crime scene, so that she can say, "yes, those are my footprints."
Thoughts?
Thursday, July 05, 2007
Windows Forensic Analysis Book Review Posted
Richard Bejtlich has posted a review of Windows Forensic Analysis on Amazon.
5 (count 'em!) stars! Wow! The review starts with:
Wow -- what a great forensics book -- a must read for investigators.
Very cool. High praise, indeed!
Thanks, Richard!
Wednesday, June 27, 2007
Book Updates
I got word yesterday that Syngress is going to discontinue selling hard copy books from their web site. They will continue to sell ebooks, and provide links to other online retailers.
The new link for my book is here.
Sunday, June 17, 2007
What is RAM, legally speaking?
Ever wondered what the legal definition of "RAM" is? I was perusing the Computer Forensics and Incident Response blog this morning and found an interesting post regarding RAM and the US Courts. In short, a court document (judge's decision) from the case of Columbia Pictures Industries, et al, vs Justin Bunnell, et al, was posted on the web, and contains some interesting discussion regarding RAM.
In short, the document illustrates a discussion in which RAM constitutes "electronically stored information", and can be included in discovery. The document contains statements such as "...Server Log Data is temporarily stored in RAM and constitutes a document...".
Interestingly enough, there is also discussion of "spoliation of evidence" due to the defendant's failure to preserve/retain RAM.
The defendants claimed that, in part, they could not produce the server log data from RAM due to the burden of cost...which, the judge's decision states, they failed to demonstrate. There are some interesting notes that address issues of RAM as "electronically stored information" from which key data would otherwise not be available (ie, the document states that the server's logging function was not enabled, but the requests themselves were stored in RAM).
Ultimately, the judge denied the plaintiff's request for evidentiary sanctions due to the defendant's failure to preserve the contents of RAM, partially due to a lack of prior precedent and the absence of a specific request to preserve RAM (the request was for documents).
The PDF document is 36 pages long, and well worth a read. I will not attempt to interpret a legal document here...I simply find the judge's decision that yes, RAM constitutes electronically stored information, however temporary, to be very interesting.
What are your thoughts? How do you think this kind of issue will fare given that there are no longer any freely available tools for dumping the contents of Physical Memory from Windows systems?
Addendum: An appeal brief has been posted by the defendant's lawyers.
Saturday, June 16, 2007
Restore Point Analysis
Others have posted bits and pieces regarding System Restore Point analysis (Stephen Bunting's site has some great info), and I've even blogged on this topic before, but I wanted to add a bit more information and a tidbit or two I've run across. This will go a bit beyond what's in my book, but I do want to say that the content in my book is not invalidated in any way.
First off, you can use some of the tools on the DVD accompanying my book in either live response or during post-mortem analysis to collect information from the Restore Points. I've recently updated the code to the SysRestore.pl ProDiscover ProScript to make it more usable and flexible, given some situations I've seen recently.
Another interesting thing I've run across is that using an alternative method of analysis, such as mounting the acquired image as a read-only drive letter (using VDKWin or Mount Image Pro), can be more of a problem than a solution. Accessing the system this way can really be a boon to the examiner, as you can hit the system with an AV scanner (...or two...or three...) and save yourself a great deal of time trying to locate malware. However, the problem occurs due to the fact that the ACLs on the System Volume Information directory require System level access for that system, and even having System level access on your analysis system does not equate to System level access on the mounted image. So things tend not to work as well...files within the "protected" directories will not be scanned, and your alternative is to either perform your analysis within a forensic analysis application such as ProDiscover (using ProScripts), or export the entire directory structure out of the image, at which point, storage is then a consideration (I've seen systems with quite a number of Restore Points).
This can be an issue because bad guys may try to hide stuff in these directories...read on...
Remember me mentioning the existence of web browsing history for the Default User? This indicates the use of the WinInet API (wget.exe, IE, etc.) by someone who accessed the system with System level privileges. This level of access would also allow that user to access the System Volume Information directory, where the Restore Points are maintained, and possibly put things there, such as executable image files, etc. It's unlikely that a restore point would be used for persistence (ie, point a Windows Service to an executable image within a restore point), as the restore points eventually get deleted or flushed out (see the fifo.log file). However, this would be an excellent place to put an installer or downloader file, and then the intruder could place the files that he wanted to be persistent in either the System Volume Information directory, or the "_restore*" directory.
So, besides looking for files that we know are in the Restore Points (ie, drivetable.txt, rp.log, Registry files), we should also consider looking for files that shouldn't be there, particularly if we find other artifacts that indicate a System-level intrusion.
Beyond this, Restore Points provide a wealth of historical information about the system. By parsing all of the rp.log files, we can develop a timeline of activity on the system that will give us an idea of what was done (system checkpoint, application install/uninstall, etc.) as well as provide us with that timeline...if the Restore Points are in sequence and the dates seem skewed, then we have an indication that someone may have fiddled with the system time. Using the drivetable.txt file, you can see what drives were attached to the system at the time that the Restore Point was created (by default, one is created every 24 hrs).
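Here's a minimal sketch of pulling the pieces you need for that timeline out of a single rp.log file; the format assumptions (a null-terminated Unicode description starting at offset 0x10, and a 64-bit FILETIME creation date in the last 8 bytes of the file) are based on the discussion in my book, and the script name is made up. Run it across every RP##\rp.log and sort the output by date:

#!/usr/bin/perl
# rplog.pl - sketch: pull the description and creation time from an rp.log file.
# Format assumptions: Unicode description at offset 0x10, 64-bit FILETIME
# creation date in the last 8 bytes of the file.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <rp.log>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!";
binmode($fh);
my $size = -s $fh;

# Description: null-terminated UTF-16LE string starting at offset 0x10
seek($fh, 0x10, 0);
read($fh, my $raw, $size - 0x10 - 8);
my ($desc) = split /\x00\x00/, $raw;   # stop at the Unicode terminator
$desc =~ s/\x00//g;                    # crude UTF-16LE -> ASCII conversion

# Creation date: FILETIME (100ns intervals since 1601) in the last 8 bytes
seek($fh, $size - 8, 0);
read($fh, my $ftime, 8);
my ($lo, $hi) = unpack('V V', $ftime);
my $epoch = int((($hi * 4294967296) + $lo) / 10000000) - 11644473600;

printf "%-40s %s UTC\n", $desc, scalar gmtime($epoch);
close($fh);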
Beyond these files, we also have access to the Registry files that are backed up to the Restore Points. You can parse these to see if and when a user's privilege levels were modified (ie, added to the Administrator group), determine IP addresses and network settings for the system (parse the NetworkCards key from the Software file, then the Tcpip Services key from the System file), etc.
Analysis of the Registry files maintained in Restore Points is also useful in determining a timeline for certain Registry modifications that are difficult to pin down. For example, when a Registry value is added or modified, the key's LastWrite time is updated. What if one value is added, and one is modified...how do we determine which action caused the LastWrite time to be modified? Well, by using the historical data maintained in the Restore Points, we can look back and see that on this date, the modified value was there, but the added value wasn't...and then it appears in the next Restore Point. Not exact, but it does give us a timeline.
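A hedged sketch of that approach, using James' module to compare the values of a single key between two copies of the same hive pulled from different Restore Points (the script name is made up...James' regdiff.pl does a far more complete job):

#!/usr/bin/perl
# keydiff.pl - sketch: compare the values of one key across two copies of a hive
# (e.g., snapshots of the same hive from two different Restore Points)
use strict;
use warnings;
use Parse::Win32Registry qw(iso8601);

my ($hive1, $hive2, $path) = @ARGV;
die "Usage: $0 <hive_copy1> <hive_copy2> <key path>\n"
    unless $hive1 && $hive2 && $path;

my %snap;
foreach my $hive ($hive1, $hive2) {
    my $reg = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
    my $key = $reg->get_root_key->get_subkey($path)
        or die "$path not found in $hive\n";
    print "$hive  LastWrite: ", iso8601($key->get_timestamp), "\n";
    $snap{$hive} = { map { $_->get_name => $_->get_data_as_string }
                     $key->get_list_of_values };
}

# report values added, removed, or changed between the two snapshots
my %all = (%{ $snap{$hive1} }, %{ $snap{$hive2} });
foreach my $name (sort keys %all) {
    my ($old, $new) = ($snap{$hive1}{$name}, $snap{$hive2}{$name});
    if    (!defined $old) { print "ADDED:   $name = $new\n"; }
    elsif (!defined $new) { print "REMOVED: $name (was $old)\n"; }
    elsif ($old ne $new)  { print "CHANGED: $name: $old -> $new\n"; }
}

The LastWrite times of the key in the two hive copies, plus the added/changed values, bracket when the modification occurred.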
So...what interesting things have you found in Restore Points?
Links
Kelly's Korner (XP Restore Points)
MS Windows XP System Restore
System Restore WMI Classes
Thursday, June 14, 2007
EventLog Analysis
In my book, I covered the Windows 2000, XP, and 2003 EventLog file header and event record structure in some detail. There's also a Perl script or two on the DVD that accompanies the book that let you parse an Event Log without using the Windows API, so that you avoid that pesky message about the Event Log being corrupted.
I've since updated one of the scripts (changing the name to evt2xls.pl), so that now it writes the information that it parses from the Event Log file directly into an Excel spreadsheet, even going so far as to format the date field so that it "makes sense" to Excel when you want to sort based on the date. I've found that writing the data directly to a spreadsheet makes things a bit easier for me, particularly when I want to sort the data to see just certain event record sources, or perform some other analysis. I've also added some functionality to collect statistics from the Event Log file, and display information such as total counted event records, frequency of event sources and IDs, etc., in a separate report file. I've found these to be very useful and efficient, giving me a quick overview of the contents of the Event Logs, and making my analysis of a system go much smoother, particularly when combined with Registry analysis (such as parsing the Security file for the audit policy...see the Bonus directory on the DVD for the Perl script named poladt.pl and its associated EXE file). One of the things I'm considering adding to this script is reporting of successful and failed login attempts, basing this reporting in part on the type of the login attempt (ie, Service vs Local vs Remote).
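I'm not posting evt2xls.pl itself here, but if you're wondering how the "makes sense to Excel" part works, here's a minimal sketch using the Spreadsheet::WriteExcel module...the sample rows are made up; in the real script they come from the parsed .evt file:

#!/usr/bin/perl
# evt2xls_sketch.pl - minimal sketch of writing parsed event data into an Excel
# spreadsheet with a sortable date column (not the actual evt2xls.pl script;
# the sample rows are made up)
use strict;
use warnings;
use Spreadsheet::WriteExcel;

my $workbook  = Spreadsheet::WriteExcel->new('events.xls');
my $worksheet = $workbook->add_worksheet('Event Log');
my $date_fmt  = $workbook->add_format(num_format => 'yyyy-mm-dd hh:mm:ss');

$worksheet->write_row(0, 0, [ 'Date', 'Source', 'Event ID', 'Type' ]);

# in the real script, these rows would come from the parsed .evt file
my @records = (
    [ '2007-06-14T09:22:31', 'W32Time', 35,   'Information' ],
    [ '2007-06-14T09:25:07', 'Print',   6161, 'Error'       ],
);

my $row = 1;
foreach my $rec (@records) {
    my ($date, $source, $id, $type) = @$rec;
    # write_date_time() stores a true Excel date, so sorting works correctly
    $worksheet->write_date_time($row, 0, $date, $date_fmt);
    $worksheet->write($row, 1, $source);
    $worksheet->write($row, 2, $id);
    $worksheet->write($row, 3, $type);
    $row++;
}
$workbook->close();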
Here's something to think about...there is sufficient information in the book, and Perl code on the DVD, such that you can create tools for parsing of event records from other sources, such as RAM dumps, the pagefile, and even unallocated space. I'm considering writing a couple of small tools to do this...not search the files, specifically (I can add that to the code that parses RAM dumps) but to start by simply extracting event records given a file and an offset within the file.
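As a starting point for that, here's a hedged sketch that scans an arbitrary binary file (a RAM dump, the pagefile, a chunk of unallocated space) for the "LfLe" magic number and unpacks the fixed portion of each candidate event record header it finds; the script name is made up, and you'd want to add more sanity checks before trusting a hit:

#!/usr/bin/perl
# evtcarve.pl - sketch: scan a binary file for the "LfLe" magic number and unpack
# the fixed portion of each candidate EVT event record header found
use strict;
use warnings;

my $file = shift or die "Usage: $0 <binary file>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!";
binmode($fh);
local $/;
my $data = <$fh>;     # fine for carved chunks; chunk very large files in practice
close($fh);

my $pos = 0;
while (($pos = index($data, 'LfLe', $pos)) != -1) {
    my $start = $pos - 4;                # the record length DWORD precedes the magic
    if ($start >= 0 && $start + 56 <= length($data)) {
        my ($len, $magic, $recnum, $timegen, $timewrt, $evtid,
            $type, $numstr, $cat) =
            unpack('V V V V V V v v v', substr($data, $start, 30));
        # sanity check the length field before reporting the hit
        if ($len >= 56 && $len < 0x10000) {
            printf "offset 0x%08x  rec# %-8u  ID %-6u  generated %s UTC\n",
                $start, $recnum, ($evtid & 0xFFFF), scalar gmtime($timegen);
        }
    }
    $pos += 4;
}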
But what about actual Event Log analysis? What about really using the Event Log to get some insight into activity on the system? What can we look for and how can we use it?
Here are some tidbits that I've come across and use...please don't consider this a complete list, as I hope that people will contribute. This is just to get folks started...
Stephen Bunting has a great write-up that explains how to use the Event Log to track time change events, such as when someone alters their system time.
The Application Event Log is a great place to look for events generated by antivirus applications. This will not only tell you if an antivirus application is installed on the system (you can also perform Registry analysis to determine this information), but perhaps the version, when it was active, etc.
In the System Event Log, Event ID 6161 (Source: Print) tells you when a file failed to print. The event message tells you the name of the file that failed to print, the username, and the printer.
Also in the System Event Log, Event ID 35 (Source: W32Time) is an Information event that tells you that your system is sync'ing with a time server, and provides the IP address of your system. This can be very useful in a DHCP environment, as it tells you the IP address assigned to the system (actually, the interface) at a particular date and time.
Windows Defender (Source: WinDefend) will generate an event ID 1007 when it detects malware on a system; the event strings contain specific information about what was found.
Whenever you're doing Event Log analysis, be sure to go to EventID.net for help understanding what you're looking at. Most of the listed event IDs have detailed explanations of what can cause the event, as well as links to information at MS.
Again, this is not a complete list of items that you may find and use in your analysis...these are just some things that come to mind. And remember, you get a bit more out of Event Log analysis when you combine it with Registry analysis, not only of the audit policy for the system and the settings for the Event Logs, but with other sources, as well.
Links
EventLog Header structure
Event Record structure
EventLog EOF record structure
EventLog File Format