Monday, July 16, 2007
Donald Tabone posted a review of my book, Windows Forensic Analysis, here.
In his review, Mr. Tabone included this:
What I dislike about the book:
- No mention of Steganography techniques/tools which Chapter 5 could have benefited from.
Back in the fall of '06, while I was writing the book, I sent out emails to a couple of lists asking folks what they would like to see in a book that focuses on forensic analysis of Windows systems; I received very few responses, and if memory serves, only one mentioned steganography. In this book, I wanted to focus specifically on issues directly related to the forensic analysis of Windows systems, and I did not see where steganography fit into that category.
Interestingly enough, Mr. Tabone did not mention what it was he wanted to know about steganography. Articles have already been written on the subject (SecurityFocus, etc.) and there are sites with extensive lists of tools, for a variety of platforms.
I would like to extend a hearty "thanks" to Mr. Tabone for taking the time and effort to write a review of my book, and for posting it.
Incidentally, Hogfly posted a review of my book, as well. Be sure to read the comments that follow the review.
Saturday, July 14, 2007
Thoughts on RAM acquisition
As a follow-on to the tool testing posts, I wanted to throw something else out there...specifically, I'll start with a comment I received, which said, in part:
Tool criteria include[sic] whether the data the tool has acquired actually existed.
This is a slightly different view of RAM acquisition (or "memory dumping") than I've seen before...perhaps the most common question/concern is more along the lines of, does exculpatory evidence get overwritten?
One of the issues here is that unlike a post-mortem acquisition of a hard drive (i.e., the traditional "hook the drive up to a write-blocker, etc."), when acquiring or dumping RAM, one cannot use the same method and obtain the same results...reproducibility is an issue. Because you're acquiring the contents of physical memory from a running system, at any given point in time, something will be changing; processes process, threads execute, network connections time out, etc. So, similar to the live acquisition of a hard drive, you're going to have differences (remember, one of the aspects of cryptographic hash algorithms such as MD5 is that flipping a single bit will produce a different hash). I would suggest that the approach we should take to this is to accept it and document it.
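To make the "accept it and document it" part concrete, here's a minimal Perl sketch (the dump filenames below are made up) that hashes two successive acquisitions from the same live system. The hashes will almost certainly differ; the point is simply to record both values in your notes rather than expect them to match:

#!/usr/bin/perl
# hash_dumps.pl - hash two successive RAM dumps and report both values
# (illustrative sketch; the dump filenames are hypothetical)
use strict;
use warnings;
use Digest::MD5;

my @dumps = ('ram-1.dd', 'ram-2.dd');

foreach my $dump (@dumps) {
    open(my $fh, '<', $dump) or die "Cannot open $dump: $!";
    binmode($fh);
    my $md5 = Digest::MD5->new;
    $md5->addfile($fh);
    close($fh);
    # Record the size and hash of each acquisition in your case notes
    printf("%-10s %12d bytes  MD5: %s\n", $dump, -s $dump, $md5->hexdigest);
}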
That being said, what are some of the questions that we can anticipate addressing, and how would/should we answer them? I'll take a stab at a couple of basic questions (and responses), but I'd really like to see what others have to say:
1. Did you acquire this data using an accepted, validated process?
In order to respond to this question, we need to develop a process, validate it, and get it "accepted". Don't ask me by whom at this point...that's something we'll need to work on.
2. Did this process overwrite evidence, exculpatory or otherwise?
I really think that determining this is part of the validation process. In order to best answer this question, we have to look at the process that is used...are we using third-party software to do this, or are we using some other method? How does that method or process affect or impact the system we're working with?
3. Was this process subverted by malware running on the system?
This needs to be part of the validation process, as well, but also part of our analysis of the data we retrieved.
4. Did you add anything to this data once you had collected it, or modify it in any way?
This particular question is not so much a technical question (though we do have to determine if our tools impact the output file in any way) as it is a question for the responder or examiner.
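One way to put yourself in a position to answer that last question is to hash the output file as soon as the acquisition completes and record that value, along with a timestamp, in your case notes; re-hashing the file later demonstrates whether anything was added or modified after collection. A rough Perl sketch (the log filename here is made up):

#!/usr/bin/perl
# log_acquisition.pl - record size, MD5, and UTC time for an acquired image
# so the file can be re-verified later (sketch; filenames are hypothetical)
use strict;
use warnings;
use Digest::MD5;
use POSIX qw(strftime);

my $image = shift or die "Usage: $0 <image file>\n";

open(my $fh, '<', $image) or die "Cannot open $image: $!";
binmode($fh);
my $md5 = Digest::MD5->new;
$md5->addfile($fh);
close($fh);

my $entry = sprintf("%s  %s  %d bytes  MD5: %s\n",
    strftime("%Y-%m-%d %H:%M:%S UTC", gmtime),
    $image, -s $image, $md5->hexdigest);

# Append the entry to a simple acquisition log
open(my $log, '>>', 'acquisition_log.txt') or die "Cannot open log: $!";
print $log $entry;
close($log);
print $entry;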
As you can see, there's still a great deal of work to be done. However, please don't think for an instant that I'm suggesting that acquiring the contents of physical memory is the be-all and end-all of forensic analysis. It's a tool...a tool that when properly used can produce some very valuable results.
Thursday, July 12, 2007
Tool Testing Methodology, Memory
In my last post, I described what you'd need to do to set up a system in order to test the effects of a tool we'd use on a system for IR activities. I posted this as a way of filling in a gap left by the ACPO Guidelines, which state that we need to "profile" the "forensic footprint" of our tools. That post described tools we'd need to use to discover the footprints within the file system and Registry. I invite you, the reader, to comment on other tools that may be used, as well as provide your thoughts regarding how to use them...after all, the ACPO Guidelines also state that the person using these tools must be competent, and what better way to get there than through discussion and exchange of ideas?
One thing we haven't discussed, and there doesn't seem to be a great deal of discussion of, is the effects of the tools we use on memory. One big question that is asked is, what is the "impact" that our tools have on memory? This is important to understand, and I think one of the main drivers behind this is the idea that when IR activities are first introduced in a court of law, claims will be made that the responder overwrote or deleted potentially exculpatory data during the response process. So...understanding the effect of our tools will make us competent in their use, and we'll be able to address those (and other) issues.
When a process is created (see Windows Internals, by Russinovich and Solomon for the details, or go here), the EXE file is loaded into memory...the EXE is opened and a section object is created, followed by a process object and a thread object. So, memory pages (default size is 4K) are "consumed". Now, almost all EXEs (and I say "almost" because I haven't seen every EXE file) include an import table in their PE header, which describes all of the dynamic link libraries (DLLs) that the EXE accesses. MS provides API functions via DLLs, and EXEs access these DLLs rather than each author rewriting all of that code from scratch. So...if the necessary DLL isn't already in memory, then it has to be located and loaded...which in turn, means that more memory pages are "consumed".
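For the curious, here's a rough Perl sketch (PE32 files only, manual parsing with unpack, and nowhere near a complete or robust PE parser) that pulls the DLL names out of an EXE's import table, just to show that the information really is sitting right there in the file:

#!/usr/bin/perl
# list_import_dlls.pl - list the DLL names in a PE file's import table
# (rough sketch for 32-bit PE files only; not a full PE parser)
use strict;
use warnings;

my $file = shift or die "Usage: $0 <exe file>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

# e_lfanew (offset of the PE header) lives at offset 0x3C of the DOS header
my $pe_off = unpack('V', substr($data, 0x3c, 4));
die "Not a PE file\n" unless substr($data, $pe_off, 4) eq "PE\x00\x00";

# COFF header: number of sections and size of the optional header
my ($num_sects, $opt_size) =
    (unpack('v v V V V v v', substr($data, $pe_off + 4, 20)))[1, 5];
my $opt_off = $pe_off + 24;
die "Not a 32-bit (PE32) file\n"
    unless unpack('v', substr($data, $opt_off, 2)) == 0x10b;

# Import table RVA: data directory entry 1, at offset 104 into the PE32 optional header
my $imp_rva = unpack('V', substr($data, $opt_off + 104, 4));
die "No import table\n" unless $imp_rva;

# Read the section table so RVAs can be converted to file offsets
my @sections;
my $sect_off = $opt_off + $opt_size;
for my $i (0 .. $num_sects - 1) {
    my ($vsize, $va, $rsize, $raw) =
        unpack('V V V V', substr($data, $sect_off + $i * 40 + 8, 16));
    push @sections, [$va, $vsize, $raw];
}

sub rva2off {
    my $rva = shift;
    foreach my $s (@sections) {
        return $rva - $s->[0] + $s->[2]
            if $rva >= $s->[0] && $rva < $s->[0] + $s->[1];
    }
    return undef;
}

# Walk the import descriptors (20 bytes each, terminated by an all-zero entry)
my $desc_off = rva2off($imp_rva);
while (defined $desc_off) {
    my ($oft, $ts, $fwd, $name_rva, $ft) = unpack('V5', substr($data, $desc_off, 20));
    last unless $name_rva;
    my $name_off = rva2off($name_rva);
    last unless defined $name_off;
    my ($dll) = unpack('Z*', substr($data, $name_off));
    print "$dll\n";
    $desc_off += 20;
}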
So, knowing that these memory pages are used/written to, what is the possibility that important 'evidence' is overwritten? Well, for one thing, the memory manager will not overwrite pages that are actively being used. If it did, stuff would randomly disappear and stop working. For example, your copy of a document may disappear because you loaded Solitaire and a 4K page was randomly overwritten. We wouldn't like this, would we? Of course not! So, the memory manager will allocate to a process only those pages that are not currently in active use.
For an example of this, let's take a look at Forensic Discovery, by Dan Farmer and Wietse Venema...specifically, chapter 8, section 17:
As the size of the memory filling process grows, it accelerates the memory decay of cached files and of terminated anonymous process memory, and eventually the system will start to cannibalize memory from running processes, moving their writable pages to the swap space. That is, that's what we expected. Unfortunately even repeat runs of this program as root only changed about 3/4 of the main memory of various computers we tested the program on. Not only did it not consume all anonymous memory but it didn't have much of an affect on the kernel and file caches.
Now, keep in mind that the tests that were run were on *nix systems, but the concept is the same for Windows systems (note: previously in the chapter, tests run on Windows XP systems were described, as well).
So this illustrates my point...when a new process is loaded, memory that is actively being used does not get overwritten. If an application (Word, Excel, Notepad) is active in memory, and there's a document that is open in that application, that information won't be overwritten...at worst, the pages not currently being used will be swapped out to the pagefile. If a Trojan is active in memory, the memory pages used by the process, as well as the information specific to the process and thread(s) themselves, will not be overwritten. The flip side of this is that what does get "consumed" are memory pages that have been freed for use by the memory manager; research has shown that the contents of RAM can survive a reboot, and that even after a new process (or several processes) have been loaded and run, information about exited processes and threads still persists. So, pages used by previous processes may be overwritten, as will pages that contained information about threads, and even pages that had not been previously allocated.

When we recover the contents of physical memory (i.e., RAM), one of the useful things about our current tools is that we can locate a process, and then, by walking the page directory and table entries, locate the memory pages used by that process. By extracting and assembling these pages, we can then search them for strings, and anything we locate as "evidence" will have context; we'll be able to associate a particular piece of information (i.e., a string) with a specific process. The thing about pages that have been freed when a process has exited is that we may not be able to associate that page with a specific process; we may not be able to develop context for anything we find in that particular page.
Think of it this way...if I dump the contents of memory and run strings.exe against it, I will get a lot of strings...but what context will that have? I won't be able to associate any of the strings I locate in that memory dump with a specific process, using just strings.exe. However, if I parse out the process information, reassembling EXE files and memory used by each process, and then run strings.exe on the results, I will have a considerable amount of context...not only will I know which process was using the specific memory pages, but I will have timestamps associated with processes and threads, etc.
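The first half of that comparison is easy to sketch in Perl; note that the only "context" this kind of raw search can give you is an offset into the dump file, which is exactly the problem:

#!/usr/bin/perl
# rawstrings.pl - pull printable ASCII strings (4+ chars) from a memory dump;
# the only context available is the offset within the dump file itself
# (strings that happen to span a 4MB read boundary will be split; this is
#  just an illustration)
use strict;
use warnings;

my $dump = shift or die "Usage: $0 <memory dump>\n";
open(my $fh, '<', $dump) or die "Cannot open $dump: $!";
binmode($fh);

my ($buf, $pos) = ('', 0);
while (my $read = read($fh, $buf, 4 * 1024 * 1024)) {
    while ($buf =~ /([\x20-\x7e]{4,})/g) {
        printf("0x%08x  %s\n", $pos + pos($buf) - length($1), $1);
    }
    $pos += $read;
}
close($fh);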
Thoughts? I just made all this up, just now. ;-) Am I off base, crazy, a raving lunatic?
Tool Testing Methodology
As I mentioned earlier, the newly-released ACPO Guidelines state:
By profiling the forensic footprint of trusted volatile data forensic tools,
Profiling, eh? Forensic footprint, you say? The next logical step is...how do we do this? Pp. 46 - 48 of my book make a pretty good start at laying this all out.
First, you want to be sure to document the tool you're testing...where you found it, the file size, cryptographic hashes, any pertinent info from the PE header, etc.
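As a rough starting point, a little Perl like the following (just a sketch; the output format is arbitrary) will capture the file size, an SHA-1 hash, and the compile time from the PE header's TimeDateStamp field:

#!/usr/bin/perl
# document_tool.pl - record basic identifying data for a tool under test:
# path, size, SHA-1 hash, and the PE header TimeDateStamp (compile time)
# (illustrative sketch; no error checking beyond the basics)
use strict;
use warnings;
use Digest::SHA;

my $tool = shift or die "Usage: $0 <tool.exe>\n";

open(my $fh, '<', $tool) or die "Cannot open $tool: $!";
binmode($fh);
my $sha = Digest::SHA->new(1);
$sha->addfile($fh);
my $sha1 = $sha->hexdigest;

# Re-read the start of the file to pull TimeDateStamp out of the PE header
seek($fh, 0x3c, 0);
read($fh, my $raw, 4);
my $pe_off = unpack('V', $raw);
seek($fh, $pe_off + 8, 0);    # signature (4) + Machine (2) + NumberOfSections (2)
read($fh, $raw, 4);
my $compiled = scalar gmtime(unpack('V', $raw));
close($fh);

print "Tool     : $tool\n";
print "Size     : ", -s $tool, " bytes\n";
print "SHA-1    : $sha1\n";
print "Compiled : $compiled UTC\n";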
Also, when testing, you want to identify your test platform (OS, tools used, etc.) so that the tests you run are understandable and repeatable. Does the OS matter? I'm sure some folks don't think so, but it does! Why is that? Well, for one, the various versions of Windows differ...for example, Windows XP performs application prefetching by default. This means that when you run your test, depending upon how you launch the tool you're testing, you may find a .pf file added to the Prefetch directory (assuming that the number of .pf files hasn't reached the 128 file limit).
So, what testing tools do you want to have in place on the testing platform? What tools do we need to identify the "forensic footprint"? Well, you'll need two classes of tools...snapshot 'diff' tools, and active monitoring tools. Snapshot 'diff' tools allow you to snapshot the system (file system, Registry) before the tool is run, and again afterward, and then will allow you to 'diff' the two snapshots to see what changed. Tools such as InControl5 and RegShot can be used for this purpose.
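Just to illustrate the snapshot-and-diff concept (InControl5 and RegShot do this far more thoroughly, and cover the Registry as well), a crude file-system-only sketch in Perl might look like this; the directory and snapshot filenames in the usage comments are made up:

#!/usr/bin/perl
# fs_snapshot.pl - crude file system snapshot/diff to illustrate the concept;
# run "fs_snapshot.pl snap C:/testdir before.txt" before the tool under test,
# again with after.txt afterward, then "fs_snapshot.pl diff before.txt after.txt"
use strict;
use warnings;
use File::Find;

my ($cmd, @args) = @ARGV;

if ($cmd && $cmd eq 'snap') {
    my ($dir, $outfile) = @args;
    open(my $out, '>', $outfile) or die "Cannot open $outfile: $!";
    find(sub {
        return unless -f $_;                      # skip directories, etc.
        my ($size, $mtime) = (stat(_))[7, 9];     # reuse the -f stat
        print $out "$File::Find::name|$size|$mtime\n";
    }, $dir);
    close($out);
}
elsif ($cmd && $cmd eq 'diff') {
    my ($before, $after) = map { load($_) } @args;
    foreach my $f (sort keys %$after) {
        if    (!exists $before->{$f})         { print "ADDED    $f\n"; }
        elsif ($before->{$f} ne $after->{$f}) { print "CHANGED  $f\n"; }
    }
    foreach my $f (sort keys %$before) {
        print "DELETED  $f\n" unless exists $after->{$f};
    }
}
else {
    die "Usage: $0 snap <dir> <outfile> | diff <before> <after>\n";
}

sub load {
    my $file = shift;
    my %snap;
    open(my $fh, '<', $file) or die "Cannot open $file: $!";
    while (<$fh>) {
        chomp;
        my ($path, $size, $mtime) = split(/\|/, $_, 3);
        $snap{$path} = "$size|$mtime";
    }
    close($fh);
    return \%snap;
}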
For active monitoring tools, I'd suggest Process Monitor from MS SysInternals. This tool allows you to monitor file and Registry accesses in real time, and then save that information for later analysis.
In order to monitor your system for network activity while the tool is run, I'd suggest installing PortReporter on your system as part of your initial setup. The MS KB article (click on "PortReporter") also includes links to the MS PortQry and PortQryUI tools, as well as the PortReporter Log Parser utility for parsing PortReporter logs.
As many of the tools used in IR activities are CLI tools that execute and complete fairly quickly, I'd also suggest enabling Process Tracking auditing on your test system, so that event records for process creation are generated.
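As a quick check that the auditing is actually generating what you expect, something like the following sketch, using the Win32::EventLog module, will pull recent process-creation events (event ID 592 on XP/2003) from the Security Event Log; treat it as an illustration, not a hardened parser:

#!/usr/bin/perl
# proc_events.pl - list recent process-creation events (event ID 592 on
# Windows XP/2003) from the Security Event Log using Win32::EventLog
# (illustrative sketch; requires privileges to read the Security log)
use strict;
use warnings;
use Win32::EventLog;

my $log = Win32::EventLog->new('Security')
    or die "Cannot open the Security Event Log\n";

my $total = 0;
$log->GetNumber($total) or die "Cannot get the record count\n";

my $read = 0;
while ($read < $total) {
    my $event;
    $log->Read(EVENTLOG_BACKWARDS_READ | EVENTLOG_SEQUENTIAL_READ, 0, $event)
        or last;
    $read++;
    next unless ($event->{EventID} & 0xffff) == 592;
    # The insertion strings for event 592 include the new process ID and
    # the image file name of the process that was created
    my @strings = split(/\0/, $event->{Strings} || '');
    print scalar localtime($event->{TimeGenerated}), "  ",
        join('  ', @strings), "\n";
}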
Okay, so all of this covers a "forensic footprint" from the perspective of the file system and the Registry...but what about memory? Good question! Let's leave that for another post...
Thoughts so far?
Wednesday, July 11, 2007
Are you Security Minded?
Kai Axford posted on his blog that he's been "terribly remiss in his forensic discussions". This originated with one of Kai's earlier posts on forensic resources; I'd commented, mentioning my own books as resources.
Kai...no need to apologize. Really. Re: TechEd...gotta get approval for that from my Other Boss. ;-)
Updates, etc.
Since I haven't been posting anywhere close to regularly lately, I felt that a couple of updates and notices of new finds were in order...
First off, James MacFarlane has updated his Parse-Win32Registry module, fixing a couple of errors, adding a couple of useful scripts (regfind.pl and regdiff.pl...yes, that's the 'diff' you've been looking for...), and adding a couple of useful functions. Kudos to James, a huge thanks, and a hearty "job well done"! James asked me if I was still using the module...I think "abuse" would be a better term! ;-)
An RTF version of Forensic CaseNotes has been released. I use this tool in my own work...I've added a tab or two that is useful for what I need, and I also maintain my analysis and exhibit list using CaseNotes. Now, with RTF support, I can add "formatted text, graphics, photos, charts and tables". Very cool!
LonerVamp posted about some MAC changing and Wifi tools, and I got to thinking that I need to update my Perl scripts that use James' module to include looking for NICs with MACs specifically listed in the Registry. Also, I saw a nifty tool called WirelessKeyView listed...looks like something good to have on your tools CD, either as an admin doing some troubleshooting, or as a first responder.
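A sketch of what that Registry check might look like using James' Parse::Win32Registry module against an exported SYSTEM hive (the hive path below is made up, and ControlSet001 is assumed rather than resolved from the Select key):

#!/usr/bin/perl
# netaddr.pl - look for NICs with a hard-coded MAC (NetworkAddress value)
# in an offline SYSTEM hive, using Parse::Win32Registry
# (sketch: the hive path is hypothetical; ControlSet001 is assumed, not
#  resolved from the Select key)
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || 'D:/case/system';    # exported SYSTEM hive file
my $registry = Parse::Win32Registry->new($hive)
    or die "Cannot parse $hive\n";
my $root = $registry->get_root_key;

# Network adapter class key; each numbered subkey is one adapter instance
my $class = $root->get_subkey(
    'ControlSet001\\Control\\Class\\{4D36E972-E325-11CE-BFC1-08002BE10318}')
    or die "Network adapter class key not found\n";

foreach my $nic ($class->get_list_of_subkeys) {
    my $addr = $nic->get_value('NetworkAddress') or next;
    my $desc = $nic->get_value('DriverDesc');
    printf("%s  NetworkAddress = %s  (%s)\n",
        $nic->get_name, $addr->get_data,
        $desc ? $desc->get_data : 'unknown adapter');
}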
Another useful tool to have on your CD is Windows File Analyzer, from MiTeC. This GUI tool is capable of parsing some of the more troublesome, yet useful files from a Windows system, such as Prefetch files, shortcut/LNK files, and index.dat files. So...what's in your...uh...CD?
LonerVamp also posted a link to MS KB 875357, Troubleshooting Windows Firewall settings in Windows XP SP2. You're probably thinking, "yeah? so?"...but look closely. From a forensic analysis perspective, take a look at what we have available to us here. For one, item 3 shows the user typing "wscui.cpl" into the Run box to open the Security Center applet...so if you're performing analysis and you find "wscui.cpl" listed in the RunMRU or UserAssist keys, what does that tell you?
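As an example of automating that kind of check, here's a short Parse::Win32Registry sketch that dumps the RunMRU values from an offline NTUSER.DAT hive (the hive path is made up), so entries like wscui.cpl jump right out:

#!/usr/bin/perl
# runmru.pl - dump the RunMRU values from an offline NTUSER.DAT hive
# using Parse::Win32Registry (sketch; the hive path is hypothetical)
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || 'D:/case/ntuser.dat';
my $registry = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $key = $registry->get_root_key->get_subkey(
    'Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RunMRU')
    or die "RunMRU key not found\n";

print "RunMRU key last written: ", scalar gmtime($key->get_timestamp), "\n";
foreach my $value ($key->get_list_of_values) {
    printf("  %-8s %s\n", $value->get_name, $value->get_data);
}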
What other useful tidbits do you find in the KB article that can be translated into useful forensic analysis techniques? Then, how would you go about automating that?
Another useful tool if you're doing any work with scripts (JavaScript, etc.) in HTML files, is Didier Stevens' ExtractScripts tool. The tool is written in Python, and takes an HTML file as an argument, and outputs each script found in the HTML file as a separate file. Very cool stuff!
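Didier's tool is written in Python, but the idea is simple enough that a quick-and-dirty Perl analogue (a regex-based sketch, not a real HTML parser) looks like this:

#!/usr/bin/perl
# extract_scripts.pl - write each <script>...</script> block in an HTML file
# to its own numbered file (regex-based sketch, not a real HTML parser)
use strict;
use warnings;

my $html_file = shift or die "Usage: $0 <file.html>\n";
open(my $fh, '<', $html_file) or die "Cannot open $html_file: $!";
my $html = do { local $/; <$fh> };
close($fh);

my $count = 0;
while ($html =~ /<script[^>]*>(.*?)<\/script>/sig) {
    $count++;
    my $outfile = sprintf("%s.script.%02d", $html_file, $count);
    open(my $out, '>', $outfile) or die "Cannot write $outfile: $!";
    print $out $1;
    close($out);
    print "Wrote $outfile\n";
}
print "No script blocks found\n" unless $count;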
Some cool stuff...anyone got anything else they'd like to add?
ACPO Guidelines
The Association of Chief Police Officers (ACPO), in association with 7safe, has recently released their updated guide to collecting electronic evidence. While the entire document makes for an interesting read, I found pages 18 and 19, "Network forensics and volatile data" most interesting.
The section begins with a reference back to Principle 2 of the guidelines, which states:
In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.
Sounds good, right? We should also look at Principle 1, which states:
No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.
Also, Principle 3 states:
An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.
Okay, I'm at a loss here. Collecting volatile data inherently changes the state of the system, as well as the contents of the storage media (i.e., Prefetch files, Registry contents, pagefile, etc.), and the process used to collect the volatile data cannot be later used by a third party to "achieve the same result", as the state of the system at the time that the data is collected cannot be reproduced.
That being said, let's move on...page 18, in the "Network forensics and volatile data" section, includes the following:
By profiling the forensic footprint of trusted volatile data forensic tools, an investigator will be in a position to understand the impact of using such tools and will therefore consider this during the investigation and when presenting evidence.
It's interesting that this says "profiling the forensic footprint", but says nothing about error rates or statistics of any kind. I fully agree that this sort of thing needs to be done, but I would hope that it would be done and made available via a resource such as the ForensicWiki, so that not every examiner has to run every test of every tool.
Here's another interesting tidbit...
Considering a potential Trojan defence...
Exactly!
Continuing on through the document, I can't say that I agree with the order of the sequence for collecting volatile data...specifically, the binary dump of memory should really be first, not last. This way, you can collect the contents of physical memory in as near a pristine state as possible. I do have to question the use of the term "bootable" to describe the platform from which the tools should be run, as booting to this media would inherently destroy the very volatile data you're attempting to collect.
Going back to my concerns (the part where I said I was "at a loss") above, I found this near the end of the section:
By accessing the devices, data may be added, violating Principle 1 but, if the logging mechanism is researched prior to investigation, the forensic footprints added during investigation may be taken into consideration and therefore Principle 2 can be complied with.
Ah, there we go...so if we profile our trusted tools and document what their "forensic footprints" are, then we can identify our (the investigator's) footprints on the storage media, much like a CSI following a specific route into and out of a crime scene, so that she can say, "yes, those are my footprints."
Thoughts?
Thursday, July 05, 2007
Windows Forensic Analysis Book Review Posted
Richard Bejtlich has posted a review of Windows Forensic Analysis on Amazon.
5 (count 'em!) stars! Wow! The review starts with:
Wow -- what a great forensics book -- a must read for investigators.
Very cool. High praise, indeed!
Thanks, Richard!