Wednesday, June 27, 2007
I got word yesterday that Syngress is going to discontinue selling hard copy books from their web site. They will continue to sell ebooks, and provide links to other online retailers.
The new link for my book is here.
Sunday, June 17, 2007
What is RAM, legally speaking?
Ever wondered what the legal definition of "RAM" is? I was perusing the Computer Forensics and Incident Response blog this morning and found an interesting post regarding RAM and the US Courts. In short, a court document (the judge's decision) from the case of Columbia Pictures Industries, et al., v. Justin Bunnell, et al., was posted on the web, and contains some interesting discussion regarding RAM.
The document illustrates a discussion of whether RAM constitutes "electronically stored information" that can be included in discovery. It contains statements such as "...Server Log Data is temporarily stored in RAM and constitutes a document...".
Interestingly enough, there is also discussion of "spoliation of evidence" due to the defendants' failure to preserve/retain RAM.
The defendants claimed, in part, that they could not produce the server log data from RAM due to the burden of cost...which the judge's decision states they failed to demonstrate. There are some interesting notes that address issues of RAM as "electronically stored information" from which key data would otherwise not be available (ie, the document states that the server's logging function was not enabled, but the requests themselves were stored in RAM).
Ultimately, the judge denied the plaintiffs' request for evidentiary sanctions over the defendants' failure to preserve the contents of RAM, partially due to the lack of both prior precedent and a specific request to preserve RAM (the request was for documents).
The PDF document is 36 pages long, and well worth a read. I will not attempt to interpret a legal document here...I simply find the judge's decision that yes, RAM constitutes electronically stored information, however temporary, to be very interesting.
What are your thoughts? How do you think this kind of issue will fare given that there are no longer any freely available tools for dumping the contents of Physical Memory from Windows systems?
Addendum: An appeal brief has been posted by the defendant's lawyers.
Saturday, June 16, 2007
Restore Point Analysis
Others have posted bits and pieces regarding System Restore Point analysis (Stephen Bunting's site has some great info), and I've even blogged on this topic before, but I wanted to add a bit more information and a tidbit or two I've run across. This will go a bit beyond what's in my book, but I do want to say that the content in my book is not invalidated in any way.
First off, you can use some of the tools on the DVD accompanying my book in either live response or during post-mortem analysis to collect information from the Restore Points. I've recently updated the code to the SysRestore.pl ProDiscover ProScript to make it more usable and flexible, given some situations I've seen recently.
Another interesting thing I've run across is that an alternative method of analysis, such as mounting the acquired image as a read-only drive letter (using VDKWin or Mount Image Pro), can be more of a problem than a solution here. Accessing the system this way can really be a boon to the examiner, as you can hit the system with an AV scanner (...or two...or three...) and save yourself a great deal of time trying to locate malware. The problem is that the ACLs on the System Volume Information directory require System-level access on that system, and having System-level access on your analysis system does not equate to System-level access on the mounted image. So things tend not to work as well...files within the "protected" directories will not be scanned. Your alternatives are to either perform your analysis within a forensic analysis application such as ProDiscover (using ProScripts), or to export the entire directory structure out of the image, at which point storage becomes a consideration (I've seen systems with quite a number of Restore Points).
This can be an issue because bad guys may try to hide stuff in these directories...read on...
Remember me mentioning the existence of web browsing history for the Default User? This indicates the use of the WinInet API (wget.exe, IE, etc.) by someone who accessed the system with System level privileges. This level of access would also allow that user to access the System Volume Information directory, where the Restore Points are maintained, and possibly put things there, such as executable image files, etc. It's unlikely that a restore point would be used for persistence (ie, point a Windows Service to an executable image within a restore point), as the restore points eventually get deleted or flushed out (see the fifo.log file). However, this would be an excellent place to put an installer or downloader file, and then the intruder could place the files that he wanted to be persistent in either the System Volume Information directory, or the "_restore*" directory.
So, besides looking for files that we know are in the Restore Points (ie, drivetable.txt, rp.log, Registry files), we should also consider looking for files that shouldn't be there, particularly if we find other artifacts that indicate a System-level intrusion.
Beyond this, Restore Points provide a wealth of historical information about the system. By parsing all of the rp.log files, we can develop a timeline of activity on the system that will give us an idea of what was done (system checkpoint, application install/uninstall, etc.). If the Restore Points are in sequence but the dates seem skewed, then we have an indication that someone may have fiddled with the system time. Using the drivetable.txt file, you can see what drives were attached to the system at the time that the Restore Point was created (by default, one is created every 24 hrs).
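For example, here's a quick sketch of the sort of thing I'm talking about, pulling the creation time out of rp.log files exported from each RP## directory (this assumes, per the structure covered in the book, that the last 8 bytes of rp.log hold a 64-bit FILETIME object):

#!/usr/bin/perl
# rp_times.pl - list Restore Point creation times from exported rp.log files
# Sketch only; assumes the last 8 bytes of each rp.log hold a 64-bit FILETIME
# object, per the rp.log structure discussed in the book.
use strict;

foreach my $file (@ARGV) {
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);
    seek(FH, -8, 2);                 # the FILETIME lives in the final 8 bytes
    read(FH, my $raw, 8);
    close(FH);
    my ($lo, $hi) = unpack("VV", $raw);
    # convert 100-nanosecond intervals since 1601 to Unix epoch seconds
    my $epoch = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;
    printf "%-40s %s UTC\n", $file, scalar gmtime($epoch);
}

Run it against copies of the rp.log files exported from the image, then sort the output by Restore Point number and see whether the timestamps run in the same order.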
Beyond these files, we also have access to the Registry files that are backed up to the Restore Points. You can parse these to see if and when a user's privilege levels were modified (ie, added to the Administrator group), determine IP addresses and network settings for the system (parse the NetworkCards key from the Software file, then the Tcpip Services key from the System file), etc.
Analysis of the Registry files maintained in Restore Points is also useful in determining a timeline for certain Registry modifications that are difficult to pin down. For example, when a Registry value is added or modified, the key's LastWrite time is updated. What if one value is added, and one is modified...how do we determine which action caused the LastWrite time to be modified? Well, by using the historical data maintained in the Restore Points, we can look back and see that on this date, the modified value was there, but the added value wasn't...and then it appears in the next Restore Point. Not exact, but it does give us a timeline.
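To illustrate, here's a rough sketch using the Parse::Win32Registry module from CPAN to check a series of Software hives exported from Restore Points; the key path and value name below are hypothetical placeholders, so substitute whatever you're tracking:

#!/usr/bin/perl
# rp_value_check.pl - check which Restore Point Software hives contain a given
# value, to bracket when it appeared. Sketch only; requires Parse::Win32Registry
# from CPAN, and the key path/value name below are hypothetical placeholders.
use strict;
use Parse::Win32Registry qw(iso8601);

my $key_path   = "Microsoft\\Windows\\CurrentVersion\\Run";    # placeholder
my $value_name = "suspect_value";                              # placeholder

# pass in the _REGISTRY_MACHINE_SOFTWARE files from each RP##\snapshot dir
foreach my $hive (@ARGV) {
    my $registry = Parse::Win32Registry->new($hive) || next;
    my $root_key = $registry->get_root_key;
    my $key      = $root_key->get_subkey($key_path) || next;
    my $found    = $key->get_value($value_name) ? "PRESENT" : "absent";
    printf "%-50s %-8s LastWrite: %s\n", $hive, $found,
        iso8601($key->get_timestamp);
}

The first hive where the value shows up as PRESENT, together with the last one where it's absent, gives you your bracket.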
So...what interesting things have you found in Restore Points?
Links
Kelly's Korner (XP Restore Points)
MS Windows XP System Restore
System Restore WMI Classes
Thursday, June 14, 2007
EventLog Analysis
In my book, I covered the Windows 2000, XP, and 2003 EventLog file header and event record structure in some detail. There's also a Perl script or two on the DVD that accompanies the book that let you parse an Event Log without using the Windows API, so that you avoid that pesky message about the Event Log being corrupted.
I've since updated one of the scripts (changing the name to evt2xls.pl), so that now it writes the information that it parses from the Event Log file directly into an Excel spreadsheet, even going so far as to format the date field so that it "makes sense" to Excel when you want to sort based on the date. I've found that writing the data directly to a spreadsheet makes things a bit easier for me, particularly when I want to sort the data to see just certain event record sources, or perform some other analysis. I've also added some functionality to collect statistics from the Event Log file, and display information such as total counted event records, frequency of event sources and IDs, etc., in a separate report file. I've found these to be very useful and efficient, giving me a quick overview of the contents of the Event Logs, and making my analysis of a system go much smoother, particularly when combined with Registry analysis (such as parsing the Security file for the audit policy...see the Bonus directory on the DVD for the Perl script named poladt.pl and its associated EXE file). One of the things I'm considering adding to this script is reporting of successful and failed login attempts, basing this reporting in part on the type of the login attempt (ie, Service vs Local vs Remote).
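For the curious, the date trick looks something like this...a sketch using the Spreadsheet::WriteExcel module from CPAN, not the actual evt2xls.pl code:

#!/usr/bin/perl
# Sketch of writing parsed event records to a spreadsheet with a sortable
# date column; uses Spreadsheet::WriteExcel from CPAN. Not the actual
# evt2xls.pl code.
use strict;
use Spreadsheet::WriteExcel;

my @records;   # assume this gets populated by the EVT-parsing code

my $workbook = Spreadsheet::WriteExcel->new("events.xls");
my $sheet    = $workbook->add_worksheet("Event Log");
my $datefmt  = $workbook->add_format(num_format => 'yyyy-mm-dd hh:mm:ss');

$sheet->write_row(0, 0, ["Date", "Source", "Event ID", "Type"]);

my $row = 1;
foreach my $rec (@records) {
    # write_date_time() takes an ISO8601-style string; once written with a
    # date format, Excel sorts the column chronologically
    $sheet->write_date_time($row, 0, $rec->{iso_date}, $datefmt);
    $sheet->write($row, 1, $rec->{source});
    $sheet->write($row, 2, $rec->{event_id});
    $sheet->write($row, 3, $rec->{type});
    $row++;
}
$workbook->close();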
Here's something to think about...there is sufficient information in the book, and Perl code on the DVD, such that you can create tools for parsing event records from other sources, such as RAM dumps, the pagefile, and even unallocated space. I'm considering writing a couple of small tools to do this...not to search the files for records, specifically (I can add that to the code that parses RAM dumps), but to start by simply extracting event records given a file and an offset within the file.
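As a proof of concept, here's a minimal sketch of that second idea, based on the EVENTLOGRECORD structure covered in the book (the "LfLe" magic number sits four bytes into each record):

#!/usr/bin/perl
# evtrec.pl <file> <offset> - extract a single event record from an arbitrary
# file (RAM dump, pagefile, unallocated space image) at the given offset.
# Sketch based on the EVENTLOGRECORD structure; minimal sanity checking.
use strict;

my ($file, $offset) = @ARGV;
$offset = hex($offset) if ($offset =~ /^0x/i);

open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
seek(FH, $offset, 0);
read(FH, my $hdr, 24);
my ($len, $magic, $rec_num, $time_gen, $time_wrt, $event_id) = unpack("V6", $hdr);
die "No LfLe magic number at offset $offset\n" unless ($magic == 0x654c664c);

read(FH, my $words, 6);
my ($type, $num_str, $category) = unpack("v3", $words);
close(FH);

printf "Record Number  : %d\n", $rec_num;
printf "Time Generated : %s UTC\n", scalar gmtime($time_gen);
printf "Time Written   : %s UTC\n", scalar gmtime($time_wrt);
printf "Event ID       : %d\n", $event_id & 0xFFFF;
printf "Type/Strings/Category : %d/%d/%d\n", $type, $num_str, $category;

From there, a scanning version would just slide through the file looking for the "LfLe" signature and validating the length field before parsing.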
But what about actual Event Log analysis? What about really using the Event Log to get some insight into activity on the system? What can we look for and how can we use it?
Here are some tidbits that I've come across and use...please don't consider this a complete list, as I hope that people will contribute. This is just to get folks started...
Stephen Bunting has a great write-up that explains how to use the Event Log to track time change events, such as when someone alters their system time.
The Application Event Log is a great place to look for events generated by antivirus applications. This will not only tell you if an antivirus application is installed on the system (you can also perform Registry analysis to determine this information), but perhaps the version, when it was active, etc.
In the System Event Log, Event ID 6161 (Source: Print) tells you when a file failed to print. The event message tells you the name of the file that failed to print, the username, and the printer.
Also in the System Event Log, Event ID 35 (Source: W32Time) is an Information event that tells you that your system is sync'ing with a time server, and provides the IP address of your system. This can be very useful in a DHCP environment, as it tells you the IP address assigned to the system (actually, the interface) at a particular date and time.
Windows Defender (Source: WinDefend) will generate an event ID 1007 when it detects malware on a system; the event strings contain specific information about what was found.
Whenever you're doing Event Log analysis, be sure to go to EventID.net for help understanding what you're looking at. Most of the listed event IDs have detailed explanations of what can cause the event, as well as links to information at MS.
Again, this is not a complete list of items that you may find and use in your analysis...these are just some things that come to mind. And remember, you get a bit more out of Event Log analysis when you combine it with Registry analysis, not only of the audit policy for the system and the settings for the Event Logs, but with other sources, as well.
Links
EventLog Header structure
Event Record structure
EventLog EOF record structure
EventLog File Format
Wednesday, June 13, 2007
Determining the version of XP
I received an interesting comment to one of my recent blog posts...the poster was musing that he wished he could determine the version of XP (Home or Pro), presumably during a post-mortem examination. As this piqued my interest, I began to research it...and most of what I found applies to a live running system. For example, MS has a KB article that tells you how to determine the version of XP you've got. Also, the WMI class Win32_OperatingSystem has a value called "SuiteMask" that will let you determine the version of the operating system; to see if you're on the Home version of XP, perform a logical AND operation with the SuiteMask value and 0x0200 (the "Personal" bit) - if the result is nonzero, you're on XP Home. You can also use the Win32::GetOSVersion() function in Perl, or implement the WMI Win32_OperatingSystem class in Perl.
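As a quick illustration, here's how that SuiteMask check might look from Perl on a live system, using Win32::OLE to query WMI (a sketch only):

#!/usr/bin/perl
# Determine XP Home vs Pro on a live system via the WMI Win32_OperatingSystem
# class; sketch using Win32::OLE (ActivePerl).
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject(
    "winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2")
    || die "Could not connect to WMI: " . Win32::OLE->LastError() . "\n";

foreach my $os (in $wmi->InstancesOf("Win32_OperatingSystem")) {
    printf "Caption   : %s\n", $os->{Caption};
    printf "SuiteMask : 0x%04x\n", $os->{SuiteMask};
    # 0x0200 is the "Personal" suite bit
    print(($os->{SuiteMask} & 0x0200) ? "=> XP Home\n" : "=> not XP Home\n");
}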
This information seems to be maintained in memory, and appears to be retrieved using the GetVersionEx() API function. Running a couple of tests to extract the information while running RegMon doesn't appear to reveal anything interesting as far as Registry keys that are accessed while attempting to determine the OS version.
During a post-mortem examination, you can go to the file "%WinDir%\system32\eula.txt" and locate the last line of the file that begins with "EULAID", and you'll see something similar to:
EULAID:XPSP2_RM.0_PRO_OEM_EN
If it says "HOM" instead of "PRO", you're dealing with the Home version of XP.
Also, you can try the file "%windir%\system32\prodspec.ini", and right below the line that says "[Product Specification]", you'll see an entry that will tell you which version of the OS you're working with (note: be sure to check the last modification date on these files, as well...).
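Both checks are easy to script against files exported from an image. Here's a sketch (this assumes the edition appears in the last EULAID line of eula.txt, as described above, and in a "Product=" line under the [Product Specification] section of prodspec.ini):

#!/usr/bin/perl
# xpver.pl <eula.txt> <prodspec.ini> - check the XP edition from files
# exported from an image. Sketch only; assumes the edition appears in the
# last EULAID line of eula.txt and in a Product= line in prodspec.ini.
use strict;

my ($eula, $prodspec) = @ARGV;

open(FH, "<", $eula) || die "Could not open $eula: $!\n";
my @ids = grep(/^EULAID:/, <FH>);
close(FH);
if (@ids) {
    chomp(my $id = $ids[-1]);        # the last EULAID line in the file
    print "$id\n";
    print(($id =~ /_PRO_/) ? "=> XP Professional\n" : "=> check for HOM (XP Home)\n");
}

open(FH, "<", $prodspec) || die "Could not open $prodspec: $!\n";
while (<FH>) {
    # print the Product= entry beneath the [Product Specification] section
    print if (/^\[Product Specification\]/ .. /^\s*$/) && /^Product=/i;
}
close(FH);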
Links
Determine the version of IE installed
Check the Version of Office XP
Determine the Windows version using C# (using VB)
32- or 64-bit version of Windows?
Monday, June 11, 2007
Some Registry stuff...
I like "Registry stuff". I don't know what the fascination is, but for some reason, I love stuff that has to do with the Registry.
Anyway, I ran across something recently...I was looking at one of my own systems and ran across an interesting value in my AppInit_DLLs Registry value. Just the fact that there was data within this value was interesting enough! But then I saw something even more interesting...another value named LoadAppInit_DLLs. I haven't found anything specific about this value at the MS site yet, but this appears to be a Vista-only Registry value, in that it is only recognized and utilized by the Vista operating system. This is covered briefly in Symantec's Analysis of the Windows Vista Security Model paper.
This value appears to be used by PGP, as well as by some tools from Google (both observations are based on Google searches for occurrences of the value name).
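On a live system, checking both values takes just a few lines of Perl...a sketch using the Win32::TieRegistry module:

#!/usr/bin/perl
# Check the AppInit_DLLs and LoadAppInit_DLLs values on a live system;
# sketch using the Win32::TieRegistry module.
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $key = $Registry->{"LMachine/SOFTWARE/Microsoft/Windows NT/CurrentVersion/Windows/"}
    || die "Could not open the Windows key: $^E\n";

foreach my $val ("AppInit_DLLs", "LoadAppInit_DLLs") {
    my $data = $key->{"/" . $val};
    # note: REG_DWORD data may come back packed; unpack("V", $data) if needed
    printf "%-18s => %s\n", $val, defined($data) ? $data : "(value not present)";
}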
On the topic of the Registry, here's how to use PowerShell to get the name of the last user to log onto a system.
So, what are you looking in the Registry for...or looking for in the Registry?
Links:
Forensics Wiki: Windows Registry
The Windows Registry as a Forensic Resource
Alien Registry Viewer
32-bit Application access to the Registry on 64-bit versions of Windows
Windows Forensic Analysis Book Review
Andrew Hay posted the first review of my book...that I'm aware of! ;-)
Andrew also posted the review on Amazon!
Thanks, Andrew!
Saturday, June 02, 2007
AntiForensics Article
I read an interesting article recently that talks about antiforensics. At first glance, the article is something of an interesting piece, but reading it a second time and thinking about what was actually being said really got me thinking. Not because the article addresses the use of antiforensics, but because it identifies an issue (or issues) that needs to be addressed within the forensics community. Yes, these tools are out there, and we should be thankful that they were made available by someone...otherwise, how could we address the issue? So, what do we need to do to update our methodologies accordingly? Perhaps more importantly, should we be trying to get ahead of the power curve, rather than playing catch up?
I do feel that it is important to mention something else in the article that I found very concerning, though:
"...details of the TJX breach—called the biggest data heist in history, with more than 45 million credit card records compromised—strongly suggest that the criminals used antiforensics to maintain undetected access to the systems for months or years and capture data in real time."
Strongly suggest, how?
The article goes on to say:
"Several experts said it would be surprising if antiforensics weren’t used."
Several experts? Who? Were any of them involved in the investigation? If they were, what "expert" reveals this kind of information, and keeps his or her job? If not...why are they speculating? It just seems to me that this part of the article is out of place, and when viewed within the context of the entire article, breaks up the flow. The article has a logical progression of here's the issue, okay we've identified it, let's get about fixing it...which all makes sense...but then this bit of speculation seems out of place.
Overall, though, it appears that the article points to some issues that should be addressed within the digital forensic community. Are the tools we have worthless? Not at all. We just have to make better use of the information we have at hand. The article mentions building layers of "evidence", using multiple sources of information to correlate and support what we found in our digital investigation.
Also, Harlan's Corollary to Jesse's First Law of Computer Forensics really seems to be applicable now more than ever! ;-)
Thoughts on Live Acquisition
Recently, I've been doing some thinking about issues surrounding live acquisitions - specifically, acquiring an image from a system that is running, as opposed to either booting the system to an alternate OS, or removing the hard drive and hooking it up to a write-blocker.
There are several methods for performing a live acquisition, but most involve running an agent or application of some kind on the system itself. You can do this using ProDiscover (install or run the PDServer agent from a CD or thumb drive), FTK Imager (run from a CD or thumb drive, and write the image files to an external drive or an already-mapped share), or with good ol' dd and netcat running from a CD. Regardless of how you choose to do this, there is an additional process running on the system...so think Heisenberg's Uncertainty Principle, but in the digital realm.
So in acquiring an image from a live system, we need to introduce a process into a system of already running processes. While our intention is to take care and disturb the 'scene' as little as possible, it is simply a fact that we're going to leave some artifacts of our activities on the system...memory is consumed by our process, as are buffers, perhaps other processes have pages written out to the pagefile, etc. However, we address this issue with thorough documentation of our procedures and methodologies.
Now, in his book Computer Evidence: Collection & Preservation, Chris Brown describes the result of a live acquisition as a "smear", as from a temporal perspective, that's what we've got. Remember, it takes time to acquire an image from a hard drive, and if the system is still live and running when you acquire that image, then there is the distinct possibility that some sectors may change after you acquire them. Rather than having a sharp, distinct snapshot in time as when you acquire an image from a hard drive that has been removed from a system, you get a "smudge" or a "smear" instead. Also, as Greg Kelly pointed out in another forum recently, some of what we would normally consider stagnant data (Registry, files, etc.) can actually be considered volatile, and subject to change during the course of our acquisition...think log files, Event Log entries, the pagefile, etc.
Now, would it be possible to minimize this effect, by limiting what's running on the system during the acquisition? I believe that the answer to this is "yes". To do this, we'd need to take a couple of things into consideration. We should first ask ourselves, is this necessary? Hhhhmmm...if you're imaging the system over the network, and that system is still connected to the network, what is it doing on the network while you're acquiring your image? Is it serving up web pages, processing email, etc?
Before we continue, remember, we're talking about a live acquisition here, getting an image of the hard drive, NOT collecting the contents of memory.
Okay, that being said...if you're imaging over the network, is the system still providing services (shares via the Server service, etc.) that may have a significant effect on what you walk away with? We have to consider this in the face of what effect our actions will have on the system itself when we, say, disable a service. First, we have to see what processes are running, and to do that, we need to load some software and run another process (unless you're using the ProDiscover PDServer, as the agent provides this functionality, as well). Then we have to weigh the benefits of disabling the process or service against the "costs" of the effect that our actions have on the contents of the hard drive. Is that process really processing anything? We know that it may have file handles open, etc. But is there enough of an effect on the contents of the drive that it would make a significant difference within the time it takes to acquire the image? Also, if I disable that process/service, what happens? Any log files may be closed, perhaps the operating system itself will write an entry into the System or Application Event Log, etc.
Some of the processes or services I might consider shutting down include:
- AntiVirus products - these things reach out on their own and update themselves, or run scans automatically
- Task Scheduler - check to see if there are any jobs scheduled to run...this can get in the way of your acquisition (also, see the "Notes from the Underground..." sidebar on pp 215-216 of my book for an interesting tidbit on hiding scheduled tasks)
- Windows Firewall - depending upon how it's configured (we'd want to check, of course) there may be some significant issues. I included a sample pfirewall.log file on the DVD with my book...I'd turned up logging on the firewall and hit it with nmap. ;-)
- Exchange/IIS - if the system is still connected to the network do you want it processing email and web pages during the acquisition? Think the same thing for the FTP and SMTP services that sometimes get installed along with IIS.
So, before doing any of this, we need to put some thought into it. What, if anything, am I going to disable or shut down? Do I even need to? In some cases, live acquisitions I've done have been of systems that had already been taken off of the network (not by me) and shut down, and then rebooted so they could be acquired (acquisition via write-blocker was not possible). With no network connections, I'm not overly concerned about major changes to the system during the acquisition process...the NTP service may log a complaint with the Event Log, but little else may happen. However, this isn't always the case...we're seeing a greater need for live response, both in acquiring the contents of memory, as well as performing live acquisitions of major systems that cannot be brought down or offline. If this is the case, document it. On your acquisition worksheet or in your case notes, clearly state "these services could not be halted or disabled due to...".
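To that end, here's a minimal sketch for snapshotting the state of all services before (and again after) the acquisition, using the Win32::Service module that ships with ActivePerl...the output can go straight into your case notes:

#!/usr/bin/perl
# svcsnap.pl - snapshot which services are running before (and again after)
# a live acquisition, for inclusion in case notes. Sketch using the
# Win32::Service module that ships with ActivePerl.
use strict;
use Win32::Service;

my %services;
Win32::Service::GetServices("", \%services)
    || die "Could not enumerate services\n";

print "Service state snapshot: " . scalar gmtime() . " UTC\n";
foreach my $display (sort keys %services) {
    my %status;
    next unless Win32::Service::GetStatus("", $services{$display}, \%status);
    # CurrentState 4 == SERVICE_RUNNING
    printf "%-50s %s\n", $display,
        ($status{CurrentState} == 4) ? "RUNNING" : "stopped/other";
}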
This is just a first blush...getting my thoughts down, thinking through things, determining if there is even a strong enough need for something like this...perhaps a matrix of services and processes, and when it's a good idea to shut them down, how to do so, etc. Is this necessary?
Also, because I know the question is going to come up...addressing this same issue in the face of acquiring the contents of physical memory is an entirely separate post. Stay tuned!
Friday, June 01, 2007
A little about my book...
I apologize for this brief digression from the normal flow of the blog, but I've been receiving certain comments of late from several venues, and I thought I would address them all at once...
Many times, in forums (forii??) or email, someone will see me say "...as I mentioned in my book..." or "...as detailed in my book..." and I've received comments that some folks have been turned off by that. Okay, I can go with that, as I dislike sales pitches myself. So why do I say something like that?
The first conclusion that many seem to come to is that I'm trying to get you to purchase my book to line my pockets. Don't take this personally...but that is not only the first and most popular reaction, but also the most naive and uneducated one. The folks who feel that way have not written a book and do not know what goes into writing one. Further, they have no idea how little an author makes on the sale of a book.
So why do it? Well, I wrote both of my books (first one and second one) as references...I had a lot of information to share, and I wanted to put it all in one place, and thought that it would be a good idea to do so in a manner that would make it available to others.
Now, I could post this stuff on the Internet for free, couldn't I? Rather than constantly rewriting the same thing over and over again into emails and posts, I could cut-n-paste it, or simply post it on the Internet and constantly repost the link. But that gets pretty tiresome...so why not put it into a book? Another benefit of having it in a book is that there is a certain amount of credibility to the material...after all, it has to be tech edited and reviewed. My first book had three tech reviewers (some more engaged than others)...my second one started with one, and ultimately had two. Look at who tech edited my second book, and also look at the names of folks who are acknowledged as having made contributions that were important to the development of the book...doesn't that give the material a bit more credibility than posting it to the Internet?
So the next time you see me say those words, and think to yourself, "man, I wish this guy would just shut up about his book!!", try thinking instead that there may be something useful in that book or on the DVD...Troy Larson thought so.