Wednesday, August 29, 2007

TechTalk Interview

Back on 10 June, I did an interview on TechTalk, and I saw today that it's posted here. There's some other news...CNN, weather, etc...before the actual interview.

Check it out, give it a listen. Also be sure to check out some of the other shows in the archives.

Tuesday, August 21, 2007

Vista IR

I recently started doing some testing of IR tools on Vista, using Vista Ultimate (32-bit) installed into a VMWare Workstation 6.0 virtual machine.

Part of my testing involved running some tools on Vista to see how they worked, and another part involved mounting the *.vmdk file for my Vista VM using the latest versions of VDK and VDKWin.

IR Tools
I started off by downloading (via IE7) a couple of tools...specifically Autorunsc.exe and Tcpvcon.exe. Both seemed to work quite well, the only real hiccup being the GUI EULA dialog that pops up if you run the tools without the "/accepteula" switch (either way, the tools create a Registry key...be sure you understand and document this as part of your IR methodology if you're using these tools). An interesting part of the tcpvcon output was the amount of IPv6 stuff visible.
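
To make the EULA point concrete: a purely hypothetical first run (the output paths are illustrative) might look like this, with the acceptance then recorded under HKCU\Software\Sysinternals (the exact subkey name varies by tool; look for an "EulaAccepted" value):

D:\tools>autorunsc.exe /accepteula > E:\case\autorunsc.txt
D:\tools>tcpvcon.exe -a /accepteula > E:\case\tcpvcon.txt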

My next steps are to test additional tools, as well as the use of WMI-based tools.

Extracting Files from a Vista VM
Mounting the Vista VM as a read-only file system/drive letter on my XP system went off without a hitch. I was already in the process of updating the VDK drivers and VDKWin GUI files, and on a whim I pointed the mount utility at the Vista VM *.vmdk file. I was pleasantly surprised to see the VM mounted as J:\. As expected, some of the directories (specifically, System Volume Information) could not be accessed...this is due to ACLs on those objects. However, I had fairly unrestricted access to the rest of the file system.

A friend mentioned to me recently that the offsets for the Last Run time and run count in Vista Prefetch files are different from those in XP. I extracted a Prefetch file from the Vista VM and opened it in UltraEdit to look for the offset of the last run time. I found what appeared to be a FILETIME object at offset 0x80, and modified my existing code to extract those 8 bytes. The result matched up quite nicely:

C:\Perl>vista_pref.pl d:\hacking\vista_autorunsc.exe-7bca361f.pf
Last Run = Mon Aug 20 22:50:43 2007 (UTC)
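
For anyone who wants to reproduce this, here's a minimal sketch of the extraction logic...the 0x80 offset comes from the testing described above, and error handling is kept to a minimum:

#! c:\perl\bin\perl.exe
# Minimal sketch: extract the 64-bit FILETIME at offset 0x80 of a Vista
# .pf file and convert it to a Unix epoch time for printing
use strict;
my $file = shift || die "You must enter a filename.\n";
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
seek(FH,0x80,0);
read(FH,my $data,8);
my ($lo,$hi) = unpack("VV",$data);
# FILETIME = 100-nanosecond intervals since 1 Jan 1601; subtract the
# 11644473600-second difference to get to the Unix epoch (1 Jan 1970)
my $time = int(($hi * (2 ** 32) + $lo) / 10_000_000) - 11644473600;
print "Last Run = ".gmtime($time)." (UTC)\n";
close(FH);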

I also tried running some of the Registry parsing tools (using the Parse::Win32Registry module by James Macfarlane) against files extracted from the Vista VM. I started with a Perl script that parses the contents of the UserAssist key - here's an extract of the results:

C:\Perl\reg>pnu.pl d:\hacking\vista_ntuser.dat
LastWrite time = Mon Aug 20 22:53:02 2007 (UTC)
Mon Aug 20 22:53:02 2007 (UTC)
UEME_RUNPATH
UEME_RUNPIDL
UEME_RUNPIDL:%csidl2%\Accessories\Command Prompt.lnk
UEME_RUNPATH:C:\Windows\System32\cmd.exe
Tue Aug 14 18:47:55 2007 (UTC)
UEME_RUNPATH:C:\Windows\system32\control.exe
Wed Jul 11 20:37:27 2007 (UTC)
UEME_RUNPATH:C:\Windows\system32\Wuauclt.exe

That seems to work quite nicely! It looks like I won't have any trouble accessing the raw Registry files using James' module, at least not on 32-bit versions of Vista, so that's good news!
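
For reference, a minimal sketch of that sort of script might look like the following (the tr/// handles the ROT-13 encoding of the UserAssist value names):

#! c:\perl\bin\perl.exe
# Minimal sketch: dump UserAssist value names from an NTUSER.DAT file
# using James Macfarlane's Parse::Win32Registry module
use strict;
use Parse::Win32Registry;
my $reg  = Parse::Win32Registry->new(shift) || die "Not a Registry file.\n";
my $root = $reg->get_root_key;
my $ua   = $root->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion'.
           '\\Explorer\\UserAssist') || die "UserAssist key not found.\n";
foreach my $guid ($ua->get_list_of_subkeys) {
    my $count = $guid->get_subkey('Count') || next;
    print "LastWrite time = ".gmtime($count->get_timestamp)." (UTC)\n";
    foreach my $val ($count->get_list_of_values) {
        my $name = $val->get_name;
        $name =~ tr/A-Za-z/N-ZA-Mn-za-m/;    # ROT-13 decode the value name
        print $name."\n";
    }
}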

There's still more testing and analysis to do, but this is a good start!

Saturday, August 18, 2007

Copying Files

I've been party to, or heard, a good number of discussions lately regarding USB removable storage devices, and one of the topics that invariably comes up is: how can you determine which files were copied from the system to a thumb drive, or vice versa?

In most instances, folks are working only with the image of a system, and do not have access to the thumb drive itself. They can easily find the information that tells them when a thumb drive was first connected, and when it was last connected...and then the next question is, what files were or may have been copied to the thumb drive (or from the thumb drive to the system)?

The fact is that Windows systems do not maintain a record of file copies or moves...there is simply no way for a forensic analyst to look at the image of a system and say which files were copied off of the system onto a thumb drive. In order to determine this, you'd need to have the thumb drive (or other media) itself, and be able to show that you have two files of the same or similar size (better, compare the files with md5deep or ssdeep), one on each piece of media. From there, you could then check the file MAC times and possibly draw some conclusions regarding the direction of transfer.
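
As a hypothetical example (the drive letters are illustrative...say F:\ is the thumb drive and G:\ is the mounted image), the matching modes in those tools make the comparison fairly painless:

C:\tools>md5deep -r F:\ > thumb_md5.txt
C:\tools>md5deep -r -m thumb_md5.txt G:\

C:\tools>ssdeep -r F:\ > thumb_ssd.txt
C:\tools>ssdeep -r -m thumb_ssd.txt G:\

Exact MD5 matches point to identical copies, while the ssdeep matching output can catch files that were modified slightly after being copied.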

Many times in a conversation on this topic, someone will bring up Windows shortcuts or LNK files. To be honest, I'm not really sure why this comes up; it just seems to be the case. Shortcuts can be created manually, of course, but with regard to files, they are created when a user double-clicks a file, such as a Word document, to open it. Repeated testing on my part (as well as testing done by others) has yet to turn up a method by which normal (as in "normal user activity") dragging-and-dropping of a file, or use of the "copy" command, will result in a Windows shortcut file being created.

Does anyone out there have any thoughts or input on tracking this kind of activity, having nothing more than a single system image to analyze? If so, I'd appreciate hearing from you.

Friday, August 17, 2007

IR "Best Practices"

I know it's been a while since I've posted, but work has been really keeping me busy...that's an excuse that we all use, but that's my story and I'm stickin' with it!

So, I've been talking to a number of different folks recently, having discussions during my travels to and fro about incident response and computer forensics. Many times, the issue of "best practices" has come up and that got me thinking...with no specific standards body governing computer forensics or incident response, who decides what "best practices" are? Is it FIRST? After all, they have "IR" in their name, and it does stand for "incident response". Is it the ACPO Guidelines that specify "best practices"?

Well, the easy answer (for me, anyway) is that "it depends". Funny, haha, I know. But in a sense, it does. "Best practices" depend a great deal on the political and cultural makeup of your organization; I've seen places where the incident response team has NO access to the systems, and must request data and information from the network operations staff, which has its own daily tasks and requirements.

But all that aside, we can focus on the technical details of responding to Windows systems. When we say "best practices", we can specify the information we need to get in order to properly assess the situation/incident, how to go about getting it (retrieve it locally, remotely, via live acquisition, via post-mortem exam, etc.), and in what order to retrieve that data (the ACPO Guidelines specify dumping the contents of physical memory last, rather than first...), etc.

For example, we know that during live response, anything we do is going to leave an artifact. So, why not specify what data you're going to collect and the method by which you're going to collect that data? Then determine the artifacts that are left through testing, and document your procedure and artifacts (hint: things like batch files and scripts can constitute "documentation", particularly when burned to CD), and implement that procedure via some means (ie, batch file, script, etc.).
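
As a deliberately minimal, hypothetical example of such a batch file (the tool names, paths, and output locations are illustrative, not a recommendation):

@echo off
REM Minimal live-response sketch: run from CD (D:), write to removable media (E:)
REM The batch file itself documents what was collected, and in what order
echo Collection started: >> e:\collect.log
date /t >> e:\collect.log
time /t >> e:\collect.log
d:\tools\tcpvcon.exe -a /accepteula > e:\tcpvcon.txt
d:\tools\autorunsc.exe /accepteula > e:\autorunsc.txt
echo Collection completed: >> e:\collect.log
time /t >> e:\collect.log

Burn that to the same CD as the tools, and the procedure and its documentation travel together.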

One of the questions that invariably comes up during discussions of live response and "best practices" is, can data collected via live response be used as evidence in court? I would suggest that the answer is, "why not?" After all, an assault victim can be treated by EMTs, operated on by a surgeon, and cared for in a hospital, and the police can still collect evidence and locate and prosecute the perpetrator, right? It's all about documentation, folks.

Monday, July 16, 2007

Book Review

Donald Tabone posted a review of my book, Windows Forensic Analysis, here.

In his review, Mr. Tabone included this:

What I dislike about the book:

- No mention of Steganography techniques/tools which Chapter 5 could have benefited from.

Back in the fall of '06, while I was writing the book, I sent out emails to a couple of lists asking folks what they would like to see in a book that focuses on forensic analysis of Windows systems; I received very few responses, and if memory serves, only one mentioned steganography. In this book, I wanted to focus specifically on issues directly related to the forensic analysis of Windows systems, and I did not see where steganography fit into that category.

Interestingly enough, Mr. Tabone did not mention what it was he wanted to know about steganography. Articles have already been written on the subject (SecurityFocus, etc.) and there are sites with extensive lists of tools, for a variety of platforms.

I would like to extend a hearty "thanks" to Mr. Tabone for taking the time and effort to write a review of my book, and for posting it.

Incidentally, Hogfly posted a review of my book, as well. Be sure to read the comments that follow the review.

Saturday, July 14, 2007

Thoughts on RAM acquisition

As a follow-on to the tool testing posts, I wanted to throw something else out there...specifically, I'll start with a comment I received, which said, in part:

Tool criteria include[sic] whether the data the tool has acquired actually existed.

This is a slightly different view of RAM acquisition (or "memory dumping") than I've seen before...perhaps the most common question/concern is more along the lines of, does exculpatory evidence get overwritten?

One of the issues here is that unlike a post-mortem acquisition of a hard drive (ie, the traditional "hook the drive up to a write-blocker, etc."), when acquiring or dumping RAM, one cannot use the same method and obtain the same results...reproducibility is an issue. Because you're acquiring the contents of physical memory from a running system, at any given point in time, something will be changing; processes process, threads execute, network connections time out, etc. So, similar to the live acquisition of a hard drive, you're going to have differences (remember, one of the aspects of cryptographic hash algorithms such as MD5 is that flipping a single bit will produce a different hash). I would suggest that the approach we should take to this is to accept it and document it.
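
Part of "document it" can be as simple as recording what was run and when, and then hashing the output file once it has been collected (a hypothetical example; the paths are illustrative):

C:\tools>md5deep f:\memdump.img >> f:\collect.log

The hash won't make the acquisition reproducible, but it does fix the integrity of the dump from that point forward.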

That being said, what are some of the questions that we can anticipate addressing, and how would/should we answer them? I'll take a stab at a couple of basic questions (and responses), but I'd really like to see what others have to say:

1. Did you acquire this data using an accepted, validated process?

In order to respond to this question, we need to develop a process, validate it, and get it "accepted". Don't ask me by whom at this point...that's something we'll need to work on.

2. Did this process overwrite evidence, exculpatory or otherwise?

I really think that determining this is part of the validation process. In order to best answer this question, we have to look at the process that is used...are we using third-party software to do this, or are we using some other method? How does that method or process affect or impact the system we're working with?

3. Was this process subverted by malware running on the system?

This needs to be part of the validation process, as well, but also part of our analysis of the data we retrieved.

4. Did you add anything to this data once you had collected it, or modify it in any way?

This particular question is not so much a technical question (though we do have to determine whether our tools impact the output file in any way) as it is a question for the responder or examiner.

As you can see, there's still a great deal of work to be done. However, please don't think for an instant that I'm suggesting that acquiring the contents of physical memory is the be-all and end-all of forensic analysis. It's a tool...a tool that when properly used can produce some very valuable results.

Thursday, July 12, 2007

Tool Testing Methodology, Memory

In my last post, I described what you'd need to do to set up a system in order to test the effects of the tools we use on systems during IR activities. I posted this as a way of filling in a gap left by the ACPO Guidelines, which say that we need to "profile" the "forensic footprint" of our tools. That post described tools we'd need to use to discover the footprints within the file system and Registry. I invite you, the reader, to comment on other tools that may be used, as well as to provide your thoughts regarding how to use them...after all, the ACPO Guidelines also state that the person using these tools must be competent, and what better way to get there than through discussion and exchange of ideas?

One thing we haven't discussed, and there doesn't seem to be a great deal of discussion of elsewhere, is the effect of the tools we use on memory. One big question that gets asked is: what "impact" do our tools have on memory? This is important to understand, and I think one of the main drivers behind this is the idea that when IR activities are first introduced in a court of law, claims will be made that the responder overwrote or deleted potentially exculpatory data during the response process. So...understanding the effect of our tools will make us competent in their use, and we'll be able to address those (and other) issues.

When a process is created (see Windows Internals, by Russinovich and Solomon for the details, or go here), the EXE file is loaded into memory...the EXE is opened and a section object is created, followed by a process object and a thread object. So, memory pages (default size is 4K) are "consumed". Now, almost all EXEs (and I say "almost" because I haven't seen every EXE file) include an import table in their PE header, which describes all of the dynamic link libraries (DLLs) that the EXE accesses. MS provides API functions via DLLs, and EXEs access these DLLs rather than each author rewriting all of that code from scratch. So...if a necessary DLL isn't already in memory, then it has to be located and loaded...which, in turn, means that more memory pages are "consumed".
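
You can see this for yourself on a live system...ListDLLs from SysInternals, for example, will show you the DLLs mapped into a process's address space (a hypothetical invocation):

C:\tools>listdlls notepad

Each DLL listed represents pages that had to come from somewhere...either already in memory and shared, or located on disk and loaded.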

So, knowing that these memory pages are used/written to, what is the possibility that important 'evidence' is overwritten? Well, for one thing, the memory manager will not overwrite pages that are actively being used. If it did, stuff would randomly disappear and stop working. For example, your copy of a document might disappear because you loaded Solitaire and a 4K page was randomly overwritten. We wouldn't like this, would we? Of course not! So, the memory manager will allocate to a process only those pages that are not currently in use.

For an example of this, let's take a look at Forensic Discovery, by Dan Farmer and Wietse Venema...specifically, chapter 8, section 17:

As the size of the memory filling process grows, it accelerates the memory decay of cached files and of terminated anonymous process memory, and eventually the system will start to cannibalize memory from running processes, moving their writable pages to the swap space. That is, that's what we expected. Unfortunately even repeat runs of this program as root only changed about 3/4 of the main memory of various computers we tested the program on. Not only did it not consume all anonymous memory but it didn't have much of an affect on the kernel and file caches.

Now, keep in mind that the tests that were run were on *nix systems, but the concept is the same for Windows systems (note: previously in the chapter, tests run on Windows XP systems were described, as well).

So this illustrates my point...when a new process is loaded, memory that is actively being used does not get overwritten. If an application (Word, Excel, Notepad) is active in memory, and there's a document that is open in that application, that information won't be overwritten...at worst, the pages not currently being used will be swapped out to the pagefile. If a Trojan is active in memory, the memory pages used by the process, as well as the information specific to the process and thread(s) themselves, will not be overwritten. The flip side of this is that what does get "consumed" are memory pages that are freed for use by the memory manager; research has shown that the contents of RAM can survive a reboot, and that even after a new process (or several processes) have been loaded and run, information about exited processes and threads still persists. So, pages used by previous processes may be overwritten, as will pages that contained information about threads, and even pages that had not been previously allocated.

When we recover the contents of physical memory (ie, RAM), one of the useful things about our current tools is that we can locate a process, and then by walking the page directory and table entries, locate the memory pages used by that process. By extracting and assembling these pages, we can then search them for strings, and anything we locate as "evidence" will have context; we'll be able to associate a particular piece of information (ie, a string) with a specific process.

The thing about pages that have been freed when a process has exited is that we may not be able to associate such a page with a specific process; we may not be able to develop context for anything we find in that particular page.

Think of it this way...if I dump the contents of memory and run strings.exe against it, I will get a lot of strings...but what context will that have? I won't be able to associate any of the strings I locate in that memory dump with a specific process, using just strings.exe. However, if I parse out the process information, reassembling EXE files and memory used by each process, and then run strings.exe on the results, I will have a considerable amount of context...not only will I know which process was using the specific memory pages, but I will have timestamps associated with process and threads, etc.
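
As an aside, the SysInternals version of strings.exe does have a switch that helps a little here (a hypothetical invocation):

C:\tools>strings -o memdump.img > memdump_strings.txt

The -o switch prints the offset of each string within the dump; that offset still has to be mapped back to a physical page, and that page to a process, before the string has any real context...which is exactly the point.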

Thoughts? I just made all this up, just now. ;-) Am I off base, crazy, a raving lunatic?

Tool Testing Methodology

As I mentioned earlier, the newly-released ACPO Guidelines state:

By profiling the forensic footprint of trusted volatile data forensic tools,

Profiling, eh? Forensic footprint, you say? The next logical step is...how do we do this? Pp. 46 - 48 of my book make a pretty good start at laying this all out.

First, you want to be sure to document the tool you're testing...where you found it, the file size, cryptographic hashes, any pertinent info from the PE header, etc.
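
A hypothetical example of that kind of documentation (the tool and path names are illustrative):

C:\test>md5deep d:\tools\tool.exe >> d:\docs\tool_profile.txt
C:\test>sha1deep d:\tools\tool.exe >> d:\docs\tool_profile.txt

Add to that where you downloaded the tool, the version, and the pertinent PE header info, and you've got the start of a profile.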

Also, when testing, you want to identify your test platform (OS, tools used, etc.) so that the tests you run are understandable and repeatable. Does the OS matter? I'm sure some folks don't think so, but it does! Why is that? Well, for one, the various versions of Windows differ...for example, Windows XP performs application prefetching by default. This means that when you run your test, depending upon how you launch the tool you're testing, you may find a .pf file added to the Prefetch directory (assuming that the number of .pf files hasn't reached the 128 file limit).

So, what testing tools do you want to have in place on the testing platform? What tools do we need to identify the "forensic footprint"? Well, you'll need two classes of tools...snapshot 'diff' tools, and active monitoring tools. Snapshot 'diff' tools allow you to snapshot the system (file system, Registry) before the tool is run, and again afterward, and then will allow you to 'diff' the two snapshots to see what changed. Tools such as InControl5 and RegShot can be used for this purpose.

For active monitoring tools, I'd suggest Process Monitor from MS SysInternals. This tool allows you to monitor file and Registry accesses in real time, and then save that information for later analysis.

In order to monitor your system for network activity while the tool is run, I'd suggest installing PortReporter on your system as part of your initial setup. The MS KB article (click on "PortReporter") also includes links to the MS PortQry and PortQryUI tools, as well as the PortReporter Log Parser utility for parsing PortReporter logs.

As many of the tools used in IR activities are CLI tools that execute and complete fairly quickly, I'd also suggest enabling Process Tracking auditing on your test system, so that the event records for process creation (event ID 592) are recorded.

Okay, so all of this covers a "forensic footprint" from the perspective of the file system and the Registry...but what about memory? Good question! Let's leave that for another post...

Thoughts so far?

Wednesday, July 11, 2007

Are you Security Minded?

Kai Axford posted on his blog that he's been "terribly remiss in his forensic discussions". This originated with one of Kai's earlier posts on forensic resources; I'd commented, mentioning my own books as resources.

Kai...no need to apologize. Really. Re: TechEd...gotta get approval for that from my Other Boss. ;-)

Updates, etc.

Since I haven't been posting anywhere close to regularly lately, I felt that a couple of updates and notices of new finds were in order...

First off, James Macfarlane has updated his Parse-Win32Registry module, fixing a couple of errors, adding a couple of useful scripts (regfind.pl and regdiff.pl...yes, that's the 'diff' you've been looking for...), and adding a couple of useful functions. Kudos to James, a huge thanks, and a hearty "job well done"! James asked me if I was still using the module...I think "abuse" would be a better term! ;-)

A version of Forensic CaseNotes with RTF support has been released. I use this tool in what I do...I've added a tab or two that are useful for what I need and do, and I also maintain my analysis and exhibit list using CaseNotes. Now, with RTF support, I can add "formatted text, graphics, photos, charts and tables". Very cool!

LonerVamp posted about some MAC-changing and WiFi tools, and I got to thinking that I need to update my Perl scripts that use James' module to include looking for NICs whose MACs are specifically listed in the Registry. Also, I saw a nifty tool called WirelessKeyView listed...looks like something good to have on your tools CD, either as an admin doing some troubleshooting, or as a first responder.

Another useful tool to have on your CD is Windows File Analyzer, from MiTeC. This GUI tool is capable of parsing some of the more troublesome, yet useful files from a Windows system, such as Prefetch files, shortcut/LNK files, and index.dat files. So...what's in your...uh...CD?

LonerVamp also posted a link to MS KB 875357, Troubleshooting Windows Firewall settings in Windows XP SP2. You're probably thinking, "yeah? so?"...but look closely. From a forensic analysis perspective, take a look at what we have available to us here. For one, item 3 shows the user typing "wscui.cpl" into the Run box to open the Security Center applet...so if you're performing analysis and you find "wscui.cpl" listed in the RunMRU or UserAssist keys, what does that tell you?

What other useful tidbits do you find in the KB article that can be translated into useful forensic analysis techniques? Then, how would you go about automating that?

Another useful tool, if you're doing any work with scripts (JavaScript, etc.) in HTML files, is Didier Stevens' ExtractScripts tool. The tool is written in Python, takes an HTML file as an argument, and writes each script found in the HTML file to a separate file. Very cool stuff!

Some cool stuff...anyone got anything else they'd like to add?

ACPO Guidelines

The Association of Chief Police Officers (ACPO), in association with 7safe, has recently released their updated guide to collecting electronic evidence. While the entire document makes for an interesting read, I found pages 18 and 19, "Network forensics and volatile data" most interesting.

The section begins with a reference back to Principle 2 of the guidelines, which states:

In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.

Sounds good, right? We should also look at Principle 1, which states:

No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.

Also, Principle 3 states:

An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.

Okay, I'm at a loss here. Collecting volatile data inherently changes the state of the system, as well as the contents of the storage media (i.e., Prefetch files, Registry contents, pagefile, etc.), and the process used to collect the volatile data cannot be later used by a third party to "achieve the same result", as the state of the system at the time that the data is collected cannot be reproduced.

That being said, let's move on...page 18, in the "Network forensics and volatile data" section, includes the following:

By profiling the forensic footprint of trusted volatile data forensic tools, an investigator will be in a position to understand the impact of using such tools and will therefore consider this during the investigation and when presenting evidence.

It's interesting that this says "profiling the forensic footprint", but says nothing about error rates or statistics of any kind. I fully agree that this sort of thing needs to be done, but I would hope that it would be done and made available via a resource such as the ForensicWiki, so that not every examiner has to run every test of every tool.

Here's another interesting tidbit...

Considering a potential Trojan defence...

Exactly!

Continuing on through the document, I can't say that I agree with the sequence specified for collecting volatile data...specifically, the binary dump of memory should really be first, not last. This way, you can collect the contents of physical memory in as near a pristine state as possible. I do have to question the use of the term "bootable" to describe the platform from which the tools should be run, as booting to this media would inherently destroy the very volatile data you're attempting to collect.

Going back to my concerns (the part where I said I was "at a loss") above, I found this near the end of the section:

By accessing the devices, data may be added, violating Principle 1 but, if the logging mechanism is researched prior to investigation, the forensic footprints added during investigation may be taken into consideration and therefore Principle 2 can be complied with.

Ah, there we go...so if we profile our trusted tools and document what their "forensic footprints" are, then we can identify our own (the investigators') footprints on the storage media, much like a CSI following a specific route into and out of a crime scene, so that she can say, "yes, those are my footprints."

Thoughts?

Thursday, July 05, 2007

Windows Forensic Analysis Book Review Posted

Richard Bejtlich has posted a review of Windows Forensic Analysis on Amazon.

5 (count 'em!) stars! Wow! The review starts with:
Wow -- what a great forensics book -- a must read for investigators.

Very cool. High praise, indeed!

Thanks, Richard!

Wednesday, June 27, 2007

Book Updates

I got word yesterday that Syngress is going to discontinue selling hard copy books from their web site. They will continue to sell ebooks, and provide links to other online retailers.

The new link for my book is here.

Sunday, June 17, 2007

What is RAM, legally speaking?

Ever wondered what the legal definition of "RAM" is? I was perusing the Computer Forensics and Incident Response blog this morning and found an interesting post regarding RAM and the US Courts. In short, a court document (the judge's decision) from the case of Columbia Pictures Industries, et al., vs. Justin Bunnell, et al., was posted on the web, and contains some interesting discussion regarding RAM.

In short, the document illustrates a discussion in which RAM constitutes "electronically stored information", and can be included in discovery. The document contains statements such as "...Server Log Data is temporarily stored in RAM and constitutes a document...".

Interestingly enough, there is also discussion of "spoliation of evidence" due to the defendants' failure to preserve/retain RAM.

The defendants claimed, in part, that they could not produce the server log data from RAM due to the burden of cost...a burden which, according to the judge's decision, they failed to demonstrate. There are some interesting notes addressing RAM as "electronically stored information" from which key data would otherwise not be available (ie, the document states that the server's logging function was not enabled, but the requests themselves were stored in RAM).

Ultimately, the judge denied the plaintiffs' request for evidentiary sanctions over the defendants' failure to preserve the contents of RAM, partially due to a lack of prior precedent and the absence of a specific request to preserve RAM (the request was for documents).

The PDF document is 36 pages long, and well worth a read. I will not attempt to interpret a legal document here...I simply find the judge's decision that yes, RAM constitutes electronically stored information, however temporary, to be very interesting.

What are your thoughts? How do you think this kind of issue will fare given that there are no longer any freely available tools for dumping the contents of Physical Memory from Windows systems?

Addendum: An appeal brief has been posted by the defendants' lawyers.

Saturday, June 16, 2007

Restore Point Analysis

Others have posted bits and pieces regarding System Restore Point analysis (Stephen Bunting's site has some great info), and I've even blogged on this topic before, but I wanted to add a bit more information and a tidbit or two I've run across. This will go a bit beyond what's in my book, but I do want to say that the content in my book is not invalidated in any way.

First off, you can use some of the tools on the DVD accompanying my book in either live response or during post-mortem analysis to collect information from the Restore Points. I've recently updated the code to the SysRestore.pl ProDiscover ProScript to make it more usable and flexible, given some situations I've seen recently.

Another interesting thing I've run across is that using an alternative method of analysis, such as mounting the acquired image as a read-only drive letter (using VDKWin or Mount Image Pro), can be as much of a problem as a solution. Accessing the system this way can really be a boon to the examiner, as you can hit the system with an AV scanner (...or two...or three...) and save yourself a great deal of time trying to locate malware. However, a problem arises from the fact that the ACLs on the System Volume Information directory require System-level access on that system, and even having System-level access on your analysis system does not equate to System-level access on the mounted image. So things tend not to work as well...files within the "protected" directories will not be scanned, and your alternative is to either perform your analysis within a forensic analysis application such as ProDiscover (using ProScripts), or to export the entire directory structure out of the image, at which point storage becomes a consideration (I've seen systems with quite a number of Restore Points).

This can be an issue because bad guys may try to hide stuff in these directories...read on...

Remember me mentioning the existence of web browsing history for the Default User? This indicates the use of the WinInet API (wget.exe, IE, etc.) by someone who accessed the system with System level privileges. This level of access would also allow that user to access the System Volume Information directory, where the Restore Points are maintained, and possibly put things there, such as executable image files, etc. It's unlikely that a restore point would be used for persistence (ie, point a Windows Service to an executable image within a restore point), as the restore points eventually get deleted or flushed out (see the fifo.log file). However, this would be an excellent place to put an installer or downloader file, and then the intruder could place the files that he wanted to be persistent in either the System Volume Information directory, or the "_restore*" directory.

So, besides looking for files that we know are in the Restore Points (ie, drivetable.txt, rp.log, Registry files), we should also consider looking for files that shouldn't be there, particularly if we find other artifacts that indicate a System-level intrusion.

Beyond this, Restore Points provide a wealth of historical information about the system. By parsing all of the rp.log files, we can develop a timeline of activity on the system, giving us an idea of what was done (system checkpoint, application install/uninstall, etc.) and when...if the Restore Points are in numerical sequence but the dates seem skewed, then we have an indication that someone may have fiddled with the system time. Using the drivetable.txt file, you can see which drives were attached to the system at the time the Restore Point was created (by default, one is created every 24 hrs).
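
A minimal sketch of that rp.log parsing might look like the following...note that the offsets used here (the description as a Unicode string at offset 0x10, and the creation FILETIME in the final 8 bytes of the file) are assumptions you should verify against your own data:

#! c:\perl\bin\perl.exe
# Minimal sketch: print the creation time and description from an rp.log
# file; the offsets are assumptions (see above), so verify them first
use strict;
my $file = shift || die "You must enter a filename.\n";
my $size = (stat($file))[7];
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
seek(FH,0x10,0);
read(FH,my $desc,$size - 0x10 - 8);
$desc = (split(/\x00\x00/,$desc,2))[0];
$desc =~ s/\x00//g;                    # crude Unicode-to-ASCII conversion
seek(FH,$size - 8,0);
read(FH,my $ft,8);
my ($lo,$hi) = unpack("VV",$ft);
my $time = int(($hi * (2 ** 32) + $lo) / 10_000_000) - 11644473600;
print gmtime($time)." (UTC)  ".$desc."\n";
close(FH);

Run something like that across all of the RP##\rp.log files and you've got your timeline.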

Beyond these files, we also have access to the Registry files that are backed up to the Restore Points. You can parse these to see if and when a user's privilege levels were modified (ie, added to the Administrator group), determine IP addresses and network settings for the system (parse the NetworkCards key from the Software file, then the Tcpip Services key from the System file), etc.

Analysis of the Registry files maintained in Restore Points is also useful in determining a timeline for certain Registry modifications that are difficult to pin down. For example, when a Registry value is added or modified, the key's LastWrite time is updated. What if one value is added, and one is modified...how do we determine which action caused the LastWrite time to be modified? Well, by using the historical data maintained in the Restore Points, we can look back and see that on this date, the modified value was there, but the added value wasn't...and then it appears in the next Restore Point. Not exact, but it does give us a timeline.

So...what interesting things have you found in Restore Points?

Links
Kelly's Korner (XP Restore Points)
MS Windows XP System Restore
System Restore WMI Classes

Thursday, June 14, 2007

EventLog Analysis

In my book, I covered the Windows 2000, XP, and 2003 EventLog file header and event record structure in some detail. There's also a Perl script or two on the DVD that accompanies the book that lets you parse an Event Log file without using the Windows API, so that you avoid that pesky message about the Event Log being corrupted.

I've since updated one of the scripts (changing the name to evt2xls.pl), so that it now writes the information it parses from the Event Log file directly into an Excel spreadsheet, even going so far as to format the date field so that it "makes sense" to Excel when you want to sort based on the date. I've found that writing the data directly to a spreadsheet makes things a bit easier for me, particularly when I want to sort the data to see just certain event record sources, or perform some other analysis.

I've also added some functionality to collect statistics from the Event Log file, and to display information such as the total count of event records, the frequency of event sources and IDs, etc., in a separate report file. I've found these reports to be very useful and efficient, giving me a quick overview of the contents of the Event Logs and making my analysis of a system go much smoother, particularly when combined with Registry analysis (such as parsing the Security file for the audit policy...see the Bonus directory on the DVD for the Perl script named poladt.pl and its associated EXE file). One of the things I'm considering adding to this script is reporting of successful and failed login attempts, based in part on the type of the login attempt (ie, Service vs Local vs Remote).
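
For anyone writing something similar, the spreadsheet output is pretty straightforward with a module such as Spreadsheet::WriteExcel...this is a minimal sketch (the module choice and the field values are illustrative), with the date handling being what makes the Excel sorting behave:

#! c:\perl\bin\perl.exe
# Minimal sketch: write an event record's fields to an Excel spreadsheet,
# with the date column formatted as a real Excel date for sorting
use strict;
use Spreadsheet::WriteExcel;
my $wb = Spreadsheet::WriteExcel->new("evtlog.xls");
my $ws = $wb->add_worksheet();
my $date_fmt = $wb->add_format(num_format => 'yyyy-mm-dd hh:mm:ss');
$ws->write(0,0,["Date","Source","Event ID"]);
# write_date_time() takes an ISO8601-style string, so Excel can sort on it
$ws->write_date_time(1,0,'2007-08-20T22:50:43',$date_fmt);
$ws->write(1,1,"Print");
$ws->write(1,2,6161);
$wb->close();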

Here's something to think about...there is sufficient information in the book, and Perl code on the DVD, for you to create tools to parse event records from other sources, such as RAM dumps, the pagefile, and even unallocated space. I'm considering writing a couple of small tools to do this...not to search files for records (I can add that to the code that parses RAM dumps), but to start by simply extracting an event record given a file and an offset within the file.
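
As a starting point, here's a minimal sketch of that "given a file and an offset" idea; the header fields follow MS's documented EVENTLOGRECORD structure (also described in the book):

#! c:\perl\bin\perl.exe
# Minimal sketch: parse the header of a single event record located at a
# given (hex) offset within an arbitrary file (RAM dump, pagefile, etc.)
use strict;
my ($file,$offset) = @ARGV;
die "You must enter a filename and a hex offset.\n" unless ($file && defined $offset);
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
seek(FH,hex($offset),0);
read(FH,my $data,30);
my ($len,$magic,$rec_num,$time_gen,$time_wrt,$evt_id,$type,$num_str,$cat)
    = unpack("V6v3",$data);
die "No event record magic number (LfLe) at that offset.\n"
    unless ($magic == 0x654c664c);
printf "Record %d, Event ID %d, Type %d\n",$rec_num,$evt_id & 0xFFFF,$type;
print "Generated: ".gmtime($time_gen)." (UTC)\n";
close(FH);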

But what about actual Event Log analysis? What about really using the Event Log to get some insight into activity on the system? What can we look for and how can we use it?

Here are some tidbits that I've come across and use...please don't consider this a complete list, as I hope that people will contribute. This is just to get folks started...

Stephen Bunting has a great write-up that explains how to use the Event Log to track time change events, such as when someone alters their system time.

The Application Event Log is a great place to look for events generated by antivirus applications. This will not only tell you if an antivirus application is installed on the system (you can also perform Registry analysis to determine this information), but perhaps the version, when it was active, etc.

In the System Event Log, Event ID 6161 (Source: Print) tells you when a file failed to print. The event message tells you the name of the file that failed to print, the username, and the printer.

Also in the System Event Log, Event ID 35 (Source: W32Time) is an Information event that tells you that your system is sync'ing with a time server, and provides the IP address of your system. This can be very useful in a DHCP environment, as it tells you the IP address assigned to the system (actually, the interface) at a particular date and time.

Windows Defender (Source: WinDefend) will generate an event ID 1007 when it detects malware on a system; the event strings contain specific information about what was found.

Whenever you're doing Event Log analysis, be sure to go to EventID.net for help understanding what you're looking at. Most of the listed event IDs have detailed explanations of what can cause the event, as well as links to information at MS.

Again, this is not a complete list of items that you may find and use in your analysis...these are just some things that come to mind. And remember, you get a bit more out of Event Log analysis when you combine it with Registry analysis, not only of the audit policy for the system and the settings for the Event Logs, but of other sources, as well.

Links
EventLog Header structure
Event Record structure
EventLog EOF record structure
EventLog File Format

Wednesday, June 13, 2007

Determining the version of XP

I received an interesting comment to one of my recent blog posts...the poster was musing that he wished he could determine the version of XP (Home or Pro), presumably during a post-mortem examination. As this struck my interest, I began to research it...and most of what I found applies to a live, running system. For example, MS has a KB article that tells you how to determine the version of XP you've got. Also, the WMI class Win32_OperatingSystem has a value called "SuiteMask" which will let you determine the version of the operating system; to see if you're on the Home version of XP, perform a logical AND of the SuiteMask value and 0x0200 (the "Personal" bit) - if the result is nonzero, you're on XP Home. You can also use the Win32::GetOSVersion() function in Perl, or implement the WMI Win32_OperatingSystem class in Perl.
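
A minimal sketch of the Perl approach (live systems only; this assumes a reasonably recent Win32 module, which returns the extended fields in list context):

#! c:\perl\bin\perl.exe
# Minimal sketch: check the SuiteMask from Win32::GetOSVersion() for the
# VER_SUITE_PERSONAL (0x0200) bit to distinguish XP Home from Pro
use strict;
use Win32;
my ($desc,$major,$minor,$build,$id,
    $sp_major,$sp_minor,$suitemask,$producttype) = Win32::GetOSVersion();
if ($id == 2 && $major == 5 && $minor == 1) {      # Windows XP is NT 5.1
    print "XP ".(($suitemask & 0x0200) ? "Home" : "Professional")."\n";
}
else {
    print "Not XP (id = $id, version = $major.$minor)\n";
}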

This information seems to be maintained in memory, and appears to be retrieved using the GetVersionEx() API function. Running a couple of tests to extract the information while RegMon was running didn't reveal anything interesting as far as Registry keys accessed while determining the OS version.

During a post-mortem examination, you can go to the file "%WinDir%\system32\eula.txt" and locate the last line of the file that begins with "EULAID", and you'll see something similar to:

EULAID:XPSP2_RM.0_PRO_OEM_EN

If it says "HOM" instead of "PRO", you're dealing with the Home version of XP.

Also, you can try the file "%windir%\system32\prodspec.ini", and right below the line that says "[Product Specification]", you'll see an entry that will tell you which version of the OS you're working with (note: be sure to check the last modification date on these files, as well...).
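
A couple of hypothetical command lines to pull those entries quickly (the second assumes that the version info line in prodspec.ini begins with "Product"):

C:\>findstr "EULAID" %windir%\system32\eula.txt
EULAID:XPSP2_RM.0_PRO_OEM_EN

C:\>findstr /b /i "product" %windir%\system32\prodspec.ini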

Links
Determine the version of IE installed
Check the Version of Office XP
Determine the Windows version using C# (using VB)
32- or 64-bit version of Windows?

Monday, June 11, 2007

Some Registry stuff...

I like "Registry stuff". I don't know what the fascination is, but for some reason, I love stuff that has to do with the Registry.

Anyway, I ran across something recently...I was looking at one of my own systems and ran across an interesting value in my AppInit_DLLs Registry value. Just the fact that there was data within this value was interesting enough! But then I saw something even more interesting...another value named LoadAppInit_DLLs. I haven't found anything specific about this value at the MS site yet, but this appears to be a Vista-only Registry value, in that it is only recognized and utilized by the Vista operating system. This is covered briefly in Symantec's Analysis of the Windows Vista Security Model paper.

This value appears to be used by PGP, as well as by some tools from Google (both observations are based on Google searches for occurrences of the value name).

On the topic of the Registry, here's how to use PowerShell to get the name of the last user to log onto a system.

So, what are you looking in the Registry for...or looking for in the Registry?

Links:
Forensics Wiki: Windows Registry
The Windows Registry as a Forensic Resource
Alien Registry Viewer
32-bit Application access to the Registry on 64-bit versions of Windows

Windows Forensic Analysis Book Review

Andrew Hay posted the first review of my book...that I'm aware of! ;-)

Andrew also posted the review on Amazon!

Thanks, Andrew!

Saturday, June 02, 2007

AntiForensics Article

I read an interesting article recently that talks about antiforensics. At first glance, the article is something of an interesting piece, but reading it a second time and thinking about what was actually being said really got me thinking. Not because the article addresses the use of antiforensics, but because it identifies an issue (or issues) that needs to be addressed within the forensics community. Yes, these tools are out there, and we should be thankful that they were made available by someone...otherwise, how could we address the issue? So, what do we need to do to update our methodologies accordingly? Perhaps more importantly, should we be trying to get ahead of the power curve, rather than playing catch-up?

I do feel that it is important to mention something else in the article that I found very concerning, though:
"...details of the TJX breach—called the biggest data heist in history, with more than 45 million credit card records compromised—strongly suggest that the criminals used antiforensics to maintain undetected access to the systems for months or years and capture data in real time."

Strongly suggest, how?

The article goes on to say:
"Several experts said it would be surprising if antiforensics weren’t used."

Several experts? Who? Were any of them involved in the investigation? If they were, what "expert" reveals this kind of information and keeps his or her job? If not...why are they speculating? It just seems to me that this part of the article, viewed within the context of the whole, breaks up the flow. The article has a logical progression: here's the issue, okay, we've identified it, let's get about fixing it...which all makes sense...but then this bit of speculation seems out of place.

Overall, though, it appears that the article points to some issues that should be addressed within the digital forensic community. Are the tools we have worthless? Not at all. We just have to make better use of the information we have at hand. The article mentions building layers of "evidence", using multiple sources of information to correlate and support what we found in our digital investigation.

Also, Harlan's Corollary to Jesse's First Law of Computer Forensics really seems to be applicable now more than ever! ;-)