Thursday, March 16, 2006

Windows Physical Memory Analysis, pt II

I've been taking a look at what's out there with regards to analyzing physical memory (RAM) dumps from Windows systems, and correlating what's available. Most of this is in my previous post on the subject, so what I'd like to do now is present some of what I've got so far. I've taken the work that Andreas Schuster (ie, ptfinder.pl) and Joe Stewart (ie, pmodump.pl from the TRUMAN Project) have done, done some research on structures such as the PEB, PEB_LDR_DATA, and RTL_USER_PROCESS_PARAMETERS, and taken it a step further, albeit a small one.

Here's where things stand...the Perl code I've got right now parses through the memory dump (I'm using the dump from the DFRWS 2005 Memory Challenge as my initial test case) looking for EPROCESS blocks. Note that this is specific to Windows 2000, as we already know that's what the platform is/was, and so far, I haven't gotten around to working out how to determine the base OS from nothing more than a memory dump file. From there, the code retrieves data from the dump file, parsing structures and handling the translation of virtual addresses (and pointers) into physical offsets within the dump file itself. Some of the information that gets pulled for each process includes the FLINK/BLINK values (pointers to the next/previous EPROCESS blocks in the doubly-linked list), the creation time (and exit time, if applicable), whether exit has been called or not, and the location of the Process Environment Block.

Here's an example:

Possible EPROCESS block located at offset 0x6601460
Process Name : metasploit.exe
PID : 600
Parent PID : 240
FLINK : 0x0
BLINK : 0x0
SubSystem : 4.0
Exit Status : 0
Create Time : Sun Jun 5 00:55:08 2005
Exit Time : Sun Jun 5 00:55:08 2005
Exit Called : 1
PEB : 0x7ffdf000
DTB : 0x01a6d000
PEB Offset : 0x00000000

Notice that for this process named "metasploit.exe", the FLINK/BLINK values are zeroed out, and exit has been called. The offset to the PEB (ie, the physical offset within the dump file) has been calculated to be 0, based on the PEB and DirectoryTableBase (DTB) values retrieved from the EPROCESS block...with the process gone, the translation comes up empty.
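
Just for reference, the translation itself is fairly straightforward on this platform. Here's a minimal sketch (assuming x86 without PAE, which is what this Win2K dump uses...4MB large pages are ignored for brevity, and the subroutine names are mine, not necessarily what's in my script):

sub v2p {
    my ($fh, $dtb, $vaddr) = @_;
    # the page directory entry (indexed by bits 31..22 of the virtual address)
    my $pde = read_dword($fh, $dtb + (($vaddr >> 22) & 0x3ff) * 4);
    return undef unless ($pde & 0x1);       # PDE not present
    # the page table entry (indexed by bits 21..12)
    my $pte = read_dword($fh, ($pde & 0xfffff000) + (($vaddr >> 12) & 0x3ff) * 4);
    return undef unless ($pte & 0x1);       # page not present (ie, swapped out)
    # the physical page base plus the byte offset (bits 11..0)
    return ($pte & 0xfffff000) | ($vaddr & 0xfff);
}

sub read_dword {
    my ($fh, $ofs) = @_;
    seek($fh, $ofs, 0);
    read($fh, my $dword, 4);
    return unpack("V", $dword);
}

If either entry comes up "not present", the translation fails...which is how the PEB offset for metasploit.exe above ends up being reported as 0.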

Now, if the process was active at the time that the memory dump was made, the Perl code I wrote will parse the RTL_USER_PROCESS_PARAMETERS structure and retrieve information such as the current directory path, DLL path, command line, window title, and desktop name (I'll add parsing of additional information, such as flag values, once I locate information on the format of these structures). Here's an example:

Possible EPROCESS block located at offset 0x6352d60
Process Name : cmd2k.exe
PID : 1132
Parent PID : 324
FLINK : 0xff1190c0
BLINK : 0xff1440c0
SubSystem : 4.0
Exit Status : 259
Create Time : Sun Jun 5 14:10:52 2005
Exit Called : 0
PEB : 0x7ffdf000
DTB : 0x058dd000
PEB Offset : 0x01862000
Mutant = 0xffffffff
Base Addr = 0x00000000
PEB_LDR_DATA = 0x00131e90 (0x03293e90)
Params = 0x00020000 (0x04126000)
Current Directory Path = E:\Shells\
DllPath = E:\Shells;.;C:\WINNT\System32;C:\WINNT\system;C:\WINNT;
C:\WINNT\system32;C:\WINNT;C:\WINNT\System32\Wbem
ImagePathName = E:\Shells\cmd2k.exe
Command Line = "E:\Shells\cmd2k.exe" /D /T:80 /F:ON /K cmdenv.bat
Window Title = E:\Shells\cmd2k.exe
Desktop Name = WinSta0\Default

From this information, we can see where the process was initiated from, as well as the command line used to launch the process. I'd sure like to see what's in cmdenv.bat!
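
For the curious, each of those items (DllPath, ImagePathName, CommandLine, etc.) is stored in the structure as a UNICODE_STRING...a 2-byte length, a 2-byte maximum length, and a 4-byte pointer (a virtual address) to the buffer. Here's a minimal sketch of pulling one out, building on the v2p() sketch above (note that a string crossing a page boundary would need to be read page by page...this rough cut doesn't handle that):

sub read_ustring {
    my ($fh, $dtb, $ofs) = @_;    # $ofs = physical offset of the UNICODE_STRING
    seek($fh, $ofs, 0);
    read($fh, my $us, 8);
    my ($len, $max, $buf) = unpack("vvV", $us);
    return undef if ($len == 0);
    # the Buffer member is a virtual address; translate it first
    my $phys = v2p($fh, $dtb, $buf);
    return undef unless (defined $phys);
    seek($fh, $phys, 0);
    read($fh, my $str, $len);
    $str =~ s/\x00//g;            # crude Unicode-to-ASCII conversion
    return $str;
}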

Oh, and there's more! I was running through the output of the script and found that nc.exe was running when the dump was made...here's the image path and command line for the nc.exe process (PID = 1096):

ImagePathName = c:\winnt\system32\nc.exe
Command Line = "c:\winnt\system32\nc.exe" -L -p 3000 -t -e cmd.exe

Not only did I find a process called dd.exe that had exited, but I also found one that was still running (yeah, I know...it's the one that created the memory dump!). The image path and command line for that process are (note: I cleaned it up a little to make it easier to read):

ImagePathName = E:\Acquisition\FAU\dd.exe
Command Line = ..\Acquisition\FAU\dd.exe if=\\.\PhysicalMemory
of=F:\intrusion2005\physicalmemory.dd conv=noerror --md5sum --verifymd5
--md5out=F:\intrusion2005\physicalmemory.dd.md5 --log=F:\intrusion2005\audit.log

So...so far, so good. The Perl code for this is a little too messy to release right now, but I will post it once I clean it up and document it a bit better. I need to add dumping of the module list and a couple of other functions to the script, as well as parsing for other structures.

Addendum 26 Mar: I've continued working on the code, and moved to a little side project. I've copied the subroutines from the original code and targeted individual EPROCESS blocks...searching the RAM dump for each EPROCESS block was just too slow for some simple testing and coding.

So, anyway...I'm pulling more from the EPROCESS block and PEB. For example, I've been able to pull the Environment variables (if they exist) from the process...the stuff you see when you type "Set" into the command prompt...as well as dumping the loaded modules. From the first DFRWS 2005 Memory Challenge memory dump, the cmd2k.exe process has the following modules loaded:

E:\Shells\cmd2k.exe
C:\WINNT\System32\ntdll.dll
C:\WINNT\system32\KERNEL32.dll
C:\WINNT\system32\USER32.dll
C:\WINNT\system32\GDI32.DLL
C:\WINNT\system32\ADVAPI32.dll
C:\WINNT\system32\RPCRT4.DLL
C:\WINNT\system32\MSVCRT.dll

The Environment contains such things as LOGONSERVER and COMPUTERNAME. Cool stuff.
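
Dumping the loaded modules boils down to walking a doubly-linked list. PEB_LDR_DATA holds the head of the load-order module list, and each entry in the list contains the full path to the module as a UNICODE_STRING. Here's a rough sketch, reusing the v2p(), read_dword(), and read_ustring() subroutines from above...the 0x0c and 0x24 offsets are from my Win2K notes, so treat them as assumptions:

sub list_modules {
    my ($fh, $dtb, $ldr_phys) = @_;    # physical offset of PEB_LDR_DATA
    # InLoadOrderModuleList.Flink lives at offset 0x0c into PEB_LDR_DATA
    my $flink = read_dword($fh, $ldr_phys + 0x0c);
    my %seen;
    # the list is circular, so %seen stops the walk once it wraps around
    while ($flink && !$seen{$flink}++) {
        my $entry = v2p($fh, $dtb, $flink);
        last unless (defined $entry);
        # FullDllName is a UNICODE_STRING at offset 0x24 into each entry
        my $name = read_ustring($fh, $dtb, $entry + 0x24);
        print "$name\n" if ($name);
        # the LIST_ENTRY is the first member, so Flink is the first DWORD
        $flink = read_dword($fh, $entry);
    }
}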

What I'm up to now is parsing through the handle table, starting with the ObjectTable value from within the EPROCESS block. If anyone wants to throw some pointers my way, I'd appreciate it! ;-)

Addendum 31 Mar: I wanted to get this one in late...if I wait until tomorrow, everyone will think it's a prank! With some help from Andreas, I've been able to extract the handle tables from a process, using the ObjectTable value. Andreas was kind enough to point out to me, among other things, that I wasn't translating one of the virtual addresses I was retrieving into a physical address.

So, what I've been getting for the cmd2k.exe process looks like this:

Object Header at 0xfcd79010 (0x01396010)
Type Name : File
Type = 5
Size = 112
Name = \Shells

Object Header at 0xfca255c0 (0x010425c0)
Type Name : WindowStation

Object Header at 0xfca255c0 (0x010425c0)
Type Name : WindowStation

Object Header at 0xff29e9e0 (0x052099e0)
Type Name : Desktop

This is what I've been able to extract from the dump. I have to keep reminding myself that the system that the physical memory was dumped from had 128MB of RAM, so some of the virtual addresses will point to pages that have been swapped out of physical memory to the pagefile. I'll be sure to add this to the documentation of the code, and show where that aspect of an address is calculated.

Here's some more eye-candy...the UMGR32.exe process has these two handles:

Object Header at 0xff1bc010 (0x01a8d010)
Type Name : File
Type = 5
Size = 112
Name = \Endpoint

Object Header at 0xff1cf7b0 (0x006827b0)
Type Name : File
Type = 5
Size = 112
Name = \WINNT\system32\Perflib_Perfdata_29c.dat

The nc.exe process also points to a file handle named "\Endpoint".

Now, to get the pages of the memory used by each process...

Saturday, March 04, 2006

Windows Physical Memory Analysis

I'll be writing more on this particular subject as time goes on, but I wanted to post something right away. I've noticed over the past couple of weeks that Andreas Schuster has been doing some work with debuggers, etc., to document some of the structures found in physical memory for various versions of Windows, from Win2000 SP4 through Vista (he's noted that most of the structures change between versions and even between Service Packs). Well, I took a look the other day and noticed that he'd posted something called PTFinder, which, as it just so happens, is a Perl script that parses a dump of physical memory (this version works with dumps from Win2000 SP4 systems).

So, I took a look at what he was doing, exchanged some emails with him, and expressed my desire to assist. As it turns out, the DFRWS 2005 Memory Challenge provides a great set of test files (2, actually) for testing any tools you're writing to parse a memory dump generated using dd.exe.

I started working on something of my own, using the exercise as a learning process so that I can not only get smarter on this stuff, but also assist Andreas with what he's doing. I've got the output of the Memory Challenge submissions to check my work against, and so far, things are working pretty well. Here's an excerpt of the output from my version of the script:

Possible EPROCESS block located at offset 0x3e35ae0
Process Name : Explorer.Exe
PID : 820
Parent PID : 800
Exit Status : 259
Create Time : Sun Jun 5 00:33:53 2005
Exit Called : 0

Possible EPROCESS block located at offset 0x40b4660
Process Name : PcfMgr.exe
PID : 1048
Parent PID : 820
Exit Status : 259
Create Time : Sun Jun 5 00:34:01 2005
Exit Called : 0

Possible EPROCESS block located at offset 0x414dd60
Process Name : dd.exe
PID : 284
Parent PID : 1112
Exit Status : 259
Create Time : Sun Jun 5 14:53:42 2005
Exit Called : 0

Pretty cool, eh? One thing to point out is that processes (specifically, EPROCESS blocks) are maintained in a doubly-linked list. One way to walk through the process blocks is to find one and follow the links...but you can miss things that way. Andreas' approach, and the one I've chosen to follow, is to start at the beginning of the dump file and scan for process blocks.
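
The scan itself is dead simple...slide through the file and test each offset for the DISPATCHER_HEADER values that mark a Win2K process (Type 0x03 and Size 0x1b, per Andreas' research). Here's a minimal sketch; a real scanner (like PTFinder) applies a number of additional sanity checks before declaring a hit:

use strict;

my $file = shift || die "you must enter a filename\n";
open(my $fh, '<', $file) || die "Could not open $file: $!\n";
binmode($fh);
my ($ofs, $data) = (0, "");
while (read($fh, $data, 4) == 4) {
    # DISPATCHER_HEADER: Type, Absolute, Size, Inserted (one byte each)
    my ($type, $abs, $size) = unpack("C3", $data);
    if ($type == 0x03 && $size == 0x1b) {
        printf "Possible EPROCESS block located at offset 0x%x\n", $ofs;
    }
    $ofs += 4;
    seek($fh, $ofs, 0);
}
close($fh);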

There's a lot of work that still needs to be done. Last week, a LEO I know attending the Southeast CyberCrime Summit sent me a message from his Blackberry, asking me if I'd come up with a way to run a tool to dump the contents of physical memory to a thumb drive or some other removable storage platform. In the past, I've talked to folks (particularly LEOs) about why they collect the contents of physical memory, and most have told me that they do so in order to run 'strings' against it to see if they can find leads (not evidence) such as passwords, IP or email addresses, IM screennames, etc. The DFRWS 2005 Memory Challenge didn't result in any publicly available tools, but research has continued (like I said, I'll be posting more on this later, and recognizing others who've done research along these lines, particularly Mariusz Burdach and his presentation at BlackHat Federal 2006).

Addendum 7 Mar 2006: Like I said, I wanted to post some more information on this subject, describing what has already gone on, and then hopefully where we need to go with this research.

Last year, I became aware of a paper by Mariusz Burdach that described how to analyze the contents of physical memory that was collected using dd/dd.exe. Being interested primarily in how to do this on Windows, I was a little disappointed that the paper focused on Linux. This sort of thing has been discussed in other areas, the most formal being the DFRWS 2005 Memory Challenge. There has been discussion of this subject in other forums, such as Rootkit.com and Windows Forensic Analysis (Yahoo! Group).

The results of the DFRWS Memory Challenge included two winning responses, one by Chris Betz, and the other by George M. Garner, Jr. and Robert-Jan Mora. Both of these approaches are very involved, and produced some interesting results. The downside of this work is that neither of the tools described in the winning submissions is publicly available. Andreas Schuster released ptfinder.pl, a Perl script that parses through a dump of Windows physical memory searching for the different structures. Andreas recently posted on the difference between tools like ptfinder.pl and the "list-walkers" produced by Betz and Garner/Mora.

Addendum 9 Mar: Yesterday I became aware of some work by Joe Stewart over at LURHQ.com...specifically, the TRUMAN Project, described as the "reusable unknown malware analysis net". Part of the project is a Perl script called pmodump.pl, "a Perl-based tool to reconstruct the virtual memory space of a process from a PhysicalMemory dump". The description goes on to say that "with this tool it is possible to circumvent most packers to perform strings analysis on the dumped malware". Very cool, and a great idea.

Pmodump.pl operates a bit differently from the other methods presented so far. The script locates potential Page Directory blocks and then does a translation between "logical" or virtual addresses and physical offsets within the dump file, using the default value for the pointer to the Process Environment Block (PEB) for pre-WinXP systems. The output of the script is very comprehensive, including such things as PE headers for executables loaded into memory (check out the File::ReadPE module to see how to parse this info), module lists, etc.

What this shows is that there have been several different, disparate approaches to this issue, different authors taking different approaches, as their needs and skill sets have been different. This looks like a great opportunity to bring all of these efforts together into a single project.

Another thing I'd like to see is information regarding other kernel structures that can be found within memory. I've got the MS Debugging Tools installed on my system, so all I'm really missing at this point is the correct names of the structures. Does anyone have any pointers?

Addendum 11 Mar: More great reading posted...Andreas has posted a blog entry on translating virtual addresses located in memory dumps to physical offsets within the dump file. Great stuff!

Addendum 14 Mar: I got a call late last night, just before CSI:Miami started (note to self: unplug phone when any version of CSI starts...)...evidently this blog entry had been mentioned in the 11 Mar Cyberspeak podcast! If you look at the main page, you'll see that many a famous name in the community has been interviewed by these guys...I haven't made it a habit of listening to these podcasts, but that's about to change. For the record, though, guys, it's "windowsir", not "windows", "s", "i", "r".

Friday, March 03, 2006

ProScript posted

I've posted another ProScript to the Techpathways forum...this one consolidates the other two that I previously posted. It dumps user information by parsing the F and V structures from the user's Registry key in the SAM hive, and gets group information by parsing the C structure from the group's key in the same hive. Note: The version of the ProScript that I posted to the forum doesn't try to translate any of the FILETIME objects found in the F structure.

Here's an excerpt of output from the script (one of the ones that does attempt to translate FILETIME objects), with user information displayed. I have an image that I downloaded from the Internet (one of those online challenges) open:

Username : Mr. Evil
Acct Creation Date : Thu Aug 19 23:03:54 2004
RID : 1003
Logins : 15
Flags :
Password does not expire
Normal user account

I have to go back and take a look at that script again...I wonder why my translation subroutine thinks that Mr. Evil, who logged in 15 times, doesn't have a last login date. Hhhmmm...that's easy enough to check, though...I'll just have ProDiscover dump the appropriate key value to a file, and I'll open that in a hex editor. Either way, it's really cool stuff, being able to pull this sort of thing from the Registry. Now, correlate that with (a) the contents of the ProfileList Registry key, and (b) the "Documents and Settings" directory contents, and you've got a pretty comprehensive look at who's been logging into the system.
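
For reference, the translation itself is just a little 64-bit math...a FILETIME counts 100-nanosecond intervals since 1 Jan 1601, and the Unix epoch begins 11644473600 seconds later. Here's a minimal version of that sort of subroutine (a zeroed FILETIME is how "no date" shows up, which may be exactly what's going on with Mr. Evil's last login):

sub getTime {
    my ($lo, $hi) = @_;    # the two DWORDs of the FILETIME
    return 0 if ($lo == 0 && $hi == 0);
    my $t = ($hi * (2 ** 32) + $lo) / 10000000 - 11644473600;
    return ($t < 0) ? 0 : int($t);
}

# ex: my $epoch = getTime(unpack("VV", $filetime));
#     print scalar gmtime($epoch), "\n" if ($epoch);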

Here's an excerpt of what the group information looks like:

Group : Administrators
Comment : Administrators have complete and unrestricted access to the computer/domain
--> Administrator
--> Mr. Evil

I had a good deal of help from two sources in particular, Andreas Schuster and Peter Nordahl. Andreas provided information about the C structure, and Peter's NT bootdisk source code laid out what the F and V structures "look like". Very helpful...thanks to you both.

Tuesday, February 28, 2006

Perl module upload to CPAN

I've uploaded another module to my directory on CPAN...this one is called File::ReadEvt. This is another module that reads a binary file type from a Windows system without using the MS API. You'd use this module if you were performing analysis on a Linux system, or any other system that has Perl installed, or if the tools that use the MS API (ie, EventViewer, psloglist, etc.) report that the .evt file is corrupted in some way.

Another great use is in online parsing systems, where you'd upload a file to a web server, which would parse/analyze the file and display the results in a web page. Or you can use this module with a database that provides some sort of hints or descriptions of various event types, so that the information you read is easier to understand.

No, this module doesn't support parsing the Event Log message files (DLLs) for how the strings should be inserted into the message (ie, %1 %2 %3 etc.). Sorry, but this module does provide the information from the events that would let YOU do that.

Here's the output from a run of the evtstats.pl example script I included in the archive:

C:\Perl>evtstats.pl c:\testing\appevent.evt
Max Size of the Event Log file = 327680 bytes
Actual Size of the Event Log file = 524288 bytes
Total number of event records (header info) = 1138
Total number of event records (actual count) = 1086
Total number of event records (rec_nums) = 1086
Total number of event records (sources) = 1086
Total number of event records (types) = 1086
Total number of event records (IDs) = 1086

This script also collects information in Perl hashes that you can use for statistical analysis. Examples include counts for each record number, source, event type, and event ID.

There's another example script (lsevt3.pl) that parses through the Event Log file and sends the event record information to STDOUT (redirect this to a file).

A brief word on the format of the Event Log file...

A "normal" Event Log file (such as those found in the system32\config directory) begins with a 48 byte header. The first DWORD (ie, 4 bytes) of the header (and the last, as well) contains the size of the header...in a hex editor, you'll see "30 00 00 00". The second DWORD is the magic number, or "4C 66 4C 65" (ie, "LeLf")...this being the "magic number" as it should be unique to the file. The header also contains information about where certain events are located, such as the oldest one, and the next one to be written, as well as the maximum size of the file and the retention time.

Once you've read the header, you're ready to start reading the event records. To read the records, all you have to do is parse through the file a DWORD at a time and locate the magic number...then back up a DWORD, get the total size of the record, and begin parsing. The event record header is 56 bytes in size, and everything beyond that is the data associated with the event.
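
And here's a minimal sketch of that loop, picking up with the filehandle from the sketch above...it finds each "LfLe", backs up a DWORD for the record length, and unpacks the first few fields of the 56-byte record header (again, the field names are my own):

my $ofs = 48;                            # start just past the 48-byte file header
seek($fh, $ofs, 0);
my $dword;
while (read($fh, $dword, 4) == 4) {
    if ($dword eq "LfLe") {
        my $start = $ofs - 4;            # the record length DWORD precedes the magic
        seek($fh, $start, 0);
        read($fh, my $rec, 56);          # the 56-byte event record header
        my ($len, $sig, $rec_num, $time_gen, $time_wrt, $event_id)
            = unpack("Va4V4", $rec);
        printf "Record %d at offset 0x%08x, ID %d, written %s\n",
            $rec_num, $start, $event_id & 0xFFFF, scalar gmtime($time_wrt);
        # jump past this record (sanity-check the length first)
        $ofs = ($len >= 56) ? $start + $len : $ofs + 4;
    }
    else {
        $ofs += 4;
    }
    seek($fh, $ofs, 0);
}
close($fh);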

Now, event records are not (I repeat NOT) contiguous. There can be gaps...huge ones. I've seen DrWatson entries that are over 100K in size. Once you finish reading the header or a particular record, it could be a while within the file before you come upon a full event record. I took a look at an example Event Log file the other day, and there was a partial DrWatson entry right after the header, and the first event record was located about 25K into the file.

This module is right there along with File::MSWord (parses an MSWord document without using the MS API), and File::ReadPE (parses the headers of a PE file...great for analysis or educational purposes).

If you don't see the module in the directory when you click on the link above, give it a bit...I just uploaded it and I don't know how long it takes CPAN to process submitted modules.

If you have any questions about the use of any of these modules, feel free to contact me. If you're contacting me about problems with a particular Event Log file, be prepared to send me the file...

Tuesday, February 21, 2006

Quoted

Joab Jackson quoted me in this article over on Government Computer News. File metadata has been an issue for a while, and this isn't anything new...except for the fact that it's still an issue. MSWord and PDF documents can both contain metadata, and most do. Do a little Google hacking and pull up MSWord documents based on domain (.mil) or content, and see what you can see.

The script mentioned in the article is now available as a Perl module.

In the news...

If you own a Windows system (home user, corporate or university admin, etc.), you will want to see Brian Krebs' latest article in the Washington Post Magazine. I read it this weekend...largely because people I know who aren't in the computer industry kept telling me, "dude...you've GOT to read this article!" They were right...it is an incredible article, well crafted and written, that gives the reader an insight into the issue of botnets, and a glimpse of one of the micro-economies created as a result of the Internet.

One thing is clear, too...breaking into computers is no longer just for kicks. There are economic/financial/profit motives now...things like adware and spyware, and identity theft are all reasons for someone to want to break into and gain control of your computer.

Friday, February 17, 2006

Determining group membership from an image

A while back, I received this question..."how do I figure out a user's group membership from an image?" Well, I didn't know off the top of my head, and decided that knowing would be useful, so I started digging around. After receiving some much-needed insight from a friend on one of the online forums I frequent, I was able to create a ProScript (Perl script) for use with ProDiscover. I posted the script to their online forum, along with one to enumerate users from an image...and that one is just the beginning, as it only retrieves the user's name from the V structure. There is still quite a bit of information embedded in the F and V structures.

In a nutshell, the way it works is this...within the HKLM\SAM\SAM\Domains\Builtin\Aliases key, there are several subkeys...00000220, etc. These keys have a value named "C", which is binary, and contains the name of the group, the comment describing the group, etc. The first 52 bytes (13 DWORDs) of this value are the header, and the last three DWORDs describe the offset to the listing of user SIDs, the length of the data, and how many users there are. One of the SIDs for the Administrators group will end with the RID of "1F4", which is 500 in decimal...the Administrator account.
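
Here's a minimal sketch of that parsing...note that I'm assuming the offset in those last three DWORDs is relative to the end of the 52-byte header:

sub parse_c {
    my ($data) = @_;    # the raw C value from the group's key
    my @hdr = unpack("V13", substr($data, 0, 52));
    # the last three DWORDs: offset to the SID list, its length, user count
    my ($ofs_users, $len_users, $num_users) = @hdr[10..12];
    my $sids = substr($data, 52 + $ofs_users, $len_users);
    # $sids holds $num_users SIDs laid out back to back; the RID is the
    # last DWORD of each SID (0x1F4 = 500 = Administrator)
    return ($num_users, $sids);
}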

Now, to map a user RID to a username, go to the HKLM\SAM\SAM\Domains\Account\Users key, where you will find subkeys that look like 000001F4 and 00000E3B, etc. These are the user RIDs, and user info is maintained in the binary V and F values within each key. The offset to the username (ofsName) is stored in the DWORD located 0xC bytes into the binary V structure (the length of the username is maintained in the next DWORD, which starts at offset 0x10). The username itself is in Unicode format and is found at offset 0xCC + ofsName from the beginning of the structure.
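
In Perl, that boils down to a couple of substr() and unpack() calls. A minimal sketch:

sub get_username {
    my ($v) = @_;    # the raw V value from the user's key
    # ofsName is the DWORD at offset 0xC; the length is the next DWORD
    my ($ofs_name, $len_name) = unpack("VV", substr($v, 0x0c, 8));
    my $name = substr($v, 0xcc + $ofs_name, $len_name);
    $name =~ s/\x00//g;    # crude Unicode-to-ASCII conversion
    return $name;
}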

So...pretty cool, eh? I got started down the path of looking at the Builtin\Aliases key by running one of the 'net' commands (to enumerate group membership) while also running RegMon, and then filtering on the process I wanted. From there, a little work with a hex editor and a little help from a friend went a long way.

It's really no surprise that there's nothing in the above post that points to Microsoft as providing assistance or documentation...that information simply is not available. It's not the fault of the folks who have done their best to assist me (and others) over the years...they can't provide what doesn't exist. What I've had to do is go to Linux-based documentation, talk to others, experiment on my own, etc.

On a side note, sometimes when I'm working with or discussing something with someone, particularly things related to the forensic analysis of Windows systems, I'll ask them if they have any sort of documentation or reference. This isn't meant as an indictment of them...I'm not asking them to prove anything, though most of the time, the response I get seems to indicate that they were seriously offended by the question. No, I'm asking simply because sometimes I may be able to correlate information from some sort of documentation with something else I've been looking at, or some other little tidbit of information I may have. Whenever possible, I try to provide references and/or documentation for what I do, as it not only gives everyone a common base to work from, but it also lets others see what I've been looking at, so that they don't have to relearn all that stuff on their own.

Thoughts?

Sunday, February 12, 2006

What do you do when...??

Sometimes you find yourself in one of those situations, where the customer calls you and wants to know:
  • Who copied or modified a file, or
  • Who created or modified a user account
When you get on-site, you find that the system in question had no auditing enabled, that the admins use a group account (and they all share the password) for administration functions, and that there are simply no protections in place at all.

So, what do you do? After all, if it wasn't important, the customer wouldn't have called you, right? What do you tell them?

What the...??

What is the weirdest/most interesting thing you've seen when performing an investigation of a Windows system?

I once was asked to look at a Win2K system with no auditing enabled, and a weak Admin password. The system was infested with spyware and malware...at least three Trojans were installed. I also found three additional Admin accounts...God, g0d and gawd. It looked to me as if one person had gotten in and created the first account (gee-oh-dee), then the second person got in and found the account he wanted to create already there, so he created his own variation...gee-zero-dee.

What about you?

Tuesday, February 07, 2006

The Forensic Server Project

Recently, I've received a couple of emails about the Forensic Server Project (aka, FSP), seen it written up in the Helix 1.7 Beginner's Manual, and even seen it demo'd on the CERT VTE.

For a listing of sites that mention the FSP, go here.

Anyway, the one thing in common amongst all of these sites seems to be that the FSP is simply misunderstood. I talked about the FSP in chapter 8 of my book, but soon after the book was published, I made some updates to the tools, and released them as standalone executables (I compiled the Perl scripts with Perl2Exe).

So...what is the FSP? The FSP is a client/server architecture for effective, sound incident response procedures. The FSP, along with the client application (the First Responder Utility), provides a framework for quick, efficient, and effective data collection during incident response activities. The FRU can be run from a CD or thumb drive, and communicates via TCP with the FSP server component in order to get data off of a system.

Think of it like this...have you seen where netcat is used to get data off of systems? Each command is typed in by hand, or run via a batch file, and the output is piped out over netcat to a waiting listener on another system. The responder has to log each command entered, and do a lot of documentation manually. On the server end, the investigator either ends up with one huge file he must parse, or he has to shut down and restart the listener for each new command.

The FSP, on the other hand, handles all that for you. The server component takes care of logging, hash generation (and comparison, if files are copied), timestamping of the logs, etc.

The FSP is also very flexible. The FRU runs off of ini files, which tell the application what commands to run, which Registry keys/values to collect, etc. With all of the necessary tools copied to the CD, the investigator can create and choose from a variety of ini files, based on location, operating system, installed applications, etc. The ini file can be loaded from the CD or thumb drive, or read from a floppy or even a mapped drive.

So, yes, the FSP does require a little extra work to set up, but in providing the standalone executables, I've made that process MUCH easier. The flexibility simply can't be beat...there's no waiting for updates, as the FSP framework allows the investigator to run whichever third-party tools she chooses.

There is still some room for improvement, though. For example, incorporating the ability to select files to be copied directly into the FRU. Another thing is the analysis suite...pulling together tools to parse and analyze the data that's collected.

Which brings me to something else that's beneficial about the FSP...it's written in Perl. A lot of folks may look at this and think "Ugh!" Well, being written in Perl, it's also open source. Which means that you can run the server component on just about any platform that supports Perl, and write your own client apps. The currently available FRU is the client for Windows, but clients can be written for Linux, the Mac, etc. And because the framework is open, you're not restricted to using just Perl. Use Python, Ruby, shell scripts...whatever you're most comfortable with.

So I guess I just added a couple of things to my ToDo list...a user manual, and a developer's manual.

Images to play with

From other forums, I've found example images that can be used to sharpen your skills in forensic analysis. For example, there are some images at the CFReDS Project at NIST...I've downloaded the "Hacking Case" images. There are also Digital Forensics Tool Testing images that are available.

There are also some things to play with over at the HoneyNet Project SotM site. Not only are there binaries you can look at and log files you can examine, but SotMs 24 and 26 involve examining the image of a floppy.

On a slightly tangential note, VMWare has made their Server product a free download...from there, you can find a list of community-built virtual machines. These are primarily various flavors of Linux/*nix, but would offer some practice if you ran the VMs and performed live imaging (I do this with my Windows VMs, using ProDiscover).

Are there any other example images of Windows systems out there, available for download?

On a side note, has anyone used some of the popular tools (such as the FSP, or WFT, or any of the various batch files) for retrieving volatile data from live Windows systems, and posted the data for analysis?

Registry research

Is anyone out there doing research into the Windows Registry, from a forensic perspective?

I know that there are viewers available to allow you to see what's in the raw Registry files, and that these viewers are available for a variety of platforms. That's not what I'm looking for.

I'm also aware of the lists of Registry keys that are available, particularly the one from AccessData that seems to be pretty popular. While it is a good starting point, there really isn't enough information in the list about the keys/values, and about what causes them to be created, modified, or deleted, for it to be useful beyond a certain point.

What I'm asking here is this...is anyone doing research into the conditions that cause various keys/values to be added to, modified, or deleted from the Registry (this also applies to the LastWrite time associated with Registry keys)? This is extremely important in the area of Windows forensic analysis, as it adds context to what the investigator sees.

Some things are obvious (though they could be better documented), such as the TypedURLs key...values are added when the user types a URL into the Address bar of IE. Other things aren't so obvious, such as what causes the LastWrite time of the unique ID key for a USB removable storage device to be updated.

Is there anyone out there doing this kind of research? At the least, I'd like to consolidate a list of links. Ideally, I'd like to see the efforts themselves consolidated and optimized.

Sunday, January 29, 2006

The Evolution of Live Response

In my last post, I mentioned that the FRU and FSP had been demoed as part of the CERT VTE. Very cool. If you've read this blog for any period of time, you'll know that I've been interested in live response and forensic analysis of Windows systems for a while. One thing that the VTE demo showed me is that I really have to write up a user guide or manual for using the FSP tools, and then add a GUI to them.

That being said, I've been perusing the usual blogs for write-ups regarding the recent BlackHat Federal conference. When you can't attend, reading others' impressions is the next best thing to being there. Kevin Mandia's presentation caught my eye, so I downloaded it and read through it. One of the things mentioned in the presentation is a "Live Response" tool that should be released by Kevin's company, Red Cliff Consulting, in January. After some discussion in the presentation on how incidents are detected, things that need to be collected are mentioned on slide 55 (there are 90 slides, folks, but the presentation is well worth the wait). The next slide contains a list of tools that can be run - while I agree with the tools for the most part, there are a couple that I'm not sure I'd run, but that's all covered in my book.

Slide 66 shows an image of the Live Response tool...it looks very interesting, and I really wish I'd been able to make it to the conference. I really like what I see in the presentation, overall...Kevin evidently went over several things (at least, in the slides) that I've been thinking about for some time now, such as the fact that live response is evolving due to the notification laws, with California's SB1386 being one of the first. In essence, companies need to know if client data has been compromised in any way.

Thoughts? Where do we go from here? Is live response viable, particularly on Windows systems?

Thursday, January 26, 2006

In search of...

No, this isn't a flashback to that old Leonard Nimoy show...it's about training. I got an email last night (don't have permission yet to say from whom) that pointed me to CERT's Virtual Training Environment. It's an online classroom, of sorts, where you can go, select a class, and watch it.

So I checked it out this morning. I went to the "Welcome to VTE" page, and clicked on "Launch VTE". Within seconds, I was looking at a list of topics. I saw "Forensics and Incident Response" and dove right in!

Once the choices of "classes" appeared, I saw that I could choose from documents, demos, and labs. What you get when you run one of the demos is basically a movie. Someone has a screen capture utility running while they narrate what they're doing, and they walk through things. The first one I looked at was "Analyzing Log Files with Notepad". It was pretty basic, but also pretty straightforward and really easy to follow.

There were a lot of other demos available, not all for Windows...there are some for Linux, as well. What I found most interesting, though, is that there is a "Configuration and Setup of the FCU" demo (it's my FRU, just misspelled), and a "Configuration and Setup of the FSP" module!

There's quite a bit of info at this site. The "Forensics" topic includes demos on EnCase, Autopsy, the use of dd, etc. It's very informative...take a look when you get a chance.

Wednesday, January 25, 2006

Cool Magazines

Recently, with the demise of Phrack, I've run across a couple of very interesting online magazines, or e-zines.

From Richard Bejtlich's TaoSecurity blog, I found Uninformed and the CodeBreakers Journal. I haven't really taken the time to dig into either of these, but on the surface, they look like promising technical e-zines.

I ran across CheckMate today...pretty interesting first issue. This e-zine specifically targets computer forensics and incident response, and looks as if it may be a pretty good read. I went through one of the first articles on examining a user's browsing activities, and it provided pretty thorough coverage of the topic. One of the things I like most about the article is that it gave the reader information on how to interpret what they saw, rather than just pointing the reader to tools.

Know of any others?

Tuesday, January 24, 2006

"Ooops, I did it again..."

No, this isn't a post about what would be on my iPod if I had one...it's even better.

Recently, the NSA posted a document entitled, "Redacting with Confidence: How to Safely Publish Sanitized Reports Converted From Word to PDF". I downloaded the document, because, well...I like metadata. I decided to see what metadata is in the PDF itself, and found this:

Title -> Redacting with Confidence: How to Safely Publish Sanitized Reports Converted from Word to PDF
Author -> SNAC
CreationDate -> D:20060110111526Z
ModDate -> D:20060120090543-05'00'
Creator -> activePDF DocConverter
Keywords -> word, pdf, redaction, metadata
Producer -> 5D PDFLib
Subject -> I333-015R-2005

Very nice. I pulled this out of the document using the pdfmeta.pl script listed on page 254 of my book.
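
If you don't have the book handy, the basic idea is easy enough to sketch. This isn't pdfmeta.pl...just a naive illustration that scans the raw file for Info dictionary entries (it won't handle encrypted or compressed metadata, but it works on simple documents like this one):

use strict;

my $file = shift || die "you must enter a filename\n";
open(my $fh, '<', $file) || die "Could not open $file: $!\n";
binmode($fh);
my $pdf = do { local $/; <$fh> };    # slurp the whole file
close($fh);

foreach my $key (qw/Title Author Subject Keywords Creator Producer CreationDate ModDate/) {
    # look for "/Key (value)" in the Info dictionary; skips escaped parens
    if ($pdf =~ m#/$key\s*\((.*?)(?<!\\)\)#s) {
        print "$key -> $1\n";
    }
}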

Thursday, January 12, 2006

The need for IR training

This ComputerWorld article caught my eye this morning...I've done vulnerability assessments before, and I fully agree with this list of mistakes. They do, in fact, occur.

Each of the mistakes has earned its rightful place in the ranking. I started doing vulnerability scanning, commercially, in 1997. This has always included much more than simply running a scanning tool, but what occurred then continues today...we'd deliver our report, and most customers would thank us...and that was it. In some very few cases, some of the issues would be fixed, but mostly due to infrastructure changes, upgrades, etc.

The big one that jumps out at me now, though, is number 5 - "Being unprepared for the unknown". To me, this is an issue of being prepared for incidents. The real world has all sorts of incident response capability. I've been on planes with rescue dogs that go to Washington, DC, once a year for testing...they'd been involved in the 9/11 search and rescue, and their condition is being tracked for any signs of health issues. We see cops and firefighters on the news. Ever hear of "smoke jumpers"? Heck, even the military is an "incident response capability" in and of itself.

So look around your IT shop right now. What's your incident response capability? If you're reading this, it's probably you. Are you prepared? How do you recognize or receive notification that an incident has occurred? Is it based on known signatures?

Are you prepared for zero-day exploits?

And don't think for an instant that you can't be prepared for these. I know what some of you are saying, that by definition, one can't be prepared for a "zero-day", because it isn't known. Well, I'm here to tell you...you're wrong. You're prepared if you know that not all malware processes appear in the Task Manager as "danger.exe" or "malware.exe". Do you know how to get more information about processes, about what's running on a system? Can you triage a system? Can you gather specific information from a system, so that you (or someone else) has a fairly complete snapshot of its state and can at least begin to figure out what's going on?

Here's another consideration...what do you watch at night? Are you a CSI fan? How about House? Watch a couple episodes of House and start thinking about how you'd perform a "differential diagnosis" of a system.

Looking at the ComputerWorld article one last time, I guess, in a way, my mind ties all of the mistakes back to training issues and misconceptions...chicken. Egg.

Wednesday, January 11, 2006

What is "security"??

Good question. We each approach topics like this differently, based on our background, experiences, etc. This thread on Slashdot caught my eye this morning, as did Brian Krebs' latest blog entry on SecurityFix (careful folks, it's a long blog entry, but an excellent read nonetheless).

My background in security started with pen-tests, war-dialing, and vulnerability assessments. I've also done policy development, etc. I've had a hand in incident response and forensics, as well. This is very different from Bill Gates' background...so we view security differently. The Microsoft stance has been to support better security practices, and there have been initiatives with regards to security...so in light of things like SoBig, CodeRed, Nimda, and the more recent WMF issues, can we say that Microsoft has failed?

Before giving you my thoughts, let me tell you about something that happened to me back in '00. I was working at a now-defunct telecomm company, as part of the corporate security staff. There was a rogue group of guys who claimed that they had security responsibilities, but you could never really tie them down...they were like kids who were told not to do something, but did it anyway. So, at one point, one of the guys from the team comes over and tells me how his group had identified an issue and confiscated a system. When they confronted the employee (without the presence of or even notifying HR, BTW...), the employee denied any knowledge of the issue. So these guys hired an outside consulting firm to come in and do forensic analysis of the hard drive...and the tasking they gave was to locate any files specific to the SubSeven Trojan/backdoor. That's it.

So this guy tells me that he looked at the hard drive and found a hidden DOS partition. He told me that we shouldn't deal with this company b/c in his mind, they didn't know what they were doing.

We (my boss and I) sat down and talked to the forensic analyst from the company. He showed us the tasking, and their final report. The documents clearly stated that the sole tasking was to locate files associated with SubSeven, which the company did (and to be honest, pretty much anyone could have done at the time).

So the question is, did the company "fail" or perform poorly? The analyst said that he'd identified the hidden DOS partition, but that partition did not contain the files in question. Since it wasn't part of the tasking, and the company was never given any information regarding the overall case or issue, they provided what they were asked for.

Now, back to the issue with Microsoft. I think what this all boils down to is a matter of expectations. When someone high up the food chain within Microsoft gets up on stage, most of the security guys in the audience hear "blahblahblahsecurityblahblah". They then fill in the gaps surrounding "security" with their own expectations, and feel justified pointing out failures. But wait a second...had they listened to the speaker, they might have heard him (or her) set those expectations and define "success" in their own context.

So, on the one hand, you can look at what Microsoft has done to improve security with things like a firewall for XP, and IIS 6.x functionality that's "off" by default, as successful steps toward better "security". But does the recent WMF exploit issue really show that Microsoft has failed overall? Perhaps not. Microsoft's stance seems to be, "yeah, we know that this issue has been around since Windows 3.0, but there haven't been any publicly available exploits until now, and we had higher priority things to work on." Can you get mad at them for that? Really? I mean, don't we do the exact same thing every day? Don't we have limited resources (time, money, etc.) and make decisions about what's important to us? How do we then feel when someone comes back to us and says that we "failed", but their determination of success is different from our own?

Maybe the approach that needs to be taken is different. Maybe what needs to happen is that more of Microsoft's customers need to get together and say, "hey, this stuff you've done is all well and good, but you know, malware, worms and rootkits are really kicking our butts...can you help us out?" Maybe if enough customers said this, Horton would hear the Who (NOT Roger Daltrey). After all, haven't customers gotten Microsoft to redefine "success" before? Didn't someone from Microsoft say back in the early '90s that the Internet would never become what it has, in fact, become today?

Thursday, January 05, 2006

How are you spending your security dollars?

I was reading some comments on another blog today, and one comment in particular caught my eye. A retired CIO lamented the fact that "millions of dollars" were spent providing security to Windows systems...but he put a Mac on the network and never had a problem. Yeah, you read it right..."millions", with an M.

Ugh. My dooty detector goes off, screaming like a banshee, whenever a C-level executive makes comments like that. No, I'm not going to break things down to a para-religious argument over whose OS is better. That's not where I'm going with this one. What I am leading to here is, this sounds like a training issue to me. It sounds like the knowledge level of the IT staff has...shall we say...room for improvement.

Now, I have no doubt that there are some really bright, very knowledgeable IT guys and gals out there, so if this doesn't apply to you, feel free to leave the room.

I was a security weenie at a company once, and the senior admin guy had a bunch of guys working for him. The senior admin guy went to his desktop support guy, a legitimate Dude Among Dudes, and told him, "we can't promote you to an administrator, like I promised, until you get your MCSE." I heard that and thought it was funny...not laughing funny, but "here, try this, it tastes like crap" funny...because none of the current IT administrator staff had an MCSE. Yep, you read that right..."none", with an N.

My point of all this is that this Dude, who'd helped me with virus eradication, was knowledgeable and had a good head on his shoulders (and still does). The admins who wouldn't let him play in their reindeer games botched pretty much every incident they responded to, had no documentation, and had no network diagram. They didn't even know where the egress points were...the ones that bypassed the firewall...even though they'd set them up.

Okay, getting back on track here...where was I? Oh, yeah...training. My thought is this...when it comes to securing any network, regardless of operating systems and applications, you need to start with documentation. If you don't have it, then getting it is going to be a very necessary exercise. This is a good place to start when identifying your risks. Why? Because you have to know what your risks are so that you can start mitigating them...right?

What I'm getting at here is that spending "millions of dollars" to secure Windows systems probably wasn't necessary. If you don't know what you're doing, then of course trying to secure a network is going to be expensive. I simply think that that kind of money would be better spent on things like hiring better qualified personnel, and training the ones you have.

Oh, one other thing...if you're reading this blog entry, then let me throw this out. There are training opportunities available from a variety of sources, at a variety of price points. But what would you say if you could get your entire staff (like, 12 - 20 people) training, with that training targeted to your needs (for your environment), for less than it takes to send 2 or 3 people away to some of the bigger training events? How about if that training led to follow-on training and services that continued to apply specifically to your needs? Would you be interested in something like this?

New NIST document (draft)

I was reviewing the updated E-evidence.info site this morning, and one of the interesting things I came across was the draft NIST SP800-86, Guide to Computer and Network Data Analysis: Applying Forensic Techniques to Incident Response.

As I read through the document for the first time, it's clear that this is a great place to start. From my perspective, I'm glad to see a short, two-paragraph discussion of NTFS alternate data streams on page 4-5 of the document. The authors did provide footnotes with links to URLs for more information. There's also a section on collecting volatile data from systems.

It's a good resource, that's for sure. Take a look when you get a chance.