Wednesday, August 31, 2005

Offline Registry Parser

I've posted the current iteration of the code I wrote for an offline Registry parser here.

The code is written in Perl, and uses only one module...and that's just to handle time/date translation. I've got some documentation in the code, and I know that I need to clean it up a bit...but take a look.

Here's what the code does...let's say that you've got an imaged Windows boxen, and you want to take a look at the Registry to get certain values out. Well, the script will dump the contents of the NTUSER.DAT file, or the SOFTWARE or SYSTEM files.

To run the script, use a command line like:

C:\Perl>perl [path_to_file] > output.log

The script parses the Registry file in binary mode, and prints out the keys with LastWrite times (in GMT format), as well as values, the data type of the value, and the data associated with the value.
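
The heart of the time/date translation is converting the Windows FILETIME values stored in the key records. Here's a minimal sketch of that conversion...my illustration, not the parser's actual code:

```perl
#!/usr/bin/perl
# Sketch: translate a Windows FILETIME (a 64-bit value, stored as two
# 32-bit DWORDs in Registry key records) into a Unix epoch time that
# can be printed with gmtime(). Illustration only.
use strict;
use warnings;

sub filetime_to_epoch {
    my ($lo, $hi) = @_;
    # FILETIME counts 100-nanosecond intervals since Jan 1, 1601 (UTC);
    # 11,644,473,600 seconds separate 1601 from the 1970 Unix epoch.
    my $ft = $hi * (2 ** 32) + $lo;
    return int($ft / 10_000_000) - 11_644_473_600;
}

# Round-trip check: the FILETIME that corresponds to the Unix epoch
my $ft = 11_644_473_600 * 10_000_000;
my ($lo, $hi) = ($ft % (2 ** 32), int($ft / (2 ** 32)));
print scalar(gmtime(filetime_to_epoch($lo, $hi))), " GMT\n";
```

The same translation applies to each key's LastWrite time, which is why only the one module is needed.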

This script isn't the most efficient way of parsing the Registry, but it works. You can search/grep through the output file to find the information you're looking for.

I developed the script on Windows, but my goal is to make it cross-platform. Also, I'm going to use the subroutines in the script as building blocks for scripts that search for specific keys and values, based on user input.

Tuesday, August 30, 2005

When does size really matter?

Well, when you're talking about Registry values, that's for sure!

On 24 Aug, Secunia released an advisory about how overly-long Registry value names are not displayed in the Registry Editor. The advisory adds that this "problem reportedly also exists for overly long registry keys."

So...what's the issue? Well, a great deal of malware maintains persistence on a system by creating a reference to itself in an autostart location, meaning that by making a reference to itself in one of these locations, it will automatically be started when the system starts, when a user logs in, or when the user takes some action. No direct interaction is required from the user to launch the application. Most folks doing incident response and forensics on Windows systems know to check these locations for indications, but now, it seems that some tools are not capable of displaying the value names if the name is longer than 254 bytes/characters.

The Internet Storm Center has a couple of diary entries about this, and are working to not only create a list of tools that do and do not display/react to these long names, but also to get vendors to update their products appropriately. Tom Liston, one of the ISC watchstanders, created a tool that will search your Registry for long value names.

I've written a Perl script that will parse offline Registry files...I'll need to add a check for value names that are longer than 254 bytes.
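
The check itself is simple. Here's a quick sketch of what I have in mind...the value names below are just made-up examples, and the 254-character threshold comes from the advisory:

```perl
#!/usr/bin/perl
# Sketch: flag Registry value names longer than 254 characters, the
# length at which the Registry Editor reportedly stops displaying
# them. The names below are made-up examples; in the parser, they'd
# come from the value records in the hive file.
use strict;
use warnings;

sub long_value_names {
    my ($limit, @names) = @_;
    return grep { length($_) > $limit } @names;
}

my @names = ("Shell", "x" x 260, "Userinit");
foreach my $name (long_value_names(254, @names)) {
    printf "Suspicious value name: %d characters\n", length($name);
}
```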


I was quoted in Tony Bradley's article in "Processor". The article talks about involving IT in the planning of the infrastructure, to include the physical facilities. One of the things I've seen over time is that some companies hire cut-rate admins and IT staff because in the short term, it's less expensive. After all, it makes good business sense to save money in the short term, right? Why hire someone (or a couple of someones) for $65K a year, when you can get them for $40K? Well, that may be all well and good if you're trying to beef up an already experienced staff, but not good if you're trying to create a staff.

Something the article really doesn't go into is the development of IT staff...that's something that needs to be addressed in a lot of organizations. Some places will go out of their way to incentivize the marketing and/or sales dept, or HR, or other areas, but sometimes the IT staff is largely overlooked. Apathy in your IT staff can be a pretty big security risk.

Morpheus Searches

Many times, law enforcement will have a need for information concerning specific applications. In some cases, those specific applications are P2P file sharing clients. Recently on another list, someone asked about Morpheus, and I thought I'd take a look. After all, you never know when you're going to have that same question yourself.

I got a copy of the P2P client application and installed it. I opened and then minimized the application. I ran the first half of the "two-phase mode" for InControl5. I ran a couple of searches in the client, then completed the "two-phase mode" for InControl5, and took a look at the report. As I suspected, the search terms were not kept in a file, but were instead maintained in the Registry, in the following key:


The LastWrite time on this key will tell the investigator when the key was last written to, ie, when the last search term was added to the list.

I'm sure that more comprehensive testing could be done; in fact, it might be of benefit to compile information about several P2P clients, such as where search terms are maintained, etc.

Anyone out there need or have this info? What are the specific P2P/file sharing clients that you're running across, and what information are you looking for about them?

Monday, August 22, 2005

Event Log Analysis and Reporting

I've long thought about what kinds of things can be derived from a normal, default Event Log. What I was looking for usually depended upon my job function at the time...in some cases I might be looking for a particular user's logon time, and in others, reports from A/V software.

Now, I'm looking at Event Logs from the perspective of...I don't really know what I'm looking for, or what might be useful. For example, I can cull through a System Event Log and look for instances of event ID 6005 followed by an instance of event ID 6009, which indicate when the system was started. Depending upon the audit configuration of the system, other information can be derived as well.
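
As an example of the kind of correlation I'm talking about, here's a sketch that pairs 6005/6009 events from records that have already been parsed into Perl hashes. The sample records are made up; a real run would populate them from the .evt file or via the Win32::EventLog module:

```perl
#!/usr/bin/perl
# Sketch: find system start times by pairing an event ID 6005 with a
# following 6009, as described above. The records here are hashes with
# made-up data; a real run would pull them from the Event Log.
use strict;
use warnings;

sub system_starts {
    my @events = @_;    # assumed to be in record order
    my @starts;
    for my $i (0 .. $#events - 1) {
        if ($events[$i]{EventID} == 6005 && $events[$i + 1]{EventID} == 6009) {
            push(@starts, $events[$i]{TimeGenerated});
        }
    }
    return @starts;
}

my @sample = (
    { EventID => 6005, TimeGenerated => 1124668800 },
    { EventID => 6009, TimeGenerated => 1124668801 },
    { EventID => 7035, TimeGenerated => 1124668900 },
);
print "System started: ", scalar(gmtime($_)), " GMT\n" for system_starts(@sample);
```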

So...what kinds of things are others looking at, or looking for, in the Event Logs? What types of things should be correlated?

Friday, August 19, 2005

GMU2005 in review

Well, GMU2005 is over with...I'm looking forward to GMU2006, or whatever they'll end up calling it.

First off, shoutz out to the staff, volunteers, sponsors and attendees who made GMU2005 possible. It was fairly well put together, though not without its hiccups...but that's to be expected.

I ended up giving a total of 4 different presentations, but was on the podium 6 times. Usually, after 4 hrs of presenting, I wasn't really in the mood to attend any of the training or other presentations. Funny how having to keep the balls in the air for four hours straight can sort of take it out of you. Some presentations ended up being cancelled, either due to good reason, or b/c the speaker wasn't able to make it. I did want to attend Terry Gudaitis's presentation on cybercrime profiling, but she presented at the same time I did, albeit in a different room. However, she was nice enough to send me a copy of her presentation, which I found very interesting.

During and after my presentations, I got a lot of great questions...questions are always good. I could see from the questions that a lot of folks were interested in actually using what I was talking about, especially with regards to my USB storage device presentation. After lunch on Thursday, one of the attendees told me that he'd received an email from the HTCIA listserv, asking how to determine if an iPod had been connected to the system. So, this guy broke out my presentation and sent the original poster the answer! Very, very cool! And Cory, if you're reading this...thanks for putting me on the spot with this guy - I did end up giving him a free copy of my book.

Speaking of giving away free copies of my book...if the publisher doesn't have a bookstore at the conference, I usually get in touch with them and ask them to provide me with free copies of the book to give away at the end of my presentation. I usually give one away to someone who can answer a trivia question. For the first presentation on Tues, I gave a copy of my book to the guy who brought a video cable to the room so I could plug my laptop into the projection system (that was one of the hiccups).

A couple of notes/comments for attendees of conferences like this...most presenters work pretty hard in order to not only put a presentation together, but to also make it pertinent and useful to the audience. Sometimes, this can be tough...the content of the presentation depends upon the make-up of the conference attendees. If you think that the presenter did a good job, tell him or her that. If you've got comments about what the presenter could have done to make the presentation a little better, let them know. Keep in mind that there are some things that a presenter can control, and other things they can't...such as providing desks to write on and paper copies of the slides ahead of time, etc. This sort of thing really helps, as it lets the presenter know how they did, and maybe even what they can do next time that might improve the presentation.

A question I get a lot (and I mean A LOT) when I give presentations is, "...what happens if you...?". This is the case, whether I'm talking about NTFS alternate data streams, USB connected storage devices, or embedding/merging OLE documents. I've developed a stock answer for these questions..."why don't you try it and tell us." I don't do this to be mean or rude...I just think that a lot of times, the questions aren't reasoned through before they're asked. Now, don't get me wrong, I love questions...they get me thinking and if I can give a presentation and walk away having learned something, I'm happy. But, members of the audience, please keep this in mind...I'm not rich, and I do have a life. Yes, there are a lot of things out there that maybe I didn't cover in my research or presentation...but that's usually because I didn't have the time (I have a life, or had a deadline) and/or because I'm not rich and can't afford to purchase one or two of every type of USB device available on the market. I'm sure that in the course of a case, you're going to come across a specific piece of equipment that I didn't specifically cover in my presentation, but that's why I try to lay out the process I used to get the information, so that you can follow that process yourself.

So...I just wanted to say that in case anyone attending my presentations thought that I was being rude when I responded the way I did to that particular question.

Anyway, as a wrap-up for this entry, I heard other attendees say that some of the presentations were good, others weren't so good, but overall they were pretty happy with the conference as a whole. Next year should have a much larger attendance, so if you're interested in attending or presenting, keep your eye on the RCFG website.

Wednesday, August 17, 2005

Linux vs Windows TCO

There's an interesting article by Laura DiDio over on NewsFactor about the Linux vs. Windows Total Cost of Ownership (TCO) argument. The article has some really interesting comments (for the popular media, that is), specifically, that there is no one-size-fits-all, silver bullet solution.

Wow. This harkens back to the "which is more secure" argument...I think the best response to that one is that the operating system is irrelevant in the face of poor administration.

Something that really jumped out at me from the article was the following: "If you do not know what is on your network...then you cannot truly evaluate whether Linux, Windows or Unix is right for your business."

How true is that? I've seen it with large as well as small networks...folks just have no idea what's out there. The same holds true with incident response...

Industry_Insider article

An article I wrote appeared in the MS Industry Insider blog recently. The article is about two misconceptions I see in the IT world when it comes to incident response, specifically on Windows systems...that the best thing to do if you think that a Windows system has been compromised or infected is to wipe it and re-install from clean media, and that bootable Linux distributions are the best tools to use for Windows incident response.

My comments on these misconceptions are supported by the recent NIST document SP 800-86: Guide to Computer and Network Data Analysis: Applying Forensic Techniques to Incident Response. I found the link to this paper over on Keith Jones' blog.

So...take a look, I'd appreciate any comments you may have, particularly regarding the article.

Monday, August 15, 2005

GMU2005 presentations updated!

Well, the first day of GMU2005 went off without a hitch. Sunday evening I updated my presentations...nothing major, just a bullet here, a link there. I've also added some of the Perl scripts mentioned in my presentations...after posting the presentations, some folks reviewing them began asking me for the scripts. I didn't provide them because, well, in the past, no one seemed interested.

To get the new archive, just click here.

There was a glitch with presenters this morning, so I pulled out an already-approved presentation and gave that one to fill the time (and yes, that one's included in the archive as well..."The Windows Registry as a Forensics Resource").

Everything went pretty smoothly, and I got some interesting comments, both during and after the presentations. Some folks would come up and say, "great job!", while others would say something along the lines of, "I'm working on a case where I could really use this information."

To all who attended my presentations today...thanks!

Tomorrow and Thursday, I'll be giving two presentations each day: one on file metadata, and another on USB-connected storage devices. After each presentation, I'll ask a question, and whoever answers it correctly gets a free copy of my book!

Friday, August 12, 2005

GMU2005 presentations posted!!

I finally received approval for public release of my presentations for GMU2005...which starts Monday. ;-) Ugh...don't even...

There are three PPTs in the archive...one on the Windows Event Log file format, one on tracking USB storage devices across Windows systems, and one on file/document metadata. Take a look, and if you have any questions or comments, please feel free to post them here, or email me...

Thursday, August 11, 2005

FRUC INI files

If you're currently using the FSPC/FRUC tools, what would you think about posting your INI files for others to see?

An explanation of the FSPC and the FRUC

Okay, so I posted my updates of the FRUC yesterday, and I'm glad I got that out of the way. But I probably need to give an explanation of the FRUC, and talk about why I wrote it in the first place.

When performing incident response activities, it's generally considered a GOOD THING to get the data you're collecting off of the victim system. In some cases, writing files to the hard drive of the victim system may overwrite important data that's in slack space, but just getting the data off of the system so that it can be analyzed quickly is important. You may simply want to determine quickly whether you have a real incident or not.

Well, one way of doing this is to write the files to the drive, archive them using a zipping utility, and then FTP/TFTP the archive to a waiting server. Hhhhmmm...lots of steps, creates files on the drive...okay, that may work sometimes, but what else have you got?

Another option is to use netcat to get the data off of the system. The way it works is open netcat as a listener on your forensic server and head on over to the victim system. Once there, you run the commands you would use to gather data...command line tools that will display their output at the console, or STDOUT...and rather than viewing the output of the commands on the screen in front of you, you "pipe" the output through netcat, sending the data to the waiting server. For the sake of brevity, the specifics of this are covered in my book, as well as in online resources like this one from Keith Jones.

Okay, so what we've got here is a way to get data off of the system. When you're all done, it ends up on your forensic server in one big file, which you have to parse apart before you analyze it. Also, you've had to maintain a notebook detailing everything you've done...commands you ran, when you ran them, etc. Just typing in all those commands can be pretty tedious, even if you have them written down...right? Say that you have multiple boxes you're looking at...what then? You've got to retype all of those commands...and maintain the notebook.

There must be an easier way...and there is!

Let's start with the server component first...the FSPC. This is the component that sits on your forensic server, waiting for connections. The FSPC basically handles data management...when a connection is received, the FSPC puts the files that it's sent into a directory, and keeps a timestamped log of everything that goes on. When data is sent to the FSPC, it generates a hash for the file that it creates. If a file is copied to the FSPC, it will verify the hashes of the file when it writes it to the hard drive.
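
The hash-and-verify step looks something like this...a sketch using the core Digest::MD5 module with stand-in file names, not the actual FSPC code:

```perl
#!/usr/bin/perl
# Sketch: the hash-and-verify step, using the core Digest::MD5 and
# File::Copy modules. The file names and contents are stand-ins; the
# FSPC itself also logs each hash with a timestamp.
use strict;
use warnings;
use Digest::MD5;
use File::Copy;

sub md5_of_file {
    my ($path) = @_;
    open(my $fh, '<', $path) or die "Cannot open $path: $!";
    binmode($fh);
    my $hash = Digest::MD5->new->addfile($fh)->hexdigest;
    close($fh);
    return $hash;
}

# Simulate a file received from a first responder, then a copy of it
open(my $out, '>', "received.dat") or die $!;
print $out "collected volatile data\n";
close($out);
copy("received.dat", "received_copy.dat") or die "Copy failed: $!";

print md5_of_file("received.dat") eq md5_of_file("received_copy.dat")
    ? "Hashes verified\n" : "Hash MISMATCH\n";
```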

The FRUC is the "First Responder Utility (Commandline)"...this means that there's no GUI, no buttons to push. The FRUC runs from a preconfigured .ini file, or via arguments entered at the commandline. Basically, the investigator or first responder copies the FRUC files (.exe files, supporting DLL, .ini file) and any third-party utilities that the FRUC will run to some media...a CD, a USB-connected thumb drive, etc. Once done, she carries it with her, or deploys it for someone else to use. Once the media is inserted into the machine, a single command is run, and all of the data that the investigator wants to collect is sent to the forensic server for storage.
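
Conceptually, the FRUC's run loop is just "read the command list, run each command in order." Here's a rough sketch of that pattern...note that the file format below is hypothetical (the actual FRUC .ini layout isn't shown in this post), and the output goes to the console rather than to a waiting server:

```perl
#!/usr/bin/perl
# Sketch: read a command list from a config file and run each command
# in order, logging as we go. The file format here is hypothetical --
# the real FRUC .ini layout isn't shown in this post -- and output
# goes to the console instead of to the FSPC.
use strict;
use warnings;

# Create a stand-in config: one command per line, '#' for comments
open(my $out, '>', "fruc_sample.ini") or die $!;
print $out "# sample first-responder command list\n";
print $out "echo process list goes here\n";
print $out "echo network connections go here\n";
close($out);

open(my $cfg, '<', "fruc_sample.ini") or die $!;
while (my $line = <$cfg>) {
    chomp($line);
    next if $line =~ /^\s*#/ or $line =~ /^\s*$/;
    my $output = `$line`;    # run the command, capture STDOUT
    print scalar(localtime), " ran: $line\n";
    print "  output: $output";
}
close($cfg);
```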

Pretty simple, eh? Powerful tools, automated (to minimize mistakes and reduce the amount of time necessary to collect data), and minimal interaction required by the user.

The framework seems to be pretty complete for data collection at this point. As I've said before, data collection is easy...and from looking at the FSP, I hope you see that, too. It's the data analysis that's tough. One way to deploy this is to have multiple .ini files on the media. The user will insert the media and run the first .ini file, which collects certain data...say, process and network connection information, etc. On the server side, the investigator can run quick analysis tools/scripts, and determine whether a more detailed look is necessary. For larger investigations involving multiple machines, the investigator can open multiple FSPC listeners on the server, each on a different port, and instruct first responders to run the FRUC with the necessary commandline switch to change the port used in the .ini file.

There are other toolkits for pulling information from Windows systems, most notably IRCRv2 and WFT. However, I prefer the FSP framework...not just because I wrote it, but because I think that by removing some of the interaction that the first responder will have with the tools, it's easier to deploy and use, particularly in larger environments.

Also, while the FSPC and FRUC are deployed as executables, their Perl source is also provided (ie, the tools are compiled into executables using Perl2Exe). This makes the tools very extensible...using the Perl source, you can easily make your own modifications to the code, and extend the tools for your own use, in your own environment.

Wednesday, August 10, 2005

FRUC Updated!!


I finally completed my development and testing of an updated version of the First Responder Utility (Commandline), the data collection portion of the Forensic Server Project.

A bit ago, one of the FSP users pointed out two things...first, that the commands listed in the INI file weren't run in order (he had included "date /t" and "time /t" at the beginning and the end of the list of commands...), and that there were some issues with commands that would hang.

So, I wanted to get this blog entry out...I'll be writing more about the use of the FSP and the FRUC as time goes on, but I wanted to let you know that the new version is out.

Tuesday, August 09, 2005

A web site in an alternate data stream

A new blog by Inge Henriksen of Norway was pointed out to me today, and the posts so far are very interesting. One of the posts describes how to create an entire web site within NTFS Alternate Data Streams (ADSs).

An interesting thing about ADSs is that if you have a file with an ADS associated with it on a file share, and you drag-n-drop that file to another NTFS partition, the ADS will remain with the file. However, if you download files via HTTP or FTP, the ADSs are not persistent. But that's not what Inge is doing...the web server is simply serving up a file, and that's all that an ADS really is anyway...a file.

Remember, while Windows has native tools for creating ADSs, there are no tools native to the Windows distributions that allow you to view the existence of arbitrary ADSs.

If you're not familiar with ADSs, take a look at this article that I wrote on the subject...or read about ADSs in chapter 3 of my book.

USB blocking software

I ran across this NetworkWorld article on USB blocking software products, by Ellen Messmer, this morning. There's not a whole lot of interest in the article, from my perspective...it basically says that McAfee and Sygate are adding the functionality to their products.

What did catch my attention is the first sentence: Unauthorized use of USB hardware to gain access to information in laptops and servers is a growing concern.

Really? A growing concern for whom? What about all of those systems out there that haven't had the software installed on them yet, and may possibly have already been used in this unauthorized manner?

Well, if you're a regular reader of this blog, you already know how to go into systems and check to see if any USB removable storage devices have been connected to the system.

Monday, August 08, 2005

Hashing code updates

Jesse Kornblum has updated md5deep, so head on over and pick up your copy. The tools include hashing programs for MD5, SHA-1, SHA-256, Tiger, and Whirlpool. Very, very cool. Jesse is THE MAN when it comes to these hashing programs, and he even goes so far as to provide a Windows binary of the tools.

MSRC Strider HoneyMonkey Project

I ran across the Strider HoneyMonkey project on the MS Security Response Center (MSRC) blog this morning...very interesting stuff. From the web site:

The Strider HoneyMonkey Exploit Detection System, as the research project is code-named, was created to help detect attacks that use Web servers to exploit unpatched browser vulnerabilities and install malware on the PCs of unsuspecting users. Such attacks have become one of the most vexing issues confronting Internet security experts.

Sounds like it's kind of a client-based honeypot, with the system automating the actions of a user to have the browser go out and visit suspect web sites. This would be done by automating IE, and analyzing the system after it was infected. Wanna see how easy it is to automate IE? Check out Dave Roth's Perl on Win32 web site. Go to the Scripts archive and check out his script. This shows you how to use the Win32::OLE module to automate IE (which is a COM server) and have it do things for you.

And yet, like the GhostBuster thingy, this won't be a product that's released for use by anyone outside of MS.

Addendum [9 Aug]: It seems that others have (had??) picked up on the ol' honeymonkey. Rob Lemos has a SecurityFocus article on the honeymonkey, which points to a paper that MS just released this month. The paper even mentions that the honeymonkey detected a zero-day exploit, specifically the javaprxy.dll vulnerability (1, 2) that was known, but did not have an available patch. At the time it was detected (ie, early July 2005) it was not known whether it was being publicly exploited or not.

Thursday, August 04, 2005

Analysis of a Win2K rootkit

I ran across this site recently (can't remember where I originally got the link) and finally got around to actually reading through it.

The first thing that jumps out at me is that the content is incorrectly titled. When talking about rootkits, particularly on Windows systems, the term most generally refers to tools that are used to hide the attacker's presence from not only the Administrator, but the operating system and native tools, as well. However, I suppose that since the attacker "had root", used two self-extracting archives (kits) to move his tools over, and "hide" from casual viewing by (a) using the attrib command, and (b) deleting some files, then technically, it is a "root kit".

Like I've said before, I really like to see things like this posted...seriously. I've heard from others that not only are such things interesting to read, but they provide a great learning opportunity, and I wholeheartedly agree.

I do have a couple of suggestions for improvement, and a question. First, the question...the author makes reference to the HKU\{SID}\Software\Microsoft\Internet Explorer\Explorer Bars\{GUID}\FilesNamedMRU key, and seems to indicate that this location can be used as an autostart location (ie, "...thus executing all of the files listed if any one of them is started.").

Does anyone have any insight on this? I'm going to try this out and see what happens, but I'd love to know how close he is on this.

Okay...suggestions...constructive stuff. I like the screen captures, but there's a lot going on in the article that would really benefit from a screen capture or two. For example, ...used an MSBlaster style exploit to open port 4444 with root privileges... Okay...but can we see that? For example, how about showing the output of fport/openports side-by-side with the output of handle.exe, showing the user context of the process? That would be cool.

Throughout the article, the author speculates as to the intentions of the attacker. I don't necessarily disagree with his assessment of the skill level of the attacker (though I'd like to see more detailed information), but I think that sometimes we can misguide ourselves when we try to guess someone's intentions.

Finally, there are some confusing statements in the article. For example,
  1. The svc.bat file sets the user name of the IRC bot in win.dll (which is actually just a plain text file)...
  2. ...but instead sets the machine up as a warez server via IRC. The bot installed connects to and joins the channel #XiSO...
  3. Edit the C:\WINNT\system32\Setup\svchost\ file to find the process id (PID) of the IRC daemon
  4. This stops the IRC daemon from running
  5. ...suggest the intended use for this kit is not just to run an IRC bot...

So, is it a 'bot or a daemon? Client or server? It sounds like it's a 'bot/client, based on the explanation, but the use of the term 'daemon' suggests a server or service. This is an important distinction.

The author says, "This results in 3 processes called lsass.exe, though only one is legitimate." This demonstrates a "hiding" technique that still works. Basically, the attacker started several processes named "lsass.exe", but because the executable images weren't in the system32 directory, there were no issues with WFP. Even though you won't see the full path to the executable images in the Task Manager, just having more than one copy of lsass.exe running should be a tip to even the most casual observer. Now, using 'svchost.exe' is another matter entirely...

Overall, a great job. Shout-outs/greetz to the author.

MS Office 2003 Redaction tool

I got an email this morning about a tool released by MS called the "Office 2003 Redaction Tool". This tool is an Office 2003 add-in that allows you to redact documents.

Now, many of us are aware of issues with redaction, as well as with metadata in Office documents in general. In the past, folks have posted "redacted" PDF documents to the web, only to have someone with a slow download speed find out that the redaction was...well...not as good as expected. Talk about low-tech! Some of the earliest issues I'm aware of involved someone downloading a PDF document through a slow link, and being surprised when several words were suddenly replaced with black blocks! In a nutshell, it simply took a few more seconds for the redaction blocks to crawl through the link and make their appearance on the page. At that point, all the reader had to do was remove the blocks.

So, I downloaded and installed the redaction tool, and opened up a small document I have on my desktop. I redacted two unique words in the document, and saved the resulting document. I then opened the resulting document in a hex editor and found that in hex, the redacted words had been replaced by "7C". Wow.
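
If you wanted to scan a file for this artifact, looking for runs of the 0x7C byte is a quick check. Here's a sketch...the sample data is contrived:

```perl
#!/usr/bin/perl
# Sketch: scan data for runs of the 0x7C ("|") byte, which is what the
# redacted words were replaced with. A run the length of a word hints
# that something was redacted there. The sample string is contrived.
use strict;
use warnings;

sub redaction_runs {
    my ($data, $min_len) = @_;
    my @runs;
    while ($data =~ /(\x7C{$min_len,})/g) {
        push(@runs, { offset => pos($data) - length($1),
                      length => length($1) });
    }
    return @runs;
}

my $sample = "the quick ||||| fox jumps over the |||| dog";
foreach my $run (redaction_runs($sample, 3)) {
    printf "Possible redaction: %d bytes at offset %d\n",
        $run->{length}, $run->{offset};
}
```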

This is simply an initial test. There's lots more to do. For example, it appears that the redaction tool also looks in the document properties (ie, metadata) for the same redacted words...I say "appears" because I don't know for sure yet.

Something else to consider is this...let's say that you have a Word document, and in the interests of national security, you need to redact the phrase "small happy dog". Now, let's then assume that you have the phrase "I'm really happy about my job" in the comments (metadata) for the document.

Wednesday, August 03, 2005

Process monitoring and malware analysis

As part of looking into ways of improving malware analysis, I got to thinking about some things. For instance, some malware may maintain encrypted strings (ie, passwords or URLs) which it then decrypts in memory, and deletes once the string has been used. Some data used by the malware process may be transient in memory. So what do you do?

Well, I had a thought...what if you could run a process for a specified period of time, dump the contents of memory (and maybe run some other tools), then resume the process for a short period of time? Using Perl, we can do this! Here's the code:

#! c:\perl\bin\perl.exe
# use to launch an application (malware) and then at predefined
# intervals, suspend() the process, perform some actions (ie, dump
# contents of memory, run external programs like listdlls.exe, etc.)
# then resume the application
use strict;
use Win32::Process;

So, we're using the Win32::Process module. Now, let's set up our variables. For testing purposes, I used AIM...

my $proc;
my $app = "c:\\program files\\aim\\aim.exe";
my $args = "";
my $dir = ".";
my $exit_code;

Now, let's go ahead and create our process...

if (Win32::Process::Create($proc,$app,$args,0,NORMAL_PRIORITY_CLASS,$dir)) {
print "Process created.\n";
# Get PID
my $pid = $proc->GetProcessID();
print "Process $pid created.\n";

Now that we've created our process, let's let it run for a bit (in this case, 3 seconds), then suspend the process. What we're doing, for testing purposes, is running this loop only 10 times. This can be expanded (and the sleep() time changed, as well), or an entirely different loop structure can be used.

foreach (1..10) {
sleep(3);             # let the process run for 3 seconds
$proc->Suspend();
print "Process suspended.\n";

Now that the process has been suspended, we can go ahead and do things like dump process memory (with pmdump.exe), or run tools like handle.exe, listdlls.exe, etc.


Once we're done, go ahead and resume the process...

$proc->Resume();
print "Process resumed.\n";
}

Now, just a little error checking...

}
else {
print "Process creation failed: ".Win32::FormatMessage(Win32::GetLastError())."\n";
}

So, once this script has completed, you can conduct your analysis. For memory dumps, run strings.exe against the files, or use scripts to extract email or IP addresses, etc.

Addendum: In responding to some comments this morning, I thought that something I said should be included in the text of the post itself.

What I've presented above is not meant to replace anything. Rather, it's a tool...nothing more. Like any tool, its effectiveness depends upon how you use it. One possible scenario for the use of this tool would be where an administrator or analyst has an unknown bit of software (we won't call it malware yet b/c we don't know). When looking at the program with a hex editor (part of performing static analysis), the analyst may believe that the program is encrypted or obfuscated in some manner. Now, most admins/analysts don't have tools like IDAPro at their disposal, and don't have the necessary skill sets to use things like debuggers and disassemblers. So, moving on to dynamic analysis, they want to run the program on a "sacrificial lamb" system to see what it does. Loading up monitoring tools, the analyst may want to run the application in a more controlled fashion, dumping memory contents along the way.

The above script will let you do that. Admittedly, the script itself is raw, and in its early stages...but the basic idea is there. The great thing about scripts like this is that they can be easily expanded. For example, let's say we replace sleep(3) with the necessary command to run pmdump.exe. When the above script ends, you'll have a directory with a bunch of memory dump files. Well, at that point, the script can then:
  1. Hash each of the files (using Jesse's tools)
  2. Run strings.exe against each file
  3. Grep through each file looking for email or IP addresses, keywords, etc.
  4. Run 'diff' between all of the files to see what changed between snapshots
  5. Save the results to a flat file, a spreadsheet, or a database for analysis
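As a sketch of step 3, here's a bit of Perl that pulls anything that looks like an IPv4 address out of a strings dump. The pattern is deliberately loose (it does no octet range checking), so it will catch some non-addresses as well:

```perl
# Pull dotted-quad strings out of a block of text (e.g., strings.exe
# output from a memory dump). This is a loose match only; it does not
# validate that each octet is in the 0-255 range.
sub find_ips {
    my ($text) = @_;
    return ($text =~ /\b(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\b/g);
}
```

Run something like this over each strings output file and dump the hits to a flat file for review; a similar regex would handle email addresses.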

To me, it's all kinda cool! Something like this would have been kind of interesting when I was using netcat to demonstrate Locard's Exchange Principle.

Tuesday, August 02, 2005

iPod info

I was doing my monthly check of the site this morning, and I ran across an interesting article on how iPods are used as hard drives. So I plugged an iPod into my Windows box, fired up UVCView, and pulled the serial number from the device; in this case, "0000008A0136".

From there, I went to the Registry and navigated to the HKLM\System\CurrentControlSet\Enum Registry key, then dropped down and opened up the USBStor subkey. There, I found the device ID I was looking for...Disk&Ven_Apple&Prod_iPod&Rev_1.62. Beneath this subkey, I found the instance ID that contained the serial number of the device; "0000008A0136&0". From there, I mapped the ParentIdPrefix value to the corresponding value under HKLM\System\MountedDevices and located the drive letter that the device had been mapped to; in my case, \DosDevices\G:.
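For what it's worth, pulling the vendor, product, and revision fields out of a device ID like the one above is a one-regex job. This sketch assumes the Disk&Ven_...&Prod_...&Rev_... layout shown in the example:

```perl
# Split a USBStor device ID (e.g., "Disk&Ven_Apple&Prod_iPod&Rev_1.62")
# into its vendor, product, and revision fields. Returns an empty list
# if the ID doesn't match the expected layout.
sub parse_usbstor_id {
    my ($id) = @_;
    return ($id =~ /^Disk&Ven_(.*?)&Prod_(.*?)&Rev_(.*)$/);
}
```

Something like this could be dropped into a script that walks the USBStor subkeys and prints a table of every USB storage device the system has seen.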

So what does this all mean? If you're looking around for who's been using iPods at work, you know which key to check. If you're performing a forensic investigation, you should check under the ControlSet00x key, rather than the CurrentControlSet key.

Monday, August 01, 2005

Training Poll

A while ago, I posted about a training course based on my book. Since then, I've received emails from folks, asking about when such a course would be available near them, so I thought I'd post here and see what folks are interested in...

When I taught my Windows 2000 Incident Response course, I would generally go on-site and teach approximately 20 folks at a time, for two days. In some cases, attendees would ask me to come on-site to their location, and provide specialized content, based on the needs of their particular group. This sort of set-up (i.e., having me come on-site) worked out really well for the folks who took advantage of it, because they saved a ton of money. Having me come on-site for two days to teach a dozen (most cases were closer to 20) folks the material was less expensive than sending 3 people off to other training courses (that shall remain nameless). Also, they didn't have to deal with processing travel claims for all those folks...and the material that I presented could be...and was...used immediately.

However, this kind of set up does not work for everyone. Not everyone can provide 8, 10, or more folks at one time, in one location, for two days. Some folks would be happier going off-site for three or four days. And still others would be happier doing it all online.

So, here're my questions to you, dear reader...

1. Would you be interested in a training course (2-3 days), based on my book? The material would be technical, with a lot of hands-on work, including labs/exercises.

2. If you would be interested in such a course, what information, specifically, would you be interested in? My data hiding presentation has always been popular, but my "Windows Registry as a forensic resource" material can be a bit out of reach for some folks. What type of content would you like to see, specifically? Would you prefer more incident-response-oriented material?

3. What type of setting/forum would you like to see? Would you prefer to have me come on-site, or would you like to go somewhere off-site? If you're more interested in a web-based approach, can you point me toward some services?

Thanks. Feel free to post a comment (please sign it) or email me...