Saturday, December 30, 2006


Okay, I haven't blogged in a while...not because I haven't wanted to, but because I've been working (you know, that thing that pays the bills) and writing my next book (I've been exchanging chapters with tech editors). In between all that, I've been coding...and if I take this any further, I'm going to have to add a "really bad excuse" label to my blog.

So what exactly is a "blog tag"? To be honest, I have no idea. I first saw mention of it over on MySpace, so that should give you some indication of what it is. Now it's been picked up in the rest of the virtual world, even by nerds ("hey, I resemble that remark!") like us. So, my understanding from Richard is that I'm supposed to document 5 things people may not know about me, and then "tag" five other bloggers. Okay, here goes...

1. I attended the Virginia Military Institute for my undergraduate education. I am a Navy brat, and was working toward an appointment to one of the federal service academies, when a high school counselor handed me an application to a small military school in the Shenandoah Valley. In fact, he gave me the last application in his folder, and didn't ask me to make copies of it. That's one of the things about my life that I would NOT change if I could.

2. While attending VMI, I ran marathons my senior year. Yes, looking at me, you'd never know that...according to many fellow Marines, I was "built like a brick sh*t house"...a physique not necessarily associated with nor conducive to marathoning-like activities. I first ran the Marine Corps Marathon in '88, with a finishing time of 2:57. This qualified me for the Boston Marathon, which I signed up for (that is, paid money) and ran...with a finishing time of 3:11. There's more to that story, but unless you attended VMI, it would neither make sense nor be funny. I ran that Marine Corps Marathon again in '89, while I was in The Basic School, and finished in 2:54. I still have all of my finisher medals.

3. In my first tour in the Fleet, I was in Okinawa for a year, stationed with MACS-4. This is the same sort of unit that Lee Harvey Oswald served in. I was a Communications Officer (MOS: 2502). While there, I studied Shorin Ryu, an Okinawan martial art. I'd like to study Aikido and Krav Maga.

4. I have a horse, and yes, I ride. Western. Well, not so much "ride" in the traditional sense, like dressage, but more in the sense of "don't dump me off and I'll give you a carrot". I've even done this at a dude ranch, but the carrot thing didn't work.

5. I completed my master's degree at the Naval Postgraduate School, just prior to separating from the military. While there, I wasn't actively engaged in a whole lot (set up a 10Base2 network and connected the detachment to the token ring CAN), so I worked out quite a bit. At one point, I was doing 27 dead-hang, palms-facing-out pullups and ran the 3 mile PFT run in 17 minutes or less. I could bench press 360 lbs unassisted...and yet I was 5 lbs over the weight that the Marine Corps said I should be for my height. So I was placed on weight control, along with the Ho-ho and Twinkie bandits.

Now, on to the tagging...

Jesse Kornblum

Marcus Ranum

Dave Roth

Andreas Schuster

Robert "van" Hensing

Live long and prosper!

Wednesday, November 29, 2006

Artifact classes

I've been doing some thinking about IR and CF artifacts over the past couple of weeks, and wanted to share my thoughts on something that may be of use, particularly if it's developed a bit...

When approaching many things in life, particularly a case I'm investigating, I tend to classify things (imagine the scene in The Matrix where Agent Smith has Morpheus captive, and tells him that he's classified the human species as a virus*) based on information I've received...incident reports, interviews with the client, etc. By classify, I mean categorizing the incident in my mind...web page defacement, intrusion/compromise, inappropriate use, etc. To some extent, I think we all do this...and the outcome of this is that we tend to look for artifacts that support this classification. If I don't find these artifacts, or the artifacts that I do find do not support my initial classification, then I modify my classification.

A by-product of this is that if I've classified a case as, say, an intrusion, I'm not necessarily going to be looking for something else, such as illicit images, particularly if it hasn't been requested by the client. Doing so would consume more time, and when you're working for a client, you need to optimize your time to meet their needs. After all, they're paying for your time.

Now, what got me thinking is that many times in the public lists (and some that require membership) I'll see questions or comments that indicate that the analyst really isn't all that familiar with either the operating system in the image, or the nature of the incident they're investigating, or both. This is also true (perhaps more so) during incident response activities...not understanding the nature of an issue (intrusion, malware infection, DoS attack, etc.) can many times leave the responder either pursuing the wrong things, or suffering from simple paralysis and not knowing where to begin.

So, understanding how we classify things in our minds can lead us to classifying events and incidents, as well as classifying artifacts, and ultimately mapping between the two. This then helps us decide upon the appropriate course of action, during both live response (i.e., an active attack) and post-mortem activities.

My question to the community is this...even given the variables involved (OS, file system, etc.), is there any benefit to developing a framework for classification, to include artifacts, to provide (at the very least) a roadmap for investigating cases?
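To make the idea a bit more concrete, here is a minimal sketch (in Python, with entirely made-up incident classes and artifact names...nothing here is a real taxonomy) of what a classification-to-artifact mapping might look like: each class is tied to the artifacts that tend to support it, and a set of observed artifacts is scored against every class.

```python
# Illustrative sketch of a classification framework: map each incident
# class to artifacts that tend to support it, then score observed
# artifacts against every class. All names are hypothetical.

ARTIFACT_MAP = {
    "intrusion":         {"new_service", "rogue_account", "remote_login_log"},
    "malware_infection": {"autorun_entry", "packed_binary", "outbound_beacon"},
    "inappropriate_use": {"browser_history", "p2p_client", "illicit_images"},
}

def classify(observed):
    """Rank incident classes by how many expected artifacts were observed."""
    scores = {cls: len(expected & observed)
              for cls, expected in ARTIFACT_MAP.items()}
    # Highest score first; a low or tied score means "re-examine the case"
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = classify({"autorun_entry", "outbound_beacon", "browser_history"})
print(ranking[0])
```

The point isn't the scoring (which is deliberately naive), but that the mapping itself forces you to write down which artifacts support which classification...and where your classification should change when the artifacts don't line up.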

Addendum, 30 Nov: Based on an exchange going on over on FFN, I'm starting to see some thought being put into this, and it's helping me gel (albeit not crystallize, yet) my thinking, as well. Look at it this way...doctors have a process that they go through to diagnose patients. There are things that occur every time you show up at the doctor's office (height, weight, temperature, blood pressure), and there are those things that the doctor does to diagnose your particular "issue du jour". Decisions are made based on the info the doctor receives from the patient, and courses of action are decided. The doctor will listen to the patient, but also observe the patient's reaction to certain stimuli...because sometimes patients lie, or the doctor may be asking the wrong question.

Continuing with the medical analogy, sometimes it's a doctor that responds, sometimes a nurse or an EMT. Either way, they've all had training, and they all have knowledge of the human body...enough to know what can possibly be wrong and how to react.

Someone suggested that this may not be the right framework to establish...IMHO, at least it's something. Right now we have nothing. Oh, and I get to be Dr. House. ;-)

*It's funny that I should say that...I was interviewed on 15 May 1989 regarding the issue of women at VMI, and I said that they would initially be treated like a virus.

Tuesday, November 28, 2006

MS AntiMalware Team paper

I know I'm a little behind on this one, but I saw this morning that back in June, the MS AntiMalware team released an MSRT white paper entitled "Windows Malicious Software Removal Tool: Progress Made, Trends Observed".

I've seen some writeups and overviews of the content, particularly at the SANS ISC. Some very interesting statistics have been pulled from the data collected, and as always, I view this sort of thing with a skeptical eye. From the overview:

This report provides an in-depth perspective of the malware landscape based on the data collected by the MSRT...

It's always good for the author to set the reader's expectations. What this tells me is that we're only looking at data provided by the MS tool.

Here are a couple of statements from the overview that I found interesting:

Backdoor Trojans, which can enable an attacker to control an infected computer and steal confidential information, are a significant and tangible threat to Windows users.

Yes. Very much so. If a botmaster can send a single command to a channel and receive the Protected Storage data from thousands (or tens of thousands) of systems, this would represent a clear and present danger (gotta get the Tom Clancy reference in there!).

Rootkits...are a potential emerging threat but have not yet reached widespread prevalence.

I don't know if I'd call rootkits a "potential" or "emerging" threat. Yes, rootkits have been around for quite a while, since Greg Hoglund started releasing them with the NTRootkit v.0.40. In fact, commercial companies like Sony have even seen the usefulness of such things. It's also widely known that there are rootkits-for-hire, as well. Well, I guess what it comes down to is how you define "widespread". We'll see how this goes...I have a sneaking suspicion that since it's easy enough to hide something in plain sight, why not do so? That way, an admin can run all the rootkit detection tools they want and never find a thing.

Social engineering attacks represent a significant source of malware infections. Worms that spread through email, peer-to-peer networks, and instant messaging clients account for 35% of the computers cleaned by the tool.

I can't say I'm surprised, really.

These reports are interesting and provide a different view of the malware landscape that many of us might not see on a regular basis...it's kind of hard to see the forest for the trees when you're face down in the mud, so to speak.

Even so, what I don't think we see enough of in the IR and computer forensics community is something along the same lines but geared toward information that is important to us, such as forensic artifacts. For example, how do different tools stack up against various rootkits and other malware, and what are the artifacts left by those tools?

Most of the A/V vendors note the various changes made when malware is installed on a system, but sometimes it isn't complete, and other times isn't correct (remember the MUICache issue??).

What would be useful is a repository, or even various sites, that could provide similar information but more geared toward forensic analysts.

Monday, November 27, 2006

Some sites of note

I've been perusing a couple of new sites that I've run across over the past couple of weeks and wanted to pass them on to others, and include some of my thoughts/experiences with the sites...

MultiMediaForensics - At first, this looks like another forum site, similar to ForensicFocus. One of the aspects of this site, however, is that the author maintains the shownotes for the CyberSpeak podcast. One of the most important things about this site is that it provides a forum for members of the computer forensics community to come together. In my experience, this is one of the things missing from the community...a "community". While ForensicFocus is UK-based, MultiMediaForensics appears to be more US-based, and that may have an effect on its popularity here in the US. If you follow the CyberSpeak podcast, or are just interested in computer forensics, make a point of contributing (post, write an article, provide a link, ask a question, etc.) to this site.

Hype-Free - While not specifically related to security or forensics, this site does have some interesting commentary. The author says in the "About" section that the goal of the blog is to "demystify security", so keep your eye out for some good stuff coming down the road.

MS AntiMalware Blog - Don't go to this site expecting to see MS's version of Symantec or some other A/V vendor that posts writeups on malware. However, you may find some interesting white papers and posts that will help you understand issues surrounding malware a bit better. For example, the ISC recently mentioned some highlights from one such white paper.

Sunday, November 26, 2006

A little programming fun, etc.

I was working on writing my book yesterday afternoon (for those of you who live in my area and know how nice it was outside, I know...shame on me for being inside), and was working on a section that has to do with the Windows Recycle Bin. Most forensic analysts are likely familiar with the Recycle Bin and how it works, as well as with Keith Jones's excellent description of the Recycle Bin records (1, 2), so this is more than likely nothing new.

For the book, I wanted to throw together a little script that can be used to parse the INFO2 file, in order to illustrate some concepts that I am trying to convey in that section. To my surprise, it was easier than I thought...I got something put together that works amazingly well - my only stumbling block at this point is how to present the data in such a way as to be useful to the widest range of folks.
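For illustration (this is not the script from the book, just a minimal Python sketch of the same idea), the parsing itself really is straightforward if you follow Keith Jones's description of the format: a 16-byte header whose last DWORD gives the record size, then fixed-size records holding the ANSI path, index, drive number, deletion FILETIME, and file size. The synthetic record below is fabricated purely for the demo.

```python
# Minimal sketch of an INFO2 (XP-era Recycle Bin index) parser, assuming
# the record layout Keith Jones documented: 16-byte header, last DWORD of
# the header = record size (0x320); each record = 260-byte ANSI path,
# DWORD index, DWORD drive number, 64-bit deletion FILETIME, DWORD size.
import struct
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_utc(ft):
    return datetime.fromtimestamp(ft / 10**7 - EPOCH_DELTA, tz=timezone.utc)

def parse_info2(data):
    rec_size = struct.unpack_from("<L", data, 12)[0]
    for off in range(16, len(data), rec_size):
        rec = data[off:off + rec_size]
        path = rec[0:260].split(b"\x00", 1)[0].decode("latin-1")
        index, drive, ft, size = struct.unpack_from("<LLQL", rec, 260)
        yield {"path": path, "index": index, "drive": drive,
               "deleted": filetime_to_utc(ft), "size": size}

# Build one synthetic 800-byte record to exercise the parser.
hdr = struct.pack("<4L", 5, 0, 0, 800)
rec = (b"C:\\secret.doc".ljust(260, b"\x00")
       + struct.pack("<LLQL", 1, 2, 116444736000000000, 4096).ljust(540, b"\x00"))
records = list(parse_info2(hdr + rec))
print(records[0]["path"], records[0]["deleted"])
```

A real script would also handle the Unicode path that follows the fixed fields in each record, but even this much gets you the deletion time and original location of each file.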

This just shows how useful a little bit of programming skill can be. Something I've been thinking about adding to my list of ProDiscover ProScripts is a script that will go through the Recycler directory and look for files that are out of place...not only files that are in the Recycler directory itself, but those that are in the user's deleted files but not listed in the INFO2 file.

Way Too Funny
Okay, now this is just way too funny...I was doing a search on Amazon to see if my next book is up for early order yet (my publisher said that it would be soon), and I ran across my master's thesis! If you're suffering from insomnia, it's well worth the price to grab a copy of this thesis...I used video teleconferencing traffic to verify the fractal nature of ethernet traffic, so that real statistical models could be used (rather than assumed) for constructing buffers on switches at the edge of ATM clouds.

Anyway, this was really "back in the day" (re: 1996). I was using Java (which, with the help of O'Reilly, I had taught myself) to write an SNMP application to poll Cisco routers for traffic data. I then used MatLab to perform statistical analysis tests of the data to determine its "burstiness" and verify its fractal nature. The video conferencing came in because it allowed me to generate data...I could sit in front of one camera, across the room from the other camera, and have measurable traffic generated across the network. At the time, Notepad was my Java IDE...this was before IBM's Java IDE came out (I saw my first copy of the IBM Java IDE at the end of July, 1997).

Something new from ISC
There were a couple of items I found interesting on the SANS Internet Storm Center handler's diary recently.

One was this item about malware that uses a commercial tool to detect the existence of a virtual machine. Very interesting.

Another is this writeup on the FreeVideo Player Trojan, submitted by Brian Eckman (the writeup also appears on SecGeeks, among other locations). Brian's writeup is thorough and very comprehensive...I like seeing this sort of thing posted, as it's a nice break to see a submission like this. Brian put a lot of work into the writeup, and looked at a variety of areas on the system that were affected by this malware.

The one question I have (and would have for anyone) is this statement that was made:

Further analysis shows that this appears to inject itself into running processes.

Given the rest of the writeup, there really isn't anything that specifically states how Brian determined this. I'm sure we can come up with a variety of assumptions as to how he did it, or how we would go about doing it, but I'd like to hear from the author. I'm going to submit that as a question to him, and I'll let you know what I find out.

Friday, November 17, 2006

How do you do that voodoo that you do?

After reading through the SecuriTeam documents, and then reading this SANS ISC post, I have to admit, incident response can be extremely complex. This is particularly true if the bad guys catch you with your pants down (figuratively, or hey, even literally). From the SecuriTeam write-ups, we get an insight into how systems are targeted...via Google searches (with books such as Google Hacking from Johnny Long, there's really no big mystery to this) and even freely available vulnerability scanning tools.

In the SecuriTeam documents, the decision to conduct a live response is documented, as a "full forensic analysis" would take too was evidently determined that the attackers hadn't simply defaced a web page, but were actively on the system and involved in creating mayhem. This isn't unusual...many organizations decide that affected systems cannot be taken down. However, with the Team Evil incident, there didn't seem to be any volatile data collected or analyzed. It appears that because a web page was defaced and some directories created, the investigation and the IR team's reactions focused solely on the web application(s).

One thing I took away from the second SecuriTeam document was that there were...what? Eight attacks?

The post from the Internet Storm Center doesn't seem to make things any easier, does it? After all, not only do multiple downloaders dump a "zoo of malware" (i.e., keylogger, BHOs, even disables the Security Center) on the system, but it also reports the infected system's location so it can be tracked via Google Maps. While this is an interesting new capability, the real issue IMHO is the stuff that's dumped on the system. How do you track it all? If you really don't know a great deal about your systems (hosts and networks) and the applications that you're running, and you haven't taken any real steps to protect those systems, I'd have to say that the ISC recommendation to reinstall is the only option.

If you really think about it, though, maybe this is the intention. If you've been looking at some of the various conference presentations over the past couple of years, there have been some anti-forensics and "how to use the analyst's training against them" presentations. One way to look at these attacks is that the attacker is just being greedy. Another is to think that the intention of dumping multiple programs (keyloggers, BHOs, etc.) on the system is that IF the sysadmin detects any one of them, they'll reinstall the system...and it's likely that the system can be easily reinfected.

So, on the face of things, we've got a denial of service attack...compromise and infect a critical-use system, and it's so important that the sysadmin takes it offline for a reinstall. This also lends itself to persistence...the reinstalled system may have the same or additional vulnerabilities, so the attacker can reinfect the system once it's back up.

Of course, I can't help but think that this could also be a distraction, a little bit of misdirection...get the IT staff looking one direction while the attacker is siphoning data off of another system.

So I guess I have to concede the point that reinstallation is the thing to do. If you (a) don't really know your infrastructure (hosts or network) that well, (b) have no idea where critical (processing or storing sensitive personal information) applications are, (c) haven't really taken any defense-in-depth measures, (d) have no clue what to do, and (e) don't have an IR plan that is supported and endorsed by senior management, I guess there really isn't any other option.


Thursday, November 16, 2006

Case Study from SecuriTeam

Everyone loves a case study! Don't we love to read when someone posts what they did when they responded to an incident? It's great, as it gives us a view into what others are doing. SecuriTeam has not one, but two such posts...well, one post that contains two case studies.

The first is a PDF of the incident write-up for a compromise that occurred in July 2006. The second is a follow-up, and both are very interesting.

I'm going to take a process-oriented view of things. Please don't ask me if someone who defaces web pages and knows some PHP code qualifies as a "cyber-terrorist". I don't want to get mixed up in semantics.

This whole thing seems to have started on or about 11 July 2006 with a web page defacement. Evidently, the bad guys gained access to a vulnerable system (found via Google) and installed a PHP 'shell'. Page 8 of the write-up specifies the need for a "real-time forensic analysis", due to the fact that not only did the good guys need to "stop the bleeding", but they had to counter the bad guys, who were not only deploying their own counter-measures, but attacking the good guys, as well. What's interesting about this is that on page 8, the authors mention having to battle active attacks from the bad guys, saying it was a "fight between the attackers...and the incident response personnel." Oddly, pages 9 - 11 included the 12 steps that the IR personnel went through, yet nothing was mentioned about having to battle an attack. The document continues into "Attack Methodology" and "Conclusions" with no mention of IR personnel being hammered by attacks from the bad guys.

Another interesting point is that the authors mention that the user-agent found in some web logs included the term "SIMBAR", and concluded (with no supporting info) that this meant that the attacker's system was infected with spyware. While I'm not discounting this, I do find it odd that no supporting information was provided. I did a search and found references to adware, but I also found a reference to the SIMS game.

Continuing along into the follow-up report, "SIMBAR" is mentioned again, and this time the authors seem to believe that this user-agent indicates that the same computer was used in multiple attacks. IMHO, this is perhaps dubious at best, particularly if it is indicative of adware...adware can be fairly widespread. Also, beyond that, I'm not sure how significant this information really is to the overall incident.

Overall, both write-ups seem to indicate that the attackers used known exploits to gain access to systems, and used relatively known and easily available tools to gain their foothold. It also seems that the authors made several unsupported assumptions. At first, the write-ups make for an interesting read, but when you really try to dig into them and get some meat, IMHO, it just isn't there.

But hey, who am I? What do you think?

Addendum 18 Nov: Besides the comments here, there have been corresponding posts over on TaoSecurity. I just want to say that I'm not trying to grill the authors of the documents, nor am I trying to point out every little detail that I think was mishandled...not at all. What I am saying is that I think the authors did a great job, but there are things that could have been handled a bit better. Richard Bejtlich mentions "focus and rigor" in one of his recent posts...just because something is done by volunteers, does that mean that quality should suffer?

Van Hensing Rides...uh, Blogs Again!

Robert Hensing, formerly of the MS PSS Security team and now with SWI, has started blogging again! This is great news, because Robert brings a wealth of experience and knowledge to the field of incident response and forensics. Also, due to his resources, he brings a great deal of insight and detail to the things he investigates and shares with us.

For example, check out his most recent post involving web surfing activity found in the "Default User" directory (he also addressed something similar in his "Anatomy of a WINS Hack" post). What's interesting about this is that if you are using ProDiscover to examine an image, and you populate the Internet History View and find entries for "Default User", you've struck gold!

Robert tends to blog on a wide range of subjects, and his entries about Windows issues tend to be more technical and comprehensive than what you'll find on most sites. So, check it out and leave a comment...

Monday, November 13, 2006

Trends in Digital Forensics, and news

I ran across a Dr. Dobbs article of the same name as the title of this post...very interesting. The subtitle is: Are "live" investigations the trend we are heading towards?

An interesting quote from the article:
Thus the new trend in digital forensics is to use the corporate network to immediately respond to incidents.

Hhhhmmm...this sounds pretty definitive.

My thoughts on the article are two-fold. First, I have to this, in fact, the trend (or at least a coming trend that we're seeing more of)? Are IT and IR teams using tools like those mentioned in the article (Encase, Wetstone's LiveWire - I have to wonder why the author doesn't mention ProDiscover) to perform incident response via the network? If so, how effective are these efforts?

Overall, the author discusses "live investigations" (which is cool, because my next book covers that, in part) but I have to wonder how much this is being done, and how effective it is.

Now for the "news"...there's a new CyberSpeak podcast out, I just downloaded it and still have to listen to it. I took a look at the show notes (which have moved) and saw that Jesse Kornblum is again being interviewed. Very cool. One of the news items I picked up from the show notes was about a guy in the UK who took over young girls' computers and extorted them into sending him dirty pictures of themselves. The scary thing about the article isn't things like this:

...used some of the most advanced computer programmes seen by police to hack into their PCs...

One of the youngsters said his level of expertise and his power over her PC reminded her of the cult science fiction film Matrix.

Well, okay...I take it back...maybe those excerpts do represent some scary things about the article..."scary" in the sense that an email-borne Trojan of some kind is equated to the level of technology seen in the Matrix. Or maybe it's the fact that according to the article, these kids actually fell prey to this guy and sent the pictures, rather than notifying their parents.

Okay, I'm off to listen to the show...

Sunday, November 12, 2006

Evidence Dynamics

One question in particular that I'm seeing more and more is, can volatile data be used as evidence in court? That's a good question, and one that's not easily answered. My initial thought on this is that at one point, most tools (EnCase is perhaps the most popular that will come to mind) and processes that are now accepted in court were, at one time, not accepted. In fact, there was a time when computer/digital evidence was not accepted.

There are two things that responders are facing more and more, and those are (a) an increase in the sophistication and volume of cybercrime, and (b) an increase in instances in which systems cannot be taken down, requiring live response and/or live acquisition. Given these conditions, we should be able to develop processes by which responders can collect volatile data (keeping evidence dynamics in mind) to be used in court as "evidence".

Others have discussed this as well, to include Farmer and Venema, Eoghan Casey, and Chris LT Brown. Much like forensics in the real world, there are forces at play when dealing with live computer evidence, such as Heisenberg's Uncertainty Principle and Locard's Exchange Principle. However, if these forces are understood, and the processes are developed that address soundness and thorough documentation, then one has to ask...why can't volatile data be used in court?

Take the issue of the "Trojan Defense". There was a case in the UK where a defendant claimed that "the Trojan was responsible, not me", and even though no evidence of a Trojan was found within the image of his hard drive, he was acquitted. Perhaps collecting volatile data, to include the contents of physical memory, at the time of seizure would have ruled out memory-resident malware as well.

My thoughts are that it all comes down to procedure and documentation. We can no longer brush off the questions of documentation as irrelevant, as they're more important than ever. One of the great things about tools such as the Forensic Server Project is that they're essentially self-documenting...not only does the server component maintain a log of activity, but the FRU media (CD) can be used to clearly show what actions the responder took.
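As a rough illustration of the "self-documenting" idea (this is not the FSP itself, just the general shape of it in Python), every collection step can log a UTC timestamp and a hash of whatever it gathered, so the activity log itself becomes part of the documentation. The step name and data below are placeholders.

```python
# Sketch of self-documenting collection: each step records what was
# collected, when (UTC), and a hash of the data, so the log can later
# support the chain of documentation. Step names/data are hypothetical.
import hashlib
from datetime import datetime, timezone

activity_log = []

def collect(step_name, data: bytes):
    """Record a collection step before handing the data back."""
    activity_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "step": step_name,
        "md5": hashlib.md5(data).hexdigest(),
        "bytes": len(data),
    })
    return data

output = collect("process_list", b"pid 4 System\npid 812 lsass.exe\n")
print(activity_log[0]["step"], activity_log[0]["md5"])
```

Hashing at collection time matters because it lets you later demonstrate that the data you analyzed is the data you collected...which goes directly to the documentation question raised above.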

So what do you think?

Additional Resources
Evidence Dynamics: Locard's Exchange Principle & Crime Reconstruction
Computer Evidence: Collection & Preservation
HTCIA 2005 Live Investigations Presentation (PDF)
The Latest in Live Remote Forensics Examinations (PDF)
Legal Evidence Collection
Daubert Standard (1, 2)

Saturday, November 11, 2006

OMG! So says Chicken Little...

I got wind of a post over on the SecurityFocus Forensics list...yes, I still drop by there from time to time, don't hate me for that. Someone posted a couple of days ago about a Registry key in Vista, specifically, a value named "NtfsDisableLastAccessUpdate". Actually, the post was a link to/excerpt from the Filing Cabinet blog. The idea is that this entry allows the admin to optimize a Windows system: if the setting is enabled (you can do it through fsutil on XP and 2003), then file last access times won't be updated when files are accessed, eliminating that "extra" disk I/O. This is really only an issue on high-volume file servers.

The SF post then ends with "In case this is of value in the forensics of Vista".

Well, not to put too fine a point on it, but "duh!" Even though NTFS last access times are known to have a granularity of about an hour, disabling the ability to track such things takes away one of the tools used by forensic investigators. And even though this functionality is enabled by default on Vista (I'm looking at RC1), it's just one tool. For example, Vista still tracks the user's access to programs via the shell in the UserAssist keys.
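As an aside, the value names under the UserAssist keys are ROT-13 encoded (at least on XP and the Vista builds I've looked at), so decoding them to recover the program paths the user launched is trivial. The example value name below is fabricated for the demo.

```python
# The UserAssist value names are ROT-13 encoded; decoding one reveals
# the launched program's path. The sample name here is made up.
import codecs

def decode_userassist(name: str) -> str:
    return codecs.decode(name, "rot_13")

print(decode_userassist("HRZR_EHACNGU:P:\\Jvaqbjf\\flfgrz32\\pnyp.rkr"))
```

So even with last-access updates disabled, the shell is still quietly recording what the user ran.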

In my new book, I recommend checking this value during IR activities, and I also recommend that if forensic investigators find this functionality enabled, then check the LastWrite time on the Registry key to get the date that may correlate to when that change was made to the system. The change can be made through RegEdit or the fsutil application. The fsutil application is not a GUI that is accessed through the shell, so its use won't be tracked via the UserAssist key (although on XP, you may see a reference to fsutil.exe in the Prefetch folder). However, if the change is made, a reboot is required for the change to take effect, so the last access time on the fsutil.exe file (in the system32 directory) may give you an idea of when the change was made, and you may then be able to determine via other logs who made the modification.
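For reference, the fsutil syntax in question on XP/2003 looks like this (shown as a command fragment...it requires admin rights on a Windows box, and the "set" form takes effect only after a reboot):

```shell
# Query whether last-access time updates are currently disabled
fsutil behavior query disablelastaccess

# Disable last-access updates (requires a reboot to take effect)
fsutil behavior set disablelastaccess 1
```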

Friday, November 10, 2006

Parsing Raw Registry Files

I blogged about this Perl module before, but I've actually started using it myself. The Parse::Win32Registry module is a pretty great tool. Someone asked a question recently about getting some information about when a user account was created, and all that the original poster (OP) had available was an image. There were some good suggestions about getting the creation time of the user's folder in the "Documents and Settings" folder, with the caveat that that isn't really the date/time that the user account was created, but rather the date/time that someone first logged into that account.

Another option is to get the date/time that the password was last reset. If the password hasn't been changed, then the time it was "reset" correlates to the date/time that the account was created (see the HelpAssistant and Support_ accounts below).

Using info from AccessData's Registry Viewer and Petter Nordahl's chntpw utility source code, I put together some code to parse out quite a bit of info about user accounts from a raw SAM file. For example:

Name : Administrator
Comment: Built-in account for administering the computer/domain
Last Login = Never
Pwd Reset = Tue Aug 17 20:31:47 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Normal user account
Number of logins = 0

Name : Guest
Comment: Built-in account for guest access to the computer/domain
Last Login = Never
Pwd Reset = Never
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Password not required
--> Normal user account
Number of logins = 0

Name : HelpAssistant (Remote Desktop Help Assistant Account)
Comment: Account for Providing Remote Assistance
Last Login = Never
Pwd Reset = Wed Aug 18 00:37:19 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Normal user account
Number of logins = 0

Name : SUPPORT_388945a0 (CN=Microsoft Corporation,L=Redmond,S=Washington,C=US)
Comment: This is a vendor's account for the Help and Support Service
Last Login = Never
Pwd Reset = Wed Aug 18 00:39:27 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Normal user account
Number of logins = 0

Name : Harlan
Last Login = Mon Sep 26 23:37:51 2005 (UTC)
Pwd Reset = Wed Aug 18 00:49:42 2004 (UTC)
Pwd Fail = Mon Sep 26 23:37:47 2005 (UTC)
--> Password does not expire
--> Normal user account
Number of logins = 35

Pretty cool. This is a little more info than you'd see if you were using RV, and I haven't even started to pull apart the global account information yet. The really neat thing is that the Parse::Win32Registry module doesn't use Windows API calls, so you can install it on Linux and Mac OS X, as well.
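Those Last Login and Pwd Reset times are stored in the SAM's F value as 64-bit FILETIMEs (100-nanosecond intervals since January 1, 1601 UTC). My code is Perl, but here's a minimal Python sketch of just the conversion math; treating both 0 and the max QWORD as "Never" is an assumption on my part, not something from the SAM documentation:

```python
import struct
from datetime import datetime, timezone

# Seconds between the FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01).
EPOCH_DELTA = 11644473600

def filetime_to_datetime(raw):
    """Convert 8 raw little-endian bytes (a FILETIME) to a UTC datetime.
    Returns None for 0 or the max-QWORD sentinel, which show up as 'Never'."""
    (ft,) = struct.unpack("<Q", raw)
    if ft == 0 or ft == 0x7FFFFFFFFFFFFFFF:
        return None
    return datetime.fromtimestamp(ft // 10**7 - EPOCH_DELTA, tz=timezone.utc)

# Example: the Administrator account's Pwd Reset time shown above.
raw = struct.pack("<Q", (1092774707 + EPOCH_DELTA) * 10**7)
print(filetime_to_datetime(raw))  # 2004-08-17 20:31:47+00:00
```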

Monday, November 06, 2006

ComputerWorld: Undisclosed Flaws Undermine IT Defenses

I ran across this article today. All I can say is...sad.

Undisclosed flaws may undermine IT defenses...but that presupposes that the organization has some kind of "IT defenses" in place. Take the first sentence of the article:

Attacks targeting software vulnerabilities that haven’t been publicly disclosed pose a silent and growing problem for corporate IT.

Silent? The vulnerabilities themselves may be "silent", as they're just sitting there. But I have to ask...if someone attempts to exploit the vulnerability, is that silent, and if so, why?

One of my favorite examples of defense (the military analogies just don't work today, as there are simply more and more folks in the IT realm who don't have any military experience) is from the first Mission Impossible movie. Cruise's character, Ethan Hunt, has made his way back to a safehouse after the catastrophic failure of a mission. As he approaches the top of the stairs, he removes his jacket, unscrews the lightbulb from the socket, crushes the bulb inside his jacket, and scatters the shards on the floor as he walks backward to the door of his room. How is this defensive? If someone approaches the room, the hallway is dark and they end up stepping on the shards (which they can't see) and making noise...announcing their presence.

The problem is that there are a great number of organizations out there that have very little in the way of protection, defense mechanisms, etc. Defense in depth includes not only putting a firewall in place, but patching/configuring systems, monitoring, etc. Many organizations have some of these in place, to varying degrees, and the ones that have more/most of them generally do not appear in the press.

This also highlights another issue near and dear to my heart...incident response. The current "state of IR" within most organizations is abysmal. Incidents get ignored, or the default reaction is to wipe the "victim" system and reinstall the operating system and data. This issue, as well as that of preparedness, can only be adequately addressed when senior management understands that IT and IT security are business processes, rather than necessary expenses of doing business. IT and in particular IT security/infosec can be business enablers, rather than drains to the bottom line, IF they are recognized as such by senior management. If organizations dedicated resources to IT and infosec the way they did to payroll, billing and collections, HR, etc., there would actually be IT defenses in place.

An interesting quote from a Gartner analyst is that much of the confusion about what constitutes a zero-day threat stems from the manner in which some security vendors have used the term when pitching their products. Remember, folks, it's all market-speak. Keep that in mind when you read/listen to "security vendors".

Another quote from the same analyst: But the reality is that most organizations “aren’t experiencing pain” from less-than-zero-day attacks. I believe this is only partly true, because many incidents simply go undetected. If an attack or incident is limited in scope and doesn't do anything to get the IT staff's attention, then it's likely that it simply isn't noticed, or if it is, it's simply ignored.

Friday, November 03, 2006

DoD CyberCrime 2007

Right now, I'm scheduled to present on Windows Memory Analysis at the DoD CyberCrime 2007 conference. I'll be presenting early on Thu morning. This presentation will be somewhat expanded over the one I gave at GMU2006, and even though I've submitted the presentation for approval, I'm hoping that between my day job and writing my book, I can put some additional goodies together.

Going over the agenda for DC3, there seem to be some really good presentations, from Wed through Fri. I'd love to see/hear what Bill Harback is talking about regarding the Registry, and I see that Charles Garzoni will be talking about hash sets. Also, Alden Dima is talking about NTFS Alternate Data Streams, something near and dear to my heart.

Interestingly, there are a total of (I count) four presentations about Windows memory. Tim Vidas and Tim Goldsmith are presenting, as is Jesse Kornblum (title: Recovering Executables from Windows Memory Images). This topic must be getting pretty popular. I hope to have the opportunity to meet with many of the folks presenting and attending the conference.

Thursday, November 02, 2006

Forensics Blogs/Podcasts

Does anyone know of any other purely forensics blogs and/or podcasts out there? I know that there're several neat, not-specifically-forensics sites, but I'm looking for just forensics; hardware or software, Windows, Mac, Linux, Atari, Commodore...anything specifically related to computer forensics.

Updates - Code and such

I updated some code recently and thought I'd post a quick note about it...

First off, if you're using the OS detection package from my site, I've updated the code to actually tell you which OS is running. The output is a little cleaner, and the script now provides arguments so you can get verbose output if you need it. The script has also been renamed, replacing the older version.

In the simplest form, you simply pass the name of the RAM dump file to the script, and get the output:

C:\memory> d:\hacking\boomer-win2003.img

Or, you could use the arguments (use '-h' to see the syntax info) to get more info:

C:\memory> -f d:\hacking\boomer-win2003.img -v
OS : 2003
Product : Microsoft® Windows® Operating System ver 5.2.3790.0

I posted yesterday about the updates to the disk documenting script, which I thought was pretty neat. Thanks again to Jon Evans for pointing out the Win32_PhysicalMedia class for getting the serial number. My thought is that something like this is extremely useful, and easy to use (i.e., just cut-and-paste the output into your acquisition worksheet after you've imaged a drive). The "Signature" helps you tie the drive to the image because that's the same value listed in the first DWORD of some of the MountedDevices entries.

Shifting gears, the e-Evidence site was updated yesterday. Every month, Christine puts up new stuff and it's great to see what's out there. For example, I wonder how many folks have taken the time to reflect on their first incident the way Brian did. Also, check out the presentation about the anatomy of a hard drive. There's always some interesting stuff on this site.

Wednesday, November 01, 2006

Documenting Disk Information

One of the things that computer forensics nerds (present company included, of course) need to do a lot of is documentation...something they, uh, I mean "we" are notoriously bad at. Come on, face it, most nerds don't like to document what they do...documentation just gets in the way of doing the fun stuff.

What about documenting information about disks you image? You know...hook up a write-blocker, fire up the acquisition software...what about documenting the disk information for your acquisition worksheet, or your chain-of-custody forms?

One way to do this is with DiskID. Another way is to use WMI. By correlating data from the Win32_DiskDrive, Win32_DiskPartition, and (thanks to Jon Evans for pointing this one out) Win32_PhysicalMedia classes, you can get the same info as with other tools (on Windows). For example, correlating the data from the three Win32 classes mentioned gives me the following - first, for my local hard drive:

Model : ST910021AS
Interface : IDE
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x41ab2316
Serial No : 3MH0B9G3

\\.\PHYSICALDRIVE0 Partition Info :
Disk #0, Partition #0
Installable File System
Boot Partition
Primary Partition

Disk #0, Partition #1
Extended w/Extended Int 13

Then for a USB thumb drive (gelded of the U3 utilities):

Model : Best Buy Geek Squad U3 USB Device
Interface : USB
Media : Removable media other than floppy
Capabilities :
Random Access
Supports Writing
Supports Removable Media
Signature : 0x0
Serial No :

\\.\PHYSICALDRIVE2 Partition Info :
Disk #2, Partition #0
Win95 w/Extended Int 13
Boot Partition
Primary Partition

See? Pretty cool. In some cases, you won't get a serial number, but for the ones I tested, I wasn't able to get a serial number via DiskID32, either.

Now, the cool thing is the "Signature". This is the DWORD value located at offset 0x1b8 in the MBR of a hard drive, and appears in the MountedDevices key entry for that drive (the data entry should be 12 bytes in length).
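Pulling that signature out of the first sector yourself is trivial; here's a minimal Python sketch (my tools are Perl) that just reads the little-endian DWORD at offset 0x1b8, shown against a synthetic sector rather than a real image:

```python
import struct

def disk_signature(mbr):
    """Extract the disk signature from a 512-byte MBR sector.
    The signature is the little-endian DWORD at offset 0x1b8."""
    (sig,) = struct.unpack_from("<L", mbr, 0x1B8)
    return "0x%08x" % sig

# Synthetic example: a zeroed sector carrying the signature from the post above.
sector = bytearray(512)
sector[0x1B8:0x1BC] = struct.pack("<L", 0x41AB2316)
print(disk_signature(sector))  # 0x41ab2316
```

In the MountedDevices data for a fixed drive, this DWORD is followed by the partition's 64-bit starting byte offset, which is why those entries are 12 bytes long.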

Using the Win32_LogicalDisk class, you can get the following:

Drive  Type       File System  Path  Free Space
-----  ---------  -----------  ----  ----------
C:\    Fixed      NTFS               21.52 GB
D:\    Fixed      NTFS               41.99 GB
E:\    CD-ROM                        0.00
F:\    Removable  FAT                846.70 MB
G:\    Fixed      NTFS               46.91 GB

Not bad, eh? And the cool thing is that not only can these tools (which were written in Perl and can be 'compiled' using Perl2Exe) be used with the FSP, but they can also be used by administrators to query systems remotely.

Addendum, 3 Nov: I hooked up an old HDD to a write blocker today and ran the tools against it. Here's what I got:

Model : Tableau FireWire-to-IDE IEEE 1394 SBP2 Device
Interface : 1394
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x210f210e
Serial No : 01cc0e00003105e0

\\.\PHYSICALDRIVE1 Partition Info :
Disk #1, Partition #0
Installable File System
Primary Partition

I verified the signature by previewing the drive in ProDiscover and locating the DWORD at that offset...everything checked out. The Windows Device Manager shows the Tableau write-blocker to be the HDD controller.

Here's what I got for the logical drives:

F:\ Fixed NTFS 9.39 GB

The Disk Manager also reports an NTFS partition on the drive. Still looks pretty useful to me. DiskID32 reported some weird info, probably due to the write-blocker.

Monday, October 30, 2006

New Today

New Perl module on CPAN

I received an email this morning that directed me to a "new" Perl module called Parse::Win32Registry. From the description, the module allows you to read the keys and values of a Registry file without going through the Windows API. Very cool! This goes a step or two beyond my own Perl code for doing this, as it uses an object-oriented approach (it covers the Win95 Registry files as well).

Even though this one isn't up on the ActiveState site yet, it looks like a great option, and will be extremely useful. The install was really easy for me...I downloaded the .tar.gz file, extracted everything, and just copied the contents of the \lib directory into my C:\Perl\site\lib directory. From there, I began running tests with the sample file, and things went really well.

Some of you are probably looking at this thinking, "yeah...whatever." Well, something like this can make parsing through Registry files to get all sorts of data much, much easier. I can also see this being used to parse through the Registry files in XP's System Restore points much more easily, even to the point of running 'diffs'.

New CyberSpeak show
Brett's got a new CyberSpeak podcast up...it's a short one this time, folks. Brett mentioned feedback and encouragement in the show - I don't know what kind of feedback he and Ovie get, but like everyone else, I can see the comments on the LibSyn site. These guys are putting forth a great effort, and it can't be easy to carve out the kind of time it takes every week to put the show together. I tend to wonder if the show could be used as a focal point for a sort of extended e-zine for the community. I've thought before about how useful a digital forensics e-zine would be...something practical, useful, hands-on...not at an academic level, but more on the level of "here's what you can do, and here's how you do it". Since there's a wide range of subjects out there (I'm interested in Windows systems, Brett mentioned Macs in today's show...) I don't think that it would be too hard to put something together each week...folks sending in links to articles, etc. Using the LibSyn site, or some other site (like what the Hak5 guys do with their show notes in a Wiki format), a neat little e-zine could be put together each week/month, with lots of folks contributing. Just a thought...

Desktop Capture Software
Oh, and if anyone has any input or thoughts on some good, free desktop capture software, drop me a line. I want to capture my desktop while I speak into the mic on my laptop, so I can record HOW-TOs for my next book. I think that following along visually is better for some folks (and probably much more interesting) than reading a bunch of steps. Right now, I'm considering WebEx Recorder, but would like to get some input on other options. The requirements are simple...good quality output (with regards to video and audio), and the recorder and player should both be freely available. I know that the IronGeek uses CamStudio...anyone have any thoughts or input on that?

Friday, October 27, 2006

It Bears Repeating

There are just some things that you can't say often enough. For example, tell the ones you love that you love them. Also, tell them that when they suspect an incident, do NOT shut the system off!

Think of Brad Pitt and Ed Norton (no, not that way!!) in Fight Club, but paraphrase..."the first rule of incident response is...don't panic. The second rule of incident response is...DO NOT PANIC!!"

Okay, let's go back to the beginning...well, maybe not that far. Let's go back to an InformationWeek article published in Aug 2006, in which Kevin Mandia was interviewed. Here's one interesting quote:

One of the worst things users can do if they think their systems have been compromised by a hacker is to shut off their PCs, because doing so prevents an investigator from analyzing the contents of the machine's RAM, which often contains useful forensic evidence, Mandia said.

So, what do you think? Is he right? Don't say, "well, yeah, he must be...he's Kevin Mandia!" Think about it. While you're doing that, let me give you a scenario: you're an admin, and you detect anomalous traffic on your network. First, you see entries in firewall or IDS logs. You may even turn on a sniffer to capture network traffic information. So let's say that at that point, you determine that a specific system is connecting to a server on the Internet on an extremely high port, and the traffic appears to be IRC traffic. What do you do? More importantly, what do you want to know?

The usual scenario continues like this...the system is taken offline and shutdown, and stored in an office someplace. An outside third party may be contacted to provide assistance. At this point, once the incident is reported to management, the questions become...what is our risk or exposure? Was sensitive information taken? Were other machines affected? If so, how many?

This is where panic ensues, because not only do most organizations not know the answers to the questions, but they don't know how to get the answers themselves, either. Here's another quote from the article:

...Mandia said rumors of kernel-level rootkits always arise within the company that's being analyzed.

You'll see this in the public lists a lot..."I don't know what this issue is, and I really haven't done any sort of's just easier to assume that it's a rootkit, because at the very least, that says the attacker is a lot smarter than me." Just the term rootkit implies a certain sexiness to the incident, doesn't it? After all, it implies that you've got something someone else wants, and that an incredibly sophisticated attacker, a cyberninja, is coming after you. In some cases, it ends up being just a rumor.

The point of all this is that many times, the questions that management has about an incident cannot be answered, at least not definitively, if the first step is to shut down the system. At the very least, if you have detected anomalous traffic on your network, and traced it back to a specific system, rather than shutting the system off, take that additional step to collect process and process-to-port mapping info (for a list of tools, see chapter 5 of my first book) from the system. That way, you've closed the loop...you've not only tied the traffic to a system, but you've also tied it to a specific process on the system.

Of course, if you're going to go that far, you might as well use something like the Forensic Server Project to gather even more information.

Thursday, October 26, 2006

Windows Memory Updates

Andreas recently released a new tool that lets you run searches across memory pools, looking for things such as timestamps, IP addresses, etc. You start by developing an index, and from there you can search through the various pools for specific items.

This is definitely a step in the right direction. As Andreas has pointed out, the pool header contains a size value, so we know how many bytes the pool occupies. He has also shown that some pool contents can reveal network connections. From here, IMHO, we should focus on "interesting" pooltags and attempt to understand their structures. Once we do, this parsing can be added to existing tools, and the data parsed and presented accordingly. In addition to network connections, for example, the contents of the clipboard (pool tag = Uscb) may be interesting as well. MS provides a listing of pooltags in the pooltags.txt file that's part of the Debugger Tools. You can see an online version of the file here.

Also, there's been an update to the SANS ISC Malware Analysis: Tools of the Trade toolkit listing...and it looks like I'd better get cracking on finishing up the tools I've been putting together to address not just Windows 2000 RAM dumps! ;-)

Monday, October 23, 2006

For those who haven't seen it, you should check out the home of PTFinderFE and SSDeepFE. There are also a number of links on the site to cell phone stuff, particularly for forensics. Very cool stuff.

Vista, RAM dumps, and OS detection (oh, my!!)

I received an email from Andreas today, and one of the things he mentioned is that the offset for the Vista kernel in memory is 0x81800000...this could be added to my os detection script. So I made the change to my script and ran it against a memory dump that I had from a Vista machine. Nothing. Nada. No impact, no idea. I opened up the memory dump in UltraEdit and saw that there was nothing at the offset...well, at least no PE header.

I then fired up my Vista RC1 VMWare session, and ran LiveKD to see what it reported the kernel base address as (0x81800000), and then I suspended the session. I opened up the resulting .vmem file and ran the script against it...and saw the following:

File Description : NT Kernel & System
File Version : 6.0.5600.16384 (vista_rc1.060829-2230)
Internal Name : ntkrpamp.exe
Original File Name :
Product Name : Microsoft® Windows® Operating System
Product Version : 6.0.5600.16384
File Description : Boot Man╕╝ ç(╝ ç
File Version : 6.0.5600.16384╜ çp╝ çsta_rc1.060829-2230)
Internal Name :
Original File Name :
Product Name : Microsoft« Winh╛ ç╪╜ ç« Operating System
Product Version :

Very cool. To make this change yourself, just add '0x81800000 => "VistaRC1"' to the %kb hash.

Addendum 24 Oct: Sent by Andreas, and confirmed this morning...add '0x82000000 => "VistaBeta2"' (remove outer single quotes) to the %kb hash in
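For anyone porting the idea, the detection logic amounts to a lookup table of known kernel base offsets, each checked for an executable header. My script is Perl; here's a hypothetical Python sketch of the same approach (the Vista offsets are from this post and the addendum, and checking only for "MZ" is a simplification of the full PE header parsing the script does):

```python
KERNEL_BASES = {
    0x81800000: "VistaRC1",
    0x82000000: "VistaBeta2",
    # ...plus the Win2K-Win2K3 offsets already in the script
}

def detect_os(dump, bases=None):
    """Return the OS name whose known kernel base offset in the raw
    memory dump holds an MZ header, or None if nothing matches."""
    for offset, name in (bases or KERNEL_BASES).items():
        if dump[offset:offset + 2] == b"MZ":
            return name
    return None

# Demo with a tiny fake table (a real dump is hundreds of MB or more):
demo = bytearray(64)
demo[0x20:0x22] = b"MZ"
print(detect_os(demo, {0x20: "FakeOS"}))  # FakeOS
```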

Friday, October 20, 2006

Pool tag finder

Andreas released his poolfinder tool the other day, a Perl script for locating pool tags in a memory dump. This script accompanies a paper he wrote for IMF 2006 (here's his paper from DFRWS 2006).

So, what's a pool tag, and why is it important? According to Andreas, the kernel allocates what's referred to as a "pool" of memory, so that if an application requests some memory that is less than a 4K page, rather than wasting all of that space, only what is required will be allocated. The pool tag header is a way of keeping track of this pool.

The pool header specifies the size of the pool, but there isn't any publicly available method for parsing the pool contents into something usable. I've used this technique to locate the contents of the Clipboard in the DFRWS 2005 memory challenge dumps, and Andreas has used it to locate network socket activity. Perhaps someone with experience developing drivers for Windows can assist in working out the format of the various structures.
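For reference, the x86 pool header itself is 8 bytes: two 16-bit words of packed fields followed by the 4-byte tag, with sizes stored in 8-byte blocks. Here's a hedged Python sketch of parsing one; the bit layout below is the publicly known 32-bit XP/2003 _POOL_HEADER and should be treated as an assumption for other builds:

```python
import struct

def parse_pool_header(buf, offset=0):
    """Parse an x86 _POOL_HEADER (8 bytes) at the given offset.
    Word 1: PreviousSize (9 bits) | PoolIndex (7 bits)
    Word 2: BlockSize (9 bits)    | PoolType (7 bits)
    Sizes are stored in 8-byte blocks; the 4-byte tag follows."""
    w1, w2 = struct.unpack_from("<HH", buf, offset)
    tag = buf[offset + 4:offset + 8]
    return {
        "previous_size": (w1 & 0x1FF) * 8,   # bytes
        "pool_index": w1 >> 9,
        "block_size": (w2 & 0x1FF) * 8,      # bytes, includes the header
        "pool_type": w2 >> 9,
        "tag": bytes(tag).decode("ascii", "replace"),
    }

# Synthetic 'TCPA' allocation: BlockSize = 6 blocks (48 bytes).
hdr = struct.pack("<HH", 0, 6) + b"TCPA"
print(parse_pool_header(hdr))
```

Knowing block_size is what lets a scanner skip from one allocation to the next; interpreting what's *inside* the pool is the part that still needs work.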

Additional Resources:
Who's using the pool?
How to use PoolMon.exe

Addendum, 23 Oct: I've made some comments to Andreas's blog, and we've gone back and forth a bit. What I'd like to try to do is identify various pool tags that are of interest to forensic examiners (ie, TCPA for network connections, clipboard, etc.). Thanks to Andreas's work, finding the pool tags and their sizes is relatively easy...deciphering them into something readable is something else entirely.

Restore Point Forensics

I was doing some digging around the other day, researching System Restore points in XP...they're only in XP (and ME, but we're only concerned with the NT-style Windows OSs), and not something you'll find in 2000 or 2003.

So, what is "System Restore"? SR is a function of XP that creates "restore points" or limited backups of certain important files, depending upon certain triggers. For example, by default, a restore point will be created every calendar day. Also, restore points are created whenever the user installs software. This, of course, means that you could technically have multiple restore points created during a single day. Restore points can be used in case something goes wrong with the installation and the system is affected...the user can select a previous restore point, and "roll back" the system to that point. This is a very cool thing, and I've used it more than once myself. Keep in mind, though, restore points do not affect certain things, such as login passwords or a user's data, so don't be afraid to select a restore point from 3 days ago because you think you might loose that spreadsheet you built yesterday.

So what do restore points mean to us? Well, they're full of a wealth of information about the system. I've presented on the topic of "the Registry as a logfile" in the past, and this is a great example of that. Important files are backed up, but so are portions of the Registry, such as the System and Software files, the user's profile(s), and portions of the SAM. These Registry files have keys in them, and the keys have LastWrite times.

Where do these come into play? I've seen several questions on some of the lists lately, asking about things like "was this user account on the system", or "did the user change their name", as well as other questions that, on the face of it, would best be answered if there were some way to take a glimpse into the past. Restore points let you do that.

The structure of the restore points is pretty simple. On the root of the system drive, you'll find a "System Volume Information" directory, and beneath that a directory named "_restore{GUID}". Beneath that, you'll have numbered restore points...RP35, RP36, RP37, etc. You get the idea. Within each of these "RP**" directories, you'll have some backed up files (depends on the nature of the restore point), and a file called 'rp.log'. This is a binary file that contains the description string (null terminated string starting at offset 0x10) for the restore point, as well as the FILETIME object of when it was created (QWORD starting at offset 0x210). Finally, there is a snapshot directory holding all of the Registry file backups.
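Based on those offsets, pulling the description and creation time out of rp.log can be sketched in a few lines of Python (again, my actual code is Perl). Treating the description as null-terminated UTF-16LE is an assumption based on the files I've looked at:

```python
import struct
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between the 1601 and 1970 epochs

def parse_rp_log(data):
    """Parse an XP restore point rp.log: description string at offset 0x10,
    creation FILETIME (QWORD) at offset 0x210. The description is assumed
    to be null-terminated UTF-16LE."""
    raw = data[0x10:0x210]
    end = raw.find(b"\x00\x00")
    if end % 2:          # align the terminator to a 2-byte boundary
        end += 1
    desc = bytes(raw[:end]).decode("utf-16-le", "replace")
    (ft,) = struct.unpack_from("<Q", data, 0x210)
    created = datetime.fromtimestamp(ft // 10**7 - EPOCH_DELTA, tz=timezone.utc)
    return desc, created

# Synthetic example (a real rp.log comes from an RP** directory):
buf = bytearray(0x218)
d = "System Checkpoint".encode("utf-16-le")
buf[0x10:0x10 + len(d)] = d
buf[0x210:] = struct.pack("<Q", (1161129600 + EPOCH_DELTA) * 10**7)
print(parse_rp_log(buf))
```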

If you're interested in peering into these restore points, but don't have an image available, you can look at them on your own XP system. One way to get some information is through WMI. Using the SystemRestore and SystemRestoreConfig objects, you can get information about each restore point, as well as configuration settings for System Restore, respectively. I wrote a Perl script to do this...here's an excerpt of what I got on my system:

64 13:28:07 09/27/2006 Installed Windows Live Messenger
65 15:07:48 09/28/2006 System Checkpoint
73 12:45:41 10/09/2006 System Checkpoint
74 14:03:53 10/09/2006 Installed QuickTime
75 19:25:09 10/09/2006 Installed Microsoft .NET Framework 1.1
76 20:21:54 10/10/2006 System Checkpoint
77 20:52:59 10/11/2006 System Checkpoint
78 22:00:48 10/12/2006 System Checkpoint
79 22:20:18 10/12/2006 Installed Windows Media Player 11
80 22:20:41 10/12/2006 Installed Windows XP Wudf01000.
81 10:45:08 10/14/2006 Software Distribution Service 2.0
82 12:35:54 10/18/2006 System Checkpoint
83 18:29:09 10/19/2006 System Checkpoint
84 20:38:49 10/19/2006 Removed ProDiscover IR 4.8a
85 20:43:48 10/19/2006 Installed ProDiscover IR 4.84

Kind of cool. This gives me something of a view of the activity of the system, not only when it was online, but also what was going on. Notice that on 19 Oct, three restore points were created. Not only was the normal System Checkpoint created, but there was a software removal and an installation. This script is very useful, not only for incident response (include it with the FRU and FSP), but also for general system administration. It works remotely as well as locally, and gives the admin some good visibility into the system.

The other interesting thing about this is the order of the restore points. Notice that as the restore point indexes increase, so do the timestamps. If I had modified my system time several days ago, things might not appear to be in order...so this is a good little sanity check to see if maybe the system time was modified. It's not definitive, mind you, but a good check.
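That sanity check is easy to automate once you've collected (index, creation time) pairs like the ones listed above. A small sketch, assuming the timestamps have already been converted to Unix times:

```python
def timeline_consistent(points):
    """Given (index, unix_timestamp) pairs, return True if creation
    times increase along with restore point indexes. A violation
    suggests (but doesn't prove) that the system clock was changed."""
    ordered = sorted(points)               # sort by restore point index
    times = [t for _, t in ordered]
    return all(a <= b for a, b in zip(times, times[1:]))

print(timeline_consistent([(64, 100), (65, 200), (73, 300)]))  # True
print(timeline_consistent([(64, 100), (65, 50)]))              # False
```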

If you just want to take a look around, go to SysInternals and get a copy of psexec. Once it's on your system, type:

psexec -s cmd

and a command prompt will open, but you'll have SYSTEM level privileges. Then type:

cd \Sys*
cd _restor*

Now, select one of the restore points and cd to that directory, then to the snapshot directory. If you do a 'dir' at this point, you'll see all of the various Registry files that are backed up, each one beginning with "_REGISTRY_MACHINE_*". At this point, you can copy any of these files out to a "normal" directory, and either open the file in a hex editor, or run it through the Offline Registry Parser and see the contents in ASCII. I tried this and dumped out one of the SAM files I found, and saw that I could see the C values that hold group membership information, as well as the F and V values for user's account information. One of the things I've got in the works is a set of scripts that will parse through these files (and the "normal" Registry files), reporting on what it finds...but in a way that's readable to an investigator. So, rather than spewing out binary data for the F and V values for a user account, translate that into something readable, like my UserDump ProScript. Something like this could be used to 'diff' the various files from restore points.

I've developed a ProDiscover ProScript for sorting through the restore points on an XP system, and I'll make it available when the next version is released.

Additional Resources:
Steve Bunting's site (he uses EnCase)
Bobbie Harder's site (links to KB articles)
Windows XP System Restore
Wang's Blog (tells you how to open the RP** Registry files in RegEdit)
Restore Point Log Format

Wednesday, October 11, 2006

New HaxDoor variant

I picked this one up from the Securiteam blog today...it seems there's a new HaxDoor variant out. Symantec's technical write-up contains a lot of detail (useful to forensic analysts...the guys on the FIRST list should take note), and even though they do classify this one as a "backdoor", they do actually use the word "rootkit".
This variant captures keystrokes, steals passwords, allows the attacker to download files, in addition to hiding itself, installing itself as a service, attempting to disable the Windows Security Center, and maintaining additional persistence by adding an entry to the following Registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\

Here's some info from MS on that Winlogon\Notify entry.

One thing I think is needed is an understanding of how important clear, detailed write-ups on things like this are to the IR/CF community. This sort of thing is very helpful when you're trying to ascertain the sophistication of an intrusion, and perhaps even develop some next steps. Remember, things such as NTFS ADSs and esoteric Registry key entries may be ho-hum boring to a lot of folks, but they're all pieces of the puzzle if you're doing IR/CF.

If you think you may have a rootkit, check out one of these tools.

PTFinder Front End

Andreas posted this on his blog, and I took a look at it...very cool. This is a front-end for running PTFinder, and even allows you to choose which operating system you want to run it against. This makes things a little easier to use.

If you have a RAM dump and don't remember which OS it is, as long as it's between Win2k and Win2K3SP1, you can use my OS detection script.

The neat thing is that you can get output that looks like this. Very cool!

Monday, October 09, 2006

Are you a Spider man?

I found an announcement on another list today for Spider, a tool from Cornell University for locating sensitive personal information (SSN, credit card numbers, etc.) on a system.

I'm told that the intended use is to run it against the system after you've acquired an image, which makes sense. Either image the system live, and then run the tool on the rebooted system, or remove the hard drive, image it, replace it, boot it, and then run the tool. I wouldn't recommend it as an initial step in IR, as it will change the MAC times on all of the files, but it is a great idea for IT admins. Many times during an engagement, one of the questions that will be asked is, "Is/was there any sensitive data on the system, and if so, was it accessed/copied?" Well, IT admins can run a tool like this as part of their SOP...after all, if an incident does occur, you're going to be asking that question anyway, right?

Also, throw in something like LiveView, particularly if all you have is an image. Boot the image, and run Spider against it. Unfortunately, you're going to have to install .NET on the booted image, but VMWare is great with snapshots.

So, for the Windows version, you need to install the .NET framework first. Note that the documentation for downloading and installing Spider refers to the install file as "spider_nsis.exe", but the Windows version (2.19a) comes as 'setup.exe'. The configuration of the tool is a bit outside the don't just launch Spider from the command line with switches.

Interesting capabilities include the ability to add regex's, scanning of ADSs, and logging to the Event Log or to syslog.
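Regex-based scanning of this sort is easy to illustrate. Here's a hedged Python sketch; these patterns are mine, not Spider's, and they'll false-positive exactly the way the documentation warns (the Luhn check cuts down on bogus credit card hits, but does nothing for SSNs):

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CC_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # 16 digits, optional separators

def luhn_ok(number):
    """Luhn check-digit validation, used to filter credit card false positives."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text):
    """Return (label, match) hits for SSN-like and Luhn-valid CC-like strings."""
    hits = [("SSN", m.group()) for m in SSN_RE.finditer(text)]
    hits += [("CC", m.group()) for m in CC_RE.finditer(text)
             if luhn_ok(m.group())]
    return hits

# Well-known sample values, not real accounts:
print(scan("ssn 078-05-1120 card 4111 1111 1111 1111"))
```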

A great tool, but I really see it being more a part of an IT admin's SOP than a response tool, though it does have its uses. Keep in mind that the documentation states that you can have a high incidence of false positives, but that's the case even with regex searches in EnCase, and it's just something you have to deal with for now.

Parsing Registry files

Last week, I mentioned making adaptations to a tool to perform specific tasks. Specifically, adapting the Offline Registry Parser so that instead of dumping all of the stuff in a Registry file, it dumps specific keys, values, and their data, and translates that data into something human-readable (and parsable), rather than simply spewing it to STDOUT.

Where I thought this might be useful is with the SAM file, to start. Run through the file and pull out all of the user information, group membership info, and even the audit policy (translated into something similar to auditpol.exe's output). A side benefit of this is that you could run it against the current SAM file, as well as any located in System Restore points, and get a rough timeline of when changes occurred.

This could also be done for the NTUSER.DAT files.

Another benefit of this is data reduction. Rather than dumping the entire contents of the Software hive, you could extract only those keys, values, and data that you would most often be interested in. From there, you'd have less to analyze, and you'd still have the original data.
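As a sketch of the data-reduction idea (the tuple format here is my own illustration...the actual Offline Registry Parser writes text output, and the key list is just an example, not a definitive set), filtering parsed output down to keys of interest could be as simple as:

```python
# Keys we care about, relative to the hive root; this short list is
# purely illustrative, not a definitive set.
INTERESTING = (
    "Microsoft\\Windows\\CurrentVersion\\Run",
    "Microsoft\\Windows NT\\CurrentVersion\\Winlogon",
)

def reduce_hive(entries):
    # entries is a list of (key_path, value_name, data) tuples, as a
    # parser might emit them; keep only values under keys of interest.
    return [e for e in entries if e[0].startswith(INTERESTING)]
```

Run the same filter against the current hive and the copies in the System Restore points, and diffing the (much smaller) results gives you that rough timeline of changes.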

Saturday, October 07, 2006


This post is likely to be the first of several, as it's something I've been thinking about for quite a while, and it takes new form and shape every time it pops into my head. So...please bear with me...

We see all the time that cybercrime is increasing in sophistication. We see this in reports and surveys, as well as in quotes. From this, we can assume (correctly) that there is a widening gap in abilities and resources between those committing the crimes, and those investigating the crimes. This gap is created when innovation occurs on one side of the equation, and not on the other.

I guess we need to start with the question, is there a need for innovation in the field of incident response (IR), and consequently computer forensic (CF) analysis?

I know that this is going to open up a whole can of worms, because not everyone who reads this is going to interpret it the same way. Even though I get this question running around inside my brain housing group from time to time, I don't think I really have a solid grasp of the concept yet. I see things and I think to myself, "Hey, we could really use some innovation here", or as in the case of Jesse Kornblum's ssdeep, "Hey, THAT'S an innovation!!"

I know what isn't an innovation, though...hash lists are not an innovation. Not any more. Sorry. 'nuff said.

Let's look at it this way...right now, Windows systems are being investigated all the time. I'm on several public and member-only forums, so I see the questions...some of the same ones appear all the time. There are just some things that folks don't know about yet, or don't have a clear understanding of, and simply don't have the time to research themselves. From a more general perspective, there are areas of a Windows system that are not widely investigated, simply due to a lack of understanding of what data is available and how it could affect the investigation. I firmly believe that if there were more understanding and knowledge of these areas, some investigations would reap significant benefits.

So, is the innovation need in the area of knowledge, communication, or both?

Vista is bringing about innovations in technology. Un- or under-documented file formats require application-specific innovations (and these include Registry entries, not just binary file formats).

See what I mean? It's kind of hard to put your finger on, even though it's there...just outside your direct line of vision, like trying to see someone at a distance, at night. On the one hand, cybercrime has a motivation to innovate...innovations are made out of necessity. But what about other cases or issues, such as missing children? Business innovations in technology and applications (MySpace, Xanga, IM applications, etc.) just naturally require innovations in the areas of understanding, investigations, and subsequently communications. Outside innovations in storage media have led to different (albeit not new) means of committing information theft and fraud.


Friday, October 06, 2006

d00d, you can do it on Windows, too!

Jesse Kornblum had a couple of interesting posts recently on his blog, both relating to ssdeep. Yes, Jesse, I found the ssdeep stuff to be more interesting than the cat stuff. Sorry! One post was about using ssdeep to discover code re-use by comparing files in directories, and the other one was about using ssdeep to tie a portion of a file to the original. Very cool stuff.

I've gotta say that ssdeep is one of the true innovations in incident response and computer forensics. This isn't a new/different implementation of something that's already there...this is truly something new.
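To get a feel for why fuzzy hashing works, here's a drastically simplified sketch of context-triggered piecewise hashing in Python. This is NOT ssdeep's actual algorithm (Jesse's rolling hash and signature encoding are far more refined)...just the core idea: block boundaries are chosen by the content itself, so a small change only disturbs the blocks around it.

```python
import hashlib

def piecewise_signature(data, trigger=64):
    # Slide a 7-byte window over the data; when a simple rolling value
    # hits the trigger condition, close the current block and record a
    # short hash of it. Boundaries depend on content, not offsets.
    sig, block, window = [], bytearray(), bytearray()
    for b in data:
        block.append(b)
        window.append(b)
        if len(window) > 7:
            window.pop(0)
        if sum(window) % trigger == trigger - 1:
            sig.append(hashlib.md5(bytes(block)).hexdigest()[:6])
            block.clear()
    if block:
        sig.append(hashlib.md5(bytes(block)).hexdigest()[:6])
    return sig

def similarity(a, b):
    # Fraction of block hashes the two signatures share (Jaccard).
    union = set(a) | set(b)
    return len(set(a) & set(b)) / max(len(union), 1)
```

Change one byte in the middle of a file and only a block or two of the signature changes, so the similarity score stays high...which is what lets a tool like ssdeep tie a modified file, or even a fragment, back to the original.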

Thursday, October 05, 2006

VMWare Converter (beta)

I received an email this morning that mentioned the release of VMWare's Converter tool, which is still in beta. Clicking on the "Register Now" button lets you register for the beta.

The Converter is a tool that allows you to convert between formats. From the web page:
  • Supports cloning a wide variety of Windows OS platforms including Windows XP, Windows Server 2003, Windows 2000, Windows NT 4 (SP4 +) and 64-bit Windows Support (Windows XP and Windows Server 2003).
  • Supports conversion of standalone VMware virtual machine disk formats (VMware Player, VMware Workstation, VMware GSX Server and VMware Server) across all VMware virtual machine platforms (including ESX 3.0 server and VC 2.0 managed servers as target host).
  • Supports conversion of 3rd party disk image formats such as Microsoft Virtual PC, Microsoft Virtual Server, Symantec Backup Exec System Recovery (formerly LiveState Recovery) and Norton Ghost9 (or higher) to VMware virtual machine disk format.
Pretty cool. Looks like additional functionality beyond what you get from LiveView. BTW, the latest CyberSpeak podcast has an interview with the author of LiveView.

Tuesday, October 03, 2006

Rootkits revisited

I was browsing the F-Secure blog this morning and found something interesting...from last Friday, there was this post about reselling stolen information. Now, this is nothing new...this is just part of how organized online crime is becoming. Rather than one person doing everything, someone will purchase malware and use it to infect systems, then collect the data from Protected Storage, keystroke loggers, etc. This information is then sold to others for fraud, identity theft, etc.

For a good example of this, take a look at Brian Krebs' story from 19 Feb 06.

What I thought was most interesting about the F-Secure blog entry was this:

These changing Haxdoor variants are generated with a toolkit known as "A-311 Death".

The toolkit itself is sold on the Internet by its author, known as "Corpse" or "Korpsov".

Okay, this is nothing new, either...selling malware toolkits or custom rootkits has been going on for a while. This toolkit is based on Haxdoor. I started taking a look around and found some interesting links. One was from the nmap-dev list...a discussion of a service detection signature for rootkits produced with this toolkit.

My post on Gromozon has some links to rootkit detection software.

Additional Resources:
McAfee Rootkits: The Growing Threat paper
Symantec C variant, D variant

FSP/FRU File Copy Client posted

I posted the File Copy Client (FCli) that was mentioned in Windows Forensics and Incident Recovery. After getting enough questions about it, I thought that it was about time that I uploaded it to the SourceForge site.

The FCli is a GUI client that the investigator can use to select files to be copied from the 'victim' system over to the FSP server. Here's how you use it (I am going to create webinar/movie files for this stuff for the book): download the archive from the SF site, and keep all of the files (EXE and associated DLLs) together in the same directory. For initial testing, you may want to keep them separate from other tools and files. Launch FCli, and choose File -> Config...enter the IP address and port of the FSP server. Then choose File -> Open and use the dialog to select the files that you want to copy (web server log files, etc.). Once you've selected the files that you want, click OK to close the dialog and the file names will be added to the FCli ListView (you can go back and open the File -> Open dialog again, if you wish).

Once you have selected the files you want to copy, click "OK" in the main FCli window. The status bar at the bottom of the window will show you the progress of the copying, which may go fairly quickly. Once all the files are copied over, simply close the window.

What FCli does is first collect metadata about the file...size, MAC times, and MD5/SHA-1 hashes. This data is sent to the FSP server and archived. The file itself is then copied over to the server, at which point the server verifies the hashes. Here's an extract from the case log file:

Mon Oct 2 21:26:14 2006;DATA command received: bho2.dat
Mon Oct 2 21:26:14 2006;HASH bho2.dat:885f60ff031496a3fe37aa15454c6a46:bf402abbbe76e417105d18ad9193436315dd7343
Mon Oct 2 21:26:14 2006;FILE command received:
Mon Oct 2 21:26:14 2006; created and opened.
Mon Oct 2 21:26:14 2006; closed. Size 1522
Mon Oct 2 21:26:14 2006;MD5 hashes confirmed for
Mon Oct 2 21:26:14 2006;SHA-1 hashes confirmed for

First, the metadata is sent to the server and saved in a file with the .dat extension (next version needs to change this, in case of a conflict with a file with a .dat extension), and the hashes are pulled out of the file and logged. Then the file is copied over and the size of the file on the server, after it's been written, is logged (you can verify this later with the size recorded in the metadata). Then the file hashes are computed for the file that is now on the server, and confirmed against the metadata.
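The verification logic amounts to computing hashes on both ends and comparing. A minimal Python sketch of the idea (the actual FCli/FSP code is Perl, and this is my own illustration, not that code):

```python
import hashlib

def file_metadata(path):
    # Client side: gather the size and MD5/SHA-1 hashes, as FCli does
    # before the file itself is sent to the FSP server.
    with open(path, "rb") as f:
        data = f.read()
    return {"size": len(data),
            "md5": hashlib.md5(data).hexdigest(),
            "sha1": hashlib.sha1(data).hexdigest()}

def verify(path, meta):
    # Server side: recompute over the received copy and confirm each
    # item against the recorded metadata.
    got = file_metadata(path)
    return all(got[k] == meta[k] for k in ("size", "md5", "sha1"))
```

If verify() comes back false, something changed in transit, and the case log should reflect that rather than a "hashes confirmed" entry.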

The archive you want from the SF site contains the executable file and all supporting DLLs, as well as the Perl source code for the FCli.

Thanks, and enjoy!

Addendum: It seems that WinRAR-compressed files don't always play well with other compression utilities, so I updated the archive and uploaded it to the SF site.

Monday, October 02, 2006

E-Evidence site updated

The E-Evidence site was updated earlier...as always, lots of interesting stuff! Thanks, Christine!