Wednesday, November 29, 2006

Artifact classes

I've been doing some thinking about IR and CF artifacts over the past couple of weeks, and wanted to share my thoughts on something that may be of use, particularly if it's developed a bit...

When approaching many things in life, particularly a case I'm investigating, I tend to classify things (imagine the scene in The Matrix where Agent Smith has Morpheus captive, and tells him that he's classified the human species as a virus*) based on information I've received...incident reports, interviews with the client, etc. By classify, I mean categorizing the incident in my mind...web page defacement, intrusion/compromise, inappropriate use, etc. To some extent, I think we all do this...and the outcome is that we tend to look for artifacts that support this classification. If I don't find these artifacts, or the artifacts that I do find do not support my initial classification, then I modify my classification.

A by-product of this is that if I've classified a case as, say, an intrusion, I'm not necessarily going to be looking for something else, such as illicit images, particularly if it hasn't been requested by the client. Doing so would consume more time, and when you're working for a client, you need to optimize your time to meet their needs. After all, they're paying for your time.

Now, what got me thinking is that many times on the public lists (and some that require membership) I'll see questions or comments indicating that the analyst really isn't all that familiar with either the operating system in the image, or the nature of the incident they're investigating, or both. This is also true (perhaps more so) during incident response activities...not understanding the nature of an issue (intrusion, malware infection, DoS attack, etc.) can often leave the responder either pursuing the wrong things, or suffering from simple paralysis and not knowing where to begin.

So, understanding how we classify things in our minds can lead us to classifying events and incidents, as well as classifying artifacts, and ultimately mapping between the two. This then helps us decide upon the appropriate course of action, during both live response (i.e., an active attack) and post-mortem activities.

My question to the community is this...even given the variables involved (OS, file system, etc.), is there any benefit to developing a framework for classification, to include artifacts, to provide (at the very least) a roadmap for investigating cases?

Addendum, 30 Nov: Based on an exchange going on over on FFN, I'm starting to see some thought being put into this, and it's helping me gel (albeit not yet crystallize) my own thinking, as well. Look at it this way...doctors have a process that they go through to diagnose patients. There are things that occur every time you show up at the doctor's office (height, weight, temperature, blood pressure), and there are those things that the doctor does to diagnose your particular "issue du jour". Decisions are made based on the info the doctor receives from the patient, and courses of action are decided. The doctor will listen to the patient, but also observe the patient's reaction to certain stimuli...because sometimes patients lie, or the doctor may be asking the wrong question.

Continuing with the medical analogy, sometimes it's a doctor that responds, sometimes a nurse or an EMT. Either way, they've all had training, and they all have knowledge of the human body...enough to know what can possibly be wrong and how to react.

Someone suggested that this may not be the right framework to establish...IMHO, at least it's something. Right now we have nothing. Oh, and I get to be Dr. House. ;-)

*It's funny that I should say that...I was interviewed on 15 May 1989 regarding the issue of women at VMI, and I said that they would initially be treated like a virus.

Tuesday, November 28, 2006

MS AntiMalware Team paper

I know I'm a little behind on this one, but I saw this morning that back in June, the MS AntiMalware team released an MSRT white paper entitled "Windows Malicious Software Removal Tool: Progress Made, Trends Observed".

I've seen some writeups and overviews of the content, particularly at the SANS ISC. Some very interesting statistics have been pulled from the data collected, and as always, I view this sort of thing with a skeptical eye. From the overview:

This report provides an in-depth perspective of the malware landscape based on the data collected by the MSRT...

It's always good for the author to set the reader's expectations. What this tells me is that we're only looking at data provided by the MS tool.

Here are a couple of statements from the overview that I found interesting:

Backdoor Trojans, which can enable an attacker to control an infected computer and steal confidential information, are a significant and tangible threat to Windows users.

Yes. Very much so. If a botmaster can send a single command to a channel and receive the Protected Storage data from thousands (or tens of thousands) of systems, this would represent a clear and present danger (gotta get the Tom Clancy reference in there!).

Rootkits...are a potential emerging threat but have not yet reached widespread prevalence.

I don't know if I'd call rootkits a "potential" or "emerging" threat. Yes, rootkits have been around for quite a while, since Greg Hoglund started releasing them with NTRootkit v0.40. In fact, commercial companies like Sony have even seen the usefulness of such things. It's also widely known that there are rootkits-for-hire. Well, I guess what it comes down to is how you define "widespread". We'll see how this goes...I have a sneaking suspicion that since it's easy enough to hide something in plain sight, why not do so? That way, an admin can run all the rootkit detection tools they want and never find a thing.

Social engineering attacks represent a significant source of malware infections. Worms that spread through email, peer-to-peer networks, and instant messaging clients account for 35% of the computers cleaned by the tool.

I can't say I'm surprised, really.

These reports are interesting and provide a different view of the malware landscape that many of us might not see on a regular basis...it's kind of hard to see the forest for the trees when you're face down in the mud, so to speak.

Even so, what I don't think we see enough of in the IR and computer forensics community is something along the same lines but geared toward information that is important to us, such as forensic artifacts. For example, how do different tools stack up against various rootkits and other malware, and what are the artifacts left by those tools?

Most of the A/V vendors note the various changes made when malware is installed on a system, but sometimes it isn't complete, and other times isn't correct (remember the MUICache issue??).

What would be useful is a repository, or even various sites, that could provide similar information but more geared toward forensic analysts.


Monday, November 27, 2006

Some sites of note

I've been perusing a couple of new sites that I've run across over the past couple of weeks and wanted to pass them on to others, and include some of my thoughts/experiences with the sites...

MultiMediaForensics - At first, this looks like another forum site, similar to ForensicFocus. One of the aspects of this site, however, is that the author maintains the shownotes for the CyberSpeak podcast. One of the most important things about this site is that it provides a forum for members of the computer forensics community to come together. In my experience, this is one of the things missing from the community...a "community". While ForensicFocus is UK-based, MultiMediaForensics appears to be more US-based, and that may have an effect on its popularity here in the US. If you follow the CyberSpeak podcast, or are just interested in computer forensics, make a point of contributing (post, write an article, provide a link, ask a question, etc.) to this site.

Hype-Free - While not specifically related to security or forensics, this site does have some interesting commentary. The author says in the "About" section that the goal of the blog is to "demystify security", so keep your eye out for some good stuff coming down the road.

MS AntiMalware Blog - Don't go to this site expecting to see MS's version of Symantec or some other A/V vendor that posts writeups on malware. However, you may find some interesting white papers and posts that will help you understand issues surrounding malware a bit better. For example, the ISC recently mentioned some highlights from one such white paper.

Sunday, November 26, 2006

A little programming fun, etc.

I was working on writing my book yesterday afternoon (for those of you who live in my area and know how nice it was outside, I know...shame on me for being inside), and was working on a section that has to do with the Windows Recycle Bin. Most forensic analysts are likely familiar with the Recycle Bin and how it works, as well as with Keith Jones's excellent description of the Recycle Bin records (1, 2), so this is more than likely nothing new.

For the book, I wanted to throw together a little script that can be used to parse the INFO2 file, in order to illustrate some concepts that I am trying to convey in that section. To my surprise, it was easier than I thought...I got something put together that works amazingly well - my only stumbling block at this point is how to present the data in such a way as to be useful to the widest range of folks.
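Just to illustrate the approach (this isn't the full script that will appear in the book), a stripped-down sketch looks something like the following...the field offsets are the ones documented in Keith's papers, and the record size is read from the 16-byte header rather than hard-coded:

#! c:\perl\bin\perl.exe
# Sketch: parse a Recycle Bin INFO2 file (offsets per Keith Jones's papers)
use strict;

my $file = shift || die "Usage: info2.pl <INFO2 file>\n";
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);

# 16-byte header; the DWORD at offset 12 holds the record size
# (0x320, or 800 bytes, on XP, where each record carries both the
# ANSI and the Unicode paths)
my $hdr;
read(FH,$hdr,16) == 16 || die "Could not read the INFO2 header\n";
my $rec_size = unpack("V",substr($hdr,12,4));

my $rec;
while (read(FH,$rec,$rec_size) == $rec_size) {
	my $name  = unpack("Z*",substr($rec,0,260));    # ANSI path
	my $index = unpack("V",substr($rec,260,4));     # the Dc# number
	my $drive = unpack("V",substr($rec,264,4));     # 0 = A:, 2 = C:, etc.
	my ($lo,$hi) = unpack("VV",substr($rec,268,8)); # deletion FILETIME
	my $size  = unpack("V",substr($rec,276,4));     # file size, in bytes
	# Convert the 64-bit FILETIME to a Unix epoch time
	my $del   = ($hi*(2**32) + $lo)/10000000 - 11644473600;
	printf "Dc%d (%s:)  %s (UTC)  %10d bytes  %s\n",
		$index,chr(0x41 + $drive),scalar gmtime($del),$size,$name;
}
close(FH);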

This just shows how useful a little bit of programming skill can be. Something I've been thinking about adding to my list of ProDiscover ProScripts is a script that will go through the Recycler directory and look for files that are out of place...not only files that are in the Recycler directory itself, but those that are in the user's deleted files but not listed in the INFO2 file.

Way Too Funny
Okay, now this is just way too funny...I was doing a search on Amazon to see if my next book is up for early order yet (my publisher said that it would be soon), and I ran across my master's thesis! If you're suffering from insomnia, it's well worth the price to grab a copy of this thesis...I used video teleconferencing traffic to verify the fractal nature of Ethernet traffic, so that real statistical models could be used (rather than assumed) for constructing buffers on switches at the edge of ATM clouds.

Anyway, this was really "back in the day" (re: 1996). I was using Java (which, with the help of O'Reilly, I had taught myself) to write an SNMP application to poll Cisco routers for traffic data. I then used MATLAB to perform statistical analysis tests on the data to determine its "burstiness" and verify its fractal nature. The video conferencing came in because it allowed me to generate data...I could sit in front of one camera, across the room from the other camera, and have measurable traffic generated across the network. At the time, Notepad was my Java IDE...this was before IBM's Java IDE came out (I saw my first copy at the end of July, 1997).

Something new from ISC
There were a couple of items I found interesting on the SANS Internet Storm Center handler's diary recently.

One was this item about malware that uses a commercial tool to detect the existence of a virtual machine. Very interesting.

Another is this writeup on the FreeVideo Player Trojan, submitted by Brian Eckman (the writeup also appears on SecGeeks and First.org, among other locations). Brian's writeup is thorough and comprehensive...I like seeing this sort of thing posted, as it's a nice break from the norm. Brian put a lot of work into the writeup, and looked at a variety of areas on the system that were affected by this malware.

The one question I have (and would have for anyone) is this statement that was made:

Further analysis shows that this appears to inject itself into running processes.

Given the rest of the writeup, there really isn't anything that specifically states how Brian determined this. I'm sure we can come up with a variety of assumptions as to how he did it, or how we would go about doing it, but I'd like to hear from the author. I'm going to submit that as a question to him, and I'll let you know what I find out.

Friday, November 17, 2006

How do you do that voodoo that you do?

After reading through the SecuriTeam documents, and then reading this SANS ISC post, I have to admit, incident response can be extremely complex. This is particularly true if the bad guys catch you with your pants down (figuratively, or hey, even literally). From the SecuriTeam write-ups, we get an insight into how systems are targeted...via Google searches (with books such as Google Hacking from Johnny Long, there's really no big mystery to this) and even freely available vulnerability scanning tools.

In the SecuriTeam documents, the decision to conduct a live response is documented, as a "full forensic analysis" would take too long...it was evidently determined that the attackers hadn't simply defaced a web page, but were actively on the system and involved in creating mayhem. This isn't unusual...many organizations decide that affected systems cannot be taken down. However, with the Team Evil incident, there didn't seem to be any volatile data collected or analyzed. It appears that because a web page was defaced and some directories created, that the investigation and the IR team's reactions focused solely on the web application(s).

One thing I took away from the second SecuriTeam document was that there were...what? Eight attacks?

The post from the Internet Storm Center doesn't seem to make things any easier, does it? After all, not only does the malware use multiple downloaders to dump a "zoo of malware" (a keylogger, BHOs, even a component that disables the Security Center) on the system, but it also reports the infected system's location so it can be tracked via Google Maps. While this is an interesting new capability, the real issue IMHO is the stuff that's dumped on the system. How do you track it all? If you really don't know a great deal about your systems (hosts and networks) and the applications that you're running, and you haven't taken any real steps to protect those systems, I'd have to say that the ISC recommendation to reinstall is the only option.

If you really think about it, though, maybe this is the intention. If you've been looking at some of the various conference presentations over the past couple of years, there have been some anti-forensics and "how to use the analyst's training against them" presentations. One way to look at these attacks is that the attacker is just being greedy. Another is to think that the intention of dumping multiple programs (keyloggers, BHOs, etc.) on the system is that IF the sysadmin detects any one of them, they'll reinstall the system...and it's likely that the system can be easily reinfected.

So, on the face of things, we've got a denial of service attack...compromise and infect a critical-use system, and it's so important that the sysadmin takes it offline for a reinstall. This also lends itself to persistence...the reinstalled system may have the same or additional vulnerabilities, so the attacker can reinfect the system once it's back up.

Of course, I can't help but think that this could also be a distraction, a little bit of misdirection...get the IT staff looking one direction while the attacker is siphoning data off of another system.

So I guess I have to concede the point that reinstallation is the thing to do. If you (a) don't really know your infrastructure (hosts or network) that well, (b) have no idea where critical (processing or storing sensitive personal information) applications are, (c) haven't really taken any defense-in-depth measures, (d) have no clue what to do, and (e) don't have an IR plan that is supported and endorsed by senior management, I guess there really isn't any other option.

Thoughts?

Thursday, November 16, 2006

Case Study from SecuriTeam

Everyone loves a case study! Don't we love to read when someone posts what they did when they responded to an incident? It's great, as it gives us a view into what others are doing. SecuriTeam has not one, but two such posts...well, one post that contains two case studies.

The first is a PDF of the incident write-up for a compromise that occurred in July 2006. The second is a follow-up, and both are very interesting.

I'm going to take a process-oriented view of things. Please don't ask me if someone who defaces web pages and knows some PHP code qualifies as a "cyber-terrorist". I don't want to get mixed up in semantics.

This whole thing seems to have started on or about 11 July 2006 with a web page defacement. Evidently, the bad guys gained access to a vulnerable system (found via Google) and installed a PHP 'shell'. Page 8 of the write-up specifies the need for a "real-time forensic analysis", as the good guys not only needed to "stop the bleeding", but had to counter the bad guys, who were deploying their own counter-measures and attacking the good guys, as well...the authors describe it as a "fight between the attackers...and the incident response personnel." Oddly, pages 9 - 11 include the 12 steps that the IR personnel went through, yet nothing is mentioned about having to battle an attack. The document continues into "Attack Methodology" and "Conclusions" with no mention of IR personnel being hammered by attacks from the bad guys.

Another interesting point is that the authors mention that the user-agent found in some web logs included the term "SIMBAR", and concluded that this meant that the attacker's system was infected with spyware. While I'm not discounting this, I do find it odd that no supporting information was provided. I did a search and found references to adware, but also a reference to The Sims game.

Continuing along into the follow-up report, "SIMBAR" is mentioned again, and this time the authors seem to believe that this user-agent indicates that the same computer was used in multiple attacks. IMHO, this is perhaps dubious at best, particularly if it is indicative of adware...adware can be fairly widespread. Also, beyond that, I'm not sure how significant this information really is to the overall incident.

Overall, both write-ups seem to indicate that the attackers used known exploits to gain access to systems, and used relatively well-known and easily available tools to gain their foothold. It also seems that the authors made several unsupported assumptions. At first, the write-ups are an interesting read, but when you really try to dig into them and get some meat, IMHO, it just isn't there.

But hey, who am I? What do you think?

Addendum 18 Nov: Besides the comments here, there have been corresponding posts over on TaoSecurity. I just want to say that I'm not trying to grill the authors of the documents, nor am I trying to point out every little detail that I think was mishandled...not at all. What I am saying is that I think the authors did a great job, but there are things that could have been handled a bit better. Richard Bejtlich mentions "focus and rigor" in one of his recent posts...just because something is done by volunteers, does that mean that quality should suffer?

Van Hensing Rides...uh, Blogs Again!

Robert Hensing, formerly of the MS PSS Security team and now with SWI, has started blogging again! This is great news, because Robert brings a wealth of experience and knowledge to the field of incident response and forensics. Also, given his resources, he brings a great deal of insight and detail to the things he investigates and shares with us.

For example, check out his most recent post involving web surfing activity found in the "Default User" directory (he also addressed something similar in his "Anatomy of a WINS Hack" post). What's interesting about this is that if you are using ProDiscover to examine an image, and you populate the Internet History View and find entries for "Default User", you've struck gold!

Robert tends to blog on a wide range of subjects, and his entries about Windows issues tend to be more technical and comprehensive than what you'll find on most sites. So, check it out and leave a comment...

Monday, November 13, 2006

Trends in Digital Forensics, and news

I ran across a Dr. Dobbs article of the same name as the title of this post...very interesting. The subtitle is Are "live" investigations the trend we are heading towards?

An interesting quote from the article:
Thus the new trend in digital forensics is to use the corporate network to immediately respond to incidents.

Hhhhmmm...this sounds pretty definitive.

My thoughts on the article are two-fold. First, I have to ask...is this, in fact, the trend (or at least a coming trend that we're seeing more of)? Are IT and IR teams using tools like those mentioned in the article (EnCase, WetStone's LiveWire - I have to wonder why the author doesn't mention ProDiscover) to perform incident response via the network? If so, how effective are these efforts?

Overall, the author discusses "live investigations" (which is cool, because my next book covers that, in part) but I have to wonder how much this is being done, and how effective it is.

Now for the "news"...there's a new CyberSpeak podcast out; I just downloaded it and still have to listen to it. I took a look at the show notes (which have moved) and saw that Jesse Kornblum is again being interviewed. Very cool. One of the news items I picked up from the show notes was about a guy in the UK who took over young girls' computers and extorted them into sending him dirty pictures of themselves. The scary thing about the article isn't things like this:

...used some of the most advanced computer programmes seen by police to hack into their PCs...

One of the youngsters said his level of expertise and his power over her PC reminded her of the cult science fiction film Matrix.

Well, okay...I take it back...maybe those excerpts do represent some scary things about the article..."scary" in the sense that an email-borne Trojan of some kind is equated to the level of technology seen in The Matrix. Or maybe it's the fact that, according to the article, these kids actually fell prey to this guy and sent the pictures, rather than notifying their parents.

Okay, I'm off to listen to the show...

Sunday, November 12, 2006

Evidence Dynamics

One question in particular that I'm seeing more and more is, can volatile data be used as evidence in court? That's a good question, and one that's not easily answered. My initial thought is that most tools (EnCase is perhaps the most popular that comes to mind) and processes that are now accepted in court were, at one time, not accepted. In fact, there was a time when computer/digital evidence was not accepted at all.

There are two things that responders are facing more and more, and those are (a) an increase in the sophistication and volume of cybercrime, and (b) an increase in instances in which systems cannot be taken down, requiring live response and/or live acquisition. Given these conditions, we should be able to develop processes by which responders can collect volatile data (keeping evidence dynamics in mind) to be used in court as "evidence".

Others have discussed this as well, to include Farmer and Venema, Eoghan Casey, and Chris LT Brown. Much like forensics in the real world, there are forces at play when dealing with live computer evidence, such as Heisenberg's Uncertainty Principle and Locard's Exchange Principle. However, if these forces are understood, and the processes are developed that address soundness and thorough documentation, then one has to ask...why can't volatile data be used in court?

Take the issue of the "Trojan Defense". There was a case in the UK where a defendant claimed that "the Trojan was responsible, not me", and even though no evidence of a Trojan was found within the image of his hard drive, he was acquitted. Perhaps collecting volatile data, to include the contents of physical memory, at the time of seizure would have ruled out memory-resident malware as well.

My thoughts are that it all comes down to procedure and documentation. We can no longer brush off the questions of documentation as irrelevant, as they're more important than ever. One of the great things about tools such as the Forensic Server Project is that they're essentially self-documenting...not only does the server component maintain a log of activity, but the FRU media (CD) can be used to clearly show what actions the responder took.

So what do you think?

Additional Resources
Evidence Dynamics: Locard's Exchange Principle & Crime Reconstruction
Computer Evidence: Collection & Preservation
HTCIA 2005 Live Investigations Presentation (PDF)
The Latest in Live Remote Forensics Examinations (PDF)
Legal Evidence Collection
Daubert Standard (1, 2)

Saturday, November 11, 2006

OMG! So says Chicken Little...

I got wind of a post over on the SecurityFocus Forensics list...yes, I still drop by there from time to time, don't hate me for that. Someone posted a couple of days ago about a Registry key in Vista, specifically, a value named "NtfsDisableLastAccessUpdate". Actually, the post was a link to/excerpt from the Filing Cabinet blog. The idea is that this value allows the admin to optimize a Windows system: if the setting is enabled (you can do it through fsutil on XP and 2003), file last access times won't be updated when files are accessed, eliminating that "extra" disk I/O. This is really only an issue on high-volume file servers.

The SF post then ends with "In case this is of value in the forensics of Vista".

Well, not to put too fine a point on it, but "duh!" Even though NTFS last access times are known to have a granularity of about an hour, disabling the ability to track such things takes away one of the tools used by forensic investigators. And even though this functionality is enabled by default on Vista (I'm looking at RC1), it's just one tool. For example, Vista still tracks the user's access to programs via the shell in the UserAssist keys.
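As a quick aside, the value names under the UserAssist\...\Count keys are ROT-13 "encrypted", so checking them is no big deal...in Perl, decoding amounts to a one-line translation (the path below is just a made-up example):

# UserAssist value names are ROT-13 "encrypted"; decoding them in
# Perl is a single tr/// statement
my $name = "HRZR_EHACNGU:P:\\JVAQBJF\\ertrqvg.rkr";
$name =~ tr/N-ZA-Mn-za-m/A-Za-z/;
print $name,"\n";    # prints UEME_RUNPATH:C:\WINDOWS\regedit.exe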

In my new book, I recommend checking this value during IR activities, and I also recommend that if forensic investigators find this functionality enabled, they check the LastWrite time on the Registry key to get a date that may correlate to when the change was made to the system. The change can be made through RegEdit or the fsutil application. Because fsutil is not a GUI accessed through the shell, its use won't be tracked via the UserAssist key (although on XP, you may see a reference to fsutil.exe in the Prefetch folder). However, a reboot is required for the change to take effect, so the last access time on the fsutil.exe file (in the system32 directory) may give you an idea of when the change was made, and you may then be able to determine via other logs who made the modification.
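For reference, setting and then checking the value from the command line looks something like this (the value itself lives under the FileSystem key):

C:\>fsutil behavior set disablelastaccess 1

C:\>fsutil behavior query disablelastaccess
disablelastaccess = 1

C:\>reg query HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v NtfsDisableLastAccessUpdate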

Friday, November 10, 2006

Parsing Raw Registry Files

I blogged about this Perl module before, but I've actually started using it myself. The Parse::Win32Registry module is a pretty great tool. Someone asked a question recently about getting some information about when a user account was created, and all that the original poster (OP) had available was an image. There were some good suggestions about getting the creation time of the user's folder in the "Documents and Settings" folder, with the caveat that that isn't really the date/time that the user account was created, but rather the date/time that someone first logged into that account.

Another option is to get the date/time that the password was last reset. If the password hasn't been changed, then the time it was "reset" correlates to the date/time that the account was created (see the HelpAssistant and Support_ accounts below).

Using info from AccessData's Registry Viewer and the source code for Petter Nordahl-Hagen's chntpw utility, I put together some code to parse out quite a bit of info about user accounts from a raw SAM file. For example:

Name : Administrator
Comment: Built-in account for administering the computer/domain
Last Login = Never
Pwd Reset = Tue Aug 17 20:31:47 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Normal user account
Number of logins = 0

Name : Guest
Comment: Built-in account for guest access to the computer/domain
Last Login = Never
Pwd Reset = Never
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Password not required
--> Normal user account
Number of logins = 0

Name : HelpAssistant (Remote Desktop Help Assistant Account)
Comment: Account for Providing Remote Assistance
Last Login = Never
Pwd Reset = Wed Aug 18 00:37:19 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Normal user account
Number of logins = 0

Name : SUPPORT_388945a0 (CN=Microsoft Corporation,L=Redmond,S=Washington,C=US)
Comment: This is a vendor's account for the Help and Support Service
Last Login = Never
Pwd Reset = Wed Aug 18 00:39:27 2004 (UTC)
Pwd Fail = Never
--> Password does not expire
--> Account Disabled
--> Normal user account
Number of logins = 0

Name : Harlan
Last Login = Mon Sep 26 23:37:51 2005 (UTC)
Pwd Reset = Wed Aug 18 00:49:42 2004 (UTC)
Pwd Fail = Mon Sep 26 23:37:47 2005 (UTC)
--> Password does not expire
--> Normal user account
Number of logins = 35

Pretty cool. This is a little more info than you'd see if you were using RV, and I haven't even started to pull apart the global account information yet. The really neat thing about this is that the Parse::Win32Registry module doesn't use Windows API calls, so you can install the module on Linux and Mac OS X, as well.
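For those interested in trying this themselves, the heart of the script looks something like the following...just a sketch, not the whole thing, with the F value offsets pulled from the chntpw source (the account name and comment actually live in the V value, which I've left out for brevity):

#! c:\perl\bin\perl.exe
# Sketch: parse user account info from a raw SAM file
use strict;
use Parse::Win32Registry;

my $samfile = shift || die "Usage: sam.pl <SAM file>\n";
my $reg   = Parse::Win32Registry->new($samfile);
my $root  = $reg->get_root_key;
my $users = $root->get_subkey("SAM\\Domains\\Account\\Users");

# Convert a 64-bit FILETIME (two little-endian DWORDs) to Unix epoch
sub getTime {
	my ($lo,$hi) = @_;
	return 0 if ($lo == 0 && $hi == 0);
	return ($hi*(2**32) + $lo)/10000000 - 11644473600;
}

foreach my $u ($users->get_list_of_subkeys) {
	next unless ($u->get_name =~ /^0000/);  # RID keys; skips "Names"
	my $f = $u->get_value("F")->get_data;
	my $login = getTime(unpack("VV",substr($f,8,8)));   # last login
	my $reset = getTime(unpack("VV",substr($f,24,8)));  # pwd reset
	my $fail  = getTime(unpack("VV",substr($f,40,8)));  # last pwd fail
	my $count = unpack("v",substr($f,66,2));            # login count
	printf "RID %4d: logins = %d; last login = %s\n",
		hex($u->get_name),$count,
		($login) ? scalar gmtime($login)." (UTC)" : "Never";
}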

Monday, November 06, 2006

ComputerWorld: Undisclosed Flaws Undermine IT Defenses

I ran across this article today. All I can say is...sad.

Undisclosed flaws may undermine IT defenses...but that presupposes that the organization has some kind of "IT defenses" in place. Take the first sentence of the article:

Attacks targeting software vulnerabilities that haven’t been publicly disclosed pose a silent and growing problem for corporate IT.

Silent? The vulnerabilities themselves may be "silent", as they're just sitting there. But I have to ask...if someone attempts to exploit the vulnerability, is that silent, and if so, why?

One of my favorite examples of defense (the military analogies just don't work today, as there are simply more and more folks in the IT realm who don't have any military experience) is from the first Mission: Impossible movie. Cruise's character, Ethan Hunt, has made his way back to a safehouse after the catastrophic failure of a mission. As he approaches the top of the stairs, he removes his jacket, unscrews the lightbulb from the socket, crushes the bulb inside his jacket, and scatters the shards on the floor as he walks backward to the door of his room. How is this defensive? If someone approaches the room, the hallway is dark and they end up stepping on the shards (which they can't see) and making noise...announcing their presence.

The problem is that there are a great number of organizations out there that have very little in the way of protection, defense mechanisms, etc. Defense in depth includes not only putting a firewall in place, but also patching/configuring systems, monitoring, etc. Many organizations have some of these in place, to varying degrees, and the ones that have more/most of them generally do not appear in the press.

This also highlights another issue near and dear to my heart...incident response. The current "state of IR" within most organizations is abysmal. Incidents get ignored, or the default reaction is to wipe the "victim" system and reinstall the operating system and data. This issue, as well as that of preparedness, can only be adequately addressed when senior management understands that IT and IT security are business processes, rather than necessary expenses of doing business. IT and in particular IT security/infosec can be business enablers, rather than drains to the bottom line, IF they are recognized as such by senior management. If organizations dedicated resources to IT and infosec the way they did to payroll, billing and collections, HR, etc., there would actually be IT defenses in place.

An interesting quote from a Gartner analyst is that much of the confusion about what constitutes a zero-day threat stems from the manner in which some security vendors have used the term when pitching their products. Remember, folks, it's all market-speak. Keep that in mind when you read/listen to "security vendors".

Another quote from the same analyst: But the reality is that most organizations "aren't experiencing pain" from less-than-zero-day attacks. I believe that this is only partly true, due to the fact that many incidents simply go undetected. If an attack or incident is limited in scope and doesn't do anything to get the IT staff's attention, then it's likely that it simply isn't noticed, or if it is, it's simply ignored.

Friday, November 03, 2006

DoD CyberCrime 2007

Right now, I'm scheduled to present on Windows Memory Analysis at the DoD CyberCrime 2007 conference. I'll be presenting early on Thu morning. This presentation will be somewhat expanded over the one I gave at GMU2006, and even though I've submitted the presentation for approval, I'm hoping that between my day job and writing my book, I can put some additional goodies together.

Going over the agenda for DC3, there seem to be some really good presentations, from Wed through Fri. I'd love to see/hear what Bill Harback is talking about regarding the Registry, and I see that Charles Garzoni will be talking about hash sets. Also, Alden Dima is talking about NTFS Alternate Data Streams, something near and dear to my heart.

Interestingly, there are a total of (I count) four presentations about Windows memory. Tim Vidas and Tim Goldsmith are presenting, as is Jesse Kornblum (title: Recovering Executables from Windows Memory Images). This topic must be getting pretty popular. I hope to have the opportunity to meet with many of the folks presenting and attending the conference.

Thursday, November 02, 2006

Forensics Blogs/Podcasts

Does anyone know of any other purely forensics blogs and/or podcasts out there? I know that there're several neat, not-specifically-forensics sites, but I'm looking for just forensics; hardware or software, Windows, Mac, Linux, Atari, Commodore...anything specifically related to computer forensics.

Updates - Code and such

I updated some code recently and thought I'd post a quick note about it...

First off, if you're using kern.pl from the OS Detection package on my SF.net site, I've updated the code to actually tell you which OS is running. The output is a little cleaner, and provides arguments so you can get a verbose output if you need it. Also, the script has been renamed to "osid.pl", replacing kern.pl.

In its simplest form, you simply pass the name of the RAM dump file to the script, and get the output:

C:\memory>osid.pl d:\hacking\boomer-win2003.img
2003

Or, you could use the arguments (use '-h' to see the syntax info) to get more info:

C:\memory>osid.pl -f d:\hacking\boomer-win2003.img -v
OS : 2003
Product : Microsoft® Windows® Operating System ver 5.2.3790.0

I posted yesterday about the updates to the disk documenting script, which I thought was pretty neat. Thanks again to Jon Evans for pointing out the Win32_PhysicalMedia class for getting the serial number. My thought is that something like this is extremely useful, and easy to use (i.e., just cut-and-paste the output into your acquisition worksheet after you've imaged a drive). The "Signature" helps you tie the drive to the image because that's the same value listed in the first DWORD of some of the MountedDevices entries.

Shifting gears, the e-Evidence site was updated yesterday. Every month, Christine puts up new stuff and it's great to see what's out there. For example, I wonder how many folks have taken the time to reflect on their first incident the way Brian did. Also, check out the presentation about the anatomy of a hard drive. There's always some interesting stuff on this site.

Wednesday, November 01, 2006

Documenting Disk Information

One of the things that computer forensics nerds (present company included, of course) need to do a lot of is documentation...something they, uh, I mean "we" are notoriously bad at. Come on, face it, most nerds don't like to document what they do...it just gets in the way of doing the fun stuff.

What about documenting information about disks you image? You know...hook up a write-blocker, fire up the acquisition software...what about documenting the disk information for your acquisition worksheet, or your chain-of-custody forms?

One way to do this is with DiskID. Another way is to use WMI. By correlating data from the Win32_DiskDrive, Win32_DiskPartition, and (thanks to Jon Evans for pointing this one out) Win32_PhysicalMedia classes, you can get the same info as with other tools (on Windows). For example, correlating the data from the three classes gives me the following - first, for my local hard drive:

DeviceID : \\.\PHYSICALDRIVE0
Model : ST910021AS
Interface : IDE
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x41ab2316
Serial No : 3MH0B9G3

\\.\PHYSICALDRIVE0 Partition Info :
Disk #0, Partition #0
Installable File System
Bootable
Boot Partition
Primary Partition

Disk #0, Partition #1
Extended w/Extended Int 13

Then for a USB thumb drive (gelded of the U3 utilities):

DeviceID : \\.\PHYSICALDRIVE2
Model : Best Buy Geek Squad U3 USB Device
Interface : USB
Media : Removable media other than floppy
Capabilities :
Random Access
Supports Writing
Supports Removable Media
Signature : 0x0
Serial No :

\\.\PHYSICALDRIVE2 Partition Info :
Disk #2, Partition #0
Win95 w/Extended Int 13
Bootable
Boot Partition
Primary Partition

See? Pretty cool. In some cases, you won't get a serial number, but for the ones I tested, I wasn't able to get a serial number via DiskID32, either.

Now, the cool thing is the "Signature". This is the DWORD value located at offset 0x1b8 in the MBR of a hard drive, and appears in the MountedDevices key entry for that drive (the data entry should be 12 bytes in length).
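Checking that value yourself is simple enough in Perl...a little sketch that reads the signature from the first sector of an image file:

# Read the disk signature (the DWORD at offset 0x1b8) from the
# first sector of an image file
my $mbr;
open(FH,"<",shift) || die "Could not open image: $!\n";
binmode(FH);
read(FH,$mbr,512);
printf "Signature : 0x%08x\n",unpack("V",substr($mbr,0x1b8,4));
close(FH);
# The 12 bytes of data in a \DosDevices\ MountedDevices entry hold
# this same signature in the first DWORD, followed by the partition's
# starting offset as a 64-bit value:
#   my ($sig,$off_lo,$off_hi) = unpack("VVV",$data);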

Using the Win32_LogicalDisk class, you can get the following:

Drive  Type       File System  Path  Free Space
-----  ----       -----------  ----  ----------
C:\    Fixed      NTFS               21.52 GB
D:\    Fixed      NTFS               41.99 GB
E:\    CD-ROM                        0.00
F:\    Removable  FAT                846.70 MB
G:\    Fixed      NTFS               46.91 GB


Not bad, eh? And the cool thing is that not only can these tools (which were written in Perl and can be 'compiled' using Perl2Exe) be used with the FSP, but they can also be used by administrators to query systems remotely.
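To give you an idea of what's going on under the hood, here's a trimmed-down sketch of the approach (not the complete di.pl)...the correlation works because Win32_PhysicalMedia's Tag property matches Win32_DiskDrive's DeviceID:

#! c:\perl\bin\perl.exe
# Sketch: correlate Win32_DiskDrive and Win32_PhysicalMedia via WMI
use strict;
use Win32::OLE qw(in);

my $server = ".";  # local system; use a machine name to query remotely
my $wmi = Win32::OLE->GetObject("winmgmts:{impersonationLevel=impersonate}!\\\\$server\\root\\cimv2")
	|| die "Cannot connect to WMI: ".Win32::OLE->LastError()."\n";

# Win32_PhysicalMedia's Tag matches Win32_DiskDrive's DeviceID
my %serial;
foreach my $media (in $wmi->InstancesOf("Win32_PhysicalMedia")) {
	$serial{$media->{Tag}} = $media->{SerialNumber};
}

foreach my $disk (in $wmi->InstancesOf("Win32_DiskDrive")) {
	print  "DeviceID  : ".$disk->{DeviceID}."\n";
	print  "Model     : ".$disk->{Model}."\n";
	print  "Interface : ".$disk->{InterfaceType}."\n";
	print  "Media     : ".$disk->{MediaType}."\n";
	printf "Signature : 0x%08x\n",$disk->{Signature};
	print  "Serial No : ".$serial{$disk->{DeviceID}}."\n\n";
}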

Addendum, 3 Nov: I hooked up an old HDD to a write blocker today and ran the tools against it...here's what I got from di.pl:

DeviceID : \\.\PHYSICALDRIVE1
Model : Tableau FireWire-to-IDE IEEE 1394 SBP2 Device
Interface : 1394
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x210f210e
Serial No : 01cc0e00003105e0

\\.\PHYSICALDRIVE1 Partition Info :
Disk #1, Partition #0
Installable File System
Primary Partition

I verified the signature by previewing the drive in ProDiscover, and locating the DWORD at offset 0x1b8...it checked out. The Windows Device Manager shows the Tableau write-blocker to be the HDD controller.

Here's what I got from ldi.pl:

F:\ Fixed NTFS 9.39 GB

The Disk Manager also reports an NTFS partition on the drive. Still looks pretty useful to me. DiskID32 reported some weird info, probably due to the write-blocker.