Saturday, March 31, 2007

Book updates - NukeOnDelete and SysProt AntiRootkit

Now that my book is available, I'll be posting things like updates and errata here in the blog.

As I mentioned before, one of the chapters in the book addresses alternative analysis methods, and mentions the use of Mount Image Pro to mount the image as a read-only file system. I see this as a great opportunity for analysis for several different reasons...malware scanning, etc.

One of the things that didn't make it into the Registry Analysis chapter was a Registry key that controls the Recycle Bin. The following key is the one in question:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\BitBucket
If there is a value beneath this key named "NukeOnDelete" (DWORD) and the value is set to "1", then the Recycle Bin on the system is effectively disabled. This value isn't there by default; it must be added.
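Once the values beneath this key have been extracted (with reg.exe on a live system, or an offline hive parser against an acquired image), the check itself is trivial. A minimal sketch, assuming the values have already been pulled into a Python dict (the function name is mine):

```python
def recycle_bin_disabled(bitbucket_values):
    """Interpret the NukeOnDelete value from the Explorer BitBucket key.

    bitbucket_values: dict of value name -> data, as extracted from the
    key in the HKEY_LOCAL_MACHINE hive.
    """
    # The value is absent by default; only an explicit DWORD of 1
    # effectively disables the Recycle Bin system-wide.
    return bitbucket_values.get("NukeOnDelete") == 1

print(recycle_bin_disabled({"NukeOnDelete": 1}))  # True
print(recycle_bin_disabled({}))                   # False (the default)
```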

So how would we use this information in our analysis? Well, first off, just the fact that the value was added may indicate that (a) the user is sophisticated, or that (b) the user has used one of the privacy tools available on the Internet (we can correlate this to other data, as well).

Next, notice that the key is in the HKEY_LOCAL_MACHINE hive, meaning that it applies to the entire system, rather than just a single user. This doesn't mean that you should expect to see an empty Recycle Bin, with a really small INFO2 (my book includes code for parsing the INFO2 file) file, for each user (in the case of more than one user). Had the value been set after a user had sent files to their Recycle Bin, that INFO2 file should still contain the entries. We can then correlate those dates and times with the LastWrite time on the Registry key to determine approximately when the key was modified.
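Both the INFO2 deletion times and the Registry key LastWrite time are 64-bit FILETIME values (100-nanosecond intervals since January 1, 1601 UTC), so correlating them comes down to converting each to something human-readable. A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(ft):
    """Convert a 64-bit Windows FILETIME (100-ns intervals since
    1601-01-01 UTC) into a Python datetime object."""
    epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(microseconds=ft // 10)

# The Unix epoch expressed as a FILETIME:
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```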

Okay, so does disabling the Recycle Bin slow down a forensic examiner? Not any more! We all know that deleted files aren't really gone.

Another update that I need to add for Chapter 7, Rootkits and Rootkit Detection is SysProt AntiRootkit. This one wasn't out before the book/chapter went to production, so I didn't have time to add it, but it's definitely worth looking at and adding to your toolkit.

One of the core concepts of my book is that by understanding how artifacts are created and modified, we see that the absence of an artifact where we expect to see one is, in itself, an artifact. Checking for this value and then correlating what you find to other findings on the system will provide clues not only to what happened on the system and when, but also to the technical sophistication of the user.

One final word...the book includes a DVD that contains an updated copy of my Registry spreadsheet, which contains several worksheets that list Registry keys and values of interest, why they are interesting for forensic analysts, and references to their function (where applicable). Be sure to add this one to your copy of the spreadsheet.

Friday, March 30, 2007

"Windows Forensic Analysis" is now available!

My new book, Windows Forensic Analysis, is now available! Go here to purchase the ebook now. Go here for Amazon pre-order.

While I was writing the book, I reached out to the community and tried to get input on the types of things others would like to see in a book like this. A lot of the responses I got back didn't have anything to do with Windows, and some didn't even have anything to do with forensic analysis. Since I've finished the book, I've been able to get my head out of the trenches and think about things for a bit, and I already see avenues for improvement and additions to the book, or at least to the material itself.

For example, by combining the material in the chapters on Windows memory analysis, Registry analysis, and file analysis, and then stirring in a little imagination, you may be able to come up with some effective methods to disprove the "Trojan Defense", as well as some counter-anti-forensics techniques.

Please feel free to send me any comments, thoughts, questions, or criticisms you have about the book. Any additional information about the material in the book will most likely be posted here first.

Teaching IR/CF

My recent post on mounting a dd image got me thinking that there are plenty of freeware tools available for performing incident response as well as computer forensic analysis. Given this, how cool would it be to teach high school kids computer forensic techniques? Or, if you added something like this to the curriculum, you could easily add another course or two to a community college or undergraduate degree program.

Let's say that all you have available is a couple of systems. You can easily set up a "lab" with these systems, and then with minimal cost, add some external drives for storing images. There's plenty of free software available for acquiring and analyzing images, even if you're restricted to a particular platform.

I think that one of the benefits of this is that at the end of the program you'd have folks who had experience with different situations, different tools, and actually had to think through their approach to collection and analysis, not simply clicked a button.

Wednesday, March 28, 2007

Change Analysis Diagnostic Tool for Windows XP

Microsoft recently released this KB article, titled The Change Analysis Diagnostic Tool for Windows XP is available.

So, why is this interesting? Bear with me for just a moment. Reading the article, we see that the tool looks at programs, OS components, BHOs, drivers, ActiveX controls, and ASEPs (MS's term for autostart locations). Okay, so not entirely interesting, per se...there are tools that already do this, I know. However, the really interesting part is this:

The Change Analysis Diagnostic tool queries the System Restore data for the number of days that the user selects. The tool finds the changes to the registry and to the file system that are relevant to these categories. Then, the tool presents the changes together with contextual information.

Is that sweet or what? Tools like this generally require a baseline, such as when we're performing dynamic malware analysis (ie, snapshot the system, install malware, snapshot the system again, and compare the two). In this case, MS is using the Restore Points as the snapshots. Makes me glad that I took the time to address Restore Point analysis in my book!
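The baseline-and-compare approach MS is automating here boils down to diffing two snapshots. As a rough sketch (the snapshot representation, a simple dict of Registry values or file hashes keyed by path, is my assumption):

```python
def diff_snapshots(before, after):
    """Compare two system snapshots (e.g. Registry values or file hashes
    keyed by path) and report what was added, removed, or modified."""
    added    = {k: after[k] for k in after.keys() - before.keys()}
    removed  = {k: before[k] for k in before.keys() - after.keys()}
    modified = {k: (before[k], after[k])
                for k in before.keys() & after.keys()
                if before[k] != after[k]}
    return added, removed, modified

before = {"Run\\Updater": "updater.exe"}
after  = {"Run\\Updater": "evil.exe", "Services\\badsvc": "bad.sys"}
added, removed, modified = diff_snapshots(before, after)
print(added)     # {'Services\\badsvc': 'bad.sys'}
print(modified)  # {'Run\\Updater': ('updater.exe', 'evil.exe')}
```

With Restore Points as the "before" snapshots, the same comparison works against a system's own history rather than a lab baseline.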

Why current IR models don't work, part deux

As a follow-up to my previous post, I thought of another way to (hopefully) shed some light on this issue.

Sometimes, we (forensics weenies such as myself) like to illustrate technical points with real/analog world analogies. As some of you may know, one of my favorites is the stabbing victim you find in an alleyway, and call 911...the EMTs (incident responders) arrive, triage the victim, stabilize and move him, etc. The cops are ultimately able to locate the perpetrator of the crime, and he can be convicted, even given the fact that the victim was moved, etc. Using the traditional, purist CF model, a doctor would have to kill the victim on the scene and perform an autopsy in order to catch the bad guy.

Let's take a look at a similar analogy for IR, using real-world crime as an example. Say that an office building is a network infrastructure, and within that building are file cabinets that contain sensitive data. The office building has doors, windows, elevators, roof access, etc., just like a normal office building...these are akin to access points to the network infrastructure.

Let's say that during the night, someone breaks into the building and attempts to steal some of the sensitive data kept there. What happens in the real/analog world? Usually, if there were no alarms, then someone notices something when they get into work the next day (or you hope that they do, anyway) and alerts security, who calls the local police (ie, first responders). Access to the area is immediately restricted, and an "incident response" plan of some kind kicks in...eventually a report is produced, and the perpetrators may be caught. If the folks who owned the building or occupied the office spaces actually made it difficult for someone to break in (by locking doors, using additional levels of access control, etc.) or monitored the area (video cameras, etc.) then there would be more evidence available for the police to review, and it would be more likely that the bad guy would be caught.

So, what does this have to do with anything? Well, here are some examples of what usually happens in today's digital realm, mapped to the real world:

1. Many buildings have no restrictions to access...doors (front, loading dock, etc.) are all open and unmonitored. First and second floor windows are open for convenience, to make it easier for employees (users) to come and go (yes, I've actually seen employees climb out first floor windows to avoid being seen leaving by their boss). There is usually a back door that's propped open. Access via adjacent buildings and even the roof is very often unfettered.

2. Bad guys get into the building at all hours, even during regular working hours. They come in through the front door, loading dock entrance, etc. They aren't challenged or even noticed. They lock/unlock doors, clog toilets, and generally make a nuisance of themselves. Many times, they are there for several months, taking files, sitting at employees' workstations, etc., and no one seems to notice or even respond.

3. When someone does notice that a bad guy is or has been there, the usual approach is that one of the employees does not alert anyone, and tries to investigate the situation themselves. As they have no training in this sort of thing (or did receive training, but have not used it, or been required to keep that training current), their investigation misses a lot of very basic stuff, like footprints, fingerprints, etc., and a lot of obvious stuff (contents of file cabinets and storage closets thrown all over the room, etc.). This may go on for weeks or even months, and when someone finally does call the police, there's no record of who did what, or of what was found and when, and broken doors and windows may even have been replaced.

Does any of this make any sense in the real world? Probably not. So why do we see this so often in the digital realm?

So, I then have to ask, is there any reason why IT staffs cannot be trained in first response, providing a tier 1 response capability? Basically, tier 1 IR is akin to advanced troubleshooting. Corporations would pay a nominal fee to have experts come on-site and train their IT staff in basic IR procedures, raising the level of their awareness and expanding their skill sets. The IT staff would realize the benefit of things like network diagrams, troubleshooting and IR procedures, etc (which, by the way, are required by most regulatory bodies, including FISMA and Visa PCI). They would then be able to handle first level/tier 1 response. Then, if additional assistance is required, reach out to those experts for tier 2 and/or 3 response, but only when needed. The flip side is that corporations end up paying much, much more because the first responders are called for every apparent "incident" that occurs; they resolve the situation, provide a report, and go back to wait for the next call...but no knowledge transfer occurs and the "victim" is no better off than they were before the "incident" occurred.

Another benefit of having that on-site, functional, hands-on training and producing things like network diagrams is that the IT staff gets to "know" their network. By working through scenarios ahead of time, network administrators learn what the important pieces of network- and host-based information are that they are interested in when investigating an incident. Sometimes just asking the question of "what systems store or process sensitive data?" during training evolutions is much more beneficial than asking that same question after someone has p0wned the box and carted that data off.

Current IR models don't work because we don't spend enough time looking at what works in the real world and mapping that same sort of mechanism into the digital realm.

Mounting a DD image

One of the things I mentioned in my new book was an alternative analysis method for performing computer forensic analysis. I specifically mentioned the use of Mount Image Pro for mounting a dd image as a read-only file structure, which opens up some areas of analysis that many may benefit from using. For example, during an intrusion case, one thing you may want to do is scan the image with AV software. This may save you a great deal of time trying to locate hacker tools by hand. Also, this is something you may want to do when you may be faced with the "Trojan Defense".

Another thought/useful option is this - we all have things that we look for every time we open an acquired image of a Windows system, and there are other things that we look for on a case-by-case basis. Most often we do this through our forensic analysis software package, such as ProDiscover or EnCase. However, even though these packages ship with scripts to do some initial data collection and parsing for us, sometimes they aren't as complete as they could be, or as we'd like them to be, and it takes forever to get scripts updated because the few folks who actually write their own scripts are busy with other things. Sometimes you may be in a rush or under pressure, and may forget something that you would normally look for. So what if we had a script or a tool that would run through an image, pulling things out for us each time...all automated, so that we wouldn't have to remember all of the different places we could look, but at the same time, it's all documented? Say, the tool would check to see if the Recycle Bin had been disabled, and then move on to parsing the INFO2 file for one user, or all users. Or, the tool would collect the audit policy from the system, check the Registry entries for the Event Logs, and then collect statistics from the Event Log files themselves, or automatically parse the Event Log files to .csv format (or both). Would that be useful?
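As a sketch of what the skeleton of such a tool might look like, here's a minimal check-runner (the check names and the INFO2 example are hypothetical; RECYCLER is where XP keeps per-user Recycle Bins on NTFS volumes):

```python
import os

def run_triage(mount_point, checks):
    """Run a list of named checks against a mounted image and return a
    simple, documented report. Each check is a callable that takes the
    mount point and returns a string of findings."""
    report = {}
    for name, check in checks:
        try:
            report[name] = check(mount_point)
        except Exception as exc:  # keep going, but note the failure
            report[name] = "FAILED: %s" % exc
    return report

# Hypothetical check: are there per-user Recycle Bin directories?
def info2_present(mount_point):
    recycler = os.path.join(mount_point, "RECYCLER")
    if not os.path.isdir(recycler):
        return "no RECYCLER directory"
    return "found: %s" % os.listdir(recycler)

report = run_triage("F:\\", [("INFO2 check", info2_present)])
print(report)
```

Each check documents itself in the report, so the "what did I look at, and when" question answers itself.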

Another use of something like this is for forensic analysis training. Mounting the image as a drive letter lets you dig into aspects of forensic analysis that, while accessible via commercial forensic analysis applications, may be somewhat easier to grasp and work with, particularly for new students, or junior members of an IR/CF team or CSIRT.

Okay, so now, how do we do this? How do we start with just a dd image, and get to a read-only drive letter (ie., F:\, G:\) so that we can point an AV scanner or some other tool at it?

To get the dd image to begin with, you can use ProDiscover, Helix, straight dd, or even FTK Imager Lite. If you're using ProDiscover, you can use the Tools -> Image Conversion Tools -> VMWare Support for DD Images... option to create the necessary .vmdk file.

Another option for creating the .vmdk file is to use LiveView. Point LiveView at the dd image, choose "Generate Config Only" in the GUI (maybe even designate the OS rather than having LiveView guess it) and you'll end up with several files, to include two .vmdk files and a snapshot. LiveView makes use of the VMWare DiskMount utility (don't forget the free VMPlayer), and even though this does not mount the image as a read-only file structure, you can set the read attribute on the image file itself (attrib +r imagefile) as a precaution. Use vmware-mount.exe to mount the snapshot (imagefile-000001.vmdk) and all of the writes will end up in the snapshot.

Another option is to look at FileDisk. From the screenshot, FileDisk appears to have a read-only (/ro) option. I haven't tried this one yet...installation requires a reboot for the driver to be installed. However, FileDisk will also let you mount ISOs as CDs using the "/cd" switch.

Once you have your .vmdk file, take a look at VDK and VDKWin (scroll down to VDKWin). VDK gives you the .exe and .sys file for mounting image files as read-only file structures (according to the credits, VDK owes a great deal to FileDisk), and VDKWin gives you a GUI for managing it all. A nice thing I noticed about VDKWin is that it's simply a GUI for managing all of the command line switches in VDK. For example, let's say you image a Dell system that wasn't reformatted before being installed, and it has one of those Dell maintenance partitions at the beginning of the physical drive. VDK lets you list available partitions, and then select the one you want to mount. When you grab VDKWin, don't forget to also get the core files from the top of the page.

I did test this out using a sample image. I started with the ProDiscover solution, and launched the .vmdk file via VDKWin with no problems. I tried LiveView, and though the DiskMount (vmware-mount.exe) approach worked fine, VDKWin balked at an "unknown extent type". The extent description section of the LiveView .vmdk file had two lines, and simply deleting the second one caused VDKWin to hang. So, in addition to removing the second line, I added the size entry to the first line, in essence replicating what I found in the .vmdk file produced by ProDiscover, and then everything worked just fine.

One caveat: I already have VMWare installed on my system...I read in a Google News post somewhere that in order to use some of these tools, you may need to have VMPlayer or one of the VMWare products installed, as the tools may use some of the DLLs. Just be aware of this if this is an avenue you're going to take.

So, what we're left with is a Windows-based solution, using freeware tools, to obtain an image, and then mount that image as a read-only file structure, for analysis. Many of the tools on the DVD that comes with my book, such as SAMParse, are designed to be run against raw Registry files and are perfect for use with this methodology. Sweet!

Addendum: I have an image that was created using FTK Imager Lite, broken into 2GB chunks. I opened the image in ProDiscover using the PDS file format, and started my analysis. Last night, I used the ProDiscover capability for creating .vmdk files ("VMWare support for DD images...") to create the .vmdk file by pointing the tool at the .pds file. Then, I installed VDK and VDKWin, but not any of the other VMWare tools. After setting the read-only bit on all of the image files (attrib +r), I ran VDKWin and successfully mounted the image as the K:\ drive, in read-only mode.

Even though I use a fully licensed version of ProDiscover IR, ProDiscover Basic is free (as in beer) and includes the ability to create the .vmdk files.

Sunday, March 25, 2007

Why current IR models don't work

A while ago, a buddy of mine was on travel, leaving his family at home. He called home from the airport whilst awaiting a flight and got some interesting news. He'd winterized their house several months earlier, which included shutting off the hose bibs from inside the house and purging the water from the pipes so they didn't freeze and burst. Well, about 36 hours prior to his call, there had been a need to run water from the faucet on the back of the house, so his wife had turned the hose bib on. However, being unfamiliar with all of the pipes and everything running through the closet, she'd also accidentally turned off the gas to the house, which caused all of the pilot lights (on the gas range, in the gas fireplaces, and for the water heater) to go out. Listening to this, my friend started asking questions of his wife, many of which were answered "I don't know".

Listening to him tell this story, it became clear what his concerns were, but I wanted to hear it from him. While his wife was saying, "honey, you're going to come home from a long flight with several stops, layovers, and delays and not have any hot water to shower...sorry", he was hearing that there were possibly places where gas could be seeping into the house. Thankfully, the gas used in homes smells, so it can be detected, but if the source is in an enclosed space...you see where I'm going with this.

I thought about this for a bit while I was running the other day, and it occurred to me that this almost exactly mirrors current incident response models. I know, I know...bear with me here. Consider the wife in this story to be the CEO (hey, don't we all???), and my friend is a CISO or security admin. The kids and pets are the staff. In this case, we have an "incident"...the gas coming into the house was shut off long enough to cause the pilot lights to go out. The CEO, based on her sphere of perception, understands the issue to be one of inconvenience...lack of hot water means no hot shower. She feels that they can wait until my buddy gets home to deal with it. However, my friend sees an even bigger "threat"...not only to property (ie, his house) but more importantly, to the health and safety of his family (corporate officers failing to accurately identify critical assets).

In response to this, my friend decided to embark on a training and educational approach to changing the "corporate culture" of his family to be even more cognizant of the issues and potential impacts of such incidents. In part, this is where the story takes a turn and ends differently from what happens in business (corporate, government, etc.) organizations today.

So how does all this apply to these business organizations? Look around...or look here. It's quite a long list, but see any duplicates, either in what reportedly occurred, or in the name of the organization?

So, how do we fix this, you ask? Glad you asked!! Apparently, security overall (and not just IR) is something that is not part of the "corporate culture" of many organizations. Responding to incidents often times gets me nothing but blank stares when I ask questions about the status of systems, where systems are "located" in relation to each other, etc. That's nothing unusual.

Speaking of "corporate culture", when I was in the Marine Corps, we had a corporate culture, one that was easy to remember - "every Marine a rifleman." Basically, what that meant was that every single Marine, officer or enlisted, must be qualified in the care, feeding, and effective and deadly use of the M-16 rifle. Every Marine received training in its use, and had to go through annual requalification. The same was true for officers...up through the rank of Captain, every officer had to qualify annually with the M-16 as well as their own TO weapon, the M-9. But this wasn't all we did...this was part of the many things we did, but it was part of our corporate culture. I believe this served us well in many incidents, particularly in Iraq, where the "front lines" were often right in front of you, and it didn't matter if you were an infantry Marine or a cook or a Motor T driver.

My point is that we can talk about IR all day long, but things won't change unless someone with the ability to change the corporate culture does so. Everyone, particularly the IT staff, needs to be more security conscious, and be more familiar with the assets they're protecting. What are the critical assets of the company, as defined by the CEO? How does an IT admin's job of maintaining servers and systems relate to accomplishing the mission of protecting those assets?

Also, consider the military, where everyone's trained in "immediate actions". If an M-16 jams, every Marine knows the immediate actions to perform to get that weapon back into service. Marines on patrol know that if caught in a near ambush, then the response is to attack the source of the ambush, immediately and with maximum violence. Consider this...why can't the IT staff be trained in "immediate actions", as well? If unusual traffic is seen on the network or anomalous behavior appears on a system, there are things that the IT staff can do immediately to identify the issue. There are other things that they can do in order to quickly address the situation.

It all starts with a top-down approach to the corporate culture. From there, IT staffs need to receive training...functional, hands-on training that applies to the systems they work with every day. Going away to Linux-centric training for someone in an all- or predominantly-Windows shop is a waste of time and money. There needs to be a core, central set of knowledge and training that every IT staff member receives and is responsible for. In the Marines, every officer, be they a pilot, a communicator, or a supply officer, goes through the same training at The Basic School. From there, certain members of the IT staff should receive specific training based on their areas of responsibility, be they routers, firewalls, servers, applications, etc. They need to understand these areas and systems inside-out, in much the same way a Marine can field strip, clean, and reassemble the M-16.

So, if you're reading all this and you're not someone in the IR business, you're probably thinking, "oh, he's just saying this so that he can get our money." No, I'm saying this so that you don't lose my SSN, my credit card number, etc. Or my wife's. Or, several years from now, my daughter's or my niece's and nephew's. We see it in the media all the time, with these high profile "hacks" involving someone breaking in and stealing data, or sensitive personal information being lost. Or critical applications and systems being taken over, compromised, and p0wned. These incidents are no longer the result of joy riders on the Information Superhighway...that ended a while ago. There is a profit motive behind this...these crimes are being committed because there is money involved. There needs to be a shift in the corporate culture such that this data is better protected, and when something does happen, the response is something more than paralysis. What would you do if you had a fire in your home? Would you evacuate your family, and if the fire were small and isolated enough, would you attempt to put it out? Would you call the fire department? Dumb questions, I know...but when incidents are occurring today, many corporate cultures make it okay to ignore the situation and simply go back to sleep. Worse, the situation is recognized as something unusual, but everyone is paralyzed, and the fire department gets called only after the house has burned down.


Friday, March 16, 2007

Thoughts on Incident Response/Management

Others have recently posted their thoughts on incidents and incident response (1, 2), and since this is an IR blog, of sorts, I thought I'd throw my hat into the ring, as well.

I don't really have a list of thoughts on IR, per se...it's more a case of one thought from which all others spring. Basically, that is that organizations really need to start taking a proactive approach to Incident Response and Incident Management.

Incidents are going to happen. We can take that as a given. The days of joy-riding on the Information Superhighway are about over...there's simply too strong a profit motive these days. Why get a copy of BO2K onto a system and open and close the user's "cup holder", when you can quietly collect the contents of Protected Storage, capture keystrokes as the user logs into his online banking site, etc.

So what do you do? Well, first off, senior management needs to take this "security thing" seriously and treat it like a business process, not an ad hoc thing that you scramble to implement when you need it, and then drop it (cut the budget) when you don't. Walk around headquarters one day and look at the finance department...or marketing...or Ops. HR. All of these are business processes that you need to run a successful company. But how did you know that? Did someone tell you? Or did you suddenly see the need? Either way, people have been saying for years that you need to have security, and if you don't recognize the need simply by reading the paper every day...

At some point, someone's going to throw up their hands, throw a huge budget at security, hire a bunch of (the wrong) people, and then proclaim security a failure. It doesn't work! Sound familiar? Is that how you would run your Finance Department? How about Payroll?

This security thing is really simple. Start by looking at your assets...what are you trying to protect? Is it sensitive personal data (a la CA SB1386)? Are we talking about critical systems, applications, or data...or some combination?

Once you've identified the assets, look at the threats. But keep in mind that this is no longer siege warfare, where you watch in wonder while barbarians pound at your gates...or your firewall. We're well into what the United States Marine Corps refers to as "maneuver warfare". We're facing a battle with no defined front lines; ingress points include web browsers, rogue WAPs, etc. Have an assessment done, and if the result is "pay us a ton of money to fix it", then you may need to ask for your money back! In all infrastructures, there are things that can be done that don't cost money, but do have an inherent cost...things like cultural and political changes, changes in how you do things (ie, communal Admin/root accounts, etc.).

Train your people and hold them accountable. You're not going to hire an IR staff and keep them in a room with "break glass in case of emergency" written on the door. However, a great deal can be accomplished by training your current staff in how to manage and respond to incidents. Designate some personnel as incident responders, with a suitable manager, and get them some training. Even if they are familiar with handling some incidents, the best training is functional and hands-on. Have regular refresher training in the form of incident response exercises...maybe even coordinate that with an unannounced pen test.

In fact, here's a great way to go about the whole thing. Get your folks the training they need to be tier 1 incident responders. This means that there are some tasks that they'll be able to handle, like EMTs. Then, seek out a professional response firm, and keep them on retainer so that they're available should you call, and then can be on-scene and paid on a per-incident basis. This kind of approach is a win-win, particularly if the professional responders are the ones who are training and working with your team...they get to know and trust each other, and there's a great deal of knowledge transfer that goes on.

Where this kind of thing breaks down is when the responders are held accountable for something, and senior management isn't. Security (in general) and incident response are business processes, folks, and need to be thought of and treated that way.

BTW, one of the links in the first line of this post is to a fairly new blog: ForensicIR. Welcome aboard!

Sunday, March 11, 2007

Forensic Challenges

Whenever something new comes out, one of the things people in particular fields ask is, how will this affect us and what we do? This is especially true in our field. With the recent release of new technologies, not the least of which is Vista, lots of folks have been asking about the challenges to digital forensics these new technologies will pose.

Thinking about this, I would suggest that the challenges don't come from "new" technologies being introduced, but rather from our community's myopic point of view.

I know what you're thinking...what did he just say? Well, I'm suggesting that new technologies...increased storage capacities, increased sophistication in cybercrime, new operating systems, etc...aren't imposing the "challenges" we think they are...we are. As a community, we're limiting ourselves, and imposing these challenges on ourselves somewhat artificially.

Rather than trying to describe my reasoning, let's look at a couple of examples. First, increased storage capacity...newer, smaller hard drives with greater capacity make things like iPods and cell phones that do everything for you possible. However, this is something that the forensic community has been dealing with for some time. This is not a 'new' challenge at all. The same holds true with new technologies, like Vista. New operating systems have been coming out all the time...at one time, Windows NT 4.0 was "new" (heck, even I remember that!).

What about drive encryption? Is this particularly a "new" challenge? Encryption has been around for a while, and we have to deal with encrypted files all the time. With freeware encryption for files, and even commercial products, it's not unusual to have to deal with such things. Those of us that haven't had to deal with such things specifically need to keep some knowledge of what to do, an "SOP", if you will, in mind in case we do encounter these things.

IMHO, the real challenges to the digital forensic community are largely self-imposed. New technology doesn't necessarily impose new challenges on the community, as the introduction of "new" technologies is almost a steady-state in this industry, isn't it? DOS led to Windows 3.1 and OS/2, which led to Windows 95 and NT 3.51/4.0, etc., etc. Storage capacity has increased over time. New devices have been introduced. There's really no "challenge" in this, per se...simply wait until someone produces a product to deal with the "new" technology, and things continue as before.

It appears that the real challenge is incorporating new ways of doing things, such as live response. Now, we won't always have the opportunity to employ live response, as not all of us have the benefit of talking to the "victim" prior to them taking some action on the affected system(s), but live response is one of those things that flies in the face of the traditional (dare I say "purist") approach to computer/digital forensics. However, live response can do a great deal to help us solve some of the other perceived challenges, if we can change the mindsets of the major players in the community. From there, this mindset change will permeate the minds of others...corporate IT, lawyers, etc.

What challenges do you see?

Monday, March 05, 2007

Getting service information during IR

During his BlackHat DC presentation last week, Kevin Mandia said that the persistence method used by many malware authors seems to have shifted to Windows Services. During the presentation, he mentioned using psservice.exe from MS/SysInternals to get information about the services on a system, and said that psservice.exe doesn't show the executable image used by the service, and that you'd have to get that information from the Registry.

Well, maybe not. Kevin's a really bright guy, and very busy. There are ways to get the executable image path...using WMI for example. Writing a quick Perl script (and then compiling using Perl2Exe so that it can be used easily with the FRUC/FSP), one can get the following:

Name : wltrysvc
Display : Dell Wireless WLAN Tray Service
Start : LocalSystem
Desc : Provides automatic configuration for the 802.11 adapter using the Broadcom supplicant.
PID : 716
Path : C:\WINDOWS\System32\WLTRYSVC.EXE C:\WINDOWS\System32\bcmwltry.exe
Mode : Auto
State : Running
Status : OK
Type : Own Process

Pretty cool, eh? Path to the executable image, PID, start mode, and state. Of course, CSV output is easier to parse...and yes, this program is included on the DVD accompanying my book.
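If you want a feel for how that output is put together, here's a minimal sketch in Python (the script mentioned above was Perl; this port, its `format_service` function name, and the sample record are all illustrative, not the book's actual code). On a live Windows system you'd query the `Win32_Service` WMI class and feed each instance's properties to a formatter like this one:

```python
# Sketch of the WMI-based service enumeration described above.
# On a live Windows system, you'd run a WQL query such as
#     SELECT * FROM Win32_Service
# (via the win32com or wmi modules, or "wmic service get") and pass
# each record's properties to a formatter. The labels below mirror
# the sample output in the post; the record itself is illustrative.

def format_service(svc):
    """Render one Win32_Service-style record in the layout shown above."""
    fields = [
        ("Name",    svc.get("Name")),
        ("Display", svc.get("DisplayName")),
        ("Start",   svc.get("StartName")),   # account the service runs as
        ("Desc",    svc.get("Description")),
        ("PID",     svc.get("ProcessId")),
        ("Path",    svc.get("PathName")),    # the executable image path
        ("Mode",    svc.get("StartMode")),
        ("State",   svc.get("State")),
        ("Status",  svc.get("Status")),
    ]
    return "\n".join("%-7s: %s" % (label, value) for label, value in fields)

# Illustrative record, shaped like a Win32_Service instance:
sample = {
    "Name": "wltrysvc",
    "DisplayName": "Dell Wireless WLAN Tray Service",
    "StartName": "LocalSystem",
    "Description": "Provides automatic configuration for the 802.11 adapter",
    "ProcessId": 716,
    "PathName": r"C:\WINDOWS\System32\WLTRYSVC.EXE",
    "StartMode": "Auto",
    "State": "Running",
    "Status": "OK",
}

print(format_service(sample))
```

Swapping the `print` for a comma-joined line gets you the CSV output mentioned above, which is much friendlier to downstream parsing.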

Thursday, March 01, 2007

Book & Attending BlackHat DC

Recently, Troy Larson (bio and picture) graciously offered to review/tech edit my upcoming book, so to start, I asked the editor to send him two chapters; the one that covers volatile data collection and the one that covers Registry analysis. Here's a quote that I received from Troy regarding the Registry analysis chapter:
I really liked the registry chapter.  It is worth the price of the book alone.
Sweet! In the same email, Troy also mentioned that there wasn't as much info on Vista as there was on the other versions of Windows covered (2000, XP, 2003), but also said that this is largely due to the fact that there just isn't that much information available yet.

Speaking of the book, I found it on Amazon this morning, available for pre-order.

I spent most of yesterday at BlackHat DC and caught the first three presentations in the Forensics track...well, most of them. I caught all of Kevin Mandia's presentation, as well as part of the presentation that followed on web app forensics, and then after lunch, I went to Nick Petroni and AAron Walters' presentation on VolaTools. Unfortunately, I didn't get to stay for the entire presentation.

Kevin's presentation was very interesting, addressing the need for live response. There were several comments that Kevin made, stating the need for "fast and light" response.

IMHO, there is a need for a greater understanding of live response...not just the technical aspects of it, but how those technical aspects can be applied to and affect the business infrastructure of an organization.

Following the presentation, I took a few minutes to jot down some notes and thoughts...I started by attempting to classify incidents (think of "Agent Smith" in The Matrix, classifying humans as a "virus") in terms of the speed of response required for each. I started by defining speed of response in general terms at first ("ASAP", etc.) and then attempting to become more specific, though not to the point of minutes and seconds. Regardless of the angle I approached the problem from, the one thing that kept coming to mind was business requirements...what are the needs of the business? Sure, we've got all these great technical things we can do, and as Kevin pointed out (as did Nick and AAron), a lot of folks that he's talked to have said, "we've already got all this data (file system data, etc.) to analyze...we don't do live response because the last thing we need is more data!"

I guess the question at this point is: is the data that you're currently collecting for analysis meeting your business needs? If you're backlogged 6 months in your analysis...probably not. Depending upon the nature (or class) of the incident and your business environment, it may (most likely will) be useful to rapidly collect and analyze a specific subset of volatile data during live response so that you can get a better picture of the issue, and progress through your analysis more accurately and efficiently.

One of the things Kevin pointed out is that malware (can anyone verify that Kevin only said "malware" 6 times...I wasn't keeping track) is including the capability of modifying the MAC times of files written to the system. This is an anti-forensic technique that severely hampers those using the current "Nintendo forensics" approach to analysis.
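Just to show how little effort that anti-forensic technique takes, here's a minimal sketch (malware on Windows would typically call the SetFileTime() API directly; Python's os.utime() is used here only to demonstrate the effect) that backdates a file's access and modification times. A timeline built solely from these values would place the file years before it actually arrived on the system:

```python
# Minimal illustration of timestamp manipulation ("timestomping").
# This is a demonstration of the effect, not of any particular
# malware's mechanism.
import os
import tempfile
import time

# Drop a file "now"...
fd, path = tempfile.mkstemp()
os.close(fd)

# ...then backdate its access and modification times to Jan 1, 2000.
backdated = time.mktime((2000, 1, 1, 0, 0, 0, 0, 0, -1))
os.utime(path, (backdated, backdated))

st = os.stat(path)
print(time.strftime("%Y-%m-%d", time.localtime(st.st_mtime)))  # 2000-01-01

os.remove(path)
```

This is exactly why relying on a single timestamp source is "Nintendo forensics"...cross-checking against other data (on NTFS, the $FILE_NAME attribute timestamps; also log files, Registry key LastWrite times, etc.) is one way to spot the lie.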

I think that some people are beginning to recognize the need for live response and how it can be useful in an investigation. However, I think the issue now is whether those currently doing response of any kind (live or otherwise) can break out of their current mindset and shift gears. By shifting to a more holistic approach and using live response where applicable, and in the appropriate manner, those people will begin to actually see how useful this activity can be.

The presentation I found to be the most interesting was Nick and AAron's "VolaTools" presentation. Once Nick finished with the overview (presenting some great information on why you'd want to use tools like this), AAron kicked off into the how portion of the talk. I think their approach is an excellent one, in that they've identified a major stumbling block for this kind of analysis...the tools that are available are not included with EnCase and are not "push the Find-All-Evidence button" kinds of tools, and people aren't using them because the tools themselves are too different. In a nutshell, there's a confidence factor that needs to be addressed. While Nick and AAron's tools are written in Python, they do provide a command-line interface in the Basic version of their tools that allows the user to extract information from a RAM dump in nearly the same manner as you would on a live system. To list the modules used, you would use a "dlllist" command (as opposed to listdlls.exe), and to list handles, you'd use "handles".

I can't wait to download and look at their tools...these two guys are really bright and have just moved the issue of Windows memory analysis forward a couple of huge leaps. And thanks to guys like Jesse Kornblum for things such as his Buffalo paper, because we can then use AAron and Nick's work as a basis for incorporating the pagefile into the memory analysis, as well.

Oh, and while I did meet both Ovie and Bret, unfortunately Ovie wasn't wearing his CyberSpeak t-shirt, so I guess I don't get to have a beer with Bret! ;-(

One final thing about BlackHat DC...seeing Jesse do his impression of a sardine was worth the price of admission! Speaking of Jesse, it looks like he's been busy on the ForensicWiki...