I've been on the road a bit lately, and as such don't really have much info for a single post, but I have a couple of small tidbits that fit nicely when mashed into a single post...
I caught Hogfly's Beware the Key post...in it, he points to this KB article about correctly disabling the Autorun capability via the Registry. By default, removable storage devices do not (and should not) have autorun capabilities enabled, and pushing preventative measures out via GPOs, where necessary, is a good thing. Why? Check out this article from India (here from Wired) about an iPod being used to steal data. The usbstor2 plugin for RegRipper was designed for just this type of incident; it helps you consolidate data from multiple systems in one place for correlation and analysis. In an infrastructure with dispersed systems, F-Response Enterprise Edition would be extremely useful...I mean, "da bomb-shizzle".
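For reference, the setting in question is the NoDriveTypeAutoRun value (beneath Software\Microsoft\Windows\CurrentVersion\Policies\Explorer in the Software hive, or under a user's hive); it's a bitmask, one bit per drive type. Here's a quick sketch of how you might decode a value pulled from a hive...the helper function is mine, purely illustrative, not part of any tool:

```python
# Each set bit in the NoDriveTypeAutoRun value disables AutoRun for one
# drive type (bit positions follow the Win32 DRIVE_* constants)
DRIVE_TYPES = {
    0x01: "unknown",
    0x02: "no root directory",
    0x04: "removable (USB, floppy)",
    0x08: "fixed",
    0x10: "network",
    0x20: "CD-ROM",
    0x40: "RAM disk",
    0x80: "reserved",
}

def decode_no_drive_type_autorun(value):
    """Return the drive types for which AutoRun is disabled."""
    return [name for bit, name in DRIVE_TYPES.items() if value & bit]

if __name__ == "__main__":
    # 0xFF disables AutoRun across the board (what a GPO should push);
    # 0x95 is the Windows XP default
    print(decode_no_drive_type_autorun(0xFF))
    print(decode_no_drive_type_autorun(0x95))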
If you do any incident response work at all, you should probably take a look at Didier Stevens' pdf-parser.py Python code, which he made available via his blog post on analyzing a malicious PDF file. It can be used to narrow down the attack vector for an intrusion or malware on a system.
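To be clear, pdf-parser.py does far more than this, but the basic triage idea...flagging the name objects that frequently appear in malicious PDFs...can be sketched in a few lines of Python (the keyword list and function below are illustrative, not Didier's code):

```python
import re

# Name objects that frequently show up in malicious PDFs; a hit means
# "look closer with a real parser", not "this file is malicious"
SUSPICIOUS_NAMES = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
                    b"/Launch", b"/EmbeddedFile"]

def triage_pdf_bytes(data):
    """Count occurrences of suspicious name objects in raw PDF bytes."""
    counts = {}
    for name in SUSPICIOUS_NAMES:
        # the negative lookahead keeps /JS from also matching /JavaScript
        hits = len(re.findall(re.escape(name) + rb"(?![A-Za-z])", data))
        if hits:
            counts[name.decode()] = hits
    return counts
```

Note that a byte-level scan like this misses anything hidden inside compressed streams or hex-escaped names, which is exactly why a real parser like pdf-parser.py is worth learning.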
I've posted about free tools before (here, as well), and one that I wanted to add is a free, open-source AV scanner called MoonAV. If you're looking for other free tools, I'd suggest keeping up with the NirSoft Blog, just like Claus. There are a number of very, very useful freeware utilities at NirSoft, with more coming soon.
With respect to open-source AV, there are some other interesting, older links over at the OpenAntiVirus project page, as well.
Christine updated the e-Evidence site today! This is the closest thing to a monthly digital forensics e-zine available! Be sure to bookmark this site and check it out regularly.
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Friday, October 31, 2008
Thursday, October 23, 2008
Bridging the Gap
I work for a large corporation that includes a vulnerability research organization. This organization is responsible for discovering vulnerabilities in operating systems, applications, etc. When it discovers a vulnerability and develops a successful exploit, that information goes into the protection mechanisms produced by another part of the company, which are monitored by yet another part of the company. What this ultimately means is that attacks using those techniques are monitored or stopped before the systems themselves are compromised.
So I always think it's cool when someone from a monitoring organization reveals some information about an attack that they've seen, such as this post from Symantec. I think it's interesting to see this sort of thing, not only to see what an attack "looks like", but also to see if my analysis process includes the ability to find this sort of thing. As an analyst, I've been able to determine that yes, your system was compromised, Mrs. Jones, but when it comes down to it, how would I go about determining how it was compromised? There are a number of possible browser-borne attacks out there...how do you narrow down which one was used, if the initial compromise was via the browser?
In this particular case, the exploit involves overwriting a file opened by the MS Help and Support Center, so a quick way to check to see if this was the case is to use the following command (either on a live system, or on a system mounted with SmartMount):
C:\>dir /tw windows\pchealth\helpctr\system\sysinfo
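The /tw switch sorts the listing by last-written time; what you're looking for is a file in that directory whose modification time falls within the suspected compromise window. If you've mounted an image and want the same view cross-platform, a rough Python equivalent (the function is mine, purely illustrative) might look like:

```python
import os
from datetime import datetime

def list_by_mtime(directory):
    """List files in a directory sorted by last-written time, most
    recent first -- roughly the view `dir /tw` gives on Windows."""
    entries = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            mtime = datetime.fromtimestamp(os.path.getmtime(path))
            entries.append((mtime, name))
    return sorted(entries, reverse=True)
```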
Also, point #4 in the blog post states that the MS Help and Support Center Viewer handles "hcp://" links...to verify the file association, check the following Registry key:
HKEY_CLASSES_ROOT\HCP\shell\open\command
You should see something like this in the "(Default)" value:
%SystemRoot%\PCHEALTH\HELPCTR\Binaries\HelpCtr.exe -FromHCP -url "%1"
What posts like this allow us to do is to bridge the gap between vulnerability and exploit, and incident response and forensic analysis.
Another such example can be seen at the MS Malware Protection Center, particularly in this post about Win32/Rustock. Looking at the write-up, you're probably wondering how you'd go about using this information...the malware comes in three components, all encrypted, packed with ApLib, and oh, yeah...it's polymorphic. Nice! However, as Jesse pointed out in his rootkit paradox paper, these things usually have a persistence mechanism, and in this case, a driver is installed. Even if it's not given the same name, RegRipper can be used to parse through the System hive file using the "services2" plugin, listing the installed services and device drivers, sorted by the keys' LastWrite times. If you don't want to acquire an image of the system, you can extract the System hive from a running system using FTK Imager, or show how cool you are by using F-Response.
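Conceptually, the value of sorting by LastWrite time is that a driver dropped by malware clusters away from the mass of keys written at OS install time. A minimal sketch of that sort (using made-up service names and timestamps; the real services2 plugin parses the hive itself) might be:

```python
from itertools import groupby

def services_by_lastwrite(services):
    """Group service/driver key names by LastWrite time, most recent
    first -- the sort order that makes a recently-installed driver
    stand out from the install-time keys."""
    # sort (name, timestamp) pairs by timestamp, newest first; Python's
    # sort is stable, so names with equal timestamps keep their order
    ordered = sorted(services.items(), key=lambda kv: kv[1], reverse=True)
    return [(ts, [name for name, _ in group])
            for ts, group in groupby(ordered, key=lambda kv: kv[1])]
```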
One interesting piece that I pulled out of the write-up was that early versions of the malware apparently "...used alternative streams to store the installer...", but this technique was dropped. What they're referring to here is NTFS alternate data streams. Leave it to the folks at MS to use "polymorphic terminology" (MS has referred to these as "multiple", "alternate", and "alternative", as well as simply called them "data streams") to describe something so simple.
Wednesday, October 22, 2008
What do you need as a responder?
Many times when working with customers or talking with folks who interface with incident responders (or are incident responders), the topic of what information is pertinent to a responder comes up. When calls come in from customers, the people who make the call and interface with the consultants are often not themselves responders (or in some cases, even technical people), and instead represent their company from a business or legal/compliance perspective. This can often lead to misunderstandings due to differing perspectives, which ultimately leads to a lack of (accurate) information. However, this isn't restricted only to non-technical people...sometimes having the sysadmins on the call can lead to the same sort of thing, which is, again, largely due to perspective.
So...what do responders need to know when you call, or what information do your responders need access to?
Well, the first thing that an incident responder is going to need to know is your goals, or what you hope to achieve from the response activities.
In an ideal world, there are four basic sources of technical data that responders will be looking to...network traffic captures, network device log data (i.e., firewalls, routers w/ ACLs, etc.), and host-based volatile (memory) and non-volatile (system image) data. There may be instances when all of this information is available, and there may be times when only portions of it are available...however, in many cases, for those unprepared for an incident, very little (if any) of this is available to a responder. The amount of information available to the responder is going to play a direct role in how completely the responder is able to help you achieve your goals.
I once received a call from someone who needed help with a laptop. Apparently, the laptop had been infected with malware, and the immediate response was to take the laptop off of the network, wipe the hard drive, and install an updated version of the operating system. The caller wanted to know if I could tell them what the malware was, and what (if any) data had been taken. Seriously.
Some questions you're likely to be asked when you call someone to perform incident response include (but are not limited to):
- What type of incident are you experiencing? How would you characterize the incident?
- Do any of the affected systems contain "sensitive data" (per PCI, HIPAA, etc.)?
- How many systems are involved? What is their status? What are their hard drive types, configurations, and capacities (SAN/NAS, RAID 5 with a total of 100GB, mirrored, single 1TB drive, boot-from-SAN)?
- What actions have you taken already (and be honest...running an AV scan and deleting files isn't "nothing")?
- What data have you already collected, if any?
- Do you have a logical network diagram, particularly of the affected systems?
Lots of questions...do you have the answers?
Tuesday, October 21, 2008
New Tools
Thought I'd take a page from Claus's book and update everyone as to some new tools that are available...
First, over on the Volatility Tumbleblog, there's mention of two new plugins that Jesse Kornblum developed for Volatility. Also on the same blog is a link to a tool to search memory dumps for GMail artifacts. Both look like exceptional utilities.
Speaking of Claus, be sure to check out the updated browser analysis tools over at NirSoft. Claus is also the one who pointed out that AutoRuns has been updated. Thanks! AutoRuns is a great tool to use in IR, and for keeping up to date on autostart locations.
Brian Carrier updated the TSK tools to v3.0, which includes the Windows versions of the tools. I've updated my own regtime.pl script to produce output similar to that of fls, so that the output of the script can be appended to the output of fls, and parsed with mactime to produce a timeline of file (and now Registry) activity on a system. At this time, only the path and mtime fields are populated for Registry keys, as the LastWrite time is analogous to the last modification times for files. With the currently available tools, deleted keys can be added to the list. I'll need to update the plugins to have RegRipper and rip produce the same output, which would be more targeted (and perhaps more valuable) than dumping all of the keys.
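For anyone wanting to do something similar, the bodyfile format that TSK 3.x's mactime consumes is eleven pipe-separated fields (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime). Mapping a key's LastWrite time into the mtime field, with everything else zeroed, might look like this (a sketch, not the actual regtime.pl code):

```python
def key_to_bodyfile(key_path, lastwrite_epoch):
    """Emit one TSK 3.x bodyfile line for a Registry key.  Only the name
    and mtime fields carry data: a key's LastWrite time is the analog of
    a file's last-modification time, so atime/ctime/crtime stay zeroed."""
    return "|".join(["0",                      # MD5 (not computed)
                     key_path,                 # name
                     "0", "0", "0", "0", "0",  # inode, mode, UID, GID, size
                     "0",                      # atime
                     str(lastwrite_epoch),     # mtime <- key LastWrite time
                     "0", "0"])                # ctime, crtime
```

Lines in this form can simply be appended to fls output and fed to mactime along with the file system data.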
Monday, October 20, 2008
'Zines
For a while now, I've been asking others about what they thought about a forensics/IR magazine...a 'zine (print or electronic) dedicated to IR/CF topics. Most of the responses I've received have been mixed...some good, not many disagreements, but also not a great deal of interest in providing content. Okay, okay...maybe that was too much to ask, but I just have to wonder if there's any interest at all.
Usually the first thing folks say is that the subject is too niche...but have you been to the bookstores lately? There are multiple magazines on tattoos and skin art, antique doll collecting...you name it. If a lot of these subjects/topics aren't "niche", I don't know what topics would be.
Other 'zines that cater to the techie-nerd crowd include Hakin9, Make, (IN)Secure, and Linux Sysadmin Magazine. Even a recent issue of Linux Pro Magazine had a couple of articles aimed at incident response...while not overly technical or "forensic-y", they were useful and an interesting read. Most such articles seem to be aimed at system administrators, with the goal of introducing them to the topic, rather than immediately taking a deep dive into the subject matter.
Content - A 'zine such as this could have all sorts of great content, if some of the emails and blogs I've seen have been any indication. There are folks out there with lots of great ideas. Of course, the "usual suspects" would be included...hardware/software reviews, maybe even book reviews, etc. Sections or articles could be specific to Linux, AS/400, Windows, MacOSX, cellphones/PDAs, etc. The stuff that goes into a 'zine like this could be limitless...challenges, reader emails, ads for conferences and products, new tools or techniques, new versions of software, etc.
Audience - who would this type of media be aimed at? State, local, and federal law enforcement, college (grad/undergrad) students in computer forensics tracks, corporate responders, consultants, even hobbyists. Pretty much anyone who does or is interested in this kind of work, regardless of from which perspective (host- or network-based).
I have no idea what it takes to start something like this from the ground up...and I'm not even going to assume that it's something I can do myself. I would be very interested in contributing...heck, any way to get CPE points, right?...and in trying to get others to contribute, as well. Right now, there is the Digital Investigation Journal, and I would like to be part of attempting to make it a more hands-on journal, less academic in nature. I have no experience planning something like this, and besides, like most of you, I already have a day job. What I would be interested in doing is perhaps assisting someone who already has the infrastructure available, and working with others to plan out an agenda or content list a year out. One way to go about this might be to email the editor-in-chief of DIJ, expressing an interest in creating content for the journal, or simply to send in your thoughts. Also, I've asked Monika of the Hakin9 staff if this is something that they'd be interested in producing...if you like the idea, and are willing to subscribe or even contribute (dude...CPE points!!!), email her and tell her so!
Saturday, October 18, 2008
Summit Takeaways
I recently received an email from the organizers of the SANS Forensic Summit and was asked to provide my biggest takeaways from the summit, and I wanted to post what I came up with, to see if others are seeing the same sorts of things...so, here's what I provided:
I would say that the key take-away for me is that speed is essential, but accuracy should not be sacrificed for it. Incident responders need to respond quickly in order to identify, classify, and, as appropriate, contain an incident. In part, this puts the onus on the organizations...even if they have consultants on retainer, it will still take time for the consultants to arrive, and then they are slowed down even further by having to get up to speed and learn the environment. Consultants can assist an organization in developing a tier 1 response, training staff in triage, diagnosis, preview, containment, etc., even across an enterprise. I would also say that tools such as F-Response are paramount to this sort of activity, as well. Once the initial training is complete, the consultants can respond as tier 2 or 3 responders, assisting as necessary, but knowing that the necessary data has been collected.
It's clear that state (via PII notification laws) and federal legislation, as well as regulatory oversight (PCI, HIPAA, etc.), play a huge role in incident response. In fact, these are the primary drivers to IR at this point. Think about it...if a company was not required to notify someone when a breach of sensitive data occurred, would they?
To that end, there needs to be a much more immediate response...calling a consulting firm to come on-site to assist after the incident has occurred and been detected is a fantastic idea, but one of two things is going to happen; you're either going to continue "bleeding" data while you're waiting, or you're going to stomp all over the data and destroy the indicators (aka, "evidence") that those consultants will be looking for in order to answer the questions that need to be answered.
Okay, brief digression here...what questions will need to be answered? There are essentially three questions that need to be answered in the face of a breach of sensitive data...(a) was a system compromised, (b) did the system contain sensitive data, and (c) was that sensitive data exposed or compromised as a result of the breach? In order to determine (c), in most cases, you need to minimize "temporal proximity" (thanks, AAron, for that very Star Trek-y term!!)...that is, you need to detect the breach and collect data as close to the time of the breach as possible.
Rather than fall victim to a breach and not get the answers you need (by "you", perhaps a more appropriate way of saying it would be "your legal counsel or compliance officer"), why not get someone in ahead of time, before an incident occurs, to work with you in setting up a response plan, and training your staff in a timely, accurate response?
I've said before that tools like F-Response and Volatility (the combination of the two being referred to as Voltage) have changed the face of incident response. Having these tools available allows you to quickly collect and analyze memory in order to triage and categorize incidents. Too many times I've been asked to come on-site only to find out that prior to the call, the customer had already turned systems off and taken them offline. These tools will help you collect more data than ever before, reduce that "temporal proximity", and at the very least have data available for tier 2 incident responders to process and analyze.
Thoughts?
Wednesday, October 15, 2008
SANS Forensic Summit
The SANS Forensic Summit, a first-of-its-kind event for incident responders and forensic analysts, is over and I have to give a hearty and whole-hearted thanks to Rob Lee for chairing the event and bringing everyone...consultants, practitioners, and yes, even vendors...into such a unique forum. The combination panel and presentation format provided a great opportunity for attendees to interact with speakers in ways other than just listening to their presentations.
Speaking of which, there were a number of exceptional presentations throughout the two days. Rob talked about using TSK's fls and ils to generate file system timelines, which led me to think that it wouldn't be too great a stretch to add the same sort of capability to RegRipper, and have the Registry data included in the timeline information. The guys from Verizon gave a great presentation on their incident statistics, and the Mandiant presentation illustrated some interesting artifacts from a real-world examination.
One prevalent theme throughout the summit was that there were a lot of folks "calling the baby ugly". As humorous as that may sound, that was the euphemism for being up-front and letting folks know, yes, we have a problem. At least one of the issues identified that both Richard Bejtlich and I (and others) seemed to agree on was that the need to protect data is no longer the driver for incident response...if it ever truly was. Currently, legislation (state notification laws) and regulatory oversight (PCI, HIPAA, etc.) are the drivers for incident response.
Also, a common thread from the consultants to the admins in the audience seemed to be, help us help you. At one point during a panel, Rob Lee asked something along the lines of, how soon should someone who's been breached call for help, and my response was "before it happens." Seriously. Get someone on-site before you have a breach, and have them look at your response plan and capabilities, and help you bring them in line with what's needed for your business. Think of the first folks to deal with a breach or some other incident as EMTs, and folks like me as doctors and surgeons...if you find someone who needs help, are you going to stand around and watch the guy die, or do you want to know what you can do to (at the very least) contain the issue until you can get someone with a greater skillset on-site to assist?
All in all, it was a great event, very beneficial to attendees and speakers alike. Rob did a great job pulling together talent such as Richard Bejtlich of GE and TaoSecurity fame, AAron Walters, Mike Poor and Tom Liston of InGuardians, Lance Mueller, Eoghan Casey, Bret Padres and Ovie Carroll, as well as Kris Harms, Wendi Rafferty and Ken Bradley from Mandiant, and Monty McDougal. Jennifer Kolde was there representing the FBI, as was Matt Shannon...F-Response is and was a huge hit. I was talking with a couple of folks who attended the summit and when the topic of F-Response came up, you could see the light come on in their eyes as they realized the potential that could be realized through a product like this.
It was also great to be able to talk with folks like Jeff Caplan, and (me being really bad with names) Doug and the guy from Ford.
One of the big take-aways that I got from the summit is the fact that folks like the speakers (consultants, in most cases) and attendees (admins, etc.) face a lot of the same problems with respect to incident response...namely, how to preview and triage systems, and how to do so in an enterprise environment.
I'm hoping to be invited to and be able to attend the next SANS Forensic Summit, in July 2009!
See what others thought:
AAron
Matt from F-Response
There were also a number of exceptional presentations throughout the two days. Rob talked about using TSK's fls and ils to generate file system timelines, which led me to think that it wouldn't be too great a stretch to add the same sort of capability to RegRipper, and have the Registry data included in the timeline information. The guys from Verizon gave a great presentation on their incident statistics, and the Mandiant presentation illustrated some interesting artifacts from a real-world examination.
Tuesday, October 07, 2008
RegRipper Plugin Generator
Jason Koppe, whom I met at DFRWS (Jason, Cory and I "helped" close out the tab at the reception at the Wharf Rat on Monday evening...you're welcome, Brian! ;-) ) has written a RegRipper plugin generator! Check it out!
Basically, what Jason has done is come up with a good overall plan to encapsulate my vision for the plugins themselves. I've found it difficult to really come up with a discernible pattern, due to the variations in the plugins...all start at a root key, and then some look for subkeys, some for specific values, some for all values, etc. Look at the UserAssist key plugin (no, seriously...open it up in an editor)...not only are the values extracted, but they need to be ROT-13 "decrypted". The USBStor2 plugin parses and correlates information from multiple keys.
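For anyone curious, that ROT-13 step is trivial to reproduce; here's a quick Python sketch of the idea (the encoded value name below is just a commonly seen example, not taken from any particular hive):

```python
import codecs

def decode_userassist_name(name):
    """UserAssist value names are ROT-13 encoded; decode one."""
    return codecs.decode(name, "rot_13")

# "HRZR_EHACNGU" is a commonly seen encoded value name
print(decode_userassist_name("HRZR_EHACNGU"))  # UEME_RUNPATH
```

Note that ROT-13 is its own inverse, so the same call "encodes" a plain name right back.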
Regardless of the eccentricities of my brain and perspective, Jason's done a great job of putting together a basic RegRipper plugin generator. While there are a number of dependencies for the GTK+ UI code that James used for regview.pl, it looks like Jason has made yet another excellent argument for installing them!
Great job, Jason!
Question: Besides PlainSight, where else is RegRipper being used?
Monday, October 06, 2008
New Registry Analysis Tools
As I mentioned earlier, James Macfarlane has released an update to his Parse::Win32Registry Perl module. In order to install the module, I simply downloaded the .tar.gz file, extracted everything, and copied the contents of the lib directory in the tarball to my C:\Perl\site\lib directory. Yep, that's it.
James has been nice enough to include a number of useful utilities to demonstrate the new capabilities of his module...some of those new utilities are:
- regstats.pl - gives you statistics about a file, such as the number of keys and values, with an option to show the number of different value types
- regtimeline.pl - prints a timeline of all Registry key LastWrite times
- regexport.pl - exports the Registry hive file (or part of it) as a .reg file
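The regtimeline.pl idea boils down to sorting key paths by their LastWrite times; here's a rough Python sketch of the concept, using made-up sample data (the real script, of course, works against an actual hive via Parse::Win32Registry):

```python
from datetime import datetime

def registry_timeline(entries):
    """Sort (LastWrite time, key path) pairs into printable timeline lines."""
    return [f"{ts.isoformat()}Z  {path}" for ts, path in sorted(entries)]

# made-up sample data for illustration
entries = [
    (datetime(2004, 8, 23, 21, 24, 25), r"ControlSet003\Control\Nls"),
    (datetime(2004, 8, 18, 0, 39, 36), r"ControlSet001\Enum\Root"),
]
for line in registry_timeline(entries):
    print(line)
```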
One of the most interesting tools James included with his updates is regscan.pl, which parses through a Registry hive file, identifying different cells. However, instead of following the various links between cells, regscan.pl simply parses through the binary hive file one cell at a time. Running the script produces some interesting output:
0x38cd48 1 nk $$$PROTO.HIV\ControlSet003\Control\Nls\MUILanguage\RCV2\
umaxu40.dll [2004-08-23T21:24:25Z]
0x38cda8 1 vk 0 (REG_BINARY) = 00 00 28 0a 01 00 05 00
0x38cdc8 1 ..
0x38cdd8 1 vk 1 (REG_BINARY) = ab c4 e9 4c d8 fd a5 7c de 59 ff 05 93 9e 87 ba 02
0a e6 17 27 02 f9 a3 42 27 95 a0 61 0e 66 fd
Pretty cool, eh? Probably not...doesn't look very useful, does it? Well, see that "1" following the offset? That tells you whether the key or value is "in use"; that is, whether it's in allocated or unallocated space within the hive file. Sound interesting? Check it out...running the following command:
C:\Perl>regscan.pl d:\cases\system | find "0 nk"
...showed me a list of all of the Registry keys found to NOT be in use in the hive file! Basically, what this script and command line are showing me is a list of deleted keys in the hive file. Here's an excerpt of the output:
0x1ef020 0 nk (Invalid Parent Key)\04 [2004-08-18T00:39:36Z]
0x1f0020 0 nk (Invalid Parent Key)\Security [2004-08-18T00:32:16Z]
0x1f2020 0 nk $$$PROTO.HIV\ControlSet001\Enum\Root\LEGACY_DCOMLAUNCH
\0000\Services\SCardDrv [2004-08-23T21:34:10Z]
Notice that the first two keys don't have their full paths traced all the way back to the root key, but the last one does.
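If you'd rather post-process the regscan.pl output than pipe it through find, the same filter is easy to sketch in Python (the line format here is assumed from the excerpt above, so adjust if your output differs):

```python
def deleted_keys(regscan_lines):
    """Keep only 'nk' (key) cells whose in-use flag is 0, i.e. deleted keys."""
    hits = []
    for line in regscan_lines:
        fields = line.split()
        # field 0 is the offset, field 1 the in-use flag, field 2 the cell type
        if len(fields) >= 3 and fields[1] == "0" and fields[2] == "nk":
            hits.append(line)
    return hits

sample = [
    '0x38cd48 1 nk $$$PROTO.HIV\\ControlSet003\\Control [2004-08-23T21:24:25Z]',
    '0x1f0020 0 nk (Invalid Parent Key)\\Security [2004-08-18T00:32:16Z]',
]
print(deleted_keys(sample))
```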
Pretty neat stuff, eh? I'm considering including the use of regscan.pl in my SANS Forensic Summit presentation demos, but I'm not sure if the usefulness of this type of information will really be apparent...what do you think?
Bluetooth in the Registry
Does anyone out there have Registry hive files available from a system that they've used (or know has been used) to complete Bluetooth pairings?
I'd like to see about writing some RegRipper plugins to assist with the analysis of situations in which this is important.
If someone wants to loan me a laptop and some Bluetooth devices, so that I can do the testing and analysis myself, I will happily return the equipment when I'm done with it (yeah, I know....that didn't work with the USB stuff, or when folks wanted info on Firewire devices, either - hopefully my luck will change).
Saturday, October 04, 2008
Hakin9 Registry Analysis Article
I received word this past week that an article I had written on Registry analysis will be appearing in an upcoming issue of Hakin9 magazine. I wrote the article a while ago; it covers some basics, and includes a screenshot from an older version of RegRipper. I hope they'll ask me back, so that I can write something a bit more up-to-date and get it into print sooner.
SANS Forensic Summit
As the SANS Forensic Summit draws nigh, I've been preparing the demos for my presentation, Secrets of Registry Analysis Revealed. In addition to this presentation, I'll also be participating in some panels. I've seen the current program, and there's even a panel on volatile data, which I think is very topical and very much needed. Also, Richard Bejtlich will be giving the keynote speech on the second day.
Part of my presentation will include demos (yes, Rob told me I can't just talk the whole time...) of RegRipper and rip.exe (CLI version of RegRipper), as well as a new tool I call ripXP. Before I say anything else about this, I have to say that ripXP was an idea that Rob had several months ago...he told me something like, "hey, wouldn't it be cool if you wrote a tool like RegRipper, only it would also run the plugin against the hive files in the XP Restore Points?" So, in my copious amounts of free time (HTML really needs a sarcasm or smart-@$$ tag), I put together ripXP.
Okay, so what IS ripXP? RipXP is similar to rip.exe, in that it is a CLI tool and that it uses the same plugins as RegRipper and rip.exe. You give ripXP (as command line arguments) the hive file, the directory where the Restore Points reside (more on that later), and a plugin to run. Once you have all this, ripXP will then:
-> Access the hive file and guess what kind of hive file it is (SAM, System, NTUSER.DAT, Software, or Security); if it's an NTUSER.DAT file, it will attempt to retrieve the user's SID
-> Compare the type of hive file to the hive file that the plugin was written for; that is, if you pass it a System hive file, it won't let you run a plugin meant for an NTUSER.DAT file (just like rip.exe, ripXP includes the "-l" option so you can list all available plugins)
-> Run the plugin against the hive file you selected
-> Access the System Restore RP directories, and run the plugin against the appropriate hive
All this happens automatically, and the output goes to STDOUT, so all you have to do is redirect the output to a file.
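To give you an idea of the Restore Point enumeration step, here's a hypothetical Python sketch...this is NOT ripXP's actual code, and the snapshot file names (e.g., _REGISTRY_MACHINE_SYSTEM under each RPnn\snapshot directory) should be verified against your own image:

```python
import os
import re

# Snapshot file names typically found inside XP Restore Point directories;
# NTUSER.DAT copies are named differently (they include the user's SID).
SNAPSHOT_NAMES = {
    "system":   "_REGISTRY_MACHINE_SYSTEM",
    "software": "_REGISTRY_MACHINE_SOFTWARE",
    "sam":      "_REGISTRY_MACHINE_SAM",
    "security": "_REGISTRY_MACHINE_SECURITY",
}

def snapshot_hives(rp_root, hive_type):
    """Yield (RP name, hive path) for each Restore Point copy of a hive."""
    name = SNAPSHOT_NAMES[hive_type]
    for rp in sorted(os.listdir(rp_root)):
        if not re.fullmatch(r"RP\d+", rp):
            continue  # skip anything that isn't an RPnn directory
        candidate = os.path.join(rp_root, rp, "snapshot", name)
        if os.path.isfile(candidate):
            yield rp, candidate
```

A driver would then run the chosen plugin first against the primary hive, and then against each path this yields...which is exactly the sequence described above.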
Oh, yeah...when ripXP accesses an RP directory, it also displays the Description, Type, and Creation Date of the Restore Point.
Okay, so besides being totally, AWESOMELY, AMAZINGLY cool...so what? Well, for the demos, I'm using Lance Mueller's practical images, so the number of RP directories is limited. However, in a real examination, a tool like this would allow you to see a historical progression of data. I've used only a couple of the plugins in my testing thus far, such as userassist, acmru, and a couple of others. But look at the MountedDevices key, or any of the MRU listings in the NTUSER.DAT file...this would allow you to see a historical progression over time of how the data changed.
Also, consider a Restore Point created one day, and then the following day, some data within that key was deleted by the user. Those historical artifacts would still exist in the hive files in the Restore Points, and would not only be accessible, but would also be visible sequentially.
Finally, like rip.exe, ripXP can be deployed within a batch file, and you could even create/use a Perl script to create that batch file, based on a standard methodology. Oh, yeah...the RP directories. So, you have an image...raw dd, split raw dd, EWF, whatever. What I did was open the image file in FTK Imager and export the RP directories to another location; in my case, D:\test\XP1. Then, because I wanted to use them easily and repeatedly, I burned them to CD, so I now access them as E:\XP1\RP1, RP2, RP3, etc. What I need to do is test them for use with SmartMount, and other tools like it. Yes, this will make the command line a bit longer, but it should work just fine. (Addendum: Testing using a mounted image is complete and extremely successful!)
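As an example of scripting that batch file, here's a short Python sketch; the -r/-d/-p switches are assumed purely for illustration, so check the tool's own usage output for the real ones:

```python
def make_batch(hive, rp_dir, plugins, out="ripxp_run.bat"):
    """Write a batch file that runs ripXP once per plugin, appending output.

    The -r/-d/-p switches are hypothetical stand-ins for ripXP's real flags.
    """
    lines = [
        f"ripxp.exe -r {hive} -d {rp_dir} -p {plugin} >> report.txt"
        for plugin in plugins
    ]
    with open(out, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

cmds = make_batch(r"d:\cases\ntuser.dat", r"E:\XP1", ["userassist", "acmru"])
print(cmds[0])
```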
Anyway, this will be one of my demos. If you're going to be at the Summit, be sure to stop by when we talk about Registry analysis.