While attending DFRWS 2008, I also attended the Forensic Rodeo. I say "attended" because I originally had no intention of participating; I wanted to watch what others did with the data and the process they used to approach their analysis. It was very interesting to watch the team at my table, and the Forensic Rodeo was actually somewhat realistic.
Want to give it a shot yourself? The files for the forensic rodeo have been posted here.
I'd be interested to hear your approach, and the tools you used.
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions) and "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Sunday, August 31, 2008
Saturday, August 30, 2008
RegRipper News and Mentions
I'm never really sure who's using RegRipper and how they're using it, or how they'd like to use it. However, getting input or feedback from the folks using it inevitably leads to making RegRipper a better tool.
James E. Martin mentioned RegRipper in his Detection of Data Hiding in Computer Forensics presentation. In the presentation, Mr. Martin demonstrated the use of RegRipper to extract USB device information from a System hive file.
I was recently discussing with another examiner the issue of using RegRipper to present USB device data from multiple systems in an easy-to-view and -manage manner. RegRipper is a GUI tool that parses one file at a time...however, it ships with rip.exe (another user recently contacted me to say that with a couple of minor modifications, he now runs rip.pl on Linux), a command line interface (CLI) tool that is easy to automate via a batch file. In order to provide something useful to the examiner, I opened up the usbstor.pl plugin and, within minutes, made some minor modifications so that the output was in .csv format. I then added code from the mountdev.pl plugin to map USB removable storage devices to drive letters, where that information is available. Finally, I added code from the compname.pl plugin to extract the system name from the System hive file...if you're running this across multiple hive files, you need a way to differentiate the various systems in your output.
So, the resulting plugin, which took all of maybe 30 minutes to create, tweak, and test, can be run via rip.exe like so:
C:\Perl\forensics\rr>rip -r d:\cases\system -p usbstor2
The output for this System hive file looks like:
PETER,Disk&Ven_&Prod_USB_DISK&Rev_1.13,0738015025AC&0,1127776426,USB DISK USB Device,7&2713a8a1&0,\DosDevices\E:
So, the output is:
- System name
- Device class ID
- Serial Number
- LastWrite time from the unique ID key, 'normalized' to Unix time
- The "FriendlyName" value from the unique ID key
- The ParentIdPrefix value, if available
- The DosDevice listed in the MountedDevices key, if the ParentIdPrefix value exists
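Since the LastWrite time in that output is "normalized" to Unix time, converting the sample value back to something readable is trivial. A quick Python sketch, using the value from the output above:

```python
from datetime import datetime, timezone

# The 'normalized' LastWrite value from the sample output above
last_write = 1127776426

# Interpret the value as seconds since the Unix epoch, in UTC
dt = datetime.fromtimestamp(last_write, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2005-09-26 23:13:46 UTC
```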
So, to run this against multiple System hive files, simply create a batch file that contains lines that look like this:
C:\Perl\forensics\rr>rip -r System -p usbstor2 >> usbstor.csv
Once you run this, the usbstor.csv file can be opened in Excel and you can quickly and easily determine devices that were connected to multiple systems, etc.
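As a sketch of what that cross-system analysis can look like outside of Excel, the following Python snippet groups rows by serial number to flag devices seen on more than one system. It assumes the field order shown above (system name first, serial number third) and uses inline sample rows in place of reading usbstor.csv:

```python
import csv
from collections import defaultdict
from io import StringIO

# Inline sample rows in the format shown above (system name first,
# serial number third); in practice, read the usbstor.csv file instead
sample = (
    "PETER,Disk&Ven_&Prod_USB_DISK&Rev_1.13,0738015025AC&0,"
    "1127776426,USB DISK USB Device,7&2713a8a1&0,\\DosDevices\\E:\n"
    "LAPTOP1,Disk&Ven_&Prod_USB_DISK&Rev_1.13,0738015025AC&0,"
    "1127780000,USB DISK USB Device,,\n"
)

# Map each device serial number to the set of systems it appeared on
systems_by_serial = defaultdict(set)
for row in csv.reader(StringIO(sample)):
    if len(row) >= 3:
        systems_by_serial[row[2]].add(row[0])

# Devices seen on more than one system are of particular interest
for serial, systems in sorted(systems_by_serial.items()):
    if len(systems) > 1:
        print(serial, "->", sorted(systems))
```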
This just shows you how easy-to-use and flexible this tool set is. To see even more, don't miss the SANS Forensic Summit, where I'll be discussing Registry analysis and demonstrating these tools, as well as something else very special!
Wednesday, August 27, 2008
The Need for Speed
The recent Best Western issue illustrates an important point, which was mentioned in many of the posted articles on this issue...
Compliance != Security
In the face of compromises or any other potential/verified breach, a quick response is essential. You don't know if you have sensitive data (PCI, PHI, PII, etc.) leaving your network, and your first, most immediate and natural reaction (i.e., disconnecting systems) will likely expose you to more risk than the incident itself. Wait...what? Well, here's the deal, kids...if a system has sensitive data on it, and was subject to a compromise (intrusion, malware infection, etc.), and you cannot explicitly prove that the sensitive data was not compromised, you may (depending upon the legal or regulatory requirements for the data) be required to notify, regardless.
So...better to know than to not know...right?
What you need to do is quickly collect the following items:
- Pertinent network (i.e., firewall, etc.) logs
- Network packet capture(s)
- Full or partial contents of physical memory
- An image acquired from the affected system
Remember to DOCUMENT everything you do! The rule of thumb is, if you didn't document it, you didn't do it.
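As a minimal sketch of what "document everything" can look like for collected items, the snippet below hashes each item and records it with a UTC timestamp. The file name is hypothetical, and a real collection log would capture far more (who collected it, from where, and with what tool):

```python
import hashlib
from datetime import datetime, timezone

def log_evidence(data, label):
    """Return a one-line collection record: UTC timestamp, item label,
    and SHA-256 of the item's contents."""
    digest = hashlib.sha256(data).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    return f"{stamp}  {label}  sha256:{digest}"

# Demo: in-memory bytes standing in for a collected memory dump;
# the file name is hypothetical
record = log_evidence(b"example dump contents", "memdump-host01.bin")
print(record)
```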
What other tools are available? In the case of Best Western, as well as any other organization with remote systems (located in distant data centers or storefronts), something like F-Response may prove to be extremely valuable! If you're not sure about F-Response and don't believe the testimonials, give the Beta Program a try. With the Enterprise Edition of F-Response already deployed (or simply pushed out remotely as needed), getting the data you need is amazingly straightforward!
So why do all this? Why go through all this trouble? Because you will likely have to answer the question, was sensitive data leaving my network? The fact of the matter is that you're not going to be able to answer that question with nothing more than a hard drive image, and the single biggest impediment to doing the right thing (as opposed to something) in a case like this is time...when you don't have the tools, training or support from executive management, the only reaction left is to unplug systems and hope for the best.
Unfortunately, where will that leave you? It'll leave you having to answer the question, why weren't you prepared? Would you rather have to face that question, or actually be prepared?
If you want to learn what it takes to be prepared, come on by the SANS Forensic Summit and learn about this subject from the guys and gals who do it for a living!
Resources
CSO Online - Data Breach Notification Laws, State by State
SC Magazine - Data Breach Blog
Sunday, August 24, 2008
The Demented Musings of an Incident Responder
I respond, therefore I am. H. Carvey, 2008
I thought I'd jot down some of the things I see from time to time in the field...
In some network infrastructures, there's no need to use rootkits or other obfuscation methods.
In fact, these attempts to hide an intruder's activity may actually get the intruder noticed...incorrectly programmed or implemented rootkits lead to BSODs, which get you noticed. Look at some of the SQL injection attacks...not the ones you saw in the news, but the ones you didn't see...well, okay...if you didn't see them, then you can't look at them. I get it. Anyway, the other SQL injection attacks would punch completely through into the interior network infrastructure, and the intruder would be on systems with privileges above Administrator. From there, they'd punch out of the infrastructure to get their tools...via TFTP, FTP (the ftp.exe client on Windows systems is CLI and allows you to use scripts of commands), their own wget.exe, etc. From there, the intruder would use their tools to extend their reach...create user accounts, etc...and many times go unnoticed.
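As a rough illustration of how those SQL injection attacks can be hunted after the fact, the sketch below runs a crude pattern scan over web server log lines. The patterns and log entries are illustrative only; real detection takes much more than a regex:

```python
import re

# Crude indicators only; real detection needs far more than a regex
SQLI_PATTERNS = re.compile(
    r"(xp_cmdshell|DECLARE\s+@|CAST\(|;--|exec\s+master)", re.IGNORECASE)

# Hypothetical web server log lines
sample_log = [
    "GET /page.asp?id=1 HTTP/1.1",
    "GET /page.asp?id=1;DECLARE @s varchar(4000) HTTP/1.1",
    "GET /page.asp?id=1;exec master..xp_cmdshell HTTP/1.1",
]

hits = [line for line in sample_log if SQLI_PATTERNS.search(line)]
for line in hits:
    print("possible SQL injection:", line)
```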
However...while we are on the subject of rootkits...I was reading a post over on the Volatility Tumblr blog, and came across this interesting bit of analysis of the Storm bot from TippingPoint's Digital Vaccine Labs (DVL). Apparently, the rootkit capabilities of this malware were "copied" (their word, not mine) from existing code. The point of this is that some issues with rootkits are being solved by using existing, proven code. Symantec has a post on the Storm bot, showing that it doesn't just blow up on a system.
Many times, an organization's initial response exposes them to greater risk than the incident itself, simply by making it impossible to answer the necessary questions.
Perhaps more so than anything else, state notification laws (CA SB-1386, AB-1298, etc.) and compliance standards set forth by regulatory bodies ('nuff said!) are changing the face of incident response. What I mean by that is that simply cleaning systems (running AV, deleting files and accounts, or just wiping and reinstalling the system) is no longer an option, because now we need to know (a) whether there was "sensitive data" on the system, and (b) whether the malware or compromise led to the exposure of that data.
What this leads us to is that the folks closest to the systems...helpdesk, IT admins, etc...need to be trained in proper response techniques and activities. They need to have the knowledge and the tools in place in order to react quickly...like an EMT. For example, rather than pulling a system offline and wiping it, obtain a physical memory dump, disconnect the system from the network, obtain an image, and then wipe the system. Properly control, preserve, and maintain custody of the data so that forensic analysts can do their thing. This process is somewhat over-simplified, I know, but it's something that can be used as a basis for getting those important questions answered. Add to that network traffic captures and any available device logs, and you've pretty much got most of what incident responders such as myself are looking for when we are asked to respond.
Saturday, August 23, 2008
Browser Artifact Analysis
There are a number of times where an analyst would need to know a bit about a user's web browsing activities in order to determine what was happening on a system; was the user in violation of acceptable use policies, or did the user go someplace that ended up getting the system infected, etc? Sometimes this is how systems initially get infected.
There are two excellent articles from Jones and Belani (published on SecurityFocus here and here) that, while a little more than 3 yrs old, are excellent sources of information and a great way to begin understanding what is available via browser forensics, and how to go about collecting information.
One of the things I tend to do when setting up an examination is to open the image in a ProDiscover project, and populate the Internet History Viewer. With PDIR v5.0, this is smoother than with previous versions, and it gives me a quick overview of the browser activity on the system. However, you don't need commercial tools to do this kind of analysis...there are tools out there that you can use either against live systems, or by mounting an image as a read-only file system.
At this point, what you look for is totally up to you. Many times when performing analysis, I have a timeframe in mind, based on information I received from the customer about the date and time of the incident. Other times, I may start with Registry analysis and have some key LastWrite times to work with. In several examinations, I had user profile creation dates, so I used that as my search criteria...locate anything useful that occurred prior to the profile creation date (which, by the way, I correlated with data extracted from the SAM file using RegRipper!!).
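That kind of time-based filtering is simple to sketch; the snippet below keeps only timeline events that predate a profile creation date, with all dates and events hypothetical:

```python
from datetime import datetime

# Hypothetical profile creation date, pulled from the SAM hive
profile_created = datetime(2008, 3, 15, 9, 30)

# Hypothetical artifact timeline entries: (timestamp, description)
events = [
    (datetime(2008, 3, 14, 22, 5), "ftp.exe executed"),
    (datetime(2008, 3, 16, 10, 0), "document opened"),
    (datetime(2008, 3, 15, 8, 55), "new service installed"),
]

# Keep only activity that predates the profile's creation
earlier = sorted(e for e in events if e[0] < profile_created)
for ts, desc in earlier:
    print(ts, desc)
```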
Don't forget this little tidbit about web history located for the Default User from Rob "van" Hensing's blog. I used to see this in the SQL injection exams, where the intruder would dump wget.exe on a system, and then use that to pull down his other tools. Wget.exe would use the WinInet APIs to do its work, which would end up as "browser history"...and because the intruder was running as System-level privileges, the history would end up in the Default User account. More recently, I've seen write-ups for malware that use a "hidden" IE window...running at System privileges will leave these same artifacts.
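A quick way to check for that artifact on a mounted image is to look for history content under profiles that shouldn't have any. This sketch assumes an XP-era "Documents and Settings" layout and demos against a scratch directory standing in for the mount point:

```python
import os
import tempfile

SUSPECT_PROFILES = ["Default User", "LocalService", "NetworkService"]

def check_profiles(mount_root):
    """Return profiles under an XP-style layout that unexpectedly
    contain browser history artifacts."""
    flagged = []
    for profile in SUSPECT_PROFILES:
        history = os.path.join(mount_root, "Documents and Settings",
                               profile, "Local Settings", "History")
        if os.path.isdir(history) and os.listdir(history):
            flagged.append(profile)
    return flagged

# Demo: a scratch directory stands in for the mounted image, with an
# index.dat planted under the Default User profile
root = tempfile.mkdtemp()
planted = os.path.join(root, "Documents and Settings", "Default User",
                       "Local Settings", "History")
os.makedirs(planted)
open(os.path.join(planted, "index.dat"), "w").close()
print(check_profiles(root))  # ['Default User']
```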
Tools and Resources:
Mork file format
mork.pl - Perl script for parsing the Mork file format
NirSoft.net browser tools
Mandiant WebHistorian
FoxAnalysis - FireFox 3 browser artifact analysis
CacheBack 2.0 - Internet browser cache and history analysis (commercial)
FireFox Forensics (F3) - Forensic artifact analysis tool for FireFox
Historian - Converts browser history files to .csv...also does LNK and INFO2 files
OperaCacheView - Thanks for the link, Claus!
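Several of the FireFox 3 tools above work because Firefox 3 moved history into a SQLite database (places.sqlite, moz_places table, with last_visit_date stored in microseconds since the epoch). A minimal query sketch, using an in-memory stand-in database rather than a real profile:

```python
import sqlite3
from datetime import datetime, timezone

# Build a tiny stand-in for places.sqlite; against a real image, open the
# user's places.sqlite instead. Firefox stores last_visit_date as PRTime:
# microseconds since the Unix epoch.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE moz_places
               (url TEXT, title TEXT, visit_count INTEGER,
                last_visit_date INTEGER)""")
con.execute("INSERT INTO moz_places VALUES (?, ?, ?, ?)",
            ("http://example.com/", "Example", 3, 1219600000000000))

for url, title, count, last in con.execute(
        "SELECT url, title, visit_count, last_visit_date "
        "FROM moz_places ORDER BY last_visit_date DESC"):
    when = datetime.fromtimestamp(last / 1_000_000, tz=timezone.utc)
    print(url, title, count, when.strftime("%Y-%m-%d %H:%M:%S UTC"))
```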
Wednesday, August 20, 2008
What's in your clipboard
When I've written about performing incident response data collection (here and here), I've mentioned retrieving any available data from the clipboard. Others have mentioned the same thing. I've mentioned it as a way of collecting as complete a set of information as possible...what might appear to be the work of a Trojan may, in fact, be the doing of the user themselves. In the past, while working for a telecomm company, we found that a user was attempting to access routers using a GUI telnet app, and had copied the password he was using to the clipboard so that he could easily paste it into the GUI.
Just today, I saw this blog entry that identified malware that actually overwrites the contents of the clipboard, so that if the user pastes a URL into the address bar of the browser, the malicious one will be pasted instead.
So, just a thought...maybe it's about time for me to add this entry back into my IR script. You can obtain a copy of pclip.exe here.
Saturday, August 16, 2008
Volatility 1.3 is out!
Volatility 1.3 is out! Volatile Systems (AAron, et al.) improves upon their venerable open-source memory analysis tool with this latest version, adding capabilities for extracting executables and a process's addressable memory, as well as support for formats other than a dd-style memory dump (such as those produced via mdd). Volatility 1.3 supports memory dumps from Windows XP SP2 and SP3, and in addition, there is preliminary support for Linux memory dumps, as well.
The world of incident response is seeing changes. Incident responders have known for some time that, particularly in the face of state notification laws and compliance standards from regulatory bodies, a new dimension has been added to what we do. Simple containment and eradication...get the bad guy or malware out...is no longer sufficient, as traditional first response obviates an organization's ability to answer questions about data exfiltration. It's long been known that new procedures and methodologies are needed, and in many cases, new tools. Well, folks, Volatility is one of those tools. When combined with tools like F-Response, the speed of response is maximized, allowing for greater data protection and business preservation. With F-Response increasing a responder's ability to collect data, and Volatility increasing the breadth and depth of analysis that can be performed on the memory dumps, a brave new world is opened up for incident responders!
The need for speed (without sacrificing a thorough and accurate response) is further illustrated in this SecurityFocus article, which illustrates something that incident responders and forensic analysts see all the time...AV doesn't always work the way we think it should.
Not only that, Volatility adds to a forensic analyst's toolkit, as well. The latest version has the ability to parse crash dumps, as well as hibernation files. Thanks to Moyix for all of his current and upcoming contributions, as well as to folks such as Matthieu, Jesse, and all of those who put effort into the framework. Forensic analysts now have the capability to parse through and analyze additional historic remnants on a system.
Volatility requires Python, which for Windows systems is freely available from ActiveState.
Addendum: Yesterday, Matthieu updated both win32dd and the Sandman Framework. I tried out win32dd, and it worked like a champ! My first attempt resulted in some issues due to operator error...if you hit Ctrl-C while win32dd is running, you will get an error during every subsequent attempt to run the tool, until you reboot. However, one of my tests was to run mdd and win32dd consecutively, and the result was files of identical size. The next step is to run Volatility 1.3 against each of them and see if there are any differences in results.
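The size comparison mentioned above is easy to script, with the caveat that two consecutive acquisitions of live memory will normally differ in content even when the sizes match. This sketch uses in-memory stand-ins for the two dump files:

```python
import hashlib

def summarize(data):
    """Return (size, sha256 hex digest) for a dump held as bytes."""
    return len(data), hashlib.sha256(data).hexdigest()

# In-memory stand-ins for two consecutive acquisitions (e.g., mdd and
# win32dd); real dumps would be read from disk, ideally in chunks
dump_a = b"\x00" * 4096 + b"kernel structures"
dump_b = b"\x00" * 4096 + b"kernel structurez"

size_a, hash_a = summarize(dump_a)
size_b, hash_b = summarize(dump_b)

print("identical size:   ", size_a == size_b)   # True
print("identical content:", hash_a == hash_b)   # False
```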
Friday, August 15, 2008
NRDFI
I received an email from AccessData the other day in my work inbox, advertising something called the National Repository for Digital Forensic Intelligence, or NRDFI. According to DC3, this is a joint effort between DC3 and OSU, and it was discussed in this PDF, and in Nov, 2007, some funding was provided.
The AccessData email said that NRDFI is a "knowledge management platform for collecting and sharing digital forensic information." The email goes on to say that the repository has been seeded with over 1000 documents - examiner tips and tricks, whitepapers, digital forensic tool collections, etc.
Sounds interesting. Too bad it's completely off-limits to non-LE folks such as myself...those who have an interest and a desire to contribute, but are not sworn officers. To some extent, IMHO, while the NRDFI is definitely a step in the right direction, it's leaving out a lot of folks who can and are willing to contribute.
Data Exfiltration
Similar to copied files, one of the questions that responders and analysts get from time to time is, how do I tell what data was copied off of a system, or exfiltrated from the network?
This is not an easy question to answer, and often depends upon a number of variables. To make things a bit simpler, let's look at a couple of scenarios:
1. You show up on-site, are handed a hard drive or an image, and asked to tell if data (either any data, or specific data) was copied off of the system.
In this case, as the responder/analyst, you've got a couple of options. First, try to determine what the customer envisions with respect to how the data may have been taken. Was it copied to a USB removable storage device? Was it accessed by a remote intruder (via a backdoor or bot), or perhaps sent off of the system by the user, via web-based email or FTP? All of these may give you clues as to where to start looking for some kind of logs. In most cases, you may not find any. However, I've found indications of partial conversations and even attachments from web-based email. Also, when an incident has involved remote, unauthorized shell-level access, I've found indications within Registry hive files of the intruder accessing or searching for files.
You might also try to determine what other devices or monitoring applications may have logs available that might be of use. Firewall and router ACL logs, for instance, may give you indications of untoward outbound connections. In the absence of those logs, you could hope that someone captured some network traffic.
A note on logs: Some logs may include data about an outbound connection, to include the total number of packets transferred. While this might be an indicator that something may have been sent over the network connection, you will still be unable to determine what actual content was sent from the packet size alone. True, traffic analysis might give you an indication, particularly when combined with testing of malware (if malware was the root of the issue) in a virtual environment, but the fact of the matter is that knowing that three guys just ran by your house doesn't tell you the color of their shirts, or their names.
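To illustrate that point, here's a quick sketch of the kind of log triage I'm talking about. The log format, addresses, and threshold below are all made up for the example; adjust them to whatever your firewall or router actually produces. Note that the output is volume and destination only...nothing here tells you what content left the network:

```python
# Sketch: summarize outbound byte counts from a (hypothetical) firewall log.
# Each line: timestamp, src_ip, dst_ip, dst_port, packets, bytes
log_lines = [
    "2008-08-29T10:15:02 10.1.1.5 203.0.113.9 443 120 154000",
    "2008-08-29T10:16:44 10.1.1.5 198.51.100.2 21 3000 4800000",
    "2008-08-29T10:17:01 10.1.1.8 203.0.113.9 80 15 9000",
]

# Total up bytes per (src, dst, port) tuple.
totals = {}
for line in log_lines:
    _, src, dst, port, pkts, nbytes = line.split()
    key = (src, dst, port)
    totals[key] = totals.get(key, 0) + int(nbytes)

# Flag connections that moved a suspicious amount of data outbound.
THRESHOLD = 1000000  # 1 MB...an arbitrary cutoff for the example
suspects = {k: v for k, v in totals.items() if v > THRESHOLD}
for (src, dst, port), nbytes in suspects.items():
    print("%s -> %s:%s moved %d bytes" % (src, dst, port, nbytes))
```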
If the issue in question was thought to be associated with malware found on a system, running that malware in a virtual environment may provide indications of specific files or content that the malware was looking for, but this may not always be the case. Sometimes, malware (bots, for instance) won't be programmed to look for files or content themselves, but can be commanded to do so via the C&C server, and your analysis may be predicated on a particular time window, or having access to the Internet so that the malware you're testing can connect to the C&C server.
2. You arrive on-site after a worm infection has been eradicated, and are asked to determine what data, if any, had been exfiltrated from the network.
In this case, the systems in question may still be live, but the malware may have been eradicated. Under such circumstances, you're not going to have processes to look at - running processes maintain a list of handle objects, and you may be able to get an idea of the file handles that a process has open. However, beyond that, this scenario leaves you in much the same position as scenario #1.
3. You arrive on-site, and there are systems that are infected still running, that haven't been taken off-line or cleaned, and are still generating network traffic. At this point, what data do you need to determine data exfiltration?
The best source of information for determining data exfiltration at this point is going to be network traffic captures. At this point, you can see the data itself, and using tools like Wireshark, reconstruct entire network conversations and see what data was transmitted, and from which system. You can then use words and phrases that appear to be unique to do keyword searches on the systems themselves for the actual content that was accessed.
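For the curious, what Wireshark's "Follow TCP Stream" feature does boils down to reassembling captured segments by sequence number. A toy sketch of that idea, with invented segments (working with a real capture calls for a pcap library such as scapy; the reassembled text is then a good source of those unique keywords):

```python
# Toy reassembly of one side of a TCP conversation by sequence number.
# The segments below are invented; real streams arrive out of order and
# may include gaps, overlaps, and retransmissions.
segments = [
    (20, b"secret\r\n"),
    (1,  b"USER mallory\r\n"),
    (15, b"PASS "),
]

# Sort on sequence number and concatenate the payloads.
stream = b"".join(payload for seq, payload in sorted(segments))
print(stream.decode("ascii", errors="replace"))
```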
Another good source (albeit not the best) of information will be from the hosts themselves. Using either live response tools such as handle.exe, or dumping the contents of physical memory and using Volatility to parse out the list of open file handles from each process, you will be able to see which files the process in question had open. This will give you yet another data point in determining not only data exfiltration, but also perhaps any targeted nature of malware.
To sum up, in an ideal world, when faced with the question of data exfiltration via the network, the responder should be looking for a live system with an active process, and network packet captures. For other incidents where the exfiltration mechanism isn't known, non-digital sources such as CCTV, etc., might prove to be valuable.
A word about preparedness - F-Response. Yep, that's it. Data leaving your network, particularly when you don't want it to, is a very bad thing. Determining the cause, what data is being taken, where it's going, etc. - all the important questions that need to be answered - are all questions that require a quick and accurate response to answer. Even the fastest incident responders will take time to arrive on-site, whereas if you have F-Response Enterprise Edition already installed, the response time is reduced to however long it takes a responder to log in and access the central appliance. The advantage of tools like F-Response is a faster, greener response that doesn't replace having someone come on-site, but instead augments it. This all assumes, of course, that your response plan isn't to run AV scans, delete stuff, take systems off-line, and then call someone...
F-Response: Imaging RAIDs
Matt Shannon posted an excellent article to his blog this morning called Singing in the RAID...it's a must read, folks. Take a look.
One thing that Matt quite correctly points out is that sometimes live acquisition (acquiring an image of a system while the system is running) is your only option. There are times when a customer will tell you that the system you need to image can't be taken down, so removing the drives, imaging each, and rebuilding the array is not an option. Of course, there are other issues, such as SAS drives, boot-from-SAN systems, etc., that can all put a responder in the position of having to acquire the system in a live condition. This can be done by running acquisition tools such as FTK Imager from a CD or USB removable device, or the circumstances may permit or require you to use F-Response with its built-in write-blocking capability to access the system and perform the live acquisition (imaging done with the acquisition tool of your choice).
Another advantage of using F-Response is that it obviates the need for expensive enterprise licenses.
DFRWS2008
I have to say...I'm not an avid conference goer/crasher, but I have been to a few security conferences, starting with Usenix back in '00, and including BlackHat, DefCon (presented at DefCon 9), GMU/RCFG, etc. That said, based on content, program structure, and attendees, DFRWS is hands-down THE best conference I've ever attended! The only reservation I have in saying that is that the OMFW was a workshop, but I would like to see it somehow either expanded in its own right, or included in the DFRWS conference.
Location, Venue - The conference location was great, and in this case, easy to get to as it was relatively close to my location, and to the airport. The facilities were great, and there were plenty of places to eat locally...although the food provided at the conference was really pretty good. Interestingly enough, when Cory and I checked in on Sunday, Otakon (the anime convention) was just finishing up, so there were all these kids dressed up as anime cartoon characters...which was actually pretty funny. Cory posited that DFRWS was the second nerdiest conference in town, and I think he was right! Perhaps this was sort of the unintended entertainment for the computer nerds...
Speakers - The speakers were excellent all around, and the technical committee deserves a round of applause for being able to select the papers that were presented from the range of submissions. Of course, there were a couple of papers I was particularly interested in, such as Tim Morgan's paper on recovering deleted data from within Registry hive files. Also, the first keynote address by SA Ryan Moore, on network traffic analysis of compromised POS systems, was interesting...to hear that the USSS is involved in PCI-type engagements and to what degree.
Networking - One of the best things about conferences like this is that it draws folks from within the community that you hear about or hear from online, but don't actually get to meet face-to-face...until the conference. For example, I don't live too far from Richard of TaoSecurity, but we never cross paths, and got to chat for a few minutes at the conference. The same is true for folks like Moyix, Andreas, AAron, Brian, Eoghan, Michael and many others. In some cases, I've been under the impression that some folks were like unicorns...there were emails and blogs with their names on them, but few would admit to sightings, particularly while sober...and yet, there they were! Another great benefit of the conference was for folks like Tim and JT to meet up...they'd each been working on the same thing (ie, deleted keys within Registry hive files) and neither knew that the other was working away! Having them collaborate can only be a good thing!
Forensics Rodeo - The Forensics Rodeo on Tues evening was a great time. I didn't participate, per se, although Cory did. I mostly wanted to watch and see how others go about their analysis when given materials/data, so I took notes and yelled out the instructions and questions to be answered across the table. In this case, each team was given a Windows memory dump and an image of a thumb drive, and a set of questions. Our team won...which is to say that Dr. Michael Cohen won, using Volatility and PyFlag, and the rest of us within the ECR (military acronym meaning the "effective casualty radius") won through proximity.
If I had one criticism about this event, it would be the fact that the Wharf Rat, the venue for the reception on Monday evening, was out of the one beer that I went there to try! The waitress said that the menu on the web site isn't kept up-to-date...for shame! How dare you play tricks on an old man!
Monday, August 11, 2008
Open Memory Forensics Workshop
This is the first time this workshop has been put on, but I have to say that it was a rousing success right off the starting blocks! An excellent format, excellent schedule, and excellent speakers. More importantly for me, there was a great deal of information and discussion that was either immediately practical, or would lead to something practical and useful in a hands-on manner within a relatively short period.
A couple of the big-brain take-away thoughts that came out of this 1/2 day workshop were:
There seemed to be agreement amongst the assembled panel (as well as the attendees) that open-source is the way to go with tools like memory parsing tools. Open-source allows for verification of your findings and how various items were found, transparency, as well as extensibility.
When performing memory acquisition and analysis (parsing, really), what are the essential or pertinent objects/items/elements? What parts of, say, an EPROCESS structure are absolutely essential for determining if you're looking at an actual EPROCESS structure?
The subject of anti-forensics came up as well, and a thought was that if the bad guys know what the good guys are doing, and know what important elements have been identified simply by looking at the open-source code, then they can easily come up with ways to combat those tools and techniques, and obfuscate what they're doing. This ties in with the discussion of essential/critical structure elements. For example, many of the tools that do a brute-force linear scan through a memory dump looking for EPROCESS structures look for specific elements of the structure itself in order to identify, as close as possible, a legitimate structure. Someone could obfuscate what they're doing by discovering which of those elements they can modify in order to avoid detection. Without identifying these critical elements...elements that cannot change without crashing the system...this relatively new area of memory analysis is more open to anti-forensic and obfuscation techniques. However, Jesse pointed out something very important...a preponderance of anti-forensics and obfuscation activity (i.e., the over-abundance or relative lack of artifacts that an examiner would expect to see) should be a clear indicator to the examiner that something is amiss.
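To make the brute-force scanning idea concrete, here's a miniature sketch: slide through the dump checking a structure's supposedly invariant fields. To be clear, the layout, tag, and size values below are invented for the demo...this is not the real EPROCESS layout, just the shape of the technique:

```python
import struct

# Toy "process structure": 4-byte type tag, 4-byte size, 16-byte name.
# A real scanner checks fields such as those in the DISPATCHER_HEADER.
TAG, SIZE = 0x03, 0x1B

def scan(dump):
    """Return (offset, name) for every candidate structure in the buffer."""
    hits = []
    for off in range(0, len(dump) - 24):
        tag, size = struct.unpack_from("<II", dump, off)
        if tag == TAG and size == SIZE:
            name = dump[off + 8:off + 24].split(b"\x00")[0]
            hits.append((off, name.decode("ascii", "replace")))
    return hits

# Build a fake memory dump with one structure buried in filler bytes.
fake = b"\x90" * 100 + struct.pack("<II16s", TAG, SIZE, b"cmd.exe") + b"\x90" * 50
print(scan(fake))  # one hit at offset 100
```

An attacker who knows exactly which fields a scanner checks can flip the noncritical ones and slide right past it, which is why identifying the fields the OS itself cannot live without matters so much.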
Also, Jesse used the term "tool marks" in his presentation...from what I saw, it sounded like "artifacts" to me, albeit specific to a particular "tool". This can be an important tool (I need to discuss this interesting topic w/ Jesse some more...) in that it can be a very useful data reduction tool to assist the examiner in identifying unusual things. For instance, something that came out of the DFRWS2008 Forensic Rodeo was that an unusual string in memory may indicate the use of TrueCrypt.
Overall, the quality of the presentations and speakers, as well as the panels, made for an excellent workshop! My hat's off to AAron and everyone else who put their time and effort into this event! It was great to finally meet folks like Moyix, Andreas, and have a chance to listen to their thoughts, and thank them. I hope to see this workshop again next year!
Friday, August 08, 2008
File Associations
Now and again, I'll see posts in the lists asking questions about files based on the names and/or the extensions of the files. If the name is unique, sometimes an analyst can find out quite a bit about the file by doing a Google search. This should not be considered the be-all and end-all, however...too many folks have tottered down the wrong path based on what they've found in this manner.
[digression=momentary]I can't tell you how many times I've walked into an engagement where the other party's entire "analysis" consisted solely of performing a Google search![/digression]
So let's say that you find several files on a Windows system with a particular extension that you, as an examiner, don't recognize. Sure, many of us recognize ".doc" and ".txt", and right off-hand, we know what programs these files are associated with. In some cases, these programs (most often MSWord and Notepad, respectively) are used to create files with these extensions, and we know from experience that when we receive one of these files and double-click it, the operating system will automatically open the file using a specific application. How does Windows know how to do this?
This information is maintained in...wait for it...wait for it....that's right, the Registry!! Through normal usage on a system, a user might interact with these areas of the Registry through a context menu (specifically, "Open With..."), and an uber-user might actually use ftype and assoc at the command line. However, if you've only got an image of a system, how would you very quickly attempt to determine which application was associated with a particular file extension?
So, let's say that we want to determine the file association for the ".doc" extension. We can start on a live system by typing the following command:
C:\>assoc .doc
.doc=Word.Document.8
Okay, that's pretty straightforward. The same command works for PDFs, etc. In order to find out which application specifically is used to open the file when the user double-clicks it, we can run ftype against the file type that assoc returns (here, the type for PDFs):
C:\>ftype AcroExch.Document
AcroExch.Document="C:\Program Files\Adobe\Reader 8.0\Reader\AcroRd32.exe" "%1"
Pretty neat, eh? Okay, great. So how do we find this in an image of a system? You can start by going to the following key:
HKLM\Software\Classes
Beneath this key, there are a LOT of subkeys, and most of the first ones that you will see are the file extensions. Scroll down to ".doc", and you'll see that the value named "(Default)" within the key contains the name that we saw above when we ran the "assoc" command. In some cases, the extensions won't have a great deal of information (subkeys and values), but in others (as with ".doc"), you'll see a number of entries. Now, using "Word.Document.8", keep scrolling down beneath the Classes key, all the way down until you find that name. Once you find the Registry key with that name, you then want to find the "shell\open\command" subkey, and within that subkey, the "(Default)" value. So, at this point, the full Registry path that we should be at is:
HKLM\Software\Classes\Word.Document.8\shell\open\command
The "(Default)" value within this subkey will give you the path to and command line for the application used to open the files with the extension we started with.
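The two-step lookup (extension to file type, file type to command line) is easy to model in a few lines of code. The dictionary below stands in for the Classes key, and the WINWORD.EXE path is purely illustrative; on an actual image you'd pull these values out of the Software hive with a hive parser (or a RegRipper plugin):

```python
# Toy model of HKLM\Software\Classes: extension -> file type -> open command.
# The command-line value here is made up for the example.
classes = {
    ".doc": {"(Default)": "Word.Document.8"},
    "Word.Document.8": {
        "shell\\open\\command": {
            "(Default)": '"C:\\Program Files\\Microsoft Office\\OFFICE11\\WINWORD.EXE" /n "%1"',
        },
    },
}

def open_command(ext):
    """Mimic 'assoc' + 'ftype': resolve an extension to its open command."""
    progid = classes[ext]["(Default)"]                 # the assoc step
    cmd_key = classes[progid]["shell\\open\\command"]  # the ftype step
    return progid, cmd_key["(Default)"]

progid, cmd = open_command(".doc")
print(progid)  # Word.Document.8
print(cmd)
```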
Pretty cool stuff, eh?
Oh, and don't forget to check the NTUSER.DAT files, as well, for the same paths.
Addendum: I created a Software hive plugin for RegRipper, but it can take a while to run, so I'd recommend running it with rip.exe instead. Want to know the default web browser on the system? Check the entries for HTTP and HTTPS. Also, I created a RegRipper plugin for the FileExts key in the NTUSER.DAT file.
Thursday, August 07, 2008
Upcoming Events
It seems that one of my partners-in-crime and I will be attending a couple of events together this year...stay tuned for some good times!
OMFW - Open Memory Forensics Workshop, 10 Aug 2008, Baltimore - AAron's putting on a great workshop on the subject, which is pretty cool, considering he's one of the guys who's creating the absolute bleeding edge in this area. There are some big names, not only in this field, but in the field of forensic analysis, who will be attending. So, bring your cameras and dollar bills, and see if you can get guys like Mike...excuse me, Dr. Michael...Cohen to sign various body parts! ;-) Be sure to say hi to Jesse, too!
DFRWS - Digital Forensics Research Workshop, 11-13 Aug 2008, Baltimore - DFRWS is always a great conference, or so I've been told. This will be my first (hopefully not my last) time attending this conference, and the lineup of speakers and presentations is very impressive. I'm particularly looking forward to presentations regarding Registry analysis, such as Tim Morgan's Recovering Deleted Data from the Windows Registry.
Don't forget to drop by the Wharf Rat for the reception on Monday, and enjoy a little hot monkey love!
SANS Forensic Summit - 13-14 Oct 2008, Las Vegas - Rob Lee is really making 2008 the year for forensic conferences with this one! There is already an awesome list of speakers, which makes me wonder why I'm speaking! ;-) Hey, if you can't find something interesting to listen to, come watch me mutter my way through something about the Windows Registry! This summit is turning out to be less of a speaker's conference, and more of a practitioner's workshop...some of the topics that are going to be addressed are along the lines of what works and what doesn't, from the folks who are doing the do!
These are THE MUST ATTEND events for 2008...for no other reason than the fact that The Cory Altheide will be there! Hey, that's why I'm going!
Tuesday, August 05, 2008
MRT
The SANS Internet Storm Center had an interesting post the other day about the MS Malicious Software Removal Tool (aka, MRT). What I took away from the post is that KB 891716 says that whenever MRT is run, the "Version" value is updated with a new GUID. This information can be compared to the list of GUIDs from that same KB article, and correlated against the MRT.log file itself. KB 890830 contains a list of malicious software that MRT is intended to protect against.
From a forensic analysis perspective, this provides some good information with respect to malware that may or may not be on the system.
I put together a quick RegRipper plugin to address this key, and when run via rip.exe, the output looks as follows:
C:\Perl\forensics\rr>rip -r d:\cases\lenovo\software -p mrt
Launching MRT v.20080804
Key Path: Microsoft\RemovalTools\MRT
LastWrite Time Wed Jan 9 22:28:00 2008 (UTC)
Version: 330FCFD4-F1AA-41D3-B2DC-127E699EEF7D
Analysis Tip: Go to http://support.microsoft.com/kb/891716/ to see when MRT was last run. According to the KB article, each time MRT is run, a new GUID is written to the Version value.
If you check KB 891716 for the above listed GUID, you'll see that it corresponds to Jan 2008, which correlates to the LastWrite time for the key itself. By checking the chart in the KB, you can see the malware that the system is supposed to be protected against.
Notice I've added an "Analysis Tip" to this plugin. I've also included some additional information in the header to the plugin itself, which is simply a text-based file that can be opened in any editor...much like Nessus plugins.
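The GUID-to-release correlation the plugin's Analysis Tip describes can be sketched quickly. The single table entry below is taken from the example output above (the Jan 2008 GUID); this is not the plugin itself, just an illustration, and in practice you'd populate the table from the full list in KB 891716.

```python
# Map MRT "Version" GUIDs to release months, per KB 891716.
# Only the January 2008 entry comes from the example output above;
# populate the rest from the KB article itself.
MRT_RELEASES = {
    "330FCFD4-F1AA-41D3-B2DC-127E699EEF7D": "January 2008",
}

def mrt_last_run(version_guid):
    """Translate the Version value GUID into the MRT release last executed."""
    return MRT_RELEASES.get(version_guid.upper(),
                            "unknown GUID - check KB 891716")

print(mrt_last_run("330FCFD4-F1AA-41D3-B2DC-127E699EEF7D"))
```

Comparing the translated release month against the key's LastWrite time gives you the same cross-check described above.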
Monday, August 04, 2008
ProDiscover 5 is out!
Technology Pathways announced today that ProDiscover 5.0 is now available! Updates include:
- Added the ability to investigate, extract, and report on Microsoft client email formats including Outlook and Outlook Express.
- Added the ability to read and add E01 (Expert Witness) formatted images.
- Added UNICODE support and localization for Japanese and Chinese character sets.
- Improved Microsoft Vista support for remote agent and client.
- Improved overall file I/O and Hashing performance.
- Fixed issue with random crash during content searches of specifically formatted images.
- Fixed long path issue affecting extraction of very long path items of interest from images.
If you're not a licensed user of ProDiscover, you can try out the Free Basic Version.
I've enjoyed using ProDiscover IR for a number of years now, to the point of using FTK Imager to convert EnCase .E0x files to dd format (which, apparently, I would no longer need to do) in order to open the case in ProDiscover. PD uses Perl as its scripting language, which, for me, really rocks! The interface for PD 5 hasn't seen any radical changes, which is good...firing it up for the first time, I saw a lot of the familiar settings, with some new additions, particularly the Email Viewer.
Sunday, August 03, 2008
The Question of "whodunnit?"
One of the questions that comes up from time to time during an examination is "whodunnit?" Take an examination involving, let's say...illicit images. The accused claims that they didn't do it, so the question becomes, who did? Sometimes the answer might be that someone else sat at the keyboard of the computer and performed the actions that led to the images being on the system, and in other cases, the answer might be that the images were the result of a remote attacker/hacker or malware. The latter is sometimes referred to as the Trojan Horse Defense, or malware defense. In 2003, this guy used the malware defense and was acquitted of breaking into gov't computer systems...his claim was that someone had hacked his system and then launched the attack. From the article:
A forensic examination of Mr Caffrey's PC had found no trace of a hidden program with the instructions for the attack.
And yet, he was still acquitted. Since then, this has been a concern to many a forensic examiner and law enforcement officer...what happens if the accused makes that claim? How can a forensic examination corroborate that claim, or disprove it altogether?
First, let me say that there is no 100% certainty in every examination that any or all of these techniques are going to work. There are certain things that computer forensic analysis will not reveal...one of them being things that simply are not there (i.e., an analyst cannot find CCNs or artifacts of an intrusion if there simply are none to be found). However, what I'd like to discuss is some of the finer points of technical forensic analysis that can provide a good deal of information, such that the proper authorities (counsel, jury, etc.) have a better foundation on which to base their decision.
One of the areas of incident response that we're moving into fairly rapidly now...this area has been picking up steam a good bit lately since its introduction in 2005...is the collection and analysis of physical memory. In just about a week, the OMFW will occur and there will be a good number of folks presenting on and discussing this topic. There are a number of tools available now that will allow a responder to dump the contents of physical memory from a Windows system (XP, as well as Vista), and then analyze that dump...locate running processes, network connections, etc. In addition, PTFinder (Andreas appears to be attending OMFW) may still allow the examiner to identify exited processes (lsproc does this for Windows 2000 and can be ported to other versions of Windows)...procL from ScanIT appears to do something similar. A number of other articles provide information on retrieving image files, Registry keys, and even Event Log records from a physical memory dump. Further, a recent print issue of Linux Pro Magazine has an article on pg. 30 entitled Foremost and Scalpel: Restoring deleted files, in which the authors state, "Foremost and Scalpel ignore the filesystem and can even restore data from RAM dumps and swap files."
Within the realm of computer forensic analysis, there are a number of areas of a Windows system in which artifacts indicating user activity may be found. These go beyond the traditional examination of browser history artifacts, etc., and can provide indications of user activity, as well as historical indications of when the user was logged into the system. Windows Event Log and Registry analysis are two of these areas, along with the overall correlation of artifacts from different parts of the system...the more artifacts that the examiner is able to pull together, the more complete a picture that can be developed.
For example, for a backdoor to be useful to an intruder, it has to remain persistent (see Jesse Kornblum's paper, Exploiting the Rootkit Paradox with Windows Memory Analysis) across reboots. There are only so many ways this can occur, with persistent stores being primarily within the Registry and the file system. Using tools such as RegRipper, the persistence mechanisms within the system and user Registry hive files can be displayed, and the file system persistence mechanisms can be viewed, as well, for any indications of suspicious entries.
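One simple triage pass over autostart locations can be sketched as follows. This is only an illustration of the idea, not RegRipper's implementation: the Run key values below are hypothetical, and the "suspicious" heuristic (a command line launching from a user-writable Temp directory) is just one of many traits an examiner might flag.

```python
# Sketch: scan autostart (Run key) values for entries launched from
# user-writable temp locations, one common trait of malware persistence.
# The sample values below are hypothetical, for illustration only.
import re

SUSPICIOUS_PATHS = re.compile(r"\\(Temp|Local Settings\\Temp)\\", re.IGNORECASE)

def flag_suspicious(run_values):
    """Return autostart entries whose command line points into a temp dir."""
    return {name: cmd for name, cmd in run_values.items()
            if SUSPICIOUS_PATHS.search(cmd)}

run_key = {
    "ctfmon.exe": r"C:\WINDOWS\system32\ctfmon.exe",
    "updater":    r"C:\Documents and Settings\user\Local Settings\Temp\svch0st.exe",
}
print(flag_suspicious(run_key))
```

In practice you'd feed this from the parsed hive rather than a hard-coded dict, and extend the heuristics, but the point stands: enumerating the finite set of persistence locations is a tractable, repeatable check.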
The Windows Registry holds a wealth of information about software applications installed on the system. Some of this information differentiates between those apps run by the user, and those run automatically by the system. In addition, the applications themselves have traces and artifacts...antivirus applications generally maintain configuration information in addition to log files. The Windows firewall installed on XP and above maintains its configuration information in the Registry. Many GUI applications...including image and movie viewing apps...maintain lists of files that have been opened and viewed by those applications.
Knowing where to look and what to look for can give the analyst the ability to paint a very detailed picture of what occurred on the system. Windows XP is something of a fickle lover, as it will provide the knowledgeable examiner with a wealth of information, while at the same time using its own inherent anti-forensic techniques to deprive more traditional examiners of those artifacts on which they traditionally rely. Remember Harlan's Corollary to the First Law of Computer Forensics?
The great thing about all this is that while it may appear to be magical, requiring knowledge beyond the reach of all but a few individuals...that's simply not the case at all. All of this can be incorporated into the examiner's forensic analysis process and methodology.
So, the question isn't, was there or wasn't there a Trojan or backdoor on the system that was responsible for this activity...it's now, do you want to answer the Trojan Defense before you walk into the interview room with the defendant and their attorney?
Resources:
Ex Forensis post