F-Response
I know I've mentioned this already, but this one is worth repeating...Matt's up to v3.09.07, which includes support for access to physical memory on x64 systems, and a COM scripting object for Windows systems.
I have to say, back when Chris opted to add Perl as the scripting language for ProDiscover, I was really excited, and released a number of ProScripts for use. Matt's really big on showing how easy it is to use F-Response (he's incredibly responsive, and has released a series of Mission Guides), and has posted a number of samples that demo the ability to script tasks with F-Response EE through VBScript, Python, and yes...Perl! My favorite part of his demos is where he has the comment, "Do Work". ;-)
Last year, Matt released the FEMC, taking the use of F-Response EE to an entirely new level, and with the release of the scripting object, he's done it again. Imagine being able to provide a list of systems (and the necessary credentials) to a script, and sitting back to allow it to run; reach out to each system, perform some sort of RegRipper-like queries of the system, look for (and possibly copy) some files...and then you come back from doing other work to view a log file.
I've done some work, with Matt's help, to get a Perl version of his script working against a Windows system, just to get familiar with some of the functions his COM object exposes. I have VMware Workstation and a couple of Windows VMs available, and the biggest issues I ran into were having the firewall running (turn it off for testing), and an issue with connecting to remote shares, even default ones...I found the answer here. I made the change described just so that I could map a share for testing, and then found out, by flipping the setting from "Classic" back to "Guest Only", that an untrapped error would be thrown. Once I had the F-Response License Manager running on my analysis system and the adjustment made on my target test system, the script ran just fine and returned the expected status.
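For reference, that "Classic"/"Guest Only" setting (the "Network access: Sharing and security model for local accounts" policy) lives in the ForceGuest value under the Lsa key, with 0 meaning Classic and 1 meaning Guest only. A quick way to check it from Perl...this is just a sketch, and assumes Win32::TieRegistry's default behavior of handing back REG_DWORD data as a "0x..." hex string:

#! c:\perl\bin\perl.exe
# Sketch: check the XP sharing/security model setting
# ForceGuest = 0 -> Classic, 1 -> Guest only
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $lsa = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Control/Lsa/"}
    or die "Could not access the Lsa key: $^E\n";

my $fg = hex($lsa->{"/forceguest"});
print "ForceGuest = ".$fg." (".($fg ? "Guest only" : "Classic").")\n";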
The script I wrote, with Matt's help, runs through the process of connecting to the remote system, installing F-Response, running it, enumerating targets, and then uninstalling F-Response. The output of the script looks like:
Status for 192.168.247.131: Avail, not installed
Status for 192.168.247.131: Installed, stopped
Status for 192.168.247.131: Installed, started
iqn.2008-02.com.f-response.harlan-202f63d5:disk-0
iqn.2008-02.com.f-response.harlan-202f63d5:vol-c
iqn.2008-02.com.f-response.harlan-202f63d5:pmem
Status for 192.168.247.131: Avail, not installed
I should note that in order to get pmem as a target, I had to open FEMC and check the appropriate box in the Host Configuration dialog.
As you can see, the script cycled through the various states with respect to the target system, from the system being available with no F-Response installed, to installing and viewing targets, to removing F-Response. With this, I can now completely automate accessing and checking systems across an enterprise; connect to systems, log into the appropriate target (vol-c, for example), and as Matt says in his sample scripts, "do work". Add in a little RegRipper action, maybe checking for files, etc.
Awesome stuff, Matt! For more awesome stuff, including a video and Mission Guide for the COM scripting object, check out the F-Response blog posts!
Awesome Sauce Addendum: The script can now access/mount the volume from the remote system, then use some WMI and Perl hash magic to get the mounted drive letter on the local system, and then determine whether Windows\system32 can be found. Pretty awesome sauce!
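For anyone curious, the "WMI and Perl hash magic" amounts to snapshotting the logical drive letters before and after logging into the target, and diffing the two hashes to see which letter appeared. Here's a stripped-down sketch of just that piece (not Matt's code, and not the full script), using Win32::OLE against local WMI:

#! c:\perl\bin\perl.exe
# Sketch: find the drive letter a newly-mounted volume received, then check
# for Windows\system32; assumes Win32::OLE and local Administrator rights
use strict;
use Win32::OLE qw(in);

sub get_drives {
    my %drives = ();
    my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
        or die "WMI connect failed: ".Win32::OLE->LastError()."\n";
    foreach my $d (in $wmi->InstancesOf("Win32_LogicalDisk")) {
        $drives{$d->{DeviceID}} = 1;
    }
    return %drives;
}

my %before = get_drives();
# ...log into the F-Response target (vol-c, for example) here...
sleep(5);
my %after = get_drives();

foreach my $letter (keys %after) {
    next if (exists $before{$letter});
    print "New volume mounted as ".$letter."\n";
    print $letter."\\Windows\\system32 found - looks like a system volume.\n"
        if (-d $letter."\\Windows\\system32");
}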
Documents
Didier has updated his PDFiD code recently to detect and disarm the "/Launch" functionality. Didier mentions that this technique is becoming more prevalent, something important for analysts to note. When performing root cause analysis, we need to understand what's possible with respect to initial infection vectors. Many times, the questions that first responders need to answer (or assist in answering) include how did this get on the system, and what can we do to prevent it in the future? Many times, the actual malware is really the secondary or tertiary download, following the initial infection through some sort of phishing attack.
Over on the Offensive Computing blog, I saw that JoeDoc, a novel runtime analysis system for detecting exploits in documents such as PDF and DOC files, has been released in beta. Check out JoeDoc here...it currently supports the PDF format.
If you're trying to determine if there's malware in Flash or JavaScript files, you might want to check out wepawet.
Viewers
In the first edition of the ITB, Don wrote an article on potential issues when using an Internet-connected system to view files in FTK Imager. John McCash posted to the SANS Forensic Blog recently regarding a similar topic that was presented at ThotCon.
Okay, so what's the issue here? Well, perhaps rather than making artifacts difficult to find, an intruder could put out a very obviously interesting artifact in hopes that the analyst will view it, resulting in the infection of or damage to the analysis system.
Also check out iSEC's Breaking Forensic Software paper...
Friday, April 30, 2010
Friday, April 23, 2010
Links...and whatnot
There seems to be a theme to this post...something along the lines of accessing data through alternate means or sources...and whatnot...
Blog Update - Mounting EWF files on Windows
Over in the Win4n6 Yahoo group, a question was posted recently regarding mounting multiple (in this case, around 70) .E0x files, and most of the answers involved using the SANS SIFT v2.0 Workstation. This is a good solution; I had posted a bit ago regarding mounting EWF (Expert Witness Format, or EnCase) files on Windows, and Bradley Schatz provided an update, removing the use of the Visual Studio runtime files and using only freely available tools, all on Windows.
ImDisk
Speaking of freeware tools, if you're using ImDisk, be sure to get the updated version, available as of March of this year. There have been some updates to allow for better functionality on Windows 7, etc.
Also, FTK Imager (as well as the Lite version) is up to version 2.9.
...Taking Things A Step Further...
If you've got segmented image files, as with multiple .E0x or raw/dd format .00x files, and you want to get file system metadata for inclusion in a timeline, you have a number of options available to you using freely available tools on Windows.
For the raw/dd format files, one option is to use the 'type' command to reassemble the image segments into a full image file. Another option...whether you've got a VMWare .vmdk file, or an image composed of multiple EWF or raw/dd segments...is to open the image in FTK Imager. Once the image is open and you can see the file system, you can (a) re-acquire the image to a single, raw/dd format image file, or (b) export a directory listing.
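The 'type' (or copy /b) approach is really just a binary concatenation of the segments in order; if you'd rather script it, a minimal Perl sketch (assuming sequentially numbered raw/dd segments...this will not work for EWF segments, which are compressed) would look something like:

#! c:\perl\bin\perl.exe
# Sketch: concatenate raw/dd image segments (image.001, image.002, ...)
# into a single image file
use strict;

my $base = shift || die "Usage: reassemble.pl <basename (e.g., image)>\n";
my @segs = sort(glob($base.".0*"));
die "No segments found for ".$base."\n" unless (@segs);

open(OUT, ">", $base.".dd") || die "Could not open ".$base.".dd: $!\n";
binmode(OUT);
foreach my $seg (@segs) {
    open(SEG, "<", $seg) || die "Could not open ".$seg.": $!\n";
    binmode(SEG);
    my $buf;
    print OUT $buf while (read(SEG, $buf, 1048576));
    close(SEG);
    print STDERR "Added ".$seg."\n";
}
close(OUT);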
You can also use FTK Imager to export file system metadata from live systems, but this can be a manual process, as you have to add the physical drive via the GUI, etc. This process may be a bit more than you need. To meet the needs of a live IR script, I created a CLI tool called mt.exe (short for MACTimes) that is a compiled Perl script. Mt.exe will get the MAC times of files in a directory, and can recurse directories...it will also get MD5 hashes (it gets the MAC times before computing a hash) for the files, and has the option to output everything in TSK v3.x bodyfile format. I plan to use this to get file listings for specific directories, in order to optimize response and augment follow-on analysis.
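Mt.exe is a compiled Perl script; for anyone who'd rather roll their own, the core of the idea (a rough sketch only...this is not the mt.exe source) is File::Find plus stat() plus Digest::MD5, written out in the TSK v3.x bodyfile format (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime):

#! c:\perl\bin\perl.exe
# Rough sketch of an mt.exe-style directory walker that writes TSK v3.x
# bodyfile output: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
use strict;
use File::Find;
use Digest::MD5;

my $dir = shift || die "Usage: mactimes.pl <directory>\n";
find({wanted => \&process, no_chdir => 1}, $dir);

sub process {
    my $file = $File::Find::name;
    return unless (-f $file);
    # Grab the times *before* hashing, so reading the file for the hash
    # doesn't update the last access time out from under us
    my ($size, $atime, $mtime, $ctime) = (stat($file))[7, 8, 9, 10];
    my $md5 = 0;
    if (open(FH, "<", $file)) {
        binmode(FH);
        $md5 = Digest::MD5->new->addfile(*FH)->hexdigest();
        close(FH);
    }
    # stat() doesn't give us an MFT entry number or meaningful UID/GID on
    # Windows, so those fields are zeroed; on Win32, Perl reports the file
    # creation time in the ctime slot, so it's reused for the crtime field
    print join("|", $md5, $file, 0, 0, 0, 0, $size, $atime, $mtime, $ctime, $ctime)."\n";
}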
Into The Shadows
Lee Whitfield posted his SANS EU Forensics Summit presentation, Into The Shadows, for your listening/viewing pleasure. In the presentation, Lee presents what he refers to as "the fourth way" to analyze Volume Shadow Copies. Watching the video, it appears that Lee is deciphering the actual files created by the Volume Shadow Copy Service, and using that information to extract meaningful data.
You should also be able to work with Volume Shadow Copies as we discussed earlier, but like Lee says (and was mentioned by Troy Larson), if you're going to image the entire VSC, you're going to need to have additional space available. However, what if you were to mount the VSC in question and only extract selected files? Sure, this would require knowledge of what you were attempting to achieve and how you'd go about doing it, but you wouldn't require the additional space, and you would still have the VSC available to be mounted later, if need be.
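On a live Vista or Windows 7 system, the shadow copies can be enumerated through WMI (the Win32_ShadowCopy class); each one exposes a DeviceObject path that can be linked to with mklink /d (don't forget the trailing backslash) and then have selected files copied out of it. A quick Perl sketch of the enumeration piece:

#! c:\perl\bin\perl.exe
# Sketch: list Volume Shadow Copies on a live Vista/Win7 box via WMI; each
# DeviceObject can then be symlinked (mklink /d C:\vsc1 <DeviceObject>\)
# and selected files copied out, rather than imaging the entire VSC
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    or die "WMI connect failed: ".Win32::OLE->LastError()."\n";

foreach my $sc (in $wmi->InstancesOf("Win32_ShadowCopy")) {
    print "ID          : ".$sc->{ID}."\n";
    print "InstallDate : ".$sc->{InstallDate}."\n";
    print "DeviceObject: ".$sc->{DeviceObject}."\n\n";
}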
EVTX Parsing
SANS has postponed the Forensics Summit in London due to the Krakatoa-like volcanic eruption that has been obscuring the airspace over Europe. As such, Andreas has posted his slides regarding Vista (and above) Event Log format. Very cool stuff, and very useful.
Articles
Finally, Christa pointed me to an interesting article at CSOOnline about how fraud is no longer considered by banks and financial institutions to be just a cost of doing business. Very interesting, and it demonstrates how incident preparation, detection, and response are becoming more visible as business processes.
From that article, I found a link to another article, this one discussing the basics of incident detection, response and forensics with Richard Bejtlich. Very much well worth the read...
Wednesday, April 14, 2010
More Links
RegRipper in Use
Simon posted to the Praetorian Prefect blog recently regarding WinPE. In his post, Simon described installing and using RegRipper from the Windows Forensic Environment (WinFE), which is built on WinPE. Very cool! RegRipper (well, specifically ripXP) was designed for XP System Restore Point analysis, and RegRipper itself has been used via F-Response, and on an analysis system to extract Registry data from mounted Volume Shadow Copies.
F-Response
Speaking of WinFE, Matt posted recently on creating a WinFE bootable CD with F-Response pre-installed! Matt shows you how to use Troy's WinFE instructions to quickly make the installation F-Response ready! Along with the Linux bootable CDs Matt's put together, it looks like he's building out a pretty complete set.
XP Mode Issues
I found this on SecuraBit, describing how the installation and use of XP Mode in Windows 7 exposes the system to XP OS vulnerabilities, as well as to vulnerabilities in applications run through XP Mode. This is going to be a bit more of an issue now that MS has removed the hardware virtualization requirement for XP Mode, making it accessible to everyone. Core Security Technologies announced a VPC hypervisor memory protection bug...wait, is that bad?
Well, I like to look at this stuff from an IR/DF perspective. Right now, we've got enough issues trying to identify the initial infection vector...now we've got to deal with it in two operating systems! I've installed XP Mode, and during the installation, the XP VM gets these little icons for the C: and D: drives on my host...hey, wait a sec! So XP can "see" my Windows 7 drives? Uh...is that bad?
Installing and using Windows 7 isn't bad in and of itself...it's really no different from when we moved from Windows 2000 to XP. New challenges and issues were introduced, and the IT community, as well as those of us in the IR/DF community, learned to cope. In this case, IT admins need to remain even more vigilant, because now we're adding old issues in with the new...don't think that we've closed the hole by installing Windows 7, only to be running a legacy app...with its inherent vulnerabilities...through XP Mode.
Volume Shadow Copies
Found this excellent post over on the Forensics from the Sausage Factory blog, detailing mounting a Volume Shadow Copy with EnCase and using RoboCopy to grab files. Rob Lee has posted on creating timelines from Volume Shadow Copies, and accessing VSCs has been addressed several times (here, and here). With Vista and now Windows 7 systems becoming more pervasive (a friend of mine in LE has already had to deal with a Windows 7 system), accessing Volume Shadow Copies is going to become more and more of an issue...and by that I mean requirement and necessity. So it's good that the information is out there...
MFT Analysis
Rob Lee posted to the SANS Forensic Blog regarding Windows 7 MFT Entry Timestamp Properties. This is a very interesting approach, because there's been some discussion in other forums, including the Win4n6 Yahoo group, around using information from the MFT to create or augment a timeline. For example, using most tools to get file system metadata, you'll get the entries from the $STANDARD_INFORMATION (SIA) attribute, but the information in the $FILE_NAME (FNA) attribute can also be valuable, particularly if the creation dates are different.
When tools are used to alter file time stamps, you'll notice the differences in the SIA and FNA time values, as Lance pointed out. Brian also mentions this like three times in one of the chapters of his File System Forensic Analysis book. So, knowing how various actions can affect file system time stamps can be extremely important to creating or adding context to a timeline, as well as to the overall analysis.
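As a toy example of the sort of check this enables...assuming you've already pulled the two creation times out of the MFT entry with your parser of choice, as Unix epoch values:

# Toy check, given already-parsed creation times (Unix epochs): an SIA
# creation time that predates the FNA creation time can indicate that
# the SIA time stamps were altered (timestomping)
sub check_creation_times {
    my ($sia_crtime, $fna_crtime) = @_;
    return "Possible time stamp manipulation: SIA creation predates FNA creation"
        if ($sia_crtime < $fna_crtime);
    return "SIA/FNA creation times consistent";
}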
The Future
Rob's efforts in this area got me to thinking...how long will it be before forensic analysts sit down at their workstation to begin analysis, and have a Kindle or an iPad or similar device right there with them, to assist with their analysis workflow? Given the complexity and variety of devices and operating systems, it would stand to reason that an organization would have a workflow with supporting information (docs like what Rob's putting together, etc.), possibly even online in one location. The analyst would access the online (internal, of course) app, enter their information, and begin their case, and they would be presented with the workflow, processes, and supporting information. In fact, something like that could also provide for case management, as well as case notes and collaboration, and even ease reporting.
Is the workflow important? I'd suggest that yes, it is...wholeheartedly. I've seen a number of folks stumble over what they were looking for and spend a lot of time doing things that really didn't get them any closer to their goals...if they had and understood their goals! This would not obviate the need for training, of course, particularly in basic skills, but having some kind of Wiki-ish framework with a workflow template for an analyst to follow would definitely be beneficial...that is, aside from its CSI coolness (I still yell, "Lt Dan!" at the TV whenever Gary Sinise comes on screen in CSI:NY).
Sunday, April 11, 2010
Links...and whatnot
Security Ripcord
Don posted recently on his experiences attending Rob Lee's SANS SEC 508 course. Don has some very interesting insights, so take a look. Read what he says carefully, and think about it before reacting based on your initial impression. Don's an experienced responder whom I have the honor of knowing, and the pleasure of having worked with...he's "been there and done that", likely in more ways than you can imagine. Many times when we read something from someone else, we'll apply the words to our own personal context, rather than the context of the author...so when you read what Don's said, take a few minutes to think about what he's saying.
One example is Don's statement regarding the court room vs. the data center. To be honest, I think he's absolutely right. For far too long, what could possibly go on in the court room has been a primary driver for response, when that shouldn't be the case. I've seen far too many times where someone has said, "I won't do live response until it's accepted by the courts." Okay, fine.
Another one that I see a lot is the statement, "...a competent defense counsel could ask this question and spread doubt in the mind of the jury." Ugh. Really. I saw on a list recently where someone made that statement with respect to using MD5 hashes to validate data integrity, and how a defense attorney could bring up "MD5 rainbow tables". Again...ugh. There are more issues with this than I want to go into here, but the point is that you cannot let what you think might happen in court deter you from doing what you can, and what's right.
DFI Newsletter
I subscribe to the DFI Newsletter, and I found a couple of interesting items in the one I received on Fri, 9 April. Specifically, one of my blog posts appears in the In The Blogs section. Okay, that was pretty cool!
Also, there was a link to an FCW article by Ben Bain regarding how Bill Bratton "said local police departments have been behind the curve for most of their history in tackling computer-related crime and cybersecurity" and that "it's a resource issue."
I know a couple of folks who have assisted local LE in their area, and that seems to be something of a beneficial relationship, particularly for LE.
File System Tunneling
Okay, this is a new one on me...I ran across this concept on a list recently, and thought I'd look into it a bit. In short, it seems that there's long been functionality built into NTFS that allows, under specific conditions and for a short period of time (the default is 15 seconds), file metadata (specifically, the file creation time) to be reused. That is, if a file with a specific name is deleted, and another file with the same name is created in that directory within 15 seconds, the first file's metadata will be reused. Fortunately (note the sarcasm...), this window can be extended, or the functionality disabled entirely.
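Per MS KB 172190, the tunneling behavior is controlled by two values under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem: MaximumTunnelEntryAgeInSeconds (extend or shrink the 15-second window) and MaximumTunnelEntries (set to 0 to disable tunneling entirely). A quick check from Perl (again, a sketch...the values won't exist unless someone has explicitly added them):

#! c:\perl\bin\perl.exe
# Sketch: check whether file system tunneling has been tuned or disabled
# (values documented in MS KB 172190; if absent, the defaults are in effect)
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $fs = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Control/FileSystem/"}
    or die "Could not access the FileSystem key: $^E\n";

foreach my $val ("MaximumTunnelEntries", "MaximumTunnelEntryAgeInSeconds") {
    my $data = $fs->{"/".$val};
    print $val." = ".(defined($data) ? hex($data) : "not set (defaults apply)")."\n";
}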
Okay, so what does this mean to forensic analysts? Under most conditions, probably not a lot. But this is definitely something to be aware of and understand. I mean, under normal circumstances, time stamps are hard enough to keep up with...add to that tunneling, anti-forensics, and the fact that on Vista and above, updating of last access times is disabled by default.
More than anything else, this really illustrates how important it is, when considering potential issues or asking questions about systems, to identify things like the OS, the version (i.e., XP vs. Win7), the file system, etc.
Resources
MS KB 172190
MS KB 299648
Daniel Schneller's thoughts
Old New Thing blog post
MSDN: File Times
eEvidence
The eEvidence What's New site was updated a bit ago. Christina is always able to find some very interesting resources, so take some time to browse through what's there. Sometimes there are case studies, sometimes some really academic stuff, but there's always something interesting.
MoonSols
Matthieu has released the MoonSols Windows Memory Toolkit, with a free community edition. Check it out.
Tuesday, April 06, 2010
WFA 2/e Amazon Ranking!
Looking today, I noticed that Amazon also has a Kindle version priced out...very cool. I don't have a Kindle, but I hope that anyone who does and has a copy of WFA on it finds it just as useful as a paperback edition, if not more so.
Again, thanks to everyone who has purchased a copy of WFA!
Monday, April 05, 2010
ITB 0x1 is out!
Don has posted the new ITB, issue 0x1!
This issue has an article on plist files, and a write-up on Don's review of the Super Drivelock. Also, Chris has provided some insights into PCI data breach investigations, and there's coverage of Richard Harmon's release of his poorcase tool.
Shoutz to Don for putting in the work to get this together! It's a lot of work soliciting articles and pulling them into an issue. This is a community-based newsletter, and NEEDS input from the community! So, if there's something you'd like to write about, or something you'd like to see discussed, drop Don a line.
Also, Bret and Ovie released their latest podcast this weekend...great job, guys!
Friday, April 02, 2010
New stuff
Speaking
The first ever Sleuth Kit and Open Source Digital Forensics Conference is coming up in June, 2010, and I'll be one of the speakers, joining Brian Carrier, Jamie Butler, Dario Forte, Rob Joyce and Simson Garfinkel. I'll be talking about creating timelines using TSK and other open source tools.
Timelines and Last Access Times
Many examiners make use of file last access times during an examination, to some degree. Most examiners are also aware that Windows has a Registry value called NtfsDisableLastAccessUpdate which can be used to disable updating of last access times on files, and that it's set to 1 (i.e., updates disabled) by default beginning with Windows Vista. However, that doesn't mean that last access times are never updated...take a look at the links in the Resources section, and consider doing some testing of your own.
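For reference, the value lives under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, and you can also query or set it from the command line with "fsutil behavior query disablelastaccess". A quick look from Perl (just a sketch):

#! c:\perl\bin\perl.exe
# Sketch: check whether NTFS last access time updates are disabled
# (NtfsDisableLastAccessUpdate = 1 is the default on Vista and above)
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $fs = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Control/FileSystem/"}
    or die "Could not access the FileSystem key: $^E\n";

my $val = $fs->{"/NtfsDisableLastAccessUpdate"};
if (defined($val)) {
    print "NtfsDisableLastAccessUpdate = ".hex($val)." (last access updates ".
          (hex($val) ? "disabled" : "enabled").")\n";
}
else {
    print "NtfsDisableLastAccessUpdate value not found.\n";
}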
Resources
Windows XP - explanation of fsutil, including disablelastaccess behavior
DigFor explanation - when 'disabled' doesn't entirely mean disabled
MS KB 299648 - Description of NTFS date and time stamps
Jones Dykstra and Assoc. article
Tools
Over on the SANS ISC blog, Pedro Bueno posted a link to some tools he uses and wanted to share with others. There are some interesting tools there, one of which is WinAPIOverride32. If you're doing malware analysis, these look like some very interesting and useful tools.
If you're doing any sort of malware analysis and encounter obfuscated JavaScript, check out JSUnpack.
Grand Stream Dreams Updates
Claus is back to posting again, and has a heavy batch of updates available. Rather than re-hashing here what Claus has posted, I'd recommend that you take a look...trying to summarize what he's done simply won't do it justice. Yes, Claus has pulled together links from other blogs, but sometimes it's good to see a bunch of like posts grouped together.
One of the things that Claus mentioned that I hadn't seen before is StreamArmor, a tool for locating NTFS alternate data streams, that goes beyond your normal "here's an ADS" approach. StreamArmor looks for malicious streams, skipping over "normal" streams so as not to overwhelm the analyst. This has benefits...if you're familiar with how ADSs work. I can see something like this being used when an acquired image is mounted (SmartMount, ImDisk, P2 eXplorer, etc.) and scanned with AV/anti-spyware tools...just hit it with StreamArmor while it's mounted.
Documents
Didier Stevens has been posting a lot lately on proof-of-concepts for getting malware on systems by embedding it into PDF documents. If you didn't already have a reason to search TIF and email attachment directories for PDF documents, then reading through his posts should make you realize why you need to make it part of your analysis process.
Thought of the Day
I've been working on writing my latest book and got to a section of the first chapter where I talk about analysis and what that means. As I was writing, it occurred to me that there are some basic concepts that some analysts take for granted, and others simply do not understand. Keeping these concepts in mind can help us a great deal with our exams.
Locard's Exchange Principle
Edmund Locard was a French scientist in the early part of the 20th century who originated the principle that when two objects come into contact, material is transferred between them. This is as true in the digital realm as it is in the physical world. When malware reaches out to find other systems to infect or to contact a CnC server, there is information about these connections on the system. When an intruder accesses a system, there is information about the system their connection is coming from, as well as information about their activities on the compromised system. In many cases, the information may degrade as time passes after the event, but it will have been there.
Least Frequency of Occurrence
I credit Pete Silberman of Mandiant with this phrase, not because he was the first to use it (some searches indicated pretty quickly that it is used in other fields), but because his use of it was the first time I heard it applied in a profound manner to the IR community. At the SANS Forensic Summit in 2009, Pete used this to describe the occurrence of malware on systems, and looking back over other exams I'd performed, it occurred to me that the same thing is true for intrusions.
The concept is amazing in its simplicity...given normal system activity, malware and intrusions occur least frequently on that system. Say malware is installed as a Windows service, set to launch at system startup as part of an SvcHost process...you've got one file (a DLL on the drive), a few keys/values in the System hive, and one in the Software hive.
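In Registry terms, that footprint usually amounts to a Services\<name> key in the System hive (with the DLL listed in a ServiceDll value under Parameters) and the service name added to one of the Svchost group values under Microsoft\Windows NT\CurrentVersion\Svchost in the Software hive. Here's a quick sketch of eyeballing the DLLs loaded this way on a live system (the same check can be run against hive files with RegRipper-style plugins):

#! c:\perl\bin\perl.exe
# Sketch: list services on a live system that load a DLL via svchost
# (Parameters\ServiceDll), per the "one DLL plus a few keys" footprint
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $svcs = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Services/"}
    or die "Could not access the Services key: $^E\n";

foreach my $name ($svcs->SubKeyNames()) {
    my $params = $svcs->{$name."/Parameters/"};
    next unless ($params);
    my $dll = $params->{"/ServiceDll"};
    print $name." -> ".$dll."\n" if ($dll);
}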
Okay, so the practical application of this is that we're NOT looking for massive toolsets being loaded onto the system. Listing all of the files on the system and sorting by creation date (in the absence of time stamp modification) is more likely to show you OS and application updates than it is to show you when the malware was installed. The same sort of thing applies to an intrusion.
Goals
In short, goals are the beginning and the end of your exam. What are you looking for, what questions are you trying to answer? Your exam goals drive your analysis approach and what tools you need to use...and it should never be the other way around. Tools should never drive your analysis.
HTH.
Tuesday, March 30, 2010
Links
Timelines
Grayson finished off a malware/adware exam recently by creating a timeline, apparently using several different tools, including fls/mactime, Mandiant's Web Historian, regtime.pl from SIFT v2.0, etc. Not a bad way to get started, really...and I do think that Grayson's post, in a lot of ways, starts to demonstrate the usefulness of creating timelines and illustrating activity that you wouldn't necessarily see any other way. I mean, how else would you see that a file was created and modified at about the same time, and then see other files "nearby" that were created, modified, or accessed? Add to that logins visible via the Event Log, Prefetch files being created or modified, etc...all of which adds to our level of confidence in the data, as well as to the context of what we're looking at.
I've mentioned this before, but another timeline creation tool is AfterTime from NFILabs. In some ways, it looks like timeline creation is gaining attention as an analysis technique. More tools and techniques are coming out, but I still believe that considerable thought needs to go into visualization. I think that automatically parsing and adding every data source you have available to a timeline can easily overwhelm any analyst, particularly when malware and intrusion artifacts remain the least frequent occurrences on a system.
Event Log Parsing and Analysis
I wanted to point out Andreas's EvtxParser v1.0.4 tools again. I've seen folks get into positions where they need to parse Windows Event Log (.evtx) files, and Andreas has done a great deal of work in providing a means for them to do so for Vista systems and above, without having a like system available.
IR in the Cloud
Here's an interesting discussion about IR in the cloud that I found via TaoSecurity. While there are a number of views and thoughts in the thread, in most cases I would generally tend to stay away from discussions where folks start with, "...I'm not a lawyer nor an expert in cloud computing or forensics..."...it's not that I feel that anyone needs to be an expert in any particular area, but that kind of statement seems to say, "I have no basis upon which to form an opinion...but I will anyway." The fact of the matter is that there are a lot of smart folks (even the one who admitted to not being a lawyer...something I'd do every day! ;-) ) in the thread...and sometimes the toughest question that can be asked is "why?"
Cloud computing is definitely a largely misunderstood concept at this point, and to be honest, it really depends on the implementation. By that, I mean that IR depends on the implementation...just as IR activities depend on whether the system I'm reacting to is right in front of me, or in another city.
Incident Preparedness
On the subject of IR, let's take a step back to incident preparedness. Ever seen the first Mission: Impossible movie? Remember when Ethan makes it back to the safe house, gets to the top of the stairs and removes a light bulb, crushes it in his jacket and lays out the shards in the darkened hallway as he backs toward his room? He's just installed rudimentary incident detection...anyone who steps into the now-dark hallway will step on shards of the glass, alerting him to their presence.
Okay, so who should be worried about incidents? Well, anyone who uses a computer. Seriously. Companies like Verizon, TrustWave and Mandiant have released reports based on investigations they've been called in for, and Brian Krebs makes it pretty clear in his blog that EVERYONE is susceptible...read this.
Interestingly, in Brian's experience, folks hit with this situation have also been infected with Zbot or Zeus. The MMPC reported in Sept 2009 that Zbot was added to MRT; while it won't help those dentists now, I wonder what level of protection they had at the time. I also wonder how they feel now about spending $10K or less in setting up some kind of protection.
I can see the economics in this kind of attack...large organizations (TJX?) may not see $200K as an issue, but a small business will. It will be a huge issue, and may be the difference between staying open or filing for bankruptcy. So why take a little at a time from a big target when you can drain small targets all over, and then move on to the next one? If you don't think that this is an issue, keep an eye on Brian's blog.
Malware Recovery
Speaking of Brian, he also has an excellent blog post on removing viruses from systems that won't boot. He points to a number of bootable Linux CDs, any of which are good for recovery and IR ops. I've always recommended the use of multiple AV scanners as a means of detecting malware, because even in the face of new variants that aren't detected, using multiple tools is still preferable over using just one.
F-Response
For those of you who aren't aware, F-Response has Linux boot CD capability now, so you can access systems that have been shut off.
Dougee posted an article on using an F-Response boot CD from a remote location...something definitely worth checking out, regardless of whether you have F-Response or not. Something like this could be what gets your boss to say "yes"!
Extend your arsenal!
Browser Forensics
For anyone who deals with cases involving user browser activity on a system, you may want to take a look at BrowserForensics.org. There's a PDF (and PPT) of the browser forensics course available that looks to be pretty good, and well worth the read. There's enough specialization required just in browser forensics, so much to know, that I could easily see a training course and reference materials just for that topic.
Bablodos
Speaking of malware, the folks over at Dasient have an interesting post on the "Anatomy of..." a bit of malware...this one called Bablodos. These are always good to read as they can give a view into trends, as well as specifics regarding a particular piece of malware.
Google has a safe browsing diagnostic page for Bablodos here.
Book Translations
I got word from the publisher recently that Windows Forensic Analysis is being translated into French, and will be available at some point in the future. Sorry, but that's all I have at the moment...hopefully, that will go well and other translations will be (have been, I hope) picked up.
Sunday, March 28, 2010
Thought of the Day
When starting an exam, what is the first question that comes to mind? If it's "...now where did I leave my dongle?", then maybe that's the wrong question. I'm a pretty big proponent for timeline creation and analysis, but I don't always start an exam by locating every data source and adding it to a timeline...because that just doesn't make sense.
For example, if I'm facing a question of the Trojan Defense, I may not even create a timeline...because for the most part, we already know that the system contains contraband images, and we may already know, or not be concerned with, how they actually got there. If the real question is whether or not the user was aware that the images were there, I'll pursue other avenues first.
Don't let your tools guide you. Don't try to fit your exam to whichever tool you have available or were trained in. You should be working on a copy of the data, so you're not going to destroy the original, and the data will be there. Focus on the goals of your exam and let those guide your analysis.
Saturday, March 27, 2010
Thought of the Day
Today's TotD is this...what are all of the legislative and regulatory requirements that have been published over the last...what is it...5 or more years?
By "legislative", I mean laws...state notification laws. By "regulatory", I mean stuff like HIPAA, PCI, NCUA, etc., requirements. When you really boil them down, what are they?
They're all someone's way of saying, if you're going to carry the egg, don't drop it. Think about it...for years (yes, years), auditors and all of us in the infosec consulting field have been talking about the things organizations can do to provide a modicum of information security within their organizations. Password policies...which include actually having a password (hello...yes, I'm talking to you, sa account on that SQL Server...) - does that sound familiar? Think about it...some auditor said it was necessary, and now there's some compliance or regulatory measure that says the same thing.
As a consultant, I have to read a lot of things. Depending upon the customer, I may have to read up on HIPAA one week, and then re-familiarize myself with the current PCI DSS the next. Okay, well, not so much anymore...but my point is that when I've done this, there's been an overwhelming sense of deja vu...not only have infosec folks said these things, but in many ways, under the hood, a lot of these things say the same thing.
With respect to IR, the PCI DSS specifically states, almost like "thou shalt...", that an organization must have an incident response capability (it's in Requirement 12). Ever read CA SB 1386? How would any organization comply with this state law (or any of the other...how many...state laws?) without having some sort of incident detection and response capability?
My point...and my thought...is that this is really no different from being a parent. For years, organizations have been told by auditors and consultants that they need to tighten up infosec, and that they can do so without impacting business ops. Now, regulatory organizations and even legislatures have gotten into the mix...and there are consequences. While a fine from not being in compliance may not amount to much, the act of having to tell someone what happened does have an impact.
Finally, please don't think that I'm trying to equate compliance to security. Compliance is a snapshot in time, security is a business process. But when you're working with organizations that have been around for 30 or so years and have not really had much in the way of a security infrastructure, compliance is a necessary first step.
By "legislative", I mean laws...state notification laws. By "regulatory", I mean stuff like HIPAA, PCI, NCUA, etc., requirements. When you really boil them down, what are they?
They're all someone's way of saying, if you're going to carry the egg, don't drop it. Think about it...for years (yes, years), auditors and all of us in the infosec consulting field have been talking about the things organizations can do to provide a modicum of information security within their organizations. Password policies...which includes having a password (hello...yes, I'm talking to you, sa account on that SQL Server...) - does that sound familiar? Think about it...some auditor said it was necessary, and now there's some compliance or regulatory measure that says the same thing.
As a consultant, I have to read a lot of things. Depending upon the customer, I may have to read up on HIPAA one week, and then re-familiarize myself with the current PCI DSS the next. Okay, well, not so much anymore...but my point is that when I've done this, there's been an overwhelming sense of deja vu...not only have infosec folks said these things, but in many ways, under the hood, a lot of these things say the same thing.
With respect to IR, the PCI DSS specifically states, almost like "thou shalt...", that an organization must have an incident response capability (it's in chapter 12). Ever read CA SB 1386? How would any organization comply with this state law (or any of the other...how many...state laws?) without having some sort of incident detection and response capability?
My point...and my thought...is that this is really no different from being a parent. For years, organizations have been told by auditors and consultants that they need to tighten up infosec, and that they can do so without impacting business ops. Now, regulatory organizations and even legislatures have gotten into the mix...and there are consequences. While a fine from not being in compliance may not amount to much, the act of having to tell someone what happened does have an impact.
Finally, please don't think that I'm trying to equate compliance to security. Compliance is a snapshot in time, security is a business process. But when you're working with organizations that have been around for 30 or so years and have not really had much in the way of a security infrastructure, compliance is a necessary first step.
Friday, March 26, 2010
Responding to Incidents
Lenny Zeltser has an excellent presentation on his web site that discusses how to respond to the unexpected. This presentation is well worth the time it takes to read it...and I mean that not only for C-suite executives, but also IT staff and responders, as well. Take a few minutes to read through the presentation...it speaks a lot of truth.
When discussing topics like this, in the past, I've thought it would be a good idea to split the presentation, discussing responding as (a) a consultant and (b) as a full-time employee (FTE) staff member. My reasoning was that as a consultant, you're walking into a completely new environment, and as a member of FTE staff, you've been working in that environment for weeks, months, or even years, and would tend to be very familiar with the environment. However, as a consultant/responder, my experience has been that most response staff isn't familiar with the incident response aspect of their...well...incident response. FTE response staff is largely ad-hoc, as well as under-trained, in response...so while they may be very familiar with the day-to-day management of systems, most times they are not familiar with how to respond to incidents in a manner that's consistent with senior management's goals. If there are any. After all, if they were, folks like me wouldn't need to be there, right?
So, the reason I mention this at all is that when an incident occurs, the very first decisions made and actions that occur have a profound effect on how the incident plays out. If (not when) an incident is detected, what happens? Many times, an admin pulls the system, wipes the drive, and then reloads the OS and data, and puts the system back into service. While this is the most direct route to recovery, it does absolutely nothing to determine the root cause, or prevent the incident from happening again in the future.
The general attitude seems to be that the needs of infosec in general, and IR activities in particular, run counter to the needs of the business. IR is something of a "new" concept to most folks, and very often, the primary business goal is to keep systems running and functionality available, whereas IR generally wants to take systems offline. In short, security breaks stuff.
Well, this can be true, IF you go about your security and IR blindly. However, if you look at incident response specifically, and infosec in general, as a business process, and incorporate it along with your other business processes (i.e., marketing, sales, collections, payroll, etc.), then you can not only maintain your usability and productivity, but you're going to save yourself a LOT of money and headaches. You're not only going to be in compliance (name your legislative or regulatory body/ies of choice with that one...) and avoid costly audits, re-audits and fines, but you're also likely going to save yourself when (notice the use of the word when, rather than if...) an incident happens.
I wanted to present a couple of scenarios that are a culmination of my own experience performing incident response in various environments for over 10 years. I think these scenarios are important, because like other famous events in history, they can show us what we've done right or wrong.
Scenario 1: An incident, a malware infection, is detected and the local IT staff reacts quickly and efficiently, determining that the malware was on 12 different systems in the infrastructure and eradicating each instance. During the course of response, someone found a string and, without any indication that it applied directly to the incident, Googled it and added a relationship with a keystroke logger to the incident notes. A week later at a director's meeting, the IT director described the incident and applauded his staff's reactions. Legal counsel, also responsible for compliance, took issue with the incident description, due to the possibility of data exfiltration and the lack of information regarding it. Due to the location and use of the "cleaned" systems within the infrastructure, regulatory and compliance issues are raised, in part because of the malware's association with a keystroke logger, but the questions cannot be answered, as the actual malware itself was never completely identified nor was a sample saved. Per legislative and regulatory requirements, the organization must now assume that any sensitive data that could have been exfiltrated was, in fact, compromised.
Scenario 2: An incident is detected involving several e-commerce servers. The local IT staff is not trained, nor has any practical knowledge of IR, and while their manager reports potential issues to his management, a couple of admins begin poking around on the servers, installing and running AV (nothing found), deleting some files, etc. Management decides to wait to see if the "problem" settles down. Two days later, one of the admins decides to connect a sniffer to the outbound portion of the network, and sees several files being moved off of the systems. Locating those files on the systems, the admin determines that the files contain PCI data; however, the servers themselves cannot be shut down. The admin reports this, but it takes 96 hrs to locate IR consultants, get them on-site, and have the IT staff familiarize the consultants with the environment. It takes longer due to the fact that the one IT admin who knows how the systems interact and where they're actually located in the data center is on vacation.
Scenario 3: A company that provided remote shell-based access for their employees was in the process of transitioning to two-factor authentication when a regular log review detected that particular user credentials were being used to log in from a different location. IT immediately shut down all remote access, and changed all admin-level passwords. Examination of logs indicated that the intruder had accessed the infrastructure with one set of credentials, used those to transition to another set, but maintained shell-based access. The second account was immediately disabled, but not deleted. While IR consultants were on their way on-site, the local IT staff identified systems the intruder had accessed. A definitive list of files known to contain 'sensitive data' (already compiled) was provided to the consultants, who determined through several means that there were no indications that those files had been accessed by the intruder. The company was able to report this with confidence to regulatory oversight bodies, and while a small fine was imposed, a much larger fine, as well as notification and disclosure costs, followed by other costs (i.e., cost to change credit/debit cards, pay for credit monitoring, civil suits, etc.) were avoided.
Remember, we're not talking about a small, single-owner storefront here...we're talking about companies that store and process data about you and me...PII, PHI, PCI/credit card data, etc. Massive amounts of data that someone wants because it means massive amounts of money to them.
So, in your next visit from an auditor, when they ask "Got IR?" what are you going to say?
Links
Evtx Parsing
Andreas has released an update to his Evtx Parser tools, bringing the version up to 1.0.4. A great big thanks to Andreas for providing these tools, and the capability for parsing this new format from MS.
F-Response Boot CD
As if F-Response wasn't an amazing enough tool as it is, Matt's now got a boot CD for F-Response! Pretty soon, Matt's going to hem everyone in and the only excuse you'll have for NOT having and using F-Response is that you live in a cave, don't have a computer, and don't get on the InterWebs...
Malware & Bot Detection for the IT Admin
I recently attended a presentation, during and after which, the statement was made that the Zeus bot is/was difficult to detect. What I took away from this was that the detection methodology was specific to network traffic, or in some cases, to banking transactions. Tracking and blocking constantly changing domains and IP addresses, changes in how data is exfiltrated, etc., can be very difficult for even teams of network administrators.
As most of us remember, there's been discussion about toolkits that allow someone, for about $700US, to create their very own Zeus. By its nature, this makes the actual files themselves difficult to detect on a host system with AV. Again, detection is said to be difficult.
Remember when we talked about initial infection vectors of malware, and other characteristics? Another characteristic is the persistence mechanism...how malware or an intruder remains persistent on a system across reboots and user logins. These artifacts can often be very useful in identifying malware infections where other methods (i.e., network traffic analysis, AV, etc.) fail.
ZBot was also covered by the MMPC. A total of four variants are listed, but look at what they have in common...they all add data to a Registry value, specifically:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\UserInit
The same could be said for Conficker. According to the MMPC, there were two Registry artifacts that remained fairly consistent across various families of Conficker: creating a new, randomly named value beneath the Run key that pointed to rundll32.exe and the malware parameters, as well as a Windows service set to run under svchost -k netsvcs.
That being the case, how can IT admins use this information? When I was in an FTE position with a financial services company, I wrote a script that would go out to each system in the infrastructure and grab all entries from a specific set of Registry keys. As I scanned the systems, I'd verify entries and remove them from my list. So, in short order, I would start the scan and head to lunch, and when I got back I'd have a nice little half page report on my desktop, giving me a list of systems with entries that weren't in my whitelist.
Admins can do something similar with a tool as simple as reg.exe, or with a more complex Perl script...a rough sketch of the latter follows below. So while one person is scanning firewall logs or monitoring network traffic, someone else can target specific artifacts to help identify infected systems.
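As a sketch of what that kind of sweep can look like, the Perl below wraps reg.exe to check the Winlogon\UserInit value on a list of systems against a short whitelist. The host names and the whitelist data are hypothetical, and you'd need admin credentials and the Remote Registry service reachable on the targets...this is an illustration, not a drop-in tool.

#!/usr/bin/perl
# Sketch: sweep systems for the Winlogon\UserInit value and flag anything
# not in a known-good whitelist. Hosts and whitelist are hypothetical.
use strict;
use warnings;

my @hosts = qw(ws-fin-01 ws-fin-02 ws-hr-01);                 # hypothetical host list
my %whitelist = ('c:\windows\system32\userinit.exe,' => 1);   # expected data, lowercase

foreach my $host (@hosts) {
    my $key = "\\\\$host\\HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon";
    my @out = `reg query "$key" /v UserInit 2>&1`;
    my ($line) = grep { /UserInit/ } @out;
    unless (defined $line) {
        print "$host: query failed or value not found\n";
        next;
    }
    # reg.exe output line: "    UserInit    REG_SZ    C:\WINDOWS\system32\userinit.exe,"
    my ($data) = $line =~ /REG_SZ\s+(.+?)\s*$/;
    if (defined $data && !exists $whitelist{lc $data}) {
        print "$host: UserInit = $data  <-- NOT in whitelist\n";
    }
}

Run it over lunch, and anything printed is a system worth a closer look.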
SIFT 2.0
Rob Lee has released SIFT 2.0, an Ubuntu-based VMWare appliance that comes with about 200 tools, including log2timeline, Wireshark, ssdeep/md5deep, Autopsy, PyFlag, etc.
To get your copy, go here, click on the "Forensics Community" tab at the top of the page, and choose Downloads.
If you've taken the SEC 508 course with Rob...or now with Ovie, or Chris...you've probably seen the SIFT workstation in action.
Tuesday, March 23, 2010
Even More Thoughts on Timelines
I realize that I've talked a bit about timeline creation and analysis, and I know that others (Chris, as well as Rob) have covered this subject, as well.
I also realize that I may have something of a different way of going about creating timelines and conducting analysis. Of the approaches I've seen so far, I don't think there's a wrong way; I just think that we approach things a bit differently.
For example, I am not a fan of adding everything to a timeline, at least not as an initial step. I've found that there's a lot of noise in just the file system metadata...many times, by the time I get notified of an incident, an admin has already logged into the system, installed and run some tools (including anti-virus), maybe even deleted files and accounts, etc. When interviewed about the incident and asked the specific question, "What actions have you performed on the system?", that admin most often says, "nothing"...only because they tend to do these things every day and they are therefore trivial. To add to that, there's all the other stuff...automatic updates to Windows or any of the applications, etc...that also adds a great deal of material that needs to be culled through in order to find what you're looking for. Adding a lot of raw data to the timeline right up front may mean that you're adding a lot of noise into that timeline, and not a lot of signal.
Goals
Generally, the first step I take is to look at the goals of my exam, and try to figure out which data sources may provide me the best source of data. Sometimes it's not a matter of just automatically parsing file system metadata out of an image; in a recent exam involving "recurring software failures", my initial approach was to parse just the Application Event Log to see if I could determine the date range and relative frequency of such events, in order to get an idea of when those recurring failures would have occurred. In another exam, I needed to determine the first occurrence within the Application Event Log of a particular event ID generated by the installed AV software; to do so, I created a "nano-timeline" by parsing just the Application Event Log, and running the output through the find command to get the event records I was interested in. This provided me with a frame of reference for most of the rest of my follow-on analysis.
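To give a rough sense of that "nano-timeline" filtering (the same effect as piping parser output through find, just done in Perl), the sketch below reads an intermediate events file in the five-field TLN format (time|source|host|user|description) and reports the first and last occurrence of records matching a source/ID of interest. The file name and the filter expression are both made up for the example.

#!/usr/bin/perl
# Sketch: report first/last occurrence of matching records in a TLN events file.
use strict;
use warnings;

my $events = shift || 'appevent_events.txt';       # hypothetical intermediate file
my $filter = qr/Symantec AntiVirus.*\b51\b/;        # hypothetical source/event ID

my @hits;
open(my $fh, '<', $events) or die "Cannot open $events: $!";
while (<$fh>) {
    chomp;
    my ($time, $src, $host, $user, $desc) = split /\|/, $_, 5;
    next unless defined $desc && $desc =~ $filter;
    push @hits, [$time, $desc];
}
close($fh);

@hits = sort { $a->[0] <=> $b->[0] } @hits;
printf "%d matching records\n", scalar @hits;
printf "First: %s UTC  %s\n", scalar gmtime($hits[0][0]),  $hits[0][1]  if @hits;
printf "Last : %s UTC  %s\n", scalar gmtime($hits[-1][0]), $hits[-1][1] if @hits;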
Selecting Data Sources
Again, I often select data sources to include in a timeline by closely examining the goals of my analysis. However, I am also aware that in many instances, the initial indicator of an incident is often only the latest indicator, albeit the first to be recognized as such. That being the case, when I start to create a timeline for analysis, I generally start off by creating a file of events from the file system metadata and the available Event Log records, and I run the .evt files through evtrpt.pl to get an idea of what I should expect to see from the Event Logs. I also run the auditpol.pl RegRipper plugin against the Security hive in order to see (a) what events are being logged, and (b) when the contents of that key were last modified. If this date is pertinent to the exam, I'll be sure to include an appropriate event in my events file.
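For reference, checking the audit policy via rip.pl looks something like the following (the path to the exported Security hive is made up for the example), and the plugin output is how I get at items (a) and (b) above:

C:\Perl\forensics\rr>rip.pl -r F:\case\Security -p auditpol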
Once my initial timeline has been established and analysis begins, I can go back to my image and select the appropriate data sources to add to the timeline. For example, during investigations involving SQL injection, the most useful data source is very likely going to be the web server logs...in many cases, these may provide an almost "doskey /history"-like timeline of commands. However, adding all of the web server logs to the timeline means that I'm going to end up inundating my timeline with a lot of normal and expected activity...if any of the requested pages contain images, then there will be a lot of additional, albeit uninteresting, information in the timeline. As such, I would narrow down the web log entries to those of interest, beginning with an iterative analysis of the web logs, and add the resulting events to my timeline.
That's what I ended up doing, and like I said, the results were almost like running "doskey /history" on a command prompt, except I also had the time at which the commands were entered as well as the IP address from which they originated. Having them in the timeline let me line up the incoming commands with their resulting artifacts quite nicely.
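To give a sense of that iterative approach (a sketch, not the exact process I used), the script below makes two passes over exported IIS logs: the first collects source IPs issuing requests with obvious SQL injection markers, and the second pulls every request from those IPs as candidate timeline entries. The log path, the field position of the client IP, and the markers are all assumptions you'd adjust to the logs in front of you.

#!/usr/bin/perl
# Sketch: two-pass culling of web server logs for SQL injection activity.
use strict;
use warnings;

my @logs = glob('F:\case\weblogs\ex*.log');                  # hypothetical export path
my $sqli = qr/xp_cmdshell|declare\s+\@|cast\(|;\s*exec/i;    # illustrative markers

my %ips;
foreach my $log (@logs) {
    open(my $fh, '<', $log) or next;
    while (<$fh>) {
        next if /^#/;                    # skip W3C header lines
        my @f = split /\s+/;
        # client IP assumed at field 8; check the #Fields: line in your logs
        $ips{$f[8]} = 1 if /$sqli/ && defined $f[8];
    }
    close($fh);
}

print "Suspect IPs: ", join(", ", sort keys %ips), "\n";

foreach my $log (@logs) {
    open(my $fh, '<', $log) or next;
    while (<$fh>) {
        next if /^#/;
        my @f = split /\s+/;
        print if defined $f[8] && exists $ips{$f[8]};   # candidate timeline entries
    }
    close($fh);
}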
Registry Data
The same holds true with Registry data. Yes, the Registry can be a veritable gold mine of information (and intelligence) that is relevant to your examination, but like a gold mine, that data must be dug out and refined. While there is a great deal of interesting and relevant data in the system hives (SAM, Security, Software, and System), the real wealth of data will often come from the user hives, particularly for examinations that involve some sort of user activity.
Chris and Rob advocate using regtime.pl to add Registry data to the timeline, and that's fine. However, it's not something I do. IMHO, adding Registry data to a timeline by listing each key by its LastWrite time is way too much noise and not nearly enough signal. Again, that's just my opinion, and doesn't mean that either one of us is doing anything wrong. Using tools like RegRipper, MiTec's Registry File Viewer, and regslack, I'm able to go into a hive file and get the data I'm interested in. For examinations involving user activity, I may be most interested in the contents of the UserAssist\Count keys (log2timeline extracts this data, as well), but the really valuable information from these keys isn't the key LastWrite times; rather, it's the time stamps embedded in the binary value data within the subkey values.
If you're parsing any of the MRU keys, these too can vary with respect to where the really valuable data resides. In the RecentDocs key, for example, the values (as well as those within the RecentDocs subkeys) are maintained in an MRU list; therefore, the LastWrite time of the RecentDocs key is of limited value in and of itself. The LastWrite time of the RecentDocs key has context when you determine what action caused the key to be modified; was a new subkey created, or was another file opened, or was a previously-opened file opened again, modifying the MRUListEx value?
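As an illustration of where that valuable data actually lives, the sketch below uses Parse::Win32Registry (the CPAN module RegRipper is built on) to report the RecentDocs key's LastWrite time, walk the MRUListEx value, and print the most recently opened document. The hive path is hypothetical.

#!/usr/bin/perl
# Sketch: read the MRU order from a user's RecentDocs key.
use strict;
use warnings;
use Encode;
use Parse::Win32Registry;

my $hive = shift || 'F:\case\ntuser.dat';
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $key  = $reg->get_root_key->get_subkey('Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs')
    or die "RecentDocs key not found\n";

print "RecentDocs LastWrite: ", scalar gmtime($key->get_timestamp), " UTC\n";

# MRUListEx is a list of 4-byte indexes, most recent first, terminated by 0xFFFFFFFF
my $mru = $key->get_value('MRUListEx') or die "No MRUListEx value\n";
my @order = unpack('V*', $mru->get_data);
pop @order while @order && $order[-1] == 0xFFFFFFFF;

if (@order) {
    my $raw  = $key->get_value($order[0])->get_data;
    my $name = '';
    for (my $i = 0; $i + 1 < length($raw); $i += 2) {
        my $ch = substr($raw, $i, 2);
        last if $ch eq "\x00\x00";       # UTF-16LE terminator
        $name .= $ch;
    }
    printf "Most recently opened: %s\n", decode('UTF-16LE', $name);
}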
Files opened via Adobe Reader are maintained in an MRU list, as well, but with a vastly different format from what's used by Microsoft; the most recently opened document is in the AVGeneral\cRecentFiles\c1 subkey, but the name of the actual file is embedded in a binary value named tDIText. On top of that, when a new file is opened, it becomes the value in the c1 subkey, so all of the other subkeys that refer to opened documents also get modified (what was c1 becomes c2, c2 becomes c3, etc.), and the key LastWrite times are updated accordingly, all to the time that the most recent file was opened.
Browser Stuff
Speaking of user activity, let's not forget browser cache and history files, as well as Favorites and Bookmarks lists. It's very telling to see a timeline based on file system metadata with a considerable number of files being created in browser cache directories, and then to add data from the browser history files and get that contextual information regarding where those new files came from. If a system was compromised by a browser drive-by, you may be able to discover the web site that served as the initial infection vector.
Identifying Other Data Sources
There may be times when you would want to determine other data sources that may be of use to your examination. I do tend to check the contents of the Task Scheduler service log file (SchedLgU.txt) for indications of scheduled tasks being run, particularly during intrusion exams; however, this log file can also be of use if you're trying to determine if the system was up and running during a specific timeframe. This may be much more pertinent to laptops and desktop systems than to servers, which may not be rebooted often, but if you look in the file, you'll see messages such as:
"Task Scheduler Service"
Started at 3/21/2010 6:18:19 AM
"Task Scheduler Service"
Exited at 3/21/2010 9:10:32 PM
In this case, the Task Scheduler service started around 6:18am on 21 March and exited around 9:10pm that same day, bracketing when the system was up and running. These times are listed relative to the local system time, and may need to be adjusted based on the timezone, but this does provide information regarding when the system was operating, particularly when combined with Registry data regarding the start type of the service, and can be valuable in the absence of, or when combined with, Event Log data.
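A minimal sketch for pulling those entries out of SchedLgU.txt might look like the following; the path is hypothetical, and the crude Unicode handling assumes the log is written as UTF-16LE, which may not hold on every system.

#!/usr/bin/perl
# Sketch: extract "Started at"/"Exited at" entries from SchedLgU.txt.
use strict;
use warnings;

my $file = shift || 'F:\case\Windows\SchedLgU.txt';
open(my $fh, '<:raw', $file) or die "Cannot open $file: $!";
my $data = do { local $/; <$fh> };        # slurp the whole file
close($fh);

$data =~ s/\x00//g;                       # crude UTF-16LE -> ASCII; BOM debris is harmless here
foreach my $line (split /\r?\n/, $data) {
    print "$line\n" if $line =~ /"Task Scheduler Service"|Started at|Exited at/;
}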
There may be other data sources available, but not all of them may be of use, depending upon the goals of your examination. For example, some AV applications record scan and detection information in the Event Log as well as in their own log files. If the log file provides no indication that any malware had been identified, is it of value to include all scans (with "no threats found") in the timeline, or would it suffice to note that fact in your case notes and the report?
Summary
Adding all available data sources to a timeline can quickly make that timeline very unwieldy and difficult to manage and analyze. Through education, training, and practice, analysts can begin to understand what data and data sources would be of primary interest to an examination. Again, this all goes back to your examination goals...understand those, and the rest comes together quite nicely.
Finally, I'm not saying that it's wrong to incorporate all available data sources into a timeline...not at all. Personally, I like having the flexibility to create mini-, micro- or nano-timelines that show specific details, and then being able to go back to the overall timeline to be able to see what I learned viewed in the context of the overall timeline.
Monday, March 22, 2010
Links
File Initialization
Eoghan had an excellent post on issues/pitfalls of file initialization, particularly on Windows systems. After reading through this post, I can think of a number of exams I've done in the past where I wish I could link to this post in the report. In one particular instance, while performing data breach exams, I've found sensitive data "in" Registry hive files, and a closer look showed me that the search hits were actually in either the file slack or the uninitialized space within the file. During another exam, I was looking for indications of how an intruder ended up being able to log into a system via RDP. I ran my timeline tools across the Event Log (.evt) files extracted from the image, and found that I had RDP logins (event ID 528, type 10) in my timeline that fell outside of the date range (via evtrpt.pl) of the Security Event Log. Taking a closer look, I found that while the Event Logs had been cleared and one of the .evt file headers (and Event Viewer) reported that there were NO event records in the file, I found that the uninitialized portion of the file was comprised of data from previously used sectors...which included entire event records!
More than anything else, this concept of uninitialized space really reinforces to me how important process and procedures are, and how knowledgeable analysts really need to be. After all, say you have a list of keywords you're searching for, or a regex you're grep'ing for...when you get a hit, is it really in the file?
Unexpected Events
I ran across a link to Lenny Zeltser's site recently that pointed to a presentation entitled, How To Respond To An Unexpected Security Event. I'll have to put my thoughts on Lenny's presentation in another post, but for the most part, I think that the presentation has some very valid points that may often go unrecognized. More about that later, though.
Regtime
Following a recent SANS blog post (Digital Forensic SIFTing: SUPER Timeline Analysis and Creation), I received a couple of requests to release regtime.pl. This functionality was part of RegRipper before regtime.pl was included in SIFT. You can see this if you use rip.pl to list the plugins in .csv format, but instead of sending them to a file, pipe the output through the "find" command:
C:\Perl\forensics\rr>rip.pl -l -c | find "regtime" /i
regtime,20080324,All,Dumps entire hive - all keys sorted by LastWrite time
If you need the output to be in a different format, well, it's Perl and open source!
Timeline Creation
Chris has posted part 2 of his Timeline Analysis posts (part 1 is here), this time including Registry data by running regtime.pl from the SANS SIFT Toolkit v2.0 against the Registry hives. It's really good to see more folks bringing this topic up and mentioning it in blogs, etc. I hesitate to say "discussion" because I'm not really seeing much discussion of this topic in the public arena, although I have heard that there's been some discussion offline.
Looking at this, and based on some requests I've received recently, if you're looking for a copy of regtime.pl, you'll need to contact Rob about that. I did provide a copy of regtime.pl for him to use, but based on the output I saw in Chris's post, there appear to have been modifications to the script I provided. So if you're interested in a copy of that script, reach out to Rob.
Also, one other thought...I'm not a big proponent for adding information from the Registry in this manner. Now, I'm not saying it's wrong...I'm just saying that this isn't how I would go about doing it. The reason is that there's just way too much noise being added to the timeline for my taste...that's too much extra stuff that I need to wade through. This is why I'll use RegRipper and the Timeline Entry GUI (described here) to enter specific entries into the timeline. By "specific entries", I mean those from the UserAssist and RecentDocs keys, as well as any MRU list keys. My rationale for this is that I don't want to see all of the Registry key LastWrite times because (a) too many of the entries in the timeline will simply be irrelevant and unnecessary, and (b) too many of the entries will be without any context whatsoever.
More often, I find that the really valuable information is what occurred to cause the change to the key's LastWrite time, and in many cases, that has to do with a Registry value, such as one being added or removed. In other cases, the truly valuable data can come from the binary contents of a Registry value, such as those within the UserAssist\Count subkeys.
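To make that concrete, here's a sketch (again using Parse::Win32Registry) that ROT-13 decodes the UserAssist\Count value names and pulls the run count and last-run FILETIME out of the XP-era 16-byte value data. The hive path and the GUID used are assumptions for the example, and the data format changed with Windows 7, so treat this as illustrative only.

#!/usr/bin/perl
# Sketch: decode run counts and last-run times from UserAssist\Count (XP format).
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || 'F:\case\ntuser.dat';
my $path = 'Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist'
         . '\{75048700-EF1F-11D0-9888-006097DEACF9}\Count';

my $reg = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $key = $reg->get_root_key->get_subkey($path) or die "Count key not found\n";

foreach my $val ($key->get_list_of_values) {
    my $name = $val->get_name;
    $name =~ tr/A-Za-z/N-ZA-Mn-za-m/;                   # ROT-13 decode the value name
    my $data = $val->get_data;
    next unless length($data) == 16;                    # XP-era format only
    my (undef, $count, $lo, $hi) = unpack('V4', $data); # session, count, FILETIME lo/hi
    my $ft = ($hi * (2**32)) + $lo;
    next unless $ft;
    my $epoch = int($ft / 10_000_000) - 11644473600;    # FILETIME -> Unix epoch
    printf "%s  run count: %d  last run: %s UTC\n",
        $name, $count - 5, scalar gmtime($epoch);       # XP counters start at 5
}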
Again, I'm not saying that there's anything wrong with adding Registry data to a timeline in this manner; rather, I'm just sharing my own experiences, those being that more valuable data can be found in a more tactical approach to creating a timeline.
AfterTime
I received an email recently from the folks at NFILabs, out of the Netherlands, regarding a tool called AfterTime, which is a Java-based timeline tool based on Snorkel. This looks like a very interesting tool, with evaluation versions running on both Windows and Linux. Based on some of the screenshots from the NFILabs site regarding AfterTime, it looks like each event has a number of values that relate to that event and can provide the examiner with some context to that event.
AfterTime also uses color-coded graphics to show the numbers and types of events based on the date. I've said in the past that one of the problems with this approach (i.e., illustrating abundance) is that most times, an intrusion or malware incident will be the least frequent occurrence (shoutz to Pete Silberman of Mandiant) on a system. As such, I'm not entirely sure that a graphical approach to analysis is necessarily the way to go, but this is definitely something worth looking at.
Wednesday, March 17, 2010
Timeline Creation and Analysis
I haven't really talked about timelines in a while, in part because I've been creating and analyzing them as part of every engagement I've worked. I do this because in most...well, in all cases...the analysis I need to do involves something that happened at a certain time. Sometimes it's a matter of determining what the event or events were, other times it's a matter of determining when the event(s) happened. The fact is that the analysis involves something that happened at some point in time...and that's the perfect time to create a timeline.
With respect to creating timelines, I'm not the only one using them. Chris posted recently on using the TSK tool fls to create a bodyfile from a live system, and Rob posted on creating timelines from Windows Volume Shadow Copies. Using Volume Shadow Copies to create timelines is a great way to get a view into the state of the system at some point in the past...something that can be extremely valuable in an investigation.
These are great places to start, but consider all that you could do if you took advantage of other data on the system. In order to get a more granular view into what happened when on a system, timelines need to incorporate other data sources. Incorporating Event Log (.evt, .evtx) records may show you who was logged on to the system, and how (i.e., locally, via RDP, etc.). Now, auditing isn't always enabled, or enabled enough to provide indications of what you're looking for, but many times, there's some information there that may be helpful.
Including user web browser activity in a timeline has been extremely useful in tracking down things like browser drive-bys, etc. For example, by including web browsing activity, you may see the site that the user visited just prior to a DLL being created on the system and a BHO being added to the Registry. Also, don't forget to check the user's Bookmarks or Favorites...there are timestamps in those files, as well.
When I was working at IBM and conducting data breach investigations, many times we'd see SQL injection being used in some manner. Parsing all of the web server logs for the necessary data required an iterative approach (i.e., search for SQL injection, collect IP addresses, re-run searches for the IP addresses, etc.), but adding those log entries to the timeline can provide a great deal of context to your analysis. Say, for example, that the MS SQL Server database is on the same system as the IIS web server...any commands run via SQLi would leave artifacts on that system, just as creating/modifying files would. If the database is on another system entirely, and you're using the five-field TLN format, you can easily correlate data from both systems in the same timeline (taking clock skew into account, of course). This works equally well for exams involving ASP or PHP web shells, as you can see where the command was sent (in the web server logs), as well as the artifacts (within the file system, other artifacts), all right there together.
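For anyone who hasn't seen it, the five fields in the TLN format are time (32-bit Unix epoch, normalized to UTC), source, system, user, and description, separated by pipes. A couple of notional entries (the values here are made up purely for illustration) from a web server and a database server might look like:

1268265722|IIS|WEBSRV01|-|GET /order.asp?id=1';exec+master..xp_cmdshell... - 200
1268265741|FILE|SQLSRV01|-|MACB C:\WINDOWS\system32\nc.exe

Sort both systems' entries together on the first field, and the SQLi request and the artifacts it created end up right next to each other in the timeline.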
Consider all of the other sources of data on a system, such as other application (i.e., AV, etc.) logs. And don't get me started on the Registry...well, okay, there you go. There're also Task Scheduler Logs, Prefetch files, as well as metadata from other files (PDF, Office documents, etc.) that can be added to a timeline as necessary. Depending on the system, and what you're looking for, there can be quite a lot of data.
But what does this work get me? Well, a couple of things, actually. For one, there's context. Say you start with the file system metadata in your timeline, and you kind of have a date that you're interested in, when you think the incident may have happened. So, you add the contents of the Event Logs, and you see that the user "Kanye" logged in...event ID 528, type 10. Hey, wait a sec...since when does "Kanye" log in via RDP? Why would he, if he's in the office, sitting at his desk? So then we add the user Registry hive information, and we see "cmd/1" in the RunMRU key (the most recent entry), and shortly thereafter we notice that "Kany3" logged in via RDP. We can get the user information from the SAM hive, as well as any additional information from the newly-created user profile. So as we add data, we begin to also add context with respect to the activity we're seeing on the system.
We can also use the timeline to provide an increasing or higher level of overall confidence in the data itself. Let's say that we start with the file system metadata...well, we know that this may not be entirely accurate, as file system MAC times can be easily manipulated. These times, as presented by most tools, are usually derived from the $STANDARD_INFORMATION attribute within the MFT. However, what if I add the creation date of the file from the $FILE_NAME attribute, or simply compare that value to the creation date from the $STANDARD_INFORMATION attribute? Okay, maybe now I've raised my relative confidence level with respect to the data. And as I add other sources of data, rather than just seeing a file creation or modification event, I can now see other activity (within close temporal proximity) that led up to that event.
Let's say that I start off with a timeline based just on file system metadata (Windows XP), and I see a file creation event for a Prefetch file. The Prefetch file is for an application accessed through the Windows shell, so I would want to perhaps see if the Event Log contained any login information so I could determine which user was logged in, as well as when they'd logged in; however, I find out that auditing of login events is not enabled. Okay, I check the ProfileList key in the Software hive against the user profile directories, and I find out that all users who've logged into the system are local users...so I can go into the SAM hive and get things like Last Login dates. I then parse the UserAssist key for each user, and I find that just prior to the Prefetch file I'm interested in being created, the user "Kanye" navigated through the Start button to launch that application. Now, the file system time may be easily changed, but I now have less mutable data (i.e., a timestamp embedded in a binary Registry value) that corroborates the file system time, which increases my relative level of confidence with respect to the data.
Now, jump ahead a couple of days in time...other things had gone on on the system prior to acquisition, and this time, I'm interested in the creation AND modification times of this Prefetch file. It's been a couple of days, and what I find at this point is that the UserAssist information tells me that the application referred to by the Prefetch file has actually been run several times between the creation and modification dates; now, my UserAssist information corresponds to the modification time of the file. So, now I add metadata from the Prefetch file, and I have data that supports the modification time (the last time the application was run, the timestamp for which is embedded in the Prefetch file, would correspond to when the Prefetch file was last modified), as well as the number of times the user launched the application.
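As a side note, pulling that metadata out of an XP Prefetch file takes only a few lines of Perl. The offsets used below (0x78 for the 64-bit FILETIME of the last run, 0x90 for the DWORD run count) are the ones commonly documented for XP/2003 .pf files...Vista and Windows 7 use different offsets...so consider this a sketch rather than a full parser:

use strict;

# sketch: last run time and run count from an XP/2003 Prefetch file
my $pf = shift || die "You must enter a path to a .pf file.\n";
open(FH, "<", $pf) || die "Could not open $pf: $!\n";
binmode(FH);

my $data;
seek(FH, 0x78, 0);                 # 64-bit FILETIME of last run (XP/2003)
read(FH, $data, 8);
my ($lo, $hi) = unpack("VV", $data);
my $epoch = int((($hi * (2 ** 32)) + $lo) / 10000000) - 11644473600;

seek(FH, 0x90, 0);                 # DWORD run count (XP/2003)
read(FH, $data, 4);
my $count = unpack("V", $data);
close(FH);

print "Last run : ".gmtime($epoch)." UTC\n";
print "Run count: ".$count."\n";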
Now, if the application of interest is something like MSWord, I might also be interested in things such as any documents recently accessed, particularly via common dialogs. The point is that most analysts understand that file system metadata may be easily modified, or perhaps misinterpreted; by adding additional information to the timeline, I not only add context, but by adding data sources that are less likely to be modified (timestamps embedded in files as metadata, Registry key LastWrite times, etc.), I can raise my relative level of confidence in the data itself.
One final point...incident responders increasingly face larger and larger data sets, requiring some sort of triage to identify, reduce, or simply prioritize the scope of an engagement. As such, having access to extra eyes and hands...quickly...can be extremely valuable. So consider this...which is faster? Imaging 50 systems and sitting down and going through them, or collecting specific data (file system metadata and selected files) and providing it to someone else to analyze, while on-site response activities continue? The off-site analyst gets the data, processes it and begins analysis, narrowing the scope...now we're down from 50 systems to 10...and most importantly, we're already starting to get answers.
Let's say that I have a system with a 120GB system partition, of which 50GB is used. Which is faster to collect...the overall image, or file system metadata? Which is smaller? Which can be more easily provided to someone off-site? Let's say that the file created when collecting file system metadata is 11MB. Okay...add Registry data, Event Logs, and maybe some specific files, and I'm up to...what...13MB. This is even smaller if I zip it...let's say 9MB. So now, while the next 120GB system is being acquired, I'm providing the data to an off-site analyst, and she's able to follow a process for creating a timeline and begin analyzing the data. Uploading a 9MB file is much faster than shipping a 120GB image via FedEx.
As a responder, I've had customers in California. Call me after happy hour on the East Coast, and the first flight out will be sometime in the next 12 hrs. It's usually 4 1/2 hrs to the San Jose area, but 6 hrs to LA or Seattle, WA. Then depending on where the customer is actually located, it may be another 2 hrs for me to get to the gate, get a rental car and arrive on-site. However, if there are trained first responders on staff, I can begin analyzing data (and requesting additional data) within, say, 2 hours of the initial call.
So another way cool thing is that this can also be used in data breach cases. How's that? Well, if you're shipping compressed file system metadata to someone (and you've encrypted it), you're not shipping file contents...so you're not exposing sensitive data. Providing the necessary information may not answer the question, but it can definitely narrow down the answer and help to identify and reduce the overall scope of an incident.
Monday, March 15, 2010
Tidbits
Windows 7 XPMode
I was doing some Windows 7 testing not long ago, during which I installed a couple of applications in XPMode. The first thing I found was that you actually have to open the XP VPC virtual machine and install the application there; once you're done, the application should appear on your Windows 7 desktop.
What I found in reality is that not all applications installed in XPMode appear to Windows 7. I installed Skype and Google Chrome, and Chrome did not and would not appear on the Windows 7 desktop.
Now, the next step is to examine the Registry...for both the Windows 7 and XPMode sides...to see where artifacts reside.
When it comes to artifacts, there are other issues to consider, as well...particularly when the system you're examining may not have the hardware necessary to run XPMode, so the user may opt for something a bit easier, such as VirtualBox's Seamless Mode (thanks to Claus and the MakeUseOf blog for that link!).
Skype IMBot
Speaking of Skype, the guys at CERT.at have an excellent technical analysis and write-up on the Skype IMBot. Some of what this 'bot does is pretty interesting in its simplicity...for example, disabling AV through 'net stop' commands.
I thought that the Registry persistence mechanism was pretty interesting, in that it used the ubiquitous Run key and the Image File Execution Options key. As much as I've read about the IFEO key, and used it in demos to show folks how the whole thing worked, I've only seen it used once in the wild.
The only thing in the report that I really wasn't 100% on-board with was on pg 7, where the authors refer to "very simple rootkit behavior"...hiding behavior, yes, but rootkit? Really?
ZBot
I found an update about ZBot over at the MMPC site. I'd actually seen variant #4 before.
Another interesting thing about this malware is something I'd noticed in the past with other malware, particularly Conficker. Even though there are multiple variants, and as each new variant comes out, everyone...victims, IR teams, and AV teams...is in react mode, there's usually something about the malware that remains consistent across the entire family.
In the case of ZBot, one artifact or characteristic that remains consistent across the family is the Registry persistence mechanism; that is, this one writes to the UserInit value. This can be extremely important in helping IT staff and IR teams locate other infected systems on the network, something that is very often the bane of IR: how to correctly and accurately scope an issue. I mean, which would you rather do...image all 500 systems, or locate the infected ones?
From the MMPC write-up, there appears to be another Registry value (i.e., the UID value mentioned in the write-up) that IT staff can use to identify potentially infected systems.
The reason I mention this is that responders can use this information to look for infected systems across their domain, using reg.exe in a simple batch file. Further, checks of these Registry keys can be added to tools like RegRipper, so that a forensic analyst can quickly...very quickly...check for the possibility of such an infection, either during analysis or even as part of the data in-processing. With respect to RegRipper, there are already plugins available that pull the UserInit value, and it took me about 5 minutes (quite literally) to write one for the UID value.
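If you want to eyeball an acquired Software hive yourself without writing a full plugin, a rough sketch using Parse::Win32Registry might look something like the following. A "clean" UserInit value is normally just the path to userinit.exe followed by a comma; anything appended after that comma is worth a closer look:

use strict;
use Parse::Win32Registry;

# sketch: check the UserInit value in an acquired Software hive
my $hive = shift || die "You must enter a path to a Software hive file.\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive.\n";
my $root = $reg->get_root_key();

my $key = $root->get_subkey("Microsoft\\Windows NT\\CurrentVersion\\Winlogon")
          || die "Winlogon key not found.\n";

print "LastWrite: ".gmtime($key->get_timestamp())." UTC\n";
foreach my $val ($key->get_list_of_values()) {
    next unless ($val->get_name() =~ m/^userinit$/i);
    my $data = $val->get_data();
    print "UserInit : ".$data."\n";
    # a clean value ends with "userinit.exe," and nothing after the comma
    print "** Additional entry found after userinit.exe - check it! **\n"
        if ($data =~ m/userinit\.exe,\s*\S+/i);
}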
Millennials
I was talking with a colleague recently about how pervasive technology is today for the younger set. Kids today have access to so much that many of us didn't have when we were growing up...like computers and cell phones. Some of us may use one email application, whereas our kids go through dozens, and sometimes use more than one email, IM, and/or social networking application at a time.
I've also seen where pictures of people have been posted to social networking sites without their knowledge. It seems that with the pervasiveness of technology comes an immediate need for gratification and a complete dismissal of the privacy and rights of others. While some people won't like it when it happens to them, they have no trouble taking pictures of people and posting them to social networking sites without their knowledge or permission. Some of the same people will freely browse social networking sites, making fun of what others have posted...but when it's done to them, suddenly it's "creepin'" and "you have no right".
Combine these attitudes with the massive pervasiveness of technology, and you can see that there's a pretty significant security risk.
From a forensics perspective, I've seen people on two social networking sites, while simultaneously using three IM clients (none of which is AIM) on their computer (and another one on an iTouch), all while texting others from their cell phone. Needless to say, trying to answer a simple question like, "...was any data exfiltrated?" is going to be a bit of a challenge.
Accenture has released research involving these millennials, those younger folks for whom technology is so pervasive, and in many cases, for whom "privacy" means something totally different from what it means to you, me, and even our parents. In many ways, I think that this is something that lots of us need to read. Not too long ago, I noted that when examining a system and looking for browser activity, a number of folks responded that they started by looking at the TypedURLs key, and then asked, what if IE isn't the default browser? Let's take that a step further...what if the computer isn't the default communications device? Many times LE will try to tie an IP address from log files to a specific device used by a particular person...but now the question is, which device? Not only will someone have a laptop and a cell phone, but now what happens when they tether the devices?
The next time you see some younger folks sitting around in a group, all of them texting other people, or you see someone using a laptop and a cell phone, think about the challenges inherent to answering the most basic questions.
USN Journal
Through a post and some links in the Win4n6 Yahoo Group, I came across some interesting links regarding the NTFS USN Journal file (thanks, Jimmy!). Jimmy pointed to Lance's EnScript for parsing the NTFS transaction log; Lance's page points to the MS USN_RECORD structure definition. Thanks to everyone who contributed to the thread.
An important point about the USN Journal...the change journal is not enabled by default on XP, but it is on Vista.
So, why's this important? Well, a lot of times you may find something of value by parsing files specifically associated with the file system, such as the MFT (using analyzeMFT.py, for example).
Another example is that I've found valuable bits in the $LogFile file, both during practice and while working real exams. I simply export the file and run it through strings or BinText, and in a couple of cases I've found information from files that didn't appear in the 'active' file system.
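If you don't have strings or BinText handy, a quick-and-dirty Perl substitute (just a sketch...this only pulls ASCII strings, where BinText will also find Unicode strings) looks something like this:

use strict;

# quick-and-dirty "strings": print runs of 4+ printable ASCII characters
my $file = shift || die "You must enter a file path.\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);

# slurp the file ($LogFile is usually only a few tens of MB)
my $data;
{
    local $/;
    $data = <FH>;
}
close(FH);

foreach my $str ($data =~ m/([\x20-\x7e]{4,})/g) {
    print $str."\n";
}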
For more detailed information (regarding structures, etc.) about the NTFS file system, check out NTFS.com and the NTFS documentation on Sourceforge.
Resources
MFT Analysis/Data Structures
Missing MFT Entry
LNK Files
Speaking of files...Harry Parsonage has an interesting post on Windows shortcut/LNK files, including some information on using the available timestamps (there are 9 of them) to make heads or tails of what was happening on the system. Remember that many times, LNK files are created through some action taken by a user, so this analysis may help you build a better picture of user activity.