At times, there appear to be a number of different sub-disciplines within the DFIR community, and those of us in that community tend to separate ourselves based on somewhat arbitrary distinctions. However, when I sit back and think about such things, it occurs to me that these separations are really just obstacles to our overall success, and do nothing whatsoever to strengthen any one sub-discipline. Instead, these divisions tend to weaken us all.
I once had a law enforcement official tell me, "You do intrusion and malware investigations, we do CP and fraud cases." At first, I thought...uhm...okay. But as I thought more about what he'd said, I began to think, what happens in a CP case when the accused claims the "Trojan Defense"? Doesn't it then become something of a malware case? If a Trojan or other malware is discovered during an exam, do we assume that it was, in fact, the culprit, or do we perform additional analysis to determine its capabilities, and whether or not it even executed?
The same can be said with respect to other issues, and spoliation is a great example. Melia recently blogged about a case experience, and even gave an excellent DFIROnline presentation, during which she discussed certain aspects of an exam that involved spoliation. Her issue involved determining the use of CCleaner, yet many of those skills she used to resolve the case could be easily used in other areas...especially the part about Registry analysis.
Another example of a spoliation exam can involve the defrag utility on Windows, as the use of this utility following a preservation order can be seen to be a violation of that order. After all, we all know that deleting a file doesn't necessarily make it gone, but defragging the hard drive after deleting the file can make that file much harder to recover. During such an exam, the analyst might find a Prefetch file for the defrag utility and determine that it had been run in violation of the order...but had it? Windows XP, by default, runs a limited defrag (see the "Prefetch" section of this page) every three days. Windows 7 includes a Scheduled Task for running the defrag utility every Wednesday...and examining the Application Event Log, you can look for events with source "Defrag" and ID 528 to see the status of some of the defrag runs. You'll also want to check the UserAssist subkeys for indications of the user launching the defrag utility, in order to separate default system behavior from intentional actions performed by the user.
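One way to frame that last step: before attributing a defrag run to the user, check whether the run time even lands on the default schedule. Below is a minimal Python sketch of that heuristic; the Wednesday default is Windows 7's out-of-the-box ScheduledDefrag setting mentioned above, and this check alone is not conclusive (UserAssist and the Event Log should corroborate).

```python
from datetime import datetime

def likely_scheduled_defrag(run_time: datetime) -> bool:
    """Heuristic: Windows 7's default ScheduledDefrag task fires weekly on
    Wednesday. A defrag run at any other time is a better candidate for a
    user-initiated action -- corroborate with UserAssist before concluding."""
    return run_time.weekday() == 2  # Monday == 0, so 2 == Wednesday

# Hypothetical Prefetch last-run times pulled during an exam:
print(likely_scheduled_defrag(datetime(2012, 4, 18, 1, 2)))    # a Wednesday -> True
print(likely_scheduled_defrag(datetime(2012, 4, 20, 14, 30)))  # a Friday -> False
```

A True here only means the run is consistent with the schedule; the analysis described above is still what separates default system behavior from intentional user action.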
Like other examinations, spoliation cases aren't isolated to their own group of specialists. I've been involved in CP cases where the first step was to address the Trojan Defense, and then the issue of counter-forensics techniques being used. The convergence with spoliation exams came through the examination of user activities: the user had run an older version of WindowWasher and deleted several artifacts, including the RecentDocs key (the entire key, not just the subkeys) from the NTUSER.DAT hive for the account. In this particular case, the anomaly was found via RegRipper, and determined to be the result of explicit actions taken by the user.
The same can be said for PCI forensic assessments...there are a lot of skills involved in such things that translate to other areas of the DFIR community. Don't believe me? Check out Chris's Core Duo post, and be sure to catch Chris's Sniper Forensics v3 presentation at the SANS Forensic Summit in Austin, TX, this summer.
So let's not isolate ourselves with the arbitrary distinctions. Instead, share what keeps you up at night with others within the community, as doing so will likely result in some innovative and interesting solutions.
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Wednesday, April 25, 2012
Updates
Power of Open Source
I've mentioned this before, and it bears repeating...open source is a very powerful thing.
I recently posted about the AppCompatCache finding that the Mandiant folks shared, and in looking at it, I found that some tools are able to handle 'big data', and some aren't. Specifically, Yaru from TZWorks had no trouble at all letting me view the data, but the Parse::Win32Registry Perl module (the basis for RegRipper) didn't. I did some troubleshooting of the module (thanks go to the author, James Macfarlane, for providing some useful debugging features) and found that it doesn't address 'big data' at all. I contacted the author, and he said that when developing the module, he hadn't run into any issues with, or instances of, 'big data'. If you look around on the Internet, you'll see that the first versions of the module were uploaded to CPAN sometime in 2007, so that wouldn't be too unexpected.
Knowing a little Perl, I was able to locate where within the module the value data is populated from a binary hive file, and I wrote an update to the specific file. I've been doing a bit of testing, and below is some of the sample output:
C:\Perl\forensics\rr>rip.pl -r d:\cases\win7\system -p appcompatcache
Launching appcompatcache v.20120424
Length of data: 228872
Signature: 0xbadc0fee
Win7/2008R2 (64-bit)
C:\Perl\forensics\rr>rip.pl -r f:\win7\system -p appcompatcache
Launching appcompatcache v.20120424
Length of data: 11104
Signature: 0xbadc0fee
Win7/2008R2 (32-bit)
The code also works equally well with 32-bit Windows XP hives. So, it appears that the update is working well, and I've provided the fix to the module author for inclusion in any updates.
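For anyone curious what the plugin keys off of: the cache layout is identified by the first DWORD of the value data. Here's a hedged Python sketch using the two signatures I can ground here (0xbadc0fee appears in the Win7/2008R2 output above; 0xdeadbeef is the XP-era format per Mandiant's paper; other OS versions use other signatures, which this sketch deliberately omits):

```python
import struct

# Known signatures: 0xbadc0fee matches the Win7/2008R2 output shown above;
# 0xdeadbeef is the XP-era AppCompatCache format per Mandiant's whitepaper.
SIGNATURES = {
    0xdeadbeef: "WinXP",
    0xbadc0fee: "Win7/2008R2",
}

def cache_format(data: bytes) -> str:
    """Identify the AppCompatCache layout from the first DWORD of the value."""
    sig = struct.unpack_from("<L", data, 0)[0]
    return SIGNATURES.get(sig, "unknown (0x%08x)" % sig)

print(cache_format(struct.pack("<L", 0xbadc0fee)))  # Win7/2008R2
```

Branching on this signature is what lets one plugin handle value data from hives of different Windows versions.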
Again...this isn't about a plugin, or RegRipper. It's about how powerful open source can be. Due to the information that Mandiant released regarding this Registry value, it was a simple matter to determine which tools worked in addressing this, and which didn't. For those that didn't, and were open source, a fix or update was pretty easy to develop. Another example of this would include updates to some of the current RegRipper plugins that allow for parsing of MAC addresses. Open source is a great way to fill the gap in capability, going from none to something fully functional in a relatively short amount of time.
I've uploaded a copy of the patch to the module, along with a readme file with instructions for how to install it, here. You can also find it at the RegRipper site that Brett set up. The module author has also addressed the issue in the code, and will be providing the fix in the next update, so that the module can be updated much more easily...if you're using ActiveState Perl, it would simply be a matter of typing the following command:
ppm install parse-win32registry
Alternatively, you could download the module (as a gzipped tarball) from CPAN (when it's available).
Monday, April 23, 2012
Updates and Links
Malware Detection
Jamie posted an MBR Parser recently. You're probably thinking, "Yeah...and???" Well, in her post regarding the code that she released, she does reference Chad's graphic regarding MBR malware, which illustrates the increase of MBR infectors over time. Jamie's code not only disassembles the instructions, but parses the partition table, as well. Yes, I know that there are other tools out there that parse the partition table, but I think that what Jamie provided is a means for gathering more information about systems when conducting analysis.
For example, let's assume that you're given a system and told, "We think this system was infected with malware", and nothing else. What do you do at that point? In most cases, I'm sure that an AV scan (or three) is run against the acquired image, but what else? As an analyst, do you specifically look for MBR infectors?
I've posted several times regarding the need to check for MBR infectors when you're working a "we think this system may be infected with malware" engagement. Mebroot can infect a system as a result of a browser drive-by; often times, I tend to think that we don't hear more about this sort of malware simply because analysis of infected systems isn't being done, or the MBR infectors aren't being effectively considered during analysis.
The code I wrote to help me detect MBR infectors parses the initial sectors of the acquired image; many times (albeit not always), the first sector (sector 0) of a physical image contains the MBR, and sector 63 may be where the first partition is located. What the script does is read through the intermediate sectors and identify those that do not contain all zeros; many (again, not all) MBR infectors will copy the original MBR to one or more sectors before laying down the infection.
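The core of that detection logic is simple enough to sketch. This is an illustrative Python version, not the script itself, and it assumes a raw (dd-style) physical image with 512-byte sectors and the first partition at sector 63:

```python
SECTOR = 512

def nonzero_gap_sectors(image_path, first_partition_sector=63):
    """Scan sectors 1..62 of a raw physical image; any sector that isn't
    all zeros is a candidate for a backed-up original MBR or infector code,
    and deserves a closer look."""
    hits = []
    with open(image_path, "rb") as f:
        for n in range(1, first_partition_sector):
            f.seek(n * SECTOR)
            sector = f.read(SECTOR)
            if sector.strip(b"\x00"):  # anything left means non-zero content
                hits.append(n)
    return hits
```

A hit doesn't mean the system is infected (boot managers and OEM tools also use this space); it means a sector that warrants manual examination.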
Note: I did, at one point, release the MBR infector detection script publicly, but I took it down, as I don't think that some of the analysts who were running the code really understood what they were doing. I was told by one person that they didn't understand why the code wasn't parsing MFT records, and another person ran the script against a Windows memory dump. It just got too time consuming to provide support for analysts who weren't using the code properly; however, I still use it as part of my malware detection process.
Where Jamie's code can be extremely valuable is when collecting information in preparation for your analysis. Let's say that you're collecting this information as part of your regular analysis, and you find that a system is, in fact, infected with an MBR infector. Now you have a means for identifying artifacts (or "IoCs") associated with that infection. This is another piece of information you can provide, another bit of intel that you can share, particularly when you're able to compare it to other systems you've examined before; if the particular MBR infector is one of the variants that makes a copy of the original MBR, you can use that copy as a comparison. This can, in turn, help you and others more quickly identify indications of similar infections in the future.
I really think that the code that Jamie provided is not only very useful in and of itself (and as previously described), but also serves to illustrate the power of open source. Someone with some skill and interest sits down and provides a capability that previously didn't exist, or wasn't easily accessible to the majority of the community. Jamie produced this code of her own accord; another great example of how new capabilities come about was illustrated when Corey had an idea, and this guy brought it to life.
Resources
MBR Reference
WikiPedia: MBR
"The Standard MBR"
Malware Analysis
Speaking of malware (or "mall-wear"), there is an excellent analysis of a malware sample over on the System Forensics blog. It's so nice sometimes to see malware analysis that involves something more than just running strings on the file; in this case, a number of tools are used, including Dependency Walker, PEiD, Resource Hacker, and CaptureBAT. If this is your kind of thing, take a look...it's a great walk-through on how to gather some good intelligence about a malware sample, and an excellent example of what you can discover about an executable beyond the output of strings.
Breaches and Sharing Intel
BankInfoSecurity recently posted an interview with the Heartland CEO, where breach response was the topic of discussion. Very early in the interview, Carr says that "information sharing is key", and he's right...some bad guys might be in other people's systems. Even if it isn't the same group or individual, techniques and tools may be similar, and sharing of information and intel may lead to a much more effective response, because you're not starting over, or starting completely from square 1...you've got something of a leg up.
Not long ago, I blogged about the Need for Analysis in an Intel-Driven Defense. That post referenced Dan Guido's paper on the same topic, and what I was pushing for in my post was the need to look at the analysis function within organizations, and how actually performing analysis would lead to more intel being available.
Something that Dan mentioned in his paper...specifically, the number of possible vulnerabilities versus those that are actually exploited...recently came up again, this time via the MMPC site; specifically, via a post about analysis of the Eleonore exploit pack shellcode. Remember, Dan had said that there were over 8000 vulnerabilities tracked last year...and it appears that the exploit pack in question uses a grand total of...seven. Understanding and sharing this kind of information, as well as the indicators of a compromise, will provide responders and analysts with a means for a more effective response. Now what needs to happen is more posts like Corey's great exploit artifact posts.
As Dan pointed out in his paper, looking at the hard numbers (how very Deming-esque of him to say that...), we see where we need to focus our efforts. Within a large organization, would it be more advantageous to staff for the 8000+ possible vulnerabilities, or to beef up the analysis capability, so that (a) analysis actually gets done, and (b) it gets done in a timely (timely enough to be useful) and accurate manner?
In order to get the intel we need to defend our infrastructure, we need to conduct analysis. If this isn't being done, we need to take a hard look at why not. Do we not have sufficient staff? Is our staff not sufficiently trained, or do they not have sufficient capability? What can we do to improve that? If we're not collecting intel, then what can we share with others, so that everyone can better protect themselves?
I'd suggest that the single biggest thing you can do is get senior management recognition and support for the need for intel.
Saturday, April 21, 2012
Tools, Updates...
Tools
Didier has released an updated version of an older viewer that he'd written, called InteractiveSieve. Based on the description of the viewer, this looks like an excellent tool for performing timeline analysis.
Here's what I would do...many times, I will want to look at a particular date range, so I would run the parse.pl script to extract just that date range from my events file. I would then open the resulting mini-timeline in Didier's viewer and go about deleting those things that I didn't want to see, and colorizing things that might be important, or interesting but not specifically relevant without further investigation.
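For those unfamiliar with the events-file side of this, the date-range extraction amounts to filtering pipe-delimited TLN lines (epoch|source|system|user|description) on the first field. A rough Python stand-in for that step (parse.pl's actual behavior may differ; the event lines below are made up for illustration):

```python
def filter_events(lines, start, end):
    """Keep TLN-format events (epoch|source|system|user|description) whose
    32-bit Unix timestamp falls within [start, end]; lines that don't start
    with a numeric timestamp are skipped."""
    out = []
    for line in lines:
        try:
            when = int(line.split("|", 1)[0])
        except ValueError:
            continue
        if start <= when <= end:
            out.append(line)
    return out

events = [
    "1335225600|REG|HOST|user|Sample event inside the April 2012 range",
    "1230768000|EVT|HOST|-|Sample event from 2009, outside the range",
]
print(filter_events(events, 1335000000, 1336000000))  # keeps only the first event
```

The resulting mini-timeline is what would then get loaded into a viewer for the delete-and-colorize pass described above.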
AppCompatCache
The folks over at Mandiant posted to the M-unition blog regarding the Application Compatibility Cache, which is maintained in the Registry (see their paper). They've released a free tool to view this data, and in less than 30 minutes, I wrote up a RegRipper plugin to parse this data. The first test data that I had available was 32-bit XP, so it's limited, but it's a start, and I think that it really shows the power of open source. I don't say this to take anything away from the efforts of the Mandiant folks...rather, I thank them for their willingness to share the results of their research with the community at large. I provided a copy of the plugin to the SANSForensics team, and gave them permission to post the code via the SANS Case Leads. Rob contacted me with the test results, which weren't good. It appears that the module I use has an issue, which I describe below in the "Troubleshooting" section.
Now, how is this information useful? Check out Mandiant's paper...this particular data source is very rich in data, and I'll be updating the plugins once I get the module "fixed".
Open Source
I had posted to the Win4n6 Yahoo group some thoughts I had on the power of open source tools, with respect to the information Mandiant released. The purpose of the post was not to say, "hey, look at me...I wrote another plugin!!", but rather to demonstrate the power and flexibility of open source tools, and how they can quickly be extended to provide a capability that might take days or weeks to appear in commercial applications. Andrew provided another example, one that involved extending Volatility during Stuxnet analysis. As someone who's done DFIR work for a long time, I really appreciate having the ability to decide what analysis I will do, rather than being penned in by a commercial tool or framework.
Troubleshooting
Okay, back to the information Mandiant provided regarding the Registry value...one of the members of the Win4n6 group (Ben) sent me a Windows 7 System hive. I have several, which I had opened in MiTeC's WRR, finding the value in question to be all zeros...yet Mandiant's free tool pulled a great deal of data from the hive. I checked again, and sure enough, the value in both ControlSets was all zeros. So, based on a suggestion from Ben, I tried Yaru from TZWorks, and found all of the data. I also wrote a quick Perl script to extract the data from the value and place it into a file; from there, I opened the file in a hex editor and could easily view the data. It turns out that Yaru is apparently the only tool of those I looked at that correctly handles 'db' (big data) node types within the Registry. I have contacted the author of the Parse::Win32Registry module about this, in hopes that it's an easy fix.
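If you want to reproduce that last step yourself, once the raw value bytes are dumped to a file, even a trivial hex-editor-style rendering makes the content visible. This is a generic Python sketch, not tied to any particular tool (the sample bytes are made up for illustration):

```python
def hexdump(data, width=16):
    """Render raw bytes the way a hex editor would: offset, hex bytes,
    printable-ASCII column. Useful for eyeballing value data that a
    parser chokes on."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join("%02x" % b for b in chunk)
        asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append("%08x  %-*s  %s" % (off, width * 3 - 1, hexpart, asciipart))
    return "\n".join(lines)

print(hexdump(b"\xfe\x0f\xdc\xbaAppCompatCache"))
```

Seeing the expected strings in the ASCII column of the dumped value is a quick way to confirm the data is present in the hive even when a viewer shows all zeros.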
Registry Unallocated Space
Another interesting aspect of TZWorks' Yaru is that when you load the hive, it indexes the contents of the hive...and finds deleted keys. Pretty cool!
I wanted to see how that compared to regslack, so I ran regslack against the same hive and got somewhat different results; I got the same deleted key that Yaru found, plus one other, and I also got a LOT of unallocated space! The web page for Yaru says that finding deleted keys is an experimental capability, which is great...it's also great that someone else is working on this topic. Jolanta's work on and release of regslack was a significant milestone for Registry analysis (here is one of my first blog posts on the topic).
The description of Yaru also states that you can view "cell slack", or unused "key value data space"...that's something else that might be very interesting to look into, although I'm not completely clear on what value there may be in data included in cell slack.
A while back, while I was involved in PCI forensic assessments, I followed our documented process once I was back in the lab, and my scan for CCNs within an acquired image turned up hits within the Software and NTUSER.DAT hives on a system. I thought that was odd...looking at the data surrounding the hits, it wasn't 100% clear to me that these were actual CCNs; there were no indications that this was track data. So I exported the hives and ran searches across the "live" Registry...and got nothing. It turned out that the CCNs were part of unallocated space within the hive files. Understanding that there is unallocated space within a hive file can mean the difference between saying, "CCNs were found in the Registry", and actually providing accurate information in your report (which matters to your customer).
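A scan that Luhn-checks candidate digit runs supports exactly that kind of careful reporting: a hit becomes "possible but unverified" rather than "CCNs found". Here's a simplified Python sketch of the idea (real PCI search tools also look for track-data framing and multiple card lengths, which this deliberately omits; 4111111111111111 is a well-known Luhn-valid test number, not real card data):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def ccn_candidates(blob: bytes):
    """Flag 16-digit runs in raw hive bytes that pass the Luhn check --
    a hit here still needs eyes on it (track data, surrounding context)
    before it's reported as an actual card number."""
    text = blob.decode("latin-1")
    return [m for m in re.findall(r"(?<!\d)\d{16}(?!\d)", text) if luhn_ok(m)]

print(ccn_candidates(b"junk\x004111111111111111\x00junk"))
```

Running something like this over the exported hive files, rather than only over the mounted "live" Registry, is what would surface hits living in hive unallocated space.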
Malware Persistence
The MMPC site has a description of Trojan:Win32/Reveton.A, which provides a good deal of information about this bit of ransomware. Apparently, this baddie locks the infected system and displays a warning to the user that they've been reported to authorities as possessing illicit material.
Okay, so what is the persistence mechanism? This one creates a Windows shortcut (LNK file) for itself in the Windows Startup folder. Since the malware arrives as a DLL, it uses rundll32.exe to launch itself via the shortcut.
So what this gives us is some very good info to add to our malware detection checklist, doesn't it? Not only should we check the Startup folders for shortcuts (an easy check; you can accomplish this with a simple 'dir' command), but we might also want to get some additional information via Prefetch analysis, particularly of Prefetch files whose names start with "RUNDLL32.EXE".
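The 'dir'-level check can be pushed one small step further by flagging Startup shortcuts whose raw bytes mention rundll32. This Python sketch is crude triage only; a proper exam would parse the MS-SHLLINK structure for the actual target path and arguments rather than grepping (the directory and file names in the test are hypothetical):

```python
import os

# "rundll32" as ASCII and as UTF-16LE, since LNK files can carry either encoding
ASCII_NEEDLE = b"rundll32"
UTF16_NEEDLE = "rundll32".encode("utf-16-le")

def suspicious_startup_lnks(startup_dir):
    """Crude triage: list .lnk files in a Startup folder whose raw bytes
    mention rundll32 in ASCII or UTF-16LE."""
    hits = []
    for name in sorted(os.listdir(startup_dir)):
        if not name.lower().endswith(".lnk"):
            continue
        with open(os.path.join(startup_dir, name), "rb") as f:
            raw = f.read().lower()
        if ASCII_NEEDLE in raw or UTF16_NEEDLE in raw:
            hits.append(name)
    return hits
```

A hit isn't proof of Reveton specifically; it's a shortcut that warrants pulling apart properly, since legitimate software occasionally launches via rundll32 as well.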
"Community"
There's a new post up over on the Hexacorn blog, and comments are turned off, so I can't comment there...so let's do it here. ;-) Overall, let me say that I find most of the posts there interesting, and this one is no exception, but in this case, I'm only interested in four words from the post:
Like many before me...
I think that what a lot of us lose sight of is the fact that, with few exceptions, there's probably someone out there who has faced the same challenges we've seen, and had to deal with the same...or at least very similar...issues. So, when faced with these challenges, we have options...we can seek help, or (if we're racing the clock to get stuff done) we can try to muddle through and figure things out for ourselves. I've heard people say this...that they want to wrestle with the issue and figure it out for themselves...even though there are others willing to help, or material and documentation available. This is very noble, but think about it...is it any wonder that we don't see anything from them about what they learned later? It's probably because they spent so much time wrestling that they don't have much time for anything else.
Recently, Girl, Unallocated gave an excellent DFIROnline presentation that involved a spoliation case and CCleaner. Now, I've had an opportunity recently to work with the latest version of this tool...not in a forensic analysis capacity...so I'm a little familiar with it. However, not long ago, I dealt with a case that involved an older version of WindowWasher. So this shows that in a lot of ways, there are very few "new" cases; that is to say, it's likely that technology aside (XP vs. Win7, for example), there are very few, "This is what I need to determine..." cases.
Need to recover Event Log records (or MFT records) from unallocated space on an XP workstation or Win2003 server? Perform USB device analysis? Determine if that malware on the system actually executed? I'm sure that someone else has run into this before...probably many "somebodys".
So, what to do? Well, we can start by recognizing that if we hit a road block, there's a way around it that someone else may already have found. Ask for assistance. It's also helpful if (a) the exchange isn't taken "off list", and (b) when the dust has settled, there's some final feedback or closure.
Remember, no one of us is as smart as all of us together.
To close this out, I recently had an event that made the issue of "community" clear to me. Back in 2009, I'd written a script to parse Windows XP Scheduled Task/.job files (pull out the command run, the last time the job was run, and the status), and I have it is part of my personal stash of timeline tools. In recent weeks, I had two different people ask me for a copy of the script, which I was somewhat hesitant to do because of my past experience with doing this sort of thing. I decided to give the first person a chance and sent them the script. I was notified that they received it, but getting feedback on how well it worked was like pulling hen's teeth. So the second person came along, and I was just gonna say, "No, thanks" to their request...but they had a compelling need. And they ran into an issue with the script...during testing, I'd never encountered a .job file that had been created but never run. Now, I simply don't have any of those types of files available...but I asked Corey for some help, and thankfully, he was able to provide a couple of files. All in all, the script is working very well, and providing not just some useful output, but it will also provide that output in TLN format. Thanks to Corey providing some sample files, that meant I didn't need to go find an XP install disk, set up a VM, etc., etc., and was able to provide a solution much sooner.
Didier has released an updated version of an older viewer that he'd written, called InteractiveSieve. Based on the description of the viewer, this looks like an excellent tool for performing timeline analysis.
Here's what I would do...many times, I want to look at a particular date range, so I would run the parse.pl script to extract just that date range from my events file. I would then open the resulting mini-timeline in Didier's viewer and go about deleting those things that I didn't want to see, and colorizing things that might be important, or interesting but not specifically relevant without further investigation.
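As a sketch (in Python rather than Perl) of that first extraction step: a five-field TLN events file uses time|source|host|user|description, with the time as a 32-bit Unix epoch value, so a date-range filter only has to compare the first field. The field layout is the only assumption here:

```python
def filter_tln_events(lines, start, end):
    """Keep TLN events (time|source|host|user|description) whose Unix
    epoch time falls within [start, end], inclusive."""
    keep = []
    for line in lines:
        first = line.split("|", 1)[0]
        if not first.isdigit():       # skip blank or malformed lines
            continue
        if start <= int(first) <= end:
            keep.append(line)
    return keep
```

The filtered list can then be written out as the mini-timeline to load into the viewer.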
AppCompatCache
The folks over at Mandiant posted to the M-unition blog regarding the Application Compatibility Cache, which is maintained in the Registry (see their paper). They've released a free tool to view this data, and in less than 30 minutes, I wrote up a RegRipper plugin to parse this data. The first test data that I had available was 32-bit XP, so it's limited, but it's a start, and I think that it really shows the power of open source. I don't say this to take anything away from the efforts of the Mandiant folks...rather, I thank them for their willingness to share the results of their research with the community at large. I provided a copy of the plugin to the SANSForensics team, and gave them permission to post the code via the SANS Case Leads. Rob contacted me with the test results, which weren't good. It appears that the module I use has an issue, which I describe below in the "Troubleshooting" section.
Now, how is this information useful? Check out Mandiant's paper...this particular data source is very rich in data, and I'll be updating the plugins once I get the module "fixed".
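For reference, a minimal sketch of what a parser has to do for the 32-bit XP layout, as I understand it from Mandiant's paper: a 400-byte header starting with the signature 0xDEADBEEF and an entry count, followed by 552-byte entries (a 528-byte Unicode path, a FILETIME last-modified value, a file size, and a FILETIME update value). Treat these offsets as assumptions and verify them against your own test data:

```python
import struct
from datetime import datetime, timedelta, timezone

XP_HEADER_SIZE = 400   # 0x190; entries begin immediately after
XP_ENTRY_SIZE = 552
XP_PATH_SIZE = 528     # UTF-16LE, null-terminated

def filetime_to_dt(ft):
    """Convert a 64-bit FILETIME (100ns ticks since 1601-01-01 UTC)."""
    if ft == 0:
        return None
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_xp_appcompatcache(data):
    """Parse a 32-bit XP AppCompatCache value into a list of dicts."""
    sig, num_entries = struct.unpack_from("<II", data, 0)
    if sig != 0xDEADBEEF:
        raise ValueError("not an XP-format AppCompatCache value")
    entries = []
    for i in range(num_entries):
        off = XP_HEADER_SIZE + i * XP_ENTRY_SIZE
        raw_path = data[off:off + XP_PATH_SIZE]
        path = raw_path.decode("utf-16-le", errors="ignore").split("\x00", 1)[0]
        mtime, size, update = struct.unpack_from("<QqQ", data, off + XP_PATH_SIZE)
        entries.append({"path": path, "modified": filetime_to_dt(mtime),
                        "size": size, "updated": filetime_to_dt(update)})
    return entries
```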
Open Source
I had posted to the Win4n6 Yahoo group some thoughts I had on the power of open source tools, with respect to the information Mandiant released. The purpose of the post was not to say, "hey, look at me...I wrote another plugin!!", but rather to demonstrate the power and flexibility of open source tools, and how they can quickly be extended to provide a capability that might take days or weeks to appear in commercial applications. Andrew provided another example, one that involved extending Volatility during Stuxnet analysis. As someone who's done DFIR work for a long time, I really appreciate having the ability to decide what analysis I will do, rather than being penned in by a commercial tool or framework.
Troubleshooting
Okay, back to the information Mandiant provided regarding the Registry value...one of the members of the Win4n6 group (Ben) sent me a Windows 7 System hive (I have several), which I opened in MiTeC's WRR, and found the value in question to be all zeros...yet Mandiant's free tool pulled a great deal of data from the hive. I checked again, and sure enough, the value in both ControlSets was all zeros. So, based on a suggestion from Ben, I tried Yaru from TZWorks, and found all of the data. I also wrote up a quick Perl script to extract the data from the value and place it into a file; from there, I opened the file in a hex editor and could easily view the data. It turns out that Yaru is apparently the only tool of those I looked at that correctly handles 'db' node types within the Registry. I have attempted to contact the author of the Parse::Win32Registry module about this, in hopes that it's an easy fix.
Registry Unallocated Space
Another interesting aspect of TZWorks' Yaru is that when you load the hive, it indexes the contents of the hive...and finds deleted keys. Pretty cool!
I wanted to see how that compared to regslack, so I ran regslack against the same hive and got a bit different information; I got the same deleted key that Yaru found, plus one other, and I also got a LOT of unallocated space! The web page for Yaru says that finding deleted keys is an experimental capability, which is great...it's also great that someone else is working on this topic. Yolanta's work and release of regslack were a significant milestone for Registry analysis (here is one of my first blog posts on the topic).
The description of Yaru also states that you can view "cell slack", or unused "key value data space"...that's something else that might be very interesting to look into, although I'm not completely clear on what value there may be in data included in cell slack.
A while back, while I was involved in PCI forensic assessments, I followed our documented process once I was back in the lab, and my scan for CCNs within an acquired image turned up hits within the Software and NTUSER.DAT hives on a system. I thought that was odd...looking at the data surrounding the hits, it wasn't 100% clear to me that these were actual CCNs; that is, there were no indications that this was track data. So I exported the hives and ran searches across the "live" Registry...and got nothing. It turned out that the CCNs were part of unallocated space within the hive files...so understanding that there is unallocated space within a hive file can mean the difference between simply saying, "CCNs were found in the Registry", and actually providing accurate information in your report (as it can affect your customer).
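When a search hit like that isn't clearly track data, one quick way to triage candidates is the Luhn mod-10 check, which every valid payment card number passes. It won't prove a hit is a real PAN, but it quickly discards many false positives:

```python
def luhn_valid(candidate):
    """Luhn mod-10 check for a candidate card number string."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 19:   # PANs are 13-19 digits
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```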
Malware Persistence
The MMPC site has a description of Trojan:Win32/Reveton.A, which provides a good deal of information about this bit of ransomware. Apparently, this baddie locks the infected system and displays a warning to the user that they've been reported to authorities as possessing illicit material.
Okay, so what is the persistence mechanism? This one creates a Windows shortcut (LNK file) for itself in the Windows Startup folder. Since the malware arrives as a DLL, it uses rundll32.exe to launch itself via the shortcut.
So what this gives us is some very good info to add to our malware detection checklist, doesn't it? Not only should we check the Startup folder shortcuts (an easy check; you can accomplish this with a simple 'dir' command), but we might want to get some additional information via Prefetch analysis, particularly of Prefetch files whose names start with "rundll32.exe".
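A rough Python equivalent of that 'dir' check against a mounted image might look like the following; matching on any folder named "Startup" is a simplification (it catches both the XP-style and Win7-style profile paths), so adjust the logic for your mount point:

```python
import os

def list_startup_shortcuts(root):
    """Walk a mounted image root and list .lnk files found in any
    folder named 'Startup' (XP or Win7 profile layouts)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath).lower() == "startup":
            hits.extend(os.path.join(dirpath, f) for f in filenames
                        if f.lower().endswith(".lnk"))
    return hits
```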
"Community"
There's a new post up over on the Hexacorn blog, and comments are turned off, so I can't comment there...so let's do it here. ;-) Overall, let me say that I find most of the posts there interesting, and this one is no exception, but in this case, I'm only interested in four words from the post:
Like many before me...
I think that what a lot of us lose sight of is the fact that, with few exceptions, there's probably someone out there who has faced the same challenges we've seen, and had to deal with the same...or at least very similar...issues. So, when faced with these challenges, we have options...we can seek help, or (if we're racing the clock to get stuff done) we can try to muddle through and figure things out for ourselves. I've heard people say this...that they want to wrestle with the issue and try to figure things out for themselves...even though there are others willing to help, or material and documentation available. This is very noble, but think about it...is it any wonder why we don't see anything from them about what they learned later? It's probably because they spent so much time wrestling that they don't have much time for anything else.
Recently, Girl, Unallocated gave an excellent DFIROnline presentation that involved a spoliation case and CCleaner. Now, I've had an opportunity recently to work with the latest version of this tool...not in a forensic analysis capacity...so I'm a little familiar with it. However, not long ago, I dealt with a case that involved an older version of WindowWasher. So this shows that in a lot of ways, there are very few "new" cases; that is to say, it's likely that technology aside (XP vs. Win7, for example), there are very few, "This is what I need to determine..." cases.
Need to recover Event Log records (or MFT records) from unallocated space on an XP workstation or Win2003 server? Perform USB device analysis? Determine if that malware on the system actually executed? I'm sure that someone else has run into this before...probably many "somebodys".
So, what to do? Well, we can start by recognizing that if we hit a road block, there's a way around it that someone else may already have found. Ask for assistance. It's also helpful if (a) the responses are not taken "off list", and (b) when the dust has settled, there's some final feedback or closure.
Remember, no one of us is as smart as all of us together.
To close this out, I recently had an event that made the issue of "community" clear to me. Back in 2009, I'd written a script to parse Windows XP Scheduled Task/.job files (pull out the command run, the last time the job was run, and the status), and I have it as part of my personal stash of timeline tools. In recent weeks, I had two different people ask me for a copy of the script, which I was somewhat hesitant to share because of my past experience with doing this sort of thing. I decided to give the first person a chance and sent them the script. I was notified that they received it, but getting feedback on how well it worked was like pulling hen's teeth. So the second person came along, and I was just gonna say, "No, thanks" to their request...but they had a compelling need. And they ran into an issue with the script...during testing, I'd never encountered a .job file that had been created but never run. Now, I simply don't have any of those types of files available...but I asked Corey for some help, and thankfully, he was able to provide a couple of files. All in all, the script is working very well, providing not just some useful output, but also providing that output in TLN format. Because Corey provided some sample files, I didn't need to go find an XP install disk, set up a VM, etc., and was able to provide a solution much sooner.
Monday, April 16, 2012
Metadata
I've blogged about metadata before (here, and here), but it's been a while, and this is a subject worth revisiting every so often. Metadata has long been an issue for users, and a valuable resource for investigators and forensic analysts. There are a number of file types (images, documents) that allow for embedded metadata...this doesn't mean that it's always populated, but I think you'd be surprised how much information is, in fact, leaked via embedded metadata. MS Office documents, PDFs, and JPG images are all known to be capable of carrying a range of embedded metadata.
One example of embedded metadata coming back to bite someone that I've referenced in my books is the Blair issue discussed by the ComputerBytesMan. This particular issue dated back to 2003, and it's clear that an older version of MS Word was used at the time. This version of MS Word used the OLE "structured storage" format; more recent versions of the Office documents don't use this format any longer, but it is used in Jump Lists, Sticky Notes, and IE session restore files.
Metadata has also brought down others. In the spring of 2012, metadata embedded in an image taken with a smartphone was used to track down the hacker "w0rmer".
One of the best tools I've found for collecting metadata from a wide range of file types (images, documents) is Phil Harvey's EXIFTool. This is a command line tool (available for Windows, Mac OS X, and Linux), which means it's easy to script; you can write simple batch files to extract metadata from all files in a folder, or all files of a particular type (JPG, DOC/DOCX, etc.) in a directory structure. If you prefer GUI tools, check out the EXIFToolGUI...simply remove the "(-k)" from the EXIFTool file name and put the GUI application in the same directory, and you're ready to go.
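As one way to script it, here's a small helper that builds an ExifTool command line for a directory sweep; -r (recurse), -ext (filter by extension), and -csv (CSV output) are documented ExifTool options, while the tag list shown is just an example set:

```python
def exiftool_cmd(directory, ext="jpg",
                 tags=("-CreateDate", "-Model", "-GPSPosition")):
    """Build an exiftool command that recurses a directory for one
    file type and dumps the selected tags as CSV."""
    return ["exiftool", "-r", "-csv", "-ext", ext, *tags, directory]
```

Pass the resulting list to subprocess.run(..., capture_output=True) and parse the CSV output.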
For more recent versions of MS Office documents, you might consider using read_open_xml_win.pl.
Removing embedded metadata can be pretty easy without employing any special tools. For example, you can remove embedded metadata from JPG images (the format used on digital cameras and smartphones) by using MS Paint to convert the image to TIFF format, then back to JPG.
Metadata can be a very valuable resource for investigators. Computer systems may include a number of images or documents from which metadata can be extracted. When examining systems, analysts should be sure to look for smartphone backup files, as images found in these backups may have considerable intelligence value.
Finding images or documents to check for embedded metadata is easy. Start with your own hard drive or file server. Alternatively, you can run Google searches (e.g., "site:domain.com filetype:doc") and find a great many documents available online.
Speaking of metadata, one file type that contains some interesting metadata is XP/2003 .job files. About three years ago, I had written a script to parse these files, and was recently asked to provide a copy of this script. I don't usually do that, as most often I don't hear back as to how well the script ran, if at all...but I decided to make an exception and provide the script this time. It turns out that the script had an issue, and Corey Harrell was nice enough to provide a couple of .job files for testing. As it turns out, when I wrote the script, I hadn't had any .job files that had never been run, and the script was failing because I hadn't dealt with the case where the time fields were all zero. Thanks to Corey, I was able to quickly get that fixed and provide a working copy.
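The fix itself is simple once you know the layout: per the published .JOB file format, the fixed-length section stores the last-run time as eight little-endian 16-bit SYSTEMTIME fields at offset 0x34, and a task that was created but never run leaves them all zero. A sketch of the check (offset per my reading of the format documentation, so verify against your own samples):

```python
import struct
from datetime import datetime

def job_last_run(data):
    """Pull the last-run SYSTEMTIME from a .job file's fixed-length
    section; returns None for a task that has never run (all-zero
    time fields)."""
    (year, month, _weekday, day,
     hour, minute, second, millis) = struct.unpack_from("<8H", data, 0x34)
    if year == 0:          # created but never run
        return None
    return datetime(year, month, day, hour, minute, second, millis * 1000)
```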
Thursday, April 12, 2012
Registry Analysis
I have several upcoming speaking engagements, during which I will be talking about digital forensic analysis of Windows systems (with a focus on Windows 7), as well as about Registry analysis. When preparing for presentations, I like to stay current (as much as I can) on what's being seen and visible through the media, so that I can provide up-to-date examples of what can be found on systems, particularly in the Registry.
The Microsoft Malware Protection Center recently provided information regarding April 2012 updates to the Microsoft Malicious Software Removal Tool (referred to as either "MSRT" or "MRT"). The April update includes coverage for three threat families. The reason I mention this is, in part, because some of this malware is pretty "interesting"...one mines BitCoins using the compromised system.
The other reason I mention this is, even with the apparent sophistication of the malware itself, most authors still want their malware to remain persistent. Win32/Gamarue can create an entry in the "load" value, or create a value beneath the HKLM\..\Run key. Win32/Claretore creates a value beneath the HKCU\..\Run key. These and other artifacts can make the malware easy to detect, even when all you have to examine is an acquired image.
You're probably thinking, "so what", right? Big deal. Well, you know why malware authors continue to use these "obvious" locations to have their code remain persistent? Because it works. That's pretty much it. You see, the bad guys aren't trying to hide from me...they're trying to hide from you and your users. If neither you nor your users are aware of these persistence mechanisms, and your approach to malware is to use commercial AV products, then you can be pretty sure that the bad guy is going to remain undetected long enough to get what they want. I did emergency incident response for ISS (later, IBM ISS) for 3 1/2 years...the very nature of what we did was predicated on the incident already having happened, very often days, weeks or even months before we were called.
Regarding the AV products, Rob Lee recently posted some very interesting findings regarding AV products and the use of real-world malware in the capstone exercise for his FOR508 course. Why would I bring this up? Well, Rob's post illustrates an important factor to keep in mind when dealing with current issues...sometimes, the solution we've employed may not be sufficient. When I teach my malware detection course, I describe four characteristics of malware that incident responders and digital forensic analysts can use as a framework for determining if a system had been infected with malware.
As an example, consider the Conficker/Downadup family of malware...I'm sure many of you remember it (perhaps not too fondly). MS's write-up of this malware family includes five variants...and the one thing that they all have in common is their persistence mechanism...they all create a randomly-named Windows service...as such, there are a number of ways to detect the presence of this malware without using AV, including Registry and Event Log analysis.
I can remember back when I started getting into Registry analysis, and thinking that malware authors were pretty sophisticated in their use of persistence mechanisms. Back then, it appeared that just the Registry locations that malware authors could use for persistence were near-infinite. Now, a dozen or more years later, I still see the same Registry keys being used time and time again. Because they work.
Registry analysis is only one component of your overall analysis framework. However, many times, I feel as if it's misunderstood or simply not performed. The nature of the Registry makes it an incredibly rich artifact environment, providing a wealth of information, intelligence and context.
Need resources to better understand the Registry? I may be available to speak to your group (depends on scheduling...feel free to contact me) or take a look at my books...Mike Ahrendt recently posted a review of two of them.
Tuesday, April 10, 2012
Value of Targeted Timeline Analysis in Research
I recently conducted some research into a specific event/activity on a system, and found that timeline analysis proved to be an extremely valuable technique to use in this case. I see a good number of questions online in the various forums asking about what could lead to a particular event, and in most cases, I ask the poster if they'd considered generating a timeline. After all, any time you're interested in an event that occurred, particularly one that occurred at a specific time, generating and analyzing a timeline is a great way to answer that question. It's also a great way to conduct analysis as part of research.
As indicated in my previous post, I had read online that the volume GUIDs found in the value names within the MountedDevices key, as well as within a user's MountPoints2 subkey names follow the UUID v1 format discussed in RFC 4122. As such, this GUID contains a node identifier, which is most often a MAC address from the system. Okay, that's easy enough. There is also a 60-bit time stamp embedded in the GUID, which tracks 100 nano-second intervals since 15 Oct 1582...and I got interested in determining to what event that time stamp referred or was associated, specifically on my Windows 7 Ultimate test system.
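Per RFC 4122, the version 1 layout packs the 60-bit timestamp across the time_low, time_mid, and time_hi fields, and the node into the last six bytes, so decoding a volume GUID in Python is short (the stdlib uuid module exposes both fields):

```python
import uuid
from datetime import datetime, timedelta, timezone

# UUID v1 timestamps count 100ns intervals from the Gregorian reform date
GREGORIAN_EPOCH = datetime(1582, 10, 15, tzinfo=timezone.utc)

def decode_volume_guid(guid_str):
    """Decode a UUID v1 volume GUID into (timestamp, node/MAC string)."""
    u = uuid.UUID(guid_str)
    if u.version != 1:
        raise ValueError("not a version 1 (time-based) UUID")
    ts = GREGORIAN_EPOCH + timedelta(microseconds=u.time // 10)
    mac = ":".join(f"{(u.node >> s) & 0xFF:02x}" for s in range(40, -8, -8))
    return ts, mac
```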
I purchased a SanDisk Blade thumb drive (it was on sale), as I knew that such a device had never been connected to my system. I then recorded the time and date that I booted the system, as well as when I first connected the device to the system. After connecting the device and noting the drive letter to which it was mapped, I disconnected it, waited a couple of hours, and reconnected the device. After disconnecting it, I shut the system down. Again, I recorded the significant times in my notes.
Several days later, I copied the setupapi.dev.log file, extracted the System, Software, and NTUSER.DAT hives via FTK Imager, and dumped the System Event Log via LogParser, using the following command:
logparser -i:evt -o:csv "Select RecordNumber,TO_UTCTIME(TimeGenerated),EventID,SourceName,Strings from System" > d:\cases\local\system.csv
Using the appropriate tools, I generated my events file in preparation for creating a timeline. I wrote a specific script to parse the System Event Log entries into the necessary five-field format, and crafted a specific variant of the RegRipper mp2.pl plugin to parse the volume GUIDs based on the UUID v1 time stamps. I used regtime.pl (part of my timeline tools) to parse the Registry hives into the necessary format.
Now, what I didn't do was create a supertimeline from all data sources, as it wasn't necessary to do so. No new Prefetch files had been created during the testing process, and I hadn't created or edited files on the thumb drive, so there really was no need to collect file system metadata. I focused solely on the System Event Log as I understand that a number of event records are created in that log file with respect to drivers and USB devices, as well as a number of system-specific records.
What I was able to determine is that the time stamp within the volume GUID refers to the boot time of the boot session during which the device was connected to the system. For example, I booted my test system at approximately 4:29pm EDT on 4 April 2012 (as indicated by a Microsoft-Windows-Kernel-General record with event ID 12); however, I didn't plug the device into the system until 4:55pm EDT (remember, I disconnected it and then plugged it in again later at 9:02pm EDT). When I parsed the volume GUID for the time stamp and normalized it to 32-bit time format, the time was within a second of the boot time of the system. Throughout the rest of the timeline, I saw the creation of keys beneath the Enum\USB and Enum\USBStor keys within the System hive, as well as several other keys that need to be researched further and possibly included in the USB device analysis process. I was also able to see that there were a number of event records pertaining to the use of USB devices.
Another example where this sort of analysis can be applied is in determining artifacts corresponding to a CD or DVD being burned on a Windows 7 system. The System Event Log will have an event record with a source name of "cdrom" and an event ID of 133, indicating that the CDROM was locked for exclusive use. If the user burned an ISO image, you'll see an indication of the access to the ISO image beneath the user's RecentDocs key, just prior to the ID 133 event. If the user chose to send a selection of files to the CD/DVD drive and to "burn a disk", you'll likely see an isoburn.exe Prefetch file.
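Given a System log exported to CSV (as with the logparser command shown earlier), pulling those cdrom/133 records is a one-pass filter; the column names below are assumptions based on that query, so match them to your actual export:

```python
import csv

def find_cdrom_lock_events(csv_path):
    """Return rows from a System-log CSV export where SourceName is
    'cdrom' and EventID is 133 (drive locked for exclusive use)."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row.get("SourceName", "").lower() == "cdrom"
                and row.get("EventID") == "133"]
```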
Again, it's very clear that timelines are an extremely valuable technique, not only for analysis of acquired images, but also when conducting research with respect to developing context and an increased relative confidence in the data and events in which you may be interested.
As indicated in my previous post, I had read online that the volume GUIDs found in the value names within the MountedDevices key, as well as within a user's MountPoints2 subkey names follow the UUID v1 format discussed in RFC 4122. As such, this GUID contains a node identifier, which is most often a MAC address from the system. Okay, that's easy enough. There is also a 60-bit time stamp embedded in the GUID, which tracks 100 nano-second intervals since 15 Oct 1582...and I got interested in determining to what event that time stamp referred or was associated, specifically on my Windows 7 Ultimate test system.
I purchased a SanDisk Blade thumb drive (it was on sale), as I knew that such a device had never been connected to my system. I then recorded the time and date that I booted system, as well as when I first connected the device to the system. After connecting the device and noting the drive letter to which it was mapped, I disconnected it, waited a couple of hours and reconnected the device. After disconnecting it, I shut the system down. Again, I recorded the significant times in my notes.
Several days later, I copied the setupapi.dev.log file, extracted the System, Software, and NTUSER.DAT hives via FTK Imager, and dumped the System Event Log via LogParser, using the following command:
logparser -i:evt -o:csv "Select RecordNumber,TO_UTCTIME(TimeGenerated),EventID,SourceName,Strings from System" > d:\cases\local\system.csv
Using the appropriate tools, I generated my events file in preparation for creating a timeline. I wrote a specific script to parse the System Event Log entries into the necessary five-field format, and crafted a specific variant of the RegRipper mp2.pl plugin to parse the volume GUIDs based on the UUID v1 time stamps. I used regtime.pl (part of my timeline tools) to parse the Registry hives into the necessary format.
Now, what I didn't do was create a supertimeline from all data sources, as it wasn't necessary to do so. No new Prefetch files had been created during the testing process, and I hadn't created or edited files on the thumb drive, so there really was no need to collect file system metadata. I focused solely on the System Event Log as I understand that a number of event records are created in that log file with respect to drivers and USB devices, as well as a number of system-specific records.
What I was able to determine is that the time stamp within the volume GUID refers to the boot time of the boot session during which the device was connected to the system. For example, I booted my test system at approximately 4:29pm EDT on 4 April 2012 (as indicated by a Microsoft-Windows-Kernel-General record with event ID 12); however, I didn't plug the device into the system until 4:55pm EDT (remember, I disconnected it and then plugged it in again later at 9:02pm EDT). When I parsed the volume GUID for the time stamp and normalized it to 32-bit time format, the time was within a second of the boot time of the system. Throughout the rest of the timeline, I saw the creation of keys beneath the Enum\USB and Enum\USBStor keys within the System hive, as well as several other keys that need to be researched further and possibly included in the USB device analysis process. I was also able to see that there were a number of event records that pertain to the use of USB devices.
Another example where this sort of analysis can be applied is in determining artifacts corresponding to a CD or DVD being burned on a Windows 7 system. The System Event Log will have an event record with a source name of "cdrom" and an event ID of 133, indicating that the CDROM was locked for exclusive use. If the user burned an ISO image, you'll see an indication of the access to the ISO image beneath the user's RecentDocs key, just prior to the ID 133 event. If the user chose to send a selection of files to the CD/DVD drive and to "burn a disk", you'll likely see an isoburn.exe Prefetch file.
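As an illustration, once the System Event Log has been exported to CSV via the LogParser command shown earlier, flagging the ID 133 records is trivial; the column names below are assumptions matching that export:

```python
import csv

def find_cdrom_lock_events(csv_path):
    """Return rows where source 'cdrom' logged event ID 133 (drive locked
    for exclusive use, i.e., a likely burn session)."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row["SourceName"].lower() == "cdrom"
                and row["EventID"] == "133"]
```

The time stamps on any hits then give you the window in which to look for the RecentDocs and Prefetch artifacts described above.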
Again, it's very clear that timelines are extremely valuable, not only for analysis of acquired images, but also for research aimed at developing context and an increased relative confidence in the data and events in which you may be interested.
Thursday, April 05, 2012
New Tools, Registry Findings
Zena Forensics: Recipe - Recipes are cool. I like to cook. Not everyone cooks the same way...if they did, how much fun would that be? Not long ago, on a blog not far away, I ran across a "recipe" in which someone took some of the scripts I wrote for parsing Windows XP/2003 Event Log records and adapted them to Windows Vista+ Windows Event Logs, by tying in MS's LogParser.
LogParser is an extremely useful tool. I recently sought some assistance on the Win4n6 Yahoo group with getting LogParser to output Windows Event Log records' TimeGenerated times in UTC format, rather than the local time of my analysis system. This was an important issue in some testing I was doing recently.
The command I ended up using looked similar to the following:
Logparser -i:evt -o:csv "SELECT RecordNumber,TO_UTCTIME(TimeGenerated),EventID,SourceName,Strings from System" > system.csv
The key element was the TO_UTCTIME() function applied to the TimeGenerated field. Using it, I didn't have to reset my system clock to GMT, add more "moving parts", or do anything else that would have potentially led to errors.
What I really like about this recipe is that the author needed to do something that he may not have been able to do with commercial tools, and he did it. Not only that, but he did it with open source tools and provided the result for others. Great job, and I, for one, greatly appreciate the fact that the author decided to share the script.
RegRipper Plugin Maintenance Script - Recently, Corey Harrell and Cheeky4n6Monkey (why does that name make me think of the episode of "Family Guy" where Chris and Peter made up the "Handi-Quacks" cartoon??) put their heads together and came up with a very interesting tool. In short, the apparent back-story is that Corey mused that "it would be cool" to have a tool that would run through the RegRipper plugins directory, as well as the profiles, and see which plugins were available that had not been included in a profile. Cheeky took this and ran with it, and came up with the RegRipper plugin maintenance script.
Up to this point, per Windows Registry Forensics, you could view a listing of plugins a couple of ways. One is to use rip.pl/.exe at the command line; the following command will print out (to STDOUT) a list of available plugins:
rip.pl -l
If you add the "-c" switch, you can get that listing in .csv format.
The other way to view the plugins is to use the Plugin Browser, which is a graphical tool that lets you browse the available plugins, as well as create your own profile.
As with the recipe described in the first part of this post, I greatly appreciate the effort that went into creating this script, as well as the fact that it was provided to the community at large. You can download the script from Cheeky's site, or you may see it referenced at Brett's RegRipper site. I also think that this is a great benefit to the community, as I'm sure that there are folks out there who didn't even know that they needed a script like this, but will end up finding it to be extremely valuable.
Registry Findings
A while back, I wrote some code to parse Windows 7 Jump Lists, and as part of my research for that project, I ran into volume GUIDs (within the LNK format TrackerData block), which are based on the UUID v1 format specification, detailed in RFC 4122. Two of the pieces of information of interest that can be parsed from the GUID are a time stamp, and a MAC address.
I recently caught something online that indicated to me that there might be other opportunities to make use of the work that I'd already done. Specifically, some of the values beneath the MountedDevices key (those that begin with "\??\Volume"), as well as some of the subkeys beneath a user's MountPoints2 key, are volume GUIDs maintained in the UUID v1 format. We already know that these two pieces of information are used to help us map the use of USB devices on systems, so how does this initial information about the GUID format help us?
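For reference, pulling the GUID out of one of these value or key names is a one-line regular expression; the sample name in the test below is hypothetical, but follows the "\??\Volume{GUID}" form:

```python
import re

# Matches names such as \??\Volume{<GUID>} from the MountedDevices key
VOLUME_GUID_RE = re.compile(
    r"Volume\{([0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})\}")

def guid_from_name(name):
    """Return the GUID string embedded in a volume value/key name, or None."""
    m = VOLUME_GUID_RE.search(name)
    return m.group(1) if m else None
```

The extracted GUID can then be handed to a UUID v1 parser, and the node bytes compared against known vendor OUIs (VMWare's 00:50:56 is one example) to spot interfaces belonging to virtualization software rather than physical hardware.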
The usefulness of the MAC address should be somewhat intuitive. Perhaps more than anything else, what this means is that the MAC address is, in fact, stored in the Registry...not so much "stored" in the sense that there's a value named "MAC address", but more so in the sense that it's there. Testing using a live system corroborates the fact that a MAC address is used; I say "a" because my testing system is a laptop, and has a number of IPv4 interfaces (LAN, WLAN, VirtualBox). Interestingly, one of the MAC addresses that appeared in several GUIDs was from VMWare, which had been installed at one point on my system (I removed it). Since entries in the UserAssist key showed when I had installed VMPlayer (i.e., launched the installer), I had a time frame for when the device(s) could have been used.
Note: Over on the Girl, Unallocated blog, Case Experience #2.4 illustrates some good examples of the volume GUIDs, from both the MountedDevices key, and the user's MountPoints2 key.
In a couple of cases, I found devices...specifically, the laptop's built-in hard drive and DVD/optical drive...where the node identifier didn't correlate to a MAC address from my system. This was an interesting finding, but not something that really needs to be run down at the moment.
What this demonstrates is that the MAC address of a system is, in fact, recorded in the Registry, albeit not necessarily as a value named "MAC address".
More research is required regarding the time stamp.
Addendum, 20120410: Testing indicates that the time stamp points to the boot time for the boot session during which the device was connected.