The Windows Incident Response Blog is dedicated to the myriad of information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Wednesday, September 28, 2011
NoVA Forensics Meetup Reminder
Just a quick reminder that the next NoVA Forensics Meetup will be Wed, 5 Oct 2011. Time and location remain the same. We're planning to have a presentation on mobile forensics.
Friday, September 23, 2011
Friday Stuff
ADSs
I've been fascinated by NTFS alternate data streams for almost 14 years now, and I caught MHL's recent blog post on detecting stealth ADSs with TSK tools. The idea behind a "stealth ADS" came from this Exploit Monday post, and both posts were very interesting reads.
ADSs are one of those NTFS artifacts that many folks (DF analysts, admins, etc.) don't really know a whole lot about, and I'm not sure why. I guess it's a chicken-or-the-egg issue; how do you know that there aren't any ADSs on your systems if you're not looking for them? If you don't look for them, why do you then need to know about them...right? I remember about 11 years ago, Benny and Ratter of the group 29A wrote Win32/Stream, mostly as a proof of concept.
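If you've never looked at ADSs, it's easy to see one in action on a test system (any NTFS volume); the file and stream names below are just examples:

D:\test>echo this is a test > file.txt
D:\test>echo this is hidden > file.txt:hidden.txt
D:\test>dir /r file.txt

On Vista and later systems, "dir /r" will display any alternate streams attached to a file; on XP, you'd need a tool such as the SysInternals streams.exe.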
F-Response
If you haven't heard of Matthew Shannon's F-Response, I'd really have to question where you've been. F-Response is one of those tools that have really pushed incident response work ahead by leaps and bounds. Using F-Response, you can reach out to systems on another floor, in another building, or even in another city, and make a read-only connection to the physical disk, and from there, run tools to search for specific items, collect specific files, or even conduct a physical or logical acquisition. With Windows systems, you can even collect the contents of physical memory.
Matt added the FlexScript scripting capability to F-Response and, using PowerShell, recently demonstrated how to use F-Response to automate large collections. As always, Matt includes a video so you can see what he did, in addition to providing the scripts along with the F-Response Mission Guides.
This adds a whole new dimension to an already-valuable tool; being able to automate large-scale collections is a powerful capability. If an incident occurs, an organization can use this capability to quickly connect to systems and either collect specific data or conduct acquisitions.
Live Forensics
Speaking of Matt Shannon, thanks to his Twitter account, I was recently directed to this paper, which is intended to dispel some of the myths of live digital forensics. The paper is just 5 pages long, so I printed it out so I could read it...and found it very interesting. The paper essentially addresses (and shoots down) three common myths that are encountered within the digital forensics community regarding live forensics, and does so only with respect to the admissibility of "live" digital evidence in a US court of law. I can see how this distinction is important for the paper, particularly in driving its point home. Additional discussion in dispelling the myths would extend the length of the paper unnecessarily, and potentially make the argument a bit murky. In short, each of the myths is addressed with "...the Court makes no requirement...".
This is similar to conversations I've had with Chris Pogue, during which we've discussed "court certified" tools; this is something we've both heard, and the long and short of the discussion is that there is no such thing, regardless of what folks (including marketing staff) may choose to believe or say.
Volatility
Here's a post on the malwarereversing blog that discusses (and provides) the vscan.py plugin for Volatility 2.0, which allows you to submit malicious stuff you've found in a Windows memory dump to an online AV scanning site (the post uses Jotti).
The blog post also mentions MHL's avsubmit.py plugin, which allows for the submission of stuff you've found to VirusTotal.
Tools
I ran across the CERT Linux Forensics Tools Repository recently; very cool. Not only are some of my tools posted there (i.e., RegRipper, tln_tools) but many of the ones listed also run on Winderz!
Mark Woan recently updated JumpLister to include parsing of DestList streams, as well as looking up AppIDs. It appears from the JumpLister web page that the DestList parsing capability was added based on the information available in the ForensicsWiki, which really shows how useful and powerful a resource the ForensicsWiki can be. Mark's application downloads as part of an installer package, and it only runs on Windows. The installer adds 11 files to your system, and when you run it, you can load one autodest JumpList at a time. The tool did a great job of parsing the DestList stream on the few files I loaded. Mark mentioned in the Win4n6 Yahoo group that he changed the functionality of the tool, so that instead of loading the entire compound file, it first parses the DestList stream, and then looks for the numbered streams identified in the DestList stream. Jimmy Weg reports that XWays now supports parsing autodest JumpLists, including the DestList streams.
I hope that the information in the ForensicsWiki, along with the number of available tools (free and otherwise) that support parsing these artifacts, will push folks to start looking at these files as a valuable forensic resource. Since I started posting about Jump List analysis, I've created my own code for parsing these files, including not only the compound files that the autodest Jump Lists are stored in, but also the LNK streams and the DestList stream. This code allows me a great deal of flexibility, not only to troubleshoot issues with "misbehaving" Jump List files, but also to modify the output into any format I desire (CSV, TLN), either to analyze separately or include in a timeline. I've seen the value of Jump Lists in forensic analysis, and I hope others begin to parse these files and include them in their analysis.
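For reference, the TLN format mentioned above is simply five pipe-delimited fields...time (as a 32-bit Unix epoch value), source, system, user, and description. A made-up Jump List entry might look like:

1317225600|JumpList|SERVER|jdoe|C:\Users\jdoe\Documents\report.docx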
AutoRuns Update
AutoRuns has been updated to version 11, to include a "jump to folder" capability, as well as several new autostart locations. I haven't gone through all of them yet, but this looks very promising.
Speaking of autostart mechanisms, Martin Pillion recently posted to the HBGary blog regarding malware's use of Local Group Policy to maintain persistence on a system. I found the blog post fascinating (it's always interesting to see stuff you've seen talked about before), albeit a bit hard to follow in some places; for example, just below figure 4, the second sentence states:
We do this by adding the following line in the section:
What section? I see the "following line", but for the casual reader (or perhaps someone not quite as knowledgeable in this area), this can be confusing. Overall, however, this doesn't really detract from the discussion of the persistence mechanism. I mention it here (rather than in its own section) because, according to the blog post, the new version of AutoRuns does NOT detect this persistence mechanism.
Several other MS/SysInternals tools have been updated, as well, including ProcDump and Process Explorer.
DFF
DFF RC 1.2 is available.
Monday, September 19, 2011
Links and Updates
iTunes Forensic Analysis
I ran across a very interesting read regarding the forensic analysis of an iTunes installation via DFINews. One of the things I see consistently within the community is that folks really want to see how someone else has done something, how they've gone about conducting an exam or investigation, and this is a good example of that.
Volatility Updates
Keep your eye on the Volatility site for updates that include support for Windows 8, thanks to the efforts of @iMHLv2, @gleeda, @moyix, and @attrc.
Speaking of Volatility, the folks at p4r4ni0d take a look at Morto. Great work, using a great tool set. If you want to see how others are using Volatility, take a look at the blog post.
NetworkMiner
A new version of NetworkMiner has been released. If your work involves pcap capture and analysis, this is one tool that I'd definitely recommend that you have in your kit.
Registry
Andrew Case (@attrc) put together a very good paper (blog post here) on how he went about recovering and analyzing deleted Registry hives. Now, this is not recovering deleted keys from within hive files...Andrew recovered entire hive files from unallocated space after (per his paper) the system had been formatted and the operating system reinstalled. Take a look at the process he went through to do this...this may be something that you'd want to incorporate into your tool kit.
If you've read Windows Registry Forensics, you'll understand Andrew's approach; Registry hive files (including those from Windows 8) start with 'regf' in the first 4 bytes of the file. The hive files are broken into 4K (4096-byte) pages, with the first one beginning with 'regf'; the subsequent pages start with 'hbin'.
I've done something similar with respect to Windows XP Event Logs, carving specifically for individual records rather than entire Event Log (.evt) files. In much the same way, Andrew looked at the goals of his examination, and then used the tools he had available to accomplish those goals. Notice that in the paper, he didn't discuss re-assembling every possible hive file, but instead only those that might contain the data/information of interest to his examination. Nor did he attempt to carve every possible file type using scalpel; he only went after the types of files that he thought were necessary.
When I wrote my event record carving tool, I had the benefit of knowing that each record contains the record size as part of its metadata; Andrew opted to grab 25MB of contiguous data from the identified offset, and from his paper, he appears to have been successful.
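To illustrate the basic approach (this is just a sketch of the general technique, not Andrew's code...and it assumes hives begin on sector boundaries, which won't always be the case), the following Perl scans a blob of data for 'regf' signatures and writes out 25MB from each hit:

#! c:\perl\bin\perl.exe
# Sketch: scan a raw data file for 'regf' signatures at sector
# boundaries and write 25MB from each hit to a numbered file.
use strict;
my $file = shift || die "You must enter a filename.\n";
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
my $offset = 0;
my $count  = 0;
my $buf;
while (read(FH,$buf,4)) {
	if ($buf eq "regf") {
		print "Possible hive header found at offset ".$offset."\n";
		seek(FH,$offset,0);
		my $data;
		read(FH,$data,25 * 1024 * 1024);
		open(OUT,">","hive_".$count.".bin") || die "Could not open output file: $!\n";
		binmode(OUT);
		print OUT $data;
		close(OUT);
		$count++;
	}
	$offset += 512;
	seek(FH,$offset,0);
}
close(FH);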
Also, page 4 includes a statement that is extremely important; "This is necessary as USBSTOR keys timestamps are not always reliable." As you're reading through the paper, you'll notice that Andrew focused on the USBStor keys in order to identify the devices he was looking for, but as you'll note from other sources (WRF, as well as Rob Lee's documentation), the time stamps on the USBStor keys are NOT a reliable indicator of when a USB device was last connected to (or inserted into) a system. This is extremely important, and I believe very often confused.
More importantly, I think that Andrew deserves a great big "thanks" for posting his process so clearly and concisely. This isn't something that we see in the DFIR community very often...I can only think of a few folks who do this work who've stepped up to share this sort of information. Clearly, this is a huge benefit to the community, as I would think that there will be folks reading his paper who will think to themselves, "wow, I could've used that on that examination!", just as others will likely be using it before 2011 closes out. Notice that there's nothing in the write-up that specifically identifies a customer or really gives away any case-specific data.
Andrew's paper is an excellent contribution to the community, and provides an excellent look at a thorough process for approaching and solving a problem using what you, as an examiner, have available to you. Something else to consider would be to look for remnants of the (for Windows XP) setupapi.log file, which would provide an indication of devices that had been connected (plugged in, attached, inserted into) to the system. I've done something similar with both the pagefile and unallocated space...knowing what I was looking for, I used Perl to locate indications of the specific artifacts, and then grab X number of bytes on either side of that artifact. As an example, you could use the following entry from a setupapi.log file:
#I121 Device install of
Now, search for all instances of the above string, and then grab 200 or more bytes on either side of that offset and write it to a file. This could provide some very useful information, as well.
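A minimal sketch of what that might look like in Perl (the search string and context size are taken from above; for a very large source such as a pagefile or unallocated space, you'd want to read in chunks rather than slurping the whole file into memory):

#! c:\perl\bin\perl.exe
# Sketch: locate a string and print 200 bytes of context on either side.
use strict;
my $str  = "#I121 Device install of";
my $file = shift || die "Filename required.\n";
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
my $data = do {local $/; <FH>};
close(FH);
my $pos = 0;
while (($pos = index($data,$str,$pos)) > -1) {
	my $start = ($pos < 200) ? 0 : $pos - 200;
	print "Hit at offset ".$pos."\n";
	print substr($data,$start,400 + length($str))."\n\n";
	$pos++;
}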
Timelines
Corey has an excellent post up regarding the thought processes behind putting a timeline together. I'd posted recently on how to go about creating mini-timelines from a subset of data; in his post, Corey discusses the thought process that he goes through when creating timelines, in general...he also provides an example. If you look at the "Things to consider" section of his post, you'll notice some similarity to stuff I've written, as well as to Chris Pogue's Sniper Forensics presentations; in particular, the focus on the goals of the examination.
In his post, Corey mentions two approaches to timelines; the "kitchen sink" (including everything you can, and then performing analysis) and the "minimalist" approach. From my perspective, the minimalist approach is very similar to what Corey describes in his post; you can add data sources to a timeline via a "layering" approach, in that you can start with specific data sets (file system metadata, Event Logs, Prefetch file metadata, Registry metadata, Jump Lists, etc.), and then as you begin to develop a more solid picture, add successive layers, or even just specific items, to your timeline. The modular approach to the tools I use and have made available makes this approach (as well as creating mini-timelines) extremely easy. For example, during an examination involving a SQL injection attack, I put together a timeline using just file system metadata and pertinent web server log entries. For an incident involving user activity on a system, I would create a timeline using file system metadata, Registry key LastWrite times (as well as specific entries, such as UserAssist data), Event Log entries, Prefetch file metadata, and if the involved system were Windows 7, Jump List metadata (including parsing the DestList stream and sorting the entries in MFU/MRU order). In a malware detection case, I may not initially be interested in the contents of any user's web surfing activity, with the exception of the LocalService or "Default User" user accounts.
This is not to say that one way is better or more correct than another; rather, the approach used really depends upon the needs of the examination, skill set of the analyst, etc. I've simply found, through my own experience, that adding everything available to a timeline and then sorting things out doesn't provide me with the level of data reduction I'm looking for, whereas a more targeted approach allows me to keep focused on the goals of the examination.
Friday, September 16, 2011
Links...and whatnot
How'd you do that??
One thing I've found to be very true about the community is that folks love to see how other analysts have done things. This is very helpful to know when it comes to writing articles or giving presentations.
Frank Boldewin recently posted CSI:Internet Episode 3: A trip into RAM, which provides an excellent walk-through on how he collected the contents of physical memory from a live Windows system, and then used Volatility (including the malfind, volshell, apihooks plugins) to locate malware. Frank's article is well worth a look, as it is an excellent read.
Advice
Need advice or input on getting started in DFIR work? Corey recently posted links to various articles and posts (including my own), and provided some considerable (and excellent) advice of his own. Even if you're already in the field, this is an excellent source of advice.
HowTos
I posted a quick-and-dirty blog post recently on how to create mini-timelines, and received a comment asking for more of these types of posts. I've considered writing "HowTo" posts in the past, but quickly found myself running short on topics. I'm considering posting more of these, but as I said, I'm running a bit short on topics.
Windows 8
I recently installed the available developer build of Windows 8 into VirtualBox (running on 64-bit Windows 7) using these instructions. So far, so good. During the setup, I opted to use the .vhd disk format (rather than the VirtualBox .vdi, or .vmdk) so that I could later add the .vhd file to a Windows system to see what things look like. I installed the OS, poked around a bit, and then shut the VM down and opened the .vhd file in FTK Imager. The Registry hives that I looked at (NTUSER.DAT) appear to follow the same format as previous versions; as Windows 8 is running in a VM, I won't be able to see things like wireless connectivity, etc. It also appears that Windows 8 uses Jump Lists (good thing I wrote that code to parse those bad boys, eh?); I'll definitely have to take a closer look at them, that's for sure. Looking at the Jump List files in the FTK Imager hex view, I see the file signature for the OLE/compound document binary format file, as well as the "Root Entry" and "DestList" stream names.
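As a quick check, the OLE/compound binary signature is the 8-byte sequence d0 cf 11 e0 a1 b1 1a e1; here's a minimal Perl sketch to verify a Jump List file:

#! c:\perl\bin\perl.exe
# Sketch: check the first 8 bytes of a file against the OLE/compound
# binary signature (d0 cf 11 e0 a1 b1 1a e1).
use strict;
my $file = shift || die "Filename required.\n";
open(FH,"<",$file) || die "Could not open $file: $!\n";
binmode(FH);
my $buf;
read(FH,$buf,8);
close(FH);
my $sig = pack("C8",0xd0,0xcf,0x11,0xe0,0xa1,0xb1,0x1a,0xe1);
print $file.(($buf eq $sig) ? " is" : " is NOT")." an OLE/compound binary format file.\n";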
From the TwitterVerse, it seems that I'm not the only one moving along these lines...moyix has taken the first steps toward adding Win8 support to Volatility (see it working here).
APT
I know, I know...no one wants to hear about the "Advanced Persistent Threat" anymore. However, it appears that there was an APT Summit in DC this past summer, and RSA recently published an overview document of the findings from the summit. The PDF doc is 3 pages long, and a pretty interesting read.
Windows Post-Exploitation
Thanks to Chad Tilbury, I was directed to this page (at pentestmonkey.net) which discusses various means of getting from Local Admin to Domain Admin once a system has been compromised. Looking for artifacts of these approaches can provide indications of what the intruder may have been up to.
Thursday, September 15, 2011
NoVA Forensics Meetup Group
Based on some advice from a friend, I set up a NoVA 4n6 Yahoo Group. I've updated the blog page with the information, but will be posting information about location, meeting times, etc., to this group. This will also provide us with a place for folks to upload files (i.e., presentations, etc.), ask questions, continue discussions, etc.
Also, I've received comments from folks who've indicated that it's far too difficult to find information regarding the meetups, so I wanted to put everything in one place...or one more place...because what we want to do is grow the meetup group, not make it a rite of passage just trying to find the place.
Thanks.
HowTo: Mount and Access VSCs
I've posted before regarding how to mount and access Volume Shadow Copies (VSCs), but I thought it might be useful to revisit this topic, as there's a great deal that you can do once you've mounted a VSC.
If you received/have an image acquired from a Vista or Win7 system, you'll likely want to mount the image and access data within the available VSCs at some point. Commercial tools such as ProDiscover provide access to the VSCs within an image (PDF here), but how can you access this source of data in a more economical fashion?
Well, there are a couple of ways to go about this, both of which require that you're using a version of Windows that supports VSCs, such as Windows 2008 or Windows 7.
VMDK Method
Starting with your image, download a copy of either raw2vmdk or LiveView and create a VMWare virtual disk (.vmdk) file for the image (I say "for" because the .vmdk file will most likely contain a reference to the image file). Once you've done this, you can add this .vmdk file as an additional hard drive to a VMWare virtual machine (VM), and then boot that VM. You can add a .vmdk file as an additional hard drive via VMPlayer, but if you have VMWare Workstation, you can add the .vmdk file as an independent, non-persistent disk, which means that no changes are made to the .vmdk file.
Note: You should always work on a copy of an image, not the original image file itself.
As a test, I opened VMPlayer running on a Windows 7 64-bit host system and selected a 32-bit Windows 2008 guest VM. I added a .vmdk file from a 32-bit Windows 7 guest VM to the Win2008 VM as an additional hard drive, and booted the Win2008 VM. Once I logged in, I was able to list the available VSCs from the Windows 7 .vmdk file (mounted as the E:\ volume) using the command vssadmin list shadows /for=e:. From that point, it was simply a matter of using the mklink command to mount a VSC.
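For example, to mount the shadow copy numbered 15 in the vssadmin output (note that the trailing backslash is required):

C:\>mklink /d C:\vsc15 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy15\

C:\vsc15 then appears as a directory containing the file system within that VSC; when you're done, remove the link with "rmdir C:\vsc15".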
VHD Method
To use this method, download a copy of vhdtool, and use it to convert the image to a VHD file (i.e., vhdtool /convert). The tool adds a VHD footer to the image file, so the extension of the image file won't change automatically, although that's not needed in order to mount the VHD file (you can change the extension manually, if you like). You can then use the Disk Management tool to add the VHD file to a Windows 2008 or Windows 7 system as a read-only disk.
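For example (the image name and paths are illustrative; diskpart's "attach vdisk readonly" is a command-line alternative to the Disk Management GUI, and renaming the file to a .vhd extension avoids any complaints about the file type):

D:\case>vhdtool /convert image.dd
D:\case>ren image.dd image.vhd
C:\>diskpart
DISKPART> select vdisk file=D:\case\image.vhd
DISKPART> attach vdisk readonly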
What now?
Once you've mounted the image file, you can list the available VSCs using the vssadmin command, and even create a batch file that will mount each VSC using the mklink command, run various tools on the mounted VSC (i.e., rip.pl/.exe, LogParser, etc.), and then unmount each VSC using the rmdir or rd command.
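A minimal sketch of such a batch file, assuming the VSCs are numbered 1 through 25 (check the vssadmin output for the actual numbers) and that rip.exe lives in C:\tools; the plugin is just an example:

@echo off
REM mount each VSC, run a RegRipper plugin against it, then remove the link
for /L %%i in (1,1,25) do (
  mklink /d C:\vsc%%i \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy%%i\
  C:\tools\rip.exe -r C:\vsc%%i\Windows\system32\config\software -p compname
  rmdir C:\vsc%%i
)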
I've used this method to cycle through the VSCs within an image from a Vista system to extract information from a user's UserAssist key using the userassist_tln.pl RegRipper plugin (via rip.pl), in order to determine not only the last time that the user launched an application, but previous times, as well.
Resources
This section provides links to blog posts from other analysts to demonstrate what they've done while having access to VSCs...
- Stacey Edwards' SANS Forensic Blog post on using LogParser against VSCs
- Corey's "A Little Help with VSCs" post
- SANS Forensics Blog post (using TSK tools)
Wednesday, September 14, 2011
HowTo: File Extension Analysis
Subtitle: Determining which application a file "belongs" to
Many times when I am browsing through online lists and forums, I see questions geared along this avenue; an analyst finds a file with a specific extension, and wants to know which application uses it or may have been used to modify that file. Most times, this is just a small part of a much larger question, and initial attempts to answer the question via Google searches may have led to additional confusion (specified application does not appear to be installed on the system, etc.). However, there are things that an analyst can do to answer that question using the data currently available, within the collected image.
File Extension Analysis
So you have a file that you're interested in, along with a path, name, and extension, and you want to know which application may have been used to create or modify that document. One way we can go about this is to use Registry analysis. Within the acquired image, locate the Software hive (usually in the path "\Windows\system32\config"), and within that hive, look to the Classes key. Many of the first subkeys that you'll see beneath this key are file extensions, such as ".3g2". The "(Default)" value of this key is "QuickTime.3g2", which indicates that this system will attempt to open a file with this extension using the QuickTime application. Additionally, the "OpenWithList" subkey includes a subkey named "QuickTimePlayer.exe". Locating the key "Classes\QuickTime.3g2", I saw that that key had a "shell\open\command" subkey with a "(Default)" value that pointed to QuickTimePlayer.exe (along with the complete path to that file).
As another example, beneath the "Classes\.aa" key, the "OpenWithList" subkey contains a subkey named "iTunes.exe", which indicates that the iTunes application will be used to open a file that ends in the ".aa" extension. Some extensions may have multiple subkeys beneath the "OpenWithList" key, which serves as an indicator to the type of file with which the extension is associated.
Other keys beneath the "Classes" key may have different information that may indicate how the file had been accessed or used on the system. On a system I was looking at, I found the ".rnk" extension, and the key only had a "(Default)" value with "rnkfile". I then located the "Classes\rnkfile" key, which had a "shell" subkey, with additional subkeys that referred to different commands. When I went to the command line on that system and typed "assoc rnkfile", the response was "rnkfile=Dial-Up Shortcut".
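If you'd rather query a hive exported from an image than the live system, here's a minimal Perl sketch using Parse::Win32Registry (the same module RegRipper is built on); the extension is just an example:

#! c:\perl\bin\perl.exe
# Sketch: look up the file association for an extension in a Software
# hive exported from an image; hive path and extension are examples.
use strict;
use Parse::Win32Registry;
my $hive = shift || die "Software hive file required.\n";
my $ext  = shift || ".3g2";
my $reg  = Parse::Win32Registry->new($hive) || die "Not a valid hive file.\n";
my $root = $reg->get_root_key();
if (my $key = $root->get_subkey("Classes\\".$ext)) {
	if (my $val = $key->get_value("")) {
		print $ext." => ".$val->get_data()."\n";
	}
	if (my $owl = $key->get_subkey("OpenWithList")) {
		print "OpenWithList: ".$_->get_name()."\n" foreach ($owl->get_list_of_subkeys());
	}
}
else {
	print "No Classes subkey found for ".$ext."\n";
}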
As this technique is based on Registry analysis, analysts need to keep in mind that it may often be unique to the system being analyzed, and findings on one system may not necessarily map directly to or represent those on another system. Also, these artifacts are based on file associations, which many times will be set when an application is installed, during the installation process. As such, when the application is uninstalled, those associations may be removed.
As this technique involves Registry analysis, there are other areas you can check, as well. For example, each user hive (XP) has a "Software\Classes" key within the NTUSER.DAT hive that may contain file associations specific to the user. On Vista and above systems, this information will be located in the root of the USRCLASS.DAT hive. You can also look to the RecentDocs key within the NTUSER.DAT hive to see which files the user has accessed, by extension. Also, if you suspect that someone may have purposely deleted any of the keys or values of interest, be sure to use regslack to check the unallocated space within the hive files for those artifacts.
If you have a file name (as opposed to just an extension) you might open up the user's hives in something like MiTeC's Windows Registry Recovery tool or the Registry Decoder from DFS, and search for the file name...you may find a reference in the application MRU listing.
Jump Lists
Jump Lists are artifacts that are new to Windows 7, and appear to contain most frequently used or most recently used (MFU/MRU) information with respect to applications and files. The *.automaticDestinations-ms Jump List files are created by the operating system, with the only interaction from the user being to open a file. However, testing so far indicates that Jump Lists created as a result of an application being used will persist after the application itself has been removed or uninstalled from a system. Therefore, an analyst with a specific file extension of interest should be sure to check the available Jump Lists (assuming that the image is from a Windows 7 system, of course) for indications of the extension or the complete file name. From there, the analyst can then map the AppID (the first part of the Jump List name, before the '.') to the application, using the list on the ForensicsWiki, or on ForensicArtifacts.com.
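As a quick sketch, the following Perl lists the Jump List files in a user's AutomaticDestinations folder and prints the AppID portion of each file name, which you can then look up:

#! c:\perl\bin\perl.exe
# Sketch: list Jump List files and their AppIDs; point this at, e.g.,
# C:\Users\<user>\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations
use strict;
my $dir = shift || die "Path to the AutomaticDestinations folder required.\n";
opendir(DIR,$dir) || die "Could not open $dir: $!\n";
foreach my $f (sort grep(/\.automaticDestinations-ms$/i,readdir(DIR))) {
	my ($appid) = split(/\./,$f,2);
	print $appid."  =>  ".$f."\n";
}
closedir(DIR);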
Timeline Analysis
When presenting on timeline analysis, one of the benefits of this analysis technique that I try to get across is that it can provide context to what we're looking at; for example, creating a timeline from multiple data sources (including data from the user profile) may provide clear indications as to how a file with a specific extension was created or modified. Timelines very often include (if available) file system metadata, Prefetch file metadata, as well as time stamped data from a user's NTUSER.DAT, including (but not limited to) UserAssist data, RecentDocs data, etc. Through a timeline, you may find that a user opened an application, and shortly thereafter a Prefetch file was created or modified for that application, and then the file in question was created or modified. At this point, you'd not only know when the file was created, but also with which application, and by which user.
VSCs
Volume Shadow Copies (VSCs) may provide some considerable information that may not be available via other sources. If an artifact does not persist when an application has been uninstalled from a system (such as may be the case with file extension associations), there may be historic remnants available in the VSCs (Vista, Windows 7).
Resources
Jump List Analysis (part 1, part 2, part 3)
Monday, September 12, 2011
HowTo: Creating Mini-Timelines
There are times when you don't want (or need) a super timeline, but instead just want to focus on one piece of available data, such as Event Log entries or Registry key LastWrite times. I've had occasion to focus on just specific entries in the Security Event Logs; specifically, event ID 528, type 10, indicating RDP logins to a system. I used one of the timeline tools I wrote, evtparse.pl, to parse the appropriate records from the Security Event Log and then create a timeline from just those records.
So, let's say that you have something specific that you want to look for, such as all Registry keys that were created or modified between two specific dates. You'd want to start by either extracting the appropriate hives from the acquired image via FTK Imager, or using FTK Imager to mount the acquired image as a volume on your analysis system.
For the next steps, go here and download the tln_tools.zip archive...do NOT download regtime.zip for this exercise. From the tln_tools.zip archive, we will be working specifically with the regtime.pl and parse.pl tools (note that regtime also ships with a standalone EXE...you must have the p2x588.dll file in the same directory along with the EXE).
The first thing you'll need to do is create your events file of the Registry key LastWrite times. One thing you'll need is the name of the system you're analyzing. This can be something that's already in your case documentation; however, if you don't have that information, you can either enter a designator, or leave it blank...for what we're doing, it isn't critical. If you have RegRipper installed, this is very easy to get, using the following command:
C:\rr>rip -r H:\Windows\system32\config\system -p compname
You can then use the returned information in your mini-timeline in place of the "SERVER" value in the commands below.
Next, we'll parse the Software and System hives (assume that the image is mounted as H:\):
C:\tools>regtime -r H:\Windows\system32\config\system -m HKLM/System -s SERVER > D:\case\key_events.txt
C:\tools>regtime -r H:\Windows\system32\config\software -m HKLM/Software -s SERVER >> D:\case\key_events.txt
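As a point of reference, each line in the resulting events file should follow the five-field, pipe-delimited TLN format (time|source|system|user|description); the line below is illustrative, not captured from an actual run:

1299848381|REG|SERVER||M... HKLM/System/ControlSet001/Services/lanmanserver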
Now that we have the events file, we can use parse.pl to generate our timeline. If you type just "parse.pl" at the command prompt (or "parse.pl -h"), you'll see that the script has a couple of options, one of which is to specify a date range. Let's say that you want all events from your events file, between 3 March and 4 April 2011, inclusive. You would use the following command:
C:\tools>parse.pl -f D:\case\key_events.txt -r 03/03/2011-04/04/2011 > D:\case\key_tln.txt
This command provides an ASCII output format that I've always found very easy to view and understand. If you would like .csv output, which Excel is much happier with, type the following command (note the "-c" switch):
C:\tools>parse.pl -f D:\case\key_events.txt -r 03/03/2011-04/04/2011 -c > D:\case\key_tln.csv
There you go...that's it. You can also add other hives to your events file, even NTUSER.DAT hives (adding the username after the "-u" switch can help you tell different users apart).
This blog post has been brought to you by the open source tool, "regtime.pl", and the redirection operator ">".
Friday, September 09, 2011
Growing the NoVA Forensics Meetup
I received an email today from Tim/@bug_bear asking about the format that we use for the NoVA Forensic Meetups, as he may be looking at starting something in his area. In responding, I started thinking about what we currently do, and whether or not we're "serving" our members. What I'm looking at is what we can do to get folks interested in attending and interacting, more so than we have now.
One of the things I've noticed about the meetings is that we have a very small core of regulars...folks who can make it out on a regular basis. We do have folks who happen to be in the area for another event and stop by, which is very cool...this last meetup, we had some folks from a defense contractor, one of whom is a former Marine.
So I thought I'd share some thoughts I had on expanding the meetups, and see if we can't get some feedback or additional thoughts and comments from others on what we can do to expand, so that we're not just doing the same thing every time.
I'd like to see something more than just presentations; our presentations have been very good, and I greatly appreciate everyone who has stepped up (and will step up) to give a presentation. But I'd also like to see us get a little more interactive and offer some additional value to our members. To that end, I'd like to get some input from the members.
Some other ideas I've had, in part from my exchange with Tim, include:
Collaborative projects - Tim mentioned this, and I think that the idea has some very good possibilities. One of the aspects of ReverseSpace that our hosts remind us of is that they have something of a network infrastructure themselves. However, this doesn't have to be the sole avenue for collaborative projects; all it takes is the desire. Thoughts? Ideas?
Wiki - this is also something that Tim brought up that I thought might be interesting. Taking nothing at all away from the ForensicsWiki, there are resources such as WikiSpaces available, as well.
Mentoring - our membership includes a number of folks who are interested in forensics, but perhaps don't "do" it, or don't do it on a regular basis. We also have members who have other backgrounds...network and system admins, IDS analysts, etc. We've had folks attend who do online investigations, as well as LE at various levels (local, state, federal). There are folks who specialize in or just work with Mac, Linux or Windows systems, as well as mobile devices. What I like about all this is that we have folks from a range of backgrounds who are willing to answer questions. "Mentoring" doesn't have to be anything more than someone is willing to provide.
Lightning talks - I've thought about this before; instead of having one 45-50 minute presentation, have two (or more) shorter ones, each covering a very specific, limited topic. Speaking of which, the ReverseSpace (our hosts) folks have hosted DojoCon at their location; what would be the interest in a ForensiCon? I've noticed online that there are a number of conferences that have moved to a combination of talks, lightning talks, and even panels, and have been very successful. We may be able to do something like this on a Saturday during the messy winter weather, but it would really depend on what sort of attendance we could get.
Logo - Would it be cool if we had a logo? If we have folks who want to design a logo that our membership could vote on, I'd put up either a signed copy of DFwOST (signed by both authors) or of WRF to whoever comes up with the winning design.
I'd like to get input from our membership, as well as from anyone else who has some thoughts along these lines.
Something else I will be doing going forward is sending out reminders via other media besides just this blog; I'll be pushing more reminders out via the Win4n6 group, Twitter, LinkedIn, Facebook, etc. Also, I'll be sure to bring these topics up during the admin/intro portion of our next meetup.
Updates and Links
NoVA Forensic Meetup
The most recent NoVA Forensics Meetup was a great time! Mitch Harris gave a great "Botnets 101" presentation and opened the door for 201. Mitch described botnets and their command-and-control (C2) structures, and is leaving mitigation techniques for his follow-on presentation. A huge thanks to Mitch for presenting, and everyone for showing up, especially those who came by because they were in the area. We're looking forward to having Mitch come back for that follow-on presentation in the future.
BIOS Malware
Speaking of Mitch's presentation, he also mentioned malware that infects systems by writing to the BIOS. Oddly enough, I ran across Mebromi this morning, which Norman describes as a "BIOS-flashing Trojan".
An excellent point brought up in the writeup is also something that we discussed during the meetup: the reason we're not all infected with malware that writes to the BIOS (or to the GPU on our graphics card, etc.) is that this sort of malware is "hard to do", because it's very hardware-specific. In fact, the writeup indicates that the Trojan attempts to modify Award BIOSes only. Mebromi apparently infects the MBR as well.
Here is Symantec's writeup on Mebromi.
VirusTotal
Okay, this is the last thing I'm going to say about malware in this post...seriously. I ran across this ComputerWorld article this morning, which mentioned that the same spearphish attack code used against RSA had also been used against other organizations; in fact, the first sample of that code had actually been submitted on 4 March, whereas (according to the article) it wasn't until 19 March that a sample was submitted from someone at EMC (the company that owns RSA).
Folks, what this tells us is that those tools that we use to quickly gather intelligence about stuff we find on our systems can then be used against us. In spearphishing attacks, you can be sure that the attackers know exactly to whom the emails were sent...that's sort of the nature and definition of a spearphishing attack, and is also why we don't simply call it "random-spray-and-pray". Remember, "public" websites are usually exactly that...public. And available to everyone. This is why you might want to develop an organic, in-house malware analysis capability.
Also, there was apparently some metadata analysis of the actual spreadsheets that had been submitted, as well...take a look at the article and see if you agree with what was said about that...
Jump Lists
The new 4n6k blog has a post up that extends Jump List Analysis by adding a whole bunch of AppIDs. I posted recently regarding using timeline analysis to fill in the gaps in analysis when you're either attempting to determine the app associated with an unknown AppID, or if the user had deleted the application itself prior to the acquisition of the system.
Community
One of the comments to the blog post that I mentioned above was along the lines of, hey, locating indications of apps being run has been discussed before...my thought along those lines is, okay, but do we do it enough? Seriously. How often do we really share analysis techniques, or findings? Something may have been discussed before...but where, and with whom?
What would have happened within the community if no one took USB device analysis any further after the first research was published? What if Rob Lee had never decided to take what was published a step or two further? What if the first iterations of the Volatility Framework had never been developed?
It's important that we discuss these things, and keep discussing them. The problem that we face as a community is that nothing about what we do is static; everything's changing all the time. Discussing analysis techniques and findings allows us to not only engage other analysts who may not have seen what we've seen (or didn't know that they did), but it also allows us to metaphorically go beyond the next ridge and see if the world really is flat.
ProDiscover
ProDiscover recently turned 7...that's right, version 7 was released, adding MacOSX support (HFS+ file system, DMG images), EVTX Event Log format support, and there's a Fedora Linux live boot disk, as well. Chris Brown has graciously provided me with a license for ProDiscover IR since version 3, so I've seen this application go through a lot of growth, as Perl ProScripting support was added, as well as support for parsing PST/OST files. Just prior to version 7, Chris added support for parsing Windows 7 Jump Lists, making PD the first commercial forensic analysis application (that I'm aware of) to support parsing this artifact. ProDiscover was also the first commercial app to include native parsing of VSCs...right-clicking on the partition within the app gives you a list of VSCs to choose from, and the ones you select appear as volumes/drive letters right there in the app UI.
Timelines
Corey's got a great post up called "What's a Timeline", which helps explain what a timeline is, or should be. Whether you're new to timelines or not, it's worth a look.
ESEDB Format
BugBear posted recently regarding using Joachim Metz's libesedb project tools to parse data from the Windows Desktop Search database, based on the Extensible Storage Engine (or 'ESE'). Joachim also wrote a paper that documents the format of the database. While Joachim's tools are Linux-based, Mark Woan provides his EseDbViewer for Windows systems.
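If you want to try the libesedb tools yourself, a typical invocation might look something like the line below (the exact options are an assumption on my part...check the project documentation for the version you download):

$ esedbexport -t /tmp/wds Windows.edb

...which should export the tables from the database to the target directory for further analysis.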
From reading the materials available, it would appear that the ESE DB format, particularly for the Windows Desktop Search, may provide some very interesting forensic artifacts. It would be good to hear if any analysts out there are already using information from within the ESE database in their examinations.
If you're interested in developing your own code, iiobo has an ESE library and toolkit (C++/C#) available.
Thursday, September 08, 2011
Jump List Analysis, Pt III
Dan recently posted on Jump Lists on his blog, and provided a list of AppIDs, which can be used to augment what Mark posted on the ForensicsWiki.
So what happens if you run across an AppID that's not on one of the lists? Recently Jamie (Twitter: @gleeda) suggested determining the algorithm used to generate AppIDs, but what if the algorithm is a one-way hash, similar to what is used to compute the hashes in Prefetch file names? If that's the case, then having the AppID alone doesn't provide a means for determining the application name (i.e., if the hash is one-way). So, what else can be done?
Well, this would be a great time for timeline analysis. After all, it's unlikely that the only thing you'll want to determine is the name of the application that corresponds to the AppID; it's much more likely that that will be the first (or just one) of several questions.
When developing your timeline of system activity from a Windows 7 system, you'll likely have Prefetch files, Windows Event Log (EVTX) files, file system metadata, Registry key LastWrite times, time stamped information from Registry values, etc. Also, when you parse the DestList stream from the *.automaticDestinations-ms Jump List files (particularly the one you're interested in), you'll have an MRU/MFU listing that you can add to the timeline. So what you'd normally look for (as with a Windows XP system) is that an entry was added to or modified within the Jump List file around the time that an application .exe was accessed...but wait...by default, Windows 7 (and Vista) doesn't update last access times on files! Holy MFT, Batman! What now?
Or, what happens if the user installs an application, does some stuff, and then deletes the application? Even if the Windows 7 system had been tweaked to update last access times on files, if the application is deleted and the executable file isn't available (MFT entry is overwritten...)...well, you can see where I'm going with this...
There is a solution to determining which application was used in both of the above scenarios. One of the forensically interesting aspects of Jump Lists is that they persist on the system even after the application has been removed. We can use other, similar artifacts, such as Prefetch file metadata and UserAssist key entries (which also persist on a system after the application that was launched has been removed) to correlate the necessary information. For example, if a user installed an application (via an MSI package), you'd see that activity in the UserAssist key (as well as the MSI key listing). If they then launched the installed app, you'd also likely (depending upon how it was launched) see that in the UserAssist key, and then you'd see a Prefetch file being created in close proximity to the launch. You should also see the Jump List file being created within close proximity to the UserAssist key and Prefetch file data.
Once the application is removed from the system, you shouldn't see any further modifications to the Prefetch file or Jump List file data. If you found that the application appeared to have been run multiple times, then you should be sure to look to VSCs for additional available time stamped data.
What I hope this demonstrates is how analysis techniques such as timelines not only provide context to the data that you're looking at, but by incorporating multiple data sources, you increase your relative level of confidence in the data itself. Understanding the nature and value of those data sources also means that not only do you understand what should be there, but you can also fill the gaps when something (i.e., an application) is intentionally removed or deleted.
Wednesday, September 07, 2011
Registry Stuff
I ran across a tweet recently from Andrew Case (@attrc on Twitter) regarding a Registry key with some interesting entries; specifically, the key HKLM\Software\Microsoft\RADAR\HeapLeakDetection.
To get an idea of what this key might be all about, I did some research and found this page at the Microsoft site, with an embedded video. From watching the video, I learned that RADAR is a technology embedded in Windows 7 that monitors memory leaks so that data can be collected and used to correct issues with memory leaks in applications. The developer being interviewed in the video gives four primary goals for RADAR:
- To perform memory leak detection in as near real-time as possible
- To perform high granularity detection, down to the function
- To perform root cause analysis; data must be sufficient enough to diagnose the issue
- To respect user privacy (do not collect user data)
So, what does this mean to the analyst? Well, looking around online, I see hits for gaming pages, but not much else, with respect to the Registry keys. Looking at one of my own systems, I see that beneath the above key there is a subkey named "DiagnosedApplications", and beneath that several subkeys with the names of applications, one of which is "Attack Surface Analyzer.exe". Beneath each of these keys is a value called "LastDetectionTime", and the QWORD data appears to be a FILETIME object.
At first glance, this would likely be a good location to look for indications of applications being run; while I agree, I also think that we (analysts) need a better understanding of which applications appear beneath these keys, and under what conditions the artifacts beneath them are created or modified. There definitely needs to be more research into this particular key. Perhaps one way of determining this is to create a timeline of system activity, and add the LastDetectionTime information for these keys to the timeline.
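As a quick illustration, here's a minimal Perl sketch for converting that QWORD to a Unix epoch time suitable for inclusion in a timeline; this assumes the value has been read as a raw 8-byte, little-endian FILETIME (the sample bytes below are made up for demonstration):

my $data = "\x00\x50\x27\x1e\x59\x6a\xcc\x01";   # hypothetical raw LastDetectionTime data
my ($lo, $hi) = unpack("VV", $data);             # two little-endian DWORDs
my $filetime = ($hi * (2 ** 32)) + $lo;          # 100-nanosecond intervals since 1 Jan 1601
my $epoch = int(($filetime / 10000000) - 11644473600);
print gmtime($epoch)." UTC\n";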
Andrew also recently released his Registry Decoder, "an open source tool that automates the acquisition, analysis, and reporting of Microsoft Windows registry contents. The tool was initially funded by the National Institute of Justice (NIJ) and is now ready for public release." I had an opportunity to take a look at a beta version of this tool, and I can definitely see the value of having all of the listed functionality available in one application.
Getting Started
We see it all the time...someone starts off an email or post to a forum with "I'm new to the field..." or "I want to get into DFIR work..." and they ask for advice on how to "break in" to the field.
Digital forensic analysis can be a large, daunting field. There's a lot out there (operating systems, platforms, mobile devices, tablets, GPS, applications, etc.), and in many cases, courses available through community colleges and universities sort of lay it all on the table for you, and let you see the enormity of the field, but there simply isn't enough time in the course work to allow for focusing the interest and attention of the future analyst into one particular area of specialization. Add IR work to that and the field expands even more. So, if you're in school looking ahead to graduation and getting a job, or if you're looking to change professions, or if you're just looking to break into the field and get started...how do you do that?
Eat the Elephant
DF is a daunting field. It's huge...expansive. There's a lot out there. There are a lot of different devices that can be (and have been) the subject of forensic analysis...computers, laptops, cell phones, smart phones, tablets, Internet kiosks, GPS devices, smart cars...the list goes on. So how do you get started? The same way you eat an elephant...one bite at a time. Pick something, and start there. A journey of a thousand miles starts with a single step...so get t' steppin'!
This is going to do a couple of things for you. First, it's going to give you some experience. Regardless of where you start, when you do get employment in the field, at some point, you're going to have a sense of deja vu...hey, I've seen this before. It could be during the interview process or it may be during a case. It may be some virtualization software, or a particular version of a browser...whatever. It doesn't matter where you start, the fact that you started is just going to benefit you in the long run.
Second, it's going to show an employer that you can pick stuff up on your own, and that you don't have to be sent away to a training course in order to learn something. Think about it...would you rather have an employee who can learn on their own, or pick up the basics and then go to the intermediate course, or would you rather have someone who simply can't grow beyond where they are without being spoon fed?
Don't have access to some of the materials you'd like? What about your local library? Seriously. Libraries and even used book stores are fantastic resources for some of the available books that cover topics in the field. Maybe you can borrow a book or two from a friend or professor.
However, books aren't necessarily a requirement...a lot of what you need may not be in books. Let's say that you want to become familiar with browser forensics; start with Google, and then branch out from there. Most of the browsers are freely available, so do some testing and analysis, using tools and techniques you've read about.
Have a Passion
I attended the PFIC conference in 2010, and while I was there, Amber talked about accessing the Windows Sync in her car. I thought this was pretty cool because she didn't show up at work every day and wait for someone to contact her or give her something to do. In this industry, you can't sit back and wait for stuff to come to you...you have to go after it.
There are a LOT of resources available for you to gain experience in the DFIR field. There are images and virtual machines available online that you can download and interact with, and there is a wide range of free and open source analysis frameworks available for you to get experience in analysis, as well.
Even if you don't want to go that route, look around you. How many computer systems do you have access to in your home? How about via friends? There are image acquisition tools and even bootable Linux environments that you can download for free to get experience in acquisition...and once you have an image, you can engage in analysis.
So...pick something, and get started. Even if all you have is a thumb drive, try downloading a tool for dumping physical memory from your Windows system, dump it, and then download a tool to analyze it.
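For example (the tool names and options below are illustrative of what's freely available at the time of this writing...check the documentation for whatever versions you download), you could dump memory with something like MoonSols DumpIt, and then list the active processes from the resulting image with the Volatility Framework:

C:\tools>vol.py -f D:\case\memory.raw --profile=Win7SP1x86 pslist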
Engage with the Community
There are a number of lists and forums (forii??) out there that are free and open, and allow you to engage with other members of the community. Start reading, and start asking smart questions. By that, I mean, don't post a question because you're too lazy to research it yourself...do some research first. Have a question about carving files? Do some research on the topic, and ask a well thought out question.
This also helps when directing questions at one particular person, or working with a mentor...the better developed your questions are, the easier they are to address and answer.
Resources are not just online...there are IRL resources, as well. In my area, we have the NoVA Forensics Meetups once a month. Don't have one in your area? Start one.
An "artifact" of engaging within the community is that you will likely be recognized for your contributions, and if you're looking to change jobs (or get one), you will be "known" to some degree.
Learn to Write
Shakespeare wrote in "Hamlet", "...there are more things in heaven and earth...than are dreamt of in your philosophy", and that holds true for DFIR work, as well. One of the aspects of the field that a lot of folks don't tell you is that being the best analyst...EVER...is worthless if you can't communicate clearly. And most folks...whether you're in the public or private sector...want a report. Writing is hard, but only because we don't like to do it. I have the benefit of a wide range of experience...college, military, graduate school, and private sector experience...and I've seen a lot of folks go through a lot of pain to provide the benefit of their abilities to customers, simply because they don't like to write. If you engage in a community as mentioned above, and you've started asking (and maybe answering) questions, you've already started down the road of developing some writing skills.
When writing, think about your audience. If you're engaged in an online forum, it might be safe to assume that some of the folks reading your questions or posts have a technical background. But what if you decide to start writing tutorials? Let's say that you started to take a look at file carving, and after you had done a great deal of research and study, and worked with several tools, you decided to write up what you learned, either as a tutorial document or a blog post. At that point, your audience may be a little less technical, and you're providing the benefit of your experience so that others can learn.
Now, take that a step further...let's say that you're working in the private sector and just completed analysis for a customer. This report is likely going to go to a high-level (possibly C-suite) manager, who isn't highly technical, and needs information in order to make a business decision. What does he or she want to know? Were we hacked? Who hacked us, how did they do it, what did they take? What risk or compliance issues are we exposed to?
I mentioned getting access to books earlier in this post...going to the library, or a friend, or a professor. One thing you can do besides using that book as a reference or resource is to write a review. How do you do that? Don't reiterate the table of contents...instead, talk about what you found useful (or not so much) in the book. Then post your review in a public location (book retailer's web site, your own blog, etc.)...with your name on it. Why do this? When posting anonymously, we tend to take a much different approach than when we know that what we write can be attributed directly to us, and when you're writing a report in the public or private sector, you can bet that the report will be attributed back to you. Do you seriously think that a prosecutor or a CIO is going to accept (and pay for) a report submitted by "anonymous"?
Sharing
Writing also gives you the ability to give back to and share with the DFIR community. Mark McKinnon added a list of Jump List AppIDs to the ForensicsWiki not too long ago...he did it by noting which AppIDs were already in the Jump List folder, running another application, and identifying the one that was added...and doing that over and over again. He then added the table to the wiki. That's one way of sharing, and there are others. Put together a white paper. Review an application or tool. Start a blog. Review some material about a particular subject and if you find something within that literature that isn't fully described or even mentioned, blog about it.
There's no requirement within the community or profession that you be able to program, and release open source tools. However, one of the best ways to expand our knowledge and understanding isn't to hoard it, but to share it.
Monday, September 05, 2011
Stuff...and whatnot
Speaking Engagements
I received notification last week that my submission for the 2012 DoD CyberCrime Conference was accepted. I'll be giving a presentation on timeline analysis at this conference, and I hope to have some new material ready and available to share well before the presentation.
I will also be speaking at PFIC 2011 this year; I actually have two presentations, with a total of three sessions (according to the schedule) on the podium. I'll be presenting on "Scanning for Low-Hanging Fruit in an Investigation", as well as "Intro to Windows Forensics". Once I've completed WFA 3/e and everything's been submitted, I plan to focus on some of the material for the first presentation, in particular the scanning framework. This presentation will be a follow-on to my OSDFC presentation from this past June.
The closest alligator to the boat, however, would be presentations I'll be giving at ETCSS in Oct. I'll be giving two presentations on 12 Oct..."What's new in Windows 7: An Analyst's Perspective", and "Incident Preparedness".
eEvidence
The eEvidence What's New page has been updated...I always find a lot of great reading material there. This time around, there are a number of excellent presentations linked from the page...all of which are well worth taking a look at.
NoVA Forensics Meetup
Our next meetup is this Wednesday, 7 Sept. Mitch Harris will be presenting on botnets.
Please take a look at the "NoVA Forensics Meetup" page linked on the right-hand panel of this blog, under "Pages", if you have any questions regarding location, times, fees, attendance requirements, etc. Thanks.
If you still feel the need to ask about attendance requirements and fees, you will have to pay eleventy-three dollars at the door as a cover charge, and you have to come dressed as a clown.
RegRipper Plugins
An archive of new RegRipper plugins was recently released and is available for download at this Google Code site. I didn't write these plugins, but I will say that it is really cool to see folks taking the time to take full advantage of an open source tool such as RegRipper, and create what they need to get the job done.
I did modify some code for one of the plugins; if you have questions about the plugins themselves, contact the author.
Now, I've seen a couple of comments and received an email or two recently regarding adding these new plugins to RegRipper. First, download the archive and copy the plugins into the plugins directory...however, from there, there seems to be some confusion regarding how to get RegRipper to use these new plugins.
The RegRipper plugins folder generally contains two types of files: the ones that end with the ".pl" extension should be plugins (Perl scripts), and those that have no extension should be profiles, or lists of plugins that you'd like to run against a particular hive. The profiles don't have to have a specific name...the ones that were originally shipped with RegRipper (software, system, sam, ntuser, etc.) are just examples, nothing more. You can name one of these files "steve" if you like, it doesn't matter. As long as the file does not have an extension, it will appear in the dropdown list in the RegRipper GUI.
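For example, a hypothetical profile named "rdp-check" would simply be a plain text file (no extension) containing one plugin name per line...the plugin selection here is purely illustrative:

compname
timezone
mountdev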
So, when you get a new plugin and add it to the plugin folder, yes, you do sort of have to figure out what you want to do with it. I designed it this way to give analysts the flexibility to run their exams the way they want, to give them choices (hopefully based on knowledge, education, and experience). In order to facilitate determining which hive a plugin is intended for, I added some functionality to rip.pl (or rip.exe, whichever version you're using); for example, if you run rip.pl with the "-l" switch, you will see a listing of plugins, with information about each one, output to STDOUT. If you add the "-c" switch, the output will be in .csv format, which is great for redirecting to a file, which you can then open in Excel. From there, it's pretty easy to create or modify a profile via Notepad.
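For example, to get the full plugin listing into a file that you can open in Excel:

C:\rr>rip.pl -l -c > plugins.csv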
I also created the Plugin Browser, which was released along with the code/programs for Windows Registry Forensics (RR.zip). This tool provides a graphical method for an analyst to browse through the plugins (hence the name) and even create a profile.
When I sat down and came up with this tool, I wanted the user/analyst to have the ability to decide which plugins to run. After all, there isn't always a need to run all of the available plugins against a hive file; this may simply be too much information to dig through. Some plugins may be redundant, parsing the same information but displaying it in a different manner (yes, I was once contacted by someone who had run all three plugins that parse UserAssist subkey data and present it in different formats...they asked me what the difference was between them...). There may also be instances in which a plugin may be used in different profiles; for example, I would include the plugin that parses the XP firewall settings in a profile that gets general information about the system, as well as one specifically used to determine if there are any indications of malware on the system.
What this ultimately means is that the analyst is going to have to do something. I'm really sorry about that...but as an analyst using RegRipper, you're going to have make some decisions and take some actions.
Note: Something I wanted to mention again...tools such as RegRipper (and rip) are only as powerful as the analyst using them. If you sit down and expect RegRipper to extract some particular information from the Registry for you, without understanding what the tool is doing, or if there is even a plugin that gets that information, you may be disappointed.
What do I do if it don't work?
If something doesn't appear to be right about the tool you're using, it's usually most helpful if you go directly to the author, and provide information beyond, "it don't work". The response may be simply an update, particularly if it's a known issue. Or you may be using the tool incorrectly...those pesky readme files are such a PITA, aren't they? Or it could be an unanticipated condition...such as when I was working on the Jump List parser and found out what a Jump List "looks like" when it hasn't yet been closed by the operating system (the Jump List was extracted from an image file produced during a live acquisition).
What if a plugin I need isn't in RegRipper?
If there's a particular plugin that you need and can't seem to find, contacting me with a clear description of what you're looking for, as well as providing a sample hive, will usually result in a new or updated plugin in fairly short order. And no, I don't make a habit of sharing the fact that you asked, or sharing the contents of the hive, or the sharing the plugin. I tend to securely delete the hive file once I'm done, and I leave it up to you to share the plugin...unless it's really cool, but then, I'll ask you first. So if you have any trepidation about asking for help, I hope what I've said here will quell those concerns or fears.
Resources
I've posted on using RegRipper to the blog before; there's a link here, and one here. There is also a great deal of information about using RegRipper available in chapter 2 of Windows Registry Forensics.
Books (Again)
I had a section in my last post regarding the use of books I've written or co-authored being used in courses to teach computer forensics. I received an email from Joshua Bartolomie, Adjunct Lecturer at Utica College, and have provided the entirety of his statement, quoted below, with his permission:
I'm sure that by now, many of us have heard of this guy, who got 6 yrs for "hacking" user's systems and taking over their webcams and mics, and using information (pictures, video, stuff he listened in to) to extort his victims.
Something else to be aware of, folks, is how this sort of information is presented in the media...notice that the first sentence of the third paragraph mentions "undetectable malware", but later the article actually names some of the malware used (i.e., Poison Ivy).
Tools
If you're into digital forensics analysis of Windows systems, particularly those formatted NTFS, then you should consider taking a look at a couple of tools.
First off, Willi Ballenthin released a Python script for parsing INDX files; Willi's also done an excellent job of providing background information about the tool, as well as why you'd want to use it, so take a look.
Then there's the Windows NTFS journal change log parser from TZWorks, LLC. Tim Mugherini provides a great example of how "jp" was used during a case.
I haven't used either of these tools yet, but I can see where they would be very useful during an examination. I've found indications of files in directories via the INDX files (appear as "$I30" in FTK Imager) when malware or an intruder's tool kit was deleted after use.
Speaking
I received notification last week that my submission for the 2012 DoD CyberCrime Conference was accepted. I'll be giving a presentation on timeline analysis at this conference, and I hope to have some new material ready and available to share well before the presentation.
I will also be speaking at PFIC 2011 this year; I actually have two presentations, with (according to the schedule) a total of three sessions on the podium. I'll be presenting on "Scanning for Low-Hanging Fruit in an Investigation", as well as "Intro to Windows Forensics". Once I've completed WFA 3/e and everything's been submitted, I plan to focus on some of the material for the first presentation, in particular the scanning framework. This presentation will be a follow-on to my OSDFC presentation from this past June.
The closest alligator to the boat, however, is ETCSS in October, where I'll be giving two presentations on 12 Oct..."What's New in Windows 7: An Analyst's Perspective" and "Incident Preparedness".
eEvidence
The eEvidence What's New page has been updated...I always find a lot of great reading material there. This time around, there are a number of excellent presentations linked from the page...all of which are well worth taking a look at.
NoVA Forensics Meetup
Our next meetup is this Wednesday, 7 Sept. Mitch Harris will be presenting on botnets.
Please take a look at the "NoVA Forensics Meetup" page linked on the right-hand panel of this blog, under "Pages", if you have any questions regarding location, times, fees, attendance requirements, etc. Thanks.
If you still feel the need to ask about attendance requirements and fees, you will have to pay eleventy-three dollars at the door as a cover charge, and you have to come dressed as a clown.
RegRipper Plugins
An archive of new RegRipper plugins was recently released and is available for download at this Google Code site. I didn't write these plugins, but I will say that it is really cool to see folks taking the time to take full advantage of an open source tool such as RegRipper, and create what they need to get the job done.
I did modify some of the code for one of the plugins; if you have any questions about the plugins themselves, please contact the author.
Now, I've seen a couple of comments and received an email or two recently regarding adding these new plugins to RegRipper. First, download the archive and copy the plugins into the plugins directory; from that point, however, there seems to be some confusion regarding how to get RegRipper to use these new plugins.
The RegRipper plugins folder generally contains two types of files: the ones that end with the ".pl" extension should be plugins (Perl scripts), and those that have no extensions should be profiles, or lists of plugins that you'd like to run against a particular hive. The profiles don't have to have a specific name...the ones that were originally shipped with RegRipper (software, system, sam, ntuser, etc.) are just examples, nothing more. You can name one of these files "steve" if you like; it doesn't matter. As long as the file does not have an extension, it will appear in the dropdown list in the RegRipper GUI.
So, when you get a new plugin and add it to the plugins folder, yes, you do sort of have to figure out what you want to do with it. I designed it this way to give analysts the flexibility to run their exams the way they want, to give them choices (hopefully based on knowledge, education, and experience). In order to facilitate determining which hive a plugin is intended for, I added some functionality to rip.pl (or rip.exe, whichever version you're using); for example, if you run rip.pl with the "-l" switch, you will see a listing of plugins, with information about each one, output to STDOUT. If you add the "-c" switch, the output will be in .csv format, which is great for redirecting to a file that you can then open in Excel. From there, it's pretty easy to create or modify a profile via Notepad.
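For example (just a quick sketch...the file name "plugins.csv" is simply one I picked for illustration), you can see the listing, and then dump a sortable version of it, like so:

rip.pl -l
rip.pl -l -c > plugins.csv

Open plugins.csv in Excel, and you should be able to see which hive each plugin is intended for; pick out the ones you want, and paste their names into a new (extensionless) file to create your profile.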
I also created the Plugin Browser, which was released along with the code/programs for Windows Registry Forensics (RR.zip). This tool provides a graphical method for an analyst to browse through the plugins (hence the name) and even create a profile.
When I sat down and came up with this tool, I wanted the user/analyst to have the ability to decide which plugins to run. After all, there isn't always a need to run all of the available plugins against a hive file; this may simply be too much information to dig through. Some plugins may be redundant, parsing the same information but just displaying it in a different manner (yes, I was once contacted by someone who had run all three plugins that parse UserAssist subkey data and present it in different formats...they asked me what the difference was between them...). There may also be instances in which a plugin may be used in different profiles; for example, I would include the plugin that parses the XP firewall settings in a profile that gets general information about the system, as well as in one specifically used to determine whether there are any indications of malware on the system.
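Just to illustrate what a profile actually looks like (the plugin names below are examples I'm using for illustration...run rip.pl with the "-l" switch to see the exact names available in your copy of RegRipper), a hypothetical "xpmalware" profile for the Software hive would simply be a text file named "xpmalware" (no extension) containing one plugin name per line:

appinitdlls
bho
imagefile
soft_run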
What this ultimately means is that the analyst is going to have to do something. I'm really sorry about that...but as an analyst using RegRipper, you're going to have to make some decisions and take some actions.
Note: Something I wanted to mention again...tools such as RegRipper (and rip) are only as powerful as the analyst using them. If you sit down and expect RegRipper to extract some particular information from the Registry for you, without understanding what the tool is doing, or if there is even a plugin that gets that information, you may be disappointed.
What do I do if it don't work?
If something doesn't appear to be right about the tool you're using, it's usually most helpful if you go directly to the author and provide information beyond "it don't work". The response may simply be an update, particularly if it's a known issue. Or you may be using the tool incorrectly...those pesky readme files are such a PITA, aren't they? Or it could be an unanticipated condition...such as when I was working on the Jump List parser and found out what a Jump List "looks like" when it hasn't yet been closed by the operating system (the Jump List was extracted from an image file produced during a live acquisition).
What if a plugin I need isn't in RegRipper?
If there's a particular plugin that you need and can't seem to find, contacting me with a clear description of what you're looking for, as well as providing a sample hive, will usually result in a new or updated plugin in fairly short order. And no, I don't make a habit of sharing the fact that you asked, sharing the contents of the hive, or sharing the plugin. I tend to securely delete the hive file once I'm done, and I leave it up to you to share the plugin...unless it's really cool, but then, I'll ask you first. So if you have any trepidation about asking for help, I hope what I've said here will quell those concerns or fears.
Resources
I've posted on using RegRipper to the blog before; there's a link here, and one here. There is also a great deal of information about using RegRipper available in chapter 2 of Windows Registry Forensics.
Books (Again)
I had a section in my last post regarding the use of books I've written or co-authored being used in courses to teach computer forensics. I received an email from Joshua Bartolomie, Adjunct Lecturer at Utica College, and have provided the entirety of his statement, quoted below, with his permission:
To expand a bit on some detail – my associate and I just finished one of our 8 week classes (Computer Forensic Investigations I) in the Cyber Security Master Program at Utica College where we utilized your Windows Forensic Analysis 2ED book as the primary ‘text’ book, with supplemental/ancillary reading via online texts and reports as needed for core concepts and research. We walked through the book and leveraged your examples and case studies in a lot of our discussions and hands-on lab concepts – for the most part the hands-on labs were specifically set to look for, preliminarily evaluate, and compare/contrast available technologies within the vein of the topic at hand. The students responded well to this type of instruction and even those that have done forensic analysis before are keeping your book handy as a practical reference.
We also just started the follow-on class (Computer Forensic Investigations II) and are leveraging the Open Source Digital Forensics book you co-authored as our primary textbook – with the same caveat as above regarding supplemental/ancillary reading via online texts and reports as needed for core concepts and research. The plan that we've outlined in this class is to walk the book front to back and evaluate/compare/use the 'forensic workstations' that are being built. We are building both a Linux and Windows VM concurrently to compare/contrast the environments, their applicable usages, and pro’s and con’s. We are also utilizing these VM’s for analysis and examination hands-on labs as we progress; leveraging standard and/or available forensic test images such as those offered by NIST, Honeynet project, etc. At the end of the class - all of our students should have two fully functional, usable, and relatively cheap/free forensic environments to continue their learning and expansion in this field.
The goal of our classes and overall program is to take a different approach to the traditional theory based Graduate programs, and instead provide our students with viable, practical, and production/operations grade hands-on instruction and usage. The two courses I mentioned above are being taught between myself, with a corporate security focus/background, and one of my associates at Utica College that is also the lead computer forensic investigator for a local police department, with an obvious law enforcement focus/background. We both instruct portions of each of our classes and by tag-teaming them we are able to highlight concepts, protocol/procedures, and issues from our respective areas of expertise. By executing the classes in this manner, we are able to provide them with insight from two generally different operational approaches/angles, and integrating your book(s) provides a solid foundation for hands-on real-world applicability.
This is a great endorsement for all of the books mentioned! When I develop training materials myself, my focus (time permitting) is usually to give those I'm engaged with something that they can use immediately, right there in the course (or as soon as they leave)...that "practical...operations-grade hands-on instruction". I do that because that's what I look for in training courses as well, regardless of whether it's a 60-minute presentation or a half day of instruction. I tend to look for something I can put my hands on and use. Oddly enough, it turns out that others look for the same thing. So, again...endorsements like this are great, and they're much better than a "review" that simply reiterates the table of contents of the book.
CyberCrime
I'm sure that by now, many of us have heard of this guy, who got 6 yrs for "hacking" users' systems, taking over their webcams and mics, and using the information (pictures, video, stuff he listened in on) to extort his victims.
Something else to be aware of, folks, is how this sort of information is presented in the media...notice that the first sentence of the third paragraph mentions "undetectable malware", but later the article actually names some of the malware used (i.e., Poison Ivy).
Tools
If you're into digital forensic analysis of Windows systems, particularly those formatted with NTFS, then you should consider taking a look at a couple of tools.
First off, Willi Ballenthin released a Python script for parsing INDX files; Willi's also done an excellent job of providing background information about the tool, as well as why you'd want to use it, so take a look.
Then there's the Windows NTFS journal change log parser from TZWorks, LLC. Tim Mugherini provides a great example of how "jp" was used during a case.
I haven't used either of these tools yet, but I can see where they would be very useful during an examination. I've found indications of files in directories via the INDX files (which appear as "$I30" in FTK Imager) when malware or an intruder's toolkit was deleted after use.
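Neither of the tools above requires it, but if you just want a quick look at an exported $I30 attribute before handing it to a real parser, here's a minimal Perl sketch (my assumptions: standard 4096-byte index allocation records, each beginning with the ASCII signature "INDX"...this is in no way a replacement for Willi's tool):

#!/usr/bin/perl
# scan_i30.pl - walk an exported $I30 attribute in 4096-byte chunks and
# report the offset of each INDX record signature found
use strict;
use warnings;

my $file = shift || die "Usage: scan_i30.pl <exported \$I30 file>\n";
open(my $fh, "<", $file) || die "Could not open $file: $!\n";
binmode($fh);

my ($rec, $offset) = (undef, 0);
while (read($fh, $rec, 4096)) {
    printf "INDX record at offset 0x%08X\n", $offset if (substr($rec, 0, 4) eq "INDX");
    $offset += 4096;
}
close($fh);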
Friday, September 02, 2011
Friday Updates
Prefetch Analysis
I received an email recently that referred to an older post in this blog regarding Prefetch file analysis. The sender mentioned that while doing research into Prefetch files, he'd run across this post (in Polish) that indicated that under certain circumstances, the run count in the Prefetch file "isn't precise". So, being curious, I ran the text of the site through Google Translate, and got the following (in part):
It turns out that the program run counter in the .pf file is not all that accurate. Once its value reaches 0x0A, it is no longer so "eager" to increment on subsequent runs of the program, and it also does not update the date of the last run. You can see a correlation here: if the counter field [0x90] updates, the date of the last run [0x78] updates as well (actually, this statement is not just an implication but an equivalence).
I did some small tests, and it turns out that if the difference between the current date and the last run date stored in the .pf file is less than 2 minutes (120 seconds), the counter will not update. So if a program (even malware) runs many times within a short period of time and we want to know the date of its last launch, as well as the number of launches, the file in the Prefetch folder can lead us badly astray.
Another interesting fact is that by changing the system date (even using the clock in the taskbar), we can easily fool the .pf files. Assume that program X was last run in July of this year. Someone gains physical access to our computer and wants to run program X, but of course would not want the contents of the Prefetch folder to betray the unauthorized launch. The method is trivial: the attacker changes the date (e.g., to the year 2002) and runs the program. The difference between 2002 and 2011 is "less than 2 minutes" (sounds weird, but subtract the larger number from the smaller and you get a negative value). The .pf file remains unchanged, and program X runs cleanly (from the standpoint of Prefetch).
If someone wants a really rigorous analysis, it appears that the files in the Prefetch folder may not be much help.
In short, what this says is that if someone runs an application several times in quick succession (i.e., within 120 seconds, or 2 minutes), the Prefetch metadata isn't modified accordingly. Interesting stuff, and worth a further look; if this information truly pans out and can be replicated, it would likely have a significant impact on analysis and reporting. One thing I have thought about, however, is...does this actually happen? I mean, if a user launches Solitaire, what would be the purpose of launching it again within 2 minutes? What about malware? Let's say an intruder gains access to a system and copies over some malware...what would be the purpose of launching it several times within a 2-minute period?
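If you want to test this on your own .pf files, here's a minimal Perl sketch that pulls the two values mentioned in the quoted post; note that the offsets (last run FILETIME at 0x78, run count at 0x90) are the ones the post describes for XP/2003-format Prefetch files, so don't expect this to work as-is against Vista or Windows 7 .pf files:

#!/usr/bin/perl
# pf_check.pl - dump the last run time and run count from an XP-format
# Prefetch file, using the offsets described in the quoted post
use strict;
use warnings;

my $file = shift || die "Usage: pf_check.pl <file.pf>\n";
open(my $fh, "<", $file) || die "Could not open $file: $!\n";
binmode($fh);

my $data;
seek($fh, 0x78, 0);                 # last run time, 64-bit FILETIME
read($fh, $data, 8);
my ($lo, $hi) = unpack("VV", $data);
my $ft = ($hi * (2 ** 32)) + $lo;   # 100-ns intervals since 1 Jan 1601
my $unix = int($ft / 10000000) - 11644473600;

seek($fh, 0x90, 0);                 # run count, 32-bit DWORD
read($fh, $data, 4);
my $count = unpack("V", $data);

print "Last run : ".gmtime($unix)." UTC\n";
print "Run count: $count\n";
close($fh);

Run it against a .pf file, launch the application a couple of times within 2 minutes, and run it again...you can then see for yourself whether the counter and timestamp behave as described.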
Books
I've known for some time that various courses make use of my books, either as recommended reading or as required texts. For example, I understand from some recent emails that Utica College uses some of my books in their Cyber Security curriculum. As an author, this is pretty validating, and in a way, better than a review; rather than posting a review that says what's in the book, the instructors are actually recommending it or using it. Also, it's great marketing for the books.
Below is a recommendation for Windows Registry Forensics from Andy Spruill (Senior Director of Risk Management/FSO, GSI), posted here with his permission:
I don’t know anyone who is on the fence about your book. As far as I am concerned, it is a mandatory item for anyone in this field.
I have a copy sitting in the lab here at Guidance and another sitting in the lab at the Westminster Police Department, where I am a reserve officer with their high-tech crimes unit. I have another personal copy that I use as an Adjunct Instructor at California State University, Fullerton, where I teach a year-long certificate program in Computer Forensics.
As an author, I usually sit back after a book has been out for a while and wonder if the information is of use to folks out there in the community; is the content of any benefit? I see the reviews posted to sites (Amazon, blogs, etc.), but many of them simply reiterate the table of contents without going into whether the reviewer found the information useful or not. I get sporadic emails from people saying that they liked the book, but I don't often get much of a response when I ask what they liked about it. So when someone like Andy, with his background, experience, and credibility, uses and recommends the book, that's much better than a review. This isn't me suggesting to folks that it's a resource...after all, I'm the author, so what else am I going to say? It's someone like Andy...a practitioner and an instructor, teaching up-and-coming practitioners...saying that it's a resource, and that lends the statement credibility. So, a great big "thanks" to Andy, and to all of the other instructors, teachers, mentors, and practitioners out there who recommend books like WRF and DFwOST to their charges and colleagues.
Analysis
I recently posted on Jump List and Sticky Notes analysis, and also released a Sticky Notes parsing tool. As of 11am, 31 Aug, there were just 10 downloads. One of the folks who downloaded the tool has apparently actually used it, and sent me an email...I received the following in that email from David Nides (quoted here with his permission):
Time after time I see examiners that aren't performing what I would consider comprehensive analysis because they don't go beyond push buttons forensics.
This is something I've mentioned time and again, using the term "Nintendo forensics". Chris Pogue also discusses this in his Sniper Forensics presentations. When developing the tools I wrote for parsing Jump Lists and Sticky Notes, I didn't find a great number of posts on the Interwebs from folks asking for assistance in parsing these types of files...in fact, I really didn't find any. But I do know folks who are currently (and have been) analyzing Windows 7 systems; when doing so, do they understand the significance of Jump Lists and Sticky Notes, and are these artifacts being examined? Or is most of the analysis that's being done out there simply a matter of loading the acquired image into a commercial forensic analysis application and clicking a button?
Windows 8
What? Windows 8?!? We were just talking about Windows 7, and you've already moved on to Windows 8...and perhaps rightly so. It's coming folks...and I ran across this interesting post regarding improvements in the file operations (copy, move, etc.) experience. There are some interesting statistics described in the blog post, which were apparently derived from analysis of anonymous data provided by Windows 7 users (anyone remember Dr. W. Edwards Deming??). The post indicates that there's some significant tracking and optimization within the new version of Windows with respect to these file operations, and that users are granted a more granular level of control over these operations.
Okay, great...but in the words of Lon Solomon (who's a fantastic speaker, by the way...), "so what?" Well, if you remember when Windows XP came out, there was some trepidation amongst the DFIR community, with folks up in arms, screaming, "what is this new thing?!?"...yet over time, we've come to realize that for the sake of the "user eXPerience", there are significantly more artifacts for analysts. The same is true with Windows 7...so should we (DFIR analysts) expect anything less from Windows 8?
CDFS
If you have had any thoughts or questions regarding the CDFS, or why you should join, here's another resource that provides an excellent view into answering that question. This is a timely post, considering this post that rehashes issues with accreditation and certification in the DFIR industry. Yes, I joined this week, and I'm looking forward to the opportunity to have a say in the direction of my chosen profession.
Google Artifacts
Imagine a vendor or software developer actually providing forensic artifacts...yeah, it's just like that! It seems that Google is doing us DFIR folks a favor and providing offline access to GMail. Looking at some of the reviews for the app, it doesn't look as if there's overwhelming enthusiasm for the idea, but this is definitely something to look for and take advantage of if you find it.