I was talking to some really, really smart folks last week about some things you could do with data that resulted from computer forensic analysis, and the topic of geolocation came up. I had some ideas, and when I returned from my trip, I started taking a look into how I could use historical information derived from an acquired image to perform geolocation. I sat down yesterday...it was rainy, so it's a nice day to code...and worked up a proof-of-concept that came out quite nicely.
So basically, here's how it works....during the course of an exam, you may determine that the system was used to connect to multiple wireless access points (WAPs). As discussed earlier, there may be more than just the SSID of the WAP recorded in the Registry...for example, the MAC address of the WAP is also recorded. Pretty neat.
So what? So you have a MAC address...what would you do with this information? Look up the vendor? Well...that's a start, as it can help you confirm that you do, in fact, have the right type of device. But in a few easy steps, you may be able to find out where that WAP is physically located. I put heavy emphasis on may because this isn't a 100% done deal...but it is way kewl nonetheless.
So the steps go a little something like this...
1. Run RegRipper (or rip or even ripXP) against the Software hive to get the SSID and MAC address of the WAP, as well as the last time the WAP was connected to. For XP systems, the updated ssid plugin is what you want to use, and for Vista and above systems, I wrote a plugin called networklist.
Note: There's a date associated with the SSID within the binary data of the Registry value on XP systems...however, I have no idea what this date means. On Vista systems and above, the MAC address has a distinct value (ie, does not need to be stripped out of a binary data stream), and a date/time stamp that indicates when the WAP was last connected to.
As an example, here's the data I retrieved from an XP system:
Launching ssid v.20090807
SSID
Microsoft\WZCSVC\Parameters\Interfaces
NIC: 11a/b/g Wireless LAN Mini PCI Express Adapter
Key LastWrite: Thu Feb 7 10:38:43 2008 UTC
Wed Oct 3 16:44:25 2007 tmobile MAC: 00-19-07-5B-36-92
For completeness' sake, the output of the networklist plugin looks like this:
Launching networklist v.20090811
Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles
linksys
Key LastWrite : Mon Feb 18 16:02:48 2008 UTC
DateLastConnected: Mon Feb 18 11:02:48 2008
DateCreated : Sat Feb 16 12:02:15 2008
DefaultGatewayMac: 00-0F-66-58-41-ED
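Just to illustrate the sort of thing the XP plugin has to do under the hood, here's a minimal sketch of formatting a 6-byte MAC pulled from binary value data...note that the offset used here is purely illustrative, NOT the documented layout of the Static#000x value data:

# Minimal sketch (Perl): format a 6-byte MAC from binary Registry value data.
# The 0x00 offset is an illustrative assumption, not the actual value layout.
use strict;
my $data   = "\x00\x19\x07\x5B\x36\x92" . "\x00" x 10;   # stand-in for the raw value data
my $offset = 0x00;                                       # assumed location of the MAC
my @bytes  = unpack("C6", substr($data, $offset, 6));
print "MAC: ".join("-", map { sprintf "%02X", $_ } @bytes)."\n";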
2. Submit your MAC address to the SkyHook WiFi Geolocation database...for metropolitan areas, you may get a lat/long pair back...it's not guaranteed, of course.
C:\Perl>skyhook.pl 00-19-07-5B-36-92
Latitude = 38.9454029
Longitude = -77.4444937
Note: The code for skyhook.pl was based on this code...many thanks to Joshua! I'm doing this on Windows, and I couldn't find a version of XML::LibXML that installed on Windows, so I used XML::Simple. Also, I made a number of other modifications with respect to programming style, but Joshua did most of the heavy lifting.
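For anyone curious what a lookup like this boils down to, here's a stripped-down sketch using LWP::UserAgent and XML::Simple. Keep in mind that the endpoint URL, the XML request schema, and the credentials below are placeholders...check the Skyhook API docs (and Joshua's original code) for the real thing:

# Stripped-down sketch of a Skyhook-style WiFi geolocation lookup. The URL,
# the XML request body, and the response field names are PLACEHOLDERS - see
# the Skyhook API documentation for the actual schema.
use strict;
use LWP::UserAgent;
use XML::Simple;

my $mac = shift || die "MAC address required\n";
$mac =~ s/[:\-]//g;                          # strip separators from the MAC

my $req_xml = <<"EOT";
<?xml version="1.0"?>
<LocationRQ xmlns="http://skyhookwireless.com/wps/2005" version="2.6">
  <authentication version="2.0">
    <simple><username>YOUR_USERNAME</username><realm>YOUR_REALM</realm></simple>
  </authentication>
  <access-point><mac>$mac</mac><signal-strength>-50</signal-strength></access-point>
</LocationRQ>
EOT

my $ua   = LWP::UserAgent->new();
my $resp = $ua->post("https://api.skyhookwireless.com/wps2/location",
                     Content_Type => "text/xml", Content => $req_xml);
die "Request failed: ".$resp->status_line."\n" unless ($resp->is_success);

my $ref = XMLin($resp->content);
# Field names below depend on the actual response schema.
print "Latitude  = ".$ref->{location}->{latitude}."\n";
print "Longitude = ".$ref->{location}->{longitude}."\n";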
3. Using the lat/long pair, create a URL for Google Maps (you can include some additional information, such as the SSID and date last connected), which will give you a map with a pushpin and any additional information you add. For multiple WAPs and to plot multiple pushpins on the same map, you may need to create a KML or KMZ file and host it someplace that can be reached by Google Maps, and then submit the appropriate URL (on the KML Update page, hover over the link that ends in cropcircles.kmz...).
For the WAP in our example, the URL might look like this. Here's an article that describes how WiFi geolocation can be used to recover stolen laptops.
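If you want to script step 3 as well, building the URL (and a bare-bones KML Placemark for the multi-WAP case) only takes a few lines; here's a minimal sketch:

# Minimal sketch: build a Google Maps URL and a bare-bones KML Placemark from
# a lat/long pair and an SSID.
use strict;
use URI::Escape;

my ($lat, $lon, $ssid) = (38.9454029, -77.4444937, "tmobile");
print "http://maps.google.com/maps?q=$lat,+$lon+".uri_escape("($ssid)")."&iwloc=A&hl=en\n";

# One Placemark per WAP; wrap these in <kml><Document>...</Document></kml>
# and host the file someplace Google Maps can reach it. KML wants lon,lat order.
printf "<Placemark><name>%s</name><Point><coordinates>%s,%s,0</coordinates></Point></Placemark>\n",
       $ssid, $lon, $lat;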
Again, this isn't 100%. Not every area is mapped, and it's highly unlikely that SOHO WAPs have been mapped. Still, if you can get something out of this, it might be useful.
Resources
Google Gears Geolocation API gets Wifi
SkyHook Wireless How It Works page
Firefox GeoLocation add-on
Addendum: Updated my Perl script tonight, thanks to input from Colin Shepard on Net::MAC::Vendor (for Windows, download the .tar.gz file and extract the .pm file into site\lib\Net\MAC in your Perl install...). Now, the script takes either a WAP MAC address (if no SSID is provided, it uses "Unknown") or the path to a file containing MAC addresses and SSIDs, one pair per line, separated by a semi-colon. The output is any vendor and address information returned by the OUI lookup, and a URL that can be pasted into your browser window to get a Google Map (if lat/longs are available). For example:
C:\Perl>maclookup.pl -w 00:19:07:5B:36:92 -s tmobile
OUI lookup for 00:19:07:5B:36:92...
Cisco Systems
80 West Tasman Dr.
SJ-M/1
San Jose CA 95134
UNITED STATES
Google Map URL (paste into browser):
http://maps.google.com/maps?q=38.9454029,+-77.4444937+%28tmobile%29&iwloc=A&hl=en
Pretty sweet...
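If all you want is the OUI lookup piece, it amounts to something like the following sketch (Net::MAC::Vendor pulls its data from the IEEE OUI listing, so you need to be online):

# Sketch of the OUI lookup piece using Net::MAC::Vendor (requires network
# access to retrieve the IEEE OUI data).
use strict;
use Net::MAC::Vendor;

my $mac  = shift || "00:19:07:5B:36:92";
my $info = Net::MAC::Vendor::lookup($mac);   # array ref of vendor name/address lines
if (defined $info) {
    print "$_\n" foreach (@$info);
}
else {
    print "No OUI match for $mac\n";
}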
Friday, September 25, 2009
Linkity-Link
Now and again, the question comes up about writing technical forensic examination reports. Often in some forums, you'll see someone say that they feel that folks should publish their report formats...most often without doing so themselves. Funny how that works, eh?
Here's a link to a recent DFI article that describes what a report should contain.
Not long ago, John H. Sawyer wrote a nice article for DarkReading that mentioned my name...very cool, and a very nice reference. Thanks, John!
From the sausage factory, there's a great blog post about Windows Photo Gallery artifacts. IMHO, for the most part, we don't see enough of these kinds of posts...great work! Here's another, similar post from the ThinkTankForensics blog.
This past week, I had an opportunity to be around and talk to some really smart people, and had some really interesting thoughts about WiFi geolocation data extracted from acquired images. Okay, it's not quite as simple as that, per se, but I do think that for some folks (in particular, law enforcement), this sort of data exploitation will be extremely useful.
Ran across a reference to the Digital Forensic Framework last week, and thought I'd take a look...yes, Virginia, there is a Windows version! I'll have to read a bit more about it and give it a run.
Speaking of frameworks, ProDiscover version 6 is available! Thanks to Chris Brown's generosity, I've been using PD since version 3, and have written several ProScripts using the Perl scripting interface into ProDiscover. Some of the updates in version 6 are very, very welcome, including the ability to conduct regular expression raw mode searches. Very cool! I also ran across some comments in various lists that version 6 also supports access to Vista Volume Shadow Copy files...this is something I definitely need to check out. One of the things I've always loved about ProDiscover is that it has a cleaner interface than some other tools, and I really like the Perl scripting capability!
Wednesday, September 16, 2009
Parsing .job Files
John McCash recently posted an interesting article to the SANS Forensic blog on decoding the binary Scheduled Task .job file format (John's article referred to tasks created using at.exe).
Based on John's article, I wrote a Perl script that parses the binary structure of the .job file header, and also gets some of the variable data that follows the header. For right now, I've got a number of the header fields translated and extracted, and I've been testing against a couple of .job files I've created on my own system; the output appears as follows:
C:\Perl\forensics>jobparse.pl d:\cases\apple.job
Command : :C:\Program Files\Apple Software Update\SoftwareUpdate.exe -task
Status : Task is disabled
Last Run Date: Thu Jul 16 16:21:00 2009 Exit Code : 0x0
So, this is a good start. I've had engagements where analysis of the Scheduled Tasks log file proved to be critical, so I can see how being able to get details from the .job file might also be important. Also, as John pointed out, information about the binary structure of .job files may assist you if you find indications of deleted .job files in unallocated space. If you get a hit and are able to backtrack a bit, you may find enough information in unallocated space to be able to decipher the header fields.
As a side note, I did find some minor issues with how the MS documentation identifies some of the fields in the structures; for example, the documentation shows a total of 9 2-byte fields for the UUID, but UUIDs are defined as being 128 bits (ie, 16 bytes) long; this threw my initial parsing code off by 2 bytes. Interestingly enough, this applies to Windows XP systems, and the last run time for the job is maintained in the header as a 128-bit wide field; here I was thinking that the new date format was only part of Vista and Windows 7! Eesh!
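To give a sense of what the parsing looks like, here's a stripped-down sketch that pulls just the last run time out of the fixed-length header. The 0x34 offset is based on my reading of the FIXDLEN_DATA layout in the MS documentation...verify it against the spec (and a known-good .job file) before relying on it:

# Stripped-down sketch: extract the last run time (a 16-byte SYSTEMTIME) from
# a .job file's fixed-length header. The 0x34 offset is based on my reading of
# the MS FIXDLEN_DATA documentation - verify before relying on it.
use strict;

my $file = shift || die "Usage: jobtime.pl <.job file>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $hdr;
read($fh, $hdr, 68);                          # fixed-length section is 68 bytes
close($fh);

my ($yr, $mon, $dow, $day, $hr, $min, $sec, $ms) =
    unpack("v8", substr($hdr, 0x34, 16));     # eight little-endian 16-bit fields
printf "Last Run Time: %04d-%02d-%02d %02d:%02d:%02d\n", $yr, $mon, $day, $hr, $min, $sec;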
Addendum: Interesting thought...the timestamps (and other data) embedded in .job files are updated when the task is run; therefore, this information can be used to provide indications of activity that modifies file MAC times, such as AV scanning, searches, or AF tactics...
Thursday, September 10, 2009
Mo' Stuff
Man, there is just so much stuff sometimes, that it takes several posts to get it all in...
Perhaps one of the biggest things to come through my inbox lately has been that F-Response now has a scripting interface! That's right! According to Matt:
The F-Response Enterprise installation package now includes a partial implementation of the F-Response Enterprise Management Console in a language neutral fully scriptable COM object. This object will allow a technical user of F-Response Enterprise to script actions typically initiated manually in the FEMC.
Sweet! What this technical jargon amounts to is that F-Response EE can now be scripted through Perl! Imagine automating collection of system names from the network, pushing out F-Response EE, collecting information, disconnecting and moving on to the next system!
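I haven't had a chance to put anything together yet, but from Perl the skeleton would look something like the Win32::OLE sketch below. Note that the ProgID and the method call are hypothetical placeholders, NOT the actual F-Response object model...see Matt's documentation for the real API:

# Skeleton only: driving a COM object from Perl via Win32::OLE. The ProgID and
# method below are HYPOTHETICAL placeholders, not the actual F-Response object
# model - see the F-Response documentation for the real API.
use strict;
use Win32::OLE;

my $femc = Win32::OLE->new("FResponse.FEMC")       # hypothetical ProgID
    || die "Could not create COM object: ".Win32::OLE->LastError()."\n";

# Hypothetical method call, shown only to illustrate the Win32::OLE pattern:
# $femc->ConnectTo("hostname");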
Here's an analysis tip that I picked up based on a question posted to one of the forums...when posting a question about time stamp oddities (in particular) on Windows systems, it helps to specify which version of Windows you're working with. For example, if you have a last modified date that is three days after the last accessed date on a file, and the system is Vista, then you may have encountered an artifact that has to do with the fact that by default, Vista does not update last access times on files. If the system is XP or Win2K3, you may want to check the NtfsDisableLastAccessUpdate value. Even variations within the same version of Windows (XP Professional vs. Home) can be important...and yes, you can use RegRipper to get this information for you.
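For reference, checking that value with the same Parse::Win32Registry module that RegRipper is built on looks something like this...I'm assuming ControlSet001 for brevity, where a real plugin would resolve the current ControlSet from the Select key:

# Sketch: check NtfsDisableLastAccessUpdate in a System hive with
# Parse::Win32Registry. ControlSet001 is assumed for brevity; a real plugin
# resolves CurrentControlSet via the Select key.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: lastaccess.pl <System hive>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Not a Registry hive: $hive\n";
my $root = $reg->get_root_key();

if (my $key = $root->get_subkey("ControlSet001\\Control\\FileSystem")) {
    if (my $val = $key->get_value("NtfsDisableLastAccessUpdate")) {
        printf "NtfsDisableLastAccessUpdate = %d\n", $val->get_data();
    }
    else {
        print "NtfsDisableLastAccessUpdate value not found.\n";
    }
}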
Lee posted an excellent write-up regarding issues with MSOffice document metadata, along with an interesting example that many of us have likely run across. This is an issue that's been around for some time, and even though MS has updated the application itself, this is still something that can be very important during an examination, as Chris found out recently.
Are you tired of using RegRipper? Mark Woan has created RegExtract, which, according to Mark, is based completely on RegRipper and implements most of the plugins that are currently publicly available with RegRipper (I currently have 118, a few of which are proof-of-concept and for testing purposes only...). I downloaded and installed RegExtract and took a look at it. It requires .NET 3.5, and it only runs on Windows (unless you can get .NET 3.5 installed on wine), but if you need a better-looking UI and perhaps slightly faster processing, you should take a look at this. A couple of things I found with RegExtract are that while it can run all plugins designed for the NTUSER.DAT hive (or any other hive type) against that hive file, it doesn't appear to be able to automagically determine the hive type, or allow you to run only specific plugins in a predetermined order against the hive file. The results of the plugins are sent to a text field in the UI, and the user doesn't appear to have the ability...at least not obviously...to save the results to a file for inclusion in a report (directly into the report, or in an appendix). You can, however, select what you want from the text field, copy it to the Clipboard, and then paste it into a report or text file.
Finally, something else to consider...the plugins are compiled DLLs, and are not open. RegRipper's plugins are essentially text files that can be opened in Notepad or an editor of your choice (much like Nessus's plugins), so if you ever have any questions about what the plugin does or why it doesn't work, you can just open the plugin up and have a look-see.
Speaking of RegRipper, Rob posted the results of his updated research on USB devices to the SANS Forensic blog yesterday. This is excellent work on Rob's part, really taking the original research that Cory and I did lo those many years ago to new levels. Drop by and check out the handouts Rob's put together. Based on an IM exchange with Rob, I've updated mountdev.pl a bit to pull out the sector offset information for the partitions, so that there's less confusion when you see two volumes or drive letters with the same disk signature or MBR disk identity (they're the same thing...the 4 bytes found at offset 0x01B8 in the MBR). The plugin now returns the same information you see when you open the image in FTK Imager and look at the start sector for the partition. With this new information in place, I need to clean up the plugin a bit to make the output easier to understand.
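For the curious, the 12-byte value data for a \DosDevices entry on a basic disk is simply the 4-byte disk signature followed by an 8-byte partition offset in bytes, so the start sector falls out pretty easily...a quick sketch:

# Sketch: split a 12-byte MountedDevices \DosDevices value into the disk
# signature and the partition's starting sector (byte offset / 512). The 64-bit
# offset is unpacked as two 32-bit halves so 64-bit Perl isn't required.
use strict;
my $data = "\x12\x34\x56\x78" . "\x00\x7E\x00\x00\x00\x00\x00\x00";   # stand-in value data
if (length($data) == 12) {
    my ($sig, $lo, $hi) = unpack("V V V", $data);
    my $sector = ($hi * 4294967296 + $lo) / 512;
    printf "Disk Signature: 0x%08X  Start Sector: %d\n", $sig, $sector;   # prints sector 63
}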
Finally, Matthieu's been busy, it seems, working away on win32dd (now referred to as "windd") and looking for beta testers (ignore the "2008" in the changelog info...it's 2009). Take a look!
Wednesday, September 09, 2009
Thoughts on Tool Verification
A recent thread over on ForensicFocus got me to thinking about the issue of tool verification. In that thread, there is discussion of verifying the results produced by RegRipper. Chris picked up on this and posted to his blog, as well.
First off, I'm a proponent of verification, but I also think that we, as a community and a profession, shouldn't be blindly running on autopilot when it comes to tool verification. Specifically, what is "tool verification"?
One way to look at this is that you'd like to verify that a tool does what it's supposed to do correctly. For example, if one tool (ie, AccessData's FTK Imager) generates an MD5 hash of an acquired image, you might want to verify that the algorithm used to generate that hash was implemented correctly, so you'd use another tool (one that you trust as being correct...perhaps Jesse Kornblum's md5deep) to confirm that the MD5 hash is, indeed, correctly implemented. The implication here is that if both tools produce the same output, then the implementation employed by FTK Imager is correct and verified.
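As a concrete example of that kind of check, hashing the image yourself from Perl takes only a few lines; if Digest::MD5, md5deep, and FTK Imager all report the same hash, you have your verification:

# Quick check: compute the MD5 of an image file to compare against the hash
# reported by FTK Imager or md5deep.
use strict;
use Digest::MD5;

my $file = shift || die "Usage: hashcheck.pl <image file>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
print Digest::MD5->new->addfile($fh)->hexdigest, "  $file\n";
close($fh);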
A similar way to look at this is with respect to a tool such as RegRipper, but how would you verify the results? For one, what other tools do what RegRipper does...besides RegExtract, which is, according to the author, based on RegRipper?
Well, similar to Nessus, RegRipper implements its actions based on plugins that are text-based. So if you want to determine what a plugin does (ie, what keys, values or data it extracts, and how it extracts or parses that data), all you need to do is open the plugin in Notepad or an editor. I know that a lot of folks will look at the Perl code and slip into a coma, but for the most part, the data extraction and parsing logic is right there...in some cases, it's easy to see: read the binary data into a variable/Perl scalar, go to this offset within the data, read this many bytes, and translate it based on this table of values. Easy.
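In code, that extract-and-translate pattern looks something like this (the offset and the translation table here are purely illustrative):

# Purely illustrative: the extract-and-translate pattern most plugins follow.
# The offset and the translation table are made up for the example.
use strict;
my %status = (0 => "Enabled", 1 => "Disabled");
my $data   = "\x00" x 16 . "\x01\x00\x00\x00";              # stand-in for the raw value data
my ($flag) = unpack("V", substr($data, 0x10, 4));           # 4 bytes at an illustrative offset
printf "Status: %s\n", exists $status{$flag} ? $status{$flag} : sprintf("Unknown (0x%x)", $flag);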
If you're not sure that what a plugin does is correct, I try to make sure that I include either descriptions in the comments of the plugin (lines that start with "#") or references to credible sources about the structure or use of the key, value, or data being parsed. In most cases, much like WFA 2/e, these references consist of MS KB articles. If there's some question about the data and how it is or should be interpreted...well, that's what I provide the references for, and as many of you know, I've tried to answer more than a few questions along those lines.
So, how do you verify what a tool like RegRipper does? One way is to use another tool that does what RegRipper does...but outside of RegExtract, there are no other tools that do what RegRipper does. You can open the hive file in a Registry viewer...some of the plugins present information in a manner similar to the way a viewer would, and you can always write your own plugins to do this. But what about parsing/interpreting binary data? Or "decrypting" ROT-13 value names? At that point, the only way to verify what RegRipper does is to sit down with the hive file open in a hex editor and do it all by hand.
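For what it's worth, the ROT-13 "decryption" in question amounts to a single Perl statement, which makes it easy to check by hand against a hex editor:

# ROT-13 "decode" of a value name in one statement.
use strict;
my $name = "HRZR_EHACNGU";                    # ROT-13 encoded example
(my $decoded = $name) =~ tr/A-Za-z/N-ZA-Mn-za-m/;
print "$name -> $decoded\n";                  # prints HRZR_EHACNGU -> UEME_RUNPATH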
Again, I'm not saying that tool verification isn't important...because it is. But so is understanding what a tool does. Blindly saying that I need another tool to verify my findings is really pointless unless I understand what both tools do, and how they do it. I mean, the same holds true in other areas, such as live response. If I run two tools that both use the same API calls to show me the active process list or the network connections, have I verified anything? No, I haven't. However, if I run two tools that use disparate APIs (and don't converge until somewhere very low in the API stack), then I have verified the tools and the findings...but only to the point where they ultimately converge somewhere down the API stack.
So, before discussing tool verification, we need to have an understanding of the purpose and use of the tool, as well as an understanding of how the tool functions.
Tuesday, September 08, 2009
RegRipper Wishlist
I posted something similar to the RegRipper forums, but I'll do the same here...I'm looking to consolidate a RegRipper wishlist, so I can see what folks are looking for in RegRipper, see what is most/least looked for, and then prioritize the creation/update of features.
So...please, comment here, or email me. Thanks!
Tools for mounting images
A while ago, I posted on mounting DD images, and I wanted to provide an updated list of some of the tools that you can use to do just that on a Windows system.
When would you need to use such tools? I like to use tools such as these because, in most cases, you can do your analysis whilst goin' commando (sans dongle, as it were), and in many cases, do a great deal more deep analysis than you could using one of the commercial forensic analysis suites. In most cases, it's as simple as mounting the image and using your tools, many of which are CLI and can be run via a batch file. Mounting the image gives you access to most (be cognizant of permissions issues) of the files in the file system without that system being live and without requiring a password to log into the system (as with LiveView), simply because you're not actually booting the system...you're just reading the file structure.
I won't go into any particular detail about these tools; I simply want to identify those that are available (though a quick ImDisk example follows the first list).
VDKWin (free) - Excellent UI for VDK.
ImDisk (free) - installs as a Control Panel applet
SMARTMount (pay) - Andy Rosen's superb mounting utility; requires a dongle, mounts raw, SMART, EWF, SAW, VMWare virtual disk format images, and detects a wide variety of file systems.
P2Explorer (free, requires registration) - Lots of cool features, mounts a variety of formats.
Captain Nemo (pay) - Mounts raw and RAID Reconstructor images from Linux, MS, and Novell systems.
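As a quick example, mounting a raw image read-only with ImDisk looks something like this (switches from memory...run imdisk with no arguments to confirm the usage on your version):

C:\>imdisk -a -f D:\cases\image.dd -m X: -o ro
C:\>dir X:\
...and when you're done...
C:\>imdisk -d -m X: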
Other similar tools that may be of use:
MKS Software's mount utility
MS's Virtual CD-ROM drive from XP (1, 2, 3)
Mounting ISO images on Vista/Win7
WinCDEmu - mount ISO images
For those of you doing live response, the guys over at CommandLineKungFu posted a great blog on determining information about mounted drives and shares. Some of the tools I've written and provided on the DVD that accompanies WFA 2/e implement functionality similar to the wmic commands that Ed posted, but in most cases go just a bit further.
Stuff
Been out of pocket, enjoying my temporary unemployment for a while, and now I'm back in the game...
Richard Bejtlich posted his review of WFA 2/e on Amazon...thanks, RB! It's validating to see luminaries in the security industry picking up, reading, and commenting on books like mine, particularly because Richard is so well known for NSM, something my book doesn't really cover.
Thanks to a recent comment to one of my blog posts, I was led to the Eating Security blog...always nice to get different perspectives on topics. I like some of the recent posts about IR teams and documentation, as well as the "does it make sense" test. I've done considerable work over the years with respect to CSIRP development and IR team testing, and I like to see how others approach the problem. While each customer brings a unique perspective to the problem through their own infrastructure and team organization, it's always interesting to see the thoughts that others have...most often due to the fact that developing a new IR team with a customer doesn't allow for a great deal of feedback beyond, "wow...okay...this is new."
Speaking of CSIRP development, here's an interesting blog post that I picked up a while back. Interesting how information on "crisis management" so easily maps to a CSIRP.
The guys over at cmdLabs had a nice post on SQLite for forensics nerds that has some good information...something to read, maybe play around with a bit, and bookmark.
Claus had an excellent forensic roundup post recently...he really can cover a LOT of stuff. I'm still sifting through this one. Claus and I read each other's blogs pretty regularly, so you see not only crosslinks but comments, as well...but Claus never ceases to amaze me with the breadth of useful information he's able to come up with. He's got a couple of links for pcap visualization that I need to take a look at.
All in all, not a bad catch this time around.
Addendum: Just a bit later, I ran across a couple of interesting blogs, and rather than wait to post another blog entry, I thought I'd add them here...
ForensicInnovations had an interesting blog, and the post that initially got my attention was Push for Live Forensics. I started into the post expecting yet another argument for developing a live response process, and got some of that...but the post ends with an announcement for a data profiler product that appears to provide a list of various file types found on the system. I'm not entirely sure how I'd use such a product, as rather than "show me all of the file types" I might take the approach of just "show me all graphics images", or something a bit more directed (based on the goals of my response).
I also found Simple Techniques that Fool Forensics Tools in the archive, as well as a couple of other posts. There's mention of NTFS Alternate Data Streams in this post, something I've been writing about for years, and oddly enough, is still something that is an effective hiding place for data, as many sysadmins and forensic analysts still do not understand how ADSs are used by the OS, let alone by the bad guys.
Also, I found a link to this CyberCrime blog that has some interesting stuff to read.
RegRipper used to validate Neeris infection...with lots of other good stuff in the post, as well...a good deal of which I use in creating timelines for analysis...
Wednesday, September 02, 2009
More Links
I found another review of WFA 2/e this morning, this one on the ISC^2 web site, from Jesse Lands...thanks! I greatly appreciate when folks take the time to let me know what they thought (what they liked or disliked) about the book. To whomever...Thanks! This is in addition to the 14 reviews on the Amazon page for WFA 2/e...
Seems the folks at SpiderLabs will be putting on a Malware Freakshow presentation at a conference in Toronto coming up soon. Also, it sounds as if the presentation is going to be pretty interesting...take a look at this excerpt from the DarkReading article:
In a memory-dumping attack, the attacker reads the unencrypted transaction or other information that sits in memory before it goes to the actual application. The hotel attack included several pieces of malware, including code that dumps the contents of the memory onto the attacker's machine, and another that performs data parsing. "One piece installs itself as a service so the malware can come back when it needs to boot up," Ilyas says.
I'd run into the same malware myself during an exam, and one of our team members had seen an earlier version a year or so prior to that. This shows how pervasive this stuff can get! In the instances I'm aware of, the intruder gained access to the internal network and then gained domain admin access...most often due to weak passwords...and then targeted very specific systems and installed the software mentioned in the article.
Also, if you're going to the conference, be sure to check out Chris Pogue's Sniper Forensics presentation.
On the topic of visualization, check out the VizSec2009 conference. This looks really interesting, as visualizing malware program flow, for example, can be extremely helpful. One of the things I've been struggling with is how to graphically represent timeline data so that an analyst can use it to determine what happened and when; there's just potentially so much data that I've been having a very difficult time coming up with some way to represent what happened. Right now, it's a manual process to sift through the data looking for the smoking gun(s), and representing the findings can be done with a table or an Excel spreadsheet.
It looks like I'll be attending HackerHalted in Miami later this month, and Syngress is not only a sponsor but will have books available. I'll try to be available to sign books...if I'm not at the booth, just grab me (beer's always welcome!!).
And finally, for the VMI alumni in the readership, I picked up follower number 89 today!