Thursday, February 24, 2011

Webcheck.dll

I saw via Twitter today that a new post had gone up on the TrustWave SpiderLabs Anterior blog, regarding some malware that the TW folks (by that, I mean Chris Pogue) had detected during some engagements.

I think it's great when analysts and organizations share this kind of information, so that the rest of us can see what others are seeing.  So, a big thanks goes out to TrustWave, and the next time you see Chris at a conference, be sure to say hi and buy him a beer...or better yet, treat him to some bread pudding!


What I'd like to do is take a moment to go through the post and discuss some things that might add something of a different perspective or view to the issue.

As you can see from the post, Chris uses timeline analysis to locate the malware in question, and he's got some really good information in the post about creating the timeline for analysis (Chris uses the log2timeline tools).  I'm sure that there's quite a bit about the engagement and the analysis that wasn't mentioned in the post, as Chris jumps right to the target date within his timeline and locates the malware.

I like the fact that Chris uses multiple analysis techniques to corroborate and check his findings.  For example, in the post, Chris mentions looking at the file's $STANDARD_INFORMATION and $FILE_NAME attributes in the MFT, and confirming that there were no indications of "time stomping" going on.  This is a great example that demonstrates that anti-forensics techniques target the analyst and their training, and that a knowledgeable analyst isn't slowed down by these techniques.  I think that the post also demonstrates how timelines can be used to add context to what you're looking at, as well as increase the level of confidence that the analyst has in that data.

One of the things that kind of struck me as odd in the post is that there's mention of the "regedit/1" entry in the RunMRU key, and then the post jumps right to discussing the InProcServer32 key, based on the timeline.  The RunMRU information (ie, key LastWrite time) is from a user's hive, so another key of interest to check might be the following:

Software\Microsoft\Windows\CurrentVersion\Applets\Regedit

As you'd think, this key contains information about the key that had focus when the user closed RegEdit.  Specifically, the LastKey value (mentioned in MS KB 244004) contains the name of that key.  This value might be used to make a bit of a transition between the RunMRU data and the changes to the InProcServer32 key that's mentioned in the post, and possibly provide insight into how the malware was actually deployed on the system.
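If you want to check this quickly against an NTUSER.DAT extracted from an image, here's a minimal sketch using Parse::Win32Registry, the same Perl module RegRipper is built on (the hive path is passed as the first argument):

use strict;
use warnings;
use Parse::Win32Registry qw(iso8601);

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Could not open hive file\n";
my $root = $reg->get_root_key;
my $key  = $root->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Applets\\Regedit')
           or die "Regedit applet key not found\n";
print "LastWrite: ".iso8601($key->get_timestamp)."\n";
if (my $val = $key->get_value('LastKey')) {
    # LastKey holds the key that had focus when RegEdit was last closed
    print "LastKey  : ".$val->get_data."\n";
}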

As Chris points out in the post, the value type being changed from "REG_SZ" (string value) to "REG_EXPAND_SZ" does allow for the use of unexpanded references to environment variables, such as %SystemRoot%.  One statement that I don't really follow is:

So now the threading for webcheck.dll is no longer pointing to the legitimate file, but to the malware!

The threading model listed doesn't have anything to do with the path...I'm going to have to reach out to Chris and find out what he was referring to in that statement.  He follows that up with this statement later in the post:

So not only did the attackers use a legitimate threading, but they made sure to use a shell extension that was trusted by Windows.

Again, I'm not clear on the "threading" part of that statement, but Chris is quite correct about the shell extension issue.  Basically, the Windows shell (Explorer.exe), which is launched when a user logs into the system, will load the approved shell extensions, which includes this particular malware.  Trust seems to be implicit, as there are no checks run when Explorer goes to load a shell extension DLL.  This is a bit different from the shell extension issue I'd mentioned last August, in part because it doesn't use the DLL Search Order issue.  Instead, it simply points Explorer directly to the malware through the use of an explicit path.
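As a side note, if you want to eyeball the approved shell extensions in an acquired image, a quick sketch along the following lines (again Parse::Win32Registry, this time run against the Software hive) would list each approved CLSID along with the DLL its InProcServer32 key points to...an explicit path to something outside of system32 would jump right out.  This is a sketch of the idea, not a polished plugin:

use strict;
use warnings;
use Parse::Win32Registry;

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Could not open Software hive\n";
my $root = $reg->get_root_key;
my $appr = $root->get_subkey('Microsoft\\Windows\\CurrentVersion\\Shell Extensions\\Approved')
           or die "Approved key not found\n";
foreach my $val ($appr->get_list_of_values) {
    my $clsid = $val->get_name;
    # look up the DLL that Explorer will load for this extension
    my $srv = $root->get_subkey('Classes\\CLSID\\'.$clsid.'\\InProcServer32') or next;
    my $dll = $srv->get_value('');    # the default value holds the DLL path
    printf "%-40s %s\n", $clsid, (defined $dll) ? $dll->get_data : "";
}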

All in all, I'm glad to see Chris and the TrustWave folks sharing this kind of thing with the community.  I do think that there's more that isn't being said, like how the malware actually got on the system (ie, how it was deployed), but hey, we all know that there are some things that can't be said about engagements.  And that's okay.

Tuesday, February 22, 2011

Links

WFA Translations
Thanks to the great folks at Elsevier and BJ Public, Windows Forensic Analysis 2/e is now available in Korean!  Very cool!  A copy of this goes on my shelf right next to the French translation, as well as the Chinese translation of WFA 1/e.  I have to say, this is very, very cool...to look up at my bookshelf and see that someone thought enough of something I wrote to translate it.  I have no idea what it says, but it's cool, just the same!

Security Ripcord
After a lengthy absence, Don is finally back blogging again...and he returns with a bang!  Don's latest post, It will never be too expensive, takes a shot at the statement made by security "professionals" and "experts" to their customers for years...that one should employ/deploy enough "security" to make it too expensive for an attacker to get in, and they'll go away.  It's pretty clear from Don's comments and reasoning that this statement has been made over and over for years, without folks really thinking about what's being said, or how the threat itself has evolved.  This statement may have been true in the early days of the Internet, but the Internet and cybercrime have changed and developed...apparently, though, the thinking behind the statement hasn't.  Don does a great job of pointing this out, and with all the talk about advanced persistent and subversive multi-vector threats, doesn't it stand to reason that the really dedicated folks are just gonna laugh at your expensive security as they blow right on through it?  Much like he did when he was in the Corps, Don takes a shot at this statement and hits it squarely, center of mass.

Sniper Forensics
Speaking of snipers, Part 3 of Chris's Sniper Forensics series has been posted to the SpiderLabs Anterior blog, be sure to check it out.  I think that one of the biggest things that Chris points out in this segment is defining the target...with respect to IR, why are you there?  What are you there to do?  When it comes to digital forensic analysis, the same thing applies; if someone were to ship me a drive (or image), with clearly defined goals of what they were looking for, I might estimate an analysis time of 24 hrs or less.  However, if you were to ship me a drive and say, "...find all bad stuff...", I could spend several weeks looking and never find what you define as "bad"...in part, because you never defined it.  I've seen analysts leave a site with a drive, having been told to find "bad stuff"; some initial searching found clear indications of the existence and use of "hacker" tools.  However, upon reporting that to the customer, the analyst was told that the former employee's job was to hack stuff, so the existence of hacking tools wasn't "bad".  Hey, who knew.

Defining the goals of an engagement, and the parameters for success, is critical to that engagement.  Having a contract that just says "do analysis" without any defined goals of that analysis leaves the analyst or the team with...what?  Once the hours in the contract have been consumed, a report is delivered to the customer, who then says, "no, this isn't what we wanted."

This isn't just an important point for consultants, it's equally important (or perhaps even more so) for the customer.  If you're seeking IR or DF services, ensure that you clearly define what you're looking for, and ensure that this is stated clearly in the contract before you sign it.

MFT Slack
Lance posted an Enscript recently that was written to extract the slack space from MFT records.  This isn't something that I've had a need to do before, but it is interesting and worth taking a look at.  I'm not entirely sure what context you can get from items found in MFT slack, as MFT records are 1024 bytes long; however, Lance thought enough of this to write an Enscript for it, so there must be something there!
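I don't have Lance's EnScript, but the idea itself is straightforward; here's a minimal Perl sketch of it, assuming 1024-byte FILE records and reading each record's actual (used) size from the DWORD at offset 0x18 of the record header...everything between the used size and the end of the record is slack:

use strict;
use warnings;

open(my $mft, '<', shift @ARGV) or die "Could not open \$MFT: $!\n";
binmode($mft);
my $num = 0;
while (read($mft, my $rec, 1024) == 1024) {
    if (substr($rec, 0, 4) eq 'FILE') {
        my $used = unpack('V', substr($rec, 0x18, 4));  # actual size of this record
        if ($used > 0x38 && $used < 1024) {
            my $slack = substr($rec, $used);            # bytes past the used portion
            print "Record $num: ".(1024 - $used)." bytes of slack\n";
            # search $slack for strings, old attribute data, etc.
        }
    }
    $num++;
}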

On that note, I recently updated the online repository for the Windows Registry Forensics tools, to include Jolanta's regslack tool for locating deleted keys and values in hive files.  For usage and examples of how this tool has been used, check out the book.

Thursday, February 17, 2011

Links

WRF Book Reviews
Thanks to everyone who's purchased a copy of Windows Registry Forensics, and in particular to those who've taken the opportunity to post their thoughts, or more formally write a review.  Paul Sanderson (@sandersonforens on Twitter), from across the pond, posted a review of the book to his blog recently.  Paul has the Kindle version of the book, as that's all that's available over there at the moment.

Speaking of the book, it seems to be doing well, if the Amazon ranking is any indicator.  There are a couple of things to point out about this book that might be helpful in understanding why it could be useful for you.

First, this is the first book of its kind.  Ever.

Seriously...there are no books out there that come at the topic from the perspective of a forensic analyst.  There are a number of books that include discussion of some Registry keys that are of use to an analyst, but none that discusses the Registry at the depth and breadth of WRF.  My primary motivation for writing the book was to fill that gap, because I really feel that the Registry is a critical and too-often-overlooked source of forensic artifacts on a system.

Second, the book makes use of primarily free and open-source tools.  This can be very important to smaller organizations, such as LE, professors and students at local community colleges, and even just to those learning or looking to develop new skills, as it puts the described capability within reach.  I know that not everyone can (or wants to) learn Perl, but the tools are there...and to make it even easier, many of the tools I've written have been provided as standalone EXEs (compiled with Perl2Exe).

SPoE
I was bopping around the TwitterVerse recently and ran across a link to Mike Murr's blog, and from there found a two-year-old post on the Single Piece of Evidence (SPoE) myth.  Mike's always been good for some well-thought-out opinions and comments, and this one is no different.  What he mentions with regards to anti-forensics techniques is similar to the reactions surrounding the release of XP and then Windows 7..."OMG!  What is this going to do to my investigations!"  The fact is that, while each brings its own nuances and inherent anti-forensics, each also brings with it a whole bunch of new artifacts.  As Mike points out, when a user interacts with a Windows system, there are a lot of artifacts...and the same is true, albeit to a somewhat lesser degree, for malware, as well.

Hacking
If you've been watching the news lately, you're probably aware that no one seems to be immune to being compromised.  Even small boroughs in PA are susceptible.  The linked article doesn't have a great deal of supporting information (no mention of the use of online banking, whether the bank saw a legit login from a different IP address, what the independent consultant determined, etc.).  In the past, a lot of these types of issues have been attributed to the Zeus bot, and in some cases, there haven't been any indications of Zeus on what were thought to be the affected systems.

Mayor Mowen was just being a mayor when he said, "You guard against it by increasing your firewall, which is what we are in the process of doing."

I bring this up, because there's been traffic on Twitter recently with respect to the RSA conference, and the definition of "cyberwar", or more specifically, just "war".  I'd suggest that any action that forces your adversary to turn away from his main objective (attacking, etc.) and focus resources elsewhere, even if this tactic is used purely for an attrition effect, should be considered part of "war", declared or otherwise.

Analysis
F-Secure recently posted this analysis of an MBR infector.  The analysis includes a good number of hex views, listings of code in assembly language, and lots of detail.  It would be a good idea to read through this carefully and take note of what's said.  For example, this baddy infects the MBR, and appears to make a copy (like Mebroot, here, and here) of the original MBR, placing it in sector 5.

The analysis includes the following:
Why an MBR File System Infector? Probably because it can bypass Windows File Protection (WFP). As WFP is running in protected mode, any WFP-protected file will be restored immediately if the file is replaced.


I'm not entirely sure that this is a good line of reasoning in and of itself, in part because the bad guys have done a good job at shutting WFP off for a minute, and installing their malware.  WFP is not a polling service, so when it wakes back up, it has no way of knowing that a protected file was modified or replaced. However, it appears from the write-up that the malware infects userinit.exe at startup, possibly prior to WFP starting up, which may be the entire reason for infecting the MBR first.


I'm curious as to which version of Windows this malware was executed on, as the version of userinit.exe on my XP system, as well as on a Windows XP VM I have, is 26KB in size, slightly larger than what's shown in the write-up.  However, the write-up does provide a couple of good tips that an analyst can use to detect the presence of this malware.


If you're a responder or forensic analyst, and this one is tough for you to read and follow, keep something in mind that Cory posted to Twitter today (17 Feb), which included (in part), "...you shouldn't rely on malware fetishists for incident response advice."  Excellent point.

FakeAV
Another interesting analysis popped up...thanks to Ken for posting his analysis of a FakeAV bit of malware.  In his write-up, Ken does a pretty thorough job of documenting what he did, how he did it, and what he found.  For example, one of the Registry keys that was modified is:

HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Associations

According to Ken, the "LowRiskFileTypes" value was added with the data ".exe".  This is interesting, but how does it apply to the malware infection?  MS KB article 883260 provides some hints to this.  This same KB article provides a hint about why the following key might also be updated:

HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments

In Ken's write-up, this key had the "SaveZoneInformation" value added, with "1" as the data.  Remember how back with Windows XP SP2, files downloaded via Outlook or IE had a "zoneid" Alternate Data Stream (ADS) attached to the file (this is described on pg. 314 of WFA 2/e)?  When analyzing a system with ADS on it, it's usually pretty easy to see them at a glance, as some of the commercial forensic analysis applications list them in red.  However, this setting appears to tell the system to not create the ADS.

Finally, Ken mentions that there was a change to the following key:

HKCU\Software\Microsoft\Internet Explorer\Download

In this case, the CheckExeSignatures value was changed from 'yes' to 'no'.  TechNet has some information about the value (maybe...the path is a little different, using "Main" in the key path, rather than "Download") and what effect it may have on the system. Other malware seems to modify the same set of keys, as seen here, and with this FakeAV write-up from Sophos.

There are a couple of interesting take-aways from this write-up, I think.  First of all, it's great that Ken took the time not only to run this analysis, but to share it with others.  I think one of the biggest mistakes analysts make when it comes to sharing and collaboration is assuming that everyone else has already seen everything.  I tried to shoot this one down at the WACCI conference last year, where I actually met Ken.  I don't see many FakeAV issues at all, to be honest.

Another take-away is that publicly available information provided by the vendor is utilized to take some steps towards not only allowing the malware to run, but also towards anti-forensics.  Think about it...by telling the system to not create the ADS for downloaded files, that removes one of the indicators that analysts could use to identify the malware's propagation mechanism.

Finally, this clearly demonstrates that there are steps analysts can use to detect malware on a system beyond running an AV scanner.  For example, there are a number of Registry keys that Ken identified, as well as what's listed in other write-ups, that could be used to detect the potential presence of malware, even if the malware itself is not detected by AV.  The rather odd value in the user's Run key is something of a give-away, but RegRipper already has a plugin that dumps the contents of that key.  Now we have other keys that can be used with RegRipper or a forensic scanner to comb through the appropriate hives and check for the possible existence of malware.
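As a rough illustration of what such a check might look like (a sketch, not a polished RegRipper plugin; the key/value pairs are the ones from Ken's write-up), run against an extracted NTUSER.DAT:

use strict;
use warnings;
use Parse::Win32Registry qw(iso8601);

my %checks = ('Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Associations' => 'LowRiskFileTypes',
              'Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Attachments'  => 'SaveZoneInformation',
              'Software\\Microsoft\\Internet Explorer\\Download'                     => 'CheckExeSignatures');

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Could not open hive file\n";
my $root = $reg->get_root_key;
foreach my $path (sort keys %checks) {
    my $key = $root->get_subkey($path) or next;
    if (my $val = $key->get_value($checks{$path})) {
        # the mere presence of these values is worth a closer look
        print "ALERT: ".$path."\\".$checks{$path}." = ".$val->get_data."\n";
        print "       Key LastWrite: ".iso8601($key->get_timestamp)."\n";
    }
}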

IR
Last, but certainly not least, Chris posted today regarding IR activities; in particular, using a batch file to dump Registry hives from a live system for analysis.  Chris actually posted the batch file he used...take a look.  This one shows that Chris's command line kung fu is very good!

Monday, February 14, 2011

Links, Tools and Stuff

PDF Stream Dumper
From over at RE Corner comes the PDF Stream Dumper tool; actually, this one has been out for some time now.  This tool was written in VB6, and comes with a number of automation scripts.  Swing on by Lenny's blog for some great examples of how to use it, or check out this KernelMode page for some other examples of the dumper being used.

If you're not too put off by CLI tools, you might consider using this in conjunction with Didier's PDF tools.  Didier's stuff is also in use by VirusTotal.  That's not to say that one's better to use than the other...it's good to have both available.

While we're on the subject of document metadata, it's a good idea to mention that Kristinn Gudjonsson, creator of log2timeline, also created the read_open_xml.pl Perl script for extracting metadata from MS Word 2007 documents (use and output described at the SANS Forensic Blog).

TechRadar
There's an interesting article up on TechRadar about how to perform a forensic PC investigation, and it references using OSForensics, available from PassMark Software.  I have to say, I'm a bit concerned about articles like this, even when they suggest early in the article that performing the actions described in the article can be "a little morally dubious".

The beta of OSForensics was recently made available for a limited time, for free.  However, that offer was originally made as "LE only", but seems to have changed recently.

OSForensics
It looks like the folks at PassMark Software removed the LE-only restriction for downloading the OSForensics beta, so I downloaded the 32-bit version to my XP system this morning.

After installing OSForensics and looking around (I noticed the nice icons and graphics), I created a new case, and then began looking for a way to load a test image into the tool.  I didn't have much luck, so I went immediately to the Help, which is provided online, in HTML format.  I went through the index and found the word "Image", and from there found this:

In many cases it may be desirable to work with data from a disk image rather than the physical disk itself. Whilst OSForensics does not deal with disk images directly itself Passmark provides a set of free external tools in order to support working with disk images.

So, it appears that OSForensics is not intended for dead-box/post-mortem analysis.  Some of the available tools, such as System Information and Memory Viewer, pertain to the system on which OSForensics is running.  PassMark does offer the OSMount program, which allows you to mount a raw/dd image as a drive letter, and from there you can use OSForensics in the intended fashion.  As such, I'd guess that there'd be no issues using any of the various other mounting techniques and tools, including accessing VSCs.

Of all of the functionality, the one that really jumps out is the hash set comparison tools.  PassMark provides a number of hash sets for known-good OS files at their download site; however, as with any similar functionality based on hash sets, I can easily see how this can become cumbersome very quickly.  You either scan for all of the hashes, or you run into issues with analysts deciding which hash sets to run, and (more importantly) documenting those that they do run.

OSForensics also provides string and file name search functionality, logging of activity, and the ability to install OSForensics to a USB drive.  I'm sure that this tool will be useful to examiners; for my own uses, however, it simply does not provide enough of the core functionality that I tend to use during my examinations. As a test, I mounted a test image as a read-only F:\ drive and opened OSForensics, and I have to say, moving through the interface wasn't the most intuitive or easy experience.  However, I may be somewhat biased, given my experience and usual work processes.

No Alternative
Eric's got a rather insightful post over at the AFoD blog.  More and more folks are getting into the cell phone and smart phone market, and those little buggers are really very powerful when you take a look at them.  They also tend to contain more and more storage space.  Of course, we need to keep in mind that the tablet market is still there in that space between the smart phone and the laptop, as well.

I can see where Eric's going with the post, but I have to say from the private/corporate perspective, this isn't such a huge issue.  I would expect that if it ever does become an issue, it'll be an emergency (for legal/compliance purposes) and a one-off, not something that gets done on a regular basis, with the cost of applications and training being amortized across multiple customers.  However, from a public perspective, I can definitely see how this is going to be more and more of an issue...after all, how "gangsta" can you really be lugging around a Dell Latitude laptop?

Reading/Education
There are some great new resources over at the e-Evidence site, including stuff about MacOSX artifacts, iPhone and smart devices, Windows artifacts, etc.  This site is always a great place to go and find lots of new and interesting stuff.

Network and Wireless
A question popped up on a list this morning regarding wireless assessments and tools.  The original question asked about an alternative to NetStumbler that supported a specific NIC, and the first response was ViStumbler.  ViStumbler is open-source and was originally written for Vista, but apparently runs on Windows 7, as well.

If you're doing any network forensics, you might also consider NetworkMiner as a viable resource, and something to add to your toolkit right alongside Wireshark.

Tool Sites
ForensicCtrl had a listing of free computer forensics tools available.
List of Windows open source tools
Check out the Collaborative RCE Tools library for a wide range of tools.

Tuesday, February 08, 2011

Carving

I was looking at a Windows 2003 system, and found that I was somewhat short on Event Log entries, with respect to the incident window. As I looked and used my evtrpt.pl Perl script to get statistics on the Sec, Sys, and App Event Logs, I noticed that the Sec and Sys Event Logs only contained a few days of event records. The Application Event Log actually went back a while past the incident window. I looked a bit closer at the output of evtrpt.pl and noticed that the Security Event Log had an event ID 517 record, indicating that the Event Log had been cleared.

So the first thing I did was run TSK blkls against the image to extract the unallocated space from the image file. I then ran MS's strings.exe (with the "-o" and "-n 4" switches), and then had two files to work with...the unallocated space, and a list of strings found in unallocated space, along with the offset of each string. So I then wrote a Perl script that would go through the strings output and find each line that contained "LfLe", the "magic number" for Windows 2000/XP/2003 event records.

With this list, the script would then run through the unallocated space by first going to the offset of the "LfLe" string, and backing up 4 bytes (DWORD). According to the well-documented event record structure, this DWORD should be the size of the record. As values can vary, and there is no one specific value that is correct, the way to check for a valid event record is to advance through unallocated space for the length provided by the DWORD, and the last DWORD in this blob should be the same as the size of the record. For example, if the initial size DWORD is 124 bytes, you should be able to advance through the file 120 bytes, and the next DWORD should also be 124.
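For those interested in rolling their own, the core of the validation loop might look something like the sketch below; it assumes each line of the strings.exe output starts with the decimal offset of the string, and the minimum/maximum record sizes are just sanity-check values I've chosen, not hard limits:

use strict;
use warnings;

my ($unalloc, $strings) = @ARGV;
open(my $data, '<', $unalloc) or die "$unalloc: $!\n";
binmode($data);
open(my $list, '<', $strings) or die "$strings: $!\n";
while (my $line = <$list>) {
    next unless ($line =~ m/^(\d+)[:\s].*LfLe/);
    my $ofs = $1 - 4;                            # the size DWORD sits 4 bytes before "LfLe"
    next if ($ofs < 0);
    seek($data, $ofs, 0);
    read($data, my $dword, 4) == 4 or next;
    my $size = unpack('V', $dword);
    next if ($size < 0x38 || $size > 0x10000);   # sanity check on the record size
    seek($data, $ofs + $size - 4, 0);
    read($data, my $end, 4) == 4 or next;
    if (unpack('V', $end) == $size) {            # closing DWORD matches...valid record
        seek($data, $ofs, 0);
        read($data, my $rec, $size);
        print "Valid event record, $size bytes, at offset $ofs\n";
        # write $rec out for parsing
    }
}
close($list);
close($data);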

Using this approach, I was able to extract over 330 deleted event records. I've used similar techniques in the past, to extract 100 bytes on either side of a keyword from the pagefile. This is an excellent way to gather additional information that you wouldn't normally be able to 'see' through most tools, as well as to look for and carve well-defined structures from unstructured data.

Tools and Stuff

RegRipper
Brett Shavers, who maintains the RegRipper site, has compiled an archive of new plugins and posted them for download. Brett's done a fantastic service for the DF community, in not only setting up the site for RegRipper, but maintaining it, and posting this archive of plugins. A huge thanks to Brett...and if you see him at a conference, be sure to buy him a beer!

As a side note, along with the release of Windows Registry Forensics, I had posted the DVD contents here, as well. The archive contains what's on the DVD, so while you can get it, it's really most helpful when used in conjunction with the book.

El Jefe
Over at the HolisticInfoSec blog, Russ shared a little El Jefe love recently. Russ says that El Jefe is a Windows-based process monitoring tool that "intercepts native Windows API process creation calls, allowing you to track, monitor, and correlate process creation events. " Very cool. The tool is in version 1.1 and is available from the good folks at Immunity, and runs on Windows 2000/XP through Windows 7, reportedly in both 32- and 64-bit versions. This looks like a great tool not only for dynamic malware analysis, but perhaps also for incident preparation. I mean, wouldn't you like to know what ran on a system?

Anti-Rootkit
I haven't been doing a lot of live box forensics/IR work, but I ran across the Tuluka kernel inspector recently, and it caught my eye. If you've read my books, you know that I've used GMER in the past. I can't say that I've really had issues with rootkits, and many times I just get to do "dead box" forensics, but this looks like another tool that folks may find useful.

NetworkMiner
Erik sent out an email recently to say that NetworkMiner had gone to version 1.0. Congrats to Erik and all the folks who've worked on or used NetworkMiner! NM is an excellent complement to other network data analysis tools such as Wireshark. Per Erik, some of the new features include:

* Support for the Per-Packet Information header (WTAP_ENCAP_PPI), as used by Kismet and sometimes by Wireshark for WiFi sniffing.
* Extraction of Facebook as well as Twitter messages into the Messages tab.  Added support to extract emails sent with Microsoft Hotmail (i.e., Windows Live) into the Messages tab.
* Extraction of Twitter passwords from when settings are changed.  Facebook user account names are also extracted (but not Facebook passwords).
* Extraction of the gmailchat parameter from cookies in order to identify users through their Google account logins.
* Protocol parser for Syslog.  Syslog messages are displayed on the Parameter tab.

Pretty cool stuff! Check it out, and be sure to check out the NM Wiki if you have any questions! Along with tools like Wireshark and NetWitness Investigator, NetworkMiner can be extremely useful for IR from a network perspective.

EvtxParser
Andreas has released v1.0.7 of his EvtxParser, a Perl-based approach for parsing Vista and Windows 7 Windows Event Log/EVTX files.

Highlighter
Mandiant has released a new version of Highlighter. Not much else to say, really...if you use this tool, take a look at the updates. I know several folks who find Highlighter to be very useful.

PointSec
More of a process than a tool, the folks over at Digital Forensic Solutions have posted to their blog about how to go about examining PointSec-encrypted drives. I can't say that I've had issues with encrypted drives...I've either had the admin boot the system and we'd image it live, or I acquired images of the drives with the customer knowing full well that the images would be encrypted (imaging job, no analysis). However, DFS's post provides some great information.

Java
Also not a tool, but really kind of cool...Corey's written up a nice post about some analysis he did that involved looking into the Java cache folder. Corey walks through identification of the issue, going so far as to demonstrate decompiling a Java .jar file. What I really like about Corey's posts is how complete they are, without giving away any case specific information. This isn't something that you see very often in the IR/DF community...but Corey clearly demonstrates how easy it is to do this and provide a valuable teaching moment. Great job, Corey...thanks!

Thursday, January 27, 2011

WRF book available!!

It seems that the Windows Registry Forensics book is available, as it was shipped to the DoD CyberCrime Conference. I'm looking forward to getting my copy!

If you have the Kindle edition of this book, and want the DVD contents, go here. Also, I've added a Books page to the blog, so check there in the future.

Addendum: Reviews!
Brad and Dave have been nice enough to post reviews of the book thus far! Thanks so much, guys...your efforts are greatly appreciated!

Now, they're both up on the Amazon page for the book, as well...

Speaking of reviews, but specific to WFA 2/e, Eric Huber posted this So You'd Like to...Learn Digital Forensics page on Amazon. In it, he says:

Harlan Carvey's Windows Forensic Analysis DVD Toolkit, Second Edition is the best book available on Windows digital forensics.

Thanks, Eric!!

Friday, January 21, 2011

New Tools and Links

ProDiscover
Chris Brown has updated ProDiscover to version 6.8. This may not interest a lot of folks, but if you haven't kept up with PD, you should consider taking a look.

If you go to the Resource Center, you'll find a couple of things. First off, there's a whitepaper that demonstrates how to use ProDiscover to access Volume Shadow Copies on live remote systems. There's also a webinar available that demonstrates this. Further down the page, ProDiscover Basic Edition (BE) v 6.8 is available for download...BE now incorporates the Registry, EventLog and Internet History viewers.

Chris also shared with me that PD v6.8 (not BE, of course) includes the following:

Added full support for Microsoft BitLocker protected disks on Vista and Windows 7. This means that users can add any BitLocker protected disk/image to a project and perform all investigative functions, provided that they have the BitLocker recovery key.

The image compare feature in the last update is very cool for getting the diffs on volume shadow copies.

Added support for the Linux Ext4 file system.

Added a Thumbs.db viewer.


These are just some of the capabilities he mentioned, and there are more updates to come in the future. Chris is really working hard to make ProDiscover a valuable resource.

MS Tool
Troy Larson reached out to me the other day to let me know that MS had released the beta of their Attack Surface Analyzer tool. I did some looking around with respect to this tool, and while there are a lot of 'retweets', there isn't much out there showing its use.

Okay, so here's what the tool does...you install the tool and run a baseline of the system. After you do something...install or update an app, for example...you rerun the tool. In both cases, .cab files are created, and you can then run a diff between the two of them. I see two immediate uses for something like this...first, analysts and forensic researchers can add this to their bag of tricks and see what happens on a system when an app is installed or updated, or when updates are installed. The second, which I don't really see happening, is that organizations can install this on their critical systems (after testing, of course) and create baselines of systems, which can be compared to another snapshot after an incident.

I'll admit, I haven't worked with this tool yet, so I don't know if it creates the .cab files in a specific location or the user can specify the location, or even what's covered in the snapshot, but something like this might end up being very useful. Troy says that this tool has "great potential for artifact hunters", and I agree.

CyberSpeak is back!
After a bit of an absence, Ovie is back with the CyberSpeak podcast, posting an interview with Mark Wade of the Harris Corporation. The two of them talked about an article that Mark had written for DFINews...the interview was apparently based on pt. 1 of the article, and now there's a pt. 2. Mark's got some great information based on his research into the application prefetch files generated by Windows systems.

During the interview, Mark mentioned being able to use time-based analysis of the application prefetch files to learn something about the user and their actions. Two thoughts on this...unless the programs that were run are in a specific user's profile directory (and in some cases, even if they are...), you're going to have to do more analysis to tie the prefetch files to when a user was logged in...application prefetch files are indirect artifacts generated by the OS, and are not directly tied to a specific user.

The second thought is...timeline analysis! All you would need to do to perform the analysis Mark referred to is generate a nano-timeline using only the metadata from the application prefetch files themselves. Of course, you could build on that, using the file system metadata for those files, and the contents of the UserAssist subkeys (and possibly the RecentDocs key) to build a more complete picture of the user's activities.
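As a quick sketch of what that nano-timeline might look like in Perl...for XP/2003 application prefetch files, the last run time is a 64-bit FILETIME at offset 0x78, and the run count is a DWORD at offset 0x90 (the offsets are different under Windows 7, so treat this as XP-only):

use strict;
use warnings;

# run from within the extracted Prefetch directory
my @events;
foreach my $pf (glob('*.pf')) {
    open(my $fh, '<', $pf) or next;
    binmode($fh);
    seek($fh, 0x78, 0);
    read($fh, my $buf, 8);
    my ($lo, $hi) = unpack('VV', $buf);
    # convert the FILETIME to a Unix epoch value
    my $epoch = int((($hi * 4294967296) + $lo) / 10000000) - 11644473600;
    seek($fh, 0x90, 0);
    read($fh, my $cnt, 4);
    my $runs = unpack('V', $cnt);
    close($fh);
    push(@events, [$epoch, $pf, $runs]);
}
foreach my $e (sort { $a->[0] <=> $b->[0] } @events) {
    print scalar(gmtime($e->[0]))." Z  ".$e->[1]." (run count: ".$e->[2].")\n";
}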

Gettin' Local
A recent article in the Washington Post stated that Virginia has seen a rise in CP cases. I caught this on the radio, and decided to see if I could find the article. The article states that the increase is a result of the growth of the Internet and P2P sharing networks. I'm sure that along with this has been an increase in the "I didn't do it" claims, more commonly referred to as the "Trojan Defense".

There's a great deal of analysis that can be done quickly and thoroughly to obviate the "Trojan Defense", before it's ever actually raised. Analysts can look to Windows Forensic Analysis, Windows Registry Forensics, and the upcoming Digital Forensics with Open Source Tools for solutions on how to address this situation. One example is to create a timeline...one that shows the user logging into the system, launching the P2P application, and then from there add any available logs of file down- or up-loads, launching an image viewing application (and associated MRU list...), etc.

Another issue that needs to be addressed involves determining what artifacts "look like" when a user connects a smart phone to a laptop in order to copy or move image or video files (or uploads them directly from the phone), and then share them via a P2P network.

Free Stuff
Ken Pryor has posted his second article about doing "Digital Forensics on a (less than) shoestring budget" to the SANS Forensic blog. Ken's first post addressed training options, and his second post presents some of the tools described in the upcoming Digital Forensics with Open Source Tools book.

What I like about these posts is that by going the free, open-source, and/or low cost route for tools, we start getting analysts to understand that analysis is not about tools, it's about the process.  I think that this is critically important, and it doesn't take much to understand why...just look around at all of the predictions for 2011, and see what they're saying about cybercrime continuing to become more sophisticated.

Tuesday, January 18, 2011

More VSCs

I was doing some writing last night, specifically documenting the process described in my previous blog post on accessing VSCs. I grabbed an NTUSER.DAT from within a user profile from the mounted image/VHD file, as well as the same file from within the oldest VSC available, and ran my RegRipper userassist plugin against both of the files.

Let me say that I didn't have to use robocopy to extract the files...I could've just run the plugin against the mounted files/file systems. However, I had some other thoughts in mind, and wanted the copies of the hive files to try things out. Besides, robocopy is native to Windows 7.

If the value of VSCs has not been recognized or understood by now, then we have a serious issue on our hands. For example, we know that the UserAssist key values can tell us the last time that a user performed a specific action via the shell (ie, clicked on a desktop shortcut, followed the Start->Programs path, etc.) and how often they've done so. So, the 15th time a user performs a certain action, we only see the information about that instance, and not the previous times.

By mounting the oldest VSC and parsing the user hive file, I was able to get additional historical information, including other times that applications (Quick Cam, Skype, iTunes, etc.) had been launched by the user. This provides some very significant historical data that can be used to fill in gaps in a timeline, particularly when there's considerable time between when an incident occurred and when it was detected.

Here's an excerpt of the UserAssist values from the NTUSER.DAT in the mounted VHD:

Thu Jan 21 03:10:26 2010 Z
UEME_RUNPATH:C:\Program Files\Skype\Phone\Skype.exe (14)
Tue Jan 19 00:37:46 2010 Z
UEME_RUNPATH:C:\Program Files\iTunes\iTunes.exe (296)

And here's an excerpt of similar values from the NTUSER.DAT within the mounted VSC:

Sat Jan 9 11:40:31 2010 Z
UEME_RUNPATH:C:\Program Files\iTunes\iTunes.exe (293)
Fri Jan 8 04:13:40 2010 Z
UEME_RUNPATH:C:\Program Files\Skype\Phone\Skype.exe (8)

Some pretty valuable information there...imagine how this could be used to fill in a timeline.

And the really interesting thing is that just about everything else you'd do with a regular file system, you can do with the mounted VSC...run AV scans, run RegRipper or the forensic scanner, etc.

Thursday, January 13, 2011

More Stuff

More on Malware
No pun intended. ;-)

The MMPC has another post up about malware, this one called Kelihos. Apparently, there are some similarities between Kelihos and Waledac, enough that the folks at the MMPC stated that there was likely code reuse. However, there's quite a bit more written about Waledac...and that's what concerns me. The write-up on Kelihos states that the malware "allows unauthorized access and control of an affected computer", but there's no indication as to how that occurs. The only artifact that's listed in the write-up is a file name and the persistence mechanism (i.e., the Run key). So how does this control occur? Might it be helpful to IT and network admins to know a little bit more about this?

Also, take a close look at the Kelihos write-up...it mentions a file that's dropped into the "All Users" profile and an entry in the HKLM\...\Run key...but that Run key entry apparently doesn't point to the file that's listed.

I understand that the MMPC specifically and AV companies in general aren't in the business of providing more comprehensive information, but what would be the harm, really? They have the information...and I'm not talking about complete reverse engineering of the malware, so there's no need to do a ton of extra work and then post it for free. Given that this affects Microsoft operating systems, I would hope that some organization within MS could provide information that would assist organizations that use those OSs in detecting and reacting to infections in a timely manner.

Interview
Eric Huber posted a very illuminating interview with Hal Pomeranz over on the AFoD blog. Throughout the interview, Hal addresses several questions (from his perspective) that you see a lot in lists and forums...in particular, there are a lot of "how I got started in the business" responses. I see this sort of question all the time, and it's good to see someone like Hal not only discussing what he did to "break into the business", as it were, but also what he looks for with respect to new employees. If you have the time, take a read through the questions and answers, and see what Hal has to offer...it will definitely be worth your time.

Personally, having received an iTouch for Christmas, I think that a podcast would be a great forum for this sort of thing. I'm just sayin'... ;-)

Artifacts
Corey Harrell posted the results of some research on his Journey into Incident Response blog; he's performed some analysis regarding locating AutoPlay and Autorun artifacts. He's done some pretty thorough research regarding this topic, and done a great job of documenting what he did.

Results aside, the most important and valuable thing about what Corey did was share what he found. Have you ever had a conversation with someone where maybe you showed them something that you'd run across, or just asked them a question, and their response was, "yeah, I've been doing that for years"? How disappointing is that? I mean, to know someone in the industry, and to have a problem (or even just be curious about something) and know someone who's known the answer but never actually said anything? And not just not said anything at that moment...but ever.

I think that's where we could really improve as a community. There are folks like Corey who find something, and share it. And there are others in the community who have things that they do all the time, but no one else knows until the topic comes up and that person says, "yeah, I do that all the time."

Process Improvement
I think that one of the best shows on TV now is Undercover Boss. Part of the reason I like it is because rather than showing people treating themselves and each other in a questionable manner, the show has CEOs going out and engaging with front line employees. At the end of the show, the employees generally get recognized in some way for their hard work and dedication.

One topic jumped out in particular from the UniFirst episode...that front line employees who were the ones doing the job were better qualified to suggest and make changes to make the task more efficient. After all, who is better qualified than that person to come up with a way to save time and money at a task?

When I was in the military, I was given training in Total Quality Management (TQM) and certified by the Dept of the Navy to teach it to others. Being a Marine, there were other Marines who told me that TQM (we tried to call it "Total Quality Leadership" to get Marines to accept it) would never be accepted or used. I completely agree now, just as I did then...there are some tasks for which process improvement won't provide a great deal of benefit, but there are others for which it will. More than anything else, the one aspect I found from TQM/TQL that Marines could use everywhere was the practice of engaging with the front line person performing the task in order to seek improvement. A great example of this was my radio operators, who had to assemble RC-292 antennas all the time; one of my Marines had used wire, some epoxy and the bottom of a soda can to create "cobra heads", or field-expedient antenna kits that could be elevated (and the radios operational) before other Marines could go to the back of the Hummer, pull out an antenna kit, and start putting the mast together. This improved the process of getting communications up and available, and it was a process developed by those on the "front lines" who actually do the work.

So what does that have to do with forensics or incident response? Well, one of the things I like to do now and again is look at my last engagement, or look back over a couple of engagements, and see what I can improve upon. What can I do better going forward, or what can I do if there's a slight change in one of the aspects of the examination?

While on the IBM team and performing data breach investigations, I tried to optimize what I was doing. Sometimes taking a little more time up front, such as making a second working copy of the image, would allow me to perform parallel operations...I could use one working copy for analysis, and the other would be subject to scans. Or, I could extract specific files and data from one working copy, start my analysis, and start scanning the two working images. Chris Pogue, a SANS Thought Leader who was on our team at the time, got really good at running parallel analysis operations, by setting up multiple VMs to do just that.

The point is that we were the ones tasked with performing the work, and we looked at the requirements of the job, and found ways to do a better, more comprehensive job in a more efficient manner, and get that done in fewer ticks of the clock. One thing that really benefited us was collaborating and sharing what we knew. For example, Chris was really good at running multiple VMs to complete tasks in parallel, and he shared that with the other members of the team. I wrote Perl scripts that would take the results of scans for potential credit card numbers, remove duplicate entries, and then separate the resulting list into separate card brands for archiving and shipping (based on the required process). We shared those with the team, and Chris and I worked together to teach others to use them.

So why does any of this matter? When I was taking the TQM training, we were told that Deming originally shared his thoughts on process improvement with his fellow Americans, who laughed him out of the country, but others (the Japanese) absorbed what he had to say because it makes sense. In manufacturing processes, errors in the process can lead to increased cost, delays in delivery, and ultimately a poor reputation. The same is true for what we do. Through continual process improvement, we can move beyond where we are now, and provide a better, more comprehensive service in a timely manner.

In closing, use this as a starting point...a customer comes to you with an image, and says that they think that there's malware on the system, and that's it. Think about what you can provide them, in a report, at the end of 40 hours...5 days, 8 hrs a day of work. Based on what you do right now, and more specifically, the last malware engagement you did, how complete, thorough, and accurate will your report be?

Friday, January 07, 2011

Links and stuff

Windows Registry Forensics
The folks at Syngress tweeted recently that Windows Registry Forensics is due to be published this month! It's listed here on Amazon, along with editorial reviews from Troy Larson and Rob Lee. I, for one, cannot wait! Seriously.

A word about the book...if you're interested in an ebook/Kindle version, or if you have trouble getting the contents of the DVD with your ebook purchase, please contact the publisher first. Once the book has been sent in for printing, I (and authors in general) have very little to do with the book beyond marketing it in ways that the publisher doesn't.

Rules
Jesse Kornblum posted his Four Rules for Investigators recently. I would say that it was refreshing to see this, but I've gotta say, I've been saying most of the same things for some time...I think the big exception has been #2, and not because I disagree, but as a consultant, I generally assume that that's already been addressed and handled.

Jesse's other rules remind me a great deal of some of the concepts I and others have been discussing:

Rule 1 - Have a plan...that kind of sounds like "what are the goals of your investigation and how do you plan to address it with the data you have?"

Rule 2 - Have permission...definitely. Make sure the contract is signed before you forensicate.

Rule 3 - Write down what you do...Documentation! Now, I know some folks have said that they don't keep case notes, as those would be discoverable, and they don't want the defense counsel using their speculation and musings against them. Well, I'd suggest that that's not what case notes are about, or for. Case notes let you return to a case 6 months or a year later and see what you did, and even why. They also let someone else pick up where you left off, in case you get sick, or hit by a bus. What I really don't like seeing is the folks who say that they spent hours researching something that was part of a case, but they didn't document it, so they can't remember it...they then have to re-do all of that research the next time they encounter that issue. Also, consider this...one person on a team conducts research that takes 10 hrs to complete. If they don't document and share the results of the research, then the other 9 people on the team are going to spend a total of 90 hrs doing that research themselves...when the original research could have been shared via email, or in a 1/2 hr brown bag training session.

Rule 4 - Work on a copy...Always! Never work on the original data. I've had instances where immediately after I finished making copies of the images, the original media (shipped by the customer) died. Seriously. Now, imagine where I'd've been had I not followed procedure and made the copies...my boss would've said, "...that's okay, because you made copies...right?" I'm generally one of those folks who follows procedure because it's the right thing to do, and I tend not to make arbitrary judgments as to when I will or won't follow the procedure.

Jesse isn't the only one saying these things. Take a look at Sniper Forensics: Part 1 over at the SpiderLabs Anterior blog. Chris has gotten a lot of mileage out of the Sniper Forensics presentations, and what his talks amount to include putting structure around what you do, and the KISS principle. That's "keep it simple, stupid", NOT listening to Love Gun while you forensicate (although I have done that myself from time to time).

Is it StuxNet, or is it APT?
I found this DarkReading article about targeted attacks tweeted about over and over again. I do agree with the sentiment of the article, particularly as the days of joyriding on the Information Superhighway are over with, my friends. No one is really deploying SubSeven any longer, just to mess with someone and open and close their CD-Rom tray. There's an economic driver behind what's going on, and as such, steps are being taken to minimize the impact of unauthorized presence on compromised systems. One thing's for sure...it appears that these skilled, targeted attacks are going to continue to be something that we see in the news.

USB Issues and Timelines
Okay, this isn't about the USB issues you might be thinking of...instead, it's about a question I get now and again, which is, why do all of the subkeys beneath the USBStor key in the System hive have the same LastWrite time? While I have noticed this, it hasn't been something that's pertinent to my exam, so I really haven't pursued it. I have seen where others have said that they've looked into it and found that the LastWrite time corresponded with an update.

Rather than speculating as to the cause, I thought what I'd do is recommend that folks who see this create a timeline. Use the file system metadata, LastWrite times from the keys in the System and Software hives, and Event Log data, to start. This should give you enough granularity to begin your investigation. I'd also think about adding Prefetch file metadata (if you have any Prefetch files...), as well as data from the Task Scheduler log (that is, if it says anything besides the Task Scheduler service starting...).
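If you do see this and want to look at the LastWrite times yourself before building the full timeline, a quick Parse::Win32Registry sketch against the System hive would do it (ControlSet001 is assumed here; on a real exam, check the Select key's Current value first):

use strict;
use warnings;
use Parse::Win32Registry qw(iso8601);

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Could not open System hive\n";
my $root = $reg->get_root_key;
my $usb  = $root->get_subkey('ControlSet001\\Enum\\USBStor') or die "USBStor key not found\n";
foreach my $dev ($usb->get_list_of_subkeys) {
    print iso8601($dev->get_timestamp)."  ".$dev->get_name."\n";
    foreach my $inst ($dev->get_list_of_subkeys) {
        # each subkey here is a unique device instance ID
        print "  ".iso8601($inst->get_timestamp)."  ".$inst->get_name."\n";
    }
}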

Tuesday, January 04, 2011

Accessing Volume Shadow Copies

Over the past months, I've had some work that involved Windows systems beyond XP...specifically, one Windows 7 system, and I had some work involving a Vista system. Now, most of what I looked at had to do with timelines, but that got me to thinking...with the number of systems now available with Windows 7 pre-installed (which is pretty much everything that comes with Windows pre-installed), how much do we really know about accessing Volume Shadow Copies (VSCs)?

Troy Larson, the senior forensic-y dude from Microsoft, has been talking about Volume Shadow Copies for quite some time. In his presentations, Troy has talked about accessing and acquiring VSCs on live systems, using George M. Garner, Jr's FAU dd.exe; however, this requires considerable available disk space.

Troy's SANS Forensic Summit 2010 presentation can be found here. In his presentation, Troy demonstrates (among other things) how to access VSCs remotely on live systems, using freely available tools.

ProDiscover - Chris Brown has a presentation available online (from TechnoSecurity 2010) in which he discusses using ProDiscover to access and mount Volume Shadow Copies on live systems...remotely. Pretty cool.

I ran across a QCCIS whitepaper recently that discusses mounting an acquired image using EnCase with the PDE module, accessing the VSCs using the same method Troy pointed out, and then copying files from the VSCs using robocopy. There are also a number of posts over at the Forensics from the sausage factory blog that address VSCs, and a couple that include the use of robocopy. As I often work with acquired images, being able to access VSCs within those images is something I'm very interested in being able to do. However, most of my online research points to folks using EnCase with the PDE module to mount their acquired images when demonstrating how to access VSCs within those images...and I don't have EnCase.

So...what if you don't have access to EnCase or the PDE module? How could you then access Volume Shadow Copies within an acquired image?

Testing
I started out with a host system that is a fresh install of Windows 7 Professional, 64-bit. The acquired image I started with is of a physical disk from a 32-bit Vista system; as it's an image of the physical disk, it has several partitions, including a maintenance partition. The acquired image is called "disk0.001". I also extracted the active Vista partition as a separate raw/dd image, calling it "system.001". I verified the file systems of both of these images using FTK Imager to ensure that I could 'see' the files.

So here are the tools I installed/used:
FTK Imager 3.0.0.1443
ImDisk 1.3.1
Mount Image Pro 4.4.8.813 (14 day trial)
Shadow Explorer 0.8.430.0

So the first thing I did was mount the image using FTK Imager 3.0, and noted the drive letter for the third partition...in this case, I:\. I opened a command prompt and used the 'dir' command to verify that I could access the volume. I then typed the following command:

vssadmin list shadows /for=i:

This got me an error message:

Error: Either the specified volume was not found or it is not a local volume.

Okay. I fired up ShadowExplorer, but the I:\ drive was not one of the options available for viewing.

I tried mounting the system.001 file, and then tried both image files again using ImDisk, and each time got the same result...running the vssadmin command I got the above error message. I also tried using the "Logical Only" option in FTK Imager 3.0's "Mount Type" option, and that didn't work, either. So, at this point, I was failing to even identify the VSCs, so I could forget accessing them.

I reached out to the QCCIS guys, and John responded that FTK Imager 3.0 seems to mount images so that they appear as remote/network drives to the host OS; as such, vssadmin doesn't 'see' VSCs on these drives. This also explains why ShadowExplorer doesn't 'see' the volumes, and why I get the same error message when using ImDisk. I also got in touch with Olof, the creator of ImDisk, and he said that ImDisk was written using the method for creating drive letters available in NT 4.0, prior to the Volume Mount Manager being included in Windows; as such, getting ImDisk to mount the volumes as local disks would require a re-write. Thanks to Olof for ImDisk, and thanks to Olof and the QCCIS guys for responding!

I then installed the VMWare VDDK 1.2 in order to get a version of vmware-mount that would run on Windows 7. I had booted the acquired image using LiveView, so I had a .vmdk file on my drive for this image. After installing the VDDK, I ran "vmware-mount /p", and clearly saw the 4 volumes within the image...I knew that I wanted to access volume 3. I then ran the following command:

vmware-mount /v:3 x: f:\vista\disk0.001.vmdk

This resulted in an error message stating that vmware-mount could not mount the virtual disk. Checking the log file that was produced, the first message I see is that the image file, disk0.001, "failed to open (38): AIOMgr_Open failed." I'm still researching this one...

Getting It To Work
So, at this point, I'm stuck...I want to access files within the VSCs in an acquired image, and I don't have EnCase/PDE. So far, my attempts to just see the VSCs have failed. So, I grabbed a copy of vhdtool.exe, which is available from MSDN (it is described as "unmanaged code"). Originally, I wanted to get a copy of this because I have XPMode installed on my Windows 7 Professional system, which means I have Virtual PC...but I don't want to boot the vhd file at this point, so that's not a required component. So I made a copy of system.001 in another storage location and ran vhdtool.exe with the "/convert" switch. This apparently appends a footer to the file...which I'd read about during my research, and which is the reason I made a copy of system.001 to work with (I don't want to muck up my original in case all of this doesn't work...know what I mean?). I should note here that running the tool adds the VHD footer to the file without changing the file name...so even though I apparently now have a VHD file, I can still see only "system.001".
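
For reference, this is roughly what the copy-and-convert steps looked like; the paths here are just examples from my setup (adjust them to your own environment):

copy f:\vista\system.001 f:\vhd\system.001
vhdtool.exe /convert f:\vhd\system.001

Again, point the tool at the copy, not the original...the footer gets written directly to whatever file you name on the command line.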

Next, I opened the Computer Management interface in Windows 7 and fired up the Disk Manager. I then chose Action -> Attach VHD, and browsed to my new VHD file. Before clicking "OK", I made sure to check the "Read-only" box. I then had Disk2 added to Disk Manager, and the Volume listing included a new G:\ volume. In both instances, the drive icon was light blue, as opposed to the grey drive icon for the other drives on my system. When I ran the vssadmin command against the G:\ drive, I could see the VSCs! Oddly enough, the G:\ drive is NOT a visible option in ShadowExplorer.

Next, I ran the mklink command against the last VSC identified on the G:\ drive. To do this, I selected everything on the line following "Shadow Copy Name:"...the stuff that says "\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy40".

mklink /d c:\vista \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy40\

Note: The final trailing "\" is EXTREMELY important!
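
A quick sanity check at this point is to run 'dir' against the new link (using the example path from the mklink command above) and verify that you can see the volume's file system:

dir c:\vista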

From here, I could now access the files within the VSC. Several folks have mentioned using robocopy to get copies of files from within the VSCs, using something as simple as a batch file. See the QCCIS whitepaper for some good examples of the batch file. This is a great way to achieve data reduction...rather than acquiring an image of the VSC, simply mount it and access the files you need. Another idea would be to use RegRipper (include RegRipper's rip.exe in a batch file) or a forensic scanner to access and retrieve only the data you want. For example, when you access a user's UserAssist key and parse the information, if they've taken an action before (launched an application, etc.), you only see the date/time that they last took that action, and how many times they've done this. By accessing VSCs and running RegRipper, you could get historical information including the previous dates that they took those actions. Let's say that you use something similar to the batch file outlined in the QCCIS whitepaper, and include a command similar to the following (you may need to modify this to meet your environment):

rip.exe -p userassist -r D:\vsc%n\ > user_ua%n.txt

Now, this assumes that your VSCs are mounted with mklink under D:\vsc; %n refers to the VSC number.
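
To illustrate, here's a minimal sketch of what such a batch file might look like. The VSC numbers, the mount point, and the user profile path are all assumptions specific to this example...treat it as a starting point, not a drop-in solution:

@echo off
rem Sketch only...VSC numbers, mount point, and hive path are assumptions
for %%n in (38 39 40) do (
  mklink /d D:\vsc%%n \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy%%n\
  rip.exe -r D:\vsc%%n\Users\user\NTUSER.DAT -p userassist > user_ua%%n.txt
  rmdir D:\vsc%%n
)

Note that within a batch file, the loop variable is written as %%n; at the command prompt, you'd use a single %n.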

Something similar would also be true with respect to MRU lists within the Registry...we know that the LastWrite time for the key tells us when the most recently accessed file (for example) was accessed; accessing VSCs, we can see when the other files in the MRU list were accessed.

When you're done accessing the mounted VSC, simply remove the directory using rd or rmdir. Then go back to your Disk Manager, right-click the box to the left in the lower right pane, and select "Detach VHD".

If you need a command line method for doing this, take a look at diskpart...you can even include the list of commands you want to run in a script file (similar to ftp.exe) and run the script using:

diskpart /s
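
For example, a detach script might look something like the following; the script name and paths are hypothetical, and you may need to rename the converted image with a .vhd extension for diskpart to accept it:

rem detach_vhd.txt...detach the previously attached VHD
select vdisk file="f:\vhd\system.001"
detach vdisk

You'd then run 'diskpart /s detach_vhd.txt' from an elevated prompt. The attach side works the same way, substituting 'attach vdisk readonly' for the detach command.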

This, of course, leads to other questions. For example, we've seen how, if you have a full image, you can use vhdtool to convert the image file to a vhd file, or extract the pertinent partition (using FTK Imager) and convert that raw/dd image to a vhd file. But, what if you have a vmdk file from a virtual environment? One option would be to use FTK Imager to re-acquire the file to raw/dd format; FTK Imager opens vmdk files, as well as EnCase format images, just fine.

There's a tool available from the VMToolkit (produced by the VMWare folks) that will reportedly convert vmdk files to vhd format. Apparently, Starwind has a free converter, and some have mentioned that WinImage should work, as well. I haven't tried any of these, so YMMV.

Monday, January 03, 2011

Links and Updates

It's been a while since I posted a list of links and resources from across the Internet. I thought that since things have been quiet toward the end of 2010, I'd post some of the things I'd run across and found interesting...so, here goes...

GSD
Looks like Claus is back with an interesting update to his site. Claus hasn't been updating his site as much as he had in the past, but it is always good to see his posts. A lot of what Claus posts that is oriented toward forensics is from an admin's perspective, which is great for a guy like me...I'm not an admin (nor do I play one on TV), so I often find that it's good to get a reminder of the admin's perspective. Besides, Claus always seems to be able to find the really good stuff...

One of the interesting things I found in Claus's post was the mention of a new mounting tool, OSFMount, for mounting images. I find it useful to be able to do this, and have been using FTK Imager 3.0. Claus also mentions in his post that ImDisk was updated recently...like OSFMount, it comes with a 64-bit version, in addition to the 32-bit version.

So, what does this tell us about image mounting tools? There are several other free and for-pay tools, some of varying quality, and others with vastly greater capabilities. So why does it seem that there's an increase in the number of tools that you can use to mount images? After all, you can use LiveView to convert a raw dd image to a vmdk and open it in VMPlayer, or you can use vhdtool to convert a raw dd image to a vhd and open it in MS's Virtual PC, which is freely available.

eEvidence
I watched this site for a long time without seeing any updates...then, while I wasn't watching, Christine updated the e-Evidence.info site with a lot of great reading material back in November. This site has always been a great source of information.

VSS
Based on a link from the e-Evidence site, I did some reading about mounting images, and accessing and recovering data from Volume Shadow Copies. The first resource I looked at was from QCCIS.com; the whitepaper provides an explanation of what the Volume Shadow Service does, and provides a simple example (albeit without a great deal of exacting detail) of mounting and extracting data from shadow copies. This is a good way to get started, and I've started looking at ways to implement this...so far, I've used Windows 7 Professional 64-bit as a base system, mounted an image (with FTK Imager 3.0) that includes a Vista 32-bit volume, and not been able to access the shadow copies. I'll be trying some different things to see if I can mount images/volumes in order to access the Volume Shadow Copies.

Malicious Streams
This site isn't strictly Windows-oriented...in fact, it's decidedly focused on MacOSX. However, Malicious-streams.com contains information about PDF malware, a bit of code geared toward Windows systems, and some good overall reading. Also, the author is working on a version of autoruns for MacOSX and I hope that this gets released as a full version early this year, as it would be a great way to start things off in 2011.

Resources
Derek Newton's list of Forensic Tools
Open Source Digital Forensics Site
LNK Parser written in Python

Wednesday, December 29, 2010

Mining MSRC analysis for forensic info

Anyone who's followed this blog for a while is familiar with...call them my "rants"...against AV vendors and the information they post about malware; specifically, AV vendors and malware analysts have, right in front of them, information that is extremely useful to incident responders and forensic analysts, but they do not release or share it, because they do not recognize its value. This could be due to the AV mindset, or it could be due to their business model (the more I think about that, the more it sounds like a chicken-egg discussion...).

When I was on the IBM ISS ERS team, we did a good deal of malware response. In several instances, team members were on-site with an AV vendor rep, whose focus was to get a copy of the malware to his RE team, so that an updated signature file could be provided to the customer. However, in the time it takes to get all this done, the customer is hemorrhaging...systems are getting infected and re-infected, data is (or may be) flooding off of the infrastructure, etc. Relying on known malware characteristics, our team members were able to assist in stemming the tide and getting the customer on the road to recovery, even in the face of polymorphic malware.

What I find useful sometimes is to look at malware write-ups from several sites, and search across the 'net (via Google) to see what others may be saying about either the malware or specific artifacts.

I watched this video recently, in which Bruce Dang of Microsoft's MSRC talked about analyzing StuxNet to figure out what it did/does. The video is of a conference presentation, and I'd have to say that if you can get past Bruce saying "what the f*ck" way too many times, there's some really good information that he discusses, not just for malware RE folks, but also for forensic analysts. Here are some things I came away with after watching the video:

Real analysis involves symbiotic relationships. I've found this to be very true in some of the analysis I've done. I have worked very closely with our own RE guy, giving him copies of the malware, dependency files (i.e., DLLs), and information such as paths, Registry keys, etc. In return, I've received unique strings, domain names, etc., which I've rolled back into iterative analysis. As such, we've been able to develop analysis that is much greater than the sum of its parts. This is also a good reason to keep a copy of Windows Internals on your bookshelf, and a copy of Malware Analyst's Cookbook within easy reach.

Malware may behave differently based on the eco-system. I've seen a number of instances in which malware behaves differently based on the eco-system it infects. For example, Zeus takes different steps depending on whether or not the infected user has Administrator rights. I've seen other malware infections greatly hampered by the fact that the user who got infected was a regular user and not an admin...indicating that the variant does not have a mechanism for checking for and handling different privilege levels. Based on what Bruce discussed in his presentation, StuxNet takes different steps depending upon the version of Windows (i.e., XP vs. Vista+) that it's running on.

Task Scheduler. I hear the question all the time, "what's different in Windows 7, as compared to XP?" Well, this seems to be a never-ending list. Oy. Vista systems (and above) use Task Scheduler 2.0, which is different from the version that runs on XP/Windows 2003 in a number of ways. For example, TS 1.0 .job files are binary, whereas TS 2.0 task files are XML-based. Also, according to Bruce's presentation, when a task is created, a checksum for the task file is computed and stored in the Registry. Before the task is run, the checksum is recalculated and compared to the stored value, to check for corruption. Bruce stated that when StuxNet hit, the hash algorithm used was CRC32, and that generating collisions for this algorithm is relatively easy...which is part of what StuxNet exploited. Bruce mentioned that the algorithm has since been updated to SHA-256.

The Registry key in question is:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache

A lot more research needs to be done regarding how forensic analysts (and incident responders) can parse and use the information in this key, and in its subkeys and values.
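
As a quick (and admittedly crude) first look on a live system, you can dump the key and its subkeys with reg.exe from an elevated prompt:

reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache" /s

Be warned...the recursive listing can get pretty lengthy on a system with a lot of scheduled tasks.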

MOF files. Bruce mentioned in his presentation that Windows has a thread that continually polls the system32\wbem\mof directory looking for new files, and when it finds one, runs it. MOF files are compiled scripts, and StuxNet used such a file to launch an executable; in short, drop the file in place as Guest, and the executable referenced in the file gets run as System.

Management needs actionable information. This is true in a number of situations, not just the kind of analysis work that Bruce was performing. It applies to IR and DF tasks, as well...sure, analysts can find a lot of "neat" and extremely technical stuff, but the hard part...and what we're paid to do...is to translate that wealth of technical information into actionable intelligence that the customer can use to make decisions within their environment. What good does it do a customer if you pile a 70-page report on them, expecting them to sift through it for data, and figure out how to use it? I've actually seen analysts write reports, and when I've asked about the significance or usefulness of specific items, been told, "...they can Google it." Uh...no. So, through experience, Bruce's point is well-taken...analysts sift through all of the data to produce the nuggets, then filter those to produce actionable intelligence that someone else can use to make decisions.

A final thought, not based specifically on the video...it helps forensic analysts and incident responders to engage sources that are ancillary to their field, and not directly related specifically to what we do every day. This helps us to see the forest for the trees, as it were...

Sunday, December 26, 2010

Writing Books, pt IV

Okay, by now, you've likely decided to write that book, and that you've opted to do so through a publisher to get it on the bookshelves and onto Kindles and other ereaders. Remember, this isn't the only way to get something published, but it is one of the only ways to get your book published and have someone else take care of getting it on shelves and in front of your intended audience through Amazon, etc. Your alternatives include self-publishing through sites like Lulu.com, or simply writing your "book" and printing your manuscript to a PDF file, rather than a printer. There are advantages and disadvantages to each approach, but we're going to go with the assumption that you'll be working with a publisher.

Working with the publisher
When you're working with the publisher, don't set your expectations ahead of time of what that's like (don't set any at all, really, and certainly don't set them too high). Remember, the publisher's staff are people, too, and may be working with multiple authors. You very likely won't be the only author that they're working with, nor the only schedule. In addition, remember that in the current economy many people are wearing multiple hats in their jobs...so the editor you're dealing with may not get your email because they're traveling or at a book show. I've worked with staff who, in 2009/2010, did not have remote access to their email (can you imagine that??), so I wouldn't hear back from them for weeks at a time.

While working with the publisher, I've had editors, and even editors' assistants, change partway through the writing process. The result was that chapters that I had sent in for review could no longer be found. I know this sounds like a bit much, but keep track of what you send in, when, and to whom. This can really help...particularly in instances where you have to resend things.

One of the things I've run into several times is that I've submitted what I thought was everything...DVD contents, revised chapters, etc...and asked the staff I was working with if everything was received, and if I needed to provide anything else. I'd been told, no, that's everything...only to be contacted three weeks later and told that something else was needed (review the proofs, provide a bio, etc.). The key to this is to see what's in other books, and keep a list of what you've provided...have you written a preface yet? A bio? How about that acknowledgment or the dedication page?

In short, be flexible. Focus on meeting your schedule in the contract. If you're not going to meet the schedule for some reason, do the professional thing and let them know ahead of time. Don't worry about what anyone else does or is doing. In the long run, it'll help.

Working with reviewers
When you're working with reviewers, keep in mind what their role is in the process. They're generally there to review your work, so don't take what they say or comment on personally.

There are generally two kinds of reviewers...those who do the grammar, spelling and formatting review for the publisher (they tend to work for the publisher), and those who are supposed to review your work from a technical perspective, to ensure that it's accurate (although why you'd put that amount of time into writing something that is completely off base, I have no idea). Generally speaking, whatever the grammar/spelling reviewer suggests is probably advisable to accept. However, this won't always be the case, particularly when you've written a turn of phrase that you really want to use, or are using acronyms specific to your field. I remember that I had an issue with the acronym "MAC"...did it refer to file "MAC" times, or to a NIC's "MAC" address? Kind of depended on the chapter and context.

As far as your technical reviewers go, that's another story. There's no reason that you have to accept any of their proposed changes, or follow what their comments say. Hey, I know that's kind of blunt, but that's the reality of it. In every book I've worked on, to my knowledge, the technical reviewer has had no contact with me, my book proposal, or my thought process prior to getting my draft chapters. Therefore, they are missing a great deal of context...and in some cases, their comments have made little sense when you consider the overall scope and direction of the book.

For some reason, the publishing process seems to be something of a maze of Chinese walls. You get an author who's writing a technical book, working with a publisher who knows publishing, but not the subject that's being addressed. One person reviewing the book and working for the publisher knows spelling, grammar, and formatting, and that's good...but oftentimes, the technical reviewer may not know a great deal about the subject being addressed, and knows nothing at all about the author, the goals and direction of the book, or much in the way of overall context. In my mind, this is just a shortcoming of the process, and something that you need to keep in mind. I've worked with a LOT of folks with respect to writing technical reports, and there are generally two things that most folks do with suggested changes and comments...they either accept them all unconditionally, or they delete and ignore them. I would suggest that when you are going through the document that you receive back from the technical reviewer, you make your changes and add your own comments to theirs, justifying your actions. Then save the document, copy it, and (if it's written in MSWord) run the copy through the document inspection process, accepting the edits and removing comments. That way, you have a clean copy to send back, but you also have a clear record of what was suggested and what you chose to do about it.

Another thing to keep in mind is that people have varying schedules...if you submit a couple of chapters and you feel that you aren't getting much in the way of a review, or one that's technical, get in touch with the editor and request someone else. Or, suggest someone to them up front...after all, if you really know the subject that you're writing about, you will likely know someone else in the field who (a) knows enough about it to review your work, (b) has the time to do a good review, and (c) has the interest in working with you. I've had folks offer to review my work completely aside from the publisher...that's okay, too, but it also means you may submit a chapter and not hear back at all. Remember, in the technical field, you don't make enough money to support yourself writing books, so neither writing nor reviewing books is a full-time job, and people have day jobs, too.

Working with co-authors

Writing a book as the sole author can be tough, as it is a lot of work...but I think that writing a book with multiple authors, particularly when none of the authors ever actually sit in a room together, is much harder. There are a lot of decisions that need to be made and coordinated ahead of time, and continually revisited throughout the process. Again, writing books in this field is NOT a full-time job...as such, people's day jobs and lives tend to take precedence. Family illness, holidays, vacations, etc., all play a role in the schedule that needs to be worked out ahead of time.

Another thing to consider is that someone has to take the lead on tone...or not. You need to decide early on what the division of labor will be (split up chapters or sections), and whether or not you feel it's important to have a single tone throughout the book. There will be times when it makes sense to have a single tone, and there will be other times when it's pretty clear that you aren't going to have a single tone, as the various authors take the lead on the chapters for which they have the most expertise in the subject matter.

Providing Materials With Your Book
I'm one of those folks who writes some of my own code, and I tend to create my own tools, whether they be a batch file or a Perl script. As such, it's helpful to others if I make those tools available to them in some manner, and this is often done by putting those tools on a CD or DVD included with the book. I think that a lot of times, this increases the value of the book, but it can also be a bit difficult to deal with...so how you provide the materials is something to consider up front. Another item that a lot of folks find interesting and very valuable is "cheat sheets"...if you list or explain a process in your book, and it covers a good portion of a chapter, it might be a good idea to provide a cheat sheet that the reader can print out (perhaps modify to meet their own needs) and use. How you intend to provide these and other materials (e.g., videos that show the viewer how to do something step-by-step) is something that you need to consider ahead of time.

The point is that if there are materials you're going to refer to in your book, you have to figure out ahead of time how you're going to provide them. In my experience, there are two ways you can do this...provide the materials on the DVD that comes with the book, or provide them separately. I have usually opted to provide the materials on a DVD, but after having written a couple of books, I think I'm going to move to something completely separate, and provide the materials online.

I have decided to do this for a couple of reasons. One is that there's always someone out there who ends up purchasing a copy of the book that mysteriously doesn't have a DVD. Or they lose the DVD. Or they leave it at home or at work, when they need it in the other location. Then there's the folks who purchase ebooks for their Kindle or other ereader, and never got the email that says, "...go here to download additional content." Or they did, but the publisher modified their infrastructure so now the instructions or path aren't valid. And, of course, there's always the person who's going to contact you directly because they want to ensure that they have the latest copy of the materials.

My thinking is that a lot of these issues can be avoided if you choose a site like Google Code or something else that is appropriate (and relatively stable/permanent) for hosting your additional materials. That way, you can control what's most up-to-date and not have to rely on someone else's schedule for that. You can refer to the actual tools (and other materials) in the book, so that having the book itself makes the tools more valuable, but by providing them on the web, you can include "here are the absolute latest, newest, most up to date copies" on the page where the reader will go to download those tools.

Self-Marketing
Blogging is a great way to get started and get the feel for writing, without the constraints of editing (and things like spelling, grammar, etc.). Face it, some folks don't take criticism of any kind well, and don't put a great deal of stock in checking their own spelling and grammar...so blogging is sort of a way to get into writing without having someone looking over your shoulder. It's also a great way for some folks to realize how important that sort of thing is.

Blogging is also a great way to self-market your book, prior to and following publication. It's a great way to start talking about the book, to answer questions that you get about your book and materials, address errata, etc. In some ways, a blog can also lay the groundwork for a second edition, or even just for your next effort, as you get feedback, read reviews, post new ideas, etc. For example, if you start to see that your book on forensic analysis is linked to another blog on malware reverse engineering, with that author making comments about what you've written (positive or negative), that could be a good indicator for you...what do you need to improve on, expand on, and what were you dead on with in your book?

Take the lead on marketing your book. Present the publisher with ideas, and take the lead on getting the word out there (assuming that that's what you want). When WFA 2/e was coming out, I was excited because this was the first book in a new direction that Syngress was going, something that was exemplified by the new cover design. That summer, the SANS Forensic Summit was going to be in Washington, DC, and I was attending as a speaker. As I looked more and more into the conference, and who was speaking and attending, I counted almost half a dozen Syngress authors who would be there, all of whom had the word "forensics" in their book title. I contacted the publisher to find out if they'd have a bookstore...I thought, between sessions I could answer questions about the book. Well, it turns out that they had NO PLANS for a bookstore!! I thought (and said to them), you've GOT to be kidding me! Here's a conference with "forensics" in the title, and all these authors of "forensics" books will be there...to me, it was a total marketing coup. The short story is that the editor was there with books on a table and it was a huge success for everyone.

Final Thoughts...
And now, some final thoughts as I close out this series of posts.

I hope that in reading these posts, you've enjoyed them and at the same time gotten something out of them. I tend to take something of a blunt approach, in part because I don't want to sugarcoat things for someone who's considering writing a technical book. Yes, it is hard...but if you know up front what you may be facing, you're less likely to let it slow you down. One of the hardest things about writing books is that you're rarely, if ever, face-to-face with anyone from the publisher's staff when discussing your book. In fact, you're rarely face-to-face with anyone throughout the process.

One of the misconceptions a lot of folks who have never written a book have about authors is that they retain some modicum of control over what happens with the book once it's submitted to the printer. Nothing could be further from the truth. When WFA 1/e was released by Syngress, a PDF version of the book was available...for the first couple of weeks, it was provided with each copy of the book purchased through the Syngress web site. After that, it was available for purchase. Later, Syngress was purchased by Elsevier, a company out of Europe that produced all e-format versions of its books EXCEPT PDF. The author's role in any of that, particularly in the availability of a PDF version of their book, is zero. And I say that only because there's nothing less than zero.

Another misconception that I've run across is that most folks think that book authors have access to endless resources, or that somehow, the publishing company will provide those resources. This simply isn't the case. When I submitted the proposal for the Registry forensics book, all of the reviews came back saying that I needed to include discussion of the use of commercial tools, such as EnCase and FTK. Well, the short answer was "no"; the long answer was that I neither have access to, nor have I been able to obtain a temporary license for, either...and none of the reviewers was offering such a license. In all fairness, I will say that I was offered a temporary license to one of the commercial tools, but by the time that offer was available, I was too far into the writing process to go back and add that work and material into the book. It would have been particularly time-consuming because I don't use those tools regularly. Anyway, my point is that when I have written my books, I tend to do so based on my own experiences, or those interesting experiences that others have shared. I tend not to write about ediscovery, because I've never done it. I likely won't be writing about Registry analysis of a Windows-powered car or Windows 7 phone, because I neither own nor have access to either, nor do I have the tools available to work with either. Like most authors, I don't have access to massive data centers for testing various operating systems and application installations across numerous configurations.

Keep in mind that your book is not going to be everything to everyone. You're going to have critics, and you're also going to have "armchair quarterbacks". You're going to have people who post to public forums that you "should've done this...", and not once have a good thing to say about your work. You're going to have folks who will email you glowing commendations for what you've done, but not post them publicly...even when they purchased your book based on a publicly-posted review. Don't let any of this bother you. One of my good friends who's also written a book has received some not-so-glowing criticism, to which he's responded, "...come see me when you've published a book." In short, don't let criticism get you down, and don't let it be an obstacle that prevents you from writing in the first place.

Finally, I want to say once again that writing technical books is tough. It's tough enough if you're a single person and not at all used to writing. If you're married (particularly newly married) and/or have small children, it can be exponentially harder, and it will require even more discipline to write. However, it can also be extremely rewarding. Seeing your work published and sitting on a bookshelf is very rewarding. Think about it...you've completed and achieved something that few others have attempted. If you've put the effort in and done the best you can, you should take pride in what you've done...and don't let the little things become insurmountable obstacles that prevent you from even trying.