
Sunday, March 12, 2023

On Using Tools

I've written about using tools before in this blog, but there are times when something comes up that provokes a desire to revisit a topic, to repeat it, or to evolve and develop the thoughts around it. This is one of those posts. 

When I first released RegRipper in 2008, my intention was that once others saw the value in the tool, it would grow organically as practitioners used it and sought to expand it. My thought was that once analysts started using it, they'd see the value proposition, and see that the real power of the tool is that it can easily be updated; "easily" meaning by either developing new plugins, or seeking assistance in doing so.

That was the vision, but it's not something that was ever really realized. Yes, over time, some have created their own plugins, and of those, some have shared them. However, for the most part, the "use case" behind RegRipper has been "download and RUNALLTHETHINGS", and that's pretty much it.

On my side, there are a few assumptions I've made with respect to those using RegRipper, specifically around how they were using it. One assumption has been that whoever downloaded and is using the tool has a purposeful, intentional reason for doing so, that they understand their investigative goals and understand that there's value in using tools like RegRipper to extract information for analysis, to validate other findings and add context, and to use as pivot points into further analysis.

Another assumption on my part is that if they don't find what they're looking for, don't find something that "helps", or don't understand what they do find, that they'll ask. Ask me, ask someone else. 

And finally, I assume that when they find something that either needs to be updated in a plugin, or a new plugin needs to be written to address something, that they'll do so (copy-paste is a great way to start), or reach out to seek assistance in doing so.

Now, I'm assuming here, because it's proved impossible to engage others in the "community" in a meaningful conversation regarding tool usage, but it appears to me that most people who use tools like RegRipper assume that the author is the expert, that they've done and seen everything, that they know everything, and that they've encapsulated all of that knowledge and experience in a free tool. The thing is, I haven't found that to be the case in most tools, and that is most definitely NOT the case when it comes to RegRipper.

Why would anyone need to update RegRipper? 

Lina recently tweeted about the need for host forensics, and she's 10,000% correct! SIEMs only collect those data sources that are pointed at them, and EDR tools can only collect and alert on so much. As such, there are going to be analysis gaps, gaps that need to be filled in via host forensics. And as we've seen over time, a lot changes about various endpoint platforms (not just Windows). For example, we've been aware of the ubiquitous Run keys and how they're used for persistence; however, there are keys that can be used to disable the Run key values (Note: the keys and values can be created manually...) without modifying the Run key itself. As such, if you're checking the contents of the Run key and stating that whatever is listed in the values was executed, without verifying/validating that information, then is this correct? If you're not checking to see if the values were disabled (this can be done via reg.exe), and if you're not validating execution via the Shell-Core and Application Event Logs, then is the finding correct? I saw the value in validating findings when determining the "window of compromise" during PCI forensic exams, because the finding was used to determine any regulatory fines levied against the merchant.
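
To make the Run key example a bit more concrete, here's a rough sketch (not a polished RegRipper plugin) of the kind of validation check described above, run against an NTUSER.DAT hive using the same Parse::Win32Registry module that RegRipper is built on. The StartupApproved\Run key location and the meaning of the first byte of its binary data are assumptions for the purpose of illustration, so verify them against your own test data before relying on them.

#!/usr/bin/perl
# Sketch: list Run key values from an NTUSER.DAT hive and note whether a
# corresponding StartupApproved\Run entry appears to disable them.
# Assumption: a first byte of 0x03 in the StartupApproved binary data = disabled.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $reg->get_root_key();

my $run = $root->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Run');
my $approved = $root->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\StartupApproved\\Run');

if ($run) {
  foreach my $val ($run->get_list_of_values()) {
    my $name  = $val->get_name();
    my $state = "no StartupApproved entry";
    if ($approved) {
      my $a = $approved->get_value($name);
      if ($a) {
        my $first = unpack("C", $a->get_data());
        $state = ($first == 3) ? "DISABLED" : "enabled";
      }
    }
    printf "%-30s %s [%s]\n", $name, $val->get_data(), $state;
  }
}

Even then, as noted above, the finding still needs to be validated against the Shell-Core and Application Event Logs before stating that anything actually executed.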

My point is that if you're running a tool and expecting it to do everything for you, then maybe there needs to be a re-examination of why the tool is being run in the first place. If you downloaded RegRipper 6 months ago and haven't updated it in any way since then, is it still providing you with the information you need? If you haven't added new plugins based on information you've been seeing during analysis, at what point does the tool cease to be of value? If you look closely at the RegRipper v3.0 distro available on Github, you'll notice that it hasn't been updated in over 2 1/2 yrs. I uploaded a minor update to the main engine a bit ago, but the plugins themselves exist as they were in August 2020. Since then, I've been developing an "internal" custom version of RegRipper, complete with MITRE ATT&CK and category mappings, Analysis Tips, etc. I've also started developing plugins that output in JSON format. However, all of these are things that either I proposed in 2019 and got zero feedback on, or someone close to me asked about. Not a week goes by when I don't see something online, research it, and it ends up in a plugin (or two, or five...).

If you're using a tool, any tool (RegRipper, plaso, etc.), do you understand its strengths and weaknesses, do you understand what it does and does not do, or do you just assume that it gives you what you need?

Wednesday, December 04, 2013

Links and News

There have been some exciting developments recently on the Windows digital forensic analysis front, and I thought it would be a good idea to bring them all together in one place.

Recover CMD sessions from the pagefile
If you perform analysis of Windows systems at all, be sure to check out Robert's blog post that discusses how to use page_brute (which I'd mentioned previously here) to recover command prompt sessions from the Windows pagefile.  In the post, the author mentions quite correctly that grabbing a memory image still isn't something that's part of standard incident response procedures.  If you receive a laptop system (or an image thereof) you may find a hibernation file, which you can then analyze, if doing so is something that will help you attain your goals.

Page_brute is based on Yara rules, and Robert shares the rule that he wrote...if you look at it, and follow his reasoning in the post, it's amazingly simple AND it works!

This sort of analysis can be very valuable, particularly if you don't have a memory dump available.  As we learned at OMFW 2013, Volatility is moving in the direction of incorporating the pagefile into analysis, which is fantastic...but that's predicated on the responder's ability to capture a memory dump prior to shutting the system down.

I got yara-python installed (with some help...thanks!) and I then extracted the pagefile from an image I have available.  I had also copied the rule out of Robert's blog post, and pasted it into the default_signatures.yar file that is part of page_brute, and ran the script.  In fact, page_brute.py worked so well, that as it was running through the pagefile and extracting artifacts, MS Security Essentials "woke up" and quarantined several extracted blocks identified as Exploit:js/Blacole, specifically KU and MX variants.  I then opened a couple of the output files from the CMDscan_Optimistic_Blanklines folder, and I wasn't seeing any of the output that Robert showed in his blog post, at least not in the first couple of files.  So, I ran strings across the output files, using the following command:

D:\Tools>strings -n 5 H:\test\output\CMDscan_Optimistic_Blanklines\*.page | find "[Version"

I didn't get anything, so I ran the command again, this time without the "[", and I got a number of strings that looked like Registry key paths.  In the end, this took some setup, downloading a script and running two commands, but you know what...even with that amount of effort, I still got 'stuff' that I would not have gotten as quickly.  Not only has page_brute.py proved to be very useful, it also illustrates what can be done when someone wants to get a job done.

Resources
Excellent Yara post; look here to get the user manual and see how to write rules.

Registry Forensics Class
If you're interested in an online course in analyzing the Windows Registry, Andrew Case, Vico Marziale, and Joe Sylve put together the Registry Analysis Master Class over at The Hacker Academy.  If you're interested in the course, take a look at Ken Pryor's review of the class to see if this is something for you.

Windows Application Experience and Compatibility
Corey's got a new blog post up where he discusses the Windows Application Experience and Compatibility feature, and how the RecentFileCache.bcf file can serve as a data source indicating program execution.  As usual, Corey's post is thorough, referencing and building on previous work.

Corey shared a link to his blog post over on the Win4n6 Yahoo group, and Yogesh responded that he's doing some research along the same lines, as well, with a specific focus on Windows 8 and the AmCache.hve file, which follows the same file format as Windows Registry hives.  Yogesh's blog post regarding the AmCache.hve file can be found here.  Why should you care about this file?  Well, from the post:

This file stores information about recently run applications/programs. Some of the information found here includes Executable full path, File timestamps (Last Modified and Created), File SHA1 hash, PE Linker Timestamp, some PE header data and File Version information (from Resource section) such as FileVersion, ProductName, CompanyName and Description.

This information can be very valuable during analysis; for example, using the SHA-1 hash, an analyst could search VirusTotal for information regarding a suspicious file.  The file reference number from the key name could possibly be used to locate other files that may have been written to the system around the same time.

More Stuff
As I was working on a RegRipper plugin for parsing and presenting the data in the AmCache.hve file, I ran across something interesting, albeit with only one sample file to look at at the moment.  Beneath the Root key is a Programs subkey, and that appears to contain subkeys for various programs.  The values within each of these subkeys do not appear to correspond to what Yogesh describes in his post, but there is some very interesting value data available.  For example, the Files value is a multi-string value that appears to reference various files beneath the Root\Files subkey (as described in Yogesh's post) that may be modules loaded by the program.  This can provide for some very interesting correlation, particularly if it's necessary for your analysis.
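
Just to show how simple it is to start poking at this, here is a rough sketch (again, based on a single sample file, so treat the value names and layout as assumptions) that walks the Root\Programs subkeys and dumps the Files multi-string value so the entries can be compared against the Root\Files subkeys:

#!/usr/bin/perl
# Sketch: dump the "Files" multi-string value from each Root\Programs subkey
# in an AmCache.hve hive; layout based on a single sample and may vary.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <AmCache.hve>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $reg->get_root_key();

my $programs = $root->get_subkey('Root\\Programs') or die "No Root\\Programs key found\n";
foreach my $prog ($programs->get_list_of_subkeys()) {
  print "Program entry: ".$prog->get_name()."\n";
  if (my $files = $prog->get_value("Files")) {
    # in list context, get_data() on a REG_MULTI_SZ value returns the individual strings
    foreach my $ref ($files->get_data()) {
      print "  -> ".$ref."\n";
    }
  }
}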

Yogesh has been posting some great information over on his blog recently, specifically with respect to Registry and Windows Event Log artifacts associated with USB devices connected to Windows 8 systems.  Be sure to add it to your daily reading, or to your blog roll, in order to catch updates.

Monday, July 22, 2013

HowTo: Add Intelligence to Analysis Processes


How many times do we launch a tool to parse some data, and then sit there looking at the output, wondering how someone would see something "suspicious" or "malicious" in the output?  How many times do we look at lines of data, wondering how someone else could easily look at the same data and say, "there it is...there's the malware"?  I've done IR engagements where I could look at the output of a couple of tools and identify the "bad" stuff, after someone else had spent several days trying to find out what was going wrong with their systems.  How do we go about doing this?

The best and most effective way I've found to get to this point is to take what I learned on one engagement and roll it into the next.  If I find something unusual...a file path of interest, something particular within the binary contents of a file, etc...I'll attempt to incorporate that information into my overall analysis process and use it during future engagements.  Anything that's interesting, as a result of either direct or ancillary analysis will be incorporated into my analysis process.  Over time, I've found that some things keep coming back, while other artifacts are only seen every now and then.  Those artifacts that are less frequent are no less important, not simply because of the specific artifacts themselves, but also for the trends that they illustrate over time.

Before too long, the analysis process includes, "access this data, run this tool, and look for these things..."; we can then make this process easier on ourselves by taking the "look for these things" section of the process and automating it.  After all, we're human, get tired from looking at a lot of data, and we can make mistakes, particularly when there is a LOT of data.  By automating what we look for (or, what we've found before), we can speed up those searches and reduce the potential for mistakes.

Okay, I know what you're going to say..."I already do keyword searches, so I'm good".  Great, that's fine...but what I'm talking about goes beyond keyword searches.  Sure, I'll open up a lot of lines of output (RegRipper output, web server logs) in UltraEdit or Notepad++, and search for specific items, based on information I have about the particular analysis that I'm working on (what are my goals, etc.).  However, more often than not, I tend to take that keyword search one step further...the keyword itself will indicate items of interest, but will be loose enough that I'm going to have a number of false positives.  Once I locate a hit, I'll look for other items in the same line that are of interest.

For example, let's take a look at Corey Harrell's recent post regarding locating an injected iframe.  This is an excellent, very detailed post where Corey walks through his analysis process, and at one point, locates two 'suspicious' process names in the output of a volatile data collection script.  The names of the processes themselves are likely random, and therefore difficult to include in a keyword list when conducting a search.  However, what we can take away from just that section of the blog post is that executable files located in the root of the ProgramData folder would be suspicious, and potentially malicious.  Therefore, a script that parses the file path and looks for that condition would be extremely useful, and written in Perl, might look something like this:

# split the full path into its components
my @path = split(/\\/,$filepath);
my $len = scalar(@path);
# flag any executable located directly in the root of the ProgramData folder
if (lc($path[$len - 2]) eq "programdata" && lc($path[$len - 1]) =~ m/\.exe$/) {
  print "Suspicious path found: ".$filepath."\n";
}

Similar paths of interest might include "AppData\Local\Temp"; we see this one and the previous one in one of the images that Corey posted of his timeline later in the blog post, specifically associated with the AppCompatCache data output.

Java *.idx files
A while back, I posted about parsing Java deployment cache index (*.idx) files, and incorporating the information into a timeline.  One of the items I'd seen during analysis that might indicate something suspicious is the last modified time embedded in the server response being relatively close (in time) to when the file was actually sent to the client (indicated by the "date:" field).  As such, I added a rule to my own code, and had the script generate an alert if the "last modified" field was within 5 days of the "date" field; this value was purely arbitrary, but it would've thrown an alert when parsing the files that Corey ran across and discussed in his blog.
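
The rule itself amounts to nothing more than a time delta check. This isn't the plugin code verbatim, just a minimal illustration of the idea, assuming the two fields have already been parsed out of the *.idx file and converted to Unix epoch times:

# Alert if the server-reported "last modified" time is within 5 days of the
# "date" the file was sent to the client; both values are Unix epoch times.
my $five_days = 5 * 24 * 60 * 60;

sub check_idx_times {
  my ($last_modified, $date) = @_;
  if (abs($date - $last_modified) <= $five_days) {
    print "ALERT: content was modified within 5 days of being served\n";
  }
}

The five-day threshold is, as noted, purely arbitrary; the point is that the analyst's reasoning gets encoded once and then applied automatically every time the parser runs.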

Adding intel is generally difficult to do with third-party, closed source tools that we download from someone else's web site, particularly GUI tools.  In such cases, we have to access the data in question, export that data out to a different format, and then run our analysis process against that data.  This is why I recommend that DFIR analysts develop some modicum of programming skill...you can either modify someone else's open source code, or write your own parsing tool to meet your own specific needs.  I tend to do this...many of the tools I've written and use, including those for creating timelines, will incorporate some modicum of alerting functionality.  For example, RegRipper version 2.8 incorporates alerting functionality directly into the plugins. This alerting functionality can greatly enhance our analysis processes when it comes to detecting persistence mechanisms, as well as illustrating suspicious artifacts as a result of program execution.

Writing Tools
I tend to write my own tools for two basic reasons:

First, doing so allows me to develop a better understanding of the data being parsed or analyzed.  Prior to writing the first version of RegRipper, I had written a Registry hive file parser; as such, I had a very deep understanding of the data being parsed.  As a result, I'm better able to troubleshoot an issue with any similar tool, rather than simply saying, "it doesn't work", and not being able to describe what that means.  Around the time that Mandiant released their shim cache parsing script, I found that the Perl module used by RegRipper was not able to parse "big data" value records; rather than contacting the author and saying simply, "it doesn't work", I was able to determine what about the code wasn't working, and provide a fix.  A side effect of having this level of insight into data structures is that you're able to recognize which tools work correctly, and select the proper tool for the job.

Second, I'm able to update and make changes to the scripts I write in pretty short order, and don't have to rely on someone else's schedule to allow me to get the data that I'm interested in or need.  I've been able to create or update RegRipper plugins in around 10 - 15 minutes, and when needed, create new tools in an hour or so.

We don't always have to get our intelligence just from our own analysis. For example, this morning on Twitter, I saw a tweet from +Chris Obscuresec indicating that he'd found another DLL search order issue, this one on Windows 8 (application looked for cryptbase.dll in the ehome folder before looking in system32); as soon as I saw that, I thought, "note to self: add checking for this specific issue to my Win8 analysis process, and incorporate it into my overall DLL search order analysis process".
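
A check like that takes only a few minutes to put into script form. The sketch below is just an illustration of the idea, not a definitive list; the mount point and the DLL names are assumptions that would need to be tailored to the image and to whatever DLL search order issues you're tracking:

# Sketch: flag copies of "interesting" DLLs found outside system32/syswow64
# beneath the Windows folder of a mounted image (drive letter is hypothetical).
use strict;
use warnings;
use File::Find;

my $mount = "G:/";                              # mounted image volume (assumption)
my @watch = qw(cryptbase.dll ntshrui.dll);      # example DLL names to watch for

find(sub {
  return unless -f $_;
  my $name = lc $_;
  return unless grep { $_ eq $name } @watch;
  if ($File::Find::dir !~ m/system32|syswow64/i) {
    print "Possible DLL search order issue: ".$File::Find::name."\n";
  }
}, $mount."Windows");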

The key here is that no one of us knows everything, but together, we're smarter than any one of us.

I know that what we've discussed so far in this post sounds a lot like the purpose behind the OpenIOC framework.  I agree that there needs to be a common framework or "language" for representing and sharing this sort of information, but it would appear that some of the available frameworks may be too stringent, may not offer enough flexibility, or may simply be internal to some organizations.  Or, the issue may be as Chris Pogue mentioned during the 2012 SANS DFIR Summit..."no one is going to share their secret sauce."  I still believe that this is the case, but I also believe that there are some fantastic opportunities being missed because so much is being incorporated under the umbrella of "secret sauce"; sometimes, simply sharing that you're seeing something similar to what others are seeing can be a very powerful data point.

Regardless of the reason, we need to overcome our own (possibly self-imposed) roadblocks for sharing those things that we learn, as sharing information between analysts has considerable value.  Consider this post...who had heard of the issue with imm32.dll prior to reading that post?  We all become smarter through sharing information and intelligence.  This way, we're able to incorporate not just our own intelligence into our analysis processes, but we're also able to extend our capabilities by adding intelligence derived and shared by others.

Wednesday, May 29, 2013

Good Reading, Tools


Reading
Cylance Blog - Uncommon Event Log Analysis - some great stuff here showing what can be found with respect to indirect or "consequential" artifacts, particularly within the Windows Event Logs on Vista systems and above.  The author does a pretty good job of pointing out how some useful information can be found in some pretty unusual places within Windows systems.  I'd be interested to see where things fall out when a timeline is assembled, as that's how I most often locate indirect artifacts.

Cylance Blog - Uncommon Handle Analysis - another blog post by Gary Colomb, this one involving the analysis of handles in memory.  I liked the approach taken, wherein Gary explains the why, and provides a tool for the how.  A number of years ago, I had written a Perl script that would parse the output of the MS SysInternals tool handle.exe (ran it as handle -a) and sort the handles found based on least frequency of occurrence, in order to do something similar to what's described in the post.

Security BrainDump - Bugbear found some interesting ZeroAccess artifacts; many of the artifacts are similar to what is seen in other variants of ZA, as well as in other malware families (i.e., file system tunneling), but in this case, the click fraud appeared in the systemprofile folder...that's very interesting. 

SpiderLabs Anterior - The White X - this was an interesting and insightful read, in that it fits right along with Chris Pogue's Sniper Forensics presentations, particularly when he talks about 'expert eyes'.  One thing Chris is absolutely correct about is that we, as a community, need to continue to shift our focus away from tools and more toward methodologies and processes.  Corey Harrell has said the same thing, and I really believe this to be true.  While others have suggested that the tools help to make non-experts useful, I would suggest that the usefulness of these "non-experts" is extremely limited.  I'm not suggesting that one has to be an expert in mechanical engineering and combustion engine design in order to drive a car...rather, I'm simply saying that we have to have an understanding of the underlying data structures and what the tools are doing when we run those tools.  We need to instead focus on the analysis process.

Java Web Vulnerability Mitigation on Windows - Great blog post that is very timely, and includes information that can be used in conjunction with RegRipper in order to determine initial infection vector (IIV) during analysis.

ForkSec Blog - "new" blog I saw referenced on Twitter one morning, and I started my reading with the post regarding the review of the viaExtract demo.  I don't do any mobile forensics at the moment, but I did enjoy reading the post, as well as seeing the reference to Santoku Linux.

Tools
win-sshfs - ssh(sftp) file system for Windows - I haven't tried this one but it does look interesting.

4Discovery recently announced that they'd released a number of tools to assist in forensic analysis.  I downloaded and ran two of the tools...LinkParser and shellbagger.  I ran LinkParser against a legit LNK file that I'd pulled from a system that contained only a header and a shell item ID list (it had no LinkInfo block), and LinkParser didn't display anything.  I also ran LinkParser against a couple of LNK files that I have been using to test my own tools, and it did not seem to parse the shell item ID lists.  I then ran shellbagger against some test data I've been working with, and found that, similar to other popular tools, it missed some shell items completely.  I did notice that when the tool found a GUID that it didn't know, it said so...but it didn't display the GUID in the GUI so that the analyst could look it up.  I haven't yet had a chance to run some of the other tools, and there are reportedly more coming out in the future, so keep an eye on the web site.

ShadowKit - I saw via Chad Tilbury on G+ recently that ShadowKit v1.6 is available.  Here's another blog post that talks about how to use ShadowKit; the process for setting up your image to be accessed is identical to the process I laid out in WFAT 3/e...so, I guess I'm having a little difficulty seeing the advantages of this tool over native tools such as vssadmin + mklink, beyond the fact that it provides a GUI.

Autopsy - Now has a graphical timeline feature; right now, this feature only appears to include the file system metadata, but this approach certainly has potential.  Based on my experience with timeline analysis, I do not see the immediate value in bringing graphical features to the front end of timeline analysis; there are other tools that utilize a similar approach, and as with those, most often I'm not looking for where or when the greatest number of events occur, but instead for the needle in a stack of needles.  However, I do see the potential for the use of this technique in timeline analysis.  Specifically, adding Registry, Windows Event Log, and other events will only increase the amount of data, but one means for addressing this would be to include alerts in the timeline data, and then show all events as one color, and alerts as another.  Alerts could be based on either direct or indirect/consequential artifacts, and can be extremely valuable in a number of types of cases, directing the analyst's attention to critical areas for analysis.

NTFS TriForce - David Cowen has released the public beta of his NTFS TriForce tool. I didn't see David's presentation on this tool, but I did get to listen to the recording of the DFIROnline presentation - the individual artifacts that David describes are very useful, but real value is obtained when they're all combined.

Auto-rip - Corey has unleashed auto-rip; Corey's done a great job of automating data collection and initial analysis, with the key to this automation being that Corey knows and understands EXACTLY what he's doing and why when he launches auto-rip.  This is really the key to automating any DFIR task...while some will say that "it goes without saying", too often there is a lack of understanding with respect to the underlying data structures and their context when automated tools are run.

WebLogParser - Eric Zimmerman has released a log parser with geolocation, DNS lookups, and more.

Monday, May 13, 2013

Understanding Data Structures

Sometimes at conferences or during a presentation, I'll provide a list of tools for parsing a specific artifact (i.e., MFT, Prefetch files, etc.), and I'll mention a tool or script that I wrote that presents specific data in a particular format.  Invariably when this happens, someone asks for a copy of the tool/script.  Many times, these scripts may not be meant for public consumption, and are only intended to illustrate what data is available within a particular structure.  As such, I'll ask why, with all of the other available tools, someone would want a copy of yet another tool, and the response is most often, "...to validate the output of the other tools."  So, I'm left wondering...if you don't understand the data structure that is being accessed or parsed, how is having another tool to parse it beneficial?

Tools provide a layer of abstraction over the data, and as such, while they allow us access to information within these data structures (or files) in a much more timely manner than if we were to attempt to do so manually, they also tend to separate us from the data...if we allow this to happen.  For many of the more popular data structures or sources available, there are likely multiple tools that can be used to display information from those sources.  But the questions then become, (a) do you understand the data source(s) being parsed, and (b) do you know what the tool is doing to parse those data structures?  Is the tool using an MS API to parse the data, or is it doing so on a binary level? 

A great example of this is what many of us will remember seeing when we have extracted Windows XP Event Logs from an image and attempted to open them in the Event Viewer on our analysis system.  In some cases, we'd see a message that told us that the Event Log was corrupted.  However, it was very often the case that the file wasn't actually corrupted, but instead that our analysis system did not have the appropriate message DLLs installed for some of the records.  Microsoft does, however, provide very clear and detailed definitions of the Event Log structures, and as such, tools that do not use the Windows API to parse the Event Log files can be used to much greater effect, to include parsing individual records from unallocated space.  This could not be done without an understanding of the data structures.

Not long ago, Francesco contacted me about the format of  automaticDestinations Jump List files, because he'd run a text search across an image and found a hit "in" one of these files, but parsing the file with multiple tools gave no indication of the search hit.  It turned out that understanding the format of MS compound file binary files provides us with a clear indication of how to map unallocated 'sectors' within the Jump List file itself, and determine why he'd seen a search hit 'in' the file, but that hit wasn't part of the output of the commonly-used tools for parsing these files.

Another great example of this came to my attention this morning via the SQLite: Hidden Data in Plain Sight blog post from the Linuxsleuthing blog.  This blog post further illustrates my point; however, in this case, it's not simply a matter of displaying information that is there but not displayed by the available tools.  Rather, it is also a matter of correlating the various information that is available in a manner that is meaningful and valuable to the analyst.

The Linuxsleuthing blog post also asks the question, how do we overcome the shortcomings of the common SQLite Database analysis techniques?  That's an important question to ask, but it should also be expanded to just about any analysis technique available, and not isolated simply to SQLite databases.  What we need to consider and ask ourselves is, how do we overcome the shortcomings of common analysis techniques?

Tools most often provide a layer of abstraction over available data (structures, files, etc.), allowing for a modicum of automation and allowing the work to be done in a much more timely manner than using a hex editor.  However, much more is available to us than simply parsing raw data structures and providing some of the information to the analyst.  Tools can parse data based on artifact categories, as well as generate alerts for the analyst, based on known-bad or known-suspicious entries or conditions.  Tools can also be used to correlate data from multiple sources, but to really understand the nature and context of that data, the analyst needs to have an understanding of the underlying data structures themselves.

Addendum
This concept becomes crystallized when looking at any shell item data structures on Windows systems.  Shell items are not documented by MS, and yet are more and more prevalent on Windows systems as the versions progress.  An analyst who correctly understands these data structures and sees them as more than just "a bunch of hex" will reap the valuable rewards they hold.

Shell items and shell item ID lists are found in the Registry (shellbags, itempos* values, ComDlg32 subkey values on Vista+, etc.), as well as within Windows shortcut artifacts (LNK files, Win7 and 8 Jump Lists, Photos artifacts on Windows 8, etc.).  Depending upon the type of shell item, they may contain time stamps in DOSDate format (usually found in file and folder entries), or they may contain time stamps in FILETIME format (found in some variable type entries).  Again, tools provide a layer of abstraction over the data itself, and as such, the analyst needs to understand the nature of the time stamp, as well as what that time stamp represents.  Not all time stamps are created equal...for example, DOSDate time stamps within the shell items are created by converting the file system metadata time stamps from the file or folder that is being referred to, reducing the granularity from 100 nanoseconds to 2 seconds (i.e., the stored seconds value is multiplied by 2 when the time stamp is decoded).
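
For those who haven't worked with them, decoding the DOSDate/DOSTime pair is straightforward bit-shifting; a short sketch (the two arguments are the 16-bit date and time values as stored in the shell item) looks like this:

# Convert a DOSDate/DOSTime pair to a human-readable string; note the
# 2-second granularity of the seconds field.
sub convert_dos_date {
  my ($date, $time) = @_;
  my $day   =  $date        & 0x1f;
  my $month = ($date >> 5)  & 0x0f;
  my $year  = (($date >> 9) & 0x7f) + 1980;
  my $sec   = ($time        & 0x1f) * 2;   # stored as seconds divided by 2
  my $min   = ($time >> 5)  & 0x3f;
  my $hour  = ($time >> 11) & 0x1f;
  return sprintf("%04d-%02d-%02d %02d:%02d:%02d",
                 $year, $month, $day, $hour, $min, $sec);
}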

Resources
Windows Shellbag Forensics - Note: the first colorized hex dump includes a reported invalid SHITEM_FILEENTRY, in green; it's not actually invalid, it's just a different type of shell item.

Monday, October 29, 2012

Links

Being socked in by the weather, I thought it would be a good time to throw a couple of things out there...

Mounting an Image
Folks...in order to test or make use of the Forensic Scanner, you first need to have an image.  If you don't have an image available, you can download sample images from a number of locations online.  Or you can image your own system, or you can use virtual machine files (FTK Imager will mount a .vmdk file with no issues).  However, the Forensic Scanner was not intended to be run against your local, live system.

Once you have an image to work with, you need to mount it as a volume in order to run the Forensic Scanner against it.  If you have a raw/dd image, a .vmdk or .vhd file, or a .E0x file, FTK Imager will allow you to mount any of these in read-only format.

If you have a raw/dd format image file, you can use vhdtool to add a footer to the file, and then use the Disk Manager to attach the VHD file read-only.  If you use this method, or if you mount your image file as VMWare virtual machine, you will also be able to list and mount available VSCs from within the image, and you can run the Scanner against each of those.

If you have any version of F-Response, you can mount a remote system as a volume, and run the Forensic Scanner against it.  Don't take my word for it...see what Matt, the founder of F-Response, says about that!

If you have issues with accessing the contents of the mounted image...Ken Johnson recently tried to access a mounted image of a Windows 8 system from a Windows 7 analysis system...you may run into issues with permissions.  After all, you're not accessing the  image as a logical volume...so, you might try mounting the image as "File System/Read-Only", rather than the default "Block Device/Read-Only", or you may want to run the Scanner using something like RunAsSystem in order to elevate your privileges.

If your circumstances require it, you can even use FTK Imager (FTK Imager Lite v3.x is now available and supports image mounting) to access an acquired image, and then use the export function to export copies of all of the folders and files from the image to a folder on your analysis system, or on a USB external drive, and then run the scanner against that target.

Okay, but what about stuff other than Windows as your target?  Say that you have an iDevice (or an image acquired from one...)...the Forensic Scanner can be updated (it's not part of the current download, folks) to work with these images, courtesy of HFSExplorer.  Caveat: I haven't tested this yet, but from the very beginning, the Forensic Scanner was designed to be extensible in this manner.

Again, if you opt to run the Forensic Scanner against your local drive (by typing "C:\Windows\system32" into the tool), that's fine.  However, I can tell you it's not going to work, so please don't email me telling me that it didn't work.  ;-)

Forensic Scanner Links
Forensic Scanner Links - links where the Forensic Scanner is mentioned:
F-Response Blog: F-Response and the ASI Forensic Scanner
Grand Stream Dreams: Piles o' Linkage
SANS Forensics Blog: MiniFlame, Open Source Forensics Edition

Apparently, Kiran Vangaveti likes to post stuff that other people write...oh, well, I guess that imitation really is the sincerest form of flattery!  ;-)

Observables
The good folks over at RSA have had some interesting posts of late to their "Speaking of Security" blog, and the most recent one by Branden Williams is no exception.  In the post, Branden mentions "observables", as well as Locard's Exchange Principle...but what isn't explicitly stated is the power of correlating various events in order to develop situational awareness and context, something that we can do with timeline analysis.

An example of this might be a failed login attempt or a file modification.  In and of themselves, these individual events tell us something, but very little.  If we compile a timeline using the data sources that we have available, we can begin to see much more with regards to that individual event, and we go from, "...well, it might/could be..." to "...this is what happened."

SANS Forensic Summit 2013
The next SANS #DFIR Summit is scheduled for July 2013 (in Austin, TX) and the call for speakers is now open.

Prefetch Analysis
Adam posted recently regarding Prefetch file names and UNC paths, and that reminded me of my previous posts regarding Prefetch Analysis.  The code I currently use for parsing Prefetch files includes parsing of paths that include "temp" anywhere in the path (via grep()), and provides those paths separately at the end of the output (if found).  Parsing of UNC paths (any path that begins with two back slashes, or begins with "\Device") can also be included in that code.  The idea is to let the computer extract and present those items that might be of particular interest, so that the analyst doesn't have to dig through multiple lines of output.
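
The filtering itself is trivial once the module paths have been extracted from the .pf file; something along these lines (a sketch only...the actual Prefetch parsing is left out here) is all it takes:

# @paths holds the module paths already parsed from a Prefetch file
my @temp = grep { /temp/i } @paths;                     # anything with "temp" in the path
my @unc  = grep { /^\\\\/ || /^\\Device/i } @paths;     # UNC or device paths

print "Paths containing 'temp':\n";
print "  $_\n" foreach (@temp);
print "UNC/device paths:\n";
print "  $_\n" foreach (@unc);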

Monday, September 03, 2012

Links, Tools, Etc.

Windows 8 Forensics
There is some great information circulating about the Interwebs regarding Windows 8 forensics.  There's this YouTube video titled A Forensic First Look, this blog post that addresses reset and refresh artifacts, the Windows 8 Forensics Guide (this PDF was mentioned previously on this blog), this blog post on the Windows 8 TypedURLsTime Registry key, and Kenneth Johnson's excellent PDF on Windows 8 File History...in addition to a number of other available resources.  Various beta and pre-beta versions of Windows 8 have been out for some time, and with each release there seems to be something new...when I went from the first version available for developers to the Consumer Preview, one of the first things I noticed was that I was no longer able to disable the Metro interface.

So what does all this mean?  Well, just like when Windows XP was released, there were changes that would affect how we within the digital analysis community would do our jobs, and the same thing has been true since then with every new OS release.  While our overall analysis process wouldn't change, there are aspects of the new operating system and its included technologies that require us to update the specifics of those processes.

Timeline Analysis
Over at the Sploited blog, there's an excellent post on how to incorporate Java information into your TLN-format timeline, in order to help determine the exploit used to compromise a system.  In addition to the information available in the two previous posts (here, and here, respectively), this post includes code for parsing .idx files, and incorporating log entries into a TLN-format timeline.

Just to be clear, this is NOT a RegRipper plugin (there is often times confusion about this...), but is instead a file parser that you can use to incorporate data into your timeline, similar to parsing Prefetch file metadata.  As such, it can very often add some much-needed detail and context to your analysis.

Posts such as this go hand-in-hand with the excellent work that Corey Harrell has done in determining exploit footprints on compromised systems.

PList Parser
If you do forensics on iDevices, or you get access to iDevice backups via iTunes on a system, you might want to take a look at Maria's PList Parser.  Parsing these files can provide you with a great deal of insight into the user's behavior while using the device.  Maria said that she used RegRipper as the inspiration for her tool, and it's great to see tools like this become available.

ScheduledTask File Parser
Jamie's released a .job file parser, written in Python.  These files, on WinXP and 2003 systems, are in a binary format (in later versions of Windows, they're XML) and like other files (i.e., Prefetch files) can contain some significant metadata.  In the past, I've found analysis of these artifacts to be particularly useful when responding to incidents involving certain threat actors that moved laterally within the compromised infrastructure...one way of doing so was to schedule tasks on remote systems.

Not only does Jamie provide an explanation of what a .job file "looks like", but she also provides references so that folks can look this information up themselves, and develop a deeper understanding of what the tool is doing, should they choose to do so.  Also, don't forget the great work Jamie has done with her MBR parser, particularly if you're performing some sort of malware detection on an acquired image.

Registry Analysis
I ran across this write-up on Wiper recently via Twitter.

In the write-up, the authors state:

"...we came up with the idea to look into the hive slack space for deleted entries."

Hhhmm...okay.  My understanding of "slack space", with respect to the file system, is that it's usually considered to be what's left over between the logical and physical space consumed by a file.  Let's say that there's a file that's 892 bytes; in order to save it to disk, the system will allocate two 512-byte sectors, or 1024 bytes.  As such, the slack space would be the 132 bytes that remain between the logical end of the file and the end of the second physical sector.

Now, this can be true for the hive files themselves, as some data may exist between the logical end of the hive, and the end of the last physical sector.  This may also be true for value data, as well...if the 1024 bytes are allocated for a value, but only 892 bytes are actually written to the allocated space, there may be slack space available.

However, if you look at the graphic associated with the comment (excellent use of Yaru, guys!), the first 4 bytes (DWORD) of the selected data are a positive value, indicating that the key was deleted.  As such, the key becomes part of the unallocated space of the hive file, just like the sectors of a deleted file become part of the unallocated space of a volume or disk.  So, the value appears to have been part of unallocated space of the hive file, rather than slack space.
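
The distinction is easy to demonstrate: each cell within a hive file begins with a 4-byte signed size value, which is negative for cells that are in use and positive for cells that have been freed.  A minimal sketch (assuming $cell holds raw bytes read from the hive at a cell boundary, little-endian byte order) looks like this:

# Determine whether a hive cell is allocated based on the sign of its size field
sub cell_state {
  my ($cell) = @_;
  my $size = unpack("l<", substr($cell, 0, 4));   # signed 32-bit, little-endian
  return ($size < 0) ? "allocated (in use)" : "unallocated (deleted/free)";
}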

With respect to overall Registry analysis, perhaps "...we came up with the idea..." isn't the most systematic approach to that analysis.  Admittedly, the authors found something very interesting, but I'd be interested to know if the authors found an enum\Root\Legacy_RAHDAUD64 key in that Registry hive they were looking at, or if they found a Windows Event Log record with source "Service Control Manager" and an ID of 7035 (indicating a service start message had been sent), and then opted to check for deleted keys in the hive after determining that there were no corresponding visible keys for a service of that name in the System hive.

Looking for Suspicious EXEs
Adam wrote an interesting blog post on finding suspicious PE files via clustering...in short, assuming PE files may have been subject to timestomping (i.e., intentional modification of MFT $STANDARD_INFORMATION attribute time stamps), and attempting to detect these files by "clustering" the PE file compile times.

You can read more about methods for detecting malicious files by reading Joel Yonts' GIAC Gold Paper, Attributes of Malicious Files.

Friday, May 04, 2012

Links and Tools

Windows 8 Forensics Guide
You can now find a free Windows 8 forensics guide over on the Propeller Head Forensics blog.  Amanda's guide is a great way to get started learning about some of the new things that you're likely to see in Windows 8 (if you aren't already running the Consumer Preview edition).

I had an opportunity to meet and listen to Christopher Ard of MS talk about some of the neat new features of Windows 8 recently at the Massachusetts Attorney General's Cyber Crime Conference.  I also sat in on Chris Brown's presentation on ProDiscover, and he stated that he's working on adding support for the new ReFS file system to ProDiscover.  Looks like there are lots of cool things on the horizon with Windows 8 forensic analysis.

Timelines
The Sploited blog has posted part 2 (part 1 is here) of the Forensic Timelines for Beginners series, in which they discuss creating timelines using the tools and techniques illustrated in chapter 7 of Windows Forensic Analysis Toolkit 3/e.

File System Behavior
There's an interesting thread over on the Win4n6 Yahoo Group regarding file system behavior when files are deleted, including being removed from the Recycle Bin.  During the thread, one of the members made the statement that during some vendor training, they'd been told that when files are deleted, Windows will automatically securely wipe the files.  This is, in fact, not the case, as Troy Larson clearly states during the thread.

What this does bring up, as Troy says later in the thread, is that Windows systems are extremely active under the hood.  During the thread, several members say that they did their own testing and found that files were not securely deleted...what this comes back to is that some files may be very quickly overwritten by normal system activity.  This is something that I've pointed out in my books and presentations for some time, particularly when talking about the need for immediate response.  Troy even mentions in a follow-up post that "just opening and editing a Word file creates several temporary and scratch files--more than you would imagine."  Even with no specific user interaction, Windows systems have a great deal of activity that goes on behind the scenes...look at some of the performance enhancements for XP described here, and in particular in the "Prefetch" section.  Windows 7 is very similar, in that it ships with a Scheduled Task that performs a defrag once a week, and another that backs up the main Registry hives every 10 days.  Add to that all of the other activity that occurs on Windows systems, and it's not surprising that some folks are seeing, on an inconsistent basis, that Windows appears to be securely wiping files upon deletion.  This is very important for DF analysts to keep in mind while performing analysis and file or record carving, but also for incident responders to keep in mind, particularly when developing IR procedures...the more immediate the response, the fresher and more pristine the data you will be able to preserve.

Resources
MS File System Behavior Overview

SQLite WAL Files
The DigitalInvestigation blog has an excellent post on SQLite Write Ahead Log files, and their potential as a forensic resource.  I've seen these, as well, in the course of forensic impact analysis, and this is a very good read for folks who want to get a little bit familiar with what these files are all about, and how they can be useful during an examination.

Scripting
Melissa's got a very good post up that demonstrates how useful scripting skills (or "skillz") can be.  Over the years that I've done infosec work, I've found that an ability to write scripts has been invaluable, and I've found that to be even more true in the DFIR realm.  I once held an FTE position where I wrote a Perl script that would reach out across the enterprise, locate all systems that were turned on, query certain Registry keys and return the results to me.  As I began investigating my findings, I was able to develop a white list, and within relatively short order got to the point where I could launch the script before lunch, and return to find a report that was about half a page in length. 
I was able to provide a viable solution that worked extremely well in my environment (rather than fitting the problem to a commercial tool), for free.

If you're interested in trying out some of the things she demonstrates on your Windows box, check out the Resources section below.

Resources
Unix Command Line Tools for Windows
Utilities and SDK from MS
Unix Utilities

Encryption
Encryption has long been a thorn in the side for examiners.  I've had a number of engagements where I was asked to acquire images of systems known to be encrypted, and more than a few where we found out after we got on-site that some of the systems employed whole disk encryption.  In those cases, we opted for a live acquisition via FTK Imager (fully documented, of course).  It appears that Jesse has found a free program that can reportedly decrypt BitLocker-protected volumes.

PE Files
If you do PE analysis, check out the CorkAmi Google Code site.  There's a good deal of very good information there, as well as detailed information regarding the PE file format.

Saturday, April 21, 2012

Tools, Updates...

Tools
Didier has released an updated version of an older viewer that he'd written, called InteractiveSieve.  Based on the description of the viewer, this looks like an excellent tool for performing timeline analysis.

Here's what I would do...many times, I will want to look at a particular date range, so I would run the parse.pl script to extract just that date range from my events file.  I would then open the resulting mini-timeline in Didier's viewer and go about deleting those things that I didn't want to see, and colorizing things that might be important, or interesting but not specifically relevant without further investigation.
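
This isn't parse.pl itself, but the basic idea is simple enough to sketch; assuming the standard five-field TLN events file format with the Unix epoch time as the first pipe-separated field, a date-range filter amounts to:

# Print only those events whose epoch time falls within a given range
my ($file, $start, $end) = @ARGV;
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
while (my $line = <$fh>) {
  chomp $line;
  my ($t) = split(/\|/, $line, 2);
  next unless ($t && $t =~ /^\d+$/);
  print $line."\n" if ($t >= $start && $t <= $end);
}
close($fh);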

AppCompatCache
The folks over at Mandiant posted to the M-unition blog regarding the Application Compatibility Cache, which is maintained in the Registry (see their paper).  They've released a free tool to view this data, and in less than 30 minutes, I wrote up a RegRipper plugin to parse this data.  The first test data that I had available was 32-bit XP, so it's limited, but it's a start, and I think that it really shows the power of open source.  I don't say this to take anything away from the efforts of the Mandiant folks...rather, I thank them for their willingness to share the results of their research with the community at large.  I provided a copy of the plugin to the SANSForensics team, and gave them permission to post the code via the SANS Case Leads.  Rob contacted me with the test results, which weren't good.  It appears that the module I use has an issue, which I describe below in the "Troubleshooting" section.

Now, how is this information useful?  Check out Mandiant's paper...this particular data source is very rich in data, and I'll be updating the plugins once I get the module "fixed".

Open Source
I had posted to the Win4n6 Yahoo group some thoughts I had on the power of open source tools, with respect to the information Mandiant released.  The purpose of the post was not to say, "hey, look at me...I wrote another plugin!!", but rather to demonstrate the power and flexibility of open source tools, and how they can quickly be extended to provide a capability that might take days or weeks for commercial applications.  Andrew provided another example, one that involved extending Volatility during Stuxnet analysis.  As someone who's done DFIR work for a long time, I really appreciate having the ability to decide what analysis I will do, rather than being penned in by a commercial tool or framework.

Troubleshooting
Okay, back to the information Mandiant provided regarding the Registry value...one of the members of the Win4n6 group (Ben) sent me a Windows 7 System hive...I have several, which I had opened in MiTeC's WRR and found the value in question to be all zeros...yet Mandiant's free tool pulled a great deal of data from the hive.  I checked again, and sure enough, the value in both ControlSets was all zeros.  So, based on a suggestion from Ben, I tried Yaru from TZWorks, and found all of the data.  I also wrote up a quick Perl script to extract the data from the value and place it into a file; from there, I opened up the file in a hex editor and could easily view the data.  It turns out that the issue is that Yaru is apparently the only tool of those I looked at that correctly handles 'db' node types within the Registry.  I have attempted to contact the author of the Parse::Win32Registry module about this, in hopes that it's an easy fix.
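
The value-dump script was nothing fancy; something like the sketch below (not the exact script, and keep in mind that the whole point here is that the module at the time did not correctly handle 'db' records, so the data returned for such values may be incomplete) is enough to get raw value data into a file for review in a hex editor:

# Sketch: write a Registry value's raw data to a file for hex editor review
use strict;
use warnings;
use Parse::Win32Registry;

my ($hive, $keypath, $valname, $out) = @ARGV;
my $reg = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $key = $reg->get_root_key()->get_subkey($keypath) or die "Key not found\n";
my $val = $key->get_value($valname) or die "Value not found\n";

open(my $fh, '>', $out) or die "Cannot open $out: $!\n";
binmode($fh);
print $fh $val->get_data();
close($fh);
print "Wrote ".length($val->get_data())." bytes to $out\n";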

Registry Unallocated Space
Another interesting aspect of TZWorks' Yaru is that when you load the hive, it indexes the contents of the hive...and finds deleted keys.  Pretty cool!

I wanted to see how that compared to regslack, so I ran regslack against the same hive and got a bit different information; I got the same deleted key that Yaru found, plus one other, and I also got a LOT of unallocated space!  The web page for Yaru says that finding deleted keys is an experimental capability, which is great...it's also great that someone else is working on this topic.  Yolanta's work and release of regslack were a significant milestone for Registry analysis (here is one of my first blog posts on the topic).

The description of Yaru also states that you can view "cell slack", or unused "key value data space"...that's something else that might be very interesting to look into, although I'm not completely clear on what value there may be in data included in cell slack.

A while back, while I was involved in PCI forensic assessments, I used our documented process once I was back in the lab, and my scan for CCNs with an acquired image turned up hits within the Software and NTUSER.DAT hives on a system.  I thought that was odd...looking at the data surrounding the hits, it wasn't 100% clear to me that these were actual CCNs; that is, there were no indications that this was track data.  So I exported the hives and ran searches across the "live" Registry...and got nothing.  It turned out that the CCNs were part of unallocated space within the hive files...so understanding that there is unallocated space within a hive file can mean the difference between saying, "CCNs were found in the Registry", and actually providing accurate information in your report (as it can affect your customer).

Malware Persistence
The MMPC site has a description of Trojan:Win32/Reveton.A, which provides a good deal of information about this bit of ransomware.  Apparently, this baddie locks the infected system and displays a warning to the user that they've been reported to authorities as possessing illicit material.

Okay, so what is the persistence mechanism?  This one creates a Windows shortcut (LNK file) for itself in the Windows Startup folder.  Since the malware arrives as a DLL, it uses rundll32.exe to launch itself via the shortcut.

So what this gives us is some very good info to add to our malware detection checklist, doesn't it?  Not only should we check the Startup folders for shortcuts (easy check; can accomplish this with a simple 'dir' command), but we might want to get some additional information via Prefetch analysis, particularly of Prefetch files that start with "rundll32.exe".
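
Both checks are easy to script against a mounted image; the folder locations below are for a Win7-style image mounted at G:\ and are assumptions that would need to be adjusted per case:

# Sketch: list Startup folder shortcuts and rundll32.exe Prefetch files
use strict;
use warnings;
use File::Glob;

my @startup  = File::Glob::bsd_glob('G:/Users/*/AppData/Roaming/Microsoft/Windows/Start Menu/Programs/Startup/*.lnk');
my @prefetch = File::Glob::bsd_glob('G:/Windows/Prefetch/RUNDLL32.EXE*.pf');

print "Startup folder shortcuts:\n";
print "  $_\n" foreach (@startup);
print "rundll32.exe Prefetch files:\n";
print "  $_\n" foreach (@prefetch);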

"Community"
There's a new post up over on the Hexacorn blog, and comments are turned off, so I can't comment there...so let's do it here.  ;-)  Overall, let me say that I find most of the posts there interesting, and this one is no exception, but in this case, I'm only interested in four words from the post:

Like many before me...

I think that what a lot of us lose sight of is the fact that with few exceptions, there's probably someone out there who has faced the same challenges we've seen, and had to deal with the same...or at least very similar issues, as we have.  So, when faced with these challenges, we have options...we can seek help, or (if we're racing the clock to get stuff done) we can try to muddle through things and figure things out for ourselves.  I've heard people say this...that they want to wrestle with the issue and try to figure things out for themselves...even though there are others willing to help, or material and documentation available.  This is very noble, but think about it...is there any wonder why we don't see anything from them about what they learned later?  It's probably because they spent so much time wrestling that they don't have much time for anything else.

Recently, Girl, Unallocated gave an excellent DFIROnline presentation that involved a spoliation case and CCleaner.  Now, I've had an opportunity recently to work with the latest version of this tool...not in a forensic analysis capacity...so I'm a little familiar with it.  However, not long ago, I dealt with a case that involved an older version of WindowWasher.  So this shows that in a lot of ways, there are very few "new" cases; that is to say, it's likely that technology aside (XP vs. Win7, for example), there are very few, "This is what I need to determine..." cases.

Need to recover Event Log records (or MFT records) from unallocated space on an XP workstation or Win2003 server?  Perform USB device analysis?  Determine if that malware on the system actually executed?  I'm sure that someone else has run into this before...probably many "somebodys".

So, what to do?  Well, we can start by recognizing that if we hit a road block, there's a way around it that someone else may already have found.  Ask for assistance.  It's also helpful if (a) the response isn't taken "off list", and (b) when the dust has settled, there's some final feedback or closure.

Remember, no one of us is as smart as all of us together.

To close this out, I recently had an event that made the issue of "community" clear to me.  Back in 2009, I'd written a script to parse Windows XP Scheduled Task/.job files (pull out the command run, the last time the job was run, and the status), and I keep it as part of my personal stash of timeline tools.  In recent weeks, I had two different people ask me for a copy of the script, which I was somewhat hesitant to provide because of my past experience with doing this sort of thing.  I decided to give the first person a chance and sent them the script.  I was notified that they received it, but getting feedback on how well it worked was like pulling hen's teeth.  So when the second person came along, I was just gonna say, "No, thanks" to their request...but they had a compelling need.  And they ran into an issue with the script...during testing, I'd never encountered a .job file that had been created but never run.  I simply don't have any of those types of files available...but I asked Corey for some help, and thankfully, he was able to provide a couple of files.  All in all, the script is working very well, providing not just some useful output, but also providing that output in TLN format.  Because Corey provided some sample files, I didn't need to go find an XP install disk, set up a VM, etc., etc., and was able to provide a solution much sooner.



Thursday, April 05, 2012

New Tools, Registry Findings

Zena Forensics: Recipe - Recipes are cool.  I like to cook.  Not everyone cooks the same way...if they did, how much fun would that be?  Not long ago, on a blog not far away, I ran across a "recipe" where someone took some of the scripts I wrote for parsing Windows XP/2003 Event Log records and adapted them to Windows Vista+ Windows Event Logs by tying in MS's LogParser.

LogParser is an extremely useful tool.  I recently sought some assistance on the Win4n6 Yahoo group with getting LogParser to output Windows Event Log TimeGenerated times in UTC format, rather than local time (from my analysis system).  This was an important issue in some testing I was doing recently.

The command I ended up using looked similar to the following:

Logparser -i:evt -o:csv "SELECT RecordNumber,TO_UTCTIME(TimeGenerated),EventID,SourceName,Strings from System" > system.csv

The key element is the TO_UTCTIME() function wrapped around the TimeGenerated field.  Using this function, I didn't have to reset my system clock to GMT, add more "moving parts", or do anything else that would have potentially led to errors.

What I really like about this recipe is that the author needed to do something that he may not have had the ability to do through commercial tools, and he did it.  Not only that, but he did it with open source, and provided it to others.  Great job, and I, for one, greatly appreciate the fact that the author decided to share the script.

RegRipper Plugin Maintenance Script - Recently, Corey Harrell and Cheeky4n6Monkey (why does that name make me think of the episode of "Family Guy" where Chris and Peter made up the "Handi-Quacks" cartoon??) put their heads together and came up with a very interesting tool.  In short, the apparent back-story is that Corey mused that "it would be cool" to have a tool that would run through the RegRipper plugins directory, as well as the profiles, and determine which plugins were available that had not been included in a profile.  Cheeky took this and ran with it, and came up with the RegRipper plugin maintenance script.

Up to this point, per Windows Registry Forensics, you could view a listing of plugins a couple of ways.  One is to use rip.pl/.exe at the command line; the following command will print out (to STDOUT) a list of available plugins:

rip.pl -l

If you add the "-c" switch, you can get that listing in .csv format.
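For example, the following (the output file name is just an illustration) sends that CSV listing to a file that can be opened in a spreadsheet application:

rip.pl -l -c > plugins.csv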

The other way to view the plugins is to use the Plugin Browser, which is a graphical tool that lets you browse the available plugins, as well as create your own profile.

As with the recipe listed in the first part of this post, I greatly appreciate the effort that went into creating this script, as well as the fact that it was provided to the community at large.  You can download the script from Cheeky's site, or you may see it referenced at Brett's RegRipper site.  I also think that this is a great benefit to the community, as I'm sure that there are folks out there who didn't even know they needed a script like this, but will end up finding it extremely valuable.

Registry Findings
A while back, I wrote some code to parse Windows 7 Jump Lists, and as part of my research for that project, I ran into volume GUIDs (within the LNK format TrackerData block), which are based on the UUID v1 format specification, detailed in RFC 4122.  Two of the pieces of information of interest that can be parsed from the GUID are a time stamp, and a MAC address.
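As an aside, pulling those two items out of a GUID string is fairly straightforward.  What follows is a minimal sketch (not the code from the Jump List parser itself; the GUID shown is a made-up example, and the script assumes a Perl build with 64-bit integer support):

use strict;
use warnings;

my $guid = "53bb6fb1-7be3-11e1-a2cb-0026b938eb73";   # hypothetical example GUID
my ($time_low, $time_mid, $time_hi, $clock_seq, $node) = split(/-/, $guid);

# 60-bit time stamp: count of 100-nanosecond intervals since the UUID epoch (1582-10-15)
my $ts = ((hex($time_hi) & 0x0FFF) << 48) | (hex($time_mid) << 32) | hex($time_low);
# Subtract the offset to the Unix epoch (1970-01-01) and convert to seconds
my $unix = int(($ts - 0x01B21DD213814000) / 10_000_000);
printf "Time stamp : %s UTC\n", scalar gmtime($unix);

# The node field holds the 48-bit MAC address of one of the system's interfaces
printf "MAC address: %s\n", join(":", unpack("(A2)6", $node));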

I recently caught something online that indicated to me that there might be other opportunities to make use of the work that I'd already done.  Specifically, some of the values beneath the MountedDevices key (those that begin with "\??\Volume"), as well as some of the subkeys beneath a user's MountPoints2 key, are volume GUIDs maintained in the UUID v1 format.  We already know that these two pieces of information are used to help us map the use of USB devices on systems, so how does this initial information about the GUID format help us?

The usefulness of the MAC address should be somewhat intuitive.  Perhaps more than anything else, what this means is that the MAC address is, in fact, stored in the Registry...not so much "stored" in the sense that there's a value named "MAC address", but in the sense that it's there.  Testing against a live system corroborates the fact that a MAC address is used; I say "a" MAC address because my testing system is a laptop and has a number of network interfaces (LAN, WLAN, VirtualBox).  Interestingly, one of the MAC addresses that appeared in several GUIDs was from VMWare, which had been installed on my system at one point (I've since removed it).  Since entries from the UserAssist key showed when I had installed VMPlayer (i.e., launched the installer), I had a time frame for when the device(s) could have been used.

Note: Over on the Girl, Unallocated blog, Case Experience #2.4 illustrates some good examples of the volume GUIDs, from both the MountedDevices key, and the user's MountPoints2 key.

In a couple of cases, I found devices...specifically, the laptop's built-in hard drive and DVD/optical drive...where the node identifier didn't correlate to a MAC address from my system.  This was an interesting finding, but not something that really needs to be run down at the moment.

What this demonstrates is that the MAC address of a system is, in fact, recorded in the Registry, albeit not necessarily as a value named "MAC address".

More research is required regarding the time stamp.

Addendum, 20120410: Testing indicates that the time stamp points to the boot time for the boot session during which the device was connected.

Monday, January 02, 2012

Stuff

Using RegRipper
Russ McRee let me know that the folks at Passmark recently posted a tutorial on how to use their OSForensics tool with RegRipper.

Speaking of RegRipper, I was contacted not long ago about setting up a German mirror for RegRipper...while it doesn't appear to be active yet, the domain has been set aside, and I'm told that the guys organizing it are going to use it not only as a mirror, but also as a site for some of the plugins they'll be getting in that are specific to the work they've been doing.

If you're into Gentoo Linux, there's also this site from Stefan Reimer, which contains a RegRipper ebuild for that platform.


Updated tool:  Stefan over on the Win4n6 Yahoo group tried out the Jump List parser code and found out that, once again, I'd reversed two of the time stamps embedded in the LNK file parsing code.  I updated the code and reposted the archive.  Thanks!

Meetups
With respect to the NoVA Forensics Meetups, I posted here asking what folks thought about moving them to the DFIROnline meetups, and I tweeted something similar.  Thus far, I have yet to receive a response to the blog post, and of the responses I've seen on Twitter, the vast majority (2 or 3...I've only seen like 4 responses...) indicate that moving to the online format is just fine.  I did receive one response from someone who seems to like the IRL format...although that person also admitted that they haven't actually been to a meetup yet.

So...it looks like for 2012, we'll be moving to the online format.  Looking at the lineup thus far, we already have some good presentations coming up in the near future.

Speaking of which, offering to give a presentation, or asking for some specific content to be presented, is a great way to contribute to the community.  Just something to keep in mind...if you're going to say, "...I'd like to hear about this topic", be prepared to engage in a discussion.  This isn't to say that someone's going to come after you and try to belittle your idea...not at all.  Instead, someone willing to present on the topic may need more information about your perspective, what you've tried (if anything), any research that you've already done, etc.  So...please be willing to share ideas of what you'd like to see presented, but keep in mind that "...what do you mean by that?" is NOT a slam.

New Tools
File this one under "oh, cr*p..."...

Seems setmace.exe has been released...if you haven't seen this yet, it apparently overcomes some of the issues with timestomp.exe; in particular, it is reportedly capable of modifying the time stamps in both the $STANDARD_INFORMATION and the $FILE_NAME attributes within the MFT.  However, it does so by creating a randomly-named subdirectory within the same volume, copying the file into the new directory, and then copying it back (Note: the description on the web page uses "copy" and "move" interchangeably).

Okay, so what does this mean to a forensic analyst, if something like this is used maliciously?  I'm going to leave that one to the community...

The folks at SimpleCarver have released a new tool to extract contents from the CurrentDatabase_327.wmdb file, a database associated with the Windows 7 Windows Media Player.   If you're working an exam that involves the use of WMP (i.e., you've seen the use of the application via the Registry and/or Jump Lists...), then you may want to consider taking a look at this tool.

You might also want to check out some of their other free tools.

Melissa posted to her blog regarding a couple of interesting tools for pulling information from memory dumps; specifically, pdgmail and Skypeex.  Both tools apparently require that you run strings first, but that shouldn't be a problem...the cost-benefit analysis seems to indicate that it's well worth running another command line tool.  An alternative to running these tools directly against a memory dump would be to use Volatility or the MoonSols Windows Memory Toolkit to convert a hibernation file to a raw dump format, and then run these tools against the result.
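For reference, with recent versions of Volatility the conversion looks something like the following (a sketch only; the file names are placeholders, and the profile has to match the system the hibernation file came from):

vol.py -f hiberfil.sys --profile=Win7SP1x86 imagecopy -O memdump.raw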

Speaking of tools, Mike posted a list of non-forensics tools that he uses on Windows systems to his WriteBlocked blog.  It's a very good list, with a lot of useful tools (including several I've used myself) on it.  I recently used Wireshark to validate some network traffic...another tool that you might consider using alongside Wireshark is NetworkMiner...it's described as an NFAT tool, so I can see why it's not on Mike's list.  I use VirtualBox...I have a copy of the developer's build of Windows 8 running in it.

Wiping Utilities
Claus is back, and this time has a nice list of wiping utilities.  As forensic analysts, many times we have to sanitize the media that we're using, so having access to these tools is a very good thing.  I've always enjoyed Claus's posts, as well, and hope to see him posting more and more often in 2012.

Can anyone provide a technical reason why wiping with 7 passes (or more) is "better" than wiping with just 1 pass?

File Formats
I was reading over Yogesh Khatri's posts over at SwiftForensics.com, and found this post on IE RecoveryStore files.  Most analysts who have done any work with browser forensics are aware of the value of files that allow the browser to recover previous sessions...these resources can hold a good deal of potentially valuable data.

About halfway down the post, Yogesh states:

All files are in the Microsoft OLE structured storage container format.

That's awesome...he's identified the format, which means that we can now parse these files.  Yogesh mentions free tools, and one of the ones I like to use to view the contents of OLE files is MiTeC's SSV, as it not only allows me to view the file format and streams, but I can also extract streams for further analysis. 

Another reason I think that this is cool is that I recently released the code I wrote to parse Windows 7 Jump Lists (I'd previously released code to parse Win7 Sticky Notes), and the RecoveryStore files follow a similar basic format.  Also, Yogesh mentions that there are GUIDs within the file that include 60-bit UUID v1 time stamps...cool.  The Jump List parser code package includes LNK.pm, which contains some Perl code that I put together to parse exactly those artifacts!

At this time, I don't have (nor do I have access to) any RecoveryStore files to work with, with respect to writing a parser...however, over time, I'm sure that the value of these artifacts will reach a point where someone writes, or contributes to writing, a parser for these files.
  

Monday, November 14, 2011

Tool Update - WiFi Geolocation

I wanted to let everyone know that I've updated the maclookup.pl Perl script which can be used for WiFi geolocation; that is, taking the MAC address for a WAP and performing a lookup in an online database to determine if there are lat/longs available for that address.  If there are, then you can convert the lat/long coordinates into a Google map for visualization purposes.

A while back I'd posted the location of WiFi WAP MAC addresses within the Vista and Windows 7 Registry to ForensicArtifacts.com.  This information can be used for intelligence purposes, particularly WiFi geolocation; that is, if the WAP MAC address has been mapped and the lat/longs added to an online database, they can then be looked up and plotted on a map (such as Google Maps).  I've blogged about this, and covered it in my upcoming Windows Forensic Analysis 3/e.  I also wrote maclookup.pl, which used a URL to query the Skyhook Wireless database to attempt to retrieve lat/longs for a particular WAP MAC address.  As it turns out, that script no longer works, and I've been looking into alternatives.

One alternative appears to be WiGLE.net; there's free search functionality available, but it requires registration to use.  Registration is free, and you must agree to non-commercial use during the registration process.  Fortunately, there's a Net::Wigle Perl module available, which means that you can write your own code to query WiGLE, get lat/longs, and produce a Google Map...but you have to have Wigle.net credentials to use it.  I use ActiveState Perl, so installation of the module was simply a matter of extracting the Wigle.pm file to the C:\Perl\site\lib\Net directory.

So, I updated the maclookup.pl script, using the Net::Wigle module (thanks to the author of the module, as well as Adrian Crenshaw, for some assistance in using the module).  I wrote a CLI Perl script, macl.pl, which performs the database lookups, and requires you to enter your Wigle.net username/password in clear text at the command line...this shouldn't be a problem, as you'll be running the script from your analysis workstation.  The script takes a WAP MAC address, or a file containing MAC addresses (or both), at the prompt, and allows you to format your output (lat/longs) in a number of ways:

- tabular format
- CSV format
- Each set of lat/longs in a URL to paste into Google Maps
- A KML file that you can load into Google Earth

All output is sent to STDOUT, so all you need to do is add a redirection operator and the appropriate file name, and you're in business.
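As an illustration of that last option, building the KML really just amounts to filling in a template for each WAP; here's a minimal, hypothetical sketch (the MAC address and coordinates are made up) of what writing out one such file might look like:

use strict;
use warnings;

# Hypothetical example data: WAP MAC address => [lat, long] as returned from the lookup
my %wap = ("00:26:b9:38:eb:73" => [38.889484, -77.035278]);

print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
print "<kml xmlns=\"http://www.opengis.net/kml/2.2\">\n<Document>\n";
foreach my $mac (keys %wap) {
  my ($lat, $lon) = @{$wap{$mac}};
  # KML lists coordinates as longitude,latitude
  print "  <Placemark>\n";
  print "    <name>".$mac."</name>\n";
  print "    <Point><coordinates>".$lon.",".$lat."</coordinates></Point>\n";
  print "  </Placemark>\n";
}
print "</Document>\n</kml>\n";

Redirect that output to a .kml file, and it will open directly in Google Earth.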

The code can be downloaded here (macl.zip).  The archive contains a thoroughly-documented script, a readme file, and a sample file containing WAP MAC addresses.  I updated my copy of Perl2Exe in order to try and create/"compile" a Windows EXE from the script, but there's some more work that needs to be done with respect to modules that "can't be found". 

Getting WAP MAC Addresses
So, the big question is, where do you get the WAP MAC addresses?  Well, if you're using RegRipper, the networklist.pl plugin will retrieve the information for you.  For Windows XP systems, you'll want to use the ssid.pl plugin.
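For example (the path to the extracted hive is just an illustration), running the plugin against a Software hive exported from an image would look something like this:

rip.pl -r D:\case\Software -p networklist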


Addendum: On Windows 7 systems, information about wireless LANs to which the system has been connected may be found in the Microsoft-Windows-WLAN-AutoConfig/Operational Event Log (event IDs vary based on the particular Task Category).

Important Notes
Once again, there are a couple of important things to remember when running the macl.pl script.  First, you must have Perl and the Net::Wigle Perl module installed.  Neither is difficult to obtain or install.  Second, you MUST have a Wigle.net account.  Again, this is not difficult to obtain.  The readme file in the archive includes simple instructions, as well.

Resources
Adrian wrote a tool called IGiGLE.exe (using AutoIT) that allows you to search the Wigle.net database (you have to have a username and password) based on ZIP code, lat/longs, etc.

Here is the GeoMena.org lookup page.

Here is a review of some location service APIs.  I had no idea there were that many.

Thursday, October 27, 2011

Tools and Links

Not long ago, I started a FOSS page for my blog so that I don't have to keep going back and searching for various tools...if I find something valuable, I'll simply post it to that page and won't have to keep looking for it.  You'll notice that I really don't have much in the way of descriptions posted yet, but that will come, and hopefully others will find the page useful.  That doesn't mean the page is stagnant...not at all.  I'll be updating it as time goes on.

Volatility
Melissa Augustine recently posted that she'd set up Volatility 2.0 on Windows, using this installation guide, and using the EXE for Distorm3 instead of the ZIP file.  Take a look, and as Melissa says, be sure to thoroughly read and follow the instructions for installing the various plugins.  Thanks to Jamie Levy for providing such clear guidance/instructions, as I really think that doing so lowers the "cost of entry" for such a valuable tool.  Remember..."there are more things in heaven and earth than are dreamt of in your philosophy."  That is, performing memory analysis is a valuable skill to have, particularly when you have access to a memory dump, or to a live system from which you can dump memory.  Volatility also works with hibernation files, from which considerable information can be drawn, as well.

WDE
Now and again, you may run across whole disk encryption, or encrypted volumes on a system.  I've seen these types of systems before...in some cases, the customer has simply asked for an image (knowing that the disk is encrypted) and in others, the only recourse we have to acquire a usable image for analysis is to log into the system as an Admin and perform a live acquisition.

- TCHunt
- ZeroView from Technology Pathways, to detect WDE (scroll down on the linked page)

You can also determine if the system had been used to access TrueCrypt or PGP volumes by checking the MountedDevices key in the Registry (this is something that I've covered in my books).  You can use the RegRipper mountdev.pl plugin to collect/display this information, either from a System hive extracted from a system, or from a live system that you've accessed via F-Response.
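Again, a quick example (the hive path is illustrative):

rip.pl -r D:\case\System -p mountdev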

Timelines
David Hull gave a presentation on "Atemporal timeline analysis" at the recent SecTorCA conference (you can find the presentation .wmv files here), and posted an abridged version of the presentation to the SANS Forensic blog (blog post here).

When I saw the title, the first thing I thought was...what?  How do you talk about something independent of time in a presentation on timeline analysis?  Well, even David mentions at the beginning of the recorded presentation that it's akin to "asexual sexual reproduction"...so the title is meant to be an oxymoron.  In short, what the title seems to refer to is performing timeline analysis during an incident when you don't have any sort of time reference from which to start your analysis.  This is sometimes the case...I've performed a number of exams with very little information from which to start my analysis, but finding something associated with the incident often gave me a point of entry into the timeline, providing a significant level of context to the overall incident.

In this case, David said that the goal was to "find the attacker's code".  Overall, the recorded presentation is a very good example of how to perform analysis using fls and timelines based solely on file system metadata, and using tools such as grep to manipulate (or, as David puts it, "pivot on") the data.  In short, the SANS blog post doesn't really address the use of "atemporal" within the context of the timeline...you really need to watch the recorded presentation to see how that term applies.

Sniper Forensics
Also, be sure to check out Chris Pogue's "Sniper Forensics v3.0: Hunt" presentation, which is also available for download via the same page.  There are a number of other presentations that would be very good to watch, as well...some cover memory analysis.  The latest iteration of Chris's "Sniper Forensics" presentations (Chris is getting a lot of mileage from these things...) makes a very important point regarding analysis: in a lot of cases, whether an artifact appears to be relevant to a case depends on the analyst's experience.  A lot of analysts find "interesting" artifacts, but many of these artifacts don't relate directly to the goals of their analysis.  Chris gives some good examples of an "expert eye"; in one slide, he shows an animal track.  Most folks might not really care about that track, but to a hunter, or someone like me (I ride horses in a national park), the track says a great deal about what to expect to see.

This applies directly to "Sniper Forensics"; all snipers are trained in observation.  Military snipers are trained to quickly identify military objects, and to look for things that are "different".  For example, snipers will be sent to observe a route of travel, and will recognize freshly turned earth or a pile of trash on that route when the sun comes up the next day...this might indicate an attempt to hide an explosive device.

How does this apply to digital forensic analysis?  Well, if you think about it, it is very applicable.  For example, let's say that you happen to notice that a DLL was modified on a system.  This may stand out as odd, in part because it's not something that you've seen a great deal of...so you create a timeline for analysis, and see that there wasn't a system or application update at that time. 

Much like a sniper, a digital forensic analyst must be focused.  A sniper observes an area in order to gain intelligence...enemy troop movements, civilian traffic through the area, etc.  Is the sniper concerned with the relative airspeed of an unladen swallow?  While that artifact may be "interesting", it's not pertinent to the sniper's goals.  The same holds true with the digital forensic analyst...you may find something "interesting" but how does that apply to your goals, or should you get your scope back on the target?

Data Breach 'Best Practices'
I ran across this article recently on the GovernmentHealthIT site, and while it talks about breach response best practices, I'd strongly suggest that all four of these steps be performed before a breach occurs.  After all, while the article specifies PII/PHI, regulatory and compliance organizations for those and other types of data (PCI, for example) specifically state the need for an incident response plan (PCI DSS para 12.9 is just one example).

Item 1 is taking an inventory...I tell folks all the time that when I've done IR work, one of the first things I ask is, "Where is your critical data?"  Most folks don't know.  A few who did know have also claimed (incorrectly) that it was encrypted at rest.  I've only been to one site where the location of sensitive data was known and documented prior to a breach, and that information not only helped our response analysis immensely, it also reduced the overall cost of the response (in fines, notification costs, etc.) for the customer.

While I agree with the sentiment of item 4 in the article (look at the breach as an opportunity), I do not agree with the rest of that item; i.e., "the opportunity to find all the vulnerabilities in an organization—and find the resources for fixing them." 

Media Stuff
Brian Krebs has long followed and written on the topic of cybercrime, and one of his recent posts is no exception.  I had a number of take-aways from this post that may not be intuitively obvious:

1.  "Password-stealing banking Trojans" is ambiguous, and could be any of a number of variants.  The "Zeus" (aka, Zbot) Trojan  is mentioned later in the post, but there's no information presented to indicate that this was, in fact, a result of that specific malware.  Anyone who's done this kind of work for a while is aware that there are a number of malware variants that can be used to collect online banking credentials.

2.  Look at the victims mentioned in Brian's post...none of them is a big corporate entity.  Apparently, the bad guys are aware that smaller targets are less likely to have detection and response capabilities (*cough*CarbonBlack*cough*).  This, in turn, leads directly to #3...

3.  Nothing in the post indicates that a digital forensics investigation was performed on systems at the victim location.  With no data preserved, no actual analysis was performed to identify the specific malware, and there's nothing on which law enforcement can build a case.

Finally, while the post doesn't specifically mention the use of Zeus at the beginning, it does end with a graphic showing detection rates of new variants of the Zeus Trojan over the previous 60 days; the average detection rate is below 40%.  While the graphic is informative, a detection rate that low simply reinforces the point that AV alone can't tell you what actually happened on a system...that takes analysis.

More Media Stuff
I read this article recently from InformationWeek that relates to the recent breach of NASDAQ systems; I specifically say "relates to" the breach, as the article specifies, "...two experts with knowledge of Nasdaq OMX Group's internal investigation said that while attackers hadn't directly attacked trading servers...".  The title of the article includes the words "3 Expected Findings", and the article is pretty much just speculation about what happened, from the get-go.  In fact, the article goes on to say, "...based on recent news reports, as well as likely attack scenarios, we'll likely see these three findings:".  That's a lot of "likely" in one sentence, and this much speculation is never a good thing.


My concern is that the overall take-away from this is going to be "NASDAQ trading systems were hit with SQL injection", and folks are going to be looking for that sort of thing...and some will find it.  But others will miss what's really happening while they're looking in the wrong direction.

Other Items
F-Response TACTICAL Examiner for Linux now has a GUI
Lance Mueller has closed his blog; old posts will remain, but no new content will be posted