Friday, December 30, 2011

Jump List Parser Code Posted

As a follow-up to my recent Jump List Analysis blog post, I've posted the Jump List parser code that I've been talking about.

Again, this isn't a Windows GUI program.  The code consists of two Perl modules that I wrote (and I'm not an expert in either Perl OO programming or writing Perl modules...), and the available archive contains a couple of example scripts that demonstrate simple uses of the modules.

I wrote these modules in order to provide maximum flexibility to the analyst.  For example, I use a five-field timeline (TLN) format for a good bit of my analysis, and that's not something I can get from available tools...not without manually exporting the contents of those tools and writing a separate parser.  Also, I know some folks who really love to use SQLite databases (Mark McKinnon), so providing the code in this manner allows those analysts to write scripts using the Perl DBI to access those databases.
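
For those who haven't seen it, a TLN record is just pipe-delimited text...time, source, system, user, and description, with the time as a 32-bit Unix epoch value...so emitting one from parsed data amounts to a single printf.  A minimal sketch (the values here are made up for illustration):

  # Emit a five-field TLN record (time|source|system|user|description).
  my ($epoch, $system, $user) = (1325267100, "HOSTNAME", "harlan");
  my $desc = "Jump List - C:\\Users\\harlan\\Desktop\\file.txt";
  printf "%d|%s|%s|%s|%s\n", $epoch, "JumpList", $system, $user, $desc;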

Also, I know that analysts like Corey Harrell will be itching to rip previous versions of Jump List files from VSCs.  As such, scripts can be written to parse just the DestList streams out of previous versions of the *.automaticDestinations-ms Jump List files and correlate that data.

The archive also contains a user guide that explains not only the modules themselves, but also how to use them and what data they can provide to you.

As a side note, I ran the lnk.pl script provided in the archive through Perl2Exe to create a simple, standalone Windows EXE file, and then ran it against the same target file (a shortcut in my own Recent folder) that I had tested the Perl script on, and it worked like a champ.

Once again, I am not an expert.  These modules should be fairly stable, and I wouldn't expect them to crash your box.  However, they are provided as-is, with no warranties or guarantees as to their function.  Also, the code uses only core Perl functions and parses the target structures on a binary level, so it's entirely possible that I've missed a structure or parsed something improperly.  If you find something amiss, I'd greatly appreciate you letting me know, and providing some sample data so that I can replicate and address the issue.

That being said, I hope that folks find this code to be useful.

Wednesday, December 28, 2011

Jump List Analysis

I've recently spoken with a couple of analysts I know, and during the course of these conversations, I was somewhat taken aback by how little seems to be known or available with respect to Jump Lists.  Jump Lists are artifacts that are new to Windows 7 (they did not appear in Vista), and are also available in Windows 8.  This apparent lack of attention to Jump Lists is most likely due to the fact that many analysts simply haven't encountered Windows 7 systems, or that Jump Lists haven't played a significant role in their examinations.  I would suggest, however, that any examination that includes analysis of user activity on a system will likely see some significant benefit from understanding and analyzing Jump Lists.

I thought what I'd try to do is consolidate some information on Jump Lists and analysis techniques in one location, rather than having it spread out all over.  I should also note that I have a section on Jump Lists in the upcoming book, Windows Forensic Analysis 3/e, but keep in mind that one of the things about writing books is that once you're done, you have more time to conduct research...which means that the information in the book may not be nearly as comprehensive as what has been developed since I wrote that section.

In order to develop a better understanding of these artifacts, I wrote some code to parse these files.  This code consists of two Perl modules, one for parsing the basic structure of the *.automaticDestinations-ms Jump List files, and the other to parse LNK streams.  These modules provide not only a great deal of flexibility with respect to what data is parsed and how it can be displayed (TLN format, CSV, table, dumped into a SQLite database, etc.), but also control over the depth to which the data parsing can be performed.

Jump List Analysis
Jump Lists are located within the user profile, and come in two flavors: automatic and custom Jump Lists.  The automatic Jump Lists (*.automaticDestinations-ms files located in %UserProfile%\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations) are created automatically by the shell as the user engages with the system (launching applications, accessing files, etc.).  These files follow the MS-CFB compound file binary format, and each of the numbered streams within the file follows the MS-SHLLINK (i.e., LNK) binary format.

The custom Jump Lists (*.customDestinations-ms files located in %UserProfile%\AppData\Roaming\Microsoft\Windows\Recent\CustomDestinations) are created when a user "pins" an item (see this video for an example of how to pin an item).  The *.customDestinations-ms files are apparently just a series of LNK format streams appended to each other.

Each of the Jump List file names starts with a long string of characters that is the application ID, or "AppID", that identifies the specific application (and in some cases, version) used to access specific files or resources.  There is a list of AppIDs on the ForensicsWiki, as well as one on the ForensicArtifacts site.

From an analysis perspective, the existence of automatic Jump Lists is an indication of user activity on the system, and in particular interaction via the shell (Windows Explorer being the default shell).  This interaction can be via the keyboard/console, or via RDP.  Jump Lists have been found to persist after an application has been deleted, and can therefore provide an indication of the use of a particular application (and version of that application), well after the user has removed it from the system.  Jump Lists can also provide indications of access to specific files and resources (removable devices, network shares). 

Further, the binary structure of the automatic Jump Lists provides access to additional time stamp information.  For example, the structures for the compound binary file directory entries contain fields for creation and modification times for the storage object; while writing and testing code for parsing Jump Lists, I have only seen the creation dates populated.
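
My own code sticks to core Perl, but as an illustration, here's a quick sketch that uses the CPAN OLE::Storage_Lite module to walk the streams in an automatic Jump List file and display those directory entry time stamps.  The field names (Time1st, Child, etc.) reflect my reading of that module's documentation, so treat this as a starting point rather than gospel:

  use OLE::Storage_Lite;

  my $file = shift(@ARGV) || die "You must enter a filename.\n";
  my $ole  = OLE::Storage_Lite->new($file);
  my $root = $ole->getPpsTree(1) || die "$file does not appear to be a compound file.\n";

  foreach my $pps (@{$root->{Child}}) {
      my $name = OLE::Storage_Lite::Ucs2Asc($pps->{Name});
      # Time1st/Time2nd are the directory entry time stamps, returned as
      # localtime()-style arrays; as noted above, I've only seen the
      # creation dates populated.
      my $ctime = "not set";
      if (my $t = $pps->{Time1st}) {
          $ctime = sprintf("%04d-%02d-%02d %02d:%02d:%02d",
                           $t->[5] + 1900, $t->[4] + 1, $t->[3],
                           $t->[2], $t->[1], $t->[0]);
      }
      printf "%-12s  %8d bytes  created: %s\n", $name, $pps->{Size}, $ctime;
  }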

Digging Deeper: LNK Analysis
Within the automatic Jump List files, all but one of the streams (the exception being the DestList stream) consist of LNK data.  That's right...the various numbered streams are binary streams that follow the MS-SHLLINK format.  As such, you can either use something like MiTeC's SSV to view and extract the individual streams, and then use an LNK viewer to view the contents of each stream, or you can use Mark Woan's JumpLister to view and extract the contents of each stream (including the DestList stream).  The numbered streams do not have specific MAC times associated with them (beyond the time stamps embedded in MS-CFB format structures), but they do contain the MAC time stamps associated with the target file.

Most any analyst who has done LNK file analysis is aware of the wealth of information contained in these files/streams.  My own testing has shown that various applications populate these streams with different contents.  One thing that's of interest...particularly since it was pointed out in Harry Parsonage's The Meaning of LIFE paper...is that some LNK streams (I say "some" because I haven't seen all possible variations of Jump Lists yet, only a few...) contain ExtraData (defined in the binary specification), including a TrackerDataBlock.  This structure contains a machineID (the name of the system), as well as two "Droids", each of which consists of a VolumeID GUID and a version 1 UUID (ObjectID).  These structures are used by the Link Tracking Service; the first applies to the new volume (where the target file resides now), and the second applies to the birth volume (where the target file was when the LNK stream was created).  As demonstrated in Harry's paper, this information can be used to determine if a file was moved or copied; however, this analysis is dependent upon the LNK stream being created prior to the action taking place.  The code that I wrote extracts and parses these values into their components, so that checks can be written to automatically determine if the target file was moved or copied.

There's something specific that I wanted to point out here that has to do with LNK and Jump List analysis.  The format of the ObjectID found in the TrackerDataBlock is based on UUID version 1, defined in RFC 4122.  Parsing the second half of the "droid" should provide a node identifier in the last 6 bytes of the stream.  Most analysts simply seem to assume that this is the MAC address (or a MAC address) of the system on which the target file was found.  However, nothing that I've found thus far states emphatically that it MUST be the MAC address; rather, all of the resources I've found indicate that this value can be a MAC address.  Given that a system's MAC address is not stored in the Registry by default, this value is difficult to verify when analyzing an acquired image.  As such, I think that it's very important to point out that while this value can be a MAC address, there is nothing to specifically and emphatically state that it must be one.
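
For reference, here's a sketch of how that parsing breaks out, per sec 4.1.2 of the RFC.  Note that the GUID fields in the LNK structures are stored with the first three fields little-endian (the usual Windows GUID layout), and that the time stamp math requires a Perl built with 64-bit integer support:

  # Parse a 16-byte version 1 UUID (ObjectID) into its components.
  sub parseV1UUID {
      my $data = shift;
      my ($tlo, $tmid, $thi, $csh, $csl, @node) = unpack("VvvCCC6", $data);
      my $ver  = ($thi >> 12) & 0xF;    # should be 1 for these ObjectIDs
      # 60-bit time stamp: 100ns intervals since 1582-10-15 (RFC 4122);
      # 0x01B21DD213814000 is the offset to the Unix epoch.
      my $ts   = (($thi & 0x0FFF) << 48) | ($tmid << 32) | $tlo;
      my $unix = int(($ts - 0x01B21DD213814000) / 10_000_000);
      # Per secs 4.1.6/4.5 of the RFC, a node value that was NOT derived
      # from a MAC address should have the multicast bit set.
      my $node = join(":", map { sprintf("%02x", $_) } @node);
      return ($ver, scalar(gmtime($unix)) . " UTC", $node);
  }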

DestList Stream
The DestList stream is found only in the automatic Jump Lists, and does not follow the MS-SHLLINK binary format (go here to see the publicly documented structure of this stream).  Thanks to testing performed by Jimmy Weg, it appears that not only is the DestList stream a most-recently-used/most-frequently-used (MRU/MFU) list, but some applications (such as Windows Media Player) appear to be moving their MRU lists to Jump Lists, rather than continuing to use the Registry.  As such, the DestList streams can be a very valuable component of timeline analysis.

What this means is that the DestList stream can be parsed to see when a file was most recently accessed.  Unlike Prefetch files, Jump Lists do not appear (at this point) to contain a counter of how many times a particular file (MSWord document, AVI movie file, etc.) was accessed or viewed, but you may be able to determine previous times that a file was accessed by parsing the appropriate Jump List file found in Volume Shadow Copies. 

Summary
Organizations are moving away from Windows XP and performing enterprise-wide rollouts of Windows 7.  More and more, analysts will encounter Windows 7 (and before too long, Windows 8) systems, and need to be aware of the new artifacts available for analysis.  Jump Lists can hold a wealth of information, and understanding these artifacts can provide the analyst with a great deal of clarity and context.

Resources
ForensicsWiki: Jump Lists
Jump List Analysis pt. I, II, III
DestList stream structure documented
Harry Parsonage's The Meaning of LIFE paper - a MUST READ for anyone conducting LNK analysis
RFC 4122 - UUID description; sec 4.1.2 describes the structure format found in Harry's paper; section 4.1.6 describes how the Node field is populated
Perl UUID::Tiny module - Excellent source of information for parsing version 1 UUIDs

Monday, December 19, 2011

Even More Stuff

DFIROnline
Last Thu, we had (at one point) 32 attendees to the #DFIROnline online meetup, and my impression is that overall, it went pretty well.  Mike took the time to post his impressions, as well.

I think it would be very helpful to hear from others who attended and find out what they liked or didn't like about this format.  What works, what doesn't, what would folks like to see?  I know that with the NoVA Forensics Meetups, most (albeit not all) of the comments about content that I received were from out of town folks, and included, "...set up a meetup in my town...".  Well, Mike's brought that to you...in fact, you can attend from anywhere.  Mike's survey results indicated that case studies and malware analysis are things that folks are interested in, and that's a great start.

Also, I've been thinking...what do folks think about moving the NoVA Forensics Meetups to DFIROnline?

For those interested, I posted my slides (in PDF format) to the Win4n6 Yahoo Group Files section.

A great big, huge, Foster's thanks to Mike for setting this up.

Cool Stuff
If you do timeline analysis, David Nides has posted a great little log2timeline cheat sheet over on the SANS Forensics blog.  David made this cheat sheet available at the recent SANS360 event as a single laminated sheet...if you weren't able to make it and didn't get one, download the PDF and print out your own.  The content of the cheat sheet goes right along with Rob's SANS360 presentation, which you can watch here (actually, it's the entire set of presentations).

A huge thanks to David for putting this together and making it available.  This is another great example of how someone can contribute to the community, without having to be able to stand up in front of people, or write code. 

Jump Lists
I recently received a question about Windows 7 Jump Lists, and dusted off some of the code I wrote last summer for parsing Jump Lists.  Yes, it's in Perl...but the way I wrote it was to use just core Perl functions (i.e., no esoteric, deprecated, or OS-specific modules) so that it is platform-independent, as well as much easier to install and run.  Also, I wrote it as Perl modules, so I have additional flexibility in output formats...in short, I can have a script spit out text in a table format, CSV, or even TLN format.
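
As an example of what "core Perl" means here: most of the heavy lifting amounts to unpack() and a little math.  Converting the 64-bit FILETIME values found throughout these structures (100-nanosecond intervals since 1 Jan 1601) to Unix epoch times looks something like this:

  # Convert a 64-bit little-endian FILETIME to a Unix epoch time.
  sub getTime {
      my ($lo, $hi) = unpack("VV", shift);    # 8 raw bytes in, two 32-bit values out
      my $ft = ($hi * 4294967296) + $lo;      # 4294967296 = 2**32
      return int($ft / 10_000_000) - 11644473600;
  }

The result can be handed straight to gmtime(), or dropped into the time field of a TLN record.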

If you haven't yet, check out Mark Woan's JumpLister...it's at version 1.0.5, and does a great job of parsing not only the LNK streams, but also the DestList stream (the partial structure of which was first publicly documented here).  It also maps the AppID to an application name...a list of which can be found here, and here.

Another use I've found for this code is Windows 8 forensics.  I've had a VirtualBox VM of Windows 8 Dev Build running, but recently set up a laptop (wiped XP off of it forever) to dual boot Win7 & 8, so that I could look at some of the various artifacts available, such as wireless networks within the Registry, the use of a Windows Live account to log into Win8, and the Jump Lists...yep, Win8 uses Jump Lists and at this point, they appear to be consistent in format with the Win7 Jump Lists.

Speaking Engagements
My upcoming speaking engagements include the DoD CyberCrime Conference (the conference even has a Facebook page), where I'll be presenting on Timeline Analysis.  I've also submitted to the CfP for the SANS Forensic Summit this next summer (topic: Windows 7 Forensic Analysis), so we'll see how that goes.

Friday, December 16, 2011

New Stuff

Some folks are aware that I recently changed positions, and I'm now with Applied Security, Inc.  My new title is "Chief Forensics Scientist", and yes, it is as cool as it sounds.  We do DF analysis of systems and mobile devices for our customers, focus on proactive security in order to promote immediate (as opposed to "emergency") response, and provide in-depth, focused DFIR training.  As part of DF analysis, we also do research-type engagements..."how does this affect that, and what kinds of traces does it leave?"  Pretty cool stuff.

Part of the work we do involves mobile devices, which is not something I'd really had an opportunity to dig into...until now.  Well, I take that back...in the upcoming WFA 3/e (due out on 7 Feb 2012, I'm told...and I've been told folks are already ordering it!), I do mention examining application files, to include backups of mobile devices and smart phones.  These backups...whether created via the Blackberry Desktop Manager or iTunes (for iPhones, iPod touch devices, or iPads)...can contain a good deal of valuable data.  Again...I do not talk about examining the devices, but instead point out that the backup files may be valuable sources of data.

To kind of dabble in mobile device forensics a bit, I recently pulled an old Blackberry 7290 out of mothballs, powered it up and began running through passwords I may have used to lock it.  As it wasn't on any cellular network and didn't have WiFi capability, it was effectively isolated from any network.  Once I unlocked it, I downloaded the Blackberry Desktop Manager and used it to back up the device, creating a .ipd file.  I then downloaded Elcomsoft's Blackberry Backup Explorer (trial available) and ran that, to pull up old SMS texts, emails, etc.  It was pretty interesting the things that I found...kind of a blast from the past.  What I saw got me to thinking about how useful this stuff could be with respect to DF analysis in general.

I should point out that Elcomsoft also has an iOS forensic product (restricted to special customers), as well as a number of password cracking products.

I gave Reincubate.com's Blackberry Backup Extractor a shot, as well.  The unregistered version of the tool only converts the first 5 entries in any database it finds, and the output is Excel spreadsheets placed in various folders, depending upon the database that's parsed.

Reincubate also has an iPhone Backup Extractor product.

One tool I'm aware of for parsing .ipd files that I haven't tried yet is MagicBerry.

I also wanted to see how JavaLoader worked against the Blackberry device itself, so I installed all of the necessary dependencies and ran that tool...pretty cool stuff.  I dumped device information, the event log, as well as directory listings, directly from the device.  Now, keep in mind, this is not particularly what one would call "forensically sound", but it is a way to gather additional information from the device after you've followed and documented a more stringent procedure.

Some lessons learned with the Blackberry...at this point, if I don't have the password for the device, I'm not getting anywhere.  I couldn't even create a backup/.ipd file for the device if I didn't have a password.  However, I could access the .ipd file with the tools I mentioned without having the password.  This is very useful information if you find that a user has the Blackberry Desktop Manager installed, and has created one or more .ipd files.

Something else that may be of interest to an examiner is that when I start the BB Desktop Manager, with no device connected to my system, the UI has information about the device already displayed.  This has to be stored somewhere on the system...I just haven't found it yet.  I've talked to some LE who like to boot the image they're analyzing and capture screenshots for use during court proceedings...this might be a very useful technique to use.

So, if you're conducting an exam and find that the user had the BlackBerry Desktop Manager installed, and you find an .ipd file (or several files), depending upon the goals of your exam, it may be well worth your time to dig into that backup.

In some ways, this is a pretty timely post, given this FoxNews article...seems that old hard drives aren't the only source of valuable information.

Thursday, December 15, 2011

More Stuff

Online DFIR Meetups
Tonight (Thu, 15 Dec) at 8pm EST is the first Online DFIR Meetup, hosted by Mike Wilkinson.  Stop by and check it out...Mike and I will be presenting during this first meetup.

I think that we need to come up with a good hashtag for the event, particularly something that's unique to the event.

Future of IR
If you haven't already, check out the Carbon Black white paper on the future of IR, which argues for moving from a purely response posture to a proactive, incident preparedness posture.

Moving to a proactive posture just makes sense for a lot of reasons.  First, it doesn't matter which annual report you read...Verizon, Mandiant, TrustWave...they all pretty much state that it doesn't matter who or where you are...if you have a computer connected to the Internet, you will be compromised at some point.  In fact, you may very likely already have been compromised; you may simply not realize it yet.  Second, if all of the studies show that you're gonna get punched in the face, why keep your hands down?  Why not put on head gear, get into a good stance, and get your hands up?  If it's gonna happen, why not be ready for it, and be able to react to minimize the effects?  Finally, there are a lot of regulatory bodies out there that are all telling the organizations that they oversee that they have to take a more proactive approach to security.  Paragraph 12.9 of the PCI DSS states that organizations subject to the PCI will have (as in, "thou shalt") an incident response capability, and the subparagraphs provide additional details.

At this point, one would think that there's enough reason to have an IR capability within your organization, and be ready.  One would think...

Now, does a tool like Cb obviate the need for that response capability?  I mean, if you're able to VPN into a system and diagnose and scope an incident within minutes, does that mean we'll no longer need to do DFIR?

No, not at all.  What Cb does bring to the table is a solution for rapidly triaging, scoping, and responding to an incident; however, it does NOT obviate the need for dedicated analysis.  Once the incident has been scoped, you can then target the systems from which you need to acquire data...dumps of physical memory, selected files, or full images.

As a consultant, I can see the immediate value of Cb.  The traditional "emergency response" model dictates that someone be deployed to the location, requiring the expense of last minute travel and open-ended lodging arrangements.  There's also the "cost" of the time it takes for an analyst to arrive on-site.  Remember, costs are multiplied (travel, lodging, hourly rate, etc.) for multiple analysts. 

Let's say I have a customer who has several sensors rolled out and their own internal Cb server.  With their permission, I could VPN into the infrastructure and access the server via RDP, pull up the Cb interface and begin investigating the incident while we're on the phone.  Based on what is available via Cb, I could begin answering questions in very short order, with respect to the severity and scope of the issue.  I could also obtain a copy of any particular malware that is involved in the incident and send it to a malware analyst so she can rip it apart (if such activity is within scope).  Having deployed Cb, the customer has already decided to be proactive in their security posture, so we can have local IT staff immediately begin isolating and acquiring data from systems, for analysis.

So, this is the difference between the traditional "emergency response", and the future of IR (i.e., immediate response).  And yes, this is only true if you've already got Cb installed...but, as described in the white paper, Cb is still useful if it is installed after the incident.

Now, Cb also does not obviate the need for working with customers and developing relationships, so don't think that someone's going to arrive on-site, install something on your network, poke a hole in your perimeter, and you never see them again.  Rather, deploying Cb requires that an even stronger relationship be built with the customer, for two reasons.  First, being proactive is an entirely new posture for many organizations, and can require something of a shift in culture.  This is new to a lot of organizations, and new things can be scary.  Organizations who recognize the need for and are open to change are still going to tread lightly and slowly at first.

Second, Cb itself is new.  However, Cb has a number of case studies behind it already that not only demonstrate its utility as an immediate response tool, but also as a tool to solve a variety of other problems.  So, organizations rolling out Cb are going to need some help in identifying problems that can be solved via the use of Cb, as well as how to go about doing so.

During the recent SANS360 event, Mike Cloppert (see Mike's Attacking the Kill Chain post) suggested that rather than competing with an adversary on their terms on your infrastructure, we need to change the playing field and make the adversary react to us.  With only 6 minutes, Mike didn't have the time to suggest how to do that, but Cb gives you that capability.  Cb allows you to change the IR battlefield altogether.

File Extension Analysis
I posted a HowTo on file extension analysis a bit ago, and as something of a follow up, I've been working on an article for a Microsoft portal.

I guess what I find most interesting about this post is that even though I see the question that spawned the post asked in online forums and lists, the blog post doesn't have a single comment.  You'd think that as many times as I've seen this in lists and forums, someone would have looked at the post, and maybe found it useful.  Well, I tried the "HowTo" approach to the blog posts, and that didn't seem to be too well received...

Tuesday, December 13, 2011

Stuff

Contributing
Rob Lee recently had a very thought provoking post to the SANS Forensics blog titled How to Make a Difference in the Digital Forensics and Incident Response Community.  In that article, Rob highlights the efforts of Kristinn Gudjonsson in creating and developing log2timeline, which is a core component of the SIFT Workstation and central to developing super timelines.

I love reading stuff like this...it's the background and the context to efforts like this (the log2timeline framework) that I find very interesting, in much the same way that I use an archeological version of the NIV Bible to get social and cultural background about the passages being read.  There's considerable context in the history of something, as well as the culture surrounding it and the efforts it took to get something going, that you simply don't see when you download and run the tool.  As an example, Columbus discovering the Americas isn't nearly as interesting if you leave out all of the stuff that came before.

However, I also thought that for the vast majority of folks within the community, the sort of thing that Rob talked about in the post can be very intimidating.  While there are a good number of folks out there with SANS certifications, many (if not most) likely obtained those certifications in order to do the work, but not so much to learn how to contribute to the community.  Also, many analysts don't program.  While the ability to program in some language is highly recommended as a valuable skill within the community, it's not a requirement.

As such, it needs to be said that there are other ways to contribute, as well.  For example, use the tools and techniques that get discussed or presented, and discuss their feasibility and functionality.  Are they easy to understand and use?  Is the output of the tool understandable?  What were your specific circumstances, and did the tool or technique work for you?  What might improve the tool or technique, and make it easier to use?

Another way to contribute is to ask questions.  By that, I'm not suggesting that you run a tool and if it doesn't work or you don't understand the output, to then go and cross-post "it don't work" or "I don't get it" across multiple forums.  What I am saying is that when you encounter an issue of some kind, do some of your own research and work first...then, if you still have a question, ask it.  This does a couple of things...first, it makes others aware of what your needs are, providing the goals of your exam, what you're using to achieve those goals, etc.  Second, it lets others see what you've already done...and maybe gives them hints as to how to approach similar problems.  If nothing else, it shows that you've at least attempted to do your homework.

A reminder: When posting questions about Windows, in particular, the version of Windows that you're looking at matters a great deal.  I was talking to someone last night about an issue of last access time versus last modification time on a file on a Windows system, and I asked which version of Windows we were talking about...because it's important.  I've received questions such as, why are there no Prefetch files on a Windows system, only to find out after several emails were exchanged that we were talking about Windows 2008.

Post a book or paper review; not a rehash of the table of contents, but instead comment on what was valuable to you, and how you were able (or unable) to use the information in the book or paper to accomplish a task.  Did what you read impact what you do?

I think that one of the biggest misconceptions within the community is that a lot of folks feel that they're "junior" or don't have anything to contribute...and nothing could be further from the truth.  None of us has seen everything that there is to see, and it is very likely that someone working an exam may run across something (a specific ADS, a particular application artifact, etc.) that few have seen before.  As such, there's no reason why you can't share what you found...just because one person may have seen it before, doesn't mean that everyone has...and God knows that many of us could simply use reminders now and again. Tangential to that is the misconception that you have to expose attributable case data to share anything.  Nothing could be further from the truth.  There are a number of folks out there in the community that share specific artifacts without exposing any attributable case data.

SANS360
Speaking of Rob Lee...

I'll be in DC on Tuesday night at the SANS360 Lightning Talk event; my little portion is on accessing VSCs.  If you can't be there, register for the simulcast, and follow along on Twitter via the #SANS360 hashtag.

Bulk_Extractor Updates
Back during the OSDFC this past summer, I learned about Simson Garfinkel's bulk_extractor tool, and my first thought was that it was pretty cool...I mean, being able to just point an executable at an image and let it find all the things would be pretty cool.  Then I started thinking about how to employ this sort of thing...because other than the offset within the image file of where the artifact was found, there really wasn't much context to what would be returned.  When I was doing PCI work, we had to provide the location (file name) where we found the data (CCNs), and an email address can have entirely different context depending on where it's found...in an EXE, in a file, in an email (To:, From:, CC: lines, message body, etc.).

Well, I haven't tried it yet, but there's a BEViewer tool available now that reportedly lets you view the features that bulk_extractor found within the image.  As the description says, you have to have bulk_extractor and BEViewer installed together.  This is going to be a pretty huge leap forward because, as I mentioned before, running bulk_extractor by itself leaves you with a bunch of features without any context, and context is where we get part of the value of data that we find.

For example, when talking about bulk_extractor at OSDFC, Simson mentioned finding email addresses and how many addresses you can expect to find in a fresh Windows installation.  Well, an email address will have very different context depending on where it's found...in an email To:, From: or CC: block, in the body of an email, within an executable file, etc.  Yes, there is link analysis, but how do you add that email address to your analysis if you have no context?  The same is true with PCI investigations; having done these in the past, I know that MS has a couple of PE files that contain what appear to be CCNs...sequences of numbers that meet the three criteria that examiners look for with respect to CCNs (i.e., length, BIN, Luhn check).  However, these numbers are usually found within a GUID embedded in the PE file.
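
For anyone who hasn't implemented that third check, a Luhn test is only a few lines of Perl.  A minimal sketch:

  # Luhn check: double every second digit from the right, subtract 9
  # from any result over 9, and the total must be divisible by 10.
  sub luhn_ok {
      my @d = reverse(split(//, shift));
      my $sum = 0;
      foreach my $i (0..$#d) {
          my $n = $d[$i];
          if ($i % 2) { $n *= 2; $n -= 9 if ($n > 9); }
          $sum += $n;
      }
      return (($sum % 10) == 0);
  }
  print luhn_ok("4111111111111111") ? "passes" : "fails", "\n";   # well-known test number; passes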

As such, BEViewer should be a great addition to this tool.  I've had a number of exams where I've extracted just unallocated space or the pagefile, and run strings across it just to look for specific things...but something like this would be useful to run in parallel during an exam, just to see what else may be there.

FOSS Page
While we're on the topic of tools, you may have noticed that I've made some updates to my FOSS page recently, mostly in the area of mobile device forensics.  My new position provides me with more opportunities with these devices, but I have talked about examining mobile device backups on Windows systems (BlackBerrys backed up with the Desktop Manager, iPhones/iPads backed up via iTunes, etc.) before, and covered some FOSS tools for accessing these files in WFA 3/e.

These tools (there is a commercial tool listed, but it has a trial version available) can be very important.  Say you have a friend who backs up their phone and has lost something...you may be able to use these tools to recover what they lost from the backup.  Also, in other instances, you may find data critical to your examination in a phone backup.

Simulations
Corey had a great post recently on keeping sharp through simulations; this is a great idea.  Corey links to a page that lists some sites that include sample images, and I've got a couple listed here.  In fact, I've not only used some of these myself and in training courses I've put together, but I also posted an example report to the Files section of the Win4n6 Yahoo Group ("acme_report.doc").

Another opportunity that you have available for analysis includes pretty much any computer system you have available...your kids, friends, spouse, etc.  Hey, what better source for practicing is there than someone right there...say, they get infected with something, and you're able to acquire and analyze the image and track the issue back to JavaScript embedded in a specific file?

How about your own systems?  Do you use Skype?  Acquire your own system and see how well some of the available tools work when it comes to parsing the messages database...or write your own tools (Perl has a DBI interface for accessing SQLite databases).  Or, install a P2P application, perform some "normal" user functions over time, and then analyze your own system.
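
As a sketch of that DBI approach, something like the following will walk the messages in a copy of a Skype main.db file.  The table and column names here (Messages, timestamp, author, body_xml) are from my own quick look at one version of the database, so verify them against the version you're working with...and, as always, work on a copy of the file, not the original:

  use DBI;

  my $db  = shift(@ARGV) || "main.db";
  my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "", { RaiseError => 1 });
  my $sth = $dbh->prepare("SELECT timestamp, author, body_xml FROM Messages ORDER BY timestamp");
  $sth->execute();
  while (my ($ts, $author, $body) = $sth->fetchrow_array()) {
      # the timestamp appears to be a Unix epoch value
      printf "%s  %-20s %s\n", scalar(gmtime($ts)), $author, $body;
  }
  $dbh->disconnect();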

Not only are these great for practice, but you can also make a great contribution to the community with your findings.  Consider trying to use a particular tool or technique...if it doesn't work, ask why in order to clarify the use, and if it still doesn't work, let someone know.  Your contribution may be pointing out a bug.

Mall-wear Updates
I ran across an interesting tweet one morning recently, which stated that one of the annoying fake AV bits of malware, AntiVirii 2011, uses the Image File Execution Options key in the Registry.  I thought this was interesting for a number of reasons.

First, we see from the write-up linked above that there are two persistence mechanisms (one of the malware characteristics that we've talked about before), the ubiquitous Run key, and this other key.  Many within the DFIR community are probably wondering, "why use the Run key, because we all know to look there?"  The answer to that is...because it works.  It works because not everyone knows to look there for malware.  Many DFIR folks aren't well versed in Registry analysis, and the same is true for IT admins.  Most AV doesn't automatically scan autostart locations and specifically target the executables listed within them (I say "most" because I haven't seen every legit AV product).

Second, the use of the Image File Execution Options key is something that I've only seen once in the wild, during an incident that started with a SQL injection attack.  What was interesting about this incident is that none of the internal systems that the bad guy(s) moved to had the same artifacts.  We'd find one system that was compromised, determine the IOCs, and search for those IOCs across other systems...and not find anything.  Then we'd determine another system that had been compromised, and find different IOCs.
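
One way to check for this particular persistence mechanism is to sweep the Software hive for "Debugger" values beneath the Image File Execution Options key.  Here's a sketch using James Macfarlane's Parse::Win32Registry module...not a full RegRipper plugin, just the idea:

  use Parse::Win32Registry;

  my $reg  = Parse::Win32Registry->new(shift(@ARGV)) || die "Not a hive file.\n";
  my $root = $reg->get_root_key();
  my $path = "Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options";
  if (my $ifeo = $root->get_subkey($path)) {
      foreach my $key ($ifeo->get_list_of_subkeys()) {
          if (my $dbg = $key->get_value("Debugger")) {
              printf "%-30s -> %s\n", $key->get_name(), $dbg->get_data();
          }
      }
  }

Keep in mind that there are legitimate uses for this key (attaching a debugger to an application at launch), so anything you find needs to be examined in context.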

Analysis
I ran across (via Twitter) this article that talks about the analysis of an apparent breach of an Illinois water treatment facility.  While the title of the article calls for "analytical competence", the tweet that I read stated "DHS incompetence".  However, I don't think that the need for critical and analytical thinking (from the article) is something that should be reserved for just DHS.

The incident in question was also covered here, by Wired.  The Wired article really pointed out very quickly that the login from a Russian-owned IP address and a failing pump were two disparate events that were five months apart, and were correlated through a lack of competent analysis.

In a lot of ways, these two articles point out a need for reflection...as analysts, are we guilty of some of the same failings mentioned in these articles?  Did we submit "analysis" that was really speculation, simply because we were too lazy to do the work, or didn't know enough about what we were looking at to know that we didn't know enough?  Did we submit a report full of rampant speculation, in the hopes that no one would see or question it?

It's impossible to know everything about everything, even within a narrowly-focused community such as DFIR.  However, it is possible to think critically and examine the data in front of you, and to ask for assistance from someone with a different perspective.  We're much smarter together than we are individually, and there's no reason that we can't do professional/personal networking to build up a system of trusted advisers.  Something like the DHS report could have been avoided using these networks, not only for the analysis, but also for peer review of the report.

Thursday, December 08, 2011

Meetup

Last night's meetup was a great success!  Sam not only gave a great presentation, he also peppered the audience with some amazing card tricks!  Sam really knows how to deliver on not only the technical information, but also the magic, and did a great job of keeping everyone entertained on both fronts.  Yes, Sam is an accomplished magician.

Copies of the slides for Sam's presentation are posted to the NoVA4n6Meetup and Win4n6 Yahoo groups.

I ended up taking notes on my iPhone (using the Notepad app), but here are a couple of take-aways that I had from the presentation:

- I really liked the way Sam broke down and categorized the whole process through visualization.  The third slide of the presentation has a "tool analysis pyramid" (it also appeared later in the presentation)...maybe a better title would be "tool-analysis pyramid".  Based on the work that I've done on the Windows side of things, I really like how Sam broke things down into easy-to-understand categories, which has the effect of making it much easier to communicate your findings, thoughts or needs to others that also understand the framework.

- "Supported means supported."  Depending on the equipment or software you have, and the device, "supported" can mean different things.

- Sam programs in Perl.  Uh...that's the most awesome thing.  EVER.  If you find yourself doing something over and over again, automation is a wonderful thing.  It's also a force multiplier...someone like Sam can write something useful, and someone else who understands the issue and Perl can leverage what Sam did, reducing the time it takes to reach that same level of understanding and effectiveness.

- Sam runs races.  I've run some similar distances as what Sam runs, but that was 20 years ago.  I'd be honored if Sam were to come out and run the Tough Mudder with me...we'll have to see what the future holds.  Maybe I'll have to go out ahead of him and leave either some old cell phones or some antique decks of playing cards along the route...  ;-)

Overall, 32 attendees was a great showing...I thank everyone who braved the weather to come out and see Sam, and I hope that everyone had a great time.  And I wanted to thank Sam for taking the time to put together a wonderful presentation, as well as to come out and give that presentation to all of us.  Many of us have families and other commitments, and I for one greatly appreciate the time and effort that Sam, as well as our other presenters, have taken to put materials together and get up in front of their peers.

Online DFIR Meetups
Back when I attended (and presented at) PFIC 2011, I had a chance to talk to Mike Wilkinson, an instructor in digital forensics at Champlain College.  Mike decided to start online DFIR meetups via his Adobe Connect Meeting Room. The first meetup is on Thu, 15 Dec 2011 at 8pm EST.  Be sure to have Adobe Flash installed on your system, and come join us.  I did see a request that Mike record the meetups...I hope that this ends up being the case.

Tuesday, December 06, 2011

Stuff

MaaS
For quite a while now, when I've been presenting or discussing the state of DFIR with others, I've talked about how attackers and threat actors have long since moved away from digital joyriding on the Information Superhighway (how's that for a cliche?) and how cybercrime is more targeted and focused, and has an economic stimulus.  In many cases, I've mentioned this in the "us-and-them" context, how those of us on one side of the fence are faced with strict (or more often, no) budgets, while those that we're working against have a monetary motivation to not only innovate, but to do so rapidly...to fail quickly, learn and move on.

Back in the day, I knew some folks who wrote custom rootkits, albeit for a fee (I've always wanted to use albeit in a sentence).  That's right, skilled programmers who were tired of the pointy-haired bosses that they worked for, so they hung out their own cyber shingle and began writing custom rootkits for a fee, in order to support themselves.

I ran across this HelpNet Security article that describes the pricing structure for a number of available services, including infections.

Malware (or "mallware", but you'd have to take a drink...)
This section's title is based on my mispronunciation of the word "malware" (I was saying "mall-wear") during my presentation at OSDFC this past June, which led Cory Altheide to start a drinking game.  And here I was thinking that it was my mad presentation skillz that got the back of the room excited.  ;-)

Anyway, Claus is back with a great post on some malware detection resources, stuff you can use particularly if your analysis systems are "air-gapped" from other networks, and in particular the Internet itself.  It's always good to have resources like these in your toolkit, or just your back pocket, as you never know when you're going to need them.

Thoughts on WFP
Chris Pogue (@cpbeefcake on Twitter) has a new post up on the SpiderLabs blog, entitled "Manipulating Windows File Protection and Indicators of Compromise".  As you can see in his post, it is based in part on a discussion Chris and I engaged in a while back regarding the topic of malware and disabling WFP.


Chris makes the following statement in his blog post, regarding IOCs:


"Apart from dllhost.exe being present, apart from the timeline, there were not any IOCs of modification."

I would suggest that beyond what Chris mentions, there are IOCs of this sort of activity, particularly the specific activity that Chris outlined in his post.  Chris even goes so far as to mention in his post that I include a discussion of this topic in WFA 2/e, on pp. 328-330; on those pages I point out a means for detecting files that were changed using the process that Chris describes in his post...in short, identifying some IOCs of this sort of activity.  Chris even describes how the target file (that was modified) has a different hash after the modification.  While there may not be anything immediately obvious in the volatile data from a compromised system, there are definitely at least two IOCs: the MFT entries Chris describes, and the "new" hash of the modified file.

One of the steps I included in my malware detection checklist is to run a "WFP check".  Essentially, it's a Perl script that I wrote that accesses a mounted image, goes into the system32\dllcache directory, and gets the names and hashes of all of the files there.  Then, it goes into the system32 directory, locates any file with the same name as one found in the dllcache dir, gets its hash, and performs a comparison.  I decided to limit the second-level search to the system32 directory because there are a number of false positives on systems that have updates...you'll get a number of "hash mismatch" messages for older versions of files that have since been updated.
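
To illustrate the general idea (this is a sketch, not the script itself), with the image mounted as a drive letter:

  use File::Spec;
  use Digest::MD5;

  sub md5sum {
      my $file = shift;
      open(my $fh, "<", $file) || return;
      binmode($fh);
      return Digest::MD5->new->addfile($fh)->hexdigest;
  }

  my $sys32 = shift(@ARGV) || "F:\\Windows\\system32";
  my $cache = File::Spec->catdir($sys32, "dllcache");
  opendir(my $dh, $cache) || die "Cannot open $cache\n";
  foreach my $name (readdir($dh)) {
      my $cached = File::Spec->catfile($cache, $name);
      my $live   = File::Spec->catfile($sys32, $name);
      next unless (-f $cached && -f $live);
      my ($h1, $h2) = (md5sum($cached), md5sum($live));
      print "Hash mismatch: $name\n" if (defined($h1) && defined($h2) && $h1 ne $h2);
  }
  closedir($dh);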

I haven't posted the Perl script that I use because, to be honest, I haven't seen where anyone's interested in this sort of capability.  I use it as part of my malware detection process...it's quick, it's easy to use, and it gives me a quick look at something that could very easily turn up a smoking gun, if not the smoking gun.  Another reason I haven't released the script on a wide-spread basis is that I have found a lot of folks who...have had trouble...using some of my more esoteric tools.  I recently had someone ask for a copy of mbr.pl, and once they had it, they ran it against a .vmem file.  And when many folks can't get a tool to work as it should, or as advertised, they tend not to contact me about the issue.

Further, in my experience, this issue (WFP being disabled) isn't one that's easily understood by a number of analysts, and when the topic is first presented, it can lead to a good bit of confusion.  For one, WFP is not intended to be a 'security' mechanism...rather, it is intended to protect the user from inadvertent actions from...well...the user.  As Chris pointed out in his post, the capability to easily circumvent this functionality has existed for some time.  In addition, understanding how the detection process makes use of available IOCs can lead to confusion.

Virtual Systems
For anyone who's done any testing of any kind, in particular of exploits and compromises, it's always nice to have some virtual systems around that you can test against.  For folks who use VMWare and VirtualBox, there are sites where you can go and download virtual machines...but most of them are non-Windows based systems.  Well, if you're using VPC, you can go to Microsoft and download IE App Compatible VHD systems; these are intended for testing web sites, but I'm sure that even with the baked-in operation limits that Claus mentions, these are a lot more accessible than a full-on MSDN subscription (particularly because several of the systems are fully patched up to a certain date).

Linkz
Here's a good post on CD/DVD Forensics from one of the "Hacking Exposed: Computer Forensics" authors.  One thing that really stood out about this post was the statement:

At G-C (my company) we try to have an internal training topic for about 30 minutes to an hour every day (that I'm in the office).

This is an excellent way to share information, particularly about exams and anything new that someone has learned...or even something that's not new.

Ken Johnson has posted some really good information regarding a Windows 8 forensic overview.   Not only is this just some great info, but it's very timely...I'm putting together a submission for the 2012 SANS Forensic Summit, for a presentation where I will be discussing forensic analysis of Windows 7 systems, with a good deal of information regarding Windows 8, as well.  Take a look at what Ken mentions about the Windows 8 File History feature...pretty interesting stuff.  With that, and accessibility of "the cloud", incident responders may have something of a scoping challenge on their hands.

What's Old is New
I caught a thread on a popular list recently regarding the topic of ADSs...NTFS alternate data streams.  It's simply amazing to me, given the amount of information available on the topic, that more folks don't know about them.  What this can lead one to think is that if folks (IT/IR staff, forensic analysts, etc.) don't know about these artifacts, then they may not be looking for them.

After all, the capability to create arbitrary ADSs has existed in NTFS since the early days of the file system.  Until Vista, there were no tools native to the platform that allowed admins to view the existence of arbitrary ADSs...and even then, it's a CLI capability (i.e., dir /r).  Tools and even scripts can be launched from within an ADS, and the toolkit for the Poison Ivy Trojan includes an option to use ADSs.
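
If you want to see this for yourself, Perl on Windows will open a stream path directly; create one, and then check for it with "dir /r" on Vista or above:

  # Create an arbitrary ADS attached to file.txt...
  open(my $fh, ">", "file.txt:hidden.txt") || die "Could not create stream: $!\n";
  print $fh "nothing to see here";
  close($fh);
  # ..."dir /r" will now list file.txt:hidden.txt:$DATA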

Books
I was on Amazon recently, and ran across the listing for WFA 3/e.  I'm told by the publisher that this book is due to be out on or about 7 Feb 2012, and I'm really looking forward to it.

However, there are a couple of things I wanted to address about the book.  First, this one looks almost exactly like WFA 2/e.  I've already run into instances where owners of WFA 2/e don't pick up on the differences in cover art between that book and WRF.  Now, the third edition is coming out, and it's going to be even harder to tell which one you have.

Second, the third edition is NOT (I repeat...NOT) intended to replace the second edition...instead, it's a companion book.  That is, if you have one, you'll want the other.  The third edition focuses much more on Windows 7, and includes several new topics.  After all, there was really no point in reprinting the content regarding the PE file format if it didn't change, right?

Registry Analysis
I ran across this interesting blog post recently that discusses how the TypedURLs key can be populated, depending upon the version of IE used.  This simply shows, once again, that the version of Windows (and now IE) that you're dealing with is important, particularly when you're looking for assistance.

Part 2 of this article states that it is "...becoming increasingly common for some of the TypedURLs entries to be written by malware and not typed by the user at all."  Interesting.  So the question then becomes, when conducting analysis, how do you tell the difference?
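
One place to start is the key's LastWrite time, correlated against other user activity in a timeline.  A quick sketch using Parse::Win32Registry against a copy of the user's NTUSER.DAT hive:

  use Parse::Win32Registry;

  my $reg  = Parse::Win32Registry->new(shift(@ARGV)) || die "Not a hive file.\n";
  my $root = $reg->get_root_key();
  if (my $key = $root->get_subkey("Software\\Microsoft\\Internet Explorer\\TypedURLs")) {
      # the key's LastWrite time should correspond to the most recent update
      print "LastWrite: " . $key->get_timestamp_as_string() . "\n";
      foreach my $val ($key->get_list_of_values()) {
          printf "%-8s %s\n", $val->get_name(), $val->get_data();
      }
  }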

Friday, December 02, 2011

New Stuff

Speaking
I recently returned from visiting with the great folks at the CT HTCIA.  They had invited me up to speak at their meeting a while back, and in order to keep costs down, I did an up-and-back trip.  I gave two presentations, each about an hour in length...the first was on using RegRipper, the second was on understanding malware (via the four characteristics I've talked about in this blog).  Overall, it was a great opportunity for me to get out and meet some new folks and see some faces I hadn't seen in a while.

As to other speaking engagements, I'll be taking part in the SANS360 DFIR Lightning Talks (my job title is incorrect on the page...) event on 13 Dec.  This should be very interesting.  I've enjoyed some of the changes to the conference format that I first began seeing in 2008 through the SANS Forensic Summit, particularly the panel format.  This is another new addition...10 speakers, each with 360 seconds to present on a topic. 

Finally (for now, anyway), I'll be presenting on timeline analysis at DC3 in January, and I recently saw that SANS now has the CfP for the SANS Forensics Summit posted.  This is a different approach from last year, but I'm going to submit, hope that I get accepted, and hope to see you in Austin, TX, next summer!

RegRipper
Speaking of RegRipper...to get your very own copy of RegRipper, go here and get the file "RR.zip".  To get the latest and greatest user-submitted plugins, go here.  I know Rob has updated the SANS SIFT Workstation to include the latest and greatest plugins in that distribution.

Oops, he did it (again)...
I caught this very interesting article by Mike Tanji (Kyrus CSO) recently...if you haven't read it, it's an excellent article, largely because he's so on point.  I particularly agree with his statement about critical thinking, particularly in light of this OverHack blog post that describes a phenomenal leap in "analysis" (sort of brings Mike's whole "hyperbole" statement into perspective), and its inevitable results.

Another part of Mike's article that I agree with wholeheartedly is specificity of language.  Like Mike and others, I see a lot of this (or lack thereof) within our community.  I recently received an email asking for assistance with Registry analysis, and the question revolved around the "system key".  Not to be a "word n@zi", but it's a hive, not a key.  Registry keys are very specific objects and structures, and are different from Registry values.  To Mike's point, other professions have that specificity of language...all doctors know what "stat" means, all lawyers know what "tort" means.  Like other professions and organizations, DFIR folks are often embattled with marketing forces (what does the over-used term "APT" really mean?), but we still simply do not pay enough attention within our community to agreed-upon terminology.

Linkz
Here are some links I've pulled together since my last post...

Claus is back with some updates to MS Tools and some other software stuff...

Andreas updated his EvtxParser Perl library, to fix an issue with memory.

Dave posted on extending RegRipper...again.  I read the blog post twice, and it seems like the "they see me rollin', they hatin'" blog post of the month.  ;-)

Corey's got another great post up...one of the things I like about it is that he is the first person (that I'm aware of) who's downloaded the malware detection checklist I posted and actually provided feedback on it.  This is just another example of Corey's continuing contributions to the community.

Windows Security Descriptor Parser (Perl) - found here.

PDF Analysis - PDF Analysis using PDFStreamDumper

Check out Chris Taylor's blog...in his first post, he mentions selling out, but to be honest, he's got some really good stuff there.  I think like many (myself included), he's found the benefit of sharing findings, thoughts and ideas, not just as a way of keeping your own notes, but also getting input from others.

Meetup
Finally, don't forget about next week's NoVA Forensics Meetup.  Time and location haven't changed...Sam Brothers will be presenting on mobile forensics.

Wednesday, November 23, 2011

Stuff

Online Meetups
Usually when I ask online for input into the NoVA Forensics Meetups, I most often get back responses from folks who have not attended the meetups, but want to, and most of those responses are from people who live too far away to attend the meetups...so they ask me when I'm going to start a meetup in their area.  I had a chance to speak with Mike Wilkinson (teaches at Champlain College) while we were both out at PFIC 2011, and not long ago, Mike posted a survey to see if folks would be interested in attending or presenting at online meetups.  Mike posted the results of the survey recently, and posted a schedule of presentations, as well.

Based on the results, Mike will be running the online meetups on the third Thursday of each month, at 8pm EST, starting on 15 Dec 2011.

NoVA Meetup
While we're on the topic of the meetups, I thought I'd throw out a reminder to everyone about the next NoVA Forensics Meetup on Wed, 7 Dec.  I'm looking forward to this one, as Sam Brothers will be presenting on mobile forensics.

Case Notes
Corey posted a narrative version of case notes for an exam he recently worked.  Corey does a great job of walking the reader through the process of discovery during the exam, and if you look at what he's doing, you'll see his process pretty clearly, starting with his goal of determining the initial infection vector (IIV) for the malware infection.  He even went so far as to post his investigative plan.

Another aspect of Corey's post that I really liked was this:

Some activities were conducted in parallel to save time.

I can't tell you the number of times I have seen examiners with several systems (a Mac Pro Server and two MacBook Pro systems) do nothing but start a CCN search against the one image they have, using EnCase, and state that they can't do anything else because the image is in use.  When you've got multiple systems, you can easily extract designated data from within the image before subjecting it to a long-running process (AV or CCN scans, etc.)...or simply create a second working copy of the image.  Or, instead of starting the long-running processes at the beginning of your day, start them when you know you're going to have some down-time, or even before you leave the lab.

As part of his analysis process, Corey did two other really impressive things; he made use of the tools he had available, and he created a timeline.  One of the things Corey mentioned in his post was that he created a batch file to run specific RegRipper/rip.exe plugins and extract specific data; this is a great use of available tools - not just RegRipper, but also batch file scripting - to get the job done.  Also, Corey walks through portions of the timeline he created, opening it in Excel and highlighting (in yellow or red) specific entries. 
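
The batch file approach is dead simple, and the same idea works from a few lines of Perl.  The plugins named here are just examples (run "rip.exe -l" to see the full list):

  # Run a specific set of RegRipper plugins against a System hive and
  # collect the output in a single report file.
  my $hive = shift(@ARGV) || "F:\\case\\system";
  foreach my $plugin (qw(compname timezone mountdev usbstor)) {
      system("rip.exe -r $hive -p $plugin >> system_report.txt");
  }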

I'll leave the rest of the post to the reader...great job, Corey!

Tools
Scalpel was updated a bit ago...if you do any file carving, check it out.

After I posted my macl.pl tool, I received an email from WiFiMafia regarding wwtool v0.1, a CLI tool for listing available wireless networks.  While it's not specifically a tool for DFIR work, I can easily see how it would be useful for assessment work.

Friday, November 18, 2011

Good Stuff

Geolocation Information
Chad had an excellent post recently regarding geolocation data; besides mobile devices, Windows systems can potentially contain two sources of geolocation information.  One is the WiFi WAP MAC addresses that you can retrieve from the Registry...once you have those, you can use tools like macl.pl to plot the location of the WAP on a map.  Second, some users back up their smartphones to their desktops using iTunes or the BlackBerry Desktop Manager...you may be able to pull geolocation information from these backups, as well.  Check out the FOSS page for some tools that may help you extract that information.
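
As an illustration of the first source, here's a minimal sketch that uses Parse::Win32Registry to pull WAP MAC addresses from a Windows 7 SOFTWARE hive; the NetworkList key path and value names are from my own notes, so treat them as assumptions and verify them against your own test data:

#!/usr/bin/perl
# Minimal sketch: list WAP MAC addresses recorded in a Windows 7 SOFTWARE
# hive, using Parse::Win32Registry.  The key path and value names below
# are assumptions based on my own notes; verify against your own data.
use strict;
use warnings;
use Parse::Win32Registry;

my $file = shift || die "Usage: $0 <SOFTWARE hive>\n";
my $reg  = Parse::Win32Registry->new($file) || die "Cannot open $file\n";
my $root = $reg->get_root_key();

my $path = 'Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Signatures\\Unmanaged';
if (my $key = $root->get_subkey($path)) {
    foreach my $sig ($key->get_list_of_subkeys()) {
        my $mac = $sig->get_value('DefaultGatewayMac') || next;
        # the MAC is stored as 6 raw bytes; format as xx:xx:xx:xx:xx:xx
        my $addr = join(':', map {sprintf "%02x", $_} unpack("C6", $mac->get_data()));
        my $desc = $sig->get_value('Description');
        printf "%-25s %s\n", (defined $desc) ? $desc->get_data() : "(no description)", $addr;
    }
}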

Interviews
Like most analysts, I like to see or hear what other analysts are seeing, and how they're addressing what they're seeing.

Ryan Washington's CyberJungle interview (episode 238) - Ryan was interviewed about his PFIC 2011 presentation on how forensicators can discover artifacts of anti-forensics attempts.  As with his presentation, Ryan discusses not just hiding data from the user, but also how even seasoned pen testers leave tracks on systems, even when they try very hard to be stealthy.

I remember a discussion I had with members of the IBM ISS X-Force a while ago regarding an Excel exploit that allowed them access to a system.  I asked about artifacts, and was told that there were none.  So I asked, if the exploit included sending a malicious Excel file and having the user open it, wouldn't the Excel spreadsheet itself be an "artifact"?  After all, many a forensicator has nailed down a phishing attack by locating the malicious PDF file in the email attachment archive.

Interestingly, Ryan also mentions digital "pocket litter", which isn't something that many folks who try to hide their activities are really aware of...

Chris Pogue's Pauldotcom interview - episode 267, starting about 56:33 into the video; Chris talks about Sniper Forensics: what it means, where we are now, and where we need to go, all with respect to DFIR.  Chris also references some of the same topics that Ryan discussed, and in some cases goes into much more technical detail (re: discussion of MFT attributes).  Chris talks about some of the things that he and his team have seen, including MBR infectors, and about memory analysis.

Another cool thing about the interview is that you get to see Chris's office, and hear his cell phone ring tone!

Wednesday, November 16, 2011

Stuff, Reloaded

More APT Confusion
I ran across an interesting article on TechTarget recently, which states that confusion over the APT threat "...leads companies to often misappropriate resources, making unnecessary or uninformed investments."

Really?  I remember going on-site to perform IR back in 2006, when I was with the ISS ERS Team, and learning how the customer knew to contact us.  They had three copies of ISS RealSecure.  All still in their shrink-wrap.  One was being used to prop a door open.  So, with respect to the TechTarget article, I'd say it isn't necessarily confusion over what "APT" means that leads to "uninformed investments", although I do think that the marketing material most organizations find themselves inundated with does lead to significant confusion.  Rather, I think it's a lack of understanding of threats in general, combined with the panic that follows an incident, particularly one that, when investigated, is found to have been going on for some time (weeks, months) prior to detection.

Context...no, WFP.  Wait...what?
When presenting on timeline analysis, or most recently at PFIC 2011 on Windows forensic analysis, one of the concepts I cover is context within your examination.  Recently, Chris posted on the same topic, and gives a great example.

Something about the post, and in particular the following words, caught my eye:

"...manually went through the list of running services using the same methodology...right name, wrong directory, or slightly misspelled name, right directory (for the answer to why I do this, check this out... http://support.microsoft.com/kb/222193)."

Looking at this, I was a little confused...what does Windows File Protection (WFP) have to do with looking for the conditions that Chris mentioned in the above quote?  I mean, if a malware author were to drop "svch0st.exe" into the system32 directory, or "svchost.exe" into the Windows directory, then WFP wouldn't come into play, would it?

What's not mentioned in the post is that, while both of the conditions are useful techniques for hiding malware (because they work), WFP is also easily "subverted".  The reason I put "subverted" in quotes is that it's not so much a hack as it is using an undocumented MS API call.  That's right!  To break stuff, you don't have to break other stuff first...you just use the exit ramp that the vendor didn't post signs for.  ;-)

Okay, to start, open WFA 2/e and turn to pg. 328.  Just below the middle of the page, there's a link to a BitSum page (the page doesn't seem to be available any longer...you'll need to look here) that discusses various methods for disabling WFP.  One that I've seen used is method #3; that is, disabling WFP for one minute for a particular file...this is likely what Windows Update itself uses.  This CodeProject page has some additional useful information regarding the use of the undocumented SfcFileException API call.

To show you what I mean by "undocumented", take a look at the image to the right...this is the Export Address Table from sfc_os.dll on a Windows XP system, viewed in PEView.  If you look at the Export Ordinal Table, you'll see only the last four functions listed by name.  However, in the Export Address Table, you don't see names associated with several of the functions.

Note that at the top of the BitSum page (archived version), several tools are listed to demonstrate some of the mentioned techniques.  As the page appears to be no longer available, I'm sure that the tools are not available either...not from this site, anyway.

Mandiant has a good example of how WFP "subversion" has been used for malware persistence; see slide 25 of this Mandiant "The Malies" presentation.  W32/Crimea is another example of how disabling WFP may be required (I've seen the target DLL as a "protected" file on some XP systems, but not on others...).  This article describes the WFP subversion technique and points to this McAfee blog post.
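
Going back to the conditions Chris described...the "right name, wrong directory" and "misspelled name, right directory" checks are easy to automate.  Here's a minimal sketch that flags both, given a list of lowercased image paths (say, from a process or service listing); the known-good list is illustrative and far from complete:

#!/usr/bin/perl
# Minimal sketch: given lowercased image paths (one per line on STDIN),
# flag known-good file names in the wrong directory, and near-miss file
# names anywhere.  The known-good list is illustrative only.
use strict;
use warnings;

my %known = ('svchost.exe'  => '\\windows\\system32',
             'lsass.exe'    => '\\windows\\system32',
             'services.exe' => '\\windows\\system32');

while (my $path = <STDIN>) {
    chomp($path);
    my ($dir, $name) = ($path =~ m/^(.*)\\([^\\]+)$/) or next;
    if (exists($known{$name})) {
        print "Wrong directory: $path\n" unless ($dir =~ m/\Q$known{$name}\E$/);
    }
    else {
        foreach my $good (keys %known) {
            print "Lookalike name : $path (vs $good)\n" if (offByOne($name, $good));
        }
    }
}

# true if two names are the same length and differ by exactly one character
# (catches svch0st.exe; a fuller check might use Levenshtein distance)
sub offByOne {
    my ($n, $g) = @_;
    return 0 unless (length($n) == length($g));
    my $diff = 0;
    foreach my $i (0 .. length($n) - 1) {
        $diff++ if (substr($n, $i, 1) ne substr($g, $i, 1));
    }
    return ($diff == 1);
}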

Yes, Virginia...it is UTC
I recently posted a link to some of my timeline analysis materials that I've used in previous presentations.  I've mentioned before that I write all of my tools to normalize the time stamps to 32-bit Unix time format, based on the system's interpretation of UTC (which, for the most part, is analogous to GMT).  In fact, if you open the timeline presentation from this archive, slide 18 includes a bullet that states "Time (normalized to Unix epoch time, UTC)".
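
For those curious about the mechanics, here's a minimal sketch of one way to perform that normalization on a FILETIME object (a 64-bit count of 100-nanosecond intervals since 1 Jan 1601 UTC, stored as two 32-bit halves); the example values at the end are arbitrary:

#!/usr/bin/perl
# Minimal sketch: normalize a FILETIME object to 32-bit Unix epoch time
# (seconds since 1 Jan 1970 UTC).
use strict;
use warnings;

sub filetimeToEpoch {
    my ($lo, $hi) = @_;
    return 0 if ($lo == 0 && $hi == 0);
    my $ft = ($hi * 4294967296) + $lo;            # reassemble the 64-bit value
    my $t  = int($ft / 10000000) - 11644473600;   # 100-ns ticks -> seconds, shift epochs
    return ($t < 0) ? 0 : $t;
}

# arbitrary example values; gmtime() displays the result as UTC
my $t = filetimeToEpoch(0x12E2A940, 0x01CC9C7B);
print scalar(gmtime($t)), " UTC\n";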

I hope this makes things a bit clearer to folks.  Thanks!

Intel Sharing
Not long ago, I posted about OpenIOC.org, and recently ran across this DarkReading article that discusses intel sharing. Sharing within the community, of any kind, is something that's been discussed time and time again...very recently, I chatted with some great folks at PFIC (actually, at the PFIC AfterDark event held at The Spur in Park City) about this subject.

In the DarkReading article, Dave Merkel, Mandiant CTO, is quoted as saying, "There's no single, standardized way for how people to share attack intelligence."  I do agree with this...with all of the various disparate technologies available, it's very difficult to express an indicator of compromise (IoC) in a manner that someone else can immediately employ it within their infrastructure.  I mean, how does someone running Snort communicate attack intel to someone else who monitors logs?

I'd suggest that it goes a bit beyond that, however...there's simply no requirement (nor, apparently, any desire) for organizations to collect attack intelligence, or even simply share artifacts.  Most "victim" organizations are concerned with resuming business operations, and consulting firms are likely more interested in competitive advantage.  At WACCI 2010, Ovie talked about the lack of sharing amongst analysts during his keynote presentation, and like others, I've experienced that myself on the teams I've worked with...I might not have any contact with another analyst on my team for, say, three months, and after all that time, they'd have nothing to share from their engagements.  We took steps to overcome that...Chris Pogue and I wrote a white paper on SQL injection, we developed some malware characteristics, and I even wrote plugins for RegRipper.  I've seen the same sharing issue when I've talked to groups, not just about intel sharing, but also about the forensic scanner.

I think that something like OpenIOC does provide a means for describing IoCs in a manner that can be used by others...but only others with the same toolset.  Also, it is dependent upon what folks find, and from that, what they choose to share.  As an example, take a look at the example Zeus IOC provided at the OpenIOC.org site.  It contains some great information...file names/paths, process handles, etc...but no persistence mechanism for the malware itself, and no Registry indicators.  So, this IoC may be great if I have a copy of IOCFinder and a live system to run it against.  But what happens if I have a memory dump and an acquired image, or just a Windows machine that's been shut off?  Other IoCs, like this one, are more comprehensive...maybe with a bit more descriptive information and an open parser, an analyst could download the XML content and parse out just the information they need/can use.
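
To illustrate what I mean by an "open parser", here's a minimal sketch that uses XML::LibXML to dump the condition, search term, and content of each IndicatorItem in an OpenIOC file; the local-name() XPath expressions are simply a lazy way of sidestepping the OpenIOC namespace:

#!/usr/bin/perl
# Minimal sketch: dump the condition, search term, and content of each
# IndicatorItem in an OpenIOC XML file, so that an analyst can pull out
# just the artifacts their own toolset can use.
use strict;
use warnings;
use XML::LibXML;

my $file = shift || die "Usage: $0 <ioc file>\n";
my $doc  = XML::LibXML->load_xml(location => $file);

# local-name() sidesteps the OpenIOC default namespace
foreach my $item ($doc->findnodes('//*[local-name()="IndicatorItem"]')) {
    my $cond    = $item->findvalue('@condition');
    my $search  = $item->findvalue('./*[local-name()="Context"]/@search');
    my $content = $item->findvalue('./*[local-name()="Content"]');
    printf "%-10s %-45s %s\n", $cond, $search, $content;
}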

Now, just to be clear...I'm not saying that no one shares DFIR info or intel.  I know that some folks do...some folks have written RegRipper plugins, but I've also been in a room full of people who do forensic analysis, and while everyone admits to having a full schedule, not one person has a single artifact to share.  I do think that the IoC definition is a good start, and I hope others pick it up and start using it; it may not be perfect, but the best way to improve things is to use them.

DoD CyberCrime Conference
Thanks to Jamie Levy for posting the DC3 track agenda for Wed, 25 Jan 2012.  It looks like there are a number of interesting presentations, many of which go on at the same time.  Wow.  What's a girl to do?

NoVA Forensics Meetup
Just a quick reminder about the next NoVA Forensics Meetup, scheduled for Wed, 7 Dec 2011, at the ReverseSpace location.  Sam Brothers will be presenting on mobile forensics.

Tuesday, November 15, 2011

Stuff

Registry Parsing
Andrew Case, developer of Registry Decoder, recently posted regarding using reglookup for Registry analysis.  There are a number of links in Andrew's post to some of Tim Morgan's papers regarding such topics as looking for deleted Registry keys, so be sure to take a look.

PFIC 2011
I had an opportunity to meet a lot of great folks in Park City, many of whom I had only known via their online presence.  One of those is fellow DFIR'er and fellow former Marine Corey Harrell.  Corey's one of those impressive folks that you want to reach out to and engage with in the community; rather than just sitting quietly, or just clicking "+1" or "Like", Corey goes out and does stuff, a good deal of which he's posted to his blog.

Corey posted his PFIC 2011 Review to his blog recently (Girl, Unallocated posted her thoughts and experiences, as well)...this is great stuff, for a couple of reasons.  First, some conferences, like PFIC, have a number of good topics and speakers, often during the same time slot.  As such, you may not be able to get to all of the presentations that you'd like to, and having someone post their "take-aways" from the presentation you missed is a good way to get a bit of insight beyond simply downloading the slide pack.  Taking that a step further, not everyone can attend conferences, so this gives folks who couldn't attend an opportunity to peek behind the curtain and see what's going on.  Finally, this gets the word out about next year's conference, as well, and may get someone over the hump of whether to attend or not.

DoD CyberCrime
Speaking of presentations, I got word recently that my DoD CyberCrime Conference presentation on timeline analysis is scheduled for 25 Jan 2012, from 8:30-10:20am.  The last (and first) time I attended DC3 was in 2007, and unfortunately, within less than an hour of finishing my presentation, I was on an incident call, and off the next day to another major city.  Ah...such was the life of an emergency responder.

My timeline analysis presentation (an example of a previous presentation can be found here) is a bit different from most of those that I find available online, in part because I don't focus on using the SANS SIFT Workstation.  That's not to say that SIFT isn't a great resource...because it is.  Rob's done a great job of assembling a range of open source tools, and getting them all set up and ready to use.  However, the approach I tend to take is to start by attempting to engage the audience, discussing with them the reasons why we'd want to do timeline analysis in the first place, and covering concepts such as context and increased relative confidence in the data.  Understanding these concepts can often be what gets folks to see the value of creating a timeline, when "...because this guy said so..." just isn't enough.  From there, we walk through using the tools, and demonstrate how timelines can be used as part of your analysis process...keeping in mind that, like any other tool, a timeline is just a tool and needs to be used accordingly.  Creating a timeline when it doesn't make sense to do so simply...well...doesn't make sense.

Anyway, I'm really looking forward to this opportunity, and hopefully to seeing a bunch of really good presentations, as well.  Looking at the conference agenda as it stands so far, it looks like there are a couple of good social events, too, which should lead to some great networking.

MMPC Updates
The Microsoft Malware Protection Center (MMPC) recently posted regarding some new MSRT definitions, including Win32/Cridex, another bit of malware that steals online banking credentials.  Cridex uses the user's Run key for persistence, and apparently stores data in the Default value of the HKCU\Software\Microsoft\Windows Media Center\ key.  Figure 3 of the MMPC post includes a screen capture of what this data looks like.
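
Here's a minimal sketch of checking an NTUSER.DAT hive for those two indicators with Parse::Win32Registry; the key paths reflect my reading of the MMPC post, so verify them against the write-up (and your own data) before relying on this:

#!/usr/bin/perl
# Minimal sketch: check an NTUSER.DAT hive for the two Cridex indicators
# described in the MMPC write-up: Run key entries, and data stashed in
# the Default value of the Windows Media Center key.
use strict;
use warnings;
use Parse::Win32Registry;

my $file = shift || die "Usage: $0 <NTUSER.DAT>\n";
my $root = Parse::Win32Registry->new($file)->get_root_key();

if (my $run = $root->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Run')) {
    # list all Run key entries; look for anything unexpected
    printf "Run: %s -> %s\n", $_->get_name(), $_->get_data()
        foreach ($run->get_list_of_values());
}
if (my $wmc = $root->get_subkey('Software\\Microsoft\\Windows Media Center')) {
    # in Parse::Win32Registry, the Default value has an empty string as its name
    print "WMC Default value contains data\n" if ($wmc->get_value(''));
}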

Duqu
Although I haven't had an opportunity to analyze a system infected with Duqu, as always, I remain interested in what's out there, particularly from a host-based perspective.  I ran across a set of open source tools for detecting Duqu files (readme here).  There's also the Symantec write-up on Duqu, which is very interesting, as it defines the Duqu "load point", which is a driver loaded as a Windows service, specifically HKLM\SYSTEM\CurrentControlSet\Services\JmiNET3.  Apparently, configuration information is maintained in the FILTER subkey beneath this key.

Interestingly, the load point is described as "JmiNET7.sys", but the Symantec paper goes on to say that the service name is "JmiNET3".

The paper also describes the loading techniques for the payload loader; method 3 involves a section within a DLL called ".zdata".

Finally, the Diagnostics section of the paper includes another Registry key that is supposed to indicate an infected system; specifically, HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\4\"CFID".

Anyone interested in learning more about Duqu should take a look at the Symantec paper, as well as anything else that's out there.  There seem to be some interesting (and possibly unique) indicators that you can use to scan your infrastructure for infected systems; per the Symantec paper, part of the Duqu threat involves infostealers.
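
For checking live systems, here's a minimal sketch that uses Win32::TieRegistry to look for the indicators mentioned above; it needs to be run from an admin account on the target system, and again, the paths are from the Symantec paper as I read it:

#!/usr/bin/perl
# Minimal sketch: check a live Windows system for the Duqu Registry
# indicators named in the Symantec paper (the JmiNET3 service key, its
# FILTER subkey, and the CFID value); requires admin privileges.
use strict;
use warnings;
use Win32::TieRegistry (Delimiter => '/');
our $Registry;

if (my $svc = $Registry->{'LMachine/SYSTEM/CurrentControlSet/Services/JmiNET3/'}) {
    print "Found JmiNET3 service key\n";
    print "Found FILTER subkey\n" if ($svc->{'FILTER/'});
}
my $zone = $Registry->{'LMachine/SOFTWARE/Microsoft/Windows/CurrentVersion/'
                     . 'Internet Settings/Zones/4/'};
print "Found CFID value\n" if ($zone && defined($zone->{'/CFID'}));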

Tool Updates
There've been some updates to the SysInternals tools recently, in particular to AutoRuns (now at v11.1), which includes some new autostart locations.  Check them out.

Andreas has updated his Evtx Parser tool (written in Perl), as well.

ImDisk was recently updated to version 1.5.2.

I updated my maclookup.pl WiFi geolocation script to macl.pl.   The previous version of the script used Skyhook to perform lookups, in an attempt to translate a WiFi WAP MAC address (found in the Windows Registry) to a lat/long pair.  I found out recently that this stopped working, so I sought out...and found...a way to update the script.

Reading
The e-Evidence.info what's new site was updated recently, and as always, there's lots of great reading material.  This presentation on using open source tools for digital forensic analysis spends a good couple of slides demonstrating how to use RegRipper.  David Hull has a timeline presentation available that discusses the use of SIFT v2.0 to create super timelines.