Rob Lee recently had a very thought-provoking post on the SANS Forensics blog titled How to Make a Difference in the Digital Forensics and Incident Response Community. In that article, Rob highlights the efforts of Kristinn Gudjonsson in creating and developing log2timeline, which is a core component of the SIFT Workstation and central to developing super timelines.
I love reading stuff like this...it's the background and the context to efforts like this (the log2timeline framework) that I find very interesting, in much the same way that I use an archeological version of the NIV Bible to get social and cultural background about the passages being read. There's considerable context in the history of something, as well as the culture surrounding it and the effort it took to get something going, that you simply don't see when you download and run the tool. As an example, Columbus discovering the Americas isn't nearly as interesting if you leave out all of the stuff that came before.
However, I also thought that for the vast majority of folks within the community, the sort of thing that Rob talked about in the post can be very intimidating. While there are a good number of folks out there with SANS certifications, many (if not most) likely obtained those certifications in order to do the work, but not so much to learn how to contribute to the community. Also, many analysts don't program. While the ability to program in some language is highly recommended as a valuable skill within the community, it's not a requirement.
As such, it needs to be said that there are other ways to contribute, as well. For example, use the tools and techniques that get discussed or presented, and discuss their feasibility and functionality. Are they easy to understand and use? Is the output of the tool understandable? What were your specific circumstances, and did the tool or technique work for you? What might improve the tool or technique, and make it easier to use?
Another way to contribute is to ask questions. By that, I'm not suggesting that you run a tool and if it doesn't work or you don't understand the output, to then go and cross-post "it don't work" or "I don't get it" across multiple forums. What I am saying is that when you encounter an issue of some kind, do some of your own research and work first...then, if you still have a question, ask it. This does a couple of things...first, it makes others aware of what your needs are, providing the goals of your exam, what you're using to achieve those goals, etc. Second, it lets others see what you've already done...and maybe gives them hints as to how to approach similar problems. If nothing else, it shows that you've at least attempted to do your homework.
A reminder: when posting questions about Windows in particular, the version of Windows that you're looking at matters a great deal. I was talking to someone last night about an issue of last access time versus last modification time on a file on a Windows system, and I asked which version of Windows we were talking about...because it's important. I've received questions such as, "Why are there no Prefetch files on a Windows system?", only to find out after several emails had been exchanged that we were talking about Windows 2008.
Post a book or paper review; not a rehash of the table of contents, but instead comment on what was valuable to you, and how you were able (or unable) to use the information in the book or paper to accomplish a task. Did what you read impact what you do?
I think that one of the biggest misconceptions within the community is that a lot of folks feel that they're "junior" or don't have anything to contribute...and nothing could be further from the truth. None of us has seen everything that there is to see, and it is very likely that someone working an exam may run across something (a specific ADS, a particular application artifact, etc.) that few have seen before. As such, there's no reason why you can't share what you found...just because one person may have seen it before doesn't mean that everyone has...and God knows that many of us could simply use reminders now and again. Tangential to that is the misconception that you have to expose attributable case data to share anything. Nothing could be further from the truth. There are a number of folks out there in the community who share specific artifacts without exposing any attributable case data.
Speaking of Rob Lee...
I'll be in DC on Tuesday night at the SANS360 Lightning Talk event; my little portion is on accessing VSCs. If you can't be there, register for the simulcast, and follow along on Twitter via the #SANS360 hashtag.
Back during the OSDFC this past summer, I learned about Simson Garfinkel's bulk_extractor tool, and my first thought was that it was pretty cool...I mean, being able to just point an executable at an image and let it find all the things would be pretty cool. Then I started thinking about how to employ this sort of thing...because other than the offset within the image file of where the artifact was found, there really wasn't much context to what would be returned. When I was doing PCI work, we had to provide the location (file name) where we found the data (CCNs), and an email address can have an entirely different context depending on where it's found...in an EXE, in a file, or in an email (To:, From:, CC: lines, message body, etc.).
Well, I haven't tried it yet, but there's a BEViewer tool available now that reportedly lets you view the features that bulk_extractor found within the image. As the description says, you have to have bulk_extractor and BEViewer installed together. This is going to be a pretty huge leap forward because, as I mentioned before, running bulk_extractor by itself leaves you with a bunch of features without any context, and context is where we get part of the value of the data that we find.
For example, when talking about bulk_extractor at OSDFC, Simson mentioned finding email addresses and how many addresses you can expect to find in a fresh Windows installation. Well, an email address will have very different context depending on where it's found...in an email To:, From: or CC: block, in the body of an email, within an executable file, etc. Yes, there is link analysis, but how do you add that email address to your analysis if you have no context? The same is true with PCI investigations; having done these in the past, I know that MS has a couple of PE files that contain what appear to be CCNs...sequences of numbers that meet the three criteria that examiners look for with respect to CCNs (i.e., length, BIN, Luhn check). However, these numbers are usually found within a GUID embedded in the PE file.
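Those three criteria are easy to sketch as a rough filter. Here's a minimal Python example; the length range and BIN prefixes are illustrative assumptions (Visa/Mastercard-style prefixes, not an exhaustive BIN list), and as the GUID example above shows, a hit on all three checks still isn't proof of a real CCN without the surrounding context:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn (mod 10) check."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_ccn(candidate: str,
                   bin_prefixes=("4", "51", "52", "53", "54", "55")) -> bool:
    """Apply the three coarse filters examiners use: length, BIN prefix,
    and the Luhn check. The BIN prefix list is illustrative only."""
    digits = "".join(c for c in candidate if c.isdigit())
    return (13 <= len(digits) <= 19
            and digits.startswith(bin_prefixes)
            and luhn_valid(digits))
```

A sequence like the well-known test number 4111 1111 1111 1111 passes all three filters, which is exactly why context (file name, surrounding structure) is the deciding factor.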
As such, BEViewer should be a great addition to this tool. I've had a number of exams where I've extracted just unallocated space or the pagefile and run strings across it to look for specific things...but something like this would be useful to run in parallel during an exam, just to see what else may be there.
While we're on the topic of tools, you may have noticed that I've made some updates to my FOSS page recently, mostly in the area of mobile device forensics. My new position provides me with more opportunities with these devices, but I have talked about examining mobile device backups on Windows systems (BlackBerrys backed up with the Desktop Manager, iPhones/iPads backed up via iTunes, etc.) before, and covered some FOSS tools for accessing these files in WFA 3/e.
These tools (there is a commercial tool listed, but it has a trial version available) can be very important. Say that you have a friend who backs up their phone and has lost something...you may be able to use these tools to recover what they lost from the backup. Also, in other instances, you may find data critical to what you're looking at in the phone backup.
Corey had a great post recently on keeping sharp through simulations; this is a great idea. Corey links to a page that lists some sites that include sample images, and I've got a couple listed here. In fact, I've not only used some of these myself and in training courses I've put together, but I also posted an example report to the Files section of the Win4n6 Yahoo Group ("acme_report.doc").
How about your own systems? Do you use Skype? Acquire your own system and see how well some of the available tools work when it comes to parsing the messages database...or write your own tools (Perl has a DBI interface for accessing SQLite databases). Or, install a P2P application, perform some "normal" user functions over time, and then analyze your own system.
Not only are these great for practice, but you can also make a great contribution to the community with your findings. Consider trying to use a particular tool or technique...if it doesn't work, ask why in order to clarify the use, and if it still doesn't work, let someone know. Your contribution may be pointing out a bug.
I ran across an interesting tweet one morning recently, which stated that one of the annoying fake AV bits of malware, AntiVirii 2011, uses the Image File Execution Options key in the Registry. I thought this was interesting for a number of reasons.
First, we see from the write-up linked above that there are two persistence mechanisms (one of the malware characteristics that we've talked about before): the ubiquitous Run key, and this other key. Many within the DFIR community are probably wondering, "Why use the Run key, when we all know to look there?" The answer to that is...because it works. It works because not everyone knows to look there for malware. Many DFIR folks aren't well versed in Registry analysis, and the same is true for IT admins. Most AV doesn't automatically scan autostart locations and specifically target the executables listed within them (I say "most" because I haven't seen every legit AV product).
Second, the use of the Image File Execution Options key is something that I've only seen once in the wild, during an incident that started with a SQL injection attack. What was interesting about this incident is that none of the internal systems that the bad guy(s) moved to had the same artifacts. We'd find one system that was compromised, determine the IOCs, and search for those IOCs across other systems...and not find anything. Then we'd determine another system that had been compromised, and find different IOCs.
I ran across this article, via Twitter, that talks about the analysis of an apparent breach of an Illinois water treatment facility. While the title of the article calls for "analytical competence", the tweet that I read stated "DHS incompetence". However, I don't think that the need for critical and analytical thinking (from the article) is something that should be reserved for just DHS.
The incident in question was also covered here, by Wired. The Wired article very quickly pointed out that the login from a Russian-owned IP address and the failing pump were two disparate events that occurred five months apart, and were correlated only through a lack of competent analysis.
In a lot of ways, these two articles point out a need for reflection...as analysts, are we guilty of some of the same failings mentioned in these articles? Did we submit "analysis" that was really speculation, simply because we were too lazy to do the work, or didn't know enough about what we were looking at to know that we didn't know enough? Did we submit a report full of rampant speculation, in the hopes that no one would see or question it?
It's impossible to know everything about everything, even within a narrowly-focused community such as DFIR. However, it is possible to think critically and examine the data in front of you, and to ask for assistance from someone with a different perspective. We're much smarter together than we are individually, and there's no reason that we can't do professional/personal networking to build up a system of trusted advisers. Something like the DHS report could have been avoided using these networks, not only for the analysis, but also for peer review of the report.