Monday, June 28, 2010


Remember that scene from Napoleon Dynamite where he talks about having "skillz"? Well, analysts have to have skillz, right?

I was working on a malware exam recently...samples had already been provided to another analyst for reverse engineering, and it was my job to analyze acquired images and answer a couple of questions. We knew the name of the malware, and when I was reviewing information about it at various sites (to prepare my analysis plan), I found that when the malware files are installed, their MAC times are copied from kernel32.dll. Okay, no problem, right? I'll just parse the MFT and get the time stamps from the $FILE_NAME attribute.

So I received the images and began my case in-processing. I then got to the point where I extracted the MFT from the image, and the first thing I did was run David Kovar's MFT parsing tool against it. I got concerned after it had run for over an hour and all I had was a 9KB file, so I hit Ctrl-C in the command prompt and killed the process. I then ran Mark Menz's MFTRipperBE against the file, and when I opened the output .csv file and ran a search for the file name, Excel told me that it couldn't find the string. I even tried opening the .csv file in an editor and ran the same search, with the same results. Nada.

Fortunately, as part of my in-processing, I had verified the file structure with FTK Imager, and then created a ProDiscover v6.5 project and navigated to the appropriate directory. From there, I could select the file within the Content View of the project and see the $FILE_NAME attribute times in the viewer.

I was a bit curious about the issue I'd had with the first two tools, so I ran my Perl code for parsing the MFT and found an issue with part of the processing. I don't know if this is the same issue that the other tools encountered, but I made a couple of quick adjustments to my Perl script, and I was able to fairly quickly get the information I needed. I can see that the file has $STANDARD_INFORMATION and $FILE_NAME attributes, as well as a data attribute, that the file is allocated (from the flags), and that the MFT sequence number is 2. Pretty cool.
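For anyone curious, the checks involved are straightforward...the sequence number and flags live at fixed offsets in the MFT record header. Here's a minimal sketch, in Python for illustration (my own code is Perl, and the function name here is mine):

```python
import struct

def parse_mft_header(record):
    """Parse the fixed header of a raw MFT record (NTFS on-disk layout):
    bytes 0-3 hold the "FILE" signature, a 16-bit sequence number sits at
    offset 0x10, and 16-bit flags sit at offset 0x16 (0x01 = allocated,
    0x02 = directory)."""
    if record[0:4] != b"FILE":
        raise ValueError("not a valid MFT record")
    seq = struct.unpack_from("<H", record, 0x10)[0]
    flags = struct.unpack_from("<H", record, 0x16)[0]
    return {"seq": seq,
            "allocated": bool(flags & 0x01),
            "directory": bool(flags & 0x02)}
```

An allocated file with sequence number 2, as in the case above, would come back as seq 2, allocated, and not a directory.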

The points of this post are:

1. If you run a tool and do not find the output that you expect, there's likely a reason for it. Validate your findings with other tools or processes, and document what you do. I've said (and written in my books) that the absence of an artifact where you would expect to find one is itself an artifact.

2. Analysts need to have an understanding of what they're looking at and for, as well as some troubleshooting skills, particularly when it comes to running tools. Note that I did not say "programming" skills. Not everyone can, or wants to, program. However, if you don't have the skills, develop relationships with folks who do. But if you're going to ask someone for help, you need to be able to provide enough information that they can help you.

3. Have multiple tools available to validate your findings, should you need to do so. I ran three tools to get the same piece of information, the need for which I had documented in my analysis plan prior to receiving the data. One tool hung, another completed without providing the information, and I was able to get what I needed from the third, and then validate it with a fourth. And to be honest, it didn't take me days to accomplish that.

4. The GUI tool that provided the information doesn't differentiate between "MFT Entry Modified" and "File Modified"...I just have two time stamps from the $FILE_NAME attribute called "Modified". So I tweaked my own code to print out the time stamps in MACB format, along with the offset of the MFT entry within the MFT itself. Now, everything I need is documented, so if need be, it can be validated by others.
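For reference, each of those time stamps is a 64-bit FILETIME...the number of 100-nanosecond intervals since January 1, 1601. A sketch of converting and labeling them in MACB order (again in Python for illustration; my tools are Perl, and the function names are mine):

```python
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_utc(ft):
    """Convert a 64-bit FILETIME (100ns ticks since 1601) to UTC."""
    return datetime.fromtimestamp(ft / 10_000_000 - EPOCH_DELTA, tz=timezone.utc)

def fn_times_macb(modified, accessed, entry_modified, born):
    """Label the four $FILE_NAME times in MACB order:
    M = file modified, A = accessed, C = MFT entry modified, B = born."""
    return {label: filetime_to_utc(ft).isoformat()
            for label, ft in zip("MACB", (modified, accessed, entry_modified, born))}
```

Printing all four values with explicit labels removes the ambiguity of having two time stamps that are both just called "Modified".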

Tuesday, June 22, 2010

Links and what not

Case Notes
Chris posted a bit ago on writing and maintaining case notes, using Case Notes. One of the things that is sorely overlooked is taking notes and documenting what you're doing during an examination.

This is perhaps one of the most overlooked aspects of what we (IR/DF folks) do...or perhaps more appropriately, need to be doing. Many times, so little of what we do is documented, and it needs to be, for a number of reasons. One that's mentioned many times is, how are you going to remember the details of what you did 6 months later? Which tool version did you use? Not to pick on EnCase, but have you been to the user forum? Spend a little time there and you'll see why the version of the tool makes a difference.

Another aspect that few really talk about is process improvement...how do you improve upon something if you're not documenting it? As forensics nerds, we really don't think about it too much, but there are a lot of folks out there who have processes and procedures...EMTs, for example. Let's say you're helping someone with some analysis, and you've worked out an analysis plan with them. Now, let's say that they didn't follow it...they told you that...but they can't remember what it was they did do. How do you validate the results? How can you tell that what they did was correct or sufficient?

A good example is when a customer suspects that they have malware on a system, and all you have to work with is an acquired image. How do you go about finding "the bad stuff"? Well, one way is to mount the image read-only and scan it with AV. Oh, but wait...did you check to see which AV, if any, was already installed on the system? Might not be a good idea to run that, because apparently it didn't work in the first place. So what do/did you run? Which version? When was it updated? What else did you do? You will not ever reach 100% certainty, but with a documented process you can get close.

When you document what you do, one of the side effects is that you can improve upon that process. Hey, two months ago, here's what I did...that was the last time that I had a malware case. Okay, great...what've I learned...or better yet, what have other folks learned since then, and how can we improve this process? Another nice side effect is that if you document what you did, the report (another part of documentation that we all hate...) almost writes itself.

In short, if you didn't document it, it didn't happen.

Raw2vmdk is a Java-based, platform-independent tool for mounting raw images as VMware vmdk disks. This is similar to LiveView, and in fact, raw2vmdk reportedly uses some of the same Java classes as LiveView. However, raw2vmdk is a command line tool; thanks to JD for taking the time to try it out and describe it in a blog post.

MFT $FILE_NAME Attributes
Eric posted to the Fistful of Dongles blog, asking about tools that can be used to extract/retrieve $FILE_NAME attributes from the MFT. I mentioned two tools in my comment that have been around and available for some time, and another commenter mentioned the use of EnScripts.

Tool Updates
Autoruns was updated on 15 June; perhaps the most notable update is the -z switch, which specifies an offline Windows system to scan. I really like Autoruns (and its CLI companion, autorunsc.exe), as it does a great job of going through and collecting information about entries in startup locations, particularly the Registry. The drawback of the tool is that there isn't much of an explanation as to why some areas are queried, leaving it up to the investigator to figure it out. This is pretty unfortunate, given the amount of expertise that goes into developing and maintaining a valuable tool like this, and usually means that a great deal of the information that the tool collects is simply overlooked.

If you're interested in USB device histories on your system, check out usbhistory.exe. The web page provides a very good explanation of how the tool works and where it goes to get its information. Overall, if this is information that you're interested in, this would be a very good tool to add to a batch file that collects information from a live system.

USB Devices
Speaking of USB device history, there's a topic I keep seeing time and time again in the lists that has to do with when a USB device had last been connected to a system. In almost all cases, the question that is posed indicates that there has been some issue determining when devices had been attached, as the subkeys beneath the Enum\USBStor key all have the same LastWrite times.

While there is clearly some process that is modifying these subkeys so that the LastWrite times are updated, these are not the keys we're interested in when it comes to determining when the devices were last attached to the system. I addressed this in WFA 2/e, starting on pg 209, and Rob Lee has covered it in his guides for profiling USB devices on various versions of Windows.

Sunday, June 20, 2010

Who's on...uh, at...FIRST?

I attended the FIRST conference in Miami last week. My employer is not a member of FIRST, but we were a sponsor, and we hosted the "Geek Bar"...a nice room with two Wiis set up, a smoothie bar (also with coffee and tea), and places to sit and relax. One of my roles at the conference was to be at the Geek Bar to answer questions and help sign folks up for the NAP tour on Thursday, as well as mingle with folks at the conference. As such, I did not get to attend all of the presentations...some were going on during my shift at the Geek Bar, for example.

Note: Comments made in this blog are my own thoughts and opinions, and do not necessarily reflect or speak for my employer.

Dave Aitel's presentation hit the nail on the head...defenders are behind and continue to learn from the attacker. Okay, not all defenders learn from the attacker...some do, others, well, not so much. Besides Dave's really cool presentation, I think that what he said was as important as what he didn't say. I mean, Dave was really kind of cheerful for what, on the surface, could be a "doom-and-gloom" message, but someone mentioned after the presentation that Dave did not provide a roadmap to fixing/correcting the situation. I'd suggest that the same things that have been said for the past 25 years, the same core principles still apply...they simply need to be used. My big take-away from this presentation was that we cannot say that defensive tactics, or the tactics, techniques, and strategies used by the defenders have failed, because in most cases, they haven't been implemented properly, or at all.

I really liked Heather Adkins' presentation regarding Aurora and Google, and I think that overall it was very well received. It was clear that she couldn't provide every bit of information associated with the incident, and I think she did a great job of heading off some questions by pointing out what was already out there publicly and could be easily searched for...via Google.

Vitaly Kamluk's (Kaspersky/Japan) presentation on botnets reiterated Dave's presentation a bit, albeit not in so many words. Essentially, part of the presentation was spent talking about the evolution of botnet infrastructures, going through one-to-many, many-to-one, C2, P2P, and a C2/P2P hybrid.

Unfortunately, I missed Steven Adair's presentation, something I wanted to see. However, I flew to Miami on the same flight as Steven, one row behind and to the side of his seat, so I got to see a lot of the presentation development in action! I mean, really...young guy opens up Wireshark on one of two laptops he's got open...who wouldn't watch?

Jason Larsen (researcher from Idaho National Labs) gave a good talk on Home Area Networks (HANs). He mentioned that he'd found a way to modify firmware on power meters to do such things as turn on the cellular networking of some of these devices. Imagine the havoc that would ensue if home power meters suddenly all started transmitting on cellular network frequencies. How about if the transmitting were on emergency services frequencies?

The Mandiant crew was in the hizz-ouse, and I saw Kris Harms at the mic not once, but twice! Once for the Mandiant webcast, and once for the presentation on Friday. I noticed that some parts of both of Mandiant's presentations were from previous presentations...talking to Kris, they were still seeing similar techniques, even as long as two years later. I didn't get a chance to discuss this with Kris much more, to dig into things like, were the customers against which these techniques were used detecting the incident themselves, or was the call to Mandiant the result of an external third party notifying the victim organization?

Richard Bejtlich took a different approach to his PPT! If you've read his blog, you know that he's been talking about this recently, so I wasn't too terribly surprised (actually, I was very interested to see where it would go) when he started his time at the mic by asking members of the audience for questions. He'd had a handout prior to kicking things off, and his presentation was very interesting because of how he spent the time.

There were also a number of presentations on cloud computing, including one by our own Robert Rounsavall, and another on Fri morning by Chris Day. It's interesting to see some folks get up and talk about "cloud computing" and how security, IR, and forensics need to be addressed, and then for others to get up and basically say, "hey, we have it figured out."

Take Aways from FIRST
My take aways from FIRST came from two sources...listening to and thinking about the presentations, and talking to other attendees about their thoughts on what they heard.

As Dave Aitel said, defenders are steps behind the attackers, and continue to learn from them. Kris Harms said that from what they've seen at Mandiant, considerable research is going into malware persistence mechanisms...but when talking about this with some attendees, particularly those involved in incident response in some way within their organizations, there were a lot of blank stares. A lot of what was said by some was affirmed by others, and in turn, affirmed my experiences as well as those of other responders.

I guess the biggest take away is that there are two different focuses with respect to business. Organizations conducting business most often focus on business and not so much on securing the information that is their business. The bad guys have a business model, as well, that is also driven by revenue...they are out to get your information, or access to your systems, and they are often better at using your infrastructure than you are. The drive or motivation of business is to do business, and at this point, security is such a culture change that it's no wonder that victims find out about so many intrusions and data breaches after the fact, due to third party notification. The road map, the solutions, to addressing this have been around for a very long time, and nothing will change until organizations start adopting those security principles as part of their culture. Security is ineffective if it's "bolted on"...it has to be part of what businesses do...just like billing and collections, shipping and product fulfillment, etc. Incident responders have watched the evolution of intruders' tactics over the years, and organizations that fall victim to these attacks are often stagnant and rooted in archaic cultures.

Overall, FIRST was a good experience, and a good chance to hear what others were experiencing and thinking, both in and out of the presentation forum.

Thursday, June 10, 2010


This post from the Digital Detective site discusses how to manually identify the time zone of a system from the image. This information is maintained in the Registry, and RegRipper has a plugin for this (as part of the default distro).

I saw this post recently on the SANS ISC blog, which has to do with software restriction policies on a system. I thought...hey, that's pretty cool, AND there's a Registry key listed. From there it was a simple matter to research the MS site and see what other information I could find, and I began to see the possible value of the data derived from the DefaultLevel value (called a "key" in the blog post) to an analyst. In a matter of minutes, I had a functioning RegRipper plugin.

Interestingly enough, the more I research this, the more I see the CodeIdentifiers key being of some level of importance, not only to forensic analysts, but also to system administrators. After all, if it weren't, why would so many bits of malware be modifying or deleting entries beneath this key?
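As a sketch of the kind of check such a plugin performs (in Python for illustration...RegRipper plugins are written in Perl, and the level names and values below are taken from Microsoft's documented SAFER levels for Software Restriction Policies, so treat them as assumptions to verify):

```python
# DefaultLevel values under the CodeIdentifiers key
# (HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers),
# per Microsoft's documented SAFER levels.
SAFER_LEVELS = {
    0x00000: "Disallowed",
    0x01000: "Untrusted",
    0x10000: "Constrained",
    0x20000: "Basic User",
    0x40000: "Unrestricted",
}

def default_level(value):
    """Translate a DefaultLevel DWORD into a human-readable policy level."""
    return SAFER_LEVELS.get(value, "Unknown (0x%x)" % value)
```

A value that doesn't map to a known level is itself worth noting in a report, since malware modifying the key may not use documented values.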

TSK/Open Source Conference

I have to say, when someone who's attended conferences sets out to create a conference, things tend to turn out pretty well. Aaron's OMFW (2008) turned out that way, and Brian's Open Source conference (9 June) was another excellent example.

There were seven presentations, all of high caliber (well, six of high caliber, and mine! ;-) ) and two time-slots for open discussion. I like the shorter talks (unless there's some kind of hands-on component), but that also requires the presenters to develop their presentations to meet the time constraint. For example, Cory had some great stuff (I know, because I was sitting next to him during earlier presentations when he developed it!) in his presentation, but had to skip over portions due to time.

Rather than walking through each presentation individually, I wanted to cover the highlights of the conference as a whole. In that regard, there were a couple of points that were brought up with respect to open source overall:

Tool Interoperability - Open source tools need to be able to interoperate better. During my presentation, I mentioned several times that due to the output provided by some tools and the format that I need, I use Perl as the "glue". Maybe there's a better way to do this.
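The "glue" itself doesn't have to be much; most of it is normalizing each tool's output into the five-field, pipe-delimited TLN format (time|source|host|user|description). A sketch of that idea in Python (my actual glue is Perl, and the input column names here are hypothetical):

```python
import csv, io
from datetime import datetime, timezone

def csv_to_tln(csv_text, source, host="", user=""):
    """Convert hypothetical tool output with 'timestamp' and 'description'
    columns into five-field TLN lines: epoch|source|host|user|description."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        dt = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
        lines.append("|".join([str(epoch), source, host, user, row["description"]]))
    return lines
```

Once every tool's output is in this normalized form, the events can simply be concatenated and sorted on the first field.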

Tool Storehouse - There are a number of open source (and free) tools out there that are very useful and very powerful...but they're out there and not accessible from a single location. It's difficult to keep up with so many things in forensics and IR...following blogs and lists can be too much. Having a centralized location where someone can go and search for information, kind of like an encyclopedia, would be more beneficial...maybe something like the Forensics Wiki.

Talking to a number of the other folks attending the conference, it was clear that everyone was at a different level. Some folks develop tools, others use and modify those tools, still others use the tools and submit bug reports or feature requests, and others simply use the tools. One of the benefits of these conferences is that all of these folks can meet, share thoughts and opinions, and discuss the direction of either individual tools, or open source in general. Some folks are very comfortable sharing ideas with a larger audience, and others do so on a more individual basis...conferences like this provide an environment conducive to both.

I think one of the biggest things to come out of the conference is that open source tools come from a range of different areas. Some come from academic research projects, others start as a need that one person has. Regardless of where they come from, they require work...a lot of work. Take memory acquisition and analysis...lots of folks have put a lot of effort into developing what's currently available, and if anyone feels that what's available right now isn't sufficient, then support the work that's being done. I think AW covered that very well; the best way to get your needs met is to communicate them, and then support those who are doing the work. Learn to code. Don't code? Can you write documentation? How else can you provide support? Loaning hardware? Providing samples for analysis? There's a great deal that can be done. We all have to remember that for most of this, the work is hard, such as providing a layer of abstraction for a very technical and highly fluid area of analysis (i.e., memory) or providing an easily accessible library for something else. It's hard work, and very often done on someone's own time, using their own resources. I completely agree with AW...folks that do those things should get recognition for their good work. As such, organizations (and individuals) that rely heavily on open source (and free) tools for their work should be offering some kind of support back to the developers, particularly if they want some additional capabilities.

On the organizational side of things...it's always good to see a conference where some of the things that you don't like about conferences are improved upon. For example, the attendee's badge wallet had the complete schedule listed on a small card right there in the back of the wallet. That way, you weren't always looking around for the schedule. Also, there were plenty of snacks, although I haven't yet been to an event like this where the coffee was any good. ;-(

Overall, this was a great conference, with lots of great information shared. One of the best things about conferences is that they bring folks together...folks that may "know" each other on the Internet or via email, but haven't actually met, or haven't seen each other in a while. Great content generates some great conversations, and the folks that share end up sharing some great ideas. I'm really hoping to get an invite to the next one...which means I should keep working with open source tools and "going commando"...

Monday, June 07, 2010


The term "anti-forensics" is used sometimes to describe efforts attackers use in order to foil or frustrate the attempts of the good guys to discover what happened, when it happened, and how it happened. Many times, what comes to mind when someone uses the term "anti-forensics" are things like steganography, rootkits, and timestomp.

Richard Bejtlich went so far as to make a distinction between the terms "anti-forensics" and "counterforensics". On Wikipedia, the term "anti-computer forensics" is used to refer to countermeasures against forensic analysis, while "counterforensics" refers to attacks directed against forensic tools. For additional clarification on the terms, see W. Matthew Hartley's paper on the topic here; the paper is three pages long, and makes a clear distinction between the two terms.

In short, from Hartley's paper, anti-forensics refers to techniques and technologies that "invalidate factual information for judicial review", whereas counterforensics refers to tools and techniques that "target...specific forensic technologies to directly prevent the investigator from analyzing collected evidence". Hartley's paper gives some very good examples to illustrate the differences between these two terms, so I strongly suggest reading the three pages.

Very often, we use these terms (in some manner) to describe what an attacker or intruder may do, but one thing that's not discussed often is how this applies to responder's actions, or inaction, as the case may be. In that regard, there are a couple of things that organizations and responders need to keep in mind:

1. You cannot just grab the disk and expect to get what you need. Live response, including memory acquisition, is becoming more and more important, particularly in the face of questions brought on by legislative and regulatory compliance. Too many times, a response team (consultants, not on-site staff) will be called in, but not provided with clear goals during the course of response. Then, after the fact, the victim organization will start with questions such as, "...was data exfiltrated?" and "...how much data was exfiltrated?" Fortunately, consulting responders have faced these questions often enough that they can often head them off at the pass, but what do you do when the victim organization has already taken systems offline before calling for help? Would it have been beneficial to the overall response if they'd captured memory, and perhaps some volatile data, first?

2. Temporal Proximity - lots of stuff happens on Windows systems, particularly since Windows XP was released, that has an anti-forensics effect...and much of it happens without any interaction from a user or administrator. Let's say that a bad guy gets into a system, uploads a couple of tools, runs them, copies data off of the system, then deletes everything and leaves. While the system is just sitting there, there are limited defrags going on, System Restore Points and Volume Shadow Copies are being created and removed, operating system and application updates are being installed, etc. It won't be long before, due to a lack of responsiveness, all you're left with is an entry in an index file pointing to a file name. Once that happens, and you're unable to determine how many records were exposed, you will very likely have to report and notify on ALL records. Ouch!

The point is that intruders very often don't have to use any fancy or sophisticated techniques to remain undetected on a system or within a network infrastructure. What tends to happen is that responders and analysts may have only so much time (8 hrs, 16 hrs, etc.) to spend on investigating the incident, so as long as the intruder uses a technique that takes just a bit longer than that to investigate, they're good. There's no real need to use counter- or anti-forensics techniques. Very often, though, it's not so much the actions of the intruder, but the inaction of the closest responders that have the greatest impact on an investigation.


ForensicsWiki: Anti-forensics
Adrien's Occult Computing presentation
CSOOnline article: The Rise of Anti-Forensics
Lance Mueller: Detecting Timestamp Changing Utilities

Thursday, June 03, 2010

Book Review

I recently had an opportunity to read through Handbook of Digital Forensics and Investigation (ISBN-10: 0123742676), edited by Eoghan Casey.

One of the first things you notice about the book...aside from the heft and the sense that you actually have something in your the fact that the book actually has 16 authors listed! 16! Many of the names listed are very prominent in the field of digital forensics, so one can only imagine the chore of not only keeping everyone on schedule, but getting the book into an overall flow.

Something I really like about the Handbook is that it starts off with basic, core principles. Digital forensics is one of those areas where folks usually want to dive right into the cool stuff (imaging systems, finding "evidence", running tools, etc.), but as with other similar areas, you're only going to get so far that way. These core principles are presented in an easy-to-understand manner, with little sidebars that illustrate or reinforce various points. Many of these principles are topics that are misunderstood, or talked about but not practiced in the wider community, and having luminaries such as Eoghan and Curtis and others present them in this sort of format brings them home again.

Some of those sidebars I mentioned, in my copy of the Handbook, are heavily highlighted, and the book itself has a number of notes in the margins of several chapters. One of the highlighted gems is in a "Practitioner's Tip" sidebar on page 23, where it says, "...apply the scientific method, seek peer review...". This one struck home, because too often in the community we see where statements are made that are assumptions, not based on supporting fact. IMHO, this is a result of not seeking peer review, not engaging with others, and not having someone who's able to ask the simple question, "why?"

Another aspect of the Handbook that I found very useful was that when a technique or something specific about some data was discussed, several times there were illustrations using not just commercial forensic analysis applications, but also free and open-source tools. For example, page 55 has an illustration of the use of the SQLite command line utility to examine the contents of the Skype main.db database file. My sense is that the overall approach of the Handbook is to move practitioners away from over-reliance on a specific commercial application, and toward an understanding that hey, there are other riches to be discovered if you disconnect the dongle and think for a minute.

I'll admit that I didn't spend as much time on some chapters as I did on others. Chapters 1 and 2 were very interesting for me, but chapter 3 got into electronic discovery. While this really isn't something I do a lot of, parts of the chapter (prioritizing systems, processing data from tapes, etc.) caught my eye. The pace picked back up again with chapter 4, Intrusion Investigation, particularly where there was discussion of "fact versus speculation" and "Reporting Audiences". In my experience, these are just two of the areas where any investigation can easily veer off course. Of course, without question, I spent a great deal of time in chapter 5, Windows Forensic Analysis! However, that doesn't mean that the other chapters don't have a great deal of valuable information.

The real value of the Handbook is that it did not focus on any one platform, or on any one commercial product. Analysis of Windows, as well as *nix/Linux, Mac, and embedded systems, is addressed, as is the network (including mobile networks). There was no singular focus on one commercial product, something you see in other books; instead, a combination of commercial, free, and open-source tools were used to illustrate various points. In one instance, three commercial applications were shown side-by-side to illustrate a point about deleted files. Even some of my own tools (RegRipper, etc.) were mentioned!

Overall, I think that the book is an excellent handbook, and it definitely has a prominent place on my bookshelf. No one of us knows everything there is to know, and I even found little gems in the chapter on Windows forensic analysis. For anyone who doesn't spend much time in that area, or analyzing Macs, those chapters will be a veritable goldmine...and you're very likely to find something new, even if you are very experienced in those areas. The Handbook is going to be something that I refer back to time and again, for a long time to come. Thanks to Eoghan, and thanks to all of the authors who put in the time and effort to produce this excellent work.

Richard Bejtlich's review @TaoSecurity


Tool Updates
Paraben recently sent out an email about an updated version of their P2eXplorer tool being available. This is the product that allows you to mount a variety of acquired images as physical disks for viewing.

ImDisk is available for 32- and 64-bit versions of Windows, including Windows 2008. I've got an idea for trying it out on Windows 7...we'll have to see how it works.

The TSK tools are up to version 3.1.2. Be sure to update your stuff.

There's a new issue of Hakin9 magazine available...it's free now, which is kind of cool.

Matt posted about how he and Adam used RegRipper to create WindowsRipper. It's an interesting project and I have to say, I really like it when folks find ways to achieve their needs and get the tools to meet their goals, rather than the other way around. Great job, guys...I'm looking forward to seeing where this goes. Let me know what I can do to help.

Speaking of RegRipper, it appears that RegRipper is included in WinFE! Brett Shavers set up the WinFE site (he's also the guy who set up the RegRipper site), and the list of tools includes RegRipper!

I was interviewed last night by the guys from the Securabit podcast (episode 58). Thanks, guys, for a great time..."hanging out" on Skype with a bunch of former sailors...truly a dream come true! ;-) I enjoy having the opportunity to talk nerdy with folks, as forensics is not just a job, it's an adventure!

Check out Chris Pogue's "Sniper Forensics" interview on the CyberJungle podcast. It's episode 141, and the hosts start mentioning SANS (as a lead-in to Chris's interview) at about 58:26 into the podcast. Chris talked about his sniper forensics, as well as the 4-step Alexiou Principle that he uses as a basis for analysis. Chris will be giving his "Sniper Forensics" presentation at the SANS Forensic Summit in July.

Wednesday, June 02, 2010


With a couple of upcoming presentations that address timelines (TSK/Open Source Conference, SANS Forensic Summit, etc.), I thought that it might be a good idea to revisit some of my thoughts and ideas behind timelines.

For me, timeline analysis is a very powerful tool, something that can be (and has been) used to great effect when it makes sense to do so. Not every investigation will benefit from timeline analysis, but for those cases where a timeline would be beneficial, it has proven to be an extremely valuable analysis step. However, creating a timeline isn't necessarily a point-and-click proposition.

I find creating the timelines to be extremely easy, based on the tools I've provided (in the Files section of the Win4n6 Yahoo group). For my uses, I prefer the modular nature of the tools, in part because I don't always have everything I'd like to have available to me. In instances where I'm doing some IR or triage work, I may only have selected files and a directory listing available, rather than access to a complete image. Or sometimes, I use the tools to create micro- or nano-timelines, using just the Event Logs, or just the Security Event Log...or even just specific event IDs from the Security Event Log. Having the raw data available and NOT being hemmed in by a commercial forensic analysis application means that I can use the tools to meet the goals of my analysis, not the other way around. See how that works? The goals of the analysis define what I look for, what I look at, and what tools I use...not the tool. The tool is just that...a tool. The tools I've provided are flexible enough to be able to be used to create a full timeline, or to create these smaller versions of timelines, which also serve a very important role in analysis.
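To make the five-field TLN format concrete: it's just a pipe-delimited line of time, source, host, user, and description. As a minimal sketch (the helper name here is my own invention, not one of the tools mentioned above), generating an entry from any data source looks like this in Python:

```python
def tln_line(epoch, source, host="", user="", desc=""):
    """Build one pipe-delimited TLN entry:
    Time|Source|Host|User|Description. Time is a Unix epoch;
    fields that aren't known are simply left blank."""
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, desc)

# A successful-logon event rendered as a TLN entry
print(tln_line(1277683200, "EVT", "WEBSRV01", "jdoe",
               "Security/528;Successful Logon"))
# → 1277683200|EVT|WEBSRV01|jdoe|Security/528;Successful Logon
```

Because every data source reduces to the same five fields, output from different parsers can simply be concatenated and sorted, which is what makes the modular approach work.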

As an example, evtparse.pl parses an Event Log (.evt, NOT .evtx) file (or all .evt files in a given directory) into the five-field TLN format I like to use. It's also a CLI tool, meaning that the output goes to STDOUT and needs to be redirected to a file. But that also means that I can look for specific event IDs, using something like this:

C:\tools>evtparse.pl -e secevent.evt -t | find "Security/528" > sec_528.txt

Another option is to simply run the tool across the entire set of Event Logs and then use other native tools to narrow down what I want:

C:\tools>evtparse.pl -d f:\case\evtfiles -t > evt_events.txt
C:\tools>type evt_events.txt | find "Security/528" > sec_528.txt

Pretty simple, pretty straightforward...and I wouldn't be able to do something like that with a GUI tool. I know that a CLI tool seems more cumbersome to some folks, but for others, it's immensely more flexible.
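The same event-ID filtering is easy to script directly, too. As an illustrative sketch (the function name is mine, and it assumes the pipe-delimited TLN layout with the epoch time in the first field), a few lines of Python build a sorted micro-timeline from a TLN file:

```python
def micro_timeline(path, needle):
    """Read a TLN file, keep only the entries containing the given
    substring (e.g. 'Security/528'), and sort them by the epoch
    time in the first pipe-delimited field."""
    keep = []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if needle in line:
                keep.append(line)
    # Sort numerically on the leading epoch, not lexicographically
    keep.sort(key=lambda ln: int(ln.split("|", 1)[0]))
    return keep
```

The point is the same as with the `find` one-liner: the raw TLN data is trivial to slice however the goals of the analysis require.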

One thing I'm not a huge proponent of when it comes to creating timelines is collecting all possible data automagically. I know that some folks have said that they would just like to be able to point a tool at an image (maybe a mounted image), push a button, and let it collect all of the information. From my perspective, that's too much...you don't really want everything that's available, because you'll end up with a huge timeline with way too much noise and not enough signal.

Here's an example...a while back, I was doing some work with respect to a SQL injection issue. I was relatively sure that the issue was SQLi, so I started going through the web server logs and found what I was looking for. SQLi analysis of web server logs is somewhat iterative: there are things that you search for initially, then you get source IP addresses of the requests, then you start to see things specific to the injection, and each time, you go back and search the logs again. In short, I'd narrowed down the entries to what I needed and was able to discard 99% of the requests, as they were normally occurring web traffic. I then created a timeline using the web server log excerpts and the file system metadata, and I had everything I needed to tell a complete story about what happened, when it started, and what was the source of the intrusion. In fact, looking at the timeline was a lot like looking at a .bash_history file, but with time stamps!
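The iterative narrowing described above can be sketched as a simple two-pass script. This is purely illustrative: the indicator strings are hypothetical examples (a real list grows as you review the requests), and it assumes a log format where the source IP is the first whitespace-delimited field:

```python
import re

# Hypothetical SQLi indicator strings; real analysis is iterative and
# the working list grows as suspicious requests are reviewed.
INDICATORS = ("xp_cmdshell", "declare%20", "cast(", "';--")

def triage(log_lines):
    """Pass 1: flag requests containing an indicator and record their
    source IPs. Pass 2: keep every request from those IPs, so the
    normally occurring traffic from other hosts can be set aside.
    Assumes the source IP is the first field of each log line."""
    ips = set()
    for line in log_lines:
        if any(i in line.lower() for i in INDICATORS):
            m = re.match(r"(\d{1,3}(?:\.\d{1,3}){3})\s", line)
            if m:
                ips.add(m.group(1))
    return [ln for ln in log_lines if ln.split(" ", 1)[0] in ips]
```

Each pass discards a large chunk of benign traffic, which is exactly what keeps the resulting timeline readable.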

Now, imagine what I would have if I'd run a tool that had run through the file system and collected everything possible. File system metadata, Event Logs, Registry key LastWrite times, .lnk files, all of the web server logs, etc...the amount of data would have been completely overwhelming, possibly to the point of hiding or obscuring what I was looking for, let alone crashing any tool for viewing the data.

The fact is, based on the goals of your analysis, you won't need all of that other data...more to the point, all of that data would have made your analysis and producing an answer much more difficult. I'd much rather take a reasoned approach to the data that I'm adding to a timeline, particularly as it can grow to be quite a bit of information. In one instance, I was looking at a system which had apparently been involved in an incident, but had not been responded to in some time (i.e., it had simply been left in service and used). I was looking for indications of tools being run, so the first thing I did was view the audit settings on the system via RegRipper, and found that auditing was not enabled. Then I created a micro-timeline of just the available Prefetch files, and that proved equally fruitless, as apparently the original .pf files had been deleted and lost.
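When .pf files are still present, even their file system metadata can be turned into a micro-timeline. A minimal sketch (the "PREF" source tag and function name are my own; it assumes an exported or mounted Prefetch directory, and relies on the observation that on XP a .pf file's last-modified time generally tracks the application's last run):

```python
import glob
import os

def prefetch_tln(pf_dir, host=""):
    """Emit TLN entries from the last-modified times of .pf files in
    an exported/mounted Prefetch directory, sorted by epoch. Even
    this purely file-system-level view can serve as a micro-timeline
    of program execution."""
    entries = []
    for pf in glob.glob(os.path.join(pf_dir, "*.pf")):
        epoch = int(os.path.getmtime(pf))
        entries.append("%d|PREF|%s||%s" % (epoch, host,
                                           os.path.basename(pf)))
    return sorted(entries, key=lambda e: int(e.split("|", 1)[0]))
```

Of course, as in the case above, if the original .pf files have been deleted, there's nothing for a sketch like this to work with.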

In another instance, I was looking for indications that someone had logged into a system. Reviewing the audit configuration, I saw that what was being audited had been set shortly after the system had been installed, and logins were not being audited. At that point, it didn't make sense to add the Security Event Log information to the timeline.

I do not see the benefit of running a tool like regtime.pl just to collect all of the Registry key LastWrite times. Again, I know that some folks feel that they need to have everything, but from a forensic analysis perspective, I don't subscribe to that line of thinking...I'm not saying it's wrong, I'm simply saying that I don't follow that line of reasoning. The reason for this is threefold: first, there's no real context to the data you're looking at if you just grab the Registry key LastWrite times. A key was modified...how does that help you without context? Second, you're going to get a lot of noise...particularly due to the lack of context. Looking at a bunch of key LastWrite times, how will you be able to tell if the modification was a result of user or system action? Third, there is a great deal of pertinent time-based information within the data of Registry values (MRU lists, UserAssist subkeys, etc.), all of which is missed if you only collect key LastWrite times.

A lot of what we do with respect to IR and DF work is about justifying our reasoning. Many times when I've done work, I've been asked, "why did you do that?"...not so much to second guess what I was doing, but to see if I had a valid reason for doing it. The days of backing up a truck to the loading dock of an organization and imaging everything are long past. There are engagements where you have a limited time to get answers, so some form of triage is necessary to justify which systems and/or data (i.e., network logs, packet captures, etc.) you're going to acquire. There are also times where if you grab too many of the wrong systems, you're going to be spending a great deal of time and effort justifying that to the customer when it comes to billing. The same is true when you scale down to analyzing a single system and creating a timeline. You can take the "put in everything and then figure it out" approach, or you can take an iterative approach, adding layers of data as they make sense to do so. Sometimes previewing that data prior to adding it, performing a little pre-analysis, can also be revealing.

Finally, don't get me wrong...I'm not saying that there's just one way to create a timeline...there isn't. I'm just sharing my thoughts and reasoning behind how I approach doing it, as this has been extremely successful for me through a number of incidents.

Cutaway's SysComboTimeline tools