
Wednesday, June 02, 2010

Timelines

With a couple of upcoming presentations that address timelines (TSK/Open Source Conference, SANS Forensic Summit, etc.), I thought that it might be a good idea to revisit some of my thoughts and ideas behind timelines.

For me, timeline analysis is a very powerful tool, something that can be (and has been) used to great effect when it makes sense to do so. Not every investigation is going to benefit from a timeline, but for those cases where one would be beneficial, timelines have proven to be an extremely valuable analysis step. However, creating a timeline isn't necessarily a point-and-click proposition.

I find creating the timelines to be extremely easy, based on the tools I've provided (in the Files section of the Win4n6 Yahoo group). For my uses, I prefer the modular nature of the tools, in part because I don't always have everything I'd like to have available to me. In instances where I'm doing some IR or triage work, I may only have selected files and a directory listing available, rather than access to a complete image. Other times, I use the tools to create micro- or nano-timelines, using just the Event Logs, or just the Security Event Log...or even just specific event IDs from the Security Event Log. Having the raw data available and NOT being hemmed in by a commercial forensic analysis application means that I can use the tools to meet the goals of my analysis, not the other way around. See how that works? The goals of the analysis define what I look for, what I look at, and what tools I use...not the tool. The tool is just that...a tool. The tools I've provided are flexible enough to be used to create a full timeline, or to create these smaller timelines, which also serve a very important role in analysis.

As an example, evtparse.pl parses an Event Log (.evt, NOT .evtx) file (or all .evt files in a given directory) into the five-field TLN format I like to use. Evtparse.pl is also a CLI tool, meaning that the output goes to STDOUT and needs to be redirected to a file. But that also means that I can look for specific event IDs, using something like this:

C:\tools> evtparse.pl -e secevent.evt -t | find "Security/528" > sec_528.txt
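
Each line of the output is in the five-field, pipe-delimited TLN format...time (as a Unix epoch), source, system, user, and description. The exact contents of the description field depend on the version of the tool, but an entry for a successful logon (event ID 528 on XP/2003) looks something like this (the values here are made up):

1275470122|EVT|SERVER01|jdoe|Security/528;...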

Another option is to simply run the tool across the entire set of Event Logs and then use other native tools to narrow down what I want:

C:\tools>evtparse.pl -d f:\case\evtfiles -t > evt_events.txt
C:\tools>type evt_events.txt | find "Security/528" > sec_528.txt

Pretty simple, pretty straightforward...and I wouldn't be able to do something like that with a GUI tool. I know that a CLI tool seems more cumbersome to some folks, but for others, it's immensely more flexible.

One thing I'm not a huge proponent of when it comes to creating timelines is collecting all possible data automagically. I know that some folks have said that they would just like to be able to point a tool at an image (maybe a mounted image), push a button, and let it collect all of the information. From my perspective, that's simply too much...you don't really want everything that's available, because you'll end up with a huge timeline with way too much noise and not enough signal.

Here's an example...a bit ago, I was doing some work with respect to a SQL injection issue. I was relatively sure that the issue was SQLi, so I started going through the web server logs and found what I was looking for. SQLi analysis of web server logs is somewhat iterative, as there are things that you search for initially, then you get source IP addresses of the requests, then you start to see things specific to the injection, and each time, you go back and search the logs again. In short, I'd narrowed down the entries to what I needed and was able to discard 99% of the requests, as they were normally occurring web traffic. I then created a timeline using the web server log excerpts and the file system metadata, and I had everything I needed to tell a complete story about what happened, when it started, and what was the source of the intrusion. In fact, looking at the timeline was a lot like looking at a .bash_history file, but with time stamps!
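
Just as a sketch of that iterative process (the log file names, search string, and IP address below are made up purely for illustration), the first couple of passes might look something like this:

C:\case\logs>type ex*.log | find /i "declare" > pass1.txt
C:\case\logs>type ex*.log | find "203.0.113.45" > pass2.txt

The first pass pulls the requests containing a suspicious string; from those you get the source IP addresses, and then you go back and pull every request from those addresses to see the full scope of the activity.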

Now, imagine what I would have if I'd run a tool that had gone through the file system and collected everything possible. File system metadata, Event Logs, Registry key LastWrite times, .lnk files, all of the web server logs, etc...the amount of data would have been completely overwhelming, possibly to the point of hiding or obscuring what I was looking for, to say nothing of possibly crashing whatever tool I used to view the data.

The fact is, based on the goals of your analysis, you won't need all of that other data...more to the point, all of that data would have made your analysis, and producing an answer, much more difficult. I'd much rather take a reasoned approach to the data that I'm adding to a timeline, particularly as it can grow to be quite a bit of information. In one instance, I was looking at a system which had apparently been involved in an incident, but had not been responded to in some time (i.e., it had simply been left in service and used). I was looking for indications of tools being run, so the first thing I did was check the audit settings on the system via RegRipper, and found that auditing was not enabled. I then created a micro-timeline of just the available Prefetch files, and that proved equally fruitless, as the original .pf files had apparently been deleted and lost.
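
For that first check, RegRipper's rip.pl does the work; a sketch of what that looks like (the hive path below is made up, and the plugin name may differ between RegRipper versions):

C:\tools>rip.pl -r f:\case\windows\system32\config\security -p auditpol

If the output shows that auditing isn't enabled, there's not much point in adding Security Event Log data to the timeline.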

In another instance, I was looking for indications that someone had logged into a system. Reviewing the audit configuration, I saw that what was being audited had been set shortly after the system had been installed, and logins were not being audited. At that point, it didn't make sense to add the Security Event Log information to the timeline.

I do not see the benefit of running regtime.pl to collect all of the Registry key LastWrite times. Again, I know that some folks feel that they need to have everything, but from a forensic analysis perspective, I don't subscribe to that line of thinking...I'm not saying it's wrong, I'm simply saying that I don't follow that line of reasoning. The reason for this is threefold: first, there's no real context to the data you're looking at if you just grab the Registry key LastWrite times. A key was modified...how does that help you without context? Second, you're going to get a lot of noise...particularly due to the lack of context. Looking at a bunch of key LastWrite times, how will you be able to tell if the modification was a result of user or system action? Third, there is a great deal of pertinent time-based information within the data of Registry values (MRU lists, UserAssist subkeys, etc.), all of which is missed if you just run a tool like regtime.pl.
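
As a quick illustration of what gets missed (the NTUSER.DAT path below is made up), RegRipper's userassist plugin pulls the last-run times and run counts embedded in the UserAssist value data:

C:\tools>rip.pl -r "f:\case\Documents and Settings\jdoe\NTUSER.DAT" -p userassist > ua.txt

Those time stamps tell you when the user actually launched a program...far more useful in a timeline than the bare LastWrite time on the key.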

A lot of what we do with respect to IR and DF work is about justifying our reasoning. Many times when I've done work, I've been asked, "Why did you do that?"...not so much to second-guess what I was doing, but to see if I had a valid reason for doing it. The days of backing a truck up to the loading dock of an organization and imaging everything are long past. There are engagements where you have a limited time to get answers, so some form of triage is necessary to justify which systems and/or data (i.e., network logs, packet captures, etc.) you're going to acquire. There are also times when, if you grab too many of the wrong systems, you're going to spend a great deal of time and effort justifying that to the customer when it comes to billing. The same is true when you scale down to analyzing a single system and creating a timeline. You can take the "put in everything and then figure it out" approach, or you can take an iterative approach, adding layers of data as it makes sense to do so. Sometimes previewing that data prior to adding it, performing a little pre-analysis, can also be revealing.

Finally, don't get me wrong...I'm not saying that there's just one way to create a timeline...there isn't. I'm just sharing my thoughts and reasoning behind how I approach doing it, as this has been extremely successful for me through a number of incidents.

Resources
log2timeline
Cutaway's SysComboTimeline tools

13 comments:

  1. I agree in principle but my approach is nearly the polar opposite. For me, finding all relevant timeline data can be cumbersome...certainly time-consuming. So, I prefer to combine the output of tools such as fls and log2timeline, create a full timeline (mactime), and then use the power of grep, regexps, etc. to concentrate on what I want.

    In your SQLi example, I would get all timeline data I could at once and then grep for SELECT (to start). Say, then, that a SELECT statement not only shows up in the web and SQL logs, but a particular query also threw an exception in the Security or other Event Logs. I'd see all of them...less work, more context (imho).

    And there's my favorite, "grep -v", which is a nice iterative way to remove noise. Take the large timeline I mentioned creating. A simple "grep -v EXIF timeline | less" will remove all EXIF timeline events. I then look at the filtered timeline, identify noise (or good data), filter, repeat.

    So, I agree on the importance of timelines and always do them but I prefer to get all the parsing done at once and then quickly filter. I have various howtos on these subjects at http://viaforensics.com/tag/howto/ for anyone interested.
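
    As a rough sketch of that workflow (the image and file names are made up, and I'd merge the log2timeline output into the same bodyfile before running mactime):

    fls -r -m C: image.dd > bodyfile
    mactime -b bodyfile -d > timeline.csv
    grep -v EXIF timeline.csv | less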

    Andrew Hoog
    viaForensics

  2. Andrew,

    Thanks for the comment!

    I agree in principle but my approach is nearly the polar opposite.

    Understood...like I said, I'm not saying that my way is the right way, I'm simply saying that this is how I look at it.

    In your SQLi example, I would get all timeline data I could at once and then grep for SELECT (to start).

    In this case, the SQLi occurred without using "SELECT" in a way you could find via grep. In fact, a search across all web server logs for "xp_cmdshell" turned up results associated with regular, contracted scans.

    And there's my favorite, "grep -v", which is a nice iterative way to remove noise.

    Agreed. I use that to remove things like OS and application update activity from the file system metadata.

    Still, I don't see the need to include a great deal of the information that can be added to a timeline by default. Again, it all goes back to the goals of your investigation.

    But then, this is just one person's opinion...

  3. I both agree and disagree... I usually take everything into the timeline, much like Andrew describes (although I usually do not include the last write times of all registry entries), but for some types of investigation I find that it is better to include just a portion of the information in the timeline; it really depends on what you are analyzing.

    Despite this general approach, I do sometimes get cases where I believe it is beneficial to include only a few files, or only a certain type of file, in the timeline, thus making it more efficient. This is one of the reasons I decided to add an option to the upcoming release of log2timeline to indicate either which modules (parsers) you would like timescanner to use, or which you would like to exclude, in a given timeline extraction. That way you could point the tool to the directory where you either have the mounted image or the files that you've extracted using other methods, and only look for certain types of files (whether that be IIS logs, Event Logs or whatever fits your type of investigation). That way you have the choice of whether you want to include everything or just some specific information in the timeline, thus making it easier to analyse.

    Again, it really depends on each individual case; this is something that the analyst has to decide for himself when starting the investigation...is it appropriate to create a micro-timeline, or do I need a more complete one that contains everything? Often it can be beneficial to include everything and then use grep/regular expressions/... to find the relevant stuff, while in other cases that would be too much and could make things more difficult.
    I believe that using timeline analysis you can reduce investigation time, often considerably, but you have to use it wisely and know what you are doing (as with everything else). So for some cases it makes perfect sense to add everything and then sort things out afterwards, while in others it is preferable to include only selective information... the problem with selective information is that you really need to know what you are doing, so that you are truly picking out the correct information; otherwise you risk forgetting to add some files that could be of great value...

  4. Kristinn,

    Excellent input/comments from both you and Andrew!

    My thoughts on this are that while (as you point out) timelines can reduce investigative/analysis time, if *everything* and the kitchen sink is thrown into that timeline, then you run the risk of increasing the investigative time, or, in fact, of the analyst completely missing the pertinent items in the noise.

    The unspoken factor here, of course, is a thorough knowledge of the system(s) you're analyzing. Understand your goals, and understand the system...these are key. The rest falls into place.

  5. I also prefer, most of the time, to do a complete timeline (as Kristinn said, without including the LastWrite times of all registry entries) and then filter it via the CLI. Although I agree with Harlan that this can increase the time needed to analyse it, I think this is the most common scenario (as far as I've seen), also because the analyst may be worried/afraid of missing something, whereas with a full timeline you've got everything.

    My 0.02$

    Pasquale

  6. ...the analyst may be worried/afraid...

    See, this is what I don't understand, despite the fact that I hear it all the time. This is a perfect example of how engaging as part of a community can be so valuable...it's like getting training without having to pay thousands of dollars.

    For example, all someone would have to do is say something like, "I have an XP SP 3 system for which the user is claiming the Trojan Defense...what are some of the things I should look for?" The resulting discussion becomes a sort of checklist.

    The sad fact is that a great many folks feel that for some reason, they have to give away specifics about the case (they don't...heck, *I* don't want to know), so they don't post. Or, they don't understand enough about Windows to see why providing the version (XP vs 2003 vs Vista vs Win7) is important, and they get offended when someone asks.

    No one can know everything, but at the same time, no one of us is as smart as all of us together.

  7. I totally agree with you!!!

  8. I have tried running the command specified in the article, but it says "File not found".

    evtparse.pl -e secevent.evt -t | find "Security/528" > sec_528.txt

    I am running the evtparse.pl file under Windows XP after installing ActivePerl 5.14.2

  9. Is the secevent.evt file in the same directory as the script?

    If not, you need to provide the full path to the file itself.

  10. I tried the following commands:

    C:\Perl\bin>evtparse.pl -e SecEvent.Evt -t | find "Security/528" > sec_528.txt
    File not found.

    C:\Perl\bin>evtparse.pl -e SecEvent.Evt -t | find "Security/528" > sec_528.txt
    File not found.

    C:\Perl\bin>evtparse.pl -e c:\perl\bin\secevent.evt -t | find "Security/528" > sec_528.txt
    File not found.

    C:\Perl\bin>evtparse.pl -e "c:\perl\bin\secevent.evt" -t | find "Security/528" >
    sec_528.txt

  11. I'm sorry, but that doesn't really help much.

    Where is the secevent.evt file that you're trying to parse actually located?

  12. Sorry, I'm not sure what to tell you...it should be working.

    Could you go to that directory via the command prompt, and type 'dir', and post or email me the results? You can redirect the output to a file and send me that.

    Thanks.
