Sunday, April 19, 2015

Micro- & Mini-Timelines

I don't always create a timeline of system activity...but sometimes when I do, I don't have all of the data from within the system image available.  Many times, I will create a mini-timeline because all I have available is a limited set of data sources, or even just a single data source.  I've been sent Event Logs (.evt files) or a couple of Windows Event Logs (.evtx files) and asked to answer specific questions, given some piece of information, such as an indicator or a time frame.  I've had other analysts send me Registry hive files and ask me to determine activity within a specific time frame, or associated with a specific event.

Mini, micro, and even nano-timelines can assist an analyst in answering questions and addressing analysis goals in an extremely timely and accurate manner.

There are times when I will have a full image of a system and only create a mini- or nano-timeline, just to see if there are specific indicators or artifacts available within the image.  This helps me triage systems and prioritize my analysis based upon the goals that I've been given or developed.  For example, if the question before me is to determine whether someone accessed a system via RDP, I really only need a very limited number of data sources to answer that question, or even just to determine if it can be answered.  I was once asked to answer that question for a Windows XP system, and all I needed was the System Registry hive file...I was able to show that Terminal Services was not enabled on the system, and that it hadn't been enabled previously.  Again, the question I was asked (my analysis goal) was, "...did someone use RDP to access this system remotely?", and I was able to provide the answer to that question (or, perhaps more specifically, to show that the question could be answered).
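
To give a sense of just how little data that question requires, here's a minimal Python sketch of the check; it assumes the python-registry module and an exported System hive, the file name passed on the command line is just a placeholder, and it's only an illustration of the idea, not the tool I used at the time.

# rdp_check.py - minimal sketch: was RDP (Terminal Services) enabled?
# Assumes the python-registry module and an exported System hive file.
import sys
from Registry import Registry

def main(hive_path):
    reg = Registry.Registry(hive_path)

    # Determine which ControlSet is current from the Select key
    current = reg.open("Select").value("Current").value()
    ts_path = "ControlSet%03d\\Control\\Terminal Server" % current

    try:
        ts_key = reg.open(ts_path)
        # fDenyTSConnections == 1 means inbound RDP connections are denied
        deny = ts_key.value("fDenyTSConnections").value()
        print("fDenyTSConnections = %d (%s)" % (
            deny, "RDP denied" if deny == 1 else "RDP allowed"))
        print("Key LastWrite time : %s" % ts_key.timestamp())
    except Exception as e:
        print("Could not read the Terminal Server key/value: %s" % e)

if __name__ == "__main__":
    main(sys.argv[1])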

Sometimes, I will create a micro-timeline from specific data sources simply to see if there are indicators that pertain to the time frame that I've been asked to investigate.  One example that comes to mind is the USN change journal...I'll extract the file from an image and parse it, first to see if it covers the time frame I'm interested in.  From there, I will either extract specific events from that output to add to my overall timeline, or I'll just add all of the data to the events file so that it's included in the timeline.  There are times when I won't want all of the data, as having too much of it can add a significant amount of noise to the timeline, drowning out the signal.
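
As an illustration of the "does it cover my time frame" check, here's a quick Python sketch that pulls events within a given window out of a pipe-delimited events file; it assumes a five-field TLN layout (time|source|system|user|description) with a Unix epoch value in the first field, so adjust it to whatever your parser actually produces.

# tln_timeslice.py - minimal sketch: extract events within a time frame
# from a pipe-delimited events file whose first field is a Unix epoch value.
import sys
from datetime import datetime, timezone

def to_epoch(s):
    # "YYYY-MM-DD HH:MM:SS" (treated as UTC) -> Unix epoch
    return int(datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
               .replace(tzinfo=timezone.utc).timestamp())

def main(events_file, start, end):
    lo, hi = to_epoch(start), to_epoch(end)
    with open(events_file) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            try:
                t = int(fields[0])
            except (ValueError, IndexError):
                continue
            if lo <= t <= hi:
                sys.stdout.write(line)

if __name__ == "__main__":
    # e.g.: python tln_timeslice.py usn_events.txt "2015-04-01 00:00:00" "2015-04-03 00:00:00"
    main(sys.argv[1], sys.argv[2], sys.argv[3])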

There are times when I simply don't need all of the available data.  For example, consider a repurposed laptop (provided to an employee, then later provided to another employee), or a system with a number (15, 25, or more) of user profiles; I often don't need information from every user profile on the system, and including it will simply make the timeline file larger and more cumbersome to open and analyze.

I've also created what I refer to as nano-timelines.  That is, I'll parse a single Windows Event Log (.evtx) file, filter it for a specific event source/ID pair, and then create a timeline from just those events so that I can determine if there's something there I can use.

For example, let's say I'm interested in "Microsoft-Windows-Security-Auditing/5156" events; I'd start by running the Security.evtx file through wevtx.bat:

C:\tools>wevtx.bat F:\data\evtx\security.evtx F:\data\sec_events.txt

Now that I have the events from the Security Event Log in a text file, I can parse out just the events I'm interested in:

C:\tools>type F:\data\sec_events.txt | find "Microsoft-Windows-Security-Auditing/5156" > F:\data\sec_5156_events.txt

Okay, now I have an events file that contains just the event records I'm interested in; time to create the timeline:

C:\tools>parse -f F:\data\sec_5156_events.txt > F:\data\sec_5156_tln.txt

Now, I can open the timeline file, see the date range that those specific events cover, and determine which events occurred at a specific time.  This particular event can help me find indications of malware (RAT, Trojan, etc.), and I can search the timeline for a specific time frame, correlating outbound connections from the system with firewall or IDS logs.  Because I still have the events file, I can write a quick script that parses its contents and provides statistics based on specific fields, such as the destination IP addresses of the connections.
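
A quick sketch of that sort of script might look like the following Python; it simply tallies anything that looks like an IPv4 address in the description field of the 5156 events file, since pulling out the destination address specifically depends on how your parser formats the event strings.

# conn_stats.py - minimal sketch: frequency count of IPv4 addresses found
# in the last (description) field of a pipe-delimited 5156 events file.
import re
import sys
from collections import Counter

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def main(events_file):
    counts = Counter()
    with open(events_file) as f:
        for line in f:
            desc = line.rstrip("\n").split("|")[-1]
            counts.update(IP_RE.findall(desc))

    # Most frequently seen addresses first
    for ip, n in counts.most_common():
        print("%-15s %d" % (ip, n))

if __name__ == "__main__":
    # e.g.: python conn_stats.py F:\data\sec_5156_events.txt
    main(sys.argv[1])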

What's great about the above process is that an analyst working on an engagement can archive the Windows Event Log file in question, send it to me, and I can turn around an answer in a matter of minutes.  Working in parallel, I can assist an analyst who is neck-deep in an IR engagement by providing solid answers to concrete questions, and do so in a timely manner.  My point is that we don't always need a full system image to answer some very important questions during an engagement; sometimes, a well-stated, well-thought-out question can serve as an analysis goal, which points to a very specific set of data sources within a system, and the question of whether that system is in scope can be answered very quickly.

Analysis Process
Regardless of the size of the timeline (full-, mini-, micro-), the process I follow during timeline analysis is best described as iterative.  I'll use initial indicators...time stamps, file names/paths, specific Registry keys...to determine where to start.  From there, I'll search "nearby" within the timeline, and look for other indicators.

I mentioned the tool wevtx.bat earlier in this post; something I really like about that tool is that it helps provide me with indicators to search for during analysis.  It does this by mapping various event records to tags that are easy to understand, remember, and search for, through the use of the eventmap.txt file, which is nothing more than a simple text file that provides mappings of events to tags.  I won't go into detail in this blog post, as the file can be opened and viewed in Notepad.  What I like to do is provide references as to how each tag was "developed"; that is, how I decided upon that particular tag.  For example, how did I decide that a Microsoft-Windows-TerminalServices-LocalSessionManager record with event ID 22 should get the tag "Shell Start"?  I found it here.
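
Conceptually, the tagging works along these lines; the Python below is a hypothetical illustration with a hard-coded mapping, not the actual format of eventmap.txt, so open the file itself and follow its layout rather than this one.

# tagger.py - illustration of the tagging idea: map "source/ID" pairs to
# short, memorable tags and prepend the tag to the event description.
# The EVENT_TAGS mapping here is hypothetical, not taken from eventmap.txt.
import sys

EVENT_TAGS = {
    "Microsoft-Windows-TerminalServices-LocalSessionManager/22": "[Shell Start]",
    "Microsoft-Windows-Security-Auditing/5156": "[Connection]",
}

def main(events_file):
    with open(events_file) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            for source_id, tag in EVENT_TAGS.items():
                if source_id in line:
                    # Prepend the tag to the description (last) field
                    fields[-1] = tag + " " + fields[-1]
                    break
            print("|".join(fields))

if __name__ == "__main__":
    main(sys.argv[1])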

This is very useful, because my timeline now has tags for various events, and I know what to look for, rather than having to memorize a bunch of event sources and IDs...which, with the advent of Vista, and moving into Windows 7 and beyond, has become even more arduous due to the sheer number of Event Logs now incorporated into the systems.

So, essentially, eventmap.txt serves as a list of indicators, which I can use to search a timeline, based upon the goals of my exam.  For example, if I'm interested in locating indications of a remote access Trojan (RAT), I might search for the "[MalDetect]" tag to see if an anti-virus application on the system picked up early attempts to install malware of some kind (I should note that this has been pretty helpful for me).

Once I find something related to the goals of my exam, I can then search "nearby" in the timeline, and possibly develop additional indicators.  I might be looking for indications of a RAT, and while tracking that down, find that the infection vector was lateral movement (a Scheduled Task).  From there, I'd look for at least one Security-Auditing/4624 type 3 event, indicating a network-based logon to access resources on the system, as this would help me determine the source system of the lateral movement.  The great thing about this is that this sort of activity can be determined from just three Windows Event Log files and two Registry hives; you can even throw in the MFT for good measure, although it's not absolutely required.  Depending on the time frame of response...that is, was the malicious event detected and the system responded to in a relatively short time (an hour or two), or is this the result of a victim notification about something that happened months ago...I may include the USN change journal contents, as well.
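
The "search nearby" step is easy to script, as well; the following Python sketch prints every event within a window around a pivot time, again assuming an events file with Unix epoch values in the first field (the pivot time and window size on the command line are just example inputs).

# nearby.py - minimal sketch: print events within a window around a pivot
# time, from a pipe-delimited events file with Unix epoch times in field 1.
import sys

def main(events_file, pivot_epoch, window_secs=3600):
    pivot, win = int(pivot_epoch), int(window_secs)
    with open(events_file) as f:
        for line in f:
            try:
                t = int(line.split("|", 1)[0])
            except ValueError:
                continue
            if abs(t - pivot) <= win:
                sys.stdout.write(line)

if __name__ == "__main__":
    # e.g.: python nearby.py F:\data\events.txt 1428247200 1800
    main(*sys.argv[1:])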

My efforts are most often dedicated toward finding clusters of multiple indicators of specific activity, as this provides not only more context, but also a greater level of confidence in the information I'm presenting.  Understanding the content of these indicator clusters is extremely helpful, particularly when anti-forensic actions have been employed, even if unknowingly; for example, the Windows Event Logs may have "rolled over", or a malware installer package may include the functionality to "time stomp" the malware files.

4 comments:

Mari DeGrazia said...

Harlan,

To your point about addressing analysis goals in an extremely timely and accurate manner - many times I will export out some of these files (evt, reg file, mft etc) before I kick off an image.

This allows me to begin analysis immediately by generating a mini-timeline. Using this method, it's not uncommon to have some answers before the image has even completed.

H. Carvey said...

Mari,

I've been doing the exact same thing since I was at ISS...when verifying an image (or preparing to image a hard drive), I'll extract data sources and create a timeline, while I'm creating a copy for analysis.

The data sources I collect all go back to the goals of the exam, though.

Corey Harrell said...

Harlan,

Another nice post. Doing timeline analysis in this manner is not only a lot faster but more effective. I wanted to add one more use case about why this approach is better than timelining everything on a system. When interacting with live systems trying to create a timeline across an entire system is not feasible. It takes too long and you can get answers faster by selecting certain artifacts to include in a timeline.

H. Carvey said...

Corey,

You're absolutely right, this is great for triaging live systems. You limit what you need to get, and you can even run tools directly against systems rather than first extracting files.