
Wednesday, June 06, 2012

Training, and Host-Based Analysis

I posted recently on the topic of host-based analysis, and just yesterday, I finished teaching the first rendition of the ASI Intro to Windows Forensic Analysis course, which focuses on host-based analysis.  I'm really looking forward to teaching the Timeline Analysis course in a week and a half.  As with analysis engagements, where I tend to take lessons learned from recent work and apply them to future engagements, the same is true of the training I provide.  I've spent a considerable amount of time over the past 12 or so hours going back over the course I just taught, looking for ways to improve not only the next iteration of that course, but also the next course that I teach.

Both courses are two days long, and I try to include a good deal of hands-on work and exercises.  As such, folks planning to attend should have some experience performing forensic analysis, and should be comfortable (not expert, just comfortable) working at the command line.  I've found that a great way to demonstrate the value of a particular artifact is to provide tools and exercises built around that artifact.  To that end, I provide a number of my own tools, which I've updated with new functionality, as well as various sample files so that folks attending the course can practice using the tools.

As an example, I provide the tools discussed here, including pref.exe.  I also provide sample Prefetch files extracted from an XP system, as well as others extracted from a Vista system.  In the Intro course, we walk through each artifact and cover several means of extracting the data of value from it.  In some cases, I only talk about a tool and provide screenshots, as due to license agreements, I can't distribute copies of the tool itself.
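
To give a sense of what a tool like pref.exe pulls from these files, here's a minimal Python sketch of reading the basic Prefetch header fields; the offsets for the XP (version 0x11) and Vista (version 0x17) formats are publicly documented, but this is an illustration of the idea, not a stand-in for the tool:

    import struct
    import sys
    from datetime import datetime, timedelta

    # (last run FILETIME offset, run count offset) per Prefetch format version
    OFFSETS = {0x11: (0x78, 0x90),   # XP/2003
               0x17: (0x80, 0x98)}   # Vista/Win7

    def filetime_to_dt(ft):
        # FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC
        return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

    def parse_prefetch(path):
        with open(path, "rb") as f:
            data = f.read()
        version = struct.unpack_from("<I", data, 0)[0]
        if data[4:8] != b"SCCA" or version not in OFFSETS:
            raise ValueError("unsupported or invalid Prefetch file")
        # Executable name: UTF-16LE string at offset 0x10
        exe = data[0x10:0x10 + 60].decode("utf-16-le").split("\x00")[0]
        last_off, count_off = OFFSETS[version]
        last_run = filetime_to_dt(struct.unpack_from("<Q", data, last_off)[0])
        run_count = struct.unpack_from("<I", data, count_off)[0]
        return exe, last_run, run_count

    if __name__ == "__main__":
        exe, last_run, count = parse_prefetch(sys.argv[1])
        print("%s  last run: %s  run count: %d" % (exe, last_run, count))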

Both courses start off with the core concepts; the why of what we're doing, before we step off into the how.  In the Intro course, I spend a good deal of time talking about using multiple data sources to demonstrate that something occurred.  In our first exercise scenario, we look at how to determine the active users on a system, and discuss the various sources we can look to for data; while some of these data sources may appear to be redundant, we want to look to them in order to validate our other data, as well as to provide a means of analysis in the face of counter- or anti-forensics activities, no matter how unintentional those activities may be.  The reason for this is two-fold.  First, some data sources are more easily mutable than others, and may be changed either over the course of time while the system is active, or changed intentionally; I've had exams where one of the initial steps taken by administrators, prior to contacting our IR team, included removing user accounts.  Second, redundant data sources address those times when the one source we usually rely on isn't available, or isn't something we necessarily trust.
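
As a concrete illustration of checking redundant sources against one another, here's a hypothetical Python sketch (it assumes Willi Ballenthin's python-registry module, and the hive and mount paths are made up) that cross-references the profile directories found in an image against the ProfileList key in the SOFTWARE hive; a profile directory with no ProfileList entry, or the reverse, is worth a closer look:

    import os
    import sys
    from Registry import Registry  # python-registry module (assumed)

    def profilelist_entries(software_hive):
        # Map profile directory name -> SID from the ProfileList key
        reg = Registry.Registry(software_hive)
        key = reg.open("Microsoft\\Windows NT\\CurrentVersion\\ProfileList")
        entries = {}
        for sk in key.subkeys():
            try:
                path = sk.value("ProfileImagePath").value()
            except Registry.RegistryValueNotFoundException:
                continue
            entries[path.split("\\")[-1].lower()] = sk.name()
        return entries

    def compare(software_hive, users_dir):
        reg_users = profilelist_entries(software_hive)
        disk_users = set(d.lower() for d in os.listdir(users_dir)
                         if os.path.isdir(os.path.join(users_dir, d)))
        for name in sorted(disk_users - set(reg_users)):
            print("[!] profile dir with no ProfileList entry: %s" % name)
        for name in sorted(set(reg_users) - disk_users):
            print("[!] ProfileList entry with no profile dir: %s (%s)"
                  % (name, reg_users[name]))

    if __name__ == "__main__":
        # e.g.: compare("SOFTWARE", "/mnt/image/Users")
        compare(sys.argv[1], sys.argv[2])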

Another area we look at in the Intro course is indications of program execution.  We look to a number of different locations within a Windows system for indications of programs having been executed (either simply executed on the system, or executions that can be associated with a user), and to do so, we use a number of RegRipper plugins that you won't find anywhere else; they are provided only in conjunction with the courses.  There's one that parses the AppCompatCache value data (and checks for any program paths with 'temp' in the path), another that displays the information in TLN format for inclusion in a timeline, as well as others that query, parse, and display other Registry data relevant to indications of program execution, and other categories of activity.
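
For reference, TLN is a simple five-field, pipe-delimited format: time (Unix epoch)|source|host|user|description.  As a rough sketch of the idea (this is not the plugin itself, and the description text is illustrative), emitting TLN lines from already-parsed AppCompatCache entries and flagging 'temp' paths might look like:

    from datetime import datetime, timezone

    def tln_lines(entries, host="HOSTNAME", source="AppCompatCache"):
        # entries: (path, last-modified datetime in UTC) pairs from a
        # shimcache parser; AppCompatCache entries aren't attributable to
        # a user, so the user field is left empty.
        for path, mtime in entries:
            epoch = int(mtime.replace(tzinfo=timezone.utc).timestamp())
            flag = " [path contains 'temp']" if "temp" in path.lower() else ""
            yield "%d|%s|%s||M... %s%s" % (epoch, source, host, path, flag)

    # Example with made-up data:
    for line in tln_lines([(r"C:\Users\bob\AppData\Local\Temp\a.exe",
                            datetime(2012, 5, 30, 14, 22, 7))]):
        print(line)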

We also discuss Volume Shadow Copies, as well as means of accessing them when analyzing an acquired image.  In the course, we focus on the VHD method for accessing VSCs, but we discuss other means as well.  I stress throughout the course that the purpose of accessing various artifacts in this manner is to let analysts see what's available, so that they're better able to select the appropriate tool for the job.  For example, if you're accessing VSCs a great deal, it may be valuable to use techniques such as those Corey has blogged about, to use something like ShadowKit, or to take advantage of the VSC functionality included in TechPathways' ProDiscover.
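
As a sketch of the VHD method, once an image has been converted to a VHD and attached read-only on a Windows 7 analysis system (say, as the F: volume), listing the available VSCs and linking to one looks something like the following; the drive letter and shadow copy number here are illustrative, and note that the trailing backslash on the GLOBALROOT path is required for the link to resolve:

    C:\> vssadmin list shadows /for=F:
    C:\> mklink /D C:\vsc1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\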

One of the things I really like about these training courses is engaging with others and discussing how to go about performing host-based analysis, driven by identified goals.

3 comments:

  1. Hi Harlan,

    Thanks for sharing your teaching process. I found it interesting that you start off with the Why and then the How.
    Back in my Air Cadet days (like Scouts for young Air Force wannabes), we were taught to give a briefing in the following order:

    What = This is what we are doing
    How = This is how we are going to achieve it
    Why = This is why

    As a newbie in learning mode, I find it easy to get caught up in the How (to find artefacts) and not pay enough attention to the What/Why (ie the overall objective).
    I have to remember that the objective is not to find as many artefacts as possible but to prove/disprove a hypothesis (ie the Why). Thanks for reminding me!

  2. Cheeky,

    Thanks for the comment.

    I guess the "what" part was understood, as that's why the folks were sitting in the class in the first place. ;-)

    Throughout the course, I focus on the goals of the exam and give examples of where this has gone awry. On the second day, we have scenarios that we walk through, each of which starts with a goal.

    Interestingly enough, we did have one question during the course..."what if the customer doesn't know what the goals are?" This is something that, over the years, I've heard asked a great deal in classroom environments, as well as on-site. The fact of the matter is, if someone (a customer) has called you, or someone has decided to put forth the effort to take a system offline, then there's a reason for that, whether they can express/articulate it or not. As DFIR analysts, it's incumbent upon us to help the customer translate their needs into something that we can address and deliver on.

    I've known folks who would simply ask a customer once what they were interested in, and often leave with "find bad stuff" but no definition of what that meant.  In one case, an analyst found a directory full of hacker tools and wrote the report...only to have the customer say that the employee's job was to test the security of their web site, and the tools were part of that job.

    Asking a customer questions in order to clarify what they're asking for, and turning what they want into discrete goals that we can deliver on, does not mean that you think the customer is lying or stupid (yes, I've heard both of those as excuses for not asking a customer questions).  It simply means that you want to work with them to better understand their needs, so that you can deliver on them.

  3. Anonymous (9:30 AM)

    Greetings,

    Recently took a Forensic / IR class myself, during which we discussed the topic of Volume Shadow Copies in depth.

    It came up that there is a new Linux utility (released 31 May 2012) by Joachim Metz for accessing VSC information and mounting the copies themselves for performing timeline analysis, etc., which I hadn't seen discussed on your blog before.

    Indeed, I downloaded the tarball last night (configure / make / ldconfig, etc.) and was able to access and mount the VSCs from a Win7 image as if they were themselves distinct NTFS partitions; the steps I followed are sketched below.

    Utility / Library - libvshadow
    URL: http://code.google.com/p/libvshadow/
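
    In case it helps anyone else, the steps were roughly the following; the volume offset (a standard Win7 partition start) and the mount points are just examples, and vshadowmount exposes each VSC as a file that can then be loopback-mounted read-only:

        $ ./configure && make && sudo make install && sudo ldconfig
        $ vshadowinfo -o 1048576 win7.raw
        $ sudo mkdir /mnt/vss /mnt/shadow1
        $ sudo vshadowmount -o 1048576 win7.raw /mnt/vss
        $ sudo mount -o ro,loop /mnt/vss/vss1 /mnt/shadow1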

    Hope that is useful to you!
    Regards
