Thursday, May 16, 2013

The Tool Validation "Myth-odology"

I posted recently about understanding data structures, and I want to continue that line of reasoning into the current state of tool validation.

What we have seen in the community for some time is that a new tool is announced or mentioned, and members of the community begin clamoring for their copy of that tool. Many times, one of the first questions is, "where can I download a copy of the tool?"  The reason most give for wanting to download a copy of the tool is so that they can "test" it, or use it to validate the output of other tools.  To that, I would pose this question - if you do not understand what the tool is doing, what it is designed to do, and you do not understand the underlying data structures being parsed, how can you then effectively test the tool, or use that tool to validate other tools?

As such, the current state of tool validation, for the most part, isn't so much a methodology as it is a myth-odology.  Obviously, this isn't aimed at formal testing and validation processes such as those used by NIST and other organizations; it applies more to individual analysts.

There are tools out there right now that are being recommended as THE tool for parsing a particular artifact or set of artifacts.  The tools are, in fact, very good at what they do, but some of them do not parse all of the data structures available within the set of artifacts, nor do they indicate in their output that these structures are missing.  I'm aware of analysts who, in some cases, have stated that the fact that the tool doesn't parse and display specific artifacts isn't an issue for them, because the tool showed them what they were looking for.  I think what's happening is that someone will run a tool against a data set, see a lot of data in the output, and deem it "good".  They may then run another tool against the same data set, see different output, and deem one of the tools "not good", or at the very least "questionable".  What I don't think is happening is that analysts are testing the tools against the data structures themselves; instead, they're viewing the data as a 'blob' and relying on the tools to provide that layer of abstraction I mentioned in my previous post.

Consider the parsing of shell items and shell item ID lists.  These artifacts abound on Windows systems, more so with each new version of Windows.  One place they've existed for some time is in Windows shortcuts (aka LNK files).  Some of the tools we've used for years parse both the headers and LinkInfo blocks of these files, but it's only been in the past 12 - 18 months or so that tools have parsed the shell item ID lists.  Why is this important?  These blog posts do a great job of explaining why...give them a read.  Another reason is that over the past year or so, I've run across several LNK files that consisted solely of the header and the shell item ID list...there was no LinkInfo block to parse.  As such, some of the tools available at the time would simply return blank output.
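To make those structures a bit more concrete, here's a minimal Python sketch (purely illustrative, and not any of the tools discussed here) that checks the LinkFlags in a LNK file's header and walks the shell item ID list, using the offsets documented in the MS-SHLLINK specification:

```python
# Minimal sketch: check the LinkFlags and walk the shell item ID list in a
# Windows shortcut (.lnk) file.  Offsets follow the MS-SHLLINK specification;
# error handling is minimal and this is illustrative only.
import struct
import sys

HAS_LINK_TARGET_ID_LIST = 0x00000001   # LinkFlags bit: shell item ID list present
HAS_LINK_INFO           = 0x00000002   # LinkFlags bit: LinkInfo block present

def walk_lnk(path):
    with open(path, "rb") as f:
        data = f.read()

    header_size, = struct.unpack_from("<I", data, 0)
    if header_size != 0x4C:
        raise ValueError("not a shell link file")

    link_flags, = struct.unpack_from("<I", data, 20)
    print("Has shell item ID list:", bool(link_flags & HAS_LINK_TARGET_ID_LIST))
    print("Has LinkInfo block:    ", bool(link_flags & HAS_LINK_INFO))

    if not (link_flags & HAS_LINK_TARGET_ID_LIST):
        return

    # The IDList immediately follows the 76-byte header: a 2-byte total size,
    # then a series of ItemIDs, each prefixed with its own 2-byte size and
    # terminated by a 2-byte zero (the TerminalID).
    offset = header_size
    id_list_size, = struct.unpack_from("<H", data, offset)
    offset += 2
    end = offset + id_list_size
    while offset < end:
        item_size, = struct.unpack_from("<H", data, offset)
        if item_size == 0:                 # TerminalID - end of the list
            break
        item = data[offset:offset + item_size]
        item_type = item[2] if item_size > 2 else 0   # class type indicator
        print("ItemID: size=%d type=0x%02x" % (item_size, item_type))
        offset += item_size

if __name__ == "__main__":
    walk_lnk(sys.argv[1])
```

Even a rough walker like this makes it obvious when a file has a shell item ID list but no LinkInfo block at all, which is exactly the case where some tools return blank output.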

There is also the issue of understanding how a tool performs its function.  Let's take a look at the XP Event Log example again.  Tools that use the MS API for parsing these files are likely going to return the "corrupted file" message that we're all used to seeing, but tools that parse the files on a binary level, going record-by-record, will likely work just fine.
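As a rough illustration of the record-by-record approach (a simplified sketch, not a complete EVT parser), the following Python snippet carves XP/2003 Event Log records by their "LfLe" signature instead of going through the API:

```python
# Minimal sketch: carve XP/2003 Event Log (.evt) records by signature rather
# than through the Windows API.  Each record begins with a 4-byte length
# followed by the "LfLe" magic value; the file header carries the same magic,
# so anything shorter than the 56-byte fixed portion of a record is skipped.
import struct
import sys
from datetime import datetime, timezone

def carve_evt_records(path):
    with open(path, "rb") as f:
        data = f.read()

    pos = data.find(b"LfLe")
    while pos != -1:
        start = pos - 4                        # the record length precedes the magic
        if start >= 0:
            length, = struct.unpack_from("<I", data, start)
            if length >= 0x38:                 # skip the file header, keep real records
                rec_num, time_gen = struct.unpack_from("<II", data, pos + 4)
                ts = datetime.fromtimestamp(time_gen, tz=timezone.utc)
                print("record %d  len=%d  generated=%s" % (rec_num, length, ts))
        pos = data.find(b"LfLe", pos + 4)

if __name__ == "__main__":
    carve_evt_records(sys.argv[1])
```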

Another myth or misconception that is seen too often is that the quality of a tool is determined by how much space its output consumes.  This simply is not the case.  Again, consider the shell item ID lists in LNK files.  Some of the structures that make up these lists contain time stamps, and a number of tools display them.  What do these time stamps mean?  How are they generated/produced?  Perhaps equally important is the question, what format are the time stamps saved in?  As it turns out, the time stamps are in the DOSDate format, consuming 32 bits and having a 2-second granularity.  On NTFS systems, a folder entry (one that leads to the target file) that appears in the shell item ID list will have its 64-bit FILETIME time stamp converted to a 32-bit DOSDate time stamp, with a corresponding loss in granularity.  So it's important to understand not only the data structure and its various elements, but also the context of those elements.  Given that, if one tool lists all of the elements of the component data structures and another does not, is the second tool any less valid or correct?
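To see that loss of granularity in action, here's a short Python sketch (with a made-up example value) that packs a full-precision time stamp into the 32-bit DOSDate format and decodes it again:

```python
# Minimal sketch: pack a full-precision time stamp into the 32-bit DOSDate
# format (16-bit date + 16-bit time, 2-second granularity) and decode it
# again, showing what's lost in the conversion.  The example value is made up.
from datetime import datetime

def datetime_to_dosdate(dt):
    """Return the (date, time) 16-bit pair as stored on disk."""
    dos_date = ((dt.year - 1980) << 9) | (dt.month << 5) | dt.day
    dos_time = (dt.hour << 11) | (dt.minute << 5) | (dt.second // 2)
    return dos_date, dos_time

def dosdate_to_datetime(dos_date, dos_time):
    """Decode the 16-bit date and time values back into a datetime."""
    day    =  dos_date        & 0x1F
    month  = (dos_date >> 5)  & 0x0F
    year   = ((dos_date >> 9) & 0x7F) + 1980
    second = (dos_time        & 0x1F) * 2        # 2-second granularity
    minute = (dos_time >> 5)  & 0x3F
    hour   = (dos_time >> 11) & 0x1F
    return datetime(year, month, day, hour, minute, second)

if __name__ == "__main__":
    original = datetime(2013, 5, 16, 14, 30, 37, 123456)     # FILETIME-style precision
    d, t = datetime_to_dosdate(original)
    print("original :", original)                   # 2013-05-16 14:30:37.123456
    print("recovered:", dosdate_to_datetime(d, t))  # 2013-05-16 14:30:36
```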

Returning to the subject of data structures, does this mean that every analyst must know and understand the details for every available data structure on, say, a Windows system?  No, not at all...that's simply not realistic.  The answer, IMHO, is that analysts need to engage.  If you're unclear about something, ask.  If you need a reference, ask someone.  There are some great structure references posted on the ForensicsWiki, including those posted by Joachim Metz, but I think that far too few analysts use that site as a resource.  By sharing what we know, and coupling that with what we need to know, we can approach a better method for validating the tools and methodologies that we use.

11 comments:

  1. Harlan, about the 4th paragraph down, it looks like a sentence is missing some information?

    "What I don't think is happening is that analysts are testing the"

    Left me hanging!

  2. Maria,

    Thanks, good catch! I had an issue with the save feature earlier...must have missed that.

    So, thoughts overall?

  3. I couldn’t agree more, especially with this statement:

    Returning to the subject of data structures, does this mean that every analyst must know and understand the details for every available data structure on, say, a Windows system? No, not at all...that's simply not realistic.

    I find that with each exam I am usually digging into at least one critical artifact each time. If I see a post about one, I tuck that away until I have time to dig into it further, or it's relevant in a case.

  4. Anonymous, 1:36 PM

    You raised some great points; it is important for analysts to have an idea of what is going on "under the hood" of the tools they are using.

  5. Maria,

    ...If I see a post about one...

    How about if you don't? Do you seek out resources about those artifacts, or contact individual analysts?

    I'm like you; if I find something I don't know about or understand completely, I dig. I have met analysts who will try to figure it out themselves, often over the span of weeks or months, and others who will simply ignore the artifact.

  6. If I can't figure it out or see a post on it, I try to reach out to some people on my Google chat.

    Bryan Moran just recently helped me figure out a date/time stamp from a Mac recent plist file. It was from an Office recent doc plist file, but it didn't seem to be in a typical date/time stamp format.

    After I spun my wheels on it, working together we were able to figure it out.

    I think having someone to bounce ideas off of is a great way to figure something out, or see something from a new perspective.

    Sometimes I think I can get caught in a rut or an idea and someone else's input can help me go in a new direction I would not have thought of before.

  7. A. Thulin, 2:00 AM

    > ... if you do not understand what the tool is doing, what it is designed to do, and you do not understand the underlying data structures being parsed, how can you then effectively test the tool ... ?

    The question suggests that no answer is needed. But I don't agree.

    I'll take those one by one.

    > If I don't understand what the tool is doing ...

    It's the job of the toolmaker to document that so that I can know it. Or perhaps more accurately, so that I don't have any misconception of the limitations of the tool.

    The lack of such documentation is a minus in any reasonable test of a tool as a product. (Quite a few commercial tools are badly lacking in this area.)

    The absence of such information doesn't make any tests invalid, though.

    > ... what it is designed to do

    Same thing here -- this is the toolmaker's responsibility, unless the design is 'obvious' enough (and even then, I'd say the toolmaker has a responsibility to document it so that it can be verified to conform to what's 'obvious').

    > ... and you do not understand the underlying data structures being parsed

    If I can create correct test data, I don't need to. But this depends on the tool -- if the tool extracts information for which I can't create appropriate test data, then you're right: a particular allocation of clusters, $FN time stamps, perhaps an exact MFT record index, unspecified areas in a security descriptor in the registry, etc. Then additional expertise is necessary, and probably also additional expertise in building tools to create appropriate test data.

    But it is also possible to approach this indirectly: if the tool claims to extract X (say, the phase of the moon when file x was created), I (as a user) can and should ask what research has been performed to establish that X indeed can be extracted, and what additional research has been performed to identify the platform variations (is it possible in all variants of OS MEGA, or only in OS MEGA 2.x, say?) If the toolmaker cannot tell me that, the foundations for the tool must be regarded as shaky, and that clearly reflects on the tool itself.

    > how can you then effectively test the tool ...

    An additional possibility is to ask for the test data the toolmaker used for verifying the tool, and examine those tests critically. Any lacunae are indications that the system tests lacked coverage, and that may also reflect poorly on the quality assurance the tool has undergone.

    This one is wishful thinking on my part. Many open source tools come with test libraries -- commercial tools generally don't. So it doesn't apply in all situations. But when it is there, the method can be used.

    So I don't think the answer to the question is 'not at all'.

  8. The question suggests that no answer is needed. But I don't agree.

    Nor do I. I don't agree that the question suggests that no answer is needed...not at all.

    It's the job of the toolmaker to document that so that I can know it.

    Interesting. In my own experience, I've had folks take tools designed to scan sectors within acquired images looking for MBR-infecting malware, and run them against memory dumps. I've had folks tell me "...it don't work..." when they ran the Forensic Scanner against their live system. I've seen people comment on Twitter that some code that I wrote to parse Jump Lists was part of RegRipper.

    My point is...I don't think that documentation is something that most people look for.

    ...if the tool extracts information for which I can't create appropriate test data...

    Back to my point...if you don't know what the data structures consist of, then how do you know that the tool is treating your test data appropriately?

    So I don't think the answer to the question is 'not at all'.

    I firmly believe that if all three conditions in the quoted statement are met, then the answer is indeed, not at all. I understand your response, but in my experience, you've offered up "in a perfect world" conditions.

  9. Anonymous, 11:01 AM

    *sigh* another elitist post: you're only worthy to have a tool if you can first demonstrate intimate knowledge of the data being analyzed.

  10. Harlan,

    This is a great article and well timed for my class. We are going over open source tool validation at the moment. Many of the points you make will help my students see tool validation from a different perspective, thank you.

  11. Vern,

    Thanks. I would be interested in your thoughts, as well as those of your students. Having both the experienced and the fresh perspectives can open up new avenues and thoughts.
