Saturday, November 24, 2018

Tool Testing

Phill recently posted about some testing he'd conducted of tools for parsing Windows Recycle Bin files. From his blog post, and follow-on exchanges via Twitter, it seems that Phill tested the following tools (I'm assuming these are the versions tested):

- Jason Hale's $I Parse - blog posts here and here
- Dan Mare's RECYCLED_I app - the main software page states "RECYCLED_I: Program to parse the $I files extracted via a forensic software package. Special request.", but you can download it (and get syntax/usage) from here.
- My own

Phill's testing resulted in Eric Zimmerman creating RBCmd (tweet thread).

What I was able to determine after the fact was that the "needs" for a parsing tool were:

- parse Recycle Bin files from XP/2003 systems (INFO2), as well as Win7 & Win10 ($I*)
- for Win7/10, be able to parse all $I* files in a folder.

The results from the testing were (summarized):

- Some tools didn't do everything; some don't parse both XP- and Win7-style Recycle Bin files, and the initial versions of the tool I wrote parsed but did not display file sizes (it does now)
- The tool I wrote can optionally display tabular, CSV, and TLN output
- Eric's RBCmd parses all file types, including directories of $I* files; from the tweet thread, it appears that RBCmd displays tabular and CSV output
- rifiuti2 was the fastest

So, if you're looking to parse Recycle Bin index files (either INFO2 or $I* format)...there you go. 

$I* File Structures
As Jason Hale pointed out over 2 1/2 years ago, the $I* file structure changed between Win7 and Win10.  Most of the values are in the same location (the version number...the first four bytes...was updated from 1 to 2), but where Win7 had a fixed-length field that included the name and original path (in Unicode) of the file, Win10 and Win2016 have a four-byte name length field, followed by the file path and name, in Unicode.
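
As an illustration of how small the difference is from a parsing perspective, here's a minimal sketch in Python, based on the layout described above.  The 64-bit widths for the header, file size, and deletion time fields are the commonly documented ones, but treat them as an assumption and verify against your own $I files:

    import struct
    from datetime import datetime, timedelta, timezone

    def filetime_to_dt(ft):
        # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
        return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

    def parse_dollar_i(path):
        with open(path, "rb") as f:
            data =
        # Three 64-bit little-endian values: version, original file size,
        # and deletion time; the version number sits in the low bytes of
        # the first field.
        version, size, deleted = struct.unpack_from("<QQQ", data, 0)
        if version == 1:
            # Win7: fixed-length (520-byte) Unicode path field
            raw = data[24:24 + 520]
        elif version == 2:
            # Win10: four-byte character count, then the Unicode path
            (nchars,) = struct.unpack_from("<I", data, 24)
            raw = data[28:28 + nchars * 2]
        else:
            raise ValueError("unknown $I version: %d" % version)
        name = raw.decode("utf-16-le").split("\x00")[0]
        return {"version": version, "size": size,
                "deleted": filetime_to_dt(deleted), "path": name}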


Friday, November 23, 2018

Basic Skillz, pt II

Following my initial post on this topic, and to dovetail with Brett's recent post, I wanted to provide something of a consolidated view based on the comments received.

For the most part, I think Brett's first comment was very much on point:

Should be easy enough to determine what would constitute basic skills, starting with collecting the common skills needed across every specialty (the 'basic things'). Things like, seizing evidence, imaging, hashing, etc..

Okay, so that's a really good start.  Figure out what is common across all specialties, and come up with a core set of skills that are independent of OS, platform, etc., in order to determine what constitutes a "Basic DF Practitioner".  These skills will need to be testable and verifiable; some will likely be of the "you took a test and achieved a score" variety, while others will be pass/fail, or a verification of the fact that you were able to demonstrate the skill to some degree.  Yes, this will be more subjective than a written test, but there are some skills (often referred to as "soft skills") that, while important, one may not be able to pin down to the point of having a written test to verify them.

Brigs had some great thoughts as far as a breakdown of skill sets goes, although when I read his comment, I have to admit that in my head, I read it in my Napoleon Dynamite voice.  ;-)  Taking this a step further, however, I wanted to address @mattnotmax's comments, as I think they provide a really good means to walk through the thought process.

1. collect the evidence properly 

What constitutes "properly"?  The terms "forensics" and "evidence" bring a legal perspective to the forefront in discussions on this topic, and while I fully believe that there should be one standard to which we all strive to operate, the simple fact is that business processes and requirements very often prevent us from relying on one single standard.  While it would be great to be able to cleanly shut a system down and extract the hard drive(s) for acquisition, there are plenty of times we cannot do so.  I've seen systems with RAID configurations shut down and the individual drives acquired, but the order of the drives and the RAID configuration itself was never documented; as such, all of those disk images were useless.  On the other hand, I've acquired images from live systems with USB 1.0 connections by mapping a drive (an ext HDD) to another system on the network that had USB 2.0 connections.

I think we can all agree that we won't always have the perfect, isolated, "clean room-ish" setting for acquiring data or 'evidence'.  Yes, it would be nice to have hard drives removed from systems, and be able to have one verified/validated method for imaging that data, but that's not always going to be the case.

Live, bare-metal systems do not have a "hard drive" for memory that you can remove and image, and memory acquisition inherently requires the addition of software, which itself modifies the contents of memory.

I have never done mobile forensics, but I'm sure that there are instances, or even just specific handsets, where an analyst cannot simply shut the handset down and acquire a complete image of the device.

I would suggest that rather than simply "collect the evidence properly", we lean toward understanding how evidence can be collected (that one size does not fit all), and that the collection process must be thoroughly documented.

2. image the hard drive 

Great point with respect to collection...but what if "the hard drive" isn't the issue?  What if it's memory?  Or a SIM card?  See my thoughts on #1.

3. verify the tool that did the imaging, and then verify the image taken 

I get that the point here is the integrity of the imaging process itself, as well as maintaining and verifying the integrity of the acquired image.  However, if your only option for collecting data is to acquire it from a live system, and you cannot acquire a complete copy of the data, can we agree that what is important here is (a) documentation, and (b) understanding image integrity as it applies to the process being used (and documented)?
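
As a concrete illustration of the integrity piece, here's a minimal sketch of hashing an acquired image in chunks (the image file name is hypothetical); the same approach applies whether you're verifying a traditional disk image or documenting a partial live collection:

    import hashlib

    def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
        # Read the image in chunks so that even very large files don't
        # have to fit into memory.
        h =
        with open(path, "rb") as f:
            for chunk in iter(lambda:, b""):
        return h.hexdigest()

    # Hash at acquisition time, record the value in your documentation,
    # then re-hash later to verify that the image hasn't changed.
    print(hash_image("acquired_image.dd"))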

For items 1 thru 3, can we combine them into understanding how evidence or data can be collected, techniques for doing so, and that all processes must be thoroughly documented?

4. know what sort of analysis is required even if they don't know how to do it (i.e. can form a hypothesis) 

Knowing what sort of analysis is required is predicated on understanding the goals of the acquisition and analysis process.  What you are attempting to achieve predicates and informs your acquisition process (i.e., what data/evidence you will seek to acquire).

5. document all their process, analysis and findings 

Documentation is the key to all of this, and as such, I am of the opinion that it needs to be addressed very early in the process, as well as throughout the process.

6. can write a report and communicate that report to a technical and non-technical audience.

If you've followed the #DFIR industry for any period of time, you'll have seen that there are varying opinions as to how reporting should be done.  I've included my thoughts as to report writing both here in this blog and in one of my books (i.e., ch 9 of WFA 4/e).  While the concepts and techniques for writing DFIR reports may remain fairly consistent across the industry, I know that a lot of folks have asked for templates, and those may vary based on personal preference, etc.

All of that being said, I'm in agreement with Brett, with respect to determining a basic skill set that can be used to identify a "Basic DF Practitioner".  From there, one would branch off to different specialties (OS- or platform-specific), likely with different levels (i.e., MacOSX practitioner level 1, MacOSX analyst level 1, etc.)

As such, my thoughts on identifying and developing basic skills in practitioners include:

1. Basic Concepts

Some of the basic concepts for the industry (IMHO) include documentation, writing from an analytic standpoint (exercises), reviewing others' work and having your work reviewed, etc.

For a training/educational program, I'd highly recommend exercises that follow a building block approach.  For example, start by having students document something that they did over the weekend; say, attending an event or going to a restaurant or movie.  Have them document what they did, then share it, giving them the opportunity to begin speaking in public.  Then have them trade their documentation with someone else in the class, and have that person attempt to complete the same task, based on the documentation.  Then, that person reviews the "work product", providing feedback.

Another approach is to give the students a goal, or set of goals, and have them develop a plan for achieving the goals.  Have them implement the plan, or trade plans such that someone else has to implement the plan.  Then conduct a "lessons learned" review; what went well, what could have gone better, and what did we learn from this that we can use in the future?

This is where the building blocks start.  From here, provide reading materials for which the students provide reviews, and instead of having the instructor/teacher read them all, have the students share the reviews with other students.  This may be a good way to begin building the necessary foundation for the industry.

2. Understanding File Systems and Structures

This area is intended to develop an understanding of how data is maintained on storage systems, covering the most common formats from a high level.  For example (and this is just an example):

- MacOSX - HFS, HFS+, file structures such as plists
- Linux - ext3/4
- Windows - NTFS, perhaps some basic file structures (OLE, Registry)

Depending on the amount of information and the depth into which the instructor/teacher can go, the above list might be trimmed down, or include Android, network packets, common database formats (i.e., SQLite), etc.

Students can then get much more technically in-depth as they progress into their areas of specialization, or into a further level as "practitioner", before they specialize.

Just a note on "specialization" - this doesn't mean that anyone is pigeon-holed into one area; rather, it refers to the training.  This means that skill sets are identified, training is provided, and skills are achieved and measured such that they can be documented.  In this way, someone who achieves "MacOSX analyst level 2" is known to have completed training and passed testing for a specific set of skills that they can then demonstrate.  The same would be true with other specialized areas.

3. Data Acquisition and Integrity

The next phase might be one in which basic techniques for data acquisition are understood.  I can see this as being a fantastic area for "fam fires"; that is, opportunities for the students to get hands-on time with various techniques.  Some of these, such as using write blockers, etc., should be done in the classroom, particularly at the early stages.

In this class, you could also get into memory acquisition techniques, with homework assignments to collect memory from systems using various techniques, documenting the entire process.  Then students will provide their "reports" to other students to review.  This provides other opportunities for evaluation, as well; for example, have a student with, say, a Mac system provide their documentation to another student with a Mac, and see if the process returns similar results.

We want to be sure that some other very important topics are not skipped, such as acquiring logs, network captures (full packet captures vs. netflow), etc.  Again, this should be a high-level understanding, with familiarization exercises, and full/complete documentation.

4. Techniques of Analysis

I think that beginning this topic as part of the basic skill set is not only important, but a good segue into areas of specialization. This is a great place to reiterate the foundational concepts; determine goals, develop a plan, document throughout, and conduct a review (i.e., "lessons learned").  With some basic labs and skills development exercises, an instructor can begin including things such as how those "lessons learned" might be implemented.  For example, a Yara rule, or a grep statement for parsing logs or packet captures.  But again, this is high-level, so detailed/expert knowledge of writing a Yara rule or grep expression isn't required; the fact that one can learn from experiences, and share that knowledge with others should be the point.
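
To make that concrete, a "lesson learned" can be captured in something as small as a reusable search expression.  Here's a minimal, grep-style sketch in Python; the log file name and the indicator are hypothetical:

    import re

    # Hypothetical indicator from a "lessons learned" review: a file name
    # observed in a prior engagement.
    indicator = re.compile(r"defrag\.exe", re.IGNORECASE)

    with open("system.log") as log:
        for line_num, line in enumerate(log, start=1):
            if
                print("%d: %s" % (line_num, line.rstrip()))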

Again, this is mostly high-level, and a great way to maximize the time might be to have students get into groups and pick or be assigned a project.  The delivery of the project should include a presentation of the goals, conduct of the project, lessons learned, and a review from the other groups.

What needs to be common throughout the courses is the building block approach, with foundations being built upon and skills developed over time.

As far as skill development goes, some things I've learned over time include:

We all learn in different ways.  Some learn through auditory means, others visually, and others by doing.  Yes, at a young age, I sat in a classroom and heard how to put on MOPP NBC protective gear.  However, I really learned by going out to the field and doing it, and I learned even more about the equipment by having to move through thick bush, wearing all of that equipment, in Quantico, in July.

I once worked for a CIO who said that our analysts needed to be able to pick up a basic skill through reading books, etc., as we just could not afford to send everyone to intro-level training for everything.  I thought that made perfect sense.  When I got to a larger team, there were analysts who came right out and said that they could not learn something new unless they were sitting in a classroom and someone was teaching it to them.  At first, I was aghast...but then I realized that what they were saying was that, during the normal work day, there were too many other things going on...travel, submitting expenses, performing analysis and report writing...such that they didn't feel that they had the time to learn anything.  Being in a room with an instructor took them out of the day-to-day chaos, allowed them to focus on that topic, to understand, and ask questions.  Well, that's the theory, anyway.  ;-)

We begin learning a new skill by developing a foundational understanding, and then practicing the skill based on repeating a "recipe".  Initial learning begins with imitation.  In this way, we learn to follow a process, and as our understanding develops, we begin to move into asking questions.  This helps us develop a further understanding of the process, from which we can then begin making decisions when new situations arise.  However, developing new skills doesn't mean we relinquish old ones, so when a new situation arises, we still have to document our justification for deviation from the process.

Some additional thoughts that I had after clicking "publish"...

First, the above "courses" could be part of an overall curriculum, and include other courses, such as programming, etc.

Second, something else that needs to be considered from the very beginning of the program is specificity of language.  Things are called specific names, and this provides a means by which we can clearly communicate with other analysts, as well as non-technical people.  For example, I've read malware write-ups from vendors, including MS, that state that malware will create a Registry "entry"; well, what kind of entry?  A key or a value?  Some folks I've worked with in the past have told me that I'm pedantic for saying this, but it makes a difference; a key is not a value, nor vice versa.  They each have different structures and properties, and as such, should be referred to as what they are, correctly.
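
As a quick illustration of the difference, here's a minimal, Windows-only sketch using Python's winreg module (the key path and value name are hypothetical).  A write-up that says malware "creates a Registry entry" doesn't tell the analyst which of these two very different operations occurred:

    import winreg

    # Creating a *key* - a container, roughly analogous to a folder.
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\DemoApp")

    # Setting a *value* - a named data item stored within a key, with
    # its own name, type, and data.
    winreg.SetValueEx(key, "InstallPath", 0, winreg.REG_SZ, r"C:\DemoApp")
    winreg.CloseKey(key)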

Third, to Brett's point, vendor-specific training has its place, but should not be considered foundational.  In 1999, I attended EnCase v3 Intro training; during the course, I was the only person in the room who did not have a gun and a badge.  The course was both taught and attended by sworn law enforcement officers.  At one point during the training, the instructor briefly mentioned MD5 hashes, and then proceeded on with the material.  I asked if he could go back and say a few words about what a hash was and why it was important, and in response, he offered me the honor and opportunity of doing so.  My point is the same as Brett's: it's not incumbent upon a vendor to provide foundational training, but that training (and the subsequent knowledge and skills) is, indeed, foundational (or should be) to the industry.

Here is a DFRWS paper that describes a cyber forensics ontology; this is worth consideration when discussing this topic.

Tuesday, November 20, 2018

Basic Skillz

Based on some conversations I've had with Jessica Hyde and others recently (over the past month or so), I've been thinking a good bit lately about what constitutes basic skills in the DFIR field.

Let's narrow it down a bit more...what constitutes "basic skills" in digital forensics?

Looking back at my own experiences, particularly the military, there was a pretty clear understanding of what constitutes "basic skills".  The Marines have a motto, "every Marine a rifleman", which essentially states that every Marine must know how to pick up and effectively operate a service rifle, be it the M-16 or M-4.  Boot camp (for enlisted Marines) is centered on a core understanding of what it means to be a "basic Marine", and the same holds true for TBS for officers (both commissioned and warrant).  From each facility, Marines head off to specialized training in their military occupational specialty (MOS).

Is something like this an effective model for DF?  If so, what constitutes "basic skills"?  What is the point where someone with those basic skills transitions to an area of specialty, such as, say, Windows forensics, or Mac or mobile forensics? 


Saturday, November 17, 2018

Veteran Skillz

I had an interesting chat with a fellow Marine vet recently, which generated some thoughts regarding the non-technical skills that veterans bring to bear, in any environment.  This is something I've thought about before, and following the exchange, I thought it was time to put together a blog post.

Before I start, however, I want to state emphatically and be very clear that this is not an "us vs them" blog post.  I'm fully aware that a lot of non-vets may have many of the same skills and experiences discussed in the post, and I'm not suggesting that they don't.  More than anything, the goal of this blog post is to help vets themselves overcome at least a modicum of the "imposter syndrome" they may be feeling as they begin their transition from the military to the civilian community.

The military includes some quality technical skills training, and a great thing about the military is that they'll teach you a skill, and then make you use it.  This includes the entire spectrum of jobs...machine gunner, truck driver, welder, etc.  While the technical skills imparted by the military may not, for the most part, seem up to par with those in the private sector, there are a lot of soft skills that are part of military training that are not as prevalent out in the private sector.

Vets also develop some pretty significant technical skill sets, either as part of or ancillary to their roles in the military.  When I was on active duty and went to graduate school, I did things outside of work like upgrade my desktop by adding a new hard drive which, back in '94, was not the most straightforward process if you'd never done it before.  I knew an infantry officer who showed up and had not only installed Linux on his 386 desktop computer, but had already developed a familiarity with the OS...again, not something to shake a stick at back in the mid-'90s.  I developed more than a passing familiarity with OS/2 Warp.  Prior to that, I had some pretty inventive enlisted Marines working for me; one developed a field expedient antenna that he called a "cobra-head", which he carried around in an old calculator pouch.  Another Marine discovered a discrepancy in the "Math for Marines" MCI correspondence course exam; he wrote it up, I edited it and had him sign it, and he got the award.  After all, he found it.  My point is that I've spoken with a number of vets who've been reticent to take that big step out into the private sector, instead opting for a "soft" transition by working for a contractor or in LE first.  I think that some of this has been due to the misconception that "I won't measure up", but honestly, nothing could be further from the truth.

For vets, the skills you have may be more sought after than you realize.  For example, if you spent some time in the military, you have some pretty significant life experiences that non-vets may not have, like living and working with a diverse team.  Spent six years in the Navy, with a good bit of that on ship or on a submarine?  Then you know exactly what that means.  If you spent any time in the military, you very likely spent time living in a barracks environment, and it's also very likely that you spent time having to be responsible for yourself.  As such, when you're transitioning to the private sector, you've already learned a lot of the lessons that others may not yet have experienced.

One notable example that I've heard mentioned by others is being part of a team.  What does this mean?  One fellow vet said that he has, "...a strong sense of not being the guy that screws over my teammates."  He further shared that he'll do whatever it takes to ensure that he doesn't make someone else's job tougher or needlessly burdensome.

There are also a number of little often have you been on a conference call when someone spends a minute responding, but they're still on mute?  Or they have some serious racket going on in the background, and they won't go on mute?

Other examples include planning and communications.  Like many, I took a class on public speaking while I was in college (it was a requirement), but my real experience with direct communications to others came while I was in the military, beginning in Officer Candidate School (OCS, which is an evaluation process, not a training one).  During OCS, we had an evolution called "impromptu speech", where the platoon commander gave us 15 min to prepare a 5 min speech, and we were evaluated (in front of everyone) on both the content and conduct (i.e., did we finish on time, etc.) of the "speech".  Each of us got direct feedback as to such things as, did we follow instructions and stay on point, did we stay within the time limit, were we engaging, etc.  We then had multiple evolutions (military speak for "periods of training") throughout the rest of OCS where we had to use those skills; briefing three other candidates on fire team movement, briefing 12 other candidates on squad movement, the Leadership Reaction Course, etc.  For each evolution, we were evaluated on our ability to come up with a plan, take input and feedback, and then clearly and concisely communicate our plan to others.  And when I say we were "evaluated", I mean exactly that.  I still remember the feedback I received from the Captain manning the station of the Leadership Reaction Course where I was the team leader.  This sort of evolution (along with performance evaluation) continued on into initial officer training (for Marines, The Basic School, or "TBS"); however, there was no intro or basic "impromptu speech" evolution, it just picked up where OCS left off.  Not only were we evaluated and critiqued by senior officers, but we also received feedback from our fellow student officers.

My point is that there were experiences that developed basic skills that many of us don't really think about, but they have a pretty significant impact on your value once you transition out of the military.  For enlisted folks, did you ever have to guide a new person through the wickets of "how we do things here"?  Were you ever in the field somewhere and told by your squad leader that you had to give a training class to fill a block of time, and then evaluated on how you did?  Were you ever in a role where you had to give or elicit feedback?  You may think that these were small, meaningless experiences, but to be quite honest, they add up and put you head and shoulders above someone with the same technical skills who hasn't experienced those same sorts of events to that point in their career.

Like others, I've also experienced prejudice against members of the armed forces.  I'm not sharing this to diminish or minimize anyone else's experiences; rather, I'm simply sharing one of my own experiences.  Years ago (at the time of this writing, close to 20), I worked for a services company in VA, for which the security division was run out of the office in CA.  Not long after I started, I was told that I needed to fly to the CA office to get trained up on how things were done, and that I would be there for three days.  So, I flew out, and spent the first day and a half chatting with the tech writer in the office, who was also a Marine vet.  That's right...after all the discussion and planning, I showed up and nothing happened.  When things finally got kicked off, the Director of Security Services stated emphatically that, had I applied to the company through his office, I wouldn't have been hired, for no other reason than that I was coming from the military.  Apparently, his feeling was that military folks couldn't think the way civilian security folks thought, that we're too "lock-step".

While he was saying this, he was also giving me a tour of the facilities in the local office. Part of the tour included a room that he described as a "Faraday cage".  While we were in the room, with the door closed, his cell phone rang.  Evidently, it was NOT someone calling him (in the "Faraday cage") to remind him that, per my resume, I had earned an MSEE degree prior to leaving the military.  In fact, I knew what a "Faraday cage" was supposed to be from my undergrad schooling.  So...yeah.

My point is, don't put someone on a pedestal due to some minimized sense of self-worth, or some self-inflicted sense of awe in them.  After all, we all knew that Colonel or 1stSgt who really shouldn't have been in their position.  Realize that there are things you do bring to the table that may not be written into the job description, or even be on the forefront of the hiring manager's mind.   However, those skills that you have based simply on what you've experienced will make you an incredibly valuable asset to someone.

For the vets out there who may be feeling anxious or reticent about their impending transition...don't.  Remember how you hated staying late to clean weapons, but you adjusted your attitude and focused on getting it done...and not just your weapon, but once you were finished, you went and helped someone else?  Remember all those times when the trucks were late picking you and your team up, and how you developed patience because of those experiences?  Remember how you also looked to those experiences, and thought about all the steps you'd take to ensure that they didn't happen on your watch, when you were in charge?  Well, remember those times and those feelings while you're interviewing, and then reach out and extend a helping hand to the next vet trying to do the same thing you did.

Tuesday, October 30, 2018

More Regarding IWS

IWS has been out for a short while now, and there have been a couple of reviews posted.  So far, it seems that there is some reticence toward the book, based on the form factor (size) as well as the price point.  Thanks to feedback on that subject from Jessica Hyde, the publisher graciously shared the following with me:

I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.

Hopefully, this discount code will bring more readers and DFIR analysts a step closer to the book.  I think that perhaps the next step is to address the content itself. I'm very thankful to Brett Shavers for agreeing to let me share this quote from an email he sent me regarding the IWS content:

As to content, I did a once-over to get a handle of what the book is about, now on Ch 2, and so far I think this is exactly how I want every DFIR book to be written.

I added the emphasis myself.  This book is something of a radical departure from my previous books, which I modeled after other books I'd seen in the genre, because that's what I thought folks wanted to see.  Mention an artifact, provide a description of what the artifact may mean (depending upon the investigation), maybe a general description of how that artifact may be used, and then provide names of a couple of tools to parse the artifact.  After that, move on to the next artifact, and in the end, pretty much leave it to the reader to string everything together into an "investigation".  In this case, my thought process was to use images that were available online to run through an investigation, providing analysis decisions and pivot points along the way.  This way, a reader could follow along, if they chose to do so.

If you get a copy of the book and have a similar reaction to what Brett shared, please let me know.  If there's something that you like or don't like about the book, again, please let me know.  Do this through an email, a comment here on this blog, or a blog post of your own.  As illustrated by the example involving Jessica, if I know about something, I can take action and work to change it. 

How It Works
When a publisher decides to go forward with a book project, they have the author submit a prospectus describing the book, the market for the book, and any challenges that may be faced in the market; in short, the publisher has the author do the market research.  The prospectus is then reviewed by several folks; for the book projects I've been involved with, it's usually been three people in the industry.  If the general responses are positive, the publisher will move forward with the project.

I'm sharing this with you because, in my experience, there are two things that the publisher looks at when considering a second edition; sales numbers and feedback from the first edition.  As such, if you like the content of the book and your thoughts are similar to Brett's, let me know.  Write a review on Amazon or on the Elsevier site, write your own blog post, or send me an email.  Let me know what you think, so that I can let the publisher know, and so that I can make the changes or updates, particularly if they're consistent across several reviewers. 

If you teach DFIR, and find value in the book content, but would like to see something more, or something different, let me know.  As with Jessica's example, there's nothing anyone can do to take action if they don't know what you're thinking.

Sunday, October 28, 2018


Book Discount
While I was attending OSDFCon, I had a chance to (finally!) meet and speak with Jessica Hyde, a very smart and knowledgeable person, former Marine, and an all-around very nice lady.  As part of the conversation, she shared with me some of her thoughts regarding IWS, which is something I sincerely hope she shares with the community.  One of her comments regarding the book was that the price point put it out of reach for many of her students; I shared that with the publisher, and received the following as a response:

I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.

What this demonstrates is that if you have a question, thought, or comment, share it.  If action needs to or can be taken, someone will do so.  In this case, my concern is the value of the book content to the community; Jessica graciously shared her thoughts with me, and as a result, I did what I could to bring the book closer to where others might have an easier time purchasing it.

So how can you share your thoughts?  Write a blog post or an email.  Write a review of the book, and specify what you'd like to see.  What did you find good, useful or valuable about the book content, and what didn't you like?  Write a review and post it to the Amazon page for the book, or to the Elsevier page; both pages provide a facility for posting a review.

Artifacts of Program Execution
Adam recently posted a very comprehensive list of artifacts indicative of program execution, in a manner similar to many other blogs and even books, including my own.  A couple of take-aways from this list include:

- Things keep changing with Windows systems.  Even as far back as Windows XP, there were differences in artifacts, depending upon the Service Pack.  In the case of the Shim Cache data, there were differences in data available on 32-bit and 64-bit systems.  More recently, artifacts have changed between updates to Windows 10.

- While Adam did a great job of listing the artifacts, something analysts need to consider is the context available from viewing multiple artifacts together, as a cluster, as you would in a timeline.  For example, let's say that when and how Defrag was executed is critical; creating a timeline using the user's UserAssist entries, the timestamps available in the Application Prefetch file, and the contents of the Task Scheduler Event Log can provide a great deal of context to the analyst (see the sketch below).  Do not view the artifacts in isolation; seek to use an analysis methodology that allows you to see the artifacts in clusters, for context.  This also helps in spotting attempts by an adversary to impede analysis.
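
Here's a minimal sketch of the idea: normalize events from several artifact categories into the five-field TLN format (time|source|host|user|description) and sort them into a single view.  The event tuples are made-up examples, not output from any particular parser:

    events = [
        # (epoch time, source, host, user, description) - hypothetical values
        (1540000000, "REG",  "HOST1", "user1", "UserAssist - DEFRAG.EXE run"),
        (1540000025, "EVTX", "HOST1", "", "Task Scheduler - ScheduledDefrag started"),
        (1540000030, "PREF", "HOST1", "", "DEFRAG.EXE-12345678.pf last run"),
    ]

    # Sorting by time puts the artifacts side by side, in a cluster,
    # rather than leaving each one to be interpreted in isolation.
    for ts, src, host, user, desc in sorted(events):
        print("%d|%s|%s|%s|%s" % (ts, src, host, user, desc))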

So, take-aways...know the version of Windows you're working with because it is important, particularly when you ask questions, or seek assistance.  Also, seek assistance.  And don't view artifacts in isolation. 

Artifacts and Evidence
A while back (6 1/2 yrs ago), I wrote about indirect and secondary artifacts, and included a discussion of the subject in WFA 3/e.

Chris Sanders recently posted some thoughts regarding evidence intention, which seemed to me to be along the same thought process.  Chris differentiates intentional evidence (i.e., evidence generated to attest to an event) from unintentional evidence (i.e., evidence created as a byproduct of some non-attestation function).

Towards the end of the blog post, Chris lists six characteristics of unintentional evidence, all of which are true.  To his point, not only may some unintentional evidence have multiple names, it may be called different things by the uninitiated, or those who (for whatever reason) choose to not follow convention or common practice.  Consider NTFS alternate data streams, as an example.  In my early days of researching this topic, I found that MS themselves referred to this artifact as both "alternate" and "multiple" data streams.
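
As a side note, alternate data streams are easy to demonstrate for yourself.  Here's a minimal sketch (Windows/NTFS only; the file and stream names are hypothetical) showing that a stream can be created and read with ordinary file I/O, simply by appending ":streamname" to the file name:

    host_file = "demo.txt"
    with open(host_file, "w") as f:
        f.write("visible content")

    # Write to an alternate data stream attached to the same file.
    with open(host_file + ":hidden", "w") as f:
        f.write("content in the alternate stream")

    # Read it back; the stream doesn't appear in a normal directory
    # listing, nor does it change the file's reported size.
    with open(host_file + ":hidden") as f: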

Some other things to consider, as well...yes, unintentional evidence artifacts often are quirky and have exceptions, which means they are very often misunderstood and misinterpreted.  Consider the example of the Shim Cache entry from Chris's blog post; in my experience, this is perhaps the most commonly misinterpreted artifact to date, for the simple fact that the time stamps are commonly referred to as the "date of execution", when they are, in fact, the file system last modification times of the files in question.  Another aspect of this artifact is that it's often taken as standalone, and it should not be...there may be evidence of time stomping occurring prior to the file being included as a Shim Cache record.

Finally, Chris is absolutely correct that many of these artifacts have poor documentation, if they have any at all.  I see this as a shortcoming of the community, not of the vendor.  The simple fact is that, as a community, we're so busy pushing ahead that we aren't stopping to consider the value to the community as a whole that we're leaving behind.  Yes, the vendor may poorly document an artifact, or the documentation may simply be part of the source code that we cannot see, but what we're not doing as a community is documenting and sharing our findings.  There've been too many instances during my years doing DFIR work when I would share something with someone who would respond with, "oh, yeah...we've seen that before", only to have no documentation, not even a Notepad document or something scribbled on a napkin to which they could refer me.  This is a loss for everyone.

Saturday, October 20, 2018

OSDFCon Trip Report

This past week I attended the 9th OSDFCon...not my 9th, as I haven't been able to make all of them.  In fact, I haven't been able to make it for a couple of years.  However, this return trip did not disappoint.  I've always really enjoyed the format of the conference, the layout, and more importantly, the people.  OSDFCon is well attended, with lots of great talks, and I always end up leaving there with much more than I showed up with.

Interestingly enough, one speaker could not make it at the last minute, and Brian simply shifted the room schedule a bit to better accommodate people.  He clearly understood the nature of the business we're in, and the absent presenter suffered no apparent consequences as a result.  This wasn't one of the lightning talks at the end of the day, this was one of the talks during the first half of the conference, where everyone was in the same room.  It was very gracious of Brian to simply roll with it and move on.

The Talks
Unfortunately, I didn't get a chance to attend all of the talks that I wanted to see.  At OSDFCon, by its very nature, you see people you haven't seen in a while, and want to catch up.  Or, as is very often the case, you see people you only know from online.  And then, of course, you meet people you know only from online because they decide to drop in, as a surprise.

However, I do like the format.  Talk times are much shorter, which not only falls in line with my attention span, but also gets the speakers to focus a bit more, which is really great from the perspective of the listener as well as the speaker.  I also like the lightning talks...short snippets of info that someone puts together quickly, very often focusing on the fact that they have only 5 mins, and therefore distilling it down, and boiling away the extra fluff.

My Talk
I feel my talk went pretty well, but then, there's always the bias of "it's my talk".  I was pleasantly surprised when I turned around just before kicking the talk off to find the room pretty packed, with people standing in the back.  I try to make things entertaining, and I don't want to put everything I'm going to say on the slides, mostly because it's not about me talking at the audience, as much as it's about us engaging.  As such, there's really no point in me providing my slide pack to those who couldn't attend the presentation, because the slides are just place holders, and the real value of the presentation comes from the engagement.

In short, the purpose of my talk was that I wanted to let people know that if they're just downloading RegRipper and running the GUI, they aren't getting the full power out of the tool.  I added a command line switch to rip.exe earlier this year ("rip -uP") that will run through the plugins folder, and recreate all of the default profiles (software, sam, system, ntuser, usrclass, amcache, all) based on the "hive" field in the config headers of the plugin.
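
For example, regenerating the profiles and then running one against a hive looks something like the following (the hive file name is hypothetical; the -r and -f switches are from rip's documented syntax):

    rip -uP
    rip -r NTUSER.DAT -f ntuser

The first command rebuilds the default profiles from the plugins' config headers; the second runs every plugin in the "ntuser" profile against the hive.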

Something that is a recurring theme of this conference is how to get folks new to the community to contribute and keep the community alive, as well as how to get folks already in the community to contribute.  Well, a couple of things came out of my talk that might be of interest to someone in the community.

One way to contribute is this...someone asked if there was a way to determine for which version of Windows a plugin was written.  There is a field in the %config header metadata that can be used for that purpose, but there's no overall list or table that identifies the Windows version for which a plugin was written.  For example, there are two plugins that extract information about user searches from the NTUSER.DAT hive, one for XP ( and one for Vista+ (  There's really no point in running against an NTUSER.DAT from a Windows 7 system.

So, one project that someone might want to take on is to put together a table or spreadsheet that provides this list.  Just sayin'...and I'm sure that there are other ideas as to projects or things folks can do to contribute. 

For example, some talks I'd love to see are ones on how folks (not the authors) use the various open source tools that are available in order to solve problems.  Actually, this could easily start out as a blog post, and then morph into a did someone use an open source tool (or several tools) to solve a problem that they ran into?  This might make a great "thunder talk"...10 to 15 min talks at the next OSDFCon, where the speaker shares the issue, and then how they went about solving it.  Something like this has multiple could illustrate the (or, a novel) use of the tool(s), as well as give DFIR folks who haven't spoken in front of a group before a chance to dip their toe in that pool.

Like I said, a recurring theme of the conference is getting those in the community, even those new to the community, involved in keeping the community alive, in some capacity.  Jessica said something several times that struck home with me...that it's up to those of us who've been in the community for a while to lead the way, not by telling, but by doing.  Now, not everyone's going to be able to, or even want to, contribute in the same way.  For example, many folks may not feel that they can contribute by writing tools, which is fine.  But a way you can contribute is by using the tools and then sharing how you used them.  Another way to contribute is by writing reviews of books and papers; by "writing reviews", I don't mean a table of contents, but instead something more in-depth (books and papers usually already have a table of contents).

Shout Outz
Brian Carrier, Mari DeGrazia, Jessica Hyde, Jared Greenhill, Brooke Gottlieb, Mark McKinnon, Cory Altheide, Cem Gurkok, Thomas Millar, the entire Volatility crew, Ali Hadi, Yogesh Khatri, the PolySwarm folks...I tried to get everyone, and I apologize for anyone I may have missed!

Also, I have to give a huge THANK YOU to the Basis Tech folks who came out, the vendors who were there, and to the hotel staff for helping make this conference go off without a hitch.

Final Words
As always, OSDFCon was well-populated and well-attended.  There was a Slack channel established for the conference (albeit not by Brian or his team, but it was available), and the Twitter hashtag for the conference seems to have been pretty well-used.

To follow up on some of the above-mentioned conversations, many of us who've been around for a while (or more than just "a while") are also willing to do more than lead by doing.  Many of us are also willing to answer ask.  Some of us are also willing to mentor and help folks in a more direct and meaningful manner.  Never presented before, but feel like you might want to?  Some of us are willing to help in a way that goes beyond just sending an email or tweet of encouragement.  Just ask.