Sunday, October 30, 2011

Stuff

Speaking
I've got a couple of speaking engagements coming up that I thought I'd share...

7-9 Nov - PFIC 2011 - I'll be giving two presentations, one on the benefits of using a forensic scanner, and the other an Introduction to Windows Forensics.  I attended the conference last year and had a great time, and I'm looking forward to meeting up with folks again.

30 Nov - CT HTCIA - I'm not 100% sure what I'm going to be presenting on at this point...  ;-)  I'm thinking about a quick (both presentations are less than an hour) presentation on using RegRipper, as well as one on malware characteristics and malware detection within acquired images.  I think that both are topical, and both are covered in my books.

Jan 2012 - DoD Cybercrime Conference (DC3) - I'll be presenting on timeline analysis.  I gave a presentation on Registry analysis (go figure, right??) here a long time ago, and really enjoyed the portions of the conference that I was able to attend.  I know that Rob Lee recently gave an excellent webinar on Super Timeline Analysis, but rest assured, this isn't the same material.  While I have provided code to assist with log2timeline, I tend to take a slightly different approach when presenting on timeline analysis.  Overall, I'm looking forward to having a great time with this conference and presentation.  Also, timeline analysis has its own chapter in the upcoming Windows Forensic Analysis 3/e.

Reading
I've had an opportunity to travel recently, and when I do, I like to read.  Being "old skul" (i.e., I don't own a tablet...yet), I tend to go with hard copy reading materials, such as magazines and small books.  I happened to pick up a copy of Entrepreneur recently, for a couple of reasons.  First, it's easy to maneuver that kind of reading material into my seat while I stow my carry-on bag.  Second, I think it's a great idea to see how folks in other business areas solve problems and address issues that they encounter, and to spur ideas for how to recognize and address issues in my own area of interest.  For example, the October issue of the magazine has an article on how to start or expand a business during a recession by addressing customer needs.  In the technical community, this is extremely important.

In that same issue, Jonathan Blum's article titled Hack Job (not the same title in the linked article, but the same content) was interesting...while talking about application security, the author made the recommendation to "choose an application security consultant".  I completely agree with this, because it falls right in line with my thoughts on DFIR work...rather than calling an IR consultant or firm in an emergency, find a "trusted adviser" ahead of time who can help you address your DFIR needs.  What are those needs?  Well, in any organization, regardless of size, just look around.  Do you have issues or concerns with violation of acceptable use policies, or any other HR issues?  Theft of intellectual property? 

If you call a consulting firm when you have an emergency, it's going to cost you.  The incident has already happened, and then you have to work through contracting issues, getting a consultant (or a busload of consultants) on-site, and having to help the responders understand your infrastructure, and then start collecting data.  You may be paying for more consultants than you need initially, because after all, it is an emergency, and your infrastructure is unknown.  Or, you may be paying for more consultants later, as more information about the incident is discovered.  Also, keep in mind emergency/weekend/holiday rates, the cost of last minute travel, lodging, etc.  And we haven't even started talking about anything the consultants would need to purchase (drives for imaging), or fines you may encounter from regulatory bodies.

Your other option is to work with a trusted adviser ahead of time, someone who's seen a number of incidents, and can help you get ready.  You'll even want to do this before increasing your visibility into your infrastructure...because if you don't have a response capability set up prior to getting a deep view into what's really happening on your infrastructure, you can very easily be overwhelmed once you start shining a light into dark corners.  Work with this trusted adviser to understand the threats you're facing, what issues need to be addressed within your infrastructure and business culture, and establish an organic response capability.  Doing this ahead of time is less expensive, and with the right model, can be set up as a budgeted, predictable business expense, rather than a massive, unbudgeted expenditure.  Learning how an incident responder would address your issue and doing it yourself (to some extent) is much faster (quicker response time because you're right there) and less expensive (if you need analysis done, FedEx is much less expensive than last minute plane flights, lodging, rental cars, parking, etc., for multiple consultants).  Working with a trusted adviser ahead of time will help you understand how to do all of this in a sound manner, with confidence (and documentation!).

MBR Infectors
I've posted on MBR infectors before, and even wrote a Perl script to help me detect one of the characteristics of this type of malware (i.e., modifying the MBR, and then copying the original MBR to another sector, etc.).
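The basic check is simple enough to sketch.  Below is a simplified, hypothetical Python version of the idea (the sector size and "first track" scan range are assumptions, and a real tool would do much more validation)...this is an illustration of the technique, not my original Perl script:

```python
import io

SECTOR = 512

def find_mbr_copies(disk, max_sector=63):
    """Scan the early sectors of a raw disk image for stashed MBRs.

    Some MBR infectors overwrite sector 0 and copy the original MBR
    to an unused sector in the first track; an identical copy of
    sector 0, or a second sector ending in the 0x55AA boot signature,
    is worth a closer look.
    """
    disk.seek(0)
    mbr = disk.read(SECTOR)
    hits = []
    for n in range(1, max_sector + 1):
        disk.seek(n * SECTOR)
        sector = disk.read(SECTOR)
        if len(sector) < SECTOR:
            break
        if sector == mbr:
            hits.append((n, "identical copy of sector 0"))
        elif sector[-2:] == b"\x55\xaa" and any(sector[:440]):
            hits.append((n, "bootable code outside sector 0"))
    return hits

# Tiny simulated image: sector 0 holds "infected" boot code, and the
# original MBR has been stashed in sector 5.
original = b"\x33\xc0" + b"\x00" * 508 + b"\x55\xaa"
infected = b"\xeb\xfe" + b"\x90" * 508 + b"\x55\xaa"
image = bytearray(SECTOR * 10)
image[0:SECTOR] = infected
image[5 * SECTOR:6 * SECTOR] = original

for n, reason in find_mbr_copies(io.BytesIO(bytes(image)), max_sector=9):
    print("sector %d: %s" % (n, reason))   # sector 5: bootable code outside sector 0
```

Against a real image, you'd open the dd file with open(path, "rb") and typically scan the full gap between the MBR and the first partition, not just the first track.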

Chad Tilbury recently posted an MBR malware infographic that is extremely informative!  The infographic does a great job of illustrating the threat posed by this type of malware, not just in what it does and how it works; being a graphic, it also lets you see the sheer number of variants that are out there, as well as how they seem to be increasing.

This stuff can be particularly insidious, especially if you've never heard of it.  I've given a number of presentations where I've discussed NTFS alternate data streams (ADSs), and the subject matter freaks Windows admins out...because they'd never heard of ADSs!  So, imagine something getting on a system in such a way as to bypass security protections on the system during the boot sequence.  More importantly, as a DFIR analyst, do you have checks for MBR infectors as part of your malware detection process?

Blogs
Melissa's posted a couple of great blog posts on a number of topics, including (but not limited to) using Volatility and John the Ripper to crack passwords (includes a video), and examining partition tables.  She's becoming more prolific, which is great because she's been posting some very interesting stuff and I hope to see more of it in the future.

Tools
I've seen some tweets over the past week or so that have mentioned updates to the Registry Decoder tool...sounds like development is really ripping along (no pun intended...).  If you do any analysis of Windows systems and you haven't looked at this tool as a resource...what's wrong with you?  Really?  ;-)

Evidence Collection
A long time ago, while I was on the IBM ISS ERS team, we moved away from using the term "evidence" to describe what we collected.  We did so, because the term "evidence" has the connotation of having to do with courts, and there was an air of risk avoidance in much of the IR work that we did...I'm not entirely sure where that came from, but that's how it was.  And if a customer (or someone higher up the food chain) says, "don't call it 'evidence' because it sounds like we're taking it to court...", then, well...to me, it doesn't matter what you call it.  Now, this doesn't mean that we changed what we did or how we did it...it simply means that we didn't call the digital data that we collected "evidence".

This recent SANS ISC post caught my eye.  The reason it caught my eye was that it started out talking about having a standard for evidence handling, listed the requirements, and then...stopped.  Most often, when talking with on-site IT staff during an incident, there's agreement with respect to the need for collecting data; but when you start talking about what type of evidence is admissible in court, that's when most folks stop dead in their tracks and paralysis sets in, as often the "how" is never addressed...at least, not in a way that the on-site IT staff remembers or has implemented.

Here are a couple of thoughts...first, make data collection part of your incident response process.  The IR process should specify the need to collect data, and there should be procedures for doing so.  Each of these procedures can be short enough to easily understand and implement.

One of the things that I learned while preparing for the CISSP exam way back in 1999 was that business records...those records and that data collected as part of normal business processes...could be used as evidence.  I am not a lawyer, but I would think that this has, in part, to do with whether or not the person collecting the data is acting as an agent for law enforcement.  But if collecting that data is already part of your IR process and procedures, then it's documented as being part of your normal business processes.

And right there is the key to collecting "evidence"...documentation.  In some ways, I have always gotten the impression that this is the big roadblock to data collection...not that we don't know how to do it (there is a LOT of available information regarding how to collect all sorts of data from computer systems), but that we (technical folks) just seem to naturally balk at documenting anything.  And to be honest, I really don't know why that is...I mean, if a procedure states to follow these steps, and you do so, what's the problem?  Is it the fear of having done something wrong?  Why?  If you followed the steps in the procedure, what's the issue?

This really goes back to what I said earlier in this post about finding and working with a trusted adviser, someone with experience in IR who is there to help you help them to help you (that was completely intentional, by the way...).  For example, let's say you have a discussion and do some hands-on work with your trusted adviser regarding how to collect and preserve "evidence" from the most-often encountered systems in your infrastructure...laptops, desktops, and servers in the server room.  Then, let's say you have an incident and have to collect evidence from a virtual system, or a boot-from-SAN device.  Who better to assist you with this than someone who's probably already encountered these systems?  Or better yet, someone who's already worked with you to identify the one-off systems in your infrastructure and how to address them?

So, working with an adviser would help you address the questions in the SANS ISC blog post, and ensure that if your goal (or one of your goals) is to preserve evidence for use by law enforcement, then you've got the proper process, procedures, and tools in place to do so.

Saturday, October 29, 2011

NoVA Forensics Meetup

Reminder - our next NoVA Forensics Meetup is Wed, 2 Nov 2011...same Bat-time, same Bat-place.

Drop me an email or comment here if you're interested in meeting for a warm up at or just before 6pm.

Thursday, October 27, 2011

Tools and Links

Not long ago, I started a FOSS page for my blog, so I didn't have to keep going back and searching for various tools...if I find something valuable, I'll simply post it to this page and I won't have to keep looking for it.  You'll notice that I really don't have much in the way of descriptions posted yet, but that will come, and hopefully others will find it useful.  That doesn't mean the page is stagnant...not at all.  I'll be updating the page as time goes on.

Volatility
Melissa Augustine recently posted that she'd set up Volatility 2.0 on Windows, using this installation guide, and using the EXE for Distorm3 instead of the ZIP file.  Take a look, and as Melissa says, be sure to thoroughly read and follow the instructions for installing various plugins.  Thanks to Jamie Levy for providing such clear guidance/instructions, as I really think that doing so lowers the "cost of entry" for such a valuable tool.  Remember..."there're more things in heaven and earth than are dreamt of in your philosophy."  That is, performing memory analysis is a valuable skill to have, particularly when you have access to a memory dump, or to a live system from which you can dump memory.  Volatility also works with hibernation files, from which considerable information can be drawn.

WDE
Now and again, you may run across whole disk encryption, or encrypted volumes on a system.  I've seen these types of systems before...in some cases, the customer has simply asked for an image (knowing that the disk is encrypted) and in others, the only recourse we have to acquire a usable image for analysis is to log into the system as an Admin and perform a live acquisition.

TCHunt
ZeroView from Technology Pathways, to detect WDE (scroll down on the linked page)

You can also determine if the system had been used to access TrueCrypt or PGP volumes by checking the MountedDevices key in the Registry (this is something that I've covered in my books).  You can use the RegRipper mountdev.pl plugin to collect/display this information, either from a System hive extracted from a system, or from a live system that you've accessed via F-Response.
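As a rough illustration of what that check looks for, here's a hedged Python sketch.  The value names and binary data below are simulated; a real examination would pull the actual values from the MountedDevices key of a System hive (e.g., via the mountdev.pl plugin):

```python
def flag_encrypted_volumes(mounted_devices):
    """Flag MountedDevices values whose data suggests a TrueCrypt volume.

    mounted_devices: dict mapping value name -> raw binary value data,
    as extracted from the System hive's MountedDevices key.  TrueCrypt
    historically wrote a recognizable device string into the data for
    volumes it mounted.
    """
    flagged = []
    for name, data in mounted_devices.items():
        text = data.decode("utf-16-le", errors="ignore")
        if "TrueCrypt" in text:
            flagged.append(name)
    return flagged

# Simulated value data: a normal fixed disk (4-byte disk signature plus
# 8-byte partition offset) and a TrueCrypt-mounted volume.
sample = {
    r"\DosDevices\C:": b"\x12\x34\x56\x78\x00\x7e\x00\x00\x00\x00\x00\x00",
    r"\DosDevices\X:": "TrueCryptVolumeU".encode("utf-16-le"),
}
print(flag_encrypted_volumes(sample))   # ['\\DosDevices\\X:']
```

The point isn't the dozen lines of code...it's that the drive letter-to-device mapping persists in the Registry after the volume is dismounted, so the check works against a dead-box image as well as a live system.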

Timelines
David Hull gave a presentation on "Atemporal timeline analysis" at the recent SecTorCA conference (you can find the presentation .wmv files here), and posted an abridged version of the presentation to the SANS Forensic blog (blog post here).

When I saw the title, the first thing I thought was...what?  How do you talk about something independent of time in a presentation on timeline analysis?  Well, even David mentions at the beginning of the recorded presentation that it's akin to "asexual sexual reproduction"...so, the title is meant to be an oxymoron.  In short, what the title seems to refer to is performing timeline analysis during an incident when you don't have any sort of time reference from which to start your analysis.  This is sometimes the case...I've performed a number of exams having very little information from which to start my analysis, but finding something associated with the incident often leads me to the timeline, providing a significant level of context to the overall incident.

In this case, David said that the goal was to "find the attacker's code".  Overall, the recorded presentation is a very good example of how to perform analysis using fls and timelines based solely on file system metadata, and using tools such as grep to manipulate (as David mentions, "pivot on") the data.  In short, the SANS blog post doesn't really address the use of "atemporal" within the context of the timeline...you really need to watch the recorded presentation to see how that term applies.
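The "pivot" idea is easy enough to sketch in code.  This is just an illustration of the concept, not David's actual workflow; the field layout follows the TSK bodyfile format produced by fls -m, and the file names and timestamps below are made up:

```python
import datetime

def pivot(bodyfile_lines, term, window_seconds=300):
    """Pivot on a term in a bodyfile produced by fls -m.

    Find entries whose path contains the pivot term, then return every
    entry whose mtime falls within window_seconds of one of those hits,
    i.e., the file system activity clustered around the artifact.
    """
    rows = []
    for line in bodyfile_lines:
        f = line.strip().split("|")
        # bodyfile fields: MD5|name|inode|mode|uid|gid|size|atime|mtime|ctime|crtime
        rows.append((f[1], int(f[8])))
    anchors = [t for name, t in rows if term in name]
    hits = [(name, t) for name, t in rows
            if any(abs(t - a) <= window_seconds for a in anchors)]
    return sorted(hits, key=lambda r: r[1])

sample = [
    "0|/Windows/System32/evil.dll|1001|r/rrwxrwxrwx|0|0|4096|0|1320000000|0|0",
    "0|/Windows/Prefetch/EVIL.EXE-1A2B3C4D.pf|1002|r/rrwxrwxrwx|0|0|512|0|1320000120|0|0",
    "0|/Users/bob/Documents/notes.txt|1003|r/rrwxrwxrwx|0|0|128|0|1319000000|0|0",
]
for name, t in pivot(sample, "evil.dll"):
    ts = datetime.datetime.fromtimestamp(t, datetime.timezone.utc)
    print(ts.isoformat(), name)
```

The (made-up) Prefetch file lands in the window because it falls two minutes after the DLL's last-written time; everything else falls away, which is the whole point of pivoting...you go from one artifact to the cluster of activity around it.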

Sniper Forensics
Also, be sure to check out Chris Pogue's "Sniper Forensics v3.0: Hunt" presentation, which is also available for download via the same page.  There are a number of other presentations that would be very good to watch, as well...some talk about memory analysis.  The latest iteration of Chris's "Sniper Forensics" presentations (Chris is getting a lot of mileage from these things...) makes a very important point regarding analysis...in a lot of cases, an artifact appears to be relevant to a case based on the analyst's experience.  A lot of analysts find "interesting" artifacts, but many of these artifacts don't relate directly to the goals of their analysis.  Chris gives some good examples of an "expert eye"; in one slide, he shows an animal track.  Most folks might not even really care about that track, but to a hunter, or someone like me (I ride horses in a national park), the track tells me a great deal about what I can expect to see.

This applies directly to "Sniper Forensics"; all snipers are trained in observation.  Military snipers are trained to quickly identify military objects, and to look for things that are "different".  For example, snipers will be sent to observe a route of travel, and will recognize freshly turned earth or a pile of trash on that route when the sun comes up the next day...this might indicate an attempt to hide an explosive device.

How does this apply to digital forensic analysis?  Well, if you think about it, it is very applicable.  For example, let's say that you happen to notice that a DLL was modified on a system.  This may stand out as odd, in part because it's not something that you've seen a great deal of...so you create a timeline for analysis, and see that there wasn't a system or application update at that time. 

Much like a sniper, a digital forensic analyst must be focused.  A sniper observes an area in order to gain intelligence...enemy troop movements, civilian traffic through the area, etc.  Is the sniper concerned with the relative airspeed of an unladen swallow?  While that artifact may be "interesting", it's not pertinent to the sniper's goals.  The same holds true with the digital forensic analyst...you may find something "interesting" but how does that apply to your goals, or should you get your scope back on the target?

Data Breach 'Best Practices'
I ran across this article recently on the GovernmentHealthIT site, and while it talks about breach response best practices, I'd strongly suggest that all four of these steps need to be performed before a breach occurs.  After all, while the article specifies PII/PHI, regulatory and compliance organizations for those and other types of data (PCI) specifically state the need for an incident response plan (PCI DSS para 12.9 is just one example).

Item 1 is taking an inventory...I tell folks all the time that when I've done IR work, one of the first things I ask is, "Where is your critical data?"  Most folks don't know.  A few who did know also claimed (incorrectly) that it was encrypted at rest.  I've only been to one site where the location of sensitive data was known and documented prior to a breach, and that information not only helped our response analysis immensely, it also reduced the overall cost of the response (in fines, notification costs, etc.) for the customer.

While I agree with the sentiment of item 4 in the article (look at the breach as an opportunity), I do not agree with the rest of that item; i.e., "the opportunity to find all the vulnerabilities in an organization—and find the resources for fixing them." 

Media Stuff
Brian Krebs has long followed and written on the topic of cybercrime, and one of his recent posts is no exception.  I had a number of take-aways from this post that may not be intuitively obvious:

1.  "Password-stealing banking Trojans" is ambiguous, and could be any of a number of variants.  The "Zeus" (aka, Zbot) Trojan is mentioned later in the post, but there's no information presented to indicate that this was, in fact, a result of that specific malware.  Anyone who's done this kind of work for a while is aware that there are a number of malware variants that can be used to collect online banking credentials.

2.  Look at the victims mentioned in Brian's post...none of them is a big corporate entity.  Apparently, the bad guys are aware that smaller targets are less likely to have detection and response capabilities (*cough*CarbonBlack*cough*).  This, in turn, leads directly to #3...

3.  Nothing in the post indicates that a digital forensics investigation was done of systems at the victim location.  With no data preserved, no actual analysis was performed to identify the specific malware, and there's nothing on which law enforcement can build a case.

Finally, while the post doesn't specifically mention the use of Zeus at the beginning, it does end with a graphic showing detection rates of new variants of the Zeus Trojan over the previous 60 days; the average detection rate is below 40%.  While the graphic is informative, the real take-away is that detection rate...if new variants are detected less than half the time, you simply can't rely on AV alone to tell you whether a system is infected.

More Media Stuff
I read this article recently from InformationWeek that relates to the recent breach of NASDAQ systems; I specifically say "relates" to the breach, as the article specifies, "...two experts with knowledge of Nasdaq OMX Group's internal investigation said that while attackers hadn't directly attacked trading servers...".  The title of the article includes the words "3 Expected Findings", and the article is pretty much just speculation about what happened, from the get-go.  In fact, the article goes on to say, "...based on recent news reports, as well as likely attack scenarios, we'll likely see these three findings:".  That's a lot of "likely" in one sentence, and this much speculation is never a good thing.  


My concern is that the overall take-away is going to be "NASDAQ trading systems were hit with SQL injection", and folks are going to be looking for this sort of thing...and some will find it.  But others will miss what's really happening while they're looking in the wrong direction.

Other Items
F-Response TACTICAL Examiner for Linux now has a GUI
Lance Mueller has closed his blog; old posts will remain, but no new content will be posted

Thursday, October 20, 2011

Stuff in the Media

Now and again, I run across some interesting articles available through various media sources.  Back in the days when I was doing vulnerability assessments ('98-ish), we used to listen to what our contact said when we went onsite, and try to guess which magazines and journals he had open in his office...usually, we'd hear our contact using keywords from recent articles.

Terry Cutler, CTO of the Canadian firm Digital Locksmiths, had an interesting article published in SecurityWeek recently.  The article is titled, "You've been hacked.  Now what?", and provides a fictional...albeit realistic...description of what happens when an incident has been identified.  A lot of what is described in the article appears to have been pulled from either experience (IR is not listed as an available service on the company web site) or from "best practices".  For example, the article appears to assume that if a compromise occurs, corporate cell phones must be considered compromised as well (with respect to calls...email wasn't mentioned).

The article talks about not disconnecting systems, which in many cases is counter to what most victims of a compromise want to do right away.  However, I completely agree with this...unfortunately, the article doesn't expand beyond that statement to say what you should do.

Now, what I do NOT agree with is the statement in the article that you should "get help from an ethical hacker".  First off, given the modern usage of the term "hacker", the phrase "ethical hacker" is an oxymoron...like "jumbo shrimp".  While I do agree that some of the folks performing "ethical hacking" are good at getting into your network (as stated in the article, "Ethical hackers are experts at breaking into your system the same way a hacker will."), I don't agree that this necessarily makes them experts at protecting networks or, more importantly, at scoping the incident and determining where the attack came from.

In the years that I have been an incident responder, the one thing that consistently makes me cringe is when I hear someone say, "...if I were the hacker, this is what I would have done."  Folks, where that thinking takes you can be irrelevant, or worse, can send your responders chasing way down rabbit holes.  Think CSI, and go where the evidence takes you.  I've seen instances where the intruder had no idea what organization he'd compromised and simply meandered about, leaving copious and prolific artifacts of his activity on all systems he touched.  I've also seen SQL injection attacks where, once in, the intruder was very focused in what they were looking for.  Sometimes, it's not so much about the corporate assets as it is loading keystroke loggers on user systems in order to harvest online banking credentials.

What you should be doing is collecting data and following the evidence, using the information you've collected to make educated, reasoned determinations as to where the intruder is going and what they are doing.  Do not make the assumption that you can intuit the attacker's intentions...you may never know what these are, and you may chase down rabbit holes that lead nowhere.  Instead, focus on what the data is telling you.  Is the intruder going after the database server?  Were they successful?

The best way to go about establishing an organic capability for this sort of work (at least, for tier 1 and/or 2 response) is to establish a relationship with a trusted adviser, someone who has experience in incident response and digital forensics, and can guide you through the steps to building that organic capability for immediate response.

At this point, you're probably wondering what I mean by "organic", and why "immediate response" is something that seems so necessary.  Well, consider what happens during a "normal" incident response; the "victim" organization gets notified of the incident (usually by an external third party), someone is contacted about providing response services, contract negotiations occur, and then at some time in the future, responders arrive and start to learn about your infrastructure so that they can begin collecting data.

The way this should be occurring is that data collection begins immediately, with incident identification as the trigger...if this doesn't happen, critical data is lost and unrecoverable.  The only way to do this is to have someone onsite trained in how to perform the data collection.


A lot of local IT staff look at consultants as the "experts" in data collection, and very often don't realize that before collecting data, those "experts" ask a LOT of questions.  Most often, the consultants called onsite to provide IR capabilities are, while knowledgeable, not experts at networking, and they are definitely not experts in YOUR infrastructure and environment.

I'm not even talking about getting to prosecution at this point...all I'm talking about is that the data necessary to determine what happened, and what data may have been compromised, decays quickly, and if steps are not taken to immediately collect and preserve this data, there very likely will be a significant detrimental impact on the organization.  The only reason this isn't being done is that onsite IT staff don't have the training.  So, work with that trusted adviser and develop a process and a means for collecting the necessary data, and documenting it all.

Going back to the SecurityWeek article, I completely agree...don't disconnect the system as your first act.  Instead, have the necessary tools in place and your folks trained in what to do...for example, collect the contents of physical memory first, and then do what you need to do.  This may be to disconnect the system from the network (leaving it powered on), or making an emergency modification to a switch or firewall rule in order to isolate the system in another manner.  If the system is boot-from-SAN, you may also want to (for example) have a means in place for acquiring an image of the system before shutting it down.  Regardless of what needs to be done, be sure that you have a documented process for doing it, one that allows for pertinent data, as well as business processes, to be preserved.

Ever wondered, during an incident, what kind of person (or people) you're working against?  This eWeek article indicates that the impression that hackers are isolated, socially-inept "lone wolf" types is incorrect; in fact, according to the article, "hackers" are very social, sharing exploits, techniques and even providing tutorials.  Given this, is it any wonder why folks on the other side of the fence are constantly promoting sharing?  The bad guys do it because it makes sense, and makes them better...so why aren't we doing more of it?

Wednesday, October 19, 2011

Links, Updates, and WhatNot

Malware
Evild3ad has an excellent writeup of the Federal (aka, R2D2) Trojan via memory analysis using Volatility.  The blog post gives a detailed walk-through of the analysis conducted, as well as the findings.  Overall, my three big take-aways:

1.  An excellent example of how to use Volatility to conduct memory analysis.
2.  An excellent example of case notes.
3.  Detailed information that can be used to create a plugin for either RegRipper, or a forensic scanner.

There is also a link to a Rar archive containing the memory image at the site, so you can download it and try running the commands listed in the blog post against the same data.

M-Trends
The Mandiant M-Trends 2011 report is available...I received a copy yesterday and started looking through it.  Very interesting information in the report...as a host-based analysis guy, I found some of the information on persistence mechanisms (starting on pg 11 of the report) to be very interesting.  Some may look at the use of Windows Services and the ubiquitous Run key as passe, but the fact is that these persistence mechanisms work.  After all, when the threat actors compromise an infrastructure, they are not trying to remain hidden from knowledgeable and experienced incident responders.

Interestingly, the report includes a side note that the authors expect to see more DLL Search Order Hijacking used as a persistence mechanism in the future.  I tend to agree with the statement in the report, given that (again, as stated in the report) this is an effective technique that is difficult to detect.
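One way an analyst might hunt for this on a mounted image is to look for DLLs sitting in an application's directory that shadow a DLL of the same name in System32.  The Python below is a hedged sketch of that idea; the file listings and the "commonly hijacked" names are illustrative, and a real check would also compare hashes and digital signatures against known-good copies:

```python
# DLL names that have historically been targeted because applications
# load them by name, and Windows searches the application's directory
# before System32 (illustrative list, not exhaustive).
COMMONLY_HIJACKED = {"ntshrui.dll", "dwmapi.dll", "version.dll", "cryptbase.dll"}

def suspicious_dlls(app_dir_listing, system32_listing):
    """Flag DLLs in an application directory that shadow a DLL of the
    same name in System32 -- a classic search-order-hijack indicator."""
    app = {f.lower() for f in app_dir_listing if f.lower().endswith(".dll")}
    sys32 = {f.lower() for f in system32_listing}
    return sorted((app & sys32) | (app & COMMONLY_HIJACKED))

# Simulated directory listings pulled from a mounted image.
app_files = ["explorer.exe", "ntshrui.dll", "readme.txt"]
sys_files = ["kernel32.dll", "ntshrui.dll", "dwmapi.dll"]
print(suspicious_dlls(app_files, sys_files))   # ['ntshrui.dll']
```

A hit from a check like this isn't proof of compromise...legitimate software does redistribute DLLs...but it gives you a short list of files worth hashing and examining, which beats eyeballing every directory on the system.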

Another interesting persistence mechanism described in the report was services.exe being modified (without changing the size of the binary) to point to an additional (and malicious) DLL.  This technique has been seen being used with other binaries, including other DLLs. 

A major section (section III) of the report discusses visibility across the enterprise; I think that this is an extremely important issue.  As I've performed incident response over the years, a common factor across most (if not all) of the incidents I've responded to has been a lack of any infrastructure visibility whatsoever.  This has been true not only for initial visibility into what goes on on the network and hosts, but it has also affected response capabilities.  Network- and host-based visibility of some kind needs to be achieved by all organizations, regardless of size, etc.  I mean, think about it...any organization that produces something has some sort of visibility into processes that are critical to the business, right?  A company that manufactures widgets has controls in place to ensure that the widgets are produced correctly, and that they're shipped...right?  I mean, wouldn't someone notice if trucks weren't leaving the loading docks?  So why not have some sort of visibility into the medium where your critical information assets are stored and processed?

Looking at the information provided in the M-Trends report (as well as other reports available from other organizations), I can see the beginning of an argument for incident preparation being built up; that is to say that while the report may not specifically highlight this (the M-Trends report mentions the need for "...developing effective threat detection and response capabilities..."), it's clear that the need for incident preparation has existed for some time, and will continue to be an issue.

Addendum: Pg 13 of the M-Trends report mentions some "interesting" persistence mechanisms being used, one of which is "use of COM objects"; however, the rest of the report doesn't provide much of a description of this mechanism.  Well, I ran across this post on the ACROS Security Blog that provides some very good insight into using COM objects for persistence.  Both attacks described are something of a combination of the use of COM objects and DLL Search Order hijacking, and are very interesting.  As such, there need to be tools, processes, and education of analysts in these techniques so that they can be recognized, or at least discovered through analysis.  I would suggest that these techniques have been used for some time...it's simply that most of us may not have known to (or "how to") look for them.

Resources
Verizon DBIR
TrustWave GSR

Incident Preparation
I recently gave a talk on incident preparation at ETCSS, and overall, I think it was well received.  I used a couple of examples to get my point across...boxing, fires in homes...and as the gears have continued to turn, I've thought of another, although it may not be as immediately applicable or understandable for a great many folks out there.

Having been a Marine, and knowing a number of manager- and director-types that come from prior military experience, I thought that the USS Cole would be a great example of incident preparation.  The USS Cole was subject to a bombing attack on 12 October 2000, and there were 56 casualties, 17 of which were fatalities.  The ship was struck by a bomb amidships, and a massive hole was torn in her side, part of which was below the waterline.  However, the ship did not sink.

By contrast, consider the RMS Titanic.  On 15 April 1912, the Titanic struck an iceberg and shortly thereafter, sank.  According to some sources, a total of six compartments were opened to the sea; however, the design of the Titanic was for the ship to remain afloat with only the first four compartments opened to the sea.  As the weight of the water pulled the ship down, more water was allowed to flood the ship, which quickly led to her sinking.

So, what does this have to do with incident preparation and response?  Both ships were designed with incidents in mind; i.e., it was clear that the designers were aware that incidents, of some kind, would occur.  The USS Cole had some advantages; better design due to a better understanding of threats and risk, a better damage control team, etc.  We can apply this thinking to our current approach to infrastructure design and assessments.

How would the USS Cole have fared if, at the time of the bombing, she had not had damage control teams and sailors trained in medical response and ship protection?  What would have happened, do you think, if they'd instead done nothing, and gone searching for someone to call for help?

My point in all this goes right back to my presentation; who is better prepared to respond to an incident - the current IT staff on-site, who live and work in that environment every day, or a consultant who has no idea what your infrastructure looks like?

Determining Quality
Not long ago, I discussed competitive advantage and how it could be achieved, and that got me to thinking...when a deliverable is sent to a customer of DFIR services, how do they (the customer) judge or determine the quality of the work performed?

Over the years, I've had those engagements where a customer says, "this system is infected", but when asked for specifics regarding why they think it was infected, or what led them to think so, they most often don't have anything concrete to point to.  I'll go through, perform the work based on a malware detection checklist, and very often come up with nothing.  I submit a report detailing my work activity and findings, which leads to my conclusion of "no malware found", and I simply don't hear back.

Consulting is a bit different from the work done in LE circles...many times, the work you do is going to be reviewed by someone.  The prosecution may review it, looking for information that can be used to support their argument, and the defense may review it, possibly to shoot holes in your work.  This doesn't mean that there's any reason to do the work or reporting any differently...it's simply a difference in the environments.

So, how does a customer (of consulting work) determine the quality of the work, particularly when they've just spent considerable money, only to get an answer that contradicts their original supposition?  When they receive a report, how do they know that their money has been well-spent, or that the results are valid?  For example, I use a checklist with a number of steps, but when I provide a report that states that I found no indication of malware on the system, what's the difference between that and another analyst who simply mounted the image as a volume and scanned it with an AV product?

Attacks
If you haven't yet, you should really consider checking out Corey's Linkz about Attacks post, as it provides some very good information regarding how some attacks are conducted.  Corey also provides summaries of some of the information, specifically pointing out artifacts of attacks.  Most of them are Java-based, similar to Corey's exploit artifact posts.

This post dovetails off of a comment that Corey left on one of my posts...

I've seen and hear comments from others about how it's difficult (if not impossible) and time consuming to determine how malware ended up on the system.

Very often, this seems to be the case.  The attack or initial infection vector is not determined, as it is deemed too difficult or time consuming to do so.  There are times when determining the initial infection vector may be extremely difficult, such as when the incident is months old and steps have been taken (either by the attacker or local IT admins) to clean up the indicators of compromise (IoCs).  However, I think that the work Corey has been doing (and providing the results of publicly) will go a long way toward helping analysts narrow down the initial infection vector, particularly those who create detailed timelines of system activity.

Consulting
Hal Pomeranz has an excellent series of posts regarding consulting and issues that you're likely to run into and have to address if you go out on your own.  Take a look at part 1, 2, 3, and 4.  Hal's provided a lot of great insight, all of which comes from experience...which is the best teacher!  He also gives you an opportunity to learn from his mistakes, rather than your own...so if you're thinking about going this route, take a look at his posts.

Friday, October 14, 2011

Links

Carbon Black
I recently gave a presentation at ETCSS, during which we discussed the need for incident preparedness in order to improve the effect of incident response efforts.  In that presentation, I mentioned and described Carbon Black (Cb), as well as how it can be used in other ways besides IR.

While I was traveling to the venue, Cb Enterprise was released.  Folks, if you don't know what Carbon Black is, you really should take a look at it.  If you use computers in any capacity beyond simply sitting at a keyboard at your house...if you're a dentist's office, hospital, law firm, or a national/global business...you need to take a good hard look at Cb.  Cb is a small, light-weight sensor that monitors execution on a system...remember Jesse Kornblum's Rootkit Paradox paper?  The paradox of rootkits is that they want to hide, but they must run...the same is true with any malware.  Cb monitors program execution on Windows systems.  The guys at Cb have some great examples of how they've tracked down a three-stage browser drive-by infection in minutes, where it may have taken an examiner doing just disk forensics days to locate the issue.

If you have and use computers, or you have customers who do, you should really take a hard look at Cb and consider deploying it.  Seriously...check out the site, give the Kyrus Tech guys a call, and take a good hard look at what Cb can do for you.  I honestly believe that Cb is a game changer, and the Kyrus Tech guys have demonstrated that it is, indeed, a game changer, but not just for IR work.

Timeliner
Jamie Levy has posted documentation and plugins for her OMFW talk (from last July) regarding extracting timeline data from a memory dump using the Volatility framework.  This is a great set of plugins for a great memory analysis framework, folks.  What's really cool is that with a little bit of programming effort,  you can modify the output format of the plugins to meet your needs, as well.  A greatbighuge THANKS to Jamie for providing these plugins, and for the entire Volatility team/community for a great memory analysis framework.

Exploit Artifacts
Speaking of timelines...Corey has posted yet another analysis of exploit artifacts, this one regarding a signed Java applet. This is a great project that Corey works on, and a fantastic service that he's providing.  Using available tools (e.g., MetaSploit), he compromises a system, and then uses available tools and techniques (e.g., timeline analysis) to demonstrate what the artifacts of the exploit "look like" from the perspective of disk analysis.  Corey's write-up is clear and concise, and to be honest, this is what your case notes and reports should look like...not exactly, of course, but there are a lot of folks that use "...I don't know what standard to write to..." as an excuse to not do anything.  Look at what Corey's done here...don't you think that there's enough information to replicate what he did?  Does that work as a standard?

Also, take a look at the technique Corey used for investigating this issue...rather than posting a question online, he took steps to investigate the issue himself.  Rather than starting with an acquired image and a question (as is often the case during an exam), he started with just a question, and set out to determine an answer.  Information like this can be extremely valuable, particularly when it comes to determining things such as the initial infection vector of malware or a bad guy, and a good deal of what he's provided can be added to an exam checklist or a plugin for a forensic scanner.  I know that I'm going to continue to look for these artifacts...a greatbighuge THANKS to Corey, not just for doing this sort of work, but for posting his results, as well.

DFF
DFF 1.2 is available for download.  Take a look at this for a list of the updates; check out batch mode.  Sorry, I don't have more to write...I just haven't had a chance to dig into it yet.

Community
One of the things I see a great deal of, whether it's browsing the lists or reading questions that appear in my inbox, is that when asking questions regarding forensic analysis, many of us still aren't providing any indication of the operating system that we're analyzing.  Whether it's an application question (P2P, FrostWire, etc.) or a question about MFT entries, many of us are still asking questions without identifying the OS, and if it's Windows, the version.

Is this important at all?  I would suggest that yes, it is.  The other presentation I gave at ETCSS (see the Carbon Black entry above) was titled What's new in Windows 7: An analyst's perspective.  During this presentation, we discussed a number of differences, specifically between Windows XP and Win7, but also between Vista and Win7.  Believe it or not, the version of Windows does matter...for example, Windows 2003 and 2008 do not, by default, perform application prefetching (although they can be configured to do so).  With Windows XP, the searches a user executed from the desktop were recorded in the ACMru key; with Vista, the searches were NOT recorded in a Registry key (they were/are maintained in a file); with Windows 7, the search terms are maintained in the WordWheelQuery key.

Still not convinced?  Try analyzing a Windows 7 memory dump with Volatility, but don't use the Windows 7 profile.  

So, if you're asking a question that has to do with file access times, then the version of Windows is very important...because as of Vista, by default, updating of last access times on files is disabled.  This functionality can be controlled by a Registry value, which means that this functionality can also be disabled on Windows XP systems.

I also see a number of questions referring to various applications, many of which are specific to P2P applications.  Different applications behave differently...so saying, "I'm doing a P2P investigation" doesn't really provide much information if you're looking for assistance.  I mean, who's going to write an encyclopedic if/then loop with all of the possibilities?  Not only is the particular application important, but so is the version...for the same reasons that the OS version is important.  I've dealt with older versions of applications, and what those applications do, or are capable of doing, can be very important to an investigation...that is, unless you're planning to fill in the gaps in your investigation with speculation.

In short, if you've got a question about something, be sure to provide relevant background information regarding what you're looking at...it can go a long way toward helping someone answer that question and provide you with assistance.


Tools
I've started a new page for my blog, listing the FOSS forensic tools that I find, come across, get pointed to, and use.  It's a start...I have a good deal of catching up to do.  I've started listing the tools, and provided some descriptions...I'll be updating the tools and descriptions as time goes on.  This is mostly a place for me to post tools and frameworks so that I don't have to keep going back and searching through my blog for something, but feel free to stop by and take a look, or email me a tool that you like to use, or site with several tools.

Endorsements
One final thing...and this is for Mr. Anonymous, who likes to leave comments to some of my blog posts...I get no benefit, monetarily or otherwise, for my comments or endorsement of Volatility, nor for DFF...or any other tool (FOSS or otherwise) for that matter.  I know that in the past, you've stated that you "...want to make sure that it is done with the right intentions".  Although you've never explicitly stated what those intentions are, I just wanted to be up front and clear...I have used these tools, and I see others discovering great benefit from them, as well...as such, I think that it's a great idea to endorse them as widely as possible, so that others don't just see the web site, but also see how they can benefit from using these tools.  I hope that helps.

Thursday, October 06, 2011

NoVA Forensic Meetup

Last night's meetup went very well!  I'd like to thank Brian Rydstrom for providing a very good presentation on Mobile Forensics...I don't do any forensics of mobile devices, so I found the information very valuable.

I'd also like to thank everyone who showed up last night.  Attendance was very good...we had about 28 people show up, and a lot of interaction and questions.  Per usual, we had a couple of core regulars, as well some new folks who took time out to stop by.

So, Mitch Harris has graciously offered to provide part 2 of his botnets presentation ("Botnets 201") next month (Nov), and Sam Brothers is still on-board to provide December's presentation on "Mobile Forensics".  We also had a request for a presentation on SSD forensics, as well as someone who offered to give such a presentation early next year (TBD).  I did find this blog post that discusses SSDs.

We had a couple of additional requests last night, as well.  One was for something a bit more hands-on...I'm sure that we could do something like that.  Brian offered to set up a LinkedIn group for the meetup, so that folks could see a bit more about the professional backgrounds of the other attendees.  We're also looking for something more stable for providing announcements and copies of presentations...seems that Yahoo groups aren't for everyone.

The other request was for something along the lines of "gorilla forensics", or perhaps more appropriately "Sniper Forensics".  I don't think I want to steal Chris's thunder (not that I could if I tried...), but maybe we can come up with something along the lines of "the essentials of DF investigations".  I think that this would end up being an interesting discussion, particularly when it comes to the topic of maintaining case notes.

Again, thanks to everyone who was able to make it last night, and thanks to the ReverseSpace guys for hosting us.

Wednesday, October 05, 2011

Forensic Scanner

With the manuscript for WFA 3/e submitted, I have time now to focus on other projects (while I wait for the proofs to review), including the next step for or next generation of RegRipper, which is something I call the "forensic scanner"...for now, anyway, until I come up with a really cool name for it.  All new projects need a cool name, right?

As I work on developing this project, I wanted to share some of the thoughts behind it, in part to see if they make sense, but also to show the direction of the project.

Why?
Why have a "forensic scanner" at all?  That's a great question.  Other areas of information security have scanners...when I started my infosec career after leaving the military, I used ISS's Internet Scanner.  Consider Nessus, which was the inspiration behind the design for RegRipper.  And there are others...the idea being that once you've discovered some artifact or "check", you can automate that check for future use, without having to memorize the specifics.  After all, by creating a "plugin" for the check, you're documenting it.  Another strength of something like this is that one analyst can create a check, documenting the plugin, and provide it to others, sharing that information so that those other analysts don't have to have the same experiences.  This way, a team can focus on analysis and benefit from the analysis performed by others.

Look at it this way...do you want to do the same checks that you always do for malware, manually?  Let's say that a customer believes that a system is infected with Zeus/ZBot...do you want to manually check for sdra64.exe every time?  What about the persistence mechanism?  What if the persistence mechanism is the same, but the file name has changed?  What if you could automate your entire malware detection checklist?  Most forensic analysts are familiar with the detection rates of AV products, and sometimes it's a matter of looking not for the malware itself, but rather the effects that the malware had on its ecosystem...what if you could automate that?

Purpose
The purpose of the forensic scanner is not to replace anything that's already out there, but instead to augment what's currently available.  For instance, the Digital Forensics Framework (DFF) was recently updated to version 1.2 and includes some great features, none of which the forensic scanner includes (i.e., search, etc.).  The forensic scanner is a targeted tool with a specific purpose, and not a general analysis framework.  Instead, much like other scanners (Nessus, ISS's Internet Scanner, etc.), the forensic scanner is intended to fill a gap; using frameworks and applications (ProDiscover, etc.), analysts will find artifacts and indicators of compromise, and then document them as plugins as a means of automation.  Then, whenever the scanner is run against an acquired image, checks for those artifacts, as well as processing and even inclusion of references, are run automatically.  This is intended to let the analyst get to analysis quickly, by running checks that have already been discovered.  Learn it once, document it, run it every time.

Scanner Attributes
Here are some of the forensic scanner attributes that I've come up with:

Flexibility - From the beginning, I wanted the scanner to be flexible, so I designed it to be run against a mounted volume.  You're probably wondering, "how is this flexible?"  Well, how can you mount a volume, particularly in read-only mode?  You can convert a raw/dd image to a .vhd file (using vhdtool.exe, or the VirtualBox convertfromraw command), and mount that .vhd file read-only via the Disk Management tool.  You can use FTK Imager, ImDisk, or another image mounting tool.  You can also connect to a remote system via F-Response and run the scanner.  You can mount an image by converting it to a .vmdk file, and mounting it as an independent, non-persistent hard drive.  Using either the .vhd or .vmdk methods, you can also mount VSCs as volumes and scan those; as with RegRipper, a CLI engine for the scanner can be included in a batch file to automate the scans.

Even though I'm writing the plugins that I'm using for Windows, there's nothing that really restricts me to that  platform.  The scanner is written in Perl, and can be run from Linux or MacOS X (the GUI would require the appropriate modules, of course...) and run against pretty much any mounted volume.

Force Multiplier - One of the things I really like about RegRipper is the ability to write my own plugins.  So, let's say I find something of interest...I write a plugin for it.  I can (and do) include appropriate references (i.e., to malware write-ups, MS KB articles, etc.) in the comments of the plugin, or even have those spit out in the output.  I can even add an explanation to the plugin itself, in the comments, describing the reasoning behind the plugin, why it was written, and how to use or interpret the output.  That plugin then persists, along with the documentation.  This plugin can then be shared amongst analysts, increasing their capability while eliminating their need to go through the same analysis I did.  So, let's say I find something that I'd never seen before, and it took me 10 hrs of dedicated analysis to find it.  If there are 5 other analysts on my team (and we're all of approximately equal skill levels), and I share that plugin with all of them, then I've just added to their capability and saved the team 50 hrs of dedicated work.

This section could also be referred to as Preservation of Corporate Knowledge or Competitive Advantage, depending on who you are.  For example, both LE and private industry consultants benefit from retaining corporate knowledge; also, LE would greatly benefit from any plugins shared by private industry.

Knowledge Retention
Within the private sector, the information security industry can be fluid.  Analysts have changes in their lives, or develop new skills (or want to), and move on.  Having a means of documenting and retaining their experiences within the environment can be valuable; having a means of incorporating that knowledge directly into the field can be critical.  It's one thing for an analyst to talk about something they found, or write a white paper...it's something else entirely to have a forensic analyst write a dozen or so plugins throughout their tenure and have those available for use, by all of the other analysts, well after he or she has left.

LE experiences something similar; many times, an analyst receives training, works on some cases, and is then off to do other LE things.  And often, their wealth of knowledge leaves with them.  With a framework such as the forensic scanner, not only is an individual analyst's knowledge retained, but it can be provided to other analysts, even ones who haven't yet been hired or completely trained.

Competitive Advantage is usually associated with private industry consulting firms, but I'm sure that you can see how this would apply.  Any analyst who finds something and documents it through a plugin can then share that plugin with others...100% capability for 0 experience; the time-to-market for the capability is pretty much as long as it takes to open an email and extract the plugins from an attached archive.  Ideally, you'd want to have an "armorer", like a lab tech or analyst who either gets information from other analysts and writes and tests the plugins, or receives the plugins and tests them before rolling them out.  The approved plugins can be placed in an archive and emailed to analysts, or you can use some form of distribution mechanism that each analyst initiates.

Self-Documenting - The forensic scanner has an interesting feature that I'm carrying over from RegRipper - when running, it produces an activity log, collecting information about the scanned volume and tracking the plugins that were run.  So, not only will your output make it clear what the results of the scan were, but the activity log can tell you exactly which versions of the plugins had been run; if there's a plugin that wasn't run, or an updated version of a plugin comes out, you can simply re-run the scan. 
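As a rough sketch of what I mean (the structure here is hypothetical and in Python...the actual engine differs), an engine that records volume information and each plugin's name and version as it runs might look like this:

```python
import time

def run_scan(volume, plugins):
    """Run each plugin against the volume, keeping an activity log of
    exactly which plugins (and versions) were run, and when."""
    log = ["scan of %s started %s" % (volume, time.strftime("%Y-%m-%d %H:%M:%S"))]
    results = {}
    for p in plugins:
        log.append("ran plugin %s v%s" % (p["name"], p["version"]))
        results[p["name"]] = p["func"](volume)
    log.append("scan complete")
    return results, log
```

A year later, the log answers "which version of which check did you run?" without anyone having to remember.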

This information can also be used from a debugging standpoint.  If something didn't work as planned, why was that?  How can the process be improved?

Common Format - One of the things we're all familiar with is that there are a number of tools out there that parse information for us, but these tools all have different output formats, and it can be a very manual process to work through Prefetch files, Jump Lists, etc., and convert all of that information into a common output format.  Even if we get several tools from the same site and author, and we can format the output in .csv or .xml, we still have to run the tools, and manage the output.  Using the scanner, the plugins will handle the output format.  I can write one plugin, and have .csv output...then modify the output in another version of the plugin to .tln output, and include each plugin in the appropriate scanner profile.
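For example (a sketch...the TLN layout shown is the five-field time|source|host|user|description format commonly used for timelines), the same event can be rendered by two versions of the same plugin:

```python
# One event dict, two output formats: CSV for reports, TLN for timelines.
def to_csv(event):
    return "{time},{source},{host},{user},{desc}".format(**event)

def to_tln(event):
    # TLN: time (Unix epoch) | source | host | user | description
    return "{time}|{source}|{host}|{user}|{desc}".format(**event)
```

The analyst picks the profile; the plugins handle the formatting.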

Plugins
When scanning a mounted volume, the engine exports a number of variables that you can use to tailor your plugins; however, as the target is a mounted volume, there is no proprietary API that you need to learn.  Want to get a list of files in a directory?  Use the standard opendir(), readdir(), and closedir() functions that ship with Perl.  What this means is that learning to write plugins is as easy as learning to program in Perl, and if you don't know (or want to learn) how to program in Perl, that's okay...find someone who does and buy them a beer.

The plugins can also be flexible, ranging from the broad to the narrowly-focused.  An example of a broad plugin might be one that scans the Windows\Temp (or the user's Temp) folder for PE files.  I know how tedious something like that can be...particularly with a reprovisioned system that has a dozen or more user accounts on it...but how would you like to have a report of all of the .tmp files in all of the user's Temp folders that are actually PE files?

A plugin that's a bit more tactical might be one that looks for a specific file, such as ntshrui.dll in the C:\Windows directory.  The "strategic" variant of that plugin might be one to list all of the DLLs in the Windows directory.

However, plugins don't have to be just Perl; using Perl functions, you can also create plugins to run external commands.  For example, you can use strings and find to parse through the pagefile, and retain the output.  Or you can run sigcheck.  Using Perl functions that allow you to launch external commands, you can automate running (and processing/parsing the output of) external commands against the mounted volume.
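The wrapper for an external command can be very thin (a sketch...the actual commands run against a mounted volume would be tools like strings, find, or sigcheck, substituted in for whatever is shown here):

```python
import subprocess

def run_external(cmd, args):
    """Run an external command and return its stdout as text, so the
    plugin can parse or archive the output like any other result."""
    proc = subprocess.run([cmd] + list(args), capture_output=True, text=True)
    return proc.stdout
```

The plugin then owns both the execution and the parsing of the output, so the results land in the same report as everything else.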

Deploying the Scanner
I alluded to some of the deployment scenarios for the scanner earlier in this post, but I'll reiterate some of them here because I think they're important.

When I was on the IBM response team (and the ISS team before that), each responder had two MacBooks in our jump kits, as well as a Mac Server in our office; lots of horsepower, with a reduced form factor and weight (over the comparable Dell Latitudes).  I opted to primarily run Windows, as I wanted to be as familiar as possible with the most predominant platform that we encountered.  Our team was also geographically dispersed.  So how would something like the scanner be deployed in such an environment?

Now, if we had a central intake point, such as a lab where images were received and processed (image file system verified, documented, and a working copy made to a storage facility) by a lab tech, the scanner could be deployed to and maintained by the lab tech.  Once an image was processed, the working copy could be scanned, and the analyst could VPN into the lab, fire up the appropriate analysis VM, and review the output report from the scanner.

What's coming?
Recently on Twitter, Ken Johnson (@Patories) pointed out an artifact that he'd found on the Windows 8 dev build, likely associated with IE 10...a key named "TypedURLsTime".  The data for each of the listed values is a FILETIME object...when the time comes that Win8 is seen on the desktop, this will likely be a very useful artifact to include in a plugin.

So, let me ask you this...who's going to remember that when Windows 8 actually hits the streets?  I'm running the dev build of Windows 8 in a VirtualBox VM, as a .vhd file; anyone doing so can easily mount the .vhd (read-only) on their Windows 7 system, write a plugin for the artifact, and there it is...documented. 
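For reference, a FILETIME value is a 64-bit count of 100-nanosecond intervals since 1 January 1601 (UTC), so converting one to Unix epoch time for a timeline is a one-liner...the kind of detail a plugin documents so no one has to remember it:

```python
# Seconds between the FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01)
EPOCH_DIFF = 11644473600

def filetime_to_unix(ft):
    """Convert a 64-bit FILETIME value to Unix epoch seconds."""
    return ft // 10_000_000 - EPOCH_DIFF
```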

Saturday, October 01, 2011

Documentation

If you didn't document it, it didn't happen.

When I first heard that, it had nothing to do with DFIR work, but it holds true nonetheless.

How often does this happen?  You're working on a school or self-imposed project, and you run across an issue, so you go online to ask a question of the communal brain trust (Twitter, Forensic Focus, CFID mailing list, etc.).  Within short order, you start getting queries...which OS/version of Windows, which application, where did you get the file (path), etc.  By the time you return to the online world, you can't remember any of this, and now have to start over.  However, had you kept case notes or documentation of some kind, this wouldn't be an issue.

So the questions I usually see/hear at this point are how do I keep case notes and to what standard do I keep case notes?  The how is easy...use what works.  When I was on the IBM team, we used the QCC Forensic CaseNotes tool.  This is a very good tool to use, and includes a lot of functionality.  However, it's sometimes simply easier to use MS Word, and create the necessary sections.  I usually create a section for Exhibits (what items I had received, often in a table if there were more than 2 or 3 items), as well as one for Hours (again, sometimes in a table).  When it comes to the actual notes, these are most often a narrative of what I actually did, broken down by day.

You can create other sections, as well.  Bill over at Unchained Forensics recently posted about having a case outline or preparation plan.  I usually have an analysis plan in mind, and if it's for work that I don't already have a checklist for, I write one.  I think that Chris has even talked about having an analysis plan documented.

So, to what standard do you keep case notes?  Most often, I'll say, "...so that you can come back a year later and know what you did."  Too often, however, this provides a lazy analyst with an easy out, because from their perspective, what are the chances that in a year, someone's going to come back and ask them a question?  Well, you don't know until it happens...and it does happen.  The best standard to use when writing your case notes is to assume that at any point, you could "get hit by a bus" and another analyst would have to take your notes and finish the exam.  As such, are your case notes written to a level where another analyst could run the same commands, using the same versions of the tools you used, and replicate your results?  So, in your case notes, do you say, "Checked for ADSs", or do you say, "Mounted image with FTK Imager v3.0 as G:\ volume, scanned for ADSs using LADS v4.0"?  This is important...remember MHL's post on stealth ADSs?  There are more things in heaven and earth than are dreamt of in your philosophy, Horatio...so the tool you use will make a difference, and you might want to consider using the tool that MHL provided.

On that note, consider this...what do your case notes say?  If you do PCI work, do your notes say, "Ran CCN search"?  Is that adequate?  How was that search run, and over what data?  Did you load the image into EnCase?  If so, which version?  And yes, the version of EnCase you're using DOES matter.  Was your search run using a specific EnScript, and was that a publicly available EnScript or one crafted specifically by/for your team?  Or did you extract the unallocated space from the image using blkls from the TSK tools, and run a series of regexes over the data?

All of this is important because the number of CCNs that may have been exposed are extremely important to the merchant as well as the banks; as such, accuracy is critical, and one way to ensure accuracy is to be able to replicate your findings.
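To illustrate what might sit behind "ran a CCN search" (a simplified sketch...a real search would cover more card number lengths, delimiters, and track data formats), consider a candidate regex plus a Luhn check to weed out false positives:

```python
import re

def luhn_ok(digits):
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:   # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def find_ccns(text):
    """Find candidate 16-digit numbers and keep only Luhn-valid ones."""
    return [m for m in re.findall(r"\b\d{16}\b", text) if luhn_ok(m)]
```

Whether your search did this, or something more (or less), is exactly the kind of detail the case notes need to capture, because it changes the count.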

Start with a Process
An efficient way to maintain case notes is to have documented processes already in place.  For example, if you're tasked with detecting malware within an acquired image (no memory dump available), do you have a documented process for doing this?  If so, you can say "followed documented malware detection process" and provide the version number or date, as well as the completed checklist itself.  That documented process can be a separate document in your case notes directory, and all you would need to include is any additional actions you took, or anything you decided to leave out, including your justification (e.g., "Did not run a search for ADSs, as the file system was FAT.").
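A documented process like this can even be expressed as data, so the completed checklist, including skipped steps and the justification for skipping them, falls out automatically.  A minimal sketch, with a hypothetical checklist of my own (your actual process document would have far more steps):

```python
from datetime import date

# Hypothetical malware-detection checklist: each step is a name plus
# a predicate saying whether it applies to the file system in hand.
CHECKLIST = [
    ("Scan for ADSs", lambda fs: fs == "NTFS"),
    ("Check autostart locations", lambda fs: True),
    ("Hash and triage binaries", lambda fs: True),
]

def run_checklist(fs_type, version="2011-10-30"):
    """Return completed-checklist lines suitable for inclusion in the
    case notes, recording skipped steps and why."""
    lines = [f"Malware detection process, version {version}, run {date.today()}"]
    for name, applies in CHECKLIST:
        if applies(fs_type):
            lines.append(f"[x] {name}")
        else:
            lines.append(f"[skipped] {name} (not applicable to {fs_type})")
    return lines
```

Run against a FAT image, the ADS step comes back marked "[skipped]" with its reason, which is exactly the justification the case notes need.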

The Value of Case Notes
So why are case notes so important?  Well, those of us that teach the need for this kind of documentation use the "you may have to testify a year later" and "what if you get hit by a bus" hooks, but in the big scheme of things, these events rarely happen.  When they do, they're real eye openers, but by then it's too late and everyone's left saying, "...yeah, I should have kept case notes...".

Something that a lot of folks don't think about when it comes to case notes is competitive advantage.  How do organizations that provide DFIR services define "competitive advantage"?  Most often, the outward expression of this perception is generated through marketing efforts (presentations at conferences, blog posts, use of social media, webinars, etc.); however, behind the scenes, that organization is going to have to deliver at some point, and it becomes a matter of the quality of the service provided (usually in relation to margins).  As such, detailed and clear case notes serve as a fantastic learning tool for other members of the DFIR team.  Let's say there's a team of 11 analysts/responders, all of whom are geographically dispersed.  One analyst spends 16 hrs of analysis and finds something new, that no one else has ever seen.  Now, assuming a common skill set level across all analysts and relatively similar cases, for everyone else to independently replicate this finding would take a total of 160 hrs (10 analysts x 16 hrs/analyst).  This isn't terribly efficient, is it, particularly given the assumptions?  However, if the first analyst's case notes are clear, they can be used to provide information to the other analysts regarding what to look for, etc.  If the team uses a remote presentation capability (WebEx, brown bag "lunch and learn", etc.), the 160 hrs can be reduced to 30 minutes, and all analysts would then have the same knowledge and capabilities, without having had the same experience.  This can provide a great deal of competitive advantage to that organization.

Another use of the case notes is to use them to create the appropriate indicators of compromise (IoC), or a plugin for a forensic scanner, to be shared amongst all analysts.  This provides an immediate capability (the time it takes to share the plugin) with zero experience, in that the other analysts don't have to actually have had the experience in order to achieve the capability.  This means that corporate knowledge is always available and retained well after analysts leave the organization, and knowledge retention becomes competitive advantage.

Consider this...when performing a specific exam (e.g., malware detection), how do you go about it?  Do you have a series of artifacts that you look for or tasks that you perform?  Now...and be honest here...do you have a documented checklist?  If you do, how much of that can you put into an automated process such as a scanner?  If you do this, you have now reduced your initial analysis time from days or hours to minutes, and by using automation, you've also reduced your chances of forgetting something, particularly those repetitive tasks.  Now, imagine collaborating with other analysts and increasing the number of plugins run...you now have a communal knowledge bank focused on quickly checking for the low-hanging fruit, and providing you with the output report (and a log of all the plugins run).  Ultimately, you're left to do the actual analysis.
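A bare-bones sketch of such a scanner (the plugin and framework shown here are illustrative stand-ins of my own, not the actual forensic scanner discussed above) might look like this: each plugin is just a function that inspects the mounted image and reports findings, and the framework runs every plugin, collects a report, and keeps an activity log.

```python
import os

def check_temp_executables(root):
    """Example plugin: flag executables sitting in the Windows temp
    directory of a mounted image."""
    temp = os.path.join(root, "Windows", "Temp")
    if not os.path.isdir(temp):
        return []
    return [f for f in os.listdir(temp) if f.lower().endswith(".exe")]

def run_scanner(root, plugins):
    """Run each plugin against the mounted image root; return the
    findings report plus a log of every plugin run."""
    report, log = {}, []
    for plugin in plugins:
        log.append(f"ran {plugin.__name__}")
        report[plugin.__name__] = plugin(root)
    return report, log
```

Each plugin encodes something an analyst once spent hours finding; the activity log doubles as case documentation of exactly which checks were run.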

So in the case of the scanner, the documentation comes in two forms...first, the documentation of previous analysis that results in a plugin.  Second, the output report from the scanner, as well as the activity log, serve as case documentation, as well.

WFA 3/e update

I posted a bit ago on WFA 3/e, and as I get closer to completing rewrites of reviewed chapters and getting the manuscript submitted, I wanted to provide an update of how things have progressed thus far...

I also wanted to talk a little bit more about what this edition is all about.  Specifically, this edition is NOT a follow-on to the second edition; instead, it's a companion book.  That is to say, if you have the second edition on your bookshelf, you will want to have this edition as well.  In fact, ideally, you'll have both WFA editions along with Windows Registry Forensics, in order to make a complete set.

There have also been a couple of changes, perhaps the biggest one being that I completely rewrote chapter 2; rather than being "Live Response", I retitled it to "Immediate Response" (the need for which was covered in this article by Garry Byers), as the previous topic had been covered to some extent in WFA 2/e, and one of the points of the third edition is to not rehash what's already been covered.  Instead, I wanted to write about the need for organizations that have identified (or been notified) that an incident has occurred within their infrastructure to immediately collect and preserve data, and do so from the perspective of a third-party consultant/responder.  I think we've seen enough in the media in the last 9 or 10 months to clearly demonstrate that no organization is immune from being compromised; add to that the ephemeral nature of "evidence" and you can see why organizations must be ready to begin collecting data as soon as they know that something has happened.  The perspective I wanted to take was that of a responder who gets a call, and after the contract has been negotiated, travels to the site and begins working with the local IT staff to develop an understanding of the infrastructure and the nature of the incident...all while digital evidence continues to expire and fade away.

During the rewrites, I'll be adding some specific information that has developed since specific chapters were originally written.  For example, in chapter 4, I fleshed out information regarding Jump Lists, and I added some additional information to the chapter on Registry Analysis.

Now, there are some things I don't cover in the book.  For example, memory analysis and browser analysis are two of the most notable topics; these are not covered in the book because they are covered elsewhere, and in a much better manner than I could have done.

Finally, with WRF, I started posting the code for the books on my Google Code site, and I will do the same with WFA 3/e.  Throughout the book I mention tools and checklists, and I'll have those posted to the Google Code site before the book is actually published.