Thursday, March 01, 2007

Book & Attending BlackHat DC

Recently, Troy Larson (bio and picture) graciously offered to review/tech edit my upcoming book, so to start, I asked the editor to send him two chapters; the one that covers volatile data collection and the one that covers Registry analysis. Here's a quote that I received from Troy regarding the Registry analysis chapter:
I really liked the registry chapter.  It is worth the price of the book alone.
Sweet! In the same email, Troy also mentioned that there wasn't as much info on Vista as there was on the other versions of Windows covered (2000, XP, 2003), but also said that this is largely due to the fact that there just isn't that much information available yet.

Speaking of the book, I found it on Amazon this morning, available for pre-order.

I spent most of yesterday at BlackHat DC and caught the first three presentations in the Forensics track...well, most of them. I caught all of Kevin Mandia's presentation, as well as part of the presentation that followed on web app forensics, and then after lunch, I went to Nick Petroni and AAron Walters's presentation on VolaTools. Unfortunately, I didn't get to stay for the entire presentation.

Kevin's presentation was very interesting, addressing the need for live response. Kevin made several comments stressing the need for "fast and light" response.

IMHO, there is a need for a greater understanding of live response...not just the technical aspects of it, but how those technical aspects can be applied to and affect the business infrastructure of an organization.

Following the presentation, I took a few minutes to jot down some notes and thoughts...I started by attempting to classify incidents (think of "Agent Smith" in The Matrix, classifying humans as a "virus") in terms of the speed of response required for each. I started by defining speed of response in general terms ("ASAP", etc.) and then attempted to become more specific, though not to the point of minutes and seconds. Regardless of the angle I approached the problem from, the one thing that kept coming to mind was business requirements...what are the needs of the business? Sure, we've got all these great technical things we can do, but as Kevin pointed out (as did Nick and AAron), a lot of the folks he's talked to have said, "we've already got all this data (file system data, etc.) to analyze...we don't do live response because the last thing we need is more data!"

I guess the question at this point is: is the data that you're currently collecting for analysis meeting your business needs? If you're backlogged 6 months in your analysis...probably not. Depending upon the nature (or class) of the incident and your business environment, it may (and most likely will) be useful to rapidly collect and analyze a specific subset of volatile data during live response, so that you can get a better picture of the issue and progress through your analysis more accurately and efficiently.

One of the things Kevin pointed out is that malware (can anyone verify that Kevin only said "malware" 6 times...I wasn't keeping track) is beginning to include the capability to modify the MAC times of files written to the system. This is an anti-forensic technique that severely hampers those using the current "Nintendo forensics" approach to analysis.
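To see just how trivial this kind of timestamp manipulation is, here's a short, benign Python sketch (my own illustration, not anything from Kevin's talk) that backdates a file's access and modification times. On Windows, malware would typically do the same thing through the SetFileTime() API; os.utime() is the portable equivalent for atime/mtime.

```python
import os
import tempfile
import time

# Create a scratch file; its timestamps will initially reflect "now".
fd, path = tempfile.mkstemp()
os.close(fd)

# Backdate the file to 1 Jan 2004 -- a trivial "timestomping" operation.
backdate = time.mktime((2004, 1, 1, 0, 0, 0, 0, 0, -1))
os.utime(path, (backdate, backdate))  # (atime, mtime)

# A timeline built solely from file system timestamps would now place
# this file years before it was actually written.
print(time.ctime(os.stat(path).st_mtime))

os.remove(path)
```

The point being: any analysis that leans only on file system MAC times for its timeline is resting on data the attacker can rewrite in a couple of lines of code.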

I think that some people are beginning to recognize the need for live response and how it can be useful in an investigation. However, I think that the issue now is this: how do those currently doing response of any kind (live or otherwise) break out of their current mindset, and shift gears? By shifting to a more holistic approach and using live response where applicable, and in the appropriate manner, those people will begin to actually see how useful this activity can be.

The presentation I found to be the most interesting was Nick and AAron's "VolaTools" presentation. Once Nick finished with the overview (presenting some great information on why you'd want to use tools like this), AAron kicked off into the how portion of the talk. I think their approach is an excellent one, in that they've identified a major stumbling block for this kind of analysis...the tools that are available are not included with EnCase, are not "push the Find-All-Evidence button" kinds of tools, and people aren't using them because the tools themselves are too different. In a nutshell, there's a confidence factor that needs to be addressed. While Nick and AAron's tools are written in Python, the Basic version of their tools does provide a command-line interface that allows the user to extract information from a RAM dump in nearly the same manner as you would on a live system...to list the modules used, you would use a "dlllist" command (as opposed to listdlls.exe), and to list handles, you'd use "handles".

I can't wait to download and look at their tools...these two guys are really bright and have just moved the issue of Windows memory analysis forward a couple of huge leaps. And thanks to guys like Jesse Kornblum for work such as his Buffalo paper, we can then build on AAron and Nick's work as a basis for incorporating the pagefile into memory analysis, as well.

Oh, and while I did meet both Ovie and Bret, unfortunately Ovie wasn't wearing his CyberSpeak t-shirt, so I guess I don't get to have a beer with Bret! ;-(

One final thing about BlackHat DC...seeing Jesse do his impression of a sardine was worth the price of admission! Speaking of Jesse, it looks like he's been busy on the ForensicWiki...


hogfly said...

"how do those currently doing response of any kind (live or otherwise) break out of their current mindset, and shift gears? "

While that may have been rhetorical...a sticking point is often business continuity. When I interview tech or non-tech staff during a response, one of the first questions I ask is "What's the maximum allowable downtime on that system?"

One of the other sticking points is this: how can we ask first responders to validate that an incident is truly an incident without looking at the system in some detail? If we can arm first responders with tools to accurately assess a system, I think the bigger picture of incident response, and then forensics, becomes much more succinct.

I saw Kevin's talk at BH 2006 in Vegas, and malware must be his favorite word because he definitely said it more than 6 times there.

H. Carvey said...


These all sound like excellent reasons for changing the mindsets...yet it doesn't seem to be working. In many cases, if the system cannot be taken down and dealt with in the traditional CF manner, paralysis sets in.

Bill Ethridge said...

I'm gonna have more to comment on later when I have enough caffeine in me (enough???). But I have to say, getting the book from Amazon isn't good enough...I want an autographed copy!

hogfly said...

Yeah I want an autographed copy too!

Perhaps the case for live response hasn't been made clear? The folks I work with tend to think that the only valuable data is on the disk or on the network. As you say, education may be needed. I wonder if doing a comparison of live response vs. traditional would help spell it out. Ask a 3rd party to compromise two systems in the same manner and have a live response occur on one, and a traditional one on the other. A mini contest, if you will.

Anonymous said...

Any comments/thoughts on Joanna Rutkowska's presentation on Memory evading techniques?

H. Carvey said...

Any comments/thoughts on Joanna Rutkowska's presentation on Memory evading techniques?

That wasn't one of the presentations I listed above as having seen...and although I would have liked to, no, no comments.

Yeah I want an autographed copy too!

Okay then! Ship it to me, with a return envelope, and I'll sign it and send it back.