Wednesday, April 16, 2014

Follow up on TTPs post

David Bianco's "Pyramid of Pain"
As a follow-up to my previous post on TTPs, a couple of us (David Bianco, Jack Crook, etc.) took the discussion to G+.  Unfortunately, I did not set the conversation to public, so I wanted to recap the comments here, and then take this back to G+ for open discussions.

First, if you're new to this discussion, start by reading my previous post, and then check out David's post on combining the "Kill Chain" with the Pyramid of Pain.  For another look at this, check out David's Enterprise Security Monitoring presentation from BSidesAugusta - he talks about the kill chain, PoP, and getting inside the adversary's OODA loop.  Pay particular attention to David's "bed of nails" slide in the presentation.

Second, I wanted to provide a synopsis of the discussion from G+.  Those involved included myself, David, Jack, and Ryan Stillions...David brought him into the conversation initially because Ryan had developed a concept of "Detection Maturity Level" that overlaps with David's Pyramid concept.  Nothing is available yet, and hopefully Ryan will blog on it soon.

To start off the discussion, I asked that if finding, understanding, and countering TTPs causes the adversary "pain", why is there so much emphasis within the community on finding indicators?  One thought was that indicators are shared because that's what clients are looking and asking for, implying that those providing 'threat intel' services follow client requests, rather than driving them.  This goes back to maturity...in order to share TTPs, organizations have to be mature enough to (a) detect and find them, and (b) understand and employ them within their infrastructure.  There was another comment that indicators at the lowest levels of the PoP are focused on because there are more of them...a recent presentation at RSA 2014 mentioned "3000 indicators".  From a marketing perspective, that's much better than "TTPs for one group".

Ryan followed up with a comment that focusing on the lower levels of the PoP actually inflicts pain on the analysts (re: false positives), and he used the phrase "Cost of Context Reconstruction" (Ryan, start blogging, dude!!), which refers to the idea that the lower in the stack you operate, the longer it takes to re-establish situational context, arrive at conclusions, pivot, etc.

At that point, the discussion then moved to organizational maturity and people...skills, etc.  David recommended his above blog post and video, and I went off at that point to get caught up.

The question was then posed asking if attribution was important.  Ryan thought that would be a great panel question, and I agree...but I also think that this is a great question to start thinking about now, not simply to mature and crystallize your own thoughts, but because when it is posed to a panel, there are going to be a lot of folks hearing it for the first time.

What the discussion then centered around at that point was that attribution can be important, depending upon the context (if you're in the intel or LE communities), but for most organizations with a maturity level that has them at the lower levels of the Pyramid, attribution is a distraction.  What needs to be focused on at that point is moving further up the Pyramid and maturing the organization to the point where TTPs are understood, detected, and employed within the detection and response framework.

This then circled back to the "why", with "because that's what the client is asking for" thrown in as a possible response.  David brought up the concept of "provisional attribution" during the course of an incident, meaning that "this is what we know at the moment, but we may be wrong so it's subject to change at any time".

At that point, we got back to "hey, maybe we should open this up", hence, this post.  That's where we are at this point.  As a means of summary:

Use the Pyramid of Pain to:
- Identify detection/skill gaps
- Determine organizational detection/response maturity (looking for a blog post from Ryan...)
- Combine with the Kill Chain to bring "pain" to the adversary

There was also the idea of actually having a panel discussion at a conference.  I think that's a great idea, but I also think that it's limiting...shelving the discussion until a conference means no movement, and then all of a sudden, there's a discussion that many folks are seeing for the first time, and they haven't had time to catch up.  So, we'll take this back to G+ for the time being, simply because at this point, there really haven't been any better ideas for a forum for this sort of discussion.

Addendum: The G+ post with comments can be found here.

Monday, April 14, 2014

WFA 4/e

Okay, so Windows Forensic Analysis 4/e showed up in a couple of boxes on my doorstep tonight.  It's now a thing.  Cool.

As I write this, I'm working on finishing up the materials that go along with the book.  I got hung up on something, and then there was work...but the link will be posted very soon.

A question from Twitter from "Dark Operator":

so it is a version per version of Windows or the latest will cover 7 and 8?

I know the cover says "for Windows 8", and I tried to incorporate as much info as I could about Windows 8 into the book by the time it went in for the final review before printing...which was back in February.  This edition includes all the Windows 7 information from the third edition, plus some new information (and some corrections), as well as some information for Windows 8.

The thing about questions like this is that Twitter really isn't the medium for them.  If you have a question or comment about the book contents, you can email me, or comment here.  It's just that sometimes the answers to questions like that do not fit neatly into 140 characters or less.

Over the past couple of months, I've been asked to speak at a number of events, and when I ask what they'd like me to speak about, I generally get responses like, "...what's new in Windows 8?".  The simple answer is...a lot.  Also, most folks doing DFIR work may not be completely familiar with what information is available for Windows 7 systems, so what could I say about Windows 8 in an hour that would be useful to anyone?  Some things (Jump Lists, for example) are very similar in Windows 8 to what they are in Windows 7, but other things...the Registry, in particular...are different enough to pose some challenges to a good number of analysts.

So, once again...I'll be posting the link to the materials that go along with the book very soon.  I post them online because people kept leaving their DVDs somewhere (at home, at work, with a friend, in their car...) and needed a means for getting the download, so I moved it online.  This also allows me to update the materials.

Questions?  Comments?  Leave 'em here, or email me.  Thanks so much.

Addendum: The book materials are posted here.

Sunday, April 13, 2014


TTPs

Within the DFIR and threat intel communities, there has been considerable talk about "TTPs" - the tactics, techniques, and procedures used by targeted threat actors.  The most challenging aspect of this topic is that there's a great deal of discussion of "having TTPs" and "getting TTPs", but when you really look hard at it, it kind of becomes clear that you're gonna be left wondering, "where're the TTPs?"  I'm still struggling a bit with this, and I'm sure others are, as well.

I ran across Jack Crook's blog post recently, and didn't see how just posting a comment to his article would do it justice.  Jack's been sharing a lot of great stuff, and there's a lot of great stuff in this article, as well.  I particularly like how Jack tied what he was looking at directly into the Pyramid of Pain, as discussed by David Bianco.  That's something we don't see often enough...rather than going out and starting something from scratch, build on some of the great stuff that others have done.  Jack does this very well, and it was great to see him using David's pyramid to illustrate his point.

A couple of posts that might be of interest in fleshing out Jack's thoughts are HowTo: Track Lateral Movement, and HowTo: Determine Program Execution.

More than anything else, I would suggest that the pyramid that David described can be seen as an indicator of the level of maturity of the IR capability within an organization.  What this means is that the more you've moved up the pyramid (Jack provides a great walk-through of moving up the pyramid), the more mature your activity tends to be, and when you've matured to the point where you're focused on TTPs, you're actually using a process, rather than simply looking for specific data points.  And because you have a process, you're going to be able to not only detect and respond to other threats that use different TTPs, but you'll also be able to detect when those TTPs change.  Remember, when it comes to these targeted threats, you're not dealing with malware that simply does what it does, really fast and over and over again.  Your adversary can think and change what they do, responding to any perceived stimulus.

Consider the use of PSExec (a tool) as a means to implement lateral movement.  If you're looking for hashes, you may miss it...all someone has to do is flip a single bit within the PE file itself, particularly one that has no consequence on the function of the executable, and your detection method has been obviated.  If a different tool (there are a number of variants...) is used, and you're looking for specific tools, then similarly, your detection method is obviated.  However, if your organization has a policy that such tools will not be used, and it's enforced, and you're looking for Service Control Manager event records (in the System Event Log) with event ID 7045 (indicating that a service was installed), you're likely to detect the use of the tool on the destination systems, as well as the use of other similar tools.  In the case of more recent versions of Windows, you can then look at other event records in order to determine the originating system for the lateral movement.
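To illustrate the idea, here's a minimal sketch in Python.  The record format (dicts with "source" and "event_id" keys) and the sample events are hypothetical...in practice, you'd populate the records by parsing the System Event Log (System.evtx) with whatever tool you prefer:

```python
# Minimal sketch: detect service installs (SCM event ID 7045) from parsed
# event records, regardless of which tool created the service.
def find_service_installs(records):
    """Return events indicating that a new service was installed."""
    return [
        r for r in records
        if r.get("source") == "Service Control Manager"
        and r.get("event_id") == 7045
    ]

# Hypothetical parsed records; real ones would come from System.evtx.
events = [
    {"source": "Service Control Manager", "event_id": 7045,
     "service_name": "PSEXESVC", "image_path": r"%SystemRoot%\PSEXESVC.exe"},
    {"source": "Service Control Manager", "event_id": 7036,
     "service_name": "Spooler"},
]

hits = find_service_installs(events)
for h in hits:
    print(h["service_name"], h.get("image_path", ""))
```

The point of the sketch is that the filter keys on the behavior (a service being installed), not on any particular tool's name or hash.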

Looking for Service Control Manager/7045 events is one of the items listed on the malware detection checklist that goes along with chapter 6 of Windows Forensic Analysis, 4/e.

When it comes to malware, I would agree with Jake's blog post regarding not uploading malware/tools that you've found to VT, but I would also suggest that the statement, "...they create a new piece of malware, with a unique hash, just for you..." falls short of the bigger issue.  If you're focused on hashes, and specific tools/malware, yes, the bad guy making changes is going to have a huge impact on your ability to detect what they're doing.  After all, flipping a single bit somewhere in the file that does not affect the execution of the program is sufficient to change the hash.  However, if you're focused on TTPs, your protection and detection process will likely sweep up those changes, as well.  I get it that the focus of Jake's blog post is to make the information more digestible, but I would also suggest that the bar needs to be raised.
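The single-bit-flip point is easy to demonstrate with a few lines of Python.  A stand-in byte buffer is used here in place of an actual executable, but the effect on the hash is the same:

```python
import hashlib

# A small stand-in for an executable's bytes; any buffer demonstrates the point.
data = bytearray(b"This is a stand-in for an executable's bytes.")

before = hashlib.sha256(data).hexdigest()

# Flip a single bit in one byte of the "file"...
data[10] ^= 0x01

after = hashlib.sha256(data).hexdigest()

print(before)
print(after)
print(before != after)  # the hashes no longer match
```

A one-bit change in a multi-megabyte PE file has exactly the same effect...hash-based detection is that brittle.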

Uploading to VT
One issue not mentioned in Jake's post is that if you upload a sample that you found on your infrastructure, you run the risk of not only letting the bad guy know that his stuff has been detected, but more than once responders have seen malware samples include infrastructure-specific information (domains, network paths, credentials, etc.) - uploading that sample exposes the information to the world.  I would strongly suggest that before you even consider uploading something (sample, hash) to VT, you invest some time in collecting additional artifacts about the malware itself, either though your own internal resources or through the assistance of experts that you've partnered with.

A down-side of this pyramid approach, if you're a consultant (third-party responder), is that if you're responding to a client that hasn't engineered their infrastructure to help them detect TTPs, then what you've got left is the lower levels of the pyramid...you can't change the data that you've got available to you.  Of course, your final report should make suitable recommendations as to how the client might improve their posture for responding.  One example might be to ensure that systems are configured to audit at a certain level, such as Audit Other Object Access - by default, this isn't configured.  This would allow for scanning (via the network, or via a SIEM) for event ID 4698 records, indicating that a scheduled task was created.  Scanning for these, and filtering out the known-good scheduled tasks within your infrastructure, would allow for this TTP to be detected.
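As a sketch of that filtering idea...again, the record format and task names here are hypothetical, and the known-good list would be built from tasks legitimately used within your own infrastructure:

```python
# Hypothetical allowlist of scheduled tasks known to be legitimate
# within this (made-up) infrastructure.
KNOWN_GOOD_TASKS = {
    r"\Microsoft\Windows\Defrag\ScheduledDefrag",
    r"\GoogleUpdateTaskMachineUA",
}

def suspicious_task_creations(records, allowlist=KNOWN_GOOD_TASKS):
    """Return 4698 (scheduled task created) events not in the known-good set."""
    return [
        r for r in records
        if r.get("event_id") == 4698
        and r.get("task_name") not in allowlist
    ]

# Hypothetical parsed records, e.g. pulled from the Security Event Log or a SIEM.
records = [
    {"event_id": 4698, "task_name": r"\Microsoft\Windows\Defrag\ScheduledDefrag"},
    {"event_id": 4698, "task_name": r"\At1"},  # classic at.exe-style task name
]

flagged = suspicious_task_creations(records)
```

The known-good filtering is what makes this scale...the legitimate tasks in an environment are relatively stable, so anything new that shows up is worth a look.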

For a good example of changes in TTPs, take a look at this CrowdStrike video that I found out about via Twitter, thanks to Wendi Rafferty.  The speakers do a very good job of describing changes in TTPs.  One thing from the video, however...I wouldn't think that a "new approach" to forensics is required, per se, at least not for response teams that are organic to an organization.  As a consultant, I don't often see organizations that have enabled WMI logging, as recommended in the video, so it's more a matter of a combination of having an established relationship with your client (minimize response time, maximize available data), and having a detailed and thorough analysis process.

Regarding TTPs, Ran2 says in this EspionageWare blog post, "Out of these three layers, TTP carries the highest intelligent value to identify the human attackers."  While being the most valuable, they also seem to be the hardest to acquire and pin down.

Thursday, April 03, 2014

What's Up?

A bit ago, I ran across this fascinating blog post regarding the Pyramid of Pain.  Yes, it's over a year old, but it's still relevant today.

For one thing, back when I was doing PCI exams (while a member of the IBM ISS ERS team), Visa would send us these lists which included file names (no paths) and hashes...we had to search for them in every exam, so we did.  While I could see the value in the searches themselves, I felt at the time that Visa was sitting on a great deal of valuable intelligence that, if shared and used properly, could not only help us with our analysis, but could also be used to protect the victim merchants.  After all, if we knew more about how the bad guys were getting in and how they were targeting specific systems (TTPs), we could help prevent and detect such things.  As such, it was validating to see someone else discuss the value of such things.

Over the years, I've heard others talk about things like attribution, when it came to "APT" (I apologize for using that term...).  In fact, at one conference in particular, the speaker talked about how examples of code could be used for attribution, but shortly thereafter stated that the code could be downloaded from the Internet, and that various snippets could be pasted together to form working malware.

In his recent RSA Conference State of the Hack talk, Kevin Mandia said that in the APT1 report, 3000 indicators were released...he mentioned domains and IP addresses, specifically.  Those are pretty low on the pyramid.

I have to agree that tools don't make the threat group, as many seem to be using the same tools.  In fact, it seems that a lot of the tools used for infrastructure recon are native to the operating that case, which group gets the attribution?  Microsoft?  ;-)

Every time I read over the blog post, I keep coming back to agreeing with what the author says about TTPs.  Once you're to the point of detecting behaviors, you're no longer concerned with things like disclosure (public or otherwise) resulting in an adversary changing/adapting their tactics...because now, rather than focusing on individual data points, you have a process in place.  In particular, that process is iterative...anything new you learn gets rolled right back into the process and shared amongst all responders, thereby improving the overall response process.  What you want to do is get to the point where you stop reacting to the adversary, and instead, they react to you.

RegRipper EnScript
I recently found out that someone is selling an EnScript to tie RegRipper into EnCase.  Some folks tweeted questions about how I felt about it, whether I was receiving royalties, and whether it "violated the GPL"...which, honestly, I didn't understand, as the license file in the archive specifies the license.  Why ask me if the answer is right there?

Since Twitter really isn't the medium for this sort of thing (and I honestly have no idea why so many people in the DFIR community restrict themselves to that medium), I thought I'd share my thoughts here.  First, others are making money off of free software, in general, and a few are doing so specifically with why is this particular situation any different?  The GPL v3 quick guide states, in part, that users should have the freedom to "use the software for any purpose".  It further goes on to state that free software should remain free...and in this case, it would appear that RR remains free, and the $15 pays for the EnScript.  As such, I wouldn't think that selling the EnScript violates anything.

Second, this is clearly an attempt to bring the use of RR to users of EnCase.  I've never been a big fan of EnCase, but I realize that there are a number of folks who are, and that many specifically rely on this software.  My original purpose for releasing RegRipper was to put it out in the community for others to use and improve upon...well, that second part really hasn't happened to a great extent, and I don't see this EnScript either taking anything away from RegRipper, or adding anything to it.

I'm not saying that no one has offered up ways for improving RegRipper...some have. Not long ago, I received a plugin from someone, and Corey Harrell submitted one just the other day. I've had exchanges recently with some folks who have had some thoughtful suggestions regarding how to improve RegRipper, and perhaps make it more useful to a wider range of users. All I'm saying is that it hasn't happened to a great extent; some of the improvements and updates (inclusion of alerts, RLO plugin, etc.) are things I've added for my own benefit, and I don't want to maintain two disparate source trees.

Does this mean that more people are likely to use it?  Hhhhhmmmm...maybe.  Folks who go this route are likely going to go the same route as most of the folks who already use RegRipper, either by downloading it or using it as part of a consolidated distribution.  That is to say, they're just going to blindly run all plugins against the hives that they have available, and it's unlikely that they're going to have any ideas for new plugins (or tool updates or improvements as a whole) coming from this crowd.

So, in short, I don't see how this EnScript is a violation of the's no different from what others are doing, and RegRipper itself remains free.  Further, it takes nothing away from RegRipper, nor adds anything to it.

Finally, Jamie had a good point on Twitter...if you don't want this to happen, don't put stuff out there for free.  Point well taken.

Speaking of which, I had an email exchange with Jamie and Corey Harrell recently, where we discussed some well-considered possible future additions to RegRipper.

Windows Forensic Analysis 4/e is due to be released soon...I need to complete the archive of materials that go along with the book and get it posted.  As soon as that's done, I will start working on the possible additions to RegRipper.

Malware Detection
Speaking of WFA 4/e, one of the chapters I kept was the one on Malware Detection.  Not long ago, I was following the steps that I had laid out in that chapter, and I found that the system had McAfee AV installed.  So, per my process, I noted this to (a) be sure that I didn't run the same product against the image (mounted as a read-only volume) and (b) look for logs and quarantined items.

It turns out that when McAfee AV quarantines an item, it creates a .bup file, which follows the MS CFB file format.  This Open Security Research blog post is very helpful in opening the files up, in part because it points to this McAfee Knowledge Center article on the same topic.

Some additional resources:
Punbup - Python script for parsing BUP files
Bup-parse - EnScript for parsing BUP files
McBup - Another Python script that may be useful
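For reference, the stream encoding itself is trivial.  As described in the posts above, the streams within the .bup container (a CFB/OLE compound file) are XOR'ed with the single byte 0x6A; once a stream such as "Details" has been extracted with an OLE parser (olefile, for example), decoding it looks like this.  The sample bytes below are made up for illustration...the round trip works because XOR is its own inverse:

```python
# Sketch: decode a stream extracted from a McAfee .bup quarantine file.
# BUP streams are XOR-encoded with the single byte 0x6A.
def decode_bup_stream(data: bytes) -> bytes:
    """XOR each byte with 0x6A to recover the original stream content."""
    return bytes(b ^ 0x6A for b in data)

# Round trip on made-up content: encoding and decoding are the same operation.
plain = b"[Details]\r\nDetectionName=Test"
encoded = decode_bup_stream(plain)
decoded = decode_bup_stream(encoded)
```

The "Details" stream decodes to an INI-style listing of the detection name and the original path of the quarantined file; the quarantined binary itself lives in a separate stream, XOR'ed the same way.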

I'll be speaking at a couple of conferences here in the near future.  I'm giving two presentations at the USACyberCrime Conference (formerly known as the DoD CyberCrime Conference, or DC3) at the end of April.  My presentations will be "APT sans Malware", and "Registry Analysis".  It's unlikely that I will be posting the PPTXs for these, as I'm not putting everything I'm going to say in bullets in the slides...if I did, what would be the point of me actually speaking, right?

Thanks to Suzanne Widup, I'll be speaking on the author's panel at the SANS Forensics Summit in Austin, TX, in June.  This is something new, and something I'm looking forward to.  Not getting feedback from the community regarding what they'd like to see or hear in a presentation, I've backed away somewhat from submitting presentations to CfPs that are posted.  One of my go-to presentations is Registry Analysis, in part because I really believe that it's a critical component of Windows forensic analysis, and also because I'm not sure that analysts are doing it correctly.  However, I've been told that I need to present on something else...but not what.  Also, the panel format is more free-form...I was on a panel at one of the first SANS Forensic Summits, and if you've attended any of the Open Memory Forensic Workshops, you've seen how interesting a panel can be.

Saturday, March 29, 2014

Writing DFIR Books: Questions

Based on my Writing DFIR Books post, Alissa Torres tweeted that she had a "ton of questions", so I encouraged her to start asking them.  I think that getting the questions out and asked now would be a great way to get started, for a couple of reasons.  First, the Summit is a ways away still, and it's unlikely that she's going to remember the questions.  Second, we don't know how the panel itself is going to go, so even if she did remember her "ton of questions", she may not be able to ask all of them.  Third, it's likely that some questions, and responses, are going to generate further questions, which themselves won't be asked due to time constraints.  Finally, it's unlikely that everyone is going to see the questions and responses, and it's likely that other panelists are going to have answers of their own.  So...I don't really see how someone asking their questions now is really going to take anything away from the panel that Suzanne Widup is putting together...if anything, I believe strongly that getting questions and answers out there now is going to make the panel that much better.

So, I scooped up some of the questions from her tweets, and decided to answer them via a medium more conducive to doing so, and here are my answers...

Forensics research is constantly in a state of new discovery. When does one stop researching and start writing?

The simple answer is that you're going to have to stop researching and start writing at some point.  It's up to you to decide, based on your topic, what you want to address, your outline, your schedule, etc.  The best advice I can give about this is to write the book the way you'd write a'll want to be able to explain to a client (or anyone else) how you reached your conclusions 6 months or a year later, right?  The same holds true for the book...explain what you were doing in your research, in a clear and concise manner.  That way, if someone comes to you with a question about a new discovery after the book is published, you can discuss this new information intelligently.

Publishing Timelines
One thing to keep in mind about writing books is that the book doesn't immediately go to print as soon as you "put down your pen".  Rather, once you've completed writing the manuscript, it goes into a review process (separate from the technical review process) and the proofs are then sent to you for review.  Once you approve the proofs and send them back, it can be 2 or 3 more months before the book is actually available on shelves.  So, the simple fact is that a published book is always going to be just a bit behind new developments.  However, that doesn't make a book any less valuable...there are always new people coming into the field, and none of us knows everything, so a well-written book is going to be very useful, regardless.

If new research disproves something that you wrote, does it work against you later as an expert witness?

With respect to this question, writing a book is no different from conducting analysis and writing a report for a client.  Are you going to write something into a report that someone working for the client is going to disprove in a week or two when they read it?  If you found during your analysis that malware on the system had created a value beneath the user's Run key in order to remain persistent, are you going to say in your report that the malware started up each time the system was booted?  No, you're going to say that it was set to start whenever the user logged in, and because you did a thorough analysis, which included creating a timeline of system activity, you're going to have multiple data points to support that statement.

That is not to say that something won't change...things change all the time, particularly when it comes to DFIR work, and particularly with respect to Windows systems.  However, there's very likely going to be something that changed...some other application was installed on the system, some Registry value was set a certain way, a patch had been installed that modified a DLL, etc.

If you've decided to do "research" and add it to your book, do the same thing you would with a report that you're writing for a client.  Describe the conditions, the versions of tools and OS utilized, etc.  Be clear and concise, and caveat your statements as necessary.

When I was writing the fourth edition of Windows Forensic Analysis, I wanted to include updated information regarding Windows 8 and VSCs in chapter 3, so I took what was in that chapter in the third edition, and I ran through the process I'd described, using an image acquired from a Windows 8 system...and it didn't work.  So, I figured out why, and was sure to provide the updated information in the chapter.

Something else to keep in mind is that most publishers want you to have a technical reviewer or editor, someone who will be reviewing each chapter as you submit it.  You can stick with whomever they give you, and take your chances, or you can find someone you know and trust to hold you accountable, and offer their name to the publisher.  This is a great way to ensure that something doesn't "slip through the cracks".  Like a report, you can also have someone else review your work...submit it to peer review.  This way, you're less likely to provide research and documentation that is so weak that it's easily disproved.

As to the part about being an expert witness, as Alissa said before, "forensics research is constantly in a state of new discovery".  I've never been an expert witness, but I could not imagine an attorney putting an expert witness on the stand to testify based on research or findings that are five years old, or so weak that they could be easily disproved.  I mean, I'd hardly think that such a witness would qualify as "expert".

You all have to address time management as well - how did you juggle paid work/full-time job with book writing?

Short answer: you do.

Okay...longer answer:  This is something you have to consider before you even sign a contract...when am I going to write?  How often, how much, etc?

I learned some useful techniques while writing fitness reports in the Marine being that it's easier to correct and modify something than it is to fill empty space.  Write something, then step away from it.  When I wrote fitreps, I'd jot some bullets down, flesh out a paragraph, and step away from it for a day or so.  Coming back to it later would give me a fresh perspective on what I was writing, allowing my thoughts to marinate a bit.  Of course, it also goes without saying that I didn't wait until the last minute to get started.

Something that I've recommended to folks before they start looking at signing a contract to have a book published is to try writing a couple of chapters.  I will provide a template for them...the one that I use for my publisher...and have them try writing a chapter or two.  I think that this is a very good approach to getting folks to see if they really want to invest the time required to write a book.  One of the things I've learned about the DFIR community, and technical folks as a whole, is that people really do not like to write...notes, reports, books, etc.  So the first hurdle is for a potential author to see what it's like to actually write, and it's usually much harder if they haven't put a good deal of thought into what they want to write, and they haven't started by putting a detailed outline together.  Once something is ready for review, I then offer to take a look at it and provide feedback...writing a book, just like a report, isn't about the first words you put down on paper.  Then the potential author gets to see what that part of the process is like...and it's like having to do 50 push ups, and then being told to do them over because 19 of them didn't count.  ;-)

So far, good questions.  Like I said, I think that getting some of these questions out there and answered now really doesn't take away from the panel, but instead, brings more attention to it.  And it appears that Suzanne agrees, so keep the questions coming...

Addendum:  Shortly after I tweeted this blog post, Corey Harrell tweeted this question:

What's the one thing you know now that you wish you knew writing your first book?

That it's so hard to get input or thoughtful feedback from the community.  Most often, if you do get anything, it's impossible to follow up and better understand the person's perspective.

Seriously...and I'm not complaining.  It's just a fact that I've come to accept over the years.

Most folks who do this sort of thing want some kind of feedback.  When I taught courses, I had feedback forms.  I know other courses, and even some conferences, include feedback forms.  It's this interaction that allows for the improvement of things such as books, open source tools, and analysis processes.  I'm a firm believer that it's impossible to know everything, but by engaging with each other, we can all become better analysts.  The great thing about writing a book, in this context, is that I've taken the first step by putting something out there to be scrutinized.

One of the things I've found over time is that my books have been and are being used in academic and training (government, military) courses.  This is great, and I really appreciate the fact that the course developers and instructors find enough value in my books to use them.  When I have had the chance to talk to some of these instructors, they've mentioned that they have thoughts on what could be done...what could be added or modified in the make it more useful for their purposes.  When I've asked them to share their thoughts, or asked them to elaborate on statements such as "...cover anti-forensics...", most often, I don't hear anything.

Now and again, I do hear through the grapevine that someone has/had comments about a book, or specific material in one of my books, but what I've yet to see much of, beyond the reviews posted on Amazon, is thoughtful feedback on how the books might be improved.  That is not to say that I haven't received it...just recently I did receive some thoughtful feedback on one of my books from a course instructor, but it was a one-shot deal and it's been impossible to engage in a discussion so that I can better understand what they're asking.

Had I known that when writing my first book, I would've had different expectations.

Friday, March 28, 2014

Writing DFIR Books

Suzanne Widup (of Verizon) recently asked me to sit on an author's panel that she's putting together, in order to make the rounds of several conferences.  I won't be available for the panel at CEIC, but David Cowen will be...be sure to stop by and see what he and the other panel members have to share about their experiences.

I thought I'd put together a blog post on this topic for a couple of reasons.  First, to organize my thoughts a bit, and share some of them here.  Also, I wanted to let folks know that I'll be a member of the author panel at the SANS DFIR Summit in Austin, TX.

How did I get started?
Several years ago, a friend asked me to be a tech reviewer for a book that he was co-authoring, and during the process, I provided some input into the book itself.  It wasn't a great deal of information really, just a paragraph or so, but I provided more than just a terse "looks good" or "needs work".  After the book was completed, the publisher asked my friend if he knew of anyone who wanted to write a book, and he provided three names...of the three, I was apparently the only one to respond.   From there, I went through the process of writing my first book.  After that book was published, it turned out the publisher had no intention of continuing with follow-on editions; our contract gave them the right of first refusal, which they exercised by declining, and I moved on to another publisher.

Why write a book?
I can't speak for everyone, but the reason I decided to write a book initially was because I couldn't find a book that covered the digital forensic analysis of Windows systems in the manner that I would've liked.  Like many in the DFIR community, I've had notes and stuff sitting all over, and in a lot of cases, those notes were on systems that I no longer have; over time, I've upgraded systems, or installed new OSs.  Writing a book to use as a reference means that rather than rummaging all over looking for something, I can reach up to my bookshelf, open to the chapter in question and find what I'm looking for.

What goes into writing a book?
A lot of work.  Writing books for the DFIR community is hard, because there's so much information out there that is constantly changing.  Even if you pick a niche to focus on, it's still a lot of work, because historically, our community isn't really that good at writing.  People in the DFIR community tend to not like to write case notes and reports, let alone a book.  

For most, the process of writing a book starts with an idea, and then finding someone to publish the book.  Once there's interest from a publisher, the potential author starts the process of putting the necessary information together in the format needed by the publisher, which is usually a proposal or questionnaire of some kind.  If the proposal is accepted, the author likely receives a contract spelling out things like the timeline for the book's development, page/word count, specifics regarding advances and royalties, etc.  Once the contract is signed, the writing process begins.  As the author sends the chapters in, each one is subjected to a review process and sent back to the author for updates.  Once the manuscript...chapters, front- and back-matter, etc...is all submitted, the proofs are put together for the author to review, and once those are done, the book goes to printing.  The time between the author sending the proofs back and the book being available for shipping can be 2 - 3 months.

I'm not saying that I agree with how this process works; I think that there is a lot that can be done to not only make the entire process easier, but also to result in better quality DFIR books being available.  However, my thoughts on this are really a matter for another blog post.

How do you decide what to put in the book?
When you feel like you would like to write a DFIR book, start with an outline.  Seriously.  This helps organize your thoughts, and it also helps you see if there's enough information to put into a book. Preparation is key, and this is true when taking on the task of writing a book.  I've found over time that the more effort I put into the outline and organizing my thoughts ahead of time, the easier it is to write the book.  Because, honestly, we all know how much folks in the DFIR profession like to write...

What do you get from writing a book?
It depends on what you want from writing a book, and what you put into it.  For example, I started writing books because I wanted a reference, some sort of consolidated location for all of the bits and pieces, tips and tricks that I have laying around.

First, let me clear something up...some people seem to think that when you write a book, you make a lot of money.  Yes, there are royalties...most contracts include them...but it's also really easy to sit back and assume what the percentages are and what the checks look like, and the fact is that most people who think that way are wrong.  I've had people ask me why I didn't include information about Windows mobile devices in my books...either for full analysis or just the Registry...and they've suggested that I make enough money in royalties to purchase these devices.  If you think that writing a book for the DFIR community is going to make you enough money to do something like that, then you probably shouldn't start down that road.  Yes, a royalty check is nice, but it's also considered taxable income by the IRS...it does get reported, and it does get taxed.  This takes a small amount and makes it smaller.  I'm not complaining...I have engaged in this process with my eyes open...I'm simply stating the facts as they are so that others are aware.

One thing that you do get from writing a book, whether you want it or not, is notoriety.  This is especially true if the book is useful to folks and looked upon favorably...they get to know your name (the same is also true if the book ends up being really bad).  And yes, this notoriety kind of puts you in a club, because honestly, the percentage of folks who have written successful DFIR books is kind of small.  But this can also have a down-side; a lot of people will look at you as unapproachable.   I was told once during an interview process that the potential employer didn't feel that they could afford me...even though we hadn't discussed salary or salary history...because I'd written books.  I've received emails from people in the industry, some whom I've met in person, in which they've said that they didn't feel that they could ask me a question because I'd written books.

What I get from writing books is the satisfaction of completing the book and seeing it on my bookshelf.  I've actually had occasion to use my books as references, which is exactly what I intended them to be.  I've gone back and looked up tools, commands, and data formats, and used that information to complete exams.  I've also been further blessed, in that some of my books have been translated into other languages, which adds color to my bookshelf.

Book Reviews
Book reviews are very important, not just for marketing the book, but because they're one way that the author gets feedback and can decide to improve the book (if they opt to develop another edition).

Book reviews need to be more than "...chapter 1 contains...chapter 2 covers..."; that's a table of contents (ToC), not a book review, and most books already have a ToC.  

A review of a DFIR book should be more about what you can't get from the ToC.  If you review a restaurant on Yelp, do you repeat the menu (which is usually already available online), or do you talk about your experience?  I tend to talk about the atmosphere, how crowded the restaurant was, how the service was, how the food was, etc.  I tend to do something similar when reviewing DFIR books.  The table of contents is going to be the same regardless of who reads the book; what's going to be different is the reader's experience with the book.

When writing a book review and making suggestions or providing input, it really helps (the author, and ultimately the community) to think about what you're suggesting or asking for.  For example, now and again, one of the things I've been asked to add to the WFA book is a chapter on memory analysis.  Why would I do that if the Volatility folks (and in particular, Jamie Levy) have already put a great deal of effort into the tool documentation AND they have a book coming out?  The book is currently listed as being 720 pages long...what would I be able to provide in a chapter that they don't already provide in a much more complete and thorough manner?

Now, I know that not everyone who purchases a book is going to actually open it.  I know this because there are folks who've asked me questions and, knowing that they own a copy of the book, I've referenced the appropriate page number in the book.  But if you do have a book, and you have some strong feelings about it (whether positive or negative), I would strongly encourage you to write a review, even if you're only going to send it to the author.  The reason is that if the author has any thought of updating the book to another edition, or writing another book altogether, your feedback can be helpful in that regard.  In fact, it could change the direction of the book completely.  If you share the review publicly, and the author has no intention of updating the book, someone else may see your review and it might be the catalyst for them to write a book.

Saturday, March 22, 2014

Coding for Digital Forensic Analysis

Over the years, I've seen a couple of questions on the topic of coding for digital forensic analysis.  Many times, these questions tend to devolve into a quasi-religious debate over the programming language used, and quite honestly, that detracts from the discussion as a whole, because regardless of the language used, these questions are very often more about deconstructing or reconstructing data structures, processing logs, or simply obtaining context and meaning from the available mass of data.

Programming languages abound, and from what I've seen, the one chosen comes down to personal preference, usually based on experience and knowledge of the language.

I started programming BASIC on the Apple IIe back around 1982.  I typed a couple of BASIC programs into a Timex-Sinclair 1000, and then took the required course in BASIC programming my freshman year in college.  In high school, I took the brand new AP Computer Science course, which focused on using PASCAL...I ended up using TurboPASCAL at home to compile my programs and bring them in on a 5.25 in. floppy drive.  In graduate school, I took a C/C++ programming course, but it didn't involve much more than opening a file, writing to it, and then closing it.  I also did some M68000 assembly language programming. We used MatLab a lot in some of the different courses, particularly digital signal processing and neural networks, and I used it to perform some statistical analysis for my thesis.

In 1999, I started teaching myself to program Perl in order to have something to do.  I was working as a consultant at Predictive Systems, and we didn't have a security practice at the time.  I could hear the network operations team talking about how they needed someone who could program Perl, so I started teaching myself what I could so that maybe I could offer some assistance.  From there, I branched out to interacting with Windows systems through the API provided by Dave Roth's Perl modules.  At one point, while working at TDS, I had a fully-functional product that we were using to replace the use of ISS's Internet Scanner, due to the number of false positives and cryptic responses we were receiving.

Since then, I've used Perl for a wide variety of tasks, from IIS web server log processing to RegRipper to decoding binary data structures.
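To give a sense of what that kind of log processing looks like...I did it in Perl at the time, but here's a minimal sketch in Python (since that's the language I've been picking up), pulling fields out of a W3C extended format log of the kind IIS writes.  The sample log lines here are made up for illustration; the field names come from the log's own "#Fields:" directive.

```python
# Minimal sketch of parsing W3C extended log format (the format IIS uses).
# The sample below is fabricated for illustration.
SAMPLE = """#Software: Microsoft Internet Information Services
#Fields: date time c-ip cs-method cs-uri-stem sc-status
2014-03-22 10:15:01 192.168.1.10 GET /index.html 200
2014-03-22 10:15:02 192.168.1.11 GET /admin.php 404
"""

def parse_w3c(text):
    """Return a list of dicts, one per log entry, keyed by the #Fields names."""
    fields, records = None, []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # field names follow the directive
        elif line and not line.startswith("#") and fields:
            records.append(dict(zip(fields, line.split())))
    return records

for rec in parse_w3c(SAMPLE):
    print(rec["c-ip"], rec["cs-uri-stem"], rec["sc-status"])
```

Once the entries are dicts, filtering for status codes, specific URIs, or source IPs is trivial...which is really the whole point of scripting this sort of thing rather than eyeballing the raw log.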

However, I'm NOT suggesting that Perl is the be-all and end-all of programming languages, particularly for DFIR work. Not at all.  All I've done is provide my experience.  Over the years, other programming languages have been found to be extremely useful.  I've seen R used for statistical analysis, and it makes a lot of sense to use this language for that task.  I've also seen a lot of programming in the DFIR space using C# and .NET, and even more using Python.  I've seen folks switch from another language to Python because "everyone is doing it".  I've seen so much use of Python that I've started learning it myself (albeit slowly) in order to better understand the code, and even create my own.  The list of projects written in Python is pretty extensive, so it just makes sense to learn something about this language.

Defining the Problem
The programming language you use to solve a problem is really irrelevant.  I've got bits of Perl code lying around...stuff for parsing certain data structures, printing binary data to the console in hex editor format, and so on.  If I need to pull something together quickly, I'm likely going to use a snippet of Perl code that I already have.  I'm sure that I'm no different from any other analyst in that regard.
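As an example of what I mean by "printing binary data to the console in hex editor format"...my snippets are Perl, but the same idea in Python (a sketch, not any particular tool's implementation) is only a few lines:

```python
# Sketch of a hex-editor-style console dump: offset, hex bytes, ASCII.
def hexdump(data, width=16):
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII as-is; everything else shown as '.'
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

print(hexdump(b"Hello, DFIR!\x00\x01\x02"))
```

Having something like this on hand means that when you're handed an unfamiliar blob of binary data, you can get eyes on it immediately without reaching for a separate hex editor.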

But when it comes to the particular language or approach I'm using, it depends on what I'm trying to achieve...what are my goals?  If I'm trying to put something together that I'm going to either have a client use, or leave behind after I'm done for the client to use, I may opt for a batch file.  If I need something quickly, but with more flexibility than is offered by a batch file, I may opt for a Perl script.

I've found over time that some of the tools written in these languages can be difficult to work with...what I mean by that is that some of the tools written and made available by their authors display their results in a GUI, and are closed source.  So, you can't see what a tool is doing, and you can't easily incorporate its output into techniques like timeline analysis.

Task-Specific Programming
Some folks have asked about resources for learning programming specifically for DFIR work; I'm not sure that there are any.  What it comes down to is, what do you want to do?  If you want to read binary data and parse out data structures based on some definition, then C/C++, C#, Python, Perl, and some other languages work well.  For many of these tasks, snippets of code are already available online.  If you're trying to process text-based log files, I've found Perl to be well-suited for the task, but if you're more familiar and comfortable with Python, go for it.
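As a concrete example of the binary-parsing case, here's a sketch in Python using the standard struct module.  The record layout here...magic value, timestamp, flags...is entirely made up for illustration; in practice the definition comes from whatever structure you're reversing or whatever documentation exists.

```python
import struct

# Hypothetical on-disk record, for illustration only:
# 4-byte little-endian magic, 8-byte timestamp, 2-byte flags.
RECORD = struct.Struct("<IQH")

def parse_record(buf, offset=0):
    """Unpack one record from buf at the given offset."""
    magic, timestamp, flags = RECORD.unpack_from(buf, offset)
    return {"magic": magic, "timestamp": timestamp, "flags": flags}

# Build a sample record, then parse it back out.
raw = struct.pack("<IQH", 0x4D5A, 130000000000000000, 0x0003)
print(parse_record(raw))
```

The format string is the "definition" mentioned above; once you've got that, walking a file full of fixed-size records is just a loop over offsets in RECORD.size increments.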

When it really comes down to it, it isn't about which programming language is "the best", it's about what your goal is, and how you want to go about achieving it.

Python tutorial
80+ Best Free Python Tutorials/books
List of free programming books, Python section