Maybe there really isn't a whole lot going on at the moment...maybe adding something a little bit different, perhaps a different angle, will change things up just a bit.
Many of you know that I've had a couple of books published, and number 5...Windows Registry Forensics...will be out shortly. WFA 2/e has been pretty successful so far, and a number of folks have sort of off-handedly asked me about a third edition. To be honest, this is something that's up to the publisher...WFA 2/e may be successful enough for a third edition, or the publisher may decide that the title and subject have run their course....who knows?
However, the question itself got me thinking along a different road...that is, what does it take to write a book? In exploring material for a third edition, I took the opportunity to ask some folks what they thought should be included in that edition, only to get some very interesting responses. Those responses then got me thinking, "here's someone who does a lot of
If you're looking for a particular topic to write about, my publisher has a page up called "Write for Syngress" for those who may be thinking about writing a book, or who are just tossing the idea around.
I think that a lot of folks look for someone to write a book for them, without ever thinking that maybe they could write that book themselves. Time is always a factor, but based on my experience, there are a number of decisions that need to be made, and sometimes thinking through those things can actually get you over that hump.
As such, what I'm going to be doing over time is writing a series of blog posts about writing books. I'm going to offer up my thoughts and experiences, and hope that someone out there decides to put pen to paper (figuratively, or literally) and step out and just do it. Yes, writing a book is time consuming and it can be arduous, but it can also be very rewarding.
Mark Wade recently had an article published in the DFI News regarding Prefetch file analysis. The article is a good resource for those not familiar with Prefetch files and how they might be used. I do, however, have a question about one statement made in the article:
For instance if the Standard Information Attribute (SIA) and File Name Attribute (FNA) timestamps are modified in the Master File Table (MFT) to impede analysis...
In light of what's available in Brian Carrier's File System Forensic Analysis book, I'm a bit unclear as to how FNA timestamps would be purposely modified. Does anyone have any input on that? Is anyone seeing FNA timestamps being purposely and maliciously modified in the same manner as SIA timestamps?
That question aside, I think that Mark makes a number of excellent points and observations in his article, particularly that data embedded within the Prefetch files can be used to determine a number of interesting artifacts, such as errant removable storage devices or paths, "abnormal" or deleted user accounts, etc.
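To give a sense of the kind of embedded data Mark is talking about, here's a minimal sketch of pulling a few fields out of a Prefetch file. This is not a production parser...it assumes the publicly documented layout (the "SCCA" signature at offset 4, the executable name as a UTF-16LE string at offset 0x10, and last-run-time/run-count offsets of 0x78/0x90 for version 17 (XP/2003) and 0x80/0x98 for version 23 (Vista/Win7))...and it's demonstrated against a synthetic buffer rather than a real Prefetch file:

```python
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(ft):
    """Convert a 64-bit Windows FILETIME (100-nanosecond intervals
    since 1601-01-01 UTC) to a Python datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_prefetch(data):
    """Parse a few header fields from a Prefetch file (versions 17/23)."""
    version, sig = struct.unpack_from("<I4s", data, 0)
    if sig != b"SCCA":
        raise ValueError("not a Prefetch file")
    # Executable name: NULL-terminated UTF-16LE string at offset 0x10
    exe_name = data[0x10:0x10 + 60].decode("utf-16-le").split("\x00")[0]
    if version == 17:            # XP / 2003
        last_run_off, run_count_off = 0x78, 0x90
    elif version == 23:          # Vista / Windows 7
        last_run_off, run_count_off = 0x80, 0x98
    else:
        raise ValueError("unsupported Prefetch version: %d" % version)
    (last_run,) = struct.unpack_from("<Q", data, last_run_off)
    (run_count,) = struct.unpack_from("<I", data, run_count_off)
    return {
        "version": version,
        "exe_name": exe_name,
        "last_run": filetime_to_datetime(last_run),
        "run_count": run_count,
    }

# Demo on a hand-built, minimal version-17 buffer (NOT a real Prefetch file)
buf = bytearray(0x100)
struct.pack_into("<I4s", buf, 0, 17, b"SCCA")
buf[0x10:0x10 + 22] = "NOTEPAD.EXE".encode("utf-16-le")
struct.pack_into("<Q", buf, 0x78, 116444736000000000)  # FILETIME for 1970-01-01
struct.pack_into("<I", buf, 0x90, 5)
info = parse_prefetch(bytes(buf))
```

Combined with the file paths embedded later in the file, even this handful of fields can surface the kinds of artifacts Mark describes...run counts, last execution times, and executable names that don't line up with what the user says happened.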
Lance recently posted a "hard drive imaging process tree for basic training"; keeping in mind that it's for basic training, it's an excellent resource to look to and use.
There are a couple of things that you should consider adding if you're planning to use this decision tree for your own training purposes. For example, the first comment talks about adding chain of custody documentation...this is something that should definitely be done. Several other comments refer to collecting volatile memory...while Lance's responses are that this is covered in other modules, I tend to agree with Rob and Andrew...this is something that needs to be included at the basic level; we can no longer leave this to the advanced level.
The thing I am glad to see at the end of the entire process is "Verify Image". I can't say that it's happened too many times, but I do know of analysts who have gone on-site, acquired images, documented everything, and not verified their images. When they got back to the lab, they found that for some reason, they did not have valid images. This must be part of the process.
Great job, Lance, on producing an excellent resource! Thanks!
Oh, hey, almost forgot...thanks for updating the post to include volatile data collection!
Gl33da's got a new blog post up, this one on identifying memory images. If you do memory analysis, but don't actually perform the memory acquisition, this is an excellent post that's chock full of great information, in that she not only describes a method for determining the version of Windows from a memory image, but also notes that the method has already been incorporated into Volatility 1.4. Shoutz to gl33da for sharing!
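The general idea behind this kind of fingerprinting can be sketched in a few lines. This is an illustration, not gl33da's or Volatility's actual implementation: it assumes the documented debugger data block header layout (the OwnerTag "KDBG" followed by a 32-bit Size field), and the size-to-version table below is illustrative and far from exhaustive:

```python
import struct

# Illustrative (NOT exhaustive) mapping from KdDebuggerDataBlock
# size to Windows version; different builds use different sizes.
KDBG_SIZES = {
    0x0290: "Windows XP (x86)",
    0x0318: "Windows 2003 (x86)",
    0x0328: "Windows Vista (x86)",
    0x0340: "Windows 7 (x86)",
}

def guess_windows_version(dump):
    """Scan a raw memory image for debugger data block candidates.
    The header holds the OwnerTag 'KDBG' followed by a 32-bit Size
    field, and the Size differs between Windows versions, so it can
    serve as a rough fingerprint.  Returns (offset, version) hits."""
    hits = []
    pos = dump.find(b"KDBG")
    while pos != -1:
        if pos + 8 <= len(dump):
            (size,) = struct.unpack_from("<I", dump, pos + 4)
            if size in KDBG_SIZES:
                hits.append((pos, KDBG_SIZES[size]))
        pos = dump.find(b"KDBG", pos + 4)
    return hits
```

On a real image you'd want a tool like Volatility that validates the surrounding structure rather than trusting a bare string match, but the sketch shows why the technique works at all.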
If you are performing memory acquisition, are you using windd? Or are you using the MoonSols Windows Memory Toolkit Community Edition?
Addendum: Russ McRee pointed me to an article he wrote for InfoSecInstitute, titled "Security Incident Response Testing To Meet Audit Requirements". The article focuses primarily on the PCI DSS v2.0, specifically para. 12.9, which mandates the requirement for an IR capability, in addition to providing specifications. Overall, I think that the article is very good, and includes a good deal of the things I would do or recommend when writing or evaluating an IR plan.
The only real issue I have with Russ's article...and this comes from being QSA certified and on a QIRA team for far too long...is that the majority of organizations that I investigated had enough trouble just getting something down on paper, let alone a coherent IR plan. The likelihood of finding a comprehensive plan that had been tested, and an IR team that was trained and drilled on a regular basis, was slim to none. Now, this isn't all organizations obviously...this is simply based on my experience and the responses I performed.
I will say that Russ does have some very interesting information in the article that will not only benefit responders, but forensic analysts, as well as those who provide training to responders and analysts. Thanks, Russ, for pointing me to the article.