Blogosphere
Corey Harrell has another valuable post up, this one walking through an actual root cause analysis. I'm not going to steal Corey's thunder on this...the post or the underlying motive...but suffice it to say that performing a root cause analysis is critical, and it's something that can be done in an efficient manner. There's more to come on this topic, but this sort of analysis needs to be done, it can be done effectively and efficiently, and if you don't do it, it will end up being much more expensive for you in the long run.
Jimmy Weg started blogging a short while ago, and in very short order has already posted several very valuable articles. Jimmy's posts so far are tutorial in nature, and provide a great deal of information regarding the analysis of Volume Shadow Copies. He has a number of very informative posts available, including a very recent post on using X-Ways to cull evidence from VSCs.
Mari has started blogging, and her inaugural post sets the bar pretty high. I mentioned in this blog post that blogging is a great way to get started with respect to sharing DFIR information, and that even an initial blog post can lead to further research. Mari's post included enough information to begin writing another parser for the Bing search bar artifacts, if you're looking to write one that presents the data in a manner that's usable in your reporting format.
David Nides recently posted to his blog regarding what he discussed in his SANS Forensic Summit presentation. David's post focuses exclusively on log2timeline as the sole means for creating timelines, as well as on some of what he sees as the shortcomings with respect to analyzing the output of Kristinn's framework. However, that's not what struck me about David's post...rather, what caught my attention were statements such as, "for the average DFIR professional who is not familiar with CLI."
Now, don't think that I'm on the side of the fence that feels that every DFIR "professional" must be well-versed in CLI tools. Not at all. I don't even think that it should be a requirement that DFIR folks be able to program. However, I do see something...off...about a statement that includes the word "professional" along with "not familiar with CLI".
I've worked with several analysts throughout my time in the infosec field, and I've taught (and teach) a number of courses. I have worked with analysts who have used *only* Linux, and done so at the command line...and I have engaged with paid professionals tasked with analysis work who are only able to use one commercial analysis framework. So, while I am concerned by repeated statements in a post that seem to say, "...this doesn't work, because Homey don't play that...", I am also familiar with the reality of it.
Speaking of the SANS Forensic Summit, the Volatility blog has a new post up that is something of a different perspective on the event. Sometimes it can be refreshing to get away from the distractions of the cartoons, and it's always a good idea to get different perspectives on events.
Tools
The folks over at TZWorks have put together a number of new tools. Their Jump List parser works for both the *.automaticDestinations-ms Jump Lists and the *.customDestinations-ms files. There's a Prefetch file parser, a USB storage parser, and a number of other utilities freely available. The utilities are very useful, and are available for a variety of platforms (Win32 and 64-bit, Linux, Mac OS X).
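As an aside, if you want to see for yourself what's inside the *.automaticDestinations-ms files (they're OLE/structured storage containers holding numbered LNK streams plus a DestList stream), a minimal Python sketch along the following lines will list the streams. This is not the TZWorks parser, just a quick look, and it assumes the olefile module is installed; it does not parse the LNK records themselves.

# Minimal sketch: list the streams in an *.automaticDestinations-ms Jump List.
# Assumes the olefile module is installed; this is illustration only, not a parser.
import sys
import olefile

def list_jumplist_streams(path):
    ole = olefile.OleFileIO(path)
    try:
        for entry in ole.listdir():
            name = "/".join(entry)          # numbered streams hold LNK data; "DestList" is the index
            print("%-12s %d bytes" % (name, ole.get_size(name)))
    finally:
        ole.close()

if __name__ == "__main__":
    list_jumplist_streams(sys.argv[1])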
If you're not familiar with the TZWorks.net site, take a look and bookmark it. If you're downloading the tools for use, be sure to read the license agreement. Remember, if you're reporting on your analysis properly, you're identifying the tools (and the versions) that you used, and relying on these tools for commercial work without checking the license may come back to bite you.
Andrew posted to the ForensicsArtifacts.com site recently regarding MS Office Trust Records, which appear to be generated when a user trusts content via MS Office 2010. Andrew, co-creator of Registry Decoder, pointed out that Mark Woan's RegExtract parses this information; shortly after reading his post, I wrote a RegRipper plugin to extract the information, and then created another version of that plugin to extract the data in TLN format.

This information is very valuable, as it is an indicator of explicit user activity...when opening a document from an untrusted source, the user must click the "Enable Editing" button that appears in the application in order to proceed with editing it. This requires some additional testing to determine which actions cause the artifact to be populated, but for now, it clearly demonstrates user access to resources (i.e., network drives, external drives, files, etc.).

In the limited testing that I've done so far, the time stamp associated with the data appears to be when the document was created on the system, not when the user clicked the "Enable Editing" button. What I've done is download a document (MS Word .docx) to my desktop via Chrome, record the date and time of the download, and then open the file. When the "Enable Editing" button is present in the warning ribbon at the top of the document, I wait up to an hour (sometimes more) to click the button and record the time I did so. Once I do, I generally close the document. I then reboot the system, use FTK Imager to get a copy of the NTUSER.DAT hive, and run the plugin. In every case so far, the time stamp associated with the values in the key correlates to the creation time of the file, further evidenced by running "dir /tc".
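If you'd like to eyeball this data yourself, a rough Python sketch along these lines (not the RegRipper plugin itself, which is Perl) will dump the TrustRecords values from an exported NTUSER.DAT hive. It assumes the python-registry module is installed and that the Office 14.0 Word key path shown is the one of interest; the interpretation of the first 8 bytes as a FILETIME reflects my limited testing, so treat it accordingly.

# Minimal sketch: dump MS Office 2010 TrustRecords values from an exported NTUSER.DAT.
# Assumes the python-registry module; key path and FILETIME interpretation are as
# described in the post above, not definitive.
import sys
import struct
from datetime import datetime, timedelta

from Registry import Registry

TRUST_KEY = ("Software\\Microsoft\\Office\\14.0\\Word\\Security\\"
             "Trusted Documents\\TrustRecords")

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

def main(hive_path):
    reg = Registry.Registry(hive_path)
    try:
        key = reg.open(TRUST_KEY)
    except Registry.RegistryKeyNotFoundException:
        print("TrustRecords key not found")
        return
    for value in key.values():
        data = value.value()
        # In the data observed so far, the first 8 bytes appear to be a FILETIME
        # that tracks the document's creation time on the system.
        ft = struct.unpack("<Q", data[:8])[0]
        print("%s  %s" % (filetime_to_dt(ft).isoformat(), value.name()))

if __name__ == "__main__":
    main(sys.argv[1])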
Comments

Thank you for the blog mention...glad to know you found part of one sentence interesting.
>>I am also familiar with the reality of it.
What's the reality? The context of my entire sentence was parsing and reviewing timeline data. If you know of an efficient and effective CLI kung foo method, please share...
>>What's the reality?
From my perspective, the "reality" is that you're right.
The fact of the matter seems to be that no matter how useful a tool may be, if it's CLI, a great majority of the "community" won't be interested in using it...for the simple fact that it's CLI.
I've seen this a number of times. CLI tools seem to be a limiting factor for a lot of analysts.
I have a number of tools in my timeline analysis toolkit, but I've been told that there are too many, and they're all CLI. It doesn't matter that producing TLN output is only one of the formats that the tools can provide; nor does it matter that the tools have some built-in filtering capability to pull the analyst's attention to specific items. Most analysts tell me that they don't use them b/c they're CLI, and even seasoned analysts tell me that there are too many of them.
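For anyone who hasn't seen it, the TLN output referred to here is the commonly described five-field, pipe-delimited layout (time|source|host|user|description, with the time as a Unix epoch). Purely as an illustration of the kind of filtering mentioned above (this is not one of the tools in question), a minimal sketch might look like:

# Minimal sketch: filter five-field TLN lines (time|source|host|user|description)
# by source. Illustration only; field layout assumed as described above.
import sys

def filter_tln(lines, source):
    for line in lines:
        fields = line.rstrip("\n").split("|", 4)
        if len(fields) == 5 and fields[1] == source:
            yield line.rstrip("\n")

if __name__ == "__main__":
    # usage: filter_tln.py REG < events.txt
    for hit in filter_tln(sys.stdin, sys.argv[1]):
        print(hit)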
I've also seen people post to forums saying that they don't get into using SIFT b/c it's Linux, there are too many tools, etc.
So, I guess the point of my comment is that while I would hope that more analysts would use CLI tools where it is appropriate to do so, the reality of it is that this simply isn't the case...