Tuesday, October 30, 2018

More Regarding IWS

IWS has been out for a short while now, and there have been a couple of reviews posted.  So far, it seems that there is some hesitance about the book, based on the form factor (size) as well as the price point.  Thanks to Jessica Hyde's feedback on that subject, the publisher graciously shared the following with me:

I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.

Hopefully, this discount code will bring more readers and DFIR analysts a step closer to the book.  I think that perhaps the next step is to address the content itself. I'm very thankful to Brett Shavers for agreeing to let me share this quote from an email he sent me regarding the IWS content:

As to content, I did a once-over to get a handle of what the book is about, now on Ch 2, and so far I think this is exactly how I want every DFIR book to be written.

I added the emphasis myself.  This book is something of a radical departure from my previous books, which I'd modeled after other books I'd seen in the genre, because that's what I thought folks wanted to see: mention an artifact, provide a description of what the artifact may mean (depending upon the investigation), maybe a general description of how that artifact may be used, and then provide the names of a couple of tools to parse the artifact.  After that, move on to the next artifact, and in the end, pretty much leave it to the reader to string everything together into an "investigation".  In this case, my thought process was to use images that were available online to run through an investigation, providing analysis decisions and pivot points along the way.  This way, a reader could follow along, if they chose to do so.

If you get a copy of the book and have a similar reaction to what Brett shared, please let me know.  If there's something that you like or don't like about the book, again, please let me know.  Do this through an email, a comment here on this blog, or a blog post of your own.  As illustrated by the example involving Jessica, if I know about something, I can take action and work to change it. 

How It Works
When a publisher decides to go forward with a book project, they have the author submit a prospectus describing the book, the market for the book, and any challenges that may be faced in the market; in short, the publisher has the author do the market research.  The prospectus is then reviewed by several folks; for the book projects I've been involved with, it's usually been three people in the industry.  If the general responses are positive, the publisher will move forward with the project. 

I'm sharing this with you because, in my experience, there are two things that the publisher looks at when considering a second edition: sales numbers and feedback from the first edition.  As such, if you like the content of the book and your thoughts are similar to Brett's, let me know.  Write a review on Amazon or on the Elsevier site, write your own blog post, or send me an email.  Let me know what you think, so that I can let the publisher know, and so that I can make changes or updates, particularly if they're consistent across several reviewers. 

If you teach DFIR, and find value in the book content, but would like to see something more, or something different, let me know.  As with Jessica's example, there's nothing anyone can do to take action if they don't know what you're thinking.

Sunday, October 28, 2018

Updates

Book Discount
While I was attending OSDFCon, I had a chance to (finally!) meet and speak with Jessica Hyde, a very smart and knowledgeable person, former Marine, and an all-around very nice lady.  As part of the conversation, she shared with me some of her thoughts regarding IWS, which is something I sincerely hope she shares with the community.  One of her comments regarding the book was that the price point put it out of reach for many of her students; I shared that with the publisher, and received the following as a response:

I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.

What this demonstrates is that if you have a question, thought, or comment, share it.  If action needs to be, or can be, taken, someone will take it.  In this case, my concern is the value of the book content to the community; Jessica graciously shared her thoughts with me, and as a result, I did what I could to bring the book closer to where others might have an easier time purchasing it.

So how can you share your thoughts?  Write a blog post or an email.  Write a review of the book, and specify what you'd like to see.  What did you find good, useful or valuable about the book content, and what didn't you like?  Write a review and post it to the Amazon page for the book, or to the Elsevier page; both pages provide a facility for posting a review.

Artifacts of Program Execution
Adam recently posted a very comprehensive list of artifacts indicative of program execution, in a manner similar to many other blogs and even books, including my own.  A couple of take-aways from this list include:

- Things keep changing with Windows systems.  Even as far back as Windows XP, there were differences in artifacts, depending upon the Service Pack.  In the case of the Shim Cache data, there were differences in data available on 32-bit and 64-bit systems.  More recently, artifacts have changed between updates to Windows 10.

- While Adam did a great job of listing the artifacts, something analysts need to consider is the context available from viewing multiple artifacts together, as a cluster, as you would in a timeline.  For example, let's say there's an issue in which when and how Defrag was executed is critical; creating a timeline using the user's UserAssist entries, the timestamps available in the Application Prefetch file, and the contents of the Task Scheduler Event Log can provide a great deal of context to the analyst (a rough sketch of this follows below).  Do not view the artifacts in isolation; seek to use an analysis methodology that allows you to see the artifacts in clusters, for context.  This also helps in spotting attempts by an adversary to impede analysis.
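
As a rough illustration of what that clustering looks like in practice, here's a minimal sketch (hypothetical file names; it assumes each artifact has already been exported in the five-field TLN format, time|source|system|user|description) that simply merges the per-artifact output files and sorts the events by time, so the cluster is visible in one place:

#! c:\perl\bin\perl.exe
# Hypothetical sketch: merge TLN output from several tools into one micro-timeline
use strict;

my @events;
# Assumes UserAssist, Prefetch, and Task Scheduler event data were already exported in TLN format
foreach my $file ("userassist.txt", "prefetch.txt", "taskscheduler_evtx.txt") {
    open(my $fh, "<", $file) || next;
    while (my $line = <$fh>) {
        chomp($line);
        my ($time) = split(/\|/, $line, 2);
        push(@events, [$time, $line]) if ($time =~ /^\d+$/);
    }
    close($fh);
}
# Sort on the epoch time in the first TLN field and print the cluster
foreach my $e (sort {$a->[0] <=> $b->[0]} @events) {
    print $e->[1]."\n";
}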

So, take-aways...know the version of Windows you're working with because it is important, particularly when you ask questions, or seek assistance.  Also, seek assistance.  And don't view artifacts in isolation. 

Artifacts and Evidence
A while back (6 1/2 yrs ago), I wrote about indirect and secondary artifacts, and included a discussion of the subject in WFA 3/e.

Chris Sanders recently posted some thoughts regarding evidence intention, which seemed to me to be along the same thought process.  Chris differentiates intentional evidence (i.e., evidence generated to attest to an event) from unintentional evidence (i.e., evidence created as a byproduct of some non-attestation function).

Towards the end of the blog post, Chris lists six characteristics of unintentional evidence, all of which are true.  To his point, not only may some unintentional evidence have multiple names, it may be called different things by the uninitiated, or those who (for whatever reason) choose to not follow convention or common practice.  Consider NTFS alternate data streams, as an example.  In my early days of researching this topic, I found that MS themselves referred to this artifact as both "alternate" and "multiple" data streams.

Some other things to consider, as well...yes, unintentional evidence artifacts are often quirky and have exceptions, which means they are very often misunderstood and misinterpreted.  Consider the example of the Shim Cache entry from Chris's blog post; in my experience, this is perhaps the most commonly misinterpreted artifact to date, for the simple fact that the time stamps are commonly referred to as the "date of execution".  Another aspect of this artifact is that it's often taken as standalone, and should not be...there may be evidence of time stomping occurring prior to the file being included as a Shim Cache record.

Finally, Chris is absolutely correct that many of these artifacts have poor documentation, if they have any at all.  I see this as a shortcoming of the community, not of the vendor.  The simple fact is that, as a community, we're so busy pushing ahead that we aren't stopping to consider the value of what we're leaving behind to the community as a whole.  Yes, the vendor may poorly document an artifact, or the documentation may simply be part of source code that we cannot see, but what we're not doing as a community is documenting and sharing our findings.  There have been too many instances during my years doing DFIR work when I would share something with someone who would respond with, "oh, yeah...we've seen that before", only to have no documentation, not even a Notepad file or something scribbled on a napkin, to which they could refer me.  This is a loss for everyone.

Saturday, October 20, 2018

OSDFCon Trip Report

This past week I attended the 9th OSDFCon...not my 9th, as I haven't been able to make all of them.  In fact, I haven't been able to make it for a couple of years.  However, this return trip did not disappoint.  I've always really enjoyed the format of the conference, the layout, and more importantly, the people.  OSDFCon is well attended, with lots of great talks, and I always end up leaving there with much more than I showed up with.

Interestingly enough, one speaker could not make it at the last minute, and Brian simply shifted the room schedule a bit to better accommodate people.  He clearly understood the nature of the business we're in, and the absent presenter suffered no apparent consequences as a result.  This wasn't one of the lightning talks at the end of the day, this was one of the talks during the first half of the conference, where everyone was in the same room.  It was very gracious of Brian to simply roll with it and move on.

The Talks
Unfortunately, I didn't get a chance to attend all of the talks that I wanted to see.  At OSDFCon, by its very nature, you see people you haven't seen in a while, and want to catch up.  Or, as is very often the case, you see people you only know from online.  And then, of course, you meet people you know only from online because they decide to drop in, as a surprise.

However, I do like the format.  Talk times are much shorter, which not only falls in line with my attention span, but also gets the speakers to focus a bit more, which is really great, from the perspective of the listener, as well as the speaker.  I also like the lightning talks...short snippets of info that someone puts together quickly, very often focusing on the fact that they have only 5 mins, and therefore distilling it down, and boiling away the extra fluff.

My Talk
I feel my talk went pretty well, but then, there's always the bias of "it's my talk".  I was pleasantly surprised when I turned around just before kicking the talk off to find the room pretty packed, with people standing in the back.  I try to make things entertaining, and I don't want to put everything I'm going to say on the slides, mostly because it's not about me talking at the audience, as much as it's about us engaging.  As such, there's really no point in me providing my slide pack to those who couldn't attend the presentation, because the slides are just place holders, and the real value of the presentation comes from the engagement.

In short, the purpose of my talk was that I wanted to let people know that if they're just downloading RegRipper and running the GUI, they aren't getting the full power out of the tool.  I added a command line switch to rip.exe earlier this year ("rip -uP") that will run through the plugins folder, and recreate all of the default profiles (software, sam, system, ntuser, usrclass, amcache, all) based on the "hive" field in the config headers of the plugin.
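
For context, each plugin carries a small %config hash in its header, and the "hive" field is what the -uP switch keys on when rebuilding the default profiles.  The snippet below is a representative example of that header (field names as I recall them from the plugin template; the values are illustrative, not taken from any specific plugin):

# Representative RegRipper plugin %config header (values are illustrative)
my %config = (hive          => "NTUSER\.DAT",   # hive the plugin applies to; used by "rip -uP"
              hasShortDescr => 1,
              hasDescr      => 0,
              hasRefs       => 0,
              osmask        => 22,
              version       => 20181030);       # plugin version (date)

sub getConfig { return %config; }
sub getHive   { return $config{hive}; }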

To-Do
Something that is a recurring theme of this conference is how to get folks who are new to the community to contribute and keep the community alive, as well as how to get folks already in the community to contribute.  Well, a couple of things came out of my talk that might be of interest to someone in the community.

One way to contribute is this...someone asked if there was a way to determine for which version of Windows a plugin was written.  There is a field in the %config header metadata that can be used for that purpose, but there's no overall list or table that identifies the Windows version for which a plugin was written.  For example, there are two plugins that extract information about user searches from the NTUSER.DAT hive, one for XP (acmru.pl) and one for Vista+ (wordwheelquery.pl).  There's really no point in running acmru.pl against an NTUSER.DAT from a Windows 7 system.

So, one project that someone might want to take on is to put together a table or spreadsheet that provides this list.  Just sayin'...and I'm sure that there are other ideas as to projects or things folks can do to contribute. 
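
As a rough starting point for such a project, a hypothetical helper script (not part of the RegRipper distribution) could walk the plugins folder and dump each plugin's name alongside the hive named in its %config header; the same regex approach would work for whichever %config field carries the Windows version information:

#! c:\perl\bin\perl.exe
# Hypothetical helper: list each plugin and the hive named in its %config header
use strict;

my $dir = shift || "plugins";
opendir(my $dh, $dir) || die "Unable to open $dir: $!\n";
foreach my $file (sort grep {/\.pl$/} readdir($dh)) {
    open(my $fh, "<", $dir."/".$file) || next;
    local $/;                                   # slurp the whole plugin
    my $code = <$fh>;
    close($fh);
    # Pull the "hive" field out of the %config hash
    my ($hive) = $code =~ /hive\s*=>\s*["']([^"']+)["']/;
    printf "%-25s %s\n", $file, (defined $hive ? $hive : "unknown");
}
closedir($dh);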

For example, some talks I'd love to see are how folks (not the authors) use the various open source tools that are available in order to solve problems.  Actually, this could easily start out as a blog post, and then morph into a presentation...how did someone use an open source tool (or several tools) to solve a problem that they ran into?  This might make a great "thunder talk"...a 10 to 15 min talk at the next OSDFCon, where the speaker shares the issue, and then how they went about solving it.  Something like this has multiple benefits...it could illustrate the (or a novel) use of the tool(s), as well as give DFIR folks who haven't spoken in front of a group before a chance to dip their toe in that pool.

Conversations
Like I said, a recurring theme of the conference is getting those in the community, even those new to the community, involved in keeping the community alive, in some capacity.  Jessica said something several times that struck home with me...that it's up to those of us who've been in the community for a while to lead the way, not by telling, but by doing.  Now, not everyone's going to be able to, or even want to, contribute in the same way.  For example, many folks may not feel that they can contribute by writing tools, which is fine.  But a way you can contribute is by using the tools and then sharing how you used them.  Another way to contribute is by writing reviews of books and papers; by "writing reviews", I don't mean a table of contents, but instead something more in-depth (books and papers usually already have a table of contents).

Shout Outz
Brian Carrier, Mari DeGrazia, Jessica Hyde, Jared Greenhill, Brooke Gottlieb, Mark McKinnon, Cory Altheide, Cem Gurkok, Thomas Millar, the entire Volatility crew, Ali Hadi, Yogesh Khatri, the PolySwarm folks...I tried to get everyone, and I apologize for anyone I may have missed!

Also, I have to give a huge THANK YOU to the Basis Tech folks who came out, the vendors who were there, and to the hotel staff for helping make this conference go off without a hitch.

Final Words
As always, OSDFCon was well-populated and well-attended.  There was a Slack channel established for the conference (albeit not by Brian or his team, but it was available), and the Twitter hashtag for the conference seems to have been pretty well-used.

To follow up on some of the above-mentioned conversations, many of us who've been around for a while (or more than just "a while") are also willing to do more than lead by doing.  Many of us are also willing to answer questions...so ask.  Some of us are also willing to mentor and help folks in a more direct and meaningful manner.  Never presented before, but feel like you might want to?  Some of us are willing to help in a way that goes beyond just sending an email or tweet of encouragement.  Just ask.

Sunday, October 07, 2018

Updates

IWS
Folks have started receiving the copies of IWS they ordered, and folks like Joey and Mary Ellen have already posted reviews!  Mary Ellen has also gone so far as to post her review on the Amazon page for the book!

Some have also pointed out that the XP image from Lance's practical is no longer available.  Sorry about that, but I was simply using the image; I don't have access to, nor control over, the site itself.  However, the focus of the book is the process, and choosing to use available images would, I thought, provide more value, as readers could follow along.

Addendum, 7 Oct: Thanks to the wonderful folks from the TwitterVerse who pointed out archive.org as a resource, the XP image can be found here!

Speaking of images, I got an interesting tweet the other day, asking why Windows 10 wasn't mentioned in ch. 2 of the book.  The short answer is two-fold: one, because it wasn't used or addressed.  For the second part of the answer, I'd refer back to a blog post I'd written two years ago when I started writing IWS, specifically the section of the post entitled "The "Ask"".  Okay, I know that there's a lot going on in the TwitterVerse, and that two years is multiple lifetimes in Internet time.  And I know that not everyone sees or ingests everything, and for those who do see things or tweets, if they have no relevance at the time, then "meh".  I get it.  I'm subject to it myself.

Okay, so, just to be clear...I'm not addressing that tweet in order to call someone out for not paying attention, or missing something.  Not at all.  I felt that this was a very good opportunity to provide clarity around, and set expectations regarding, the book, now that it's out.  The longer response to the tweet question, the one that doesn't fit neatly into a tweet, is also two-fold; one, I could not find a Windows 10 image online that would have fit into that chapter.  The idea at the core of writing the book was to provide a view into the analysis process, so that analysts could have something with which they could follow along.

The second part of the answer is that it's about the process; the analysis process should hold regardless of the version of Windows examined.  Yes, the technical and tactical mechanics may change, but the process itself holds, or should hold.  So, rather than focusing on, "wow, there's a whole section that addresses Windows XP...WTF??", I'd ask that the focus be on documenting an analysis plan, documenting case notes, and documenting what was learned from the analysis, and then rolling that right back into the analysis process.  After all, the goal of the book is NOT to state that this is THE way to analyze a Windows system for malware, but to show the value of having a living, breathing, growing, documented, repeatable analysis process.

Also, I was engaged in analyzing systems impacted by NotPetya during the early summer of 2017.  Another analyst on our team received several images from an impacted client, all of which were XP and 2003.  So, yes, those systems are still out there and still actively being used by clients.

Books
One of the challenges of writing books is keeping people informed as to what's coming, and giving them the opportunity to have input.  For example, after IWS was published, someone who follows me on social media said that they had no idea that there was a new book coming out.  I wanted to take the opportunity (again) to let others know what was coming, what I'm working on, in an effort to not just set expectations, but to see if anyone has any thoughts or comments that might drive the content itself.

This new book is titled Practical Windows Investigations, and the current chapters are:

1. Core Concepts
2. How to analyze Windows Event Logs
3. How to get the most out of RegRipper
4. Malware Detection
5. How to determine data exfiltration
6. File (LNK, DOCX/DOC, PDF) Analysis
7. How to investigate lateral movement
8. How to investigate program execution
9. How to investigate user activity
10. How to correlate/associate a device with a user (USB, Bluetooth)
11. How to detect/analyze the use of anti-forensics
12. Making use of VSCs

PWI differs from the current IWS in that it's about halfway between my previous books and IWS.  What I mean by that is my previous books listed artifacts, how to parse them, their potential value during an investigation, but left it to the analyst to stitch the analysis together.  IWS was more of a cradle-to-grave approach to an investigation, relying on publicly available images so that a reader could follow along, if they chose to do so.  As such, IWS was somewhat restricted to what was available; PWI is intended to address some of those things that weren't available through the images used in IWS.

I'm going to leave that right there...

RegRipper Plugins
I recently released a couple of new plugins.  One is "appkeys.pl", which addresses an interesting persistence mechanism, based on Adam's blog post.  Oh, and there's the fact that it's been seen in the wild, too...so, yeah.

The other is "slack.pl", which extracts slack space from Registry cells, and parses the retrieved data for keys and values.  In my own testing, I've got it parsing keys and values, but just the data from those cell types.  As of yet, I haven't seen a value cell, for example, that included the value data, just the name.  It's there if you need it, and I hope folks find value in it.

LNK Parsing
While doing some research into LNK 'hotkeys' recently, I ran across Adam's blog post regarding the use of the AppKey subkeys in the Registry.  I found this pretty fascinating, even though I do not have media keys on my keyboard, and as such, I wrote a plugin (aptly named "appkeys.pl") to pull this information from the Registry.  I also created "appkeys_tln.pl" to extract those subkeys with "ShellExecute" values, and send the info to STDOUT in TLN format.

Adam also pointed out in his post that this isn't something that was entirely theoretical; it's been seen in the wild.  As such, something like this takes on even greater significance.

Adam also provided a link to MS's keyboard mappings.  By default, the subkey numbered "17" points to a CLSID, which translates to "My Computer".

Fun with Flags
There was a really interesting Twitter thread recently regarding a BSides Perth talk on APT LNK files.  During the thread, Nick Carr pointed out that MS had recently updated their LNK format specification documentation.  During the discussion, Silas mentioned the LinkFlags field, and I thought, oh, here's a great opportunity to write another blog post, and work in a "The Big Bang Theory" reference.  More to the point, however, I thought that by parsing the LinkFlags field, there might be an opportunity to identify toolmarks from whatever tool or process was used to create the LNK file.  As such, I set about updating my parser to not only look for those documented flags that are set, but to also check the unused flags.  I should also note that Silas recently updated his Python-based LNK parser, as well.
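
As a rough sketch of what checking those flags looks like (this is just an illustration based on the MS-SHLLINK header layout, not my parser nor Silas's, and only a handful of the documented flag names are shown), you can read the LinkFlags DWORD at offset 0x14 of the header and report which bits are set, including the high bits the specification leaves unused:

#! c:\perl\bin\perl.exe
# Minimal sketch: dump the LinkFlags field from an LNK file header
use strict;

my $file = shift || die "Please provide an LNK file\n";
open(my $fh, "<", $file) || die "Unable to open $file: $!\n";
binmode($fh);
my $hdr;
read($fh, $hdr, 76);                              # the shell link header is 0x4C bytes
close($fh);

my $flags = unpack("V", substr($hdr, 0x14, 4));   # LinkFlags DWORD at offset 0x14
printf "LinkFlags: 0x%08x\n", $flags;

# A few of the documented flags (see the MS-SHLLINK specification for the full list)
my %documented = (0x01 => "HasLinkTargetIDList",
                  0x02 => "HasLinkInfo",
                  0x04 => "HasName",
                  0x08 => "HasRelativePath",
                  0x10 => "HasWorkingDir",
                  0x20 => "HasArguments",
                  0x40 => "HasIconLocation",
                  0x80 => "IsUnicode");
foreach my $bit (sort {$a <=> $b} keys %documented) {
    printf "  %-20s set\n", $documented{$bit} if ($flags & $bit);
}
# Bits 27-31 are undefined in the spec; if any are set, that may be a toolmark
printf "  Undefined high bits set: 0x%08x\n", ($flags & 0xF8000000) if ($flags & 0xF8000000);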

During a follow-on exchange on Twitter on the topic, @Malwageddon pointed me to this sample, and I downloaded a copy, naming it simply "iris" on my analysis system.  I had to disable Windows Defender on my system, as downloading it or accessing it in any way, even via one of my tools, causes the file to be quarantined.

Doing a Google search for "dikona", I found this ISC handler post, authored by Didier Stevens. Didier's explanation is very thorough.

In order to do some additional testing, I used the VBS code available from Adam's blog post to create a LNK file that includes a "hotkey" field.  In Adam's example, he uses a hotkey that isn't covered in the MS documentation, and illustrates that other hotkeys can be used, particularly for malicious purposes.  For example, I modified Adam's example LNK file to launch the Calculator when the "Caps Lock" key was hit; it worked like a champ, even when I hit the "Caps Lock" key a second time to turn off the functionality on my keyboard.  Now, imagine making that LNK file hidden from view on the Desktop...it does make a very interesting malware persistence method.

Additional Stuff:
Values associated with the ShowWindow function - the LNK file documentation describes a 4-byte ShowCommand value, and only includes 3 values with their descriptions in the specification; however, other values can be used, as demonstrated in Adam's post.
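
If it helps to see where those fields live, here's a minimal sketch (again, just an illustration based on the MS-SHLLINK header layout) that pulls the ShowCommand and HotKey values from the header; the specification only describes SW_SHOWNORMAL (0x1), SW_SHOWMAXIMIZED (0x3), and SW_SHOWMINNOACTIVE (0x7) for ShowCommand:

#! c:\perl\bin\perl.exe
# Minimal sketch: report the ShowCommand and HotKey values from an LNK header
use strict;

my $file = shift || die "Please provide an LNK file\n";
open(my $fh, "<", $file) || die "Unable to open $file: $!\n";
binmode($fh);
my $hdr;
read($fh, $hdr, 76);
close($fh);

my $show   = unpack("V", substr($hdr, 0x3C, 4));   # ShowCommand DWORD at offset 0x3C
my $hotkey = unpack("v", substr($hdr, 0x40, 2));   # HotKey WORD at offset 0x40

# Only three ShowCommand values are documented in the specification
my %show_vals = (0x1 => "SW_SHOWNORMAL",
                 0x3 => "SW_SHOWMAXIMIZED",
                 0x7 => "SW_SHOWMINNOACTIVE");
printf "ShowCommand: 0x%x (%s)\n", $show,
       (exists $show_vals{$show} ? $show_vals{$show} : "undocumented value");

# Low byte holds the virtual key code, high byte holds the modifier flags
printf "HotKey     : key code 0x%02x, modifiers 0x%02x\n", $hotkey & 0xFF, ($hotkey >> 8) & 0xFF;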

Support in the Industry
On 4 July, Alexis tweeted regarding the core reasons that should be behind our motivation for giving back to the community.  Yes, I get that this tweet was directed at content producers, as well as those who might be thinking about producing content.  His statement about the community not owing us engagement or feedback is absolutely correct, however disheartening I might have found that statement, and the realization, to be.  But like I said, he's right.  So, if you're going to share something, first look at why you're sharing.  If you're doing it to get feedback (like I very often do...), then you have to accept that you're likely not going to get it.  If you're okay with that, cool...fire away.  This is something I've had to come to grips with, and doing so has changed the way (and what) I share.  I think it also shapes how others share, as well.  What I mean is, why put in the effort of a thorough write-up in a blog post or an article, publishing it somewhere, when it's so much easier to just put it into a tweet (or two, or twelve...)?  In fact, by tweeting it, you'll likely get much more feedback (in likes and RTs) than you would otherwise, even though tweeted content has a lifespan comparable to that of a fruit fly.

More recently, Alexis shared this blog post.  I thought that this was a very interesting perspective to take, given that when I've engaged with others specifically about just offering a "thank you", I've gotten back some pretty extreme, absolutist comments.  For example, when I suggested that if someone shares a program or script that you find useful, you should say "thank you", one tweeter responded that he's not going to say "thank you" every time he uses the script.  That's a little extreme, not what I intended, and not what I was suggesting at all.  But I do support Alexis' statement; if you find value in something that someone else put out there, express your gratitude in some manner.  Say "thank you", write a review of the tool, comment on the blog post, whatever.  While imposter syndrome appears to be something that an individual needs to deal with, I think that, as a community, we can all help others overcome their own imposter syndrome by providing some sort of feedback.

As a side note, the imposter syndrome is not something isolated to the DFIR community...not at all.  I've talked to a number of folks in other communities (threat intel, etc.) who have expressed realization of their own imposter syndrome.

Alexis also shared some additional means by which the community can support efforts in the field, and one that comes to mind is the request made by the good folks at Arsenal Recon.  Their image mounter, called "AIM", provides a great capability, one that they're willing to improve with support from the community.

Sunday, September 23, 2018

First Review of IWS

The first written review of IWS comes from Joey Victorino, a consultant with the IBM X-Force IRIS team.  Joey sent me his review via LinkedIn, and graciously allowed me to post it here.  Here are Joey's thoughts on the book, in his own words:

I've been a fan of Harlan Carvey ever since his first release of Windows Forensic Analysis, 2nd Edition book years ago when I first entered the digital forensics environment. Initially, I had reservations that the new book "Investigating Windows Systems" would be a bit simplistic as I've read quite a bit of forensics books and attended multiple SANS Courses. However, I was wrong, and I really enjoyed this book - especially because Harlan Carvey is an awesome analyst. Essentially, it looks into different scenarios a DFIR professional will encounter throughout their career, by unpacking these scenarios through the eyes of a professional analyst. After the initial evidence is triaged it is then broken down into a conversation about the scenario, with clear examples of what the artifacts are like in this stage of the investigation and then provides practical examples in identifying actionable data and leveraging that as a pivot point to uncover more data. Because of the spread of knowledge, I found it very interesting and very useful to cover off areas where I was a bit unfamiliar with the subject matter. My favorite was "Chapter 4" as it went into Ali Hadi’s “Web Server Case” in a fantastic manner. I've been using this challenge as a method to train junior analysts and another IT professional moving into the DFIR field. His approach to solving the exercise was a great example, of being a consultant performing on-site DFIR with a focus on getting the answers needed quickly, and in a proper manner to be able to allow clients to make important business decisions. Overall, the greatest message learned from this book is that even though there are many different tools, the most effective skill a forensic analyst can have is being able to investigate properly, by analyzing the correct data, and using it to clearly answer the important questions. This would be an excellent book to not only have on the shelf but read and actively reference for DFIR practitioners of all experience levels. Joey Victorino – Consultant IBM X-Force IRIS

Thanks, Joey, for your kind words, and thanks for taking the time to share your thoughts on the book.  Most importantly, thank you for purchasing and reading my book!  My hope is that others find similar interest and value in what I've written.

Addendum, 26 Sept: Mary Ellen posted a review, as well!

Tuesday, September 18, 2018

Book Writing

With the release of my latest book, Investigating Windows Systems, I thought that now would be a good time to revisit the topic of writing books.  It occurred to me, watching some of the activity and comments on social media, that there were likely folks who hadn't seen my previous posts on this topic, let alone the early material I'd posted about the book, particularly what I'd written about the book two years ago.

As I said, I've blogged on the topic of writing (DFIR) books before:
17 Dec 2010
26 Dec 2010
28 Mar 2014
29 Mar 2014
16 Feb 2018

There are some things about writing books that many folks out there simply may not be aware of, particularly if they haven't written a book themselves.  One such item, for example, is that the author doesn't own the books.  I had a sales guy set up an event that he wanted me to attend, and he suggested that I "bring some books to sell".  I don't own the books to sell, and as far as I'm aware, I'm not PCI compliant; I don't process credit cards.

Further, authors have little, if any, control over...well, anything beyond the content, unless they're self-published.  For example, I'm fully aware that as of 17 Sept 2018, the image on the Amazon page for IWS is still the place holder that was likely used when the page was originally set up.  I did reach out to the publisher about this, and they let me know that this is an issue that they've had with the vendor, not just with my book but with many others, as well.  While I can, and do, market the books to the extent that I can, I have no control over, nor any input into, what marketing the publisher does, if any.  Nor do authors have any input or control over what the publisher's vendors do, or don't do, as the case may be.

Addendum, 19 Sept: I checked again today, and the Amazon page for the book has been updated with the proper book cover image.

Taking a quick look through a copy of the Investigating Windows Systems book, I noted a few things that jumped out at me.  I did see a misspelled word or two, read a sentence here and there that seemed awkward and required a re-read (and maybe a re-write), and noticed an image or two that could have been bigger and more visible.  Would the book have looked a bit better if it had a bit bigger form factor?  Perhaps.  Then it would have been thinner.  However, my point is that I didn't have any input into that aspect of the book.

I'm also aware that several pages (pp. 1, 45, 73, 97) have something unusual at the bottom of the page.  Specifically, "Investigating Windows Systems. DOI: https://doi.org/10.1016/{random}", and then immediately below that, a copyright notice.  Yes, this does stand out like a sore thumb, and no, I have no idea why it's there.

If you purchased or received a copy of the book, I hope you find value in it.  As I've said before, I wanted to take a different approach with this book, and produce something new and, I hope, useful.

Monday, September 17, 2018

IWS is out!

My latest book, Investigating Windows Systems (Amazon, Elsevier) is out, and it seems that some have already received their ordered copies.  Very cool.  For anyone who's ordered a copy, I thank you and I hope you find some value in it.

I've blogged about the book several times (beginning here) since I started writing it, and I wanted to take a moment, now that it's out, to maybe clear up what may be some misconceptions about the book, because this book isn't at all like any of my previous books.

My goal in writing the book was to demonstrate the analysis process, and provide the analysis decisions I made during that process.  I wanted to write a book like this, because as with my other books, I hadn't seen anything out there (blogs, books, etc.) that provided something similar.  When I've had time to reflect on and look back over my own analysis engagements, this is something that has interested and fascinated me.  It's also made its way into conversations, but has also been something that has proven difficult when engaging with others; that is, understanding what led a particular analyst down a particular investigative route, to choose a particular tool, or to pivot on a particular artifact or piece of data.

As such, this book does not cover things like the basic usage of the mentioned or described tools, as these topics have been covered before, and anyone can look the usage up.  Also, the very basics of constructing a timeline are not addressed, as that topic has been covered extensively in other resources.

While writing the book, I used available images from several online sources (and with permission) as a backdrop against which to demonstrate the analysis process, as well as to illustrate the analysis decisions made during that process.

What the book is not is a walk-through of each CTF or forensic challenge employed.  That is to say, when engaging in analysis of a particular image, I do not simply walk through the posted challenge.  There's nothing wrong with the posted challenges at all, and most of the ones I've seen are quite good.  However, in two decades of performing DFIR work, I have yet to engage with a client that had 37 questions they wanted me to answer. I wanted to present the analysis based on (in my experience) as close to real world engagements as I could.

As I said, I used available images as the basis for the analysis I was performing.  I did this intentionally, as it provides an opportunity for the reader to follow along, or to try their own hand at analyzing the images.  The images range from Windows XP to Windows 7 and 10, and there's even a Windows 2008 image in there, as well. 

In addition, the tools used in the book are all free and open source tools.  The reason for this was two-fold: first, I wanted readers to see (as much as possible) and understand what was being done, so that when it came time to decide upon a commercial forensic suite to use, they could make better-educated decisions.  Second, being just one person, I do not have access to commercial forensic suites.  The same goes for the images used; I do not have access to MSDN, nor to other images of compromised systems, and decided to make use (again, with permission) of what was available.

For anyone who purchased the book, thank you.  I hope you find value in it, enough so that you opt to write a review, or simply provide feedback.  Thanks.

Saturday, September 15, 2018

A lil Saturday morning computer programming

I've wanted to update lfle.pl for some time, and with the rain this weekend, Saturday morning was the perfect time to do so.  For those who may not be aware, lfle.pl is a script I have for extracting WinXP/2003-style Event Log records from unstructured data.  This means that it can be run against Event Log files, page files, unallocated space, and even memory dumps in order to extract these records.  When it finds a valid EVT record, it dumps the contents in TLN format, so that a timeline can easily be generated.

Okay, so the obvious elephant in the room...why bother updating this tool?  Well, in the summer of 2017, the world saw the NotPetya incident.  During this incident, one of the analysts on the team I was working with got several Windows XP and 2003 systems in for analysis. 

I downloaded the CFReDS hacking case image, and used FTK Imager to create a single, raw/dd-style image.  I then exported the unallocated space from the image file, using the following commands:

mmls -i raw -t dos g:\cfreds\image.001
blkls -i raw -f ntfs -A -o 63  g:\cfreds\image.001 > g:\cfreds\image_unalloc

Next, I ran lfle.pl against the resulting file:

lfle.pl -f g:\cfreds\image_unalloc

This command yielded no results.  Okay, next I exported the Event Log files from the image and ran lfle.pl against each one.  Forty records were found in the AppEvent.evt file, and 141 in the SysEvent.evt file.

Troubleshooting
Interestingly, I got no results running lfle.pl against the Security Event Log file, using the following command:

lfle.pl -f g:\cfreds\image\secevent.evt

The prompt simply returned.  Okay, so let's do some troubleshooting.  Using the "-s" switch to display statistics, I get the following results:

lfle.pl -f g:\cfreds\image\secevent.evt -s

Small records skipped     : 1
Large records skipped     :
Malformed records skipped :
Records retrieved         :

Okay, that's interesting. Only one "small" (i.e., less than 0x30 bytes) record was located.  Going with the "-d" switch to enable debugging output, I get the following:

lfle.pl -f g:\cfreds\image\secevent.evt -d

Magic number located at offset 0x4 with length of 48 bytes
0x00000000   30 00 00 00 4C 66 4C 65 01 00 00 00 01 00 00 00   0...LfLe........
0x00000001   30 00 00 00 30 00 00 00 01 00 00 00 00 00 00 00    0...0...........
0x00000002   00 00 01 00 00 00 00 00 80 3A 09 00 30 00 00 00    .........:..0...

My final step was to open the secevent.evt file in a hex editor, and lo and behold, I found that there was just one record visible in the file, the one illustrated above.  Following the record, there is the "cursor", which is 0x28 bytes, and does not contain the "LfLe" magic number.  After that, what remained of the file was all zeros.  So, that explains the results; it's not that the tool is "broken" or "doesn't work", but instead that there's nothing else in the file to display.

Hibernation File
The image also contains a hibernation file, which I exported from the image.  Using Volatility 2.6, I checked which profile would work for the image:

vol -f g:\cfreds\image\hiberfil.sys imageinfo
*Suggested Profile(s) : WinXPSP2x86, WinXPSP3x86 (Instantiated with WinXPSP2x86)

I then converted the hibernation file to raw format:

vol -f g:\cfreds\image\hiberfil.sys --profile=WinXPSP2x86 imagecopy -O g:\cfreds\image\mem.raw

Finally, I ran lfle.pl against it, with all switches enabled:

lfle.pl -f g:\cfreds\image\mem.raw -d -s > g:\cfreds\image\mem_lfle_out.txt

With everything turned on and being collected in the output file, it's kind of messy, with valid records mixed in with debugging info, etc.  However, the statistics displayed at the end of the file tell us the story:

Small records skipped     : 22
Large records skipped     : 2
Malformed records skipped : 4
Records retrieved         : 95

So, 22 "small" records were skipped, 2 "large" (i.e., 0x1000 bytes or larger) were skipped, 4 malformed (size values that bracket the record were not identical) records were skipped, and a total of 95 valid records were retrieved.  Very nice.
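
For anyone curious about how those checks work, they're pretty straightforward; here's a minimal sketch of validating a candidate record at a given offset within a buffer (this is just an illustration of the checks described above, not lfle.pl itself):

# Minimal sketch: validate a candidate EVT record at offset $ofs within $data
sub check_record {
    my ($data, $ofs) = @_;
    my $size = unpack("V", substr($data, $ofs, 4));            # leading size DWORD
    return "too small"  if ($size < 0x30);                      # "small" record
    return "too large"  if ($size >= 0x1000);                   # "large" record
    my $magic = substr($data, $ofs + 4, 4);
    return "no magic"   unless ($magic eq "LfLe");               # "LfLe" magic number
    my $end_size = unpack("V", substr($data, $ofs + $size - 4, 4));
    return "malformed"  unless ($end_size == $size);             # sizes must bracket the record
    return "valid";
}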

Both the Perl script and the portable .exe version of the tool can be found on the GitHub repository.

Friday, September 14, 2018

Random Stuff

I decided to put this post together because some things just need to persist beyond the typical Twitter life cycle.  The focus here is free and open source tools that can be used on Windows to investigate/parse/enable analysis of Windows artifacts.  It's not my intention to take anything away from current repositories of such tools, such as the DFIRTraining site, but rather to bring these tools to the forefront again.

Random
Windows 10 Oct 2018 update includes clipboard history and cloud sync

Ryan Kovar shared this resource regarding detailed properties in O365 audit logs

Registry
Maxim tweeted (on 4 Sept, I just saw it today) that yarp-carver had been run against the image from the LoneWolf scenario and recovered a good deal of Registry data.

Here's a great explanation of ShimCache data from the folks at Mandiant.

yarp tools - be sure to follow Maxim on Twitter (ex: tweet regarding yarp-carver)

Windows 10 Timeline
Matthew Seyer wrote up a nice article over on Medium regarding a tool that he wrote to parse the Windows 10 Timeline database (SQLite format).  In that article, he also referred to Eric Zimmerman's WxTCmd tool, which can be found here.

Anytime you're working with an SQLite database, be sure to incorporate Mari's SQLite deleted data parser (blog, Github)

Paper: A Forensic Exploration of the Microsoft Windows 10 Timeline

Windows 10 Notification Database
Yogesh's post - 2016
David Cowen's post - 2018
Malware Maloney's post on parsing the .wal file - 2018

Windows Event Logs
Tools for parsing Windows Event Log (*.evtx) files:
LogParser - MS's tool
parse_evtx.exe - KasperskyLab ForensicTools (x64)
Evtx2json - includes experimental support for EVTXtract output
EVTXtract - Willi Ballenthin's Python code (presentation)
EvtxParser - Andreas Schuster's Perl code (here's some more info on getting it installed)
EventCleaner - reportedly will allow you to remove EVTX records

VSCs
I blogged about accessing VSCs recently (actually, I blogged about it twice...), and I wanted to include the information in this post.

Something to be clear about...the version of Arsenal's Image Mounter tool used here is the one from GitHub, NOT the one discussed here.  Yes, one of the issues I ran into when seeking assistance in this endeavor was that there is more than one tool with the same name, and that presented some challenges in communication.

My hope is that the version found on Github is updated to include the ability to mount raw/dd-style images via "Write-Temporary".

Here's a tweet about a presentation regarding recovering deleted VSCs using vss-carver.

DFIRTraining list of tools -

Parsing RDP Cache Files
remotecache.py -
bmc-tools.py -
Link to tools at DFIRTraining site

I'm more than happy to add to this list as new things come in.

Thursday, September 06, 2018

Accessing Volume Shadows, re-revisited

As a follow-on to my previous post, I wanted to provide a concise summary or overview of the processes for accessing VSCs.

First, a couple of important factors regarding this exercise:

The goal of this exercise was to identify and validate processes for accessing VSCs within acquired images, using free and open source tools on a Windows analysis system.

The source image file was from Digital Corpora's Lone Wolf scenario (downloaded *.e0x files, also converted to raw/dd).  Note that when using mmls.exe to view the partition table, the partition type is "gpt", not "dos".  This is part of the reason I wanted to use this image, to see if the tools used have any issues with different partition types.  The other reason for using this image is that it is (relatively) easily accessible by almost anyone, and anything I did can be validated using the image.

Processes

*.e0x
Arsenal Image Mounter (Mount through libewf)
  |__ ShadowExplorer (v0.9)

*After I posted this article, Eric Zimmerman pointed me to his VSCMount tool, which would be a great alternative to ShadowExplorer.  Note that if you're using VSCMount, you won't be able to access the VSCs via Windows Explorer, but Eric was able to use PowerShell to navigate the file system.  Similarly, I had no issues using a command prompt.  There simply appears to be an issue when trying to use Windows Explorer.

raw/dd #1
Arsenal Image Mounter (Mount raw image)
  |__ vssadmin
          |__ vss (X:\)
                |__ FTK Imager (add X:\ as logical drive evidence item)

raw/dd #2
mmls
  |__vshadowinfo/vshadowmount (requires Dokan 0.7.4)
         |__ access individual VSCs via FTK Imager (Image File)

raw/dd #3
Convert to *.e0x format, use *.e0x process (above)

I've been able to repeatedly replicate raw/dd processes #1 and #2 on several other images to which I have access.

Addendum, 7 Oct: I ran across Andrea Fortuna's blog post that addresses a few means for accessing VSCs, as well.  At this point, I've already submitted the chapter of PWI that addresses this topic for technical review; this doesn't mean that there isn't time to address other methods.  In fact, I'm going to wait until just prior to the due date to submit the final chapter, as I'm keeping an eye on some tools to see if they're updated before the final submission of the chapter, and the overall manuscript.

Tuesday, September 04, 2018

Accessing Volume Shadows, Revisited

Almost 3 yrs ago, I published this blog post regarding accessing VSCs via several different means.  I recently did some testing, revisiting the methods, and oddly enough, I couldn't get several of them to work.  I posted to Twitter, but due to the nature of the medium, things started to get out of hand.  As such, I thought I'd post what I'd done, so that others could try the same methods, and see if maybe they could get the processes to work.

My analysis system is Windows 10; when I open a command prompt, I get "Microsoft Windows [Version 10.0.17134.228]".

I downloaded the *.e01 image files from the LoneWolf scenario that is available via the Digital Corpora site.  The image is of a physical Windows 10 system.  I then used FTK Imager to convert the split *.e0x image files into a unified raw/dd-style image, and I used that file (i.e., lonewolf_dd.001) in all of my testing.

Vhd Method
This method is defunct, in part because it no longer seems to work.

I did try vhdxtool, as well, but that didn't work at all...I'm not working with a .vhd file that needs to be converted to a .vhdx file, I'm using a raw/dd-style image.

Libvshadow/vshadowmount
This method requires Dokan 0.7.4 and pretty much follows along from the Libvshadow section of my previous post.  For this attempt, I used the extended libvshadow tools provided with vss_carver.py.

First, I used mmls (from the TSK tools) to determine the offset to the partition of interest:

mmls -i raw -t gpt g:\lonewolf\lonewolf_dd.001

From that, I saw:

     Slot    Start        End          Length       Description
00:  Meta    0000000000   0000000000   0000000001   Safety Table
01:  -----   0000000000   0000002047   0000002048   Unallocated
02:  Meta    0000000001   0000000001   0000000001   GPT Header
03:  Meta    0000000002   0000000033   0000000032   Partition Table
04:  00      0000002048   0001023999   0001021952   Basic data partition
05:  01      0001024000   0001226751   0000202752   EFI system partition
06:  02      0001226752   0001259519   0000032768   Microsoft reserved partition
07:  03      0001259520   1000214527   0998955008   Basic data partition
08:  -----   1000214528   1000215216   0000000689   Unallocated

The partition we're interested in is the last "Basic data partition" (entry 07), and the offset is 1259520 sectors, or 1259520 x 512 = 644874240 bytes.  Using that information, I can run vshadowinfo (I don't have to, but I can):

vshadowinfo -o 644874240 g:\lonewolf\lonewolf_dd.001

From the output of the above command, I can see two VSCs, which is what I expected.  Now, for vshadowmount:

vshadowmount -o 644874240 g:\lonewolf\lonewolf_dd.001 x:\

After running this command, do not expect the command to exit...it won't, until you hit Control-C.  I minimized the command prompt (which was running with Admin privileges) and could see "VSS1" and "VSS2" in the X:\ volume via Windows Explorer.

Next, I opened FTK Imager 4.2.0.13, and added the X:\ volume as a logical drive evidence item, or I tried to.  I got an error dialog that said simply "Incorrect function. (1)". 

I then opened Autopsy 4.8.0, and tried to add the volume as a local disk data source to a case.  Unfortunately, unlike FTK Imager, Autopsy did not 'see' the X:\ volume.

Arsenal Image Mounter
I downloaded Arsenal Image Mounter, and used it to mount the image file.  When mounting, I chose the "Read Only" option (the "Write Temporary" option was grayed out, and reportedly not available for raw/dd-style images).  The image was mounted as "F:\", and I could easily browse the volume via Windows Explorer.

I had also downloaded ShadowExplorer 0.9, and when I opened it, it did not recognize the F:\ volume.  I could see C:\, D:\, and G:\ (ext HDD where the image was stored).

Vss Method
Finally, I tried vss.exe, mentioned in my previous post; however, I should note that Jimmy Weg seems to no longer maintain his "justaskweg" domain, or the site(s) referenced in my original blog post.  Further, the copy of vss.exe that I have has no identifying information (file version info), nor any syntax info.

I mounted the image via Arsenal Image Mounter (just like I did above), and got identifiers for both VSCs:

vssadmin list shadows /for=f:

I copied the ID for one of the VSCs to the clipboard, and then pasted it into the following command:

vss x: Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy47

The response I got back was:

Drive X:  ---> \Device\HarddiskVolumeShadowCopy47

Okay...that's a start.  I then opened FTK Imager and added the X:\ volume as a logical drive evidence item, and was easily able to traverse through the image, just as I normally would.

Summary
Of all of the methods tested, the only one that worked was vss.exe in conjunction with Arsenal Image Mounter.

Addendum, 5 Sept
I've continued my testing, but before I kicked off the summary, I wanted to provide a link for vss.exe.  Thanks to Brian Maloney for tracking that down.

One thing that's been clear over the past day or so is that even with clear documentation (above) as to what I did, and where I got the tools, there's been a significant disconnect with respect to those offering assistance.  It appears that the biggest one is that when we say "Arsenal Image Mounter", apparently, we're not all referring to the same thing.  I got a copy of Arsenal Image Mounter from GitHub, not from the main ArsenalRecon.com web site, and that appears to make a significant difference.  In particular, the option to mount a raw/dd image in "Write Temporary" mode is specifically NOT supported in this version of the tool.

So, to clarify, what I've been looking to do is get working processes for accessing VSCs within acquired images, using free and/or open source tools that run on Windows.

To get this version of Arsenal Image Mounter to work, I had to do a couple of things.  First, I had to download libewf.dll (my system is x64, so I used the DLLs from this location).  I then copied the libewf.dll from my Autopsy installation over to the same folder as the Arsenal Image Mounter tool, overwriting the older version of libewf.dll (there was some speculation online that the older version of libewf.dll was causing problems).  Finally, I had to open the E01 files from the Lone Wolf scenario, rather than the raw/dd format image (I'd opened the E0x image files in FTK Imager and exported a raw/dd image, so I had both available).  Once I did that, I was able to use ShadowExplorer to successfully view and access the VSCs.

The process that appears to work for raw/dd images is to use Arsenal Image Mounter to mount the image file in Read Only mode, then use vssadmin (via an Admin-level command prompt) to list the VSCs, and then mount a VSC as X:\ (or whatever you choose) via vss.exe.  Once you've done that, you can open the now-mounted VSC via FTK Imager.

I was still trying to get vshadowmount to work, as well...and as it turned out, I got the vshadowmount method working right after I published the updated post.  Essentially, once you run the vshadowmount command, you can access the individual VSCs via FTK Imager by adding one of the VSCs as an image file evidence item, rather than as a logical drive.  That is, when adding an evidence item to FTK Imager (after running the vshadowmount command), you choose "Image File".  Boom.  Done.

Wednesday, August 22, 2018

Updates

RegRipper
Not all RegRipper plugins start from external requests; in fact, a good number of the plugins I've written started as something I ran across on the Internet, from various sources (most often Twitter).  Sometimes it's a blog post, other times it's a malware write-up, or it could be the result of working a forensic challenge.

Based on Adam's post, I created a plugin (named wsh_settings.pl) that outputs the values of the Settings key, and includes an Analysis Tip that references the Remote value.
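
For reference, the core of such a plugin is pretty compact.  Here's a rough sketch (assuming the Settings key sits at Microsoft\Windows Script Host\Settings within the Software hive, per Adam's post, and using the Parse::Win32Registry module that RegRipper is built on) that simply lists the values beneath that key; the actual wsh_settings.pl plugin is structured as a standard RegRipper plugin:

#! c:\perl\bin\perl.exe
# Rough sketch: list the values beneath the WSH Settings key in a Software hive
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Please provide a path to a Software hive\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key();

# Assumption: key path as described in Adam's post
my $path = "Microsoft\\Windows Script Host\\Settings";
if (my $key = $root->get_subkey($path)) {
    print "LastWrite time: ".gmtime($key->get_timestamp())." Z\n";
    foreach my $v ($key->get_list_of_values()) {
        printf "  %-20s %s\n", $v->get_name(), $v->get_data_as_string();
    }
}
else {
    print $path." not found.\n";
}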

I also updated the clsid.pl plugin to incorporate looking for the TreatAs value. On a sample hive that I have from a Win7 SP1 system, I ran the following command:

rip -r d:\cases\test\software -p clsid | find "TreatAs"

I got a total of 9 hits, 7 of which were all for the same GUID (i.e., {F20DA720-C02F-11CE-927B-0800095AE340}), which appears to refer to packager.dll.

I also created a TLN output version of clsid.pl (named clsid_tln.pl) so that this information can be used to create a timeline (in and of itself), or can be added to a timeline that incorporates other data sources.  I know from initial testing that under "normal" circumstances, the LastWrite times for the keys may be lumped together around the same time, but what we're looking for here is outliers...timeline entries that correspond with other suspicious activity, forming an artifact cluster.

Book
I received an email from my publisher on 20 Aug 2018 telling me that Investigating Windows Systems had officially been published, and is available here through the publisher!  I'm not sure what that means with respect to the book actually being available or shipped (if you pre-ordered it) from Amazon; for me, it's a milestone, something I can mark off my list.  That's #9 down (as in, nine books that I've authored), and I'm currently working on Practical Windows Investigations, which is due out next year.

IWS is a bit of a departure from my previous books; instead of listing various artifacts that you could use in an investigation, and leaving it to the reader to figure out how to string them together, I used images available online to illustrate what an investigation might look like.  Using the images, I provide analysis goals that are more in line with what one might expect to see during a real-world IR investigation.  I then walk through the analysis of the image (based on the stated goals), providing decision pivot points along the way.  However, these investigations are somewhat naturally limited...they aren't enterprise-level, and don't involve things like lateral movement, etc.  As such, these things aren't addressed, but I did try to cover as much as I could with what was available.

I have a GitHub repo for the book - it doesn't contain a great deal at the moment, just links to the images used, and in the folder for chapter 4, code that I wrote for that particular chapter.  I'm sure I'll be adding material over time, either based on requests or based on interesting things from my notes and folders for the book.

Practical Windows Investigations is going to swing the pendulum back a bit, so to speak, in that rather than just looking at artifacts, I'm focusing on different aspects of investigations and addressing what can be achieved when pursuing those avenues.  The book is currently spec'd at 12 chapters, and the list is not too different from what was listed in this post from March.

The current chapters are:

Core Concepts
How to analyze Windows Event Logs
How to get the most out of RegRipper
Malware Detection
How to determine data exfiltration
File (LNK, DOCX/DOC, PDF) Analysis
How to investigate lateral movement
How to investigate program execution
How to investigate user activity
How to correlate/associate a device with a user (USB, Bluetooth)
How to detect/analyze the use of anti-forensics
Making use of VSCs

As with my previous books, the tools used for analysis will be free and open source; this is due to the fact that I simply do not have access to commercial tools.  This is a topic that is continually brought up during prospectus reviews, and one that the reviewers simply do not seem to understand.

Saturday, August 18, 2018

Updates

Win10 Notification Database
Leave it to MS to make our jobs as DFIR analysts fun, all day, every day!  Actually, one of the things I've always found fascinating about analyzing Windows systems is that the version of Windows, more often than not, will dictate how far you're able to go with your analysis.

An interesting artifact that's available on Win10 systems is the notification database, which is where those pop-up messages you receive on the desktop are stored.  Over the past couple of months, I've noticed that I get more of these messages on my work computer, because it now ties into Outlook.  It turns out that this database is a SQLite database.  Lots of folks in the community use various means to parse SQLite databases; one of the popular ways to do this is via Python, and as a result, you can often find either samples via tutorials, or full-on scripts to parse these databases for you.
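If you want to poke at the database yourself, any SQLite-capable tooling will do.  Here's a minimal Perl sketch (DBI with DBD::SQLite, since Perl is what I keep handy) that dumps a few columns from the Notification table; note that the table and column names, as well as the ArrivalTime format, are assumptions based on what's been published about wpndatabase.db, and may vary between Win10 builds...check the schema first:

use strict;
use warnings;
use DBI;

# work from a copy of the database; it's commonly found beneath
# %LocalAppData%\Microsoft\Windows\Notifications\wpndatabase.db
my $db = shift || die "Usage: wpn_dump.pl <wpndatabase.db>\n";

my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "", { RaiseError => 1 });

# table/column names are assumptions - verify with the .schema command in sqlite3
my $sth = $dbh->prepare("SELECT Id, Type, ArrivalTime, Payload FROM Notification");
$sth->execute();

while (my $row = $sth->fetchrow_hashref()) {
    # ArrivalTime appears to be a FILETIME (100-ns intervals since 1 Jan 1601);
    # convert it to a Unix epoch for readability
    my $epoch = int(($row->{ArrivalTime} / 10000000) - 11644473600);
    print $row->{Id}."  ".$row->{Type}."  ".scalar(gmtime($epoch))." UTC\n";
    print "  ".substr($row->{Payload} // "", 0, 80)."\n";
}

$sth->finish();
$dbh->disconnect();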

MalwareMaloney posted a very interesting article on parsing the write-ahead logging (.wal) file for the database.  Also, as David pointed out, anytime you're working with a SQLite database, you should consider taking a look at Mari's blog post on recovering deleted data.

RegRipper
Based on input from a user, I updated the sizes.pl plugin in a way that you may find useful; it now displays a brief sample of the data 'found' by the plugin (the default is 48 bytes/characters).  So, instead of just finding a value of a certain size (or above) and telling you that it found it, the plugin now displays a portion of the data itself.  The method of display is based on the data type...if it's a string, it outputs a portion of the string, and if the data is binary, it outputs a hex dump of the pre-determined length.  That length, as well as the minimum data size, can be modified by opening the plugin in Notepad (or any other editor) and modifying the "$output_size" and "$min_size" values, respectively.
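Conceptually, the display logic boils down to something like the following (a simplified sketch, not the plugin's actual code):

use strict;
use warnings;
use Parse::Win32Registry qw(:REG_);

my $output_size = 48;   # number of bytes/characters to display

sub data_sample {
    my ($type, $data) = @_;
    if ($type == REG_SZ || $type == REG_EXPAND_SZ) {
        # string data...just truncate it
        return substr($data, 0, $output_size);
    }
    # anything else gets treated as binary...hex-encode the first $output_size bytes
    return uc(unpack("H*", substr($data, 0, $output_size)));
}

# example: a long string value is displayed as its first 48 characters
print data_sample(REG_SZ, "A" x 7056)."...\n";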

Here is some sample output from the plugin, run against a Software hive known to contain a malicious Powershell script:

sizes v.20180817
(All) Scans a hive file looking for binary value data of a min size (5000)

Key  : \4MX64uqR  Value: Dp8m09KD  Size: 7056 bytes
Data Sample (first 48 bytes) : aQBmACgAWwBJAG4AdABQAHQAcgBdADoAOgBTAGkAegBlACAA...

From here, I'd definitely pivot on the key name ("4MX64uqR"), looking into a timeline, as well as searching other locations in the Registry (auto start locations??) and file system for references to the name.

Interestingly enough, while working on updating this plugin, I referred back to pg 34 of Windows Registry Forensics for the table of value types.  Good thing I keep my copy handy for just this sort of emergency.  ;-)

Mari has an excellent example of how she has used this plugin in actual analysis here.

IWS
Speaking of books, Investigating Windows Systems is due out soon.  I'm really looking forward to this one, as it's a different approach altogether from my previous books.  Rather than just listing the various artifacts that are available on Windows systems, I worked through images that folks like Phill Moore, David Cowen, and Ali Al-Shemery had put together and graciously allowed me to access.  The purpose of the book is to illustrate a way of stringing the various artifacts together into a full-blown investigation, with analysis decisions called out and discussed along the way.

What I wanted to do with the book is present something more of a "real world" analysis approach.  Some of the images came with 30 or more questions that had to be answered as part of the challenge, and in my limited experience, that seemed a bit much.

The Github repo for the book has links to the images used, and for one chapter, has the code I used to complete a task.  Over time, I may add other bits and pieces of information, as well.

OSDFCon
My submission for OSDFCon was accepted, so I'll be at the conference to talk about RegRipper, and how you can really get the most out of it.

Here is the list of speakers at the conference...I'm thinking that my speaker bio had something to do with me being selected.  ;-)

Friday, August 10, 2018

Gaps and Up-Hill Battles

I've been thinking about writing a RegRipper plugin that looks for settings that indicate certain functionality in Windows has been disabled, such as maintaining JumpLists, as well as MRUs in the Registry.

Hold on for a segue...

David Cowen recently requested input via his blog, regarding the use of anti-forensics tools seen in the wild.  A little more than two years ago, Kevin S. wrote this really awesome blog post regarding the evolution of the Samas/Samsam ransomware, and I know you're going to ask, "great, but what does this have to do with anti-forensics tools?"  Well, about half way down the post, you'll see where Kevin mentions that one of the early variants of Samas included a copy of sdelete in one of its resource sections.

As usual, Brett made some very pointed comments, in this case regarding seeing anti- or counter-forensics tools in the wild; specifically, just having the program on a system doesn't mean it was used, and with limited visibility, it's difficult to see how it was used.

Now, coming back around...

What constitutes counter-forensics efforts, particularly when it comes to user intent?  Do we really know the difference between user intent and operating system/application functionality?  Maybe more importantly, do we know that there is such a thing?

I've seen too many times where leaps (assumptions, speculation) have been made without first fully examining the available data, or even just looking a little bit closer.  Back in the days of XP, an issue I ran into was an empty Recycle Bin for a user, which might have been a bad thing, particularly following a legal hold.  So, an analyst would preview an image, find an empty Recycle Bin, and assume that the user had emptied it, following the legal hold announcement.  But wait...what was the NukeOnDelete setting?  Wait...the what?  Yes, with this functionality enabled, the user would delete a file as they normally would, but the file would not appear in the Recycle Bin. 
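Checking for the setting is trivial.  Here's a minimal standalone sketch against a Software hive; the BitBucket key path shown is the XP-era location (later versions of Windows track this on a per-volume basis, beneath BitBucket\Volume subkeys in the user's hive):

use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || die "Usage: nukeondelete_check.pl <Software hive>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive\n";
my $root = $reg->get_root_key();

if (my $key = $root->get_subkey("Microsoft\\Windows\\CurrentVersion\\Explorer\\BitBucket")) {
    if (my $val = $key->get_value("NukeOnDelete")) {
        print "NukeOnDelete = ".$val->get_data()."\n";
        print "Deleted files bypass the Recycle Bin.\n" if ($val->get_data());
    }
    else {
        print "NukeOnDelete value not found.\n";
    }
}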

Other functionality raises similar questions: did the user clear the IE history, or was it the result of the regularly scheduled purge?  Did the user delete files and run defrag after a legal hold order, or was defrag run automatically by the OS?

Skipping ahead to modern times, what happens if you get a "violation of acceptable use policies" case, or a harassment case, or any other case where user activity is a (or the) central focus of your examination, and the user has no JumpLists?  Yes, the automatic JumpLists folder is empty.  Does that seem possible?  Or did the user suspect someone would be looking at their system, and purposely delete them?  Well, did you check whether the tracking of JumpLists had been disabled via the Registry?

My point is that there is functionality within Windows to disable the recording and maintenance of various artifacts that analysts use to do their jobs.  This functionality can be enabled or disabled through the Registry.  As such, if an analyst does not find what they expect to find (i.e., files in the Recycle Bin, RecentDocs populated, etc.) then it's a good idea to check for the settings.

Oh, and yes, I did write the plugin, the current iteration of which is called disablemru.pl.  I'll keep adding to it as more information becomes available.
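To give a sense of the kinds of settings involved, here's a simplified standalone sketch (not the plugin itself, and the value list is just a starting point) run against a user's NTUSER.DAT hive:

use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || die "Usage: disablemru_check.pl <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive\n";
my $root = $reg->get_root_key();

# key path/value name pairs worth checking; by no means exhaustive
my @checks = (
    [ "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer", "NoRecentDocsHistory" ],
    [ "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Advanced", "Start_TrackDocs" ],
    [ "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Advanced", "Start_TrackProgs" ],
);

foreach my $check (@checks) {
    my ($path, $name) = @{$check};
    if (my $key = $root->get_subkey($path)) {
        if (my $val = $key->get_value($name)) {
            printf "%-20s = %s  (key LastWrite: %s)\n",
                $name, $val->get_data(), $key->get_timestamp_as_string();
        }
    }
}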

Friday, August 03, 2018

The Future of IR, pt I

I've been doing IR work for a while now, and I've had the great fortune of watching things grow over time.

When I started in the industry, there were no real training courses or programs available, and IR business models pretty much required (and still do) that a new analyst is out on the road as soon as they're hired.  I got started in the industry and developed some skills, had some training (initial EnCase training in '99), and when I got to a consulting position, I was extremely fortunate to have a boss who took an interest in mentoring me and providing guidance, which I greatly appreciate to this day.

However, I've seen this same issue with business models as recently as 2018.  New candidates for IR teams are interviewed, and once they're hired and go through the corporate on-boarding, there is little, if any, facility for training or supervision...the analysts are left to themselves.  Yes, they are provided with tools, software products, and report templates, but for the most part, that's it.  How are they communicating with clients?  Are they sending in regular updates?  If so, are those updates appropriate, and more importantly, are they technically correct?  Are the analysts maintaining case notes?

Over the years of doing IR work, I ran into the usual issues that most of us see...like trying to find physical systems in a data center by accessing the system remotely and opening the CD-ROM tray.  But two things kept popping into my mind; one was, I really wished that there was a way to get a broader view of what was happening during an incident.  Rather than the client sending me the systems that they thought were involved or the "key" systems, what if I could get a wider view of the incident?  This was very evident when there were indications on the systems I was examining that pointed to them being accessed from other systems, or accessing other systems, on the same infrastructure.

The other was that I was spending a lot of time looking at what was left behind after a process ran.  I hadn't seen Bigfoot tromp across the field; due to the nature of IR, all I had to look at were footprints that were several days old.  What if, instead of telling the client that there were gaps in the data available for analysis (because I'm not going to guess or speculate...), I actually had a recording of the process command lines?

During one particularly fascinating engagement, it turned out that the client had installed a monitoring program on two of the affected systems.  The program was one of those applications that parents use to monitor their kids' computers, and what the client provided was 3-frame-per-second videos of what went on.  As such, I just accessed the folder, found all of the frames with command prompts open, and could see exactly what the adversary typed in at the prompt.  I then went back and watched the videos to see what the adversary was doing via the browser, as well as via other GUI applications.

How useful are process command lines?  Right now, there are numerous artifacts on systems that give analysts a view into what programs were run on a system, but not how they were run.  For instance, during an engagement where we had established process creation monitoring across the enterprise, an alert was triggered on the use of rar.exe, which is very often used as a means of staging files for exfiltration.  The alert was not for "rar.exe", as the file had been renamed, but was instead for the command line options that had been used, and as such, we had the password used to encrypt the archives.  When we received the image from the system and recovered the archives (they'd been deleted after exfil), we were able to open the archives and show the client exactly what was taken.
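The value isn't in the file name; it's in the arguments.  Just to illustrate the idea (this is not the actual detection logic from any product), a check for WinRAR-style "add to archive with header encryption" options might look like:

use strict;
use warnings;

# sample process command lines, e.g. pulled from process creation event data
my @cmdlines = (
    'C:\Windows\system32\svchost.exe -k netsvcs',
    'C:\ProgramData\m.exe a -hpP@ssw0rd1 -v500m d.rar "C:\Users\bob\Documents\"',
);

foreach my $cmd (@cmdlines) {
    # flag the "a" (add) command plus the -hp (encrypt data and headers) switch,
    # regardless of what the executable happens to be named
    if ($cmd =~ /\s+a\s+.*-hp(\S+)/i) {
        print "Possible archive staging for exfil: $cmd\n";
        print "  recovered archive password: $1\n";
    }
}

Rename rar.exe all you like...the command line options still give it away.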

So, things have progressed quite a bit over the years, while some things remain the same.  While there have been significant in-roads made into establishing enterprise-wide visibility, the increase of device types (Windows, Mac, Linux, IoT, mobile, etc.) still requires us to have the ability to go out and get (or receive) individual devices or systems for collection and analysis; those skills will always be required.  As such, if the business model isn't changed in some meaningful way, we are going to continue to have instances where someone without the appropriate skill sets is sent out on their own.

The next step in the evolution of IR is MDR, which does more than just mash MSS and IR together.  What I mean by that is that the typical MSS functionality receives an alert, enriches it somehow, and sends the client a ticket (email, text, etc.).  This then requires that the client receive and understand the message, and figure out how they need to respond...or that they call someone to get them to respond.  While this is happening, the adversary is embedding themselves deeply within the infrastructure...in the words of Jesse Ventura from the original Predator movie, "...like an Alabama tick."

Okay, so what do you do?  Well, if you're going to have enterprise-wide visibility, how about adding enterprise-wide response and remediation?  If we're able to monitor process command lines, what if we could specify conditions that are known pretty universally to be "bad", and stop the processes?  For example, every day, hundreds of thousands of us log into our computers, open Outlook, check our email, and read attachments.  This is all normal.  What isn't normal is when that Word document that arrived as an email attachment "opens" a command prompt and downloads a file to the system (as a result of an embedded macro).  If it isn't normal and it isn't supposed to happen and we know it's bad, why not automatically block it?  Why not respond at software speeds, rather than waiting for the detection to get back to the SOC, for an analyst to review it, for the analyst to send a ticket, and for the client to receive the ticket, then open it, read it, and figure out what to do about it?  In that time, your infrastructure could be hit by a dedicated adversary, or by ransomware. 

If you stop the download from occurring, you prevent all sorts of bad follow-on things from happening, like having to report a "personal data breach", per GDPR. 

Of course, the next step would be to automatically isolate the system on the network.  Yes, I completely understand that if someone's trying to do work and they can't communicate off of their own system, it's going to hamper or even halt their workflow.  But if that were the case, why did they say, "Yes, I think I will "enable content", thank you very much!", after the phishing training showed them why they shouldn't do that?  I get that it's a pain in the hind end, but which is worse...enterprise-wide ransomware that not only shuts everything down but also requires you to report under GDPR, or one person needing a new computer to get work done?

So, the overall point I'm trying to make here is that the future of IR is going to be to detect and respond faster.  Faster than we have been.  Get ahead of the adversary, get inside their OODA loop, and cycle through the decision process faster than they can respond.  I've seen this in action...the military has things called "immediate actions", actions you take the moment a specific condition is met.  You train at these things until they're automatic muscle memory, so that when those conditions do occur (say, your rifle jams), you perform the actions immediately.  We can apply the same approach to the OODA loop by removing the need to make decisions under duress; we made the decision regarding a specific action while we had the time to think about it, so that we don't have to try to make it during an incident.

In order to detect and respond more quickly, we're going to need a few things:
- Visibility
- Intelligence
- Planning

I'll be addressing these topics in future blog posts.

Notes, etc.

Case Notes
@mattnotmax recently posted a blog on "contemporaneous notes".

But wait...there's more.

I agree with what Matt said regarding notes.  1000%.

I've been on engagements where things have gone sideways.  In one instance, I was assigned by the PoC to work with a network engineer to get access to network device logs.  We walked through a high-level network diagram on a white board, and at every point, I was told by the network engineer that there were no logs available from that device, nor that one, and definitely not that one.  I took a picture with my cell phone of the white board to document my "findings" in that regard, and when I met with the PoC the following morning, he seemed bothered by the fact that I hadn't gotten anywhere the previous day.  He called in the network engineer, who then said that he'd never said that logs were not available.  I pulled up the photo of the white board on my laptop and walked through it.  By the time we got to the end of the engagement, I never did get any logs from any of the network devices.  However, I had documented my findings (written, as well as with the photo) and had a defensible position, not just for myself, but also for my boss to take up the chain, if the issue ended up going that far.

As a side note, I make it a practice that if I get the sense that something is fishy, or that an engagement could go sideways, I'll call my boss and inform my management chain first, so that they hear it from me.  Better that, than the first they hear of any issue is an angry call from a client.  This is where notes really help...IR engagements are highly stressful events for everyone involved, and having notes is going to help you when it comes to who said/did what, and when.

Rules
One of the questions Matt addresses in his blog post is the 'rules' of the notes; whenever I've been asked about this, the question has always been, "what's the standard?"  My response has always been pretty simple...reproducibility.  You need to write your notes to the point that 6 months or a year later, you (or more importantly, someone else) can take the notes and the data, and reproduce the results.

Standards
In fact, I have been asked the question about the "standard" so much over the years that it's become a high-fidelity indicator that notes will not be taken.  So far, every single time I've been asked about the "standard" to which notes should be taken, notes have not been kept.

When I started with ISS in 2006, I was provided dongles for AccessData FTK, as well as for EnCase 4.22 and 6.19.  When I performed work, I noted which application I used right along with what I did...because that was important.  When we (our team) determined that one of the built-in functions of a commercial tool was not, in fact, finding all valid credit card numbers (kind of important for PFI work...), we worked with Lance Mueller to get our own function working, and made the use of the resulting script part of our standard operating (and repeatable) procedure.

Part of the PFI work at the time also included a search for hashes, file names, and a few other indicators.  Our notes had to include the date of the work, and our SOP required that, just prior to running the searches for those indicators, we pull down the latest-and-greatest copies of the indicator lists.

Why was this important?  Well, if someone found something in the data (or on the original system) six or eight months later, we had clear and concise documentation as to what steps were taken when.  That way, if the analyst was on leave or on another engagement, a team lead or the director could easily answer the questions.  With some simple, clear, and concise notes, the team lead could say, "Yes, but that case was worked 8 months ago, and that indicator was only made available as part of last month's list."  Boom.  Done.  Next.

Application
Another question that comes up is, what application should I use?  Well, I started with Notepad, because it was there.  I loved it.  When I received a laptop from a client and had to remove the hard drive for imaging, I could paste a link to the online instructions I followed, download the instructions and keep them in a separate file, or print them out and keep them in a folder.  URL link or printout, I had an "appendix" to my notes.  When I got to the point where I wanted to add photos to my notes, I simply used Write or Word.  Depending upon your requirements, you may not need anything fancy or "high speed, low drag"...what you've already got available may work just fine.

If you're looking for something a little more refined when it comes to keeping notes, take a look at ForensicNotes.

Conclusion
Many of us say/talk about it...we keep notes because at some point in the future, someone's going to have questions.  I was in a role where we said that repeatedly...and then it happened.  Fully a year after working an engagement, questions came up.  Serious questions.  Through corporate counsel.  And guess what...there were no notes.  No one had any clue as to what happened, and by "no one", I'm only referring to those who actually worked the case.  Look, we say these things not because we're being silly or we're bored or we just want to be mean...we say them because they're significant, serious, and some of us have seen the damage that can occur to reputation of a company or an individual when notes aren't maintained.

Even with all of this being said, and even with Matt's blog post, there is no doubt in my mind that many DFIR folks are going to continue to say, "I don't want my notes to be discoverable, because a defense attorney would tear them/me apart on the stand."  According to Brett Shavers, that's going to happen anyway, with or without notes, so better to have the notes and be able to answer questions with confidence, or at least something more than, "I don't know".  The simple fact is that if you're not keeping case notes, you're doing yourself, your fellow analysts, and your client all a huge disservice.