Tuesday, October 30, 2018
IWS has been out for a short while now, and a couple of reviews have already been posted. So far, it seems there is some hesitation about the book, based on the form factor (size) as well as the price point. Thanks to feedback on that subject from Jessica Hyde, the publisher graciously shared the following with me:
I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.
Hopefully, this discount code will bring more readers and DFIR analysts a step closer to the book. I think that perhaps the next step is to address the content itself. I'm very thankful to Brett Shavers for agreeing to let me share this quote from an email he sent me regarding the IWS content:
As to content, I did a once-over to get a handle of what the book is about, now on Ch 2, and so far I think this is exactly how I want every DFIR book to be written.
I added the emphasis myself. This book is something of a radical departure from my previous books, which I modeled after other books I'd seen in the genre, because that's what I thought folks wanted to see. Mention an artifact, provide a description of what the artifact may mean (depending upon the investigation), maybe a general description of how that artifact may be used, and then provide names of a couple of tools to parse the artifact. After that, move on to the next artifact, and in the end, pretty much leave it to the reader to string everything together into an "investigation". In this case, my thought process was to use images that were available online to run through an investigation, providing analysis decisions and pivot points along the way. This way, a reader could follow along, if they chose to do so.
If you get a copy of the book and have a similar reaction to what Brett shared, please let me know. If there's something that you like or don't like about the book, again, please let me know. Do this through an email, a comment here on this blog, or a blog post of your own. As illustrated by the example involving Jessica, if I know about something, I can take action and work to change it.
How It Works
When a publisher decides to go forward with a book project, they have the author submit a prospectus describing the book, the market for the book, and any challenges that may be faced in that market; in short, the publisher has the author do the market research. The prospectus is then reviewed by several folks; for the book projects I've been involved with, it's usually been three people in the industry. If the general responses are positive, the publisher will move forward with the project.
I'm sharing this with you because, in my experience, there are two things the publisher looks at when considering a second edition: sales numbers and feedback on the first edition. As such, if you like the content of the book and your thoughts are similar to Brett's, let me know. Write a review on Amazon or on the Elsevier site, write your own blog post, or send me an email. Let me know what you think, so that I can let the publisher know, and so that I can make changes or updates, particularly if they're consistent across several reviewers.
If you teach DFIR, and find value in the book content, but would like to see something more, or something different, let me know. As with Jessica's example, there's nothing anyone can do to take action if they don't know what you're thinking.
Sunday, October 28, 2018
Updates
Book Discount
While I was attending OSDFCon, I had a chance to (finally!) meet and speak with Jessica Hyde, a very smart and knowledgeable person, former Marine, and an all-around very nice lady. As part of the conversation, she shared with me some of her thoughts regarding IWS, which is something I sincerely hope she shares with the community. One of her comments regarding the book was that the price point put it out of reach for many of her students; I shared that with the publisher, and received the following as a response:
I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.
What this demonstrates is that if you have a question, thought, or comment, share it. If action needs to (or can) be taken, someone will do so. In this case, my concern is the value of the book's content to the community; Jessica graciously shared her thoughts with me, and as a result, I did what I could to bring the book within easier reach of those who might want to purchase it.
So how can you share your thoughts? Write a blog post or an email. Write a review of the book, and specify what you'd like to see. What did you find good, useful or valuable about the book content, and what didn't you like? Write a review and post it to the Amazon page for the book, or to the Elsevier page; both pages provide a facility for posting a review.
Artifacts of Program Execution
Adam recently posted a very comprehensive list of artifacts indicative of program execution, in a manner similar to many other blogs and even books, including my own. A couple of take-aways from this list include:
- Things keep changing with Windows systems. Even as far back as Windows XP, there were differences in artifacts, depending upon the Service Pack. In the case of the Shim Cache data, there were differences in data available on 32-bit and 64-bit systems. More recently, artifacts have changed between updates to Windows 10.
- While Adam did a great job of listing the artifacts, something analysts need to consider is the context available from viewing multiple artifacts together, as a cluster, as you would in a timeline. For example, let's say that when and how Defrag was executed is critical to a case; creating a timeline from the user's UserAssist entries, the timestamps available in the application Prefetch file, and the contents of the Task Scheduler Event Log can provide a great deal of context to the analyst. Do not view the artifacts in isolation; seek to use an analysis methodology that allows you to see the artifacts in clusters, for context. This also helps in spotting attempts by an adversary to impede analysis.
So, take-aways...know the version of Windows you're working with because it is important, particularly when you ask questions, or seek assistance. Also, seek assistance. And don't view artifacts in isolation.
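To make the artifact-clustering point a bit more concrete, here's a minimal sketch (Python, with completely made-up event values) of pulling timestamped entries from a few different sources into one ordered, TLN-style micro-timeline. In practice the events would come from your UserAssist, Prefetch, and Event Log parsers; the values below are hypothetical and only illustrate the idea.

```python
from datetime import datetime, timezone

# Hypothetical events for illustration; in practice these come from parsers
# for the UserAssist entries, the application Prefetch file, and the
# Task Scheduler Event Log.
events = [
    ("2018-10-15 13:02:11", "REG",  "UserAssist: DEFRAG.EXE run count incremented"),
    ("2018-10-15 13:02:13", "PREF", "DEFRAG.EXE-XXXXXXXX.pf: last run time"),
    ("2018-10-15 13:02:10", "EVT",  "TaskScheduler/200: ScheduledDefrag task started"),
]

def to_epoch(ts):
    """Convert a 'YYYY-MM-DD HH:MM:SS' string (assumed UTC) to Unix epoch seconds."""
    return int(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
               .replace(tzinfo=timezone.utc).timestamp())

# Sort and print in a simple "time|source|description" layout so the cluster
# of events is viewed together, in order, rather than each one in isolation.
for ts, src, desc in sorted(events, key=lambda e: to_epoch(e[0])):
    print(f"{to_epoch(ts)}|{src}|{desc}")
```

Even a tiny cluster like this makes it much easier to spot when one source disagrees with the others, which is often the first indication of an attempt to impede analysis.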
Artifacts and Evidence
A while back (6 1/2 yrs ago), I wrote about indirect and secondary artifacts, and included a discussion of the subject in WFA 3/e.
Chris Sanders recently posted some thoughts regarding evidence intention, which seemed to me to be along the same thought process. Chris differentiates intentional evidence (i.e., evidence generated to attest to an event) from unintentional evidence (i.e., evidence created as a byproduct of some non-attestation function).
Towards the end of the blog post, Chris lists six characteristics of unintentional evidence, all of which are true. To his point, not only may some unintentional evidence have multiple names, it may be called different things by the uninitiated, or those who (for whatever reason) choose to not follow convention or common practice. Consider NTFS alternate data streams, as an example. In my early days of researching this topic, I found that MS themselves referred to this artifact as both "alternate" and "multiple" data streams.
Some other things to consider, as well...yes, unintentional evidence artifacts are often quirky and have exceptions, which means they are very often misunderstood and misinterpreted. Consider the Shim Cache example from Chris's blog post; in my experience, this is perhaps the most commonly misinterpreted artifact to date, for the simple fact that the time stamps are commonly referred to as the "date of execution". Another aspect of this artifact is that it's often taken as standalone, and should not be...there may be evidence of time stomping occurring prior to the file being included as a Shim Cache record.
Finally, Chris is absolutely correct that many of these artifacts have poor documentation, if any at all. I see this as a shortcoming of the community, not of the vendor. The simple fact is that, as a community, we're so busy pushing ahead that we don't stop to consider the value, to the community as a whole, of what we leave behind. Yes, the vendor may poorly document an artifact, or the documentation may simply be part of source code that we cannot see, but what we're not doing as a community is documenting and sharing our own findings. There have been too many instances during my years doing DFIR work in which I've shared something with someone who responded with, "oh, yeah...we've seen that before", only to have no documentation, not even a Notepad file or something scribbled on a napkin, to which they could refer me. This is a loss for everyone.
Saturday, October 20, 2018
OSDFCon Trip Report
This past week I attended the 9th OSDFCon...not my 9th, as I haven't been able to make all of them. In fact, I haven't been able to make it for a couple of years. However, this return trip did not disappoint. I've always really enjoyed the format of the conference, the layout, and more importantly, the people. OSDFCon is well attended, with lots of great talks, and I always end up leaving there with much more than I showed up with.
Interestingly enough, one speaker could not make it at the last minute, and Brian simply shifted the room schedule a bit to better accommodate people. He clearly understood the nature of the business we're in, and the absent presenter suffered no apparent consequences as a result. This wasn't one of the lightning talks at the end of the day, this was one of the talks during the first half of the conference, where everyone was in the same room. It was very gracious of Brian to simply roll with it and move on.
The Talks
Unfortunately, I didn't get a chance to attend all of the talks that I wanted to see. At OSDFCon, by its very nature, you see people you haven't seen in a while, and want to catch up. Or, as is very often the case, you see people you only know from online. And then, of course, you meet people you know only from online because they decide to drop in, as a surprise.
However, I do like the format. Talk times are much shorter, which not only falls in line with my attention span, but also gets the speakers to focus a bit more, which is really great, from the perspective of the listener, as well as the speaker. I also like the lightning talks...short snippets of info that someone puts together quickly, very often focusing on the fact that they have only 5 mins, and therefore distilling it down, and boiling away the extra fluff.
My Talk
I feel my talk went pretty well, but then, there's always the bias of "it's my talk". I was pleasantly surprised when I turned around just before kicking the talk off to find the room pretty packed, with people standing in the back. I try to make things entertaining, and I don't want to put everything I'm going to say on the slides, mostly because it's not about me talking at the audience as much as it's about us engaging. As such, there's really no point in me providing my slide pack to those who couldn't attend the presentation, because the slides are just placeholders, and the real value of the presentation comes from the engagement.
In short, the purpose of my talk was to let people know that if they're just downloading RegRipper and running the GUI, they aren't getting the full power out of the tool. I added a command line switch to rip.exe earlier this year ("rip -uP") that will run through the plugins folder and recreate all of the default profiles (software, sam, system, ntuser, usrclass, amcache, all) based on the "hive" field in the config headers of the plugins.
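For anyone who hasn't looked inside the plugins folder, the idea behind that switch is simple enough to sketch. What follows is not RegRipper's actual code; it's a minimal Python approximation, assuming each plugin declares its hive in a %config block (e.g., hive => "NTUSER\.DAT") and that a profile is simply a text file listing one plugin name per line.

```python
import os
import re
from collections import defaultdict

PLUGIN_DIR = "plugins"   # assumed location of the RegRipper plugins folder

# Rough pattern for the hive declaration inside a plugin's %config block,
# e.g.:  hive => "NTUSER\.DAT",  (exact formatting varies from plugin to plugin)
HIVE_RE = re.compile(r'hive\s*=>\s*"([^"]+)"')

profiles = defaultdict(list)
for name in sorted(os.listdir(PLUGIN_DIR)):
    if not name.endswith(".pl"):
        continue
    with open(os.path.join(PLUGIN_DIR, name), errors="ignore") as f:
        m = HIVE_RE.search(f.read())
    if m:
        # Normalize "NTUSER\.DAT" -> "ntuser", "Software" -> "software", etc.
        hive = m.group(1).replace("\\", "").split(".")[0].lower()
        plugin = name[:-3]                 # drop the ".pl" extension
        profiles[hive].append(plugin)
        profiles["all"].append(plugin)     # everything also lands in "all"

# Write one profile file per hive, one plugin name per line.
for hive, plugins in profiles.items():
    with open(hive, "w") as out:
        out.write("\n".join(plugins) + "\n")
```

The point is that profiles don't have to be maintained by hand; they can be regenerated any time plugins are added or updated.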
To-Do
A recurring theme of this conference is how to get folks new to the community to contribute and keep the community alive, as well as how to get those already in the community to contribute more. Well, a couple of things came out of my talk that might be of interest to someone in the community.
One way to contribute is this...someone asked if there was a way to determine for which version of Windows a plugin was written. There is a field in the %config header metadata that can be used for that purpose, but there's no overall list or table that identifies the Windows version for which a plugin was written. For example, there are two plugins that extract information about user searches from the NTUSER.DAT hive, one for XP (acmru.pl) and one for Vista+ (wordwheelquery.pl). There's really no point in running acmru.pl against an NTUSER.DAT from a Windows 7 system.
So, one project that someone might want to take on is to put together a table or spreadsheet that provides this list. Just sayin'...and I'm sure that there are other ideas as to projects or things folks can do to contribute.
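If someone wants to pick that project up, a possible starting point is below: walk the plugins folder, scrape the key/value pairs out of each %config block, and dump everything to a CSV that can then be sorted and annotated. No particular field names are assumed; whatever a plugin declares (hive, version, and so on) simply becomes a column.

```python
import csv
import glob
import os
import re

# Very rough, text-based scrape of each plugin's %config block; this is not
# a Perl parser, just enough to build a first-pass table to clean up by hand.
PAIR_RE  = re.compile(r'(\w+)\s*=>\s*"?([^",\)]+)"?')
BLOCK_RE = re.compile(r'%config\s*=\s*\((.*?)\);', re.S)

rows = []
for path in sorted(glob.glob(os.path.join("plugins", "*.pl"))):   # assumed folder
    text = open(path, errors="ignore").read()
    block = BLOCK_RE.search(text)
    info = dict(PAIR_RE.findall(block.group(1))) if block else {}
    info["plugin"] = os.path.basename(path)[:-3]
    rows.append(info)

# The union of all declared fields becomes the column set.
fields = ["plugin"] + sorted({k for r in rows for k in r} - {"plugin"})
with open("plugin_table.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```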
For example, some talks I'd love to see are ones where folks (not the authors) use the various open source tools that are available in order to solve problems. Actually, this could easily start out as a blog post, and then morph into a presentation...how did someone use an open source tool (or several tools) to solve a problem that they ran into? This might make a great "thunder talk"...a 10 to 15 minute talk at the next OSDFCon, where the speaker shares the issue, and then how they went about solving it. Something like this has multiple benefits...it could illustrate the (or a novel) use of the tool(s), as well as give DFIR folks who haven't spoken in front of a group before a chance to dip their toe in that pool.
Conversations
Like I said, a recurring theme of the conference is getting those in the community, even those new to the community, involved in keeping the community alive, in some capacity. Jessica said something several times that struck home with me...that it's up to those of us who've been in the community for a while to lead the way, not by telling, but by doing. Now, not everyone's going to be able to, or even want to, contribute in the same way. For example, many folks may not feel that they can contribute by writing tools, which is fine. But one way you can contribute is by using the tools and then sharing how you used them. Another way to contribute is by writing reviews of books and papers; by "writing reviews", I don't mean simply restating the table of contents, but instead something more in-depth (books and papers already have a table of contents).
Shout Outz
Brian Carrier, Mari DeGrazia, Jessica Hyde, Jared Greenhill, Brooke Gottlieb, Mark McKinnon/Mark McKinnon, Cory Altheide, Cem Gurkok, Thomas Millar, the entire Volatility crew, Ali Hadi, Yogesh Khatri, the PolySwarm folks...I tried to get everyone, and I apologize for anyone I may have missed!
Also, I have to give a huge THANK YOU to the Basis Tech folks who came out, the vendors who were there, and to the hotel staff for helping make this conference go off without a hitch.
Final Words
As always, OSDFCon was well-populated and well-attended. There was a Slack channel established for the conference (albeit not by Brian or his team, but it was available), and the Twitter hashtag for the conference seems to have been pretty well-used.
To follow up on some of the above-mentioned conversations, many of us who've been around for a while (or more than just "a while") are also willing to do more than lead by doing. Many of us are also willing to answer questions...so ask. Some of us are also willing to mentor and help folks in a more direct and meaningful manner. Never presented before, but feel like you might want to? Some of us are willing to help in a way that goes beyond just sending an email or tweet of encouragement. Just ask.
Sunday, October 07, 2018
Updates
IWS
Folks have started receiving the copies of IWS they ordered, and folks like Joey and Mary Ellen have already posted reviews! Mary Ellen has also gone so far as to post her review to the Amazon page for the book!
Some have also pointed out that the XP image from Lance's practical is no longer available. Sorry about that, but I was simply using the image; I don't have access to, nor control over, the site itself. However, the focus of the book is the process, and choosing to use available images, I thought, would provide more value, as readers could follow along.
Addendum, 7 Oct: Thanks to the wonderful folks from the TwitterVerse who pointed out archive.org as a resource, the XP image can be found here!
Speaking of images, I got an interesting tweet the other day, asking why Windows 10 wasn't mentioned in ch. 2 of the book. The short answer is two-fold: one, because it wasn't used/addressed. For the second part of the answer, I'd refer back to a blog post I'd written two years ago when I started writing IWS, specifically the section of the post entitled "The 'Ask'". Okay, I know that there's a lot going on in the TwitterVerse, and that two years is multiple lifetimes in Internet time. And I know that not everyone sees or ingests everything, and for those who do see things or tweets, if they have no relevance at the time, then "meh". I get it. I'm subject to it myself.
Okay, so, just to be clear...I'm not addressing that tweet in order to call someone out for not paying attention, or missing something. Not at all. I felt that this was a very good opportunity to provide clarity around, and set expectations regarding, the book, now that it's out. The longer response to the tweet question, the one that doesn't fit neatly into a tweet, is also two-fold: one, I could not find a Windows 10 image online that would have fit into that chapter. The idea at the core of writing the book was to provide a view into the analysis process, so that analysts could have something with which they could follow along.
The second part of the answer is that it's about the process; the analysis process should hold regardless of the version of Windows examined. Yes, the technical and tactical mechanics may change, but the process itself holds, or should hold. So, rather than focusing on, "wow, there's a whole section that addresses Windows XP...WTF??", I'd ask that the focus be on documenting an analysis plan, documenting case notes, and documenting what was learned from the analysis, and then rolling that right back into the analysis process. After all, the goal of the book is NOT to state that this is THE way to analyze a Windows system for malware, but to show the value of having a living, breathing, growing, documented, repeatable analysis process.
Also, I was engaged in analyzing systems impacted by NotPetya during the early summer of 2017. Another analyst on our team received several images from an impacted client, all of which were XP and 2003. So, yes, those systems are still out there and still actively being used by clients.
Books
One of the challenges of writing books is keeping people informed as to what's coming, and giving them the opportunity to have input. For example, after IWS was published, someone who follows me on social media said that they had no idea that there was a new book coming out. I wanted to take the opportunity (again) to let others know what was coming, what I'm working on, in an effort to not just set expectations, but to see if anyone has any thoughts or comments that might drive the content itself.
This new book is titled Practical Windows Investigations, and the current chapters are:
1. Core Concepts
2. How to analyze Windows Event Logs
3. How to get the most out of RegRipper
4. Malware Detection
5. How to determine data exfiltration
6. File (LNK, DOCX/DOC, PDF) Analysis
7. How to investigate lateral movement
8. How to investigate program execution
9. How to investigate user activity
10. How to correlate/associate a device with a user (USB, Bluetooth)
11. How to detect/analyze the use of anti-forensics
12. Making use of VSCs
PWI differs from IWS in that it falls about halfway between my previous books and IWS. What I mean by that is that my previous books listed artifacts, how to parse them, and their potential value during an investigation, but left it to the analyst to stitch the analysis together. IWS was more of a cradle-to-grave approach to an investigation, relying on publicly available images so that a reader could follow along, if they chose to do so. As such, IWS was somewhat restricted to what was available; PWI is intended to address some of those things that weren't available through the images used in IWS.
I'm going to leave that right there...
RegRipper Plugins
I recently released a couple of new plugins. One is "appkeys.pl", which addresses an interesting persistence mechanism described in Adam's blog post. Oh, and there's the fact that it's been seen in the wild, too...so, yeah.
The other is "slack.pl", which extracts slack space from Registry cells, and parses the retrieved data for keys and values. In my own testing, I've got it parsing keys and values, but just the data from those cell types. As of yet, I haven't seen a value cell, for example, that included the value data, just the name. It's there if you need it, and I hope folks find value in it.
LNK Parsing
While doing some research into LNK 'hotkeys' recently, I ran across Adam's blog post regarding the use of the AppKey subkeys in the Registry. I found this pretty fascinating, even though I do not have media keys on my keyboard, and as such, I wrote a plugin (aptly named "appkeys.pl") to pull this information from the Registry. I also created "appkeys_tln.pl" to extract those subkeys with "ShellExecute" values, and send the info to STDOUT in TLN format.
Adam also pointed out in his post that this isn't something that was entirely theoretical; it's been seen in the wild. As such, something like this takes on even greater significance.
Adam also provided a link to MS's keyboard mappings. By default, the subkey numbered "17" points to a CLSID, which translates to "My Computer".
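As a quick check on a live system, the AppKey subkeys can be enumerated and any ShellExecute values flagged; a minimal sketch using Python's winreg module is below. The key path and value name are taken from my reading of Adam's post, so treat them as assumptions to verify rather than gospel.

```python
import winreg

# Path per Adam's post (treated here as an assumption to verify)
APPKEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, APPKEY_PATH) as appkey:
    subkey_count = winreg.QueryInfoKey(appkey)[0]
    for i in range(subkey_count):
        sub_name = winreg.EnumKey(appkey, i)          # e.g. "17", "18", ...
        with winreg.OpenKey(appkey, sub_name) as sub:
            try:
                value, _ = winreg.QueryValueEx(sub, "ShellExecute")
                # A ShellExecute value means a media/app key launches an
                # arbitrary command; that's worth a closer look.
                print(f"AppKey\\{sub_name}: ShellExecute = {value}")
            except FileNotFoundError:
                pass                                   # no ShellExecute value
```

The RegRipper plugin provides the dead-box equivalent, running against an acquired hive file rather than the live Registry.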
Fun with Flags
There was a really interesting Twitter thread recently regarding a BSides Perth talk on APT LNK files. During the thread, Nick Carr pointed out MS had recently updated their LNK format specification documentation. During the discussion, Silas mentioned the LinkFlags field, and I thought, oh, here's a great opportunity to write another blog post, and work in a "The Big Bang Theory" reference. More to the point, however, I thought that by parsing the LinkFlags field, there might be an opportunity to identify toolmarks from whatever tool or process was used to create the LNK file. As such, I set about updating my parser to not only look for those documented flags that are set, but to also check the unused flags. I should also note that Silas recently updated his Python-based LNK parser, as well.
During a follow-on exchange on Twitter on the topic, @Malwageddon pointed me to this sample, and I downloaded a copy, naming it simply "iris" on my analysis system. I had to disable Windows Defender on my system, as downloading it or accessing it in any way, even via one of my tools, causes the file to be quarantined.
Doing a Google search for "dikona", I found this ISC handler post, authored by Didier Stevens. Didier's explanation is very thorough.
In order to do some additional testing, I used the VBS code available from Adam's blog post to create a LNK file that includes a "hotkey" field. In Adam's example, he uses a hotkey that isn't covered in the MS documentation, and illustrates that other hotkeys can be used, particularly for malicious purposes. For example, I modified Adam's example LNK file to launch the Calculator when the "Caps Lock" key was hit; it worked like a champ, even when I hit the "Caps Lock" key a second time to turn off that functionality on my keyboard. Now, imagine making that LNK file hidden from view on the Desktop...it does make a very interesting malware persistence method.
Additional Stuff:
Values associated with the ShowWindow function - the LNK file documentation describes a 4 byte ShowCommand value, and only includes 3 values with their descriptions in the specification; however, there are other values, as demonstrated in Adam's post.
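For reference, all three of the fields discussed above (LinkFlags, ShowCommand, and the hotkey) live in the fixed 76-byte shell link header, so pulling them out takes only a few struct.unpack calls. The sketch below also tests whether any bits the specification marks as unused have been set, since that's where toolmarks might show up; the "unused" mask reflects my reading of MS-SHLLINK and should be double-checked against the current revision.

```python
import struct
import sys

# Offsets into the fixed ShellLinkHeader (per MS-SHLLINK):
#   0x14  LinkFlags    (4 bytes)
#   0x3C  ShowCommand  (4 bytes)
#   0x40  HotKey       (2 bytes)
hdr = open(sys.argv[1], "rb").read(0x4C)

link_flags = struct.unpack_from("<I", hdr, 0x14)[0]
show_cmd   = struct.unpack_from("<I", hdr, 0x3C)[0]
hotkey     = struct.unpack_from("<H", hdr, 0x40)[0]

# Assumption based on my reading of the spec: the defined flags occupy the
# low 27 bits, so anything set in the top five bits is a potential toolmark.
UNUSED_MASK = 0xF8000000

print(f"LinkFlags  : 0x{link_flags:08x}")
print(f"ShowCommand: 0x{show_cmd:08x}")
print(f"HotKey     : 0x{hotkey:04x}  (low byte = key code, high byte = modifiers)")
if link_flags & UNUSED_MASK:
    print("NOTE: unused LinkFlags bits are set (possible toolmark)")
if show_cmd not in (0x1, 0x3, 0x7):    # the three documented ShowCommand values
    print("NOTE: ShowCommand is outside the three documented values")
```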
Support in the Industry
On 4 July, Alexis tweeted regarding the core reasons that should be behind our motivation for giving back to the community. Yes, I get that this tweet was directed at content producers, as well as those who might be thinking about producing content. His statement about the community not owing us engagement or feedback is absolutely correct, however disheartening I might have found that statement, and the realization, to be. But like I said, he's right. So, if you're going to share something, first look at why you're sharing. If you're doing it to get feedback (like I very often do...), then you have to accept that you're likely not going to get it. If you're okay with that, cool...fire away. This is something I've had to come to grips with, and doing so has changed the way (and what) I share. I think that it also shapes how others share, as well. What I mean is, why put in the effort of a thorough write-up in a blog post or an article, publishing it somewhere, when it's so much easier to just put it into a tweet (or two, or twelve...)? In fact, by tweeting it, you'll likely get much more feedback (in likes and RTs) than you would otherwise, even though stuff tweeted has a lifespan comparable to that of a fruit fly.
More recently, Alexis shared this blog post. I thought that this was a very interesting perspective to take, given that when I've engaged with others specifically about just offering a "thank you", I've gotten back some pretty extreme, absolutist comments in return. For example, when I suggested that if someone shares a program or script that you find useful, you should say "thank you", one tweeter responded that he's not going to say "thank you" every time he uses the script. That's a little extreme, not what I intended, and not what I was suggesting at all. But I do support Alexis' statement; if you find value in something that someone else put out there, express your gratitude in some manner. Say "thank you", write a review of the tool, comment on the blog post, whatever. While imposter syndrome appears to be something that an individual needs to deal with, I think that as a community, we can all help others overcome their own imposter syndrome by providing some sort of feedback.
As a side note, the imposter syndrome is not something isolated to the DFIR community...not at all. I've talked to a number of folks in other communities (threat intel, etc.) who have expressed realization of their own imposter syndrome.
Alexis also shared some additional means by which the community can support efforts in the field, and one that comes to mind is the request made by the good folks at Arsenal Recon. Their image mounter, called "AIM", provides a great capability, one that they're willing to improve with support from the community.