Monday, February 19, 2018
#DFIR Intel
...yeah, I get it...I'm not entirely imaginative, and don't come up with the best titles for my blog posts....
About 4 1/2 years ago, I got on a riff and blasted out a bunch of blog posts, over a dozen in one month. Most of these posts were "how to" articles...
One of the articles I wrote was How To: Add Intelligence to Analysis Processes. This is also a concept I address to some extent in my new book, Investigating Windows Systems (recently sent the manuscript to the publisher for review and formatting...); not only that, I give lots of examples of how to actually do this. IMHO, there are a lot of opportunities that are missed during pure #DFIR work to correlate and develop intelligence, which can then be "baked" back into tools and processes to make analysis "better" in the future.
Further, I read Brett's recent post about fitting a square peg into a round #DFIR hole, and I started to see correlations between the intel stuff and what Brett was saying. Sometimes, in looking for or at which tool is "best", we may run across something that's extremely valuable during another engagement, or to another analyst altogether.
I can easily see Brett's point; how do you answer the question of "what's the best tool to do X?", if the user/analyst requirements aren't fully understood? "Best" is relative, after all, isn't it? But I really think that in the process of determining requirements (what is "best"?) and which tool or process is best suited to provide the necessary answers, there's a great deal of intelligence that can and should be preserved.
A great example of developing intel within the community is from Mari's post, where she evaluated MFT parsers. In her post, Mari clearly specified the testing requirements, and evaluated the various tools against those requirements. This was great work, and is not only valid today (the post was published about 2 1/2 yrs ago), but continues to offer a great example of what can be done.
By now, you're probably wondering about the connection between correlating and developing intel from #DFIR engagements, and deciding which is the "better" #DFIR tool. Well, here it is...by correlating and developing intel from #DFIR cases/engagements, we can make all of our tools (and more importantly, analysis processes) inherently "better".
For example, going back to Mari's blog post on MFT parsers, let's say that the issue at hand is time stomping, and you'd like a means, as part of your analysis process, to look at just one file and determine if it had been time stomped. I wrote my own mft.pl parser, in part, to let me do exactly that...and in doing so, improved my analysis process. You can do the same thing by finding the right tool, or by contacting the author of a tool and asking that they add the needed capability.
What Does It Look Like?
So, what does this "intel" we're talking about look like? Well, it can take several forms, depending upon what you're doing. For example, if you do string searches, consider using Yara rules to augment your searches. Yara rules can be incorporated into analysis processes through a variety of means; one of my favorites is checking systems for web shells. Usually, I'll mount the image via FTK Imager, and run a Yara scan across the web site folder(s), using a combination of open source Yara rules for web shells, as well as rules that have been developed internally. Sometimes, this approach locates web shells right out of the box; in other cases, further investigation of the web server logs may be required to narrow down the hits from the Yara scan to the file in question.
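As a quick illustration, assuming the image is mounted as the G:\ volume and the web shell rules have been collected into a single file (both the drive letter and the rule file name here are just placeholders), the scan itself can be as simple as:

yara64.exe -r webshells.yar G:\inetpub\wwwroot > yara_hits.txt

The -r switch tells Yara to recurse through subfolders, and redirecting the output to a file gives you a list of hits to pivot on in the web server logs.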
Side Note: "Intel" can also "look like" this blog post. Great stuff, and yes, Yara rule(s) resulted from the analysis, as well.
Speaking of Yara rules, there is a LOT you can do with Yara rules beyond just running them against a file system or folder. You can run them against memory dumps, and in this article, Jeremy Scott mentions the use of page_brute to run Yara rules across a page file, or in the case of more recent versions of Windows (yet another reason why knowing the version of Windows you're examining matters!), all of the page files.
Another means for preserving and sharing intel is through the use of RegRipper plugins. I know you're thinking, "oh, yeah...easy, right? You write them all the time..."...and yes, I do. And getting a plugin written or modified is as easy as asking. I've been able to turn around plugins pretty quickly when given a concise, clear description of what you're looking for, and some sample data for testing. As another example, while I was examining one of the CTF images for my upcoming book, I ran across Registry data I'd never encountered before, and as a result, I wrote a RegRipper plugin.
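Once a plugin exists, running it against a hive is a one-line affair; for instance (the hive path below is just an example):

rip.exe -r D:\case\NTUSER.DAT -p userassist

The -r switch points to the hive file and -p names the plugin to run; rip.exe -l will list the plugins available in your installation.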
Another great thing about RegRipper plugins is that Nuix (for transparency, Nuix is my employer) now has a RegRipper extension for their Workbench product, meaning that you can run RegRipper plugins against Registry hives (and depending upon the version of Windows, the AmCache.hve file) right from Workbench, and then incorporate the results directly into your Nuix case. That way, all of your keyword searches, etc., will be run against this data, as well.
There are myriad other ways of preserving intel and sharing your findings. The means you use for preserving intel depends on what you're doing in your analysis process. Unusual command line options, such as those seen with cryptocurrency miners, can be preserved in EDR filter or alert rules, or in the case of browser-based miners, in Yara rules to be run against memory. Sometimes, you may not immediately see how to turn your findings into useful intel or tools, or how to bake what you found back into your own analysis process...this is a great reason for reaching out to someone and seeing what they might have to offer.
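To give a purely illustrative sense of what I mean by unusual miner command line options (the miner binary, pool address, and wallet placeholder below are made up for the example), a typical invocation looks something like:

xmrig.exe -o stratum+tcp://pool.example.com:3333 -u <wallet> -p x --donate-level=1

An EDR filter or alert rule that keys on something like "stratum+tcp://" appearing in a process command line is a simple way to bake that observation back into your monitoring.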
Side Note: There was a pretty extensive #DFIR sharing thread over on Twitter, that started out as thoughts/discussion, re: writing blogs vs writing books. As this thread progressed, Jessica reiterated the point that sharing doesn't have to be just about writing, that sharing within the #DFIR community can take a variety of forms; book reviews, podcasts, etc. I wholeheartedly agree, and also believe at the same time that writing is of paramount importance in our profession, as whether you're writing code, case notes, or a report, you have to be able to communicate. To that point, Brett shared his thoughts on ham sandwiches.
Friday, February 16, 2018
On Writing (DFIR) Books
After sharing my recent post regarding my next book, IWS, one of the comments I received via social media was a tongue-in-cheek reference to me being a "new" author. I got the humor in that right away, but it also got me to thinking about something that I hadn't thought about in a while...what it actually takes to write a DFIR book. I've thought about this before, at considerable length, because over the years, I've talked to others who have considered going down that path but for whatever reason, were not able to complete the journey.
Oftentimes, the magnitude of the endeavor can simply overwhelm folks. In some cases, events turn out to be much harder to manage than originally thought. In one instance, a friend asked me for advice...he and two co-authors had worked through the process of establishing a contract for a book with a publisher. It turned out that after the contract was signed, the team was assigned an editor, who then informed them that there was an error in the contract; they needed to deliver twice as many words as previously stated, with no extension on the delivery date. Needless to say, the team made the decision to not go forward with writing the book.
To be honest, one of the biggest challenges I've seen over the years is the disparity between the publishing company, with its SOPs, and the authors. It took me a while to figure this out, but the publishing company (I can't speak to all publishing companies, just the three I've been associated with...) looks to objective measures: word counts, numbers of chapters, numbers of images or figures, etc. I would think that schedules are pretty much universal, as we all deal with them, but some publishing companies are used to dealing with academia, for whom publishing is often an absolute necessity for survival. For many of those within the DFIR community who may be considering the idea of becoming a published author, writing a book is one of many things on an already crowded plate.
The other side of the coin is simply that, in my experience, many DFIR folks do not like to write, because they're not good at it. One of the first companies I worked with out of the military had a forensics guy who apparently did fantastic work, but it usually took two other people to turn his reports into something presentable for review...not for the client, but for someone on our team to review before sending them to the client. I recognize that writing isn't something people like to do, and I also recognize that my background, going back to my time on active duty, includes a great deal of writing (e.g., personnel evaluations/fitness reports, JAG manual investigations, etc.). As such, I approach it differently. I documented that approach to some extent in one of my books, providing a chapter on...wait for it...writing reports. Those same techniques can be used in writing books.
I've been with essentially the same publishing company (that's not to say the same editor, and I haven't worked with the same individuals throughout) since my second book (Elsevier bought Syngress), so needless to say, I've seen a great deal. I've gone through the effort (and no small amount of pain) and trials to get books published, and as such, I've learned a great deal along the way. At the same time, I've talked to a number of friends and other folks within the DFIR community who've expressed a desire to write a book, and some who've already demonstrated a very good basis for doing just that.
Some time ago, in a galaxy far, far away, I engaged with my editor to develop a role for myself, one in which, rather than writing books, I would engage with new authors as a liaison. In this role, I would begin working with aspiring authors in the early stages of developing their ideas, and help them navigate the labyrinth of getting a book published. I basically sat down and asked myself (after my fourth or fifth book had been published), "what do I know now that I wish I'd known when writing my first book?" Armed with this information, I thought, here's a great opportunity to present this information to new authors ahead of time, and make the process easier for them. Or, they may look at the scope and range of the process, and determine that it's not for them. Either way, it's a win-win.
Also, and I think this is important to point out, this was not a paying position or role. There are significant cultural differences between DFIR practitioners and a publisher of predominantly academic books, and as such, this role needed to be socialized on both sides. However, before either editor could really wrap their heads around the idea and socialize it with the publishing company, they moved on to other adventures.
As such, I figured that a good way to help folks interested in writing a book would be to provide some initial thoughts and advice, and then let those who are interested take it a step or two beyond that.
The Idea
All books start with an idea. What is the basis for what you want to write/communicate? When I started out with the Windows Forensic Analysis books, the basic idea I had in mind was that I wanted to write a book that I'd want to purchase. I'd seen a number of the books that were out there that covered, to some extent, the same topic, but not to what I saw as the appropriate depth. I wanted to be able to go to a bookstore, see a book with the words "Windows" and "forensics" on the spine, and upon opening it, have it be something I'd want to take to the register and purchase.
Something else to consider is that you do not have to have a new or original idea. I wrote Windows Registry Forensics because there was nothing out there like it. But I wrote Windows Forensic Analysis because I wanted to take a different approach to what was already out there...most of what I found didn't go into the depth that I wanted to see.
When I was employed by SecureWorks, I authored a blog post that discussed the use of the Samsam ransomware. Kevin Strickland, a member of the IR team, took a completely different approach in how he looked at some of the same data, which ended up being one of the most quoted SecureWorks blog posts of 2016. My point is that it doesn't always take an original idea...sometimes, all it really takes is a different way of looking at the same data.
Structure Your Thoughts
It may not seem obvious, but structuring your thoughts can go a LONG way toward making your project an achievable success.
The best way to do this, that I've found, is to create a detailed outline. Actually write down your thoughts. And don't think you have to do it all at once...when I wrote personnel evaluations in the military, I didn't do it in one sitting, because I didn't think that would be fair to my Marines. I did it over time...I wrote down my initial thoughts, then let them marinate, and came back to them a day or two later. The same thing can be done with the outline...create the initial outline, and then walk away from it. Maybe socialize it with some co-workers, discuss it, see what other ideas may be out there. Take some of the terms and phrases you used in your outline, and Google them to see what others may be saying about them. You may find validation that way, like, "yeah, this is a good idea...", or you may find that others are thinking about those terms in a different way. Either way, use time to develop your ideas. I do this with my blog posts. Something to realize is that the outline may be a living document; once you've "completed" it, it will likely change and grow over time, as you grow in your writing. Chapters/thoughts may be consolidated, or you may find that what you thought would be one chapter is actually better communicated as two (or more) chapters. And that's okay.
What I've learned over the years is that the more detailed your outline is, the easier it is to communicate your ideas to the publisher, because they're going to send your idea out to others for review. Much like a resume, the thought behind your outline is that you want to leave the person reviewing it no other option than to say, "yes"...so the clearer you can be, the more likely this is to happen. And the other thing I've learned is that the more detailed the outline, the easier it is to actually write the book. Because you're very likely going to be writing in sections, it's oh, so much easier to pick something back up if you know exactly where you left off, and a detailed outline can help you with that.
Start Writing
That's right...try writing a chapter. Pick one that's easy, and see what it's like to actually write it. We all have "life", that stuff we do all the time, and it's a good idea to see how this new adventure fits into yours. Do you get up early and write before kicking off your work day, or is your best time to write after the work day is over?
Get someone to take a look at what you've written, from the perspective of purchasing the finished product. We may not hit the bull's eye on the first few iterations, and that's okay.
Reviews
Get your initial attempts reviewed by someone you trust to be honest with you. Too many times over the years, I've provided draft reports for co-workers to review, and within 15 minutes received just a "looks good". Great, that makes me feel wonderful, but is that realistic for a highly technical report that's over 30 pages long? In one particular instance, I rewrote the entire report from scratch, and got the same response within the same time frame, from the same co-worker. Clearly, that is not an honest review.
In the early stages of writing my second book, I had a reviewer selected by the publishing company, and I'd get chapters back that just said, "looks good" or "needs work". From that point on, I made a point of finding my own reviewer and making arrangements with them ahead of time to get them on board with the project. What I wanted to know from the reviewer was, does what I wrote make sense? Is it easy to follow? When you're writing a book based on your own knowledge and experience, you're very often extremely close to and intimate with the subject, and someone else who may not be as familiar with it may need a bit more explanation or description. That's okay...that's what having a reviewer is all about.
At this point, we've probably reached the "TL;DR" mark. I hope that this article has been helpful, in general, and more specifically, if you're interested in writing a DFIR book. If you have any thoughts or questions, feel free to comment here, or send them to me.
Wednesday, February 14, 2018
IWS
As I'm winding up the final writing for my next book, Investigating Windows Systems, I thought I'd take the opportunity to say/write a few words with respect to what the book is, and what it is not.
In the past, I've written books that have provided walk-thrus of various artifacts on Windows systems. This seemed to be a good way to introduce folks to the possibilities of what was available on Windows systems, and what they could achieve through their analysis of images acquired from those systems.
With Investigating Windows Systems, I've taken a markedly different approach. Rather than providing introductory walk-thrus of artifacts, I'm focusing on the analysis process itself, and discussing pivot points and analysis decisions made along the way. To do this, I've used available CTF and forensic challenge images (I reached out to the authors to see if it was okay with them to do this...) as the basis, and in chapters 2, 3, and 4, walk through the analysis of the images. In most cases, I've tried to provide more real-world examples of the analysis goals (which we document) than what was provided as part of the CTF. For instance, one CTF has 31 questions to answer as part of the challenge, some of which are things that should be documented as a matter of SOP in just about every case. However, I opted to take a different approach with the analysis goals, because in two decades of cybersecurity consulting, I've never worked with a client that has asked 30 or more questions regarding the case, or the image being analyzed. In the vast majority of cases, the questions have been, "...was the system compromised/infected?", often followed by "...was sensitive data exfiltrated from the system?" Pretty straightforward stuff, and as such, I wanted to take what I've seen to be a realistic, IRL approach to the analysis goals.
Another aspect of the book is that a certain level of knowledge and capability is assumed of the reader, like a "you must be this tall to ride this ride" thing. For example, throughout the book, in the various sections, I create timelines as part of the analysis process. However, I don't provide a basic walk-thru of how to create a timeline, because I assume that the reader already knows how to do so, either by using their own process, or from chapter 7 of Windows Forensic Analysis (in both the third and fourth editions). Also, in the book, I don't spend any time explaining all of the different things you can do with some of the tools that are discussed; rather, I leave that to the reader. After all, a lot of the things that someone might be curious about are easy to find online. Now, this doesn't mean that a new analyst can't make use of the book...not at all. I'm simply sharing this to set the expectation of anyone who's considering purchasing the book. I don't cover topics such as malware RE, memory acquisition and analysis, etc., as there are some fantastic resources already available that provide in-depth coverage of these topics.
Additional Materials
With some of my previous books, I've established online repositories for additional materials included with the book. As such, I've established a Github repository for materials associated with this one. As an example, in writing Chapter 4, I ended up having to write some code to parse some logs...that code is included in the repository.
Producing Intel
Something else I talk about in the book, in addition to the need for documentation, is the need for DFIR analysts to look at what they have available in an IR engagement that they can use in other engagements in the future. The basic idea behind this is to develop, correlate, and maintain corporate knowledge and intelligence.
In one instance in the book, during an analysis, I found something in the Registry that didn't directly pertain to the analysis in question, but I created a new RegRipper plugin, wc_shares.pl. I added the plugin directly to the RegRipper repository.
As a bit of a side note, if you're a Nuix customer, you can now run RegRipper through Workbench. Nuix has added an extension to their Workbench product that allows you to run RegRipper without having to close out the case or export individual files. For more details, here's the fact sheet.
Other ways to maintain and share intelligence include Yara rules, endpoint filter/alert rules, adding an entry to eventmap.txt, etc. But that's not all; there are other ways to share intelligence, such as this blog post that I wrote during previous employment, with a good deal of help from some really smart folks. That blog post alone contains a great deal of valuable intelligence that can be baked back into tools and processes, and extend your team's capabilities. For example, look at figure 2 in the blog post; it illustrates the command that the adversary issued to take specific actions (fig. 1 illustrates the results of that command). If you're using an EDR tool, monitoring for that command line (or something similar) will allow you to detect this activity early in the adversary's attack cycle. If you're not using an EDR tool and want to do some threat hunting, you now have something specific to look for.
How To...
...Parse Windows Event Logs
I caught a really interesting tweet the other day that pointed to the DFIR blog, one that discussed how to parse Windows Event Logs. I thought the approach was interesting, so I thought I'd share the process I use for parsing Windows Event Logs (*.evtx files).
So, I'm not saying that there's anything wrong with the process laid out in the DFIR blog post...not at all. In fact, I'm grateful that the author took the time to write it up and share it with others. It's a fantastic resource, but there's more than one way to accomplish a great many tasks in the DFIR world, aren't there? As Dan said, there are some great examples in the post.
When I create timelines, I use a batch file (wevtx.bat) that runs LogParser, and as the *.evtx logs are parsed, runs them through eventmap.txt to "tag" interesting events. The batch file takes two arguments, the path to a file or directory with *.evtx files (LogParser understands wild cards), and the output event file (events are appended to the file if the file already exists).
Now, I did say, "...when I create timelines...", but this method works very well with just *.evtx files, or even just a few, or even one, *.evtx file.
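As an example of what that looks like (the paths below are made up), the command line is simply the batch file, the location of the logs, and the output file:

wevtx.bat F:\evtx\*.evtx G:\case\events.txt

From there, events.txt is the "events file" I refer to below.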
The methodology in the DFIR blog post includes looking for specific event IDs, which is great. The way I do it is that once I've parsed all of the *.evtx files I'm going to parse, I have an "events file"; from there, I can pull out event source/ID pairs pretty easily using "type" and "find". It's pretty easy, like so:
type events.txt | find "Security/4624" > logon_events.txt
You can then add to that file using the append redirection operator (i.e., ">>"), or search for other source/ID pairs and create other output files. I've used this method to create micro- or nano-timelines of just specific events, so that I can get a view of things that I wouldn't be able to see in a complete timeline.
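For example (the output file names are just examples), failed logons can be appended to the same file, or given their own micro-timeline source file:

type events.txt | find "Security/4625" >> logon_events.txt
type events.txt | find "Security/4672" > priv_logon_events.txt

Event ID 4625 is a failed logon, and 4672 indicates that special privileges were assigned to a new logon; both follow the same source/ID pattern as the 4624 example above.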
Okay, why am I talking about event source/ID "pairs"? Well, in the DFIR blog post, they're looking in the Security Event Log file (Security.evtx) for specific event IDs, but when you start looking across other *.evtx files and incorporating them into your analysis, you may start to see that some event records may have different sources, but the same event ID, depending upon what's installed on the system. For example, event ID 6001 can have sources of WinLogon, DNS, and Wlclntfy.
So, for the sake of clarity, I use event source/ID pairs in eventmap.txt; I haven't seen every possible event ID, and therefore, don't want to have something incorrectly tagged. There's no reason to draw the analyst's attention to something if it's not necessary to do so.
Wait...What?
Okay, there are times when Windows Event Logs are not Windows Event Logs...that's when they're Event Logs. Get it?
Okay, stand by...this is the part where the version of Windows matters. I've gotten myself in trouble over the years for asking stoopit questions (after someone takes the time to write out their request), like, "What's the version of Windows you're examining?" I get it. Bad Harlan. But you know what...it matters. And I know you're going to say, "yeah, dude...but no one uses XP any more." Uh...yeah...no. During the summer of 2017, I was assisting with analyzing some systems that had been hit with NotPetya, and another analyst was examining Windows XP and 2003 systems from another client.
The reason this is important is that, in addition to there being many more log files available on Vista+ systems, the binary format of the log files themselves is different. For example, I wrote evtparse.exe (NOTE: there is NO "x" in the file name! Evtxparse.exe is a completely different tool!) specifically to parse Event Log files from XP and Win2003 systems. The great thing is that it does so on a binary basis, without using the MS API. This means that if the header information says that there are 400 event records in the file, but there are actually 4004, you will get 4004 records parsed.
I also wrote lfle.pl to parse Event Log records from unstructured data (pagefile, memory dump, unallocated space, etc.). I originally wrote this code to assist a friend of mine who'd been working on a way to carve Event Log records from unallocated space from a Win2003 server for about 3 months. Since I wrote it, I've used it successfully to parse records myself. Lots of fun!
Monday, February 12, 2018
Intel, Tools, Etc.
Intel
One of the things I've been pushing for over the years is for IR teams to start (for those that haven't) developing intelligence from IR engagements, based on individual engagements, as well as correlating data across multiple engagements. If you're not looking for artifacts, findings and intelligence that you can bake back into your tools and processes, then you're leaving a great deal of value on the floor.
Over on Twitter, Chad Tilbury recently mentioned that "old techniques are still relevant today", and he's right. Where that catches us is when we don't make baking intelligence from engagements back into our processes and tools part of what we do. Conducting a DFIR engagement and moving on, without collecting and retaining corporate knowledge and intelligence from that engagement, means we're doing ourselves (and each other) a massive disservice.
Chad's tweet pointed to a TrustWave blog post that's just over four years old at this point, about webshell code being hidden in image files. I've seen this myself on engagements, and by sharing it with others on my team, we were able to upgrade our detection capabilities, and bake those findings right back into the tools that everyone used (and still use). Also, and it kind of goes without saying, sharing it in the public blog post informs others in the community, and provides a mechanism for them to upgrade their knowledge and detection capabilities, as well, without having to have experienced or seen the issue themselves.
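As a quick-and-dirty sketch of what that kind of check can look like (the mounted web root path is just an example, and this is no substitute for a proper set of Yara rules), searching image files under a web root for embedded script code can be as simple as:

findstr /s /m /i "base64_decode" G:\inetpub\wwwroot\*.jpg G:\inetpub\wwwroot\*.png

The /s switch recurses subfolders, /m prints only the names of files containing a hit, and /i makes the search case-insensitive; any image file that comes back deserves a closer look.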
Malware Research
I've always been really interested in write-ups of analysis of malware, in part, because I try to pull out the intel I can use. I don't have much use for listings of assembly language code...I'm not sure how that helps folks hunt for, find, respond to, or understand the risk associated with the malware. Also, it doesn't really illustrate how the malware is actually used.
I recently ran across this write-up of PlugX, a backdoor that I'm somewhat familiar with. It's helpful, particularly the section on persistence. In this case, the backdoor takes advantage of the DLL search order hijacking vulnerability by dropping a legit signed executable (avp.exe) in the same folder as the malicious DLL. I've seen this sort of thing before...in other instances, a Symantec file was used.
Something I've wondered when folks have shared this finding is, what was the AV solution used at the client site? Was 'avp.exe' used because Kaspersky was the AV solution employed by the client/target? The same question applies regardless of which legitimate executable file was used.
Tools
The folks over at Kaspersky Labs posted a tool for parsing Windows Event Log files on their GitHub repository. I downloaded the Windows x64 binary, and found that the tool apparently has no syntax information available; it takes only a file name as an argument. The output is sent to the console, as illustrated in the example below:
Record #244 2016.05.12-00:22:08 'Computer':'slimshady', 'Channel':'Application', 'EventSourceName':'Software Protection Platform Service', 'Guid':'{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}', 'Name':'Microsoft-Windows-Security-SPP', 'xmlns':'http://schemas.microsoft.com/win/2004/08/events/event', 'Level':00, 'Opcode':00, 'Task':0000, 'EventID':1009, 'Qualifiers':16384, 'Keywords':0080000000000000, 'SystemTime':2016.05.12-00:22:08, 'ProcessID':00000000, 'ThreadID':00000000, 'EventRecordID':0000000000000244, 'Version':00, 'Data':'...//0081[004A]',
As you can see, this output is searchable, or 'grepable' as the *nixphiles like to say. It's a good tool for parsing individual *.evtx files, or you can include it in a batch file.
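For example (the binary name below is just a stand-in for whatever the download is actually called), pulling a specific event ID out of a single log file could look like:

evtx_parser.exe Security.evtx | find "'EventID':4624" > security_4624.txt

Because the output is plain text sent to the console, standard redirection and "find" filtering work just as they do with the events file approach described above.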
I ran across WHIDS (Windows host IDS) on Twitter the other day...seems like a great idea, allowing for rules/filters to be built on top of Sysmon, on workstations or event collectors.
On 10 Feb 2018, I found this pretty fascinating blog post regarding an MS tool named "vshadow". This is some great info about how the tool could be used. I blogged about this tool myself back in 2016, which was based in part on a Carbon Black blog post from the previous summer. If you're an analyst or threat hunter, or even a pen tester, you should most definitely take a look at all of this reading. The Cb blog in particular describes some pretty interesting uses of the tool that have been observed in the wild, and it's pretty crazy stuff.
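To give a sense of why this matters, the abuse described in those write-ups centers on vshadow's ability to launch another program via its -exec switch as part of creating a shadow copy set; a notional example (the payload path is made up) would be something like:

vshadow.exe -nw -exec=c:\programdata\payload.cmd c:

Seeing a signed Microsoft VSS SDK utility spawning an unexpected child process like that is exactly the kind of thing an EDR filter rule or threat hunt can key on.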
On Writing Books
As the writing of my most recent book winds down and chapters (and the ancillary materials, front and back matter) are sent to the publisher for review, I feel less of the sense of urgency that I need to be working on the book, and my thoughts get freed up to think about things like...writing books.
With respect to the current book, there're always the thoughts, "...did I say enough..", and "...was that meaningful? Did it make sense? Will others understand it? Will something I said have an impact on how someone performs this work?" Even with the great work done and assistance provided by Mari, my tech editor, those questions persist.
The simple fact that I've had to make peace with is that no matter how much work I put into the book, it's never going to be "perfect"; it's not going to be everything to everyone. And that's cool. Really. I've seen over the years that no matter how much I talk about the books while writing them, at some point after the book is published, I'm going to get the inevitable, "...did you cover...", "...did you talk about...", and "...why didn't you say anything about..." questions. I get it. It's all good.