Friday, February 16, 2018

On Writing (DFIR) Books

After sharing my recent post regarding my next book, IWS, one of the comments I received via social media was a tongue-in-cheek reference to me being a "new" author.  I got the humor in that right away, but it also got me to thinking about something that I hadn't thought about in a while...what it actually takes to write a DFIR book.  I've thought about this before, at considerable length, because over the years, I've talked to others who have considered going down that path but for whatever reason, were not able to complete the journey.

Oftentimes, the magnitude of the endeavor can simply overwhelm folks.  In some cases, events turn out to be much harder to manage than originally thought.  In one instance, a friend asked me for advice...he and two co-authors had worked through the process of establishing a contract for a book with a publisher.  It turned out that after the contract was signed, the team was assigned an editor, who then informed them that there was an error in the contract; they needed to deliver twice as many words as previously stated, with no extension on the delivery date.  Needless to say, the team made the decision to not go forward with writing the book.

To be honest, one of the biggest challenges I've seen over the years is the disparity between the publishing company and their SOP, and the authors.  It took me a while to figure this out, but the publishing company (I can't speak to all publishing companies, just the three I've been associated with...) looks to objective measures: word counts, numbers of chapters, numbers of images or figures, etc.  I would think that schedules are pretty much universal, as we all deal with them, but some publishing companies are used to dealing with academia, for whom publishing is often an absolute necessity for survival.  For many of those within the DFIR community who may be considering the idea of becoming a published author, writing a book is one of many things on an already crowded plate.

The other side of the coin is simply that, in my experience, many DFIR folks do not like to write, because they're not good at it.  One of the first companies I worked with out of the military had a forensics guy who apparently did fantastic work, but it took two other people (usually) to turn his reports into something presentable for review...not for the client, but for someone on our team to review before sending them to the client.  I recognize that writing isn't something people like to do, and I also recognize that my background, going back to my time on active duty, includes a great deal of writing (i.e., personnel evaluations/fitness reports, JAG manual investigations, etc.).  As such, I approach it differently.  I documented that approach to some extent in one of my books, providing a chapter on...wait for it...writing reports.  Those same techniques can be used in writing books.

I've been with essentially the same publishing company (though not the same editor; I haven't worked with the same individuals throughout) since my second book (Elsevier bought Syngress), so needless to say, I've seen a great deal.  I've gone through the effort (and no small amount of pain) and trials of getting books published, and as such, I've learned a great deal along the way.  At the same time, I've talked to a number of friends and other folks within the DFIR community who've expressed a desire to write a book, and some who've already demonstrated a very good basis for doing just that.

Some time ago, in a galaxy far, far away, I engaged with my editor to develop a role for myself, one in which, rather than writing books, I worked with new authors as a liaison.  In this role, I would begin working with aspiring authors in the early stages of developing their ideas, and help them navigate the labyrinth of getting a book published.  I basically sat down and asked myself (after my fourth or fifth book had been published), "what do I know now that I wish I'd known when writing my first book?"  Armed with this information, I thought, here's a great opportunity to present this information to new authors ahead of time, and make the process easier for them.  Or, they may look at the scope and range of the process, and determine that it's not for them.  Either way, it's a win-win.

Also, and I think this is important to point out, this was not a paying position or role.  There are significant cultural differences between DFIR practitioners and a publisher of predominantly academic books, and as such, this role needed to be socialized on both sides.  However, before either editor could really wrap their heads around the idea, and socialize it with the publishing company, they moved on to other adventures.

As such, I figured that a good way to help folks interested in writing a book would be to provide some initial thoughts and advice, and then let those who are interested take it a step or two beyond that.

The Idea
All books start with an idea.  What is the basis for what you want to write/communicate?  When I started out with the Windows Forensic Analysis books, the basic idea I had in mind was that I wanted to write a book that I'd want to purchase.  I'd seen a number of the books that were out there that covered, to some extent, the same topic, but not to what I saw as the appropriate depth.  I wanted to be able to go to a bookstore, see a book with the words "Windows" and "forensics" on the spine, and upon opening it, have it be something I'd want to take to the register and purchase. 

Something else to consider is that you do not have to have a new or original idea.  I wrote Windows Registry Forensics because there was nothing out there like it.  But I wrote Windows Forensic Analysis because I wanted to take a different approach to what was already out there...most of what I found didn't go into the depth that I wanted to see.

When I was employed by SecureWorks, I authored a blog post that discussed the use of the Samsam ransomware.  Kevin Strickland, a member of the IR team, took a completely different approach in how he looked at some of the same data, which ended up being one of the most quoted SecureWorks blog posts of 2016.  My point is that it doesn't always take an original idea...sometimes, all it really takes is a different way of looking at the same data.

Structure Your Thoughts
It may not seem obvious, but structuring your thoughts can go a LONG way toward making your project an achievable success.

The best way to do this, that I've found, is to create a detailed outline.  Actually write down your thoughts.  And don't think you have to do it all at once...when I wrote personnel evaluations in the military, I didn't do it in one sitting, because I didn't think that would be fair to my Marines.  I did it over time...I wrote down my initial thoughts, then let them marinate, and came back to them a day or two later.  The same thing can be done with the outline...create the initial outline, and then walk away from it.  Maybe socialize it with some co-workers, discuss it, see what other ideas may be out there.  Take some of the terms and phrases you used in your outline, and Google them to see what others may be saying about them.  You may find validation that way, like, "yeah, this is a good idea...", or you may find that others are thinking about those terms in a different way.  Either way, use time to develop your ideas.  I do this with my blog posts.

Something to realize is that the outline may be a living document; once you've "completed" it, it will likely change and grow over time, as you grow in your writing.  Chapters/thoughts may be consolidated, or you may find that what you thought would be one chapter is actually better communicated as two (or more) chapters.  And that's okay.

What I've learned over the years is that the more detailed your outline is, the easier it is to communicate your ideas to the publisher, because they're going to send your idea out to others for review.  Much like a resume, the thought behind your outline is that you want to leave the person reviewing it no other option than to say, "yes"...so the clearer you can be, the more likely this is to happen.  And the other thing I've learned is that the more detailed the outline, the easier it is to actually write the book.  Because you're very likely going to be writing in sections, it's oh, so much easier to pick something back up if you know exactly where you left off, and a detailed outline can help you with that.

Start Writing
That's right...try writing a chapter.  Pick one that's easy, and see what it's like to actually write it.  We all have "life", that stuff we do all the time, and it's a good idea to see how this new adventure fits into yours.  Do you get up early and write before kicking off your work day, or is your best time to write after the work day is over?

Get someone to take a look at what you've written, from the perspective of purchasing the finished product.  We may not hit the bull's eye on the first few iterations, and that's okay. 

Reviews
Get your initial attempts reviewed by someone you trust to be honest with you.  Too many times over the years, I've provided draft reports for co-workers to review, and within 15 minutes received just a "looks good".  Great, that makes me feel wonderful, but is that realistic for a highly technical report that's over 30 pages long?  In one particular instance, I rewrote the entire report from scratch, and got the same response within the same time frame, from the same co-worker.  Clearly, this is not an honest review.

In the early stages of writing my second book, I had a reviewer selected by the publishing company, and I'd get chapters back that just said, "looks good" or "needs work".  From that point on, I made a point of finding my own reviewer and making arrangements with them ahead of time to get them on-board with the project.  What I wanted to know from the reviewer was, does what I wrote make sense?  Is it easy to follow?  When you're writing a book based on your own knowledge and experience, you're very often extremely close to and intimate with the subject, and someone else who may not be as familiar with it may need a bit more explanation or description.  That's okay...that's what having a reviewer is all about.

At this point, we've probably reached the "TL;DR" mark.  I hope that this article has been helpful, in general, and more specifically, if you're interested in writing a DFIR book.  If you have any thoughts or questions, feel free to comment here, or send them to me.

Wednesday, February 14, 2018

IWS

As I'm winding up the final writing for my next book, Investigating Windows Systems, I thought I'd take the opportunity to say/write a few words with respect to what the book is, and what it is not.

In the past, I've written books that have provided walk-thrus of various artifacts on Windows systems.  This seemed to be a good way to introduce folks to the possibilities of what was available on Windows systems, and what they could achieve through their analysis of images acquired from those systems.

With Investigating Windows Systems, I've taken a markedly different approach.  Rather than providing introductory walk-thrus of artifacts, I'm focusing on the analysis process itself, and discussing pivot points, and analysis decisions made along the way.  To do this, I've used available CTF and forensic challenge images (I reached out to the authors to see if it was okay with them to do this...) as the basis, and in chapters 2, 3, and 4, walk through the analysis of the images.  In most cases, I've tried to provide more real world examples of the analysis goals (which we document) than what was provided as part of the CTF.  For instance, one CTF has 31 questions to answer as part of the challenge, some of which are things that should be documented as a matter of SOP in just about every case.  However, I opted to take a different approach with the analysis goals, because in two decades of cybersecurity consulting, I've never worked with a client that has asked 30 or more questions regarding the case, or the image being analyzed.  In the vast majority of cases, the questions have been, "..was the system compromised/infected?", often followed by "...was sensitive data exfiltrated from the system?"  Pretty straightforward stuff, and as such, I wanted to take what I've seen as a realistic, IRL approach to the analysis goals.

Another aspect of the book is that a certain level of knowledge and capability is assumed of the reader, like a "you must be this tall to ride this ride" thing.  For example, throughout the book, in the various sections, I create timelines as part of the analysis process.  However, I don't provide a basic walk-thru of how to create a timeline, because I assume that the reader already knows how to do so, either by using their own process, or from chapter 7 of Windows Forensic Analysis (in both the third and fourth editions).  Also, in the book, I don't spend any time explaining all of the different things you can do with some of the tools that are discussed; rather, I leave that to the reader.  After all, a lot of the things that someone might be curious about are easy to find online.  Now, this doesn't mean that a new analyst can't make use of the book...not at all.  I'm simply sharing this to set the expectation of anyone who's considering purchasing the book.  I don't cover topics such as malware RE, memory acquisition and analysis, etc., as there are some fantastic resources already available that provide in-depth coverage of these topics.

Additional Materials
With some of my previous books, I've established online repositories for additional materials included with the book.  As such, I've established a Github repository for materials associated with this one.  As an example, in writing Chapter 4, I ended up having to write some code to parse some logs...that code is included in the repository.

Producing Intel
Something else I talk about in the book, in addition to the need for documentation, is the need for DFIR analysts to look at what they have available in an IR engagement that they can use in other engagements in the future.  The basic idea behind this is to develop, correlate and maintain corporate knowledge and intelligence.

In one instance in the book, during an analysis, I found something in the Registry that didn't directly pertain to the analysis in question, but was still worth preserving, so I created a new RegRipper plugin, wc_shares.pl, and added it directly to the RegRipper repository.

As a bit of a side note, if you're a Nuix customer, you can now run RegRipper through Workbench.  Nuix has added an extension to their Workbench product that allows you to run RegRipper without having to close out the case or export individual files.  For more details, here's the fact sheet.

Other ways to maintain and share intelligence include Yara rules, endpoint filter/alert rules, adding an entry to eventmap.txt, etc.  But that's not all; there are other ways to share intelligence, such as this blog post that I wrote during a previous employment, with a good deal of help from some really smart folks.  That blog post alone has a great deal of valuable intelligence that can be baked back into tools and processes, and used to extend your team's capabilities.  For example, look at figure 2 in the blog post; it illustrates the command that the adversary issued to take specific actions (fig. 1 illustrates the results of that command).  If you're using an EDR tool, monitoring for that command line (or something similar) will allow you to detect this activity early in the adversary's attack cycle.  If you're not using an EDR tool and want to do some threat hunting, you now have something specific to look for.

How To...

...Parse Windows Event Logs
I caught a really interesting tweet the other day that pointed to a post on the DFIR blog discussing how to parse Windows Event Logs.  I thought the approach was interesting, so I thought I'd share the process I use for parsing Windows Event Logs (*.evtx files).

So, I'm not saying that there's anything wrong with the process laid out in the DFIR blog post...not at all.  In fact, I'm grateful that the author took the time to write it up and share it with others.  It's a fantastic resource, but there's more than one way to accomplish a great many tasks in the DFIR world, isn't there?  As Dan said, there are some great examples in the post.

When I create timelines, I use a batch file (wevtx.bat) that runs LogParser, and as the *.evtx logs are parsed, runs them through eventmap.txt to "tag" interesting events.  The batch file takes two arguments, the path to a file or directory with *.evtx files (LogParser understands wild cards), and the output event file (events are appended to the file if the file already exists).
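As an example, a run might look something like the following (the paths here are purely illustrative; point the batch file at wherever you've exported the *.evtx files from the image):

wevtx.bat F:\case\evtx\*.evtx F:\case\events.txt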

Now, I did say, "...when I create timelines...", but this method works very well with just *.evtx files, or even just a few, or even one, *.evtx file.

The methodology in the DFIR blog post includes looking for specific event IDs, which is great.  The way I do it is that once I've parsed all of the *.evtx files I'm going to parse, I have an "events file"; from there, I can pull out event source/ID pairs pretty easily using "type" and "find", like so:

type events.txt | find "Security/4624" > logon_events.txt

You can then add to that file using the append redirection operator (i.e., ">>"), or search for other source/ID pairs and create other output files.  I've used this method to create micro- or nano-timelines of just specific events, so that I can get a view of things that I wouldn't be able to see in a complete timeline.
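As an example, a nano-timeline of logon activity might be built up along these lines (the source/ID pairs shown here are just examples...use whichever pairs matter to your analysis):

type events.txt | find "Security/4624" > logon_events.txt
type events.txt | find "Security/4625" >> logon_events.txt
type events.txt | find "Security/4648" >> logon_events.txt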

Okay, why am I talking about event source/ID "pairs"?  Well, in the DFIR blog post, they're looking in the Security Event Log file (Security.evtx) for specific event IDs, but when you start looking across other *.evtx files and incorporating them into your analysis, you may start to see that some event records may have different sources, but the same event ID, depending upon what's installed on the system.  For example, event ID 6001 can have sources of WinLogon, DNS, and Wlclntfy.

So, for the sake of clarity, I use event source/ID pairs in eventmap.txt; I haven't seen every possible event ID, and therefore, don't want to have something incorrectly tagged.  There's no reason to draw the analyst's attention to something if it's not necessary to do so.
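To see why that matters, consider the event ID 6001 example; the first command below will happily mix together records from all three sources, while the second narrows things down to a single source/ID pair:

type events.txt | find "/6001"
type events.txt | find "WinLogon/6001"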

Wait...What?
Okay, there are times when Windows Event Logs are not Windows Event Logs...that's when they're Event Logs.  Get it?

Okay, stand by...this is the part where the version of Windows matters.  I've gotten myself in trouble over the years for asking stoopit questions (after someone takes the time to write out their request), like, "What's the version of Windows you're examining?"  I get it.  Bad Harlan.  But you know what...it matters.  And I know you're going to say, "yeah, dude...but no one uses XP any more."  Uh...yeah...no.  During the summer of 2017, I was assisting with analyzing some systems that had been hit with NotPetya, and another analyst was examining Windows XP and 2003 systems from another client.

The reason this is important is that, in addition to there being many more log files available on Vista+ systems, the binary format of the log files themselves is different.  For example, I wrote evtparse.exe (NOTE: there is NO "x" in the file name!  Evtxparse.exe is a completely different tool!) specifically to parse Event Log files from XP and Win2003 systems.  The great thing is that it does so on a binary basis, without using the MS API.  This means that if the header information says that there are 400 event records in the file, but there are actually 4004, you will get 4004 records parsed. 

I also wrote lfle.pl to parse Event Log records from unstructured data (pagefile, memory dump, unallocated space, etc.).  I originally wrote this code to assist a friend of mine who'd been working on a way to carve Event Log records from unallocated space from a Win2003 server for about 3 months.  Since I wrote it, I've used it successfully to parse records myself.  Lots of fun!

Monday, February 12, 2018

Intel, Tools, Etc.

Intel
One of the things I've been pushing for over the years is for IR teams to start (for those that haven't) developing intelligence from IR engagements, based on individual engagements, as well as correlating data across multiple engagements.  If you're not looking for artifacts, findings and intelligence that you can bake back into your tools and processes, then you're leaving a great deal of value on the floor.

Over on Twitter, Chad Tilbury recently mentioned that "old techniques are still relevant today", and he's right.  Where that catches us is when we don't make baking intelligence from engagements back into our processes and tools part of what we do.  Conducting a DFIR engagement and moving on, without collecting and retaining corporate knowledge and intelligence from that engagement, means we're doing ourselves (and each other) a massive disservice.

Chad's tweet pointed to a TrustWave blog post that's just over four years old at this point, about webshell code being hidden in image files.  I've seen this myself on engagements, and by sharing it with others on my team, we were able to upgrade our detection capabilities, and bake those findings right back into the tools that everyone used (and still use).  Also, and it kind of goes without saying, sharing it in the public blog post informs others in the community, and provides a mechanism for them to upgrade their knowledge and detection capabilities, as well, without having to have experienced or seen the issue themselves.

Malware Research
I've always been really interested in write-ups of analysis of malware, in part, because I try to pull out the intel I can use.  I don't have much use for listings of assembly language code...I'm not sure how that helps folks hunt for, find, respond to, or understand the risk associated with the malware.  Also, it doesn't really illustrate how the malware is actually used.

I recently ran across this write-up of PlugX, a backdoor that I'm somewhat familiar with.  It's helpful, particularly the section on persistence.  In this case, the backdoor takes advantage of the DLL search order hijacking vulnerability by dropping a legit signed executable (avp.exe) in the same folder as the malicious DLL.  I've seen this sort of thing before...in other instances, a Symantec file was used.

Something I've wondered when folks have shared this finding is, what is the AV solution used at the client site?  Was 'avp.exe' used because Kaspersky was the AV solution employed by the client/target?  The same question applies regardless of which legitimate executable file was used.

Tools
The folks over at Kaspersky Labs posted a tool for parsing Windows Event Log files on their GitHub repository.  I downloaded the Windows x64 binary, and found that apparently, the tool has no available syntax information, as it takes only a file name as an argument.  The output is sent to the console, as illustrated in the example below:

Record #244 2016.05.12-00:22:08 'Computer':'slimshady', 'Channel':'Application', 'EventSourceName':'Software Protection Platform Service', 'Guid':'{E23B33B0-C8C9-472C-A5F9-F2BDFEA0F156}', 'Name':'Microsoft-Windows-Security-SPP', 'xmlns':'http://schemas.microsoft.com/win/2004/08/events/event', 'Level':00, 'Opcode':00, 'Task':0000, 'EventID':1009, 'Qualifiers':16384, 'Keywords':0080000000000000, 'SystemTime':2016.05.12-00:22:08, 'ProcessID':00000000, 'ThreadID':00000000, 'EventRecordID':0000000000000244, 'Version':00, 'Data':'...//0081[004A]',

As you can see, this output is searchable, or 'grepable' as the *nixphiles like to say.  It's a good tool for parsing individual *.evtx files, or you can include it in a batch file.
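For example, to pull just the Software Protection Platform events (like the record shown above) out of a parsed log file, you might do something like the following; note that the binary name here is a placeholder, so substitute whatever the executable from the release you downloaded is actually called:

REM "evtx_parser.exe" is a placeholder name for the Kaspersky tool binary
evtx_parser.exe Application.evtx | find "'EventID':1009" > spp_events.txt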

I ran across WHIDS (Windows host IDS) on Twitter the other day...seems like a great idea, allowing for rules/filters to be built on top of Sysmon, on workstations or event collectors.

On 10 Feb 2018, I found this pretty fascinating blog post regarding an MS tool named "vshadow".  This is some great info about how the tool could be used.  I blogged about this tool myself back in 2016, in a post that was based in part on a Carbon Black blog post from the previous summer.  If you're an analyst or threat hunter, or even a pen tester, you should most definitely take a look at all of this reading.  The Cb blog in particular describes some pretty interesting uses of the tool that have been observed in the wild, and it's pretty crazy stuff.

On Writing Books
As the writing of my most recent book winds down and chapters (and the ancillary materials, front and back matter) are sent to the publisher for review, I feel less of the sense of urgency that I need to be working on the book, and my thoughts get freed up to think about things like...writing books.

With respect to the current book, there're always the thoughts, "...did I say enough..", and "...was that meaningful?  Did it make sense?  Will others understand it?  Will something I said have an impact on how someone performs this work?"  Even with the great work done and assistance provided by Mari, my tech editor, those questions persist.

The simple fact that I've had to make peace with is that no matter how much work I put into the book, it's never going to be "perfect"; it's not going to be everything to everyone.  And that's cool.  Really.  I've seen over the years that no matter how much I talk about the books while writing them, at some point after the book is published, I'm going to get the inevitable, "...did you cover...", "...did you talk about...", and "...why didn't you say anything about..." questions.  I get it.  It's all good.

Monday, January 29, 2018

Tools

I've been thinking about some of the tools I use in my work a good deal lately, and looked back over the breadth of some of those that I've used.  I've even thought a good deal about the book that I helped Cory write (my part was rather small).  I think that what I find most interesting about this is that when I take a good look at the tools, I realize that it's not so much about the tools, as much as it is about the workflow.

I really like tools like Yara and RegRipper not just because they're relatively easy to use, but because they provide a great platform for documenting, retaining, and sharing corporate knowledge, as well as intelligence gleaned from previous cases, or other sources...I wrote the logonstats.pl plugin after seeing a Tweet.  The only shortcoming is that analysts actually have to either write the rules and plugins themselves, or seek assistance from someone with more experience, in order to get this sort of thing documented in a usable form that can be deployed to and shared with others.

The folks at H-11 Digital Forensics Services posted their list of the top 30 open source digital forensics tools.  There are some interesting things about the list, such as that Autopsy and TSK both made it as separate entries.  But I wanted to be sure to share the link, as each tool includes a brief description of how they've found it useful, which isn't something that you see very often.

While the list is limited to 30 tools, there is some duplication, which can be viewed as a good thing...it's very often a good idea to have multiple tools capable of doing the same or similar work, although I'd add, doing that work in a different way.  You don't want multiple tools that all do the same thing the same way; for example, Microsoft's Event Viewer and LogParser tools both use the MS API, so it's likely that for basic parsing, they're both going to do it the same way, and produce very similar results.  A better idea would be to use something like LogParser and EVTXtract.

As a bit of a side note, we know that LogParser utilizes the MS API, because of how it operates on different versions of Windows; that is, XP/2003 vs Vista+.

Just to be clear...having and using multiple tools is a great idea, but having multiple tools that all do the same thing the same way, maybe not so much.  It's still up to the individual analyst.

Speaking of tools, or more appropriately, lists of tools, be sure to stop by the dfir.training tools page and check out what's there. 

Tools+
bits_parser - Python script (install it into your installation using 'pip') for parsing the BITS database(s).  Interestingly enough, within the DFIR community as a whole, it's not really clear to me whether BITS is viewed as an infiltration (download) or exfiltration (getting data off of a system) method.  However, if someone has admin access to a system, it's not that hard to put together a command line and use this native tool (i.e., "living off the land") to get data off of the system.  Over on Twitter, Dan has some caveats for installing and running this tool; hint: it requires Python3.

vss_carver - requires Python 3.6+ or Python 2.7+; description states that it "carves and recreates VSS catalog and store from Windows disk image."  That's pretty cool...to be able to not only get historical data from VSCs (which I've found to be extremely valuable a number of times...), but now we can carve for and access deleted VSCs, extending our reach into the past for that system.  Pretty nice.

YARP - Yet Another Registry Parser, written in Python3.  One of the great things about the YARP site on GitHub is that test hives are available.

danzek's notes on UAC Virtualization - this is a great idea; so few within the #DFIR community keep notes (Note: in my first iteration of this post, I had included, "...on things like this...", but it occurred to me that few, if any, #DFIR folks keep notes at all...)

RegRipper
Just the other day, I saw that Jason had mentioned a couple of Registry values that seemed to be of interest, so I wrote up a new RegRipper plugin and uploaded it to the repository.

Something I've noticed over the years is that as part of a conversation with someone in the #DFIR community, I'll mention RegRipper, and someone will say, "...I've been using RegRipper for years!"  Great...but using, how?  Most often I find out that it's via the GUI, and little else.  There's no real powerful use of the tool; someone may use the command line, but there is very little use of profiles.  And the vast majority of folks seem to run the plugins 'as is'...that is, they use only what comes with the download.
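For reference, getting beyond the GUI doesn't take much; running the CLI version of the tool against a hive with a profile is a one-liner, something like the following (the hive path is illustrative, and this assumes a profile named "system" exists in your plugins folder):

rip.pl -r F:\case\config\SYSTEM -f system > system_report.txt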

For the sake of transparency, someone did recently suggest (and make) updates to several plugins, and I also assisted in updating the shellbags.pl plugin after someone asking for assistance provided the test data that I needed (I needed it to troubleshoot and fix the plugin).

I've written a LOT of documentation for RegRipper and how to use it.  I've written books, as well as blog posts, and I've also addressed support issues.  I say this because I was recently having a conversation with another analyst on the topic of what RegRipper offers that another tool (the one they were using) doesn't.  The conversation wasn't about, "...why is tool X better than tool Y...", but rather it was about "...what are the gaps that RegRipper fills over this other tool?"  To be honest, the commercial tool in question does, IMHO, a great job of presenting some parsed Registry items to the analyst, providing a layer of abstraction over the binary data itself, but the data that was parsed was based on what had been designed into the application.  As the application isn't just a Registry viewer, what the tool parses and presents to the analyst are commonly-sought, 'big ticket' items.  As is often the case, I heard, "...I've been using RegRipper for years..." but as it turned out, the usage didn't extend beyond running the GUI.  That's fine, if that's your workflow and that's what works for you...but that also means that you're likely missing out on some of the truly powerful aspects of RegRipper.

Something else that occurred to me was that, in attempting to verify the data parsed between two tools, some analysts didn't know where to go to find the data in the RegRipper output.  This goes back to something I've said time and time again, particularly when answering questions from other analysts...it may not seem like it, but the version of Windows you're dealing with IS important, as it not only helps you recognize what data is available and where it's located, but it also helps you recognize what other data may be available to verify findings, or fill gaps in analysis.

Tools like RegRipper (and Yara, and many of the other tools mentioned above) can be extremely powerful, surgical tools.  I've converted a number of plugins to sending their output to TLN format, for inclusion in my timeline creation and analysis workflow.  Running a limited number of the plugins against the appropriate hives, or even just one plugin, can give me a "sniper's eye" view of the situation that I couldn't get through any other means...either because a full, complete timeline simply has too much data (I will miss the tree while looking at the forest), or because other means of creating a timeline take longer.
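As a quick sketch of what that looks like in practice (the hive path and plugin name here are just examples...pick whichever *_tln plugin suits the question you're trying to answer), a single plugin can be run against a hive and its output appended to an events file:

rip.pl -r F:\case\NTUSER.DAT -p userassist_tln >> events.txt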

Friday, January 19, 2018

Stuff

What's old is new
Some discussions I've been part of (IRL and online, recently and a while ago) have been about what it takes to get started in the DFIR field, and one of the activities I've recommended and pushed, over certifications, is running one's own experiments and blogging about the findings and experience.  The immediate push-back from many on that topic is often with respect to content, and my response is that I'm not looking for something new and innovative; I'm more interested in how well you express yourself.

That's right, things we blog about don't have to be new or innovative.  Not long ago, Richard Davis tweeted that he'd put together a video explaining shellbag forensics.  And that's great...it's a good thing that we're talking about these things.  Some of these topics...shellbags, ShimCache, etc...are poorly understood and need to be discussed regularly, because not everyone who needs to know this stuff is going to see it when they need it.  I have blog posts going back almost seven and a half years on the topic of shellbags that are still valid today.  Taking that even deeper, I also have blog posts going back almost as far about shell items, the data blobs that make up shellbags and LNK files (and by extension, JumpLists), and that provide the building blocks of a number of other important evidentiary items found in the Windows Registry, such as RecentDocs and ComDlg32 values.

My point is that DFIR is a growing field, and many of the available pipelines into the industry don't provide complete enough coverage of a lot of topics.  As such, it's incumbent upon analysts to keep up on things themselves, something that can be done through mentoring and self-exploration.  A great way to get "into" the industry is to pick a topic or area, and start blogging your own experience and findings.  Develop your ability to communicate in a clear and concise manner.

It's NOT just for the military
Over the two decades that I've been working in the cybersecurity field, there've been a great many times where I've seen or heard something that has sparked a memory from my time on active duty.  When I was fresh out of the military, I initially found a lot of folks, particularly in the private sector, who were very reluctant to hear about anything that had to do with the military.  If I, or someone else, started a conversation with, "...in the military...", the folks on the other side of the table would cut us off and state, "...this isn't the military, that won't work here."

However, over time, I began to see that not only would what we were talking about definitely work, but sometimes, folks would talk about "military things" as if they were doing them, even though nothing was actually being applied.  Not at all.  It was just something they were saying to sound cool.

"Defense-in-depth" is something near and dear to my heart, because throughout my time in the military, it was something that was on the forefront of my mind from pretty much the first day of training.  Regardless of location...terrain model or the woods of Quantico...or the size of the unit...squad, platoon, company...we were always pushed to consider things like channelization and defense-in-depth.  We were pushed to recognize and use the terrain, and what we had available.  The basic idea was to have layers to the defense that slowed down, stopped, or drove the enemy in the direction you wanted them to go.

The same thing can be applied to a network infrastructure.  Reduce your attack surface by making sure, for example, that the DNS server is only providing DNS services, and not RDP and a web server, as well.  Don't make it easy for the bad guy, and don't leave "low hanging fruit" lying around in easy reach.
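A quick spot-check along these lines doesn't require anything beyond what ships with Windows; for example, on the server itself, something like the following will show what's actually listening (add the -b switch, from an elevated prompt, if you also want the name of the executable behind each port):

netstat -ano | find "LISTENING"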

A great deal of what the military does in the real world can be easily transitioned to the cyber world, and over the years that I've been working in this space, I have seen/heard folks say that "defense-in-depth has failed"...yet, I've never seen it actually employed.  Things like the use of two-factor authentication, segmentation, and role-based access can make it such that a bad guy is going to be really noisy in their attempts to compromise your network...so put something in place that will "hear" them (i.e., EDR, monitoring). 

Not a military example, but did you see the first "Mission: Impossible" movie?  Remember the scene where Tom Cruise's character made it back to the safe house, and when he got to the top of the stairs, took the light bulb out of the socket and crushed it in his jacket?  He then spread the shards out on the now-darkened hallway floor, as he backed toward his room.  This is a really good example for network defense, as he made the one pathway to the room more difficult to navigate, particularly in a stealthy manner.  If you kept watching, you'll see that he was awakened by someone stepping on a shard from the broken light bulb, alerting him to their presence.

Compartmentalization and segmentation are other things that security pros talk about often; if someone from HR has no need whatsoever to access information in, say, engineering or finance, there should be controls in place, but more importantly, why should they be able to access it at all?  I've seen a lot of what I call "bolt-on M&As", where a merger and acquisition takes place and fat pipes with no controls are used to connect the two organizations.  What was once two small, flat networks is now one big, flat network, where someone in marketing from company A can access all of the manufacturing docs in company B.

The US Navy understands compartmentalization very well; this is why the bulkheads on Navy ships go all the way to the ceiling.  In the case of a catastrophic failure, where flooding occurs, sections of the ship can be shut off from access to others.  Consider the fate of the USS Cole versus that of the Titanic. 'Nuff said!

Sometimes, the military examples strike too close to home.  I've been reading Ben MacIntyre's Rogue Heroes, a history of the British SAS.  In the book, the author describes the preparation for a raid on the port of Benghazi, and that while practicing for the raid in a British-held port, a sentry noticed some suspicious activity, only to be informed, in quite colorful language, to mind his own business.  And he did.  According to the author, this was later repeated at the target port on the night of the raid...a sentry aboard a ship noticed something going on and inquired, only to be informed (again, in very colorful language) that he should mind his own business.  I've seen a number of incidents where this very example has applied...in fact, I've seen it many times, particularly during targeted adversary investigations.  During one particular investigation, while examining several systems, I noticed activity indicative of an admin logging into the system (during regular work hours, and from the console), 'seeing' the adversary's RAT on the system, and removing it.  Okay, I get that the admin might not be familiar with the RAT and would just remove it from a system, but when they did the same thing on a second system, and then failed to inform anyone of what they'd seen or done, there's no difference between those actions and what the SAS troopers had encountered in the African desert.

Intel
I recently ran across this WPScans blog post, which discusses finding PHP and Wordpress "backdoors" using a number of methods.  I took the opportunity to download the archive linked at the end of the blog post, and ran a Yara rule file I've been maintaining across it, and got some interesting hits.

The Yara rule file I used started out as a collection of rules pulled in part from various rules found online (DarkenCode, Thor, etc.), but over time I have added rules (or modified existing ones) based on web shells I've seen on IR engagements, as well as shells others have seen and shared with me.  For those who've shared web shells with me, I've shared either rules or snippets of what could be included in rules back with them.

So, the moral of the story is that when finishing up a DFIR engagement, look for those things from that engagement that you can "bake back into" your tools (RegRipper, Yara rules, EDR filters, etc.) and analysis processes.  This is particularly valuable if you're working as part of a team, because the entire team benefits from the experience of one analyst.

Additional Resources:
DFIR.IT - Webshells

Updates
I made some updates recently to a couple of tools...

I got some interesting information from a RegRipper user who'd had an issue with the shellbags.pl plugin on Windows 10.  Thanks to their providing sample data to work with, I was (finally) able to dig into the data and figure out how to address the issue in the code.

I also updated the clsid.pl and assoc.pl plugins, based on input from a user.  It's funny, because at least one of those plugins hadn't been updated in a decade.

I added an additional event mapping to the eventmap.txt file, not due to any new artifacts I'd seen, but as a result of some research I'd done into errors generated by the TaskScheduler service, particularly as they related to backward compatibility.

All updates were sync'd with their respective repositories.

Saturday, January 06, 2018

WindowsIR 2018: First Steps

Ransomware
I haven't mentioned ransomware in a while, but I ran across something that really stood out to me, in part because we had a couple of things we don't usually see in the cybersecurity arena through the media...numbers. 

Okay, first, there's this article from GovTech that describes an issue with ransomware encountered by Erie County Medical Center in April, 2017.  Apparently, in the wee hours of the morning, ransom notes started appearing on computer screens, demanding the equivalent (at the time) of $30,000USD to decrypt the files.  The hospital opted to not pay the ransom, and it reportedly took 6 weeks to recover, with a loss of $10MUSD...yes, the article said, "MILLION". 

A couple of other things from the article...the hospital reportedly took preparatory steps and "upgraded" their cybersecurity (not sure what that means, really...), and increased their insurance from $2MUSD to $10MUSD. Also, the hospital claimed that its security was now rather "advanced", to the point where the CEO of the hospital stated, “Our cybersecurity team said they would have rated us above-average before the attack.”

So, here we have our numbers, and the fact that the hospital took a look at their risks and tried to cover what they could.  I thought I'd take a bit of a look around and see what else I could find about this situation, and I ran across this Barkly article, that includes a timeline of the incident itself.  The first two bullets of that timeline are:

Roughly a week before the ransom notes appear attackers gain access to one of ECMC's servers after scanning the Internet for potential victims with port 3389 open and Remote Desktop Protocol (RDP) exposed. They then brute-force the RDP connection thanks to "a relatively easy default password."

Once inside, attackers explore the network, and potentially use Windows command-line utility PsExec to manually deploy SamSam ransomware on an undisclosed number of machines...

Ah, yes...readers of this blog know a thing or two about Samsam, or Samas, don't you?  There's this blog post that I authored, and there's this one by The Great Kevin Strickland.

Anyway, I continued looking to see if I could find any other article that included some sort of definitive information that would corroborate the Barkly article, and while I did find several articles that referred to the firm that responded to and assisted ECMC in getting back up and running, I had trouble finding anything else that specifically supported the Barkly document.  This article states that the ransom note included 'hot pink text'...some of the variants of Samas ransomware that I looked at earlier this year included the ransom note HTML in the executable itself, and used a misspelled font color...the HTML tag read "DrakRed".  I did, however, find this BuffaloNews.com article that seemed to corroborate the Barkly article.

So, a couple of lessons we can learn from this...

1.  A vulnerable web or RDP server just hanging out there on the Internet, waiting to be plucked is not "above-average" cybersecurity.

2.  An EDR solution would have paid huge dividends; the solution gets programmed into the budget cycle, and can lead to much earlier detection (or prevention) of this sort of thing.  Not only can an EDR solution detect the adversary's activities before they get to the point of deploying the ransomware, but it can also prevent processes from running, such as when the WinWord.exe process (MS Word) attempts to launch something else, such as a command prompt, PowerShell, rundll32.exe, etc. 

If you still don't think ransomware is an issue, consider what happened to Spring Hill, TN, and Mecklenberg County, NC.  The CEO of Fortalice Solutions (the cybersecurity firm that responded to assist Mecklenberg County), Theresa Payton, discussed budget issues regarding IR in this CSOOnline article; incorporating early detection into the IR plan, and by extension, the budget, will end up saving organizations from the direct and indirect costs associated with incidents.

EDR Solutions
Something to keep in mind is that your EDR solution is only as good as those who maintain it.  For example, EDR can be powerful and provide a great deal of capabilities, but it has to be monitored and maintained.  Most, if not all, EDR solutions provide the capability to add new detections, so you want to make sure that those detections are being kept up to date.  For example, detecting "weaponized" or "malicious" Word documents...those email attachments that, when opened, will launch something else...should be part of the package.  Then when something new comes along, such as the MS Word "subdoc" functionality (functionality is a 'feature' that can be abused...), you would want your EDR solution updated with the ability to detect that, as well as your infrastructure searched to determine if there are any indications of someone having already abused this feature.

RegRipper Plugin Update
The morning of 1 Jan 2018, I read a recent blog post by Adam/Hexacorn regarding an alternative means that executable image files can use to load DLLs, and given my experience with targeted threat hunting and the pervasiveness I've seen of things like DLL side loading, and adversaries taking advantage of the DLL search order for loading malicious DLLs (loaded by innocuous and often well-known EXEs), I figured it was a good bit of kit to have available...so I wrote a RegRipper plugin.  It took all of 10 min to write and test, mostly because I used an already-existing plugin (something Corey Harrell had mentioned a while back in his blog) as the basis for the new one.  I then uploaded the completed plugin to the GitHub repository.

Sunday, December 31, 2017

WindowsIR 2017

I thought that for my final post of 2017, I'd pull together some loose ends and tie off some threads from throughout the year, and to do that, I figured I'd just go through the draft blog posts I have sitting around, pull out the content, and put it all into one final blog post for the year.

Investigating Windows Systems
Writing for my next book, Investigating Windows Systems, is going well.  I've got one more chapter to get completed and in to my tech reviewer, and the final manuscript is due in April, 2018.  The writing is coming along really well, particularly given everything that's gone on this year.

IWS is a departure from my previous books, in that instead of walking through artifacts one after another, this book walks through the process of analysis, using currently available CTF and forensic challenges posted online, and specifically calling out analysis decisions along the way.  For example, at one point in the analysis, why did I opt to take this action, rather than that one?  Why did I choose to pivot on this IP address, rather than run a scan?

IWS is not a book about the basics.  From the beginning, I start from the point that whoever is reading the book knows about the MFT, understands what a "timeline" is, etc.  Images and analysis scenarios are somewhat limited, given that I'm using what's already available online, but the available images do range from Windows XP through Windows 10.  In several instances in the book, I mention creating a timeline, but do not include the exact process in the text of the book, for a couple of reasons.  One is that I've already covered the process.  The other is that this is the process I use; I don't require anyone to use the exact same process, step by step.  I will include information such as the list of commands used in online resources in support of the book, but the reader can create a timeline using any method of their choosing.  Or not.

Why Windows XP?  Well, this past summer (July, 2017), I was assisting with some analysis work for NotPetya, and WinXP and 2003 systems were involved.  Also, the book is about the analysis process, not about tools.  My goal is, in part, to illustrate the need for analysts to choose a particular tool based on their understanding of the image(s) being analyzed and the goals of the analysis.  Too many times, I've heard, "...we aren't finished because vendor tool X kept crashing, and we kept restarting it...", without that same analyst having a reasoned decision behind running that tool in the first place.

Building a Personal Brand in InfoSec
I've blogged a couple of times this year (here, and here) on the topic of "getting started" in the cyber security field.  In some cases, thoughts along these lines are initiated by a series of questions in online forums, or some other event.  I recently read this AlienVault blog post on the subject, and I thought I'd share my own thoughts with respect to the contents of the article itself.  The author makes some very good points, and I saw an opportunity to tie their comments in with my own.

- Blogging is a good way to showcase your knowledge
Most definitely.  Not only that, a blog post is a great way to showcase your ability to write a coherent sentence.  Sound harsh?  Maybe it is.  I've been in the infosec/cybersecurity industry for about 20 yrs, and my blog goes back 13+ yrs.  This isn't me bragging, just stating a fact. When I look at a blog article from someone, anyone...new to the industry, been in the industry for a while, or returning to blogging after being away for a while...the first thing that stands out in my mind isn't the content as much as it is the author's ability to communicate.  This is something that is not unique to me; I've read about this in books such as "The Articulate Executive in Action", and it's all about that initial, emotional, visceral response.

There are a LOT of things you can use as a basis for writing a blog post.  The first step is understanding that you do not have to come up with original or innovative content.  Not at all.  This is probably the single most difficult obstacle to blogging for most folks.  From my experience working on several consulting teams over two decades, it's the single most used excuse for not sharing information amongst the team itself, and I've also seen/heard it used as a reason for not blogging.

You can take a new (to you) look at a topic that's been discussed.  One of the biggest misconceptions (or maybe the biggest excuse) for folks is that they have nothing to share that's of any significance or value, and that simply is not the case at all.  Even if something you may write about has been seen before, the simple fact that others are still seeing it, or others are seeing it again after an apparent absence, is pretty significant.  Maybe others aren't seeing it due to some change in the delivery mechanism; for example, early in 2016, there was a lot of talk about ransomware, and most of the media articles indicated that ransomware is email-borne.  What if you see some ransomware that was, in fact, email-borne but bypassed protection/detection mechanisms due to some variation in the delivery?  Blah blah ransomware blah blah email blah user clicked on it blah.  Whether you know it or not, there's a significant story there.  Or, maybe I should say, whether you're willing to admit it to yourself or not, there's a significant story there.

Don't feel that you can create your own content?  You can write book reviews, or reviews of individual chapters of books.  I would caution you, however...a "book review" that consists of "...chapter 1 covers blah, chapter 2 covers blah..." is NOT a review at all.  It's the table of contents.  Even when reviewing a single chapter, there's so much to be said...what was it that the author said, or was it how they said it?  How did what you read in the pages impact you, or your daily SOP?  Too many times I've seen reviews consist of little more than a table of contents, and not be abundantly helpful.

- Conferences are an absolute goldmine for knowledge...
They definitely are, and the author makes a point of discussing what can be the real value of conferences, which is the networking aspect. 

A long time ago, I found that when I attended conference presentations, it felt like the first day of high school.  I'd confidently walk into a room, take my seat, and within a few minutes of the presenter starting to speak, start to wonder if I was in the correct room.  It often turned out that either the presentation title was a place holder, or that I had completely misinterpreted the content from the 5 words used in the title.  Okay, shame on me.  Often times, however, presenters don't dig into the detail that I would've hoped for; on the other hand, when I've presented on what something (i.e., lateral movement, etc.) looks like, I've received a great deal of feedback, and when I've attended such presentations myself, I've had a great deal of feedback (and questions) of my own.  Often a lot of the detail...what did you see, what did it look like, why did you decide to go left instead of right...comes from the networking aspect of conferences.

- Certifications are a hot topic in the tech industry, and they are HR’s best friend for screening applicants.
True, very true.  But this goes back to the blogging discussed above...I'm not so much interested in the certifications one has, as much as I'm interested in how the knowledge and skills are used and applied.  Case in point, I once worked with someone (several years ago) who went off to a week-long training course in digital forensics, and within 6 weeks of the end of the course, wrote a report to a client in which they demonstrated their lack of understanding of the MFT.  In a report.  To a client.

So, yes, certifications are viewed as providing an objective measure of an applicant's potential abilities, but the most important question to ask is, when someone is sent off to training, does anyone hold them accountable for what they learned?  When they come back from the training, do they have new tools, processes, and techniques that they then employ, and you can see the result of this in their work?  Or, is there little difference between the "before" and "after"?

On Writing
I mentioned above that I'm working on completing a book, and over the last couple of weeks/months, I've run across some excellent advice for folks who want to write books that cover 'cyber' topics.  Ed Amoroso shared some great Advice For Cyber Authors, and Brett Shavers shared a tip for those thinking of writing a DFIR book.

Fast Times at DFIR High
One of the draft posts I'd started earlier this year had been one in which I would recount the "funny" things that I'd seen go on over the past 20 years in infosec (what we used to call "cybersecurity"), and specifically within DFIR consulting work.  I'm sure that a great deal of what I could recount would ring true for many, and that many would also have their own stories, or their own twists on similar stories.  I'd had vignettes in the draft written down, things like a client calling late at night to declare an emergency and demand that someone be sent on-site before they provided any information about the incident at all, only to have someone show up and be told that they could go home (after arriving at the airport and traveling on-site, of course). Or things like a client sending password-enabled archives containing "information you'll want", but no other description...requiring that they all be opened and examined, questions asked as to their context, etc.

I opted to not continue with the blog article, and I deleted it, because it occurred to me that it wasn't quite as humorous as I would have hoped.  The simple fact is that the types of actions I experienced in, say, 2000, are still being seen by active IR consultants today.  I know because I've talked to some IR folks who've experienced them.  Further, it really came down to a "...cast the first stone..." issue; who are we (DFIR folks, responders, etc.) to chide clients for their actions (however inappropriate or misguided we may see them as being...) when our own actions are not pristine?  Largely, as a community (and I'm not including specific individuals in this...), we don't share experiences, general findings, etc., with each other.  Some do, most don't...and life goes on.

Final Thought for 2017
EDR. Endpoint visibility is going to become an even more important issue in the coming year.  Yes, I've refrained from making predictions, and I'm not making one right now...I'm simply saying that the past decade has shown the power of early detection, and the differences in outcomes when an organization understands that a breach has likely already occurred, and actively prepares for it.  Just this year, we've seen a number of very significant breaches make it into the mainstream media, with several others being mentioned but not making, say, the national news.  Even more (many, many more) breaches occur without ever being publicly announced in any manner.

Equifax is one of the breaches that we (apparently) know the most about, and given what's appeared in the public eye, EDR would've played a significant role in avoiding the issue overall.  Given what's been shared publicly, someone monitoring the systems would've seen something suspicious launched as a child process of the web server (such filters or alerts should be common place) and been able to initiate investigative actions immediately.

EDR gives visibility into endpoints, and yes, there is a great deal of data headed your way; however, there are ways to manage this volume of data.  Employ threat intelligence to alert you to those actions and events that require your attention, and have a plan in place for how to quickly investigate and address them.  The military refers to this as "immediate actions", things that you learn in a "safe" environment and that you practice in adverse environments, so that you can effectively employ them when required.  Ensure that you cover blind spots; not just OS variants, but also, what happens when the adversary moves from command line access (via a Trojan or web shell) to GUI shell (Windows Explorer) access?  If your EDR solution stops providing visibility when the adversary logs in via RDP and opens any graphical application on the desktop, you have a blind spot.

Effective use of EDR solutions can significantly reduce costs associated with the current "state of the breach"; that is, getting notified by an external third party that you've suffered a breach, weeks or months after the initial compromise occurred.  With more immediate detection and response, your requirement to notify (based on state laws, regulatory bodies, etc.) will be obviated.  Which would you rather be...the next Equifax, or the organization that, because it budgeted for a solution, can go to regulatory bodies and state, yes, we were breached, but no critical/sensitive data was accessed?