Monday, January 29, 2018


I've been thinking about some of the tools I use in my work a good deal lately, and looked back over the breadth of some of those that I've used.  I've even thought a good deal about the book that I helped Cory write (my part was rather small).  I think that what I find most interesting about this is that when I take a good look at the tools, I realize that it's not so much about the tools, as much as it is about the workflow.

I really like tools like Yara and RegRipper not just because they're relatively easy to use, but because they provide a great platform for documenting, retaining, and sharing corporate knowledge, as well as intelligence gleaned from previous cases, or other sources...I wrote one plugin after seeing a Tweet.  The only shortcoming is that analysts have to either write the rules and plugins themselves, or seek assistance from someone with more experience, in order to get this sort of thing documented in a usable form that can be deployed to and shared with others. 

The folks at H-11 Digital Forensics Services posted their list of the top 30 open source digital forensics tools.  There are some interesting things about the list, such as the fact that Autopsy and TSK both made it as separate entries.  But I wanted to be sure to share the link, as each tool includes a brief description of how they've found it useful, which isn't something that you see very often.

While the list is limited to 30 tools, there is some duplication, which can be viewed as a good thing...it's very often a good idea to have multiple tools capable of doing the same or similar work, although I'd add, doing that work in a different way.  You don't want multiple tools that all do the same thing the same way; for example, Microsoft's Event Viewer and LogParser tools both use the MS API, so it's likely that for basic parsing, they're both going to do it the same way, and produce very similar results.  A better idea would be to pair something like LogParser with EVTXtract, which recovers event records from raw data without relying on the API.

As a bit of a side note, we know that LogParser utilizes the MS API, because of how it operates on different versions of Windows; that is, XP/2003 vs Vista+.

Just to be clear...having and using multiple tools is a great idea, but having multiple tools that all do the same thing the same way, maybe not so much.  It's still up to the individual analyst.

Speaking of tools, or more appropriately, lists of tools, be sure to stop by the tools page and check out what's there. 

bits_parser - Python script (install it into your Python environment using 'pip') for parsing the BITS database(s).  Interestingly enough, within the DFIR community as a whole, it's not really clear to me whether BITS is viewed as an infiltration (download) or exfiltration (getting data off of a system) method.  However, if someone has admin access to a system, it's not that hard to put together a command line and use the native BITS client (i.e., "living off the land") to get data off of the system.  Over on Twitter, Dan has some caveats for installing and running this tool; hint: it requires Python 3.
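
As an aside, if I recall correctly, bits_parser can write its results out as CSV, which makes this kind of triage easy to do programmatically.  The sketch below is a hypothetical example of that...the column names ('job_name', 'type', 'remote_url') and the sample rows are assumptions for illustration, not the tool's documented schema:

```python
import csv
import io

# Hypothetical triage sketch over bits_parser-style CSV output: BITS supports
# both downloads (infiltration) and uploads (exfiltration), so upload jobs
# pointed at an external URL are the entries worth a second look.
def flag_upload_jobs(csv_text):
    suspicious = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get("type", "").lower() == "upload":
            suspicious.append((row["job_name"], row["remote_url"]))
    return suspicious

sample = """job_name,type,remote_url
WindowsUpdate,download,https://download.example.com/kb.cab
backup_job,upload,https://attacker.example.net/drop
"""

print(flag_upload_jobs(sample))
```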

vss_carver - requires Python 3.6+ or Python 2.7+; the description states that it "carves and recreates VSS catalog and store from Windows disk image."  That's pretty cool...not only can we get historical data from VSCs (which I've found to be extremely valuable a number of times...), but now we can carve for and access deleted VSCs, extending our reach into the past for that system.  Pretty nice.

YARP - Yet Another Registry Parser, written in Python 3.  One of the great things about the YARP site on GitHub is that test hives are available.
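
As a quick illustration of why those test hives are handy: the base block of a 'regf' hive starts with the four-byte signature, two sequence numbers, and the hive's last-written FILETIME at offset 12.  The sketch below is just that...a sketch for sanity-checking a parser against known header values, not a substitute for YARP:

```python
import datetime
import struct

def hive_last_written(header_bytes):
    # base block layout: 'regf' signature (4 bytes), two 4-byte sequence
    # numbers, then the last-written FILETIME (8 bytes) at offset 12
    if header_bytes[0:4] != b"regf":
        raise ValueError("not a registry hive")
    ft = struct.unpack_from("<Q", header_bytes, 12)[0]
    # FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC
    return datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=ft // 10)

# build a synthetic header with a known FILETIME for 2018-01-29 00:00:00
days = (datetime.datetime(2018, 1, 29) - datetime.datetime(1601, 1, 1)).days
ft_2018 = days * 86400 * 10 ** 7
hdr = b"regf" + struct.pack("<II", 1, 1) + struct.pack("<Q", ft_2018)
print(hive_last_written(hdr))  # 2018-01-29 00:00:00
```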

danzek's notes on UAC Virtualization - this is a great idea; so few within the #DFIR community keep notes (Note: in my first iteration of this post, I had included, "...on things like this...", but it occurred to me that few, if any, #DFIR folks keep notes at all...)

Just the other day, I saw that Jason had mentioned a couple of Registry values that seemed to be of interest, so I wrote a new RegRipper plugin and uploaded it to the repository.

Something I've noticed over the years is that as part of a conversation with someone in the #DFIR community, I'll mention RegRipper, and someone will say, "...I've been using RegRipper for years!"  Great...but using, how?  Most often I find out that it's via the GUI, and little else.  There's no real powerful use of the tool; someone may use the command line, but there is very little use of profiles.  And the vast majority of folks seem to run the plugins 'as is'...that is, they use only what comes with the download.
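
For those who haven't used them, a profile is nothing more than a plain-text file of plugin names, one per line, sitting alongside the plugins.  The profile name and plugin selection below are just an example:

```
compname
timezone
usbstor
```

...which you'd then run against a hive with something like 'rip.pl -r SYSTEM -f system_triage' (assuming the file above were named "system_triage"), rather than clicking through the GUI.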

For the sake of transparency, someone did recently suggest (and make) updates to several plugins, and I also assisted in updating a plugin after someone asking for assistance provided the test data that I needed to troubleshoot and fix it.

I've written a LOT of documentation for RegRipper and how to use it.  I've written books, as well as blog posts, and I've also addressed support issues.  I say this because I was recently having a conversation with another analyst on the topic of what RegRipper offers that another tool (the one they were using) doesn't.  The conversation wasn't about, "...why is tool X better than tool Y...", but rather it was about "...what are the gaps that RegRipper fills over this other tool?"  To be honest, the commercial tool in question does, IMHO, a great job of presenting some parsed Registry items to the analyst, providing a layer of abstraction over the binary data itself, but the data that was parsed was based on what had been designed into the application.  As the application isn't just a Registry viewer, what the tool parses and presents to the analyst are commonly-sought, 'big ticket' items.  As is often the case, I heard, "...I've been using RegRipper for years..." but as it turned out, the usage didn't extend beyond running the GUI.  That's fine, if that's your workflow and that's what works for you...but that also means that you're likely missing out on some of the truly powerful aspects of RegRipper.

Something else that occurred to me was that, in attempting to verify the data parsed between two tools, some analysts didn't know where to go to find the data in the RegRipper output.  This goes back to something I've said time and time again, particularly when answering questions from other analysts...it may not seem like it, but the version of Windows you're dealing with IS important, as it not only helps you recognize what data is available and where it's located, but it also helps you recognize what other data may be available to verify findings, or fill gaps in analysis.

Tools like RegRipper (and Yara, and many of the other tools mentioned above) can be extremely powerful, surgical tools.  I've converted a number of plugins to send their output to the TLN format, for inclusion in my timeline creation and analysis workflow.  Running a limited number of the plugins against the appropriate hives, or even just one plugin, can give me a "sniper's eye" view of the situation that I couldn't get through any other means...either because a full, complete timeline simply has too much data (I will miss the tree while looking at the forest), or because other means of creating a timeline take longer.
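
For reference, a TLN line is just five pipe-separated fields...time (as a Unix epoch value), source, host, user, and description.  A minimal sketch, with a made-up event:

```python
import datetime

# TLN: Time|Source|Host|User|Description, with Time as a Unix epoch value
def tln_line(dt, source, host, user, description):
    epoch = int((dt - datetime.datetime(1970, 1, 1)).total_seconds())
    return "|".join([str(epoch), source, host, user, description])

# made-up event: a Registry key last-write time emitted by a plugin
line = tln_line(datetime.datetime(2018, 1, 29, 10, 30), "REG", "CASE-PC", "",
                "Sample Registry key last-write event")
print(line)  # 1517221800|REG|CASE-PC||Sample Registry key last-write event
```

Lines in this format from multiple plugins (and other sources) can simply be concatenated and sorted into a single timeline.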

Friday, January 19, 2018


What's old is new
Some discussions I've been part of (IRL and online, recently and a while ago) have been about what it takes to get started in the DFIR field, and one of the activities I've recommended and pushed, over certifications, is running one's own experiments and blogging about the findings and experience.  The immediate push-back from many on that topic is often with respect to content, and my response is that I'm not looking for something new and innovative; I'm more interested in how well you express yourself.

That's right, things we blog about don't have to be new or innovative.  Not long ago, Richard Davis tweeted that he'd put together a video explaining shellbag forensics.  And that's a good thing...we should be talking about these things.  Some of these topics...shellbags, ShimCache, etc...are poorly understood and need to be discussed regularly, because not everyone who needs to know this stuff is going to see it when they need it.  I have blog posts going back almost seven and a half years on the topic of shellbags that are still valid today.  Taking that even deeper, I also have blog posts going back almost as far about shell items, the data blobs that make up shellbags and LNK files (and by extension, JumpLists), and that provide the building blocks of a number of other important evidentiary items found in the Windows Registry, such as RecentDocs and ComDlg32 values.

My point is that DFIR is a growing field, and many of the available pipelines into the industry don't provide complete enough coverage of a lot of topics.  As such, it's incumbent upon analysts to keep up on things themselves, something that can be done through mentoring and self-exploration.  A great way to get "into" the industry is to pick a topic or area, and start blogging your own experience and findings.  Develop your ability to communicate in a clear and concise manner.

It's NOT just for the military
Over the two decades that I've been working in the cybersecurity field, there've been a great many times where I've seen or heard something that has sparked a memory from my time on active duty.  When I was fresh out of the military, I initially found a lot of folks, particularly in the private sector, who were very reticent to hear about anything that had to do with the military.  If I, or someone else, started a conversation with, " the military...", the folks on the other side of the table would cut us off and state, "...this isn't the military, that won't work here."

However, over time, I began to see that not only would what we were talking about definitely work, but sometimes, folks would talk about "military things" as if they were doing them, when in fact those things weren't being applied.  Not at all.  It was just something they were saying to sound cool.

"Defense-in-depth" is something near and dear to my heart, because throughout my time in the military, it was something that was on the forefront of my mind from pretty much the first day of training.  Regardless of location...terrain model or the woods of Quantico...or the size of the unit...squad, platoon, company...we were always pushed to consider things like channelization and defense-in-depth.  We were pushed to recognize and use the terrain, and what we had available.  The basic idea was to have layers to the defense that slowed down, stopped, or drove the enemy in the direction you wanted them to go.

The same thing can be applied to a network infrastructure.  Reduce your attack surface by making sure, for example, that the DNS server is only providing DNS services, not RDP and a web server as well.  Don't make it easy for the bad guy, and don't leave "low hanging fruit" lying around within easy reach. 

A great deal of what the military does in the real world can be easily transitioned to the cyber world, and over the years that I've been working in this space, I have seen/heard folks say that "defense-in-depth has failed"...yet, I've never seen it actually employed.  Things like the use of two-factor authentication, segmentation, and role-based access can make it such that a bad guy is going to be really noisy in their attempts to compromise your infrastructure...provided you put something in place that will "hear" them (i.e., EDR, monitoring). 

Not a military example, but did you see the first "Mission: Impossible" movie?  Remember the scene where Tom Cruise's character made it back to the safe house, and when he got to the top of the stairs, took the light bulb out of the socket and crushed it in his jacket?  He then spread the shards out on the now-darkened hallway floor, as he backed toward his room.  This is a really good example for network defense, as he made the one pathway to the room more difficult to navigate, particularly in a stealthy manner.  If you keep watching, you'll see that he was later awakened by someone stepping on a shard from the broken light bulb, alerting him to their presence.

Compartmentalization and segmentation are other things that security pros talk about often; if someone from HR has no need whatsoever to access information in, say, engineering or finance, there should be controls in place, but more importantly, why should they be able to access it at all?  I've seen a lot of what I call "bolt-on M&As", where a merger and acquisition takes place and fat pipes with no controls are used to connect the two organizations.  What was once two small, flat networks is now one big, flat network, where someone in marketing from company A can access all of the manufacturing docs in company B.

The US Navy understands compartmentalization very well; this is why the bulkheads on Navy ships go all the way to the ceiling.  In the case of a catastrophic failure, where flooding occurs, sections of the ship can be shut off from access to others.  Consider the fate of the USS Cole versus that of the Titanic. 'Nuff said!

Sometimes, the military examples strike too close to home.  I've been reading Ben MacIntyre's Rogue Heroes, a history of the British SAS.  In the book, the author describes the preparation for a raid on the port of Benghazi, and that while practicing for the raid in a British-held port, a sentry noticed some suspicious activity, to which he was informed, in quite colorful language, to mind his own business.  And he did.  According to the author, this was later repeated at the target port on the night of the raid...a sentry aboard a ship noticed something going on and inquired, only to be informed (again, in very colorful language) that he should mind his own business.  I've seen a number of incidents where this very example has played fact, I've seen it many times, particularly during targeted adversary investigations.  During one particular investigation, while examining several systems, I noticed activity indicative of an admin logging into a system (during regular work hours, and from the console), 'seeing' the adversary's RAT on the system, and removing it.  Okay, I get that the admin might not be familiar with the RAT and would just remove it from a system, but when they did the same thing on a second system, and then failed to inform anyone of what they'd seen or done, there's no difference between those actions and what the SAS troopers had encountered in the African desert.

I recently ran across this WPScans blog post, which discusses finding PHP and Wordpress "backdoors" using a number of methods.  I took the opportunity to download the archive linked at the end of the blog post, and ran a Yara rule file I've been maintaining across it, and got some interesting hits.

The Yara rule file I used started out as a collection of rules pulled in part from various rules found online (DarkenCode, Thor, etc.), but over time I have added rules (or modified existing ones) based on web shells I've seen on IR engagements, as well as shells others have seen and shared with me.  For those who've shared web shells with me, I've shared either rules or snippets of what could be included in rules back with them.
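
To give a flavor of what such a rule looks like, here's a hypothetical one in the same spirit...the strings are generic PHP web shell indicators for illustration, not signatures pulled from any specific engagement:

```yara
// Hypothetical rule; generic PHP web shell indicators for illustration only
rule php_webshell_generic
{
    strings:
        $a = "eval(base64_decode(" ascii
        $b = "shell_exec($_" ascii
        $c = "passthru($_REQUEST" ascii
    condition:
        any of them
}
```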

So, the moral of the story is that when finishing up a DFIR engagement, look for those things from that engagement that you can "bake back into" your tools (RegRipper, Yara rules, EDR filters, etc.) and analysis processes.  This is particularly valuable if you're working as part of a team, because the entire team benefits from the experience of one analyst.

Additional Resources:
DFIR.IT - Webshells

I made some updates recently to a couple of tools...

I got some interesting information from a RegRipper user who'd had an issue with one of the plugins on Windows 10.  Thanks to their providing sample data to work with, I was (finally) able to dig into the data and figure out how to address the issue in the code.

I also updated a couple of other plugins, based on input from a user.  It's funny, because at least one of those plugins hadn't been updated in a decade.

I added an additional event mapping to the eventmap.txt file, not due to any new artifacts I'd seen, but as a result of some research I'd done into errors generated by the TaskScheduler service, particularly as they related to backward compatibility.
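
For anyone who hasn't looked at event mapping before, the idea is simply to translate a source/event ID pair into a human-readable tag during timeline creation.  The sketch below assumes a simple "source/eventID,tag" layout and made-up entries for illustration; it is not the literal contents of eventmap.txt:

```python
# Sketch of loading event-ID-to-tag mappings for use when building a timeline.
# The layout and the sample entries are illustrative assumptions.
def load_event_map(text):
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, tag = line.split(",", 1)
        mapping[key.lower()] = tag
    return mapping

sample = """# source/eventID,tag
Microsoft-Windows-Security-Auditing/4625,[Failed Login]
Service Control Manager/7045,[Service Installed]
"""

emap = load_event_map(sample)
print(emap["service control manager/7045"])  # [Service Installed]
```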

All updates were sync'd with their respective repositories.

Saturday, January 06, 2018

WindowsIR 2018: First Steps

I haven't mentioned ransomware in a while, but I ran across something that really stood out to me, in part because we had a couple of things we don't usually see in the cybersecurity arena through the media...numbers. 

Okay, first, there's this article from GovTech that describes an issue with ransomware encountered by Erie County Medical Center in April, 2017.  Apparently, in the wee hours of the morning, ransom notes started appearing on computer screens, demanding the equivalent (at the time) of $30,000USD to decrypt the files.  The hospital opted to not pay the ransom, and it reportedly took 6 weeks to recover, with a loss of $10MUSD...yes, the article said, "MILLION". 

A couple of other things from the article...the hospital reportedly took preparatory steps and "upgraded" their cybersecurity (not sure what that means, really...), and increased their insurance from $2MUSD to $10MUSD. Also, the hospital claimed that its security was now rather "advanced", to the point where the CEO of the hospital stated, “Our cybersecurity team said they would have rated us above-average before the attack.”

So, here we have our numbers, and the fact that the hospital took a look at their risks and tried to cover what they could.  I thought I'd take a bit of a look around and see what else I could find about this situation, and I ran across this Barkly article, which includes a timeline of the incident itself.  The first two bullets of that timeline are:

Roughly a week before the ransom notes appear attackers gain access to one of ECMC's servers after scanning the Internet for potential victims with port 3389 open and Remote Desktop Protocol (RDP) exposed. They then brute-force the RDP connection thanks to "a relatively easy default password."

Once inside, attackers explore the network, and potentially use Windows command-line utility PsExec to manually deploy SamSam ransomware on an undisclosed number of machines...

Ah, yes...readers of this blog know a thing or two about Samsam, or Samas, don't you?  There's this blog post that I authored, and there's this one by The Great Kevin Strickland.

Anyway, I continued looking to see if I could find any other article that included some sort of definitive information that would corroborate the Barkly article, and while I did find several articles that referred to the firm that responded to and assisted ECMC in getting back up and running, I had trouble finding anything else that specifically supported the Barkly document.  This article states that the ransom note included 'hot pink text'...some of the variants of Samas ransomware that I looked at earlier this year included the ransom note HTML in the executable itself, and used a misspelled font color...the HTML tag read "DrakRed".  I did, however, find this article that seemed to corroborate the Barkly article.

So, a couple of lessons we can learn from this...

1.  A vulnerable web or RDP server just hanging out there on the Internet, waiting to be plucked, is not "above-average" cybersecurity.

2.  An EDR solution would have paid huge dividends; the solution gets programmed into the budget cycle, and can lead to much earlier detection (or prevention) of this sort of thing.  Not only can an EDR solution detect the adversary's activities before they get to the point of deploying the ransomware, but it can also prevent processes from running, such as when the WinWord.exe process (MS Word) attempts to launch something else, such as a command prompt, PowerShell, rundll32.exe, etc. 
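
A minimal sketch of that parent/child check...the process pairs below are illustrative, not a complete detection set from any particular EDR product:

```python
# Flag Word spawning a shell or LOLbin; a normal user launching cmd.exe
# from Explorer should not trip this check.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "rundll32.exe", "wscript.exe"}

def is_suspicious(parent, child):
    return parent.lower() == "winword.exe" and child.lower() in SUSPICIOUS_CHILDREN

print(is_suspicious("WINWORD.EXE", "powershell.exe"))  # True
print(is_suspicious("explorer.exe", "cmd.exe"))        # False
```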

If you still don't think ransomware is an issue, consider what happened to Spring Hill, TN, and Mecklenburg County, NC.  The CEO of Fortalice Solutions (the cybersecurity firm that responded to assist Mecklenburg County), Theresa Payton, discussed budget issues regarding IR in this CSOOnline article; incorporating early detection into the IR plan, and by extension, the budget, will end up saving organizations from the direct and indirect costs associated with incidents.

EDR Solutions
Something to keep in mind is that your EDR solution is only as good as those who maintain it.  For example, EDR can be powerful and provide a great deal of capabilities, but it has to be monitored and maintained.  Most, if not all, EDR solutions provide the capability to add new detections, so you want to make sure that those detections are being kept up to date.  For example, detecting "weaponized" or "malicious" Word documents...those email attachments that, when opened, will launch something else...should be part of the package.  Then when something new comes along, such as the MS Word "subdoc" functionality (functionality is a 'feature' that can be abused...), you would want your EDR solution updated with the ability to detect that, as well as your infrastructure searched to determine if there are any indications of someone having already abused this feature.

RegRipper Plugin Update
The morning of 1 Jan 2018, I read a recent blog post by Adam/Hexacorn regarding an alternative means that executable image files can use to load DLLs, and given my experience with targeted threat hunting and the pervasiveness I've seen of things like DLL side loading, and adversaries taking advantage of the DLL search order for loading malicious DLLs (loaded by innocuous and often well-known EXEs), I figured it was a good bit of kit to have a check I wrote a RegRipper plugin.  It took all of 10 min to write and test, mostly because I used an already-existing plugin (something Corey Harrell had mentioned a while back in his blog) as the basis for the new one.  I then uploaded the completed plugin to the GitHub repository.
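
As a side note on why the DLL search order matters so much here: the loader walks a sequence of directories and loads the first match, so a writable directory that's searched before the legitimate one is all an adversary needs.  The sketch below deliberately simplifies the real Windows search order (which has more entries and SafeDllSearchMode variations) just to illustrate the "first match wins" behavior:

```python
import os
import tempfile

# Return the first path where the named DLL is found, in search order;
# this mirrors the "first match wins" behavior that enables side loading.
def first_match(dll_name, search_dirs):
    for d in search_dirs:
        candidate = os.path.join(d, dll_name)
        if os.path.exists(candidate):
            return candidate
    return None

with tempfile.TemporaryDirectory() as app_dir, tempfile.TemporaryDirectory() as sys_dir:
    # legitimate copy in the "system" directory...
    open(os.path.join(sys_dir, "helper.dll"), "w").close()
    # ...but a planted copy in the application directory is found first
    open(os.path.join(app_dir, "helper.dll"), "w").close()
    hit = first_match("helper.dll", [app_dir, sys_dir])
    print(hit.startswith(app_dir))  # True
```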