
Friday, June 29, 2018

Updates

I just pushed three new plugins up to the RegRipper plugin repository, two written by Gabriele Zambelli (photos_win10.pl, msedge_win10.pl), and I wrote source_os.pl, to address the "When Windows Lies" issue that Mari brought up last year.

Addendum, 30 June: Pushed a new plugin from M. Godfrey, named "imgburn1.pl", to the repo this afternoon. Many thanks to Michael (and Gabriele) for writing and submitting plugins!

Addendum, 2 July: Thanks to input and test data from Mitch Impey, I was able to quickly update not only shellbags.pl and shellbags_tln.pl, but also comdlg32.pl.  Providing sample/test data makes troubleshooting and updating current plugins, or creating new ones, a much quicker and more efficient process.  Thanks, Mitch!

Addendum, 5 July: Many thanks to Micah Jones for sharing the dafupnp.pl plugin he created!  This is a plugin that pulls information about media streaming devices from the System hive.  Thanks, Micah, for the great work and for sharing the plugin.  Also, based on the data that Micah shared, I updated the bthport.pl plugin.

I also added bthport_tln.pl to the repository.  This will help in performing timeline analysis for such things as data exfil via Bluetooth devices.

Addendum, 28 July: I committed two new plugins from Micah Jones to the repo this morning, thunderbirdinstalled.pl and mzthunderbird.pl.  Thanks, Micah!

Thursday, June 21, 2018

More about EDR/MDR

On the heels of my previous post on the topic, it seems to me that there is still something of a misconception within the market as a whole as to the value of EDR/MDR.

During conversations on the EDR/MDR topic, one of the questions we hear quite often is, "Do you detect X?", with "X" being some newly-identified virus or exploit.  I think the reason we hear this question so often is that EDR/MDR is still somewhat new in the minds of the marketplace, and most organizations are still trying to visualize and really understand the concept, and how it applies to them.  The way we most often make sense of something new is to find something we're already familiar with and build a comparison from there.  Unfortunately, starting with AV as the comparison doesn't really get us where we need to be, in part because EDR/MDR is a whole new way of looking at things.

And believe me, I do understand how difficult it is in today's marketplace to really understand what's out there.  A while back, I was in a sales presentation, and a sales rep was going through the slide pack they'd been provided by marketing; at one point, the client just came right out and said, "...yes, everyone says that, every other vendor we've spoken to has said the same exact thing.  How are you different?"  They were asking this question because they were trying to get beyond the initial fluff and down to, okay, how is this vendor's product better for us and our needs?  Now, the folks we were talking to were actually very mature, in the sense that they had already identified their EDR needs and had been working with an EDR product on a subset of their infrastructure, so they were essentially trying to make an apples-to-apples comparison of products.

As recently as last week, I was encouraged via a marketing email to sign up for a webinar, and one of the inducement statements in the corresponding blog post (linked in the email) was along the lines of, "...you should be detecting ransomware the moment it reaches out to the C2 server."

Hhhhmmm...but what if we could detect the issue before that point?  What if you could detect the initial behaviors that lead to a malware infection, and block the activity at that point, rather than waiting to try to detect the malware itself?  If we focused our efforts earlier in the attack cycle, we could impose greater pain on the attacker, raising the level of effort required on their part, and forcing them to change their tactics.

So what do I mean by this?  Every day, thousands (millions?) of employees log into their corporate computer systems, launch Outlook, and read their email.  Part of this may involve opening attachments, including MSWord documents and Excel spreadsheets.  As such, it's normal and expected to see a process tree that goes from the user logging in (Windows Explorer) to launching Outlook, with whichever application opens the attachment appearing as a child process of Outlook.  It looks something like this:

explorer.exe
    |__Outlook.exe
              |__winword.exe

Now, when a user is sent a phishing email with a weaponized attachment, the user may be prompted to click the "Enable Content" button in MSWord in order to allow any embedded macros to run; once this happens, we generally see something like the command prompt (cmd.exe), or something else (PowerShell, etc.), running as a child process of MSWord.  This is (a) never a good thing, (b) easy to detect, and (c) equally trivial to block.  Here's what that might look like:

explorer.exe
    |__Outlook.exe
              |__winword.exe
                        |__cmd.exe /c powershell.exe...
                                |__powershell.exe...

Here's a good example, shared by Phil Burdette some time ago, albeit without the Outlook component. 

The point is that this behavior can be detected and alerted on, and action can then be taken at the point where winword.exe spawns a command shell as a subprocess.  If we block the process from executing at that point, then we never even get to the point where PowerShell or WScript or BITSAdmin or whatever is used to download the malicious payload to the compromised system.  With the right instrumentation, you may also be able to isolate the system on the network after having blocked the malicious behavior.  That way, instead of sending someone a ticket telling them they have to respond, and having them get to it at some point in the future (even a few minutes is enough for an adversary to burrow their way into your infrastructure), all access is immediately disabled.
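
To make this concrete, here's a minimal sketch of what such a parent/child check might look like.  It's written in Python against hypothetical, already-parsed process creation events (e.g., from Sysmon Event ID 1); the event shape and field names are my own assumptions for illustration, not any particular product's schema.

# Minimal sketch: flag an Office application spawning a shell, given
# process creation events parsed into dicts.  Field names are assumed.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SUSPECT_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe",
                    "cscript.exe", "bitsadmin.exe"}

def basename(path):
    # Reduce a full image path to just the executable name
    return path.lower().rsplit("\\", 1)[-1]

def is_suspect(event):
    return (basename(event["parent_image"]) in OFFICE_PARENTS and
            basename(event["image"]) in SUSPECT_CHILDREN)

# Hypothetical event, shaped like the process tree above
evt = {"parent_image": "C:\\Program Files\\Microsoft Office\\winword.exe",
       "image": "C:\\Windows\\System32\\cmd.exe"}
if is_suspect(evt):
    print("ALERT: Office application spawned a shell...block/isolate here")

In a real deployment, that last branch is where the blocking and network isolation described above would hang.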

This is where MDR solutions come into play.  With the right endpoint technology in place, an MDR provider not only monitors what's going on, but is able to take threat intelligence developed from monitoring their entire client base and apply it across all monitored clients.  This means that every client benefits from what was gleaned and developed from any one client, and they benefit immediately.

What this ultimately leads to is much earlier detection of malicious activity, and a much quicker response to that activity, such that reporting is obviated.  After all, if you're able to detect and respond to an adversary much earlier in their attack cycle, and you're able to cycle through the OODA loop faster than the adversary, then you're able to prevent access to any "sensitive data".  If that sensitive data isn't accessed, then you don't have to report.  Also, you've got a record of activity that demonstrates that you were able to respond to, contain, and eradicate the adversary.

Monday, June 18, 2018

Updates

I've had a bunch of draft blog posts sitting around, and for some of them, particularly the really technical ones, I thought, "...ah, no one's gonna read this...", so I opted for another medium, or I simply deleted them.  However, I decided to throw a couple of them in together, into one post, just so I could complete my thoughts and get them out there, and do a bit of housekeeping with respect to my blog drafts.

Publishing
Brett Shavers recently posted some interesting content on the topic of publishing in DFIR.  While DFIR doesn't necessarily follow the "publish or perish" adage most often seen in academia, Brett does have some very good points.

To Brett's point about the "lack of contributors", this is the reason why efforts like the Into The Boxes project never really took off...you can't sustain a community-based project if folks within the community aren't willing to contribute.

By the way, I still have my inaugural hard copy of "Into The Boxes #1", knowing full well that one day it's going to be part of my DFIR museum, right alongside my MS-DOS install diskettes, my original Windows XP install CDs, and an AOL installation CD!

Something else that Brett mentioned in his post was "peer review".  Oddly enough, I've been engaged in a conversation with someone else in the DFIR community about this very topic, particularly as it applies to analysis findings.  In my experience, peer reviews within the community are spotty, at best.  We all know that team member we can go to because they'll turn around a 40-page report draft in 15 minutes, saying only that it "looks good."  And we also know that team member we can go to and rely on to catch stuff we may have missed.

Blog-a-Day Challenge
Speaking of publishing, it looks as if, over the last couple of weeks, a number of folks have decided to take the Zeltser Challenge; that is, to post a blog a day for an entire year.  Not only has David Cowen picked it back up (he's done it before...), but others have joined the party, as well.  For example, Stacey has started The Knowledge Bean and decided to partake in the challenge.  Tony at Archer Forensics has picked up the mantle, as well, and is off and running.

If anyone else in DFIR is doing something like this...blog-a-day, or even blog-a-week, let me know...I'd like to check it out.

Oh, and tying this back to the previous topic, something Tony mentioned in one of the blog posts:

I would solicit everyone to do that deeper dive research to further the field.

Agreed, 1000%. 

New Plugins
Well, let's see...what's new on the plugin front?  I recently received one updated plugin (lastloggedon.pl) and a new plugin (utorrent.pl) from Michael Godfrey (author information is included in the plugins), and uploaded them to the repository.  I also added jumplistdata.pl, which is based on this finding from Costas K.  Finally, I created execpolicy.pl, a plugin to check the PowerShell ExecutionPolicy value, if it exists...some folks have said that adversaries have been observed modifying this Registry value so that they can bypass the execution policy and run their code, albeit not from the command line.
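
For anyone curious what that check amounts to, here's a rough sketch of the same lookup in Python, using Willi Ballenthin's python-registry module against an exported Software hive...a stand-in for illustration, not the plugin itself:

# Sketch: read the system-wide PowerShell ExecutionPolicy value from an
# exported Software hive, using the python-registry module
from Registry import Registry

reg = Registry.Registry("SOFTWARE")  # path to the extracted hive file
try:
    key = reg.open("Microsoft\\PowerShell\\1\\ShellIds\\Microsoft.PowerShell")
    print("ExecutionPolicy =", key.value("ExecutionPolicy").value())
except (Registry.RegistryKeyNotFoundException,
        Registry.RegistryValueNotFoundException):
    print("Value not found...it may simply never have been set")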

Not a new plugin, but I also updated the license for RegRipper to the MIT license, at the request of some folks at AWS.

Tool Usage
Mari published a blog article recently, which I thought was great...not just because Mari's great at sharing information in an easy-to-understand and repeatable manner, but more so due to how her post discussed tool usage.  There are any number of DFIR sites out there, web sites and blogs alike, that are full of tool listings...here's this tool, and here's this tool.  Great.  It's like shop class, with all of the tools hanging on the peg board, on their specific hooks, over their specific silhouettes.

The question folks should be asking is: great, but how can I best employ those tools to meet the goals of my investigation in a complete, thorough, accurate, and timely manner?

If you're looking for something to blog about...there you go.  There are potentially hundreds or thousands of blog posts based on just that single theme, that one topic.

Sunday, June 17, 2018

Coding in DFIR

A while back, I wrote a blog post where I discussed writing my own tools, and why I do it.  I have publicly shared the tools I've written, and I was asked to share my thoughts as to why I do it, and how it's "of value" to me, and more importantly, to my workflow.  I generally find it very helpful to have tools that fit into and support the workflow I'm using (i.e., timelines), and I also find it very valuable to be able to address issues with the tools (and others) as a result of the available data that's being parsed.

Something I've been interested in for quite some time is the metadata embedded in various file formats used on Windows systems, and in more than a few cases this interest has cracked the shell a bit and given me just a peek inside of what may be going on when someone creates a file for malicious purposes.  I've not only been able to pull apart OLE format MS documents (MS Word .doc files), but also the OLE objects embedded inside the newer .docx format files.

One tool I like to use is Didier Stevens' oledump.py, in part because it provides a great deal of functionality, particularly where it will decompress VBA macros.  However, I will also use my own OLE parser, because it pulls some metadata that I tend to find very useful when developing a view into an adversary's activities.  Also, there is the potential for additional data to be extracted from areas that other tools simply do not address.
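
As an aside, if you want to poke at this sort of metadata yourself, a few lines of Python with the olefile module (which oledump.py builds on) will dump the SummaryInformation fields from an OLE document.  A quick sketch, with a hypothetical file name:

# Dump a few SummaryInformation metadata fields from an OLE document
import olefile

ole = olefile.OleFileIO("sample.doc")  # hypothetical file name
meta = ole.get_metadata()
for attr in ("author", "last_saved_by", "create_time",
             "last_saved_time", "creating_application"):
    print(attr, "=", getattr(meta, attr))
ole.close()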

An example of simple metadata extraction came from the Mia Ash persona that Allison Wickoff (an excellent intel analyst at SecureWorks) unraveled.  I didn't have much at all to do with the amazing work that Allison did...all I did was look at the metadata associated with a document sent to the victims.  However, from that small piece of information came some pretty amazing insights.

Another, much older example of how metadata can be extracted and used comes from the Blair case.  That's all I'll say about that one.

Another file format that I've put some work into understanding and unraveling is the LNK file format, which MS has done a very good job of documenting.

There are a number of FOSS tools available for parsing the binary LNK file format and displaying the various fields, but I noticed during some recent testing that there were files submitted to VirusTotal that had apparently "done their job" (i.e., been successfully used to infect systems), yet the parsing tools failed at various points when dissecting the binary contents of the file.  Some tools that did all of their parsing before displaying the various fields failed completely, while those that displayed the fields as they were parsed appeared to fail only at a certain point.  Understanding the file format allowed me to identify where the tools appeared to be failing; this way, I'm not simply going back to someone who wrote a tool with, "...it didn't work."  I know exactly what happened and why, and more importantly, I can address it.  As a result of this work, I have also seen that some aspects of these file formats can be abused without degrading the base functionality offered via the format.

This is where coding in DFIR comes in...using a hex editor, I was able to see where and why the tools were having issues, and I could also see that there really wasn't anything that any of the tools could do about it.  What I mean is this: when parsing the Extra Data Blocks within an LNK file (in particular, the TrackerDataBlock), what would be the "machine ID", or NetBIOS name of the system on which the LNK file was created, extends beyond the bounds of the size value for that section.  Yes, the machine ID value is of variable length (per the specification), but it also represents the NetBIOS name of a Windows system, so there's an expectation (however inaccurate) that it won't extend beyond the maximum length of a NetBIOS name.

In some of the test cases I observed (files downloaded from VirusTotal using the "tag: lnk" search term), the machine ID field was several bytes longer than expected, pushing the total length of the TrackerDataBlock out beyond its identified length.  This causes the tools to fail, as they begin reading the droid fields from the wrong location.  In other instances, what would be the machine ID is a string of hex characters that extends beyond the length of the TrackerDataBlock, with no apparent droid fields visible via a hex editor.
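
To illustrate, here's a rough sketch in Python of parsing just the TrackerDataBlock, per the MS-SHLLINK specification, with a sanity check for the sort of malformed machine ID fields described above.  The 'data' argument is assumed to hold the raw bytes of the block, starting at the BlockSize field:

# Sketch: parse a TrackerDataBlock (MS-SHLLINK section 2.5.10) and
# sanity-check the machine ID.  Per the spec, BlockSize is 0x60 and the
# MachineID occupies 16 bytes ahead of the droid GUIDs.
import struct

def parse_tracker_block(data):
    block_size, sig, length, version = struct.unpack_from("<IIII", data, 0)
    if sig != 0xA0000003:
        raise ValueError("not a TrackerDataBlock")
    machine_id = data[16:32]
    if block_size != 0x60 or b"\x00" not in machine_id:
        # The malformed files described above land here: the name runs
        # past its field, so droids read at fixed offsets will be garbage.
        print("WARNING: block size/machine ID out of spec")
    name = machine_id.split(b"\x00", 1)[0]
    droid = data[32:64]        # VolumeID and ObjectID GUIDs
    droid_birth = data[64:96]  # birth VolumeID and ObjectID GUIDs
    return name, droid, droid_birth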

These are possibly modifications of some kind, or errors in the tool used to create the LNK files (for an example of such a tool, go here).  Either way, could these be considered 'tool marks' that could be used to track LNK files being deployed across campaigns?

Am I suggesting that everyone, every examiner needs to be proficient at coding and knowledgeable of file formats?  No, not at all.  In this blog post, I'm simply elaborating on a previous response, and illustrating how having an understanding of coding can be helpful.

Other examples of how coding in DFIR is useful include not just being able to automate significant portions of your workflow, but also being able to understand what an attacker may have been trying to achieve.  I've engaged in a number of IR engagements over the years where attackers have left batch files or Visual Basic scripts behind, and for the most part, they were easy to understand, and very useful.  However, every now and then, there would be a relatively un- or under-used command that required some research, or something "new".

Another example is decoding weaponized MSWord documents.  Not long ago, I unraveled a macro embedded in an MSWord document, and having some ability to code not only helped me do the unraveling, but also helped me see what was actually being done.  The macro had multiple layers of obfuscation, including base64 encoding, character encoding (i.e., using the character number from an ASCII table instead of the character itself, and decoding via chr()...), as well as a few other tricks.  At one point, there was even a base64-encoded string embedded within a script that was, itself, base64 encoded.
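
As a contrived illustration of those layers (using made-up sample data, not the actual macro), peeling them back in Python looks something like this:

# Contrived example of the layered obfuscation described above:
# character codes (undone via chr()) wrapping base64-encoded base64
import base64

char_codes = [87, 84, 73, 120, 97, 119, 61, 61]   # made-up sample data
layer1 = "".join(chr(c) for c in char_codes)      # -> "WTIxaw=="
layer2 = base64.b64decode(layer1)                 # -> b"Y21k", itself base64
print(base64.b64decode(layer2).decode())          # -> "cmd"

The actual macro involved more steps and a lot more noise, but each layer falls to the same couple of lines.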

So, in summary, having some capability to code, at some level, is extremely valuable within the DFIR community...that, or knowing someone who does.  In fact, having someone (or a few someones) you can go to for assistance is just helpful all around.