Tuesday, May 21, 2019

DefCon 2018 CTF File Server Take-Aways

In my last post, I shared some findings from analyzing an image for anti- or counter-forensics techniques, as part of the DefCon 2018 CTF. Working on the analysis for the question I pursued, writing it up, and reflecting on it afterward really brought out a number of what I consider to be important take-aways that apply to DFIR analysis.

Version
The Windows version matters. The first DefCon 2018 CTF image I looked at was Windows Server 2016, and the file server image was Windows Server 2008 R2. Each had some artifacts that the other didn't; for example, the file server image had a SysCache.hve file, and the HR server didn't.
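As an illustration of that first step, here's a minimal sketch of pulling version information from a Software hive exported from an image; it assumes the python-registry module is available, and the hive path is hypothetical:

    # pip install python-registry
    from Registry import Registry

    # Software hive exported from the image; the path is hypothetical
    reg = Registry.Registry("F:\\case\\SOFTWARE")

    # ProductName and build information tell you what you're working
    # with, and set expectations for which artifacts should exist
    key = reg.open("Microsoft\\Windows NT\\CurrentVersion")
    for name in ("ProductName", "CurrentVersion", "CurrentBuildNumber"):
        print("%s: %s" % (name, key.value(name).value()))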

Both of the images that I've looked at so far from the CTF were of Windows server systems, and by default, application prefetching is not enabled on server editions.
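That said, the setting is easy enough to verify rather than assume; a sketch of checking the System hive, again assuming python-registry and a hypothetical hive path:

    from Registry import Registry

    reg = Registry.Registry("F:\\case\\SYSTEM")

    # Resolve the current ControlSet via the Select key
    current = reg.open("Select").value("Current").value()
    key = reg.open("ControlSet%03d\\Control\\Session Manager"
                   "\\Memory Management\\PrefetchParameters" % current)

    # EnablePrefetcher: 0 = disabled, 1 = application launch,
    # 2 = boot, 3 = both
    print("EnablePrefetcher: %d" % key.value("EnablePrefetcher").value())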

Something else to consider: was the system 32- or 64-bit? This is an important factor, as some artifacts associated with 32-bit applications will be found in a different location on a 64-bit system (under the WOW6432Node key within the Software hive, for example).

The version of Windows you're working with is very important, as it tells you what should and should not be available for analysis, and sets your expectations accordingly. Now, once you start digging in, things could change. Someone could have enabled more detailed logging, or enabled application prefetching on a server edition of Windows; if that's the case, it's gravy.

This is also true for applications, as well as for other operating systems; version and family matter.

Execution
A file existing on the system does not definitively mean it was executed, and the same is true for a GUI application. Just because a GUI application was launched by a user, does that mean it was actually run; that is, that the functionality available through the GUI was exercised? That's the challenge with GUI applications, isn't it? How do you tell when they're actually run, beyond just the GUI being opened? Did the user select various options in the UI and then click "Ok", or did they simply open the GUI and then close it?
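Artifacts such as UserAssist entries will tell you that the GUI application was launched, but not what the user did once it was open. A minimal sketch of decoding a single UserAssist value, assuming the Windows 7-and-later value format (run count at offset 4, last-run FILETIME at offset 60):

    import codecs
    import struct
    from datetime import datetime, timedelta

    def decode_userassist(value_name, data):
        # Value names are ROT13-encoded application paths
        path = codecs.decode(value_name, "rot_13")
        # Run count and last-execution FILETIME, per the Win7+ format
        run_count = struct.unpack_from("<I", data, 4)[0]
        ft = struct.unpack_from("<Q", data, 60)[0]
        last_run = None
        if ft:
            last_run = datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)
        return path, run_count, last_run

Even with a run count and a last-run time in hand, all you can definitively say is that the application was launched; whether the user selected options and clicked "Ok" is a separate question, answered (if at all) by other artifacts.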

Yeah, I get it...why open the application and launch the UI if you don't actually intend to run it?  We often think or say the same thing when it comes to data staging and exfiltration, don't we?  Why would an actor bother to stage an archive if they weren't going to exfil it somehow?  So, when we see data staged, we would assume (of course) that it was exfiltrated.  However, we may end up making that statement about exfiltration in the absence of any actual data to support it. What if the actor tried to exfil the archive, but failed? 

If you don't have the data available to clearly and definitively illustrate that something happened, say that.  It's better to do that, than it is to state that something occurred, only to find out later that it didn't.

Consider this: an application running on a system can be viewed through the lens of Locard's Exchange Principle; in this case, the two "objects" are the application and the eco-system in which it executes. In the case of an application that reportedly performs anti- or counter-forensics functions, one would expect "forensic artifacts" to be removed somehow; this would constitute the "material" exchanged between the two objects. "Contact" would be the execution of the application; in the case of a GUI application, specifically clicking "Ok" once the appropriate settings have been selected. In the case of the file server CTF question, the assumption in the analysis is that "forensic artifacts" were deleted.

Why does it matter that we determine whether the executable was actually launched, or whether the functionality within the GUI tool was actually exercised by the user? Isn't it enough to simply say that the file exists on the system and that it had been run, and leave it at that? We have to remember that, as experts, our findings are going to be used by someone to make a decision, or are going to impact someone. If you're an IR consultant, it's likely that your findings will be used to make critical business decisions; e.g., what resources to employ, or possibly even external notification mandated by compliance regulations or legislation. I'm not even going to mention the legal/law enforcement perspective on this, as there are plenty of other folks out there who can speak authoritatively on that topic. However, the point remains the same: what you state in your findings is going to impact someone.

Something else to consider: throughout my time in the industry, I've seen more than a few occasions when a customer has contacted my team not to perform DFIR work, but rather to review the DFIR work done by others. In one instance, a teammate was asked by the client, "What questions would you ask?" He was given the report produced by the other team, and went through it with a fine-tooth comb. I'm also aware of instances where a customer has provided the same data to two different consultants and compared the resulting reports. I can't say that this has happened often, but it has happened. I've also been in situations where the customer hired two different consulting companies, and shared the reports they (the customer) received with the other company.

Micro-Timelines and Overlays
One thing that was clear to me in both of the posts regarding the DefCon CTF images was the value and utility of micro-timelines. Full timelines of system activity are valuable as well, because they provide a significant amount of context around activity at a given point in time. However, full system timelines are also very noisy, because Windows systems are very noisy. As we saw in the last blog post, a Windows update was being installed around the same time that "forensic artifacts" were being deleted. This sort of circumstance can create confusion and lead to the misinterpretation of data.

However, by using micro-timelines, we can focus on specific sets of artifacts and start building out a skeleton analysis.  We can then use that skeleton as an overlay, and pivot into other micro-timelines, as well as into the full timeline.  Timeline analysis is an iterative process, adding and removing layers as the picture comes more into focus. 
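The mechanics can be as simple as a keyword filter; here's a sketch that carves a micro-timeline out of a full events file, assuming the five-field, pipe-delimited TLN format (the file name and pivot terms are hypothetical):

    import sys

    # Pivot terms of interest; these examples are hypothetical
    PIVOTS = ("privazer", "userassist")

    # events.txt holds events as Time|Source|System|User|Description
    with open("events.txt") as events:
        for line in events:
            fields = line.split("|", 4)
            if len(fields) == 5 and any(p in line.lower() for p in PIVOTS):
                sys.stdout.write(line)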

This process can be especially valuable when dealing with activity that does not occur in an immediately sequential manner. In my last blog post, the user launched an application, and then enabled and ran some portion of the application's functionality. This occurred in a straightforward fashion, albeit with other system activity (i.e., a Windows update) occurring around the same time. But what if that hadn't been the case? What if the activity had been dispersed over days or weeks? What if the user had used RegEdit to delete some of the forensic artifacts, a few at a time? What if the user had also performed online searches for tips on how to remove indications of activity, and then those artifacts were impacted, but over a period of days? Having mini- and micro-timelines from which to develop overlays and pivot points would make the analysis much easier and more efficient than scrolling through a full system timeline.

Deep Knowledge
Something that working on, and reflecting on, my previous post brought to mind is that deep knowledge has considerable value. By that, I mean deep knowledge not only of data structures, but also of the context of the available data.

Back when I was performing PCI investigations (long ago, in a galaxy far, far away...), one of the analysts on our team followed our standard process for running searches for credit card numbers (CCNs) across the images in their case. We had standardized processes for many of the functions associated with PCI investigations, due not only to what was mandated for the investigations, but also to the time frame in which the information had to be provided. By having standardized, repeatable processes, no one had to figure out what steps to take, or what to do next. In one instance, the search resulted in CCNs being found "in" the Software hive. Closer examination revealed that the CCNs were not value names, nor were they stored in value data; rather, they were located in unallocated space within the hive file, part of sectors added to the logical file as it "grew". Apparently, the bad guy had run tools to collect CCNs into a text file, and at one point in their process, deleted that text file. The sectors that had contained that data were later added to the Software hive as it grew in size.
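The mechanics of such a search are simple enough; the deep knowledge lies in understanding where the hits live within the hive file. A sketch of sweeping the raw hive for CCN-like digit runs, using a Luhn check to discard most random sequences (the hive path is hypothetical, and a real process would also map each hit's offset against the hive's cell structure):

    import re

    def luhn_ok(digits):
        # Standard Luhn check; filters out most random 16-digit runs
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # Scan the raw bytes of the hive file; hits may fall in unallocated
    # space within the hive, rather than in any key or value
    data = open("F:\\case\\SOFTWARE", "rb").read()
    for match in re.finditer(rb"\d{16}", data):
        if luhn_ok(match.group().decode()):
            print("possible CCN at file offset %d" % match.start())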

Having the ability to investigate, discover, and understand this was critical, as it had to be reported to Visa (Visa ran the PCI Council at the time), and there were strong indications that someone at Visa actually read the reports that were sent in...often, in great detail. In fact, I have no doubt that other DFIR analysts reviewed those reports, as well. As such, being able to tie our findings to the data in a reproducible manner was absolutely critical. Actually, reproducibility should always be paramount, and foremost in your mind.

MITRE ATT&CK
One of the aspects of the MITRE ATT&CK framework that I've been considering for some time now is something of a reverse mapping for DFIR. What I mean by this is that right now, using the DefCon 2018 CTF file server image, we can map the activity from the question of interest (counter-forensics) to the Defense Evasion tactic, and then to the Indicator Removal on Host technique. From there, we might take things a step further and add the findings from my last blog post as "observables"; that is, the user's use of the PrivaZer application led to specific "forensic artifacts" being deleted, including artifacts such as the UserAssist value name found to have been deleted, the "empty" keys we saw, etc.

From there, we can then say that in order to identify those observables for the technique, we would want to look for the artifacts, or look in the locations, identified in the blog post.  This would be a reverse mapping, going from the artifacts back to the tactic. 

Let's say that you were examining an instance of data exfiltration, one that you found had occurred through the use of a bitsadmin 'upload' job. Within the Exfiltration tactic, the use of bitsadmin.exe (or PowerShell) to create an 'upload' job might be tagged with the Automated Exfiltration or the Scheduled Transfer technique (or both), depending upon how the job is created. The observable would be the command line (if EDR monitoring was in place) used to set up the job, or, in the case of DFIR analysis, a series of records in the Microsoft-Windows-Bits-Client/Operational Event Log.

Reversing this mapping, we know that one way to identify either of those two techniques would be to look in the Microsoft-Windows-Bits-Client/Operational Event Log. You might also include web server logs as a DFIR artifact that could help you definitively identify another aspect of the data exfiltration.

In this way, we can extend the usual mapping from tactic to technique by adding "observables" and data sources, which then allows us to do a reverse mapping back to the tactic. Sharing this across the DFIR team means that analysts no longer have to have personally investigated a case of data exfiltration in order to know what to look for.
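A minimal sketch of what that reverse mapping might look like as a shared data structure; the artifact names and tactic/technique pairs below are illustrative, not a complete catalog:

    # Map DFIR artifacts and data sources back to (tactic, technique)
    ARTIFACT_MAP = {
        "Microsoft-Windows-Bits-Client/Operational": [
            ("Exfiltration", "Automated Exfiltration"),
            ("Exfiltration", "Scheduled Transfer"),
        ],
        "deleted UserAssist value": [
            ("Defense Evasion", "Indicator Removal on Host"),
        ],
    }

    def tactics_for(artifact):
        # Given an artifact, return the tactic/technique pairs an
        # analyst should consider
        return ARTIFACT_MAP.get(artifact, [])

    for tactic, technique in tactics_for("Microsoft-Windows-Bits-Client/Operational"):
        print("%s -> %s" % (tactic, technique))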
