As an industry and community, we need to go beyond...go beyond looking at single artifacts to indicate or justify "evidence", and we need to go beyond having those lists of single artifacts provided to us. Lists, such as the SANS DFIR poster of artifacts, are a good place to start, but they are not intended to be the end-all. And we need to go beyond our own analysis, in isolation, and discuss and share what we see with others.
Here's a good example...in this recent blog post, the author points to Prefetch artifacts as evidence of file execution. Prefetch artifacts are a great source of information, but (a) they don't tell the full story, and (b) they aren't the only artifact that illustrates "file execution". They're one of many. While it's a good idea to start with one artifact, we need to build on that one artifact and create (and pursue) artifact constellations.
This post, and numerous others, tend to look at artifacts in isolation, and not as part of an overall artifact constellation. Subsequently, attempts at analysis fall apart (or simply fall short) when that one artifact, the one we discussed in isolation, is not present. Consider Prefetch files...yes, they are great sources of information, but they are not the only source of information, and they are not present by default on Windows servers.
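As an aside, whether Prefetch is even available is something you can check directly. Here's a minimal sketch (assuming a live Windows system and Python's standard winreg module; against an acquired image, you'd query the SYSTEM hive with a tool such as RegRipper instead) that reads the EnablePrefetcher value controlling this behavior:

```python
import winreg

# Registry location that controls the Windows Prefetcher.
KEY_PATH = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
            r"\Memory Management\PrefetchParameters")

# Documented EnablePrefetcher values.
MEANINGS = {
    0: "disabled",
    1: "application launch prefetching only",
    2: "boot prefetching only",
    3: "application launch and boot prefetching",
}

def prefetch_status() -> str:
    """Return a human-readable description of the Prefetcher setting."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _type = winreg.QueryValueEx(key, "EnablePrefetcher")
    return MEANINGS.get(value, f"unknown value: {value}")

if __name__ == "__main__":
    print("Prefetcher:", prefetch_status())
```

On workstations this typically comes back as 3; on servers, it's often 0, which is exactly why Prefetch can't be the only artifact in the constellation.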
And, no, I do not think that one blog post speaks for the entire community...not at all. Last year, I took the opportunity to explore the images provided as part of the DefCon 2018 CTF. I examined two of the images, but it was the analysis of the file server image that I found most interesting. Rather than attempting to answer all of the questions in the CTF (CTF questions generally are not a good representation of real-world engagements), I focused on one or two questions in particular. In the case of the file server, there was a question regarding the use of an anti-forensics tool. If you read my write-up, you'll see that I also reviewed three other publicly available write-ups...two relied on a UserAssist entry to answer the question, and the third relied on a Registry value that provided information about the contents of a user's desktop. However, none of them (and again, these are just the public write-ups that I could find quickly) actually determined whether the anti-forensics tool had been used, or whether the functionality in question had been deployed.
Wait...what? What I'm saying is that one write-up had answered the question based on what was on the user's desktop, and the two others had based their findings on UserAssist entries (i.e., that the user had double-clicked on an icon or program on their desktop). However, none of the three had determined if anything had actually been deleted. I say this because there was also evidence that another anti-forensics tool (CCleaner) had been of interest to the user.
My point is that when we look at artifacts in isolation from each other, we only see part of the picture, and often a very small part. If we only look at indications of what was on the user's desktop, that doesn't tell us if the application was ever launched. If we look at artifacts of program execution (UserAssist, Prefetch, etc.), those artifacts, in and of themselves, will not tell us what the user did once the application was launched; they won't tell us what functionality the user employed, if any.
Here's another way to look at it. Let's say the user has CCleaner (a GUI tool) on their desktop. Looking at just UserAssist or Prefetch...or, how about UserAssist and Prefetch...artifacts, what is the difference between the user launching CCleaner and deleting stuff, and launching CCleaner, waiting, and then closing it?
None. There is no difference. Which is why we need to go beyond just the initial, easy artifacts, and instead look at artifact clusters or constellations, as much as possible, to provide a clear(er) picture of behavior. This is due to the nature of what we, as examiners, are looking at today. None of the incidents we're looking at...targeted threats/APTs, ransomware/crimeware, violations of acceptable use policies, insider threats, etc...are based on single events or records.
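To make the constellation idea concrete, here's a minimal sketch of the kind of correlation I'm describing. The artifact records, field names, and five-minute window are all illustrative assumptions; in practice, the records would come from your Prefetch, UserAssist, USN journal, and other parsers. The point is that an execution artifact only becomes a finding about behavior when an outcome artifact corroborates it:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Artifact:
    source: str        # e.g., "UserAssist", "Prefetch", "USN journal"
    timestamp: datetime
    detail: str

# Hypothetical, pre-parsed records; real values would come from your parsers.
execution = [
    Artifact("UserAssist", datetime(2020, 2, 1, 14, 2, 0), "CCLEANER.EXE launched"),
    Artifact("Prefetch", datetime(2020, 2, 1, 14, 2, 5), "CCLEANER.EXE run count +1"),
]
outcomes = [
    Artifact("USN journal", datetime(2020, 2, 1, 14, 3, 40), "index.dat deleted"),
]

WINDOW = timedelta(minutes=5)  # assumed correlation window

def constellations(execution, outcomes, window=WINDOW):
    """Pair each execution artifact with outcome artifacts that follow it
    within the window; execution alone proves launch, not use."""
    for ex in execution:
        hits = [o for o in outcomes
                if timedelta(0) <= o.timestamp - ex.timestamp <= window]
        yield ex, hits

for ex, hits in constellations(execution, outcomes):
    status = "corroborated" if hits else "execution only, no observed effect"
    print(f"{ex.source}: {ex.detail} -> {status}")
    for o in hits:
        print(f"    {o.source}: {o.detail}")
```

If the outcome list is empty, all you can honestly report is that the program ran...which is precisely the CCleaner problem above.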
Consider ransomware...more often than not, these events were looked at as, "files were encrypted". End of story. But the reality is that in many cases, going back years, ransomware incidents involved much more than just encrypting files. Threat actors were embedded within environments for weeks or months before ever encrypting a file, and during that time they were collecting information and modifying the infrastructure to meet their needs. I say "were", but "still are" applies equally well. And we've seen an evolution of this "business model" over the past few months, in that we know that data was exfil'd during the time the actor was embedded within the infrastructure, not due to our analysis, but because the threat actor released it publicly, in order to "encourage" victims to pay. A great deal of activity needs to occur for all of this to happen...settings need to be modified, tools need to be run, data needs to be pulled back to the threat actor's environment, etc. And because these actions occur over time, we cannot simply look at one, or a few, artifacts in isolation if we want to see the full picture (or as full a picture as possible).
Dr. Ali Hadi recently authored a pair of interesting blog posts on the topic of USB devices (here, and here). In these posts, Dr. Hadi essentially addresses the question of, how do we go about performing our usual analysis when some of the artifacts in our constellation are absent?
Something I found fascinating about Dr. Hadi's approach is that he's essentially provided a playbook for USB device analysis. While he went back and forth between two different tools, both of his blog posts provide sufficient information to develop that playbook in either tool. For example, while Dr. Hadi incorporated the use of Registry Explorer, all of the artifacts (as well as others) can also be derived via RegRipper plugins. As such, you can create RegRipper profiles of those plugins, and then run them automatically against the data you've collected, automating the extraction of the necessary data. Doing so means that while some things may be missing, others may not, and analysts will be able to develop a more complete picture of activity, and subsequently, more accurate findings. And automation will reduce the time it takes to collect this information, making analysis more efficient, more accurate, and more consistent across time, analysts, etc.
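As a rough sketch of what that automation might look like: a RegRipper profile is simply a text file listing plugin names, one per line, and rip.pl accepts a profile via the -f switch. The profile names, hive paths, and wrapper below are illustrative assumptions, not a prescription:

```python
import subprocess
from pathlib import Path

# Hives collected from the image, each paired with a hypothetical profile
# (plugins are hive-specific, so each profile lists plugins for that hive).
HIVES = [
    (Path("collected/SYSTEM"), "usb_system"),
    (Path("collected/NTUSER.DAT"), "usb_ntuser"),
]

def run_profile(hive: Path, profile: str) -> str:
    """Run a RegRipper profile (a text file of plugin names) against a hive."""
    result = subprocess.run(
        ["rip.pl", "-r", str(hive), "-f", profile],  # rip.exe on Windows
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    Path("reports").mkdir(exist_ok=True)
    for hive, profile in HIVES:
        report = Path("reports") / f"{hive.name}_{profile}.txt"
        report.write_text(run_profile(hive, profile))
        print(f"[+] {hive.name}: wrote {report}")
```

Once the profiles exist, every case gets the same extraction, every time, regardless of which analyst runs it.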
Okay, so what? Well, again...we have to stop thinking in isolation. In this case, it's not just about looking at artifact constellations; it's also about sharing what we see and learn with other analysts. What one analyst learns, even the fact that a particular technique is still in use, is valuable to other analysts, as it can be used to significantly decrease their analysis time, while at the same time increasing accuracy, efficiency, and consistency.
Let's think bigger picture...are we (DFIR analysts) the only ones involved? In today's business environment, that's highly unlikely. Most things of value to a DFIR analyst, when examined from a different angle, will also be valuable to a SOC analyst, or an EDR/EPP detection engineer. Here's an example...earlier this year, I read that a variant of Ryuk had been analyzed and found to contain code for sending Wake-on-LAN packets in order to increase the number of systems it could reach, and encrypt. As a result, I wrote a detection rule to alert when such packets were found originating from a system; the point is that something found by malware reverse engineers could be effectively employed by SOC analysts, which in turn would result in more effective response from DFIR analysts.
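For context, a Wake-on-LAN "magic packet" has a fixed, well-documented structure: six 0xFF bytes followed by the target's six-byte MAC address repeated sixteen times, typically carried over UDP. That rigid structure is what makes the detection so clean. Here's a minimal sketch of the check (how the payloads arrive, from a sensor feed or a capture, is left as an assumption):

```python
def is_wol_magic_packet(payload: bytes) -> bool:
    """True if the UDP payload matches the Wake-on-LAN magic packet format:
    6 bytes of 0xFF, then one 6-byte MAC address repeated 16 times."""
    if len(payload) < 102:              # 6 + (16 * 6) bytes minimum
        return False
    if payload[:6] != b"\xff" * 6:
        return False
    mac = payload[6:12]
    return all(payload[6 + i * 6:12 + i * 6] == mac for i in range(16))

# Example: a packet targeting MAC aa:bb:cc:dd:ee:ff
sample = b"\xff" * 6 + bytes.fromhex("aabbccddeeff") * 16
assert is_wol_magic_packet(sample)

# In a detection pipeline (details assumed), the interesting signal is a
# workstation *sending* such payloads, not a management server doing so.
```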
We need to go beyond. It's not about looking at artifacts in isolation, and it's not about conducting our own analysis in isolation. The bad guys don't do it...after all, we track them as groups. So why not pursue all aspects of DFIR as a 'group'; why not look at groups of artifacts (constellations), and share our analysis and findings not just with other DFIR analysts, but other members of our group (malware RE, threat intel analyst, SOC analyst, etc.), as well?