I've always been a fan of books or shows where someone follows clues and develops an overall picture that leads them to their end goal. I've always liked the "hot on the trail" mysteries, particularly when the clues are assembled in a way that reveals what the antagonist was going to do next, what their next likely move would be. Interestingly enough, a lot of the shows I've watched have been centered around the FBI, shows like "The X-Files" and "Criminal Minds". I know intellectually that these shows are contrived, but assembling a trail of technical bread crumbs to develop a profile of human behavior is a fascinating idea, and something I've tried to bring to my work in DFIR.
Former FBI Supervisory Special Agent and Behavioral Profiler Cameron Malin recently shared that his newest endeavor, Modus Cyberandi, has gone live! The main focus of his effort, cyber behavior profiling, is right there at the top of the main web page. In fact, the main web page even includes a brief history of behavioral profiling.
This seems to be similar to Len Opanashuk's endeavor, Motives Unlocked, which leads me to wonder, is this a thing?
Is this something folks are interested in?
Apparently, it is, as there's research to suggest that this is, in fact, the case. Consider this research paper describing behavioral evidence analysis as a "paradigm shift", or this paper on idiographic digital profiling from the Journal of Digital Forensics, Security, and Law, to name but a few. Further, Google lists a number of (mostly academic) resources dedicated to cyber behavioral profiling.
This topic seems to be talked about here and there, so maybe there is an interest in this sort of analysis, but the question is, is the interest more academic, is the focus more niche (law enforcement), or is this something that can be effectively leveraged in the private sector, particularly where digital forensics and intrusion intelligence intersect?
I ask the question, as this is something I've looked at for some time now, in order to not only develop a better understanding of targeted threat actors who are still active during incident response, but to also determine the difference between a threat actor's actions during the response, and those of others involved (IT staff, responders, legitimate users of endpoints, etc.).
In a recent comment on social media, Cameron used the phrase, "...adversary analysis and how human behavior renders in digital forensics...", and it occurred to me that this really does a great job of describing going beyond just individual data points and malware analysis in DFIR, particularly when it comes to hands-on targeted threat actors. By going beyond just individual data points and looking at the multifaceted, nuanced nature of those artifacts, we can begin to discern patterns that inform us about the intent, sophistication, and situational awareness of the threat actor.
To that end, Joe Slowik has correctly stated that there's a need in CTI (and DFIR, SOC, etc.) to view indicators as composite objects; things like hashes and IP addresses have greater value when other aspects of their nature are understood. Many times we tend to view IP addresses (and other indicators) one-dimensionally; however, there's so much more about those indicators that can provide insight into the threat actor behind them, such as when, how, and in what context that IP address was used. Was it the source of a login, and if so, what type? Was it a C2 IP address, or the source of a download or upload? If so, how...via HTTP, curl, msiexec, BITS, etc.?
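To make the "composite object" idea concrete, here is a minimal Python sketch (my illustration, not anything from Joe's or Cameron's work) of an indicator carried with its contexts rather than as a bare value; the field names, roles, and the sample date are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """An indicator treated as a composite object, not a bare value."""
    value: str                     # e.g., an IP address or hash
    kind: str                      # "ip", "hash", "domain", ...
    contexts: list = field(default_factory=list)

    def add_context(self, when, role, detail):
        # role captures HOW the indicator was used: login source,
        # C2 address, exfil target, download source, etc.
        self.contexts.append({"when": when, "role": role, "detail": detail})

# Hypothetical enrichment of the IP from the example above
ip = Indicator("185.56.83.82", "ip")
ip.add_context("2022-11-07", "exfil-target", "data exfiltration via finger.exe")
print(ip.contexts[0]["role"])  # exfil-target
```

The point of the design is simply that every question in the paragraph above (was it a login source? a C2 address? how was it reached?) becomes another context entry attached to the same value, instead of a fact that gets lost once the IP is written into a flat blocklist.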
Here's an example of an IP address; in this case, 185.56.83.82. We can get some insight on this IP address from VirusTotal, enough to know that we should probably pay attention. However, if you read the blog post, you'll see that this IP address was used as the target for data exfiltration.
Via finger.exe.
Add to that the fact that the use of this LOLBin is identical to what was described in this 2020 advisory, and it should be easy to see that we've gone well beyond just an IP address by this point, as we've started to unlock and reveal the composite nature of that indicator.
The point of all this is that there's more to the data we have available than the one-dimensional perspective we're used to viewing it through. Now, if we begin to incorporate other data sources that are available to us (EDR telemetry, endpoint data and configurations, etc.), we'll begin to see exactly how, as Cameron stated, human behavior renders in digital forensics. Some of the things I've pursued and successfully demonstrated during previous engagements include hours of operation and preferred TTPs and approaches, enough so to separate the actions of two different threat actors on a single endpoint.
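The "hours of operation" piece in particular lends itself to a simple sketch. This is not tooling from any engagement, just a minimal Python illustration of the idea, with made-up timestamps: tally attributed activity by hour, and a working-hours pattern (and sometimes a second, non-overlapping pattern belonging to a different actor) starts to show.

```python
from collections import Counter
from datetime import datetime

def hours_of_operation(timestamps):
    """Tally activity by hour (UTC) to surface a working-hours pattern."""
    counts = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    return dict(sorted(counts.items()))

# Hypothetical timestamps of actor-attributed events from EDR telemetry
events = [
    "2024-03-01T02:15:00",
    "2024-03-01T02:47:00",
    "2024-03-02T03:05:00",
    "2024-03-03T02:30:00",
]
print(hours_of_operation(events))  # {2: 3, 3: 1}
```

In practice you'd bucket by actor attribution first and compare the resulting histograms; two clusters of activity hours on the same endpoint is one of the simplest ways human behavior "renders" in the data.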
I've also gained insight into the situational awareness of a threat actor by observing how they reacted to stimuli; during one incident, the installed EDR framework was blocking the threat actor's tools from executing on different endpoints. The threat actor never bothered to query any of the three endpoints to determine what was blocking their attempts; rather, on one endpoint, they attempted to disable Windows Defender. On the second endpoint, they attempted to delete a specific AV product without ever first determining if it was installed on the endpoint; the batch file they ran to delete all aspects and variations of the product was not preceded by query commands. Finally, on the third endpoint, the threat actor ran a "spray-and-pray" batch file that attempted to disable or delete a variety of products, none of which were actually installed on the endpoint. When none of these succeeded in allowing them to pursue their goals, they left.
So, yes, viewed through the right lens, with the right perspective, human behavior can be discerned through digital forensics. But the question remains...is this useful? Is the insight that this approach provides valuable to anyone?
I have felt this to be the most ignored aspect of the DFIR field. We tend to focus our efforts on analysis of the data and not the person behind that data. Vendors train tool usage (appropriately so). College programs teach to the technology. Few teach investigation and the investigative mindset (of which profiling is a part).
It’s funny, but you’d be hard pressed to find this taught at police academies...detectives almost always learn through experience, because the training is scarce.
IMHO, if one in DFIR does not take the human aspect with an investigative mindset into account with every ‘data analysis’, they are only 75% (at best) effective in that case.
Brett,
Thanks for the comment. This is an area I've been interested in for some time, but it just doesn't seem to get much traction.
Even if "profiling" is too formalized a word...after all, look at what it takes for an FBI agent to become a certified profiler...even just looking at the totality of the data and gaining insights into the threat actor's activities is a valuable outcome.
Or, perhaps, more valuable to some than others.
For example, maybe the reason this sort of thing isn't done is because there's no appetite for it...not from the analyst (to get the training to do it), nor from the end "customer".
When one doesn't know what they need to know, they will never seek it out, even if it is the missing piece of their competence puzzle.
The DFIR toolbox of hardware and software gives us a false impression that this is all that we need to work a case/incident. We know the buttons to push/scripts to write, and therefore, we know DFIR.
Many in the field simply run the tools with default settings, never question the output, and think they've done a good job.
Really compelling, Harlan. Like you, I have been interested in the human element of cybersecurity and DFIR for years, but also like you, I have seen little in the way of market traction. I think this is primarily because the current market appetite won't sustain this increased level of scrutiny - unless, of course, there is a way to monetize it.
To Brett's point, tool usage continues to be given too much attention, while thinking through the nuance of a case, and trying to see how it connects to other cases, other threat actor groups, or other campaigns, gets far too little.
I think another aspect that we have to talk about in the same vein is communication skills. Being able to clearly and succinctly articulate what you did, why you did it, why it's important to the case, and why anyone should care is an equally uncommon skill.
I think if we can start at least talking about it, and making it part of the DFIR lexicon, we can begin the process of turning the battleship with a boat oar. I don't anticipate it changing anytime soon, but it's at least a start.
Chris,
> I think this is primarily due to the current market appetite won't sustain this increased level of scrutiny -
That's an interesting perspective...can you elaborate?
Thanks!
Most tend to choose the path of least resistance in life and work. Paying (in time and money) for a technical course is easy if the software does a good job because it requires less thinking.
Looking at the human aspect, aka "investigating the human actor", requires making assumptions, guesses, theories, and inferences; critical thinking; making mistakes; taking risks in decisions; admitting mistakes; taking responsibility for outcomes (i.e., accusations!); and knowing how to investigate. That is a lot of work that you can't get from a book or a tool-focused class.
The sexy part of DFIR is using the tools. It is not report writing, testifying, presenting, or seeing the identification of the actor through to a conviction if necessary. Tools are a necessary part of DFIR (you can't build a house without a hammer...).
But, the technical work in DFIR is maybe 50% of the job. Those who focus only on the technical side will only be 50% effective compared to a competitor/peer/opposing expert/adversary who spends an equal amount of effort on the human aspect.
In practice, once you are competent in the human aspect, cases are easier to work because you can observe what happened, not just interpret data.
I have to agree with you, Brett.
IMHO, in most instances I've seen, data isn't so much "interpreted" as it is "reported".
When there is interpretation of the data, it's most often the analyst finding evidence of data staging (the threat actor collecting files and creating an archive) but stating in the report that "data exfiltration" occurred, even with no direct evidence of the data actually leaving the endpoint.