Thursday, August 26, 2021

Tips for DFIR Analysts

Over the years as a DFIR analyst...first doing digital forensics analysis, and then incorporating that analysis as a component of IR activity...there have been some stunningly simple truths that I've learned, truths that I thought I'd share. Many of these "tips" are truisms that I've seen time and time again, and recognized that they made much more sense and had more value when they were "named".

Tips, Thoughts, and Stuff to Think About

Computer systems are a finite, deterministic space. The adversary can only go so far, within memory or on the hard drive. When monitoring computer systems and writing detections, the goal is not to write the perfect detection, but rather to force the adversary into a corner so that, no matter what they do, they will trigger something. So, it's a good thing to have a catalog of detections, particularly if it is based on things like, "...we don't do this here...".

For example, I worked with a customer who'd been breached by an "APT" the previous year. During the analysis of that breach, they saw that the threat actor had used net.exe to create user accounts within their environment, and this is something that they knew that they did NOT do. There were specific employees who managed user accounts, and they used a very specific third-party tool to do so. When they rolled out an EDR framework, they wrote a number of detection rules related to user account management via net.exe. I was asked to come on-site to assist them when the threat actor returned; this time, they almost immediately detected the presence of the threat actor. Another good example: how many of us log into our computer systems and type "whoami" at a command prompt? I haven't seen many users do this, but I've seen threat actors do this. A lot.
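The "we don't do this here" idea can be sketched as a simple filter over process-creation events. This is a minimal illustration, not any particular EDR product's rule syntax; the event schema (the `image` and `command_line` keys) is a hypothetical stand-in for whatever your telemetry provides.

```python
# Minimal sketch of "we don't do this here" detections over hypothetical
# process-creation events: account management via net.exe, and interactive
# "whoami" use, which legitimate users rarely type but threat actors often do.

SUSPICIOUS = [
    ("net.exe", "user"),    # e.g., net user <name> <pass> /add
    ("net1.exe", "user"),
    ("whoami.exe", ""),     # any invocation is worth a look
]

def flag(event):
    """Return True if a process-creation event matches a detection."""
    image = event["image"].lower().rsplit("\\", 1)[-1]
    cmdline = event["command_line"].lower()
    return any(image == exe and token in cmdline
               for exe, token in SUSPICIOUS)

events = [
    {"image": r"C:\Windows\System32\net.exe",
     "command_line": "net user backdoor P@ssw0rd /add"},
    {"image": r"C:\Windows\System32\notepad.exe",
     "command_line": "notepad.exe report.txt"},
]
print([flag(e) for e in events])  # → [True, False]
```

The value isn't in the code; it's in the policy decision behind it. Because the customer knew account creation only ever happened through one tool, any net.exe match was high-signal with essentially no false positives.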

From McChrystal's "Team of Teams", there's a difference between "complexity" and "complicated". We often refer to computer systems and networks as "complex", when they are really just complicated, and inherently knowable. We, as humans, tend to make things that are complicated out to be complex.

A follow-on to the previous tip is that the term "sophisticated" is over-used to describe a significant number of attacks. When you look at the data, very often you'll see that attacks are only as sophisticated as they need to be, and in most cases, they really aren't all that sophisticated. Consider an RDP server with an account password of "password" (I've seen this recently...yes, during the summer of 2021), or a long-patched vulnerability with a freely available published exploit (e.g., JexBoss, which was used by the Samas ransomware actors during the first half of 2016).

When performing DF analysis, the goal is to be as comprehensive and thorough as possible. A great way to achieve this is through automation. For example, I developed RegRipper because I found that I was doing the same things over and over again, and I wanted a way to make my job easier. The RegRipper framework allowed me to add checks and queries without having to write (or rewrite) entirely new tools every time, as well as provided a framework for easy sharing between analysts.
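The framework idea behind RegRipper can be sketched as a simple plugin registry: each check is a small, independent function, and the framework discovers and runs them, so adding a new check never means rewriting the tool. RegRipper itself is written in Perl; the Python below is an illustrative sketch only, and the plugin names and the flat-dict "hive" are hypothetical stand-ins for parsed Registry data.

```python
# Sketch of a plugin-style framework: checks register themselves by name,
# and the driver runs any or all of them against the same parsed data.

PLUGINS = {}

def plugin(name):
    """Decorator: register a check function under a short name."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap

@plugin("run_keys")
def run_keys(hive):
    # A real plugin would walk the parsed hive; here the "hive" is a dict.
    return hive.get("Software\\Microsoft\\Windows\\CurrentVersion\\Run", {})

@plugin("computer_name")
def computer_name(hive):
    return hive.get("ControlSet001\\Control\\ComputerName", "unknown")

def rip(hive, names=None):
    """Run the named plugins (default: all) and collect their results."""
    return {n: PLUGINS[n](hive) for n in (names or PLUGINS)}

# Toy "hive": a flat dict standing in for parsed Registry data.
hive = {"ControlSet001\\Control\\ComputerName": "WORKSTATION-7"}
print(rip(hive, ["computer_name"]))  # → {'computer_name': 'WORKSTATION-7'}
```

The design choice is the point: because each check is self-contained, analysts can write and share a single plugin file rather than a whole tool, which is exactly what made the checks easy to accumulate and exchange.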

TCP networking begins with a three-way handshake; UDP is "fire and forget". This one tip helped me a great deal during my early days of DFIR consulting, particularly when having discussions with admins regarding things like firewalls and switches.
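The difference is easy to demonstrate with a couple of sockets. This sketch stands up a local TCP listener so the handshake has something to answer, then sends a UDP datagram at a port where nothing is listening; the addresses and ports are illustrative.

```python
import socket

# Local TCP listener on an ephemeral port, so connect() has a peer.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

# TCP: connect() performs the three-way handshake (SYN, SYN/ACK, ACK).
# If it returns without error, a listener answered -- that's evidence.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", port))
print("TCP handshake completed on port", port)
tcp.close()
listener.close()

# UDP: sendto() is "fire and forget" -- the datagram leaves whether or
# not anything is listening; no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"ping", ("127.0.0.1", 9))  # discard port; call still "succeeds"
udp.close()
print("UDP datagram sent,", sent, "bytes")
```

This is why, in a firewall-log discussion, an established TCP connection tells you both sides talked, while a logged outbound UDP packet tells you only that something was sent.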

Guessing is lazy. Recognize when you're doing it before someone else does. If there is a gap in data or logs, say so. At some point, someone is going to see your notes or report, and see beyond the veil of emphatic statements, and realize that there are gaping holes in analysis that were spackled over with a thin layer of assumption and guesswork. As such, if you don't have a data source...if firewall logs were not available, or Windows Event Logs were disabled, say so.

The corollary to the tip on "guessing" is that nothing works better than a demonstration. Years ago, I was doing an assessment of a law enforcement headquarters office, and I was getting ready to collect password hashes from the domain server using l0phtcrack. The admin said that the systems were locked down and there was no way I was going to get the password hashes. I pressed the Enter key down, and had the hashes almost before the Enter key returned to its original position. The point is, rather than saying that a threat actor could have done something, a demonstration can drive the point home much quicker.

Never guess at the intentions of a threat actor. Someone raised in the American public school system, with or without military or law enforcement experience, is never going to be able to determine the mindset of someone who grew up in the cities of Russia, China, etc. That is, not without considerable training and experience, which many of us simply do not have. It's easy to recognize when someone's guessing the threat actor's intention, because they'll start off a statement with, "...if I were the threat actor...".

If no one is watching, there is no need for stealth, and a lack of stealth does not imply a lack of sophistication. I was in a room with other analysts discussing a breach with the customer when one analyst described what we'd determined through forensic analysis as "...not terribly sophisticated...", in part because the activity wasn't very well hidden, nor did the attacker cover their tracks. I had to later remind the analyst that we had been called in a full 8 months after the threat actor's most recent activity.

The adversary has their own version of David Bianco's "Pyramid of Pain", and they're much better at using it. David's pyramid provides a framework for understanding what we (the good guys) can do to impact and "bring pain" to the threat actor. It's clear from engaging in hundreds of breaches, either directly or indirectly, that the bad guys have a similar pyramid of their own, and that they're much better at using theirs.

We're not always right. It's just a simple fact. This is also true of "big names", ones we imagine are backed by significant resources (spell checkers, copy editors, etc.) and, as such, we assume are correct and accurate. We shouldn't blindly accept what others say in open reporting, not without checking and critical thinking.

There are a lot of assumptions in this industry. I'm sure it's the same in other industries, but I can't speak to those. I've seen more than a few assumptions regarding royalties for published books; new authors starting out with big publishers may start at 8%, or less. And that's just for paper copies (not electronic), and only for English language editions. I had a discussion once with a big name in the DFIR community who assumed that because I worked for a big name company, of course I had access to commercial forensic suites; they'd assumed that my commenting on not having access to such suites was a load of crap. When I asked what made them think that I would have access to these expensive tool sets, they ultimately admitted that yes, they'd assumed that I would.

If you're new to DFIR, or even if you've been around for a while, you've probably found interviewing for a job to be a nerve-racking, anxiety-producing affair. One thing to keep in mind is that most of the folks you're interviewing with aren't terribly good at it, and are probably just as nervous as you. Think about how many times you've seen courses offered on how to conduct a job interview, from the perspective of the interviewer.


Andreas said...

Thanks so much as always for your thoughts and sharing.

I think that rating an attack as "sophisticated" says more about the person than about the actual attack, because there is no common interpretation of what sophistication means or is; it's more of a relative term, measured against the speaker's knowledge and experience. Companies declaring an attack "sophisticated" are trying to cast themselves in a better light. Getting familiar with attack frameworks, detection frameworks, DFIR reports, Harlan's writeups, ... will reduce the use of the word "sophisticated", I guess :)

Guessing, or overlooking gaps in analysis, is difficult to spot, because we go blind during a case. Therefore, it's important that multiple analysts work on cases, to catch that guessing and speculation.

Regarding a catalog of, e.g., "...we don't do this here...": these catalogs vary inside large infrastructures, between different platforms or organizational units. Clustering, and applying detections to those clusters, helps, but is a huge effort to build and maintain. Often we apply detections to all of the endpoints, servers, etc., and aren't able to apply more specific rules to special environments. Have you found such clustering useful, too?

"Never guess at the intentions of a threat actor.": Unfortunately, this is exactly what gets asked, over and over, when discussing cases. In the same sense, articles on Red Team Journal also talked about this in regards to red teaming. Red teams face a similar issue when building and executing cases, e.g., "The Need for Genuine Empathy in Modern Adversarial Red Teaming". Only some posts are still available. It's difficult to play the role of the adversary when you have totally different physical environments, time constraints, framed views, etc. Instead of guessing the intention, we can enumerate different worst-case outcomes from the point where an attacker was, without speculating, and then prove those hypotheses. Distinguishing whether an attack is a ransomware attack vs. a stealthy APT with espionage intentions, based on a detection, is difficult but important, because the playbooks differ... but to know that intention, the attack must already have been executed.

H. Carvey said...


Thanks for leaving a comment!

> ...more of a relative term compared to the speaker's knowledge and experience...

Agreed, to some extent. In my experience, it's more often a statement to hide behind. I say this as the one who has gone to the CIO or CTO and had to say, "...this Windows 7 system was running an unauthorized RDP server, installed against policy, and the admin password was, in fact, "password"...".

> ...catalogs varies inside large infrastructures...

Yes, of course...but this is not a reason for not having them. The example I gave was based on policy, as well as an operational business decision. It's up to organizations to make their own similar decisions.

> Unfortunately, it is exactly asked over and over when discussing cases.

Oh, yes...agreed. But that doesn't mean it has to be answered, particularly not in the way it is often answered, which is very often without gaps being acknowledged.

Consider this post:

In the above case, my team was "second chair" to the primary response team, and were given this task in an almost, "well, crap, we hired you so we might as well give you something to do..." manner. I fully acknowledged that we did not have full visibility into the overall engagement, so we were only going to comment on what we could see.

Again, thanks for reading, and thanks for commenting...