Monday, February 13, 2023

Training and CTFs

The military has a couple of adages...one, "you fight like you train", and another being, "the more you sweat in peace, the less you bleed in war." The idea behind these adages is that progressive, realistic training prepares you for the job at hand, which is often one performed under "other than optimal" conditions. You start by learning in the classroom, then in the field, and then under austere conditions, so that when you do have to perform the function(s) or task(s) under similar conditions, you're prepared and it's not a surprise. This is also true of law enforcement, as well as other roles and functions. Given the pervasiveness of this style of training and familiarization, I think it's safe to say that it's a highly successful approach.

While fun, the way DFIR CTFs are constructed and presented does those in the field a disservice, as they do not encourage analysts to train the way they should be fighting. In fact, they tend to cement, and even encourage, bad habits.

Let me say right now that I understand the drive behind CTF challenges, particularly those in the DFIR field. I understand the desire to make something available for others to use to practice, and perhaps rate themselves against, and I do appreciate the work that goes into such things. Honestly, I do, because I know that it isn't easy. 

Let me also say that I understand why CTFs are provided in this manner; it's because this is how many analysts are "taught", and it's because this is how other CTFs are presented. I also understand that presenting challenges in this manner provides for an objective measure against which to score individual participants; the time it takes to complete the challenge, the time between answering subsequent questions, and the number of correct responses are all objective measures that can be handled by a computer program, and really provide little wiggle room. So, we have analysts who "come up" in the industry, taking courses and participating in CTFs that are all structured in a similar manner, and they go on to create their own CTFs, based on that same structure.

However, the issue remains...the way DFIR CTFs are presented encourages something much less than what we should be doing, IRL. We continue to teach analysts that reviewing individual artifacts in isolation is "sufficient", and there's no direction or emphasis on concepts such as validation, toolmarks, or artifact constellations. In addition, there's no development of incident intelligence to be shared with others, both in the DFIR field and adjacent to it (SOC, detection engineering, CTI, etc.).

Hassan recently posted regarding CTFs and "deliberate practice"; while I agree with his thoughts in principle, they tend to fall short. Yes, CTFs are great, because they offer the opportunity to practice, but they fall short in a couple of areas. One in particular is that they aren't necessarily "deliberate practice"; perhaps a better way of saying it is that they're "deliberate practice" in the wrong areas, because we're telling those who participate in these challenges that answering obscure questions, in a manner that isolates that information from the other information needed to "solve" the case, is the standard to strive for, and that should never be the case.

Another way that these DFIR CTFs fall short is that they tend to perpetuate the belief that examiners should look at artifacts one at a time, in isolation from other artifacts (particularly others in the constellation). Given that Windows is an operating system, with a lot going on, our old way of viewing artifacts...the way we've always done it...no longer serves us well. It's like trying to watch a rock concert in a stadium by looking through a key hole. We can no longer open one Windows Event Log file in a GUI viewer, search for something we think might be relevant, close that log file, open another one, and repeat. Regardless of how comfortable we are with this approach, it is terribly insufficient and leaves a great many gaps and unanswered questions in even what appears to be the most rudimentary case.
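To make the alternative concrete, here's a minimal sketch of pulling records from several Windows Event Log files into one time-ordered view before review, rather than opening each log by itself. This is just an illustration of the idea, not any particular tool's method; it assumes the python-evtx library is installed, and the folder path is hypothetical.

import glob
from Evtx.Evtx import Evtx

events = []
# Gather records from every log in the folder, instead of reviewing each file in isolation
for path in glob.glob(r"C:\Cases\host1\winevt\Logs\*.evtx"):
    with Evtx(path) as log:
        for record in log.records():
            # record.timestamp() is the event's written time; record.xml() is the full event
            events.append((record.timestamp(), path, record.xml()))

# One combined, time-ordered view across all of the logs
events.sort(key=lambda e: e[0])
for ts, source, xml in events[:10]:
    print(ts.isoformat(), source)

In practice you'd normalize these events into a timeline alongside other data sources (file system metadata, Registry, etc.), but the point is the same: the logs get reviewed together, not one at a time.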

Let's take a look at an example: this CyberDefenders challenge, as Hassan mentioned CyberDefenders in a comment. The first thing we see is that we have to sign up, and then sign in, to work the challenge, and that none of the case notes showing how analysts solved the CTF are available. The same has been true of other CTFs, including (but not limited to) the 2018 DefCon DFIR CTF. Keeping case notes, and sharing them, is something analysts should be deliberately practicing.

Second, we see that there are 32 questions to be answered in the CTF, the first of which is, "what is the OS product name?" We already know from one of the tags for the CTF that the image is Windows, so how important is the "OS product name"? This information does not appear to be significant to any of the follow-on questions, and seems to be solely for the purpose of establishing some sort of objective measure. Further, in over two decades of DFIR work, addressing a wide range of response scenarios (malware, ransomware, PCI, APT, etc.), I don't think I've ever had a customer ask more than 4 or 5 questions...max. In the early days, there was most often just one question customers were interested in:

Is there malware on this system?

As time progressed, many customers wanted to know:

How'd they get in? 
Who are they?
Are they still in my network?
What did they take?

Most often, whether engaging in PCI forensic exams, or in "APT" or targeted threat response, those four questions, or some variation thereof, were at the forefront of customers' minds. In over two decades of DFIR work, ranging from individual systems up to the enterprise, I never had a case where a customer asked 32 questions (I've seen CTFs with 51 questions), and I've never had a customer (or a co-worker/teammate) ask me for the LogFile sequence number of an Excel spreadsheet. In fact, I can't remember a single case (none stands out in my mind) where the LogFile sequence number of any file was a component or building block of an overall investigation.

Now, I'm not saying this isn't true for others...honestly, I don't know, as so few in our field actually share what they do. But from my experience, in working my own cases, and working cases with others, none of the questions asked in the CTF were pivotal to the case.

So, What's The Answer?
The answer is that forensic challenges need to be adapted, worked, and "graded" differently. CTFs should be more "deliberate practice", aligned to how DFIR work should be done, and perpetuating and reinforcing good habits. Analysts need to keep and share case notes, being transparent about their analytic goals and thought processes, because this is how we learn overall. And I don't just mean that this is how that analyst, the one who shares these things, learns; no, I mean that this is how we all learn. In his book, Call Sign Chaos, retired Marine General Jim Mattis said that our own "personal experiences alone are not broad enough to sustain us"; while that thought applies to a warfighter's reading, this portion of the quote applies much more broadly, meaning that if we're stuck in our own little bubble, not sharing what we've done and what we know with others, then we're not improving, adapting, and growing in our profession.
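As one way to make that practice concrete, here's a minimal sketch of keeping timestamped, structured case notes as you work. The file name, fields, and helper function are all hypothetical and purely illustrative (John Asmussen's case_notes.py, linked below, is a separate, fuller project):

# Hypothetical sketch: append structured, timestamped case notes to a Markdown file
import datetime

NOTES_FILE = "case_notes.md"  # illustrative file name

def add_note(goal, action, finding):
    """Record the analytic goal, the step taken, and what was found."""
    ts = datetime.datetime.now().isoformat(timespec="seconds")
    with open(NOTES_FILE, "a", encoding="utf-8") as f:
        f.write(f"## {ts}\n")
        f.write(f"- Goal: {goal}\n")
        f.write(f"- Action: {action}\n")
        f.write(f"- Finding: {finding}\n\n")

if __name__ == "__main__":
    # Illustrative entry only
    add_note(
        "Determine initial access vector",
        "Reviewed RDP-related logon events across the Event Logs",
        "Candidate remote logon identified; needs validation against other artifacts",
    )

The exact format matters far less than the habit: capture the goal, the step, and the finding as you go, so the notes can be shared and the reasoning can be followed by others.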

If we're looking to provide others with "deliberate practice", then we need to change the way we're providing that opportunity.

Additional Resources
John Asmussen - Case_notes.py
My 2018 DefCon DFIR CTF write-ups (part 1, part 2)
