Tuesday, November 23, 2021

Tips for DFIR Analysts, pt. V

Over the years, I've seen DFIR referred to in terms of special operations forces. I've seen incident response teams referred to as "Cyber SEALs", as well as by various other terms. However, when you really look at it, incident response is much more akin to the US Army Special Forces, aka the "Green Berets": you have to "parachute in" to a foreign environment, and quickly develop a response capability making use of the customer's staff ("the natives"), all of whom live in a "foreign culture". As such, IR is less about "direct action" and "hostage rescue", and more about "foreign internal defense".

Analysis occurs when an analyst applies their knowledge and experience to data, and is usually preceded by a parsing phase. We can learn a great deal about an analyst's level of knowledge and experience from what data they collect, how they approach data NOT addressed during the initial collection phase, how they go about parsing the data, and then how they go about presenting their findings.

Analysts need to understand that there is NO SUCH THING as an attack that leaves no traces. You may hear pen testers and fellow DFIR analysts saying this, often with great authority, but the simple fact is that this is NOT THE CASE. EVER. I heard this quite often when I was a member of the ISS X-Force ERS team; at one point, the vulnerability discovery team told me that they'd discovered a vulnerability in Excel that left NO TRACES of execution on the endpoint. Oddly enough, they stopped talking to me altogether when I asked if the user had to open...wait for it...an Excel file. Locard's Exchange Principle tells us that there will be some traces of the activity. Some artifacts in the constellation may be more transient than others, but any "attack" requires the execution of code (the processing of instructions through the CPU) on the system, even those that include the use of "native tools" or "LOLBins".

Take what you learn from one engagement, and bake it back into your process. Automate it, so that the process is documented (or self-documenting) and repeatable.
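As a purely hypothetical illustration of what "baking it back in" might look like (the artifact list, paths, and mount point here are my own assumptions, not from any actual engagement), a lesson learned can be captured as a small, self-documenting check that runs against every future image:

```python
# Hypothetical sketch: turning lessons learned into a documented, repeatable check.
# The artifact list is illustrative only, not an authoritative collection spec.
from pathlib import Path

# Each entry documents WHY it's checked -- i.e., where the lesson came from.
ARTIFACT_CHECKS = {
    "Windows/Prefetch": "program execution (baseline)",
    "Windows/System32/wbem/Repository/OBJECTS.DATA": "WMI repo, incl. SCCM metering (added after a prior case)",
}

def run_checks(mount_root):
    """Report which documented artifact sources are present in a mounted image."""
    for rel_path, reason in ARTIFACT_CHECKS.items():
        target = Path(mount_root) / rel_path
        status = "FOUND" if target.exists() else "missing"
        print(f"[{status:7}] {rel_path}  <- {reason}")

if __name__ == "__main__":
    run_checks("F:/")  # hypothetical mount point for an acquired image
```

Each new engagement adds an entry (and the reason for it), so the process documents itself as it grows.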

Over on Twitter, Chris Sanders has a fascinating thread "discussing" the ethics around the release of OSTs (offensive security tools). He makes some pretty important points, one being that cybersecurity does not have a licensing body, and as such, no specific code of ethics or conduct to which one must adhere. Section 17 of the thread mentions releasing OSTs as being a "good career move"; do we see that on the "blue" side? I'd suggest, "no". Either way, this is a really good read, and definitely something to consider...particularly the source of "data" in the thread. Chris later followed up with this article, titled "A Socratic Outline for Discussing the OST Release Debate".

Tools
Not long ago, I ran across this LinkedIn post on SCCM Software Metering; this can potentially provide insight into program execution on a host (where other artifacts, such as AmCache and ShimCache, do not explicitly illustrate program execution). Remember, this should be combined with other validating artifacts, such as Prefetch files, EDR telemetry (if possible), etc., and not considered in isolation. Microsoft has a document available that addresses software metering, FireEye has some great info available on the topic, and David Pany has a tool available for parsing this information from the OBJECTS.DATA file.
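If you just want to know whether a given OBJECTS.DATA file is even worth running through a full parser, one quick (and admittedly rough) triage is to look for the CCM_RecentlyUsedApps class name in the raw file. This is a minimal sketch of that idea, not David Pany's tool; the marker strings and chunked search are my own assumptions:

```python
# Rough triage sketch (NOT a full OBJECTS.DATA parser): check whether the file
# appears to contain SCCM software metering records before running real tooling.
import sys

MARKERS = [
    b"CCM_RecentlyUsedApps",                     # ASCII form of the class name
    "CCM_RecentlyUsedApps".encode("utf-16-le"),  # UTF-16-LE form
]

def has_metering_data(path, chunk_size=16 * 1024 * 1024):
    """Chunked scan of a (potentially large) file for the metering class name."""
    overlap = 64  # keep a tail so a marker split across chunk boundaries isn't missed
    prev_tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            buf = prev_tail + chunk
            if any(m in buf for m in MARKERS):
                return True
            prev_tail = buf[-overlap:]

if __name__ == "__main__":
    print(has_metering_data(sys.argv[1]))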

Inversecos had a great Tweet thread on RDP artifacts not long ago, and in that thread linked to an artifact parsing tool, BMC-Tools. Looking around a bit, I found another one, RdpCacheStitcher. Either tool can provide valuable insight when putting together "the story" and validating activity.
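Both tools work against the RDP bitmap cache files, which are commonly found under each user profile at AppData\Local\Microsoft\Terminal Server Client\Cache (the *.bmc and Cache*.bin files). As a minimal sketch, assuming a mounted image and that documented default path, here's one way to inventory those files before feeding them to a parser:

```python
# Minimal sketch: locate RDP bitmap cache files in a mounted image so they can
# be fed to a parser such as BMC-Tools. The path is the commonly documented
# default; the mount point is a hypothetical example.
from pathlib import Path

def find_rdp_caches(mount_root):
    for profile in (Path(mount_root) / "Users").glob("*"):
        cache_dir = profile / "AppData/Local/Microsoft/Terminal Server Client/Cache"
        if not cache_dir.is_dir():
            continue
        for f in sorted(cache_dir.iterdir()):
            if f.suffix.lower() in (".bmc", ".bin"):
                print(f"{profile.name}: {f.name} ({f.stat().st_size} bytes)")

if __name__ == "__main__":
    find_rdp_caches("F:/")  # hypothetical mount point
```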

Endpoint Tools
I wanted to pull together a compilation of tools I'd been collecting, and just put them out there for inspection. I haven't had the opportunity to really use these tools yet, but in pulling them into the rather loose list below, I'm hoping that folks will see and use them...

DFIR Orc/Zircolite - LinkedIn message
DFIR Orc - Forensic artifact collection tool
Zircolite - Standalone SIGMA-based tool for EVTX
Chainsaw - Tool for rapidly searching Windows Event Logs
Aurora from Nextron Systems (presentation) - SIGMA-based EDR agent
PCAP Parser (DaisyWoman) - HTTP/S queries & responses, VT scans 
*This parser is great to use after you've run bulk_extractor to carve a PCAP file from memory or other unstructured data (a dpkt-based sketch of that sort of parsing follows this list)
Kaspersky Labs parse_evtx.exe binary (a generic EVTX-parsing sketch in Python also follows this list)
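To be clear, the following is not how any of the tools above are implemented; it's just a generic sketch of the record-level EVTX parsing they build on, using Willi Ballenthin's python-evtx package (pip install python-evtx). The event ID filter is an illustrative assumption:

```python
# Generic EVTX-walking sketch (not any of the tools listed above).
import sys
import Evtx.Evtx as evtx  # pip install python-evtx

def print_matching_records(evtx_path, needle):
    """Walk every record in an .evtx file; print those whose XML contains needle."""
    with evtx.Evtx(evtx_path) as log:
        for record in log.records():
            xml = record.xml()
            if needle in xml:
                print(xml)

if __name__ == "__main__":
    # e.g., ">4624<" roughly matches EventID 4624 (logon) records in Security.evtx
    print_matching_records(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else ">4624<")
```

SIGMA-based tools like Zircolite and Chainsaw essentially layer rule evaluation over exactly this kind of record walk.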
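Along the same lines, and again as a hedged sketch rather than the DaisyWoman parser itself: once bulk_extractor has carved a PCAP from a memory image, pulling HTTP request lines out of it can be as simple as the following, using the dpkt package (pip install dpkt). The per-packet try/except is there because carved captures are often partial or garbled:

```python
# Hedged sketch (not the PCAP parser linked above): list HTTP requests from a
# classic libpcap-format capture, e.g. one carved from memory by bulk_extractor.
import sys
import dpkt  # pip install dpkt

def http_requests(pcap_path):
    with open(pcap_path, "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            try:
                eth = dpkt.ethernet.Ethernet(buf)
                tcp = eth.data.data
                req = dpkt.http.Request(tcp.data)  # raises if payload isn't HTTP
                yield ts, req.method, req.headers.get("host", "?"), req.uri
            except Exception:
                continue  # truncated/garbled packets are common in carved data

if __name__ == "__main__":
    for ts, method, host, uri in http_requests(sys.argv[1]):
        print(f"{ts:.0f} {method} {host}{uri}")
```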

Images/Labs
If you want to practice using any of the above tools, or you want to practice using other tools and techniques, and (like me) you're somewhat limited as to what you have access to, here are some resources you might consider exploring...

Digital Forensics Lab & Shared Cyber Forensic Intelligence Repository
CyberDefenders Labs - a fair number of labs available
Cado Security REvil Ransomware Attack Image/Data
Ali Hadi's DataSets - lots of good data sets available 
MUS2019 DFIR CTF (via David Cowen)
2018 DefCon CTF (here, here)
DigitalCorpora Narcos Scenario (write-up)
CalPoly 2019 DF Downloads
DFIRMadness Stolen Szechuan Sauce (Twitter) (Answers)

2 comments:

Andreas said...

Analysts need to understand that there is NO SUCH THING as an attack that leaves no traces. You may hear pen testers and fellow DFIR analysts saying this, often with great authority, but the simple fact is that this is NOT THE CASE. EVER. […] Locard's Exchange Principle tells us that there will be some traces of the activity. Some artifacts in the constellation may be more transient than others, but any "attack" requires the execution of code […].

Great point! Multiple things come to mind...

Hearing such things says more about an analyst's knowledge and attitude than about the case itself.

When we don't see something, it's wrong to conclude that there is nothing, or that there WAS nothing. It's also a question of timing.

It's more often than not that there WERE traces at the time of the attack, but we lost them, overlooked them, or simply have gaps in log and artifact collection; the location of an artifact changed, tools had errors (e.g., the classic Registry null-byte issue), rootkits hid stuff from us, and we sometimes simply didn't know where to look or where to find evidence. And of course, time eats traces; keyword: log retention, among others. Understanding techniques, and testing them against artifact collection and detections, leads to the required knowledge and tooling.

Also, context again: traces missing in one place leave us open to finding traces in other places. Network (something came in and went out), lateral movement, someone had a goal and objectives on the target, persistence, memory changes, and so on. The crime scene is larger than just one narrow focus area.

Finally, the lack of traces in one case should lead to an improved situation in the next one. Otherwise, it was just an excuse, or it was not worth the effort to improve the situation.

Andreas said...

We also included DFIR Orc in the Forensic Artifact Collection Tool Matrix. A good point is that it is a single binary, used on the target system, which contains everything and doesn't have to write files to disk. One trade-off must be kept in mind: changing the artifact list, paths, and so on requires re-compiling the binary, and the needed build setup must be available to all of the analysts. It's the strength of a one-shot binary vs. the process of changing or adding things to the tool.