Thursday, November 25, 2021
Tuesday, November 23, 2021
Over the years, I've seen DFIR referred to in terms of special operations forces. I've seen incident response teams referred to as "Cyber SEALs", as well as via various other terms. However, when you really look at it, incident response is much more akin to the US Army Special Forces, aka "Green Berets"; you have to "parachute in" to a foreign environment, and quickly develop a response capability making use of the customer's staff ("the natives"), all of whom live in a "foreign culture". As such, IR is less about "direct action" and "hostage rescue", and more about "foreign internal defense".
Analysis occurs when an analyst applies their knowledge and experience to data, and is usually preceded by a parsing phase. We can learn a great deal about an analyst's level of knowledge and experience from what data they collect, how they approach data NOT addressed during the initial collection phase, how they go about parsing the data, and then how they go about presenting their findings.
Analysts need to understand that there is NO SUCH THING as an attack that leaves no traces. You may hear pen testers and fellow DFIR analysts saying this, often with great authority, but the simple fact is that this is NOT THE CASE. EVER. I heard this quite often when I was a member of the ISS X-Force ERS team; at one point, the vulnerability discovery team told me that they'd discovered a vulnerability in Excel that left NO TRACES of execution on the endpoint. Oddly enough, they stopped talking to me altogether when I asked if the user had to open...wait for it...an Excel file. Locard's Exchange Principle tells us that there will be some traces of the activity. Some artifacts in the constellation may be more transient than others, but any "attack" requires the execution of code (processing of instructions through the CPU) on the system, even those that include the use of "native tools" or "LOLBins".
Take what you learn from one engagement, and bake it back into your process. Automate it, so that the process is documented (or self-documenting) and repeatable.
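As a minimal sketch of what "baking it back in" might look like: a pipeline that runs each check in order and logs what was done, so the process is self-documenting and repeatable. The check names and the triage data structure here are hypothetical placeholders for whatever your team actually collects and parses.

```python
from datetime import datetime, timezone

def run_pipeline(steps, source):
    """Run each analysis step in order; the log doubles as documentation."""
    findings, log = [], []
    for name, func in steps:
        result = func(source)
        findings.extend(result)
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
        log.append(f"{stamp} - {name}: {len(result)} finding(s)")
    return findings, log

# Hypothetical checks capturing lessons from prior engagements
def check_run_keys(source):
    # Lesson learned: persistence via executables launched from Temp paths
    return [k for k in source.get("run_keys", []) if "\\temp\\" in k.lower()]

def check_services(source):
    # Lesson learned: services pointing at batch files
    return [s for s in source.get("services", []) if s.get("path", "").endswith(".bat")]

triage = {
    "run_keys": ["C:\\Users\\bob\\AppData\\Local\\Temp\\x.exe"],
    "services": [{"name": "upd", "path": "C:\\Windows\\Temp\\u.bat"}],
}
found, audit = run_pipeline([("run_keys", check_run_keys),
                             ("services", check_services)], triage)
```

Adding a new lesson is then a one-line change (a new step in the list), and the audit log shows exactly what was run against every image, every time.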
Over on Twitter, Chris Sanders has a fascinating thread "discussing" the ethics around the release of OSTs. He has some pretty important points, one being that cybersecurity does not have any licensing body, and as such, no specific code of ethics or conduct to which one must adhere. Section 17 of the thread mentions releasing OSTs being a "good career move"; do we see that on the "blue" side? I'd suggest, "no". Either way, this is a really good read, and definitely something to consider...particularly the source of "data" in the thread. Chris later followed up with this article, titled, "A Socratic Outline for Discussing the OST Release Debate".
Not long ago, I ran across this LinkedIn post on SCCM Software Metering; this can potentially provide insight into program execution on host (where other artifacts, such as AmCache and ShimCache, do not explicitly illustrate program execution). Remember, this should be combined with other validating artifacts, such as Prefetch files, EDR telemetry (if possible), etc., and not considered in isolation. Microsoft has a document available that addresses software metering, and FireEye has some great info available on the topic, and David Pany has a tool available for parsing this information from the OBJECTS.DATA file.
Inversecos had a great Tweet thread on RDP artifacts not long ago, and in that thread, linked to an artifact parsing tool, BMC-Tools. Looking around a bit, I found another one, RdpCacheStitcher. Both (or either) can provide valuable insight into putting together "the story" and validating activity.
I wanted to pull together a compilation of tools I'd been collecting, and just put them out there for inspection. I haven't had the opportunity to really use these tools, but in putting them together into the rather loose list below, I'm hoping that folks will see and use them...
DFIR Orc/Zircolite - LinkedIn message
DFIR Orc - Forensic artifact collection tool
Zircolite - Standalone SIGMA-based tool for EVTX
Chainsaw - tool for rapidly searching Windows Event Logs
Aurora from Nextron Systems (presentation) - SIGMA-based EDR agent
PCAP Parser (DaisyWoman) - HTTP/S queries & responses, VT scans
*This parser is great to use after running bulk_extractor to retrieve a PCAP file from memory, or from other unstructured data
Kaspersky Labs parse_evtx.exe binary
If you want to practice using any of the above tools, or you want to practice using other tools and techniques, and (like me) you're somewhat limited as to what you have access to, here are some resources you might consider exploring...
Digital Forensics Lab & Shared Cyber Forensic Intelligence Repository
CyberDefenders Labs - a fair number of labs available
Cado Security REvil Ransomware Attack Image/Data
Ali Hadi's DataSets - lots of good data sets available
MUS2019 DFIR CTF (via David Cowen)
Monday, November 01, 2021
Context is king; it makes all the difference. You may see something run in EDR telemetry, or in logs, but the context of when it ran in relation to other activities is often critical. Did it occur immediately following a system reboot or a user login? Does it occur repeatedly? Does it occur on other systems? Did it occur in rapid succession with other commands, indicating that perhaps it was scripted? The how and when of the context then leads to attribution.
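The "rapid succession" test above is easy to automate. Here's a minimal sketch (the telemetry format and thresholds are invented for illustration) that flags runs of commands executed within a couple of seconds of each other, which may suggest scripted rather than interactive activity:

```python
from datetime import datetime, timedelta

def find_bursts(events, window_seconds=2, min_count=3):
    """Flag runs of commands executed in rapid succession - a possible
    sign the activity was scripted rather than typed interactively."""
    events = sorted(events, key=lambda e: e[0])
    bursts, current = [], [events[0]]
    for ts, cmd in events[1:]:
        if ts - current[-1][0] <= timedelta(seconds=window_seconds):
            current.append((ts, cmd))
        else:
            if len(current) >= min_count:
                bursts.append(current)
            current = [(ts, cmd)]
    if len(current) >= min_count:
        bursts.append(current)
    return bursts

# Hypothetical EDR telemetry: three recon commands one second apart,
# then an unrelated process fifteen minutes later
telemetry = [
    (datetime(2021, 11, 1, 3, 0, 0), "whoami"),
    (datetime(2021, 11, 1, 3, 0, 1), "net user"),
    (datetime(2021, 11, 1, 3, 0, 2), "ipconfig /all"),
    (datetime(2021, 11, 1, 3, 15, 0), "notepad.exe"),
]
bursts = find_bursts(telemetry)
```

The burst itself isn't a finding; it's a pivot point. The next questions are exactly the ones above: did it follow a login, and does the same burst appear on other systems?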
Automation can be a wonderful thing, if you use it, and use it to your advantage. The bad guys do it all the time. Automation means you don't have to remember steps (because you will forget), and it drives consistency and efficiency. Even at the micro-level, at the level of the individual analyst's desktop, automation means that data sources can be parsed, enriched, decorated and presented to the analyst, getting them to analysis faster. Admit it...parsing all of the data sources you have access to, the way you're doing it now, is terribly inefficient and error-prone. Imagine if you could use the system, the CPU, to do that for you, and have pivot points identified when you access the data, rather than having to discover them for yourself. Those pivot points could be based on your own individual experience, but what if it were based on the sum total experience of all analysts on the team, including analysts who were on the team previously, but are no longer available?
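One way to capture that "sum total experience" is a shared ruleset applied while parsing, so events arrive at the analyst already decorated with pivot points. A minimal sketch, with invented rule names and field names, assuming events are parsed into simple dictionaries:

```python
# Team-maintained pivot-point rules; each encodes one analyst's past
# experience so that every analyst benefits from it going forward.
PIVOT_RULES = [
    ("temp-exec",  lambda e: "\\temp\\" in e.get("image", "").lower()),
    ("lolbin",     lambda e: e.get("image", "").lower().endswith(
                       ("\\certutil.exe", "\\mshta.exe"))),
    ("odd-parent", lambda e: e.get("parent", "").lower().endswith("\\winword.exe")
                       and e.get("image", "").lower().endswith("\\cmd.exe")),
]

def decorate(events):
    """Tag each parsed event with any matching pivot points."""
    for e in events:
        e["pivots"] = [tag for tag, rule in PIVOT_RULES if rule(e)]
    return events

events = decorate([
    {"image": "C:\\Windows\\Temp\\a.exe",
     "parent": "C:\\Windows\\explorer.exe"},
    {"image": "C:\\Windows\\System32\\cmd.exe",
     "parent": "C:\\Program Files\\Microsoft Office\\winword.exe"},
])
```

When a departing analyst's tribal knowledge lives in `PIVOT_RULES` rather than in their head, it doesn't walk out the door with them.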
Understand and use the terminology of your industry. All industries have their own common terminology because it facilitates communication, as well as identifying outsiders. Doctors, lawyers, and Marines all have terms and phrases that mean something very specific to the group or "tribe". Learn what those are for your industry or community, understand them, and use them.
Not all things are what we think they are...this is why we need to validate what we think are "findings", but are really assumptions. Take Windows upgrades vs. updates...an analyst may believe that a lack of Windows Event Logs is the result of an upgrade when they (a) do not have Windows Event Log records that extend back to the time in question, and (b) see that there was a great deal of file system and Registry activity associated with an update, "near" that time. Most often the lack of Windows Event Logs is assumed to be the threat actor "covering" their tracks, and this assumption will often persist despite the lack of evidence pointing to this activity (i.e., batch files or EDR telemetry illustrating the necessary command, specific event IDs, etc.). The next step, in the face of specific event IDs "missing", is to assume that the update caused the "data loss", without realizing the implication of that statement...that a Windows update would lead to data loss. How often do we see these assumptions, when the real reason for a dearth of Windows Event Logs covering a specific time frame is simply the passage of time?
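That last check ("is this just the passage of time?") can be made explicit before anyone reaches for the tampering assumption. A minimal sketch, assuming you've already pulled the oldest and newest surviving record timestamps from the log in question:

```python
from datetime import datetime

def gap_explained_by_rollover(oldest_record, newest_record,
                              window_start, window_end):
    """If the oldest surviving record post-dates the window of interest,
    simple log rollover (the passage of time) explains the 'missing'
    records - no tampering or update-induced data loss need be assumed."""
    if oldest_record > window_end:
        return "window predates surviving records; likely rollover"
    if oldest_record <= window_start and newest_record >= window_end:
        return "window fully covered; records should exist"
    return "partial coverage; validate with other sources"

# Incident window in August; oldest surviving record is from October
verdict = gap_explained_by_rollover(
    oldest_record=datetime(2021, 10, 1),
    newest_record=datetime(2021, 11, 25),
    window_start=datetime(2021, 8, 1),
    window_end=datetime(2021, 8, 15),
)
```

Only in the "records should exist" case is it worth hunting for evidence of clearing (the relevant event IDs, batch files, EDR telemetry); in the other cases, the finding is an assumption until validated.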