Context is king; it makes all the difference. You may see something run in EDR telemetry, or in logs, but the context of when it ran in relation to other activities is often critical. Did it occur immediately following a system reboot or a user login? Does it occur repeatedly? Does it occur on other systems? Did it occur in rapid succession with other commands, indicating that perhaps it was scripted? The how and when of the context then leads to attribution.
Andy Piazza brings the same thoughts to CTI in his article, "CTI is Better Served with Context".
Automation can be a wonderful thing, if you use it, and use it to your advantage. The bad guys do it all the time. Automation means you don't have to remember steps (because you will forget), and it drives consistency and efficiency. Even at the micro-level, at the individual analyst's desktop, automation means that data sources can be parsed, enriched, decorated and presented to the analyst, getting them to analysis faster. Admit it...parsing all of the data sources you have access to, the way you're doing it now, is terribly inefficient and error-prone. Imagine if you could use the system, the CPU, to do that for you, and have pivot points identified when you access the data, rather than having to discover them for yourself. Those pivot points could be based on your own individual experience, but what if they were based on the sum total experience of all analysts on the team, including analysts who were on the team previously but are no longer available?
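The idea of decorating parsed data with team-sourced pivot points can be sketched in a few lines. This is a minimal, illustrative example, not any particular tool's implementation: the process names, the notes, and the CSV layout are all hypothetical stand-ins for whatever telemetry export and team knowledge base an analyst actually has.

```python
import csv
import io

# Hypothetical "team experience" knowledge base: artifacts analysts have
# pivoted on before, mapped to the notes they left behind. In practice this
# would be maintained and grown by the whole team over time.
PIVOT_POINTS = {
    "rundll32.exe": "Often abused for proxy execution; pivot on command line and parent process.",
    "wscript.exe": "Script host; pivot on the script path and the user context.",
}

def enrich(events):
    """Decorate parsed events with known pivot-point notes, so the analyst
    starts at analysis instead of at raw parsing."""
    for ev in events:
        note = PIVOT_POINTS.get(ev["process"].lower())
        if note:
            ev["pivot"] = note
        yield ev

# Simulated telemetry export; a real source would be an EDR or log export.
raw = """timestamp,process,command_line
2023-05-01T10:00:01,rundll32.exe,"rundll32.exe shell32.dll,Control_RunDLL"
2023-05-01T10:00:02,notepad.exe,notepad.exe
"""

events = list(enrich(csv.DictReader(io.StringIO(raw))))
for ev in events:
    print(ev["process"], "->", ev.get("pivot", "no known pivot"))
```

The point is less the lookup itself than where the knowledge lives: in a shared, machine-readable store rather than in any one analyst's head.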
Understand and use the terminology of your industry. All industries have their own common terminology because it facilitates communication, as well as identifying outsiders. Doctors, lawyers, and Marines all have terms and phrases that mean something very specific to the group or "tribe". Learn what those are for your industry or community, understand them, and use them.
Not all things are what we think they are...this is why we need to validate what we think are "findings" but are really assumptions. Take Windows upgrades vs. updates: an analyst may believe that a lack of Windows Event Logs is the result of an upgrade when they (a) do not have Windows Event Log records that extend back to the time in question, and (b) see a great deal of file system and Registry activity associated with an update "near" that time. Most often, the lack of Windows Event Logs is assumed to be the threat actor "covering" their tracks, and this assumption will often persist despite the lack of evidence pointing to such activity (i.e., batch files or EDR telemetry illustrating the necessary command, specific event IDs, etc.). The next step, in the face of specific event IDs being "missing", is to assume that the update caused the "data loss", without recognizing the implication of that statement: that a Windows update would lead to data loss. How often do we see these assumptions, when the real reason for a dearth of Windows Event Logs covering a specific time frame is simply the passage of time?
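That last question is a check an analyst can actually run rather than assume. A minimal sketch of the reasoning, with illustrative timestamps (the function name and values are mine, not from the post): compare the oldest surviving record in the log against the time frame in question. If the oldest record post-dates the incident window, the absence of events is consistent with normal log rollover, and no tampering needs to be invoked to explain it.

```python
from datetime import datetime, timezone

def gap_explained_by_rollover(oldest_record, incident_time):
    """If the oldest surviving record post-dates the incident window, the
    'missing' events are consistent with normal log rollover (the passage
    of time), not necessarily with an actor clearing their tracks."""
    return oldest_record > incident_time

# Illustrative values: the oldest record in an exported Security log vs.
# the time frame under investigation.
oldest = datetime(2023, 6, 1, tzinfo=timezone.utc)
incident = datetime(2023, 3, 15, tzinfo=timezone.utc)

print(gap_explained_by_rollover(oldest, incident))  # True: time alone explains the gap
```

This doesn't prove the actor *didn't* clear the logs; it only shows that the simpler explanation is available, which is exactly the validation step the assumption skips.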
3 comments:
Thanks for your post and sharing your thoughts, Harlan.
Context is king, so true. As always, it's better to have multiple artifacts (an artifact constellation, as you said in previous posts) than just one source. It's like looking for a word in a text and taking only the word out of the text for analysis. It would be clear to everybody that this doesn't help; context around that word is essential. Sometimes in detection engineering or forensic analysis we work only with extracted, context-less information, like a process execution with no context at all. With the use of living-off-the-land techniques, the situation gets more difficult. The first question is always: what's the context? What's the sentence around the word?
You write about it in the context of forensics and analysis, but that applies perfectly to detection engineering and monitoring, and also to threat hunting. Every detection will be more robust having more than one source, criticality can be increased by having multiple alerts at once for the same asset, and for threat hunting, the larger picture that context provides helps a lot.
Furthermore, understanding the context of an attack pattern gives a lot more indicators than looking at only a single source. Looking at various sources results in better intel, better detections, and better monitoring.
The difficult part is having multiple sources and context in the first place, and having ways to correlate them. Here, automation helps to collect the various sources.
Not all things are what we think they are...this is why we need to validate what we think are "findings", but are really assumptions.
Totally agree.
How often do we see these assumptions, when the real reason for a dearth of Windows Event Logs covering a specific time frame is simply the passage of time?
So true! :) Or changes on the disk because of our security tools, or other default OS behavior, ...
Andreas,
Thanks for the comment! I'm glad to see validation with respect to what's seen, as well as what's thought about what's seen...thanks!
Hi, great blog post, and I totally agree with the need for "context". A lot of EDRs out there create tons of alerts, throwing up a smoke screen rather than helping the blue teamers. Context, combined with multiple other factors like baseline and frequency, will help lessen the burden on SOC teams and the burn rate.