Saturday, March 25, 2023

The "Why" Behind Tactics

Very often, we'll see mention in open reporting of a threat actor's tactics, be they "new" or simply what's being observed. While we may consider how our technology stack might be used to detect those tactics, or how we'd respond to an incident where we saw them used, how often do we consider why the tactic was used?

To see the "why", we have to take a peek behind the curtain of detection and response, if you will.

If you so much as dip your toe into "news" within the cyber security arena, you've likely seen mention that Emotet has returned after a brief hiatus [here, here]. New tactics observed with this deployment include the use of an old-style MS Word .doc file as the lure document, which presents a message instructing the user to copy the file to a 'safe' location and reopen it. The lure document itself is in excess of 500MB (padded with zeros), and when its macros are executed, they download a DLL that is similarly zero-padded to over 500MB.
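As an aside, the padding itself is trivial to spot if you can get the file in hand. Here's a minimal Python sketch, with an illustrative file path passed on the command line, that simply reports how much of a file is null bytes:

```python
import sys

# Minimal sketch: report what fraction of a file is null-byte padding.
# Usage: python zero_ratio.py <path>   (path is whatever sample you pulled)
def zero_ratio(path, chunk_size=1024 * 1024):
    total = zeros = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
            zeros += chunk.count(0)   # count of 0x00 bytes in this chunk
    return zeros / total if total else 0.0

if __name__ == "__main__":
    ratio = zero_ratio(sys.argv[1])
    print(f"{sys.argv[1]}: {ratio:.1%} null bytes")
```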

Okay, why was this approach taken? Why pad out two files to such a size, albeit with zeros? 

Well, consider this...SOC analysts are usually on the front line when responding to incident alerts, and they may have a lot of ground to cover while meeting SLAs during their shift, so they aren't going to have a lot of time to invest in investigations. Their first step in dealing with the .doc or the DLL file will be to download it from the endpoint...if they can. That's right...does the technology they're using impose limits on file sizes for download, and if so, what does it take to change that limit? Can the change be made quickly enough that the analyst can simply reissue the download request, or does it require some additional action? If additional action is required, it likely won't be followed up on.

Once they have the file, what are they going to do? Parse it? Not likely. Do they have the tools, and the skills, to parse and analyze old-style/OLE-format .doc files? Maybe. It's easier to just upload the file to an automated analysis framework...if that framework doesn't have a file size limit of its own.
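For those who do want to look at the lure locally rather than rely on an upload, something along these lines would do; this is a minimal sketch, assuming the oletools package is installed, and the file name is illustrative:

```python
# Minimal sketch: pull VBA macros out of an old-style OLE .doc locally,
# sidestepping any upload size limits. Assumes oletools is installed
# (pip install oletools); "lure.doc" is an illustrative file name.
from oletools.olevba import VBA_Parser

def dump_macros(path):
    parser = VBA_Parser(path)
    try:
        if not parser.detect_vba_macros():
            print("No VBA macros found")
            return
        for _, stream_path, vba_filename, vba_code in parser.extract_macros():
            print(f"--- {stream_path} / {vba_filename} ---")
            print(vba_code)
    finally:
        parser.close()

if __name__ == "__main__":
    dump_macros("lure.doc")
```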

Oh, and remember, all of that space full of zeros means the threat actor can change the padding contents (flip a single "0" to a "1") and change the hash without impacting the functionality of the file. So...yeah.
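To make that concrete, here's a small, hypothetical sketch (the file name is illustrative, not taken from the Emotet reporting) showing that changing one byte in the padding produces a completely different hash:

```python
import hashlib

# Hypothetical sketch: flipping a single byte in the zero padding changes the
# file hash without touching the functional portion of the sample.
with open("padded_payload.dll", "rb") as f:
    data = bytearray(f.read())

print("original hash:", hashlib.sha256(data).hexdigest())

# Change the last byte of the trailing padding from 0x00 to 0x01.
data[-1] = 0x01
print("modified hash:", hashlib.sha256(data).hexdigest())
```

Every such change means a hash-based indicator shared yesterday no longer matches today's sample.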

So, what's happening here is that, whether or not it's specifically intended, these tactics are targeting analysts, relying on their lack of experience, and targeting response processes within the security stack. Okay, "targeting" implies intent...let's say "impacting" instead. You have to admit that when you compare these tactics to your security stack, in some cases these are the effects we're seeing; this is what we see happening when we peek behind the curtain.

Consider this report from Sentinel Labs, which mentions the use of the "C:\MS_DATA\" folder by threat actors. Now, consider the approach taken by a SOC analyst who sees this folder for the first time. Given that some SOC analysts are remote, they'll likely turn to Google to learn about the folder, find that it's used by the Microsoft Troubleshooting tool (TSSv2), and at that point perhaps deem it "safe" or "benign". After all, how many SOCs maintain a central, searchable repository of curated, documented intrusion intel? For those that do, how many analysts on those teams turn to that repository first, every time?

How about DFIR consulting teams? How many of them have an automated process for parsing acquired data, and tagging and decorating it based on intrusion intel developed from previous engagements?

In this case, an automated process could parse the MFT and automatically tag the folder with a note for analysts, with tips regarding how to validate the use of TSSv2, and maybe even tag any files found within the folder.
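As a rough illustration of what that might look like, here's a hypothetical sketch that decorates parsed MFT output (assumed here to be a CSV with a "path" column, produced by whatever MFT parser you use) with notes drawn from an intel list; the folder marker and note text are examples, not a standard:

```python
import csv

# Hypothetical sketch: decorate parsed MFT output with analyst notes drawn
# from intrusion intel. Column name, file names, and the intel entry below
# are illustrative.
INTEL_NOTES = {
    "\\ms_data\\": ("Folder used by the Microsoft Troubleshooting tool (TSSv2); "
                    "validate TSSv2 execution before deeming benign, and review "
                    "any files written to this folder."),
}

def tag_records(mft_csv, out_csv):
    with open(mft_csv, newline="") as fin, open(out_csv, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames + ["analyst_note"])
        writer.writeheader()
        for row in reader:
            path = row.get("path", "").lower()
            # Attach the first matching intel note, if any.
            row["analyst_note"] = next(
                (note for marker, note in INTEL_NOTES.items() if marker in path), "")
            writer.writerow(row)

if __name__ == "__main__":
    tag_records("mft_parsed.csv", "mft_tagged.csv")
```

The point isn't the specific code; it's that the intel gets applied automatically, every time, rather than depending on which analyst happens to catch the case.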

When you see tactics listed in open reporting, it's a good idea to consider not just, "Does my security stack detect this?", but also, "What happens if we do?"
