Not long ago, I ran across this LinkedIn post on analyzing a ransomware executable, which led to this
HexaStrike post. The HexaStrike post covers analyzing an AI-generated ransomware variant, which (to be honest) is not something I'm normally interested in; however, in this case, the blog contained the following statement that caught my interest:

"People often ask: “Why analyze ransomware? It’s destructive; by the time analysis happens, it’s too late”. That’s only half true. Analysis matters because sometimes samples exploit bugs to spread or escalate (think WannaCry/EternalBlue), they often ship persistence or exfiltration tricks that translate into detection rules, custom crypto occasionally ships with fixable flaws allowing recovering from ransomware, infrastructure and dev breadcrumbs surface through pathnames and URLs, and, being honest, it’s fun."
If you've been following me for any length of time, here on this blog or on LinkedIn, you'll know that "dev breadcrumbs" are something that I'm very, VERY interested in. I tend to refer to them as "toolmarks", but "dev breadcrumbs" works just as well.
Something else...in my experience, some malware RE write-ups are devoid of the types of things mentioned in the above quote, particularly anything that "translates into detection rules". I know some are going to think, "yeah, but like the quote also says, by the time we see this stuff executing, it's too late...", but that isn't always the case. For example, if you're able to write a detection rule that says, "...when we see an [un]signed process act as the parent for the following processes in quick succession, kill the parent process, log out the session owner, isolate the endpoint, and generate an alert...", then this sort of thing can be very valuable.
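To illustrate that kind of logic (this is just a rough sketch, not any particular EDR's rule syntax), here's what the detection might look like in Python; the event fields, the "signed" flag, and both thresholds are assumptions made for the sake of the example:

from collections import defaultdict, deque

# Hypothetical process-creation event, loosely modeled on Sysmon-style telemetry:
# {"timestamp": float, "parent_pid": int, "parent_signed": bool, "image": str}

WINDOW_SECONDS = 10    # what counts as "quick succession" (assumed threshold)
CHILD_THRESHOLD = 5    # how many children trip the rule (assumed threshold)

def detect_suspicious_parents(events):
    """Yield the PID of any unsigned parent that spawns several children quickly."""
    child_times = defaultdict(deque)   # parent_pid -> recent child launch times
    for ev in events:
        if ev["parent_signed"]:
            continue                   # this sketch only watches unsigned parents
        q = child_times[ev["parent_pid"]]
        q.append(ev["timestamp"])
        while q and ev["timestamp"] - q[0] > WINDOW_SECONDS:
            q.popleft()                # drop launches outside the sliding window
        if len(q) >= CHILD_THRESHOLD:
            # This is where a real rule would kill the parent, log out the
            # session owner, isolate the endpoint, and generate an alert.
            q.clear()                  # reset so we don't re-alert on every event
            yield ev["parent_pid"]

Feed it an ordered stream of process-creation events, and it flags the parent PID once the threshold is crossed; the response actions are where the real value is.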
Also, specific to ransomware, if a flaw is found in the encryption process, then this may enable recovery without paying the ransom. For example, if the encryption process looks for a specific file or some other indicator, then that indicator can act as a "vaccine" of sorts; simply create it (say, an empty file) on the endpoint, and if the ransomware is launched against that endpoint, it will find the indicator (file) and, based on the encryption logic, not encrypt files on the endpoint.
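To make the "vaccine" idea a bit more concrete, here's a minimal, benign sketch in Python of the sort of guard logic an analyst might recover from a sample, alongside the defender-side step of planting the marker; the marker filename and location are purely hypothetical:

from pathlib import Path

# Hypothetical marker file; the name and path are assumptions for illustration.
VACCINE_MARKER = Path("C:/ProgramData/encryption.marker")

def should_encrypt() -> bool:
    # The kind of guard recovered during RE: if the marker exists,
    # the sample's encryption routine is skipped entirely.
    return not VACCINE_MARKER.exists()

def deploy_vaccine() -> None:
    # Defender side: pre-create the (empty) marker so the guard trips.
    VACCINE_MARKER.parent.mkdir(parents=True, exist_ok=True)
    VACCINE_MARKER.touch(exist_ok=True)

Of course, this only works for as long as the sample keeps checking for that particular indicator, which is exactly why the analysis matters.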
This is not a new idea, to be sure. Back in 2016, Kevin Strickland authored a blog post titled, "The Continuing Evolution of Samas Ransomware", showing how the ransomware executable changed over time, providing insight not only into the threat actors' thought processes and the evolution of their tactics, but also into detection opportunities.

2 comments:
From a threat intel perspective, I wonder if "AI slop" malware is uniquely valuable due to AI's propensity to overshare. Elements of infrastructure, filenames, paths, etc. that experienced malware developers take care to remove might be more common in AI-generated samples. What do you think?
I don't write malware, so I can't really comment in that regard.
However, what I will say is that similar "oversharing" has already been available for years in LNK files, and that hasn't been leveraged or exploited. If it does happen with "AI slop" malware, then that would be new.