I've been reading a bit lately on social media about how cyber security is "hard" and "expensive", and about how threat actors are becoming "increasingly sophisticated".
The thing is, going back more than 20 years, in fact going back to 1997, when I left military active duty and transitioned to the private sector, I've seen something entirely different.
On 7 Feb 2022, Microsoft announced their plans to change how the Windows platform (OS and applications) handled macros in Office files downloaded from the Internet; they were planning to block them, by default. Okay, so why is that? Well, it turns out that weaponized Office docs (Word documents, Excel spreadsheets, etc.) were popular methods for gaining access to systems.
As it turns out, even after all of the discussion and activity around this one, single topic, weaponized documents are still in use today. In fact, March 2023 saw the return of Emotet, delivered via an older-style MS Word .doc file in excess of 500MB in size. This demonstrates that even with documented incidents and available protections, these attacks continue to work, because the necessary steps to help protect organizations are never taken. In addition to using macros in old-style MS Word documents, the actors behind the new Emotet campaigns are also including instructions to the recipient for...essentially...bypassing those protection mechanisms.
Following the Feb 2022 announcement from Microsoft, we saw some threat actors shift to using disk image files to deploy their malware, due in large part to the apparent dearth of security measures (at the time) to protect organizations from such attacks. For example, a BumbleBee campaign was observed using IMG files to help spread malware.
MS later updated Windows to ensure "mark-of-the-web" (MotW) propagation to files embedded within disk image files downloaded from the Internet, so that protection mechanisms were available for some file types, and that at least warnings would be generated for others.
We then saw a shift to the use of weaponized MS OneNote files, as apparently these files weren't considered "MS Office files" (wait...what??).
So, in the face of this constant shifting in and evolution of tactics, what are organizations to do to address these issues and protect themselves?
Well, the solution for the issue of weaponized Office documents existed well prior to the Microsoft announcement in Feb 2022; in fact, MS was simply implementing it where orgs weren't doing so. And the thing is, the solution was absolutely free. Yep. Free, as in "beer". A GPO, or a simple Registry modification. That's it.
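As a sketch, assuming Office 2016/2019/365 (the "16.0" version key) and Word, the Registry modification behind that GPO might look like the following .reg fragment; the value name is Microsoft's documented "block macros from the Internet" policy, and would be repeated under each application's Security key (Excel, PowerPoint, etc.):

```
Windows Registry Editor Version 5.00

; Block VBA macros in Word files downloaded from the Internet.
; "16.0" covers Office 2016/2019/365; repeat for Excel, PowerPoint, etc.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\16.0\Word\Security]
"blockcontentexecutionfrominternet"=dword:00000001
```

Deploy via GPO in a domain, or import the .reg file (or use reg.exe) on standalone systems.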
The issue with disk image files is that when a user double-clicks one, it's automatically mounted and the contents made accessible to the user. The fix for this...disabling the automatic mounting of image files when the user double-clicks them...is similarly free. With two simple Registry modifications, users are prevented from automatically mounting four file types: ISO, IMG, VHD, and VHDX. However, this does not prevent these files from being accessed programmatically, such as via a legitimate business process; all it does is prevent the files from being automatically mounted via double-clicking.
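For reference, a commonly documented form of those two modifications is deleting the "mount" verb from the two ProgIDs that cover these extensions (ISO and IMG resolve to Windows.IsoFile, VHD and VHDX to Windows.VhdFile); a .reg sketch (an alternative approach is to add a "ProgrammaticAccessOnly" value under each verb rather than deleting it):

```
Windows Registry Editor Version 5.00

; Remove the "mount" verb so double-clicking no longer auto-mounts
; ISO/IMG (Windows.IsoFile) and VHD/VHDX (Windows.VhdFile) images.
; Programmatic mounting (e.g., PowerShell's Mount-DiskImage) still works.
[-HKEY_CLASSES_ROOT\Windows.IsoFile\shell\mount]
[-HKEY_CLASSES_ROOT\Windows.VhdFile\shell\mount]
```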
And did I mention that it's free?
What about OneNote files? Yeah, what about them?
My point is that we very often say, "...security is too expensive..." and "...threat actors are increasing in sophistication...", but even with changes in tactics, is either statement really true? As an incident responder, over the years, I've seen the boots-on-the-ground details of attacks, and a great many of them could have been prevented or at the very least significantly hampered had a few simple, free modifications been made to the infrastructure.
The Huntress team recently posted an article that includes PowerShell code you can copy-paste and use immediately, and that will address all three of the situations/conditions discussed in this blog post.
"And what is called 'sophisticated' says maybe more about the speaker than the actual attack."
For me, a truly sophisticated attack would be one that was profound in how remarkably simple and complete it was. For example, create a Run key value, but disable it...most analysts won't "get" that it was disabled, and since the industry, for the most part, does not pursue validation, this is a pretty simple red herring. Grab something off of VT, something that is attributed to another TA group, drop it on disk, create and then "disable" persistence.
Then go about your real task. Enable RDP, set StickyKeys, hide the user account from the Welcome Screen.
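From the analysis side, every one of those artifacts is a plain Registry modification. A hedged sketch of what that might look like, using the commonly documented paths (the value name "Updater" and account name "svc_backup" are hypothetical placeholders):

```
Windows Registry Editor Version 5.00

; "Disabled" Run key value: the value still sits under the Run key, but
; Explorer's StartupApproved key marks it disabled (first byte 03)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\StartupApproved\Run]
"Updater"=hex:03,00,00,00,00,00,00,00,00,00,00,00

; Enable RDP
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
"fDenyTSConnections"=dword:00000000

; StickyKeys "backdoor": IFEO debugger hijack of sethc.exe
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe]
"Debugger"="cmd.exe"

; Hide an account from the Welcome Screen
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"svc_backup"=dword:00000000
```

Every one of these is trivially cheap for the attacker, and each is a single value an analyst could check...if they know to look.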
Want to have some real fun? Don't do these all on the same system.
Most often, "sophisticated" is really the 800 lb gorilla. No one wants the world to know that they got beat up by a 12 yr old girl in a wheelchair, so they say, "...it was an 800 lb gorilla...". After all, they said it, so it *must* be true.
When you look at the details of most incidents, what do we see? User double-clicks an ISO file attachment, something they've NEVER seen before, and then launches the malware inside the disk image file. Or, RDP is left open with an easily-guessed password...or maybe not so easily guessed, but no one was watching the logs as that system was subject to brute force attacks over weeks and months.
A lot of basic security measures are "hard" and "expensive" because we're not doing them.
> a truly sophisticated attack
True, true. And "sophisticated" need not mean sophisticated in technical versatility, but in just not being seen by the defenders...is it because of decoys, is it because you're acting outside of the focus of the security teams? It doesn't matter if it was highly technical or not; if attackers achieve their goal and stay hidden, it was sophisticated enough. And if it only needs basic tooling, fine. BEC after phishing: not very sophisticated, but highly effective.
> A lot of basic security measures are "hard" and "expensive" because we're not doing them.
And you know that only AFTER the hack, which makes it always surprising for all. Unfortunately, the price one pays for missing the prevention mechanisms is only visible after the fact. It's the same for physical health, mental health, and the health of the planet. Until then, we can dream, hope, and make our lives easier by not doing the homework.
After all, every hack of one of the participants in the system makes the overall security of the others a little bit better. Anti-fragility at best (some thoughts about that here). One company is fragile; the overall system is more robust and hopefully learns.
So now, what do we do about that? I think the issue is that articles on some platforms call everything "new" and "sophisticated" and "never seen before" just to make mainstream journalism out of it. Fewer people in my community talk about attacks that way. As for the rest: (1) train security analysts to keep the overall picture in mind, and to know the tools and the tricks used by more advanced groups, and (2) write good articles, as you did, to alert people and make them aware.
> most analysts won't "get" that it was disabled
How should analysts in typical SOC/DFIR teams know that? And the other 20 sneaky things?
It's not available in EDR logs, it's not available in this or that fancy UI. And knowing the tools which cover such a technique is also a time-consuming task. Who has the time to study relevant tools? How many teams have forensic experts who check for null bytes when some tools fail to parse the Registry files? Checking for COM hijacking? Checking for malicious eBPF filters on Linux servers? Knowing that malicious eBPF software exists at all? There are so many things to keep in mind during analysis that people obviously focus on the most common things, like the well-known video with the basketball players and the gorilla walking by. And when do you know that you can stop, because you did enough for that specific case? It's not surprising at all that we miss such sneaky changes. If such a technique gets more common, it will eventually find its way into the regular analysis procedures of all.
I could imagine that more algorithm-based support in analysis (not parsing, but analysis), as is getting more common in medicine too, would help reduce such blind spots in the more hidden corners.
Your write-ups help a lot, and I've shared many posts with my peers; it helps to focus on education and the analyst's way of thinking, as Chris Sanders did in his dissertation.
> ...is it because of decoys, is it because you're acting outside of the focus of the security teams?
It's not any *one* thing...it's a determination made based on the totality of data. Sometimes, it's much more "sophisticated" to use an LOLBin rather than bring your own tool, one that is likely to leave an impact on the system. There's also situational awareness...I've seen some threat actors remove (or attempt to) AV when their commands are blocked (had no effect whatsoever), and I've seen threat actors first look to see what's installed, and then check the infrastructure to see if that EDR agent is installed on other systems. In one instance, the threat actor immediately moved to a system where Falcon was *not* installed.
Something else to consider is clean up...does the threat actor not bother at all, do they "clean up all the things", or do they log in and launch a script that clears only the last 4 min of the Security Event Log? Or, do they simply modify the logging via auditpol?
So, sophistication is based on a number of factors. I've been on engagements where a fellow analyst stated that the threat actor was "not sophisticated at all" because the commands they ran were noisy and left a lot of tracks on the system. I had to remind the analyst that we were there, responding, 8 months *after* the threat actor had departed.
> And you know that only AFTER the hack which makes it always surprising for all.
Yes, and no.
We know it AFTER the hack, *for that incident*, but we know it well ahead of time from all of the other incidents we've responded to.
Case in point...7 Feb 2022, MS announced that they were going to change the default behavior of systems, and block macros in Office docs downloaded from the Internet. Even today, this attack *still* works, including using old-style, OLE-based .doc files...because some orgs have not made the necessary changes.
So, we've known about this (and other measures) as a protection mechanism for a long time, but (like patching), we just don't do it.
>> most analysts won't "get" that it was disabled
> It's not available in EDR logs...
It very likely would be, as it can be easily set up via reg.exe, and that telemetry would exist for most EDR tools. Those that monitor Registry mods might not include it because the telemetry may be filtered based on sheer volume.
> ...knowing the tools which cover such a technique is also a time consuming task...
Not really. It is time-consuming because we choose to not pursue it or understand it.
But I do get what you're saying...that an arbitrary organization sitting out there, with just its current staff, isn't necessarily going to be able to keep up with everything...yeah, I get that.
> If such a technique gets more common, it will eventually find its way into the regular analysis procedures of all.
Or, we can just change the overall process, and start including it in tools now. Checking for null-bytes at the beginning of key and value names in the Registry? Done. Checking for RLO characters in Registry key and value names? Done. Checking the sizes of Registry values, based on type, so you can find "large" outliers? Done.
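As a sketch of how cheap these checks are once they're baked into the process, here's a minimal, hypothetical Python version operating over already-parsed (name, type, size) records; a real tool would pull these from a hive parser, and the 20KB "large value" threshold is an arbitrary assumption:

```python
# Sketch: flag Registry key/value names and value sizes that merit a
# closer look. Input records are hypothetical (name, value_type, data_size)
# tuples, as they might come from any Registry hive parser.

RLO = "\u202e"  # Right-to-Left Override character


def flag_name(name: str) -> list[str]:
    """Return reasons a key/value name looks suspicious."""
    reasons = []
    if "\x00" in name:
        reasons.append("null byte in name")
    if RLO in name:
        reasons.append("RLO character in name")
    return reasons


def flag_size(value_type: str, data_size: int, limit: int = 20480) -> list[str]:
    """Flag 'large' values for a given type; payloads are often stashed in big blobs."""
    if value_type in ("REG_SZ", "REG_BINARY") and data_size > limit:
        return [f"large {value_type} value ({data_size} bytes)"]
    return []


def scan(records):
    """Run every check over every record; return (name, reasons) findings."""
    findings = []
    for name, vtype, size in records:
        reasons = flag_name(name) + flag_size(vtype, size)
        if reasons:
            findings.append((name, reasons))
    return findings
```

Once a check like this is in the pipeline, it runs on *every* engagement, whether or not the analyst remembers to look.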
I wrote Events Ripper (https://github.com/keydet89/Events-Ripper), in part, to preserve those things that I only saw once...such as the "Windows Defender/2051" event...so that they would be searched for on *every* engagement, and I didn't have to remember to look for it...and so that other analysts using the tool didn't have to have the same experience, and do the same research that I did.