As a follow-on to my earlier blog post, I've seen a few more posts and comments regarding EDR 'bypass' and blinding/avoiding EDR tools, and to be honest, my earlier post stands. However, I wanted to add some additional thoughts...for example, when considering EDR, consider the technology, product, and service in light of not just the threat landscape, but also the other telemetry you have available.
This uberAgent article was very interesting, in particular the following statement:
“DLL sideloading attack is the most successful attack as most EDRs fail to detect, let alone block it.”
The simple fact is, EDR wasn't designed to detect DLL side loading, so this is tantamount to saying, "hey, I just purchased this brand new car, and it doesn't fly, nor does it drive underwater...".
Joe Stocker mentions on Twitter that the Windows Defender executable file, MpCmdRun.exe, can be used for malicious DLL side loading. He's absolutely right. I've seen EXEs from other AV tools...Kaspersky, McAfee, Symantec...used for DLL side loading during targeted threat actor attacks. This is nothing new...this tactic has been used going back over a decade or more.
When it comes to process telemetry, most EDR tools start by collecting information about process creation, and many will also get information about the DLLs or "modules" that are loaded. Many will additionally collect telemetry about network connections and Registry keys/values accessed during the lifetime of the process, but that's going a bit far afield and off topic for us.
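To make that a bit more concrete, here's a minimal sketch (in Python) of the kind of process-creation telemetry record described above; the field names are purely illustrative and don't reflect any particular EDR product's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessEvent:
    """Illustrative process-creation telemetry record (not any vendor's schema)."""
    pid: int
    image_path: str            # full path to the EXE that was launched
    command_line: str
    parent_pid: int
    parent_image_path: str
    loaded_modules: List[str] = field(default_factory=list)        # DLLs/"modules" loaded
    network_connections: List[str] = field(default_factory=list)   # observed during the process lifetime
    registry_keys: List[str] = field(default_factory=list)         # keys/values accessed
```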
There are a number of ways to detect issues with the executable file image being launched. For example, we can take a hash of the EXE on disk and compare it to known good and known bad lists. We can check to see if the file is signed. We can check to see if the EXE contains file version information, and if so, compare the image file name to the embedded original file name.
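As a rough sketch of a couple of those checks in Python: the hash lists are hypothetical placeholders (in practice they'd come from threat intel feeds and a vendor catalog), the Authenticode signature check is omitted for brevity, and extracting the embedded original file name is assumed to happen elsewhere, e.g., with a PE parsing library such as pefile.

```python
import hashlib
from pathlib import Path
from typing import List, Optional, Set

# Hypothetical lookup tables; populate from threat intel / vendor catalogs.
KNOWN_GOOD_HASHES: Set[str] = set()
KNOWN_BAD_HASHES: Set[str] = set()

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def check_image(path: str, embedded_original_name: Optional[str]) -> List[str]:
    """Return observations about the EXE image being launched."""
    findings = []
    digest = sha256_of(path)
    if digest in KNOWN_BAD_HASHES:
        findings.append("hash on known-bad list")
    elif digest not in KNOWN_GOOD_HASHES:
        findings.append("hash not on known-good list")
    # Compare the on-disk file name to the OriginalFilename value from the
    # version resource (extraction via a PE parser is assumed, not shown here).
    if embedded_original_name and Path(path).name.lower() != embedded_original_name.lower():
        findings.append("image file name differs from embedded original file name")
    return findings
```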
Further, many EDR frameworks allow us to check the prevalence of executables within the environment; how often has this EXE been seen in the environment? Is this the first time the EXE has been seen?
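A toy prevalence tracker might look like the following; a real EDR backend would keep these counts across the entire environment rather than in one script's memory, so treat this purely as an illustration.

```python
from collections import Counter

prevalence = Counter()   # keyed on file hash

def is_first_seen(file_hash: str) -> bool:
    """True if this hash has never been observed in the environment before."""
    return prevalence[file_hash] == 0

def record_execution(file_hash: str) -> int:
    """Record an execution and return how many times this hash has now been seen."""
    prevalence[file_hash] += 1
    return prevalence[file_hash]
```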
However, something we cannot do, because it's too 'expensive', is to maintain a known good list of all application EXE files, their associated DLLs, as well as their hashes and locations, and then compare what we're seeing being launched to that database. This is why we need to have other measures in place as part of a defense-in-depth posture. Joe goes on to mention what he'd like to see in an EDR tool, so the question is, is there an available framework that allows this condition to be easily identified, so that it can be acted upon?
DLL side loading is not new. Over a decade ago, we were seeing legitimate EXEs from known vendors being dropped on systems, often in the ProgramData folder, with the "malicious" DLL dropped in the same folder. However, it's been difficult to detect because the EXE file that is launched to initiate the process is a well-known, legitimate EXE, while one of the DLLs it loads, which is often found in the same folder as the EXE, is in fact malicious. Telemetry might pick up the fact that the process, when launched, had some suspicious network activity associated with it, perhaps even network activity that we've never seen before, but a stronger indicator of something "bad" would be the persistence mechanism employed. Hey, why is this signed, known good EXE, found in a folder where it's not normally located (i.e., the ProgramData folder), being launched from a Run key, or from a Scheduled Task that was just created a couple of minutes ago?
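Pulling those indicators together, here's a hedged sketch of what such a detection rule might look like; the directory list, the time threshold, and the shape of the event dictionary are all assumptions for illustration, not any product's actual rule logic.

```python
from datetime import datetime, timedelta
from typing import Any, Dict

# Directories a signed, known-good vendor EXE would not normally run from.
UNUSUAL_DIRS = ("\\programdata\\", "\\users\\public\\", "\\appdata\\local\\temp\\")

def sideload_candidate(event: Dict[str, Any]) -> bool:
    """Flag a signed EXE launched from an odd location via freshly created persistence."""
    path = event["image_path"].lower()
    odd_location = any(d in path for d in UNUSUAL_DIRS)
    via_persistence = event.get("launched_from") in ("run_key", "scheduled_task")
    created = event.get("persistence_created")
    fresh = created is not None and datetime.utcnow() - created < timedelta(minutes=30)
    return event.get("signed", False) and odd_location and via_persistence and fresh
```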
The point is, while EDR frameworks may not directly identify DLL side loading, as described in my previous blog post, we can look for other phases of the attack cycle to help us identify such things. We may not directly detect the malicious DLL being "side loaded", but a good bit of activity is required for a threat actor to get to the point of DLL side loading...gaining access to the system or infrastructure, creating files on the system, establishing persistence, etc. We can detect all of these activities to help us identify potential DLL side loading, or simply inhibit or even obviate further phases of the attack.
Kevin had an interesting tweet recently about another EDR 'bypass', this one involving the use of Windows Terminal (wt.exe) instead of the usual command prompt (cmd.exe). I've been interested in the use of the Windows Subsystem for Linux (wsl.exe) for some time, and would be interested to see how pervasive it is in various environments. However, the point is, if you're able to monitor new processes being created via cmd.exe, are you also able to do the same with other shells, such as PowerShell, wt.exe, or wsl.exe?
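One way to think about that parity is to key detections on a set of shell parents rather than cmd.exe alone; the list below, and the parent-name approach itself, are illustrative assumptions (Windows Terminal, for instance, hosts shells rather than acting as one).

```python
# Shells (or shell hosts) whose child processes we want the same visibility into.
SHELL_PARENTS = {"cmd.exe", "powershell.exe", "pwsh.exe", "wt.exe", "wsl.exe"}

def spawned_from_shell(parent_image_path: str) -> bool:
    """True if a newly created process was launched by one of the monitored shells."""
    name = parent_image_path.replace("\\", "/").rsplit("/", 1)[-1].lower()
    return name in SHELL_PARENTS
```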
Finally, something that I've seen for years in the industry is that it doesn't matter how many alerts are generated by an EDR framework if no one is listening, or if the alerts are misinterpreted and not acted upon. In this blog post on history repeating itself, I shared an example of what it looked like, well before the advent of EDR, when no one was listening for alerts. Not long after that event, we saw in the industry what happened when firewall (or other device) logs were generated but no one monitored them. This is something that we've seen consistently over the past 30+ years.
I was once on an engagement where the threat actor had accessed a Linux-based infrastructure and, from there, been able to reach the corporate Windows network; two disparate infrastructures that were not supposed to be connected at all were actually a flat network. The threat actor collected data in an unprotected archive and copied (not "moved") it to the Linux infrastructure. We had both copies of the archive, as well as corresponding netflow that demonstrated the transfer. One of the other analysts in the room offered their insight that this threat actor was not "sophisticated" at all, and was, in fact, rather sloppy with respect to their opsec (they'd also left all of their staged tools on the systems).
I had to remind the team that we were there several months after the threat actor had taken the data and left. They'd completed what they had wanted to do, completely unchallenged, and we were there looking at their footprints.
I've seen other incidents where an external SOC or threat hunting team had sent repeated email notifications to a customer, but no action was taken. It turned out that when the contract was set up, one person was designated to receive the email notifications, but they'd since left the organization. In more than a few instances, alert emails went to a communal inbox and the person who had monitored that inbox had since left the company, or was on vacation, or was too overwhelmed with other duties to keep up with the emails. If there is no plan for what happens when an alert is generated and received, it really doesn't matter what technology you're using.