Thursday, June 26, 2025

Program Execution, follow-up pt II

On the heels of my previous post on this topic, it occurred to me that this tendency to incorrectly refer to ShimCache and AmCache artifacts as "evidence of execution" strongly indicates that we're also not validating program execution. That is to say, when we "see" a program execution event, or something that indicates that a program may have executed, are we validating that it was successful? Are we looking to determine if it completed its intended task, or are we simply assuming that it did?

For example, let's say we have an alert based on a threat actor running a net user command to add a new user account to an endpoint. When I see this command, I want to check the Security Event Log for Security-Auditing/4720 records at about the same time, indicating that the command succeeded. The command will very likely be accompanied by other Security Event Log records related to the account being enabled, the password being reset, etc.; however, the ../4720 record is what primarily interests me because, sometimes, you'll see a net user command that does not include the /add or /ad switch reported as a "new user being created" when, in fact, the account already exists and the password is simply being changed.
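The check above can be sketched in a few lines of Python. This is a minimal illustration, not a full EVTX parser; it assumes the Security Event Log records have already been parsed into dicts (the `event_id` and `timestamp` field names are my own, not a standard schema):

```python
from datetime import datetime, timedelta

def find_4720_near(events, cmd_time, window_minutes=5):
    """Return Security-Auditing/4720 (user account created) records that
    fall within +/- window_minutes of the observed net user command.
    `events` is assumed to be a list of dicts with 'event_id' (int) and
    'timestamp' (datetime) keys, e.g. produced by an EVTX parser."""
    window = timedelta(minutes=window_minutes)
    return [e for e in events
            if e["event_id"] == 4720
            and abs(e["timestamp"] - cmd_time) <= window]

# Example: a 4720 record 30 seconds after the command validates it
cmd_time = datetime(2025, 6, 26, 14, 0, 0)
events = [
    {"event_id": 4720, "timestamp": datetime(2025, 6, 26, 14, 0, 30)},
    {"event_id": 4724, "timestamp": datetime(2025, 6, 26, 14, 0, 31)},
]
print(len(find_4720_near(events, cmd_time)))
```

If the command ran but no 4720 record appears in the window, that's a cue to dig further rather than report "account created."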

Regardless of what's reported, the point here is: are we validating what we're seeing? Another example is the use of msiexec.exe; when we see this LOLBin run, do we also see accompanying MsiInstaller records in the Application Event Log? I've seen reports of msiexec.exe being run against HTTP resources, stating that something was installed, even though there were no corresponding MsiInstaller records in the Application Event Log.
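The same corroboration can be expressed as a simple presence check. Again, this is a sketch assuming pre-parsed Application Event Log records in dicts; the `provider` and `timestamp` field names are illustrative:

```python
from datetime import datetime, timedelta

def msiexec_corroborated(app_log, launch_time, window_minutes=10):
    """Return True if the Application Event Log contains any MsiInstaller
    records near the observed msiexec.exe launch. An absence of such
    records suggests the reported install may not have actually occurred.
    `app_log` is assumed to be a list of dicts with 'provider' (str) and
    'timestamp' (datetime) keys."""
    window = timedelta(minutes=window_minutes)
    return any(r["provider"] == "MsiInstaller"
               and abs(r["timestamp"] - launch_time) <= window
               for r in app_log)

launch = datetime(2025, 6, 26, 9, 15, 0)
app_log = [{"provider": "ESENT",
            "timestamp": datetime(2025, 6, 26, 9, 14, 0)}]
print(msiexec_corroborated(app_log, launch))
```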

The Application Event Log is also useful for validating program execution when you timeline its records alongside EDR telemetry or process launch (Sysmon, Security-Auditing/4688) records. For example, if you see Application Popup or Windows Error Reporting messages for the program at about the same time as the program execution, this indicates that the program did not launch successfully.
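As a sketch of that timelining step, the snippet below merges records from multiple sources and flags process launches that are quickly followed by a crash or pop-up record. The record layout (`type`, `provider`, `image`) is an assumption for illustration, not the output of any particular tool:

```python
from datetime import datetime, timedelta

CRASH_PROVIDERS = {"Windows Error Reporting", "Application Popup"}

def merge_timeline(*sources):
    """Merge event lists from multiple sources into one timeline,
    sorted by timestamp (the core of any micro-timeline)."""
    return sorted((e for src in sources for e in src),
                  key=lambda e: e["timestamp"])

def failed_launches(timeline, window_seconds=60):
    """Flag process-launch events followed shortly by a crash or pop-up
    record: evidence the program did NOT run successfully."""
    window = timedelta(seconds=window_seconds)
    flagged = []
    for i, e in enumerate(timeline):
        if e.get("type") != "process_launch":
            continue
        for later in timeline[i + 1:]:
            if later["timestamp"] - e["timestamp"] > window:
                break
            if later.get("provider") in CRASH_PROVIDERS:
                flagged.append(e)
                break
    return flagged

edr = [{"type": "process_launch", "image": "bad.exe",
        "timestamp": datetime(2025, 6, 26, 10, 0, 0)}]
app = [{"provider": "Windows Error Reporting",
        "timestamp": datetime(2025, 6, 26, 10, 0, 12)}]
print([e["image"] for e in failed_launches(merge_timeline(edr, app))])
```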

Another similarly valuable resource is AV logs. You may see the program execution attempt, followed by an AV message indicating that the process was detected and quarantined. Or, as has occurred several times, Windows Defender may generate a detection record that is followed not by a successful quarantine message but by a critical failure message, and the malware continues to execute.
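This detection-versus-outcome pairing can be sketched using the commonly documented Defender Operational log event IDs (1116 = malware detected, 1117 = action taken, 1118/1119 = remediation failed). Records are assumed pre-parsed into dicts; the `threat` field name is illustrative:

```python
def unremediated_detections(defender_log):
    """Pair Windows Defender/Operational detections (event ID 1116) with
    their outcome: 1117 = action taken (e.g., quarantined), 1118/1119 =
    remediation failed. A detection followed by a failure, or by nothing
    at all, means the malware may still have executed."""
    outcomes = {}
    for rec in defender_log:
        threat = rec.get("threat")
        if rec["event_id"] == 1116:
            outcomes.setdefault(threat, "no outcome recorded")
        elif rec["event_id"] == 1117:
            outcomes[threat] = "remediated"
        elif rec["event_id"] in (1118, 1119):
            outcomes[threat] = "remediation FAILED"
    # Only return threats that were NOT successfully remediated
    return {t: o for t, o in outcomes.items() if o != "remediated"}

log = [
    {"event_id": 1116, "threat": "Trojan:Win32/Example"},
    {"event_id": 1119, "threat": "Trojan:Win32/Example"},
]
print(unremediated_detections(log))
```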

The great folks over at Cyber Triage posted this guide on Malware WMI Event Consumers; page 6 illustrates the "Classic Detection" techniques. These techniques, along with EDR/Sysmon and the WMI-Activity/Operational Event Log, can be incorporated into a timeline to illustrate not only program execution, but that the execution succeeded and resulted in the outcome the threat actor intended. For example, if you incorporate EDR telemetry into a timeline that includes the Windows Event Logs, you'd look for WMI-Activity/5861 event records to see if a new event consumer had been successfully created.

From there, the next step would be to parse the Objects.DATA file to determine if the event consumer is still visible in the WMI repository. 
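A quick-and-dirty triage version of that step is to scan the OBJECTS.DATA blob for known event consumer class names before reaching for a full WMI repository parser. The sketch below checks for both ASCII and UTF-16-LE encodings of the class names; it only shows that a consumer class name is present in the repository, not its full binding:

```python
def scan_objects_data(data, markers=("CommandLineEventConsumer",
                                     "ActiveScriptEventConsumer")):
    """Triage scan of a WMI repository OBJECTS.DATA blob (bytes) for
    event consumer class names. Strings in the repository may appear in
    ASCII or UTF-16-LE, so check for both encodings. A hit means the
    consumer class is still visible; full parsing needs a dedicated tool."""
    hits = []
    for m in markers:
        for needle in (m.encode("ascii"), m.encode("utf-16-le")):
            if needle in data:
                hits.append(m)
                break
    return hits

# Synthetic example: a buffer containing a UTF-16-LE consumer class name
blob = b"\x00\x01" + "CommandLineEventConsumer".encode("utf-16-le") + b"\x00"
print(scan_objects_data(blob))
```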

Summary
Continuing to see artifacts such as ShimCache and AmCache referred to in the community as "evidence of execution" really showed me that, overall, we're too focused on finding the one thing that illustrates that something happened. While it's important to have a correct, accurate understanding of the nature of various individual artifacts, as a community we need to start placing that understanding within a system framework, recognizing that each data source plays an important role within the system as a whole. Nothing happens in isolation; whenever something happens on a live system, impressions and tool marks are going to be left in a variety of data sources. Some may be extremely transient, existing in memory for only a very short time, while others may be written to logs or to the Registry and persist well beyond the removal of the "offending" application.

But, I get it. It's easy to simply state that something happened, and hope that no one questions your statement. It's much harder to make a statement supported by data, because doing so isn't something we're familiar with; it's not something we've been doing for years at this point. It's not part of our process, nor is it part of our culture. But remember...everything is difficult, sometimes even after the first time we do it. Climbing a rope in gym class was hard, until you first did it. It may even have been hard the second or third time, but eventually you realized you could do it.

Validation of your findings is important, because when you complete the ticket or the report you're writing, and send it off to your "customer", someone may be making a decision and allocating resources based on those findings. My previous blog post provides one example of how I've experienced the need to validate findings during my time in the industry. Whether you see it right now or not, at some point, someone's very likely going to take your findings and make a decision based on what you've provided, and you want to be as sure as you can that those findings are correct, and supported by the data.
