Friday, May 26, 2023

Events Ripper Updates

I updated an Events Ripper plugin recently, and added two new ones...I tend to do this when I see something new, so that I don't have to remember to run a command, check a box on a checklist, or take some other step. If I have to do any of these things, I'm not going to remember them, so instead, I just create a plugin, drop it into the "plugins" folder, and it gets run every time, for every investigation. What's really cool is that I can re-run Events Ripper after I add additional Windows Event Log files to the mix, or after creating a new plugin (or updating a current one); most often, it's just hitting the up-arrow while in the command prompt, and boom, it's done.

Here's a look at the updates:

- I added some filtering capabilities to this plugin, so that known-good URLs (MS, Google, Chrome, etc.) don't clutter the output with noise. There is a lot of legitimate use of BITS on a Windows system, so this log file is likely going to be full of things that aren't a concern for the analyst, and are simply noise obscuring the signal. I'm sure I'll be updating this again as I see more things that need to be filtered out.

- I wanted a means for collecting PowerShell scripts from event ID 600 records in the Windows PowerShell Event Log, so I wrote this plugin. As with other plugins, this will provide pivot points into the timeline, helping to more easily elevate potentially malicious activity to the analyst, leveraging automation to facilitate analysis.
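The URL filtering idea can be sketched in a few lines. This is a minimal Python illustration of the approach (the actual plugin is Perl), and the known-good domain list here is purely illustrative, not the plugin's actual list:

```python
# Sketch: suppress BITS job URLs that match an allowlist of known-good
# domains, so routine update traffic doesn't drown out the signal.
from urllib.parse import urlparse

# Illustrative allowlist only; the real plugin's filter list differs.
KNOWN_GOOD = {"microsoft.com", "windowsupdate.com", "google.com", "gvt1.com"}

def is_known_good(url: str) -> bool:
    """True if the URL's host is (a subdomain of) an allowlisted domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in KNOWN_GOOD)

def filter_urls(urls):
    """Return only the URLs worth showing the analyst."""
    return [u for u in urls if not is_known_good(u)]
```

Anything that survives the filter is a candidate pivot point rather than noise.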

Similar to the plugin, I took steps to reduce the volume of information presented to the analyst. Specifically, I've seen on several investigations that there are a LOT of PowerShell scripts run as part of normal operations. So, while the plugin will collect all scripts, it only displays those that appear 5 or fewer times in the log; that is, it shows those that appear least frequently.

Something I really like about this plugin is the use of data structures (via Perl) to manipulate the data, and how they lead to the data being presented. Anyone who's looked at the data in the Windows Powershell.evtx log file knows that for each script, there are something like 6 records, each with a lot of information that's not entirely useful to a DFIR or SOC analyst. Also, there are infrastructures that use a LOT of PowerShell for normal IT ops, so the question becomes, how do we reduce the data that the analyst needs to wade through, particularly those analysts that are less experienced? Well, the approach I took was to first collect unique instances of all of the scripts, along with the times at which they were seen. Then, the plugin only displays those that appeared 5 or fewer times in the log (this value is configurable by anyone using the plugin; just change the value of "$cap" on line 42). By displaying each script alongside the time stamp, it's easy for an analyst to quickly 'see' those scripts that were run least frequently, to 'see' what the scripts are doing, and to correlate data with other sources, validating their findings.
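The collect-then-filter logic can be sketched as follows; this is a minimal Python illustration (the plugin itself is Perl, and CAP here mirrors its "$cap" value):

```python
# Sketch: collect unique scripts with the times they ran, then show only
# those seen CAP times or fewer -- least-frequency-of-occurrence analysis.
from collections import defaultdict

CAP = 5  # analogous to the plugin's $cap; adjust to taste

def least_frequent_scripts(events):
    """events: iterable of (timestamp, script_text) pairs, e.g. parsed from
    event ID 600 records. Returns {script: [timestamps]} for scripts that
    appeared CAP times or fewer."""
    seen = defaultdict(list)            # script -> list of timestamps
    for ts, script in events:
        seen[script].append(ts)
    return {s: times for s, times in seen.items() if len(times) <= CAP}
```

Displaying each surviving script alongside its timestamps gives the analyst both the "what" and the "when" in one view.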

So far, this technique has proven effective; during a recent investigation into two disparate endpoints that exhibited similar malicious activity, we were able to correlate the PowerShell scripts with RMM access logs to validate the specific RMM tool as the means of access. By viewing the scripts listed in reverse order (soonest first) based on time stamps, we were able to not only correlate the activity from the threat actor's batch file with the logins, but also demonstrate that, on one endpoint, the batch file did not complete. That is, several lines from the batch file, evidenced as PowerShell scripts extracted from the WEVTX log file, did not appear in the data on one endpoint. This demonstrated that the attack did not complete, which is why we detected the first endpoint in our telemetry, but not the second one (because the attack didn't succeed there).

As a side note, on the first endpoint, the timeline demonstrated that the coin miner crashed not long after it was started. Interestingly enough, that's what led to the next plugin (below) being created. ;-)

For those who are interested and want to do their own testing, line 42 of the plugin (open it in Notepad++, it won't bite...) lists the value "$cap", which is set to "5". Change that value, and re-run the plugin against your events file; do this as often as you like.

I will say that I'm particularly proud/fond of this plugin because of the use of data structures; a Perl "hash of hashes" to get a list of unique scripts, transitioning to a Perl "hash of arrays" to facilitate least-frequency-of-occurrence analysis and display to the analyst. Not bad for someone who has no formal training in computer science or data structures! It reminds me of the time I heard Martin Roesch, the creator of the Snort IDS, talk about how he failed his CS data structures course!

- Apparently, nssm.exe logs to the Application Event Log, which helps to identify activity and correlate Windows Event Log records with EDR telemetry. Now, a review of EDR telemetry indicates that nssm.exe has legitimate usage, and is apparently included in a number of packages, so look at the output of the plugin as potential pivot points, not as an indication that something is bad just because it's there.

And, hey, I'm not the only one who finds value in Events Ripper...Dray used it recently, and I didn't even have to pay him (this time)!!

Tuesday, May 16, 2023

Composite Objects and Constellations

Okay, to start off, if you haven't seen Joe Slowik's RSA 2022 presentation, you should stop now and go watch it. Joe does a great job of explaining and demonstrating why IOCs are truly composite objects, that there's much more to an IP address than just the address itself. When we start thinking in these terms, in terms of context, the IOCs we see and share can become much more actionable.

Why does any of this matter? Once, in a DFIR consulting firm far, far away, our team was working PCI forensics investigations, and Visa was sending us monthly lists of IOCs that we had to search for during every case. We'd get three lists: one of file names, one of file paths, and one of hashes. There was no correlation between the various lists, nothing like, "...a file with this name and this hash existing in this folder...". Not at all. Just three lists, without context. Not entirely helpful for us, and any hits we found could be similarly lacking in meaning or context..."hey, yeah, we found that this folder existed on one system...", but nothing beyond that was asked for, nor required. The point is that an IOC is often more than just what we see at face value...a file has a hash, a time frame during which it existed on the system (or was seen on other systems), functionality associated with the file (if it's an executable file), etc. Similarly, an IP address is more than just four dot-separated octets...there's the time frame it was associated with an endpoint, the context with respect to how it was associated with the endpoint (was it the source IP for a login...and if so, what type...or for lateral movement, was it a C2 address, was it the source of an HTTP request), etc.

Symantec recently posted an article regarding a group identified as "LanceFly", and their use of the Merdoor backdoor. In the article, table 1 references different legitimate applications used in DLL sideloading to load the backdoor; while the application names are listed, a couple of potentially important items are missing. For example, what folders were used? For each of the legitimate applications used, were they or other products from the vendor already in use in the environment (i.e., what is the prior prevalence)? Further, there's no mention of persistence, nor of how the legitimate application is launched in order to load the Merdoor backdoor. Prior to table 1, the article states that the backdoor itself persists as a Windows service, but there's no mention of how the legit application is launched once it and the sideloaded DLL are placed on the system.

This is something I've wondered about when I've seen previous cases involving DLL sideloading...specifically, how did the threat actor decide which legitimate application to use? I've seen cases, going back to 2013, where legit apps from McAfee, Kaspersky, and Symantec were used by threat actors to sideload their malicious DLLs, but when I asked the analyst working the case if the customer used that application within their environment, most often they had no idea.

Why does this matter? If the application used is new to the environment, and the EDR tool used within the infrastructure includes the capability, then you can alert on new applications, those that have never been 'seen' in the environment. Does your organization rely on Windows Defender? Okay, so if you see a Symantec or Kaspersky application for the first time...or rather, your EDR 'sees' it...then this might be something to alert on.

Viewing indicators as composite objects, and as part of constellations, allows us to look to those indicators as something a bit more actionable than just an IP address or a file name/hash. Viewing indicators as composite objects helps add context to what we're seeing, as well as better communicate that context to others. Viewing indicators as one element in a constellation allows us to validate what we're seeing. 

The Windows Registry

When it comes to analyzing and understanding the Windows Registry, where do we go, as an industry, to get the information we need?

Why does this even matter?

Well, an understanding of the Registry can provide insight into the target (admin, malicious insider, cyber criminal, nation-state threat actor) by what they do, what they don't do, and how they go about doing it.

The Registry can be used to control a great deal of functionality and access on endpoints, going beyond just persistence. Various keys and values within the Registry can determine what we can see or not see, and what we can do or not do.

For example, let's say a threat actor enables RDP on an endpoint...this is something we see quite often. This could even be a Windows 10 or Windows 11 laptop; that is, it doesn't just have to be a server. When they enable it, do they also create a user account, add it to a group that has remote access, and then hide the new user account from the Welcome Screen? Do they enable Sticky Keys? Regardless of the various settings that they enable or disable, how do they go about doing so? Manually, or via a batch file or script of some kind?

The settings enabled or disabled, and the manner employed, can tell you something about the target. Are they prepared? Was it likely that they'd conducted some recon and developed some situational awareness of the environment, or as we see with many RaaS offerings, was it more of a "spray-and-pray" approach? If they used sc.exe (or some other means) to disable services, was that list specific and unique to the environment, or was it more of a "wish list" where many of the listed services didn't even exist on the endpoint?
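The "wish list" question lends itself to a simple check. Here's a Python sketch (the function name and inputs are my own, purely for illustration) that compares the services named in a threat actor's kill script against those actually installed on the endpoint:

```python
# Sketch: how much of the actor's service "kill list" actually exists on
# this endpoint? A high miss ratio suggests a generic, spray-and-pray list
# rather than one tailored from recon of the environment.
def wish_list_ratio(targeted, installed):
    """targeted: service names from the batch file/script;
    installed: service names present on the endpoint.
    Returns the fraction of targeted services that don't exist here."""
    targeted = {s.lower() for s in targeted}
    installed = {s.lower() for s in installed}
    missing = targeted - installed
    return len(missing) / len(targeted) if targeted else 0.0
```

A ratio near 1.0 means almost none of the listed services exist on the endpoint, which says something about the actor's preparation.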

Something that's been seen recently is the LogonType value being created, often as part of a batch file. This is interesting because the value itself appears to apply to Windows XP systems, but it's been seen being created on Windows 10 endpoints, as well as server variants of Windows. The order of the modifications, the timing between the modifications, and the position of the LogonType value within the list of modifications has been consistent across multiple endpoints, owned by unrelated customers. All of this, combined with the fact that the LogonType value apparently has no impact on the endpoints to which it was deployed, indicates that the "threat actor" is deploying this script of settings modifications without consideration for how "noisy" or unique it is.  

Okay, so let's consider persistence mechanisms, some of which can be a bit esoteric. For example, @0gtweet shared an interesting technique on 13 Dec 2022, and John Hammond shared a video of the technique on 12 May 2023. Now, if you take a really close look at it, this really isn't a "persistence" technique in the traditional sense...because in order to activate it, the threat actor has to have access to the system, or have some other means to run the "query" command. Maybe this could be used as a chained persistence technique; that is, what "persists" is the use of the "query" command, such as in an LNK file in the user's StartUp folder, or in another autoruns/persistence location, so that the "query" command is run, which in turn runs the command created through the technique described by @0gtweet.

So, consider this...a threat actor compromises an admin account on an endpoint, and modifies the Registry so that the user account's Startup folder is no longer the traditional "Startup" folder (i.e., "%userprofile%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup"), but is something like "Temp". Then, they modify the "query" key with a value that launches their malware, or a downloader for their malware, and then drop an LNK file to run the new "query" entry in the new "Startup" location whenever that admin user logs in.

Now, here's something to think about...set this up and run it in a test environment, and see what the process lineage looks like, and try to figure out from that lineage what happened (i.e., working backwards). Pretty cool, eh?

Speaking of persistence, what about false flags? Say, the threat actor drops some "malware" on an endpoint, and adds a value to the Run key, but disables it. The SOC or DFIR analyst sees the Run key value being set and figures, "ah, gotcha!", and not knowing about other values within the Registry, doesn't understand that the value has been disabled. Just as with a military inspection, you leave something for the inspector to find so that they're satisfied and move on; in this case, the analyst may decide that they've sussed out the bad guy, delete the Run key value and referenced file, and move on...all while the threat actor's real persistence mechanism is still in place.
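For reference, Explorer tracks the enabled/disabled state of Run entries in the binary data of values under the "StartupApproved\Run" key. As I understand the format (treat this as an assumption to verify against your own test data), an even first byte means the entry is enabled, while 0x03 marks it as disabled, with the disable time packed into the remaining bytes:

```python
# Sketch: interpret a StartupApproved\Run binary value to decide whether
# the corresponding Run key entry is actually enabled. The first-byte
# convention (even = enabled, 0x03 = disabled) is an assumption based on
# my reading of the artifact; validate it before relying on it.
def run_entry_enabled(data: bytes) -> bool:
    if not data:
        raise ValueError("empty StartupApproved value")
    return data[0] % 2 == 0   # even first byte -> enabled
```

An analyst who checks only the Run key, and not this companion data, can be led to believe a disabled "decoy" entry is the persistence mechanism.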

The point is, there's a good bit within the Registry that controls what access and capabilities the operating system and, to some extent, applications provide, and understanding that helps us understand a bit about the target we're interested in, whether they be a cyber criminal, threat actor, or malicious insider. 

Friday, May 05, 2023

Events Ripper Updates

As you may know, I'm a pretty big proponent for documenting things that we "see" or find during investigations, and then baking those things back into the parsing and decoration process, as a means of automating and retaining corporate knowledge. This means that something I see once can be added to the parsing, decoration, and enrichment process, so that I never have to remember to look for it again. Things I've seen before can be raised up through the "noise" and brought to my attention, along with any references or necessary context. This makes subsequent investigations more efficient, and gets me to where I'm actually doing analysis much sooner.

One of the ways I do this is by creating simple plugins for Events Ripper, a proof-of-concept tool for "mining" Windows Event Log data for pivot points that can be applied to analysis, and in particular timeline analysis. Events Ripper uses the events file, the intermediate step in normalizing Windows Event Log events into a timeline, extracting pivot points and allowing me to build the picture of what happened, and when, a great deal faster than doing so manually.

The recently created or updated plugins include: 
- Check for "Microsoft-Windows-Security-Auditing/4797" events, indicating that a user account was checked for a blank password. I'd never seen these events before, but they popped up during a recent investigation, and helped to identify the threat actor's activity, as well as validate the compromised account they were using.
- "Microsoft-Windows-Security-Auditing/5156" and /5158 events; this plugin's output is similar to what we see with ShimCache parsers, in that it lists the applications for which the Windows Filtering Platform allows connections, or allows to bind to a local port, respectively. Similar to "Service Control Manager" events illustrating a new service being installed, this plugin may show quite a few legitimate applications, but it's much easier to go through that list and see a few suspicious or malicious applications than it is to manually scroll through the timeline. Searching the timeline for those applications can really help focus the investigation on specific timeframes of activity.
- Windows Defender event IDs 1116, 1117, 2051, and 5007, all in a single plugin, allowing us to look for detections and modifications to Windows Defender. Some modifications to Windows Defender may be legitimate, but in recent investigations, exclusions added to Windows Defender have provided insight into the compromised user account, as well as the folders the threat actor used for staging their tools.
- Source "MsiInstaller", with event IDs 11707 (successful product installation), 11724, and 1034 (both successful product removal).
- Combined several event IDs (7000, 7009, 7024, 7040, and 7045), all with "Service Control Manager" as the source, into a single plugin. This plugin is not so much the result of recent investigations as it is the desire to optimize validation; a service being created or installed doesn't mean that it successfully runs each time the system is restarted.
- Combined "Application Hang/1002", "Application Error/1000", and "Windows Error Reporting/1001" events into a single plugin, very often allowing us to see the threat actor's malware failing to function.
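As an example of the kind of parsing involved, Defender event ID 5007 records embed the changed registry value in the message text. This Python sketch (the message format and regex are my approximation, and the actual plugin is Perl) pulls out any exclusions that were added:

```python
# Sketch: extract Defender exclusion changes from 5007-style message text.
# The message layout here is an approximation of what the records contain;
# adjust the pattern against real data before relying on it.
import re

EXCLUSION_RE = re.compile(r"Exclusions\\(Paths|Extensions|Processes)\\([^=]+?)\s*=")

def find_exclusions(messages):
    """messages: iterable of 5007 event message strings.
    Returns (exclusion_type, excluded_item) tuples."""
    hits = []
    for msg in messages:
        m = EXCLUSION_RE.search(msg)
        if m:
            hits.append((m.group(1), m.group(2)))
    return hits
```

Excluded paths are exactly the kind of pivot point that leads straight to a threat actor's staging folder.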

Each of the new or updated plugins is the result of something observed or learned during recent investigations, and allows me to find unusual or malicious events to use as pivot points in my analysis.

We can do the same things with RegRipper plugins, Yara or Sigma rules, etc. It simply depends upon your framework and medium.