Thursday, March 21, 2024

A Look At Threat Intel Through The Lens Of Kimsuky

Rapid7 recently shared a fascinating post regarding the Kimsuky threat actor group making changes in their playbooks, specifically in their apparent shift to the use of .chm/"compiled HTML Help" files. In the post, the team does a great job of sharing not only likely reasons why there might be a shift to this file format, but also which organizations have been previously targeted by the threat actor group, and why they believe that this is a shift in TTPs, rather than a separate group altogether.

Specifically with respect to this threat actor group, if you fall into one of the previously targeted organizations, you'd definitely want to be concerned about the group itself, as well as its change in tactics.

Even if you're not in one of the targeted organizations, there's still value in a blog post such as this; for example, are you able to detect .chm files being sent via email, even if they're embedded in archives? Is this something you even want to do?

How can you protect yourself? Well, the first thing to look at is your attack surface...is there any legitimate business reason for you or your employees to access .chm files? If not, change the default file association from hh.exe to something else, like Notepad. If you want to take it a step further, create a text document with a message along the lines of "...you've tried to open a .chm file, please contact an administrator...", and change the default file association to have Notepad open that file. Heck, you can even create a PowerShell script that grabs the name of the .chm file, as well as other information (file path, system name, user name, time stamp), and emails it to an administrator, and have that script run instead of actually opening the .chm file. Something like this not only prevents the attack altogether, but also provides insight into the prevalence of this type of attack. This may be important to other organizations not targeted by this specific group, as this group is not the only one to rely on .chm files (see here, also). In fact, the folks from TrustWave shared their findings regarding .chm files over six years ago.
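As a rough sketch of that last idea, something along these lines could be wired up as the handler for .chm files. The script path, mail relay, and mailbox names below are placeholders, and the Send-MailMessage parameters (as well as the .chm ProgID used in the file association) would need to be verified and adjusted for your environment:

# Log-ChmOpen.ps1 -- hypothetical handler registered in place of hh.exe for .chm files,
# e.g. via something like:
#   ftype chm.file="powershell.exe" -ExecutionPolicy Bypass -File "C:\Tools\Log-ChmOpen.ps1" "%1"
# (verify the ProgID for .chm on your systems before changing it)
param(
    [Parameter(Mandatory = $true)]
    [string]$ChmPath    # the file the user attempted to open, passed as %1
)

# Collect the details called out above: file name/path, system name, user name, time stamp.
$details = [PSCustomObject]@{
    File    = $ChmPath
    Host    = $env:COMPUTERNAME
    User    = $env:USERNAME
    TimeUtc = (Get-Date).ToUniversalTime().ToString('o')
}

# Placeholder mail relay and addresses -- replace with your own.
Send-MailMessage -SmtpServer 'mail.example.internal' `
    -From 'chm-alerts@example.internal' `
    -To 'soc@example.internal' `
    -Subject "CHM file opened on $($details.Host)" `
    -Body ($details | Format-List | Out-String)

# Tell the user what happened, instead of opening the file.
Add-Type -AssemblyName System.Windows.Forms
[void][System.Windows.Forms.MessageBox]::Show(
    "You tried to open a .chm file ($ChmPath). Please contact an administrator.",
    'Blocked file type')

The point isn't the specific script; it's that the .chm never gets handed to hh.exe, and the attempt gets recorded somewhere an administrator will actually see it.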

This is not terribly different from similar measures laid out by Huntress not long ago, in that you can use native Windows functionality (which is free) to enable protective measures that make sense for your organization.

One thing to be aware of, though, is from the section of the blog post that addresses persistence, illustrated in the below image:

[Image: excerpt from the Rapid7 post discussing Run key persistence]

The Run key in the HKCU path does not ensure that the program runs at system startup; rather, as stated in the following sentence of the post, it runs when the user logs in.

What I would do in an investigation is correlate the Run key LastWrite time with the contents of the Microsoft-Windows-Shell-Core%4Operational Event Log, allowing me to validate when the value was actually written to the key. I would then use this information to pivot back into the investigative timeline in order to determine how the value ended up being created in the first place.
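On a live system (or against collected .evtx files), that correlation can start with something as simple as the query below. The event IDs are based on what I've seen on Windows 10, where 9707/9708 record the start and finish of Run/RunOnce command execution, so verify them against the Windows version at hand:

# Pull Run/RunOnce execution records from the Shell-Core Operational log.
# Event IDs 9707 (started execution of command) and 9708 (finished execution)
# are assumptions based on Windows 10 systems I've looked at -- verify locally.
# For an exported .evtx, add  Path = 'C:\case\shell-core.evtx'  to the hashtable.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Shell-Core/Operational'
    Id      = 9707, 9708
} |
    Select-Object TimeCreated, Id, Message |
    Sort-Object TimeCreated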

Reading through the Rapid7 post, as well as other posts regarding similar uses of .chm files, indicates that we could have other information available to serve as pivot points and to validate attack timing, such as Windows services or scheduled tasks.

File Metadata
Something else the Rapid7 post does a good job of presenting/discussing is the .chm file format, and tools you can use to access it without executing any of the embedded code and infecting yourself. There's information in the blog post regarding not just tools, but also the binary structure of the file format itself. This can all be used to enhance DFIR information regarding an attack, which should then feed threat intel, and provide additional insight to detect and respond to such attacks.
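For quick triage, you don't necessarily need purpose-built tooling; 7-Zip, for example, will treat a .chm as an archive and let you pull the embedded HTML, scripts, and objects out as files without ever touching the HTML Help subsystem. A rough sketch, assuming 7-Zip at its default install path and a hypothetical sample file name:

# Extract the contents of a suspect .chm without invoking hh.exe.
& 'C:\Program Files\7-Zip\7z.exe' x '.\suspicious.chm' "-ochm_extracted"

# Inventory what came out, with time stamps, for the case notes.
Get-ChildItem -Recurse '.\chm_extracted' |
    Select-Object FullName, Length, LastWriteTimeUtc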

Also, given what can be embedded in a .chm file, there are other possibilities for metadata and time stamps, as well. 

On the topic of file metadata, the Rapid7 blog post makes reference to the threat actor group's prior use of LNK files as a delivery mechanism, describing several scenarios during which the use of LNK files was observed. I think it would be fascinating to view the LNK metadata across their use; after all, others have done so to great effect. 
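For a first pass across a collection of LNK files, the fields exposed through the WScript.Shell COM object (target, arguments, working directory, icon location) are easy to pull; the richer metadata that's been used to great effect elsewhere, such as volume serial numbers, machine IDs, and MAC addresses from the tracker data block, lives in the binary structure and needs a dedicated parser. A sketch, with a hypothetical folder of collected samples:

# Surface-level LNK metadata via the WScript.Shell COM object.
# Note: this reads the shortcuts' properties; it does not launch their targets.
$shell = New-Object -ComObject WScript.Shell
Get-ChildItem 'C:\cases\lnk_samples\*.lnk' | ForEach-Object {
    $lnk = $shell.CreateShortcut($_.FullName)
    [PSCustomObject]@{
        File             = $_.Name
        TargetPath       = $lnk.TargetPath
        Arguments        = $lnk.Arguments
        WorkingDirectory = $lnk.WorkingDirectory
        IconLocation     = $lnk.IconLocation
    }
}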

Conclusion
There's a lot of great information in the Rapid7 blog post, and I applaud and greatly appreciate the efforts by the authors, not only in performing the research, but also in publishing their findings. However, in the end, this is a good deal of threat information, and it's up to the individual reader to determine how to apply it to their environment.

For me, this is what I like about posts like this, and why I appreciate them. Put all the cards...or almost all of them...on the table, and let me determine how best to utilize or exploit that information within my own infrastructure or processes. A lot of times, this is what's best, and we shouldn't consider it to be "threat intelligence".

Additional Info
For those interested, here's some additional information about the .chm file format that may be useful in writing tools to parse the binary structure of the file format.

Threat Actors Dropping Multiple Ransomware Variants

I ran across an interesting LinkedIn post recently, "interesting" in the sense that it addressed something I hadn't seen a great deal of reporting on; that is, ransomware threat actors dropping multiple RaaS variants within a single compromised organization.

Now, I have heard of impacted orgs being hit multiple times, over the course of weeks, months, or even years. But what I hadn't heard/seen a great deal of was a single organization being compromised by a single threat actor, and that threat actor/affiliate dropping multiple RaaS variants.

Here's the original post from Anastasia that caught my attention. Anastasia's post shares some speculation as to motivations for this approach, which kind of illustrates how this particular topic (motivations) is poorly understood. In item #1 on her list, I think what I'd be most interested in starting with is a better understanding as to how the findings were arrived at; that is, what were the data points that led to the finding that a single affiliate was working with two different RaaS providers simultaneously. As someone who is very interested in the specifics of how threat actors go about their activities (the specifics as to how, not just the what), I have seen systems that were apparently compromised by two different threat actors simultaneously. I've also been involved in providing analysis for incidents where we were able to identify members of a threat group changing shifts, kind of like Fred Flintstone sliding down the back of a brontosaurus.

From there, you can see in the comments that Valery responds with some very helpful insight and direction, referring to the topic as "cross-claims". One of the links he provides is a LinkedIn post from Alex that provides some interesting references to how he (Alex) was able to determine that the same threat actor was deploying both Trigona and BlackCat within the same impacted organization. Within the comments to Alex's post, Valery shared an interesting X/Twitter thread, as well.

I should note that the Huntress team has seen both Trigona and BlackCat affiliates in action, albeit not within the same infrastructure, at the same time.

Like I said, I hadn't seen a great deal of open reporting on this particular topic, and it does sound like an interesting tactic, although I'm not entirely sure that I understand the point. I'm sure that it adds some complexity to the claims process, for those who have cyber insurance policies.

Friday, March 15, 2024

Uptycs Cybersecurity Standup

I was listening to a couple of fascinating interviews on the Uptycs Cybersecurity Standup podcast recently, and I have to tell you, there were some pretty insightful comments from the speakers.

The first one I listened to was Becky Gaylord talking about her career transition from an investigative journalist into cybersecurity.

Check out Becky's interview, and be sure to check out the show notes, as well.

I also listened to Quinn Varcoe's interview, talking about Quinn's journey from zero experience in cybersecurity to owning and running her own consulting firm, Blueberry Security.

Check out Quinn's interview, and the show notes.

More recently, I listened to Olivia Rose's interview. Olivia and I crossed paths years ago at ISS, and she has now hung out her own shingle as a virtual CISO (vCISO). I joined ISS in Feb 2006, about 6 months before their purchase by IBM, which was announced in August 2006. Olivia and I met at the IBM ISS sales kick-off in Atlanta early in 2007.

All of these interviews are extremely insightful; each speaker brings something unique with them from their background and experiences, and every single one of them has a very different "up-bringing" in the industry.

There's no one interview that stands out as more valuable than the others. Instead, my recommendation is to listen to them all; in fact, do so several times. Take notes, and pay attention to what they say.

Thursday, March 14, 2024

Investigative Scenario, 2024-03-12

Investigative Scenario
Chris Sanders posted another investigative scenario on Tues, 12 Mar, and this one, I thought, was interesting (see the image to the right).

First off, you can find the scenario posted on X/Twitter, and here on LinkedIn.

Now, let's go ahead and kick this off. In this scenario, a threat actor remotely wiped a laptop, and the sole source of evidence we have available is a backup of "the Windows Registry", made just prior to the system being wiped.

Goals
I try to make sure I have the investigative goals written out where I can see them and quickly refer back to them. 

Per the scenario, our goals are to determine:
1. How did the threat actor access the system?
2. What were their actions on objectives, prior to wiping the system?

Investigation
The first thing I'd do is create a timeline from the Software and System hive files, in order to establish a pivot point. Per the scenario, the Registry was backed up "just before the attacker wiped the system". Therefore, by creating a timeline, we can assume that the last entry in the timeline was from just prior to the system being wiped. This would give us a starting point to work backward from, and provide an "aiming stake" for our investigation.
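As a rough sketch of what that looks like in practice (tools like RegRipper's regtime plugin are the usual route; here, the hive file paths and mount points are hypothetical, and the P/Invoke is needed because PowerShell's registry provider doesn't expose key LastWrite times directly):

# Mount the backed-up hives under temporary keys (run elevated; paths are placeholders).
reg load HKLM\CaseSystem   'C:\case\registry\SYSTEM'   | Out-Null
reg load HKLM\CaseSoftware 'C:\case\registry\SOFTWARE' | Out-Null

# RegQueryInfoKey is needed to get a key's LastWrite time from PowerShell.
Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;
public static class RegTime {
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, EntryPoint = "RegQueryInfoKeyW")]
    public static extern int RegQueryInfoKey(
        SafeRegistryHandle hKey, IntPtr lpClass, IntPtr lpcchClass, IntPtr lpReserved,
        IntPtr lpcSubKeys, IntPtr lpcbMaxSubKeyLen, IntPtr lpcbMaxClassLen,
        IntPtr lpcValues, IntPtr lpcbMaxValueNameLen, IntPtr lpcbMaxValueLen,
        IntPtr lpcbSecurityDescriptor, out long lpftLastWriteTime);
}
"@

function Get-KeyTimeline {
    param([Microsoft.Win32.RegistryKey]$Key)
    $ft = 0L
    if ([RegTime]::RegQueryInfoKey($Key.Handle, [IntPtr]::Zero, [IntPtr]::Zero, [IntPtr]::Zero,
            [IntPtr]::Zero, [IntPtr]::Zero, [IntPtr]::Zero, [IntPtr]::Zero,
            [IntPtr]::Zero, [IntPtr]::Zero, [IntPtr]::Zero, [ref]$ft) -eq 0) {
        [PSCustomObject]@{ LastWrite = [DateTime]::FromFileTimeUtc($ft); Key = $Key.Name }
    }
    foreach ($name in $Key.GetSubKeyNames()) {
        try { $sub = $Key.OpenSubKey($name) } catch { continue }
        if ($sub) { Get-KeyTimeline -Key $sub; $sub.Close() }
    }
}

# Build the timeline and look at the most recent LastWrite times -- the last entries
# should fall just prior to the wipe, giving us the "aiming stake" to work back from.
$timeline = foreach ($mount in 'HKLM:\CaseSystem', 'HKLM:\CaseSoftware') {
    Get-KeyTimeline -Key (Get-Item $mount)
}
$timeline | Sort-Object LastWrite | Select-Object -Last 50

# Release handles before unloading, or reg.exe will report the hive as in use.
[gc]::Collect()
reg unload HKLM\CaseSystem   | Out-Null
reg unload HKLM\CaseSoftware | Out-Null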

The next thing I'd do is examine the NTUSER.DAT files for any indication of "proof of life" up to that point. What I'm looking for here is to determine the how of the access; specifically, was the laptop accessed via a means that provided shell- or GUI-based access? 

If I did find "proof of life", I'd definitely check the SAM hive to see if the account is local (not a domain account), and if so, try to see if I could get last login time info, as well as any indication that the account password was changed, etc. However, keep in mind that the SAM hive is limited to local accounts only, and does not provide information about domain accounts.

Depending upon the version/build of Windows (that info was not available in the scenario), I might check the contents of the BAM subkeys, for some indication of process execution or "proof of life" during the time frame of interest.
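For reference, on a live Windows 10 system (1809 and later, where the data moved under bam\State), a quick look at BAM might be as simple as the sketch below; against the backed-up SYSTEM hive in this scenario, you'd point the path at the mount point and use ControlSet001 (or whichever control set is current) in place of CurrentControlSet. The layout is an assumption based on published research, with the first eight bytes of each value holding a FILETIME for the last execution:

# Enumerate BAM entries: per-SID subkeys, with one value per executable path.
$bamRoot = 'HKLM:\SYSTEM\CurrentControlSet\Services\bam\State\UserSettings'
Get-ChildItem $bamRoot | ForEach-Object {
    $sid = Split-Path $_.Name -Leaf
    foreach ($valueName in $_.GetValueNames()) {
        $data = $_.GetValue($valueName)
        # Skip the DWORD housekeeping values (Version, SequenceNumber).
        if ($data -is [byte[]] -and $data.Length -ge 8) {
            [PSCustomObject]@{
                SID        = $sid
                Executable = $valueName
                LastRun    = [DateTime]::FromFileTimeUtc([BitConverter]::ToInt64($data, 0))
            }
        }
    }
} | Sort-Object LastRun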

If there are indications of "proof of life" from a user profile, and it's corroborated with the contents of the BAM subkeys, I'd definitely take a look at the profile, and create a timeline of activity.

What we're looking for at this point is:
1. Shell-/GUI-based access, via RDP or an RMM
2. Network-/CLI-based access, such as via ssh, Meterpreter, user creds/PSExec or some variant, or a RAT

Shell-based access tends to provide us with a slew of artifacts to examine, such as RecentApps, RecentDocs, UserAssist, shellbags, WordWheelQuery, etc., all of which we can use to develop insight into a threat actor, via not just their activity, but the timing thereof, as well.
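As one concrete example of what those artifacts yield, here's a sketch of decoding UserAssist entries from the user's backed-up NTUSER.DAT, mounted at a hypothetical HKLM:\CaseUser. The value names are ROT13-encoded, and on Windows 7 and later the 72-byte value data is generally described as carrying a run count at offset 4 and a last-execution FILETIME at offset 60; those offsets are per published research, so verify them for the build at hand:

# ROT13 helper for UserAssist value names.
function ConvertFrom-Rot13 ([string]$s) {
    -join ($s.ToCharArray() | ForEach-Object {
        if     ($_ -cmatch '[A-Ma-m]') { [char]([int]$_ + 13) }
        elseif ($_ -cmatch '[N-Zn-z]') { [char]([int]$_ - 13) }
        else   { $_ }
    })
}

# Walk each UserAssist {GUID}\Count key and decode the entries.
Get-Item 'HKLM:\CaseUser\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\*\Count' |
    ForEach-Object {
        foreach ($name in $_.GetValueNames()) {
            $data = $_.GetValue($name)
            if ($data -is [byte[]] -and $data.Length -ge 68) {
                [PSCustomObject]@{
                    Program  = ConvertFrom-Rot13 $name
                    RunCount = [BitConverter]::ToInt32($data, 4)
                    LastRun  = [DateTime]::FromFileTimeUtc([BitConverter]::ToInt64($data, 60))
                }
            }
        }
    } | Sort-Object LastRun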

If there are indications of shell-based access, we can check the Registry to determine if RDP was enabled, or if there were RMM tools installed, but without Windows Event Logs and other logs, we won't know definitively which means was used to access the laptop. Contrary to what some analysts seem to believe, the TSClients subkeys within the NTUSER.DAT hive do not show systems that have connected to the endpoint, but rather which systems were connected to from the endpoint.
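Both of those checks can be run against the mounted backed-up hives (again, hypothetical mount points, and ControlSet001 in place of CurrentControlSet); fDenyTSConnections set to 0 indicates RDP connections were allowed, and the Uninstall entries from the Software hive give a quick list of installed applications to scan for RMM tools:

# Was RDP enabled? (fDenyTSConnections = 0 means connections were allowed)
Get-ItemProperty 'HKLM:\CaseSystem\ControlSet001\Control\Terminal Server' |
    Select-Object fDenyTSConnections

# Non-default listener port, if one was set.
Get-ItemProperty 'HKLM:\CaseSystem\ControlSet001\Control\Terminal Server\WinStations\RDP-Tcp' |
    Select-Object PortNumber

# Installed applications, as a quick check for RMM tooling.
Get-ItemProperty 'HKLM:\CaseSoftware\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, Publisher, InstallDate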

Something else to consider is whether the threat actor had shell-based access, and chose to perform their actions via a command prompt or via PowerShell, rather than navigating the system via the Explorer shell and double-clicking files and applications. As we have only the backed-up Registry, we wouldn't be able to examine the user's console history, nor the PowerShell Event Logs.

However, if there are no indications of shell-based access, then since we only have the Registry and no access to any other log files from the endpoint, it's likely going to be impossible to determine the exact means of access. Further, if all of the threat actor's activity was conducted via network-based/type 3 logins to the laptop, such as via Meterpreter or PSExec, there will be precious little in the backed-up Registry to reflect that activity.

It doesn't do any good to parse the Security hive for the Security Event Log audit policy, because we don't have access to the Windows Event Logs. We could attempt to recover them via record parsing of the image, if we had a copy of the image. 

I would not put a priority on persistence; after all, if a threat actor is going to wipe a system, any persistence they create is not going to survive, unless the persistence they added was included in a system-wide or incremental backup, from which the system is restored. While this is possible, it's not something I'd prioritize at this point. I would definitely check autostart locations within the Registry for any indication of something that might look suspicious; for example, something that may be a RAT, etc. However, without more information, we wouldn't be able to definitively determine (a) if the entry was malicious, and (b) if it was used by the threat actor to access the endpoint. For example, without logs, we have no way of knowing if an item in an autostart location started successfully, or generated an error and crashed each time it was launched. Even with logs, we would have no way of knowing if the threat actor accessed the laptop via an installed RAT.
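A quick pass over the common Run/RunOnce locations in the mounted hives (hypothetical mount points again) is enough to surface anything obviously suspicious, keeping in mind that this is nowhere near a complete autostart sweep:

# Enumerate Run/RunOnce entries from the backed-up Software hive and the user's NTUSER.DAT.
$runKeys = @(
    'HKLM:\CaseSoftware\Microsoft\Windows\CurrentVersion\Run',
    'HKLM:\CaseSoftware\Microsoft\Windows\CurrentVersion\RunOnce',
    'HKLM:\CaseUser\Software\Microsoft\Windows\CurrentVersion\Run',
    'HKLM:\CaseUser\Software\Microsoft\Windows\CurrentVersion\RunOnce'
)
foreach ($path in $runKeys) {
    if (Test-Path $path) {
        $key = Get-Item $path
        foreach ($name in $key.GetValueNames()) {
            [PSCustomObject]@{ Key = $path; Name = $name; Command = $key.GetValue($name) }
        }
    }
}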

Something else I would look for would be indications of third-party applications added to the laptop. For example, LANDesk used to have a Software Monitoring module that would record information about programs executed on the system, including how many times each program was launched, the last time it was launched, and the user name associated with the last launch.

Findings
So, where do we stand with our goals? I'd say that at the moment, we're at "inconclusive", because we simply do not have enough information to go on. There is no memory dump, no other files collected, no logs, etc., just the backed-up Registry. While we won't know definitively how the threat actor was able to access the endpoint, we do know that if access was achieved via some means that allowed for shell-based access, we might have a chance at determining what actions the threat actor took while they were on the system. Of course, the extent to which we'd be able to do that also depends upon other factors, including the version of Windows, the software "load" (i.e., installed applications), and the actions taken by the threat actor (navigating/running apps via the Explorer shell vs. the command prompt/PowerShell). It's entirely possible that the threat actor accessed the endpoint via the network, through a means such as Meterpreter, or that there was a RAT installed that they used to access the system.