Friday, June 25, 2021

What We Know About The Ransomware Economy

Okay, I think we can all admit that ransomware has consumed the news cycle of late, thanks to high-visibility attacks such as Colonial Pipeline and JBS. Interestingly enough, there wasn't this sort of reaction the second time the City of Baltimore got attacked, which (IMHO) says more about the news cycle than anything else.

However, while the focus is on ransomware, for the moment, it's a good time to point out that there's more to this than just the attacks that get blasted across news feeds. That is, ransomware itself is an economy, an eco-system, which is a moniker that goes a long way toward describing why victims of these attacks are impacted to the extent that they are. What I mean by this is that everything...EVERYTHING...about what goes into a ransomware attack is directed at the final goal of the threat actor...someone...getting paid. What goes further to making this an eco-system is that when a ransomware attack does result in the threat actor getting paid, there are also others in the supply chain (see what I did there??) who are also getting paid.

I was reading Unit42's write-up on the Prometheus ransomware recently, and I have to say, a couple of things really stood out for me, one being the possible identification of a "false flag". The Prometheus group apparently made a claim that is unsupported by the data Unit42 has observed. Even keeping collection bias in mind, this is still very interesting. What would be the purpose of such a "false flag"? Does it tell us that the Prometheus group has insight into the workings of most cyber threat intelligence (CTI) functions; have they "seen" CTI write-ups and developed their own version of the MITRE ATT&CK matrix?

Regardless of the reason or rationale behind the statement, Unit42 is...wait for it...relying on their data.  Imagine that!

Another thing that stood out is the situational awareness of the ransomware developer.

When Prometheus ransomware is executed, it tries to kill several backups and security software-related processes, such as Raccine...

Well, per the article, this is part of the ransomware itself, and not something the threat actors appear to be doing themselves. Having been part of more than a few ransomware investigations over the years, relying on both EDR telemetry and #DFIR data, I've seen different levels of situational awareness on the part of threat actors. In some cases where the EDR tool blocks a threat actor's commands, I've seen them either give up, or disable or remove AV tools. In other cases, the threat actor has removed AV tools prior to even running a query for them, which raises the question: was that tool even installed on the system?

This does, however, speak to how the barrier for entry has been lowered; that is, a far less sophisticated actor is able to be just as effective, or more so. Rather than having to know and manage all the parts of the "business", rather than having to invest in the resources required to gain access, navigate the compromised infrastructure, and then develop and deploy ransomware...you can just buy those things that you need. Just like the supply chain of a 'normal' business. Say that you want to start a business that's going to provide products to people...are you going to build your own shipping fleet, or are you going to use a commercial shipper (DHL, FedEx, UPS, etc.)?

Further, from the article:

At the time of writing, we don’t have information on how Prometheus ransomware is being delivered, but threat actors are known for buying access to certain networks, brute-forcing credentials or spear phishing for initial access.

This is not unusual. This write-up appears to be based primarily on OSINT, and does not seem to be derived from actual intrusion data or intelligence. The commands listed in the article for disabling Raccine are reportedly embedded in the ransomware executable itself, and not something derived from EDR telemetry or DFIR analysis. So what this is saying is that threat actors generally gain access by brute-forcing credentials (or purchasing them), or spear phishing, or by purchasing access from someone who's done either of the first two.

Again, this speaks to how the barrier for entry has been lowered.  Why put the effort into gaining access yourself when you can just purchase access someone else has already established?  

We’ve compiled this report to shed light into the threat posed by the emergence of new ransomware gangs like Prometheus, which are able to quickly scale up new operations by embracing the ransomware-as-a-service (RaaS) model, in which they procure ransomware code, infrastructure and access to compromised networks from outside providers. The RaaS model has lowered the barrier to entry for ransomware gangs.

Purchasing access to compromised computer systems...or compromising computer systems for the purpose of monetizing that access...is nothing new. Let's look back 15+ years to when Brian Krebs interviewed a botherder known as "0x80". This was an in-person interview with a purveyor of access to compromised systems, which is just part of the eco-system. Since then, the whole thing has clearly been "improved upon".

This just affirms that, like many businesses, the ransomware economy, the eco-system, has a supply chain. This not only means that there are specializations within that supply chain, and that the barrier to entry is lowered, it also means that attribution of these cybercrimes is going to become much more difficult, and possibly tenuous, at best.

Thoughts on Assessing Threat Actor Intent & Sophistication

I was reading this Splunk blog post recently, and I have to say up front, I was disappointed by the fact that the promise of the title (i.e., "Detecting Cl0p Ransomware") was not delivered on by the remaining content of the post. Very early on in the blog post is the statement:

Ransomware is by nature a post-exploitation tool, so before deploying it they must infiltrate the victim's infrastructure. 

Okay, so at this point, I'm looking for something juicy, some information regarding the TTPs used to "infiltrate the victim's infrastructure" and to locate files of interest for staging and exfil, but instead, the author(s) dove right into analyzing the malware itself, through reverse engineering. Early in that malware RE exercise is the statement:

This ransomware has a defense evasion feature where it tries to delete all the logs in the infected machine to avoid detection.

The embedded command is essentially a "one-liner" used to list and clear all Windows Event Logs, leveraging wevtutil.exe. However, while used for "defense evasion", it occurred to me that this command is not, in fact, intended to "avoid detection". After all, with ransomware, the threat actors want to get paid, so they want to be detected. In fact, to ensure they're detected, the actors put ransom notes on the system, with clear statements, declarations, warnings, and time limits. In this case, the ransom note says that if action is not taken in two weeks, the files will be deleted. So, yes, it's safe to say that clearing all of the Windows Event Logs is not about avoiding detection. If anything, it's really nothing more than a gesture of dominance, the threat actor saying, "look at what I can do to your system."

So, what is the purpose of clearing the Windows Event Logs? As a long-time #DFIR analyst, I see the value of the Windows Event Logs in such cases as assisting in a root-cause analysis (RCA) investigation, and clearing some Windows Event Logs (albeit not ALL of them) will hobble (but not completely remove) a responder's ability to determine aspects of the attack cycle such as lateral movement. By tracing lateral movement, the investigator can determine the original system used by the threat actor to gain access to the infrastructure, the "nexus" or "foothold" system, and from there, determine how the threat actor gained access. I say "hobble" because clearing the Windows Event Logs does not obviate the investigator's ability to recover the event records; it simply requires a bit more effort. However, the vast majority of organizations impacted by ransomware are not conducting full investigations or RCAs, and #DFIR consulting firms are not publicly sharing ransomware trends and TTPs, anonymized through aggregation. In short, clearing the Windows Event Logs, or not, would likely have little impact either way on the response.

But why clear ALL Windows Event Logs? IMHO, it was used to ensure that the ransomware attack was, in fact, detected. Perhaps the assumption is that most organizations have some modicum of a detection capability, and that even the most rudimentary SIEM or EDR framework should throw an alert of some kind in the face of the "wevtutil cl" command, or when the SIEM starts receiving events indicating that Windows Event Logs were cleared (especially if the Security Event Log was cleared!).
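As a very simple illustration of that last point (and this is a sketch of my own, not something from the Splunk post), a check for "log cleared" events on a live system might look something like the following; Event ID 1102 in the Security Event Log and Event ID 104 in the System Event Log are the records generated when logs are cleared, and the query requires admin rights:

# Minimal sketch (not a SIEM rule): query a live Windows system for "log cleared"
# records using wevtutil. Event ID 1102 (Security) and 104 (System) are the
# standard events written when logs are cleared; run from an elevated prompt.
import subprocess

QUERIES = [
    ("Security", "*[System[(EventID=1102)]]"),   # Security log cleared
    ("System",   "*[System[(EventID=104)]]"),    # other event logs cleared
]

for log, xpath in QUERIES:
    try:
        out = subprocess.run(
            ["wevtutil", "qe", log, f"/q:{xpath}", "/c:5", "/rd:true", "/f:text"],
            capture_output=True, text=True, check=True
        ).stdout.strip()
        if out:
            print(f"[!] Recent 'log cleared' events in the {log} log:\n{out}\n")
    except subprocess.CalledProcessError as err:
        print(f"[-] Query against {log} failed: {err.stderr.strip()}")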

Sunday, June 06, 2021

Toolmarks: LNK Files in the news again

As most regular readers of this blog can tell you, I'm a bit of a fan of LNK files...a LNK-o-phile, if you will. I'm not only fascinated by the richness of the structure, but as I began writing a parser for LNK files, I began to see some interesting aspects of intelligence that can be gleaned from LNK files, in particular, those created within a threat actor's development environment and deployed to targets/infrastructures. First, there are different ways to create LNK files using the Windows API, and what's really cool is that each method has its own unique #toolmarks associated with it!

Second, most often there is a pretty good amount of metadata embedded in the LNK file structure. There are file system time stamps, and often we'll see a NetBIOS system name, a volume S/N, a SID, or other pieces of information that we can use in a VirusTotal retro-hunt in order to build out a significant history of other similar LNK files.

In the course of my research, I was able to create the smallest possible functioning LNK file, albeit with NO (yes, you read that right...) metadata. Well, that's not 100% true...there is metadata within the LNK file. Specifically, the Windows version identifier is still there, and this is something I purposely left in. Instead of zero'ing it out, I altered it to an as-yet-unseen value (in this case, 0x0a). You can also alter each version identifier to its own value, rather than keeping them all the same.
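To make the "where the metadata lives" point a bit more concrete, here's a minimal sketch (Python, standard library only) that pulls a few fields from the fixed ShellLinkHeader, per the MS-SHLLINK structure definition; the file name is just a placeholder, and this is a quick illustration rather than a full parser. The version identifiers and ExtraData blocks live beyond this fixed header, in the shell items and at the tail of the file, but even these first 76 bytes tell you whether the embedded target time stamps have been zero'd out:

# Quick illustration (not a full parser): pull a few fields from the fixed
# ShellLinkHeader of a .lnk file, per the MS-SHLLINK structure definition.
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_str(ft):
    # FILETIME: 100-ns intervals since 1601-01-01 UTC; 0 means "zero'd out"
    if ft == 0:
        return "0 (zero'd)"
    return (datetime(1601, 1, 1, tzinfo=timezone.utc) +
            timedelta(microseconds=ft // 10)).isoformat()

with open("sample.lnk", "rb") as f:          # placeholder file name
    hdr = f.read(76)                         # HeaderSize is always 0x4C (76 bytes)

(header_size,) = struct.unpack_from("<I", hdr, 0)
clsid = hdr[4:20]                            # LinkCLSID
link_flags, file_attrs = struct.unpack_from("<II", hdr, 20)
ctime, atime, mtime = struct.unpack_from("<QQQ", hdr, 28)
(showcmd,) = struct.unpack_from("<I", hdr, 60)

print("HeaderSize :", hex(header_size))
print("LinkCLSID  :", clsid.hex())
print("LinkFlags  :", hex(link_flags))
print("Created    :", filetime_to_str(ctime))
print("Accessed   :", filetime_to_str(atime))
print("Modified   :", filetime_to_str(mtime))
print("ShowCmd    :", hex(showcmd))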

Microsoft recently shared some information about NOBELIUM sending LNK files embedded within ISO files, as did the Volexity team. Both discuss aspects of the NOBELIUM campaign; in fact, they do so in a similar manner, but each with different details. For example, the Volexity team specifically states the following (with respect to the LNK file):

It should be noted that nearly all of the metadata from the LNK file has been removed. Typically, LNK files contain timestamps for creation, modification, and access, as well as information about the device on which they were created.

Now, that's pretty cool! As someone who's put considerable effort into understanding the structure of LNK files, and done research into creating the smallest, minimal, functioning LNK file, this was a pretty interesting statement to read, and I wanted to learn more.

Taking a look at the metadata for the reports.lnk file (from fig 4 in the Microsoft blog post, and fig 3 of the Volexity blog post), we see:

guid               {00021401-0000-0000-c000-000000000046}
shitemidlist    My Computer/C:\/Windows/system32/rundll32.exe
**Shell Items Details (times in UTC)**
  C:0                   M:0                   A:0                  Windows  (9)
  C:0                   M:0                   A:0                  system32  (9)
  C:0                   M:0                   A:0                  rundll32.exe  (9)

commandline  Documents.dll,Open
iconfilename   %windir%/system32/shell32.dll
hotkey             0x0
showcmd        0x1

***LinkFlags***
HasLinkTargetIDList|IsUnicode|HasArguments|HasExpString|HasIconLocation

***PropertyStoreDataBlock***
GUID/ID pairs:
{46588ae2-4cbc-4338-bbfc-139326986dce}/4      SID: S-1-5-21-8417294525-741976727-420522995-1001

***EnvironmentVariableDataBlock***
EnvironmentVariableDataBlock: %windir%/system32/explorer.exe

***KnownFolderDataBlock***
GUID  : {1ac14e77-02e7-4e5d-b744-2eb1ae5198b7}
Folder: CSIDL_SYSTEM

While the file system time stamps embedded within the LNK file structure appear to have been zero'd out, a good deal of metadata still exists within the structure itself. For example, the Windows version information (i.e., "9") is still available, as are the contents of several ExtraData blocks. The SID listed in the PropertyStoreDataBlock can be used to search across repositories, looking for other LNK files that contain the same SID. Further, the fact that these blocks still exist in the structure gives us clues as to the process used to create the original LNK file, before the internal structure elements were manipulated.
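As a quick (and admittedly naive) illustration of the "search across repositories" idea, a sweep like the following could be run over a local folder of collected samples, looking for that same SID; the folder path is a placeholder, and because a property store could also carry the value in a packed binary form, this is a triage trick rather than a replacement for an actual parser:

# Naive triage sketch: sweep a local folder of collected .lnk samples for a SID
# of interest, by looking for the ASCII and UTF-16LE encodings of the SID string.
# If the property store holds the SID in a packed binary form instead, a proper
# LNK/property-store parser is needed. The path is a placeholder.
from pathlib import Path

SID = "S-1-5-21-8417294525-741976727-420522995-1001"
needles = [SID.encode("ascii"), SID.encode("utf-16-le")]

for lnk in Path(r"D:\lnk_samples").rglob("*.lnk"):
    data = lnk.read_bytes()
    if any(n in data for n in needles):
        print(f"[+] {lnk} contains SID {SID}")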

I'm not sure that this is the first time this sort of thing has happened; after all, the MS blog post makes no mention of metadata being removed from the LNK file, so it's entirely possible that it's happened before but no one's thought that it was important enough to mention. However, items such as ExtraDataBlocks and which elements exist within the structure not only give us clues (toolmarks) as to how the file was created, but the fact that metadata elements were intentionally removed serves as an additional toolmark, and provides insight into the intentions of the actors.

But why use an ISO file? Well, interesting you should ask.  Matt Graeber said:

Adversaries choose ISO/IMG as a delivery vector b/c SmartScreen doesn't apply to non-NTFS volumes

In the ensuing thread, @arekfurt said:

Adversaries can also use the iso trick to put evade MOTW-based macro blocking with Office docs.

Ah, interesting points! The ISO file is downloaded from the Internet, and as such, would likely have a zone identifier ADS associated with it (I say "likely" because I haven't seen it mentioned as a toolmark), whereas once the ISO file is mounted, the embedded files would not have zone ID ADSs associated with them. So the decision to use an ISO file wasn't just cool...it appears to have been an intentional choice, made specifically for defense evasion.
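As a side note, checking for the mark-of-the-web is straightforward enough to test for yourself; the sketch below (Python, with placeholder file names and an assumed mount point of E:) simply tries to open the Zone.Identifier ADS on a file sitting on an NTFS volume versus one inside the mounted image:

# Minimal sketch: check whether a file carries a Zone.Identifier ADS (the
# "mark of the web"). File names and the E: mount point are placeholders;
# the point is that the downloaded ISO on an NTFS volume would typically
# have the ADS, while files inside the mounted ISO would not.
def get_motw(path):
    try:
        with open(path + ":Zone.Identifier", "r") as ads:
            return ads.read()
    except (FileNotFoundError, OSError):
        return None

for f in (r"C:\Users\test\Downloads\reports.iso",   # downloaded ISO on NTFS
          r"E:\Documents.dll"):                      # file inside the mounted ISO
    motw = get_motw(f)
    if motw:
        print(f"{f} -> MOTW present:\n{motw}")
    else:
        print(f"{f} -> no Zone.Identifier ADS")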

Testing, and taking DFIR a step further

One of Shakespeare's lines from Hamlet I remember from high school is, "...there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." And that's one of the great things about the #DFIR industry...there's always something new. I do not for a moment think that I've seen everything, and I, for one, find it fascinating when we find something that is either new, or that has been talked about but is being seen "in the wild" for the first time.

Someone mentioned recently that Microsoft's Antimalware Scan Interface (i.e., AMSI) could be used for persistence, and that got me very interested. This isn't something specifically or explicitly covered by the MITRE ATT&CK framework, and I wanted to dig into this a bit more to understand it. As it can be used for persistence, it offers an opportunity not only for a detection, but also for a #DFIR detection and artifact constellation that can provide insight into threat actor sophistication and intent, as well as attribution.

AMSI was introduced in 2015, and discussions of issues with it, and of bypassing it, date back to 2016. However, the earliest discussion of the use of AMSI for persistence that I could find is from just last year. An interesting aspect of this means of persistence isn't so much the detection itself, but rather how it's investigated. I've worked with analysis and response teams over the years, and one of the recurring questions I've had when something "bad" is detected is where that event occurred in relation to others. For example, whether you're using EDR telemetry or a timeline of system activity, all events tend to have one thing in common...a time stamp indicating when they occurred. That is, either the event itself has an associated time stamp (file system time stamp, Registry key LastWrite time, PE file compile time, etc.), or some monitoring functionality is able to associate a time stamp with the observed event. As such, determining when a "bad" event occurred in relation to other events, such as a system reboot or a user login, can provide insight into whether the event is the result of some persistence mechanism. This is necessary because, while EDR telemetry in particular can provide a great deal of insight, it is largely blind to many on-system artifacts (for example, Windows Event Log records). However, adding EDR telemetry to on-system artifact constellations significantly magnifies the value of both.
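To illustrate the normalization idea (and only the idea...the events below are made-up placeholders, not data from any actual case), reducing events from different sources to a common time-stamped form and sorting them is all it takes to start placing a "bad" event relative to reboots, logins, and other activity:

# Toy sketch of the normalization idea: events from different sources (EDR
# telemetry, Windows Event Log records, Registry key LastWrite times, etc.)
# reduced to a common (timestamp, source, description) form and sorted.
# The events below are hypothetical placeholders.
from datetime import datetime

events = [
    (datetime(2021, 6, 1, 11, 58), "EventLog/System",   "system boot"),
    (datetime(2021, 6, 1, 12, 2),  "EventLog/Security", "interactive logon, user 'admin'"),
    (datetime(2021, 6, 1, 12, 3),  "Registry",          "LastWrite on suspected persistence key"),
    (datetime(2021, 6, 1, 12, 4),  "EDR",               "unsigned DLL loaded into powershell.exe"),
]

for ts, source, desc in sorted(events):
    print(f"{ts.isoformat()}  {source:<20} {desc}")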

As I started researching this issue, the first resource to really jump out at me was this blog post from PenTestLab. It's thorough, and provides a good deal of insight as well as resources. For example, this post links to not only another blog post from b4rtik, but to a compiled DLL from netbiosX that you can download and use in testing. As a side note, be sure to install the Visual C++ redistributable on your system if you don't already have it, in order to get the DLL registration to work (Thanks to @netbiosX for the assist with that!)

I found that there are other resources on the topic from Microsoft, as well.

Following the PenTestLab blog post, I registered the DLL, waited for a bit, and then collected a copy of the Software hive via the following command:

C:\Users\Public>reg save HKLM\Software software.sav

This testing provides the basis for developing #DFIR resources, such as a RegRipper plugin. Baking the check into a plugin allows the detection to persist beyond a single analyst or engagement, particularly in larger environments, so that corporate knowledge is maintained.
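RegRipper plugins are written in Perl, but just as an illustration of the kind of check such a plugin would perform, here's a sketch using the python-registry module against the saved Software hive; the assumption (consistent with the PenTestLab post) is that AMSI providers register a GUID beneath Microsoft\AMSI\Providers, with the corresponding COM class pointing to the provider DLL via its InprocServer32 key:

# Illustrative sketch only (RegRipper plugins are Perl; this uses the
# python-registry module instead): list the provider GUIDs registered under
# Microsoft\AMSI\Providers in a saved Software hive, along with key LastWrite
# times and, where possible, the DLL each CLSID points to via InprocServer32.
from Registry import Registry

reg = Registry.Registry("software.sav")

providers = reg.open("Microsoft\\AMSI\\Providers")
for sk in providers.subkeys():
    guid = sk.name()
    print(f"{guid}  LastWrite: {sk.timestamp()}")
    try:
        inproc = reg.open(f"Classes\\CLSID\\{guid}\\InprocServer32")
        for v in inproc.values():
            if v.name() in ("", "(default)"):   # default value holds the DLL path
                print(f"    DLL: {v.value()}")
    except Registry.RegistryKeyNotFoundException:
        print("    no matching CLSID entry found in this hive")

The LastWrite time on the provider's key also gives you a starting point for placing the registration within a timeline of other system activity.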

This testing also sets the stage for additional work. For example, you can install Sysmon, open a PowerShell prompt, submit the test phrase, and then create a timeline of system activity once calc.exe opens, to determine (or begin developing) the artifact constellation associated with this use of AMSI as a persistence mechanism.

Speaking of PenTestLab, Sysmon, and taking things a step further, there's this blog post from PenTestLab on Threat Hunting AMSI Bypasses, including not just a Yara rule to run across memory, but also a Registry modification that indicates yet another AMSI bypass.
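One commonly documented example of such a Registry modification (and I'm not saying it's the specific one from the PenTestLab post) is setting AmsiEnable to 0 beneath the user's Software\Microsoft\Windows Script\Settings key, which disables AMSI scanning for Windows Script Host; a quick check against a collected NTUSER.DAT hive might look something like the following sketch, again using the python-registry module:

# Quick check against a collected NTUSER.DAT hive for one commonly documented
# Registry-based AMSI bypass: AmsiEnable = 0 under Software\Microsoft\Windows
# Script\Settings, which disables AMSI for Windows Script Host. This is an
# illustration, not necessarily the modification referenced in the post.
from Registry import Registry

reg = Registry.Registry("NTUSER.DAT")
try:
    key = reg.open("Software\\Microsoft\\Windows Script\\Settings")
    for v in key.values():
        if v.name().lower() == "amsienable":
            flag = "BYPASS INDICATOR" if v.value() == 0 else "ok"
            print(f"AmsiEnable = {v.value()} ({flag}), key LastWrite: {key.timestamp()}")
except Registry.RegistryKeyNotFoundException:
    print("Windows Script\\Settings key not present in this hive")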