Saturday, March 25, 2023

Password Hash Leakage

If you've been in the security community for even a brief time, or you've taken training associated with a certification in this field, you've likely encountered the concept of password hashes. The "Reader's Digest" version is that passwords are subjected to a one-way cryptographic algorithm for storage; the same algorithm is applied to passwords as they're input, and the two hashes are compared for authentication. The basic idea is that the password is never stored in its original form. 
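To make that comparison step concrete, here's a minimal sketch of the general scheme (using salted PBKDF2 purely for illustration; Windows itself stores unsalted MD4 "NT" hashes, which is part of why they're so attractive to attackers):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted, one-way hash -- never the password itself."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Hash the submitted password the same way, and compare the results."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

The system never needs the original password back; it only needs to be able to repeat the computation, which is exactly why a captured hash is valuable to an attacker.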

Now, we 'see' password hashes being collected by threat actors all the time; grab a copy of the AD database, or of Registry hives from an endpoint. Or, why bother with hashes at all, when you can use NPPSpy or enable WDigest?

Or, if you wanted to maintain unauthenticated persistence, you could enable RDP and Sticky Keys.

Okay, so neither of those last two instances involves password hashes, so what if that's what you were specifically interested in? What if you wanted to get password hashes, or continue to receive password hashes, even across password resets? There are more than a few ways to go about doing this, all of which take advantage of available "functionality"; all you have to do is set up a file or document to attempt to connect to a remote, threat actor-controlled resource.

Collecting hashes is nothing new...check out this article from 1997. Further, hashes can be leaked via an interesting variety of routes and applications; take a look at this Securify article from 2018. Also, consider the approach presented in ACE Responder's tweet regarding modifying Remote Desktop Client .rdp files.

One means of enabling hash leaks across password resets is to modify the iconfilename field in specifically placed LNK/Windows shortcut files, which is similar to what is described in this article, except that you set the IconLocation parameter to point to a threat actor-controlled resource. There's even a free framework for creating shortcuts called "LNKBomb" available online.

Outlook has been a target for NTLM hash leakage attacks; consider this Red Team Notes article from 2018. More recently, Microsoft published this blog article explaining CVE-2023-23397, and how to investigate attempts to exploit the vulnerability. This PwnDefend article shares some thoughts as to persisting hash collection via the Registry, enabling the "long game".

So, What?
Okay, so what's the big deal? Why is this something that you even need to be concerned about?

Well, there's been a great deal of discussion regarding the cyber crime economy, and in particular the ransomware economy, for some time now. This is NOT a euphemism; cyber crime is an economy focused on money. In 2016, the Samas ransomware actors were conducting their own operations, cradle to grave; at the time, they targeted Java-based JBoss CMS systems as their initial access points. Over the years, an economy has developed around initial access, to the point where there are specialists, initial access brokers (IABs), who obtain and sell access to systems and infrastructures. Once initial access is achieved, they will determine what access is available, and to which organization, and it behooves them to retain that access, if possible. Say they sell access, and the threat actor is "noisy", is caught, and the implant or backdoor placed by the IAB (not the initial access point itself) is "burned". NTLM leakage is a means for ensuring later, repeated access, given that one of the response and remediation recommendations is very often a global password change. If one of the routes into the infrastructure used by the IAB requires authentication, then setting up a means for receiving password hashes enables continued access.

What To Do About It
There are a number of ways to address this issue. First, block outbound communications over ports 139 and 445 (because of course you've already blocked inbound communication attempts over those ports!!), and monitor your logs for attempts to do so.

Of course, consider using some means of MFA, particularly for higher privilege access.

If your threat hunting allows for access to endpoints (rather than log sources sent to a SIEM) and file shares, searching for LNK files in specific locations and checking their iconfilename attributes is a good hunt, and something you may want to enable on a regular, repeated basis, much like a security patrol.
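A minimal sketch of what the checking step in such a hunt might look like, based on the layout in the public MS-SHLLINK specification (the attacker IP in the usage example is made up, and a real hunt would walk Startup folders, the desktop, and file shares rather than a single blob):

```python
import struct

# LinkFlags bits, per the public MS-SHLLINK specification
HAS_ID_LIST, HAS_LINK_INFO = 0x01, 0x02
HAS_NAME, HAS_REL_PATH, HAS_WORK_DIR, HAS_ARGS = 0x04, 0x08, 0x10, 0x20
HAS_ICON_LOC, IS_UNICODE = 0x40, 0x80

def icon_location(data):
    """Return the ICON_LOCATION string from raw .lnk bytes, or None."""
    if len(data) < 76 or data[:4] != b"\x4c\x00\x00\x00":
        return None  # not a shell link header
    flags = struct.unpack_from("<I", data, 20)[0]
    if not flags & HAS_ICON_LOC:
        return None
    pos = 76
    if flags & HAS_ID_LIST:      # skip the LinkTargetIDList structure
        pos += 2 + struct.unpack_from("<H", data, pos)[0]
    if flags & HAS_LINK_INFO:    # skip the LinkInfo structure
        pos += struct.unpack_from("<I", data, pos)[0]
    width = 2 if flags & IS_UNICODE else 1
    # StringData entries appear in a fixed order ahead of ICON_LOCATION
    for bit in (HAS_NAME, HAS_REL_PATH, HAS_WORK_DIR, HAS_ARGS):
        if flags & bit:
            pos += 2 + struct.unpack_from("<H", data, pos)[0] * width
    count = struct.unpack_from("<H", data, pos)[0]
    raw = data[pos + 2 : pos + 2 + count * width]
    return raw.decode("utf-16-le" if width == 2 else "cp1252")

def is_remote(icon_path):
    """Flag icon locations that point at a remote (UNC or URL) resource."""
    return icon_path is not None and (icon_path.startswith("\\\\") or "://" in icon_path)
```

The point isn't this particular parser (RegRipper-style tooling, or any LNK parser, works just as well); it's that the hunt is cheap to automate and repeat.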

For SOC detections, look for means by which this activity...either enabling or using these attempts at hash leakage...might be detected.

From a DFIR perspective, my recommendation would be to develop an evidence intake process that includes automated parsing and rendering of data sources prior to presenting the information to the DFIR analyst. Think of this as a means of generating alerts, but instead of going to the SOC console, these "alerts" are enriched and decorated for the DFIR analyst. This process should include parsing of LNK files within specific locations/paths in the acquired evidence, as parsing all LNK files might not be effective, nor timely.

The "Why" Behind Tactics

Very often we'll see mention in open reporting of a threat actor's tactics, be they "new" or just what's being observed, and while we may consider how our technology stack might be used to detect these tactics, or how we'd respond to an incident where we saw them used, how often do we consider why the tactic was used?

To see the "why", we have to take a peek behind the curtain of detection and response, if you will.

If you so much as dip your toe into "news" within the cyber security arena, you've likely seen mention that Emotet has returned after a brief hiatus [here, here]. New tactics observed in association with the deployment of this malware include the fact that the lure document is an old-style MS Word .doc file, which presents a message instructing the user to copy the file to a 'safe' location and reopen it. The lure document itself is in excess of 500MB in size (padded with zeros), and when the macros are executed, a DLL that is similarly zero-padded to over 500MB is downloaded.

Okay, why was this approach taken? Why pad out two files to such a size, albeit with zeros? 

Well, consider this...SOC analysts are usually front-line when responding to incident alerts, and they may have a lot of ground to cover while meeting SLAs during their shift, so they aren't going to have a lot of time to invest in investigations. Their approach to dealing with the .doc or even the DLL file will be to first download it from the endpoint...if they can. That's right...does the technology they're using have limits on file sizes for download, and if so, what does it take to change that limit? Can the change be made in a timely manner such that the analyst can simply reissue the request to download the file, or does the change require some additional action? If additional action is required, it likely won't be followed up on.
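One cheap triage check that sidesteps the download-and-detonate problem is to measure how much of the file is trailing zero padding (this heuristic and its 90% threshold are my own illustration, not something from the Emotet reporting):

```python
def trailing_zero_ratio(data):
    """Fraction of the file that is trailing zero-byte padding."""
    if not data:
        return 0.0
    trailing = len(data) - len(data.rstrip(b"\x00"))
    return trailing / len(data)

def looks_padded(data, threshold=0.9):
    """Flag files that are mostly zero padding, like the 500MB Emotet lures."""
    return trailing_zero_ratio(data) >= threshold
```

A check like this can run on the endpoint, before anyone tries to move a 500MB file anywhere.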

Once they have the file, what are they going to do? Parse it? Not likely. Do they have the tools available, and the skills, for parsing and analyzing old-style/OLE format .doc files? Maybe. But it's easier to just upload the file to an automated analysis framework...if that framework doesn't have a file size limit of its own.

Oh, and remember, all of that space full of zeros means the threat actor can change the padding contents (flip a single "0" to a "1") and change the hash without impacting the functionality of the file. So...yeah.
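The effect on hash-based detection is easy to demonstrate (a toy example with a few KB of padding standing in for the 500MB the actors actually used):

```python
import hashlib

payload = b"MZ...pretend this is the functional part of the file..."
padded = payload + b"\x00" * 4096      # zero padding; real samples used >500MB
variant = bytearray(padded)
variant[-1] = ord("1")                 # change a single padding byte

# The functional payload bytes are untouched...
assert bytes(variant[: len(payload)]) == payload
# ...but every full-file hash changes, breaking hash-based blocklists
original_hash = hashlib.sha256(padded).hexdigest()
variant_hash = hashlib.sha256(bytes(variant)).hexdigest()
assert original_hash != variant_hash
```

Each "new" sample costs the actor one byte-flip, and costs defenders a brand-new IOC.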

So, what's happening here is that, whether or not it's specifically intended, these tactics are targeting analysts, relying on their lack of experience, and targeting response processes within the security stack. Okay, "targeting" implies intent...let's say "impacting" instead. You have to admit that when looking at these tactics and comparing them to your security stack, in some cases these are the effects we're seeing; this is what we see happening when we peek behind the curtain.

Consider this report from Sentinel Labs, which mentions the use of the "C:\MS_DATA\" folder by threat actors. Now, consider the approach taken by a SOC analyst who sees this for the first time; given that some SOC analysts are remote, they'll likely turn to Google to learn about this folder, and find that the folder is used by the Microsoft Troubleshooting tool (TSSv2), and at that point, perhaps deem it "safe" or "benign". After all, how many SOCs maintain a central, searchable repository of curated, documented intrusion intel? For those that do, how many analysts on those teams turn to that repository first, every time? 

How about DFIR consulting teams? How many DFIR consulting teams have an automated process for parsing acquired data, and automatically tagging and decorating it based on intrusion intel developed from previous engagements?

In this case, an automated process could parse the MFT and automatically tag the folder with a note for analysts, with tips regarding how to validate the use of TSSv2, and maybe even tag any files found within the folder.
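A toy sketch of that tagging step (the intel note text here is illustrative; a real pipeline would key off a curated, documented intrusion intel repository rather than a hard-coded dict):

```python
# Hypothetical intrusion-intel notes, keyed by lower-case folder prefix
INTEL_NOTES = {
    "c:\\ms_data": (
        "Folder created by the MS Troubleshooting tool (TSSv2), but also "
        "abused by threat actors (per Sentinel Labs reporting); validate "
        "TSSv2 execution before deeming contents benign."
    ),
}

def decorate_paths(paths):
    """Attach analyst notes to any parsed path that matches known intel."""
    tagged = []
    for path in paths:
        for prefix, note in INTEL_NOTES.items():
            if path.lower().startswith(prefix):
                tagged.append((path, note))
    return tagged
```

The value here is that the note reaches every analyst, on every engagement, automatically; nobody has to remember to search the repository first.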

When seeing tactics listed in open reporting, it's not just a good idea to consider, "does my security stack detect this?", but to also think about, "what happens if we do?"

Thursday, March 16, 2023

Threat Actors Changing Tactics

I've been reading a bit lately on social media about how cyber security is "hard" and "expensive", and about how threat actors are becoming "increasingly sophisticated". 

The thing is, going back more than 20 yrs, in fact going back to 1997, when I left military active duty and transitioned to the private sector, I've seen something entirely different. 

On 7 Feb 2022, Microsoft announced their plans to change how the Windows platform (OS and applications) handled macros in Office files downloaded from the Internet; they were planning to block them, by default. Okay, so why is that? Well, it turns out that weaponized Office docs (Word documents, Excel spreadsheets, etc.) were popular methods for gaining access to systems. 

As it turns out, even after all of the discussion and activity around this one, single topic, weaponized documents are still in use today. In fact, March 2023 saw the return of Emotet, delivered via an older-style MS Word .doc file that was in excess of 500MB in size. This demonstrates that even with documented incidents and available protections, these attacks will still continue to work, because the necessary steps to help protect organizations are never taken. In addition to using macros in old-style MS Word documents, the actors behind the new Emotet campaigns are also including instructions to the recipient for...essentially...bypassing those protection mechanisms.

Following the Feb 2022 announcement from Microsoft, we saw some threat actors shift to using disk image files to deploy their malware, due in large part to the apparent dearth of security measures (at the time) to protect organizations from such attacks. For example, a BumbleBee campaign was observed using IMG files to help spread malware.

MS later updated Windows to ensure "mark-of-the-web" (MotW) propagation to files embedded within disk image files downloaded from the Internet, so that protection mechanisms were available for some file types, and that at least warnings would be generated for others.

We then saw a shift to the use of macros in MS OneNote files, as apparently these files weren't considered "MS Office files" (wait...what??).

So, in the face of this constant shifting in and evolution of tactics, what are organizations to do to address these issues and protect themselves? 

Well, the solution for the issue of weaponized Office documents existed well prior to the Microsoft announcement in Feb 2022; in fact, MS was simply implementing it where orgs weren't doing so. And the thing is, the solution was absolutely free. Yep. Free, as in "beer". A GPO, or a simple Registry modification. That's it. 

The issue with the use of disk image files is that when a user double-clicks a received file, it's automatically mounted and its contents are accessible to the user. The fix for this...disabling the automatic mounting of image files when the user double-clicks them...is similarly free. With two simple Registry modifications, users are prevented from automatically mounting 4 file types - ISO, IMG, VHD, and VHDX. However, this does not prevent users from programmatically accessing these files, such as via a legitimate business process; all it does is prevent the files from being automatically mounted via double-clicking. 

And did I mention that it's free?

What about OneNote files? Yeah, what about them?

My point is that we very often say, "...is too expensive..." and "...threat actors are increasing in sophistication...", but even with changes in tactics, is either statement really true? As an incident responder, over the years, I've seen the boots-on-the-ground details of attacks, and a great many of them could have been prevented, or at the very least significantly hampered, had a few simple, free modifications been made to the infrastructure.

The Huntress team posted an article recently that includes PowerShell code you can copy-paste and use immediately, and that will address all three of the situations/conditions discussed in this blog post.

Sunday, March 12, 2023

On Using Tools

I've written about using tools before in this blog, but there are times when something comes up that provokes a desire to revisit a topic, to repeat it, or to evolve and develop the thoughts around it. This is one of those posts. 

When I first released RegRipper in 2008, my intention was that once others saw the value in the tool, it would grow organically as practitioners found that value and sought to expand it. My thought was that once analysts started using it, they'd see the value proposition, and see that the real power of the tool is that it can easily be updated; "easily" by either developing new plugins, or seeking assistance in doing so.

That was the vision, but it's not something that was ever really realized. Yes, over time, some have created their own plugins, and of those, some have shared them. However, for the most part, the "use case" behind RegRipper has been "download and RUNALLTHETHINGS", and that's pretty much it.

On my side, there are a few assumptions I've made with respect to those using RegRipper, specifically around how they were using it. One assumption has been that whoever downloaded and is using the tool has a purposeful, intentional reason for doing so, that they understand their investigative goals, and that they understand there's value in using tools like RegRipper to extract information for analysis, to validate other findings and add context, and to use as pivot points into further analysis. 

Another assumption on my part is that if they don't find what they're looking for, don't find something that "helps", or don't understand what they do find, that they'll ask. Ask me, ask someone else. 

And finally, I assume that when they find something that either needs to be updated in a plugin, or a new plugin needs to be written to address something, that they'll do so (copy-paste is a great way to start), or reach out to seek assistance in doing so.

Now, I'm assuming here, because it's proved impossible to engage others in the "community" in a meaningful conversation regarding tool usage, but it appears to me that most people who use tools like RegRipper assume that the author is the expert, that they've done and seen everything, that they know everything, and that they've encapsulated all of that knowledge and experience in a free tool. The thing is, I haven't found that to be the case in most tools, and that is most definitely NOT the case when it comes to RegRipper.

Why would anyone need to update RegRipper? 

Lina recently tweeted about the need for host forensics, and she's 10,000% correct! SIEMs only collect those data sources that are pointed at them, and EDR tools can only collect and alert on so much. As such, there are going to be analysis gaps, gaps that need to be filled in via host forensics. And as we've seen over time, a lot changes about various endpoint platforms (not just Windows). For example, we've been aware of the ubiquitous Run keys and how they're used for persistence; however, there are keys that can be used to disable the Run key values (Note: the keys and values can be created manually...) without modifying the Run key itself. As such, if you're checking the contents of the Run key and stating that whatever is listed in the values was executed, without verifying/validating that information, then is this correct? If you're not checking to see if the values were disabled (this can be done via reg.exe), and if you're not validating execution via the Shell-Core and Application Event Logs, then is the finding correct? I saw the value in validating findings when determining the "window of compromise" during PCI forensic exams, because the finding was used to determine any regulatory fines levied against the merchant.
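As one sketch of what that validation step could look like: the StartupApproved\Run key stores a small binary value per Run entry, and the convention I've seen in public write-ups is that an even first byte (e.g. 0x02) means the entry is enabled while an odd first byte (0x03) means it was disabled via Task Manager/Settings. Treat that convention as an assumption to verify, not gospel:

```python
def run_value_active(approved_data):
    """
    Given the binary data from a StartupApproved\\Run value (or None if no
    such value exists), report whether the matching Run value would
    actually execute at logon.
    """
    if approved_data is None:
        return True  # no StartupApproved entry: the Run value is active
    # Assumed convention from public write-ups: even first byte = enabled,
    # odd first byte = disabled (remaining bytes hold a disable timestamp)
    return approved_data[0] % 2 == 0

def audit_run_entries(run_value_names, approved_values):
    """Cross-check Run values against StartupApproved before reporting execution."""
    return {
        name: run_value_active(approved_values.get(name))
        for name in run_value_names
    }
```

Even then, "active" only means the value would be processed at logon; actual execution still needs to be validated against the Shell-Core and Application Event Logs.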

My point is that if you're running a tool and expecting it to do everything for you, then maybe there needs to be a re-examination of why the tool is being run in the first place. If you downloaded RegRipper 6 months ago and haven't updated it in any way since then, is it still providing you with the information you need? If you haven't added new plugins based on information you've been seeing during analysis, at what point does the tool cease to be of value? If you look closely at the RegRipper v3.0 distro available on Github, you'll notice that it hasn't been updated in over 2 1/2 yrs. I uploaded a minor update to the main engine a bit ago, but the plugins themselves exist as they were in August 2020. Since then, I've been developing an "internal" custom version of RegRipper, complete with MITRE ATT&CK and category mappings, Analysis Tips, etc. I've also started developing plugins that output in JSON format. However, all of these are things that either I proposed in 2019 and got zero feedback on, or someone close to me asked about. Not a week goes by when I don't see something online, research it, and it ends up in a plugin (or two, or five...).

If you're using a tool, any tool (RegRipper, plaso, etc.), do you understand its strengths and weaknesses, do you understand what it does and does not do, or do you just assume that it gives you what you need?