Saturday, March 25, 2023

Password Hash Leakage

If you've been in the security community for even a brief time, or you've taken training associated with a certification in this field, you've likely encountered the concept of password hashes. The "Reader's Digest" version of password hashes is that passwords are subjected to a one-way cryptographic algorithm for storage, that same algorithm is applied to passwords as they're input, and a comparison is made for authentication. The basic idea is that the password is not stored in its original form. 
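
To make that concrete, here's a minimal Python sketch of the store-a-hash-and-compare model described above. It's purely illustrative; the algorithm (unsalted SHA-256) and the sample password are placeholders, not how Windows actually stores credentials.

import hashlib

def hash_password(password: str) -> str:
    # One-way transform; the original password is never stored.
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

stored_hash = hash_password("Sp4rkleP0ny!")   # created when the password was set

def authenticate(attempt: str) -> bool:
    # The same algorithm is applied to the input, and the two hashes are compared.
    return hash_password(attempt) == stored_hash

print(authenticate("Sp4rkleP0ny!"))    # True
print(authenticate("wrong-password"))  # False

Anything that captures or leaks that stored hash gives a threat actor something to crack or reuse, which is why the leakage described below matters.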

Now, we 'see' password hashes being collected by threat actors all the time; grab a copy of the AD database, or of Registry hives from an endpoint. Or, why bother with hashes at all, when you can use NPPSpy or enable WDigest and collect credentials in plaintext?

Or, if you wanted to maintain unauthenticated persistence, you could enable RDP and Sticky Keys.

Okay, so neither of those last two instances involves password hashes, so what if that's what you were specifically interested in? What if you wanted to get password hashes, or continue to receive password hashes, even across password resets? There are more than a few ways to go about doing this, all of which take advantage of available "functionality"; all you have to do is set up a file or document to attempt to connect to a remote, threat actor-controlled resource.

Collecting hashes is nothing new...check out this InSecure.org article from 1997. Further, hashes can be leaked via an interesting variety of routes and applications; take a look at this Securify article from 2018. Also, consider the approach presented in ACE Responder's tweet regarding modifying Remote Desktop Client .rdp files.

One means of enabling hash leaks across password resets is to modify the iconfilename field in specifically placed LNK/Windows shortcut files, which is similar to what is described in this article, except that you set the IconLocation parameter to point to a threat actor-controlled resource. There's even a free framework for creating shortcuts called "LNKBomb" available online.

Outlook has been a target for NTLM hash leakage attacks; consider this Red Team Notes article from 2018. More recently, Microsoft published this blog article explaining CVE-2023-23397, and how to investigate attempts to exploit the vulnerability. This PwnDefend article shares some thoughts as to persisting hash collection via the Registry, enabling the "long game".

So, What?
Okay, so what's the big deal? Why is this something that you even need to be concerned about?

Well, there's been a great deal of discussion regarding cyber crime, and in particular the ransomware economy, for some time now. This is NOT a euphemism; cyber crime is an economy focused on money. In 2016, the Samas ransomware actors were conducting their own operations, cradle to grave; at the time, they targeted Java-based JBoss CMS systems as their initial access points. Over the years, an economy has developed around initial access, to the point where there are specialists, initial access brokers (IABs), who obtain and sell access to systems and infrastructures. Once initial access is achieved, they will determine what access is available, and to which organization, and it behooves them to retain that access, if possible. Say they sell access, and the threat actor is "noisy", is caught, and the implant or backdoor placed by the IAB (not the initial access point itself) is "burned". NTLM leakage is a means for ensuring later, repeated access, given that one of the most common response and remediation recommendations is a global password change. If one of the routes into the infrastructure used by the IAB requires authentication, then setting up a means for receiving password hashes enables continued access.

What To Do About It
There are a number of ways to address this issue. First, block outbound communications over ports 139 and 445 (because of course you've already blocked inbound communication attempts over those ports!!), and monitor your logs for attempts to do so.

Of course, consider using some means of MFA, particularly for higher privilege access.

If your threat hunting allows for access to endpoints (rather than just log sources sent to a SIEM) and file shares, searching for LNK files in specific locations and checking their iconfilename attributes is a good hunt, and something you may want to run on a regular, repeated basis, much like a security patrol.
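
As a rough sketch of what that hunt might look like (the paths and the simple UNC-string check below are illustrative assumptions; a proper LNK parser that extracts the icon location field would be more precise):

# Flag LNK files in commonly auto-processed locations whose embedded strings
# contain a UNC path (\\server\share), a quick stand-in for checking whether
# the icon location points at a remote, possibly attacker-controlled resource.
import glob

HUNT_PATHS = [
    r"C:\Users\*\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\*.lnk",
    r"C:\Users\*\Desktop\*.lnk",
]

UNC_MARKER = "\\\\".encode("utf-16-le")  # two backslashes, as stored in LNK string data

for pattern in HUNT_PATHS:
    for lnk in glob.glob(pattern):
        try:
            with open(lnk, "rb") as f:
                data = f.read()
        except OSError:
            continue
        if UNC_MARKER in data:
            print(f"[!] Possible remote icon/target path: {lnk}")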

For SOC detections, consider how this activity...either the setting up of these hash leaks, or the leaks themselves being triggered...might surface in your telemetry, and build detections accordingly.

From a DFIR perspective, my recommendation would be to develop an evidence intake process that includes automated parsing and rendering of data sources prior to presenting the information to the DFIR analyst. Think of this as a means of generating alerts, but instead of going to the SOC console, these "alerts" are enriched and decorated for the DFIR analyst. This process should include parsing of LNK files within specific locations/paths in the acquired evidence, as parsing all LNK files might be neither effective nor timely.

The "Why" Behind Tactics

Very often we'll see mention in open reporting of a threat actor's tactics, be they "new" or just what's being observed, and while we may consider how our technology stack might be used to detect these tactics, or maybe how we'd respond to an incident where we saw these tactics used, how often do we consider why the tactic was used?

To see the "why", we have to take a peek behind the curtain of detection and response, if you will.

If you so much as dip your toe into "news" within the cyber security arena, you've likely seen mention that Emotet has returned after a brief hiatus [here, here]. New tactics observed in association with the deployment of this malware include the fact that the lure document is an old-style MS Word .doc file, which presents a message instructing the user to copy the file to a 'safe' location and reopen it. The lure document itself is in excess of 500MB in size (padded with zeros), and when the macros are executed, a DLL that is similarly zero-padded to over 500MB is downloaded.

Okay, why was this approach taken? Why pad out two files to such a size, albeit with zeros? 

Well, consider this...SOC analysts are usually front-line when responding to incident alerts, and they may have a lot of ground to cover while meeting SLAs during their shift, so they aren't going to have a lot of time to invest in investigations. Their approach to dealing with the .doc or even the DLL file will be to first download them from the endpoint...if they can. That's right...does the technology they're using have limits on file sizes for download, and if so, what does it take to change that limit? Can the change be made in a timely manner, such that the analyst can simply reissue the request to download the file, or does the change require some additional action? If additional action is required, it likely won't be followed up on.

Once they have the file, what are they going to do? Parse it? Not likely. Do they have the tools available, and the skills, for parsing and analyzing old-style/OLE format .doc files? Maybe. But it's easier to just upload the file to an automated analysis framework...if that framework doesn't have a file size limit of its own.

Oh, and remember, all of that space full of zeros means the threat actor can change the padding contents (flip a single "0" to a "1") and change the hash without impacting the functionality of the file. So...yeah.

So, what's happening here is that, whether or not it's specifically intended, these tactics are targeting analysts, relying on their lack of experience, and targeting response processes within the security stack. Okay, "targeting" implies intent...let's say "impacting" instead. You have to admit that when looking at these tactics and comparing them to your security stack, in some cases these are the effects we're seeing; this is what we see happening when we peek behind the curtain.

Consider this report from Sentinel Labs, which mentions the use of the "C:\MS_DATA\" folder by threat actors. Now, consider the approach taken by a SOC analyst who sees this for the first time; given that some SOC analysts are remote, they'll likely turn to Google to learn about this folder, and find that the folder is used by the Microsoft Troubleshooting tool (TSSv2), and at that point, perhaps deem it "safe" or "benign". After all, how many SOCs maintain a central, searchable repository of curated, documented intrusion intel? For those that do, how many analysts on those teams turn to that repository first, every time? 

How about DFIR consulting teams? How many DFIR consulting teams have an automated process for parsing acquired data, and automatically tagging and decorating it based on intrusion intel developed from previous engagements?

In this case, an automated process could parse the MFT and automatically tag the folder with a note for analysts, with tips regarding how to validate the use of TSSv2, and maybe even tag any files found within the folder.
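
A hedged sketch of that kind of tagging, assuming the MFT has already been parsed to a CSV with a "FullPath" column (the file and column names here are placeholders for whatever your parser produces):

import csv

NOTE = ("Folder created by the Microsoft Troubleshooting tool (TSSv2); validate "
        "legitimate TSSv2 use before deeming the contents benign.")

with open("mft_parsed.csv", newline="", encoding="utf-8") as infile, \
     open("mft_tagged.csv", "w", newline="", encoding="utf-8") as outfile:
    reader = csv.DictReader(infile)
    fields = list(reader.fieldnames) + ["AnalystNote"]
    writer = csv.DictWriter(outfile, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        path = row.get("FullPath", "").lower()
        # Tag anything in or under C:\MS_DATA\ with the analyst note.
        row["AnalystNote"] = NOTE if path.startswith(r"c:\ms_data") else ""
        writer.writerow(row)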

When seeing tactics listed in open reporting, it's not just a good idea to consider, "does my security stack detect this?", but to also think about, "what happens if we do?"

Thursday, March 16, 2023

Threat Actors Changing Tactics

I've been reading a bit lately on social media about how cyber security is "hard" and "expensive", and about how threat actors are becoming "increasingly sophisticated". 

The thing is, going back more than 20 yrs, in fact going back to 1997, when I left military active duty and transitioned to the private sector, I've seen something entirely different. 

On 7 Feb 2022, Microsoft announced their plans to change how the Windows platform (OS and applications) handled macros in Office files downloaded from the Internet; they were planning to block them, by default. Okay, so why is that? Well, it turns out that weaponized Office docs (Word documents, Excel spreadsheets, etc.) were popular methods for gaining access to systems. 

As it turns out, even after all of the discussion and activity around this one, single topic, weaponized documents are still in use today. In fact, March 2023 saw the return of Emotet, delivered via an older-style MS Word .doc file that was in excess of 500MB in size. This demonstrates that even with documented incidents and available protections, these attacks will still continue to work, because the necessary steps to help protect organizations are never taken. In addition to using macros in old-style MS Word documents, the actors behind the new Emotet campaigns are also including instructions to the recipient for...essentially...bypassing those protection mechanisms.

Following the Feb 2022 announcement from Microsoft, we saw some threat actors shift to using disk image files to deploy their malware, due in large part to the apparent dearth of security measures (at the time) to protect organizations from such attacks. For example, a BumbleBee campaign was observed using IMG files to help spread malware.

MS later updated Windows to ensure "mark-of-the-web" (MotW) propagation to files embedded within disk image files downloaded from the Internet, so that protection mechanisms were available for some file types, and at least warnings would be generated for others.

We then saw a shift to the abuse of MS OneNote files (via embedded files and scripts), as apparently these files weren't considered "MS Office files" (wait...what??).

So, in the face of this constant shifting in and evolution of tactics, what are organizations to do to address these issues and protect themselves? 

Well, the solution for the issue of weaponized Office documents existed well prior to the Microsoft announcement in Feb 2022; in fact, MS was simply implementing it where orgs weren't doing so. And the thing is, the solution was absolutely free. Yep. Free, as in "beer". A GPO, or a simple Registry modification. That's it. 
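
For reference, here's a hedged sketch of that Registry modification in Python; the value name (blockcontentexecutionfrominternet) and the Office 16.0 policy path reflect Microsoft's documented per-application setting, but verify the version path and application list against your own environment (a GPO accomplishes the same thing at scale).

# Set the per-user policy value that blocks macros in Office files downloaded
# from the Internet, for a few common applications.
import winreg

for app in ("Word", "Excel", "PowerPoint"):
    path = rf"Software\Policies\Microsoft\Office\16.0\{app}\Security"
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "blockcontentexecutionfrominternet", 0, winreg.REG_DWORD, 1)
        print(f"[+] Internet macro blocking enforced for {app}")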

The issue with the use of disk image files is that when a user double-clicks a received file, it's automatically mounted and the contents made accessible to the user. The fix for this...disabling the automatic mounting of image files when the user double-clicks them...is similarly free. With two simple Registry modifications, users are prevented from automatically mounting 4 file types - ISO, IMG, VHD, and VHDX. However, this does not prevent users from programmatically accessing these files, such as via a legitimate business process; all it does is prevent the files from being automatically mounted via double-clicking. 
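
A hedged sketch of those two modifications follows; adding an empty ProgrammaticAccessOnly value to the shell 'mount' verb for the two disk-image classes (Windows.IsoFile covers ISO/IMG, Windows.VhdFile covers VHD/VHDX) is one published approach, but the exact mechanism here is an assumption on my part, so validate it before deploying. The change requires administrative rights.

# Hide the double-click "mount" verb for disk image files while leaving
# programmatic mounting (e.g., Mount-DiskImage) available. Requires admin.
import winreg

for cls in ("Windows.IsoFile", "Windows.VhdFile"):
    subkey = rf"{cls}\shell\mount"
    with winreg.CreateKeyEx(winreg.HKEY_CLASSES_ROOT, subkey, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "ProgrammaticAccessOnly", 0, winreg.REG_SZ, "")
        print(f"[+] Double-click mounting disabled for {cls}")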

And did I mention that it's free?

What about OneNote files? Yeah, what about them?

My point is that we very often say, "...security is too expensive..." and "...threat actors are increasing in sophistication...", but even with changes in tactics, is either statement really true? As an incident responder, over the years, I've seen the boots-on-the-ground details of attacks, and a great many of them could have been prevented or at the very least significantly hampered had a few simple, free modifications been made to the infrastructure.

The Huntress team posted an article recently that includes Powershell code you can copy-paste and use immediately, and that will address all three of the situations/conditions discussed in this blog post.

Sunday, March 12, 2023

On Using Tools

I've written about using tools before in this blog, but there are times when something comes up that provokes a desire to revisit a topic, to repeat it, or to evolve and develop the thoughts around it. This is one of those posts. 

When I first released RegRipper in 2008, my intention was that the tool would grow organically as practitioners found value in it and sought to expand it. My thought was that once analysts started using it, they'd see the value proposition, and see that its real power comes from how easily it can be updated; "easily" by either developing new plugins, or seeking assistance in doing so.

That was the vision, but it's not something that was ever really realized. Yes, over time, some have created their own plugins, and of those, some have shared them. However, for the most part, the "use case" behind RegRipper has been "download and RUNALLTHETHINGS", and that's pretty much it.

On my side, there are a few assumptions I've made with respect to those using RegRipper, specifically around how they were using it. One assumption has been that whoever downloaded and is using the tool has a purposeful, intentional reason for doing so, that they understand their investigative goals, and that they understand there's value in using tools like RegRipper to extract information for analysis, to validate other findings and add context, and to use as pivot points into further analysis. 

Another assumption on my part is that if they don't find what they're looking for, don't find something that "helps", or don't understand what they do find, that they'll ask. Ask me, ask someone else. 

And finally, I assume that when they find something that either needs to be updated in a plugin, or a new plugin needs to be written to address something, that they'll do so (copy-paste is a great way to start), or reach out to seek assistance in doing so.

Now, I'm assuming here...it's proved impossible to engage others in the "community" in a meaningful conversation regarding tool usage...but it appears to me that most people who use tools like RegRipper assume that the author is the expert, that they've done and seen everything, that they know everything, and that they've encapsulated all of that knowledge and experience in a free tool. The thing is, I haven't found that to be the case with most tools, and it is most definitely NOT the case when it comes to RegRipper.

Why would anyone need to update RegRipper? 

Lina recently tweeted about the need for host forensics, and she's 10,000% correct! SIEMs only collect those data sources that are pointed at them, and EDR tools can only collect and alert on so much. As such, there are going to be analysis gaps, gaps that need to be filled in via host forensics. And as we've seen over time, a lot changes about various endpoint platforms (not just Windows). For example, we've been aware of the ubiquitous Run keys and how they're used for persistence; however, there are keys that can be used to disable the Run key values (Note: the keys and values can be created manually...) without modifying the Run key itself. As such, if you're checking the contents of the Run key and stating that whatever is listed in the values was executed, without verifying/validating that information, then is this correct? If you're not checking to see if the values were disabled (this can be done via reg.exe), and if you're not validating execution via the Shell-Core and Application Event Logs, then is the finding correct? I saw the value in validating findings when determining the "window of compromise" during PCI forensic exams, because the finding was used to determine any regulatory fines levied against the merchant.
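
To illustrate the point about validation, here's a hedged sketch of a live-system check that lists Run key values alongside their StartupApproved\Run state; the interpretation of the binary data (first byte 0x03 meaning "disabled") is based on published research and limited testing, so treat it as an assumption to verify, and validate actual execution via the Shell-Core and Application Event Logs.

import winreg

BASE = r"Software\Microsoft\Windows\CurrentVersion"

def read_values(subkey):
    # Return all value name/data pairs from a key under HKCU, or {} if absent.
    vals = {}
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, subkey) as key:
            for i in range(winreg.QueryInfoKey(key)[1]):
                name, data, _ = winreg.EnumValue(key, i)
                vals[name] = data
    except OSError:
        pass
    return vals

run = read_values(BASE + r"\Run")
approved = read_values(BASE + r"\Explorer\StartupApproved\Run")

for name, cmd in run.items():
    flag = approved.get(name)
    state = "no StartupApproved entry"
    if isinstance(flag, bytes) and flag:
        state = "DISABLED" if flag[0] == 0x03 else "enabled"
    print(f"{name}: {cmd}  [{state}]")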

My point is that if you're running a tool and expecting it to do everything for you, then maybe there needs to be a re-examination of why the tool is being run in the first place. If you downloaded RegRipper 6 months ago and haven't updated it in any way since then, is it still providing you with the information you need? If you haven't added new plugins based on information you've been seeing during analysis, at what point does the tool cease to be of value? If you look closely at the RegRipper v3.0 distro available on Github, you'll notice that it hasn't been updated in over 2 1/2 yrs. I uploaded a minor update to the main engine a bit ago, but the plugins themselves exist as they were in August 2020. Since then, I've been developing an "internal" custom version of RegRipper, complete with MITRE ATT&CK and category mappings, Analysis Tips, etc. I've also started developing plugins that output in JSON format. However, all of these are things that either I proposed in 2019 and got zero feedback on, or someone close to me asked about. Not a week goes by that I don't see something online, research it, and end up adding it to a plugin (or two, or five...).

If you're using a tool, any tool (RegRipper, plaso, etc.), do you understand its strengths and weaknesses, do you understand what it does and does not do, or do you just assume that it gives you what you need?

Sunday, February 26, 2023

Devices

This interview regarding one of the victims of the University of Idaho killings having a Bluetooth speaker in her room brings up a very important aspect of digital forensic analysis: technology that we know little about is pervasive in our lives. While the interview centers around the alleged killer's smart phone, the same concept applies to Windows systems, and specifically mobile systems such as laptops and tablets. Very often, there are remnants or artifacts left over as a result of prior activity (user interaction, connected devices, etc.) that we may not be aware of, and in more than a few instances, these artifacts may exist well beyond the deletion of applications.

Something I've mentioned previously here in this blog is that where you look for indications of Bluetooth or other connections may depend upon the drivers and/or applications installed. Some laptops or tablets, for example, may come with Bluetooth chipsets and drivers, and their own control applications, while other systems may have to have an external adapter. Or...and this is a possibility...the internal chipset may have been disabled in favor of an external adapter, such as a USB-connected Bluetooth adapter. As such, we can cover a means for extracting the necessary identifying information, just as Brian did here in his blog in 2014, but that specific information may not apply to other systems. By way of example, participants in this analysis test would have found information about connected Bluetooth devices in an entirely different location. The publicly available RegRipper v3.0 includes three plugins for extracting information about Bluetooth-connected devices from the Registry, one of which is specific to certain Broadcom drivers.
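
By way of illustration, on systems using the in-box Microsoft Bluetooth stack, one common location for paired-device information is the BTHPORT\Parameters\Devices key; the path and the "Name" value below are assumptions that hold for that stack, and (as noted above) vendor stacks such as Broadcom's store this data elsewhere.

import winreg

DEVICES = r"SYSTEM\CurrentControlSet\Services\BTHPORT\Parameters\Devices"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DEVICES) as key:
        for i in range(winreg.QueryInfoKey(key)[0]):
            mac = winreg.EnumKey(key, i)  # subkey names are device MAC addresses
            name = ""
            try:
                with winreg.OpenKey(key, mac) as dev:
                    raw, _ = winreg.QueryValueEx(dev, "Name")
                    name = bytes(raw).rstrip(b"\x00").decode("utf-8", "replace")
            except OSError:
                pass
            print(f"{mac}  {name}")
except OSError:
    print("No BTHPORT device entries found (different stack or adapter?)")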

WiFi
Okay, not what we'd specifically consider "devices", but WiFi connections have long been valuable in determining the location of a system at a point in time, often referred to as geolocation. Windows systems maintain a good deal of information about WiFi access points they've connected to, much like smartphones in the "Bluetooth" section above. We "see" this when we have the system (Windows laptop, or a smartphone) away from a WiFi access point for a period of time, and then return...once we're back within range, if the system is configured to do so, it will automatically reconnect to the access point.
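
A quick, hedged sketch of where some of that data lives on a live system; the NetworkList\Profiles key and its ProfileName value are well documented, and geolocation work typically pairs these entries with the gateway MAC addresses recorded under the adjacent Signatures\Unmanaged key.

import winreg

PROFILES = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILES) as key:
    for i in range(winreg.QueryInfoKey(key)[0]):
        guid = winreg.EnumKey(key, i)
        with winreg.OpenKey(key, guid) as prof:
            name, _ = winreg.QueryValueEx(prof, "ProfileName")
            print(f"{guid}  {name}")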

While I've done research into discovering and extracting information from the endpoint, others have used this information to determine the location of systems. I've talked to analysts who've been able to demonstrate that a former employee for their company met with a competitor prior to leaving the company and joining the competitor's team. In a few instances, those orgs have had DLP software installed on the endpoint, and were able to show that during that time, files were copied to USB devices, or sent off of the system via a personal email account.

USB Devices
Speaking of USB devices...

USB devices connected to Windows systems have long been an interest within the digital forensics community; in 2005, Cory Altheide and I co-authored the first peer-reviewed, published paper on the topic. Since then, there has been extensive writing on this topic. For example, Nicole Ibrahim, formerly of G-C Partners, has written about USB-connected devices, and the different artifacts left by their use, based on the device type (thumb drive, external hard drive, smartphone) and protocols used. I've even written several blog posts in the past year, covering artifacts that remain not as a result of USB devices being connected to a Windows system, but of changes in Windows itself (here, and here). Over time, as Windows evolves, the artifacts left behind by different activities can change; we've even seen this between Windows 10 builds. As a result, we need to keep looking at the same things, the same activities, and ensure that our analysis process is keeping up, as well.

To that end, Kathryn Hedley recently shared a very good article on her site, khyrenz.com. She's also shared other great content, such as what USB connections look like with no user logged into the system. While Kathryn's writing covers specifically USB devices, she does address the issue of validation by providing insight into additional data sources.

Saturday, February 25, 2023

Why Write?

I shared yet another post on writing recently; I say "yet another" because I've published blog posts on the topic of "writing" several times. But something I haven't really discussed is why we should write, or what we should write about.

In his book, Call Sign Chaos, Jim Mattis said, "If you haven't read hundreds of books, you are functionally illiterate, and you will be incompetent, because your personal experiences alone aren't broad enough to sustain you." While this is true for the warfighter, it is equally (and profoundly) true for other professions, and there's something else to the quote that's not as obvious: it's predicated on other professionals writing. In his book, Mattis described his reading as he moved into a theatre of operations, going back through history to learn what challenges previous commanders had faced, what they'd attempted in order to overcome those challenges, and what they'd learned.

While the focus of his book was on reading and professional development/preparation, the underlying "truth" is that someone...a previous commander, a historian, an analyst, someone...needs to write. This is what we need more of in cybersecurity...yes, there are books available, and lists available online, but what's missing is the real value, going beyond simple lists and instructions, to the how and the why, and perhaps more importantly, to what was learned.

So, if you are interested in developing content, what are some things you can write about? Here are some ideas...

Book Reviews
With all of the books out there covering topics in DFIR, one thing we see very little of is book reviews.

A book review is not a listing of the chapters and what each chapter contains.

What I mean by a book review is how you found it; was it well written, easy to follow? Was there something that could have made it better, perhaps more valuable, and if so, what was it? What impact did the contents have on your daily work? Is there something you'd like to see; perhaps a deeper explanation, more screen captures, maybe exercises at the end of sections or chapters would be beneficial? 

And, if you found something that could be improved, make clear, explicit recommendations. I've seen folks ask for "more screen captures" without saying of what, or for what reason (i.e., what would be the goal or impact of doing so). 

Conference Talks
Many times, particularly during 'conference season', we'll see messages on social media along the lines of "so-and-so is about to go on stage...", or we'll see a picture of someone on a stage, with the message, "so-and-so talking about this-and-that...", but what we don't see is commentary about what was said. So we know a person is going to talk about something, or did talk about something, but we know little beyond that, like how did what they say impact the listener/attendee? This is a great way to develop and share content, and is similar to book reviews...talk about how what you heard (or read) impacted you, or impacted your approach to analysis.

General Engagement 
Speaking of social media, this is a great way to get started with the habit of writing...articulate your thoughts regarding something you see, rather than just clicking "Like", or some other button offered by the platform. 

Monday, February 20, 2023

WEVTX Event IDs

Now and again, we see online content that moves the community forward, a step or several steps. One such article appeared on Medium recently, titled Forensic Traces of Exploiting NTDS. This article begins developing artifact constellations, and walks through the forensic analysis of different means of credential theft on an Active Directory server.

We need to see more of these sorts of "how to investigate..." articles that go beyond just saying, "...look at the <data source>...". Articles like this can be very useful because they help other analysts understand how to go about investigating these and similar issues.

The sole shortcoming of this article is that the research was clearly conducted by someone used to looking at forensic artifacts in a list; each artifact is presented individually, isolated from others, rather than as part of an artifact constellation. Analysts who come from a background such as this tend to approach analysis in this way, because this is how they were taught. 

Further, about halfway through the article we see a reference to "Event ID 400"; the subsequent images illustrate the event source as being "Kernel-PNP", but the source isn't specified in the text itself. If you Google for "event ID 400", you find event sources such as Powershell, Microsoft-Windows-TerminalServices-Gateway, Performance Diagnostics, Veritas Enterprise Vault, and that's just on the first page.

About a third of the way down the article (sorry, the images aren't numbered for reference) there's an image with the caption "Event ID 4688". The important thing readers need to understand with this image is that these records do not appear in the Security Event Log by default. For these events to appear, successful Process Tracking needs to be enabled, and there's an additional step, a Registry modification, that needs to be made in order for full command lines to appear in the event records. This is important for analysts to understand, so that they do not expect the records to be present by default. Also, you can parse the Security Registry hive using the RegRipper auditpol.pl plugin to determine the audit configuration for the system, validating what you should expect to see in the Security Event Log.
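
On a live system, a quick, hedged check for the command-line piece looks like the sketch below; the ProcessCreationIncludeCmdLine_Enabled value is the documented setting, while the audit policy itself (successful Process Tracking) still has to be confirmed separately, for example via the auditpol.pl output mentioned above.

import winreg

AUDIT = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AUDIT) as key:
        val, _ = winreg.QueryValueEx(key, "ProcessCreationIncludeCmdLine_Enabled")
        print("Full command lines in 4688 events:", "enabled" if val == 1 else "disabled")
except OSError:
    # Value (or key) absent: any 4688 records will not include full command lines.
    print("ProcessCreationIncludeCmdLine_Enabled not set")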

When examining the Windows Event Log as a data source during an investigation, what's actually available in the logs is dependent upon the version of Windows, the installed applications, the configuration of the Security Event Log, etc. When reading articles such as this online...while they are profoundly useful...don't assume that you're going to see the same log entries on the systems you engage with and examine.

Monday, February 13, 2023

Training and CTFs

The military has a couple of adages...one, "you fight like you train", and another being, "the more you sweat in peace, the less you bleed in war." The idea behind these adages is that progressive, realistic training prepares you for the job at hand, which is often one performed under "other than optimal" conditions. You start by learning in the classroom, then in the field, and then under austere conditions, so that when you do have to perform the function(s) or task(s) under similar conditions, you're prepared and it's not a surprise. This is also true of law enforcement, as well as other roles and functions. Given the pervasiveness of this style of training and familiarization, I think it's safe to say that it's a highly successful approach.

The way DFIR CTFs, while fun, are being constructed and presented does those in the field a disservice, as they do not encourage analysts to train the way they should be fighting. In fact, they tend to cement, and even encourage, bad habits.

Let me say right now that I understand the drive behind CTF challenges, particularly those in the DFIR field. I understand the desire to make something available for others to use to practice, and perhaps rate themselves against, and I do appreciate the work that goes into such things. Honestly, I do, because I know that it isn't easy. 

Let me also say that I understand why CTFs are provided in this manner; it's because this is how many analysts are "taught", and it's because this is how other CTFs are presented. I also understand that presenting challenges in this manner provides for an objective measure against which to score individual participants; the time it takes to complete the challenge, the time between answering subsequent questions, and the number of correct responses are all objective measures that can be handled by a computer program, and really provide little wiggle room. So, we have analysts who "come up" in the industry, taking courses and participating in CTFs that are all structured in a similar manner, and they go on to create their own CTFs, based on that same structure.

However, the issue remains...the way DFIR CTFs are presented, they encourage something much less than what we should be doing, IRL. We continue to teach analysts that reviewing individual artifacts in isolation is "sufficient", and there's no direction or emphasis on concepts such as validation, toolmarks, or artifact constellations. In addition, there's no development of incident intelligence to be shared with others, both in the DFIR field, and adjacent to it (SOC, detection engineering, CTI, etc.).

Hassan recently posted regarding CTFs and "deliberate practice"; while I agree with his thoughts in principle, these tend to fall short. Yes, CTFs are great, because they offer the opportunity to practice, but they fall short in a couple of areas. One in particular is that they really aren't necessarily "deliberate practice"; perhaps a different way of saying that is that it's "deliberate practice" in the wrong areas, because we're telling those who participate in these challenges that answering obscure questions, in a manner that isolates that information from the other information needed to "solve" the case, is the standard to strive for, and this should not ever be the case.

Another way that these DFIR CTFs fall short is that they tend to perpetuate the belief that examiners should look at artifacts one at a time, in isolation from other artifacts (particularly others in the constellation). Given that Windows is an operating system, with a lot going on, our old way of viewing artifacts...the way we've always done it...no longer serves us well. It's like trying to watch a rock concert in a stadium by looking through a key hole. We can no longer open one Windows Event Log file in a GUI viewer, search for something we think might be relevant, close that log file, open another one, and repeat. Regardless of how comfortable we are with this approach, it is terribly insufficient and leaves a great many gaps and unanswered questions in even what appears to be the most rudimentary case.

Let's take a look at an example: this CyberDefenders challenge, as Hassan mentioned CyberDefenders in a comment. The first thing we see is that we have to sign up, and then sign in, to work the challenge, and that none of the analyst case notes from how they solved the CTF are available. The same has been true of other CTFs, including (but not limited to) those such as the 2018 DefCon DFIR CTF. Keeping case notes is something that analysts should be deliberately practicing, as well as sharing them.

Second, we see that there are 32 questions to be answered in the CTF, the first of which is, "what is the OS product name?" We already know from one of the tags for the CTF that the image is Windows, so how important is the "OS product name"? This information does not appear to be significant to any of the follow-on questions, and seems to be solely for the purpose of establishing some sort of objective measure. Further, in over 2 decades of DFIR work, addressing a wide range of response scenarios (malware, ransomware, PCI, APT, etc.), I don't think I've ever had a customer ask more than 4 or 5 questions...max. In the early days, there was most often just one question customers were interested in:

Is there malware on this system?

As time progressed, many customers wanted to know:

How'd they get in? 
Who are they?
Are they still in my network?
What did they take?

Most often, whether engaging in PCI forensic exams, or in "APT" or targeted threat response, those four questions, or some variation thereof, were at the forefront of customers' minds. In over two decades of DFIR work, ranging from individual systems up to the enterprise, I never had a case where a customer asked 32 questions (I've seen CTFs with 51 questions), and I've never had a customer (or a co-worker/teammate) ask me for the LogFile sequence number of an Excel spreadsheet. In fact, I can't remember a single case (none stands out in my mind) where the LogFile sequence number of any file was a component or building block of an overall investigation.

Now, I'm not saying this isn't true for others...honestly, I don't know, as so few in our field actually share what they do. But from my experience, in working my own cases, and working cases with others, none of the questions asked in the CTF were pivotal to the case.

So, What's The Answer?
The answer is that forensic challenges need to be adapted, worked, and "graded" differently. CTFs should be more "deliberate practice", aligned to how DFIR work should be done, perpetuating and reinforcing good habits. Analysts need to keep and share case notes, being transparent about their analytic goals and thought processes, because this is how we learn overall. And I don't just mean that this is how that analyst, the one who shares these things, learns; no, I mean that this is how we all learn. In his book, Call Sign Chaos, retired Marine General Jim Mattis said that our own "personal experiences alone are not broad enough to sustain us"; while this thought applies to a warfighter reading, this portion of the quote applies much more broadly to mean that if we're stuck in our own little bubble, not sharing what we've done and what we know with others, then we're not improving, adapting and growing in our profession.

If we're looking to provide others with "deliberate practice", then we need to change the way we're providing that opportunity.

Additional Resources
John Asmussen - Case_notes.py
My 2018 DefCon DFIR CTF write-ups (part 1, part 2)

Monday, February 06, 2023

Why Lists?

So much of what we see in cybersecurity, in SOC, DFIR, red teaming/ethical hacking/pen testing, seems to be predicated on lists. Lists of tools, lists of books, lists of sites with courses, lists of free courses, etc. CD-based distros are the same way, regardless of whether they're meant for red- or blue-team efforts; the driving factor behind them is often the list of tools embedded within the distribution. For example, the Kali Linux site says that it has "All the tools you need". If you go to the SANS SIFT Workstation site, you'll see the description that includes, "...a collection of free and open-source incident response and forensic tools." Here's a Github site that lists "blue team tools"...but that's it, just a list.

Okay, so what's up with lists, you ask? What's the "so, what?" 

Lists are great...they often show us new tools that we hadn't seen or heard about, possibly tools that might be more effective or efficient for us and our workflows. Maybe a data source has been updated and there's a tool that addresses the new format, or maybe you've run across a case that includes the use of a different web browser, and there's a tool that parses the history for you. So, having lists is good, and familiar...because that's the way we've always done it, right? A lot of folks developing these lists came into the industry themselves at one point, looked around, and saw others posting lists. As such, the general consensus seems to be, "share lists"...either share a list you found, or share a list you've added to.

Lists, particularly checklists, can be useful. They can ensure that we don't forget something that's part of a necessary process, and if we intentionally and purposely manage and maintain that checklist, it can be our documentation; rather than writing out each step in our checklist as part of our case notes/documentation, we can just say, "...followed/completed the checklist version xx.xx, as of Y date...", noting any discrepancies or places we diverged. The value of a checklist depends upon how it's used...if it's downloaded and used because it's "cool", and it's not managed and never updated, then it's pretty useless.

Are lists enough?

I recently ran across a specific kind of list...the "cheat sheet". This specific cheat sheet was a list of Windows Event Log record event IDs. It was different from some other similar cheat sheets I'd seen because it was broken down by Windows Event Log file, with the "event IDs of interest" listed beneath each heading. However, it was still just a list, with the event IDs listed along with a brief description of what they meant.

However, even though this cheat sheet was "different", it was still just a list and it still wasn't sufficient for analysis today. 

Why is that?

Because a simple list doesn't give you the how, nor does it give you the why. Great, so I found a record with that event ID, and someone's list said it was "important", but that list doesn't tell me how this event ID is important to, or used in, my investigation, nor how I can leverage that event ID to answer my investigative questions. The cheat sheet didn't tell me anything about how that specific event ID fits into the context of an actual investigation.

We have our lists, we have our cheat sheets, and now it's time to move beyond these and start developing the how and why; how to use the entry in an investigation, and why it's important. We need to focus less on simple lists and more on developing investigative goals and artifact constellations, so that we can understand what that entry means within the overall context of our investigation, and what it means when the entry is absent. 

We need to share more about how the various items on our lists are leveraged to reach or further our investigative goals. Instead of a list of tools to use, talk about how you've used one of those tools as part of your investigative process, to achieve your investigative goals.

Having lists or cheat sheets like those we've been seeing perpetuates the belief that it's sufficient to examine data sources in isolation from each other, and that's one of the biggest failings of these lists. As a community, and as an industry, we need to move beyond these ideas of isolation and sufficiency; while they seem to bring about an immediate answer or immediate findings, the simple fact is that neither serves us well when it comes to finding accurate and complete answers.

Sunday, February 05, 2023

Validating Tools

Many times, in the course of our work as analysts (SOC, DFIR, etc.), we run tools...and that's it. But do we often stop to think about why we're running that tool, as opposed to some other tool? Is it because that's the tool everyone we know uses, and we just never thought to ask about another? Not so much the how, but do we really think about the why?

The big question, however, is...do we validate our tools? Do we verify that the tools are doing what they are supposed to, what they should be doing, or do we simply accept the output of the tool without question or critical thought? Do we validate our tools against our investigative goals?

Back when Chris Pogue and I were working PCI cases as part of the IBM ISS X-Force ERS team, we ran across an instance where we really had to dig in and verify our toolset. Because we were a larger team, with varying skill levels, we developed a process for all of the required searches, scans and checks (search for credit card numbers, scans for file names, paths, hashes, etc.) based on Guidance Software's EnCase product, which was in common usage across the team. As part of the searches for credit card numbers (CCNs), we were using the built-in function isValidCreditCard(). Not long after establishing this process, we had a case where JCB and Discover credit cards had been used, but these weren't popping up in our searches.

Chris and I decided to take a look at this issue, and we went to the brands and got test card numbers...card numbers that would pass the necessary checks (BIN, length, Luhn check), but were not actual cards used by consumers. We ran test after test, and none using the isValidCreditCard() function returned the card numbers. We tried reaching out via the user portal, and didn't get much in the way of a response that was useful. Eventually, we determined that those two card brands were simply not considered "valid" by the built-in function, so we overrode that function with one of our own, one that included 7 regexes in order to find all valid credit card numbers, which we verified with some help from a friend.
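
For anyone who hasn't had to build such a check, here's a simplified, illustrative sketch of the kind of validation involved; the single regex stands in for the seven brand-specific expressions (BIN ranges and lengths) we actually used, which aren't reproduced here.

import re

CANDIDATE = re.compile(r"\b\d{13,19}\b")  # crude carve; real patterns are brand-specific

def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

text = "order ref 4111111111111111 and 1234567890123456"
for match in CANDIDATE.finditer(text):
    print(match.group(), "passes Luhn" if luhn_valid(match.group()) else "fails Luhn")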

We learned a hard lesson from this exercise, one that really cemented the adage, "verify your tools". If you're seeing (or not, as the case may be) something that you don't expect to see in the output of your tools, verify the tool. Do not assume that the tool is correct, that the tool author knew everything about the data they were dealing with and had accounted for edge cases. This is not to say that tool authors aren't smart and don't know what they're doing...not at all. In fact, it's quite the opposite, because what can often happen is that the data changes over time (we see this a LOT with Windows...), or there are edge cases that the tool simply doesn't handle well.

So we're not just asking about the general "verify your tools" adage; what we're really asking about is, "do you verify your tools against your investigative goals?". The flip side of this is that if you can't articulate your investigative goals, why are you running any tools in the first place?

Not long ago, I was working with someone who was using a toolset built out of open source and free tools. This toolset included a data collection component, middleware (parsed the data), and a backend component for engaging with and displaying the parsed data. The data collection component included retrieving a copy of the WMI repository, and I asked the analyst if they saw any use of WMI persistence, to which they said, "no". In this particular case, open reporting indicated that these threat actors had been observed using WMI for persistence. While the data collection component retrieved the WMI repository, the middleware component did not include the necessary code to parse that repository, and as such, one could not expect to see artifacts related to WMI persistence in the backend, even if they did exist in the repository. 

The issue is that we often expect the tool or toolset to be complete in serving our needs, without really understanding those "needs", nor the full scope of the toolset itself. In this case, investigative needs or goals had not been determined or articulated, and the toolset had not been validated against investigative goals, so assumptions were made, including ones that would lead to incomplete or incorrect reporting to customers.

Going Beyond Tool Validation to Process Validation
Not long ago, I included a question in one of my tweet responses: "how would you use RegRipper to check to see if Run key values were disabled?" The point of me asking that question was to determine who was just running RegRipper because it was cool, and who was doing so because they were trying to answer investigative questions. After several days of not getting any responses to the question (I'd asked the same question on LinkedIn), I posed the question directly to Dr. Ali Hadi, who responded by posting a YouTube video demonstrating how to use RegRipper. Dr. Hadi then posted a second YouTube video, asking, "did the program truly run or not?", addressing the issue of the StartupApproved\Run key.

The point is, if you're running RegRipper (or any other tool for that matter), why are you running it? Not how...that comes later. If you're running RegRipper thinking that it's going to address all of your investigative needs, then how do you know? What are your "investigative needs"? Are you trying to determine program execution? If so, the plugin Dr. Hadi illustrated in both videos is a great place to start, but it's nowhere near complete. 

You see, the plugin will extract values from the keys listed in the plugin (which Dr. Hadi illustrated in one of the videos). That version includes the StartupApproved\Run key in the plugin, as well, as it was added before I had a really good chance to conduct some more comprehensive testing with respect to that key and its values. I've since removed the key (and the other associated keys) from the run.pl plugin and moved them to a separate plugin, with associated MITRE ATT&CK mapping and analysis tips.

As you can see from Dr. Hadi's YouTube video, it would be pretty elementary for a threat actor to drop a malware executable in a folder, and create a Run key value that points to it. Then, create a StartupApproved\Run key value that disables the Run key entry so that it doesn't run. What would be the point of doing this? Well, for one, to create a distraction so that the responder's attention is focused elsewhere, similar to what happened with this engagement.

If you are looking to determine program execution and you're examining the contents of the Run keys, then you'd also want to include the Microsoft-Windows-Shell-Core%4Operational Event Log, as well, as the event records indicate when the key contents are processed, as well as when execution of individual programs (pointed to by the values) began and completed. This is a great way to determine program execution (not just "maybe it ran"), as well as to see what may have been run via the RunOnce key, as well.

The investigative goal is to verify program execution via the Run/RunOnce keys, from both the Software and NTUSER.DAT hives. A tool we can use is RegRipper, but even so, this will not allow us to actually validate program execution; for that, we need a process that incorporates the Microsoft-Windows-Shell-Core%4Operational Event Log, as well as the Application Event Log, looking for Windows Error Reporting or Application Popup events. For any specific programs we are interested in, we'd need to look at artifacts that include "toolmarks" of that program, looking for any file system, Registry, or other impacts on the system.
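
As a hedged sketch of that part of the process, the snippet below uses the python-evtx library (an assumption; substitute your parser of choice) to pull the Shell-Core records commonly associated with Run/RunOnce processing; the event IDs (9707/9708) and the crude string filter are assumptions to verify against your own Windows builds.

from Evtx.Evtx import Evtx

LOG = r"C:\Windows\System32\winevt\Logs\Microsoft-Windows-Shell-Core%4Operational.evtx"

with Evtx(LOG) as log:
    for record in log.records():
        xml = record.xml()
        # Rough filter; a fuller implementation would parse the XML properly.
        if ">9707<" in xml or ">9708<" in xml:
            print(xml)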

Conclusion
If you're going to use a tool in SOC or DFIR work, understand the why; what investigative questions or goals will the tool help you answer/achieve? Then, validate that the tool will actually meet those needs. Would those investigative goals be better served by a process, one that addresses multiple aspects of the goal? For example, if you're interested in IP addresses in a memory dump, searching for the IP address (or IP addresses, in general) via keyword or regex searches will not be comprehensive, and will lead to inaccurate reporting. In such cases, you'd want to use Volatility, as well as bulk_extractor, to look for indications of network connections and communications.

Monday, January 30, 2023

Soft Skills: Writing

Writing. 

Like math in middle school, this is one of those subjects that we pushed back on, telling ourselves, "I'll never have to use this...", and then quite shockingly finding that it's amazing how much writing we actually do. However, are we doing it well, given the particular circumstances of the writing? We "write" on social media, not being too overly concerned about things like grammar, spelling, or even word choice, falling back on the old, "...you know what I meant...", or blaming auto-correct for the miscommunication.

I'll be the first to admit, I'm not an "expert" at writing, nor am I "the best". But I will say that I am intentional in my writing, and this is something that's led me to...not been the result of...maintaining a blog, and publishing several books, with others in the hopper.

Writing is a necessary skill that many who need it do not intentionally engage in, and of those who do, very few accept criticism or feedback well. For myself, I have a long history in my career of having to write, which started in college in the mid-to-late '80s. I was an engineering major, and my English professor said that I wrote "like an English major". To this day, I still don't know what that meant, because I was constantly switching verb tenses in my writing, which was reflected in my grades. 

While on active duty, I had to write, and because it was the military, there was feedback...it was part of what we did, so there was no getting away from it, no shrinking back and retreating when someone had recommendations. It started in training, with things like operations orders, and continued out "in the fleet", progressing into JAG manual investigations, fitness reports, pro-con assessments, etc. There was all kinds of writing, and there was a LOT of feedback, whether you wanted or liked it, or not.

Another thing that was clear about writing in that environment was that not everyone received the same feedback, and not everyone took the feedback they received the same way. Very early on in my career, I learned some "truths" about writing fitness reports, particularly from knowledgeable individuals. I reported to my first unit in May 1990, and not long after, a CWO-2 named "Rick" returned from a promotion board at HQMC. He was able (and willing) to act as a confidant and mentor, given not only his longevity in service, but also his very recent experience. He not only shared what he'd learned throughout his career, but also the insights he'd gained from the recent promotion panel. I was able to take what I learned from Rick and use it going forward, but just a couple of years later, I had a SSgt who was applying for the Warrant Officer program, and who had some fitness reports written on him that included questionable statements, statements that stood out as being starkly and glaringly counter to what I'd learned.

What I learned on active duty served me very well in the private sector, the biggest lesson being, "...don't get butt hurt when someone says something...". Look beyond the "how" of what's said, to the "what". Don't get so wrapped up in the "you were mean to me" emotional response that you miss the gem hidden beyond it, the one that will get you over that hump and allow you to be a better writer. Look beyond your own initial, visceral, emotional response, and closely examine the "what".

Now, if you're posting to social media, you may not care about grammar, spelling, punctuation, etc. It may not matter, and that's fine. If you're not trying to convey a thought or idea, and you're just "sh*t posting", then it really doesn't matter if the reader understands what you're trying to say. 

But what about if you're filling out a ticket, or reporting on an investigation? What if you're actually trying to convey something, because it's "important"? Now, I put the word "important" in quotes because, throughout the past two decades, I've talked to more than a few in the DFIR community who haven't really grasped how important their communication is, how what they are sharing in a ticket or in a report is actually used by someone else to make a decision, to commit resources (or not), or to levy a fine or punishment. Many analysts never see what's done with their work; they never see corporate counsel, HR, or a regulatory body using what they've written to make decisions.

I've also seen far too many times how a simple, "...what does this mean?" or "...can you clarify which version of Windows you're working with..." is wildly misinterpreted and internalized as, "I'm being called out unfairly."

What Can I Do?
So, what? So, what does this all mean to you, the reader, and what can you do? What I'm going to share here are some of the lessons I've learned over the past three decades...

The first step is to recognize that we can all get better at communicating, and in particular, writing. So, start writing. Comment on posts (Twitter, LinkedIn) rather than simply clicking "Like". Did you like a book you read? Write a review. Ask someone a question about a book they read or about a post they wrote or recommended. 

To get better at writing, it's best to read. If you read something that you enjoyed reading, consider why you enjoyed it. Was it the content itself, or the writing style? If it was the writing style, try emulating that style. Is it more formal, clinical, or perhaps more conversational? Consider what works for you, what do you enjoy, and how can you make that part of how you write.

Do not assume any response or feedback you receive is intended to be negative. Yes, I get it...this is the Internet, and there is a lot of negativity, and when you encounter that, the best thing to do is ignore it. But when you receive feedback on something, particularly when it's sought out, don't immediately assume that it's negative, or that you've done something profoundly wrong. Instead, recognize the negative feelings you're having, take a deep breath...and look beyond those feelings and really try to see the "what" beyond the "how". Look for what's being said, beyond how it's being said, or how it makes you feel.

Friday, January 27, 2023

Updates, Compilation

Thoughts on Detection Engineering
I read something online recently that suggested that the role of detection engineering is to reduce the false positive (FP) alerts sent to the SOC. In part, I fully agree with this; however, "cyber security" is a team sport, and it's really incumbent upon SOC and DFIR analysts to support the detection engineering effort through their investigations. This is something I addressed a bit ago in this blog, first here, and then here.

From the second blog post linked above, the most important value-add is the image to the right. This is something I put together to illustrate what, IMHO, should be the interaction between the SOC, DFIR, threat hunting, threat intel, and detection engineering. As you see from the image, the idea is that the output of DFIR work, the DFIR analysis, feeds back into the overall process, through threat intel and detection engineering. Then, both of those functions further feed back into the overall process at various points, one being back into the SOC through the development of high(er) fidelity detections. Another feedback point is that threat intel, or gaps identified by detection engineering, serve to inform what other data sources may need to be collected and parsed as part of the overall response process.

The overall point here is that the SOC shouldn't be inundated or overwhelmed with false positive (FP) detections. Rather, the SOC should be collecting the necessary metrics (through an appropriate level of investigation) to definitively demonstrate that the detections are FPs, and then feed that directly into the DFIR cycle, to collect and analyze the information needed to determine how best to address those FPs.

One example of the use of such a process, although not related to false positives, can be seen here. Specifically, Huntress ThreatOps analysts were seeing a lot of malware (in particular, but not solely restricted to, Qakbot) on customer systems that seemed to originate from phishing campaigns employing disk image file attachments. One of the things we did was create an advisory for customers, providing a means to prevent users from simply double-clicking ISO, IMG, or VHD files and automatically mounting them. Users are still able to access the files programmatically; they just can't mount them by double-clicking them.
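
The Huntress advisory itself isn't reproduced here, but one commonly described way to get that behavior...and this is my assumption, not a quote from the advisory...is to add an empty "ProgrammaticAccessOnly" value under the "mount" verb for the disk image ProgIDs, which hides the verb from Explorer while leaving it available programmatically. A minimal sketch in Python, using the standard winreg module (the ProgID names are assumptions based on a default Windows 10 install):

import winreg

# Sketch only: hide the "mount" verb from the Explorer double-click/context
# menu path by adding an empty "ProgrammaticAccessOnly" value under each disk
# image ProgID. The ProgID names are assumptions, not taken from the advisory.
for progid in ("Windows.IsoFile", "Windows.VhdFile"):
    try:
        key = winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                             progid + r"\shell\mount",
                             0, winreg.KEY_SET_VALUE)
    except OSError:
        continue  # ProgID or verb not present on this system
    with key:
        winreg.SetValueEx(key, "ProgrammaticAccessOnly", 0, winreg.REG_SZ, "")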

While this specific event wasn't related to false positives, it does illustrate how taking a deeper look at an issue or event can provide something of an "upstream remediation", approaching and addressing the issue much earlier in the attack chain.

Podcasts
If you're into podcasts, Zaira provided me the wonderful opportunity to appear on the Future of Cyber Crime podcast! It was a great opportunity for me to engage with and learn from Zaira! Thank you so much!

Recycle Bin Persistence  
D1rkMtr recently released a Windows persistence mechanism (tweet found here) based on the Recycle Bin. This one is pretty interesting, not just in its implementation, but also in that you have to wonder how someone on the DFIR side of that persistence mechanism would even begin to investigate it.

I know how I would...I created a RegRipper plugin for it, one that will be run on every investigation automatically, and provide an analysis tip so I never forget what it's meant to show.

recyclepersist v.20230122
(Software, USRCLASS.DAT) Check for persistence via Recycle Bin
Category: persistence (MITRE T1546)

Classes\CLSID\{645FF040-5081-101B-9F08-00AA002F954E}\shell\open\command not found.
Classes\Wow6432Node\CLSID\{645FF040-5081-101B-9F08-00AA002F954E}\shell\open\command not found.

Analysis Tip: Adding a \shell\open\command value to the Recycle Bin will allow the program to be launched when the Recycle Bin is opened. This key path does not exist by default; however, the \shell\empty\command key path does.

Ref: https://github.com/D1rkMtr/RecyclePersist
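
If you don't run RegRipper, the same check is easy to script against offline hives; here's a minimal sketch using the python-registry module (an assumption that it's installed), pointed at either a Software hive or a user's UsrClass.dat:

from Registry import Registry  # python-registry (assumed installed)

RECYCLE_BIN = "{645FF040-5081-101B-9F08-00AA002F954E}"

# The Software hive carries the Classes\ prefix; a user's UsrClass.dat is
# already rooted at HKCU\Software\Classes, so the CLSID path is relative
PATHS = {
    "software": [r"Classes\CLSID\%s\shell\open\command" % RECYCLE_BIN,
                 r"Classes\Wow6432Node\CLSID\%s\shell\open\command" % RECYCLE_BIN],
    "usrclass": [r"CLSID\%s\shell\open\command" % RECYCLE_BIN],
}

def check_recycle_bin(hive_path, hive_type="software"):
    """Report any shell\\open\\command entry added beneath the Recycle Bin CLSID."""
    reg = Registry.Registry(hive_path)
    for path in PATHS[hive_type]:
        try:
            key = reg.open(path)
        except Registry.RegistryKeyNotFoundException:
            print("%s not found." % path)
            continue
        for val in key.values():
            print("%s -> %s" % (path, val.value()))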

Plugins
Speaking of RegRipper plugins, I ran across this blog post recently about retrieving Registry values to decrypt files protected by DDPE. For me, while the overall post was fascinating in the approach taken, the biggest statement from the post was:

I don’t have a background in Perl and it turns out I didn’t need to. If the only requirement is a handful of registry values, several plugins that exist in the GitHub repository may be used as a template. To get a feel for the syntax, I found it helpful to review plugins for registry artifacts I’m familiar with. After a few moments of time and testing, I had an operational plugin.

For years, I've been saying that if there's a plugin that needs to be created or modified, it's as easy as either creating it yourself, by using copy-paste, or by reaching out and asking. Providing a clear, concise description of what you're looking for, along with sample data, has regularly resulted in a working plugin being available in an hour or so.

However, taking the reins of the DIY approach has been something that Corey Harrell started doing years ago, and it's what led to such tools as auto_rip.

Now, this isn't to say that it's always that easy...talking through adding JSON output took some discussion, but the person who asked about that was willing to discuss it, and I think we both learned from the engagement.

LNKs
Anyone who's followed me for even a short while will know that I'm a huge proponent of making the most of what's available, particularly when it comes to file metadata. One of the richest and yet largely untapped (IMHO) sources of such metadata is LNK files. Cisco's Talos team recently published a blog post titled, "Following the LNK Metadata Trail".

The article is interesting, and while several LNK builders are identified, the post falls just short of identifying toolmarks associated with these builders. At one point, the article turns to Qakbot campaigns and states that there was no overlap in LNK metadata between campaigns. This is interesting, when compared to what Mandiant found regarding two Cozy Bear campaigns separated by 2 years (see figs 5 & 6). What does this say to you about the Qakbot campaigns vs the Cozy Bear campaigns?
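
To make the "toolmarks" idea a bit more concrete, the machine ID and MAC address embedded in an LNK file's tracker data block are two of the fields worth comparing across samples. Here's a minimal sketch (offsets follow the MS-SHLLINK documentation for the TrackerDataBlock; this is not the Talos tooling):

import struct
import sys
import uuid

TRACKER_SIG = struct.pack("<I", 0xA0000003)  # TrackerDataBlock signature

def lnk_tracker_info(path):
    """Pull the NetBIOS machine ID and MAC address from an LNK file's
    Distributed Link Tracker extra data block (MS-SHLLINK 2.5.10)."""
    data = open(path, "rb").read()
    idx = data.find(TRACKER_SIG)
    if idx < 4:
        return None
    block = data[idx - 4:]  # the block starts with its 4-byte size field
    machine_id = block[16:32].split(b"\x00", 1)[0].decode("ascii", "replace")
    droid_file = uuid.UUID(bytes_le=block[48:64])  # DroidFileIdentifier
    # The node field of a version-1 UUID is the MAC of the host that created it
    mac = "%012x" % droid_file.node if droid_file.version == 1 else None
    return machine_id, mac

if __name__ == "__main__":
    print(lnk_tracker_info(sys.argv[1]))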

Updates to MemProcFS-Analyzer 
Evild3ad79 tweeted that MemProcFS-Analyzer has been updated to version 0.8. Wow! I haven't had the opportunity to try this yet, but it does look pretty amazing with all of the functionality provided in the current version! Give it a shot, and write a review of your use of the tool!

OneNote Tools
Following the prevalence of malicious OneNote files we've seen through social media over the past few weeks, both Didier Stevens and the Volexity crew have released tools for parsing those OneNote files.

Addendum, 30 Jan: Matthew Green added a OneNote parser/detection artifact to Velocidex.

Sunday, January 15, 2023

Wi-Fi Geolocation, Then and Now

I've always been fascinated by the information maintained in the Windows Registry. But in order to understand this, to really get a view into this, you have to know a little bit about my background. The first computer I remember actually using was a Timex-Sinclair 1000, just like the one in the image shown to the right. You connected it to the TV, programs were created via the keyboard and usually copied from "recipes" in the manual or in a magazine, and the "programs" could be saved to or loaded from a tape in a tape recorder. Yes, you read that right...a tape recorder. I was writing BASIC programs on this system, and then on an Apple IIe. After that, it was the Epson QX-10, and then for a very long time, in high school and then in college (I started college in August, 1985), the TRS-80.

The point of all of this is that the configuration of these systems, particularly as we moved to systems running MS-DOS, was handled through configuration files, particularly autoexec.bat and a myriad of *.ini files. Even when I started using Windows 3.1 or Windows for Workgroups 3.11, the same held true...configuration files. We started to see the beginnings of the Registry with Windows 95, and files such as system.dat. 

Even from the very beginning of my experience with the Windows Registry, the amount and range of information stored in this data source has been absolutely incredible. In 2005, Cory Altheide and I published the first paper outlining artifacts associated with USB devices being connected to Windows (Windows XP) systems. What we were looking at, at the time, was commonalities across systems when the same device was connected to multiple systems; say, to run programs from the thumb drive, or to copy files from systems to then take back to a central computer system.

From there, this topic has continued to be explored and unraveled, even as Windows itself continued to evolve and recognize different types of devices (thumb drives, digital cameras, smart phones) based on the protocol used.

In 2009, I wrote a blog post about another artifact stored within the Windows Registry; specifically, MAC addresses of wireless access points that a Windows system had connected to. By tracking this information and mapping the geo-location of those wireless access points based on data recorded in online databases, the idea was that an analyst could track the movements of that system, and hence, the owner. 

Why was this interesting? I'd heard more than a few stories from analysts and investigators about a (former) employee of a company who, usually after the fact, was found to have visited a competitor's offices prior to resigning and accepting employment with that competitor. In one instance, not only did the employee connect their work computer to the Wi-Fi system at a competitor's location, but they also connected to a Starbucks store Wi-Fi system that morning, next to or close to the competitor's location. With the time stamps of the connections, analysts were then able to use other timeline information to illustrate applications opened and files accessed until the system was shut down again.

I updated the tool I wrote in 2011, and as you can see from the post and comments, there was still interest in this topic at the time. I remember working on the tool, and taking the lat/long coordinates returned by the online database to populate a Google Map URL. So, over the course of about 2 yrs, the interest...or at least, my interest...in moving this forward, or at least revisiting it, was still there.
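
For anyone who wants to revisit this line of research, the wireless artifacts are still sitting in the Software hive. Here's a minimal sketch using the python-registry module (an assumption that it's installed); the actual geolocation lookup is left out, since it depends on whichever online database (WiGLE, etc.) you have access to:

from Registry import Registry  # python-registry (assumed installed)

NETLIST = r"Microsoft\Windows NT\CurrentVersion\NetworkList\Signatures\Unmanaged"

def wap_macs(software_hive):
    """List the SSID and default gateway MAC for each network profile
    recorded in an offline Software hive."""
    reg = Registry.Registry(software_hive)
    results = []
    for sub in reg.open(NETLIST).subkeys():
        vals = {v.name(): v.value() for v in sub.values()}
        mac = vals.get("DefaultGatewayMac")
        if mac is not None:
            results.append((vals.get("FirstNetwork"), mac.hex(":")))
    return results

def maps_url(lat, lon):
    """Build the familiar Google Maps URL from a lat/long pair returned by
    whatever geolocation database you query (lookup not shown here)."""
    return "https://www.google.com/maps?q=%s,%s" % (lat, lon)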

I recently ran across this tweet (I saw it on 15 Jan 2023), which led me to this Github repository.

This is what I love, truly love to see...how something that was of interest at one point is once again on the forefront of someone's mind, to the point where they create a tool, and post it on Github. This truly shows that no matter how much work and effort is put into something at one point, there will always be growth, and different aspects of the early project (the platform, the Registry, the online databases, etc.) will be extended. This also shows that nothing ever really goes away...

Saturday, December 31, 2022

Persistence and LOLBins

Grzegorz/@0gtweet tweeted something recently that I thought was fascinating, suggesting that a Registry modification might be considered an LOLBin. What he shared was pretty interesting, so I tried it out.

First, the Registry modification:

reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\Utilities\query" /v LOLBin /t REG_MULTI_SZ /d 0\01\0LOLBin\0calc.exe

Then the command to launch calc.exe:

query LOLBin

Now, I've tried this on a Windows 10 system and it works great, even though Terminal Services isn't actually running on this system. Running just the "query" command on both Windows 10 and Windows 11 systems (neither with Terminal Services running) results in the same output on both:

C:\Users\harlan>query
Invalid parameter(s)
QUERY { PROCESS | SESSION | TERMSERVER | USER }

Running the "query" command with different parameters (i.e., "process", "user", etc.) proxies that command to the appropriate entry based on the value in the Registry, as illustrated in figure 1.

Fig 1: query key values

As such, running "query user" runs quser.exe, and you see the same output as if you simply ran "quser". 

Note that the Utilities key has two other subkeys in addition to "query": "change" and "reset", as illustrated in fig. 2.

Fig. 2: Utilities subkeys

So, I thought, what if I change the key path from "query", and make the same modification (via the 'reg add' command above) to the "change" subkey...would that have the same effect? Well, I tried it with an elevated command prompt, for both the "change" and "reset" subkeys, and got "Access is denied." both times. Okay, so we can only use this...at an Admin level, anyway...with the "query" subkey.

So what?
Part of what makes this particular persistence so insidious (IMHO) is how it can be launched. When we use the Run keys (or a Scheduled Task, or a Windows service) for persistence, an analyst may not have any trouble seeing the launch mechanism for the program/malware. In fact, even outside of Registry analysis, some SOC consoles will allow the analyst to see notable events, and even provide enough process lineage to allow the analyst to deduce that a notable event occurred.

To Grzegorz's point, this is an interesting LOLBin, because query.exe exists on the system by default. IMHO, this is important because when I started learning computers over 40 yrs ago (yes, circa 1982), all we had was the command line; as such, understanding which commands were available on the system, as well as things like STDOUT, STDERR, file redirection, etc., was all just part of what we learned. Now, 40 yrs later, we have entire generations of analysts (SOC, DFIR, etc.) who "grew up" without ever touching a command line. So, when I saw Grzegorz's tweet, the first thing I did was go to the command prompt and type "query", and I saw the response listed above. Easy peasy...but how many analysts do that? What's going to happen if a SOC analyst sees telemetry for "query user"? Or, what happens if a DFIR analyst sees a Prefetch file for query.exe? What assumptions will they make, and how will those assumptions drive the rest of their analysis and response?

What is a way to use this? Well, let's say you gain access to a system...and this is just hypothetical...and run a script that enables RDP (if it's not already running), enables StickyKeys, writes a Trojan to an alternate data stream (thanks to Dr. Hadi for some awesome research!!) and then creates a value beneath the "query" key for the Trojan. That way, if your activity on other systems gets discovered, you have a way back into the infrastructure that doesn't require authentication. Connect to the system via the Remote Desktop Client, access StickyKeys so that you get a System-level command prompt, and you type "query LOLBin". Boom, you're back in!

As you might expect, I did write a RegRipper plugin for this persistence mechanism, the output of which makes it pretty straightforward to see any changes that may have been made. For example, the "normal" output looks like the following:

utilities v.20221231
(System) Get TS Utilities subkey values
Category: persistence - T1546

ControlSet001\Control\Terminal Server\Utilities\change
LastWrite time: 2018-04-12 09:20:04Z
logon           0 1 LOGON chglogon.exe
port            0 1 PORT chgport.exe
user            0 1 USER chgusr.exe
winsta          1 WINSTA chglogon.exe

ControlSet001\Control\Terminal Server\Utilities\query
LastWrite time: 2018-04-12 09:20:04Z
appserver       0 2 TERMSERVER qappsrv.exe
process         0 1 PROCESS qprocess.exe
session         0 1 SESSION qwinsta.exe
user            0 1 USER quser.exe
winsta          1 WINSTA qwinsta.exe

ControlSet001\Control\Terminal Server\Utilities\reset
LastWrite time: 2018-04-12 09:20:04Z
session         0 1 SESSION rwinsta.exe
winsta          1 WINSTA rwinsta.exe

Analysis Tip: The "query" subkey beneath "\Terminal Server\Utilities" can be used for persistence. Look for unusual value names.

Ref: https://twitter.com/0gtweet/status/1607690354068754433
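
If RegRipper isn't handy, the same check is simple to script against an offline System hive. The sketch below uses the python-registry module (an assumption that it's installed, and that it returns REG_MULTI_SZ data as a list of strings), and simply flags any value whose final element isn't one of the executables shown in the default output above:

from Registry import Registry  # python-registry (assumed installed)

# Executables launched by the default "query" entries shown above
DEFAULT_EXES = {"qappsrv.exe", "qprocess.exe", "qwinsta.exe", "quser.exe"}

def check_query_utilities(system_hive, controlset="ControlSet001"):
    """Flag non-default values under Terminal Server\\Utilities\\query."""
    reg = Registry.Registry(system_hive)
    key = reg.open(controlset + r"\Control\Terminal Server\Utilities\query")
    for val in key.values():
        entries = val.value()  # REG_MULTI_SZ comes back as a list of strings
        exe = entries[-1].lower() if entries else ""
        if exe not in DEFAULT_EXES:
            print("Suspicious entry: %s -> %s" % (val.name(), exe))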

Keeping Grounded

As 2022 comes to a close, I reflect back over the past year, and the previous years that have gone before. I know we find it fascinating to hear "experts" make predictions for the future, but I tend to believe that there's more value in reflecting on and learning from the past.

Years ago, I remember hearing about something in legal circles referred to as "the CSI effect". In short, the unrealistic portrayal of "forensics" on TV shows had influenced public opinion. People would watch an hour-long crime drama, and what they saw set the expectation in their minds of what "forensics" should be, and this unrealistic expectation made it difficult for prosecutors to convince some juries of their evidence.

Over the holiday season, a "bomb cyclone" across the US combined with the number of folks wanting to travel to cause travel delays with the airlines, as one might expect. Planes needed to be deiced, but at some locations, travel was simply impossible. However, one airline in particular experienced heavier than usual delays and cancellations, to the point where the Transportation Secretary took notice. For several days, the evening national news covered this story, focusing on the failures of the airline, and how stranded passengers were standing in long lines just to seek assistance from the airline's customer service. As each day went by and media reports highlighted how the cascading failures were snowballing and impacting travelers, all of this served to create a sense of negativity toward the airline's management. Everyone I spoke with over the holidays had the same negative perspective of the airline's management.

On the morning of 28 Dec 2022, I saw the following post on LinkedIn:

[Erin's LinkedIn post, not reproduced here]

Erin's message served as a stark reminder that there's often more to the story; regardless of what we see being reported in the media, there are often stories that are not covered and reported, and that do not make it into the public eye. News outlets have a limited amount of time to cover a hand-picked menu of the events of the day, so we have to be conscious of "collection bias", and ask whether what we're seeing and hearing is playing into a narrative that we assume is correct.

The point here is to remain grounded, and as we roll over into the New Year, this is a good opportunity to make a resolution to remain grounded, and to seek out accountability partners and mentors to help us do so. Don't be so focused on the negative aspects of an event that you lose sight of the positive things that happen, and of the fact that the folks who make those positive things happen need our support more than someone we believe is to blame needs our anger. 

Sunday, December 25, 2022

Why I love RegRipper

Yes, yes, I know...you're probably thinking, "you wrote it, dude", and while that's true, that's not the reason why I really love RegRipper. Yes, it's my "baby", but there's so much more to it than that. For me, it's about flexibility and utility. At the beginning of 2020, there was an issue with the core Perl module that RegRipper is built on...all of the time stamps were coming back as all zeros. So, I tracked down the individual line of code in the specific module, and changed it...then recompiled the EXEs and updated the Github repo. Boom. Done. I've written plugins during investigations, based on new things I found, and I've turned around working plugins in under an hour for folks who've reached out with a concise request and sample data. When I've seen something on social media, or something as a result of engaging in a CTF, I can tweak RegRipper; add a plugin, add capability, extend current functionality, etc. Updates are pretty easy. Yes, yes...I know what you're going to say..."...but you wrote it." Yes, I did...but more importantly, I'm passionate about it. I see far too few folks in the industry who know anything about the Registry, so when I see something on social media, I'll try to imagine how what's talked about could be used maliciously, and write a plugin.

And I'm not the only one writing plugins. Over the past few months, some folks have reached out with new plugins, updates, fixes, etc. I even had an exchange with someone the other day that resulted in them submitting a plugin to the repo. Even if you don't know Perl (a lot of folks just copy-paste), getting a new plugin is as easy as sending a clear, concise description of what you're looking for, and some sample data.

Not long ago, a friend asked me about JSON output for the plugins, so I've started a project to create JSON-output versions of the plugins where it makes sense to do so. The first was for the AppCompatCache...I still have a couple of updates to do on what information appears in the output, but the basic format is there. Here's an excerpt of what that output currently looks like:

{
  "pluginname": "appcompatcache_json",
  "description": "query\\parse the appcompatcache\\shimcache data source",
  "key": ".ControlSet001\\Control\\Session Manager.",
  "value": "AppCompatCache",
  "LastWrite Time": "2019-02-15 14:01:26Z",
  "members": [
    {
      "value": "C:\\Program Files\\Puppet Labs\\Puppet\\bin\\run_facter_interactive.bat",
      "data": "2016-04-25 20:19:03"
    },
    {
      "value": "C:\\Windows\\System32\\FodHelper.exe",
      "data": "2018-04-11 23:34:32"
    },
    {
      "value": "C:\\Windows\\system32\\regsvr32.exe",
      "data": "2018-04-11 23:34:34"
    },

Yeah, I have some ideas as to how to align this output with other tools, and once I settle on the basic format, I can continue creating plugins where it makes sense to do so.
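
As one example of what that alignment might look like, here's a minimal sketch (assuming the output above is valid JSON) that converts the "members" list into the five-field TLN format (time|source|host|user|description) for inclusion in a timeline:

import json
from datetime import datetime, timezone

def appcompat_json_to_tln(json_file, host="", user=""):
    """Convert appcompatcache_json output into TLN lines for a timeline."""
    with open(json_file) as fh:
        data = json.load(fh)
    for entry in data.get("members", []):
        ts = datetime.strptime(entry["data"], "%Y-%m-%d %H:%M:%S")
        epoch = int(ts.replace(tzinfo=timezone.utc).timestamp())
        print("%d|AppCompatCache|%s|%s|%s" % (epoch, host, user, entry["value"]))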

Recently, we've been seeing instances of "Scheduled Task abuse", specifically of the RegIdleBackup task. To be clear, while some have been seeing it recently, it's not "new"...Fox IT, part of the NCC Group, reported on it last year. It's also been covered more recently here in reference to GraceWire/FlawedGrace. Of all the reporting that's out there on this issue, what hasn't been addressed is how the modification to the task is performed; is the task XML file being modified directly, or is the API being used, such that the change would also be reflected in the Task subkey entry in the Registry?

From the RegIdleBackup task XML file:

<Actions Context="LocalSystem">
    <ComHandler>
      <ClassId>{CA767AA8-9157-4604-B64B-40747123D5F2}</ClassId>
    </ComHandler>
  </Actions>

So, we see the COM handler listed in the XML file, and that's the same CLSID that's listed in the Task entry in the Software hive. About 2 years ago, I updated a plugin that parsed the Scheduled Tasks from the Software hive, so I went back to that plugin recently and added additional code to look up CLSIDs within the Software hive and report the DLL they point to; here's what the output looks like now (from a sample hive):

Path: \Microsoft\Windows\Registry\RegIdleBackup
URI : Microsoft\Windows\Registry\RegIdleBackup
Task Reg Time : 2020-09-22 14:34:08Z
Task Last Run : 2022-12-11 16:17:11Z
Task Completed: 2022-12-11 16:17:11Z
User   : LocalSystem
Action : {ca767aa8-9157-4604-b64b-40747123d5f2} (%SystemRoot%\System32\regidle.dll)

The code I added does the lookup regardless of whether the CLSID is the only entry listed in the Action field, or if arguments are provided, as well. For example:

Path: \Microsoft\Windows\DeviceDirectoryClient\HandleWnsCommand
URI : \Microsoft\Windows\DeviceDirectoryClient\HandleWnsCommand
Task Reg Time : 2020-09-27 14:34:08Z
User   : System
Action : {ae31b729-d5fd-401e-af42-784074835afe} -WnsCommand (%systemroot%\system32\DeviceDirectoryClient.dll)

The plugin also reports if it's unable to locate the CLSID within the Software hive.
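
The lookup itself is straightforward to reproduce outside of RegRipper; here's a minimal sketch (python-registry assumed installed, and this is not the plugin code itself) that resolves a ComHandler CLSID to the InprocServer32 DLL it points to in an offline Software hive:

from Registry import Registry  # python-registry (assumed installed)

def resolve_clsid(software_hive, clsid):
    """Resolve a scheduled task ComHandler CLSID to the DLL it points to."""
    reg = Registry.Registry(software_hive)
    for base in (r"Classes\CLSID", r"Classes\Wow6432Node\CLSID"):
        try:
            key = reg.open("%s\\%s\\InprocServer32" % (base, clsid))
        except Registry.RegistryKeyNotFoundException:
            continue
        for val in key.values():
            if val.name() == "(default)":
                return val.value()
    return None  # CLSID not found in this hive

# e.g., resolve_clsid("SOFTWARE", "{CA767AA8-9157-4604-B64B-40747123D5F2}")
# should point to %SystemRoot%\System32\regidle.dll on a default install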

So, what this means is that the next time I see a system that's been subject to an attack that includes Scheduled Task abuse, I can check to see if the issue impacts the Software hive as well as the Task XML file, and get a better understanding of the attack, beyond what's available in open reporting.

Finally, I can quickly create plugins based on testing scenarios; for some things, like parsing unallocated space within hive files, this is great for doing tool comparison and validation. However, when it comes to other aspects of DFIR, like extracting and parsing specific data from the Registry, there really aren't many other tools to use for comparison.

Interestingly enough, if you're interested in running something on a live system, I ran across reg_hunter a bit ago; it seems to provide a good bit of the same functionality provided by RegRipper, including looking for null bytes, RLO control characters in key and value names, looking for executables in value data, etc.