Saturday, August 27, 2022

When Windows Lies

"When Windows Lies"...what does that really mean? 

Mari had a fascinating blog post on this topic some years ago; she talked about the process DFIR analysts had been using to that point to determine the installation date of the operating system. In short...and this has happened several more times since then...while DFIR analysts had been using one process to assess the installation date, Windows developers had changed how this information is stored and tracked in Windows systems, reaffirming the notion that operating systems are NOT designed and maintained with forensic examiners in mind. ;-)

The take-away from Mari's blog article...for me, anyway...is the need for analysts to keep up-to-date with changes to the operating system; storage locations, log formats, etc., can (and do) change without notice. Does this mean that every analyst has to invest in research to keep up on these things? No, not at all...this is why we have a community, where this research and these findings are shared. But as she mentioned in the title and content of the article, if we just keep following our same methods, we're going to end up finding that "Windows lies". This idea or concept is not new; I've talked about the need for validation previously.

Earlier this year, a researcher used a twist on that title to talk about the "lies" analysts' tools will "tell" them, specifically when it comes to USB device serial numbers. I understand the author presented at the recent SANS DFIR Summit; unfortunately, I was not able to view the presentation due to a previous commitment. However, if the content was similar, I'm not sure I'd use the term "lies" to describe what was happening here.

The author does a great job of documenting the approach they took in the article, with lots of screen captures. However, when describing the T300D Super Speed Toaster, the author states:

...I would have expected a device such as this to simply be a pass-through device.

I've used a lot of imaging devices in my time, but not this one; even so, just looking at the system (and without reading the online description of the device) I can't say that I would have assumed, in a million years, that this was simply a "pass-through device". Just looking at the front of the device, there's a good bit going on, and given that this is a "disk dock" for duplicating drives, I'm not at all sure that the designers took forensics processes into account.

As a result, in this case, the take-away isn't that it's about Windows "lying", as much as it is...once again...the analyst's assumptions. If the analyst feels that what they "know" is beyond reproach, and does not recognize what they "know" as assumption (even if it's more of, "...but that's how we've always done it..."), then it would appear that Windows is "lying" to them. So, again, we have the need for validation, but this time we've added the layer of "check your assumptions".

Earlier this year, Krz posted a pretty fascinating article, using the term "fools" in the title, as in "Windows fools you". In that case, what he meant was that during updates, Windows will "do things" as part of the update functionality that have an impact on subsequent response and analysis. As such, an analyst with minimal experience or visibility may assume that the "thing" done was the result of a threat actor's actions, simply because they weren't aware that this is "normal Windows functionality".

It's pretty clear that the use of the term "lies" is meant to garner attention to the content. Yes, it's absolutely critical that analysts understand the OS and data they're working with (including file formats), how their tools work, and when necessary, use multiple tools. But it's also incumbent upon analysts to check their assumptions and validate their findings, particularly when there's ample data to help dispel those assumptions. Critical thinking is paramount for DFIR analysts, and I think that both authors did a very good job in pointing that out.

Wednesday, August 24, 2022

Kudos and Recognition

During my time in the industry, I've seen a couple of interesting aspects of "information sharing". One is that not many like to do it. The other is that, over time, content creation and consumption have changed pretty dramatically.

Back in the day, folks like Chris Pogue, with his The Digital Standard blog, Corey Harrell, with his Journey Into IR blog, and more recently, Mari, with her Another Forensics Blog, have all provided a great deal of relevant, well-developed information. A lot of what Mari shared as far back as 2015 has even been relevant very recently, particularly regarding deleted data in SQLite databases. And, puh-LEASE, let's not forget Jolanta Thomassen, who, in 2008, published her dissertation addressing unallocated space in Registry hives, along with the first tool (regslack) to parse and extract those contents - truly seminal work!

Many may not be aware, but there are some unsung heroes in the DFIR industry, unrecognized contributors who are developing and sharing some incredible content, but without really tooting their own horn. These folks have been doing some really phenomenal work that needs to be called out and held up, so I'm gonna toot their horn for them! So, in no particular order...

Lina is an IR consultant with Secureworks (an org for which I am an alum), and has a string of alphabet soup following her name. Lina has developed some pretty incredible content, which she shares via her blog, as well as via LinkedIn, and in tweet threads. One of her posts I've enjoyed in particular is this one regarding clipboard analysis. Lina's content has always been well-considered, well-constructed, and very thoughtful. I have always enjoyed content produced by practitioners, as it's very often the most relevant.

Krz is another analyst, and has dropped a good deal of high quality content, as well as written some of his own tools (including RegRipper plugins), which he also shares via Github. Not only did Krz uncover that Windows Updates will clear out valuable forensic resources, but he also did some considerable research into how a system going on battery power impacts that system, and subsequently, forensic analysis.

Patrick Siewert has hung out his own shingle, and does a lot of work in the law enforcement and legal communities, in addition to sharing some pretty fascinating content. I have never had the opportunity to work with mobile devices (beyond laptops), but Patrick's article on cellular records analysis is a thorough and interesting treatment of the topic. 

Julian-Ferdinand Vögele recently shared a fascinating article titled The Rise of LNK Files, dropping a really good description of Windows shortcut files and their use. Anyone who's followed me for any amount of time knows I'm more than mildly interested in this topic, from a digital forensic and threat intel perspective. He's got some other really interesting articles on his blog, including this one regarding Scheduled Tasks, and like the other folks mentioned here, I'm looking forward to more great content in the future.

If you're looking for something less on the deep technical side or less DFIR focused, check out Maril's content. She's a leader in the "purple team" space, and she's got some really great content on personal branding that I strongly recommend that everyone take the time to watch, follow, digest, and consider. To add to that, it seems that Maril and her partners-in-crime (other #womenincyber) will be dropping the CyberQueensPodcast starting in Sept.

If you're into podcasts, give Jax a listen over at Outpost Gray (she also co-hosts the 2 Cyber Chicks podcast) and in particular, catch her chat with Dorota Koslowska tomorrow (25 Aug). Jax is a former US Army special ops EW/cyber warrant officer, and as you can imagine, she brings an interesting perspective to a range of subjects, a good bit of which she shares via her blog.

Let's be sure to recognize those who produce exceptional content, and in particular those who do so on a regular basis!

Saturday, August 13, 2022

Who "Owns" Your Infrastructure?

That's a good question.

You go into work every day, sit down at your desk, log in...but who actually "owns" the systems and network that you're using? Is it you, your employer...or someone else?

Anyone who's been involved in this industry for even a short time has either seen or heard how threat actors will modify an infrastructure to meet their needs, enabling or disabling functionality (as the case may be) to cover their tracks, make it harder for responders to track them, or to simply open new doors for follow-on activity.

Cisco (yes, *that* Cisco) was compromised in May 2022, and following their investigation, provided a thorough write-up of what occurred. From their write-up:

"Once the attacker had obtained initial access, they enrolled a series of new devices for MFA and authenticated successfully to the Cisco VPN." (emphasis added)

Throughout the course of the engagement, the threat actor apparently added a user, modified the Windows firewall, cleared Windows Event Logs, etc. Then, later in the Cisco write-up, we see that the threat actor modified the Windows Registry to allow for unauthenticated SYSTEM-level access back into systems by setting up StickyKeys. What this means is that if Cisco goes about their remediation steps, including changing passwords, but misses this one, the threat actor can return, hit a key combination, and gain SYSTEM-level access back into the infrastructure without having to enter a password. There's no malware involved...this is based on functionality provided by Microsoft.
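
The Cisco write-up doesn't reproduce the exact commands used, but this class of "accessibility backdoor" is typically planted with a single reg add; what follows is a hypothetical sketch of the technique, not the threat actor's actual command line:

```shell
REM Hypothetical sketch of the StickyKeys-style accessibility backdoor:
REM register cmd.exe as the "debugger" for sethc.exe, so pressing Shift
REM five times at the logon screen spawns a SYSTEM-level command prompt,
REM no password required, no malware on disk.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f

REM Defenders can sweep for this during remediation:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger
```

The same sweep applies to the other accessibility binaries (utilman.exe, osk.exe, etc.), which is exactly why remediation that stops at password resets can miss this kind of persistence.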

Remember...the Cisco write-up states that the activity is attributed to an IAB, which means that this activity was likely intended to gain and provide access to a follow-on threat actor. As a result of response actions taken by the Cisco team, that follow-on access has been obviated.

On 11 Aug 2022, the SANS Internet Storm Center included this write-up regarding the use of a publicly available tool called nsudo. There's a screen capture in the middle of the write-up that shows a number of modifications the threat actor makes to the system, the first five of which are clearly Registry modifications via reg add. Later there's the use of the PowerShell Set-MpPreference cmdlet to enable Windows Defender exclusions, but I don't know if those will even take effect if the preceding commands to stop and delete Windows Defender succeeded. Either way, it's clear that the threat actor in this case is taking steps to modify the infrastructure to meet their needs.
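
The screen capture itself isn't reproduced here, but the pattern it illustrates is a familiar one; the following is a hypothetical sketch of that kind of Defender tampering (illustrative values, not the actual commands from the write-up):

```shell
REM Hypothetical sketch of Defender tampering via reg add (illustrative
REM values; not the actual commands from the ISC screen capture):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v DisableAntiSpyware /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection" /v DisableRealtimeMonitoring /t REG_DWORD /d 1 /f

REM ...followed by an exclusion set via the Set-MpPreference cmdlet:
powershell -Command "Set-MpPreference -ExclusionPath 'C:\Users\Public'"
```

Each of these leaves artifacts (Registry LastWrite times, Defender operational event log records, PowerShell logging) that responders can key on, which is the point: modifications like these are detectable.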

It doesn't stop there; there is a great deal of native functionality that threat actors can leverage to modify systems to meet their needs. For example, it's one thing to clear Windows Event Logs or delete web server log files; as we saw with NotPetya in 2017, those logs can still be recovered. To take this a step further, I've seen threat actors use appcmd.exe to disable IIS logging; if the logs are never written, they can't be recovered. We've seen threat actors install remote access tools, and install virtual machines or hard drives from which to run their malicious software, because (a) the VMs are not identified as malicious by AV software, and (b) AV software doesn't "look inside" the VMs.
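
As an example of what that appcmd technique looks like (a sketch; the exact syntax will vary with the IIS version and site name, which is an assumption here):

```shell
REM Sketch: disable IIS HTTP logging for a site using the native
REM appcmd.exe utility. If the logs are never written, there is
REM nothing for responders to recover.
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:httpLogging /dontLog:True
```

A one-line change like this is trivial for a threat actor and easy for defenders to monitor for, if they know to look.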

So what? What does all this mean?

What this means is that these modifications can be detected and responded to early in the attack cycle, inhibiting or even obviating follow-on activity (ransomware deployment?). When I was researching web shells, for example, I kept running into trouble with Windows Defender; no matter how "esoteric" the web shell, if I didn't disable Defender before downloading it, Defender would quite literally eat the web shell! Other tools do a great job of detecting and quarantining web shells, and even more tools identify them. That's a means of initial access, so detecting and quarantining the web shell means you've obviated the rest of the attack and forced the threat actor to look for another means, AND you know someone's knocking at your door!

Wednesday, August 10, 2022

Researching the Windows Registry

The Windows Registry is a magical place that I love to research because there's always something new and fun to find, and apply to detections and DFIR analysis! Some of my recent research topics have included default behaviors with respect to running macros in Office documents downloaded from the Internet, default settings for mounting ISO/IMG files, as well as how to go about enabling RDP account lockouts based on failed login attempts. 

Not long ago I ran across some settings specific to nested VHD files, and thought, well...okay, I've seen virtual machines installed on systems during incidents, as a means of defense evasion, and VHD/VHDX files are one such resource. Further, they don't require another application, like VMWare or VirtualBox.

Digging a bit further, I found this MS documentation:

"VHDs can be contained within a VHD, so Windows limits the number of nesting levels of VHDs that it will present to the system as a disk to two, with the maximum number of nesting levels specified by the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskMaxTreeDepth

Mounting VHDs can be prevented by setting the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskNoLocalMount to 1."

Hhhmmm...so I can modify a Registry value and prevent the default behavior of mounting VHD files! Very cool! This is pretty huge, because admins can set this value to "1" within their environment, and protect their infrastructure.
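
Per the MS documentation quoted above, the hardening change is a single value:

```shell
REM Prevent local mounting of VHD[X] files by setting
REM VirtualDiskNoLocalMount to 1, per the MS documentation.
reg add "HKLM\System\CurrentControlSet\Services\FsDepends\Parameters" /v VirtualDiskNoLocalMount /t REG_DWORD /d 1 /f
```

As with any hardening change, test this first; legitimate workflows that rely on double-clicking VHD[X] files to mount them will be affected.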

Almost 3 yrs ago, Will Dormann published an article about the dangers of VHD/VHDX files. Some of the issues Will points out are:

- VHD/VHDX files downloaded from the Internet do not propagate MOTW the way some archive utilities do, so even if the VHD is downloaded from the Internet and MOTW is applied, this does not transfer to any of the files within the VHD file. This behavior is similar to what we see with ISO/IMG files.

- AV doesn't scan inside VHD/VHDX files.

So, it may be worth it to modify the VirtualDiskNoLocalMount value.    

To check the various settings from a DFIR perspective, I use RegRipper:

(System) Get VHD[X] Settings

ControlSet001\Services\FsDepends\Parameters

LastWrite time: 2019-12-07 09:15:07Z

VirtualDiskExpandOnMount  0x0001
VirtualDiskMaxTreeDepth   0x0002
VirtualDiskNoLocalMount   0x0000

Analysis Tip: The values listed impact how Windows handles VHD[X] files, which can be used to bypass security measures, including AV and MOTW.

VirtualDiskMaxTreeDepth determines how deep to go with embedding VHD files.
VirtualDiskNoLocalMount set to 1 prevents mounting of VHD[X] files.

Ref: https://insights.sei.cmu.edu/blog/the-dangers-of-vhd-and-vhdx-files/

From what's in the Registry (above), we can see what's possible. In this case, per the Analysis Tip in the output of the RegRipper plugin, this system allows automatic mounting of the virtual disk file. You can look for access to .vhd/.vhdx files in the user's RecentDocs key. Also from a DFIR perspective, look for indications of files being mounted in the Microsoft-Windows-VHDMP%4Operational Event Log.
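
For reference, the checks themselves are straightforward; the RegRipper plugin name below is illustrative (check your plugins directory for the actual name), while the reg query form works against a live system:

```shell
REM On a live system, query the values directly:
reg query "HKLM\System\CurrentControlSet\Services\FsDepends\Parameters"

REM Against an acquired SYSTEM hive, with RegRipper's rip.exe; the
REM plugin name "fsdepends" is illustrative, not necessarily the
REM actual plugin name in your RegRipper distribution:
rip.exe -r F:\case\Windows\System32\config\SYSTEM -p fsdepends
```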

Monday, August 08, 2022

An Attacker's Perspective

Something I've thought about quite often during my time in DFIR is the threat actor's perspective...what is the attacker seeing and thinking during their time in an infrastructure. As a DFIR analyst, I don't often get to 'see' the threat actor's actions, at least not fully. Rather, my early perspective was based solely on what was left behind. That's changed and expanded over the years, as we've moved from WinXP/2000/2003 to Win7 and Win10, and added some modicum of enterprise capability by deploying EDR. During the better part of my time as a responder, EDR was something deployed after an incident had been detected, but the technology we deployed at that time had a targeted "look back" capability that most current EDR technologies do not incorporate. This allowed us to quickly target the few systems that the threat actor actually touched (in one case, only 8 out of 150K endpoints), and then narrow down those systems to the 1 or 2 nexus systems for a more detailed examination. This led to us 'seeing' the impact or results of actions taken by the threat actor, but what we didn't have insight into was their perspective during their actions...why did they go left instead of right, or why did they apparently target one 'thing' instead of another?

EDR did allow us to capture things like the command line used to archive collected data, as well as the password, so that when we recovered the archives, we could open them and see what data was stolen. While that did provide some insight, it still didn't give us the attacker's perspective as they sought that data out.

During an active IR, attribution is most often a distraction. Early on in the IR, you very often don't have enough data to differentiate the threat actor (sometimes you might...), and for the attribution to be valuable, it needs to be able to inform you of the most likely places to look for intrusion data; when the threat actor gets to this point, what do they do? Turn left? Turn right? What do they pivot to based on previous intrusion data? However, during this time, you need to resist developing tunnel vision. Even after the IR is complete and you have a full(er) picture that includes attribution, it's often difficult to really get the perspective of the threat actor; after all, I'm a white American male, the product of a public school education and military experience...without a great deal of training and education, how am I going to understand the political and cultural nuances in the mind of a Chinese or Russian threat actor?

I recently watched this Usenix presentation by Rob Joyce, NSA TAO chief, and it was really very valuable for me, because what Rob shared was the attacker's perspective. The presentation is pretty profound, in that it's the perspective of a "nation-state actor"...yes, Rob's role is to be a "nation-state threat actor", literally.

Rob said things like:

"Well-run networks make our job hard."

...and...

"We look for the things that are actually in your network."

This entire video made a lot of great points for me, because most of what Rob was saying was the same thing many of us have been saying since the late '90s, albeit from an attacker's perspective...so, a different side of the same coin, if you will. Rather than making recommendations based on a vulnerability assessment or incident response, this is an attacker saying, "...if you do these things, it makes my job oh so much harder...". Very nice, and validating, all at the same time.

Something else that has occurred to me over the years is that threat actors are unencumbered by the artificialities we (defenders) often impose upon ourselves. What does this mean? Well, it's like the sign at the pool that says, "no running"...does this actually stop people from running? Even if you sat at the pool, in the shade, with a blindfold on, it wouldn't take you long to realize that no, it doesn't work...because you'd continually hear life guards and parents yelling, "STOP RUNNING!!" We see this in networks, as well, when someone says, "...you can't do that..." right after you, or the bad guy, has already done it. 

About 22 years ago, I was doing an assessment of a government organization, and I was using the original version of L0phtCrack. I was on the customer's network, using a domain admin account (per our contract), and I informed the senior sysadmin of what I was about to do (i.e., get all of the password hashes and begin cracking them). He informed me that we'd never be able to do it, so I went ahead and hit the Enter key, and the password hashes appeared on my system almost before the Enter key came all the way back up!

I once engaged in an IR with a customer with a global presence; they literally had systems all over the world, in different countries. One of those countries has very stringent privacy laws, and that impacted the IR because those laws prevented us from installing our EDR tools as part of the response. This was important because all of the data that we did have at that point showed the threat actor moving...not moving laterally and touching, but "lifting and shifting"...to that environment. As part of the IR, we (the customer POC, actually) informed the folks in this country about what was going to happen, and they pushed back. However, the threat actor never bothered to ask, and was already there doing things that we couldn't monitor or track, nor could we impact. 

This was all because someone said, "no, you can't do this, we have laws...", after we'd asked. The threat actor never bothered to ask; they'd found that they could, so they did. We were inhibited by our own artificialities, by our own imaginary road blocks. 

Something else that's popped up recently in discussions regarding cyber insurance is the term "good faith knowledge". In July, Travelers filed in court to rescind a policy for a customer, claiming that portions of the policy applications included misrepresentations. Specifically, the customer had applied for the policy and included MFA attestation documentation, indicating that they had the specified coverage. However, the customer was later impacted by a ransomware attack, and it was found that the MFA attestation was not correct. During exchanges on social media following this filing, a term came up that caught my attention..."good faith knowledge". My understanding (and this may be wholly incorrect) is that if the signer had "good faith knowledge" at the time that the attestation was correct, then this provides legal backing to deny the filing to rescind the policy.

If my understanding is correct in this instance, the question then becomes, why have the attestation at all? The CEO or CIO can sit in a conference room, ask the IT director, and sign the attestation "in good faith" that the required coverage is, in fact, in place...even if it's not. Don't take action to check or verify the attestation, just sign it and say, "I had no reason to believe at the time that this was not the case." And while all this legal action goes on, the fact remains, the organization was subject to a successful ransomware attack, and the claim still has to be paid. 

Now, go back and read what I said a few paragraphs ago, specifically, "the customer was later impacted by a ransomware attack...". Yep, that's how they came to the understanding that the MFA attestation was incorrect/misrepresented...the CEO signed the attestation, but the bad guy wasn't even slowed down, because after all, there was no MFA in place, at least not any on the route taken by the threat actor. This is just another example of a self-inflicted artificiality that prevents us from doing a better job at "cyber" or "information" security, but does nothing to impede threat actors.

In the end, the needle hasn't moved. We've encumbered ourselves with legal and cultural constraints while the threat actor leverages what has been technically enabled. I've seen networks that were supposed to be air-gapped, but no one told the threat actor, so they went ahead and used them just like the flat networks they were. Remember, Rob said, "We look for the things that are actually in your network"; this statement should include, "...and use them how we wish, regardless of what your 'documentation' says...".

For defenders, an attacker's perspective is good. No, it's more than good...it's invaluable. But the reverse is also true...