Tuesday, September 20, 2022

ResponderCon Followup

I had the opportunity to speak at the recent ResponderCon, put on by Brian Carrier of BasisTech. I'll start out by saying that I really enjoyed attending an in-person event after 2 1/2 yrs of virtual events, and that Brian's idea to do something a bit different (different from OSDFCon) worked out really well. I know that there've been other ransomware-specific events, but I've not been able to attend them.

As soon as the agenda kicked off, it seemed as though the first four presentations had been coordinated...but they hadn't. It simply worked out that way. Brian referenced what he thought my content would be throughout his presentation, I referred back to Brian's content, Allan referred to content from the earlier presentations, and Dennis's presentation fit right in as if it were a seamless part of the overall theme. Congrats to Dennis, by the way, not only for his presentation, but also on his first time presenting. Ever.

During his presentation, Brian mentioned TheDFIRReport site, at one point referring to a Sodinokibi write-up from March, 2021. That report mentions that the threat actor deployed the ransomware executable to various endpoints by using BITS jobs to download the EXE from the domain controller. My presentation focused less on analysis of the ransomware EXE and more on threat actor behaviors, and Brian's mention of the above report (twice, as a matter of fact) provided excellent content. In particular, for the BITS deployment to work, the DC would have to (a) have the IIS web server installed and running, and (b) have the BITS server extensions installed/enabled, so that the web server knew how to respond to the BITS client requests. As such, the question becomes, did the victim org have that configuration in place for a specific reason, or did the threat actor modify the infrastructure to meet their own needs? 
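As an aside, if you're looking for this kind of activity after the fact, the BITS client keeps its own Operational log on the endpoints. Here's a minimal sketch (assuming the python-evtx module and a collected copy of the Microsoft-Windows-Bits-Client%4Operational.evtx file; the log path and regex here are mine, not from the report) that pulls transfer URLs so an analyst can spot jobs sourced from an internal host such as the DC:

import re
import Evtx.Evtx as evtx

# Assumed path to a collected copy of the BITS client Operational log
log_path = "Microsoft-Windows-Bits-Client%4Operational.evtx"

with evtx.Evtx(log_path) as log:
    for record in log.records():
        xml = record.xml()
        # Pull any URL strings out of the record; BITS job events record
        # the remote name of the transfer
        for url in re.findall(r'https?://[^<"]+', xml):
            print(url)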

However, the point is that without prior coordination or even really trying, the first four presentations meshed nicely, as if there had been detailed planning involved. This is likely due more to the particular focus of the event, combined with a bit of luck when the organizing team decided upon the agenda.

Unfortunately, due to a prior commitment (Tradecraft Tuesday webinar), I didn't get to attend Joseph Edwards' presentation, which was the one presentation I wanted to see (yes, even more than my own!).  ;-) I am going to run through the slides (available from the agenda and individual presentation pages), and view the presentation recording once it's available. I've been very interested in threat actors' use of LNK files and the subsequent use (or rather, lack thereof) by DFIR and threat intel teams. The last time I remember seeing extensive use of threat actor-delivered LNK files was when the Mandiant team compared Cozy Bear campaigns.

Going through my notes, comments made during one presentation kind of stood out, in that "event ID 1102" within the RDPClient/Operational Event Log was mentioned when looking for indications of lateral movement. The reason this stood out, and why I made a specific note, is that many times in the industry, we refer simply to "event IDs" to identify event records; however, event IDs are not unique. For example, we most often think of "event log cleared" when someone says "event ID 1102"; however, it can mean something else entirely based on the event source (a field in the event record, not the log file where it was found). As a result, we should be referring to Windows Event Log records by their event source/ID pairs, rather than solely by their event ID.
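To make that concrete, here's a minimal sketch of a source/ID lookup; the provider names and descriptions are from my own notes, so verify them against your own data before relying on them:

# Event IDs are only unique within a provider/source; the pair is the key
EVENT_MAP = {
    ("Microsoft-Windows-Eventlog", 1102): "Security audit log cleared",
    ("Microsoft-Windows-TerminalServices-RDPClient", 1102): "Outbound RDP connection initiated from this host",
}

def describe(source, event_id):
    return EVENT_MAP.get((source, event_id), "unknown source/ID pair")

print(describe("Microsoft-Windows-Eventlog", 1102))
print(describe("Microsoft-Windows-TerminalServices-RDPClient", 1102))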

Something else that stood out for me was that during one presentation, the speaker was referring to artifacts in isolation. Specifically, they listed AmCache and ShimCache each as artifacts demonstrating process execution, and this simply isn't the case. It's easy for many who do not follow this line of thought to dismiss such things, but we have to remember that we have a lot of folks who are new, junior, or simply less experienced in the industry, and if they're hearing this messaging, but not hearing it being corrected, they're going to assume that this is how things are, and should be, done.

What Next?
ResponderCon was put on by the same folks that have run OSDFCon for quite some time now, and it seems that ResponderCon is something a bit more than just a focused version of OSDFCon. So, the question becomes, what next? What's the next iteration, or topic, or theme? 

If you have thoughts, insights, or just ideas you want to share, feel free to do so in the comments, or via social media, and be sure to include/tag Brian.

Monday, September 19, 2022

Deconstructing Florian's Bicycle

Not long ago, Florian Roth shared some fascinating thoughts via his post, The Bicycle of the Forensic Analyst, in which he discusses increases in efficiency in the forensic review process. I say "review" here, because "analysis" is a term that is often used incorrectly, but that's for another time. Specifically, Florian's post discusses efficiency in the forensic review process during incident response.

After reading Florian's article, I had some thoughts that I wanted to share that would extend what he's referring to, in part because I've seen, and continue to see, the need for something just like what he discusses. I've shared my own thoughts on this topic previously.

My initial foray into digital forensics was not terribly different from Florian's, as he describes in his article. For me, it wasn't a lab crammed with equipment and two dozen drives, but the image his words create and perhaps even the sense of "where do I start?" were likely similar. At the same time, this was also a very manual process...open images in a viewer, or access data within images via some other means, and begin processing the data. Depending upon the circumstances, we might access and view the original data to verify that it *can* be viewed, and at that point, extract some modicum of data (referred to as "triage data") to begin the initial parsing and data presentation process before kicking off the full acquisition process. But again, this has often been a very manual process, and even with checklists, it can be tedious, time consuming, and prone to errors.

Over the years, analysts have adopted something similar to what Florian describes, using such tools as Yara, Thor (Lite), log2timeline/plaso, or CyLR. These are all great tools that provide considerable capabilities, making the analyst's job easier when used appropriately and correctly. I should note that several years ago, extensions for Yara and RegRipper were added to the Nuix Workstation product, putting the functionality and capability of both tools at the fingertips of investigators, allowing them to significantly extend their investigations from within the Nuix product. This is an example of a commercial product giving its users the ability to leverage freeware tools in their parsing and data presentation process.

So, where does the "bicycle" come in? Florian said:

Processing the large stack of disk images manually felt like being deprived of something essential: the bicycle of forensic analysts.

His point is, if we have the means for creating vast efficiencies in our work, alleviating ourselves of manual, time-consuming, error-prone processes, why don't we do so? Why not leverage scanners to reduce our overhead and make our jobs easier?

So, what was achieved through Florian's use of scanners?

The automatic processing of the images accelerated our analysis a lot. From a situation where we processed only three disk images daily, we started scanning every disk image on our desk in a single day. And we could prioritize them for further manual analysis based on the scan reports.

Florian's article continues with a lot of great information regarding the use of scanners, and applying detection engineering to processing acquired images and data. He also provides examples of rules to identify the misuse/abuse of common tools. 

All of this is great, and it's something we should all look to in our work, keeping two things in mind. First, if we're going to download and use tools created by others (Yara/Thor, plaso, RegRipper, etc.), we have to understand what the tools actually do. We can't make assumptions about the tools and the functionality they provide, as these assumptions lead to significant gaps in analysis. By way of example, in March, 2020, Microsoft published a blog article addressing human-operated ransomware attacks. In that article, they identified a ransomware group that used WMI for persistence, and the team I was engaged with at the time received data from several customers impacted by that ransomware group. However, the team was unable to determine if WMI had been used for persistence because the toolset they were using to parse data did not have the capability to parse the OBJECTS.DATA file. The collection process included this file, but the parsing process did not parse the file for persistence mechanisms, and as a result, analysts assumed that the data had been parsed and yielded a negative response.
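Even without a full OBJECTS.DATA parser, a quick triage check can at least tell you whether the file contains strings associated with WMI event consumers, so that no one assumes a negative result from a tool that never looked. This is a rough sketch, not a substitute for an actual CIM repository parser:

# Rough triage check: look for event consumer class names in OBJECTS.DATA
# This does NOT decode the CIM repository; it only flags candidate strings
markers = [b"CommandLineEventConsumer", b"ActiveScriptEventConsumer",
           b"__EventFilter", b"__FilterToConsumerBinding"]

with open("OBJECTS.DATA", "rb") as f:
    data = f.read()

for m in markers:
    print(f"{m.decode():32s} {data.count(m)}")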

Fig 1: New DFIR Model
Second, we cannot download a tool and use it, expecting it to be up-to-date 6 months or a year later, unless we actually take steps to update it. And I'm not talking about going back to the original download site and getting more rules. The best way to go about updating the tools is to use the scanners as Florian described, leveraging the efficiencies they provide, but to then bake new findings from our analysis back into the overall DFIR process, as illustrated in figure 1. Notice that following the "Analysis" phase, there's a loop that feeds back into the "Collect" phase (in case there's something additional that needs to be collected from a system) and then proceeds to the "Parse" phase, where those scanners and rules Florian described are applied. They can then be further used to enrich and decorate the data prior to presentation to the analyst. The key to this feedback loop is that rather than solely the knowledge and experience of the analyst assigned to an engagement being applied to the data, the collective corporate knowledge of all analysts, prior analysts, detection engineers, and threat intel analysts can be applied consistently to all engagements.

So, to truly leverage the efficiencies of Florian's scanner "bicycles", we need to continually extend them by baking findings developed through analysis, digging into open reports, etc., back into the process.

Saturday, September 10, 2022

AmCache Revisited

Not long ago, I posted about When Windows Lies, and that post really wasn't so much about Windows "lying", per se, as it was about challenging analyst assumptions about artifacts, and recognizing misconceptions. Along the same lines, I've also posted about the (Mis)Use of Artifact Categories, in an attempt to address the reductionist approach that leads analysts to oversimplify and subsequently misinterpret artifacts based on their perceived category (i.e., program execution, persistence, lateral movement, etc.). This misinterpretation of artifacts can lead to incorrect findings, and subsequently, incorrect recommendations for improvements to address identified issues.

I recently ran across this LinkedIn post that begins by describing AmCache entries as "evidence of execution", which is somewhat counter to the seminal research conducted by Blanche Lagney regarding the AmCache.hve file, particularly with more recent versions of Windows 10. If you prefer a more auditory approach, there's also a Forensic Lunch where Blanche discussed her research (starts around 12:45), thanks to David and Matt! Suffice to say, data from the AmCache.hve file should not simply be considered as "evidence of execution", as it's far too reductionist; the artifact is much more nuanced than simply "evidence of execution". This overly simplistic approach is indicative of the misuse of artifact categories I mentioned earlier.

Unfortunately, misconceptions as to the nature and value of artifacts such as this (and others, most notably, ShimCache) continue to exist because we continue to treat the artifacts in isolation, rather than as part of a system. Reviewing Blanche's paper in detail, for example, makes it clear that the value of specific portions of the AmCache data depends upon such factors as which keys/values are in question, as well as which version of Windows is being discussed. Given these nuances, it's easy to see how a reductionist approach evolves, leaving us simply with "AmCache == execution".

What we need to do is look at artifacts not in isolation, but in context with other data sources (Registry, WEVTX, etc.). This is important because artifacts can be manipulated; for example, there've been instances where malware (a PCI scraper) was written to disk and immediately "time stomped", using the time stamps from kernel32.dll to make it appear that the malware had always been part of the system. As a result, when the $STANDARD_INFORMATION attribute last modification time was included in the ShimCache entry for the malware, the analyst misinterpreted this as the time of execution, and reported a "window of compromise" of 4 years to the PCI Council, rather than a more correct 3 weeks. The "window of compromise" reported by the analyst is used, in part, to determine fines to be levied against the merchant/victim. 
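This is exactly the kind of misinterpretation a simple cross-check can catch. As a minimal sketch, assuming you've already parsed the MFT to a CSV (the column names and timestamp format here are hypothetical), flag entries where the $STANDARD_INFORMATION and $FILE_NAME attributes disagree in a way consistent with time stomping:

import csv
from datetime import datetime

def parse(ts):
    # Assumed timestamp format from the (hypothetical) MFT parser output
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")

with open("mft_times.csv", newline="") as f:
    for row in csv.DictReader(f):
        si = parse(row["si_modified"])
        fn = parse(row["fn_modified"])
        # $SI times earlier than $FN times, or $SI times with zeroed fractional
        # seconds, are common (not conclusive) time stomping indicators
        if si < fn or si.microsecond == 0:
            print(row["filename"], si, fn)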

So, yes...as we start out, we need to first learn about artifacts by themselves. We need to develop an understanding of the nature of an artifact, particularly how many artifacts and IOCs need to be viewed as composite objects (shoutz to Joe Slowik!). However, as we progress, we then need to start viewing artifacts in the context of other artifacts, and understand how to go about developing artifact constellations that help us clearly demonstrate the behaviors we're seeing, the behaviors that drive our findings.

The overall take-away here, as with other instances, is that we have to recognize when we're viewing artifacts in isolation, and then avoid doing so. If some circumstance prevents correlation across multiple data sources, then we need to acknowledge this in our reporting, and clearly state our findings as such.

Saturday, September 03, 2022

LNK Builders

I've blogged a bit...okay, a LOT...over the years on the topic of parsing LNK files, but a subject I really haven't touched on is LNK builders or generators. This is actually an interesting topic because it ties into the cybercrime economy quite nicely. What that means is that there are "initial access brokers", or "IABs", who gain and sell access to systems, and there are "RaaS" or "ransomware-as-a-service" operators who will provide ransomware EXEs and infrastructure, for a price. There are a number of other for-pay services, one of which is LNK builders.

In March, 2020, the Checkpoint Research team published an article regarding the mLNK builder, which at the time was version 2.2. Reading through the article, you can see that the builder includes a great deal of functionality; there's even a pricing table. Late in July, 2022, Acronis shared a YouTube video describing how version 4.2 of the mLNK builder had become available.

In March, 2022, the Google TAG published an article regarding the "Exotic Lily" IAB, describing (among other things) their use of LNK files, and including some toolmarks (drive serial number, machine ID) extracted from LNK metadata. Searching Twitter for "#exoticlily" returns a number of references that may lead to LNK samples embedded in archives or ISO files. 

In June, 2022, Cyble published an article regarding the Quantum LNK builder, which also describes the features and pricing scheme for the builder. The article indicates a possible connection between the Lazarus group and the Quantum LNK builder, based on similarities in Powershell scripts.

In August, 2022, SentinelLabs published an article that mentioned both the mLNK and Quantum builders. This is not to suggest that these are the only LNK builders or generators available, but it does speak to the prevalence of this "*-as-a-service" offering, particularly as some threat actors move away from the use of "weaponized" (via macros) Office documents, and toward the use of archives, ISO/IMG files, and embedded LNK files.

Freeware Options
In addition to creating shortcuts through the traditional means (i.e., right-clicking in a folder, etc.), there are a number of freely available tools that allow you to create malicious LNK files. However, from looking at them, there's little available to indicate that they provide the same breadth of capabilities as the for-pay options listed earlier in this article. Here are some of the options I found:

lnk-generator (circa 2017)
Booby-trapped shortcut (circa 2017) - includes script
LNKUp (circa 2017) - LNK data exfil payload generator
lnk-kisser (circa 2019) - payload generator
pylnk3 (circa 2020) - read/write LNK files in Python
SharpLNKGen-UI (circa 2021) - expert mode includes use of ADSs (Github)
Haxx generator (circa 2022) - free download
lnkbomb - Python source, EXE provided
lnk2pwn (circa 2018) - EXE provided
embed_exe_lnk - embed EXE in LNK, sample provided 

Next Steps
So, what's missing in all this is toolmarks; with all these options, what does the metadata from malicious LNK files created using the builders/generators look like? Is it possible that given a sample or enough samples, we can find toolmarks that allow us to understand which builder was used?

Consider this file, for example, which shows the parsed metadata from several samples (most recent sample begins on line 119). The first two samples, from Mandiant's Cozy Bear article, are very similar; in fact, they have the same volume serial number and machine ID. The third sample, beginning on line 91, has a lot of the information we'd look to use for comparison removed from the LNK file metadata; perhaps the description field could be used instead, along with specific offsets and values from the header (given that the time stamps are zero'd out). In fact, besides zero'd out time stamps, there's the SID embedded in the LNK file, which can be used to narrow down a search.

The final sample is interesting, in that the PropertyStoreDataBlock appears to be well-populated (unlike the previous samples in the file), and contains information that sheds light on the threat actor's development environment.

Perhaps, as time permits, I'll be able to use a common executable (the calculator, Solitaire, etc.), and create LNK files with some of the freeware tools, noting the similarities and differences in metadata/toolmarks. The idea behind this would be to demonstrate the value in exploring file metadata, regardless of the actual file, as a means of understanding the breadth of such things in threat actor campaigns.
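As a starting point for that kind of comparison, here's a minimal sketch that pulls a few fields from the LNK header (offsets per the MS-SHLLINK header layout); it ignores the shell item ID list, LinkInfo, and the extra data blocks where the volume serial number and machine ID live, but it will flag the zeroed time stamps mentioned above:

import struct
from datetime import datetime, timedelta

def filetime(ft):
    # Convert 64-bit FILETIME to datetime; zero means the field was blanked
    if ft == 0:
        return None
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

with open("sample.lnk", "rb") as f:
    hdr = f.read(76)                      # ShellLinkHeader is 76 bytes
    size, = struct.unpack("<I", hdr[0:4])
    flags, = struct.unpack("<I", hdr[20:24])
    created, accessed, written = struct.unpack("<QQQ", hdr[28:52])

print("header size:", size)               # should be 0x4C
print("link flags :", hex(flags))
for name, ft in (("created", created), ("accessed", accessed), ("written", written)):
    print(name, filetime(ft) or "zeroed")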

Analysis: Situational Awareness + Timelines

I've talked and written about timelines as an analysis process for some time, in both this blog and in my books, because I've seen time and again over the years the incredible value in approaching an investigation by developing a timeline (including mini- and micro-timelines, and overlays), rather than leaving the timeline as something to be manually created in a spreadsheet after everything else is done.

Now, I know timelines can be "messy", in part because there's a LOT of activity that goes on on a system, even when it's "idle", such as Windows and application updates. This content can "muck up" a timeline and make it difficult to distill the malicious activity, particularly when discerning that malicious activity is predicated solely on the breadth of the analyst's knowledge and experience. Going back to my early days of "doing" IR, I remember sitting at an XP machine, opening the Task Manager, and seeing that third copy of svchost.exe running. I'd immediately drill in on that item, and the IT admin would be asking me, "...what were you looking at??" The same is true when it comes to timelines...there are going to be things that will immediately jump out to one analyst, where another analyst may not have experienced the same thing, and not developed the knowledge that this particular event was "malicious", or at least suspicious.

As such, one of the approaches I've used and advocated is to develop situational awareness via the use of mini- or micro-timelines, or overlays. A great example of this approach can be seen in the first Iron Man movie, when Tony's stuck in the cave and, in order to hide the fact that he's building his first suit of armor, he has successive parts of the armor drawn on thin paper. The image of the full armor only comes into view when the papers are laid on top of one another. I figured that this would be a much better example than referring to overhead projectors and cellophane sheets, etc., that those of us who grew up in the '70s and '80s are so very familiar with.

Now, let's add to that the idea of "indicators as composite objects". I'm not going to go into detail discussing this topic, because I stole...er, borrowed...it from Joe Slowik; to get a really good view into this topic, give Joe's RSA 2022 presentation a listen; it's under an hour and chock full of great stuff!

So, what this means is that we're going to have events in our timeline that are really composite objects. This means that the context of those events may not be readily available or accessible for a single system or incident. For example, we may see a "Microsoft-Windows-Security-Auditing/4625" event in the timeline, indicating a failed login attempt. But that event also includes additional factors...the login type, the login source, username, etc., all of which can have a significant impact...or not...on our investigation. But here we have a timeline with hundreds, or more likely, thousands of events, and no real way to develop situational awareness beyond our own individual experiences. 

Or, do we?

In order to try to develop more value and insight from timeline data, I developed Events Ripper, a proof of concept tool based on RegRipper that provides a means for digging deeper into individual events and providing analysts with insight and situational awareness they might not otherwise have, due to the volume of data, lack of experience, etc. In addition to addressing the issue of composite objects, this tool also serves, like RegRipper, to preserve corporate knowledge and intrusion intel, in that a plugin developed by one analyst is shared with all analysts, and the wealth of that analyst's knowledge and experience is available to the rest of the team regardless of their status, in an operational manner.

The basic idea behind Events Ripper is that an analyst is developing a timeline and wants to take things a step further and develop some modicum of situational awareness. The timeline process used results in an "events file" being created; this is an intermediate file between the raw data ($MFT, *.evtx, Registry hives, SRUM DB, etc.) and the timeline. The "events file" is a text file, with each line consisting of an event in the 5-field TLN format. 

For example, one plugin runs across all login (Security-Auditing/4624) events, and distinguishes between type 2, 3, and 10 logins, and presents source IP addresses for the type 3 and type 10 logins. Using this plugin, I was able to quickly determine that a system was accessible from the raw Internet via both file sharing and RDP. Another plugin does something similar for failed logins, in addition to translating the sub-status reason for the failed attempt.
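To give a sense of what that looks like, here's a rough standalone approximation (not the plugin itself) that runs across a TLN-format events file and tallies type 3 and type 10 logins by source address; the description-field regexes are assumptions and will need to be adjusted to your parser's output:

import re
from collections import Counter

# TLN events file: time|source|host|user|description, one event per line
logins = Counter()

with open("events.txt") as f:
    for line in f:
        fields = line.rstrip("\n").split("|")
        if len(fields) < 5 or "4624" not in fields[4]:
            continue
        desc = fields[4]
        # Assumed description layout; adjust the regexes to your parser's output
        ltype = re.search(r"[Tt]ype[:\s]+(\d+)", desc)
        src = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", desc)
        if ltype and ltype.group(1) in ("3", "10"):
            logins[(ltype.group(1), src.group(1) if src else "unknown")] += 1

for (ltype, ip), count in logins.most_common():
    print(f"type {ltype:>2}  {ip:15s}  {count}")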

The current set of plugins available with the tool is focused on Windows Event Log records from some of the various log files. As such, an analyst can run Events Ripper across an events file consisting of multiple data sources, or can opt to do so after parsing just the Windows Event Log files, as described in the tool readme. As this is provided as a standalone Windows executable, it can also be included in an automated process where data is received through 'normal' intake processes, and then parsed prior to being presented to the analyst.

Like RegRipper, Events Ripper was intended for the development and propagation of corporate knowledge. You can write a plugin that looks for something specific in an event, a combination of elements within an event, or correlate events across the events file, and add Analysis Tips to the plugin, sharing things like reference URLs and other items the analyst may want to consider. Adding this plugin to your own corporate repo means that the experience and knowledge developed in investigating and unraveling this event (or combination of events) is no longer limited or restricted to just the analyst who did that work; now, it's shared with all analysts, even those who haven't yet joined your team.

At the moment, there are 16 plugins, but like RegRipper, this is likely to expand over time as new investigations develop and new ways of looking at the data are needed.

Saturday, August 27, 2022

When Windows Lies

"When Windows Lies"...what does that really mean? 

Mari had a fascinating blog post on this topic some years ago; she talked about the process DFIR analysts had been using to that point to determine the installation date of the operating system. In short...and this has happened several more times since then...while DFIR analysts had been using one process to assess the installation date, Windows developers had changed how this information is stored and tracked in Windows systems, reaffirming the notion that operating systems are NOT designed and maintained with forensic examiners in mind. ;-)

The take-away from Mari's blog article...for me, anyway...is the need for analysts to keep up-to-date with changes to the operating system; storage locations, log formats, etc., can (and do) change without notice. Does this mean that every analyst has to invest in research to keep up on these things? No, not at all...this is why we have a community, where this research and these findings are shared. But as she mentioned in the title and content of the article, if we just keep following our same methods, we're going to end up finding that "Windows lies". This idea or concept is not new; I've talked about the need for validation previously.

Earlier this year, a researcher used a twist on that title to talk about the "lies" analysts' tools will "tell" them, specifically when it comes to USB device serial numbers. I understand the author presented at the recent SANS DFIR Summit; unfortunately, I was not able to view the presentation due to a previous commitment. However, if the content was similar, I'm not sure I'd use the term "lies" to describe what was happening here.

The author does a great job of documenting the approach they took in the article, with lots of screen captures. However, when describing the T300D Super Speed Toaster, the author states:

...I would have expected a device such as this to simply be a pass-through device.

I've used a lot of imaging devices in my time, but not this one; even so, just looking at the system (and without reading the online description of the device) I can't say that I would have assumed, in a million years, that this was simply a "pass-through device". Just looking at the front of the device, there's a good bit going on, and given that this is a "disk dock" for duplicating drives, I'm not at all sure that the designers took forensics processes into account.

As a result, in this case, the take-away isn't that it's about Windows "lying", as much as it is...once again...the analyst's assumptions. If the analyst feels that what they "know" is beyond reproach, and does not recognize what they "know" as an assumption (even if it's more of, "...but that's how we've always done it..."), then it would appear that Windows is "lying" to them. So, again, we have the need for validation, but this time we've added the layer of "check your assumptions".

Earlier this year, Krz posted a pretty fascinating article, using the term "fools" in the title, as in "Windows fools you". In that case, what he meant was that during updates, Windows will "do things" as part of the update functionality that have an impact on subsequent response and analysis. As such, an analyst with minimal experience or visibility may assume that the "thing" done was the result of a threat actor's actions, simply because they weren't aware that this is "normal Windows functionality".

It's pretty clear that the use of the term "lies" is meant to garner attention to the content. Yes, it's absolutely critical that analysts understand the OS and data they're working with (including file formats), how their tools work, and when necessary, use multiple tools. But it's also incumbent upon analysts to check their assumptions and validate their findings, particularly when there's ample data to help dispel those assumptions. Critical thinking is paramount for DFIR analysts, and I think that both authors did a very good job in pointing that out.

Wednesday, August 24, 2022

Kudos and Recognition

During my time in the industry, I've seen a couple of interesting aspects of "information sharing". One is that not many like to do it. The other is that, over time, content creation and consumption has changed pretty dramatically.

Back in the day, folks like Chris Pogue, with his The Digital Standard blog, and Corey Harrell with his Journey Into IR blog, and even more recently, Mari with her Another Forensics Blog have all provided a great deal of relevant, well-developed information. A lot of what Mari shared as far back as 2015 has even been relevant very recently, particularly regarding deleted data in SQLite databases. And, puh-LEASE, let's not forget Jolanta Thomassen, who, in 2008, published her dissertation addressing unallocated space in Registry hives, along with the first tool (regslack) to parse and extract those contents - truly seminal work!

Many may not be aware, but there are some unsung heroes in the DFIR industry, unrecognized contributors who are developing and sharing some incredible content, but without really tooting their own horn. These folks have been doing some really phenomenal work that needs to be called out and held up, so I'm gonna toot their horn for them! So, in no particular order...

Lina is an IR consultant with Secureworks (an org of which I am an alum), and has a string of alphabet soup following her name. Lina has developed some pretty incredible content, which she shares via her blog, as well as via LinkedIn, and in tweet threads. One of her posts I've enjoyed in particular is this one regarding clipboard analysis. Lina's content has always been well-considered, well-constructed, and very thoughtful. I have always enjoyed content produced by practitioners, as it's very often the most relevant.

Krz is another analyst, and has dropped a good deal of high quality content, as well as written some of his own tools (including RegRipper plugins), which he also shares via Github. Not only did Krz uncover that Windows Updates will clear out valuable forensic resources, but he also did some considerable research into how a system going on battery power impacts that system, and subsequently, forensic analysis.

Patrick Siewert has hung out his own shingle, and does a lot of work in the law enforcement and legal communities, in addition to sharing some pretty fascinating content. I have never had the opportunity to work with mobile devices (beyond laptops), but Patrick's article on cellular records analysis is a thorough and interesting treatment of the topic. 

Julian-Ferdinand Vögele recently shared a fascinating article titled The Rise of LNK Files, dropping a really good description of Windows shortcut files and their use. Anyone who's followed me for any amount of time knows I'm more than mildly interested in this topic, from a digital forensic and threat intel perspective. He's got some other really interesting articles on his blog, including this one regarding Scheduled Tasks, and like the other folks mentioned here, I'm looking forward to more great content in the future.

If you're looking for something less on the deep technical side or less DFIR focused, check out Maril's content. She's a leader in the "purple team" space, and she's got some really great content on personal branding that I strongly recommend everyone take the time to watch, follow, digest, and consider. To add to that, it seems that Maril and her partners-in-crime (other #womenincyber) will be dropping the CyberQueensPodcast starting in Sept.

If you're into podcasts, give Jax a listen over at Outpost Gray (she also co-hosts the 2 Cyber Chicks podcast) and in particular, catch her chat with Dorota Koslowska tomorrow (25 Aug). Jax is a former US Army special ops EW/cyber warrant officer, and as you can imagine, she brings an interesting perspective to a range of subjects, a good bit of which she shares via her blog.

Let's be sure to recognize those who produce exceptional content, and in particular those who do so on a regular basis!

Saturday, August 13, 2022

Who "Owns" Your Infrastructure?

That's a good question.

You go into work every day, sit down at your desk, log in...but who actually "owns" the systems and network that you're using? Is it you, your employer...or someone else?

Anyone who's been involved in this industry for even a short time has either seen or heard how threat actors will modify an infrastructure to meet their needs, enabling or disabling functionality (as the case may be) to cover their tracks, make it harder for responders to track them, or to simply open new doors for follow-on activity.

Cisco (yes, *that* Cisco) was compromised in May 2022, and following their investigation, provided a thorough write-up of what occurred. From their write-up:

"Once the attacker had obtained initial access, they enrolled a series of new devices for MFA and authenticated successfully to the Cisco VPN." (emphasis added)

Throughout the course of the engagement, the threat actor apparently added a user, modified the Windows firewall, cleared Windows Event Logs, etc. Then, later in the Cisco write-up, we see that the threat actor modified the Windows Registry to allow for unauthenticated SYSTEM-level access back into systems by setting StickyKeys. What this means is that if Cisco goes about their remediation steps, including changing passwords, but misses this one, the threat actor can return, hit a key combination, and gain SYSTEM-level access back into the infrastructure without having to enter a password. There's no malware involved...this is based on functionality provided by Microsoft.

Remember...the Cisco write-up states that the activity is attributed to an IAB, which means that this activity was likely intended to gain and provide access to a follow-on threat actor. As a result of response actions taken by the Cisco team, that follow-on access has been obviated.

On 11 Aug 2022, the SANS Internet Storm Center included this write-up regarding the use of a publicly available tool called nsudo. There's a screen capture in the middle of the write-up that shows a number of modifications the threat actor makes to the system, the first five of which are clearly Registry modifications via reg add. Later there's the use of the Powershell Set-MpPreference cmdlet to set Windows Defender exclusions, but I don't know if those will even take effect if the preceding commands to stop and delete Windows Defender succeeded. Either way, it's clear that the threat actor in this case is taking steps to modify the infrastructure to meet their needs.

It doesn't stop there; there is a great deal of native functionality that threat actors can leverage to modify systems to meet their needs. For example, it's one thing to clear Windows Event Logs or delete web server log files; as we saw with NotPetya in 2017, those logs can still be recovered. To take this a step further, I've seen threat actors use appcmd.exe to disable IIS logging; if the logs are never written, they can't be recovered. We've seen threat actors install remote access tools, and install virtual machines or virtual hard drives from which to run their malicious software, because (a) the VMs are not identified as malicious by AV software, and (b) AV software doesn't "look inside" the VMs.

So what? What does all this mean?

What this means is that these modifications can be detected and responded to early in the attack cycle, inhibiting or even obviating follow-on activity (ransomware deployment?). When I was researching web shells, for example, I kept running into trouble with Windows Defender; no matter how "esoteric" the web shell, if I didn't disable Defender before downloading it, Defender would quite literally eat the web shell! Other tools do a great job of detecting and quarantining web shells, and even more identify them. That's a means of initial access, so detecting and quarantining the web shell means you've obviated the rest of the attack and forced the threat actor to look for another means, AND you know someone's knocking at your door!

Wednesday, August 10, 2022

Researching the Windows Registry

The Windows Registry is a magical place that I love to research because there's always something new and fun to find, and apply to detections and DFIR analysis! Some of my recent research topics have included default behaviors with respect to running macros in Office documents downloaded from the Internet, default settings for mounting ISO/IMG files, as well as how to go about enabling RDP account lockouts based on failed login attempts. 

Not long ago I ran across some settings specific to nested VHD files, and thought, well...okay, I've seen virtual machines installed on systems during incidents, as a means of defense evasion, and VHD/VHDX files are one such resource. Further, they don't require another application, like VMWare or VirtualBox.

Digging a bit further, I found this MS documentation:

"VHDs can be contained within a VHD, so Windows limits the number of nesting levels of VHDs that it will present to the system as a disk to two, with the maximum number of nesting levels specified by the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskMaxTreeDepth

Mounting VHDs can be prevented by setting the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskNoLocalMount to 1."

Hhhmmm...so I can modify a Registry value and prevent the default behavior of mounting VHD files! Very cool! This is pretty huge, because admins can set this value to "1" within their environment, and protect their infrastructure.
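For example, here's a minimal sketch using Python's winreg module, run on a live system (the write requires admin rights), to check the value and set it to 1:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\FsDepends\Parameters"

# Read the current value
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, vtype = winreg.QueryValueEx(key, "VirtualDiskNoLocalMount")
    print("VirtualDiskNoLocalMount =", value)

# Set it to 1 to prevent local mounting of VHD[X] files (requires admin)
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "VirtualDiskNoLocalMount", 0, winreg.REG_DWORD, 1)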

Almost 3 yrs ago, Will Dormann published an article about the dangers of VHD/VHDX files. Some of the issues Will points out are:

- VHD/VHDX files downloaded from the Internet do not propagate MOTW the way some archive utilities do, so even if the VHD is downloaded from the Internet and MOTW is applied, this does not transfer to any of the files within the VHD file. This behavior is similar to what we see with ISO/IMG files.

- AV doesn't scan inside VHD/VHDX files.

So, it may be worth it to modify the VirtualDiskNoLocalMount value.    

To check the various settings from a DFIR perspective, I use RegRipper:

(System) Get VHD[X] Settings

ControlSet001\Services\FsDepends\Parameters

LastWrite time: 2019-12-07 09:15:07Z

VirtualDiskExpandOnMount  0x0001
VirtualDiskMaxTreeDepth   0x0002
VirtualDiskNoLocalMount   0x0000

Analysis Tip: The values listed impact how Windows handles VHD[X] files, which can be used to bypass security measures, including AV and MOTW.

VirtualDiskMaxTreeDepth determines how deep to go with embedding VHD files.
VirtualDiskNoLocalMount set to 1 prevents mounting of VHD[X] files.

Ref: https://insights.sei.cmu.edu/blog/the-dangers-of-vhd-and-vhdx-files/

From what's in the Registry (above), we can see what's possible. In this case, per the Analysis Tip in the output of the RegRipper plugin, this system allows automatic mounting of the virtual disk file. You can look for access to .vhd/.vhdx files in the user's RecentDocs key. Also from a DFIR perspective, look for indications of files being mounted in the Microsoft-Windows-VHDMP%4Operational Event Log.

Monday, August 08, 2022

An Attacker's Perspective

Something I've thought about quite often during my time in DFIR is the threat actor's perspective...what is the attacker seeing and thinking during their time in an infrastructure. As a DFIR analyst, I don't often get to 'see' the threat actor's actions, at least not fully. Rather, my early perspective was based solely on what was left behind. That's changed and expanded over the years, as we've moved from WinXP/2000/2003 to Win7 and Win10, and added some modicum of enterprise capability by deploying EDR. During the better part of my time as a responder, EDR was something deployed after an incident had been detected, but the technology we deployed at that time had a targeted "look back" capability that most current EDR technologies do not incorporate. This allowed us to quickly target the few systems that the threat actor actually touched (in one case, only 8 out of 150K endpoints), and then narrow down those systems to the 1 or 2 nexus systems for a more detailed examination. This led to us 'seeing' the impact or results of actions taken by the threat actor, but what we didn't have insight into was their perspective during their actions...why did they go left instead of right, or why did they apparently target one 'thing' instead of another?

EDR did allow us to capture things like the command line used to archive collected data, as well as the password, so that when we recovered the archives, we could open them and see what data was stolen. While that did provide some insight, it still didn't give us the attacker's perspective as they sought that data out.

During an active IR, attribution is most often a distraction. Early on in the IR, you very often don't have enough data to differentiate the threat actor (sometimes you might...), and for the attribution to be valuable, it needs to be able to inform you of the most likely places to look for intrusion data; when the threat actor gets to this point, what do they do? Turn left? Turn right? What do they pivot to based on previous intrusion data? However, during this time, you need to resist developing tunnel vision. Even after the IR is complete and you have a full(er) picture that includes attribution, it's often difficult to really get the perspective of the threat actor; after all, I'm a white American male, the product of a public school education and military experience...without a great deal of training and education, how am I going to understand the political and cultural nuances in the mind of a Chinese or Russian threat actor?

I recently watched this Usenix presentation by Rob Joyce, NSA TAO chief, and it was really very valuable for me, because what Rob shared was the attacker's perspective. The presentation is pretty profound, in that it's the perspective of a "nation-state actor"...yes, Rob's role is to be a "nation-state threat actor", literally.

Rob said things like:

"Well-run networks make our job hard."

...and...

"We look for the things that are actually in your network."

This entire video made a lot of great points for me, because most of what Rob was saying was the same thing many of us have been saying since the late '90s, albeit from an attacker's perspective...so, a different side of the same coin, if you will. Rather than making recommendations based on a vulnerability assessment or incident response, this is an attacker saying, "...if you do these things, it makes my job oh so much harder...". Very nice, and validating, all at the same time.

Something else that has occurred to me over the years is that threat actors are unencumbered by the artificialities we (defenders) often impose upon ourselves. What does this mean? Well, it's like the sign at the pool that says, "no running"...does this actually stop people from running? Even if you sat at the pool, in the shade, with a blindfold on, it wouldn't take you long to realize that no, it doesn't work...because you'd continually hear life guards and parents yelling, "STOP RUNNING!!" We see this in networks, as well, when someone says, "...you can't do that..." right after you, or the bad guy, has already done it. 

About 22 years ago, I was doing an assessment of a government organization, and I was using the original version of L0phtCrack. I was on the customer's network, using a domain admin account (per our contract), and I informed the senior sysadmin of what I was about to do (i.e., get all of the password hashes and begin cracking them). He informed me that we'd never be able to do it, so I went ahead and hit the Enter key, and the password hashes appeared on my system almost before the Enter key came all the way back up!

I once engaged in an IR with a customer with a global presence; they literally had systems all over the world, in different countries. One of those countries has very stringent privacy laws, and that impacted the IR because those laws prevented us from installing our EDR tools as part of the response. This was important because all of the data that we did have at that point showed the threat actor moving...not moving laterally and touching, but "lifting and shifting"...to that environment. As part of the IR, we (the customer POC, actually) informed the folks in this country about what was going to happen, and they pushed back. However, the threat actor never bothered to ask, and was already there doing things that we couldn't monitor or track, nor could we impact. 

This was all because someone said, "no, you can't do this, we have laws...", after we'd asked. The threat actor never bothered to ask; they'd found that they could, so they did. We were inhibited by our own artificialities, by our own imaginary road blocks. 

Something else that's popped up recently in discussions regarding cyber insurance is the term "good faith knowledge". In July, Travelers filed in court to rescind a policy for a customer, claiming that portions of the policy applications included misrepresentations. Specifically, the customer had applied for the policy and included MFA attestation documentation, indicating that they had the specified coverage. However, the customer was later impacted by a ransomware attack, and it was found that the MFA attestation was not correct. During exchanges on social media following this filing, a term came up that caught my attention..."good faith knowledge". My understanding (and this may be wholly incorrect) is that if the signer had "good faith knowledge" at the time that the attestation was correct, then this provides legal backing to deny the filing to rescind the policy.

If my understanding is correct in this instance, the question then becomes, why have the attestation at all? The CEO or CIO can sit in a conference room, ask the IT director, and sign the attestation "in good faith" that the required coverage is, in fact, in place...even if it's not. Don't take action to check or verify the attestation, just sign it and say, "I had no reason to believe at the time that this was not the case." And while all this legal action goes on, the fact remains, the organization was subject to a successful ransomware attack, and the claim still has to be paid. 

Now, go back and read what I said a few paragraphs ago, specifically, "the customer was later impacted by a ransomware attack...". Yep, that's how they came to the understanding that the MFA attestation was incorrect/misrepresented...the CEO signed the attestation, but the bad guy wasn't even slowed down, because after all, there was no MFA in place, at least not any on the route taken by the threat actor. This is just another example of a self-inflicted artificiality that prevents us from doing a better job at "cyber" or "information" security, but does nothing to impede threat actors.

In the end, the needle hasn't moved. We've encumbered ourselves with legal and cultural constraints while the threat actor leverages what has been technically enabled. I've seen networks that were supposed to be air-gapped, but no one told the threat actor, so they went ahead and used them just like the flat networks they were. Remember, Rob said, "We look for the things that are actually in your network"; this statement should include, "...and use them how we wish, regardless of what your 'documentation' says...".

For defenders, an attacker's perspective is good. No, it's more than good...it's invaluable. But the reverse is also true...

Sunday, July 31, 2022

Virtual Images for Testing

Many within the DFIR community make use of virtual systems for testing...for detonating malware, trying things within a "safe", isolated environment, etc. However, sometimes it can be tough to get hold of suitable images for creating that testing environment.

I've collected a bunch of links to VirtualBox VMs for Windows, but I cannot attest to all of them actually working. But, if you'd like to try any of them, here they are...

MS Edge developer virtual machines (Win7 - 10, limited time)
Windows 7 Image, reports no activation needed
Win95 virtual machine
Various MS virtual machines (MS-DOS, Windows, etc.)
Windows 11 Dev Environment (eval)
Use Disk2vhd to create a virtual machine from an existing installation
ReactOS - clone of Windows 5.2 (XP/2003)

There's no shortage of Linux and Unix variant OS VMs available. For example, you can find Solaris VMs here. For MacOS Big Sur, you can try this site.

Back in 1994 and '95, while I was in graduate school, I went to Fry's Electronics in Sunnyvale (across the street from a store called "Weird Stuff") and purchased a copy of OS/2 2.1. I did that because the box came with a $15 coupon for the impending OS/2 Warp 3.0. If you'd like to give the OS/2 Warp OS a shot, you can try this v4.52 download, or try this site for other versions of OS/2.

If you're a fan of CommodoreOS, you can give this site a shot. For AmigaOS, try here, or here. How about Plan9?

General Download Sites
OSBoxes
SysProbs
VirtualBoxes
DFIRDiva

Hope that helps!

EDR Blindness, pt II

As a follow-on to my earlier blog post, I've seen a few more posts and comments regarding EDR 'bypass' and blinding/avoiding EDR tools, and to be honest, my earlier post stands. However, I wanted to add some additional thoughts...for example, when considering EDR, consider the technology, product, and service in light of not just the threat landscape, but also the other telemetry you have available. 

This uberAgent article was very interesting, in particular the following statement:

“DLL sideloading attack is the most successful attack as most EDRs fail to detect, let alone block it.”

The simple fact is, EDR wasn't designed to detect DLL side loading, so this is tantamount to saying, "hey, I just purchased this brand new car, and it doesn't fly, nor does it drive underwater...". 

Joe Stocker mentions on Twitter that the Windows Defender executable file, MpCmdRun.exe, can be used for malicious DLL side loading. He's absolutely right. I've seen EXEs from other AV tools...Kaspersky, McAfee, Symantec...used for DLL side loading during targeted threat actor attacks. This is nothing new...this tactic has been used going back over a decade or more.

When it comes to process telemetry, most EDR starts by collecting information about the process creation, and many will get information about the DLLs or "modules" that are loaded. Many will also collect telemetry about network connections and Registry keys/values accessed during the lifetime of the process, but that's going a bit far afield and off topic for us.

There are a number of ways to detect issues with the executable file image being launched. For example, we can take a hash of the EXE on disk and compare it to known good and known bad lists. We can check to see if the file is signed. We can check to see if the EXE contains file version information, and if so, compare the image file name to the embedded original file name. 
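To illustrate just the first of those checks, here's a minimal sketch; the hash sets are placeholders that would be populated from your own known-good and known-bad stores, and the image path is hypothetical:

import hashlib

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder sets; populate from your own intel/known-good stores
KNOWN_GOOD = set()
KNOWN_BAD = set()

digest = sha256(r"C:\ProgramData\vendor\tool.exe")   # hypothetical image path
if digest in KNOWN_BAD:
    print("known bad:", digest)
elif digest not in KNOWN_GOOD:
    print("not in known-good set, flag for review:", digest)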

Further, many EDR frameworks allow us to check the prevalence of executables within the environment; how often has this EXE been seen in the environment? Is this the first time the EXE has been seen?

However, something we cannot do, because it's too 'expensive', is to maintain a known good list of all application EXE files, their associated DLLs, as well as their hashes and locations, and then compare what we're seeing being launched to that database. This is why we need to have other measures in place in a defense in depth posture. Joe goes on to mention what he'd like to see in an EDR tool, so the question is, is there an available framework that allows this condition to be easily identified, so that it can be acted upon?

DLL side loading is not new. Over a decade ago, we were seeing legitimate EXEs from known vendors being dropped on systems, often in the ProgramData folder, and the "malicious" DLL being dropped in the same folder. However, it's been difficult to detect because the EXE file that is launched to initiate the process is a well-known, legitimate EXE, but one of the DLLs it loads, which is often found in the same folder as the EXE, is in fact malicious. Telemetry might pick up the fact that the process, when launched, had some suspicious network activity associated with it, perhaps even network activity that we've never seen before, but a stronger indicator of something "bad" would be the persistence mechanism employed. Hey, why is this signed, known good EXE, found in a folder that it's not normally found in (i.e., the ProgramData folder), being launched from a Run key, or from a Scheduled Task that was just created a couple of minutes ago?
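One way to operationalize that question is a simple location check. This is a rough sketch; the expected-path table is an assumption you'd build from your own environment:

from pathlib import PureWindowsPath

# Assumed table of where these EXEs normally live in *your* environment
EXPECTED_DIRS = {
    "mpcmdrun.exe": [r"c:\program files\windows defender",
                     r"c:\programdata\microsoft\windows defender\platform"],
}

def suspicious_location(image_path):
    p = PureWindowsPath(image_path.lower())
    expected = EXPECTED_DIRS.get(p.name)
    if expected is None:
        return False   # not an EXE we're tracking
    return not any(str(p.parent).startswith(d) for d in expected)

print(suspicious_location(r"C:\ProgramData\updates\MpCmdRun.exe"))              # True
print(suspicious_location(r"C:\Program Files\Windows Defender\MpCmdRun.exe"))   # False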

The point is, while EDR frameworks may not directly identify DLL side loading, as described in my previous blog post, we can look for other phases of the attack cycle to help us identify such things. We may not directly detect the malicious DLL being "side loaded", but a good bit of activity is required for a threat actor to get to the point of DLL side loading...gain access to the system or infrastructure, create files on the system, create persistence, etc. We can detect all of these activities to help us identify potential DLL side loading, or simply inhibit or even obviate further phases of the attack.

Kevin had an interesting tweet recently about another EDR 'bypass', this one involving the use of Windows Terminal (wt.exe) instead of the usual command prompt (cmd.exe). I've been interested in the use of Windows Subsystem for Linux (wsl.exe) for some time, and would be interested to see how pervasive it is in various environments. However, the point is, if you're able to monitor new processes being created via cmd.exe, are you also able to do the same with other shells, such as Powershell, wt.exe, or wsl.exe?

Finally, something that I've seen for years in the industry is that it doesn't matter how many alerts are being generated by an EDR framework if no one is listening, or if the alerts are misinterpreted and not acted upon. In this blog post on history repeating itself I shared an example of what it looked like well before the advent of EDR when no one was listening for alerts. Not long after that event, we saw in the industry what happened when firewall (or other device) logs were generated but no one monitored them. This is something that we've seen consistently over the past 30+ years.

I was once on an engagement where the threat actor had accessed a Linux-based infrastructure and been able to access the corporate Windows network; two disparate infrastructures that were not supposed to be connected at all were actually a flat network. The threat actor collected data in an unprotected archive and copied (not "moved") it to the Linux infrastructure. We had both copies of the archive, as well as corresponding netflow that demonstrated the transfer. One of the other analysts in the room offered their insight that this threat actor was not "sophisticated" at all, and was, in fact, rather sloppy with respect to their opsec (they'd also left all of their staged tools on the systems).

I had to remind the team that we were there several months after the threat actor had taken the data and left. They'd completed what they had wanted to do, completely unchallenged, and we were here looking at their footprints.

I've seen other incidents where an external SOC or threat hunting team had sent repeated email notifications to a customer, but no action was taken. It turned out that when the contract was set up, one person was designated to receive the email notifications, but they'd since left the organization. In more than a few instances, alert emails went to a communal inbox and the person who had monitored that inbox had since left the company, or was on vacation, or was too overwhelmed with other duties to keep up with the emails. If there is no plan for what happens when an alert is generated and received, it really doesn't matter what technology you're using.

Tuesday, July 26, 2022

Rods and Cones, and EDR "blindness"

I ran across an interesting post recently regarding blinding EDR on Windows systems, which describes four general techniques for avoiding EDR monitoring. Looking through them, I've seen several of these techniques in use on actual, real-world incidents. For example, while I was with the Crowdstrike Overwatch team, we observed a threat actor reach out to determine which systems had Falcon installed; of the fifteen systems queried, we knew from our records that only four were covered. We lost visibility when the threat actor moved to one of the other eleven systems. I've also seen threat actors "disappear from view" when they've used the PowerShell console rather than cmd.exe, or when the threat actor had shell-based/RDP access to systems and used a GUI-based tool. EDR telemetry includes process creation information, so we might "see" a GUI-based tool being launched, but after that, whatever options the threat actor chose or buttons they pushed create no new processes; without some other visibility (file system, Registry, network telemetry), we have no insight into what they were doing.

I know this sounds pretty simplistic to some, but I really don't get the impression that it's commonly understood throughout the industry...and not just among customers.

I've previously discussed EDR bypass in this blog. Anyone who's been involved in a SOC or SOC-like role that involves EDR, including IR engagements where EDR is deployed, has seen many of the same conditions, such as incomplete deployment or roll-out of capabilities. A while back, for some products, EDR monitoring did not natively include the ability to block processes or isolate systems; this was a separate line item, if the capability was available at all. We've seen customers purchase the capability and then complain when it didn't work...only to find out that it hadn't been deployed. I've seen infrastructures with 30,000 or 50,000 endpoints, and EDR deployed to only 200 or fewer systems.

The take-away here is that when it comes to EDR, finding a blind spot isn't really the profound challenge it's made out to be, particularly if you understand not only the technology but also the business aspect from the customer's point of view.

All that being said, what does this have to do with "rods and cones"? Okay, just bear with me for a moment. When I was an instructor in the military, we had an exercise for student Lts where we'd take them out after dark and introduce them to the nuances of operating at night. Interestingly, I learned a great deal from this exercise, because when I went through the same school several years previously, we didn't have it...I assume we were simply expected to develop an understanding ourselves. Anyway, the point is that the construction of the human eye means that the cones clustered around the center of the retina provide excellent acuity during daylight hours, but at night, or in low light, we don't see something when we're looking directly at it. Rather, for that object to come into "view" (relatively speaking), we need to look slightly to one side or the other, so that the object "appears" to us via the rods of the eye, which handle low-light vision.

What does this have to do with EDR? Well, assuming that everything else is in place (complete deployment, monitoring, etc.), if we are aware of blind spots, we need to ensure that we have "nearby" visibility. For example, in addition to process creation events, can we also monitor file system, Registry, and network events? Anyone who's dealt with this level of telemetry knows that it's a great deal of data to process, so if you're not collecting, filtering, and monitoring these events directly through EDR, do you have some other means or capability of getting this information if you need to do so? Can you retrieve/get access to the USN change journal and/or the MFT, to the Registry (on a live system or via triage retrieval), and/or to the Windows Event Logs? The fact is that while EDR does provide considerable visibility (and in a timely manner), it doesn't 'see' everything. As such, when a threat actor attempts to bypass EDR, it's likely that they're going to be visible or leave tracks through some other means which we can access via another data source.
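
As one example of "some other means", and assuming a live Windows system, admin rights, and Python, a quick sketch of pulling the USN change journal without EDR might look like the following; the file name being searched for is purely illustrative, and a triage workflow would more likely carve $UsnJrnl:$J from an image instead:

import subprocess

NEEDLE = "suspect.dll"   # hypothetical file name of interest

# Dump the USN change journal for the C: volume in CSV form; requires admin
# rights, and on a busy volume this produces a *lot* of output
proc = subprocess.Popen(
    ["fsutil", "usn", "readjournal", "C:", "csv"],
    stdout=subprocess.PIPE, text=True, errors="replace"
)
for line in proc.stdout:
    if NEEDLE.lower() in line.lower():
        print(line.rstrip())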

An analogy I like to use at this point is that when walking by a pond, if someone throws a rock into the pond, we don't actually have to see them throw the rock, nor do we have to see the rock break the surface of the pond, to understand what's happened. We can hear the splash and see ripples against the shore, and know that something happened. When it comes to monitoring endpoints, the same is true...we don't always have to 'see' an action to know that something happened, particularly something that requires a closer look or further investigation. Many times, we can observe or alert on the effect of that action, the impact of the action within the environment, and understand that additional investigation is required.

An additional thought regarding process creation is that process completion is a "blind spot" within some EDR products. Sysmon has a process termination event (event ID 5), but most EDR tools only include process creation telemetry, so additional access and analysis are needed to validate whether the process ran to completion, or whether something prevented it from completing normally. For example, many SOC analysts have likely seen threat actors use the Set-MpPreference PowerShell cmdlet to impact Windows Defender, and made statements in tickets and incident diaries to that effect, but what happens if Windows Defender is not enabled, and some other security control is used instead? Well, the command has no effect; using Set-MpPreference to, say, set exclusions will not result in the corresponding Registry entries or Windows Event Log records if Defender is disabled.
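
One way to validate the impact (or lack thereof) is to go to where the command would have landed. A minimal sketch, assuming the python-registry module and a SOFTWARE hive collected during triage, might check the Defender exclusion keys directly:

from Registry import Registry

# Path to the SOFTWARE hive collected as part of triage
reg = Registry.Registry("SOFTWARE")

for sub in ("Paths", "Processes", "Extensions"):
    try:
        key = reg.open("Microsoft\\Windows Defender\\Exclusions\\" + sub)
    except Registry.RegistryKeyNotFoundException:
        continue
    # Each exclusion appears as a value name; the key LastWrite time tells us
    # when the exclusions were last modified
    for value in key.values():
        print(f"{sub}: {value.name()} (key LastWrite: {key.timestamp()})")

If the exclusions aren't there, and the corresponding Defender records aren't in the Windows Event Log, then the command line we saw in the telemetry had no real impact.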

So, keeping in mind that for something bad to happen on a system, something has to happen, the take-away here is that EDR bypasses or blind spots are something to be understood, not feared. Many times, we can get that "good harvest from low-hanging fruit" (a quote attributed to David Kleinatland), but sometimes we need a bit more. When we employ (or deploy) a tool, we need to understand both its strengths and its shortcomings, and come up with a plan to address any gaps. When considering EDR, we have to understand the impact of coverage (or lack thereof), both on the endpoint as well as across the infrastructure; we have to understand what it can and cannot 'see', how to properly configure it, and how to use it most effectively to "find bad". Sometimes, this may mean that we get a signal that something may be amiss, rather than something that clearly jumps out at us as "this is BAD!!!" Sometimes, we may rely on the impact of an action, rather than directly identifying the action itself.

Sunday, July 24, 2022

History Repeats Itself

It's said that those who do not study history are doomed to repeat it. I'd suggest that the adage should be extended to, "those who do not study history and learn from her lessons are doomed to repeat it."

My engagement with technology began at an early age; in the early '80s, I was programming BASIC on a very early IBM-based PC, a Timex-Sinclair 1000, and an Apple IIe. By the mid-'80s, I'd programmed in BASIC and Pascal on the TRS-80. However, it wasn't until I completed my initial military training in 1990 that I began providing technology as a service to others; I was a Communications Officer in the Marine Corps, providing trained technical Marines, servicing technical assets, in support of others. I had been taught about the technology...radios, phones, switchboards, etc...and troubleshooting in school, and now I had to apply what I had learned, as well as continue to learn from the experiences of others.

My first unit deployed to an exercise in November 1990. We set up radio and phone communications as part of our plan, which included a TA-938 phone configured as a "hot line" (think the "Bat phone" from the '60s Batman series) to another unit. The idea...and implementation...was that when someone picked up the handset on one end, the phone on the other end would ring and a red light would blink. It was set up this way, tested, and found to function as expected.

TA-838
Except when it didn't. As the platoon leader, I took overnight shifts and filled in on switchboard watch, etc. One night, we got a call from the operations tent that the hotline "wasn't working", so I sent one of my young Marines over to check the phone out. When he returned, he reported that the Captain in the ops tent had repeated that the phone "wasn't working", and said that they hadn't done anything to attempt to troubleshoot or address the problem themselves. The Marine continued by saying that they'd checked the batteries...TA-938 phones (the TA-838, similar in form to the TA-938, is shown on the right) took two D-cell batteries...and found that someone had, indeed, attempted to troubleshoot the phone. An interesting aspect of the TA-938 is that both batteries had to be placed in the phone with their positive terminals facing up, which is very different from what most folks are used to. The Marine shared that they'd ensured the correct placement of the batteries, and everything seemed to be functioning correctly, but when they picked up the hot line, no one on the other end answered.

About 20 minutes later, we got another call that the hot line in the ops tent wasn't working; this time, there was more frustration in the caller's voice. So, I went over and checked out the phone...only to find that it had been disconnected from the wires and thrown across the tent. Fortunately, the phones are "ruggedized", meaning that they are intended to take some level of punishment. I got the phone reconnected, checked to ensure that the batteries hadn't been tampered with, and picked up the handset...only to have the line answered on the second ring. I spoke with the person on the other end, explaining the "issue" we'd experienced at our unit, and found that there was only one person manning the other end of the hot line that night; they'd gone for a bathroom break, stopped on their way back to get a cup of coffee, gotten into a conversation while they were out, and had only recently returned to the tent.

The lesson I learned here is that technology most often functions as designed or configured, not as expected. If we don't understand the technology, then we may have expectations of the technology that are far out of alignment with its design. As such, when those expectations aren't met, we may declare that the technology "doesn't work". As someone providing services based on technology...Communications Officer, DFIR consultant, SOC manager, etc...we often need to keep this in mind.

Another lesson, one that served me very well when I was a DFIR consultant, came from the Captain's statement that he "hadn't done anything" to troubleshoot the issue. Not understanding the technology, his statement was meant to convey that he hadn't done anything of consequence, or anything that was terribly important. I've also seen this during targeted threat actor/nation-state APT response engagements, where an admin would say that they "didn't do anything", when the evidence clearly demonstrated that they had deleted Registry keys, cleared Windows Event Logs, etc. Again, these actions may be part of their normal, day-to-day operations, things they do as a course of business; as such, it doesn't stand out to them that their actions were of consequence.

And it's more than just lessons in technology and its deployment. History also provides us with, among other things, lessons in working with others. Some refer to this as "leadership", but in my experience, those who talk about leadership in this way often present it as a bit one-sided.

When I was in military training, we'd have various exercises where we'd leave the school house and go out into the woods to actually do the things we'd learned. One of my favorites was patrolling. One of the students (myself included) would write up a "patrol order", specifying equipment to bring, as well as specific roles for other students. The day of the exercise, we'd take buses out to a remote area that we hadn't encountered in previous exercises, and be dropped off...only to find that some students had not brought the specified equipment. In one instance, the gear list specified that everyone bring a poncho, but only about half of the students showed up with ponchos...and there was no, "hey, give me a sec, I'll be right back." One time, the student who was supposed to be the navigator for the patrol showed up without a compass.

About 20 yrs later, I was on an IR engagement, one on which the customer very specifically asked me to act as an advisor to his team; rather than doing all the work myself, I was to guide his team in the approach, providing "knowledge transfer" throughout the process. At one point, I went to an administrator and asked them to copy the Event Log files from an XP system onto a thumb drive. I was very specific about my request; I stated that I did not want them to open the Event Viewer and export the contents to text. Instead, I needed them to copy the files to the thumb drive, preserving their original format. After they acknowledged that they understood, I went off to complete another task, collecting the thumb drive when I returned. Later that night in my hotel room, before shutting down for the evening, I wanted to take a look at the data (follow a hunch), so I tried to access the copied files with my tools, assuming that my request had been followed; none of them worked. I opened the files in a hex editor, only to see that the administrator had exported the contents of the Event Logs to text, and renamed the output files with a ".evt" extension.

The lesson here is that, even in the face of leadership (be it "good" or "bad"), there's an element of followership. People have a choice...if they don't understand, they can choose (or not) to ask for help. Or, they can choose to follow the request or guidance, or not. Given that, if something has a material impact on the outcome of your engagement or project, consider doing that thing yourself, or closely supervising it. Be thankful when things go right, and be especially grateful when someone involved sees an issue and shares their justification for doing something other than what was requested. However, you need to also be ready for those instances where things do not go as requested, because it will happen at some point.

Let's do more than just study history. Let's dig into it intentionally, with purpose, and develop and learn from those lessons.

Saturday, July 23, 2022

Turning Open Reporting Into Detections

I saw this tweet from Ankit recently, and as soon as I read through it, I thought I was watching "The Matrix" again. Instead of seeing the "blonde, brunette, redhead" that Cypher saw, I was seeing actionable detection opportunities and pivot points. How you choose to use them...detections in EDR telemetry or from a SIEM, threat hunts, or specifically flagging/alerting on entries in DFIR parsing...is up to you, but there are some interesting...and again, actionable...opportunities, nonetheless.

From the tweet itself...

%Draft% is environment variable leading to PowerShell
Environment variables are good...because someone has to set that variable using...w  a  i  t   f  o  r   i  t...the 'set' command. This means that if the variable is set via the command line, the process (and its command line) can be detected. And if the variable needs to persist...say, to be resolved when the persistence mechanism fires at the next logon...it has to be written to the user's Environment key in the Registry, which is another detection and DFIR opportunity.

Reads another registry value's base64 blob
Blobs are good...because they're usually of a particular value type (i.e., binary) and usually pretty big. I wrote the RegRipper sizes.pl plugin some time ago to address this exact issue, to find values of a certain size or larger (a quick sketch combining this idea with the Run key check appears a bit further down).

If the blob isn't binary and is a base64-encoded string, there are a number of ways to detect strings that are base64 encoded.

What's not stated in the body of the tweet, but is instead visible in one of the images, is that the Run key value written for persistence has interesting data. First, the type is "REG_EXPAND_SZ", which may be a good detection opportunity. This may take some research to determine how prevalent it is in your environment, or in your customers' environments, but Microsoft's documentation says that values within the Run key contain a "command line no longer than 260 characters". From this, we can assume that the value data are strings, of type REG_SZ. For my own use, I've updated one of my RegRipper plugins to specifically look for instances where values in the Run key (HKLM or HKCU) are of a type other than "REG_SZ".
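
Here's what that check might look like outside of RegRipper; a minimal sketch, assuming the python-registry module and an NTUSER.DAT (or SOFTWARE) hive from triage:

from Registry import Registry

HIVE = "NTUSER.DAT"   # or a SOFTWARE hive, with the key path adjusted
RUN_KEY = "Software\\Microsoft\\Windows\\CurrentVersion\\Run"

reg = Registry.Registry(HIVE)
try:
    key = reg.open(RUN_KEY)
except Registry.RegistryKeyNotFoundException:
    key = None

if key is not None:
    for value in key.values():
        # Per MS documentation, Run key data should be a command line (REG_SZ);
        # anything else is worth a closer look. A "large data" check in the
        # spirit of sizes.pl could be layered onto this same loop.
        if value.value_type() != Registry.RegSZ:
            print(f"{value.name()}: type {value.value_type_str()}")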

Next, the command line itself has a number of unique items you can hunt on. Even if another attack changes the name of the key and value in which the data blob is stored, the command line still offers ample opportunities for detections.

If you don't have EDR telemetry available, try parsing the Microsoft-Windows-Shell-Core%4Operational Event Log, specifically event IDs 9707/9708. Or, if you're sending the data from that Windows Event Log to a SIEM, try searching on elements from within the command line.
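
A rough sketch of that parse, assuming the python-evtx module and an exported copy of the log, might look like the following; the search term is purely illustrative, not taken from the tweet:

import Evtx.Evtx as evtx

LOG = "Microsoft-Windows-Shell-Core%4Operational.evtx"   # exported/triage copy
TERM = "-windowstyle hidden"   # hypothetical element of the command line

with evtx.Evtx(LOG) as log:
    for record in log.records():
        xml = record.xml()
        # Keep only event IDs 9707/9708 from this log
        if ">9707</EventID>" not in xml and ">9708</EventID>" not in xml:
            continue
        if TERM.lower() in xml.lower():
            print(xml)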

The point of all this is that there is very often actionable info in open reporting, things that we can turn into detections via either EDR telemetry or SIEM searches, for threat hunting, or add to our automated DFIR parsing process as a means of retaining "corporate knowledge" and expanding the experience base of all analysts.

Tuesday, July 19, 2022

Fully Exploiting Data Sources

Very often, we view data sources as somewhat one dimensional, and don't think about how we can really get value from that data source. We're usually working on a case, just that investigation that's in front of us, and we're so "heads down" that we may not consider that what we see as a single data source, or an entry from that data source (artifact, indicator), is really much more useful, more valuable, than how we're used to viewing it.

So, what am I talking about? Let's consider some of the common data sources we access during investigations, and how they're accessed. Consider something that we're looking at during an investigation...say, a data source that we often say (albeit incorrectly) indicates program execution: the "AppCompatCache", or "ShimCache". Let's say that we parse the AppCompatCache and find an entry of interest, a path to a file with a name that is relevant to our investigation. Many of us will look at that entry and just think, "...that program executed at this time...". But would that statement be correct?

As with most things in life, the answer is, "it depends." For example, if you read Caching Out: The Value Of ShimCache to Investigators (Mandiant), it becomes pretty clear that the AppCompatCache is not the same on all versions of Windows. On some, an associated time stamp does indeed indicate that the file was executed, but on others, it indicates only that the file existed on the system, not that it was explicitly executed. The time stamp associated with the entry is not (with the exception of 32-bit Windows XP) the time that the file was executed; rather, it's the last modification time from the $STANDARD_INFORMATION attribute in the MFT record for that file. To understand whether that time stamp corresponds to the time the file was executed, we need to consider artifact constellations, correlating the data point with other data sources to develop context, better understand the data source (and data point), and validate our findings.

Further, we need to remember that ShimCache entries are written at shutdown; as a result, a file may exist on the system long enough to be included in the ShimCache, but a shutdown or two later, that entry will no longer be available within the data source. This can tell us something about the efforts of the threat actor or malware author (malware authors have been known to embed and launch copies of sdelete.exe), and it also tells us something about the file system at a point in time during the incident.
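
As a quick illustration of going to the source, a minimal sketch, assuming the python-registry module and a SYSTEM hive collected during triage, might simply locate the raw value and the key's LastWrite time, leaving the version-specific binary parsing to a purpose-built tool (RegRipper's appcompatcache.pl plugin, Mandiant's ShimCacheParser, etc.):

from Registry import Registry

reg = Registry.Registry("SYSTEM")   # path to the collected hive

# Determine which ControlSet was in use on the live system
current = reg.open("Select").value("Current").value()
key_path = "ControlSet{:03d}\\Control\\Session Manager\\AppCompatCache".format(current)
# Note: on 32-bit XP, the value lives under ...\Session Manager\AppCompatibility instead

key = reg.open(key_path)
data = key.value("AppCompatCache").value()
print(f"{key_path}: {len(data)} bytes, key LastWrite {key.timestamp()}")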

The point is that the data sources we rely on very often have much more value and context than we realize or acknowledge, and are often much more nuanced than we might imagine. With the ShimCache, for example, an important factor to understand is which version of Windows the data was retrieved from...because it matters. And that's just the beginning.

I hope this is beginning to shine light on the fact that the data sources we very often rely on are actually multidimensional, have context and nuance, and have a number of attributes. For example, some artifacts (constituents of data sources) do not have an indefinite lifetime on the system, and some artifacts are more easily mutable than others. To that point, Joe Slowik wrote an excellent paper last year on Formulating a Robust Pivoting Methodology. At the top of the third page of that paper, Joe refers to IOCs as "compound objects linking multiple observations and context into a single indicator", and I have to say, that is the best, most succinct description I think I've ever seen. The same can be said for indicators found within the various data sources we access during investigations, so the question is, are we fully exploiting those data sources?