Wednesday, November 26, 2025

Registry: FeatureUsage

Maurice posted on LinkedIn recently about one of the FeatureUsage Registry key subkeys; specifically, the AppSwitched subkey. Being somewhat, maybe even only slightly aware of the Windows Registry, I read the post with casual, even mild interest. 

Someone posted recently that cybersecurity runs on caffeine and sarcasm...I've got my cup of coffee right in front of me, and I've placed the sarcasm in front of us all.  ;-)

The RegRipper featureusage plugin was originally written in 2019, and includes a reference to a CrowdStrike blog post written in 2020, authored by Jai Minton (who is now with Huntress). The figure to the right was captured from the blog post, and provides a succinct description of how the AppSwitched key is populated. Specifically, Jai stated, "This key provides the number of times an application switched focus (was left-clicked on the taskbar)." This helps us understand a bit more about process execution, as for the application to exist on the taskbar and to have its focus switched, that application has to have been executed.

After finding and reading the blog post, I wrote a brief blog post here on this blog that mentioned the Registry key, and referenced Jai's blog post.

To Maurice's point, this is, in fact, a valuable artifact, so kudos to Maurice for pointing it out and mentioning it again. If you read through the comments to Maurice's post, there are some hints there as to how to use the value data in your analysis. As you'll note, neither the value names nor the data itself includes a timestamp, but the AppSwitched subkey does have a LastWrite time, and this can be included in a timeline. The value names can be used as pivot points into a timeline, adding something of a "third dimension" to timeline analysis. You can use this alongside timestamped information, such as Prefetch entries, Registry data, SRUM DB data, etc.
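Since neither the value names nor the data carry timestamps, any timeline entry has to be pinned to the AppSwitched key's LastWrite time. As a minimal sketch (in Python, using entirely hypothetical value data, as if already extracted from a user's NTUSER.DAT hive with your parser of choice), generating pipe-delimited TLN-style timeline entries might look like this:

```python
from datetime import datetime, timezone

def appswitched_tln(last_write, values, host="HOSTNAME", user=""):
    """Emit TLN-style lines (time|source|host|user|description) for
    AppSwitched value names. Every entry is pinned to the key's LastWrite
    time, since the individual values carry no timestamps of their own."""
    epoch = int(last_write.replace(tzinfo=timezone.utc).timestamp())
    lines = []
    for app, count in sorted(values.items()):
        lines.append(f"{epoch}|REG|{host}|{user}|"
                     f"AppSwitched: {app} (focus count: {count})")
    return lines

# Hypothetical data, as if already extracted from NTUSER.DAT
last_write = datetime(2025, 11, 20, 14, 3, 22)
values = {"C:\\Windows\\explorer.exe": 41, "msedge.exe": 187}
for line in appswitched_tln(last_write, values, host="WKS01", user="jdoe"):
    print(line)
```

Each value name then becomes a pivot point you can search for across the rest of the timeline.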

Unprecedented Complexity

I saw it again, just today. Another post on social media stating that IT teams/defenders "face unprecedented complexity". 

This one stood out amongst all of the posts proclaiming the need for agentic AI on the defender's side, due to how these agents were currently being employed on the attacker side. We hear things like "autonomous agents" being able to bring speed and scale to attacks. 

I will say that in my experience, defenders are always going to face complexity, but by its very nature, I would be very reluctant to call it "unprecedented". The reason for this is that cybersecurity is usually a bolted-on afterthought, one where the infrastructure already exists with a management culture that is completely against any sort of consolidated effort at overall "control". 

Most often, there's no single guiding policy or vision statement, specifically regarding cybersecurity. Many organizations and/or departments may not even fully understand their assets; what endpoints, both physical and virtual, are within their purview? How about applications running on those endpoints? And which of those assets are exposed to the public Internet, or even to access within the infrastructure itself, that don't need to be, or shouldn't be?

For example, some applications, such as backup or accounting solutions, will install MSSQL "under the hood". Is your IT team aware of this, and if so, have they taken steps to secure this installation? From experience, most don't...the answer to both questions is a resounding "IDK". 

Default installations of MSSQL will send login failure attempts to the Application Event Log, but not successful logins. That only really matters if you're monitoring your logs in some way; many aren't, and even with SIEM solutions, these events are sometimes not included in monitoring. I've seen endpoints with upwards of 45K (yes, you read that right...) failed login attempts to MSSQL recorded in the Application Event Log, but this is most often on systems where the application doesn't rely on the MSSQL installation having hard-coded account credentials. 
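As an illustration of what that monitoring gap looks like in practice, here's a hedged Python sketch that tallies MSSQL failed-login events (event ID 18456; the provider name is assumed to be "MSSQLSERVER" for a default instance) from records that have already been parsed out of the Application Event Log. The sample records are hypothetical:

```python
import re
from collections import Counter

# Hypothetical pre-parsed Application Event Log records; in practice,
# these would come from an .evtx parser or your SIEM.
records = [
    {"provider": "MSSQLSERVER", "event_id": 18456,
     "message": "Login failed for user 'sa'. [CLIENT: 10.2.0.15]"},
    {"provider": "MSSQLSERVER", "event_id": 18456,
     "message": "Login failed for user 'sa'. [CLIENT: 10.2.0.15]"},
    {"provider": "ESENT", "event_id": 326, "message": "..."},
]

def failed_mssql_logins(records):
    """Tally MSSQL failed-login events (ID 18456) by client IP."""
    counts = Counter()
    for rec in records:
        if rec["provider"] == "MSSQLSERVER" and rec["event_id"] == 18456:
            m = re.search(r"\[CLIENT: ([^\]]+)\]", rec["message"])
            counts[m.group(1) if m else "unknown"] += 1
    return counts

print(failed_mssql_logins(records))  # → Counter({'10.2.0.15': 2})
```

With 45K of these on a single endpoint, even this much aggregation immediately surfaces the brute forcing.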

The point is that basic IT hygiene starts with an accurate asset inventory, and is followed with attack surface reduction. Once you know what systems you have, and what applications should/need to be running on those systems, you can begin to address such things as logging configurations, monitoring (SIEM, EDR, etc.), etc. Attack surface reduction increases the efficacy of your controls, and minimizes noise from false positives. Not only does this ensure that you're alerted when a security incident does occur, but it also provides context, *and* it furthers your ability to quickly determine the nature, scope, and origin of the incident. I've seen organizations who are using a SIEM, but the Security Event Log on many of their Windows endpoints is not logging either successful or failed login attempts. 

This is going to be something of a hot take for many readers, but right now, AI (and specifically agentic AI) is a distraction for defenders. Yes, based on what we're seeing through social media, agentic AI may be furthering threat actors by facilitating faster attacks, at scale, but when it comes right down to it, it's all about the speed and volume; exposure of vulnerable systems remains the same. It's the finding and exploiting the vulnerable systems that's gotten faster.

The fact is that for the attacks to be successful, even with speed and scale, vulnerable systems need to be exposed to the Internet, and as such, accessed via some means. Yes, even endpoints not directly exposed to the Internet can be compromised via attacks such as phishing, SEO poisoning, or some other means that tricks the user into installing something that reaches out, but even some of these attacks can be inhibited or even obviated by some simple, free modifications to the endpoint itself. 

Thoughts on Analysis

Warning - before you get started reading this blog post, it's only fair that I warn you...in this post, I make the recommendation that you document your analysis process. If you find this traumatic, you might want to just move on. ;-)

Robert Jan Mora, a name that I've known for some time within the DFIR community, recently posted something pretty fascinating on LinkedIn, having to do with a case that he worked a bit ago. His post, in turn, leads to this article from The Wire, in India, which includes an interview with him. 

The "so what" of the article itself has to do with an initial report that states that no malware was present, and two subsequent reports, one of which is from Robert Jan's analysis, stating that malware was found on a USB device.

In his LinkedIn post, Robert Jan emphasizes the need for malware scans in a law enforcement environment. Back when I first started working cases, even internally, in the early 2000s, I recommended the same thing, albeit with AV scanners not installed on the imaged endpoint. Even more tools are available these days; for example, consider Yara, or something more along the lines of the Thor Scanner from Nextron Systems.

To learn a bit more about this man, check out this Forensic Focus interview.

Okay, but so what? Why does this matter, or why is it important?

When looking to see if malware exists, or did exist on an endpoint, there are different approaches you can take, perhaps using AV scanners, or something like Yara. Or, sometimes, we may not find the actual malware itself, maybe not right away, but we will see clear indications of its presence, or that it had executed. For example, if you've created a timeline of system activity, you may find a cluster of activity with different components derived from different sources, such as the Registry, Windows Event Log, file system, etc. Separately, these may not be entirely conclusive, but when viewed together, and within a narrow timeframe, they may provide clear indications of something having happened, much like finding footprints, disturbed earth, broken branches, and matted grass following the passage of an animal through a wilderness area.
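The "cluster of activity" idea can be sketched in code: given events already normalized from different sources, group any that fall within a narrow window of one another. This is a simplified illustration, not any particular tool's implementation, and the events are made up:

```python
from datetime import datetime, timedelta

def cluster_events(events, window=timedelta(seconds=60)):
    """Group (timestamp, source, description) events into clusters where
    consecutive events are no more than `window` apart."""
    clusters = []
    for ev in sorted(events, key=lambda e: e[0]):
        if clusters and ev[0] - clusters[-1][-1][0] <= window:
            clusters[-1].append(ev)
        else:
            clusters.append([ev])
    return clusters

# Hypothetical events from the file system, Registry, and Windows Event Log
events = [
    (datetime(2025, 11, 1, 10, 0, 5), "FILE", "dropper.exe created"),
    (datetime(2025, 11, 1, 10, 0, 30), "REG", "Run key value added"),
    (datetime(2025, 11, 1, 10, 0, 50), "EVT", "Service installed (7045)"),
    (datetime(2025, 11, 1, 14, 22, 0), "FILE", "unrelated doc opened"),
]
for c in cluster_events(events):
    print(len(c), "event(s) starting at", c[0][0])
```

Individually, each event is weak evidence; the three-event cluster inside a 45-second span is the "matted grass" that warrants a closer look.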

You can even find indications of malware having been present, or even executed, on a system, while the executable itself no longer exists on the endpoint at the time of the investigation. Using sources such as the SRUM DB and/or AmCache.hve and PCA log files may provide valuable insights, even if the malware has been removed from the endpoint.
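For example, on Windows 11 systems, the PCA logs include plain-text files under C:\Windows\appcompat\pca. Assuming the commonly described "path|timestamp" layout of PcaAppLaunchDic.txt, a quick parser might look like the following (the sample line is made up):

```python
from datetime import datetime

def parse_pca_launch_dic(text):
    """Parse PcaAppLaunchDic.txt-style lines ('exe_path|timestamp') into
    (path, datetime) pairs; timestamp layout assumed per public write-ups."""
    entries = []
    for line in text.splitlines():
        if "|" not in line:
            continue
        path, _, stamp = line.rpartition("|")
        entries.append((path, datetime.strptime(stamp.strip(),
                                                "%Y-%m-%d %H:%M:%S.%f")))
    return entries

sample = "C:\\Users\\Public\\stage2.exe|2025-10-30 08:15:42.118\n"
for path, ts in parse_pca_launch_dic(sample):
    print(ts.isoformat(), path)
```

Entries like this survive the deletion of the executable itself, which is exactly why they're worth folding into a timeline.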

When I was working PCI forensic investigations, our team had found that some of the malware used by the threat actors was based on a "compiled" Perl script, used to retrieve credit card numbers from a process memory dump. When run, the program extracted the Perl runtime from the EXE, and placed it in a particular folder path, and each time it was run, the path was slightly different; the final folder name changed. However, using file system metadata, we were able to determine a very explicit timeline of the compromise.

Or, we may find indications that an attempt was made to execute the malware, but it failed, either because it was detected and quarantined, or because something caused it to crash. PCA log files and application pop-up/WER messages in the Application Event Log have proved to be very illuminating on some of the cases with which I've engaged.

The key to all of this is to document your analysis process; if you don't know what you did, you can't make modifications or adjustments to the process. For example, if you have a memory dump, how did you go about searching for indications of a specific IP address? Volatility? Bulk_extractor? Don't remember/don't know? If you mounted the image and scanned it with AV, which one/version did you use? Knowing what you did, and what you found, means that you can then make adjustments and improvements to the process over time.
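Documenting the process doesn't have to be heavyweight; even an append-only log of tool, version, command, and finding gets you most of the way there. A hypothetical sketch (the tool version and command line shown are examples, not prescriptions):

```python
import json
from datetime import datetime, timezone

def log_step(case_log, tool, version, command, finding):
    """Append one documented analysis step; in practice, write this out
    as JSONL to the case folder so it survives the engagement."""
    case_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "tool": tool, "version": version,
        "command": command, "finding": finding,
    })

case_log = []
log_step(case_log, "volatility3", "2.x (example)",
         "vol -f memdump.raw windows.netscan",
         "no connections to the suspect IP observed")
print(json.dumps(case_log[-1], indent=2))
```

Six months later, "no hits" plus the exact tool and command is an answer; "no hits" alone is not.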

Thursday, November 13, 2025

Images

In writing Investigating Windows Systems, published in 2018, I made use of publicly available images found on the Internet. Some were images posted as examples of techniques, others were posted by professors running courses, and some were from CTFs. If you have read the book, you'll know that for each of the images, I either used or made a more "real world" scenario, something that aligned much more closely to my experiences over two and a half decades of DF/IR work, a good bit of which was consulting. During that time, and at several different companies, we'd have an "IR hotline" folks could call, and request computer incident response...this is something many firms continue to do. Those firms also very often had "intake forms", documents an analyst would fill out with pertinent information from a caller or customer, which very often included investigative goals. 

Over the years, the sites from which I downloaded some of the images I used have disappeared, which is unfortunate, but not a deal killer. The intent and value of the book isn't about the images, but rather, about the process. The processes used, even those where an image of a Windows XP system was used, can be replicated, developed, and extended for any Windows OS. 

Brett Shavers recently posted on LinkedIn, pointing to the repository of available images he's compiled at DFIR.Training.

Over at Stark4N6, we see another repository of images, this one called The Evidence Locker. Here's Kevin's LinkedIn post with a description of the site.

If you're not interested in downloading full or partial images, I recently took a look at an infostealer sample, from the perspective of file formats. Fortunately, the OP provided a hash for the sample they looked at, which allowed me to find a site from which I could download a copy of the sample. I'm not a malware RE guy, but what I do try to do is follow Jesse Kornblum's example of using all the parts of the buffalo, and exploit file format metadata for threat intel purposes.

Wednesday, November 12, 2025

File Formats

I'm a huge fan of MS file formats, mostly because they provide for the possibility of an immense (and often untapped, unexploited) amount of metadata. Anyone who's followed me for any length of time, or has read my blog, knows that I'm a huge fan of file formats such as Registry hives (and non-Registry files with the same structure), as well as LNK files.

Historically, lots of different MS file formats have contained significant, and often damning, metadata. Anyone remember the issue of MSWord metadata that the Blair administration encountered over two decades ago? I shared some information related to coding, using the file as an exemplar, in the WindowsIR blog.

I ran across a LinkedIn post from Maurice Fielenbach, where he talked about an infostealer bundled in an MSI file. Interestingly enough, MSI files are structured storage files, following the OLE format, albeit with different streams, the same as MSWord docs and JumpList files.

I'm not a malware RE guy, so I don't have a specialized tool set for parsing these kinds of files. I generally start with the MiTeC Structured Storage Viewer, something I've used before. In the image to the left, you can see the SummaryInformation block parsed and visible in MSSV. 

If you read through the comments, MalCat is recommended as a tool to use to run or click through the structure of this file format, and others. This looks like a great possibility, and to be honest, if you're into malware analysis, the MalCat blog looks really informative, as well. If you're interested in a sample to work with yourself, I found one at MalwareBazaar.
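If you want to poke at the structured storage format programmatically rather than through a GUI, the third-party Python "olefile" library can list the streams; OLE property-set streams, such as SummaryInformation, have names that begin with the 0x05 control character. A sketch (the file path is hypothetical):

```python
def is_property_set(stream_name):
    """OLE property-set streams (e.g. SummaryInformation) have names
    beginning with the 0x05 control character."""
    return stream_name.startswith("\x05")

def list_streams(path):
    """List streams in an OLE structured storage file, such as an MSI."""
    import olefile  # third-party: pip install olefile
    with olefile.OleFileIO(path) as ole:
        for entry in ole.listdir(streams=True, storages=False):
            kind = "property set" if is_property_set(entry[-1]) else "stream"
            print(f"{kind}: {'/'.join(entry)!r}")

# list_streams("sample.msi")  # e.g., a sample pulled from MalwareBazaar
print(is_property_set("\x05SummaryInformation"))  # → True
```

The same approach works against MSWord OLE docs and JumpLists, since they share the underlying format.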

In his LinkedIn post, Maurice said, "I highly recommend taking a deeper look at the MSI file format itself and familiarizing yourself with common installer frameworks such as WiX." I'd agree, particularly given that the test.msi image shows that the creating application was the "WiX Toolset".

Regardless of the tools you use, and the area of cybersecurity that you're in or focused on, information like this can expand your knowledge base as to what's possible, or by providing new directions for study or skill expansion. This is not only valuable as a malware or DF analyst, but also for threat intel analysts, as this information can add context and granularity to the intel you're developing.

Monday, November 10, 2025

What We Value

Over the past couple of days, I've had images pop up in my feed showing people's workstations, most often with multiple screens. I've seen various configurations, some with three or more screens, but the other thing I've noted is that for those social media profiles, there's little else that's shared. 

Yep, they have a nifty configuration, one that they want to show off, but there's little to no attempt to show off their skills and abilities, what they've actually done with that configuration.

Within cybersecurity, we see a lot of that, as well, with certifications, commercial product licenses, and even open-source distributions - you know, those DVDs or VMs where someone else pulls together a list of free and open-source tools, and puts a fancy background and/or menu system on top. A lot of these distros often derive their "value" from how new they are, or how many tools are included...but rarely do we see, "...yeah, and this is how I used this distro to solve a case...", or anything close to that.

We often collect these things, like badges or challenge coins, but is someone really a better analyst if they have 4 instead of 3 screens? I honestly don't know...never having been able to afford multiple screens myself, I've spent the past 28 yrs operating off of a single laptop screen. And yes, I have used some of those open-source tools, but I'm not a huge fan of distributions. 

All of this means that I tend to value something completely different from what the more vocal folks within the industry tend to value. I look to folks trying to develop and share processes and experiences that have the potential to change the direction of the industry.

Monday, November 03, 2025

Analysis Playbooks: USB

In 2005, Cory Altheide and I published the first peer-reviewed paper to address tracking USB devices on Windows systems. Over the years, it's been pretty amazing to see not only the artifacts expand and evolve, but to also see folks pick up the baton and carry on with describing what amounts to a "playbook" for developing this information as part of an investigation. Not only did malware such as Raspberry Robin propagate via USB devices, but with the rise of other devices that can be attached via a USB connection while using different protocols, it became even more important to operationalize this analysis in a playbook. 

After all, why not take the inefficient, error-prone, purely manual aspects out of the parsing by automating it?

Morad R. put together a series of posts that outline different data/artifact sources you can examine to identify USB devices that had been connected to the endpoint, as well as attribute the use of the devices to a particular user. This series of posts illustrates some steps that begin the process of pulling back the veil, if you will, to unraveling the use of USB devices on Windows systems. While there is definitely more to be done and shared, the important common factor across the posts is the use of timelines. 

USB Forensics, pt 1: Unmasking the connected device - Focuses on the System Registry hive, and extracting time stamps from Properties key values. The focus on a timeline is a great way to get started on this, as doing so takes the analyst directly to context. 

However, by focusing on just the USB and USBStor keys in the System hive, other devices (smartphones, digital cameras) are missed. That said, it's not really an issue, per se, as the same playbook can be applied to the appropriate Registry keys.
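For reference, the timestamps in the device Properties key values are stored as 64-bit FILETIME values (100-nanosecond intervals since January 1, 1601 UTC), so converting them for use in a timeline is straightforward:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft):
    """Convert a Windows FILETIME (100-ns intervals since 1601-01-01 UTC),
    as found in device Properties key values, to a Python datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=ft // 10)

# 116444736000000000 is the Unix epoch expressed as a FILETIME
print(filetime_to_dt(116444736000000000))  # → 1970-01-01 00:00:00+00:00
```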

USB Forensics, pt 2: Mapping device to user & drive letter - focuses on the user's NTUSER.DAT, but doesn't mention other artifacts, such as shellbags, RecentDocs, UserAssist, etc., that could be used to correlate additional user activity with the device, particularly via a timeline. 

RegRipper still makes use of "profiles", which is the term I used to describe what became known as "playbooks". Or, another way to look at it is that you can implement playbooks through these profiles.
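For those who haven't used them, a RegRipper profile is just a plain-text file listing one plugin per line; the plugin names below are examples drawn from the USB-related plugins that ship with RegRipper, and the '#' comment handling is an assumption for illustration:

```python
def parse_profile(text):
    """Parse a RegRipper-style profile: one plugin name per line,
    blank lines and (assumed) '#' comment lines skipped."""
    plugins = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            plugins.append(line)
    return plugins

profile = """\
# USB playbook profile (hypothetical)
usbstor
mountdev
devclass
"""
print(parse_profile(profile))  # → ['usbstor', 'mountdev', 'devclass']
```

The profile *is* the playbook: editable, versionable, and shareable as plain text.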

USB Forensics, pt 3: The Event Log timeline - the focus on a timeline continues, which is good. However, the logs are technically referred to as "Windows Event Logs"; "Event Logs" refers to the Windows 2000, XP, and 2003 era logs. I understand, I 'get it', that this is a distinction without a difference for most analysts, particularly those who've never had to work with Event Log records from older systems, and are only familiar with the new format implemented as of Windows Vista. 

All three of these posts, together, serve as a good foundation, and a great first step toward addressing USB-connected devices on Windows endpoints. Just as the field has grown and expanded since 2005, it will continue to do so in the future. In addition to providing the data sources, the underlying reliance on (or at least pointing in the direction of) timelines is, I believe, foundational. Start with a timeline; do not let a timeline be something you assemble manually after everything else is done. We can always add or remove data sources, create new RegRipper or Events Ripper plugins, etc., but creating a timeline should be "first principles". 

In my current role, I don't have a need to determine things such as USB devices connected to a Windows system, but if I did, I'd definitely have Events Ripper plugins to extract that information, maybe even correlate it, into an easy-to-view manner. 

This is just some of the content from my blog that explicitly addresses USB devices: