Friday, October 31, 2025

Registry Analysis

First off, what is "analysis"?

I submit that "analysis" is what happens when an examiner has investigative goals and context, and applies them, along with their knowledge and experience, to a data set. This can be anything from a physical image of a mobile device, to a triage collection from an endpoint, to logs from one or more devices. 

IMHO, this distinction is valuable, because what we often call "analysis" is really nothing more than parsing. For example, someone may recommend (or state as part of their process) that we open a Registry hive in a viewer and navigate to a particular path by clicking through the UI. Now, there are ways this could be accomplished in a much more efficient manner (I didn't say "easier", because the command line isn't "easier" for some), but in the end, whether looking for one value or dumping all of the values from a user's Run key, it's still just parsing; there's no "analysis" unless the investigator can articulate how the action and their findings apply to their goals. 

Registry Analysis
That being said, again...what we most often think of, or refer to as, "Registry analysis" really amounts to nothing more than simple parsing. Few of us are actually conducting analysis of the files that comprise the Windows Registry, largely because knowledge of and experience with these files is often somewhat limited. For example...and you don't need to raise your hands...how many analysts are incorporating Registry hive file metadata into their timelines? Or incorporating deleted keys and values into their overall analysis plan?
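To illustrate what "hive file metadata" means here: the hive file header itself carries a last-written FILETIME and a pair of sequence numbers (unequal numbers indicate unflushed transactions). A minimal sketch of pulling that metadata, based on the documented "regf" header layout:

```python
import struct
from datetime import datetime, timedelta, timezone

def hive_header_metadata(data: bytes) -> dict:
    """Parse the start of a 'regf' hive file header.

    Offsets per the documented hive format: 4-byte signature at 0,
    primary/secondary sequence numbers at 4 and 8, and the
    last-written FILETIME at 12.
    """
    sig, seq1, seq2, filetime = struct.unpack_from("<4sIIQ", data, 0)
    if sig != b"regf":
        raise ValueError("not a registry hive (missing 'regf' signature)")
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    last_written = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=filetime // 10
    )
    return {
        "last_written": last_written,       # candidate timeline entry
        "dirty": seq1 != seq2,              # unequal -> hive not cleanly flushed
    }
```

That `last_written` value is exactly the sort of data point that can (and arguably should) land in a timeline alongside key LastWrite times.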

The Windows Registry includes a great deal of information related to the configuration of the endpoint, and for each user, contains information related to that user's activities. Not only does the Registry contain considerable metadata, but some of the values found within the Registry can contain valuable information regarding pre-existing states/conditions of the endpoint. For example, what we refer to as "shellbag" artifacts are made up of shell items, strung together in shell item ID lists. Some of these shell items contain considerable metadata of their own, such as time stamps from folders, preserved at the time the "shellbag" artifacts were created. 
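Those embedded folder time stamps are stored as 16-bit DOS date/time pairs (2-second resolution), so decoding them is straightforward. A minimal sketch of the decode step, assuming you've already extracted the two 16-bit words from a shell item:

```python
from datetime import datetime

def decode_dosdate(date_word: int, time_word: int) -> datetime:
    """Decode a 16-bit DOS date and time pair, the format used for the
    folder time stamps embedded in shellbag shell items.

    Date word: bits 0-4 day, 5-8 month, 9-15 year offset from 1980.
    Time word: bits 0-4 seconds/2, 5-10 minutes, 11-15 hours.
    """
    day = date_word & 0x1F
    month = (date_word >> 5) & 0x0F
    year = ((date_word >> 9) & 0x7F) + 1980
    seconds = (time_word & 0x1F) * 2          # 2-second granularity
    minutes = (time_word >> 5) & 0x3F
    hours = (time_word >> 11) & 0x1F
    return datetime(year, month, day, hours, minutes, seconds)
```

The 2-second granularity is worth remembering when comparing these values against NTFS FILETIME values in a timeline.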

Something else to consider is that very often, information stored in the Registry will persist beyond the point where applications and files are deleted/removed from the endpoint. 

Over the years, the Windows Registry has gone through changes, but the analysis process remains the same, in part because the binary format of the Registry remains consistent. What we traditionally refer to as "Registry analysis" now extends beyond "the usual" hive files that make up the Windows Registry, to the AmCache.hve file, as well as similarly formatted files associated with AppX packages. Ogmini's recent blog post regarding Registry hives associated with AppX packages points to Mari's ZeroFox blog on the topic, as well as Chris's Cyber Triage contribution, in addition to discussing sources beyond the "traditional" Registry. 

As these files are of the same format, there's no reason to believe that what we learned about the traditional hive files...metadata, what constitutes a "deleted" key or value, etc...needs to change when it comes to these files. We can parse them for keys and values, such as looking for recently accessed documents in the AppX versions of Notepad or WordPad, just as we can parse these files into a timeline.

Parting Thoughts
Limitations or shortcomings in the knowledge and experience of individual analysts can (and do) lead to analysis and intel "poverty", and those shortcomings have a cascading impact. To overcome them, we need to work together, in mentor/mentee relationships, to build better, more applicable processes that allow us to fill these gaps. Operationalizing "corporate" knowledge for the long term is the key, as it allows knowledge to be shared without the requirement for commensurate experience. 

Monday, October 27, 2025

Analyzing Ransomware

Not long ago, I ran across this LinkedIn post on analyzing a ransomware executable, which led to this HexaStrike post. The HexaStrike post covers analyzing an AI-generated ransomware variant, which (to be honest) is not something I'm normally interested in; however, in this case, the blog contained the following statement that caught my interest:

People often ask: “Why analyze ransomware? It’s destructive; by the time analysis happens, it’s too late”. That’s only half true. Analysis matters because sometimes samples exploit bugs to spread or escalate (think WannaCry/EternalBlue), they often ship persistence or exfiltration tricks that translate into detection rules, custom crypto occasionally ships with fixable flaws allowing recovering from ransomware, infrastructure and dev breadcrumbs surface through pathnames and URLs, and, being honest, it’s fun.

For anyone who's been following me for any length of time, here on this blog or on LinkedIn, you'll know that "dev breadcrumbs" are something that I'm very, VERY interested in. I tend to refer to them as "toolmarks" but "dev breadcrumbs" works just as well. 

Something else...in my experience, some malware RE write-ups are devoid of the types of things mentioned in the above quote, particularly anything that "translates into detection rules". I know some are going to think, "yeah, but like the quote also says, by the time we see this stuff executing, it's too late...", but that isn't always the case. For example, if you're able to write a detection rule that says, "...when we see an [un]signed process act as the parent for the following processes in quick succession, kill the parent process, log out the session owner, isolate the endpoint, and generate an alert...", then this sort of thing can be very valuable. 
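The core of that kind of rule can be sketched simply: flag any parent that spawns too many children too quickly. The event tuples, field names, and thresholds below are hypothetical stand-ins for whatever your EDR's process-creation telemetry actually provides, and the thresholds are illustrative, not tuned:

```python
from collections import defaultdict

def rapid_spawn_alerts(events, max_children=5, window_seconds=10):
    """Flag any parent process that spawns `max_children` or more
    children within `window_seconds`.

    `events` is a list of (timestamp_seconds, parent_pid, child_name)
    tuples; returns the parent PIDs that trip the threshold.
    """
    spawns = defaultdict(list)
    for ts, parent, child in sorted(events):
        spawns[parent].append(ts)
    alerts = []
    for parent, times in spawns.items():
        # slide a window of max_children consecutive spawns
        for i in range(len(times) - max_children + 1):
            if times[i + max_children - 1] - times[i] <= window_seconds:
                alerts.append(parent)
                break
    return alerts
```

In a production rule, the alert would then drive the response actions described above (kill the parent, isolate the endpoint, etc.), plus the signed/unsigned check on the parent image.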

Also, specific to ransomware, if a flaw is found in the encryption process, it may help with recovery without paying the ransom. For example, if the encryption process looks for a specific file or some other indicator, then that indicator can act as a "vaccine" of sorts; simply create it (say, an empty file) on the endpoint, and if the ransomware is launched against that endpoint, it will find the indicator (file) and, based on its encryption logic, not encrypt files on that endpoint.
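Modeling that gate logic makes clear why an empty file is enough: the sample only tests for the indicator's existence, not its contents. A sketch of the check, where the marker filename is entirely hypothetical (the real indicator depends on the specific family's logic, recovered through analysis):

```python
import os

def should_encrypt(target_dir, marker=".vaccine_marker"):
    """Model of a ransomware 'vaccine' check: if the indicator file the
    sample looks for exists in the target directory, encryption is
    skipped. The marker name here is a hypothetical placeholder.
    """
    return not os.path.exists(os.path.join(target_dir, marker))
```

This is also why such flaws are fragile defensively; the moment the threat actor changes or removes the check, the vaccine stops working.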

This is not a new idea, to be sure. Back in 2016, Kevin Strickland authored a blog post titled, "The Continuing Evolution of Samas Ransomware", showing how the ransomware executable changed over time, providing insight not just into the thought processes of the threat actors and the evolution of their tactics, but also into detection opportunities.