Monday, December 29, 2025

Question on Open Source Tools

I received a question recently, one I receive every now and again, asking if there are any updates to an open source tool I created a while back, called "RegRipper". This time, the question came in this way:

Is there any update on reg tool?

After a little more back and forth, I was able to tease out that the question was about RegRipper, and was really asking about Registry key updates for Windows 11. My response was the usual: everything's online. After all, in addition to my blog, the GitHub repo is publicly available, so anyone can take a look at it and see what's new. I mean, I don't have to do your Googling for you. If you don't see something specific that you're looking for, RegRipper was designed from the beginning to be extensible; you can either write your own plugin, or ask for assistance in doing so (I've turned working plugins around in an hour or less). If you choose to go that route (most don't), it usually helps if you can clearly articulate what you're looking for, and even more so if you can provide a Registry hive for testing.

But the question itself got me to thinking, is this what we've come to?

The answer is pretty simple. No, this is how things have always been; the initial interest in any tool, especially one that's freely available, is, "is there anything new and shiny?" More so, the majority of this attention seems to be just asking the high level question, with no articulation of anything specific. Just, "what's new?"

In the time I've been in the industry, there's been an inordinate focus on what's "new" over mastering what's already out there and available.

On the topic of what's new, in Aug 2023, I published this blog post regarding updates I'd made to RegRipper, specifically version 4.0. While no individual plugins were called out in the post, and nothing addressed Registry keys or values specific to Windows 11, I did describe writing a plugin that produces output in JSON format, and incorporating Yara into RegRipper.
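
As an aside, the value of a JSON output capability is that plugin results become machine-readable; downstream tooling can consume records rather than scrape text. The sketch below is purely illustrative; the field names and structure are my own invention for this example, not RegRipper's actual output schema:

```python
import json

# Illustrative only: the kind of structured record a JSON output mode enables.
# These field names are hypothetical, not RegRipper's actual schema.
record = {
    "plugin": "run",  # hypothetical plugin name
    "hive": "Software",
    "key": "Microsoft\\Windows\\CurrentVersion\\Run",
    "lastwrite": "2023-08-01T12:00:00Z",
    "values": [{"name": "Updater", "data": "C:\\Temp\\update.exe"}],
}

# Serialize for consumption by downstream tooling (SIEM, timeline, etc.)
print(json.dumps(record, indent=2))
```

From there, records can be filtered, aggregated, or fed into other tooling without any text parsing.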

In October 2024, I spoke at the inaugural From The Source conference, basically walking through my Aug 2023 blog post. 

My question is, is 2026 the year that some commercial tool provider will want to incorporate RegRipper functionality directly into the application?

In 2018, while I was at Nuix, extensions were created to incorporate Yara and RegRipper into the Nuix Workstation product. Here's my blog post describing this capability. I know that in June 2022, RegRipper v3.0 was added to Paraben's E3 product, albeit at that time, on a very limited basis. 

Thursday, December 11, 2025

Perspectives on Cybersecurity

I'm not a fan of many podcasts. I do like a conversational style, and there are some podcasts that I listen to, albeit not on a regular basis, and not for technical content. They're mostly about either "easter eggs" in Marvel or DC movies, or the conspiracies or speculation about an upcoming movie. Yeah, I know what you're thinking...why spoil it? The fact of the matter is that the way things are going with these superhero movies, it's going to be 2 or more years before the movie even comes out, and there's no way I'm going to remember the podcast.

When it comes to technical content, however, my podcast or video preferences are much more stringent. I'm not a big fan of gratuitous small talk and hilarity; for technical content, I take a more focused approach, and would tend to look for show notes, rather than sit through chatter, ads, and shoutz to sponsors.

Igor Tsyganskiy posted on LinkedIn about having listened to a recent Joe Rogan podcast, where Jensen Huang was the guest. I haven't listened to the podcast - Igor doesn't provide a link to make the interview easily accessible - but Igor did provide his own synopsis of the podcast content. Based on his comments, what I wanted to do here in this blog post is share my views, via my own aperture, with respect to Igor's comments.

Let me start by sharing my aperture. For those readers who are not familiar with my background, I spent the first 8 yrs following college working in a technical field in the military. I was trained to provide technical communications services to non- and less-technical folks, so it was something of a consulting role. I did more than just that, but that was what I was trained for, and represents most of my experience. Following my military time, I moved to the private sector, and began providing information security consulting services, starting with vulnerability assessments, and in some cases, "war dialing" (I'll leave that for you to Google). Around 2000 or so, I moved into providing DFIR services, and spent the better part of the next quarter century in consulting, responding "on call" to cybersecurity incidents. During that time, I've worked closely with SOCs and MSSPs, I've been a SOC analyst, I've run a SOC, and I currently work for an MDR.

All of this is to say that my perspective is not the same as Igor's, nor Jensen's; rather than perhaps seeing the final output from a high level, I've spent the better part of my civilian career at the boots-on-the-ground level, watching the sausage get made, or making the sausage myself. 

Now, on to Igor's comments:

AI presents unique challenges in the game of attack vs defense.

Does it? I'm not entirely sure, nor am I convinced, that it does. I do think that based on recent events, threat actors using AI to scale attacks, by pointing AI at the Internet and having it return a list of compromised systems or shells, is a challenge, but I do not see how it changes the defender's current status beyond volume, speed, and scale of the attacks.

Threat actors need to find a path from point A (their starting point) to point B (their target). These points are known to the threat actor (TA). Only the path is not known. 

Defenders need to identify A, deduce B, and protect every path from point A to target B.

The first task is computationally less costly than the second task.

On the surface, I agree with the "computationally less costly" statement with respect to the threat actors, and that just means that the relationship between threat actors and defenders is asymmetric. However, there's something here that neither Jensen nor Igor appears to add to the discussion: the "why". Why is there this asymmetry?

The simplest answer is that while defenders should have the "home field advantage", they often don't. Most organizations are built on default installations of operating systems and applications, often with no accurate asset inventory (of systems and of applications), and without any attempts at attack surface reduction. It's not that threat actors have some magical intuition or mystical abilities, and are able to discern the network infrastructure from the outside...most times, if you look closely at the data, they do things because no one told them that they couldn't.

This brings something important to mind. About a decade ago, I ran an incident response for a manufacturing company with a global presence, including offices in Germany. As we set about deploying our tooling and scoping the incident, we saw that the threat actor had moved to endpoints in the office in Germany; however, we were not able to get our tooling installed, not because of technical issues, but due to German privacy laws. We could "see" that the threat actor was moving about these systems, establishing persistence, and stealing data, but there was nothing we could do because we could not deploy our toolset. This was clearly an artificiality that the threat actor ignored, because...well...who was going to stop them? Those working on those systems in that office were more concerned that anything found during the course of the incident response was going to be used against them, than they were about data theft, or possibly even ransomware.

Sometimes during an incident response (IR) engagement, we'd find systems that no one recognized or knew about. Sometimes, we'd have a difficult time finding the actual "owner" of either the system or the application it was running. We'd find indications of Terminal Services running on workstations, or clear evidence that critical servers/domain controllers were regularly used by admins to browse the web, answer email (corporate and personal), etc. Threat actors simply take advantage of this lack of compartmentalization by trying to connect to another system, and finding that their efforts succeed.

Jensen makes a good point that defenders historically work together and therefore can most effectively defend. I agree.  

I do not. 

Based on my aperture, I have not seen "defenders historically work together"...not where the rubber meets the road, "at the coalface", as it were. Yes, perhaps from a much higher view...say that of a founder or CEO...looking across the vast landscape, we see gov't (CISA) and civilian agencies "working together" to make information available. 

However, when it comes to "boots on the ground", where I and others are looking over the shoulder of a system admin (or they're standing over mine), the perspective is entirely different.

I've been on IR engagements where local staff were happy to see me, and enthusiastic to work with me, and even sought out knowledge transfer. But I've also been on engagements that were the complete opposite, where local admins were not just suspicious of the IR team, but in some cases, actively worked against us.

During one IR, my point of contact asked me to take an advisor role, to direct his staff and engage them, having them do the work, while sharing the "why". At one point, I asked an admin to copy the Event Logs off of a Windows XP system, specifically asking him to NOT export the logs to text format, but to instead use the "copy" command. The admin nodded that he understood, so I handed him a thumb drive, and went about addressing other issues of the response. Later, when I got back to my room, I tried to run my tools against the files that had been copied to the thumb drive, but none of them worked. I opened the thumb drive, saw the three ".evt" files, and saw that they were not zero bytes in size. I then opened one of the .evt files in a hex editor, and could immediately see that the logs had indeed been exported to text, and then renamed to ".evt". I was able to do some work with the files that I had, but I was disappointed that I could not bring my toolset to bear against the files.
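
As an aside, that check is simple to automate. Windows 2000/XP/2003 Event Log files begin with a 48-byte header carrying the signature "LfLe" at offset 4, so a few lines of code can tell a genuine binary .evt file from a text export that's simply been renamed. A minimal sketch (the function name is mine, and this isn't part of any tool mentioned here):

```python
# Minimal sketch: distinguish a genuine Windows 2000/XP/2003 Event Log (.evt)
# file from a text export renamed to ".evt". Real .evt files begin with a
# 48-byte header whose signature bytes "LfLe" appear at offset 4.

def looks_like_evt(path):
    """Return True if the file carries the binary .evt header signature."""
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"LfLe"
```

Had the files on that thumb drive been copied as asked, they would have passed this check; the renamed text exports I was handed would not have.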

This is just one example, but for every IR engagement where I'd show up and be greeted warmly, there were one or two where the local IT staff was ambivalent, at best, or quietly hostile about my presence. But sometimes, it's not about a consultant being there; something I've seen multiple times over the years is that regardless of what's said or how it's said, be it a SOC ticket, or a phone call (or both), there are times...a lot of times...when an organization will shrug off the "security incident" as, "oh, yeah, that's this admin or that contractor doing their work", when it clearly isn't.

My point is that I'm not entirely sure that from my aperture, the statement "defenders historically work together" is historically accurate.

2. Defenders need magnitude more GPU cycles to defend 

Once again...why is that?

This goes back to the environment they're defending, which many times lacks an accurate "map", and most often has not been subject to an overall vision or policy, and hence, there's no attack surface reduction. Windows 10 and 11 endpoints are running Terminal Services, and applications are installed with no one realizing that a default installation of MSSQL was also installed along with the application, with default, hard-coded credentials (no brute force password guessing required). 

The detection and response eco-system is more expansive and dynamic than most folks seem to realize. While there is "the way things should be" at each point along the eco-system path, reality often belies this assumption.

Defenders require more GPU cycles because when an alert is received, they need to understand the context of that alert, which many times is simply not available. Or, the analyst who receives the alert misinterprets it, and takes the wrong action (i.e., deems it a false positive, etc.), and the effects snowball. There are more than a few points along the eco-system path where the detection or response can go off the rails, leaving the impacted organization in a much worse position than they were before the alert was generated.

Wednesday, December 10, 2025

Releasing Open Source Tools to the Community

Every now and then, I get contacted by someone who tells me that they used the open source tools I've released in either a college course they took, or in a course provided by one of the many training vendors in the industry. I even once responded to an incident for a large energy sector organization, and while I was orienting myself to the incident, I looked over one of their analyst's shoulders and recognized the output of the tool they were using...it was one of mine.

What I've seen pretty consistently throughout my time in the industry is that once tools are known, people begin downloading them, and including them in their distros/toolsets, and some even add them to training courses (colleges, LE, the federal gov't, private sector, etc.). However, they do so without ever truly understanding the nature of the tool, how and why it was designed, or what problem it was intended to solve. Further, they rarely (to my knowledge) contact the author to understand what went into the development of the tool, nor how the tool was intended to be used. For training courses in particular, those providing the materials and instruction do so without fully understanding how the tool author conducts their own investigations, and therefore, how the open source tool fits into their overall investigative process. As a result, the instruction around that tool is often a shadow of how the tool was intended to be used; what you're getting in these training courses is the instructor's perception of how the tool can be used.

I've blogged a couple of times regarding various distros of tools that include RegRipper; for example, here and here. That second post includes a brief mention of the fact that, to a very limited extent, RegRipper v3.0 was included in Paraben's E3 product back in 2022; that is to say that the full capability of RegRipper wasn't implemented at the time, just a limited subset of plugins.

I recently heard from someone that Blue Cape Security includes RegRipper in their Practical Windows Forensics training course. If you look at the course content that's provided, the use of RegRipper starts in section 5.1 (Windows Registry Analysis), but Registry parsing (not actual analysis) itself continues into sections 5.2 (User Behavior Analysis), and 5.6 (Analyzing Evidence of Program Execution...I know, don't get me started...) and 5.7 (Finding Evidence of Persistence Mechanisms).

It seems that others use RegRipper, as well. PluralSight has an "OS Analysis with RegRipper" course, and Hackviser has a Windows Registry Forensic Analysis course, both including RegRipper.

I know what you're thinking...if I'm going to "complain" about this, why not do something about it? 

Well, I'm not complaining. That's not it, at all. All I'm saying is that if you take a training course that involves the use of open source tools that the vendor/instructor has collected, you're getting their perspective of the use of the tool, and likely not the full benefit of the "why" behind the tool. 

And yes, if I had even a hint that analysts and examiners were interested in really, truly understanding how to go about analyzing the Windows Registry, I'd develop and deliver a course, or series of courses, myself. As it is, my perspective is that folks are pretty happy with what they know.