Thursday, December 11, 2025

Perspectives on Cybersecurity

I'm not a fan of many podcasts. I do like a conversational style, and there are some podcasts that I listen to, albeit not on a regular basis, and not for technical content. They're mostly about either "easter eggs" in Marvel or DC movies, or the conspiracies or speculation about an upcoming movie. Yeah, I know what you're thinking...why spoil it? The fact of the matter is that the way things are going with these superhero movies, it's going to be 2 or more years before the movie even comes out, and there's no way I'm going to remember the podcast.

When it comes to technical content, however, my podcast or video preferences are much more stringent. I'm not a big fan of gratuitous small talk and hilarity; for technical content, I take a more focused approach, and would tend to look for show notes, rather than sit through chatter, ads, and shoutz to sponsors.

Igor Tsyganskiy posted on LinkedIn about having listened to a recent Joe Rogan podcast, where Jensen Huang was the guest. I haven't listened to the podcast - Igor didn't provide a link to make the interview easily accessible - but he did provide a synopsis of the podcast content. Based on his comments, what I wanted to do here in this blog post is share my views, through my own aperture, on Igor's comments.

Let me start by sharing my aperture. For those readers who are not familiar with my background, I spent the first eight years following college working in a technical field in the military. I was trained to provide technical communications services to non- and less-technical folks, so it was something of a consulting role. I did more than just that, but it was what I was trained for, and represents most of my experience. Following my military time, I moved to the private sector and began providing information security consulting services, starting with vulnerability assessments and, in some cases, "war dialing" (I'll leave that for you to Google). Around 2000 or so, I moved into providing DFIR services, and spent the better part of the next quarter century in consulting, responding "on call" to cybersecurity incidents. During that time, I've worked closely with SOCs and MSSPs, I've been a SOC analyst, I've run a SOC, and I currently work for an MDR.

All of this is to say that my perspective is not the same as Igor's, nor Jensen's; rather than perhaps seeing the final output from a high level, I've spent the better part of my civilian career at the boots-on-the-ground level, watching the sausage get made, or making the sausage myself. 

Now, on to Igor's comments:

AI presents unique challenges in the game of attack vs defense.

Does it? I'm not entirely sure, nor am I convinced, that it does. Based on recent events, I do think that threat actors using AI to scale attacks - pointing AI at the Internet and having it return a list of compromised systems or shells - is a challenge, but I do not see how it changes the defender's current status beyond the volume, speed, and scale of the attacks.

Threat actors need to find a path from point A (their starting point) to point B (their target). These points are known to the threat actor (TA). Only the path is not known. 

Defenders need to identify A, deduce B, and protect every path from point A to target B. 

The first task is computationally less costly than the second task.

On the surface, I agree with the "computationally less costly" statement with respect to the threat actors, and that just means that the disparity between threat actors and defenders is asymmetric. However, there's something here that neither Jensen nor Igor appears to add to the discussion; the "why". Why is there this asymmetry? 

The simplest answer is that while defenders should have the "home field advantage", they often don't. Most organizations are built on default installations of operating systems and applications, often with no accurate asset inventory (of systems and of applications), and without any attempts at attack surface reduction. It's not that threat actors have some magical intuition or mystical abilities, and are able to discern the network infrastructure from the outside...most times, if you look closely at the data, they do things because no one told them that they couldn't.

This brings something important to mind. About a decade ago, I ran an incident response for a manufacturing company with a global presence, including offices in Germany. As we set about deploying our tooling and scoping the incident, we saw that the threat actor had moved to endpoints in the office in Germany; however, we were not able to get our tooling installed, not because of technical issues, but due to German privacy laws. We could "see" that the threat actor was moving about these systems, establishing persistence, and stealing data, but there was nothing we could do because we could not deploy our toolset. This was clearly an artificiality that the threat actor ignored, because...well...who was going to stop them? Those working on those systems in that office were more concerned that anything found during the course of the incident response was going to be used against them, than they were about data theft, or possibly even ransomware. 

Sometimes during an incident response (IR) engagement, we'd find systems that no one recognized or knew about. Sometimes, we'd have a difficult time finding the actual "owner", of either the system or the application it was running. We'd find indications of Terminal Services running on workstations, or clear evidence that critical servers/domain controllers were regularly used by admins to browse the web, answer email (corporate and personal), etc. Threat actors simply take advantage of this lack of compartmentalization by trying to connect to another system, and finding their efforts succeed. 

Jensen makes a good point that defenders historically work together and therefore can most effectively defend. I agree.  

I do not. 

Based on my aperture, I have not seen "defenders historically work together"...not where the rubber meets the road, "at the coalface", as it were. Yes, perhaps from a much higher view...say that of a founder or CEO...looking across the vast landscape, we see gov't (CISA) and civilian agencies "working together" to make information available. 

However, when it comes to "boots on the ground", where I and others are looking over the shoulder of a system admin (or they're standing over mine), the perspective is entirely different.

I've been on IR engagements where local staff were happy to see me, enthusiastic to work with me, and even sought out knowledge transfer. But I've also been on engagements that were the complete opposite, where local admins were not just suspicious of the IR team, but in some cases, actively worked against us.

During one IR, my point of contact asked me to take an advisor role, to direct his staff and engage them, having them do the work, while sharing the "why". At one point, I asked an admin to copy the Event Logs off of a Windows XP system, specifically asking him to NOT export the logs to text format, but to instead use the "copy" command. The admin nodded that he understood, so I handed him a thumb drive, and went about addressing other issues of the response. Later, when I got back to my room, I tried to run my tools against the files that had been copied to the thumb drive, but none of them worked. I opened the thumb drive, saw the three ".evt" files, and saw that they were not zero bytes in size. I then opened one of the .evt files in a hex editor, and could immediately see that the logs had indeed been exported to text, and then renamed to ".evt". I was able to do some work with the files that I had, but I was disappointed that I could not bring my toolset to bear against the files.
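What I saw in the hex editor can be checked programmatically: a genuine Windows XP/2003 .evt file begins with an ELF_LOGFILE_HEADER whose signature field (bytes 4 through 7) is the ASCII string "LfLe", while a log exported to text and renamed will fail that check. Here's a minimal sketch of that validation; the function name and any file paths are illustrative, not part of any tool mentioned above.

```python
# Check whether a file is a genuine binary .evt log, or a text export
# that was simply renamed with a ".evt" extension.
EVT_SIGNATURE = b"LfLe"  # 0x654c664c, the ELF_LOGFILE_HEADER signature

def is_binary_evt(path: str) -> bool:
    """Return True if the file carries a valid .evt header signature."""
    with open(path, "rb") as f:
        header = f.read(8)
    # header[0:4] is the header size (0x30); header[4:8] is the signature
    return len(header) == 8 and header[4:8] == EVT_SIGNATURE
```

Running a check like this on-site, before handing the thumb drive back, would have caught the text export immediately.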

This is just one example, but for every IR engagement where I'd show up and be greeted warmly, there were one or two where the local IT staff was ambivalent at best, or quietly hostile about my presence. But sometimes, it's not about a consultant being there; something I've seen multiple times over the years is that regardless of what's said or how it's said, be it a SOC ticket or a phone call (or both), there are times...a lot of times...when an organization will shrug off the "security incident" as, "oh, yeah, that's this admin or that contractor doing their work", when it clearly isn't. 

My point is that, from my aperture, I'm not entirely sure the statement "defenders historically work together" is accurate.

2. Defenders need orders of magnitude more GPU cycles to defend 

Once again...why is that?

This goes back to the environment they're defending, which many times lacks an accurate "map", and most often has not been subject to an overall vision or policy; hence, there's no attack surface reduction. Windows 10 and 11 endpoints are running Terminal Services, and applications are installed with no one realizing that a default installation of MSSQL came along with the application, with default, hard-coded credentials (no brute force password guessing required). 
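The kind of attack surface check that rarely gets done doesn't require much: simply probing endpoints for services that tend to ride along with default installs would surface a lot of this. Below is a minimal sketch; the port list and labels are my illustrative assumptions, not an exhaustive inventory, and the target host would come from whatever asset list (accurate or not) the organization has.

```python
# Probe a host for commonly exposed "default install" services.
# The ports listed here are illustrative examples of services that
# often end up listening without anyone deciding they should be.
import socket

DEFAULT_RISK_PORTS = {
    3389: "Terminal Services / RDP",
    1433: "MSSQL (often installed silently alongside an application)",
    445:  "SMB",
}

def probe(host: str, ports=DEFAULT_RISK_PORTS, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    exposed = {}
    for port, label in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                exposed[port] = label
        except OSError:
            pass  # closed, filtered, or unreachable
    return exposed
```

A connect-scan like this is crude compared to a real asset inventory, but it's exactly the kind of low-cost check that tells a defender what a threat actor will find when they "try to connect to another system."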

The detection and response eco-system is more expansive and dynamic than most folks seem to realize. While there is "the way things should be" at each point along the eco-system path, reality often belies this assumption.

Defenders require more GPU cycles because when an alert is received, they need to understand the context of that alert, which many times is simply not available. Or, the analyst who receives the alert misinterprets it and takes the wrong action (i.e., deems it a false positive, etc.), and the effects snowball. There are more than a few points along the eco-system path where detection or response can go off the rails, leaving the impacted organization in a much worse position than they were in before the alert was generated.
