The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Monday, January 15, 2024
EDRSilencer
Going unnoticed on an endpoint when we believe or feel that EDR is prevalent can be a challenge, and this may be why these discussions have taken hold. However, the fact of the matter is that the "feeling" that EDR is prevalent is just that...a feeling, supported by neither data nor situational awareness. If you look at other aspects of EDR and SOC operations, there are plenty of opportunities to use minimal/native tools to achieve the same effect: to keep your actions from generating alerts that a SOC analyst investigates.
Situational Awareness
Not all threat actors have the same level of situational awareness. I've seen threat actors who, when EDR blocked their process from executing, responded by attempting to uninstall AV that wasn't installed on the endpoint. Yep, that's right...this was not preceded by a query attempting to determine which AV product was installed; rather, the threat actor went right to uninstalling ESET. In another instance, the threat actor attempted to uninstall Carbon Black; the monitored endpoint was running <EDR>. Again, no attempt was made to determine what was installed.
However, I did see one instance where the threat actor, before doing anything else or being blocked/inhibited, ran queries looking for <EDR> running on 15 other endpoints. From our dashboard, we knew that only 4 of those endpoints had <EDR> running; the threat actor moved to one of the 11 that didn't.
The take-away from this is that even beyond "shadow IT", there are likely endpoints within an infrastructure that don't have EDR installed; 100% coverage, while preferred, is not guaranteed. I remember an organization several years ago that was impacted by a breach and, after discovering the breach, installed EDR on only about 200 endpoints out of almost 15,000. They also installed the EDR in "learning mode", and several of the endpoints it was installed on were heavily used by the threat actors. As such, the EDR "learned" that the threat actor's activity was "normal".
EDRSilencer
Another aspect of EDR is that for the tool to be effective, most need to communicate with "the cloud"; that is, send data off of the endpoint and outside of the network, where it will be processed. Yes, I know that Carbon Black started out with an on-prem approach, and that Sysmon writes to a local Windows Event Log file, but most EDR frameworks send data to "the cloud", in part so that users with laptops will still have coverage.
EDRSilencer takes advantage of this, not by stopping, altering, or "blinding" EDR, but by preventing it from communicating off of the endpoint. See p1k4chu's write-up here; EDRSilencer works by creating a WFP (Windows Filtering Platform) rule to block the EDR executable from communicating off of the host, which, to be honest, is a great idea.
Why a "great idea"? For one, it's neither easy nor productive to create a rule to alert when the EDR is no longer communicating. Some organizations will have hundreds or thousands of endpoints with EDR installed, and there's no real "heartbeat" function in many of them. Employees will disconnect laptops, offices (including WFH) may have power interruptions, etc., so there are a LOT of reasons why an EDR agent may cease communicating.
In 2000, I worked for an organization that had a rule to detect significant time changes (more than a few minutes) on all of their Windows endpoints. The senior sysadmin and IT director would not do anything about the rule, and simply accepted that twice a year, we'd be inundated with these alerts for every endpoint. My point is that when you're talking about global/international infrastructures, or MDRs, having a means of detecting when an agent is not communicating is a tough nut to crack; do it wrong, without planning for edge cases, and you're going to crush your SOC.
If you read the EDRSilencer GitHub page and p1k4chu's write-up closely, you'll see that EDRSilencer uses a hard-coded list of EDR executables, which doesn't include all possible EDR tools.
Fortunately, p1k4chu's write-up provides some excellent insights into how to detect the use of EDRSilencer, even pointing out specific audit configuration changes to ensure that the appropriate events are written to the Security Event Log.
As a bit of a side note, auditpol.exe is, in fact, natively available on Windows platforms.
Once the change is made, the two main events of interest are Security-Auditing/5441 and Security-Auditing/5157. P1k4chu's write-up also includes a Yara rule to detect the EDRSilencer executable, which is based in part on the hard-coded list of EDR tools.
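For those who want to poke at this on a test system, here's a minimal sketch (my own, not from p1k4chu's write-up) that uses wevtutil, which ships with Windows, to pull those two event IDs from the Security Event Log. It assumes the audit policy changes described above have already been made, and that you're running from an elevated prompt; the event IDs alone won't tell you whether a filter is malicious, so you still need to review the filter and application names in the event data.

```python
# Minimal sketch: pull WFP-related events (5441, 5157) from the Security Event Log
# using wevtutil, which ships with Windows. Requires the audit policy changes
# described above, and (typically) an elevated prompt to read the Security log.
import subprocess

QUERY = "*[System[(EventID=5441 or EventID=5157)]]"

def get_wfp_events(max_events=50):
    # wevtutil qe Security /q:"<xpath>" /c:<count> /rd:true /f:text
    cmd = [
        "wevtutil", "qe", "Security",
        f"/q:{QUERY}",
        f"/c:{max_events}",
        "/rd:true",   # newest events first
        "/f:text",    # human-readable output
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(get_wfp_events())
```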
EDRNoiseMaker detects the use of EDRSilencer by looking for WFP filters blocking those communications.
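To give a rough idea of what "looking for filters blocking those communications" might involve (this is a hypothetical sketch, not how EDRNoiseMaker is implemented), the WFP filter set can be dumped with "netsh wfp show filters" and then searched for references to security tooling. The output file name and the keyword list below are assumptions for illustration; extend the list with whatever your environment actually runs.

```python
# Hedged sketch: look for WFP filters that reference common security/EDR
# executables. Assumes "netsh wfp show filters" dumps its XML to filters.xml
# in the current directory (run from an elevated prompt); the keyword list
# below is illustrative, not exhaustive.
import subprocess
from pathlib import Path

# Illustrative process name fragments; extend for your environment.
KEYWORDS = ["msmpeng", "sentinel", "cb.exe", "csfalcon", "cyserver", "elastic-agent"]

def dump_wfp_filters(outfile="filters.xml"):
    subprocess.run(["netsh", "wfp", "show", "filters", f"file={outfile}"], check=True)
    return Path(outfile)

def scan_for_edr_references(xml_path):
    hits = []
    for lineno, line in enumerate(xml_path.read_text(errors="ignore").splitlines(), 1):
        lowered = line.lower()
        for kw in KEYWORDS:
            if kw in lowered:
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    path = dump_wfp_filters()
    for lineno, line in scan_for_edr_references(path):
        print(f"{lineno}: {line}")
```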
Other "Opportunities"
There's another, perhaps more subtle way to inhibit communications off of an endpoint: modify the hosts file. Credit goes to Dray (LinkedIn, X) for reminding me of this sneaky way of inhibiting off-system communications. The difference is that rather than blocking by executable, you need to know where the communications are going, and add an entry so that the returned IP address is localhost.
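On the detection side, a hosts file check is about as simple as it gets. Here's a minimal sketch that flags any uncommented entry mapping a name to a loopback address, on the assumption that anything beyond what you expect to be in that file deserves a second look.

```python
# Minimal sketch: flag hosts file entries that resolve names to loopback,
# which is one way off-system communications (EDR cloud endpoints included)
# can be quietly neutered. Review any hits against what you expect to be there.
from pathlib import Path

HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")
LOOPBACK = {"127.0.0.1", "0.0.0.0", "::1"}

def suspicious_hosts_entries(path=HOSTS):
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        parts = stripped.split()
        if parts[0] in LOOPBACK and len(parts) > 1:
            hits.append(stripped)
    return hits

if __name__ == "__main__":
    for entry in suspicious_hosts_entries():
        print(entry)
```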
I thought Dray's suggestion was both funny and timely; I used to do this for/to my daughter's computer when she was younger...I'd modify her hosts file right around 10pm, so that her favorite sites (MySpace, Facebook, whatever) resolved to localhost, but other sites, like Google, were still accessible.
One of the side effects would likely be the difficulty in investigating an issue like this; how many current or relatively new SOC/DFIR analysts are familiar with the hosts file? How many understand or know the host name resolution process followed by Windows? I think that the first time I became aware of MS's documentation of the host name resolution process was 1995, when I was attempting to troubleshoot an issue; how often is this taught in networking classes these days?
Conclusion
Many of us have seen the use of offensive security tools (OSTs) by pen testers and threat actors alike, so how long do you think it will be before EDRSilencer, or something like it, makes its way into either toolkit? The question becomes, how capable is your team of detecting and responding to the use of such tools, particularly when used in combination with other techniques ("silence" EDR, then clear all Windows Event Logs)? Tools and techniques like this (EDRSilencer, or the technique it uses) shed a whole new light on initial recon activities (process/service listings, querying the Registry for installed applications, etc.), particularly when they're intentionally and purposefully used to create situational awareness.
Addendum, 1 Dec: I came across this Blu Raven blog post this morning, which mentions two alternatives to WFP as a means of silencing EDR: one is the hosts file, the other is custom routes. Very cool stuff, folks; not terribly sophisticated, but it can be done in a manner that would leave most current investigators and analysts confused.
Addendum, 18 Dec: Yet another means of 'silencing' EDR (and other) agents is by modifying the Name Resolution Policy Table, which is maintained in the...wait for it...Windows Registry.
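For the live-response angle, here's a hedged sketch that enumerates NRPT rules from the Registry. The key paths used are my assumptions about where locally configured and GPO-deployed rules are stored; on an image, the same subkeys can be examined in the SYSTEM and SOFTWARE hives.

```python
# Hedged sketch: enumerate Name Resolution Policy Table (NRPT) rules from the
# Registry on a live system. The key paths below are assumptions about where
# locally configured and GPO-deployed rules are stored; on a forensic image,
# the same subkeys can be pulled from the SYSTEM and SOFTWARE hives.
import winreg

NRPT_PATHS = [
    (winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Services\Dnscache\Parameters\DnsPolicyConfig"),
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\DnsPolicyConfig"),
]

def dump_nrpt_rules():
    for hive, path in NRPT_PATHS:
        try:
            key = winreg.OpenKey(hive, path)
        except FileNotFoundError:
            continue  # no rules configured under this path
        with key:
            num_subkeys = winreg.QueryInfoKey(key)[0]
            for i in range(num_subkeys):
                rule_name = winreg.EnumKey(key, i)
                with winreg.OpenKey(key, rule_name) as rule:
                    values = {}
                    num_values = winreg.QueryInfoKey(rule)[1]
                    for j in range(num_values):
                        name, data, _ = winreg.EnumValue(rule, j)
                        values[name] = data
                    print(path, rule_name, values)

if __name__ == "__main__":
    dump_nrpt_rules()
```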
Wednesday, January 10, 2024
Human Behavior In Digital Forensics, pt III
Did they create any additional persistence? If so, what did they do? Create user accounts? Add services or Scheduled Tasks? Did they lay any "booby traps", akin to the Targeted Threat Actor from my previous blog post?
During their time on the endpoint, do they seem prepared, or do they "muck about", as if they're wandering around a dark room, getting the lay of the land? Do they make mistakes, and if so, how do they overcome them?
As Blade stated during the first movie (quote 3), "...when you understand the nature of a thing, you know what it's capable of." Understanding a threat actor's nature provides insight into what they're capable of, and what we should be looking for on endpoints and within the infrastructure.
Saturday, January 06, 2024
Human Behavior In Digital Forensics, pt II
On the heels of my first post on this topic, I wanted to follow up with some additional case studies that might demonstrate how digital forensics can provide insight into human activity and behavior, as part of an investigation.
Targeted Threat Actor
I was working a targeted threat actor response, and while we were continuing to collect information for scoping, so we could move to containment, we found that on one day, from one endpoint, the threat actor pushed their RAT installer to 8 endpoints, and had the installer launched via a Scheduled Task. Then, about a week later, we saw that the threat actor had pushed out another version of their RAT to a completely separate endpoint, by dropping the installer into the StartUp folder for an admin account.
Now, when I showed up on-site for this engagement, I walked into a meeting that served as the "war room", and before I got a chance to introduce myself, or find out what was going on, one of the admins came up to me and blurted out, "we don't use communal admin accounts." Yes, I know...very odd. No, "hi, I'm Steve", nothing like that. Just this comment about accounts. So, I filed it away.
The first thing we did once we got started was roll out our EDR tech, and begin getting insight into what was going on...which accounts had been compromised, which were the nexus systems the threat actor was operating from, how they were getting in, etc. After all, we couldn't establish a perimeter and move to containment until we determined scope, etc.
So we found this RAT installer in the StartUp folder for an admin account...a communal admin account. We found it because in the course of rolling out our EDR tech, the admins used this account to push out their software management platform, as well as our agent...and the initial login to install the software management platform activated the installer. When our tech was installed, it immediately alerted on the RAT, which had been installed by that point. It had a different configuration and C2 from what we'd seen in previous RAT installations, which appeared to be intentional. We grabbed a full image of that endpoint, so we were able to get information from VSCs, including a copy of the original installer file.

Just because an admin told me that they didn't use communal admin accounts doesn't mean that I believed him. I tend to follow the data. However, in this case, the threat actor clearly already knew the truth, regardless of what the admins stated. On top of that, they planned far enough in advance to have multiple means of access, including leaving behind "booby traps" that would be tripped through admin activity, but that didn't share the same configuration. That way, if the admins had blocked access to the first C2 IP address at the firewall, or were monitoring for that specific IP address via some other means, having the new, second C2 IP address would mean the threat actor would go unnoticed, at least for a while.
What I took away from the totality of what we saw, largely through historical data on a few endpoints, was that the threat actor seemed to have something of a plan in place regarding their goals. We never saw any indication of search terms, wandering around looking for files, etc., and as such, it seemed that they were intent upon establishing persistence at that point. The customer didn't have EDR in place prior to our arrival, so there's a lot we likely missed, but from what we were able to assemble from host-based historical data, it seemed that the threat actor's plan, at the point we were brought in, was to establish a beachhead.
Pro Bono Legal Case
A number of years ago, I did some work on a legal case. The background was that someone had taken a job at a company, and on their first day, they were given an account and password on a system for them to use, but they couldn't change the password. The reason they were given was that this company had one licensed copy of an application, and it was installed on that system, and multiple people needed access.
Jump forward about a year: the guy who'd been hired grew disillusioned, and went in one Friday morning, logged into the computer, and wrote out a Word document in which he resigned, effective immediately. He sent the document to the printer, then signed it, handed it in, and apparently walked out.
So, as it turns out, several files on the system were encrypted with ransomware, and this guy's now-former employer claimed that he'd done it, basically "salting the earth" on his way out the door. There were suits and countersuits, and I was asked to examine the image of the system, after exams had already been performed by law enforcement and an expert from SANS.
What I found was that on Thursday evening, the day before the guy resigned, at 9pm, someone had logged into the system locally (at the console) and surfed the web for about 6 minutes. During that time, the browser landing on a specific web site caused the ransomware executable to be downloaded to the system, with persistence written to the user account's Run key. Then, when the guy returned the following morning and logged into the account, the ransomware launched, albeit without his knowledge. Using a variety of data sources, to include the Registry, Event Log, file system metadata, etc., I was able to demonstrate when the infection activity actually took place, and in this instance, I had to leave it up to others to establish who had actually been sitting at the keyboard. I was able to articulate a clear story of human activity and what led to the files being encrypted. As part of the legal battle, the guy had witness statements and receipts from the bar he had been at the evening prior to resigning, where he'd been out with friends celebrating. Further, the employer had testified that they'd sat at the computer the evening prior, but all they'd done was a short web browser session before logging out.
As far as the ransomware itself was concerned, it was purely opportunistic. "Damage" was limited to files on the endpoint, and no attempt was made to spread to other endpoints within the infrastructure. On the surface, what happened appeared to be exactly what the former employer described: the former employee came in, typed and printed their resignation, and launched the ransomware executable on their way out the door. However, file system metadata, Registry key LastWrite times, and browser history painted a different story altogether. The interesting thing about this case was that all of the activity occurred within the same user account, and as such, the technical findings needed to be (and were) supported by external data sources.
RAT Removal
During another targeted threat actor response engagement, I worked with a customer that had sales offices in China, and was seeing sporadic traffic associated with a specific variant of a well-known RAT come across the VPN from China. As part of the engagement, we worked out a plan to have the laptop in question sent back to the states; when we received the laptop, the first thing I did was remove and image the hard drive.
The laptop had run Windows 7, which ended up being very beneficial for our analysis. We found that, yes, the RAT had been installed on the system at one point, and our analysis of the available data painted a much clearer picture.
Apparently, the employee/user of the endpoint had been coerced to install the RAT. Using all the parts of the buffalo (file system, WEVTX, Registry, VSCs, hibernation file, etc.), we were able to determine that, at one point, the user had logged in at the console, attached a USB device, and run the RAT installer. Then, after the user had been contacted to turn the system over to their employer, we could clearly see where they'd made attempts to remove and "clean up" the RAT. As with the installation, the account that performed the clean-up attempts logged in locally, and the steps taken were very clearly manual removal attempts by someone who didn't fully understand what they were doing.
Non-PCI Breach
I was investigating a breach into corporate infrastructure at a company that was part of the healthcare industry. It turned out that an employee with remote access had somehow ended up with a keystroke logger installed on their home system, which they used to remote into the corporate infrastructure via RDP. This was about 2 weeks before the organization was scheduled to implement MFA.
The threat actor was moving around the infrastructure via RDP, using an account that hadn't previously accessed the internal systems, because there had been no need for the employee to do so. This meant that on all of these systems, the login initiated the creation of the user profile, so we had a really good view of the timeline across the infrastructure, and we could 'see' a lot of their activity. This was before EDR tools were in use, but that was okay, because the threat actor stuck to the GUI-based access they had via RDP. We could see documents they accessed, shares and drives they opened, and even searches they ran. This was a healthcare organization, which the threat actor was apparently unaware of, because they were running searches for "password", as well as various misspellings of the word "banking" ("bangking", etc.).
The organization was fully aware that they had two spreadsheets on a share that contained unencrypted PCI data. They'd been trying to get the data owner to remove them, but at the time of the incident, the files were still accessible. As such, this incident had to be reported to the PCI Council, but we did so with as complete a picture as possible, which showed that the threat actor was both unaware of the files, as well as apparently not interested in credit card, nor billing, data.
Based on the totality of the data, we had a picture of an opportunistic breach, one that clearly wasn't planned, and I might even go so far as to describe the threat actor as "caught off guard" that they'd actually gained access to an organization. There was apparently no research conducted, the breach wasn't targeted, and it had all the hallmarks of someone wandering around the systems, in shock that they'd actually accessed them. Presenting this data to the PCI Council in a clear, concise manner led to a greatly reduced fine for the customer - yes, the data should not have been there, but no, it hadn't been accessed or exposed by the intruder.
Wednesday, January 03, 2024
Human Behavior In Digital Forensics
I've always been a fan of books or shows where someone follows clues and develops an overall picture that leads them to their end goal. I've always liked the "hot on the trail" mysteries, particularly when the clues are assembled in a way that reveals what the antagonist was going to do next, what their next likely move would be. Interestingly enough, a lot of the shows I've watched have been centered around the FBI, shows like "The X-Files" and "Criminal Minds". I know intellectually that these shows are contrived, but assembling a trail of technical bread crumbs to develop a profile of human behavior is a fascinating idea, and something I've tried to bring to my work in DFIR.
Former FBI Supervisory Special Agent and Behavioral Profiler Cameron Malin recently shared that his newest endeavor, Modus Cyberandi, has gone live! The main focus of his effort, cyber behavior profiling, is right there at the top of the main web page. In fact, the main web page even includes a brief history of behavioral profiling.
This seems to be similar to Len Opanashuk's endeavor, Motives Unlocked, which leads me to wonder, is this a thing?
Is this something folks are interested in?
Apparently, it is, as there's research to suggest that this is, in fact, the case. Consider this research paper describing behavioral evidence analysis as a "paradigm shift", or this paper on idiographic digital profiling from the Journal of Digital Forensics, Security, and Law, to name but a few. Further, Google lists a number of (mostly academic) resources dedicated to cyber behavioral profiling.
This topic seems to be talked about here and there, so maybe there is an interest in this sort of analysis, but the question is, is the interest more academic, is the focus more niche (law enforcement), or is this something that can be effectively leveraged in the private sector, particularly where digital forensics and intrusion intelligence intersect?
I ask the question, as this is something I've looked at for some time now, in order to not only develop a better understanding of targeted threat actors who are still active during incident response, but to also determine the difference between a threat actor's actions during the response, and those of others involved (IT staff, responders, legitimate users of endpoints, etc.).
In a recent comment on social media, Cameron used the phrase, "...adversary analysis and how human behavior renders in digital forensics...", and it occurred to me that this really does a great job of describing going beyond just individual data points and malware analysis in DFIR, particularly when it comes to hands-on targeted threat actors. By going beyond just individual data points and looking at the multifaceted, nuanced nature of those artifacts, we can begin to discern patterns that inform us about the intent, sophistication, and situational awareness of the threat actor.
To that end, Joe Slowik has correctly stated that there's a need in CTI (and DFIR, SOC, etc.) to view indicators as composite objects; things like hashes and IP addresses have greater value when other aspects of their nature are understood. Many times we tend to view IP addresses (and other indicators) one-dimensionally; however, there's so much more about those indicators that can provide insight into the threat actor behind them, such as when, how, and in what context that IP address was used. Was it the source of a login, and if so, what type? Was it a C2 IP address, or the source of a download or upload? If so, how...via HTTP, curl, msiexec, BITS, etc.?
Here's an example of an IP address; in this case, 185.56.83.82. We can get some insight on this IP address from VirusTotal, enough to know that we should probably pay attention. However, if you read the blog post, you'll see that this IP address was used as the target for data exfiltration.
Via finger.exe.
Add to that the fact that the use of the LOLBin is identical to what was described in this 2020 advisory, and it should be easy to see that we've gone well beyond just an IP address by this point, as we've started to unlock and reveal the composite nature of that indicator.
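Purely as an illustration of what "composite" can mean in practice (this is my sketch, not Joe's framing, and the field names and values below are invented for illustration), consider how much more an IP address tells you when it's recorded along with its observed context rather than as a bare string:

```python
# Illustrative only: an indicator recorded with its observed context, rather
# than as a bare string. Field names and values are hypothetical examples.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IPIndicator:
    address: str
    role: str                            # e.g., "C2", "exfil target", "RDP source"
    protocol: Optional[str] = None       # e.g., "HTTP", "finger", "SMB"
    tooling: Optional[str] = None        # e.g., "curl", "msiexec", "BITS", "finger.exe"
    notes: List[str] = field(default_factory=list)

# Hypothetical record for the exfiltration target discussed above.
exfil_ip = IPIndicator(
    address="185.56.83.82",
    role="exfil target",
    protocol="finger",
    tooling="finger.exe",
    notes=["LOLBin usage matches the 2020 advisory"],
)
```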
The point of all this is that there's more to the data we have available than the one-dimensional perspective we're used to viewing it through. Now, if we begin to incorporate other data sources that are available to us (EDR telemetry, endpoint data and configurations, etc.), we'll begin to see exactly how, as Cameron stated, human behavior renders in digital forensics. Some of the things I've pursued and been successful in demonstrating during previous engagements include hours of operation and preferred TTPs and approaches, enough so to separate the actions of two different threat actors on a single endpoint.
I've also gained insight into the situational awareness of a threat actor by observing how they reacted to stimulus; during one incident, the installed EDR framework was blocking the threat actor's tools from executing on different endpoints. The threat actor never bothered to query any of the three endpoints to determine what was blocking their attempts; rather, on one endpoint, they attempted to disable Windows Defender. On the second endpoint, they attempted to delete a specific AV product, without ever first determining if it was installed on the endpoint; the batch file they ran to delete all aspects and variations of the product was not preceded by any query commands. Finally, on the third endpoint, the threat actor ran a "spray-and-pray" batch file that attempted to disable or delete a variety of products, none of which were actually installed on the endpoint. When none of these attempts succeeded in allowing them to pursue their goals, they left.
So, yes, viewed through the right lens, with the right perspective, human behavior can be discerned through digital forensics. But the question remains...is this useful? Is the insight that this approach provides valuable to anyone?