Monday, August 31, 2020

The Death of AV??

I had a conversation recently, which started out being about endpoint technologies.  At one point in the conversation, the topic of AV came up.  The question was, is there still value in AV?

I believe there is; I believe that AV, when managed properly, can be a valuable tool.  However, what I've very often seen, through targeted threat response and DFIR analysis, is that AV isn't maintained or updated, and when it does detect something, that detection is ignored.

MS systems have had the Malicious Software Removal Tool (MSRT) installed for some time.  This is a micro-scanner, designed to target specific instances of malware.  Throughout many of the investigations I've done, I've seen where systems were infected with malware variants that should have been prevented by MSRT; however, in those instances, I've found that MSRT hasn't been kept up to date, and was last updated well prior to the infection.
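The "stale scanner" check I'm describing is trivial to automate during intake.  Here's a minimal Python sketch; note that the 45-day threshold is my own assumption (based on MSRT's monthly release cadence), not a documented standard:

```python
from datetime import datetime, timedelta

def msrt_is_stale(last_update, infection_date, max_age_days=45):
    """Flag a micro-scanner as stale if its last update predates the
    infection by more than max_age_days.  MSRT ships monthly, so ~45
    days is a generous threshold.  Both arguments are datetimes."""
    return (infection_date - last_update) > timedelta(days=max_age_days)

# Example: MSRT last updated in January, infection observed in June
print(msrt_is_stale(datetime(2020, 1, 14), datetime(2020, 6, 2)))
```

Run against timestamps pulled during triage, a `True` result is worth a line in the report all by itself.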

Not long ago, I was assisting with some analysis work, and found that the customer was using WebRoot as their AV product. I found entries for AV detections in the Registry, and based on the timing, it was clear that while the product had detected and quarantined an instance of Dridex, the customer was still infected with ransomware.  That was due to no one being aware of the detection, and as such, no one took action. The threat actor was able to find something else they could install that wasn't detected by the installed AV product, and proceed with their attack.

Over the years, I've had more than a few opportunities to observe threat actor behavior, through a combination of EDR telemetry and DFIR analysis.  As such, I've seen more than a few methods for circumventing AV, and in particular, Windows Defender.  Windows Defender is actually a pretty decent AV product; I've had my own research interrupted when I would download a web shell or "malicious" LNK file for testing, and Windows Defender would wake up and quarantine the file.  Very recently, I was conducting some analysis as part of an interview questionnaire, and wrote a Perl script to deobfuscate some code.  I ran the script and redirected the output to a file, and Windows Defender pounced on the resulting file.  Apparently, I did things correctly.

Again, I've seen threat actors disable Windows Defender through a variety of means, from stopping the service, to uninstalling the product.  I've also seen more subtle "tweaks", such as adding path exclusions to the product, or just disabling the AV component via a Registry value. However, the attacks have proceeded, because the infrastructure lacked the necessary visibility to detect these system modifications.  Further, there was no proactive threat hunting activity, not even an automated 'sweep' of the infrastructure, checking against various settings.
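That kind of automated 'sweep' doesn't have to be complicated.  Here's a hedged Python sketch; the value names (DisableAntiSpyware, DisableRealtimeMonitoring) reflect commonly documented Defender settings, but treat the specific checks as illustrative, not exhaustive:

```python
def check_defender_tampering(policy_values, exclusion_paths):
    """Flag common Defender tampering, given Registry values collected
    from an endpoint.  policy_values is a dict of value name -> data;
    exclusion_paths is a list of configured path exclusions."""
    findings = []
    if policy_values.get("DisableAntiSpyware") == 1:
        findings.append("Defender disabled via policy Registry value")
    if policy_values.get("DisableRealtimeMonitoring") == 1:
        findings.append("Real-time monitoring disabled")
    for path in exclusion_paths:
        findings.append(f"Path exclusion present: {path}")
    return findings

for f in check_defender_tampering({"DisableAntiSpyware": 1},
                                  [r"C:\Windows\Temp"]):
    print(f)
```

Run this against every endpoint on a schedule and you have the beginnings of the automated sweep that was missing in the cases above.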

So, no...AV isn't dead.  It's simply not being maintained, and no one is listening for alerts.  AV can be very valuable.  Not only can checking AV status be an effective threat hunting tool (in both proactive scanning and DFIR threat hunting), but I've also been using "check AV logs" as part of my malware detection process. This is because AV has always been a great place to look for indications of attacks.

Sunday, August 16, 2020

Toolmarks: The "How"

I recently published a post where I discussed DFIR toolmarks, and not long after sharing it on Twitter, someone asked me for a list of resources that describe the "how"; that is, how to go about finding or determining toolmarks for various activities.  The specific use case asked about was malware; how to determine malware toolmarks after running the sample in a sandbox.  The best response I could come up with was Windows Forensic Analysis, 4/e, and I included the caveat that they'd have to learn the material and then apply it.  As they'd already found out, there isn't a great deal of information available related to identifying DFIR toolmarks.

One way to think of toolmarks is as artifact constellations, as these are the "scratches" or "indentations" left behind either as a direct result of the activity, or as a result of the activity occurring within the "eco-system" (i.e., operating system, audit configuration, applications, etc.).

The way I have gone about identifying toolmarks is through the use of timelines, at multiple levels.  What I mean by that is that I will start with a system timeline, using data sources such as the MFT, USN change journal, selected Windows Event Logs, Registry hives, SRUM database, etc.  From there, if I find that a specific user account was used in a manner that likely populated a number of artifacts (i.e., a type 10 login as opposed to a type 3 login), I might need to create mini- or micro-timelines, based on user activity (browser history, Registry hives, timeline activity database, etc.).  This would allow me to drill down on specific activity without losing it amongst the 'noise' of the system timeline.  Throughout the timeline analysis process, there's also considerable enrichment or "decoration" of the data that goes on; for example, the timeline might show that a Registry key had been modified, and decorating the data with the specific value that was created or modified will add context, and likely provide a pivot point for further analysis.  Data can also be decorated using online resources (knowledgebase articles, blog posts, etc.), providing additional insight and context, and even leading to additional pivot points.
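To make the idea concrete, here's a simplified Python sketch of merging multiple data sources into one timeline and decorating entries with analyst notes.  The event tuples and note keys are purely illustrative:

```python
def build_timeline(*sources):
    """Merge per-source event lists into one timeline, sorted on the
    timestamp each event carries.  Each event is a (timestamp, source,
    description) tuple; timestamps are assumed comparable."""
    events = [e for src in sources for e in src]
    return sorted(events, key=lambda e: e[0])

def decorate(timeline, notes):
    """Attach analyst notes (decoration) keyed on substrings of the
    event description -- e.g., the specific Registry value behind an
    observed key modification."""
    out = []
    for ts, source, desc in timeline:
        extra = [n for key, n in notes.items() if key in desc]
        out.append((ts, source, desc, "; ".join(extra)))
    return out

mft = [(100, "MFT", "file created: evil.exe")]
reg = [(101, "REG", "key modified: ...\\Run")]
notes = {"\\Run": "value 'updater' -> C:\\Temp\\evil.exe (pivot point)"}
for row in decorate(build_timeline(mft, reg), notes):
    print(row)
```

The decorated Run-key entry is exactly the sort of pivot point described above: the bare key modification is noise until the value data is attached to it.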

TheDFIRReport write-up on Snatch ransomware attacks states that the threat actors "turned off Windows Defender"; however, there's no mention as to how this was accomplished.  Now, as many will tell you, there are a number of ways to do this, and that's absolutely true...but it doesn't mean that we walk away from it.

Disabling Windows Defender falls under MITRE ATT&CK subtechnique T1562.001, but even that mapping doesn't go quite deep enough to address how the defenses were impaired.  The data from the analyzed cases could (and should) be used to develop the necessary toolmarks.  For example, was sc.exe or some other tool (GMER?) used to disable the Windows service?  Or was reg.exe or Powershell used to add or modify Registry values, such as disabling some aspect of Defender, or adding exclusions?  Given that the threat actors were found to access systems via RDP, was a graphical means used to disable Defender, such as Defender Control?  Each of these approaches has very different toolmarks, and each of these approaches is trivial to check for and identify, as most collection processes (be they triage or full image acquisition) already collect the necessary data.

So, for me, identifying toolmarks starts with timeline creation and analysis.  However, I do understand that this approach doesn't scale, and isn't something that everyone even wants to do, even if there is considerable value in doing so.  As such, the way to scale something like this is to start with timeline analysis, and then incorporate findings from that analysis back into the process, 'baking' them back into the way you do analysis.  One approach to automating this is to incorporate data decoration into a timeline creation process, so that something learned from one engagement is then used to enrich future engagements.  A very simple example of how to do this can be found in eventmap.txt, which "tags" specific Windows Event Log records (based on source/ID pairs) as the Windows Event Logs are processed from their native state and incorporated into a timeline.
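A stripped-down version of that eventmap.txt-style tagging might look like the following Python sketch.  The source/ID pairs are real, commonly referenced examples (Security-Auditing/4624 is a logon; Eventlog/1102 is "the audit log was cleared"), but the tag strings are my own:

```python
# Minimal eventmap.txt-style data decoration: map Windows Event Log
# source/ID pairs to tags as records flow into a timeline.
EVENTMAP = {
    ("Microsoft-Windows-Security-Auditing", 4624): "[Logon]",
    ("Microsoft-Windows-Eventlog", 1102): "[LogCleared]",
}

def tag_record(source, event_id, message):
    """Prepend the mapped tag, if any; otherwise pass the record
    through untouched."""
    tag = EVENTMAP.get((source, event_id), "")
    return f"{tag} {message}".strip() if tag else message

print(tag_record("Microsoft-Windows-Eventlog", 1102,
                 "The audit log was cleared"))
```

Every time an analyst identifies a meaningful source/ID pair on an engagement, it goes into the map, and every future timeline benefits.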

If your data collection process results in data in a standard structure (i.e., acquired image, triage data in a zipped archive, etc.) then you can easily automate the parsing process, even at the lab intake point rather than having the analyst spend time doing it. If your analysts are able to articulate their findings and incorporate those back into the parsing and decoration process via some means, they are then able to share their knowledge with other analysts such that those other analysts don't have to have the same experiences.

Another example of toolmarks can be found in this u0041 post, which discusses artifacts associated with the use of one of the Impacket tools used for lateral movement, similar to PSExec.  We can then create a timeline events file (from an acquired image, or from triage data), and add Windows Event Log records to the events file using wevtx.bat, employing data decoration via eventmap.txt.  An additional means of identifying pivot points in analysis would be to run Yara rules over the timeline events file, looking for specific lines with the event source and IDs, as well as the specific service name that, per the u0041 blog post, is hard-coded into the tool.  All of this can be incorporated into an automated process that is run by the lab intake team, prior to the data being handed off to the analyst to complete the case.  This would mean that the parsing process was consistent across the entire team of analysts, regardless of who was performing analysis, or what case they received.  It would also mean that anything learned by one analyst, any set of toolmarks determined via an investigation, would then be immediately available to other analysts, particularly those who had not been involved in, nor experienced, that case.  The same can be said for research items such as the u0041 blog post; an analyst might read about it, create the necessary data decoration and enrichment, and boom, they're done.
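That pivot-point scan can even be approximated without Yara.  The following Python sketch greps a timeline events file for an event source/ID pair and a hard-coded service name; "SuspiciousSvc" is a stand-in I made up for illustration, not the name from the u0041 post:

```python
import re

# Pivot-point scan over a timeline events file: flag lines carrying
# a given event source/ID pair or a known hard-coded service name.
PIVOT_PATTERNS = [
    re.compile(r"System/7045"),    # service installed on the system
    re.compile(r"SuspiciousSvc"),  # hypothetical hard-coded name
]

def find_pivots(lines):
    return [ln for ln in lines
            if any(p.search(ln) for p in PIVOT_PATTERNS)]

events = [
    "1597536000|EVTX|System/7045|service SuspiciousSvc installed",
    "1597536010|MFT|file created: report.docx",
]
for hit in find_pivots(events):
    print(hit)
```

Whether the patterns live in Yara rules or a simple script matters less than the fact that they run consistently, at intake, on every case.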


Identifying toolmarks is just the first step.  Again, the means I've used is timeline analysis....not creating one big timeline, but creating mini-timelines or "overlays" (think cellophane sheets teachers used to use with overhead projectors) that allow me to see the important items without having to wade through the "noise" of regular system activity (i.e., operating system and application updates, etc.).

Once toolmarks have been identified, the next step is to document and 'bake' them back into the analysis process, so that they are immediately available for future use.

Wednesday, August 05, 2020

Toolmarks and Intrusion Intelligence

Very often, DFIR and intel analysts alike don't appear to consider such things as toolmarks associated with TTPs, nor intrusion intelligence.  However, considering such things can bring greater sharpness to attribution, as well as to our understanding of the intrusion itself.

What I'm suggesting in this post is fully exploiting the data that most DFIR analysts already collect and therefore have available.  I'm not suggesting that additional tools be purchased; rather, what I'm illustrating is the value of going just below the surface of much of what's shared, and adding a bit of context regarding the how and when of various actions taken by threat actors.

Disable Security Tools
What used to be referred to as simply "disable security tools" in the MITRE ATT&CK framework is now identified as "impair defenses", with six subtechniques.  The one we're interested in at the moment is "disable or modify tools", which I think makes better sense, as we'll discuss in this section.

In TheDFIRReport write-up regarding the Snatch ransomware actors, the following statement is made regarding the threat actor's activities:

"...turned off Windows Defender..."

Beyond that, there's no detail in the report regarding how Windows Defender was "turned off", and the question likely asked would be, "...does it really matter?"  I know a lot of folks have said, "...there are a lot of ways to turn off or disable Windows Defender...", and they're absolutely correct.  However, something like this should not be dismissed, as the toolmarks associated with a particular method or mechanism for disabling or modifying a tool such as Windows Defender will vary, and have been seen to vary between different threat actors.  In fact, it is precisely because they vary that they are so valuable.

Toolmarks associated with the means used by a particular threat actor or group to disable Defender, or any other tool, can be used as intrusion intelligence associated with that actor, and can be used to attribute the observed activity to that actor in the future.

Again, there are a number of ways to render Windows Defender ineffective.  For example, you can incapacitate the tool, or use native functionality to make any number of Registry modifications that significantly impact Defender.  For threat actors that gain access to systems via RDP, using a tool such as Defender Control is very effective, as it's simply a button click away; it also has its own set of toolmarks, given how it functions.  In particular, it "disables" Defender by setting the data for two specific Registry values, something few other observed methods do.
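To show how different methods map to different toolmarks, here's an illustrative Python sketch that infers the likely disable method from the observed artifact constellation.  The value names and pairings are simplifying assumptions on my part, not a definitive mapping:

```python
def classify_disable_method(artifacts):
    """Infer the likely Defender-disable method from observed
    artifacts: Registry value names set, and process names seen in
    telemetry.  Ordering reflects specificity, most to least."""
    reg = artifacts.get("registry_values", set())
    procs = artifacts.get("processes", set())
    if {"DisableAntiSpyware", "DisableAntiVirus"} <= reg and "dControl.exe" in procs:
        return "GUI utility (Defender Control-style)"
    if "sc.exe" in procs:
        return "service stopped via sc.exe"
    if "reg.exe" in procs or "powershell.exe" in procs:
        return "Registry modification via native tooling"
    return "unknown"

print(classify_disable_method({
    "registry_values": {"DisableAntiSpyware", "DisableAntiVirus"},
    "processes": {"dControl.exe"},
}))
```

The classification itself is the intrusion intel: which bucket an actor consistently lands in is something you can track across engagements.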

Other techniques can include setting exclusions for Windows Defender; rather than turning it off completely, adding an exclusion "blinds" the tool by telling it to ignore certain paths, extensions, IP addresses, or processes.  Again, different TTPs, and as such, different toolmarks will be present.  The statement "turned off Windows Defender" still applies, but the mechanism for doing so leaves an artifact constellation (toolmarks) that varies depending upon the mechanism.

The "When"
Not only is the method used to disable a tool a valuable piece of intelligence, but so is the timing.  That is to say, when during the attack cycle is the tool disabled?  Some ransomware executables include a list of processes or Windows services (in some cases, over 150) that they will attempt to disable when they're launched (and prior to file encryption), but if a threat actor manually disables a security tool, knowing when and how they did so during their attack cycle can be valuable intrusion intel that provides insight into their capabilities.

Deleting Volume Shadow Copies
Deleting Volume Shadow Copies is an action most often associated with ransomware attacks, employed as a means of preventing recovery and forcing the impacted organization to pay the ransom to get its files back.  However, it's also an effective counter-forensics technique, particularly when it comes to long-persistent threat actors.

I once worked an engagement where a threat actor pushed out their RAT to several systems by creating remote Scheduled Tasks to launch the installer.  A week later, they pushed out a copy of the same RAT, but with a different config, to another system.  Just one.  However, in this case, they pushed it to the StartUp folder for a communal admin account.  As such, the EXE file sat there for 8 months; it was finally launched when the admins used the communal admin account in their recovery efforts for the engagement I was working.  I was able to get a full copy of the EXE file from one of the VSCs, demonstrating the value of data culled from VSCs.  I've had similar success on other engagements, particularly one involving the Poison Ivy RAT and the threat actor co-opting an internal employee to install it, and subsequently, partially remove it from the system.  The point is that VSCs can be an extremely valuable source of data.

Many analysts on the "intel side" consider deleting VSCs commonplace, and not worth a great deal of attention.  After all, this is most often accomplished using tools native to the operating system, such as vssadmin.exe.  But what if that's not the tool used?  What if the threat actor uses WMI instead, using a command such as:

Get-WmiObject Win32_Shadowcopy | ForEach-Object{$_.Delete();}

Or, what if the threat actor base64-encoded the above command and ran it via Powershell?  The same result is accomplished, but each action results in a different set of toolmarks.
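The base64-encoded variant is easy to demonstrate.  PowerShell's -EncodedCommand switch expects the script encoded as base64 over UTF-16LE, so the command-line toolmark looks entirely different from the readable WMI one-liner, even though the result is identical:

```python
import base64

# Encode the VSC-deletion one-liner the way PowerShell's
# -EncodedCommand expects: base64 over a UTF-16LE byte string.
cmd = "Get-WmiObject Win32_Shadowcopy | ForEach-Object{$_.Delete();}"
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
print(f"powershell.exe -EncodedCommand {encoded[:40]}...")

# A defender can reverse the toolmark the same way:
decoded = base64.b64decode(encoded).decode("utf-16-le")
assert decoded == cmd
```

Decoding encoded command lines during analysis is cheap, and the very presence of an encoded command is itself a toolmark worth recording.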

Clearing Windows Event Logs
Another commonly observed counter-forensics technique is clearing the Windows Event Logs.  In some cases, it's as simple as three lines in a batch file, clearing just the System, Security, and Application Event Logs.  In other cases, it's a single line of code that is much more comprehensive:

FOR /F "delims=" %%I IN ('WEVTUTIL EL') DO (WEVTUTIL CL "%%I")

As with the other actions we've discussed in this post, there are other ways to go about clearing Windows Event Logs, as well: WMI, Powershell (encoded, or not), external third-party tools, etc.  However, each has its own set of toolmarks that can be associated with the method used, and are separate from the end result.
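One constant worth remembering is that, regardless of the method used, the clear itself tends to leave a record: Security/1102 ("the audit log was cleared") for the Security Event Log, and System/104 for others.  A simple Python sketch over parsed records surfaces it:

```python
# Log-clear events: (log name, event ID) pairs that record the clear
# itself, whatever tool was used to perform it.
CLEAR_EVENTS = {("Security", 1102), ("System", 104)}

def find_log_clears(records):
    """records: iterable of (log_name, event_id, timestamp) tuples,
    e.g. parsed from surviving or carved Event Log records."""
    return [r for r in records if (r[0], r[1]) in CLEAR_EVENTS]

records = [
    ("Security", 4624, 1598800000),
    ("Security", 1102, 1598800300),
]
for r in find_log_clears(records):
    print(f"log cleared: {r[0]} at {r[2]}")
```

The timestamp on the 1102 record also feeds directly into the "when" discussion above: where in the attack cycle the actor covered their tracks.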

Addressing Counter-Forensics
Much of what we've discussed in this post constitute counter-forensics activities. Fortunately, there are ways to address instances of counter-forensics from a DFIR perspective, such as when Windows Event Logs have been cleared, as there are other data sources that can provide information in the absence of that data. For example, if you want to know when a user was logged into the system, you don't need the logs for that.  Instead, create a mini-timeline from data sources in the user profile, and you'll be able to see when that user was logged into the system.  However, if your question is, "what were the contents of the log records?", then you'll have to carve unallocated space to retrieve those records.

In some cases, an analyst may collect an image or selected triage files from a system, and find that some of the Windows Event Logs aren't populated.  I've been seeing this recently with respect to the Microsoft-Windows-TaskScheduler/Operational Event Log; on the two Win10 systems in my office, neither file is populated (the same is true with a number of the images downloaded from CTF sites).  This isn't because the logs were cleared, but rather because they had been disabled.  It seems that at some point, the settings for that Windows Event Log were modified such that it was disabled, and as such, the log isn't populated.  This doesn't mean that scheduled tasks aren't being executed, or that information about scheduled tasks isn't available from other data sources; it just means that the historical record normally seen via that Windows Event Log isn't available.
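Each Windows Event Log channel carries an "Enabled" setting (found beneath the WINEVT\Channels key in the Software hive), which lets us distinguish "disabled" from "cleared" during triage.  Here's a minimal Python sketch of that decision, with the inputs assumed to come from a parsed hive and the .evtx file itself:

```python
def diagnose_empty_log(channel_enabled, record_count):
    """Triage an Event Log channel: channel_enabled is the parsed
    'Enabled' Registry value (0 or 1); record_count is the number of
    records found in the corresponding .evtx file."""
    if record_count > 0:
        return "populated"
    return "channel disabled" if channel_enabled == 0 else "empty (possibly cleared)"

print(diagnose_empty_log(channel_enabled=0, record_count=0))
```

An empty log with the channel still enabled is a very different finding than a disabled channel, and each sends the analysis down a different path.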