Wednesday, August 05, 2020

Toolmarks and Intrusion Intelligence

Very often, DFIR and intel analysts alike don't appear to consider such things as toolmarks associated with TTPs, or intrusion intelligence. However, considering such things can sharpen attribution, as well as our understanding of the intrusion itself.

What I'm suggesting in this post is fully exploiting the data that most DFIR analysts already collect and therefore have available.  I'm not suggesting that additional tools be purchased; rather, what I'm illustrating is the value of going just below the surface of much of what's shared, and adding a bit of context regarding the how and when of various actions taken by threat actors.

Disable Security Tools
What used to be referred to as simply "disable security tools" in the MITRE ATT&CK framework is now identified as "impair defenses", with six subtechniques.  The one we're interested in at the moment is "disable or modify tools", which I think makes better sense, as we'll discuss in this section.

In TheDFIRReport's write-up regarding the Snatch ransomware actors, the following statement is made regarding the threat actor's activities:

"...turned off Windows Defender..."

Beyond that, there's no detail in the report regarding how Windows Defender was "turned off", and the question likely asked would be, "...does it really matter?"  I know a lot of folks have said, "...there are a lot of ways to turn off or disable Windows Defender...", and they're absolutely correct.  However, something like this should not be dismissed, as the toolmarks associated with a particular method or mechanism for disabling or modifying a tool such as Windows Defender will vary, and have been seen to vary between different threat actors. In fact, it is precisely because they vary that they are so valuable.

Toolmarks associated with the means used by a particular threat actor or group to disable Defender, or any other tool, can be used as intrusion intelligence associated with that actor, and can be used to attribute the observed activity to that actor in the future.

Again, there are a number of ways to render Windows Defender ineffective.  For example, you can incapacitate the tool, or use native functionality to make any number of Registry modifications that significantly impact Defender.  For threat actors that gain access to systems via RDP, using a tool such as Defender Control is very effective, as it's simply a button click away; it also has its own set of toolmarks, given how it functions. In particular, it "disables" Defender by setting the data for two specific Registry values, something few other observed methods do.

Other techniques can include setting exclusions for Windows Defender; rather than turning it off completely, adding an exclusion "blinds" the tool by telling it to ignore certain paths, extensions, IP addresses, or processes.  Again, different TTPs, and as such, different toolmarks will be present.  The statement "turned off Windows Defender" still applies, but the mechanism for doing so leaves an artifact constellation (toolmarks) that varies depending upon the mechanism.

The "When"
Not only is the method used to disable a tool a valuable piece of intelligence, but so is the timing. That is to say, when during the attack cycle is the tool disabled?  Some ransomware executables may include a number of processes or Windows services (in some cases, over 150) that they will attempt to disable when they're launched (and prior to file encryption), but if a threat actor manually disables a security tool, knowing when and how they did so during their attack cycle can be valuable intrusion intel that provides insight into their capabilities.

Deleting Volume Shadow Copies
Deleting Volume Shadow Copies is an action most often associated with ransomware attacks, employed as a means of preventing recovery and forcing the impacted organization to pay the ransom to get its files back.  However, it's also an effective counter-forensics technique, particularly when it comes to long-persistent threat actors.

I once worked an engagement where a threat actor pushed out their RAT to several systems by creating remote Scheduled Tasks to launch the installer.  A week later, they pushed out a copy of the same RAT, but with a different config, to another system.  Just one.  However, in this case, they pushed it to the StartUp folder for a communal admin account.  As such, the EXE file sat there for 8 months; it was finally launched when the admins used the communal admin account in their recovery efforts for the engagement I was working.  I was able to get a full copy of the EXE file from one of the VSCs, demonstrating the value of data culled from VSCs.  I've had similar success on other engagements, particularly one involving the Poison Ivy RAT and the threat actor co-opting an internal employee to install it, and subsequently, partially remove it from the system.  The point is that VSCs can be an extremely valuable source of data.

Many analysts on the "intel side" consider deleting VSCs commonplace, and not worth a great deal of attention.  After all, this is most often accomplished using tools native to the operating system, such as vssadmin.exe.  But what if that's not the tool used?  What if the threat actor uses WMI instead, using a command such as:

Get-WmiObject Win32_Shadowcopy | ForEach-Object{$_.Delete();}

Or, what if the threat actor base64-encoded the above command and ran it via Powershell?  The same result is accomplished, but each action results in a different set of toolmarks.
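As an aside, the encoded form itself becomes a searchable toolmark. PowerShell's -EncodedCommand switch expects the command as base64 over UTF-16LE bytes, so the same command always yields the same encoded string; a minimal Python sketch illustrates both directions:

```python
import base64

def encode_powershell(command):
    """Encode a command the way PowerShell's -EncodedCommand expects:
    UTF-16LE bytes, then base64."""
    return base64.b64encode(command.encode("utf-16-le")).decode("ascii")

def decode_powershell(encoded):
    """Reverse the encoding, e.g., when a suspicious -EncodedCommand
    argument turns up in process creation logging."""
    return base64.b64decode(encoded).decode("utf-16-le")

cmd = "Get-WmiObject Win32_Shadowcopy | ForEach-Object{$_.Delete();}"
enc = encode_powershell(cmd)
print(enc)                      # the string that would appear on the command line
print(decode_powershell(enc))   # round-trips to the original command
```

The decoded command is what actually executed; the encoded string is what you'd hunt for in command line logging.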

Clearing Windows Event Logs
Another commonly observed counter-forensics technique is clearing the Windows Event Logs.  In some cases, it's as simple as three lines in a batch file, clearing just the System, Security, and Application Event Logs.  In other cases, it's a single line of code that is much more comprehensive:

FOR /F "delims=" %%I IN ('WEVTUTIL EL') DO (WEVTUTIL CL "%%I")

As with the other actions we've discussed in this post, there are other ways to go about clearing Windows Event Logs, as well; WMI, Powershell (encoded, or not), external third party tools, etc.  However, each has its own set of toolmarks that can be associated with the method used, and that are separate from the end result.

Addressing Counter-Forensics
Much of what we've discussed in this post constitutes counter-forensics activity. Fortunately, there are ways to address instances of counter-forensics from a DFIR perspective, such as when Windows Event Logs have been cleared, as there are other data sources that can provide information in the absence of that data. For example, if you want to know when a user was logged into the system, you don't need the logs for that.  Instead, create a mini-timeline from data sources in the user profile, and you'll be able to see when that user was logged into the system.  However, if your question is, "what were the contents of the log records?", then you'll have to carve unallocated space to retrieve those records.
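For that carving, a (hypothetical) first pass is simply locating EVTX chunk signatures in raw data; each chunk in a Windows Event Log file begins with the magic bytes "ElfChnk" and is 64KB in size, so finding the magic gives candidate offsets to hand to a proper carving/parsing tool. A minimal Python sketch, run here against a synthetic blob rather than a real image:

```python
# Scan a raw byte stream (e.g., unallocated space) for EVTX chunk signatures.
CHUNK_MAGIC = b"ElfChnk\x00"
CHUNK_SIZE = 64 * 1024

def find_evtx_chunks(data):
    """Return the offsets of all EVTX chunk signatures in the data."""
    offsets, pos = [], data.find(CHUNK_MAGIC)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(CHUNK_MAGIC, pos + 1)
    return offsets

# toy example: two fake chunks embedded in filler bytes
blob = b"\x00" * 100 + CHUNK_MAGIC + b"\x00" * CHUNK_SIZE + CHUNK_MAGIC
print(find_evtx_chunks(blob))
```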

In some cases, an analyst may collect an image or selected triage files from a system, and find that some of the Windows Event Logs aren't populated.  I've been seeing this recently with respect to the Microsoft-Windows-TaskScheduler/Operational Event Log; on the two Win10 systems in my office, neither file is populated (the same is true of a number of the images downloaded from CTF sites).  This isn't because the logs were cleared, but rather because they had been disabled; at some point, the settings for that Windows Event Log were modified such that it was disabled, and as a result, the log isn't populated.  This doesn't mean that scheduled tasks aren't being executed, or that information about scheduled tasks isn't available...it just means that the historical record normally seen via that Windows Event Log isn't available.

Tuesday, July 07, 2020

On Artifact Constellations And "Toolmarks"

Something I've been pretty focused on in my analysis for some time is the concept of "artifact constellations".  I originally referred to this concept as "artifact clusters", but I heard someone from the FBI's Cyber BAU team use the term "constellations", and co-opted the term, in part to facilitate our conversation, but also because it sounded much more appropriate.

Artifact constellations are found as a result of events (not an event, but events) that occur on a system within close temporal proximity (another borrowed term, this one from Mr. Walters), as a result of some action taken either by the user or a threat actor.  When someone interacts with the operating system and applications, there are direct artifacts as a result of that interaction.  There are also very often indirect artifacts, created as a result of the events occurring within the "eco-system"; that is, events generated by the operating system that are not a result of direct interaction by the user.

Artifacts should never be viewed in isolation, as this can lead to incorrect findings when an artifact constellation is completed with assumption.  This can be an issue, among other instances, when examining artifacts of program execution.  For example, if an analyst were to find a prefetch file or a UserAssist entry for CCleaner, does this mean that the user executed the capabilities of CCleaner, or simply that they launched the GUI?  When viewing only artifacts such as a prefetch file or a UserAssist entry in isolation, there is qualitatively no difference between launching CCleaner and taking advantage of its full capabilities, and simply launching the CCleaner UI, waiting a few minutes, and closing the application.

Artifact constellations will vary in the number of artifacts they contain, based on a number of factors, such as the version and configuration of the operating system, the version and configuration of installed applications, the audit configuration of the system, etc.  All of these factors play an important role in the makeup of the artifact constellation, which can also be viewed as a set of "toolmarks" related to the use of the application.

The overall idea here is that rather than pursuing and basing findings on individual artifacts in isolation, we instead pursue artifact constellations, as this allows us to develop a better sense of context, as well as overcome attempts at counter-forensics, however intentional (or otherwise).

If we are used to viewing artifacts in isolation and those artifacts are not available on the system, where does that leave us?  Let's say, for example, that an analyst is familiar/comfortable with pursuing Application Prefetch files as artifacts of program execution; what happens if those artifacts don't exist on the system?  Say, the version of Windows being examined is a server variant, or the threat actor launched programs from within alternate data streams, or the threat actor took counter-forensics measures and deleted the prefetch files (and possibly disabled application prefetching).  What happens then?  How does the examiner pursue the goals of their analysis if the artifacts with which they are most comfortable no longer exist?

Attempts at counter-forensics, no matter how unintentional, should also be considered.  For example, something many analysts have seen before is an AppCompatCache entry for a possibly malicious file with a last modification time more closely aligned with the installation time of the operating system.  This can be the result of the threat actor copying the file to the system and quickly time stomping it with the $STANDARD_INFORMATION attribute time stamps from files that are part of the legitimate Windows installation.  If the analyst views this one artifact as evidence of program execution in isolation from other artifacts in the constellation, they may also make an incorrect determination as to the threat actor's dwell time.
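A common heuristic for spotting this kind of time stomping (sketched here as an assumption, not any one tool's specific logic) is to compare the $STANDARD_INFORMATION timestamps against the $FILE_NAME attribute timestamps, which most time stomping utilities leave untouched:

```python
from datetime import datetime

def possible_timestomp(si_created, fn_created):
    """Heuristic: most time stomping utilities alter only the
    $STANDARD_INFORMATION timestamps, while the $FILE_NAME attribute
    is maintained by the kernel.  An SI creation time earlier than
    the FN creation time is a red flag worth pivoting on."""
    return si_created < fn_created

si = datetime(2016, 7, 16, 11, 20, 0)   # appears "born" with the OS install
fn = datetime(2020, 8, 1, 3, 14, 7)     # when the file actually landed
print(possible_timestomp(si, fn))       # True -> investigate further
```

A positive hit isn't proof by itself, but it is a pivot point for building out the rest of the constellation, and for correcting the dwell time estimate.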

A recent article on TheDFIRReport site regarding the Snatch ransomware describes the actions of the threat actor, which includes "turned off Windows Defender".  However, the article does not mention how the threat actor did so.  Determining the how and the when (in relation to other events) with respect to the threat actor disabling Windows Defender can be very beneficial to developing threat intelligence about that actor, and identifying toolmarks associated with their activities.

Now, there are a number of ways to disable Windows Defender, and each will have its own artifacts or "toolmarks". We know from the article that the threat actor accesses systems via RDP, so that provides some indication as to what artifacts would be available for analysis.

One way to disable Windows Defender via the command line is to use reg.exe:

reg.exe add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v "DisableAntiSpyware" /t REG_DWORD /d "1" /f > Nul

Disabling Windows Defender can also be achieved via Powershell.  In addition to  a Registry value (or values) being modified, there will also be entries in the Powershell Event Logs indicating the usage of Powershell.  Depending upon how the commands are launched, there may also be entries in a user's Powershell console history file.

Another way to disable Defender is via a freeware tool such as Defender Control, which is a simple GUI tool with two buttons, one to disable Defender, and one to enable it.  If a threat actor uses a tool such as this, the artifact constellation will likely appear as follows:
  • File downloaded to/created on the system
  • AppCompatCache entry, and perhaps an AmCache.hve entry
  • Launch via user account (compromised account used to RDP into the system) results in UserAssist and RecentApps entries
  • There likely won't be a Prefetch file, as workstations do not run Terminal Services by default
  • Registry values related to disabling Windows Defender modified
  • Windows Defender Event Log records for event IDs 5001 and 5010
Again, the constellation will vary depending upon the means used to disable Defender.  However, understanding the artifact constellation, and the toolmarks associated with various methods for disabling Defender, are not only beneficial for identifying threat actors, but can also be valuable when the threat actor has taken steps to perform counter-forensics (such as clearing Windows Event Logs).
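One way to operationalize such a constellation is as a set of expected artifact types that observed data can be scored against; the labels below are simply shorthand for the bulleted list above, not any established taxonomy:

```python
# Hypothetical sketch: model an artifact constellation as a set of
# expected artifact types, and score how much of it is present in
# the data actually observed on a system.
DEFENDER_CONTROL = {
    "file_created", "appcompatcache", "amcache",
    "userassist", "recentapps", "registry_values",
    "defender_eventlog_5001_5010",
}

def constellation_coverage(expected, observed):
    """Fraction of the expected constellation seen in the data;
    missing artifacts may point to counter-forensics (e.g., cleared
    Windows Event Logs) rather than absence of the activity."""
    return len(expected & observed) / len(expected)

observed = {"appcompatcache", "userassist", "registry_values",
            "recentapps", "amcache"}
print(round(constellation_coverage(DEFENDER_CONTROL, observed), 2))  # 0.71
```

Partial coverage is itself informative: which artifacts are missing, and why, may be the most interesting question.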

So, why is the when important? Some ransomware variants have been observed to include a series of commands to turn Windows services off, via the 'net stop' command.  In fact, one variant was found to include more than 150 such commands, specifying several EDR, AV, and backup services. However, that variant did not include a command to disable the Windows Defender service; in that case, the threat actor had to do so manually, prior to moving their malware to the system.  In some instances, the threat actor will copy a batch file over to the system and execute it, so that they don't have to retype commands (and possibly make an error); sometimes, they leave the batch files in place when they're done.

In another instance, disabling of Windows Defender was actually determined to be a result of normal system administrator actions.  In fact, not only was Windows Defender manually disabled by administrators upon installation, but it was also regularly disabled via GPO.  It's easy to see that Defender was disabled, but determining the when can make a pretty significant difference in the incident, particularly if the analyst assumes that the threat actor disabled the application.

Conclusion
While it's very useful that there are cheat sheets available that provide us with a list of DFIR artifacts to examine, as analysts we are called upon to go beyond looking at artifacts in isolation, and instead base findings on artifact constellations.  Doing so also allows us to develop toolmarks associated with specific sets of activities, providing context and allowing us to better understand the threat actors.  It's easy to say that some event (Windows Defender was disabled) occurred, but developing the how and the when of that event provides the context to better visualize a threat actor's activities.

Sunday, June 14, 2020

Plugin Spotlight - consentstore, consentstore_tln, appcompatflags.pl update

These plugins were developed as a result of this article posted to Medium by Zach, aka "svch0st".  The article is fascinating, in that Zach found that there are Registry keys that appear to track the applications that access the microphone and webcam on a Windows system.  In addition, there are values that specify the last start and stop times for the applications using those devices.  Zach then takes the article a step further by illustrating what it looks like when a RAT is used to access and record audio from the mic.

Running the consentstore.pl plugin against a hive extracted from one of my own systems, I can see the following:

microphone
C:#Users#harlan#AppData#Roaming#Zoom#bin#Zoom.exe
LastWrite time          2020-05-05 23:06:16Z
LastUsedTimeStart    2020-05-05 23:00:52Z
LastUsedTimeStop     2020-05-05 23:06:16Z

webcam
C:#Users#harlan#AppData#Roaming#Zoom#bin#Zoom.exe
LastWrite time          2020-05-05 23:05:24Z
LastUsedTimeStart    2020-05-05 23:01:30Z
LastUsedTimeStop     2020-05-05 23:05:24Z

As you can see from the above information, the key LastWrite times correspond to the final time stamp, or the "LastUsedTimeStop".

The consentstore_tln.pl plugin outputs the same information in the 5-field TLN format, illustrated below:

1588719652|REG|||ConsentStore microphone "C:\Users\harlan\AppData\Roaming\Zoom\bin\Zoom.exe" LastUsedTimeStart
1588719976|REG|||ConsentStore microphone "C:\Users\harlan\AppData\Roaming\Zoom\bin\Zoom.exe" LastUsedTimeStop
1588719690|REG|||ConsentStore webcam "C:\Users\harlan\AppData\Roaming\Zoom\bin\Zoom.exe" LastUsedTimeStart
1588719924|REG|||ConsentStore webcam "C:\Users\harlan\AppData\Roaming\Zoom\bin\Zoom.exe" LastUsedTimeStop

Because the full name of the key is included in the timeline output, albeit with the "#" translated to back slashes, searches run across the timeline looking for pivot points (such as AppCompatCache or AmCache entries, user profile paths, etc.) will result in positive 'hits'.  For example, in Zach's article, the RAT used to access the microphone was found in the path "dev\shell.exe".  If an analyst found an entry for "dev\shell.exe" in the AppCompatCache or AmCache data, and then using that as a pivot point found something similar to the above, the analyst would not only have the insight that the file was on the system, but also what it had been used for.  As such, this also serves to extend the "program execution" artifact category a bit, because now we not only know that the file was executed, but we also have insight into what it was used for, or what it did.
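As a side note, the epoch times in the TLN output above translate directly back to the plugin's human-readable timestamps; a small Python sketch parsing the 5-field TLN format (time|source|system|user|description) illustrates:

```python
from datetime import datetime, timezone

def parse_tln(line):
    """Split a 5-field TLN line and render the epoch time in the
    same 'YYYY-MM-DD HH:MM:SSZ' format the plugins use."""
    t, source, system, user, desc = line.split("|", 4)
    ts = datetime.fromtimestamp(int(t), tz=timezone.utc)
    return {
        "time": ts.strftime("%Y-%m-%d %H:%M:%SZ"),
        "source": source, "system": system,
        "user": user, "description": desc,
    }

event = parse_tln('1588719652|REG|||ConsentStore microphone '
                  '"C:\\Users\\harlan\\AppData\\Roaming\\Zoom\\bin\\Zoom.exe" LastUsedTimeStart')
print(event["time"])  # 2020-05-05 23:00:52Z
```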

In addition, this information provides us with some very useful artifacts, particularly when viewed as part of an overall artifact constellation.  For example, this provides a view into "humanness", or indications of human interaction with the system.  In most instances when engaging with applications such as Zoom, the user has the option to use the mic and webcam on the local system, and has to click a button/make a choice to do so.

Further, as this is a "new" location of sorts, it is not yet covered/addressed by counter-forensics techniques.  From the above information retrieved from the Software hive, we can see that Zoom was launched from the user's profile path, and the dates and times that it ran, providing insight into user activity in the face of counter-forensics activities, even if the entire user profile has been deleted.

So, thanks to Zach for sharing the information, and providing the opportunity for me to view this information and create these two plugins.  Keep up the great work, Zach, and I'm going to keep watching to see what further topics you tackle.

AppCompatFlags
Not a new plugin, but I updated the appcompatflags.pl plugin based on content provided by Christopher at TrustedSec, which indicates that the AppCompatFlags key is another useful persistence location.

Plugin Spotlight - printer_settings, featureusage

Given the number of RegRipper plugins that are part of the distro, I thought it would be a good idea every now and then to spotlight a plugin or two, and share what led to the plugin being created, and discuss how it can be used as part of analysis.

printer_settings.pl
This plugin is a result of what I read about Project TajMahal. If you scroll down in Appendix II, to modules 65 and 66, you'll see the following statement:

Steals printed documents from spooler queue.

This is done by enabling the "KeepPrintedJobs" attribute for each (or just one) configured printer stored in the Windows Registry. What this means is that print jobs will not be deleted once they're complete; as such, this serves as an interesting means of data collection, specifically, data from information repositories.

I thought that was interesting and tried setting the attribute via the UI, and then writing and testing a plugin to detect the attribute setting.  The result is the plugin.

So, how would you use this during an engagement?  A positive finding from the plugin would be a pivot point into deeper analysis; for example, if the attribute is set, what is the LastWrite time of the key (or keys) in question?  Does this time stamp then prove to be a useful pivot point within the greater context of an overall system timeline?  If you have an image of the system, what is the content of the spooler?

featureusage.pl
CrowdStrike recently posted an article on the various values and subkeys beneath the FeatureUsage key, so I'm not sure what I could add to that.

In short, the FeatureUsage artifacts reportedly serve as evidence of program execution, on Windows 10 version 1903 and higher.  The CrowdStrike blog post provides some very good information regarding the subkey contents; what really stood out for me is how the contents provide insight into humanness within the Windows Registry, as well as provide information that analysts can look to in the face of counter-forensics.


Sunday, May 31, 2020

Tips on Using RegRipper v3.0

With the "new" release, I thought it would be good to share a couple of tips as to how you can get the most out of RegRipper v3.0. I should note that for the most part, all of these tips are the same things I've recommended for using RegRipper v2.8, as well.

The "Kitchen Sink" Approach
When you take the "kitchen sink" approach and run every available plugin against a hive file, you're going to get a great deal of info back, some of which may not make sense or even apply to the case on which you're working.  As such, you're likely going to have questions about some of what you see, and whether it can be applied to the case you're working on.  I provided the GUI tool to operate in exactly this manner, because according to many, this is the primary use case, and how RegRipper is most often used. However, what follows are some tips that might be helpful, particularly if you do not want to use this approach.

Check The References
If you have a question about a plugin, feel free to open the plugin in Notepad (I use Notepad++ or UltraEdit) and take a look at the contents, particularly the "header".  If you're not sure what a "header" is, it's all the stuff commented out (preceded by '#') at the top of the plugin.  If you're using something like Notepad++, the header may appear in a different color, such as green, thanks to syntax highlighting.  Very often, the header will contain reference information or URLs that provide insight as to why the plugin was written and how the information returned by the plugin may be applied to specific use cases.

Finding a Plugin
Sometimes, you might want to check and see if there's a plugin that gets some information you're interested in, as it may be helpful to your case.  There is no online reference for the plugins; the v2.8 distro contains 386 plugins, and the v3.0 distro contains 248 plugins, so keeping a reference or wiki of some kind is still going to require searching.  Further, not all of the plugins look for specific values, but instead get all or most of the values beneath a key, so if you're looking for a specific value name, or some element that may be included in the data, you may not find it.

In order to see if there's a plugin that looks for a particular key or value name, I use the following command:

C:\perl\rr3\plugins>findstr /C:"UseLogonCredential" /i *.pl

...or to find any plugins that reference blog posts from PenTestLabs (hint: there are two), I use the following command:

C:\perl\rr3\plugins>findstr /C:"pentestlab" /i *.pl
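For analysts working from a non-Windows system, the same search can be sketched in a few lines of Python; the plugin file names used in the demonstration below are made up purely for illustration:

```python
import os
import tempfile

def find_plugins(plugins_dir, needle):
    """Case-insensitive search across plugin files; the Python
    equivalent of: findstr /C:"..." /i *.pl"""
    hits = []
    for name in sorted(os.listdir(plugins_dir)):
        if not name.endswith(".pl"):
            continue
        with open(os.path.join(plugins_dir, name), errors="ignore") as f:
            if needle.lower() in f.read().lower():
                hits.append(name)
    return hits

# toy demonstration with a temporary plugins folder
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "wdigest.pl"), "w") as f:
        f.write("# checks the UseLogonCredential value\n")
    with open(os.path.join(d, "other.pl"), "w") as f:
        f.write("# unrelated\n")
    print(find_plugins(d, "uselogoncredential"))  # ['wdigest.pl']
```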

If you don't find what you're looking for, ask.  Yep, it's that easy.  Just ask.  Sure, you can go on social media and say, "hey, RegRipper doesn't have a plugin that does this...", and that may very well be true.  However, RegRipper was originally designed to be a community-supported project; if you don't find a plugin that does something you need, either write one (Corey Harrell did a lot of that, starting off with simply copy-paste...), or share a request along with some data so that it can be written.  In most cases, I've turned a plugin around in an hour or so, with limited data for testing. As time goes on and more data becomes available, the testing improves, and there may be corresponding improvements in the plugins, as well.

A final note on that thought...when looking for a plugin, spelling helps.  Tremendously.  You don't even know.

Building Profiles
I know that some folks are of the opinion that the RegRipper GUI doesn't allow you to modify the available profiles, but that is simply NOT the case.  In fact, all you need to do to create your own profiles is find the double-secret-monkey-stuff Windows tool called "Notepad".  ;-)  Really, it's that easy.

A "profile" is a list of plugins that are run by rip.exe, via the "-f" switch.  You can use rip to run individual plugins, but if you have a series of plugins that you want to run against a hive, the easiest way, and one that is self-documenting, is to use a profile.  To create a profile, just create a text file with no extension, and add the plugins you want to run, one on each line.  For example, to build out a profile that lets me check the Software hive for information related to connected USB devices, I'd create a file called "USB-Software" (again, no file extension), and then add the following plugins:

emdmgmt
portdev
volinfocache

That's all it takes. As new information is developed and new plugins become available, I might add some of those plugins to the profile. 
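Since a profile is just a plain text file of plugin names, a small helper (hypothetical, not part of RegRipper) can sanity-check one before a run, catching misspelled plugin names; for illustration, this sketch also skips blank lines and '#' comments:

```python
import os
import tempfile

def load_profile(path):
    """Read a profile: one plugin name per line; blank lines and
    '#' comment lines are ignored in this sketch."""
    with open(path) as f:
        return [ln.strip() for ln in f
                if ln.strip() and not ln.strip().startswith("#")]

def missing_plugins(profile, plugins_dir):
    """Return profile entries with no matching .pl file in the
    plugins folder, i.e., likely spelling mistakes."""
    return [p for p in profile
            if not os.path.exists(os.path.join(plugins_dir, p + ".pl"))]

# toy demonstration with a temporary plugins folder
with tempfile.TemporaryDirectory() as d:
    for name in ("emdmgmt", "portdev"):
        open(os.path.join(d, name + ".pl"), "w").close()
    prof = os.path.join(d, "USB-Software")
    with open(prof, "w") as f:
        f.write("emdmgmt\nportdev\nvolinfocache\n")
    print(missing_plugins(load_profile(prof), d))  # ['volinfocache']
```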

RegRipper v2.8
As a final note and just a reminder, I'm no longer supporting RegRipper v2.8.  I'll leave the repo up for the time being, but I'll be removing the repo before too long (date TBD).

I hope that someone finds this information useful.

Thursday, May 28, 2020

RegRipper v3.0

I recently released RegRipper v3.0, something I've been working on since Aug, 2019.

I am no longer supporting RegRipper 2.8.  I'll leave the repo up for the time being, but I will not be writing plugins to support that version.  You can move plugins written for v2.8 to the v3.0 plugins folder, and they will work fine.  However, due to modifications in the date output format, the reverse is not true.

What's New?
Fig. 1: RegRipper GUI
GUI - The GUI (i.e., rr.exe) no longer makes use of profiles.  When you launch the GUI, you'll see what appears in figure 1.  Note that you can select the hive, and the output folder for the report, but there is no longer a drop-down for selecting a profile.

Instead, what now happens is that the hive file type is "guessed"/determined, and the tool runs through the entire plugins folder to build a list of all plugins that apply to that hive, and then runs them.  All of them. There is no longer any need to maintain a profile for use with the GUI.  In the end, the idea of profiles seemed to be just too confusing.

The hive file types that RR "knows" are Software, System, SAM, NTUSER.DAT, USRCLASS.DAT, and AmCache.

However, the capability to run individual plugins and profiles still exists, albeit via the command line tool, rip.exe.  More about that later.

Date Format - the date output format has changed.  Phill Moore had asked for this via Twitter back in Feb, and more recently, a Github issue had been submitted via the Autopsy Github site.  The issue that was submitted asked for a date output format IAW ISO 8601, but what was asked for was not, in fact, compliant with ISO 8601; rather, what they'd asked for was the RFC 3339 profile.  That's very likely much more than you wanted to know, so to be brief, the date output format is now:

YYYY-MM-DD HH:MM:SS

Note the space between the date and time...that's what is NOT compliant with ISO 8601, but it is what was asked for.  In those instances where the time stamp is equivalent to UTC, I've added "Z" to the date output format.
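For reference, the format can be reproduced in a couple of lines (sketched here in Python rather than Perl):

```python
from datetime import datetime, timezone

# Render a UTC timestamp in RegRipper v3.0's output format:
# "YYYY-MM-DD HH:MM:SS" with a trailing Z for UTC.  The space
# between date and time is what strict ISO 8601 does not allow.
ts = datetime(2020, 5, 5, 23, 0, 52, tzinfo=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%SZ"))  # 2020-05-05 23:00:52Z
```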

Plugin Updates - As part of the process of "fixing" all 386 plugins in the 2.8 distro, a good number of them were updated, modified, consolidated, or simply "whacked".  In this case, "whacked" means removed from the main distro, moved to a separate folder, and may be addressed at a later date.

At the moment, the 3.0 distro contains 248 plugins.  The easiest way to find something specific in the plugins is to use a hidden MS tool called "findstr".  Navigate to the plugins folder and type a command such as:

findstr /C:"UseLogonCredential" /i *.pl

...or...

findstr /C:"pentestlab" /i *.pl

If you can't find a plugin that addresses a specific need, then reach out and ask.  I recently was provided some information about a key, and some sample data, by a co-worker, and within an hour was able to turn around a fully functional plugin.

RIP - the capabilities of the command line tool have been modified significantly, which you can see from the syntax info below:

Rip v.3.0 - CLI RegRipper tool
Rip [-r Reg hive file] [-f profile] [-p plugin] [options]
Parse Windows Registry files, using either a single module, or a profile.

  -r [hive] .........Registry hive file to parse
  -d ................Check to see if the hive is dirty
  -g ................Guess the hive file type
  -a ................Automatically run hive-specific plugins
  -aT ...............Automatically run hive-specific TLN plugins
  -f [profile].......use the profile
  -p [plugin]........use the plugin
  -l ................list all plugins
  -c ................Output plugin list in CSV format (use with -l)
  -s systemname......system name (TLN support)
  -u username........User name (TLN support)
  -uP ...............Update default profiles
  -h.................Help (print this information)

Ex: C:\>rip -r c:\case\system -f system
    C:\>rip -r c:\case\ntuser.dat -p userassist
    C:\>rip -r c:\case\ntuser.dat -a
    C:\>rip -l -c

All output goes to STDOUT; use redirection (ie, > or >>) to output to a file.

copyright 2020 Quantum Analytics Research, LLC

Notice the "-a" switch; this replicates what the GUI does, in that it gets the hive file type, then runs through the plugins folder and finds all plugins that pertain to that hive type, and then runs them.  The "-aT" switch does the same thing, but for the timeline (*_tln.pl) plugins.  As with the RR GUI, the hive file types that rip "knows" are Software, System, SAM, NTUSER.DAT, USRCLASS.DAT, and AmCache.  However, with rip.exe, you can still run the plugins designated for "all" hive types; rlo.pl, null.pl, del.pl, etc., via the command line using the "-p" switch.

Also, you still have the capability to run profiles via rip.exe.  This is very useful if you don't want to take a "kitchen sink" approach, but you want to be able to easily run several plugins, such as for a USB playbook.

Caveats
RegRipper is not and never was intended to be an "all knowing" tool.  It was intended to be a "good" tool that made people's jobs easier, and the only real way to do that is if analysts provide input.  So, rather than saying, "RegRipper doesn't...", why not grab some sample data, attach it to an email and send in a request?  I've been pretty good about turning something around within an hour, and more time and more data for testing simply means that the plugin becomes more useful for others, as well.

I haven't seen everything, nor do I know everything.  I do not offer myself up as an "expert".  This is to say that the available RegRipper plugins are based on either what I've seen or what others have shared with me.  For example, I read about Project TajMahal, did some testing, and the printer_settings.pl plugin checks to see if the KeepPrintedJobs property is enabled.  But that doesn't mean that everything pertinent to your case is covered by a plugin; if that turns out to be the case, I'm more than happy to assist where I can, and where you allow me to do so.

Tuesday, April 14, 2020

Registry Analysis, pt II

In my last blog post, I provided a brief description of how I perform "Registry analysis", and I thought it would be a good idea to share the actual mechanics of getting to the point of performing Registry analysis.

First off, let me state clearly that I rarely perform Registry analysis in isolation from other data sources and artifacts on the system.  Most often, I'll incorporate file system metadata, as well as Windows Event Log metadata, into my analysis in order to develop a clearer picture of the activity.  Doing this helps me to 'see' activity that might be associated with a threat actor, and it goes a long way towards removing guesses and speculation from my analysis.

For instance, I'll incorporate Windows Event Log record metadata using the following command:

C:\tools>wevtx.bat d:\case\*.evtx > d:\case\evtx_events.txt

The above command places the record metadata, decorated using intrusion intel from the eventmap.txt file, into an intermediate file, with all of the entries in the 5-field TLN format.  I can then make use of just this file, or I can incorporate it into my overall timeline events file using the 'type' command:

C:\tools>type evtx_events.txt >> events.txt
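
The 5-field TLN format lends itself to simple scripting.  As a rough, hypothetical sketch (in Python rather than the batch files above, and purely to illustrate the concept), the fields being time (Unix epoch), source, system, user, and description:

```python
from datetime import datetime, timezone

def parse_tln(line):
    """Split a TLN line; maxsplit=4 keeps any '|' in the description intact."""
    t, source, system, user, desc = line.rstrip("\n").split("|", 4)
    return int(t), source, system, user, desc

def build_timeline(lines):
    """Merge TLN events from any number of intermediate files, oldest first."""
    events = sorted(parse_tln(l) for l in lines if l.strip())
    return [
        f"{datetime.fromtimestamp(t, tz=timezone.utc):%a %b %d %H:%M:%S %Y Z}  {src}  {system}  {user}  {desc}"
        for t, src, system, user, desc in events
    ]

# hypothetical events, as if pulled from evtx_events.txt and reg_events.txt
tl = build_timeline([
    "1586822400|EVTX|SERVER01|-|Microsoft-Windows-Security-Auditing/4624;Logon",
    "1586736000|REG|SERVER01|-|Run key LastWrite time",
])
```

The intermediate files stay intact, and any number of them can be fed into the same merge.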

That being said, there are times when I have been asked to "take a look at the Registry", and during those times, my hope is to have something around which to pivot...a service name, a specific date and time, some particular event, etc. I'll start this process by listing all of the Registry keys in the Software and System hives based on the key LastWrite times, using the following commands:

C:\tools>regtime -m HKLM/Software/ -r d:\case\software > d:\case\reg_events.txt
C:\tools>regtime -m HKLM/System/ -r d:\case\system >> d:\case\reg_events.txt

Note: RegRipper will tell you if the hive you're accessing is 'dirty', and if so, you'll want to strongly consider merging the transaction logs into the hive prior to parsing.  I like to do this as a separate process because I like to have the original hive file available so that I can look for deleted keys and values.

If there's a suspicion or evidence to suggest that a local user account was created, then adding metadata from the SAM hive is pretty simple and straightforward:

C:\rr3>rip -r d:\case\sam -p samparse_tln >> d:\case\reg_events.txt

When I say "evidence to suggest" that the threat actor added a local account to the system, one way to check for that is to hope the right auditing was enabled, and that you'd find the appropriate records in the Security Event Log. Another way to check is to parse the SAM Registry hive:

C:\rr3>rip -r d:\case\sam -p samparse

Then, correlate what you see to the ProfileList key from the Software hive:

C:\rr3>rip -r d:\case\software -p profilelist

Looking at these two data sources allows us to correlate user accounts and RIDs to user profiles on the system.  In many cases, we'll have to consider domain accounts (different SIDs), as well.
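
The correlation itself is straightforward: a local account's SID ends in its RID, so the final component of each ProfileList SID can be matched back to the samparse output.  A minimal sketch of the idea, with hypothetical data (this is not RegRipper code):

```python
def correlate(accounts, profiles):
    """accounts: {rid: username} from the SAM; profiles: {sid: profile path}."""
    hits = []
    for sid, path in profiles.items():
        rid = int(sid.rsplit("-", 1)[-1])  # last SID component is the RID
        hits.append((sid, path, accounts.get(rid, "<domain or unknown account>")))
    return hits

matches = correlate(
    {500: "Administrator", 1001: "jsmith"},
    {"S-1-5-21-1004336348-1177238915-682003330-1001": r"C:\Users\jsmith"},
)
```

A profile whose SID resolves to "<domain or unknown account>" is exactly the case where domain accounts (different SIDs) need to be considered.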

I'll also include other specific information from the Registry hives in the timeline:

C:\rr3>rip -r d:\case\system -p shimcache_tln >> d:\case\reg_events.txt
...

I'll also incorporate the AmCache metadata, as well:

C:\rr3>rip -r d:\case\amcache.hve -p amcache_tln >> d:\case\reg_events.txt

For a user, I generally want to create a separate mini-timeline, using similar commands as above:

C:\tools>regtime -m HKCU/ -r d:\case\user\ntuser.dat -u user > d:\case\user\reg_events.txt
C:\tools>regtime -m HKCU/ -r d:\case\user\usrclass.dat -u user >> d:\case\user\reg_events.txt
C:\rr3>rip -r d:\case\user\usrclass.dat -u user -p shellbags_tln >> d:\case\user\reg_events.txt
C:\rr3>rip -r d:\case\user\ntuser.dat -u user -p userassist_tln >> d:\case\user\reg_events.txt
...

Note: If you're generally looking at the same artifacts within a hive (NTUSER.DAT, etc.) over and over, it's a good idea to open Notepad and create a RegRipper profile.  That way, you have a documented, repeatable process, all in a single command line.

Note: If you're looking at multiple systems, it's not only a good idea to differentiate users on the system via the "-u" switch, but also to differentiate the systems themselves by using the "-s" switch in the RegRipper command lines.  You can get the system name via the compname.pl RegRipper plugin.

Once the events file has been created, I have a source for parsing out specific items, specific time frames, or just the entire timeline, using parse.exe.  I can create the entire timeline and, based on items I find to pivot on, go back to the events file and pull out specific items using combinations of the type and find commands.  The complete timeline is going to contain all sorts of noise, much of it based on legitimate activity such as operating system and application updates and normal user activity (logins, logoffs, day-to-day operations, etc.).  It's often really helpful to be able to look at just the items of interest, and then view them in correlation with other items of interest.

Note: If you have a standard extraction process, or if you mount images using a means that makes the files accessible, all of this can be automated with something as simple as a batch or shell script.
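
To illustrate the point, here's a hypothetical dry-run version of such a script in Python: it only assembles the command lines shown above so they can be reviewed (or passed to a shell); the case and tool paths are assumptions to adjust for your own extraction process:

```python
def build_commands(case_dir, tools=r"C:\tools", rr=r"C:\rr3"):
    """Emit the regtime/rip command lines from this post as a dry run."""
    out = rf"{case_dir}\reg_events.txt"
    return [
        rf"{tools}\regtime -m HKLM/Software/ -r {case_dir}\software > {out}",
        rf"{tools}\regtime -m HKLM/System/ -r {case_dir}\system >> {out}",
        rf"{rr}\rip -r {case_dir}\system -p shimcache_tln >> {out}",
        rf"{rr}\rip -r {case_dir}\amcache.hve -p amcache_tln >> {out}",
    ]

cmds = build_commands(r"D:\case")
```

The value isn't the language; it's that the process becomes documented and repeatable, run the same way every time.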

Once I get to this point, the actual analysis begins...because parsing and display are not "analysis".  Getting one value, or the LastWrite time of one key, is not "analysis".  For me, analysis is an iterative process, and what I described above is just the first step.  From there, I'll keep a viewer handy (usually MiTeC's WRR) and a browser open, allowing me to dig in deeper and research items of interest.  This way, I can see values for keys for which there are not yet RegRipper plugins, such as when a new malware variant creates keys and values, or when a threat actor creates or modifies keys.  When I do find something new like that (because the process facilitates finding something new), it then becomes a RegRipper plugin (or a modification to an existing plugin), decoration via the eventmap.txt file, etc.  The point is that whatever 'new thing' is developed gets immediately baked back into the overall process.

For example, did a threat actor disable Windows Defender, and if so how? Via a batch file?  No problem, we can use RegRipper to check the Registry keys and values.  Via GPO?  Same thing...use RegRipper.  Via an sc.exe command?  No problem...we can use RegRipper for that, as well.
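
Each of those methods leaves distinct toolmarks.  As a rough illustration (assuming the key/value data has already been parsed out of the hives; the flat dict input is a simplification for the example, not how RegRipper works): setting the DisableAntiSpyware policy value, or setting the WinDefend service's Start value to 4 (what "sc config WinDefend start= disabled" does), are two real, distinguishable locations:

```python
def defender_toolmarks(values):
    """Check parsed Software/System hive data (key path -> data) for
    common toolmarks of Windows Defender being disabled."""
    findings = []
    if values.get(r"Policies\Microsoft\Windows Defender\DisableAntiSpyware") == 1:
        findings.append("DisableAntiSpyware policy set (GPO, reg.exe, or direct edit)")
    if values.get(r"ControlSet001\Services\WinDefend\Start") == 4:
        findings.append("WinDefend service disabled (Start=4, e.g. via sc.exe)")
    return findings

marks = defender_toolmarks({
    r"Policies\Microsoft\Windows Defender\DisableAntiSpyware": 1,
    r"ControlSet001\Services\WinDefend\Start": 4,
})
```

Which toolmark is present tells you *how* it was done, and that "how" is what varies between actors.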

What about open source intrusion intel, such as this FireEye blog post?  The FireEye blog post is rich with intrusion intel that can be quickly and easily turned into plugins and event decoration, so that whenever those TTPs are visible in the data, the analyst is immediately notified, providing pivot points and making analysis vastly more consistent and efficient.

Sunday, April 12, 2020

Registry Analysis

When you see the words, "Registry analysis", what comes to mind? 

Okay, now...what actually happens when we 'do' this thing we call "Registry analysis"?  More often than not, what this refers to manifests itself as opening a Registry hive file in a viewer, "looking around", or maybe doing some searches or sorting based on dates.  But is that really Registry analysis, or is it simply parsing and viewing?

Often, when you get right down to it and peel back all of the layers (like an onion), "analysis" (in general) from an operational perspective manifests as:
  • Get a data source, often based on a list provided by an external resource
  • Open that data source in a viewer, or parse it and open the output in another application (Excel)
  • Locate specific items, again often based on an externally-provided list; this can include conducting a search based on some items (time) or keywords
  • Do the same with another data source
  • Lather, rinse, repeat
For example, an analyst might extract the MFT, parse it via a tool such as AnalyzeMFT or MFTECmd, search for specific files, or for files created or modified during a specific time frame, and then manually transpose that information into a spreadsheet.  If other data sources are then examined, the process is repeated, and as such, the overall approach to getting to the point of actually conducting analysis (i.e., looking at the output from more than one data source) is very manual, very time intensive, and as a result, very expensive.

To that point, 'cost' isn't just associated with time and expense.  It's also directly tied to what's included in the analyst's final spreadsheet; more specifically, the approach lends itself to important artifacts and TTPs being missed.  OSINT regarding a threat actor group, based on analysis of the malware associated with that group, most often focuses on IOCs and does not account for TTPs and behaviors (i.e., how the malware and tools are used...).  This includes not just the threat actor's behaviors on the system, but also those arising from the threat actor's interactions with the ecosystem in which they're operating.  OSINT is not intrusion intelligence, and if the analyst uses that OSINT as the totality of what they look for, rather than just the beginning, then critical data is going to be missed.

One way of overcoming this is the use of automation to consume and correlate multiple data sources simultaneously, viewing them in relation to each other.  Some have looked at automation tools such as log2timeline or plaso, but have experienced challenges with respect to how the tools are used. Perhaps a better approach is a targeted, 'sniper forensics' approach, rather than the usual "spray and pray" approach.

For many analysts, what "Registry analysis" means is that they may have a list of "forensically relevant" items (i.e., keys and values), perhaps in a spreadsheet, that they use to manually peruse hive files.  As such, they'll open a hive in a viewer and use the viewer to navigate to specific keys and values (Eric's Registry Explorer makes great use of bookmarks).  This list of "forensically relevant" items within the Registry may be based on lists provided to the analyst, rather than developed by the analyst, and as such, may not be complete.  In many cases, these lists are stagnant, in that once they are received, they are neither extended, nor are new items (if determined) shared back with the source.

Rather than maintaining a list of keys and values that are "forensically relevant", analysts should instead consider what is "forensically relevant" based on the analysis goals of the case, and employ a process that allows them to not only find the items they're looking for, but to also 'see' new things.  For example, I like to employ a process that creates a timeline of activity, using Registry key LastWrite times, as well as parsing specific values based on their associated time stamps.  This process correlates hive files, as well...doing this using the Software hive, user's NTUSER.DAT and USRCLASS.DAT, as well as the AmCache.hve file, all in combination, can be extremely revealing.  I've used this several times to 'see' new things, such as what happens on a system when a user clicks on an ISO email attachment.  Viewing all of the 'events' from multiple sources, side-by-side, in a consolidated timeline provides a much more complete picture and a much more granular view than the traditional "manually add it to a spreadsheet" approach.

Adding additional sources...MFT, Windows Event Logs, etc...can be even more revealing of the overall TTPs, than simply viewing each of these data sources in isolation.

Sunday, April 05, 2020

Going Beyond

As an industry and community, we need to go beyond...go beyond looking at single artifacts to indicate or justify "evidence", and we need to go beyond having those lists of single artifacts provided to us.  Lists, such as the SANS DFIR poster of artifacts, are a good place to start, but they are not intended to be the end-all.  And we need to go beyond our own analysis, in isolation, and discuss and share what we see with others.

Here's a good example...in this recent blog post, the author points to Prefetch artifacts as evidence of file execution.  Prefetch artifacts are a great source of information, but (a) they don't tell the full story, and (b) they aren't the only artifact that illustrates "file execution".  They're one of many.  While it's a good idea to start with one artifact, we need to build on that one artifact and create (and pursue) artifact constellations.

This post, and numerous others, tend to look at artifacts in isolation, and not as part of an overall artifact constellation.  Subsequently, attempts at analysis fall apart (or simply fall short) when that one artifact, the one we discussed in isolation, is not present.  Consider Prefetch files...yes, they are great sources of information, but they are not the only source of information, and they are not present by default on Windows servers. 

And, no, I do not think that one blog post speaks for the entire community...not at all.  Last year, I took the opportunity to explore the images provided as part of the DefCon 2018 CTF.  I examined two of the images, but it was the analysis of the file server image that I found most interesting.  Rather than attempting to answer all of the questions in the CTF (CTF questions generally are not a good representation of real world engagements), I focused on one or two questions in particular.  In the case of the file server, there was a question regarding the use of an anti-forensics tool.  If you read my write-up, you'll see that I also reviewed three other publicly available write-ups...two relied on a UserAssist entry to answer the question, and the third relied on a Registry value that provided information about the contents of a user's desktop.  However, none of them (and again, these are just the public write-ups that I could find quickly) actually determined if the anti-forensics tool had been used, if the functionality in question had been deployed.

Wait...what?  What I'm saying is that one write-up had answered the question based on what was on the user's desktop, and the two others had based their findings on UserAssist entries (i.e., that the user had double-clicked on an icon or program on their desktop).  However, none of them had determined if anything had actually been deleted.  I say this because there was also evidence that another anti-forensics tool (CCleaner) had been of interest to the user, as well.

My point is that when we look at artifacts in isolation from each other, we only see part of the picture, and often a very small part.  If we only look at indications of what was on the user's desktop, that doesn't tell us if the application was ever launched.  If we look at artifacts of program execution (UserAssist, Prefetch, etc.), those artifacts, in and of themselves, will not tell us what the user did once the application was launched; it won't tell us what functionality the user employed, if any.

Here's another way to look at it.  Let's say the user has CCleaner (a GUI tool) on their desktop.  Looking at just UserAssist or Prefetch...or, how about UserAssist and Prefetch...artifacts, what is the difference between the user launching CCleaner and deleting stuff, and launching CCleaner, waiting and then closing it?

None.  There is no difference. Which is why we need to go beyond just the initial, easy artifacts, and instead look at artifact clusters or constellations, as much as possible, to provide a clear(er) picture of behavior.  This is due to the nature of what we, as examiners, are looking at today.  None of the incidents we're looking at...targeted threats/APTs, ransomware/crimeware, violations of acceptable use policies, insider threats, etc...are based on single events or records. 
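
Put another way, a behavior should only be asserted when the full constellation is present.  As a toy sketch of that reasoning (the artifact labels here are hypothetical placeholders, not parsers):

```python
# Hypothetical labels for illustration; in practice each would map to
# parsed artifacts (UserAssist entries, Prefetch files, USN records, etc.)
CONSTELLATION_FILE_WIPING = {
    "program_execution",   # e.g., UserAssist or Prefetch artifacts
    "file_deletion",       # e.g., $UsnJrnl or Recycle Bin activity
}

def supports(observed, constellation):
    """A behavior is supported only if every element of its constellation is present."""
    return constellation <= set(observed)

# execution alone: CCleaner was launched, but did it actually delete anything?
weak = supports({"program_execution"}, CONSTELLATION_FILE_WIPING)
strong = supports({"program_execution", "file_deletion"}, CONSTELLATION_FILE_WIPING)
```

Execution artifacts alone leave the "wiping" finding unsupported; execution plus deletion activity supports it.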

Consider ransomware...for the most part, these events were looked at as, "files were encrypted". End of story. But the reality is that in many cases, going back years, ransomware incidents involved much more than just encrypting files.  Threat actors were embedded within environments for weeks or months before ever encrypting a file, and during that time they were collecting information and modifying the infrastructure to meet their needs.  I say "were", but "still are" applies equally well.  And we've seen an evolution of this "business model" over the past few months: we know that data was exfiltrated during the time the actor was embedded within the infrastructure, not due to our analysis, but because the threat actor released it publicly in order to "encourage" victims to pay.  A great deal of activity needs to occur for all of this to happen...settings need to be modified, tools need to be run, data needs to be pulled back to the threat actor's environment, etc.  And because these actions occur over time, we cannot simply look at one, or a few, artifacts in isolation in order to see the full picture (or as full a picture as possible).

Dr. Ali Hadi recently authored a pair of interesting blog posts on the topic of USB devices (here, and here).  In these posts, Dr. Hadi essentially addresses the question of, how do we go about performing our usual analysis when some of the artifacts in our constellation are absent? 

Something I found fascinating about Dr. Hadi's approach is that he's essentially provided a playbook for USB device analysis.  While he went back and forth between two different tools, both of his blog posts provide sufficient information to develop that playbook in either tool.  For example, while Dr. Hadi incorporated the use of Registry Explorer, all of the artifacts (as well as others) can also be derived via RegRipper plugins.  As such, you can create a RegRipper profile of those plugins, and then run them automatically against the data you've collected, automating the extraction of the necessary data.  Doing so means that while some things may be missing, others may not, and analysts will be able to develop a more complete picture of activity, and subsequently, more accurate findings.  And automation will reduce the time it takes to collect this information, making analysis more efficient, more accurate, and more consistent across time, analysts, etc.

Okay, so what?  Well, again...we have to stop thinking in isolation.  In this case, it's not about just looking at artifact constellations, but it's also about sharing what we see and learn with other analysts.  What one analyst learns, even the fact that a particular technique is still in use, is valuable to other analysts, as it can be used to significantly decrease their analysis time, while at the same time increasing accuracy, efficiency, and consistency. 

Let's think bigger picture...are we (DFIR analysts) the only ones involved?  In today's business environment, that's highly unlikely.  Most things of value to a DFIR analyst, when examined from a different angle, will also be valuable to a SOC analyst, or an EDR/EPP detection engineer.  Here's an example...earlier this year, I read that a variant of Ryuk had been analyzed and found to contain code for deploying Wake-on-LAN packets in order to increase the number of systems it could reach, and encrypt. As a result, I wrote a detection rule to alert when such packets were found originating from a system; the point is that something found by malware reverse engineers could be effectively employed by SOC analysts, which in turn would result in more effective response from DFIR analysts.
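
The Wake-on-LAN magic packet format is well documented: six 0xFF bytes followed by the target MAC address repeated sixteen times.  A minimal, illustrative version of such a check (not the actual detection rule I wrote) might look like:

```python
def is_wol_magic_packet(payload: bytes) -> bool:
    """Flag a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    if len(payload) < 102 or payload[:6] != b"\xff" * 6:
        return False
    mac = payload[6:12]
    return payload[6:102] == mac * 16

# synthetic magic packet for a hypothetical MAC aa:bb:cc:dd:ee:ff
pkt = b"\xff" * 6 + bytes.fromhex("aabbccddeeff") * 16
```

Such packets originating from an ordinary workstation, rather than a management server, would be the anomaly worth alerting on.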

We need to go beyond.  It's not about looking at artifacts in isolation, and it's not about conducting our own analysis in isolation.  The bad guys don't do it...after all, we track them as groups.  So why not pursue all aspects of DFIR as a 'group'; why not look at groups of artifacts (constellations), and share our analysis and findings not just with other DFIR analysts, but other members of our group (malware RE, threat intel analyst, SOC analyst, etc.), as well?

Wednesday, March 11, 2020

Ransomware

Hardly a week (sometimes a day??) passes without some mention of ransomware, and another organization or municipality (or three) feeling the impact of a ransomware attack. In fact, just recently, the City of Durham, NC, was hit with a Ryuk ransomware infection, which by some reports, impacted their 911 capability.  At the end of February, the BBC reported that two organizations impacted by a ransomware infection had been down for three weeks.

CrowdStrike recently released their 2020 Global Threat Report, which includes a great deal of information regarding ransomware, as viewed through CS's lens. The report includes more than a few pages on what CS saw over the previous year, with some thoughts as to what they expect to see going forward.

In addition, just last week, Microsoft published an interesting blog post regarding human-operated ransomware attacks (with a very telling graphic available here).  All of the events at which I spoke in 2019 focused on this very topic, that many of the ransomware attacks weren't about something that AV products would detect and prevent. The general perception of these attacks seemed to be predominantly, "oh, if I have AV or NGAV, I'm good..."; well, no.  Because these are human-operated attacks, the human operator is able to modify the infrastructure to meet their needs; for example, they can pull plaintext credentials from memory in order to escalate privileges and extend their reach to other systems.  The better part of this activity is missed by AV because, sometimes, malware isn't required to perform the "attack".  Instead, attackers simply use the native MS tools provided within the operating system distribution, something referred to as "living off the land". 

Further, as discussed in the CrowdStrike GTR, ransomware actors are increasingly modifying the infrastructures they've targeted by disabling security products, enabling WinRM (sometimes through GPOs), and just making things easier for themselves.  These changes often go unnoticed by the system owners, but do serve as precursors to the actor deploying ransomware.  This means that if these infrastructure modifications are detected, and there's a response plan in place, the overall impact of the ransomware being deployed can be obviated.

Impact
Something that is rarely discussed at length, or in an inclusive manner, is the impact of a ransomware attack, and why some organizations choose to pay the ransom.  Sure, we're all generally aware of what happens...files are encrypted, everyone's caught by surprise, and suddenly things need to happen that no one was prepared to do.  In some cases, such as with hospitals, diagnostics and patient record keeping gets reduced to by-hand processes, and the same is often true when a municipality's 911 services are taken down by ransomware.

I recently read this ZDNet article (similarly discussed in this NakedSecurity article), which discusses how 11 cases against six criminals were dismissed because the data was lost as a result of a ransomware attack.  The article also provides a list of other similar issues (police depts experiencing ransomware attacks), going back to 2017.

Tuesday, March 10, 2020

Revisiting Program Execution

As I prepare a presentation for a government agency, I've been thinking quite a bit about the idea of "program execution".  I've actually blogged on this topic before, and I thought that maybe now was a good time to revisit the topic.

What does that mean?  Well, generally speaking, it is accepted within the community that there are artifacts on systems that indicate that an application was executed.  One popular example is Prefetch files; the existence of an application prefetch file indicates that an application was run. There are other artifacts, as well, including UserAssist subkeys, etc.  Many of these artifacts tell us that an application was launched, or that a user launched an application (and when), but there's also a good bit that these artifacts don't tell us.

Here's an example...I was working a PCI case a number of years ago (and that "number" could be pretty big, like double digits...), and for those working those cases at the time, one of the things that Visa required of examiners was to populate a dashboard in the reports that included things such as "window of compromise", which is sort of what we refer to as 'dwell time'.  In this case, the actor gained access to a system and loaded their malware on it, launched it, and left...and within a relatively short period of time, AV ran a scheduled scan, found the malware, and quarantined it.  Through the analysis, we saw that it was another six weeks before the bad guy came back, found that the malware wasn't running, put a new version of the malware on the system and launched it.

Now, our finding was pretty huge, particularly for PCI cases.  A lot of organizations are aware of the number of credit cards they process on a regular basis, and there's always a bump around the holidays.  In this case, the malware was first placed on the system just prior to Thanksgiving, and wasn't refreshed until after the Christmas holiday.  Visa used the number of transactions and the "window of compromise" values to help them determine fines; as such, being able to demonstrate that the malware was not running on the system for a specific time period really had a significant impact on that finding, and subsequently, the fine.

My point is, just because we know that an application was launched, do we then definitively know that it was run, or, in the case of a GUI application, what functionality was employed?  The answer is no.  I've seen a number of cases, during timeline analysis, where something was launched, only to have the system immediately respond with Application Error and Windows Error Reporting events in the Windows Event Log.

My previous blog post on this topic linked to another article I'd written, one that addressed analysis of one of the images from the DefCon 2018 CTF challenge.  One of the questions I'd looked at specifically was a question regarding the use of an anti-forensics application.  In the three publicly available write-ups I'd reviewed, one had answered based on an item on the user's desktop, and the other two had responded based on program execution artifacts.  However, none of them had seen that attempts had been made to launch another anti-forensics application, nor had any confirmed that actual anti-forensics had taken place, that anything had been deleted or modified.

The issue is that, specifically with respect to GUI-based applications, the "program execution" artifacts will illustrate that the application was launched, but other efforts are required to determine what functionality was actually employed.  After all, there aren't that many applications that record what options the user selected, nor which buttons they pushed.  As such, "program execution" artifacts alone provide little qualitative difference between the user launching the application and letting it sit dormant on the screen, and the user actually using the application's functionality to do something.  In fact, additional analysis steps are required; in the case of the DefCon 2018 CTF image, determining if the observed "program execution" artifacts truly resulted in the use of the applications' functionality.


Thursday, February 20, 2020

RegRipper Update

Based on a Twitter thread from 19 Feb 2020, during which Phill Moore made the request, I updated RegRipper to check for "dirty" hives, and provided a warning that RegRipper does NOT automatically process Registry transaction logs.  This can be an important component of your investigation, and so per Phill's request, I updated RegRipper (both the UI and rip.pl/.exe) to provide the warning, as well as check to see if the hive is 'dirty'.
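
For reference, the check relies on the regf base block, which stores primary and secondary sequence numbers at offsets 4 and 8; if they differ, the last write wasn't fully synced and the transaction logs likely hold unflushed data.  A minimal sketch of the concept (an illustration in Python, not RegRipper's actual Perl code):

```python
import struct

def hive_is_dirty(header: bytes) -> bool:
    """Compare the regf base block's two sequence numbers; a mismatch
    means the hive is 'dirty' (transaction logs hold unflushed data)."""
    if header[:4] != b"regf":
        raise ValueError("not a regf hive")
    seq1, seq2 = struct.unpack_from("<II", header, 4)
    return seq1 != seq2

# synthetic base blocks: matched sequence numbers vs. mismatched
clean = b"regf" + struct.pack("<II", 7, 7)
dirty = b"regf" + struct.pack("<II", 8, 7)
```

If the hive is dirty, merge the logs first (see the options below), but keep the original hive around for deleted key/value recovery.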

If you decide that you need to process the transaction logs, there are a couple of options available.  One is using yarp (from Maxim Suhanov) along with registryFlush.py (per this blog post).  Written in Python, this provides a cross-platform approach, if you need that flexibility.  Another method is to use Eric Zimmerman's rla.exe (part of Registry Explorer/RECmd) tool, which is Windows-based.

So, to reiterate, RegRipper 2.8 does NOT automatically process transaction logs.  I haven't developed the code to do so, and there are a number of variables involved. For example, the current RegRipper repo contains plugins intended just for XP systems (e.g., acmru.pl), as well as plugins that can process data from hives from XP through Win10 (e.g., appcompatcache.pl, shimcache.pl).  Transaction logs from older systems (XP, etc.) follow a different format than more modern (Win8.1+) systems, and as such would require additional code to address those files.

Also, don't forget about the update made to RegRipper on 4 Jan 2020, fixing the issue that began as of 1 Jan 2020 where Registry key LastWrite times were incorrectly reported as "0".  The updated code was added to one of the core module files (i.e., Base.pm), which is available in the RegRipper Github repo. 

These updates have been "compiled" into the Windows executables for the RegRipper tools (rr.exe, rip.exe), but if you're looking to install RegRipper on Linux, be sure to read the RegRipper readme file and update the Perl module files accordingly.

Monday, February 17, 2020

RID Hijacking

I read a fascinating blog post recently that described something called RID hijacking, which can be used as a method for maintaining elevated privileges on a system.  The PenTestLabs article not only outlines how to perform RID hijacking manually, but also via MetaSploit, Empire, and a PowerShell module, including using that module via POSHC2.  In short, there is no shortage of ways to go about performing RID hijacking from an offensive perspective.

But how would this look from a blue team perspective?  I made a minor tweak to the output of one of the RegRipper plugins (the data was already available) so that the output appears as follows (run against a SAM hive from a Win2000 system):

Username        : Guest [501]
Full Name       :
User Comment    : Built-in account for guest access to the computer/domain
Account Type    :
Account Created : Fri Sep 27 03:32:48 2002 Z
Name            :
Last Login Date : Never
Pwd Reset Date  : Never
Pwd Fail Date   : Sat Aug 25 09:21:50 2012 Z
Login Count     : 0
Embedded RID    : 501
  --> Password does not expire
  --> Account Disabled
  --> Password not required
  --> Normal user account

I added the emphasis on the "Embedded RID" entry.  Again, the data itself was available in the plugin; all I did was add a line to display the RID from within the F value data (as described in the PenTestLab article).  I also added a bit of logic to compare the embedded value with the RID from the Username field (in brackets) and, if they aren't the same, print out a warning message.
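
The comparison itself is simple.  As an illustrative sketch of the concept (in Python, not the plugin's actual Perl; the RID within the F value is a 32-bit little-endian integer, commonly documented at offset 0x30, and the sample F value below is synthetic):

```python
import struct

def embedded_rid(f_value: bytes, offset=0x30) -> int:
    """Pull the 32-bit LE RID embedded in a SAM user's F value data."""
    return struct.unpack_from("<I", f_value, offset)[0]

def rid_hijacked(key_rid: int, f_value: bytes) -> bool:
    """Warn when the embedded RID doesn't match the RID from the key name."""
    return embedded_rid(f_value) != key_rid

# synthetic F value: zeros except a 'hijacked' RID of 500 at offset 0x30
f_val = b"\x00" * 0x30 + struct.pack("<I", 500) + b"\x00" * 0x1c
```

Here, an account keyed as RID 501 (Guest) carrying an embedded RID of 500 (Administrator) is exactly the mismatch worth a warning.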

I can't say that I've ever intentionally looked for this sort of thing during an investigation, as there was nothing that pointed to something like RID hijacking having occurred.  For example, there were no indications of suspicious actions involving the Guest account, and subsequently of the Guest account being used to log into the system and perform actions requiring Admin privileges. 

I have seen threat actors establish multiple means for returning to systems.  I've seen threat actors hit an RDP server, enable Sticky Keys, and then create an account, add it to the Administrators group (as well as the Remote Users group) and then add that account to the "SpecialAccounts\UserList" key path so that the icon for the account does not appear on the Welcome screen. 

Given that this capability is available, and not only via popular offensive security toolsets, I'd recommend checking for RID hijacking during investigations.  Automating the check means that it's done every time, regardless of who runs the process.

Saturday, February 15, 2020

Using Intrusion Intelligence

In his book, "Call Sign Chaos", Jim Mattis states that "your own personal experiences are not enough to sustain you."  The statement was made in the context of reading for professional development, and it applies to much more than warfighting; it applies equally well in the DFIR field, including DFIR analysis, threat hunting, and developing intrusion intelligence.  Our own experiences are not enough to sustain us, neither as individual analysts nor as teams.  None of us wants to go to an auto mechanic or a neurosurgeon who stopped their education the moment they received their diploma; as such, it's incumbent upon each of us to further our education and professional development through whatever means works for us.  Sometimes, that's as simple as reading...really reading...what someone else has written, and then incorporating what we learn into our own analysis process or methodology.

I recently read this excellent article from the Elastic team regarding their insights into the Gamaredon Group.  The article includes a good bit of detail, not just in how the Elastic team goes about running down an incident, but also in its findings, which are written in a manner that can be incorporated into threat hunting and DFIR analysis methodologies with little effort.

I've written and spoken often on the topic of threat actors modifying the target environment to meet their needs.  This has been about much more than just modifying the Windows firewall to allow an application to access the Internet; threat actors have been observed making what appear to be subtle changes to target environments, but when those changes are fully understood, they're actually very significant.  From enabling systems to store credentials in memory in plain text to disabling security tools, threat actors have been observed making significant modifications to the target environment.

In figure 12 of the Elastic article, we see where, via the macro, the actors modified the environment to permit the execution of untrusted macros, as well as to disable warnings.  However, this is nothing new; Mari discussed the VBAWarnings Registry value in February 2016, and the value was mentioned on TechNet as far back as 2012.
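A quick triage helper for the VBAWarnings data makes this sort of check repeatable; the interpretations below are based on Microsoft's documented macro notification settings (the value lives under the Office Security key path Mari describes, with the version number varying by install):

```python
# Interpretation of VBAWarnings data, per Microsoft's documented macro
# notification settings
VBA_WARNINGS = {
    1: "Enable all macros (dangerous; common threat-actor setting)",
    2: "Disable all macros with notification (the default)",
    3: "Disable all macros except digitally signed macros",
    4: "Disable all macros without notification",
}

def triage_vbawarnings(data):
    """Return (flagged, description) for a VBAWarnings DWORD value."""
    desc = VBA_WARNINGS.get(data, "unknown value")
    flagged = (data == 1)
    return flagged, desc

flagged, desc = triage_vbawarnings(1)
print(("ALERT: " if flagged else "") + desc)
```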

As stated in the Elastic article, "These small changes can end up having larger implications, and defenders can look for them as symptoms of more serious security issues."  This is absolutely the case.  The article goes on to identify additional entities that have been observed modifying target environments in a similar manner.

Okay...so what? 

Well, there are a number of ways you can make use of this information...and this isn't all of the information from the article.  There are a total of 25 figures in the article, and the information discussed above comes from just three of them, two of which are EQL queries.  Depending upon the EDR solution you're using, you can monitor for modifications to the Registry, and specifically for that key path and those values.  If you're not in a position to employ an EDR solution, or if you're limited to DFIR analysis or retroactive threat hunting, you can check whether the values exist and, if so, examine their data to see if there is or was an issue.  If you do find the value data set in accordance with the article, the key's LastWrite time may provide a suitable pivot point for analysis.
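That pivot can be automated; here's a minimal sketch, assuming timeline events are available as (timestamp, description) tuples (the sample events below are hypothetical):

```python
from datetime import datetime, timedelta

def pivot_window(events, anchor, minutes=5):
    """Return timeline events falling within +/- 'minutes' of an anchor
    time, such as the LastWrite time of the key holding VBAWarnings."""
    lo = anchor - timedelta(minutes=minutes)
    hi = anchor + timedelta(minutes=minutes)
    return [e for e in events if lo <= e[0] <= hi]

# Hypothetical timeline, pivoting on the key's LastWrite time
timeline = [
    (datetime(2020, 2, 15, 10, 0), "Reg value set: ...Security\\VBAWarnings = 1"),
    (datetime(2020, 2, 15, 10, 1), "Prefetch: WINWORD.EXE executed"),
    (datetime(2020, 2, 15, 14, 30), "Unrelated logon event"),
]
for ts, desc in pivot_window(timeline, datetime(2020, 2, 15, 10, 0)):
    print(ts, desc)
```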

Thursday, February 13, 2020

Update: Prefetch + Stealth ADS Analysis

Not long ago, I took a look at an image that Dr. Ali Hadi had put together to demonstrate an aspect of digital analysis to his students.  Dr. Hadi's blog post describes how the use of ADSs, particularly when launching programs from them, bypasses "normal" analysis methodologies that tend to focus on one, or just a few, artifacts.  I completely understand and agree with the point that Dr. Hadi made, and wanted to demonstrate the value of analysis that incorporates a corpus of artifacts, or 'artifact clusters'.  As we saw in my previous post, there were a number of areas (file system metadata, BAM key, AppCompatCache data, SRUM data) where red flags related to the specific activity could be found, all of which were separate from the artifacts on which Dr. Hadi's article focused.

I decided to take a further look at data sources on the system to see if there were other artifacts that would serve as pivot points in analysis.  For example, I found that the AmCache.hve file contained the following entry:

c:\users\ieuser\desktop\creepy\welcome.txt:putty.exe 
LastWrite: Sun May 26 08:41:35 2019
Hash: 2662d1bd840184ec61ddf920840ce28774078134

Interestingly, the hash maintained in the AmCache entry is for putty.exe rather than welcome.txt, and it was detected by 47 engines on VT.  I say "interestingly" because in some cases where hashes have been generated, they've been for the carrier file, not the ADS.
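Sweeping parsed artifact output (AmCache, ShimCache, etc.) for ADS-style paths is easy to automate; the telltale is a colon appearing after the drive-letter position.  A minimal sketch (the first path below is the one from the entry above; the second is just a benign example):

```python
import re

# A colon after the drive-letter position marks a possible ADS; the
# leading "[a-z]:\\" consumes the drive spec so its colon isn't flagged.
ADS_RE = re.compile(r"^[a-z]:\\[^:]+:[^\\]+$", re.IGNORECASE)

def ads_paths(paths):
    """Filter a list of file paths (e.g., from parsed AmCache or
    ShimCache output) down to those that reference an ADS."""
    return [p for p in paths if ADS_RE.match(p)]

print(ads_paths([
    r"c:\users\ieuser\desktop\creepy\welcome.txt:putty.exe",
    r"c:\windows\system32\notepad.exe",
]))
```

Folding a check like this into the parsing process means ADS references get surfaced every time, rather than only when an analyst happens to notice the colon.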

From the user's ActivitiesCache.db file, specifically the Activity table, I saw this entry in the ContentInfo column (the Executable column listed notepad.exe):

C:\Users\IEUser\Desktop\creepy\welcome.txt (file:Unmapped GUID: //C:/Users/IEUser/Desktop/creepy/welome.txt?VolumeId={20B25A2A-0000-0000-0000-100000000000}&ObjectId={0282E6B5-7F90-11E9-A75B-000C29C3F036}&KnownFolderId=ThisPCDesktopFolder&KnownFolderLength=23)

I added the bold text for a rather obvious misspelling that jumped out; however, there's nothing in the entry that specifically stands out as being associated with the ADSs.

I also took some other parsing steps that were not fruitful.  For example, I parsed out all of the unallocated space from the NTUSER.DAT, and also merged the transaction logs into the hive and re-ran several RegRipper plugins.  Like I said, neither was fruitful in this case.

I'm not sharing this because I disagree with Dr. Hadi's thesis...in fact, I completely agree with him.  Too often, we may find ourselves focusing on just one artifact, and as Dr. Hadi pointed out, we can get caught off-guard by some variation of that artifact with which we weren't familiar.  I've shared these articles, and the artifacts in them, in order to illustrate the value of using multiple data sources, and being able to find pivot points in your analysis.