
Friday, April 29, 2022

Root Cause Analysis

One of the challenges within DFIR, particularly as we've moved to an enterprise approach by leveraging EDR telemetry, is the root cause analysis, or "RCA". In short, the challenge is observing malicious activity and determining its root cause; the challenge stems from the fact that EDR telemetry provides only partial visibility, and that correlating observed malicious activity with causal data that is not evident or available via EDR telemetry requires additional context, and by extension, additional effort and expenditure of resources. It also requires an additional "leveling up" of skillsets.

Yes, many organizations that deploy EDR tooling also include a means for extracting additional files/data from the endpoint, and what to collect isn't usually in question. Rather, how to truly exploit the collected data is the issue, with the exploitation of that data being the "leveling up".

When malicious activity is observed, or the impact of malicious activity is discovered (via threat hunting, etc.), the challenge then becomes, how do we determine the root cause? In the past, when someone has shared suspicious activity with me and sought guidance as to next steps, I've often asked, "...did a reboot or login occur just prior to the activity?" as a means of developing context around the observed activity. Was the executable launched via an auto-start mechanism, such as a Windows service, entry in a Run key, or as a Scheduled Task? The thought process has been to seek causal events for the observed activity, which can be important to not only determine the root cause, but perhaps to also identify a shift in TTPs, once we're to a point where we can arrive at attribution.

There are a lot of different ways for threat actors to persist on Windows systems, some of which were mentioned earlier in this post. Each has its advantages and disadvantages, and each requires and provides a different level of access. For example, creating an entry in the user's Run key means that the malware won't start again until the user logs in, and it then runs within the user's context. However, if the threat actor has Admin privileges, they can create a Windows service or Scheduled Task, which will then run the malware with SYSTEM-level privileges. As a result, the persistence mechanism used provides a great deal of context beyond just the entry for the malware.
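
As a quick illustration of the kind of triage that line of questioning drives, here's a minimal sketch (Python's built-in winreg module, run on a live Windows host; not any particular product's collection method) that simply lists the entries in the current user's Run key, one of the auto-start locations mentioned above:

import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    num_values = winreg.QueryInfoKey(key)[1]      # (subkeys, values, last_write)
    for i in range(num_values):
        name, data, vtype = winreg.EnumValue(key, i)
        print(f"{name} -> {data}")

The same sort of listing against Windows services and Scheduled Tasks, correlated with the EDR telemetry, can often help tie the observed activity back to its causal auto-start entry.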

To provide further insight into this topic, Krz wrote up an excellent blog post not long ago about how a laptop going on battery (unplugged from the power cord) can impact an investigation.

Based on the MITRE "Event Triggered Execution: Screensaver" page, I created a RegRipper plugin for my local repo to parse the screensaver settings out of the NTUSER.DAT hive, and ran it against a hive extracted from my local system, the output of which follows:

Launching screensaver v.20220427
screensaver v.20220427
(NTUSER.DAT) Gets user's screensaver settings
MITRE: T1546.002 (persistence)

Control Panel\Desktop
LastWrite: 2021-11-29 11:53:45Z

Screensaver is active.
SCRNSAVE.exe value not found.

Analysis Tip: Threat actors have been observed using the screen saver as a persistent mechanism.

Ref: https://cocomelonc.github.io/tutorial/2022/04/26/malware-pers-2.html

Now, if the "SCRNSAVE.exe" value (listed above as "not found") was set to something that it shouldn't be, we could then use the Desktop key LastWrite time as a pivot point for further analysis. As the Desktop value contains a number of values, we may not be able to specifically attribute the addition or alteration of the SCRNSAVE.exe value to the modification of the key LastWrite time, but it does provide us with some information we can pivot on and leverage in our analysis. 

Another instance that makes clear why this example is so compelling...see Jorge's tweet here, which points us to Wietze's tweet, which in turn points us to a new #LOLBAS. Kind of a circuitous route (and apparently available as far back as 2001, from here), I know, but ultimately what this leads us to is the use of a Control Panel applet (desk.cpl) to proxy the execution of arbitrary commands. In the example, 'calc.exe' is renamed to 'calc.scr' and launched by calling the applet's "InstallScreenSaver" function. As a result, this persistence mechanism can be established via either reg.exe or rundll32.exe; the two command lines have different impacts on the system (re: artifacts associated with their execution), but both achieve the same result.

Further, we now have insight into the level of access achieved by the threat actor, the context under which the malware will run, and when the malware is expected to run. This also provides insights into the sophistication and intent of the threat actor, and can be applied toward attribution.

Monday, April 25, 2022

File Formats

Having an understanding of file formats is an important factor in DFIR work. In particular, analysts should understand what a proper file using a particular format should look like, so that they can see when something is amiss, or when the file itself has been manipulated in some manner.

Understanding file formats goes well beyond understanding PE file formats and malware RE. Very often, various Microsoft file formats include data, or metadata (defined as "data about data"), that can be mined/parsed, and then leveraged to tremendous effect, furthering overall analysis and intelligence development, often across multiple cases and campaigns.

LNK
Windows shortcut, or LNK files, have been covered extensively in this blog, as well as other blogs, in addition to having been well documented by MS. Suffice to say, LNK files can be leveraged by both good guys and bad guys, and if bad guys leverage them, so should the good guys...after all, the bad guys sending you an LNK file created in their environment is essentially just "free money", particularly if you're in CTI.

For example, the GOLDBACKDOOR report shows us a threat actor that sends an LNK file to their target, in a zip archive. So, the threat actor develops the LNK file in their environment, and sends that LNK file with all of its metadata to the target. Now, as a DFIR analyst, you may have a copy of a file created within the threat actor's environment, one that contains information about their system(s). Why not take advantage of that metadata to develop a more meaningful threat intel picture?
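
If you want to see just how little effort that takes, the following is a quick-and-dirty sketch (plain Python, hypothetical file name) that pulls the NetBIOS machine name and MAC address out of an LNK file's TrackerDataBlock, two of the items populated on the system where the file was built:

import struct

data = open("payload.lnk", "rb").read()               # hypothetical sample

sig = struct.pack("<I", 0xA0000003)                   # TrackerDataBlock signature
off = data.find(sig)
if off == -1:
    print("No TrackerDataBlock present")
else:
    block = data[off - 4:]                            # block starts 4 bytes before the signature
    machine_id = block[16:32].split(b"\x00")[0].decode("ascii", "replace")
    droid_file = block[48:64]                         # second GUID of the Droid field
    mac = ":".join(f"{b:02x}" for b in droid_file[-6:])
    print("MachineID:", machine_id)
    print("MAC      :", mac)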

Analysis of LNK files is similar to being an EOD tech (I would imagine)...you're looking at the construction of a "device", noting production mechanisms (based on tooling) as well as unique items that allow you to tie or "attribute" the LNK file in some manner. You can then leverage sites such as VirusTotal (via a retro-hunt) and populate your own MISP instance to build out a larger, more contextual threat intelligence picture. For example, consider the LNK file delivered as part of the Quantum ransomware campaign discussed in this TheDFIRReport article; the article provides an image of the metadata extracted from the LNK file. The machine ID, MAC address, and volume serial number (not shown) could be put into a Yara rule and submitted as a VirusTotal retro-hunt, providing insight into other instances or campaigns where LNK files with the same values were employed. You can also tighten or loosen the aperture of your retro-hunt by adding or removing values within the Yara rule.
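
Something along the lines of the following would do it; the machine ID and volume serial number below are hypothetical placeholders rather than the values from the report, and yara-python is used here only to show that the rule compiles and runs:

import yara

RULE = r'''
rule lnk_builder_metadata
{
    strings:
        $machine_id = "admin-pc"        // hypothetical NetBIOS name from the TrackerDataBlock
        $vsn = { 78 56 34 12 }          // hypothetical volume serial number 0x12345678, little-endian
    condition:
        uint32(0) == 0x0000004C and all of them
}
'''

rules = yara.compile(source=RULE)
print(rules.match("payload.lnk"))       # hypothetical sample path

Dropping the $vsn string widens the aperture; adding the MAC address narrows it.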

Yet another example of how LNK metadata can be used...and likely the best example of this sort of data exploitation available thus far...can be seen in this Mandiant blog post from 2018. The post addresses differences between APT29 campaigns from 2016 and 2018, with references to differences in LNK files shown in figures 5 and 6. Reading through the article, you can see where the Mandiant team leveraged the data they had available to develop insights about the actor's campaigns.

OLE
OLE, or "object linking and embedding" (aka, "structured storage") is a file format most often associated with older versions of MS Office. As such, when MS transitioned the world to the "new" MSOffice file format, many likely thought, "okay, I'll never see that file format again." Oh, how wrong we were! The OLE format is used within a number of other files, including:

Automatic JumpLists - all but one embedded stream consists of an LNK "file"
MSI files
Sticky Notes
Other files, some of which are application-dependent

We can look to a variety of tools to meet our needs in parsing these files:

olefile
MiTeC SSV
ripOLE

For specifically parsing/working with MSI files, I'm told that folks use tools such as InstEdit and orca.exe from MS.

Metadata that may be present in the document structure can be leveraged or exploited in a manner similar to LNK file metadata, or better yet, combined with what's found in LNK files. For example, the LNK file in the GOLDBACKDOOR report reportedly downloads a decoy .doc file, meaning that the metadata from the LNK file can be tied directly to the metadata found in the .doc file.
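
A short sketch of what that looks like in practice, using the olefile module against a hypothetical decoy document; these are the same sorts of fields you'd want to tie back to the LNK metadata:

import olefile

ole = olefile.OleFileIO("decoy.doc")        # hypothetical decoy document
meta = ole.get_metadata()

print("Author       :", meta.author)
print("Last saved by:", meta.last_saved_by)
print("Create time  :", meta.create_time)
print("Last saved   :", meta.last_saving_time)
print("Streams      :", ole.listdir())
ole.close()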

Registry
At this point, the Windows Registry file format is well-understood, and documented (here, by Maxim Suhanov). As a result, we know how to parse it, but much like other file structures, we (as a community and industry) regularly fall short in truly exploiting the file format to our advantage. 

Registry keys have embedded time stamps, which, as Lina L. astutely and articulately described in her blog post, can be manipulated. Time stamps are also visible in value data (albeit NOT in the structure of values) as strings, as binary data, or embedded within binary data streams. All of these can be used to further analysis, including possibly even to identify key LastWrite time manipulation (very much depending upon how it's done).

For example, a recent TheDFIRReport write-up on the Quantum ransomware indicates that when the user double-clicks the ISO file, artifacts of the ISO file being mounted on the system appear in the Windows Event Log. Okay, great...but does the drive letter assignment also appear in the MountedDevices key in the System hive? When the user double-clicks the LNK file embedded in the ISO file, is that action reflected in the user's UserAssist key?

Aside from the "normal" Registry hive files we're all familiar with...Software, System, SAM, Security, NTUSER.DAT, USRCLASS.DAT...other files on the system follow the same file format. This means that all of the tools we use to parse the 'usual suspects' can also be leveraged against these other files, which include BBI, DEFAULT, and ELAM in the system32\config folder, the AmCache.hve file, and the settings.dat files associated with MS Store apps (i.e., in the %user%\AppData\Local\Packages\windows.immersivecontrolpanel_cw5n1h2txyewy\Settings folder, etc.)

Tools
In 2008, Jolanta Thomassen created the tool regslack.pl to parse deleted data from hive files as part of her thesis work. Since then, additional tools have been created for parsing this information, not the least of which are the del.pl and slack.pl RegRipper plugins.

Tuesday, April 19, 2022

LNK (Ab)use

I've discussed LNK files a number of times in this blog, and to be honest, I really don't think that this is a subject that gets the attention it deserves. In my experience, and I humbly bow to collection bias here, LNK files are not as well understood as they (sh|c)ould be in the DFIR and CTI fields, which puts defenders at a disadvantage. When I suggest that LNK files aren't really well understood by DFIR and CTI teams, I'm basing that on my own experience with multiple such teams over the years, largely the result of direct interaction.

Why is that? Well, the LNK file format is well documented at the MS site, and a number of tools have been written over the years for parsing these files. I've even gone so far as to create the smallest functioning LNK file, based on the minimum functional requirements, and with all of the metadata zero'd out or altered. However, IMHO, the real issue is that the actual functional, operational use of these files is not discussed often enough, and as such, they fall out of scope, much like other aspects of Windows systems (Ex: NTFS alternate data streams (ADSs)). Consequently, LNK files are not closely examined during DFIR or proactive threat hunting engagements, and even more rarely do they appear in reporting. Finally, as effective as these files can be, they simply are not the "new shiny hotness"...they aren't 0-days, and they aren't the newest, hottest exploits. As a result, not a lot of focus and attention is given to these files, and it's likely that their use is less evident than it might otherwise be.

V3ded's Github repo includes an article that does a great job of covering how LNK files can be abused. While the title refers to "initial access", it seems to describe the use of document macros to create an LNK file, rather than "weaponized" LNK files being delivered to a target. The difference is an important one...using macros (or some other method) to create LNK files on a target system means that the LNK file metadata is going to be specific to the target system itself. Alternatively, an LNK file delivered to the target is going to contain metadata specific to the threat actor's dev environment.

V3ded's article refers to a couple of means of persistence that are very interesting. One is the use of shortcut keys, defined within the structure of the LNK file. Per the article, an LNK file can be placed on the desktop and structured to be activated by a specific and common key sequence, such as Control+C (copy), Control+V (paste), or some other commonly used sequence. Another means of persistence involves the use of the "iconfilename" element within the LNK file structure; if the icon file it points to is located on a remote, TA-controlled system, attempts to access that resource (via activation of the LNK file) will involve authentication, allowing the threat actor to collect password hashes. What's interesting about both of these techniques is that neither one requires the threat actor to authenticate in order to collect information and gain access to your systems. This means that following containment and eradication steps, when you think you've ejected the threat actor and you've changed passwords in your environment, the threat actor is still able to collect information that may allow them to re-enter your environment.
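
A rough triage sketch for both techniques follows; it walks a hypothetical user's Desktop, flags LNK files with the HotKey field (header offset 0x40) set, and does an admittedly crude scan for UNC paths that could indicate a remote icon location:

import glob
import struct

for lnk in glob.glob(r"C:\Users\user\Desktop\*.lnk"):
    data = open(lnk, "rb").read()
    if len(data) < 76 or struct.unpack("<I", data[0:4])[0] != 0x4C:
        continue                                      # not a valid LNK header
    hotkey = struct.unpack("<H", data[64:66])[0]
    if hotkey:
        print(f"{lnk}: HotKey set (0x{hotkey:04x})")
    # crude check: UNC prefix anywhere in the UTF-16 string data
    if "\\\\".encode("utf-16-le") in data:
        print(f"{lnk}: possible remote (UNC) path, check IconLocation")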

Again, when LNK files are sent to a target from a threat actor, those files will contain metadata specific to the threat actor's dev environment. For example, consider Malwarebytes' analysis of an "LNK attack" tied to a threat actor group referred to as "Higaisa". This attack apparently involved an LNK file being sent to the target in a zip archive; the LNK file would have to have been developed by the threat actor, and the file metadata would provide insight into the threat actor's development environment.

A similar example can be seen in Mandiant's analysis of CozyBear phishing campaigns. In this example, the Mandiant team did an excellent job leveraging LNK files, and looked specifically at differences between successive campaigns to derive insights into the threat actor's evolution.

Over time, we've also seen LNK files sent or linked to a target while embedded in ISO or IMG file formats, allowing them to bypass MOTW (re: ADS) restrictions. In cases such as this, the metadata within the LNK file structure can provide insights as to the build or production method, proliferation of build methods, etc., similar to the way EOD professionals look to toolmarks and artifacts to identify bombmakers. As such, these resources should not be ignored by DFIR and CTI professionals, but should instead be leveraged for a range of insight. For example, Yara rules can be used to perform retro-hunts via VirusTotal to look for the use and proliferation of specific platforms, as well as view changes in LNK file production techniques between campaigns, or simply over time. Tools can be designed to comb through specific locations within Windows systems (or images), scanning for persistence techniques and other anomalies, automatically providing them for analysis and development into retro-hunts.
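
For the MOTW point above, a minimal sketch (Windows/NTFS only, hypothetical path) of what the Zone.Identifier ADS normally looks like on a downloaded file; files extracted from a mounted ISO or IMG typically won't carry it, which is the point of the delivery technique:

target = r"C:\Users\user\Downloads\invoice.lnk"        # hypothetical downloaded file

try:
    with open(target + ":Zone.Identifier", "r") as ads:
        print(ads.read())                              # e.g., [ZoneTransfer] / ZoneId=3
except FileNotFoundError:
    print("No Zone.Identifier ADS - no MOTW on this file")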

Wednesday, April 13, 2022

Digging Into Open Reporting

As many readers of this blog are aware, I often find great value in open reporting, but I also see the value in taking that open reporting a step (or three) further than where it stands now. In more than a few instances, something extra can be pulled from open reporting, something not presented or discussed in the article that can be of significant value to readers in domains such as DFIR, detection engineering, MSS/SOC monitoring, etc. As a result, I've spent a great deal of time during my career looking for alternate means of detecting activity (user, threat actor, malware) presented in open reporting, largely due to gaps in that reporting.

For example, there's a great deal of open reporting that is based solely on RE and analysis of malware that is part of the final stage of the attack (ransomware, etc.), without addressing anything that occurred prior to the final malware deployment, beginning with initial access. Now, I understand that not everyone has access to DFIR data to support this level of open reporting, but some do, and tremendous value can be derived if that information is shared. I understand that in some instances, authors feel that they can't share the information, but very often it may simply be that they don't see the value in doing so.

MS DART and MSTIC recently published some excellent open reporting on Tarrask that I found quite fascinating, based initially on the title alone (which, oddly enough, is what also catches my eye with respect to beers and books). I mean, the title refers to using Scheduled Tasks for defense evasion...when I read that, I immediately wanted to know more, and I have to say, the DART and MSTIC teams did not disappoint. 

The article itself is full of "stuff" that can be unpacked. For example, there were some interesting statements in the article, such as:

Further investigation reveals forensic artifacts of the usage of Impacket tooling for lateral movement and execution...

So, what does that look like? I searched the web and found that someone shared some forensic artifacts associated with the use of Impacket tooling about two years ago. While this is a great resource, unfortunately, many of the Windows Event Log records described are not available via the default logging configuration and require additional steps to set up. For example, while generating Security-Auditing records with event ID 4688 is just a matter of enabling Process Tracking, including the full command line for the process in the event record requires an additional Registry modification. Throughout my time in DFIR, there have been very few instances where the audit configuration of compromised systems went beyond the defaults, and more than a few instances where threat actors took steps to disable the logging that was available.
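
That additional Registry modification is easy enough to check for; a quick sketch (built-in winreg module, live system) that reports whether full command lines are actually being recorded in 4688 events:

import winreg

AUDIT_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AUDIT_KEY) as key:
        val, _ = winreg.QueryValueEx(key, "ProcessCreationIncludeCmdLine_Enabled")
        print("Command line in 4688 events:", "enabled" if val == 1 else "disabled")
except FileNotFoundError:
    print("Key/value not present - full command lines are not being recorded")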

Note that the article has a similar comment to that effect:

Neither of these are audited by default and must be explicitly turned on by an administrator. 

Ah, there you go! There are these great Windows Event Log artifacts that are available if the default audit configuration is modified prior to an attack taking place! 

Note: Even after an attack starts, there are ways to manipulate Windows Event Logs as a means of defense evasion. This is NOT part of the MS DART/MSTIC blog post, and is mentioned here to bring attention to issues with default audit configuration and monitoring systems for modifications to the audit configuration.

From the guidance section of the article:

Enumerate your Windows environment registry hives looking in the HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree registry hive and identify any scheduled tasks without SD (security descriptor) Value within the Task Key...

Issues with nomenclature aside, this is pretty solid advice. Not having access to proactive threat hunting, I opted to create a simple RegRipper plugin for use in DFIR analysis and hunting. Actively threat hunting across endpoints in your environment for the same condition is a great way to go about detecting such things much earlier in the attack cycle. 
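
The plugin itself aside, a sketch of the same check in Python (python-registry, run against a SOFTWARE hive exported to a hypothetical path) looks something like this; per the guidance above, it simply reports any key under TaskCache\Tree that has no "SD" value:

from Registry import Registry

hive = Registry.Registry(r"F:\case\SOFTWARE")          # hypothetical exported hive
tree = hive.open("Microsoft\\Windows NT\\CurrentVersion\\Schedule\\TaskCache\\Tree")

def walk(key):
    names = [v.name() for v in key.values()]
    if "SD" not in names:
        print("No SD value:", key.path(), "| LastWrite:", key.timestamp())
    for sub in key.subkeys():
        walk(sub)

for sub in tree.subkeys():
    walk(sub)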

Something else to unpack from all of this is that, per the article, SYSTEM level privileges are required in order to remove the "SD" value from the Registry on a live, running system. As such, the threat actor first needs to access the system, and then escalate privileges...both of which provide opportunities for detection. If such activity is detected and responded to in a timely manner, the rest of the attack chain can be obviated.

Closing Thoughts
Not everyone has the ability to ingest STIX/TAXII, or Sigma rules, or other means of automated tooling, and as such, greater value can be derived from open reporting if observables are shared. This way, analysts from across the cyber spectrum...DFIR, detections, threat hunting, etc...can look at the data, see what things "look like" on systems and within their infrastructure, and then determine how best to apply them within the realm of their capabilities. As a DFIR analyst, I look for what I can use to create artifact constellations to increase the fidelity of detections, either for live monitoring/response within an enterprise, or during a more traditional "dead box" or "triage" DFIR process.

Additional Item of Note
Apparently, on more recent server versions of Windows, when process tracking is enabled, hashes are recorded in the AppLocker Event Log.

Friday, April 08, 2022

Timestomping Registry Keys

If you've worked in DFIR or threat intel for any amount of time, you've likely either seen or heard how threat actors modify systems to meet their own needs, configuring systems to provide data or hide their activities as they make their way through an infrastructure. From disabling services, to modifying the system to maintain credentials in memory in plain text, to clearing Windows Event Logs, sometimes it seems that the threat actor knows more about the platform than the administrators. These system modifications are used to either provide easier access to the threat actor, hide the impacts of their activities by "blinding" the administrators, or simply remove clear evidence of the activity. Sometimes these system modifications go beyond the administrators, and are meant instead to hamper/impede the efforts of forensic analysts investigating an incident. Today, thanks to the efforts of a bright, shining star in DFIR, we'll take a look at one such possible technique, one that hasn't been addressed in some time.

Continuing with her excellent content creation and sharing, Lina recently illustrated another means threat actors can use to disguise their activity on systems and hide their presence from DFIR analysts and threat hunters; in this case, Lina shared awareness of a technique to modify the LastWrite times on Registry keys. As we've seen in the past, Lina's blog posts are comprehensive, well-thought-out, and extremely cogent.

In her blog post, Lina used Joakim Schicht's SetRegTime (wiki) to modify Registry key LastWrite times, providing several attack demonstrations. She then followed it up with detection methodology insights, providing additional thoughts as to how threat hunters and DFIR analysts might go about determining if this technique was used. I was particularly interested in her "part 2", which relies on key LastWrite time discrepancies in hopes of spotting the use of such a technique.

In addition to her blog post, Lina also tweets out her content, and in this case, Maxim stepped up to add some additional color to Lina's content, offering more detection techniques. Maxim specifically mentioned the use of Registry transaction logs, something he's addressed in detail before in his own blog.

Other resources regarding Registry transaction logs and tools include:
- Andrea Fortuna (uses regipy)
- WindowsIR (refs Maxim's yarp + registryFlush.py)
- Mandiant

More Detection Possibilities
Adding to both Lina's and Maxim's incredible contributions regarding detections, other possible detection methodologies might include:

** The use of the RegIdleBackup Scheduled Task; you might remember that this task used to back up the system Registry hive files into the C:\Windows\system32\config\RegBack folder. I say "used to" because a while back, we stopped seeing those Registry hive backups in the RegBack folder. Per Microsoft, this change is by design, but there is a way to get those hive backups back, by setting the EnablePeriodicBackup value to "1" and rebooting the system.

** Or, you can create your own Scheduled Task that uses reg.exe to create periodic hive backups, with time stamps appended to the file names in order to differentiate individual backups; a rough sketch of such a backup action follows this list.
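
As a sketch of what that second option might look like, the following wraps reg.exe's "save" verb in a bit of Python that could be dropped in as the task's action; the backup folder is hypothetical, and the task itself would need to run with admin rights:

import subprocess
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")

for hive in ("SYSTEM", "SOFTWARE", "SAM", "SECURITY"):
    dest = rf"D:\HiveBackups\{hive}_{stamp}.hiv"       # hypothetical backup location
    subprocess.run(["reg", "save", rf"HKLM\{hive}", dest, "/y"], check=True)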

Both of these methodologies would serve purposes beyond simply detecting or verifying the use of Registry key time stomping techniques. However, both of these methodologies require that system owners modify systems ahead of an incident, and I'll admit that the reality is that this simply doesn't happen nearly enough. So, what can we do with what we have?

Lina had a great detection methodology in "part 2": looking at an arbitrary key with subkeys, the line of reasoning is that the key's LastWrite time should be equal to or more recent than those of its subkeys; remember that adding, deleting or modifying a value beneath the key via the usual API will cause the key LastWrite time to be updated, as well. This is an interesting approach, and would apply only under specific circumstances, but it is definitely worth testing.
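
For anyone who wants to test it, a sketch of that comparison (python-registry, hypothetical hive path) follows; it flags any key whose LastWrite time is older than that of its most recently updated subkey. As the addendum at the end of this post notes, expect a good number of benign hits:

from Registry import Registry

hive = Registry.Registry(r"F:\case\NTUSER.DAT")        # hypothetical exported hive

def check(key):
    subs = key.subkeys()
    if subs:
        newest = max(s.timestamp() for s in subs)
        if key.timestamp() < newest:
            print(f"{key.path()}: key {key.timestamp()} < newest subkey {newest}")
    for s in subs:
        check(s)

check(hive.root())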

Some additional thoughts...

If an adversary copies the SetRegTime executable over to a system and uses it in a similar manner to the way Lina described, there may be indications of its use available via EDR telemetry, as well as the ShimCache and AmCache.hve files. If there's an entry in the AmCache.hve file, then the entry may also include a hash (even given the limitations of that process). This approach (by the adversary) isn't entirely unheard of, whether copying the executable file over to the system, downloading it once they're on the system, or including the executable in the resource section of another executable. Of course, if the adversary incorporates the code from the SetRegTime executable into their own, then that's a whole other ball of wax, but the key thing to remember is that nothing bad happens on a system without something happening, and that something will leave traces or an indicator.

There are a number of examples of the use of SetRegTime on the wiki; if some of the example time stamps are used, there's potential for a possible detection mechanism (yes, the wording there was intentional). Registry key LastWrite times are 64-bit FILETIME objects, or "QWORDs", which are made up of two "DWORDs". Examples used in the wiki, such as "1743:04:01:00:00:00:000:0000", will result in one of the DWORDs consisting of all zeros. As it is unlikely that such a loss of granularity would be unintentional, reading the two DWORDs and checking to see if one is all zeros might be a good automated DFIR analysis technique, either across the entire hive or by focusing on a subset of keys.
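
Assuming your tooling exposes the raw 8-byte LastWrite value (most parsers hand you a converted date instead), the check itself is trivial; the value below is a hypothetical stand-in with a zeroed low DWORD:

import struct

raw = bytes.fromhex("0000000000a0d601")                # hypothetical raw FILETIME, little-endian
low, high = struct.unpack("<II", raw)
if low == 0:
    print("Low DWORD is all zeros - a loss of granularity worth a closer look")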

Conclusion
So, again...fantastic content by both Lina and Maxim! Be sure to visit both of their blogs, as well as give them a follow on Twitter for more amazing content!

Lina - blog - Twitter
Maxim - blog - Twitter

Addendum: After publishing this post, I wrote a RegRipper plugin (based on the sizes.pl plugin) to run through a hive file and find all the keys that had LastWrite times that were smaller/older than their most recently-updated subkey...and it turns out, there are a LOT! It seems that doing something...anything...that updates a subkey and causes the LastWrite time to be updated does not then roll that change up to the subkey's parent. This would be necessary for this detection methodology to work.