Monday, October 31, 2022

Testing Registry Modification Scenarios

After reading some of the various open reports regarding how malware or threat actors were "using" the Registry, manipulating it to meet their needs, I wanted to take a look and see what the effects or impacts of these actions might "look like" from a dead-box, DFIR perspective, looking solely at the Registry.  I wanted to start with an approach similar to what I've experienced during my time in IR, particularly the early days, before EDR, before things like Sysmon or enabling Process Tracking in the Security Event Log. I thought that would be appropriate, given what appears to be the sheer number of organizations with limited visibility into their infrastructures. For those orgs that have deployed Sysmon, the current version (v14.1) has three event IDs (12, 13, and 14) that pertain to the Registry.

The first scenario I looked at was from this Avast write-up on Raspberry Robin's Roshtyak component; in the section titled "Indirect registry writes", the article describes the persistence mechanism of renaming the RunOnce key, adding a value, then re-renaming the key back to "RunOnce", apparently in an effort to avoid rules/filters that look specifically for values being added to the RunOnce key. As most analysts are likely aware, the purpose of the RunOnce key is exactly that...to launch executables once. When the RunOnce key is enumerated, the value is read, deleted, and the executable it pointed to is launched. In the past, I've read about malware executables that are launched from the RunOnce key, and the malware itself, once executed, will re-write a value to that key, essentially allowing the RunOnce key and the malware together to act as if the malware were launched from the Run key.

I wanted to perform this testing from a purely dead-box perspective, without using EDR tools or relying on the Windows Event Logs. Depending upon your configuration, you could perhaps look to the Sysmon Event Log, or, if the system had been rebooted, you could look to the Microsoft-Windows-Shell-Core%4Operational.evtx Event Log and use Events Ripper to percolate unusual executables.

For reference, information on the Registry file format specification can be found here.

Methodology
The first thing I did was use "reg save" to create a backup of the Software hive. I then renamed the RunOnce key, added a value (i.e., "Calc"), and renamed the key back to "RunOnce", all via RegEdit. I then closed RegEdit and used "reg save" to create a second copy of the Software hive. I then opened RegEdit, deleted the value, and saved a third copy of the Software hive. The sequence is sketched below.
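
For anyone looking to replicate the testing, the collection steps look something like the following (the file paths are simply illustrative); note that reg.exe has no "rename" operation, so the key renaming and value manipulation were done in RegEdit:

reg save HKLM\Software g:\software.sav /y
(in RegEdit: rename RunOnce, add the "Calc" value, rename the key back to "RunOnce")
reg save HKLM\Software g:\software2.sav /y
(in RegEdit: delete the "Calc" value)
reg save HKLM\Software g:\software3.sav /y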

During this process, I did not reboot the system; rather, I 'simulated' a reboot of the system by simply deleting the added value from the RunOnce key. Had the system been rebooted, there would likely be an interesting event record (or two) in the Microsoft-Windows-Shell-Core%4Operational.evtx Event Log.

Finally, I created a specific RegRipper plugin to extract explicit information about the key from the hive file.

First Copy - Software
So, again, the first thing I wanted to do was create a baseline; in this case, based on the structure for the key node itself. 

Fig 1: Software hive, first copy output

Using the API available from the Perl Parse::Win32Registry module, I wrote a RegRipper plugin to assist me in this testing. I wanted to get the offset of the key node; that is, the location within the hive file for the node itself. I also wanted to get both the parsed and raw information for the key node. This way, I could see not only the parsed data from within the structure of the key node itself, but also the raw, binary structure. The core of that logic is sketched below.
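
This is a sketch of the core logic, not the runonce_test.pl plugin itself; the method names are from the Parse::Win32Registry documentation as I recall them, so verify them against your installed version of the module:

use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift @ARGV or die "Usage: runonce_check.pl <Software hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
my $root = $reg->get_root_key();

my $path = "Microsoft\\Windows\\CurrentVersion\\RunOnce";
if (my $key = $root->get_subkey($path)) {
    # get_offset() is one of the support methods common to keys and values,
    # and returns the location of the node within the hive file
    printf "Key       : %s\n",   $key->get_path();
    printf "Offset    : 0x%x\n", $key->get_offset();
    printf "LastWrite : %s\n",   $key->get_timestamp_as_string();
    my @values = $key->get_list_of_values();
    printf "Values    : %d\n",   scalar(@values);
    printf "  %s\n", $_->get_name() foreach (@values);
}
else {
    print $path." key not found.\n";
}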

Second Copy - Software2
After renaming the RunOnce key, adding a value, and re-renaming the key back to "RunOnce", I saved a second copy of the Software hive, and ran the runonce_test.pl plugin to retrieve the information illustrated in figure 2.

Fig 2: Plugin output, second copy, Software hive

We can see between figures 1 and 2 that there are no changes to the offset, the location of the key within the hive file itself. In fact, the only changes we do see are the LastWrite time (which is to be expected), and the number of values, which is now set to 1.

Third Copy - Software3
The third copy of the Software hive is where I had deleted the value that had been added. Again, this was intended to simulate rebooting the system, and did not account for the malware adding a reference to itself back to the RunOnce key once it was launched.

Figure 3 illustrates the output of the plugin run against the third copy of the Software hive.

Fig 3: Plugin output, third copy, Software hive

Again, the offset/location of the key node itself hasn't changed. Deleting the value changes the number of values to "0" and adjusts the key LastWrite time, both of which are to be expected.

I then ran the del.pl plugin (to get deleted keys and values from unallocated space within the hive file) against the third copy of the Software hive, opened the output in Notepad++, searched for "calc", and found the output shown in figure 4 below. I could have used regslack, from Jolanta Thomassen (go here to see Jolanta's thesis from 2008), but simply chose the RegRipper plugin because I was already using RegRipper.

Fig 4: Del.pl output from third copy, Software hive

Unfortunately, value nodes contain neither time stamps nor a reference back to the original key node (parent key offset) of which they were a member. Key node structures are described in sections 4.1.1 and 4.1.2 of the Registry file format specification; value node structures are described in sections 4.4.1 and 4.4.2.

Conclusion
As we can see from this testing, there's not much visible in the Registry hive file alone that would lead us to believe that anything unusual had happened. While we might have an opportunity to see something of this activity via the transaction logs, that would depend a great deal upon how long after the activity the incident was discovered, the amount of usage on the system, etc. It appears that the way this specific activity would be discerned would be through a combination of malware RE, EDR, Windows Event Log records, etc.

Next, I'll take a look at at least one of the scenarios presented in this Microsoft blog post.

Addendum, 1 Nov: Maxim Suhanov reached out to me about running "yarp-print --deleted" to get a different view of deleted data within the hive, and I found some anomalous results that I simply cannot explain. As a result, I'm going to completely re-run the tests, fully documenting each step, and providing the results again.

Tuesday, October 18, 2022

Data Collection

During IR engagements, like many other analysts, I've seen different means of data exfiltration. During one engagement, the customer stated that they'd "...shut off all of our FTP servers...", but apparently "all" meant something different to them, because the threat actor found an FTP server that hadn't been shut off and used it to first transfer files out of the infrastructure to that server, and then from the server to another location. This approach may have been taken due to the threat actor discovering some modicum of monitoring going on within the infrastructure, and possibly being aware that FTP traffic going to a known IP address would not be flagged as suspicious or malicious.

During another incident, we saw the threat actor archive collected files and move them to an Internet-accessible web server, download the archives from the web server and then delete the archives. In that case, we collected a full image of the system, recovered about a dozen archives from unallocated space, and were able to open them; we'd captured the command line used to archive the files, including the password. As a result, we were able to share with the customer exactly what was taken, and this allowed us to understand a bit more about the threat actor's efforts and movement within the infrastructure.

When I was first writing books, the publisher wanted me to upload manuscripts to their FTP site, and rather than using command line FTP, or a particular GUI client utility, they provided instructions for me to connect to their FTP site via Windows Explorer. What I learned from that was that the evidence of the connection to the FTP site appeared in my shellbags. Very cool.

Okay, so those are some ways to get data off of a system; what about data collection? What are some different ways that data can be collected?

Clipboard
Earlier this year, Lina blogged about performing clipboard forensics, which is not something I'd really thought about (not since 2008, at least), as it was not something I'd ever really encountered. MITRE does list the clipboard as a data collection technique, and some research revealed that malware targeting crypto wallets will either get the contents of the clipboard, or replace them with the attacker's own crypto wallet address.

Perl has a module for interacting with the Windows clipboard, as do other programming languages, such as Python. This makes it easy to extract data from the clipboard, or 'paste' data into it. You can view the contents of the clipboard by hitting "Windows Logo + V" on your keyboard.
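
For example, here's a minimal sketch using the Win32::Clipboard module; this assumes text content (the module also handles bitmaps and file lists), so check the module documentation for the full API:

use strict;
use warnings;
use Win32::Clipboard;

my $clip = Win32::Clipboard();

# "collection": grab whatever text is currently on the clipboard
if ($clip->IsText()) {
    print "Clipboard contents: ".$clip->Get()."\n";
}

# "replacement": overwrite the contents, as crypto wallet stealers do
$clip->Set("attacker-supplied wallet address");

# malware can also block on WaitForChange() and harvest each new copy event
# $clip->WaitForChange();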

Fig 1: Clipboard Settings
But, wait...there's more! More recent versions of Windows allow you to not only enable a clipboard history, maintaining multiple items in your clipboard, but also sync your clipboard across devices! So, if you're signed in with a Microsoft account, you can sync the clipboard contents across multiple devices, which is an interesting means of data exfiltration!

Both of these settings manifest as Registry values, so they can be queried or even set (by threat actors, if the user hasn't already done so). For example, a threat actor can enable the clipboard history for a user simply by setting a value in that user's hive, and then harvest the history at their leisure.
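
As an illustration (the value name here is based on the ThinkDFIR post referenced in the next paragraph, so verify it against your version of Windows):

reg query HKCU\Software\Microsoft\Clipboard /v EnableClipboardHistory
reg add HKCU\Software\Microsoft\Clipboard /v EnableClipboardHistory /t REG_DWORD /d 1 /f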

Digging into Lina's blog post led me to this ThinkDFIR post on "Clippy history", and just like it says there, once the clipboard history is enabled, the %AppData%\Microsoft\Windows\Clipboard folder is created. Much like what the ThinkDFIR post describes, if the user pins an item in their clipboard, additional data is created in the Clipboard folder, including JSON files that contain time stamps, all of which can be used by forensic analysts. The contents of the files that contain the saved data are encrypted; however, there does seem to be (from the ThinkDFIR post comments) a tool available for decrypting and viewing the contents, but I haven't tried it.

Suffice it to say that while the system is active, it's possible to have malware running via an autostart location or as a scheduled task that retrieves the contents of the clipboard as a data collection technique. Lina pointed out another means of performing clipboard "forensics"; beyond memory analysis, parsing the user's ActivitiesCache.db file may offer some possibilities.

Additional Resources
Cellebrite - Syncing Across Devices - Logging into multiple systems using the same Microsoft ID
ForensicFocus - An Investigator's Goldmine

Fig 2: Printer Properties Dialog
Printers And KeepPrintedJobs
Another means for collecting data to gain insight into an organization is by setting printers to retain copies of print jobs, rather than deleting them once the job is complete. This is a particularly insidious means of data collection, because it's not something admins, analysts, or responders usually check for; even for some of us who've been in the industry for some time, the general understanding is that print jobs are deleted by default, once they've completed.

We say, "can be used", but has it been? According to MITRE, it has, by a group referred to as "TajMahal". This group has been observed using the Modify Registry technique as a means of data collection, specifically setting the KeepPrintedJobs attribute via the Registry. The printer properties dialog is visible in figure 2, with the KeepPrintedJobs attribute setting highlighted. 

While there isn't a great deal of detail around the TajMahal group's activities, Kaspersky mentioned their capability for stealing data in this manner in April 2019. The story was also picked up by Wired.com, SecureList, and Schneier on Security.

Fig 3: Print Job Files
The spool (.spl) and shadow (.shd) files are retained in the C:\Windows\System32\spool\PRINTERS folder, as illustrated in figure 3. The .spl file is an archive, and can be opened in archive tools such as 7-Zip. Within that archive, I found the text of the page I'd printed out (in my test of the functionality) in the file "Documents\1\Pages\1.fpage\[0].piece" within the archive.

I did some quick Googling for an SPL file viewer, more out of interest than any desire to actually use one. I found a few references, including an Enscript from OpenText, but nothing I really felt comfortable downloading.

Conclusion
There are more things in heaven and earth, dear reader, than are dreamt of in your philosophy...or your forensics class. As far as data collection goes, there are password stealers like Predator the Thief that try to collect credentials from a wide range of applications, and then there's just straight up grabbing files, including PST files, contents of a user's Downloads folder, etc. But then, there are other ways to collect sensitive data from users, such as from the clipboard, or from files they printed and then deleted...and thought were just...gone. 

Saturday, October 15, 2022

Events Ripper

Not long ago, I made a brief mention of Events Ripper, a proof-of-concept tool I wrote to quickly provide situational awareness and pivot points for analysts who were already on the road to developing a timeline. The idea behind the tool is that artifacts are compound objects; they have value based not just on their time stamps, but also on the analysis questions or goals, the nature of their path, or some other factor.

The tool leverages the fact that analysts are already creating timelines, and uses the intermediate events file format to develop situational awareness and pivot points to facilitate analysis. Many times, we're looking through a timeline for some root cause or precipitating event, while dealing with the fact that some normal system behavior (such as an update) has caused a large number of events to be generated.

At the moment, the available plugins target Windows Event Log data, in many cases producing output similar to what analysts are used to seeing in ShimCache or AmCache parser output. So, of course, the output of the various plugins is going to depend upon the Windows Event Logs you've included in the timeline, as well as how long it's been since the activity in question occurred (i.e., logs roll over), and what specifically is being audited (although that pertains more to the Security Event Log). Further, they're going to also depend upon what's being logged, something you can check via the auditpol.exe native utility (or the auditpol.pl RegRipper plugin). For example, I once saw a Security Event Log with over 35,000 records, and they were all successful logins. Yep, that's it...just successful logins, and because of the nature of the system, most of them were type 3 logins...which is why I wrote a plugin to just get a count of logins by type, so that it's easy to see this information about your data quickly. The core of that approach is sketched below.
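
Events Ripper plugins follow RegRipper's plugin structure, but the core logic is simple enough to show standalone. This sketch assumes the five-field TLN events file format (time|source|system|user|description), and that the description field carries the logon type in a "type N" token, so adjust the regex to match your wevtx.bat output:

use strict;
use warnings;

my %types = ();
open(my $fh, '<', shift @ARGV) or die "Cannot open events file: $!\n";
while (my $line = <$fh>) {
    chomp $line;
    my ($time, $src, $sys, $user, $desc) = split(/\|/, $line, 5);
    next unless (defined $desc && $src =~ m/Security-Auditing/);
    # successful login events; the event ID leads the description field
    next unless ($desc =~ m/^4624/);
    $types{$1}++ if ($desc =~ m/type (\d+)/i);
}
close($fh);
printf "Type %-4s logins: %d\n", $_, $types{$_} foreach (sort {$a <=> $b} keys %types);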

That's one of the keys to this tool...to be able to quickly and easily distill and discern some important insight about the data that you have from a system. As such, the real value of this tool comes from analysts using it, exploring it, and asking questions, talking about how to view and manage the data they have. 

Tools like this are especially useful in diverse environments where you are likely to encounter data sources with disparate content, such as consulting environments. During my time in consulting, I never...never...saw two identical environments. Every organization is different. In fact, it wasn't unusual to find different application loads and audit configurations between departments, or sometimes even within the same department. So when you find something new, you create a plugin to parse it out and provide context, because you never know when you, or another analyst on your team, is going to see it, or something like it, again.

Another key value indicator of this tool is corporate knowledge retention. For example, our team worked an incident where we saw a Windows Defender event with event ID 2051; I'd never seen such an event record before (and haven't since), and no one else had any information about such an event record. So, after researching it, I wrote a plugin, so that what we learned about such an event is now available to every analyst who uses the tool, regardless of whether or not they're on our team. The same is true with respect to the hitman.pl plugin; we saw via the Application Event Log that the customer had had the Sophos HitmanPro product installed at one point (the Windows Event Logs also showed that it had been removed), and that the product had alerted on the file we were interested in, demonstrating that the file existed on the system for some time prior to the incident time frame.

Something else that a few analysts are familiar with is that Application Event Log records can often contain references to malicious software, in DCOM errors, Application Hang event records, and Windows Error Reporting event records. As such, I wrote plugins for each of these event records that list the impacted applications in a format similar to what's seen in ShimCache or AmCache parser output.

How To Use It
Here's example output from the ntfs.pl plugin:

D:\erip>erip -f g:\ntfs_events.txt -p ntfs
Launching ntfs v.20221010

Get NTFS volumes

System name: enzo

Mounted Volumes:
C:\ -  WDC WD5000BEKT-75KA9T0
D:\ -  WDC WD5000BEKT-75KA9T0
F:\ - Msft     Virtual Disk
F:\ - WD       My Passport 0741
F:\ - WD       My Passport 25E2
G:\ - SanDisk  Cruzer

Analysis Tip: Microsoft-Windows-Ntfs/145 events provide a list of mounted volumes.

From the above output, we can see the C:\ and D:\ volumes (the system named "enzo" has one hard drive split into two volumes), as well as other drive letters listed along with their associated friendly names. We can likely find similar information in the Registry, and I'd definitely want to include that info, as well, but this is immediate, valuable insight from a limited data source, as I can quickly see drive letter mappings. I do need to keep in mind that this information may not be complete, but it's a good start.

Here's example output from the mount.pl plugin:

D:\erip>erip -f g:\vhd_events.txt -p mount
Launching mount v.20221010
Get VHD[X]/ISO files mounted

System name: Stewie

Files mounted (VHD[X], ISO):
D:\test\iso\b91c8f5adb81a262b2b95e2bf85fd7170e100885600af1f9bde322f10ac0e800.iso
D:\test\billid0574.iso
D:\test\iso\test.iso

Analysis Tip: Microsoft-Windows-VHDMP/1 events provide a list of files mounted or "surfaced".

Let's say that you look at the above output and think, "I want to see a timeline of all instances where 'test.iso' was involved"; well, that's easy enough to do, in a few simple steps:

type g:\vhd_events.txt | find "test.iso" > g:\iso_events.txt
parse -f g:\iso_events.txt > g:\tln.txt

Now, you have a timeline of all of the events that include "test.iso".

Interestingly enough, the above output is from one of my own systems, and once I saw it, I checked the values within the RecentDocs/.iso Registry key and found all three of those ISO files listed.
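
A quick way to make that check is to run RegRipper's recentdocs plugin against the user's NTUSER.DAT hive and look at the .iso subkey entries:

rip -r NTUSER.DAT -p recentdocs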

Using the two above plugins, I'm able to get a quick look at drive mappings for devices, as well as mounted ISO files, with minimal effort.

So What?
So, why does any of this matter? Red Canary recently shared some open reporting on Raspberry Robin, where they stated that this malware was spread via infected thumb drives. However, they also stated that there were "several intelligence gaps around this cluster", mentioning one of these gaps. Note that Cisco Talos also reports that Raspberry Robin spreads via "external drives"; however, Cybereason indicates that it could be "removable devices or ISO files". I'm not suggesting that this is a disparity in primary sources, but rather that it's pretty straightforward to gather insight and some solid answers based solely on one or two Windows Event Log files.

Conclusion
Tools like this provide for:

- Creating situational awareness and "pivot points" from your incident data
- Creating context and insights from your incident data
- Corporate knowledge retention, particularly for diverse environments, such as you find with consulting
- An alternate/additional means for analysts of all levels to contribute
- Fully exploiting limited data sets

However, tools like this (and timelines, as well) are limited by:

- Which Windows Event Logs are included
- The applications installed on the system
- The audit policy of the system
- How long it's been since the incident occurred


Wednesday, October 12, 2022

We Need Cybersecurity Mentors

I received a job description from a recruiter recently, along with the request that if I knew anyone who fit the bill and was interested, could I please forward the job description to them. The recruiter was looking for someone at an entry-level, with 1 - 3 yrs of experience, and the listed salary was in the low six figures.

However, the list of Essential Skills was as follows (copy-paste, with a few modifications):

- Practical mobile phone forensic analyst skills on hardware and software.
- Ability to run network and sandbox analysis on Windows, Linux, Mac, Android, iOS, and other platforms.
- Ability to use compliers[sic] and other software analytical tools for different platforms.
- Strong in tools such as <list of tools> and other analysis tools.
- Strong TCP/UDP/IP networking and protocol understanding, how they work, what they do, and what ports they use.
- Strong communication skills to relate findings in an understandable and useful way.
- Strong self-disciplined and self-starter that can think outside of the box and bring fresh insight and experience to the team.
- Comfortable with Linux shell and common GNU utilities.
- Ability to analyze, summarize, visualize, and detect anomalies from raw network communications data in a clear and effective manner.

Yeah, okay. I saw it, too. 

First, "1 - 3 yrs" of experience, entry-level, but "Essential Skills" for the role cover mobile (hardware *and* software), Windows, Linux, Mac, Android, iOS, and "other platforms".

Then, the applicant needs to understand TCP, UDP, IP, and "the ports they use".

Yes, there was a misspelling.

The last thing I'll mention is that, again, this is an entry-level position, but looking to "bring fresh insight and experience to the team". If someone is entry-level, what *experience* are they bringing to the team?

Okay, just to be clear...this is NOT a post to bash the job description...not at all. I'm not interested in calling anyone out, or putting anyone on the spot. All of the above is meant solely to let others know, yes, I'm seeing the same things and having similar thoughts as you are, so you're not alone in that sense.

What this post is to say is that when someone who's entry-level, someone with 1 - 3 yrs of experience in the field sees a job description such as this, they're going to immediately look at it and not apply. "But...why", you ask? Because there's no way you're going to be able to fulfill the stated "essential skills" with under 3 yrs of experience. Even folks looking at this description with a dozen years of experience are going to know that you're not going to be able to attain an "essential" level of all of these skills.

Ultimately, what's going to come of job descriptions such as this will be continued, circular reporting on how there aren't enough skilled people in the industry to fill all of the open positions.

But there is a solution! If you're new to the cybersecurity field and thinking about looking around for a new role, or if you're looking to get into the field, even as a transitioning veteran...find a mentor. Find someone you trust, someone you can engage with to help you navigate the myriad twists and turns of the maze. Find someone with more experience who can help you navigate job descriptions, certifications, etc., or even just help you figure out which area of "cybersecurity" might be the most interesting to you.

Finding a mentor can help you get over what might be preventing or dissuading you from applying for the above described role. As an example, my reaction to the job description was to respond to the email, saying, "...I'm sorry, but this makes no sense to me...", and why. I wasn't expecting a response, but I did get one. The recruiter shared that they were most interested in filling an entry-level role, and the message was that the "essential skills" really weren't so "essential". As a result, I came back with the message, "yes, go ahead and apply."

So, again...getting into the cybersecurity field can be daunting. Wait...no, I take that back. Not "can be"...is. It is daunting. There are so many options, so many opportunities, and the best way to go about deciphering and unraveling the process of getting into the field is to engage with someone who's already done it. If you're new to the field...a student, a transitioning vet, or if you're transitioning careers...reach out, engage, and find yourself a mentor. 

Monday, October 10, 2022

Post Compilation

For this post, I'll throw out a bunch of little snippets, or "post-lets", covering a variety of DFIR topics rather than one big post that covers one topic.

What's Old Is New Again
In Feb, 2016, Mari published a fascinating blog post regarding the VBAWarnings value. That was a bit more than 6 1/2 yrs ago, which in "Internet time" is several lifetimes.

Just this past September, Avast shared a write-up of the Roshtyak component of Raspberry Robin, where they described some of the techniques used by this malware, including checking the VBAWarnings value as a means of "detecting" virtual or testing environments.

Getting PCAPs
When I've been asked to go on-site (or assist remotely), it's most often been after an incident has happened. However, that doesn't mean that I shouldn't have a means available for myself, or to share with IT admins, to collect pcaps. Having something like this readily available can be very beneficial when you need it.

It seems that Windows 10 and above comes with a native tool for collecting network traffic data called pktmon.

Prefer Powershell? Doug Metz over at BakerStreetForensics has a solution for you.

I've used bulk_extractor to get pcaps from memory dumps; because it uses a different means for identifying network connections than Volatility, running them both is a really, REALLY good idea! So good, as a matter of fact, that I included an example of this in Investigating Windows Systems, which just shows that regardless of the version of Windows you're dealing with, the process still holds up.
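
From memory, the invocation looks something like the following (check the help output for your version of the tool); disabling all scanners except the net scanner carves packets into a packets.pcap file in the output directory:

bulk_extractor -o d:\cases\be_out -x all -e net g:\memdump.raw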

Memory Analysis
Or, if you're looking for a bit more, consider bulk_extractor with record carving.

Also, if you're doing memory analysis, you might consider tools such as MemProcFS and MemProcFS-Analyzer. While I'm not a fan of a lot of the available GUI tools that folks (generally) use for analysis, this tweet from "evild3ad79" makes visualizing processes so much easier!

MOTW
MOTW, or "mark-of-the-web" is a pretty hot topic, as it should be. "MOTW" is the NTFS alternate data stream, or "ADS", attached to files downloaded from the Internet, and something we've seen expand over time. At first these were simply "zone identifier" ADSs, and contained just that...the "zone" for the downloaded file. We first saw these associated with files downloaded via IE and Outlook, and then later saw MOTW attached to files downloaded via other browsers. 

MOTW picked up steam a bit after MS announced that they were going to change the default behavior of systems running macros in Office documents downloaded from the Internet. We then saw some actors move to using archives rather than "weaponized" Office documents, and our attention shifted to archive utilities and MOTW propagation.

For a bit of a different perspective on MOTW, Outflank published this article discussing MOTW from a red team perspective.

And, to top it all off, MS has shared information regarding how to disable the functionality (of attaching MOTW). What this does provide is an excellent opportunity for detections, both in the SOC (adding or modifying the SaveZoneInformation value) and for DFIR (checking the value).

Web Shells
Many, many moons ago (circa 2007, 2008), Chris Pogue and I were addressing investigations of SQL injection and web shells for the IBM ISS X-Force ERS team, codifying (or trying to) some basic processes for locating these attacks in a reactive, DFIR mode. We had a lot of different approaches, all of which could be addressed programmatically...things such as the first instance of a page being requested (across the history of the web server logs that you have available), the number of times a page was requested, the length of the request sent to the page, User Agents, etc. Now, all of these depended upon which fields were actually being logged, so we started with the default IIS logging fields and attempted to modify and address things from there. This way, IIS logs with modified fields (hopefully, added to...) or non-IIS web servers were considered "one-offs", and we found that the approach worked well. The first two checks...first request time and request count...are sketched below.
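
This is a minimal sketch, assuming the default W3C field layout (date and time are the first two fields, and cs-uri-stem is the fifth); adjust the indexes to match the #Fields header in your own logs:

use strict;
use warnings;

my (%count, %first);
foreach my $file (@ARGV) {
    open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
    while (my $line = <$fh>) {
        next if ($line =~ m/^#/);    # skip the W3C header lines
        chomp $line;
        my @f = split(/\s+/, $line);
        my ($date, $time, $stem) = @f[0, 1, 4];
        next unless (defined $stem);
        $count{$stem}++;
        $first{$stem} = "$date $time" unless (exists $first{$stem});
    }
    close($fh);
}
# rarely-requested pages float to the top; a web shell is often near-unique
foreach my $stem (sort {$count{$a} <=> $count{$b}} keys %count) {
    printf "%-6d %-20s %s\n", $count{$stem}, $first{$stem}, $stem;
}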

I learned recently that Aaron Shelmire authored a blog on this topic for Anomali; this was a great find, not just because it lists some of the things we'd looked for, but also because Aaron and I worked together at one point. It's great to see contributions like this within the community.

Events Ripper
Not long ago, I released Events Ripper, a proof-of-concept tool based on RegRipper, in that it relies on plugins to extract and present data. The idea behind Events Ripper is to leverage what analysts are already doing to provide situational awareness and pivot points for analysis. So, when analysts are performing timeline creation (and moving to timeline analysis), they can leverage the events file they've already created to obtain insight into the system.

At this point, all of the Events Ripper plugins are based on data in an events file, from parsed Windows Event Logs (via wevtx.bat). For example, I wrote two plugins recently, mount.pl and ntfs.pl, that analysts can use to verify initial access used by Raspberry Robin; mount.pl runs through the events file looking for Microsoft-Windows-VHDMP/1 events indicating that a disk was "surfaced", and outputs a list of the VHD[X] and/or ISO files. Ntfs.pl looks for Microsoft-Windows-Ntfs/145 events to locate volumes, and map them to the drives or devices. Using these two plugins, you can get some quick insight as to how Raspberry Robin (or other malware) may have originally made it on to the system...via a USB thumb drive or ISO file delivered as an email attachment.

Interestingly enough, when I was developing and testing the mount.pl plugin, the Microsoft-Windows-VHDMP/Operational.evtx log file from my test system contained three ISO files. Checking the RecentDocs/.iso values in the Registry, I found those same three files listed, as well. 

Per a request from my esteemed co-worker Dray, all of the plugins display the system name, or names, as the case may be. It's not unusual for systems to start out as a gold image and be renamed, so you may have event records that still contain the original system name.

Thursday, October 06, 2022

Speaking Engagements

Every now and again, I have a need (re: "opportunity") to compile a list of recorded speaking events. The reasons vary...there's a particular message in one or more of the recordings, or someone wants to see/hear what was said, or it's more about showing examples of my presentation style. For the sake of simplicity, I thought I'd just take the list I'd compiled in Notepad++ and create a blog post.

Huntress TradeCraft Tuesdays
Bang For Your Buck: How Hackers Make Money - Ethan and I discuss various means by which threat actors monetize their activities, which is (in many cases) their ultimate goal. We also present some steps you can take to inhibit or obviate this.

Digital Forensics (or Necromancy) - Jamie and I talk about digital forensics with our special guest, Dr. Brian Carrier

ResponderCon 
Here's a link to my slides; I'll post a link to the recorded talk once it's available.

Update, 7 Nov: They're here! The video for my presentation can be found here.

Podcasts
I recently participated in the Horangi "Ask A CISO" podcast (link here, and on Spotify).

Older Events/Recordings
RVASec 2019 presentation
Nuix Unscripted
A couple of podcasts via OwlTail
Down the Security Rabbithole podcast from 2017
A podcast from 2009
CyberSpeak podcast from 2006 (24 Sept, 1 Apr)

Update, 27 Jan 2023: I appeared on the Future of Cyber Crime podcast with the wonderful Zaira Pirzada as the hostess! Thanks, Zaira, for the opportunity!

Update, 2 Feb 2023: Here's the video for the Future of Cyber Crime podcast.

Tuesday, September 20, 2022

ResponderCon Followup

I had the opportunity to speak at the recent ResponderCon, put on by Brian Carrier of BasisTech. I'll start out by saying that I really enjoyed attending an in-person event after 2 1/2 yrs of virtual events, and that Brian's idea to do something a bit different (different from OSDFCon) worked out really well. I know that there've been other ransomware-specific events, but I've not been able to attend them.

As soon as the event kicked off, it seemed as though the first four presentations had been coordinated...but they hadn't. It simply worked out that way. Brian referenced what he thought my content would be throughout his presentation, I referred back to Brian's content, Allan referred to content from the earlier presentations, and Dennis's presentation fit right in as if it were a seamless part of the overall theme. Congrats to Dennis, by the way, not only for his presentation, but also on his first time presenting. Ever.

During his presentation, Brian mentioned TheDFIRReport site, at one point referring to a Sodinokibi write-up from March, 2021. That report mentions that the threat actor deployed the ransomware executable to various endpoints by using BITS jobs to download the EXE from the domain controller. My presentation focused less on analysis of the ransomware EXE and more on threat actor behaviors, and Brian's mention of the above report (twice, as a matter of fact) provided excellent content. In particular, for the BITS deployment to work, the DC would have to (a) have the IIS web server installed and running, and (b) have the BITS server extensions installed/enabled, so that the web server knew how to respond to the BITS client requests. As such, the question becomes, did the victim org have that configuration in place for a specific reason, or did the threat actor modify the infrastructure to meet their own needs? 

However, the point is that without prior coordination or even really trying, the first four presentations seemed to mesh nicely and seem as if there was detailed planning involved. This is likely more due to the particular focus of the event, combined with some luck delivered when the organizing team decided upon the agenda. 

Unfortunately, due to a prior commitment (Tradecraft Tuesday webinar), I didn't get to attend Joseph Edwards' presentation, which was the one presentation I wanted to see (yes, even more than my own!).  ;-) I am going to run through the slides (available from the agenda and individual presentation pages), and view the presentation recording once it's available. I've been very interested in threat actors' use of LNK files, and the subsequent use (or rather, lack thereof) of those files by DFIR and threat intel teams. The last time I remember seeing extensive use of threat actor-delivered LNK files was when the Mandiant team compared Cozy Bear campaigns.

Going through my notes, comments made during one presentation kind of stood out, in that "event ID 1102" within the RDPClient/Operational Event Log was mentioned when looking for indications of lateral movement. The reason this stood out, and why I made a specific note, was due to the fact that many times in the industry, we refer to simply "event IDs" to identify event records; however, event IDs are not unique. For example, we most often think of "event log cleared" when someone says "event ID 1102"; however, it can mean something else entirely based on the event source (a field in the event record, not the log file where it was found). As a result, we should be referring to Windows Event Log records by their event source/ID pairs, rather than solely by their event ID. 

Something else that stood out for me was that during one presentation, the speaker was referring to artifacts in isolation. Specifically, they listed AmCache and ShimCache each as artifacts demonstrating process execution, and this simply isn't the case. It's easy for many who do not follow this line of thought to dismiss such things, but we have to remember that we have a lot of folks who are new, junior, or simply less experienced in the industry, and if they're hearing this messaging, but not hearing it being corrected, they're going to assume that this is how things are, and should be, done.

What Next?
ResponderCon was put on by the same folks that have run OSDFCon for quite some time now, and it seems that ResponderCon is something a bit more than just a focused version of OSDFCon. So, the question becomes, what next? What's the next iteration, or topic, or theme? 

If you have thoughts, insights, or just ideas you want to share, feel free to do so in the comments, or via social media, and be sure to include/tag Brian.

Monday, September 19, 2022

Deconstructing Florian's Bicycle

Not long ago, Florian Roth shared some fascinating thoughts via his post, The Bicycle of the Forensic Analyst, in which he discusses increases in efficiency in the forensic review process. I say "review" here, because "analysis" is a term that is often used incorrectly, but that's for another time. Specifically, Florian's post discusses efficiency in the forensic review process during incident response.

After reading Florian's article, I had some thoughts I wanted to share that would extend what he's referring to, in part because I've seen, and continue to see, the need for something just like what is discussed. I've shared my own thoughts on this topic previously.

My initial foray into digital forensics was not terribly different from Florian's, as he describes in his article. For me, it wasn't a lab crammed with equipment and two dozen drives, but the image his words create and perhaps even the sense of "where do I start?" was likely similar. At the same time, this was also a very manual process...open images in a viewer, or access data within images via some other means, and begin processing the data. Depending upon the circumstances, we might access and view the original data to verify that it *can* be viewed, and at that point, extract some modicum of data (referred to as "triage data") to begin the initial parsing and data presentation process before kicking off the full acquisition process. But again, this has often been a very manual process, and even with checklists, it can be tedious, time consuming, and prone to errors.

Over the years, analysts have adopted something similar to what Florian describes, using such tools as Yara, Thor (Lite), log2timeline/plaso, or CyLR. These are all great tools that provide considerable capabilities, making the analyst's job easier when used appropriately and correctly. I should note that several years ago, extensions for Yara and RegRipper were added to the Nuix Workstation product, putting the functionality and capability of both tools at the fingertips of investigators, allowing them to significantly extend their investigations from within the Nuix product. This is an example of a commercial product providing its users the ability to leverage freeware tools in their parsing and data presentation process.

So, where does the "bicycle" come in? Florian said:

Processing the large stack of disk images manually felt like being deprived of something essential: the bicycle of forensic analysts.

His point is, if we have the means for creating vast efficiencies in our work, alleviating ourselves of manual, time-consuming, error-prone processes, why don't we do so? Why not leverage scanners to reduce our overhead and make our jobs easier?

So, what was achieved through Florian's use of scanners?

The automatic processing of the images accelerated our analysis a lot. From a situation where we processed only three disk images daily, we started scanning every disk image on our desk in a single day. And we could prioritize them for further manual analysis based on the scan reports.

Florian's article continues with a lot of great information regarding the use of scanners, and applying detection engineering to processing acquired images and data. He also provides examples of rules to identify the misuse/abuse of common tools. 

All of this is great, and it's something we should all look to in our work, keeping two things in mind. First, if we're going to download and use tools created by others (Yara/Thor, plaso, RegRipper, etc.), we have to understand what the tools actually do. We can't make assumptions about the tools and the functionality they provide, as these assumptions lead to significant gaps in analysis. By way of example, in March, 2020, Microsoft published a blog article addressing human-operated ransomware attacks. In that article, they identified a ransomware group that used WMI for persistence, and the team I was engaged with at the time received data from several customers impacted by that ransomware group. However, the team was unable to determine if WMI had been used for persistence because the toolset they were using to parse data did not have the capability to parse the OBJECTS.DATA file. The collection process included this file, but the parsing process did not parse the file for persistence mechanisms, and as a result, analysts assumed that the data had been parsed and yielded a negative response.

Fig 1: New DFIR Model
Second, we cannot download a tool and use it, expecting it to be up-to-date 6 months or a year later, unless we actually take steps to update it. And I'm not talking about going back to the original download site and getting more rules. The best way to go about updating the tools is to use the scanners as Florian described, leveraging the efficiencies they provide, but to then bake new findings from our analysis back into the overall DFIR process, as illustrated in figure 1. Notice that following the "Analysis" phase, there's a loop that feeds back into the "Collect" phase (in case there's something additional that needs to be collected from a system) and then proceeds to the "Parse" phase, where those scanners and rules Florian described are applied. They can then be further used to enrich and decorate the data prior to presentation to the analyst. The key to this feedback loop is that rather than solely the knowledge and experience of the analyst assigned to an engagement being applied to the data, the collective corporate knowledge of all analysts, prior analysts, detection engineers, and threat intel analysts can be applied consistently to all engagements.

So, to truly leverage the efficiencies of Florian's scanner "bicycles", we need to continually extend them by baking findings developed through analysis, digging into open reports, etc., back into the process.

Saturday, September 10, 2022

AmCache Revisited

Not long ago, I posted about When Windows Lies, and that post really wasn't so much about Windows "lying", per se, as it was about challenging analyst assumptions about artifacts, and recognizing misconceptions. Along the same lines, I've also posted about the (Mis)Use of Artifact Categories, in an attempt to address the reductionist approach that leads analysts to oversimplify and subsequently misinterpret artifacts based on their perceived category (i.e., program execution, persistence, lateral movement, etc.). This misinterpretation of artifacts can lead to incorrect findings, and subsequently, incorrect recommendations for improvements to address identified issues.

I recently ran across this LinkedIn post that begins by describing AmCache entries as "evidence of execution", which is somewhat counter to the seminal research conducted by Blanche Lagney regarding the AmCache.hve file, particularly with more recent versions of Windows 10. If you prefer a more auditory approach, there's also a Forensic Lunch where Blanche discussed her research (starts around 12:45), thanks to David and Matt! Suffice it to say, data from the AmCache.hve file should not simply be considered "evidence of execution"; that view is far too reductionist, and the artifact is much more nuanced. This overly simplistic approach is indicative of the misuse of artifact categories I mentioned earlier.

Unfortunately, misconceptions as to the nature and value of artifacts such as this (and others, most notably, ShimCache) continue to exist because we continue to treat the artifacts in isolation, rather than as part of a system. Reviewing Blanche's paper in detail, for example, makes it clear that the value of specific portions of the AmCache data depend upon such factors as which keys/values are in question, as well as which version of Windows is being discussed. Given these nuances, it's easy to see how a reductionist approach evolves, leaving us simply with "AmCache == execution". 

What we need to do is look at artifacts not in isolation, but in context with other data sources (Registry, WEVTX, etc.). This is important because artifacts can be manipulated; for example, there've been instances where malware (a PCI scraper) was written to disk and immediately "time stomped", using the time stamps from kernel32.dll to make it appear that the malware had always been part of the system. As a result, when the $STANDARD_INFORMATION attribute last modification time was included in the ShimCache entry for the malware, the analyst misinterpreted this as the time of execution, and reported a "window of compromise" of 4 years to the PCI Council, rather than a more correct 3 weeks. The "window of compromise" reported by the analyst is used, in part, to determine fines to be levied against the merchant/victim. 

So, yes...as we begin learning, we need to first learn about artifacts by themselves. We need to develop an understanding of the nature of an artifact, particularly how many artifacts and IOCs need to be viewed as compound objects (shoutz to Joe Slowik!). However, as we progress, we need to start viewing artifacts in the context of other artifacts, and understand how to go about developing artifact constellations that help us clearly demonstrate the behaviors we're seeing, the behaviors that drive our findings.

The overall take-away here, as with other instances, is that we have to recognize when we're viewing artifacts in isolation, and then avoid doing so. If some circumstance prevents correlation across multiple data sources, then we need to acknowledge this in our reporting, and clearly state our findings as such.

Saturday, September 03, 2022

LNK Builders

I've blogged a bit...okay, a LOT...over the years on the topic of parsing LNK files, but a subject I really haven't touched on is LNK builders or generators. This is actually an interesting topic because it ties into the cybercrime economy quite nicely. What that means is that there are "initial access brokers", or "IABs", who gain and sell access to systems, and there are "RaaS" or "ransomware-as-a-service" operators who will provide ransomware EXEs and infrastructure, for a price. There are a number of other for-pay services, one of which is LNK builders.

In March, 2020, the Checkpoint Research team published an article regarding the mLNK builder, which at the time was version 2.2. Reading through the article, you can see that the builder includes a great deal of functionality; there's even a pricing table. Late in July, 2022, Acronis shared a YouTube video describing version 4.2 of the mLNK builder.

In March, 2022, the Google TAG published an article regarding the "Exotic Lily" IAB, describing (among other things) their use of LNK files, and including some toolmarks (drive serial number, machine ID) extracted from LNK metadata. Searching Twitter for "#exoticlily" returns a number of references that may lead to LNK samples embedded in archives or ISO files. 

In June, 2022, Cyble published an article regarding the Quantum LNK builder, which also describes the features and pricing scheme for the builder. The article indicates a possible connection between the Lazarus group and the Quantum LNK builder, based on similarities in Powershell scripts.

In August, 2022, SentinelLabs published an article that mentioned both the mLNK and Quantum builders. This is not to suggest that these are the only LNK builders or generators available, but it does speak to the prevalence of this "*-as-a-service" offering, particularly as some threat actors move away from the use of "weaponized" (via macros) Office documents, and toward the use of archives, ISO/IMG files, and embedded LNK files.

Freeware Options
In addition to creating shortcuts through the traditional means (i.e., right-clicking in a folder, etc.), there are a number of freely available tools that allow you to create malicious LNK files. However, from looking at them, there's little available to indicate that they provide the same breadth of capabilities as the for-pay options listed earlier in this article. Here are some of the options I found:

lnk-generator (circa 2017)
Booby-trapped shortcut (circa 2017) - includes script
LNKUp (circa 2017) - LNK data exfil payload generator
lnk-kisser (circa 2019) - payload generator
pylnk3 (circa 2020) - read/write LNK files in Python
SharpLNKGen-UI (circa 2021) - expert mode includes use of ADSs (Github)
Haxx generator (circa 2022) - free download
lnkbomb - Python source, EXE provided
lnk2pwn (circa 2018) - EXE provided
embed_exe_lnk - embed EXE in LNK, sample provided 

Next Steps
So, what's missing in all this is toolmarks; with all these options, what does the metadata from malicious LNK files created using the builders/generators look like? Is it possible that given a sample or enough samples, we can find toolmarks that allow us to understand which builder was used?

Consider this file, for example, which shows the parsed metadata from several samples (most recent sample begins on line 119). The first two samples, from Mandiant's Cozy Bear article, are very similar; in fact, they have the same volume serial number and machine ID. The third sample, beginning on line 91, has a lot of the information we'd look to use for comparison removed from the LNK file metadata; perhaps the description field could be used instead, along with specific offsets and values from the header (given that the time stamps are zero'd out). In fact, besides zero'd out time stamps, there's the SID embedded in the LNK file, which can be used to narrow down a search.
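
Checking for one of those toolmarks...zero'd header time stamps...takes only a few lines of code. The offsets below (the header size field at offset 0, and the three FILETIMEs at 0x1C, 0x24, and 0x2C) are from the MS-SHLLINK specification:

use strict;
use warnings;

my $file = shift @ARGV or die "Usage: lnk_times.pl <file.lnk>\n";
open(my $fh, '<:raw', $file) or die "Cannot open $file: $!\n";
my $hdr;
(read($fh, $hdr, 0x4C) == 0x4C) or die "File too short to be an LNK file\n";
close($fh);

# the header size field is always 0x4C for LNK files
my $size = unpack("V", substr($hdr, 0, 4));
($size == 0x4C) or die "Not an LNK file (header size != 0x4C)\n";

my @names = ("Creation", "Access  ", "Write   ");
foreach my $i (0..2) {
    my ($lo, $hi) = unpack("V V", substr($hdr, 0x1C + ($i * 8), 8));
    printf "%s time: %s\n", $names[$i],
        (($lo == 0) && ($hi == 0)) ? "zero'd (possible builder toolmark)"
                                   : sprintf("0x%08x%08x (FILETIME)", $hi, $lo);
}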

The final sample is interesting, in that the PropertyStoreDataBlock appears to be well-populated (unlike the previous samples in the file), and contains information that sheds light on the threat actor's development environment.

Perhaps, as time permits, I'll be able to use a common executable (the calculator, Solitaire, etc.), and create LNK files with some of the freeware tools, noting the similarities and differences in metadata/toolmarks. The idea behind this would be to demonstrate the value in exploring file metadata, regardless of the actual file, as a means of understanding the breadth of such things in threat actor campaigns.

Analysis: Situational Awareness + Timelines

I've talked and written about timelines as an analysis process for some time, in both this blog and in my books, because I've seen time and again over the years the incredible value in approaching an investigation by developing a timeline (including mini- and micro-timelines, and overlays), rather than leaving the timeline as something to be manually created in a spreadsheet after everything else is done.

Now, I know timelines can be "messy", in part because there's a LOT of activity that goes on on a system, even when it's "idle", such as Windows and application updates. This content can "muck up" a timeline and make it difficult to distill the malicious activity, particularly when discerning that malicious activity is predicated solely on the breadth of the analyst's knowledge and experience. Going back to my early days of "doing" IR, I remember sitting at an XP machine, opening the Task Manager, and seeing that third copy of svchost.exe running. I'd immediately drill in on that item, and the IT admin would be asking me, "...what were you looking at??" The same is true when it comes to timelines...there are going to be things that will immediately jump out to one analyst, where another analyst may not have experienced the same thing, and not developed the knowledge that this particular event was "malicious", or at least suspicious.

As such, one of the approaches I've used and advocated is to develop situational awareness via the use of mini- or micro-timelines, or overlays. A great example of this approach can be seen in the first Iron Man movie, when Tony's stuck in the cave and in order to hide the fact that he's building his first suit of armor, he has successive parts of the armor drawn on thin paper. The image of the full armor only comes into view when the papers are laid on top of one another. I figured that this would be a much better example than referring to overhead projectors and cellophane sheets, etc., that those of us who grew up in the '70s and '80s are so very familiar with.

Now, let's add to that the idea of "indicators as composite objects". I'm not going to go into detail discussing this topic, because I stole...er, borrowed...it from Joe Slowik; to get a really good view into this topic, give Joe's RSA 2022 presentation a listen; it's under an hour and chock full of great stuff!

So, what this means is that we're going to have events in our timeline that are really composite objects. This means that the context of those events may not be readily available or accessible for a single system or incident. For example, we may see a "Microsoft-Windows-Security-Auditing/4625" event in the timeline, indicating a failed login attempt. But that event also includes additional factors...the login type, the login source, username, etc., all of which can have a significant impact...or not...on our investigation. But here we have a timeline with hundreds, or more likely, thousands of events, and no real way to develop situational awareness beyond our own individual experiences. 

Or, do we?

In order to try to develop more value and insight from timeline data, I developed Events Ripper, a proof of concept tool based on RegRipper that provides a means for digging deeper into individual events and providing analysts with insight and situational awareness they might not otherwise have, due to the volume of data, lack of experience, etc. In addition to addressing the issue of composite objects, this tool also serves, like RegRipper, to preserve corporate knowledge and intrusion intel, in that a plugin developed by one analyst is shared with all analysts, and the wealth of that analyst's knowledge and experience is available to the rest of the team regardless of their status, in an operational manner.

The basic idea behind Events Ripper is that an analyst is developing a timeline and wants to take things a step further and develop some modicum of situational awareness. The timeline process used results in an "events file" being created; this is an intermediate file between the raw data ($MFT, *.evtx, Registry hives, SRUM DB, etc.) and the timeline. The "events file" is a text file, with each line consisting of an event in the 5-field TLN format. 

For example, one plugin runs across all login (Security-Auditing/4624) events, and distinguishes between type 2, 3, and 10 logins, and presents source IP addresses for the type 3 and type 10 logins. Using this plugin, I was able to quickly determine that a system was accessible from the raw Internet via both file sharing and RDP. Another plugin does something similar for failed logins, in addition to translating the sub-status reason for the failed attempt.

The current set of plugins available with the tool are focused on Windows Event Log records from some of the various log files. As such, an analyst can run Events Ripper across an events file consisting of multiple data sources, or can opt to do so after parsing just the Windows Event Log files, as described in the tool readme. As this is provided as a standalone Windows executable, it can also be included in an automated process where data is received through 'normal' intake processes, and then parsed prior to being presented to the analyst. 

Like RegRipper, Events Ripper was intended for the development and propagation of corporate knowledge. You can write a plugin that looks for something specific in an event, for a combination of elements within an event, or for correlations across events in the events file, and add Analysis Tips to the plugin, sharing things like reference URLs and other items the analyst may want to consider. Adding that plugin to your own corporate repo means that the experience and knowledge developed in investigating and unraveling that event (or combination of events) is no longer limited to just the analyst who did the work; now, it's shared with all analysts, even those who haven't yet joined your team.

At the moment, there are 16 plugins, but like RegRipper, this is likely to expand over time as new investigations develop and new ways of looking at the data are needed.

Saturday, August 27, 2022

When Windows Lies

"When Windows Lies"...what does that really mean? 

Mari had a fascinating blog post on this topic some years ago; she talked about the process DFIR analysts had been using to that point to determine the installation date of the operating system. In short...and this has happened several more times since then...while DFIR analysts had been using one process to assess the installation date, Windows developers had changed how this information is stored and tracked in Windows systems, reaffirming the notion that operating systems are NOT designed and maintained with forensic examiners in mind. ;-)

The take-away from Mari's blog article...for me, anyway...is the need for analysts to keep up-to-date with changes to the operating system; storage locations, log formats, etc., can (and do) change without notice. Does this mean that every analyst has to invest in research to keep up on these things? No, not at all...this is why we have a community, where this research and these findings are shared. But as she mentioned in the title and content of the article, if we just keep following our same methods, we're going to end up finding that "Windows lies". This idea or concept is not new; I've talked about the need for validation previously.

Earlier this year, a researcher used a twist on that title to talk about the "lies" analysts' tools will "tell" them, specifically when it comes to USB device serial numbers. I understand the author presented at the recent SANS DFIR Summit; unfortunately, I was not able to view the presentation due to a previous commitment. However, if the content was similar, I'm not sure I'd use the term "lies" to describe what was happening here.

The author does a great job of documenting the approach they took in the article, with lots of screen captures. However, when describing the T300D Super Speed Toaster, the author states:

...I would have expected a device such as this to simply be a pass-through device.

I've used a lot of imaging devices in my time, but not this one; even so, just looking at the device (and without reading the online description), I can't say that I would have assumed, in a million years, that this was simply a "pass-through device". Just looking at the front of the device, there's a good bit going on, and given that this is a "disk dock" for duplicating drives, I'm not at all sure that the designers took forensics processes into account.

As a result, in this case, the take-away isn't that it's about Windows "lying", as much as it is...once again...about the analyst's assumptions. If the analyst feels that what they "know" is beyond reproach, and does not recognize what they "know" as assumption (even if it's more of, "...but that's how we've always done it..."), then it would appear that Windows is "lying" to them. So, again, we have the need for validation, but this time we've added the layer of "check your assumptions".

Earlier this year, Krz posted a pretty fascinating article, using the term "fools" in the title, as in "Windows fools you". In that case, what he meant was that during updates, Windows will "do things" as part of the update functionality that have an impact on subsequent response and analysis. As such, an analyst with minimal experience or visibility may assume that the "thing" done was the result of a threat actor's actions, simply because they weren't aware that this is "normal Windows functionality".

It's pretty clear that the use of the term "lies" is meant to garner attention to the content. Yes, it's absolutely critical that analysts understand the OS and data they're working with (including file formats), how their tools work, and when necessary, use multiple tools. But it's also incumbent upon analysts to check their assumptions and validate their findings, particularly when there's ample data to help dispel those assumptions. Critical thinking is paramount for DFIR analysts, and I think that both authors did a very good job in pointing that out.

Wednesday, August 24, 2022

Kudos and Recognition

During my time in the industry, I've seen a couple of interesting aspects of "information sharing". One is that not many like to do it. The other is that, over time, content creation and consumption have changed pretty dramatically.

Back in the day, folks like Chris Pogue, with his The Digital Standard blog, and Corey Harrell with his Journey Into IR blog, and even more recently, Mari with her Another Forensics Blog have all provided a great deal of relevant, well-developed information. A lot of what Mari shared as far back as 2015 has even been relevant very recently, particularly regarding deleted data in SQLite databases. And, puh-LEASE, let's not forget Jolanta Thomassen, who, in 2008, published her dissertation addressing unallocated space in Registry hives, along with the first tool (regslack) to parse and extract those contents - truly seminal work!

Many may not be aware, but there are some unsung heroes in the DFIR industry, unrecognized contributors who are developing and sharing some incredible content, but without really tooting their own horn. These folks have been doing some really phenomenal work that needs to be called out and held up, so I'm gonna toot their horn for them! So, in no particular order...

Lina is an IR consultant with Secureworks (an org for which I am an alum), and has a string of alphabet soup following her name. Lina has developed some pretty incredible content, which she shares via her blog, as well as via LinkedIn and in tweet threads. One of her posts I've enjoyed in particular is this one regarding clipboard analysis. Lina's content has always been well-considered, well-constructed, and very thoughtful. I have always enjoyed content produced by practitioners, as it's very often the most relevant.

Krz is another analyst who has dropped a good deal of high-quality content, as well as written some of his own tools (including RegRipper plugins), which he also shares via Github. Not only did Krz uncover that Windows updates will clear out valuable forensic resources, but he also did some considerable research into how a system going on battery power impacts that system and, subsequently, forensic analysis.

Patrick Siewert has hung out his own shingle, and does a lot of work in the law enforcement and legal communities, in addition to sharing some pretty fascinating content. I have never had the opportunity to work with mobile devices (beyond laptops), but Patrick's article on cellular records analysis is a thorough and interesting treatment of the topic. 

Julian-Ferdinand Vögele recently shared a fascinating article titled The Rise of LNK Files, dropping a really good description of Windows shortcut files and their use. Anyone who's followed me for any amount of time knows I'm more than mildly interested in this topic, from a digital forensic and threat intel perspective. He's got some other really interesting articles on his blog, including this one regarding Scheduled Tasks, and like the other folks mentioned here, I'm looking forward to more great content in the future.

If you're looking for something less on the deep technical side or less DFIR focused, check out Maril's content. She's a leader in the "purple team" space, and she's got some really great content on personal branding that I strongly recommend that everyone take the time to watch, follow, digest, and consider. To add to that, it seems that Maril and her partners-in-crime (other #womenincyber) will be dropping the CyberQueensPodcast starting in Sept.

If you're into podcasts, give Jax a listen over at Outpost Gray (she also co-hosts the 2 Cyber Chicks podcast) and in particular, catch her chat with Dorota Koslowska tomorrow (25 Aug). Jax is a former US Army special ops EW/cyber warrant officer, and as you can imagine, she brings an interesting perspective to a range of subjects, a good bit of which she shares via her blog.

Let's be sure to recognize those who produce exceptional content, and in particular those who do so on a regular basis!

Saturday, August 13, 2022

Who "Owns" Your Infrastructure?

That's a good question.

You go into work every day, sit down at your desk, log in...but who actually "owns" the systems and network that you're using? Is it you, your employer...or someone else?

Anyone who's been involved in this industry for even a short time has either seen or heard how threat actors will modify an infrastructure to meet their needs, enabling or disabling functionality (as the case may be) to cover their tracks, make it harder for responders to track them, or simply open new doors for follow-on activity.

Cisco (yes, *that* Cisco) was compromised in May 2022, and following their investigation, provided a thorough write-up of what occurred. From their write-up:

"Once the attacker had obtained initial access, they enrolled a series of new devices for MFA and authenticated successfully to the Cisco VPN." (emphasis added)

Throughout the course of the engagement, the threat actor apparently added a user, modified the Windows firewall, cleared Windows Event Logs, etc. Then, later in the Cisco write-up, we see that the threat actor modified the Windows Registry to allow for unauthenticated SYSTEM-level access back into systems via the Sticky Keys "backdoor". What this means is that if Cisco goes about their remediation steps, including changing passwords, but misses this one, the threat actor can return, hit a key combination at the login screen, and gain SYSTEM-level access back into the infrastructure without having to enter a password. There's no malware involved...this is based on functionality provided by Microsoft.
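The Cisco write-up doesn't spell out the exact Registry modification, but the commonly-reported implementation of this technique is to set a "Debugger" value beneath the Image File Execution Options subkey for sethc.exe (or one of the other accessibility binaries). A minimal dead-box check of an acquired Software hive, using the same Parse::Win32Registry module that RegRipper is built on, might look like this sketch:

#!/usr/bin/perl
# Sketch: check an acquired Software hive for "Debugger" values set on the
# accessibility binaries (the commonly-reported "Sticky Keys" backdoor).
use strict;
use warnings;
use Parse::Win32Registry;

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Not a hive file\n";
my $root = $reg->get_root_key();
my $ifeo = "Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options";

foreach my $exe (qw(sethc.exe utilman.exe osk.exe magnify.exe narrator.exe)) {
    my $key = $root->get_subkey($ifeo."\\".$exe) or next;
    if (my $val = $key->get_value("Debugger")) {
        printf "%-14s Debugger = %s  (key LastWrite: %s)\n",
            $exe, $val->get_data(), $key->get_timestamp_as_string();
    }
}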

Remember...the Cisco write-up states that the activity is attributed to an IAB, or initial access broker, which means that this activity was likely intended to gain and provide access to a follow-on threat actor. As a result of the response actions taken by the Cisco team, that follow-on access has been obviated.

On 11 Aug 2022, the SANS Internet Storm Center included this write-up regarding the use of a publicly available tool called nsudo. There's a screen capture in the middle of the write-up that shows a number of modifications the threat actor makes to the system, the first five of which are clearly Registry modifications via reg add. Later, there's the use of the PowerShell Set-MpPreference cmdlet to set Windows Defender exclusions, but I don't know if those will even take effect if the preceding commands to stop and delete Windows Defender succeeded. Either way, it's clear that the threat actor in this case is taking steps to modify the infrastructure to meet their needs.
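I don't know the complete set of values modified in that particular intrusion, but from a dead-box perspective, a check of a couple of commonly-abused Defender policy values might look like the following sketch; the value names here are the widely-documented ones, not taken from the ISC write-up:

#!/usr/bin/perl
# Sketch: check an acquired Software hive for commonly-abused Windows
# Defender policy values; value names are the widely-documented ones,
# not taken from the ISC write-up.
use strict;
use warnings;
use Parse::Win32Registry;

my $reg  = Parse::Win32Registry->new(shift @ARGV) or die "Not a hive file\n";
my $root = $reg->get_root_key();

if (my $key = $root->get_subkey("Policies\\Microsoft\\Windows Defender")) {
    printf "Key LastWrite time: %s\n", $key->get_timestamp_as_string();
    foreach my $name (qw(DisableAntiSpyware DisableAntiVirus)) {
        if (my $val = $key->get_value($name)) {
            printf "%-20s = 0x%04x\n", $name, $val->get_data();
        }
    }
}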

It doesn't stop there; there is a great deal of native functionality that threat actors can leverage to modify systems to meet their needs. For example, it's one thing to clear Windows Event Logs or delete web server log files; as we saw with NotPetya in 2017, those logs can still be recovered. To take this a step further, I've seen threat actors use appcmd.exe to disable IIS logging; if the logs are never written, they can't be recovered. We've seen threat actors install remote access tools, as well as virtual machines or virtual hard disks from which to run their malicious software, because (a) the VMs are not identified as malicious by AV software, and (b) AV software doesn't "look inside" the VMs.

So what? What does all this mean?

What this means is that these modifications can be detected and responded to early in the attack cycle, inhibiting or even obviating follow-on activity (ransomware deployment?). When I was researching web shells, for example, I kept running into trouble with Windows Defender; no matter how "esoteric" the web shell, if I didn't disable Defender before downloading it, Defender would quite literally eat the web shell! Other tools do a great job of detecting and quarantining web shells, and even more can identify them. That's a means of initial access, so detecting and quarantining the web shell means you've obviated the rest of the attack and forced the threat actor to look for another means, AND you know someone's knocking at your door!

Wednesday, August 10, 2022

Researching the Windows Registry

The Windows Registry is a magical place that I love to research because there's always something new and fun to find, and apply to detections and DFIR analysis! Some of my recent research topics have included default behaviors with respect to running macros in Office documents downloaded from the Internet, default settings for mounting ISO/IMG files, as well as how to go about enabling RDP account lockouts based on failed login attempts. 

Not long ago, I ran across some settings specific to nested VHD files, and thought, well...okay, I've seen virtual machines installed on systems during incidents as a means of defense evasion, and VHD/VHDX files are one such resource. Further, they don't require another application, like VMware or VirtualBox; Windows will mount them natively.

Digging a bit further, I found this MS documentation:

"VHDs can be contained within a VHD, so Windows limits the number of nesting levels of VHDs that it will present to the system as a disk to two, with the maximum number of nesting levels specified by the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskMaxTreeDepth

Mounting VHDs can be prevented by setting the registry value HKLM\System\CurrentControlSet\Services\FsDepends\Parameters\VirtualDiskNoLocalMount to 1."

Hhhmmm...so I can modify a Registry value and prevent the default behavior of mounting VHD files! Very cool! This is pretty huge, because admins can set this value to "1" within their environment, and protect their infrastructure.

Almost 3 yrs ago, Will Dormann published an article about the dangers of VHD/VHDX files. Some of the issues Will points out are:

- VHD/VHDX files downloaded from the Internet do not propagate MOTW the way some archive utilities do, so even if the VHD is downloaded from the Internet and MOTW is applied, this does not transfer to any of the files within the VHD file. This behavior is similar to what we see with ISO/IMG files.

- AV doesn't scan inside VHD/VHDX files.

So, it may be worth it to modify the VirtualDiskNoLocalMount value.    
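On a live system, this could be done via reg.exe or Group Policy; sticking with Perl, a sketch using the Win32::TieRegistry module might look like the following. This is an illustration rather than a tested deployment script, it requires administrator rights, and it should be tested before being pushed broadly:

#!/usr/bin/perl
# Sketch: set VirtualDiskNoLocalMount to 1 on a live system; requires
# administrator rights, Windows only. Test before broad deployment.
use strict;
use warnings;
use Win32::TieRegistry ( Delimiter => "/" );

my $params = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Services/FsDepends/Parameters/"}
    or die "Key not found: $^E\n";
$params->{"/VirtualDiskNoLocalMount"} = [ pack("L", 1), "REG_DWORD" ];
print "VirtualDiskNoLocalMount set to 1; local mounting of VHD[X] files disabled.\n";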

To check the various settings from a DFIR perspective, I use RegRipper:

(System) Get VHD[X] Settings

ControlSet001\Services\FsDepends\Parameters

LastWrite time: 2019-12-07 09:15:07Z

VirtualDiskExpandOnMount  0x0001
VirtualDiskMaxTreeDepth   0x0002
VirtualDiskNoLocalMount   0x0000

Analysis Tip: The values listed impact how Windows handles VHD[X] files, which can be used to bypass security measures, including AV and MOTW.

VirtualDiskMaxTreeDepth determines how deeply VHD files can be nested.
VirtualDiskNoLocalMount set to 1 prevents mounting of VHD[X] files.

Ref: https://insights.sei.cmu.edu/blog/the-dangers-of-vhd-and-vhdx-files/

From what's in the Registry (above), we can see what's possible. In this case, per the Analysis Tip in the output of the RegRipper plugin, this system allows automatic mounting of the virtual disk file. You can look for access to .vhd/.vhdx files in the user's RecentDocs key. Also from a DFIR perspective, look for indications of files being mounted in the Microsoft-Windows-VHDMP%4Operational Event Log.