Monday, November 28, 2022

Post Compilation

Investigating Windows Systems
It's the time of year again when folks are looking for stocking stuffers for the DFIR nerd in their lives, and my recommendation is a copy of Investigating Windows Systems! The form factor for the book makes it a great stocking stuffer, and the content is well worth it!

Yes, I know that book was published in 2018, but when I set out to write the book, I wanted to do something different from the recipe of most DFIR books to that point, including my own. I wanted to write something that addressed the analysis process, so the book is full of pivot and decision points, etc. So, while artifacts may change over time...some come and go, others change in format, and new ones suddenly appear...it's the analysis process that doesn't change.

For example, chapter 4 addresses the analysis of a compromised web server, one that includes a memory dump. One of the issues I've run into over the past couple of years, since well after the book was published, is that there are more than a few DFIR analysts who seem to believe that running a text search of a memory dump for IP addresses is "sufficient"; it's not. IP addresses are not often stored in ASCII format; as such, you'd likely want to use Volatility and bulk_extractor to locate the specific structures that include the binary representation of the IP address. As each tool looks for different structures, I recommend using them both...just look at ch 4 of IWS and see how different the information is between the two tools.
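
To illustrate the point, here's a quick Perl sketch (the address is just an example value I picked) showing the difference between the ASCII form of an IPv4 address, which a text search would find, and the four raw bytes actually stored in the network-related kernel structures that tools like Volatility and bulk_extractor look for:

#!/usr/bin/perl
# Quick illustration: an IPv4 address in a network-related kernel structure
# is four raw bytes, not the dotted-quad string a text search looks for.
use strict;
use warnings;
use Socket qw(inet_aton inet_ntoa);

my $ip     = "192.168.1.10";          # example value only
my $packed = inet_aton($ip);          # the 4-byte binary form

printf "ASCII form    : %s (%d bytes)\n", $ip, length($ip);
printf "Binary form   : %s (%d bytes)\n",
    join(" ", map { sprintf("0x%02X", ord($_)) } split(//, $packed)), length($packed);
printf "Round-tripped : %s\n", inet_ntoa($packed);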

There's a lot of really good content in the book, such as "file system tunneling", covered beginning on pg 101. 

While some of the images used as the basis of analysis in the book are no longer available online, several are still available, and the overall analysis process applies regardless of the image.

Analysis
Speaking of analysis processes, I ran across this blog post recently, and it touched on a couple of very important concepts, particularly:

This highlights the risk of interpreting single artefacts (such as an event record, MFT entry, etc) in isolation, as it doesn't provide any context and is (potentially) subject to misinterpretation.

Exactly! When we view artifacts in isolation, we're missing critical factors such as context, and in a great many instances, grossly misinterpreting the "evidence". This misinterpretation happens a lot more than we'd like to think, not due to a lack of visibility, but because it has simply become part of the DFIR culture.

Another profound statement from the author was:

...instead of fumbling and guessing, I reached out to @randomaccess and started discussing plausible scenarios.

Again...exactly! Don't guess. Don't spackle over gaps in analysis with assumption and speculation. It's okay to fumble, as long as you learn from it. However, most importantly, there's no shame in asking for help. In fact, it's quite the opposite. Don't listen to that small voice inside of you that's giving you excuses, like, "...oh, they're too busy...", or "...I could never ask them...". Instead, listen to the roaring Gunnery Sergeant Hartman (from "Full Metal Jacket") who's screaming at you to reach out and ask someone, Private Joker!!

For me, it's very validating to see others within the industry advocating the same approach I've been sharing for several years. Cyber defense is a team sport, folks, and going it alone just means that we, and our customers, are going to come up short.

Tools for Memory Analysis
In addition to the tools for memory analysis mentioned earlier in this blog post, several others have popped up over time. For example, here are two:

MemProcFS
ProcMemScan

Now, I haven't tried either of these tools yet, but both look promising.

Additional Resources:
CyberHacktics - Win10 Memory Analysis

Proactive Defense
"Proactive defense" means moving "left of bang", taking steps to inhibit or even obviate the threat actor, before or shortly after they gain initial access. For example, TheHackerNews recently reported on the Black Basta Ransomware gang, indicating that one means of gaining access is to coerce or trick a user into mounting a disk image (IMG) file and launching the VBS script embedded within it, to initially infect the system with Qakbot. Many have seen a similar technique to infect systems with Qakbot, sending ISO files with embedded LNK files. 

So, think about it...do your users require the ability to mount disk image files simply by double-clicking them? If not, consider taking these steps to address this issue; doing so will still allow your users to programmatically access disk image files, but will prevent them from mounting them by double-clicking, or by right-clicking and choosing "Mount" from the context menu. This cuts the head off of the attack, stopping the threat actor in their tracks.
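
The specifics are in the linked steps, but as a hedged sketch of one commonly described approach (the ProgID names and key paths below are my assumptions, and may not match the linked article exactly), checking an offline Software hive for the relevant "ProgrammaticAccessOnly" value with Parse::Win32Registry might look something like this:

#!/usr/bin/perl
# Hedged sketch: check whether the "ProgrammaticAccessOnly" value has been set
# on the mount verb for disk image ProgIDs in an offline Software hive.
# The key paths here are assumptions based on commonly described guidance.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <Software hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

foreach my $progid ("Windows.IsoFile", "Windows.VhdFile") {
    my $path = "Classes\\".$progid."\\shell\\mount";
    if (my $key = $root->get_subkey($path)) {
        if ($key->get_value("ProgrammaticAccessOnly")) {
            print $progid.": double-click/context menu mounting appears disabled\n";
        }
        else {
            print $progid.": mount verb still exposed to users\n";
        }
    }
    else {
        print $path." not found\n";
    }
}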

Taking proactive security steps...creating an accurate asset inventory (of both systems and applications), reducing your attack surface, and configuring systems beyond the default...means that you're going to have higher fidelity alerts, with greater context, which in turn helps alleviate alert fatigue for your SOC analysts. 

Open Reporting
Lots of us pursue/review open reporting when it comes to researching issues. I've done this more than a few times, searching for unique terms I find (e.g., Registry value names), first doing a wide search, then narrowing it a bit to try to find more specific information.

However, I'd add a strong caveat to this approach, in part due to open reporting like this write-up on Raspberry Robin, specifically due to the section on Persistence. That section starts with (emphasis added by me):

Raspberry Robin installs itself into a registry “run” key in the Windows user’s hive, for example:

However, the key pointed to is "software\microsoft\windows\currentversion\runonce\". The Run key is very different from the RunOnce key, particularly regarding how it's handled by the OS. 

Within that section are two images, neither of which is numbered. The caption for the second image reads:

Raspberry Robin persistence process following an initial infection and running at each machine boot

Remember where I bolded "user's hive" above? The simple fact that persistence is written to a user's hive means that the process starts the next time that user logs in, not "at each machine boot".
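
If you want to see for yourself exactly where the persistence lives, a minimal sketch using Parse::Win32Registry (the same module RegRipper is built on) to dump both the Run and RunOnce keys from a user's NTUSER.DAT, along with their LastWrite times, might look like this:

#!/usr/bin/perl
# Minimal sketch: list Run and RunOnce values from a user's NTUSER.DAT,
# along with each key's LastWrite time, to see where persistence actually lives.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

foreach my $path ("Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                  "Software\\Microsoft\\Windows\\CurrentVersion\\RunOnce") {
    if (my $key = $root->get_subkey($path)) {
        print $path."  [LastWrite: ".$key->get_timestamp_as_string()."]\n";
        foreach my $val ($key->get_list_of_values()) {
            print "  ".$val->get_name()." -> ".$val->get_data_as_string()."\n";
        }
    }
    else {
        print $path." not found\n";
    }
}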

Open reporting can be very valuable during analysis, and can provide insight that an analyst may not have otherwise. However, open reporting does need to be reviewed with a critical eye, and not simply taken at face value.

Sunday, November 27, 2022

Challenge 7 Write-up

Dr. Ali Hadi recently posted another challenge image, this one (#7) being a lot closer to a real-world challenge than a lot of the CTFs I've seen over the years. What I mean by that is that in the 22+ years I've done DFIR work, I've never had a customer pose more than 3 to 5 questions that they wanted answered, certainly not 51. And, I've never had a customer ask me for the volume serial number in the image. Never. So, getting a challenge that had a fairly simple and straightforward "ask" (i.e., something bad may have happened, what was it and when??) was pretty close to real-world.

I will say that there have been more than a few times where, following the answers to those questions, customers would ask additional questions...but again, not 37 questions, not 51 questions (like we see in some CTFs). And for the most part, the questions were the same regardless of the customer; once the issue was identified, questions of risk and reporting would come up: was any data taken, and if so, what data?

I worked the case from my perspective, and as promised, posted my findings, including my case notes and timeline excerpts. I also added a timeline overlay, as well as MITRE ATT&CK mappings (with observables) for the "case".

Jiri Vinopal posted his findings in this tweet thread; I saw the first tweet with the spoiler warning, and purposely did not pursue the rest of the thread until I'd completed my analysis and posted my findings. Once I posted my findings and went back to the thread, I saw this comment:

"...but it could be Windows server etc..so prefetching could be disabled..."

True, the image could be of a Windows server, but that's pretty trivial to check, as illustrated in figure 1.

Fig 1: RRPro winver.pl plugin output
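
For anyone who doesn't have RegRipper handy, a rough sketch of the same check, reading the standard CurrentVersion values from an offline Software hive via Parse::Win32Registry, might look like this:

#!/usr/bin/perl
# Sketch: pull basic OS version info from an offline Software hive,
# similar in spirit to what a winver-style RegRipper plugin reports.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <Software hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

if (my $key = $root->get_subkey("Microsoft\\Windows NT\\CurrentVersion")) {
    foreach my $name ("ProductName", "ReleaseId", "CurrentBuild", "InstallDate") {
        if (my $val = $key->get_value($name)) {
            print $name." : ".$val->get_data_as_string()."\n";
        }
    }
}
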
Checking to see if Prefetching is enabled is pretty straightforward, as well, as illustrated in figure 2.

Fig 2: Prefetcher Settings via System Hive

If prefetching were disabled, one would think that the *.pf files would simply not be created, rather than having several of them deleted following the installation of the malicious Windows service. The Windows Registry is a hierarchical database that includes, in part, configuration information for the Windows OS and applications, replacing the myriad configuration and ini files from previous versions of the OS. A lot of what's in the Registry controls various aspects of the Windows ecosystem, including Prefetching.
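
As a sketch of the same check shown in figure 2, reading the prefetcher setting from an offline System hive (locating the current ControlSet via the Select key first) might look something like this; the 0 through 3 meanings in the comment are the standard EnablePrefetcher settings:

#!/usr/bin/perl
# Sketch: read EnablePrefetcher from an offline System hive.
# 0 = disabled, 1 = application prefetching, 2 = boot prefetching, 3 = both.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <System hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

# Determine the current ControlSet from the Select key
my $current = $root->get_subkey("Select")->get_value("Current")->get_data();
my $ccs     = sprintf "ControlSet%03d", $current;

my $path = $ccs."\\Control\\Session Manager\\Memory Management\\PrefetchParameters";
if (my $key = $root->get_subkey($path)) {
    my $val = $key->get_value("EnablePrefetcher");
    print "EnablePrefetcher = ".($val ? $val->get_data() : "not set")."\n";
}
else {
    print $path." not found\n";
}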

In addition to Jiri's write-up/tweet thread of analysis, Ali Alwashali posted a write-up of analysis, as well. If you've given the challenge a shot, or think you might be interested in pursuing a career in DFIR work, be sure to take a look at the different approaches, give them some thought, and make comments or ask questions.

Remediations and Detections
Jiri shared some remediation steps, as well as some IOCs, which I thought were a great addition to the write-up. These are always good to share from a case; I included the SysInternals.exe hash extracted from the AmCache.hve file, along with a link to the VT page, in my case notes.

What are some detections or threat hunting pivot points we can create from these findings? For many orgs, looking for new Windows service installations via detections or hunting will simply be too noisy, but monitoring for modifications to the hosts file (%SystemRoot%\System32\drivers\etc\hosts) might be something valuable, not just as a detection, but for hunting and for DFIR work.
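
As a very simple sketch of what such a hunt might look like (the baseline value below is just a placeholder, not a real hash), you could hash the hosts file and compare it against a known-good baseline:

#!/usr/bin/perl
# Simple hunting sketch: hash the Windows hosts file and compare it to a
# known-good baseline; the baseline value below is a placeholder.
use strict;
use warnings;
use Digest::SHA;

my $hosts    = $ENV{'SystemRoot'}."\\System32\\drivers\\etc\\hosts";
my $baseline = "REPLACE_WITH_KNOWN_GOOD_SHA256";   # placeholder, not a real hash

my $sha = Digest::SHA->new(256);
$sha->addfile($hosts);
my $digest = $sha->hexdigest;

print "hosts file: $hosts\n";
print "SHA-256   : $digest\n";
print(($digest eq lc($baseline)) ? "Matches baseline\n" : "Differs from baseline - review!\n");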

Has anyone considered writing Yara rules for the malware found during their investigation of this case? Are there any other detections you can think of, for either EDR or a SIEM?

Lessons Learned
One of the things I really liked about this particular challenge is that, while the incident occurred within a "compressed" timeframe, it did provide several data sources that allowed us to illustrate where various artifacts fit within a "program execution" constellation. If you look at the various artifacts...UserAssist, BAM key, and even ShimCache and AmCache artifacts...they're all separated in time, but come together to build out an overall picture of what happened on the system. By looking at the artifacts together, in a constellation or in a timeline, we can see the development and progression of the incident, and then, by adding in malware RE, build out an even more complete picture with additional context and detail.

Conclusions
A couple of thoughts...

DFIR work is a team effort. Unfortunately, over the years, the "culture" of DFIR has been one that has developed into a bit of a "lone wolf" mentality. We all have different skill sets, to different degrees, as well as different perspectives, and bringing those to bear is the key to truly successful work. The best (and I mean, THE BEST) DFIR work I've done during my time in the industry has been when I've worked as part of a team that's come together, leveraging specific skill sets to truly deliver high-quality analysis.

Thanks
Thanks to Dr. Hadi for providing this challenge, and thanks to Jiri for stepping up and sharing his analysis!

Sunday, November 20, 2022

Thoughts on Teaching Digital Forensics

When I first started writing books, my "recipe" for how to present the information followed the same structure I saw in other books at the time. While I was writing books to provide content along the lines of what I wanted to see, essentially filling in the gaps I saw in books on DFIR for Windows systems, I was following the same formula other books had used to that point. At the time, it made sense to do this, in order to spur adoption.

Later, when I sat down to write Investigating Windows Systems, I made a concerted effort to take a different approach. What I did this time was present a walk-through of various investigations using images available for download on the Internet (over time, some of them were no longer available). I started with the goals (where all investigations must start), and shared the process, including analysis decisions and pivot points, throughout the entire process.

Okay, what does this have to do with teaching? Well, a friend recently reached out and asked me to review a course that had been put together, and what I immediately noticed was that the course structure followed the same formula we've seen in the industry for years...a one-dimensional presentation of single artifacts, one after another, without tying them all together. In fact, it seems that many materials simply leave it to the analyst to figure out how to extrapolate a process out of the "building blocks" they're provided. IMHO, this is why we see a great many analysts manually constructing timelines in Excel, after an investigation is "complete", rather than building one from the very beginning to facilitate and expedite analysis, validation, etc.

Something else I've seen is that some courses and presentations address data sources and artifacts one-dimensionally. We see this not only in courses, but also in other presented material, because this is how many analysts learn, from the beginning. Ultimately, this approach leads to misinterpretation of data sources (ShimCache, anyone??) and misuse of artifact categories. Joe Slowik (Twitter, LinkedIn) hit the nail squarely on the head when he referred to IoCs as "composite objects" (the PDF should be required reading). 

How something is taught also helps address misconceptions; for example, I've been saying for some time now that we're doing ourselves and the community a disservice when we refer to Windows Event Log records solely by their event ID; I'm not the only one to say this, Joachim Metz has said it, as well. The point is that event IDs, even within a single Windows Event Log, are NOT unique. However, it's this reductionist approach that also leads to misinterpretation of data sources; we don't feel that we can remember all of the nuances of different data sources, and rather than looking to additional data sources on which to build artifact constellations and verification, we reduce the data source to the point where it's easiest to understand.

So, we need a new approach to teaching this topic. Okay, great...so what would this approach look like? First, it would start off with core concepts of validation (through artifact constellations), and case notes. These would be consistent throughout, and the grade for the final project would be heavily based on the existence of case notes.

This approach is similar to the Dynamics mechanical engineering course I took during my undergraduate studies. I was in the EE program, and we all had to "cross-pollinate" with both mechanical and civil engineering. The professor for the Dynamics course gave points for following the correct process, even if a variable (say, one angular momentum term) was left out of the equation. What I learned from this was that trying to memorize discrete facts didn't work as well as following a process.

The progression of this "new" course would include addressing, for example, artifact categories; you might start with "process execution" because it's a popular one. You might build on something that persists via a Run key value...the reason for this will become apparent shortly. Start with Prefetch files, and be sure to include outlier topics like those discussed by Dr Ali Hadi. Be sure to populate and maintain case notes, and create a timeline from the file system and Prefetch file metadata (embedded time stamps)...do this from the very beginning.

Next, go to Windows Event Logs. If the system has Sysmon installed, or if Process Tracking is enabled (along with the Registry mod that enables full command lines) in the Security Event Log, add those records to the timeline. As the executable is being launched from a Run key (remember, we chose such an entry for a reason, from above), be sure to add pertinent records from the Microsoft-Windows-Shell-Core%4Operational.evtx Event Log. Also look for WER, "Application Popup", or other error records that may be available in the Application Event Log. Also look for indications of malware detections in logs associated with AV and other monitoring tools (e.g., SentinelOne, Windows Defender, Sophos, WebRoot, etc.). Add these to the timeline.
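
As an aside, the "Registry mod" mentioned above refers to the audit policy value that adds full command lines to Security event ID 4688 (process creation) records; a quick sketch of checking for it in an offline Software hive might look like this:

#!/usr/bin/perl
# Sketch: check whether full command lines are recorded in Security event
# ID 4688 records, via the audit policy value in an offline Software hive.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <Software hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $root = $reg->get_root_key;

my $path = "Microsoft\\Windows\\CurrentVersion\\Policies\\System\\Audit";
if (my $key = $root->get_subkey($path)) {
    if (my $val = $key->get_value("ProcessCreationIncludeCmdLine_Enabled")) {
        print "ProcessCreationIncludeCmdLine_Enabled = ".$val->get_data()."\n";
    }
    else {
        print "Value not set; 4688 records will not include full command lines\n";
    }
}
else {
    print $path." not found\n";
}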

Moving on to the Registry, we clearly have some significant opportunities here, as well. For example, looking at the ShimCache and AmCache.hve entries for the EXE (if available), we have an opportunity to clearly demonstrate the true nature and value of these artifacts, correcting the misinterpretations we so often see when artifacts are treated in isolation. We also need to bring in additional resources and Registry keys, such as the StartupApproved subkeys, etc.

We can then include additional artifacts, like the user's ActivitiesCache.db, the SRUM database, etc., but the overall concept here is to change the way we're teaching, and ultimately doing, DF work. Start with a foundation that requires case notes and artifact constellations, along with an understanding of how this approach leads and applies to validation. Change the approach by emphasizing first principles from the very beginning, and keeping them part of the education process throughout, so that they become part of the DFIR culture.

Monday, November 14, 2022

RegRipper Value Proposition

I recently posted to LinkedIn, asking my network for their input regarding the value proposition of RegRipper; specifically, how is RegRipper v3.0 of "value" to them, how does it enhance their work? I did this because I really wanted to get the perspective of folks who use RegRipper; what I do with RegRipper could be referred to as both "maintain" and "abuse". Just kidding, but the point is that I know, beyond the shadow of a doubt, that I'm not a "typical user" of RegRipper...and that's the perspective I was looking for.

Unfortunately, things didn't go the way I'd hoped. The direct question of "what is the value proposition of RegRipper v3.0" was not directly answered. Other ideas came in, but what I wasn't getting was the perspective of folks who use the tool. As such, I thought I'd try something a little different...I thought I'd share my perspective.

From my perspective, and based on the original intent of RegRipper when it was first released in 2008, the value proposition for RegRipper consists of:

Development of Intrusion Intel
When an analyst finds something new, either through research, review of open reporting, or through their investigative process, they can write a plugin to address the finding, and include references, statements/comments, etc.

For example, several years ago, I read about Project Taj Mahal, and found it fascinating how simple it was to modify the Registry to "tell" printers to not delete copies of printed jobs. This provides an investigator the opportunity to detect a potential insider threat, just as much as it provides a threat actor with a means of data collection. I wrote a plugin for it, and now, I can run it either individually, or just have it run against every investigation, automatically.
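
For anyone who hasn't looked under the hood of a plugin, here's a heavily simplified, illustrative skeleton...this is not the printer plugin I wrote, just the general shape of one, using the user's Run key (discussed above) as the target. It relies on RegRipper's ::rptMsg() reporting function, so it runs within RegRipper rather than standalone:

# Heavily simplified, illustrative RegRipper-style plugin skeleton; this is
# NOT the printer plugin described above, just the general shape of a plugin.
package example_run;
use strict;

my %config = (hive          => "NTUSER.DAT",
              hasShortDescr => 1,
              hasDescr      => 0,
              hasRefs       => 0,
              version       => 20221128);

sub getConfig     { return %config; }
sub getShortDescr { return "Example: list values from the user's Run key"; }
sub getHive       { return $config{hive}; }
sub getVersion    { return $config{version}; }

sub pluginmain {
    my $class = shift;
    my $hive  = shift;
    my $reg   = Parse::Win32Registry->new($hive);
    my $root  = $reg->get_root_key;

    my $key_path = "Software\\Microsoft\\Windows\\CurrentVersion\\Run";
    if (my $key = $root->get_subkey($key_path)) {
        ::rptMsg($key_path);
        ::rptMsg("LastWrite: ".$key->get_timestamp_as_string());
        foreach my $val ($key->get_list_of_values()) {
            ::rptMsg("  ".$val->get_name()." -> ".$val->get_data_as_string());
        }
    }
    else {
        ::rptMsg($key_path." not found.");
    }
}
1;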

Extending Capabilities
Writing a plugin means that the capabilities developed by one analyst are now available to all analysts, without every analyst having to experience the same investigation. Keep in mind, as well, that not all analysts will approach investigations the same way, so one analyst may find something of value that another analyst might miss, simply because their perspectives and backgrounds are different.

Over the years, a number of folks in the community have written plugins, but not all of them have opted to include those plugins in the Github repo. If they had, another analyst, at another organization, could run the plugin without ever having to first go through an investigation that includes those specific artifacts. The same is true within a team; one analyst could write a plugin, and all other analysts on the team would have access to that capability without needing that analyst present, even if they were on PTO, parental leave, or had left the company.

As a bit of a side note, writing things like RegRipper plugins or Yara rules provides a great opportunity when it comes to things like performance evaluations, KPIs, etc.

Retention of "Corporate Knowledge"
A plugin can be written and documented (comments, etc.) such that it provides more than just the basic information about the finding; as such, the "corporate knowledge" (references, context, etc.) is retained and available to analysts, even when the plugin author is unavailable. The plugin can be modified and maintained across versions of Windows, if needed.

All of these value propositions lead to greater analyst efficiency, effectiveness, and accuracy, providing greater context, letting analysts get to actual analysis faster, and reducing overall costs.

Now, there are other "value propositions" for me, but they're unique to me. For example, all I need to do is consult the CPAN page for the base module, and I can create a tool (or set of tools) that I can exploit during testing. I've also modified the base module, as needed, to provide additional information that can be used for various purposes.

I'm still very interested to understand the value proposition of RegRipper to other analysts.