Pages

Wednesday, June 25, 2025

Program Execution, follow-up

 Last Nov, I published a blog post titled Program Execution: The ShimCache/AmCache Myth as a means of documenting, yet again and in one place, the meaning of these artifacts. I did this because I kept seeing "...these artifacts illustrate program execution..." again and again, and this is simply incorrect. 

I recently ran across Mat's post on Medium called Chronos vs Chaos: The Art (and Pain) of Building a DFIR Timeline. Developing timelines is something I've done for a very long time, and continue to do even today. The folks I work with know that I document my incident reviews with a liberal application of timelining. I first talked about timelining in Windows Forensic Analysis 2/e, published in 2009, and by the time Windows Forensic Analysis 3/e was published 3 yrs later, timelining had its own chapter.

In his post, Mat quite correctly states that one of the issues with timelining is the plethora (my word, not his) of time stamp formats. This is abundantly true...64-bit formats, 32-bit formats, string formats, etc. Mat also states, in the section regarding "gaps", that "Analysts must infer or corroborate from context, which is tricky"; this is very true, but one of the purposes of a timeline is to provide that context, by correlating various data sources and viewing them side-by-side.
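To make the timestamp issue concrete, here's a minimal Python sketch (my own illustration, not from Mat's post) normalizing two of the most common formats, the 64-bit Windows FILETIME and the 32-bit Unix epoch, into one timeline-friendly representation; the sample values are arbitrary:

```python
from datetime import datetime, timedelta, timezone

# 64-bit FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
def filetime_to_utc(ft):
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

# 32-bit Unix epoch: seconds since 1970-01-01 UTC
def unixtime_to_utc(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc)

# Normalize both to a single string format for side-by-side timelining
fmt = "%Y-%m-%d %H:%M:%S UTC"
print(filetime_to_utc(132223104000000000).strftime(fmt))  # 2020-01-01 00:00:00 UTC
print(unixtime_to_utc(1577836800).strftime(fmt))          # 2020-01-01 00:00:00 UTC
```

Both sample values decode to the same instant, which is the whole point: whatever the on-disk format, everything gets reduced to one normalized representation before events are correlated.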

Not quite halfway into the post, Mat brings up ShimCache and AmCache, and with respect to ShimCache, refers to it as:

A registry artifact that logs executables seen by the OS. Specifically, it records the file path and the file’s last modified time at the moment the program was executed...

So, "yes" to "executables seen by the OS", but "no" to "at the time the program was executed". 

Why do I say this? If you refer back to my previous blog post on this topic, and then refer to Mandiant's article on ShimCache, the following statement will stand out to you:

It is important to understand there may be entries in the Shimcache that were not actually executed. [emphasis added]

So, a program doesn't actually have to be executed to appear in the ShimCache artifact.

With respect to the AmCache artifact, Mat states that it "does record execution times", but that is perhaps too general, too broad-brush an approach to the artifact. When considering the AmCache artifact in isolation, please refer to Analysis of the AmCache v2. For example, pg 27 of the linked PDF, under the "AmCache" section, states:

Furthermore, for the PE that is not part of a program, this is also a proof of execution. As for the last modification date of a registry File key, it corresponds with a run of ProgramDataUpdater more often than not.

This states that for Windows 10 version 1507, the File key LastWrite time is the last execution time, but not for the identified executable file. 

Finally, as an additional resource, Chris Ray over at Cyber Triage recently posted an Intro to ShimCache and AmCache, where he stated:

Due to the complex nature of these artifacts, it’s best to think of this data under evidence of existence rather than evidence of execution. In certain scenarios you can show a file executed with a high degree of confidence, but should never be the definitive proof that something ran.

Mat also states in his post, "AmCache is often used in conjunction with ShimCache...", which may be the case, but the "conjunction" part should not end there. If you're attempting to demonstrate program execution, for example, you should use all of the artifacts that Mat mentions in his post (MFT, Prefetch, UserAssist, ShimCache, AmCache, etc.), if available, in conjunction with others, to not only demonstrate program execution, but to also provide much greater insight and context than you'd get from just one of the artifacts.
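The "in conjunction" idea can be sketched in a few lines of Python; the records below are entirely hypothetical, standing in for what real parsers for each artifact would produce, but they show how merging sources into one sorted timeline lets the artifacts corroborate each other:

```python
# Hypothetical records from several artifact sources; in practice, parsers
# for the MFT, Prefetch files, UserAssist, AmCache, etc. would feed these.
events = [
    ("2024-03-01 10:02:13", "Prefetch",  "NETSCAN.EXE last run"),
    ("2024-03-01 10:02:10", "UserAssist", "netscan.exe execution count incremented"),
    ("2024-03-01 09:58:47", "MFT",        "netscan.exe created on disk"),
    ("2024-03-01 10:02:13", "AmCache",    "netscan.exe File key LastWrite"),
]

# Merge into a single timeline, sorted by normalized timestamp, so that
# each artifact provides context for the others
for ts, source, desc in sorted(events):
    print(f"{ts}  {source:10}  {desc}")
```

Here, no single entry "proves" execution, but file creation, UserAssist, and Prefetch lining up within minutes of each other tells a far more defensible story than any one artifact alone.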

When I was taking explosives training in the military, they had a saying for detonators: One is none, two is one. The idea is that one detonator, by itself, could fail, and has failed. But the likelihood of one of two detonators failing is extremely small. This idea can also be applied to demonstrating any particular category in digital forensics, including program execution...one artifact by itself, in isolation, is essentially "none". It could fail to do its job, particularly if we're talking about ShimCache or AmCache by themselves. 

You should also consider additional artifacts to provide more granular context around the execution. If Process Tracking is enabled, the Security Event Log can be valuable, particularly if the system also has the Registry value set enabling full command lines. If Sysmon is installed, the Sysmon Event Log would prove incredibly valuable. The Application Event Log may provide indications of application failures, such as Application Pop-up or Windows Event Reporting failures. The Application Event Log may also contain DCOM/10028 messages referring to netscan or Advanced IP Scanner being executed. The Windows Defender Event Log may contain ../1116 records indicating a detection, followed by ../1119 records indicating a critical failure in attempting to quarantine the detected behavior. 
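As a small illustration of that last pattern, here's a hedged Python sketch scanning for a Defender detection (event ID 1116) immediately followed by a quarantine failure (event ID 1119); the records are hypothetical (1117, used as filler, is the Defender "action taken" event), and real analysis would pull them from the exported Defender/Operational log:

```python
# Hypothetical (record time, event ID) pairs as might be pulled from the
# Windows Defender Event Log; 1116 = detection, 1119 = critical failure
# while acting on the detection, 1117 = remediation action taken.
records = [
    ("2024-03-01 10:02:14", 1116),
    ("2024-03-01 10:02:15", 1117),
    ("2024-03-01 11:30:02", 1116),
    ("2024-03-01 11:30:03", 1119),
]

def detection_followed_by_failure(recs):
    """Flag each 1116 record immediately followed by an 1119 record."""
    hits = []
    for (t1, id1), (t2, id2) in zip(recs, recs[1:]):
        if id1 == 1116 and id2 == 1119:
            hits.append((t1, t2))
    return hits

print(detection_followed_by_failure(records))
# -> [('2024-03-01 11:30:02', '2024-03-01 11:30:03')]
```

The first detection was remediated (1116 then 1117); the second was not (1116 then 1119), which is exactly the sequence that should send an analyst looking for what ran afterward.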

So, What?
Why does any of this matter? Who cares?

When I was performing PCI forensic investigations, one of the things Visa (the de facto "PCI Council", at the time) wanted us to include in our reports was a value called "window of compromise". This equated to the time from when the endpoint was compromised and the credit card gathering malware was placed on it, to the point where the compromise was detected and responded to/remediated. During one investigation, I found that the endpoint had been compromised, the malware dropped and launched, and then shortly thereafter, the installed AV detected and quarantined the malware. The threat actor then returned about 6 weeks later, on about 6 Jan, and put the malware back on the endpoint; this one wasn't detected by the AV. 

Now, if I had simply said that the "window of compromise" began when the malware was first placed on the system, without qualification or context, then Visa could have assessed a fine based on the number of credit cards processed over that 6 week period. That period covered the Thanksgiving-to-Christmas time frame, when historically more purchases are made, and the assessment of processing volume would have had a significant impact on the retailer. 

At the time, the malware that a lot of threat actors were using had a component that was "compiled" Perl code, and each time it was launched, the "compiled" Perl runtime was extracted into a unique folder path. Using the creation and last modification times of those folders, we could determine when and how often these components were run. As the malware had been quarantined by the AV, as expected, we found no indication of these folders during that 6 wk period.

The outcome of an investigation...your findings...can have a profound impact on someone, or on an organization. As such, having context beyond just the ShimCache or the AmCache, incorrectly put forth as "evidence of execution" solely and in isolation, is extremely important. 

I've Seen Things, pt II

As a follow-on to my previous post with this title, I wanted to keep the story going; in fact, there are likely to be several more posts in this series, so stay tuned.

And hey, I'm not the only one sharing my journey! Check out Josh's blog, particularly his recent post about how he broke into cybersecurity! I might have been drawn to Josh's post because, like me, he's a former Marine, although I can say that I was in the Corps back before computers were in common usage, back when we used radios that were 1950s tech, built in the 1970s. We didn't cross paths...I was off of active duty 12 yrs before Josh went to boot camp, but even so, there's some commonality in shared traditions and experiences.

Okay...back to it!

Programming
Programming is talked about a great deal within the industry, particularly within DFIR. Some folks will say that you absolutely need to be able to program, and even have very strong feelings about the language of choice, while others will do just fine with basic shell scripting and batch files. I've met some folks who are really great programmers, coming up with either individual projects, or more team- or community-based ones, like Volatility. A lot of the programming very often seems specialized, like HindSight, while other projects and contributions might be a bit more general. Even so, some of the absolute best DFIR analysts I've ever worked with have had minimal programming capabilities, not going much beyond shell scripting and regexes. 

As a result, when it comes to programming, your mileage may vary. I will say this, though...the experience of programming, in whichever language or framework you opt for, has the benefit of helping you understand how to break things down into manageable "chunks". Whether you're writing some code to manage logs, or you're leading an IR engagement, you'll realize that to get from A to Z, you first have to get from A to B, then to C, and then to D, and so on. Accomplishing a task by writing code forces you to approach the problem in that manner, and as such, has benefits outside of just getting the coding task completed.

I started programming BASIC on an Apple IIe in the early '80s, and then did some small programming tasks on a Timex-Sinclair 1000; "small" because saving programs to or loading them from a tape recorder, or copying them out of a magazine, was a whole new level of hell. In high school, I took AP Computer Science the first year that it was offered; the course used Pascal as the language of choice, and when I got to college, it was back to BASIC on TRS-80 systems. When I got to graduate school (mid-'90s), it was MATLAB, a small amount of C and some M68000 assembly, and then Java. At one point, just as I was leaving active duty, I had a small consulting gig teaching Java programming to a team at a business.

After I moved to the private sector, I taught myself Perl, initially because the network engineers were looking for someone to join their team with that skillset. I later ran across Dave Roth's work, and found some really fantastic uses for his modules, bought his book, and even got in touch with him directly. I continued to stick with Perl, as at the time, it was the only language that had a functioning module for accessing offline Registry hives. I'd started writing my own module for this, but ran across James Macfarlane's module and figured, why not?

The Roles
My first role out of the military was at a small defense contracting firm, and that really didn't work out. After a short stay, I moved on to Trident Data Systems; these folks were another defense contractor, with offices in San Antonio, TX, and LA. As it turned out, I ended up on the commercial team, which was great. We did vulnerability assessments, and because we had a really good sales guy, that's what we did, and as a team, we got really good at it. Every now and then, something a bit different would come along, like a pen test, but for the most part, we had a lot of work in vuln assessments.

My boss in this job was a retired Army Colonel; right off the bat, he told me that we were using ISS's Internet Scanner product, and that it would probably take about 2 - 3 years of running the tool regularly to really understand what it was doing. I thought he was right...there was a pretty, and pretty complicated, GUI, and if you didn't turn off some of the defaults, like the check for the net send "vulnerability", you'd end up sending a message to every desktop, creating a headache for the local admin. 

As it turned out, within about 6 months, I started using those Perl skills I'd developed, along with Dave Roth's modules, to begin writing a replacement for the Internet Scanner commercial product. The ISS product was a black box...you pushed a button, and it gave you answers back. Only we didn't know what the product was "looking at", nor how it was determining that something was "vulnerable". As we began to look closer, and with a lot of digging into the MS KnowledgeBase, we began to figure out what the product was checking, and we started to see that some of the answers were wrong. One of the biggest issues we ran into was when the tool told us that the AutoAdminLogon functionality was "enabled" on 21 systems in one particular office, when it turned out that it was only truly enabled on one system. To make things worse, the customer knew that only one system had the functionality enabled; it had previously been enabled on the other 20 systems, but the customer had knowingly and painstakingly disabled it. So, it was a good thing that we were running the ISS product side-by-side with the new tool we were developing, and checking the work.

This was one of the first moments in my private sector career that really illustrated the need to understand, to really know, what your tools were doing and how they were doing it. This led to other moments throughout the ensuing 25+ years, as this lesson was continually revisited, again and again.