RegRipper Plugins
I don't often get requests on GitHub for modifications to RegRipper, but I got one recently that was very interesting. Duckexmachina said that they'd run log2timeline and found entries in one ControlSet within the System hive that weren't in the ControlSet marked as "current", and as a result, those entries were not listed by the appcompatcache.pl plugin.
As a test, I wrote shimcache.pl, which accesses all available ControlSets within the System hive and displays the entries listed in each. In the limited testing I've done with the new plugin, I haven't yet found differences in the AppCompatCache entries between the available ControlSets; in the few System hives I have available for testing, the LastWrite times for the keys in the available ControlSets have been identical.
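For those curious about the approach, here's a minimal standalone sketch (not the plugin itself) built on Parse::Win32Registry, the same Perl module RegRipper uses. It assumes a Vista/Win7-style hive, where the value lives under Control\Session Manager\AppCompatCache (on XP the key is AppCompatibility), enumerates the available ControlSets, and reports each AppCompatCache key's LastWrite time; parsing the binary cache data itself varies by Windows version and is omitted here, as is most error handling:

#!/usr/bin/perl
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <path to System hive>\n";
my $reg  = Parse::Win32Registry->new($hive)
    or die "Could not open $hive as a Registry hive\n";
my $root = $reg->get_root_key;

# The Select key's "Current" value identifies the ControlSet marked as current
my $current = $root->get_subkey('Select')->get_value('Current')->get_data;
printf "Current ControlSet: ControlSet%03d\n\n", $current;

# Walk every ControlSet00n key and report the AppCompatCache key's LastWrite time
foreach my $cs ($root->get_list_of_subkeys) {
    next unless $cs->get_name =~ /^ControlSet\d+$/;
    my $acc = $cs->get_subkey('Control\Session Manager\AppCompatCache');
    next unless defined $acc;
    print $cs->get_name, "\\Control\\Session Manager\\AppCompatCache\n";
    print "  LastWrite: ", $acc->get_timestamp_as_string, "\n";
}

The plugin itself runs like any other, e.g., rip.pl -r SYSTEM -p shimcache.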
As you can see in the timeline excerpt below, the AppCompatCache keys in both ControlSets appear to be written at shutdown:
Tue Mar 22 04:02:49 2016 Z
FILE - .A.. [107479040] C:\Windows\System32\config\SOFTWARE
FILE - .A.. [262144] C:\Windows\ServiceProfiles\NetworkService\NTUSER.DAT
FILE - .A.. [262144] C:\Windows\System32\config\SECURITY
FILE - .A.. [262144] C:\Windows\ServiceProfiles\LocalService\NTUSER.DAT
FILE - .A.. [18087936] C:\System Volume Information\Syscache.hve
REG - M... HKLM/System/ControlSet002/Control/Session Manager/AppCompatCache
REG - M... HKLM/System/ControlSet001/Control/Session Manager/AppCompatCache
FILE - .A.. [262144] C:\Windows\System32\config\SAM
FILE - .A.. [14942208] C:\Windows\System32\config\SYSTEM
Now, there may be instances where this is not the case, but for the most part, what you see in the excerpt above is what I tend to see in the recent timelines I've created.
I'll go ahead and leave the shimcache.pl plugin as part of the distribution and see how folks use it. I'm not sure that parsing all available ControlSets is necessary, or even useful, for every plugin that parses the System hive. If I need to see something from a historical perspective within the System hive, I'll either go to the RegBack folder and extract the copy of the hive stored there, or access any Volume Shadow Copies that may be available.
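As a quick aside, for the Volume Shadow Copy route on a live system, the console commands look something like the following (the shadow copy index and link path are just examples):

vssadmin list shadows
mklink /d C:\vsc1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\

The RegBack copy of the hive can simply be copied out from C:\Windows\System32\config\RegBack\SYSTEM.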
Tools
MS has updated their Sysmon tool to version 4.0. There's also this great presentation from Mark Russinovich that discusses how the tool can be used in an infrastructure. It's well worth the time to go through it.
Books
A quick update to my last blog post about writing books...every now and then (and it's not very often), when someone asks if a book is going to address "what's new" in an operating system, I'll find someone who can actually add some detail to the request. For example, the question may be about new functionality in the operating system, such as Cortana, Continuum, new browsers (Edge, Spartan), new search functionality, etc., and the artifacts left on the system and in the Registry through their use.
These are all great questions, but something that isn't readily apparent to most folks is that I'm not a testing facility or company. I'm one guy. I'm writing this blog post on a Dell Latitude E6510...I don't have a touch-screen device available to test functionality such as...well...the touch screen, a digital assistant, etc., and I don't have access to devices such as a Windows phone or a Surface.
RegRipper is open source and free. As some are aware, I end up giving a lot of the new books away. I don't have access to a device that runs Windows 10 with a touch screen, or that can run Cortana. I don't have access to MSDN to download and test new versions of Windows, MS Office, etc.
Would I like to include those sorts of artifacts as part of RegRipper, or in a book? Yes, I would...I think it would be really cool. But instead of asking, "...does it cover...", ask yourself instead, "what am I willing to contribute?" It could be devices for testing, or the data extracted from said devices, along with a description of the testing performed, etc. I do what I can with the resources I have available, folks.
Analysis
I was pointed to this site recently, which begins a discussion of a technique for finding unknown malware on Windows systems. The page is described as "part 1 of 5", and after reading through it, I think it's a good idea to have things like this available to DFIR analysts, but I don't agree with the process itself.
Here's why...I don't agree that long-running processes (hash computation/comparison, carving unallocated space, AV scans, etc.) are the first things that should be done when kicking off analysis. There is plenty of analysis that can be conducted in parallel while those processes are running, and the necessary data for that analysis should be extracted first.
Analysis should start with identified, discrete goals. After all, imaging and analyzing a system can be an expensive (in terms of time, money, staffing resources, etc.) process, so you want to have a reason for going down this road. "Find all the bad stuff" is not a goal; what constitutes "bad" in the context of the environment in which the system exists? Is the user a pen tester, or do they find vulnerabilities and write exploits? If so, "bad" takes on an entirely new context. When tasked with finding unknown malware, the first question should be, "What leads us to believe that this system has malware on it?" I mean, honestly, when a sysadmin or IT director walks into their office in the morning, do they have a listing of systems on the wall and just throw a dart at it, and whichever system the dart lands on suddenly has malware on it? No, that's not the case at all...there's usually something (unusual activity, process performance degradation, etc.) that leads someone to believe that there's malware on a system. And usually when these things are noticed, they're noticed at a particular time. Getting that information can help narrow down the search, and as such, it should be documented before kicking off analysis.
Once the analysis goals are documented, we have to remember that malware must execute in order to do damage. Well, that is...in most cases. As such, what we'd initially want to focus on is artifacts of process execution, and from there look for artifacts of malware on the system.
Something I discussed with another analyst recently is that I love analyzing Windows systems because the OS itself will very often record artifacts as the malware interacts with its ecosystem. Some malware creates files and Registry keys/values, and this functionality can be found within the code of the malware itself. However, as some malware executes, there are events that may be recorded by the operating system that are not part of the malware's code. It's like dropping a rock in a pond...there's nothing about the rock, in and of itself, that requires that ripples be produced; rather, producing ripples is something the pond does in reaction to the rock interacting with it. The same can very often be true of Windows systems and malware (or a dedicated adversary).
That being said, I look forward to reading the remaining four blog posts in the series.
Harlan,
I've been offline for a while and am trying to catch up on the DFIR community, including my feeds. Excellent point, and I completely agree about the area to explore first to find malware. Checking for process execution first is an effective technique for locating unknown malware. The majority of the time, I'm able to find the malware solely by looking at these artifacts, and going from there fills in the blanks. The other benefit is that this can help save time when dealing with a system that someone (aka an end user) thinks might be infected but isn't. It's a good way to identify false positives without wasting a ton of time on un-needed examination steps.
Corey,
Thanks for the comment, but not everyone feels the way you (or I) do about this topic. Most simply do not want to risk issues with what some deem "un-needed examination steps"...