I recently posted a little something on IOCs, and I wanted to take a moment to extend that, particularly the part where I said, "Engage with those with adjacent skill sets". I think that this is an important undertaking, especially when we're talking about IOCs, because we can get better, more valuable IOCs when we engage with those with adjacent skill sets and understand what their needs are. This is true for responders, DF analysts, and malware analysts, and we can even include pen testers, as well.
First, a word on specialization. Many of us in the infosec world recognize the need and importance of understanding a general body of knowledge, but at the same time, many of us also gravitate toward a particular specialization, such as digital forensic analysis of mobile devices or Windows systems, web app assessments/pen testing, etc. Very often, we find this specialization to be extremely beneficial to us, not only in what we do, but in our careers, and often our hobbies, as well. What happens when we try to do too much...say, IR, DF, and pen testing? Well, we really don't become an expert at anything in particular, do we? Several years ago, I sat on a panel at the SANS Forensic Summit, and the question was asked about who you'd want to show up to respond to an incident...my answer was someone who responded to incidents, not someone who was doing a pen test last week, and pulling cable the week before that. Think about it...who do you want doing your brain surgery? A trained, experienced neurosurgeon, or someone who, up until 6 months ago, was driving trucks? My point is that due to how complex systems have become, who can really be an expert in everything?
Okay, so to tie that together, while it's good to specialize in something, it's also a really good idea to engage with those with other, ancillary, adjacent specializations. For example, as a digital forensic analyst, I've had some really outstanding successes when I've worked closely with an expert malware analyst (you know who you are, Sean!!). By working together with the malware analyst, and providing what they need to do their job better (i.e., context, specific information such as paths, etc.), and getting detailed information back from them, we were able to develop a great deal more (and more valuable) information for our customer, in less time, than if we'd each worked separately on the case, and even more so than if we'd worked solo, without the benefit of the other's skills.
Part of that success had to do with sharing IOCs between us. Well, we started by sharing the specific indicators associated with what was found...I had not found the indication of the infection until running the fourth AV scanner (a relatively obscure free AV scanner named 'a-squared'). From there, I referred to my timeline, and was able to determine why there seemed to be no Registry-based auto-start mechanisms for the malware. I then provided the information I had available to the malware analyst, including information about the target platform, where the malware was found, a copy of the malware itself, etc. After disabling WFP on his analysis system, the malware guy got the infected files up and running, and pulled information out of memory that I was then able to use to further my analysis within the acquired image (i.e., domain name, etc.). Using this information, I targeted specific artifacts...for example, the malware analyst shared that once the malware DLL was in memory, it was no longer obfuscated, so he was able to parse out the PE headers of the file, including the import table. As the malware DLL imported functions from the WinInet API, it was reasonable to assume (and was verified through analysis) that the malware DLL itself was capable of off-system communications. This made it clear not only that additional malware was not required, but also told us exactly where we should be looking for artifacts. As such, in my analysis of the acquired image, I focused on the IE history for the user in question and the pagefile. I ran strings across the pagefile, looking for indications of the domain name we had, and then wrote a Perl script that would go into the pagefile at the offset of each hit of interest, and grab 100 bytes on either side of the hit.
What we ended up getting was several web server responses to the malware's HTTP off-system communications, which included time stamps from the web server, as well as server directives telling the client to not cache anything.
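The extraction step described above was a short Perl script; a rough Python sketch of the same idea (the file name, search string, and 100-byte window here are placeholders, not the actual case data) would look something like this:

```python
import os
import re

def grab_context(path, needle, window=100):
    """Scan a binary file (e.g., a pagefile) for a byte string and
    return (offset, surrounding bytes) for each hit found."""
    with open(path, "rb") as f:
        data = f.read()
    hits = []
    for m in re.finditer(re.escape(needle), data):
        start = max(0, m.start() - window)
        end = min(len(data), m.end() + window)
        hits.append((m.start(), data[start:end]))
    return hits

# Example: look for the C2 domain seen in the malware's HTTP traffic.
# "baddomain.com" is a placeholder, not the domain from the case.
if os.path.exists("pagefile.sys"):
    for offset, context in grab_context("pagefile.sys", b"baddomain.com"):
        print(hex(offset), context)
```

Reading the whole pagefile into memory is fine for a sketch like this; against a multi-gigabyte pagefile, you'd want to scan in chunks instead.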
Now, each of us focusing on our specialized areas of analysis, but treating each other like "customers", worked extremely well. We each had a wealth of information available to us and worked closely to determine what the other needed, and what was most valuable.
Addendum: While driving home from work in traffic today, I thought of something that might reveal a bit more about this exchange...
At the early stage of my analysis, I had an acquired image from a system, from which I'd created a timeline, and against which I had run four AV scans. The result of the last scan was where one file was identified as "malicious". Looking at that file within the image, via FTK Imager, I could see that the file was obfuscated (by the contents of the import table), although I didn't know how it was obfuscated (packed, encrypted, both, etc.). And I only had the acquired image...I didn't have a memory dump. So before I sent it to the malware guy, I had to figure out how this thing was launched...after all, I couldn't just send him the file and ask him, "what can you tell me about this file?", because his answer would be "not much", or not much that I couldn't already figure out for myself (MD5, strings, etc.).
So, I developed some context around the file, by figuring out how it had been launched on the system...which turned out to be that a regular system DLL had been modified to load the malicious DLL. Ah, there's the context! So I could send both files to the malware guy. When I did, it turned out that the regular system DLL on the malware guy's analysis system was 'protected' by Windows File Protection (WFP), so he couldn't just copy the two files onto the system and reboot to get everything to work. So I told him which Registry key to use to just disable WFP for the installation process, and everything worked.
I should point out that many of the EXE samples of malware are run through dynamic analysis by the analyst copying the EXE to the desktop and double-clicking it. However, this was a DLL, so that wouldn't work. Using rundll32.exe might not work, either...figuring out any needed arguments would just take up a lot of time. So, by putting in a few more minutes (because that's all it really amounted to) of analysis, I was able to provide the malware guy with enough information to get the malware sample up and running on the test system.
Once the malware guy got things going on his end, he was able to provide me with information about Registry keys accessed by the malware (in this case, none), as well as more details regarding off-system communications. Because he saw HTTP traffic leaving the test system, this gave us some information as to where to look with respect to how it was doing this, which was confirmed by checking the import table of the DLL once it had been loaded into memory. The malware guy provided me with this information, which told me pretty much where to look for artifacts of the off-system communications.
Yes, 'strings' run against the now un-obfuscated DLL would have provided some information, but by parsing the import table, that information now had the benefit of additional context, which was further added to through the use of more detailed analysis of the malware. For example, the data stolen by the malware was never written to disk...once it was captured in memory, it was sent off the system via the HTTP communications. This was extremely valuable information, as it really narrowed down where we should expect to find artifacts of the data collected/collection process.
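As a trivial illustration of why that context matters, here is a minimal 'strings'-style extractor in Python that preserves the file offset of each hit, so a string can later be tied back to wherever it sits in the file; this is a sketch of the idea, not a replacement for the real tool, and the sample bytes are made up:

```python
import re

def strings_with_offsets(data, min_len=4):
    """Return (offset, string) pairs for runs of printable ASCII,
    mimicking 'strings' but preserving where each hit was found."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [(m.start(), m.group().decode("ascii"))
            for m in pattern.finditer(data)]

# Made-up sample buffer; in practice, this would be file contents.
sample = b"\x00\x01ClearEventLog\x00junk\x02\x03LoadLibraryA\x00"
for off, s in strings_with_offsets(sample):
    print(hex(off), s)
```

The offset alone doesn't tell you whether a string came from the import table or the code section, but it's the hook that lets you map each hit back to the PE structure and make that determination.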
The point of all this is that rather than providing simply what is easiest, by collecting additional data/context about particular artifacts, we can provide useful information to someone else, which in turn increases the quality of the information (or intelligence) that we receive, and very often in a much quicker, more timely manner. In short, I help you to help me...
Now, back to our regularly scheduled program, already in progress...
Okay...so what? Consider this recent blog post...while it is very cool that 7Zip can be used to export the various sections of a PE file into different files, and you can then run 'strings' on those exported sections, one would think that parsing the PE headers would also be beneficial, and provide a modicum of context to the strings extracted from the .text section. Why does that matter? Well, consider this post...from my perspective as a host-based analyst, knowing that the string 'ClearEventLog' was found in the import table versus in the code section makes a huge difference in how I view and value that artifact.
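One way to get that context is to map a string's file offset back to the section it lives in by walking the PE section table. Here's a bare-bones sketch (header parsing only, no validation; field offsets are from the PE/COFF format, and this is illustrative rather than production code):

```python
import struct

def section_for_offset(pe_data, file_offset):
    """Walk a PE file's section table and return the name of the
    section whose raw data range contains the given file offset."""
    # e_lfanew at offset 0x3c in the DOS header points to 'PE\0\0'
    e_lfanew = struct.unpack_from("<I", pe_data, 0x3C)[0]
    assert pe_data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
    coff = e_lfanew + 4
    num_sections = struct.unpack_from("<H", pe_data, coff + 2)[0]
    opt_hdr_size = struct.unpack_from("<H", pe_data, coff + 16)[0]
    # Section table follows the 20-byte COFF header and optional header
    sec_table = coff + 20 + opt_hdr_size
    for i in range(num_sections):
        entry = sec_table + i * 40        # each section header is 40 bytes
        name = pe_data[entry:entry + 8].rstrip(b"\x00").decode("ascii")
        # SizeOfRawData and PointerToRawData sit at +16 and +20
        raw_size, raw_ptr = struct.unpack_from("<II", pe_data, entry + 16)
        if raw_ptr <= file_offset < raw_ptr + raw_size:
            return name
    return None
```

Combined with a strings listing that keeps offsets, this tells you whether a hit like 'ClearEventLog' falls in the code section or among the import structures...which, as described above, changes how much weight you give it.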
Consider an email address that you found somewhere in an acquired image...wouldn't the relative value of that artifact depend on the context, such as whether or not the email address was associated with actual emails? And if the address were found to be in an email, wouldn't its context...whether it was found in the To or From block, or within the body of the email...potentially have a significant impact on your exam?
My point is that sometimes what we may find to be interesting from our perspective may not be entirely useful to someone else, particularly someone with an adjacent skill set. Other times, something that we think is completely and totally trivial and not interesting at all may be enormously useful to someone else. A malware analyst may not find a particular bit of malware that uses the Run key as a persistence mechanism very interesting, but as a Windows analyst, that piece of information is extremely valuable to me, not only with respect to that particular sample, but also when correlated with other samples, as well as other threats.
So, some thoughts regarding IOCs:
1. Specific, targeted IOCs only really work well for specific samples of malware or specific incidents. By themselves, they may have very limited value. However, when combined/aggregated with other IOCs, they have much greater overall value, in that they will show trends from which more general IOCs can be derived.
2. You don't always have to create a new IOC...sometimes, updating one you already have works much better. Rather than having an IOC that looks for a specific file in a specific directory path (e.g., C:\Windows\ntshrui.dll), why not look for all files of a particular type in that directory, and potentially whitelist the good ones?
3. Working together with analysts with adjacent skill sets, and understanding what artifacts or indicators are valuable to them, produces better overall IOCs. A malware analyst may find an MD5 hash to be useful, but a DF analyst may find that the full path to the file itself, as well as PE headers from the file, are more useful. A malware analyst may find the output of 'strings' to be useful, but a DF analyst may need some sort of context around or associated with those strings.
4. No man, or analyst, is an island. As smart as each of us may be, we're not as smart as several of us working together. Tired of the clichés? Good. Working with other specialists...malware and memory analysts, host-based DF folks, network analysis folks...is a much better solution and produces a much better overall result than one guy trying to do it all himself.
1 comment:
I very much agree that working with other specialists, even if it is within the same specialty, is very beneficial. Bouncing ideas off one another and talking through things helps create a better deliverable work product for the client, as well as save time. Saving time benefits the client when you are charging by the hour, and in turn makes you a more valuable examiner to that client.