Mari had a great post recently that touched on timelines, which also happens to be the topic of her presentation at the recent HTCIA conference (which, by all Twitter accounts, was very well received).
A little treasure that Mari added to the blog post was how she went about modifying a Volatility plugin in order to create a new one. Mari says in the post, "...nothing earth shattering...", but you know what, sometimes the best and most valuable things aren't earth shattering at all. In just a few minutes, Mari created a new plugin, and it also happens to be her first Volatility plugin. She shared her process, and you can see the code right there in the blog post.
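If you've never written one, the barrier really is that low. The sketch below is not Mari's plugin; it's just a minimal, hypothetical illustration of the pattern she describes: subclass an existing Volatility 2.x plugin, inherit its calculate() method, and override only the output. Drop a file like this into a directory passed via --plugins, and it shows up as a new command.

```python
# Minimal, hypothetical sketch of the pattern: reuse an existing
# plugin's calculate() and change only what gets rendered.
import volatility.plugins.taskmods as taskmods

class PSListShort(taskmods.PSList):
    """Example plugin: pslist's data, pared-down output."""

    def render_text(self, outfd, data):
        # 'data' is the iterator of _EPROCESS objects produced by
        # the calculate() method inherited from PSList
        for task in data:
            outfd.write("{0}\t{1}\n".format(task.UniqueProcessId,
                                            task.ImageFileName))
```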
Scripting
Speaking of sharing...well, this has to do with DFIR in general, but not Windows specifically...I ran across this fascinating blog post recently. In short, the author developed a means (using Python) for turning listings of cell tower locations (pulled from phones by Cellebrite) into a Google Map.
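I haven't seen that author's code, but the general idea is simple enough to sketch. Assuming the Cellebrite export has been reduced to a CSV of tower coordinates (the column names below are made up), a few lines of Python will produce a KML file that Google Maps/Earth will happily plot:

```python
# Sketch: turn a CSV of cell tower hits into a KML file for Google
# Maps/Earth. The CSV layout (tower_id, lat, lon) is a made-up
# stand-in for whatever the actual export contains.
import csv
import xml.sax.saxutils as saxutils

def towers_to_kml(csv_path, kml_path):
    placemarks = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            name = saxutils.escape(row["tower_id"])
            # KML coordinates are longitude,latitude
            placemarks.append(
                "<Placemark><name>%s</name>"
                "<Point><coordinates>%s,%s</coordinates></Point>"
                "</Placemark>" % (name, row["lon"], row["lat"])
            )
    with open(kml_path, "w") as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        out.write('<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n')
        out.write("\n".join(placemarks))
        out.write("\n</Document></kml>\n")

towers_to_kml("towers.csv", "towers.kml")
```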
A while back, I'd written and shared a Perl script that did something similar, except with WiFi access points.
The point is that someone had a need and developed a tool and/or process for (semi-)automatically transforming the original raw data into a final, useful output format.
.pub files
I ran across this ISC Handler Diary recently...pretty interesting stuff. Has anyone seen or looked at this from a process creation perspective? The .pub files are OLE compound "structured storage" files, so has anyone captured information from endpoints (IRL, or via a VM) that illustrates what happens when these files are launched?
For detection of these files within an acquired image, there are some really good Python tools available for generally parsing OLE files; for example, there's Didier's oledump.py, as well as decalage/oletools. I really like oledump.py, and have used it for various testing purposes in the past, usually against files either from actual cases (after the fact) or test documents downloaded from public sources.
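As an aside, if you want to poke at these files programmatically rather than via the command line, the olefile module (the parsing library underneath oletools) makes enumerating streams trivial; a minimal sketch, with a made-up file name:

```python
# Minimal sketch using the olefile module to confirm a file is an OLE
# compound file and enumerate its streams.
import olefile

path = "sample.pub"  # made-up file name
if olefile.isOleFile(path):
    ole = olefile.OleFileIO(path)
    for entry in ole.listdir():
        # each entry is a list of path components,
        # e.g. ['Macros', 'VBA', 'Module1']
        print("/".join(entry))
    ole.close()
```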
A while back I wrote some code (i.e., wmd.pl) specifically to parse OLE structured storage files, so I modified that code to essentially recurse through the OLE file structure and, upon reaching a stream, simply dump that stream to STDOUT in a hex-dump format. However, as I dug through the API, I found some interesting information embedded within the file structure itself. So, while I'm using Didier's oledump.py as a comparison for testing, I'm not so much interested in replicating the great work that he's done already as I am in finding new things to pull out, and new ways to use the information, such as extracting date information (for possible inclusion in a timeline, or in threat intelligence).
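The Perl is still a work in progress, but the core logic is easy to express in Python terms; the sketch below (using olefile, and not the ole2.pl code itself) walks the streams and dumps each one to STDOUT in hex-dump format:

```python
# Rough Python analogue of what ole2.pl does with each stream: walk
# the structure and hex-dump stream contents to STDOUT. This is an
# illustration using olefile, not the Perl code itself.
import olefile

def hexdump(data, width=16):
    # classic offset / hex / ASCII layout
    for i in range(0, len(data), width):
        chunk = data[i:i + width]
        hexpart = " ".join("%02x" % b for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print("%08x  %-*s  %s" % (i, width * 3, hexpart, text))

ole = olefile.OleFileIO("sample.pub")  # made-up file name
for entry in ole.listdir():
    print("=== %s ===" % "/".join(entry))
    hexdump(ole.openstream(entry).read())
ole.close()
```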
So I downloaded a sample found on VirusTotal, and renamed the local copy to be more in line with the name of the file that was submitted to VT.
Here's the output of oledump.py, when run across the downloaded file:
[oledump.py output]
Now, here's the output from ole2.pl, the current iteration of the OLE parsing tool that I'm working on, when run against the same file:
[ole2.pl output]
As you can see, there are more than a few differences in the outputs, but that doesn't mean that there's anything wrong with either tool. In fact, it's quite the opposite. Oledump.py uses a different technique for tracking the various streams in the file; ole2.pl uses the designators from within the file itself.
The output of ole2.pl has 5 columns:
- the stream designator (from within the file itself)
- a tuple that tells me:
  - whether the stream is a "file" (F) or a "directory" (D)
  - whether the stream contains a macro
- the "type" (from here); basically, is the property a PropertySet?
- the date (the OLE VT_DATE format is explained here; see the conversion sketch below)
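For reference, a VT_DATE value is a floating-point count of days since 30 December 1899, with the fractional part carrying the time of day, so converting it to something human-readable is a one-liner:

```python
# Converting an OLE VT_DATE to a human-readable timestamp: the value
# is a double counting days (fractional part = time of day) from
# 30 December 1899.
from datetime import datetime, timedelta

def vt_date(value):
    return datetime(1899, 12, 30) + timedelta(days=value)

print(vt_date(42646.5))  # 2016-10-03 12:00:00
```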
Part of the reason I wrote this script was to see which sections within the OLE file structure had dates associated with them, as perhaps that information can be used as part of building the threat intel picture of an incident. The script has embedded code to display the contents of each of the streams in a hex-dump format; I've disabled the code as I'm considering adding options for selecting specific streams to dump.
Both tools use the same technique for determining if macros exist in a stream (something I found at the VBA_Tools site).
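As I recall, the check amounts to searching the raw stream bytes for the marker left behind by the compressed "Attribute" lines that begin every VBA module; in Python terms, something like:

```python
# Heuristic for "does this stream contain a macro": VBA source starts
# with 'Attribute' lines, and the MS-OVBA compression leaves a
# recognizable '\x00Attribut' marker in the raw stream bytes. As I
# recall, this is essentially the check both tools make.
def has_macro(stream_data):
    return b"\x00Attribut" in stream_data
```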
One thing I haven't done is add code to look at the "trash" (described here) within the file format. I'm not entirely sure how useful something like this would be, but hey, it may be worth looking at. There are still more capabilities I'm planning to add to this tool, because what I'm doing is digging into the structure of the file format itself in order to see if I can develop indicators, which can then be clustered with other indicators. For example, others (ex: MyOnlineSecurity) have seen .pub file attachments being delivered via specific emails. At this point, we have things such as the sender address, email content, name of the attachment, etc. Still others (ex: MoradLabs) have shared the results of dynamic analysis; in this case, the embedded macro launching bitsadmin.exe with specific parameters. Including attachment "tooling" may help provide additional insight into the adversary's use of this tactic.
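As a starting point for the "trash", one obvious place to look is sectors the FAT marks as free (0xFFFFFFFF) but that still contain non-zero bytes. The sketch below makes some simplifying assumptions (a made-up file name, and a FAT reachable from the 109 DIFAT slots in the header, which holds for smaller files; a complete tool would follow the full DIFAT chain):

```python
# Rough sketch of one kind of "trash": sectors marked free (0xFFFFFFFF)
# in the FAT that still sit inside the file and may hold leftover bytes.
import struct

FREESECT = 0xFFFFFFFF

with open("sample.pub", "rb") as f:   # made-up file name
    data = f.read()

# sector shift lives at header offset 30; 2**shift = sector size
sector_size = 1 << struct.unpack_from("<H", data, 30)[0]

# the first 109 DIFAT entries live in the header at offset 76; each
# non-free entry is the sector number of a FAT sector
difat = struct.unpack_from("<109I", data, 76)
fat = []
for sect in difat:
    if sect == FREESECT:
        break
    off = (sect + 1) * sector_size
    fat += struct.unpack_from("<%dI" % (sector_size // 4), data, off)

for i, entry in enumerate(fat):
    off = (i + 1) * sector_size
    if entry == FREESECT and off < len(data):
        chunk = data[off:off + sector_size]
        if chunk.strip(b"\x00"):
            print("free sector %d at offset 0x%x has non-zero bytes" % (i, off))
```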
Something else I haven't implemented (yet) is extracting and displaying the macros. According to this site, the macros are compressed, and I have yet to find anything that will let me easily extract and decompress a macro from the stream in which it's embedded using Perl. Didier has done it in Python, so perhaps that's something I'll leave to his tool.
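For the Python route, the decompression (documented in MS-OVBA) is also implemented in decalage's oletools; the olevba module will extract and decompress macro source in a few lines (the file name below is, again, made up):

```python
# The Python route: oletools' olevba module already implements the
# MS-OVBA decompression, so extracting macro source is short.
from oletools.olevba import VBA_Parser

vp = VBA_Parser("sample.pub")  # made-up file name
for fname, stream_path, vba_filename, vba_code in vp.extract_macros():
    print("--- %s ---" % stream_path)
    print(vba_code)
vp.close()
```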
Threat Intel
I read this rather fascinating Cisco Continuum article recently, and I have to say, I'm still trying to digest it. Part of the reason for this is that the author says some things I agree with, but I need to go back and make sure I understand what they're saying, as I might be agreeing while at the same time misunderstanding what's being said.
A big take-away from the article was:
Where a team like Cisco’s Talos and products like AMP or SourceFire really has the advantage, Reid said, is in automating a lot of these processes for customers and applying them to security products that customers are already using. That’s where the future of threat intelligence, cybersecurity products and strategies are headed.
Regardless of the team or products, where we seem to be now is that processes are being automated and applied to devices and systems that clients are already using, in many cases because the company sold the client the devices, as part of a service.