Saturday, October 29, 2016

Ransomware

I think that we can all agree, whether you've experienced it within your enterprise or not, ransomware is a problem.  It's one of those things that you hope never happens to you, that you hope you never have to deal with, and you give a sigh of relief when you hear that someone else got hit.

The problem with that is that hoping isn't preparing.

Wait...what?  Prepare for a ransomware attack?  How would someone go about doing that?  Well, consider the quote from the movie "Blade":

Once you understand the nature of a thing, you know what it's capable of.

This is true for ransomware, as well as Deacon Frost.  If you understand what ransomware does (encrypts files), and how it gets into an infrastructure, you can take some simple steps (relative to your infrastructure and culture, of course) to prepare for such an incident.  Interestingly enough, many of these steps are the same ones you'd use to prepare for any other type of incident.

First, some interesting reading and quotes...such as from this article:

The organization paid, and then executives quickly realized a plan needed to be put in place in case this happened again. Most organizations are not prepared for events like this that will only get worse, and what we see is usually a reactive response instead of proactive thinking.

....and...

I witnessed a hospital in California be shut down because of ransomware. They paid $2 million in bitcoins to have their network back.

The take-aways are "not prepared" and "$2 million"...because it would very likely have cost much less than $2 million to prepare for such attacks.

The major take-aways from the more general ransomware discussion should be that:

1.  Ransomware encrypts files.  That's it.

2.  Like other malware, those writing and deploying ransomware work to keep their product from being detected.

3.  The business model of ransomware will continue to evolve; new methods will be developed, and methods that continue to work will keep being used.

Wait...ransomware has a business model?  You bet it does!  Some ransomware (Locky, etc.) is spread either through malicious email attachments, or links that direct a user's browser to a web site.  Anyone who does process creation monitoring on an infrastructure likely sees this.  In a webcast I gave last spring (as well as in subsequent presentations), I included a slide that illustrated the process tree of a user opening an email attachment, and then choosing to "Enable Content", at which point the ransomware took off.
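
As a rough illustration of what that kind of monitoring can surface, here is a minimal Python sketch that flags Office applications spawning script or shell interpreters in Sysmon event ID 1 (process creation) records exported to CSV.  The column names used ("UtcTime", "ParentImage", "Image", "CommandLine") are assumptions based on a typical field export, not a requirement of any particular tool.

# Minimal sketch: flag Office applications spawning script/shell children in
# Sysmon event ID 1 (process creation) records exported to CSV.  The column
# names are assumptions; adjust them to match your own export.
import csv

OFFICE_PARENTS = ("winword.exe", "excel.exe", "powerpnt.exe", "mspub.exe")
SUSPECT_CHILDREN = ("cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe")

def flag_office_children(csv_path):
    hits = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            parent = row.get("ParentImage", "").lower()
            child = row.get("Image", "").lower()
            if parent.endswith(OFFICE_PARENTS) and child.endswith(SUSPECT_CHILDREN):
                hits.append((row.get("UtcTime"), parent, child, row.get("CommandLine")))
    return hits

if __name__ == "__main__":
    # "sysmon_eid1.csv" is a placeholder file name
    for hit in flag_office_children("sysmon_eid1.csv"):
        print(hit)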

Other ransomware (Samas, Le Chiffre, CryptoLuck) is deployed through a more directed means, bypassing email altogether.  An intruder infiltrates an infrastructure through a vulnerable perimeter system, RDP, TeamViewer, etc., and deploys the ransomware in a dedicated fashion.  In the case of Samas ransomware, the adversary appears to have spent time elevating privileges and mapping the infrastructure in order to locate systems to which they'd deploy the ransomware.  We've seen this in timelines where, on a single day, the adversary would simply blast out the ransomware to a large number of systems (most of which appeared to be servers).

The Ransomware Economy
There are a couple of other really good posts on the Secureworks blog regarding the Samas ransomware (here, and here).  The second blog post, by Kevin Strickland, talks about the evolution of the Samas ransomware; not long ago, I ran across this tweet that let us know the evolution Kevin talked about hasn't stopped.  This clearly illustrates that developers are continuing to "provide a better (i.e., less detectable) product", as part of the economy of ransomware.  The business models implemented in the ransomware economy will continue to evolve, simply because there is money to be had.

There is also a ransomware economy on the "blue" (defender) side, albeit one that is markedly different from the "red" (attacker) side.

The blue-side economy does not evolve nearly as fast as the red-side.  How many victims of ransomware have not reported their incident to anyone, or simply wiped the box and moved on?  How many of those with encrypted files have chosen to pay the ransom rather than pay to have the incident investigated?  By the way, that's part of the red-side economy...make paying the ransom more cost-effective than having the incident investigated.

As long as the desire to obtain money is stronger than the desire to prevent that from happening, the red-side ransomware economy will continue to outstrip that of the blue-side.

Preparation
Preparation for a ransomware attack is, in many ways, no different from preparing for any other computer security incident.

The first step is user awareness.  If you see something, say something.  If you get an odd email with an attachment that asks you to "enable content", don't do it!  Instead, raise an alarm, say something.

The second step is to use technical means to protect yourself.  We all know that prevention works for only so long, because adversaries are much more dedicated to bypassing those prevention mechanisms than we are to paying to keep them up to date.  As such, augmenting those prevention mechanisms with detection can be extremely effective, particularly when it comes to definitively nailing down the initial infection vector (IIV).  Why is this important?  Well, in the last couple of months, we've not only seen the delivery mechanism of familiar ransomware change, but we've also seen entirely new ransomware variants infecting systems.  If you assume that the ransomware is getting in as an email attachment, then you're going to direct resources to something that isn't going to be at all effective.

Case in point...I recently examined a system infected with Odin Locky, and was told that the ransomware could not have gotten in via email, as a protection application had been purchased specifically for that purpose.  What I found was that the ransomware did, indeed, get on the system via email; however, the user had accessed their AOL email (bypassing the protection mechanism), and downloaded and executed the malicious attachment.

Tools such as Sysmon (or anything else that monitors process creation) can be extremely valuable when it comes to determining the IIV for ransomware.  Many variants will delete themselves after files are encrypted, (attempt to) delete VSCs, etc., and being able to track the process tree back to its origin can be extremely valuable in preventing such things in the future.  Again, it's about dedicating resources where they will be the most effective.  Why invest in email protections when the ransomware is getting on your systems as a result of a watering hole attack, or strategic web compromise?  Or what if it's neither of those?  What if the system had been compromised, a reverse shell (or some other access method, such as TeamViewer) installed, and the system infected through that vector?
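
To make the "track it back to its origin" idea concrete, here is a small Python sketch that walks a chain of Sysmon event ID 1 records, keyed by ProcessGuid, from the encryptor back to its earliest recorded ancestor.  The record layout (a dict of ProcessGuid to record) and the sample entries are assumptions for illustration; in practice, the records would come from parsed EVTX data or a SIEM export.

# Minimal sketch: walk a process chain back to its origin using Sysmon
# event ID 1 records keyed by ProcessGuid.  The data layout is assumed
# for illustration only.
def walk_to_origin(records, process_guid):
    """Return (image, command line) tuples from the given process back to
    the earliest recorded ancestor."""
    chain = []
    guid = process_guid
    while guid in records:
        rec = records[guid]
        chain.append((rec["Image"], rec["CommandLine"]))
        guid = rec.get("ParentProcessGuid")
    return list(reversed(chain))

# Hypothetical records: the encryptor and the process that spawned it.
records = {
    "guid-2": {"Image": "C:\\Users\\user\\AppData\\Local\\Temp\\encryptor.exe",
               "CommandLine": "encryptor.exe", "ParentProcessGuid": "guid-1"},
    "guid-1": {"Image": "C:\\Windows\\System32\\mshta.exe",
               "CommandLine": "mshta.exe hxxp://example[.]test/payload.hta"},
}
print(walk_to_origin(records, "guid-2"))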

Ransomware will continue to be an issue, and new means for deploying it are being developed all the time.  The difference between ransomware and, say, a targeted breach is that you know almost immediately when you've had files encrypted.  Further, during targeted breaches, the adversary will most often copy your critical files; with ransomware, the files are made unavailable to everyone.  In fact, if you can't decrypt/recover your files, there's really no difference between ransomware and secure deletion of your files.

We know that on the blue-side, prevention eventually fails.  As such, we need to incorporate detection into our security posture, so that if we can't prevent the infection or recover our files, we can determine the IIV for the ransomware and address that issue.

Addendum, 30 Oct: As a result of an exchange with (and thanks to) David Cowen, I think that I can encapsulate the ransomware business model to the following statement:

The red-side business model for ransomware converts a high number of low-value, blue-side assets into high-value attacker targets, with a corresponding high ROI (for the attacker).

What does that mean?  I've asked a number of folks who are not particularly knowledgeable in infosec if there are any files on their individual systems without which they could simply not do their jobs, or without which their daily work would significantly suffer.  So far, 100% have said, "yes".  Considering this, it's abundantly clear that attackers have their own reciprocal Pyramid of Pain that they apply to defenders; that is, if you want to succeed (i.e., get paid), you need to impact your target in such a manner that it is more cost-effective (and less painful) to pay the ransom than it is to perform any alternative.  In most cases, the alternative amounts to changing corporate culture.




AmCache.hve

I was working on an incident recently, and while extracting files from the image, I noticed that there was an AmCache.hve file.  Not knowing what I would find in the file, I extracted it to include in my analysis.  As I began my analysis, I found that the system I was examining was a Windows Server 2012 R2 Standard system.  This was just one system involved in the case, and I already had a couple of indicators.

As part of my analysis, I parsed the AppCompatCache value and found one of my indicators:

SYSVOL\downloads\malware.exe  Wed Oct 19 15:35:23 2016 Z

I was able to find a copy of the malware file in the file system, so I computed the MD5 hash, and pulled the PE compile time and interesting strings out of the file.  The compile time was  9 Jul 2016, 11:19:37 UTC.
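
For anyone who wants to script those two triage steps, here is a minimal Python sketch that computes the MD5 hash and reads the PE compile time (the TimeDateStamp in the COFF file header) directly from the file; the file name in the example is just a placeholder.

# Minimal sketch: compute an MD5 hash and pull the PE compile time from
# the COFF file header.  No third-party modules required; the offsets
# follow the published PE/COFF layout.
import hashlib
import struct
from datetime import datetime, timezone

def md5_and_compile_time(path):
    with open(path, "rb") as fh:
        data = fh.read()
    md5 = hashlib.md5(data).hexdigest()
    # e_lfanew (offset of the "PE\0\0" signature) lives at offset 0x3C
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    # TimeDateStamp sits 8 bytes past the PE signature (after Machine and
    # NumberOfSections in the COFF file header)
    timestamp = struct.unpack_from("<I", data, pe_offset + 8)[0]
    compile_time = datetime.fromtimestamp(timestamp, tz=timezone.utc)
    return md5, compile_time

if __name__ == "__main__":
    print(md5_and_compile_time("malware.exe"))  # placeholder file name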

I then parsed the AmCache.hve file and searched for the indicator, and found:

File Reference  : 28000017b6a
LastWrite          : Wed Oct 19 06:07:02 2016 Z
Path                   : C:\downloads\malware.exe
SHA-1               : 0000
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z

File Reference   : 3300001e39f
LastWrite           : Wed Oct 19 15:36:07 2016 Z
Path                    : C:\downloads\malware.exe
SHA-1                : 0000
Last Mod Time2: Wed Oct 19 15:35:23 2016 Z

File Reference  : 2d000017b6a
LastWrite          : Wed Oct 19 06:14:30 2016 Z
Path                   : C:\Users\\Desktop\malware.exe
SHA-1               : 0000
Last Mod Time  : Wed Aug  3 13:36:54 2016 Z
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z
Create Time       : Wed Oct 19 06:14:20 2016 Z
Compile Time    : Sat Jul  9 11:19:37 2016 Z

All of the SHA-1 hashes were identical across the three entries.  Do not ask for the hashes...I'm not going to provide them, as this is not the purpose of this post.

What this illustrates is the value of what can be derived from the AmCache.hve file.  Had I not been able to retrieve a copy of the malware file from the file system, I would still have a great deal of information about the file, including (but not limited to) the fact that the same file was on the file system in three different locations.  In addition, I would also have the compile time of the executable file.
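
For those who want to dig into the hive themselves, here is a rough Python sketch using Willi Ballenthin's python-registry module to walk the Root\File entries.  The value name mappings ("15" for the full path, "101" for the SHA-1, which is stored with four leading zeros) are based on publicly documented research into this version of the AmCache format; treat them as assumptions and verify against your own data.

# Rough sketch: enumerate AmCache.hve Root\File entries with python-registry.
# Value names "15" (path) and "101" (SHA-1) are assumed mappings; verify them.
from Registry import Registry

def amcache_entries(hive_path):
    reg = Registry.Registry(hive_path)
    file_key = reg.open("Root\\File")
    for volume in file_key.subkeys():
        for entry in volume.subkeys():
            values = {v.name(): v.value() for v in entry.values()}
            yield {
                "file_ref": entry.name(),
                "last_write": entry.timestamp(),
                "path": values.get("15"),
                "sha1": values.get("101"),
            }

if __name__ == "__main__":
    for e in amcache_entries("AmCache.hve"):
        if e["path"] and "malware.exe" in e["path"].lower():
            print(e)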

Sunday, October 16, 2016

Links and Updates

RegRipper Plugin
Not long ago, I read this blog post by Adapt Forward Cyber Security regarding an interesting persistence mechanism, and within 10 minutes, had a RegRipper plugin written and tested against some existing data that I had available.

So why would I say this?  What's the point?  The point is that with something as simple as copy-paste, I extended the capabilities of the tool, and now have new functionality that will let me flag something that may be of interest, without having to memorize a checklist.  And as I pushed the new plugin out to the repository, everyone who downloads and uses the plugin now has that same capability, without having to have spent the time that the folks at Adapt Forward spent on this; through documentation and sharing, the DFIR community is able to extend the functionality of existing toolsets, as well as the reach of knowledge and experience.

Speaking of which, I was recently assisting with a case, and found some interesting artifacts in the Registry regarding LogMeIn logons; they didn't include the login source (there was more detail recovered from a Windows Event Log record), but they did include the user name and date/time.  This was the result of creating a timeline that included Registry key LastWrite times, which led to investigating an unusual key/entry in the timeline.  I created a RegRipper plugin to extract the information (logmein.pl), and then created one to include the artifact in a timeline (logmein_tln.pl).  Shortly after creating them both, I pushed them up to Github.
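
RegRipper plugins themselves are written in Perl, but the core idea is simple enough to sketch in Python using the python-registry module: open a hive, locate the key of interest, and report its LastWrite time and values in TLN form.  The key path below is purely hypothetical...it is not the actual location of the LogMeIn artifacts mentioned above.

# Sketch of the plugin idea: dump a key's LastWrite time and values in
# TLN form (time|source|system|user|description).  The key path is
# hypothetical and used only for illustration.
from Registry import Registry

def dump_key_tln(hive_path, key_path, server="SERVER"):
    reg = Registry.Registry(hive_path)
    key = reg.open(key_path)
    ts = int(key.timestamp().timestamp())
    for value in key.values():
        print("%d|REG|%s||%s -> %s = %s" % (
            ts, server, key_path, value.name(), value.value()))

if __name__ == "__main__":
    dump_key_tln("SOFTWARE", "Hypothetical\\Vendor\\RemoteAccess\\Logons")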

Extending Tools, Extending Capabilities
Not long ago, I posted about parsing .pub files that were used to deliver malicious macros.  There didn't seem to be a great deal of interest from the community, but hey, what're you gonna do, right?  One comment that I did receive was, "yeah, so what...it's a limited infection vector."  You know what?  You're right, it is.  But the point of the post wasn't, "hey, look here's a new thing..."; it was "hey, look, here's an old thing that's back, and here's how, if you understand the details of the file structure, you can use that information to extend your threat intel, and possibly even your understanding of the actors using it."

And, oh, by the way, if you think that OLE is an old format, you're right...but if you think that it's not used any longer, you're way not right.  The OLE file format is used with Sticky Notes, as well as automatic Jump Lists.
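
If you want to see that structure for yourself, the olefile Python module (the successor to OleFileIO_PL) will list the storages and streams in anything built on the OLE/compound file format; the StickyNotes.snt file name below is just an example, and the same call works against a .pub file or an automatic Jump List.

# Quick look at OLE structure: list the storages and streams in a
# compound file (.pub, StickyNotes.snt, automatic Jump List, etc.).
import olefile

def list_ole_streams(path):
    ole = olefile.OleFileIO(path)
    try:
        for entry in ole.listdir():
            print("/".join(entry))
    finally:
        ole.close()

if __name__ == "__main__":
    list_ole_streams("StickyNotes.snt")  # example file name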

Live Imaging
Mari had an excellent post recently in which she addressed live imaging of Mac systems.  As she pointed out in her post, there are times when live imaging is not only a good option, but the only option.

The same can also be true for Windows systems, and not just when encryption is involved.  There are times when the only way to get an image of a server is to do so using a live imaging process.

Something that needs to be taken into consideration during the live imaging of Windows systems is the state of various files and artifacts while the system is live and running.  For example, Windows Event Logs may be "open", and it's well known that the AppCompatCache data is written at system shutdown.

AmCache.hve
Not long ago, I commented regarding my experiences using the AmCache.hve file during investigations; in short, I had not had the same sort of experiences as those described by Eric Z.

That's changed.

Not long ago, I was examining some data from a point-of-sale breach investigation, and noticed references to a number of tools the adversary had used that were no longer available on the system.  I'd also found that the installed AV product wasn't writing detection events to the Application Event Log (as many such applications tend to do...), so I ran 'strings' across the quarantine index files, and was able to get the original path to the quarantined files, as well as what the AV product had alerted on.  In one instance, I found that a file had been identified by the AV product as "W32.Bundle.Toolbar"...okay, not terribly descriptive.
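
For reference, here is a minimal Python stand-in for 'strings' that pulls printable ASCII sequences out of a file such as a quarantine index.  Keep in mind that quarantine formats vary by AV product (and some are encoded or XOR'd), so this is a starting point rather than a parser for any specific product.

# Minimal 'strings' stand-in: extract printable ASCII runs from a binary
# file, such as an AV quarantine index, to recover paths and detection names.
import re
import sys

def ascii_strings(path, min_len=6):
    with open(path, "rb") as fh:
        data = fh.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    for s in ascii_strings(sys.argv[1]):
        print(s)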

I parsed the AmCache.hve file (the system I was examining was a Windows 7 SP1 system), and searched the output for several of the file names I had from other sources (ShimCache, UserAssist, etc.), and lo and behold, I found a reference to the file mentioned above.  Okay, the AmCache entry had the same path, so I pushed the SHA-1 hash for the file up to VT, and the response identified the file as CCleaner.  This fit into the context of the examination, as we'd observed the adversary "cleaning up", using either native tools (living off the land), or using tools they'd brought with them.

Windows Event Log Analysis
Something I see over and over again (on Twitter, mostly, but also in other venues) is analysts referring to Windows Event Log records solely by their event ID, and not including the source.

Event IDs are not unique.  There are a number of event IDs out there that have different sources, and as such, a completely different context with respect to your investigation.  Searching Google, it's easy to see (for example) that events with ID 4000 have multiple sources: DNS, SMTPSvc, Diagnostics-Networking, etc.  And that doesn't include non-MS applications...that's just what I found in a couple of seconds of searching.  So, searching across all event logs (or even just one event log file) for all events with a specific ID could result in data that has no relevance to the investigation, or even obscure the context of the investigation.

Okay...so what?  Who cares?  Well, something that I've found that really helps me out with an examination is to use eventmap.txt to "tag" events of interest ("interest", as in, "found to be interesting from previous exams") while creating a timeline.  One of the first things I'll do after opening the TLN file is to search for "[maldetect]" and "[alert]", and get a sense of what I'm working with (i.e., develop a bit of situational awareness).  This works out really well because I use the event source and ID in combination to identify records of interest.
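
As a simple illustration of the source-plus-ID approach, the Python sketch below tags events using (source, event ID) pairs rather than the ID alone, in the spirit of eventmap.txt; the mappings shown are illustrative examples, not a copy of the actual eventmap.txt contents.

# Sketch: tag events by (source, event ID) pair, not by ID alone.
# The mappings are illustrative, not taken from eventmap.txt.
EVENT_TAGS = {
    ("Microsoft-Windows-Security-Auditing", 4624): "[logon]",
    ("Microsoft-Windows-Security-Auditing", 4625): "[failed logon]",
    ("Service Control Manager", 7045): "[alert] new service installed",
}

def tag_event(source, event_id):
    return EVENT_TAGS.get((source, event_id), "")

# A DNS event 4000 and an SMTPSvc event 4000 are entirely different things;
# keying on the pair keeps them from being conflated.
print(tag_event("Service Control Manager", 7045))
print(tag_event("DNS", 4000))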

As many of us still run across Windows XP and 2003 systems, this link provides a good explanation (and a graphic) of how wrapping of event records works in the Event Logs on those systems.