Saturday, February 25, 2017

Powershell Stuff, Windows 10

I recently had some Windows 10 systems come across my desk, and as part of my timeline creation process, I extracted the Windows Event Log files of interest and ran them through my regular sequence of commands.  While I was analyzing the system timeline, I ran across some interesting entries, specifically "Microsoft-Windows-Powershell/4104" events, which appeared to contain long strings of encoded text.  Many of the events were clustered together, up to 20 at a time, and as I scrolled through them, I saw strings (not encoded) that made me think I was looking at activity of interest to my examination.  Further, the events themselves were clustered 'near' other events of interest, including indications that a Python script 'compiled' as a Windows executable had been run in order to steal credentials.  This seemed worth looking into, so during a team call I posed the question, "...has anyone seen this...", to the group, and got a response; one of my teammates pointed me toward Matt Dunwoody's block-parser.py script.  My own research revealed that FireEye, CrowdStrike, and even Microsoft had talked about these events, and what they mean.

From the FireEye blog post (authored by Matt Dunwoody):

Script block logging records blocks of code as they are executed by the PowerShell engine, thereby capturing the full contents of code executed by an attacker, including scripts and commands. Due to the nature of script block logging, it also records de-obfuscated code as it is executed. For example, in addition to recording the original obfuscated code, script block logging records the decoded commands passed with PowerShell’s -EncodedCommand argument, as well as those obfuscated with XOR, Base64, ROT13, encryption, etc., in addition to the original obfuscated code. Script block logging will not record output from the executed code. Script block logging events are recorded in EID 4104. Script blocks exceeding the maximum length of an event log message are fragmented into multiple entries. A script is available to parse script block logs and reassemble fragmented blocks (see reference 5).

While not available in PowerShell 4.0, PowerShell 5.0 will automatically log code blocks if the block’s contents match on a list of suspicious commands or scripting techniques, even if script block logging is not enabled. These suspicious blocks are logged at the “warning” level in EID 4104, unless script block logging is explicitly disabled. This feature ensures that some forensic data is logged for known-suspicious activity, even if logging is not enabled, but it is not considered to be a security feature by Microsoft. Enabling script block logging will capture all activity, not just blocks considered suspicious by the PowerShell process. This allows investigators to identify the full scope of attacker activity. The blocks that are not considered suspicious will also be logged to EID 4104, but with “verbose” or “information” levels.

While the blog post does not specify what constitutes 'suspicious', this was a pretty good description of what I was seeing.  This Windows Powershell blog post contains information similar to Matt's post, but doesn't use the term "suspicious", instead stating that "PowerShell automatically logs script blocks when they have content often used by malicious scripts."  So, pretty interesting stuff.

After updating my Python installation with the appropriate dependencies (thanks to Jamie for pointing me to an install binary I needed, as well as for some further assistance with the script), I ran the following command:

block-parser.py -a -f blocks.txt Microsoft-Windows-Powershell%4Operational.evtx

That's right...you run the script directly against a .evtx file; hence, the need to ensure that you have the right dependencies in place in your Python installation (most of which can be installed easily using 'pip').  The output file "blocks.txt" contains the decoded script blocks, which turned out to be very revealing.  Rather than looking at long strings of clear and encoded text broken up across multiple events, I could now point to a single, unified file containing the script blocks that had been run at a particular time; this added context and clarity to my timeline, helped me build a picture of the attacker's activity, and provided excellent program execution artifacts.  The decoded script blocks contained some 'normal', non-malicious stuff, but also contained code that performed credential theft and, in one case, privilege escalation, both of which could be found online, verbatim.
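
A quick note on those dependencies: the heavy lifting of actually reading the .evtx file is done by a Python library, not by the script itself.  In my case (check the imports in your copy of the script...this is simply what my installation required), getting things working amounted to:

pip install python-evtx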

It turns out that "Microsoft-Windows-Powershell/4100" events are also very interesting, particularly when they follow an identified 4104 event, as these events can contain error messages indicating that the Powershell script didn't run properly.  This can be critical in determining such things as process execution (the script executed, but did it do so successfully?), as well as the window of compromise of a system.

Again, all of this work was done across what's now up to half a dozen Windows 10 Professional systems.  Many thanks to Kevin for the nudge in the right direction, and to Jamie for her help with the script.

Additional Reading
Matt's DerbyCon 2016 Presentation

Saturday, February 04, 2017

Updates, Links

VSCs
Not long ago, I blogged about a means for accessing files within VSCs, which was based on a tweet that I had seen.  However, I could not get the method described in the tweet to work, nor could others.

Dan/4n6k updated his blog post to include a reference to volrest.exe, part of the Windows 2003 Resource Kit (a free download).  This is a great tool...a Win2003 Resource Kit utility that works on Win10...who knew?

In my earlier blog post, I had tried to make a copy of the System and SAM hives from a VSC; however, I kept receiving errors indicating that the files could not be found.  So, I tried using volrest.exe to see if there were any previous versions of files (in this case, folders) available in my profile folder:
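
The command line was along the following lines (the path is illustrative; if I'm remembering the tool's usage correctly, the /S switch recurses subdirectories):

volrest "\\localhost\c$\Users\<username>\*" /S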

Okay, so, are there any previous versions of the System and SAM files available?
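
In this case, the check amounted to pointing volrest.exe at the hive files themselves; something like (again, illustrative):

volrest \\localhost\c$\Windows\System32\config\SAM
volrest \\localhost\c$\Windows\System32\config\System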

Hhhhmm...that might explain why I wasn't able to copy the files in my tests...there were no previous versions of the files available.

Malware
The ISC handler's diary recently had a really good write-up covering the analysis of a malicious Office document, "fileless" malware, and a UAC bypass.  It's a really good walk-through of what the malware does, from start to finish, and provides not only individual indicators but also, through the manner in which those indicators are shared, a view of the malware's behavior.  This can be extremely useful for detection: look at the individual indicators and consider how you would detect them, perhaps even in a more general case than what is shared.

Not long ago, I remember reading that one variant of similar malware used the same sort of UAC bypass technique, and changed the Registry value back to its original data after the "exploit" had completed.  This is why timeline analysis is so important, particularly when coupled with process creation monitoring.  Activity such as this happens far too quickly for a VSC to be created, so you won't have any historical artifacts available. However, the LastWrite time for the key would serve as an indicator, much like the Win32ClockProvider key, or the Policies\Secrets key (per the Volatility team).
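
As a quick illustration of using that LastWrite time: a minimal sketch, assuming the eventvwr.exe flavor of the bypass (which abuses the mscfile\shell\open\command key in the user's hive) and Willi Ballenthin's python-registry module; the hive path is illustrative:

from Registry import Registry

# Open an extracted NTUSER.DAT hive (path is illustrative)
reg = Registry.Registry("NTUSER.DAT")

try:
    # Key abused by the eventvwr.exe UAC bypass; even if the value data
    # was changed back, the key's LastWrite time remains as an indicator
    key = reg.open("Software\\Classes\\mscfile\\shell\\open\\command")
    print("LastWrite: %s" % key.timestamp())
except Registry.RegistryKeyNotFoundException:
    print("Key not found in this hive.")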

Here is another write-up that walks through a similar issue (macro, Powershell, UAC bypass...)...

Ransomware
Not long after sharing some thoughts on ransomware, I ran across this little gem that struck (kind of) close to home.  Assuming that this was, in fact, the case (i.e. 70% of available CCTV cameras taken down by ransomware...), what does this tell us about the implications of something like this?  The ransom wasn't paid, and the systems were 'recovered', but at what cost, in manpower and time?

RegRipper Plugin Updates
I've updated a couple of the RegRipper plugins; maybe the most notable update is to the userassist.pl plugin.  Specifically, I removed the alerts function, and added printing of decoded value names that do not have time stamp values.

One of my co-workers reached out to me recently and asked about some differences in the output between the plugin and what X-Ways produces.  I took a look at it...he'd provided the NTUSER.DAT...and as I was going over the output, I remembered that when I had first written the plugin, I had it output only those entries that had associated time stamps.  Apparently, he'd seen something of value, so I modified the plugin to output a list of the value names whose data did not contain a time stamp.

I did not modify the userassist_tln.pl plugin, for obvious reasons.


Tuesday, January 31, 2017

DFIR Tools, Ransomware Thoughts

I like to keep up on new tools that are discussed in the community, because they offer insight into what other analysts are seeing.  The DFIR community at large isn't really that big on sharing what they've seen or done, and seeing tools being discussed is something of a "peek behind the curtain", as it were.

SRUM
A recent ISC handler diary entry described a tool for parsing System Resource Utilization Monitor (SRUM) data.

As soon as I read the diary entry, I went back through some of my recent cases, but wasn't able to find any systems with resource monitoring enabled.

SCCM
The folks at FireEye released a tool for parsing process execution information from the WMI repository.

I still strongly recommend that some form of process creation monitoring be installed or enabled on endpoints, whether it's Sysmon or something else.

Ransomware
Something else I've been interested in for quite some time is ransomware.  As an incident responder, I'm most often in contact with organizations that have suffered breaches, and these organizations vary greatly with respect to the maturity of their infosec programs.  However, ransomware is not just an annoyance that's part of the price of doing business in a connected, e-commerce world.  In fact, ransomware is the implementation of a business model that monetizes what many organizations view as "low-value targets"; because it's a business model, we can expect to see developments and modifications/tweaks to that model to improve its efficacy over the next year.

Last year, SecureWorks published a couple of posts regarding the Samas ransomware.  One of them illustrates the adversary's life cycle observed across multiple engagements; the other (authored by Kevin Strickland, of the IR team) specifically addresses the evolution of the Samas ransomware binary itself.

The folks at CrowdStrike published a blog post on the topic of ransomware, one that specifically discusses ransomware evolving over time.  A couple of thoughts regarding the post:

First, while there will be an evolution of tactics, some of the current techniques to infect an infrastructure will continue to be used.  Why?  Because they work.  The simple fact is that users will continue to click on things.  Anyone who monitors process creation events sees this on a weekly (daily?) basis, and this will continue to cost organizations money, in lost productivity as the IT staff attempt to recover.

Second, there's the statement, "Samas: This variant targets servers..."; simply put, no, it doesn't.  The Samas ransomware is just ransomware; it encrypts files.  As with Le Chiffre and several other variants of ransomware, there are actual people behind the deployment of the Samas ransomware.  The Samas ransomware has no capability whatsoever to target servers.  The vulnerable systems are targeted by an actual person.

Finally, I do agree with the authors of the post that a new approach is needed; actually, rather than a "new" approach, I'd recommend that organizations implement those basic measures that infosec folks have been talking about for 20+ years.  Make and verify backups, keep those backups off of the network.  Provide user awareness training, and hold folks responsible for that training.  Third-parties such as PhishMe will provide you with statistics, and identify those users who continue to click on suspicious attachments.

With respect to ransomware itself, is enough effort being put forth by organizations to develop and track threat intelligence?  CrowdStrike's blog post discusses an evolution of "TTPs", but what are those TTPs?  Ransomware is a threat that imposes significant costs on (and consequently, poses a significant threat to) organizations by monetizing wide swathes of un-/under-protected systems.

Tools

Memory Analysis
When I've had the opportunity to conduct memory analysis, Volatility and bulk_extractor have been invaluable.

Back when I started in the industry, oh so many years ago, 'strings' was pretty much the tool for memory "analysis".  Thanks to Volatility's strings plugin, there's so much more you can do: run 'strings' (I use the one from SysInternals) with the "-o" switch to get offsets, and parse out any strings of interest.  From there, the Volatility strings plugin lets you see where those strings of interest are located within the memory sample (i.e., which process address space), providing significant context.
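
For example (the profile below is an assumption for illustration; the strings plugin wants each input line as decimal_offset:string, which, as I recall, the SysInternals version produces via the "-o" switch):

strings -q -o memdump.raw > kw.txt
vol.py -f memdump.raw --profile=Win7SP1x64 strings -s kw.txt --output-file=kw_mapped.txt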

I've run bulk_extractor across memory samples, and been able to get pcap files that contained connections not present in Volatility's netscan plugin output.  That is not to say that one tool is "better" than the other...not at all.  Both tools do something different, and look for data in different ways, so using them in conjunction provides a more comprehensive view of the data.
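
Getting to that pcap data is straightforward; a run along these lines leaves a packets.pcap file in the output folder (the -E switch restricts the run to just the 'net' scanner):

bulk_extractor -E net -o be_out memdump.raw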

If you do get a pcap file (from memory or any other data source), be sure to take a look at Lorna's ISC handler diary entry regarding packet analysis; there are some great tips available.  When conducting packet analysis, remember that besides WireShark, you might also want to take a look at the free version of NetWitness Investigator.

Carving
Like most analysts, I've needed to carve unallocated space (or other data blobs) for various items, including (but not limited to) executable images.  Carving unallocated space, or any data blob (memory dump, pagefile, etc.) for individual records (web history, EVT records, etc.) is pretty straight forward, as in many cases, these items fit within a sector.

Most analysts who've been around for a while are familiar with foremost (possible Windows .exe here) and scalpel as carving solutions.  I did some looking around recently to see if there were any updates on the topic of carving executables, and found Brian Baskin's pe_carve.py tool.  I updated my Python 2.7 installation to version 2.7.13, because the pip package manager became part of the installation package as of version 2.7.9.  Updating the installation so that I could run pe_carve.py was as simple as "pip install bitstring" and "pip install pefile".  That was it.  From there, all I had to do was run Brian's script.  The result was a folder of files with valid PE headers...files that tools such as PEView parsed...but each also contained sections that were clearly not part of the original file.  But then, such is the nature of carving files from unallocated space.
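
For anyone curious what carving for PEs actually involves under the hood, the core idea is pretty simple; here's a minimal sketch of the identification step (my own illustration, not Brian's code):

import struct

def find_pe_candidates(path):
    # Read the data blob (unallocated space, pagefile, memory dump, etc.)
    with open(path, "rb") as f:
        data = f.read()
    hits = []
    off = data.find(b"MZ")
    while off != -1:
        # e_lfanew, the offset to the PE header, lives at MZ offset 0x3C
        if off + 0x40 <= len(data):
            e_lfanew = struct.unpack_from("<I", data, off + 0x3C)[0]
            pe_off = off + e_lfanew
            # Sanity-check e_lfanew, then verify the "PE\0\0" signature
            if 0 < e_lfanew < 0x1000 and data[pe_off:pe_off + 4] == b"PE\x00\x00":
                hits.append(off)
        off = data.find(b"MZ", off + 2)
    return hits

From there, a tool like pe_carve.py parses the headers to work out how many bytes to carve; the trailing slack is why carved files so often contain data that has nothing to do with the original executable.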

Addendum, 1 Feb: One of the techniques I used to try to optimize analysis was to run 'strings' across the carved PE files, in hopes of locating .pdb strings or other indicators.  Unfortunately, in this case, I had nothing to go on other than file names.  I did find several references to the file names, but those strings were located in portions of the carved files...the extra sectors swept up during carving...that likely had little to do with the original files.

Also, someone on Twitter recommended FireEye's FLOSS tool, something you'd want to use in addition to 'strings'.

Hindsight
Hindsight, from Obsidian Forensics, is an awesome tool for parsing Chrome browser history.  If you haven't tried it, take a look.  I've used it quite successfully during engagements, most times to get a deeper understanding of a user's browsing activity during a particular time frame.  However, in one instance, I found the "smoking gun" in a ransomware case, where the user specifically used Chrome (while also using IE on a regular basis) to browse to a third-party email portal, then downloaded and opened a malicious document, infecting their system with ransomware.  Doing so bypassed the corporate email portal protections intended specifically to prevent systems from being infected with...well...ransomware.  ;-)

Hindsight has been particularly helpful, in that I've used it to get a small number of items to add to a timeline (via tln.pl/.exe) that provide a great deal of context.

Shadow Copies
Volume shadow copies (VSCs) and how DFIR analysts take advantage of them is something I've always found fascinating.  Something I saw recently on Twitter was a command line that can be used to access files within Volume Shadow Copies on live systems; the included comment was, "Random observation - if you browse c$ on a PC remotely and add @TIMEZONE-Snapshot-Time, you can browse VSS snapshots of a PC."

An image included within the tweet chain/thread appeared as follows:

Source: Twitter
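
Reconstructing from the thread, the idea is that a snapshot's creation time (in GMT) gets embedded directly in the UNC path.  Something like the following, where the timestamp is illustrative and would need to match a snapshot time reported by vssadmin:

vssadmin list shadows
dir \\localhost\c$\@GMT-2017.01.25-12.00.00\Users
copy \\localhost\c$\@GMT-2017.01.25-12.00.00\Users\<username>\NTUSER.DAT .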

I can't be the only one that finds this fascinating...not so much that it can be done, but more along the lines of, "...is anyone doing this on systems within my infrastructure?"

Now, I haven't gotten this to work on my own system.  I am on a Windows 10 laptop, and can list the available shadow copies, but can't copy files using the above approach.  If anyone has gotten this to work, could you share what you did?  I'd like to test this in a Win7 VM with Sysmon running, but I haven't been able to get it working there, either.

Addendum, 1 Feb: Tun tweeted a link to Dan's blog post, which might be helpful with this technique.  Another "Dan" said on Twitter that he wasn't able to get the above technique to work, either.

As a side note to this topic, remember this blog post?  Pretty sneaky technique for launching malware.  What does that look like, and how do you hunt for it on your network?

Windows Event Logs
I recently ran across a fascinating MSDN article entitled, "Recreating Windows Event Log Files"; kind of makes you wonder, how can this be used by a bad guy, and more importantly, has it?

Maybe the real question is, are you instrumented to catch this happening on endpoints in your environment?  I did some testing recently, and was simply fascinated with what I saw in the data.

Monday, January 02, 2017

Links, Updates

I thought I'd make my first post of 2017 short and sweet...no frills in this one...

Tools
Volatility 2.6 is available!

Matt Suiche recently tweeted that Hibr2Bin is back to open source!

Here's a very good article on hibernation and page file analysis, which discusses the use of strings and page_brute on the page file, as well as Volatility on the hibernation file.  Do not forget bulk_extractor; I've used this to great effect during several examinations.

Willi's EVTXtract has been updated; this is a great tool for when the Windows Event Log has been cleared, or someone took a live image of a system and one or more of the *.evtx files are being reported as 'corrupt' by your other tools.

FireEye recently published an article describing how they parse the WMI repository for indications of process execution...

Cyber Interviews
Douglas has kept the content coming, so be sure to check out the interviews he's got posted so far...

Web Shells
Web shells are still something very much in use by the bad guys.  We've seen web shells used very recently, and TrustWave also published an interesting article on the topic.

Tenable published their own article on hunting for web shells.

Sunday, December 18, 2016

Updates

What's New?
The question I hear perhaps most often, when a new book comes out, or if I'm presenting at a conference, is "what's new?" or "what's the latest and greatest in Windows version <insert upcoming release version>?"

In January 2012, after attending a conference, I was at baggage claim at my home airport along with a fellow conference attendee; he asked me about a Windows NT4 system he was analyzing.  In November of 2016, I was involved in an engagement where I was analyzing about half a dozen Windows 2003 server systems.  And on an online forum just last week, I saw a question regarding tracking users' access to MSOffice 2003 documents on Windows 2003 systems.

The lesson here is, don't throw out or dismiss the "old stuff", because sometimes all you're left with is the old stuff.  Knowing the process for analysis is much more important than memorizing tidbits and interesting facts that you may or may not ever actually use.

Keeping Up
Douglas Brush recently started recording podcast interviews with industry luminaries, beginning with himself (the array is indexed at zero), and then going to Chris Pogue and David Cowen.  I took the opportunity to listen to the interview with Chris not long ago; he and I had worked together for some time at IBM (I was part of the ISS purchase, Chris was brought over from the IBM team).

Something Chris said during the interview was very poignant; it was one of those things that incident responders know to be true, even if it's not something you've ever stated or specifically crystallized in your own thoughts.  Chris mentioned during the interview that when faced with a number of apparently complex options, non-technical folks will often tend toward those options with which they are most familiar.  This is true not just in information security (Chris mentioned firewalls, IDS, etc.), but also during incident response.  As a responder and consultant, I've seen time and again where it takes some time for the IT staff that I'm working with to understand that while, yes, there is malware on this system, it's there because someone specifically put it there so that they could control it (hands on keyboard) and move laterally within the infrastructure.

Chris's interview was fascinating, and I spent some time recently listening to David's interview, as well.  I had some great take-aways from a lot of the things David said.  For example, a good bit of David's work is clearly related to the courts, and he does a great job of dispelling some of what may be seen as myth, as well as addressing a few simple facts that should (I say "should" because it's not always the case) persist across all DFIR work, whether you're headed to court to testify or not.  David also has some great comments on the value of sharing information within the community.

So far, it's been fascinating to listen to the folks being interviewed, but to be honest, there are a lot of women who've done exceptional work in the field, as well, and they should not be overlooked.  Mari DeGrazia, Jamie Levy, Cindy Murphy, and Sarah Edwards, to name just a few.  These ladies, as well as many others, have had a significant impact on, and continue to contribute to, the community.

I did listen to the Silver Bullet Podcast not long ago, specifically the episode where Lesley Carhart was interviewed.  It's good to get a different perspective on the industry, and I'm not just talking about from a female perspective.  Lesley comes from a background that is much different from mine, so I found listening to her interview very enlightening.

Friday, November 18, 2016

The Joy of Open Source

Not long ago, I was involved in an IR engagement where an intruder had exploited a web-based application on a Windows 2003 system, created a local user account, accessed the system via Terminal Services using that account, run tools, and then deleted the account that they'd created before continuing on using accounts and stolen credentials.

The first data I got to look at was the Event Logs from the system; using evtparse, I created a mini-timeline and got a pretty decent look at what had occurred on the system.  The client had enabled process tracking so I could see the Security/592 and ../593 events, but unfortunately, the additional Registry value had not been created, so we weren't getting full command lines in the event records.  From the mini-timeline, I could "see" the intruder creating the account, using it, and then deleting it, all based on the event record source/ID pairs.

For account creation:
Security/624 - user account created
Security/628 - user account password set
Security/632 - member added to global security group
Security/642 - user account changed

For account deletion:
Security/630 - user account deleted
Security/633 - member removed from global security group
Security/637 - member removed from local security group
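
With a mini-timeline in place, pulling these clusters out is straightforward; assuming evtparse's pipe-delimited TLN output, something along the lines of:

grep -E "Security/(624|628|632|642|630|633|637)" mini_timeline.txt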

Once I was able to access an image of the system, a visual review of the file system (via FTK Imager) confirmed that the user profile was not visible within the active file system.  Knowing that the account had been a local account, I extracted the SAM Registry hive, and ran regslack.exe against it...and could clearly see two keys (username and RID, respectively), and two values (the "F" and "V" values) that had been deleted and were currently "residing" within unallocated space in the hive file.  What was interesting was that the values still included their complete binary data.

I was also able to see one of the deleted keys via RegistryExplorer.

SAM hive open in RegistryExplorer

Not that I needed to confirm it, but I also ran the RegRipper del.pl plugin against the hive and ended up finding indications of two other deleted keys, in addition to the previously-observed information.

Output of RR del.pl plugin (Excerpt)

Not only that, but the plugin retrieves the full value data for the deleted values; as such, I was able to copy (via Notepad++) the code for parsing the "F" and "V" value data out of the samparse.pl plugin and paste it into the del.pl plugin for temporary use, so that the binary data was parsed into something intelligible.
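
For anyone who'd rather not dig through the Perl, the layout of the "F" value is well documented; below is a minimal Python sketch of the same sort of parsing.  The offsets are per the commonly published layout, so verify them against samparse.pl or your own data:

import struct
from datetime import datetime, timedelta

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since Jan 1, 1601 (UTC); 0 = "never"
    if ft == 0:
        return None
    return datetime(1601, 1, 1) + timedelta(microseconds=ft / 10.0)

def parse_f_value(f):
    # Offsets per the commonly published SAM "F" value layout
    return {
        "last_login":  filetime_to_dt(struct.unpack_from("<Q", f, 8)[0]),
        "pwd_reset":   filetime_to_dt(struct.unpack_from("<Q", f, 24)[0]),
        "rid":         struct.unpack_from("<I", f, 48)[0],
        "login_count": struct.unpack_from("<H", f, 66)[0],
    }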

The del_tln.pl plugin (output below) made it relatively simple to add the deleted key information to a timeline, so that additional context would be visible.

Output of RR del_tln.pl plugin

If nothing else, this really illustrates one of the valuable aspects of open source software.  With relatively little effort and time, I was able to incorporate findings directly into my analysis, adding context and clarity to that analysis.  I've modified Perl and Python scripts to meet my own needs, and this is just another example of being able to make quick and easy changes to the available tools in order to meet immediate analysis needs.

Speaking of which, I've gone back and picked up something of a side project that I'd started a while back, based on a recent suggestion from a good friend. As I've started to dig into it a bit more, I've run into some challenges, particularly when it comes to "seeing" the data, and translating it into something readable.  Where I started with a hex editor, highlighting a DWORD value at a time, I've ended up writing and cobbling together bits of (open source) code to help me with this task. At first glance, it's like having a bunch of spare parts laying out on a workbench, but closer inspection reveals that it's all basically the same stuff, just being used in different ways.  What started a number of years ago with the header files from Petter Nordahl-Hagen's chntpw utility became the first Registry parsing code that I wrote, which I'm still using to this day.

Take-Aways
Some take-aways from this experience...

When a new version of Windows comes out, everyone wants to know what the new 'thing' is...what's the latest and greatest artifact?  But what about the stuff that always works?  What about the old stuff that gets used again and again, because it works?

Understanding the artifact cluster associated with certain actions on the various versions of Windows can help in recognizing those actions when you don't have all of the artifacts available.  Using just the event record source/ID pairs, we could see the creation and deletion of the user account, even if we didn't have process information to confirm it for us.  In addition, the account deletion occurred through a GUI tool (mmc.exe running compmgmt.msc), and all that the process creation information would show us is that the tool was run, not which buttons were pushed.  Even without the Event Log record metadata, we still had the information we extracted from unallocated space within the SAM hive file.

Having access to open source tools means that things can be tweaked and modified to suit your needs.  Don't program?  No problem.  Know someone who does?  Are you willing to ask for help?  No one person can know everything, and sometimes it's helpful to go to someone and get a fresh point of view.

Saturday, October 29, 2016

Ransomware

Ransomware
I think that we can all agree, whether you've experienced it within your enterprise or not, ransomware is a problem.  It's one of those things that you hope never happens to you, that you hope you never have to deal with, and you give a sigh of relief when you hear that someone else got hit.

The problem with that is that hoping isn't preparing.

Wait...what?  Prepare for a ransomware attack?  How would someone go about doing that?  Well, consider the quote from the movie "Blade":

Once you understand the nature of a thing, you know what it's capable of.

This is true for ransomware, as well as for Deacon Frost.  If you understand what ransomware does (encrypts files), and how it gets into an infrastructure, you can take some simple steps (relative to your infrastructure and culture, of course) to prepare for such an incident.  Interestingly enough, many of these steps are the same ones you'd use to prepare for any type of incident.

First, some interesting reading and quotes...such as from this article:

The organization paid, and then executives quickly realized a plan needed to be put in place in case this happened again. Most organizations are not prepared for events like this that will only get worse, and what we see is usually a reactive response instead of proactive thinking.

....and...

I witnessed a hospital in California be shut down because of ransomware. They paid $2 million in bitcoins to have their network back.

The take-aways are "not prepared" and "$2 million"...because it would very likely have cost much less than $2 million to prepare for such attacks.

The major take-aways from the more general ransomware discussion should be that:

1.  Ransomware encrypts files.  That's it.

2.  Like other malware, those writing and deploying ransomware work to keep their product from being detected.

3.  The business model of ransomware will continue to evolve as methods are changed and new methods are developed, while methods that continue to work will keep being used.

Wait...ransomware has a business model?  You bet it does!  Some ransomware (Locky, etc.) is spread either through malicious email attachments, or links that direct a user's browser to a web site.  Anyone who does process creation monitoring on an infrastructure likely sees this.  In a webcast I gave last spring (as well as in subsequent presentations), I included a slide that illustrated the process tree of a user opening an email attachment, and then choosing to "Enable Content", at which point the ransomware took off.

Other ransomware (Samas, Le Chiffre, CryptoLuck) is deployed through a more directed means, bypassing email altogether.  An intruder infiltrates an infrastructure through a vulnerable perimeter system, RDP, TeamViewer, etc., and deploys the ransomware in a dedicated fashion.  In the case of Samas ransomware, the adversary appears to have spent time elevating privileges and mapping the infrastructure in order to locate the systems to which they'd deploy the ransomware.  We've seen this in timelines where, on a single day, the adversary would simply blast out the ransomware to a large number of systems (most of which appeared to be servers).

The Ransomware Economy
There are a couple of other really good posts on the Secureworks blog regarding the Samas ransomware (here, and here).  The second blog post, by Kevin Strickland, talks about the evolution of the Samas ransomware; not long ago, I ran across this tweet that let us know that the evolution Kevin talked about hasn't stopped.  This clearly illustrates that developers are continuing to "provide a better (i.e., less detectable) product", as part of the economy of ransomware.  The business models implemented in the ransomware economy will continue to evolve, simply because there is money to be had.

There is also a ransomware economy on the "blue" (defender) side, albeit one that is markedly different from the "red" (attacker) side.

The blue-side economy does not evolve nearly as fast as the red-side.  How many victims of ransomware have not reported their incident to anyone, or have simply wiped the box and moved on?  How many of those with encrypted files have chosen to pay the ransom rather than pay to have the incident investigated?  By the way, that's part of the red-side economy...making it more cost-effective to pay the ransom than to pay for an investigation.

As long as the desire to obtain money is stronger than the desire to prevent that from happening, the red-side ransomware economy will continue to outstrip that of the blue-side.

Preparation
Preparation for a ransomware attack is, in many ways, no different from preparing for any other computer security incident.

The first step is user awareness.  If you see something, say something.  If you get an odd email with an attachment that asks you to "enable content", don't do it!  Instead, raise an alarm, say something.

The second step is to use technical means to protect yourself.  We all know that prevention works for only so long, because adversaries are much more dedicated to bypassing those prevention mechanisms than we are to paying to keep them up to date.  As such, augmenting those prevention mechanisms with detection can be extremely effective, particularly when it comes to definitively nailing down the initial infection vector (IIV).  Why is this important?  Well, in the last couple of months, we've not only seen the delivery mechanisms of familiar ransomware change, but we've also seen entirely new ransomware variants infecting systems.  If you assume that the ransomware is getting in as an email attachment, then you're going to direct resources to something that isn't going to be at all effective.

Case in point...I recently examined a system infected with Odin Locky, and was told that the ransomware could not have gotten in via email, as a protection application had been purchased specifically for that purpose.  What I found was that the ransomware did, indeed, get on the system via email; however, the user had accessed their AOL email (bypassing the protection mechanism), and downloaded and executed the malicious attachment.

Tools such as Sysmon (or anything else that monitors process creation) can be extremely valuable when it comes to determining the IIV for ransomware.  Many variants will delete themselves after files are encrypted, (attempt to) delete VSCs, etc., and being able to track the process train back to its origin can be extremely valuable in preventing such things in the future.  Again, it's about dedicating resources where they will be the most effective.  Why invest in email protections when the ransomware is getting on your systems as a result of a watering hole attack, or a strategic web compromise?  Or what if it's neither of those?  What if the system had been compromised, a reverse shell (or some other access method, such as TeamViewer) installed, and the system infected through that vector?
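
As a point of reference, a bare-bones Sysmon install that records process creation events (complete with image hashes) is a one-liner; most folks will want to layer a tuned config on top of the defaults:

sysmon -accepteula -i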

Ransomware will continue to be an issue, and new means of deployment are being developed all the time.  The difference between ransomware and, say, a targeted breach is that you know almost immediately when you've had files encrypted.  Further, during targeted breaches, the adversary will most often copy your critical files; with ransomware, the files are made unavailable to everyone.  In fact, if you can't decrypt/recover your files, there's really no difference between ransomware and secure deletion of your files.

We know that on the blue-side, prevention eventually fails.  As such, we need to incorporate detection into our security posture, so that if we can't prevent the infection or recover our files, we can determine the IIV for the ransomware and address that issue.

Addendum, 30 Oct: As a result of an exchange with (and thanks to) David Cowen, I think that I can encapsulate the ransomware business model to the following statement:

The red-side business model for ransomware converts a high number of low-value, blue-side assets into high-value attacker targets, with a corresponding high ROI (for the attacker).

What does that mean?  I've asked a number of folks who are not particularly knowledgeable in infosec if there are any files on their individual systems without which they could simply not do their jobs, or without access to which their daily work would significantly suffer.  So far, 100% have said "yes".  Considering this, it's abundantly clear that attackers have their own reciprocal Pyramid of Pain that they apply to defenders; that is, if you want to succeed (i.e., get paid), you need to impact your target in such a manner that it is more cost-effective (and less painful) to pay the ransom than it is to pursue any alternative.  In most cases, the alternative amounts to changing corporate culture.


AmCache.hve

I was working on an incident recently, and while extracting files from the image, I noticed that there was an AmCache.hve file.  Not knowing what I would find in the file, I extracted it to include in my analysis.  As I began my analysis, I found that the system I was examining was a Windows Server 2012 R2 Standard system.  This was just one system involved in the case, and I already had a couple of indicators.

As part of my analysis, I parsed the AppCompatCache value and found one of my indicators:

SYSVOL\downloads\malware.exe  Wed Oct 19 15:35:23 2016 Z

I was able to find a copy of the malware file in the file system, so I computed the MD5 hash, and pulled the PE compile time and interesting strings out of the file.  The compile time was  9 Jul 2016, 11:19:37 UTC.
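
For reference, both of these artifacts can be parsed with RegRipper; assuming current copies of the plugins, the commands take the form:

rip.pl -r SYSTEM -p appcompatcache
rip.pl -r AmCache.hve -p amcache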

I then parsed the AmCache.hve file and searched for the indicator, and found:

File Reference  : 28000017b6a
LastWrite          : Wed Oct 19 06:07:02 2016 Z
Path                   : C:\downloads\malware.exe
SHA-1               : 0000
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z

File Reference   : 3300001e39f
LastWrite           : Wed Oct 19 15:36:07 2016 Z
Path                    : C:\downloads\malware.exe
SHA-1                : 0000
Last Mod Time2: Wed Oct 19 15:35:23 2016 Z

File Reference  : 2d000017b6a
LastWrite          : Wed Oct 19 06:14:30 2016 Z
Path                   : C:\Users\\Desktop\malware.exe
SHA-1               : 0000
Last Mod Time  : Wed Aug  3 13:36:54 2016 Z
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z
Create Time       : Wed Oct 19 06:14:20 2016 Z
Compile Time    : Sat Jul  9 11:19:37 2016 Z

All of the SHA-1 hashes were identical across the three entries.  Do not ask for the hashes...I'm not going to provide them, as this is not the purpose of this post.

What this illustrates is the value of what can be derived from the AmCache.hve file.  Had I not been able to retrieve a copy of the malware file from the file system, I would still have a great deal of information about the file, including (but not limited to) the fact that the same file was on the file system in three different locations.  In addition, I would also have the compile time of the executable file.