Pages

Sunday, December 31, 2017

WindowsIR 2017

I thought that for my final post of 2017, I'd pull together some loose ends and tie off some threads from throughout the year.  To do that, I went through the draft blog posts I had sitting around, pulled out the content, and put it all into this one final post for the year.

Investigating Windows Systems
Writing for my next book, Investigating Windows Systems, is going well.  I've got one more chapter to complete and get to my tech reviewer, and the final manuscript is due in April 2018.  The writing is coming along really well, particularly given everything that's gone on this year.

IWS is a departure from my previous books, in that instead of walking through artifacts one after another, this book walks through the process of analysis, using currently available CTF and forensic challenges posted online, and specifically calling out analysis decisions along the way.  For example, at one point in the analysis, why did I opt to take this action, rather than that one?  Why did I choose to pivot on this IP address, rather than run a scan?

IWS is not a book about the basics.  From the beginning, I start from the point that whoever is reading the book knows about the MFT, understands what a "timeline" is, etc.  Images and analysis scenarios are somewhat limited, given that I'm using what's already available online, but the available images do range from Windows XP through Windows 10.  In several instances in the book, I mention creating a timeline, but do not include the exact process in the text of the book, for a couple of reasons.  One is that I've already covered the process.  The other is that this is the process I use; I don't require anyone to use the exact same process, step by step.  I will include information such as the list of commands used in online resources in support of the book, but the reader can create a timeline using any method of their choosing.  Or not.

Why Windows XP?  Well, this past summer (July, 2017), I was assisting with some analysis work for NotPetya, and WinXP and 2003 systems were involved.  Also, the book is about the analysis process, not about tools.  My goal is, in part, to illustrate the need for analysts to choose a particular tool based on their understanding of the image(s) being analyzed and the goals of the analysis.  Too many times, I've heard, "...we aren't finished because vendor tool X kept crashing, and we kept restarting it...", without that same analyst having a reasoned decision behind running that tool in the first place.

Building a Personal Brand in InfoSec
I've blogged a couple of times this year (here, and here) on the topic of "getting started" in the cyber security field.  In some cases, thoughts along these lines are initiated by a series of questions in online forums, or some other event.  I recently read this AlienVault blog post on the topic, and I thought I'd share my thoughts on the topic, with respect to the contents of the article itself.  The author makes some very good points, and I saw an opportunity to tie their comments in with my own.

- Blogging is a good way to showcase your knowledge
Most definitely.  Not only that, a blog post is a great way to showcase your ability to write a coherent sentence.  Sound harsh?  Maybe it is.  I've been in the infosec/cybersecurity industry for about 20 yrs, and my blog goes back 13+ yrs.  This isn't me bragging, just stating a fact. When I look at a blog article from someone, anyone...new to the industry, been in the industry for a while, or returning to blogging after being away for a while...the first thing that stands out in my mind isn't the content as much as it is the author's ability to communicate.  This is something that is not unique to me; I've read about it in books such as "The Articulate Executive in Action", and it's all about that initial, emotional, visceral response.

There are a LOT of things you can use as a basis for writing a blog post.  The first step is understanding that you do not have to come up with original or innovative content.  Not at all.  This is probably the single most difficult obstacle to blogging for most folks.  From my experience working on several consulting teams over two decades, it's the single most used excuse for not sharing information amongst the team itself, and I've also seen/heard it used as a reason for not blogging.

You can take a new (to you) look at a topic that's been discussed.  One of the biggest misconceptions (or maybe the biggest excuse) for folks is that they have nothing to share that's of any significance or value, and that simply is not the case at all.  Even if something you may write about has been seen before, the simple fact that others are still seeing it, or others are seeing it again after an apparent absence, is pretty significant.  Maybe others aren't seeing it due to some change in the delivery mechanism; for example, early in 2016, there was a lot of talk about ransomware, and most of the media articles indicated that ransomware is email-borne.  What if you see some ransomware that was, in fact, email-borne but bypassed protection/detection mechanisms due to some variation in the delivery?  Blah blah ransomware blah blah email blah user clicked on it blah.  Whether you know it or not, there's a significant story there.  Or, maybe I should say, whether you're willing to admit it to yourself or not, there's a significant story there.

Don't feel that you can create your own content?  You can write book reviews, or reviews of individual chapters of books.  I would caution you, however...a "book review" that consists of "...chapter 1 covers blah, chapter 2 covers blah..." is NOT a review at all.  It's the table of contents.  Even when reviewing a single chapter, there's so much to be said...what was it that the author said, or how did they say it?  How did what you read in the pages impact you, or your daily SOP?  Too many times I've seen reviews that consist of little more than a table of contents, and those aren't abundantly helpful.

- Conferences are an absolute goldmine for knowledge...
They definitely are, and the author makes a point of discussing what can be the real value of conferences, which is the networking aspect. 

A long time ago, I found that attending conference presentations felt like the first day of high school.  I'd confidently walk into a room, take my seat, and within a few minutes of the presenter starting to speak, begin to wonder if I was in the correct room.  It often turned out that either the presentation title was a placeholder, or that I had completely misinterpreted the content from the five words used in the title.  Okay, shame on me.  Oftentimes, however, presenters don't dig into the detail that I would've hoped for; on the other hand, when I've presented on what something looks like (i.e., lateral movement, etc.), I've received a great deal of feedback, and when I've attended such presentations, I've had a great deal of feedback (and questions) myself.  Often a lot of the detail...what did you see, what did it look like, why did you decide to go left instead of right...comes from the networking aspect of conferences.

- Certifications are a hot topic in the tech industry, and they are HR’s best friend for screening applicants.
True, very true.  But this goes back to the blogging discussed above...I'm not so much interested in the certifications one has, as much as I'm interested in how the knowledge and skills are used and applied.  Case in point, I once worked with someone (several years ago) who went off to a week-long training course in digital forensics, and within 6 weeks of the end of the course, wrote a report to a client in which they demonstrated their lack of understanding of the MFT.  In a report.  To a client.

So, yes, certifications are viewed as providing an objective measure of an applicant's potential abilities, but the most important question to ask is, when someone is sent off to training, does anyone hold them accountable for what they learned?  When they come back from the training, do they have new tools, processes, and techniques that they then employ, and you can see the result of this in their work?  Or, is there little difference between the "before" and "after"?

On Writing
I mentioned above that I'm working on completing a book, and over the last couple of weeks/months, I've run across some excellent advice for folks who want to write books that cover 'cyber' topics.  Ed Amoroso shared some great Advice For Cyber Authors, and Brett Shavers shared a tip for those thinking of writing a DFIR book.

Fast Times at DFIR High
One of the draft posts I'd started earlier this year was one in which I would recount the "funny" things that I'd seen go on over the past 20 years in infosec (what we used to call "cybersecurity"), and specifically within DFIR consulting work.  I'm sure that a great deal of what I could recount would ring true for many, and that many would also have their own stories, or their own twists on similar stories.  I'd written vignettes down in the draft, things like a client calling late at night to declare an emergency and demand that someone be sent on-site before they provided any information about the incident at all, only to have someone show up and be told that they could go home (after getting to the airport and traveling on-site, of course). Or things like a client sending password-protected archives containing "information you'll want", but no other description...requiring that they all be opened and examined, questions asked as to their context, etc.

I opted to not continue with the blog article, and I deleted it, because it occurred to me that it wasn't quite as humorous as I would have hoped.  The simple fact is that the types of actions I experienced in, say, 2000, are still being seen by active IR consultants today.  I know because I've talked to some IR folks who've experienced them.  Further, it really came down to a "...cast the first stone..." issue; who are we (DFIR folks, responders, etc.) to chide clients for their actions (however inappropriate or misguided we may see them to be...) when our own actions are not pristine?  Largely, as a community (and I'm not including specific individuals in this...), we don't share experiences, general findings, etc., with each other.  Some do, most don't...and life goes on.

Final Thought for 2017
EDR. Endpoint visibility is going to become an even more important issue in the coming year.  Yes, I've refrained from making predictions, and I'm not making one right now...I'm simply saying that the past decade has shown the power of early detection, and the differences in outcomes when an organization understands that a breach has likely already occurred, and actively prepares for it.  Just this year, we've seen a number of very significant breaches make it into the mainstream media, with several others being mentioned but not making, say, the national news.  Even more (many, many more) breaches occur without ever being publicly announced in any manner.

Equifax is one of the breaches that we (apparently) know the most about, and given what's been shared publicly, EDR would've played a significant role in avoiding the issue overall.  Someone monitoring the systems would've seen something suspicious launched as a child process of the web server (such filters or alerts should be commonplace) and been able to initiate investigative actions immediately.
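Just to illustrate the kind of filter or alert I'm talking about, below is a minimal sketch in Python (using the third-party psutil module) that polls for suspicious child processes of a web server.  The process names here are illustrative assumptions rather than a complete list, and a real EDR product hooks process creation in the kernel rather than polling:

import time
import psutil

# Illustrative, not exhaustive: web server processes to watch, and
# child process names that shouldn't normally be spawned by them.
WEB_SERVERS = {"w3wp.exe", "httpd.exe", "nginx.exe"}
SUSPICIOUS  = {"cmd.exe", "powershell.exe", "whoami.exe", "net.exe"}

seen = set()
while True:  # simple polling loop; a real sensor hooks process creation
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            parent = proc.parent()
            if (parent and parent.name().lower() in WEB_SERVERS
                    and proc.info["name"].lower() in SUSPICIOUS
                    and proc.info["pid"] not in seen):
                seen.add(proc.info["pid"])
                print("ALERT: %s (PID %d) spawned %s (PID %d)"
                      % (parent.name(), parent.pid,
                         proc.info["name"], proc.info["pid"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    time.sleep(5)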

EDR gives visibility into endpoints, and yes, there is a great deal of data headed your way; however, there are ways to manage this volume of data.  Employ threat intelligence to alert you to those actions and events that require your attention, and have a plan in place for how to quickly investigate and address them.  The military refers to this as "immediate actions", things that you learn in a "safe" environment and that you practice in adverse environments, so that you can effectively employ them when required.  Ensure that you cover blind spots; not just OS variants, but also, what happens when the adversary moves from command line access (via a Trojan or web shell) to GUI shell (Windows Explorer) access?  If your EDR solution stops providing visibility when the adversary logs in via RDP and opens any graphical application on the desktop, you have a blind spot.

Effective use of EDR solutions can significantly reduce costs associated with the current "state of the breach"; that is, getting notified by an external third party that you've suffered a breach, weeks or months after the initial compromise occurred.  With more immediate detection and response, your requirement to notify (based on state laws, regulatory bodies, etc.) will be obviated.  Which would you rather be...the next Equifax, or the organization that, because it budgeted for a solution, can go to regulatory bodies and state: yes, we were breached, but no critical/sensitive data was accessed?

Sunday, December 24, 2017

"Sophisticated" Attacks

I ran across an interesting article recently, which described a "highly sophisticated" email scam, but didn't go into a great deal of detail (or any level of detail, for that matter) as to how the scam was "sophisticated".

To be honest, I'm curious...not morbidly so, but more in an intellectual sense.  What is "sophisticated"?  I've been in meetings where attacks were described as sophisticated, and then, after conducting the actual incident response or root cause investigation, found that the attack was perpetrated via a system running the IIS web server and Terminal Services.  Web shells were found in the web content folder, and the Windows Event Logs showed a lengthy history of failed login attempts via RDP.

Very often, what is said to be "sophisticated" is framed as such due to a dearth of actual data, which occurred for a number of reasons, most of which trace back to a simple lack of preparedness on the part of the organization.  For example, were any attempts made to prevent the attack from succeeding?  I've seen a DNS server, per the client's description, running IIS and Terminal Services, with the web server logs chock full of scan attempts, and the Windows Event Logs full of failed login attempts...yet nothing was done, such as to say, "yeah, hey, this is a DNS server, we shouldn't be running these other services...", etc.  The system lacked any sort of configuration modifications "out of the box" to allow for things like increased logging, increased detail in logging, or instrumentation to provide visibility.  As such, in investigating the system, there was precious little we could determine with respect to things like lateral movement, or other "actions on the objective".

At this point, my report is shaping up to include words such as "dearth of available data due to a lack of instrumentation and visibility...", where the messaging at the client level is "this was a highly sophisticated attack". 

I've also seen ransomware attacks described as "highly sophisticated", as they apparently bypassed email protections.  In such cases, the data that illustrated the method of access and propagation of the ransomware itself was available, but apparently, it's easier to just say that the attack was "highly sophisticated", and leave it at that.  After all, who's really going to ask for anything beyond that?

After reading that first article, I then ran a quick Google search, and two of the six hits on the first page, without scrolling down, included the term "myth" in the URL title.  In fact, this Security Magazine article from April 2017 is more in line with my own experience in two decades of cybersecurity consulting.  Several years ago, I was working with a client, and during the first few minutes of the first meeting, one of the IT staff made a point of emphasizing the fact that they did NOT use communal admin accounts.  I noted this, but also noted the emphasis...because later in the investigation, we found that the adversary had left a little "treat".  At one point, the adversary had pushed out RAT installers to fewer than a dozen systems via at.exe, and about a week later, pushed a close variant of that same RAT installer to one other system; however, this installer had a different C2 configuration, so the low-level indicators (C2 IP address, file hash) that the client was looking for were obviated.  This installer was pushed out to the StartUp folder for the...wait for it..."admin" account.  Yes, that is the name of the account..."admin".  The "admin" profile already existed on a number of systems, meaning that the account had been set up in the domain and then used to log into several systems.  The RAT installer was likely pushed out to that one system, in the "admin" profile's StartUp folder, as a means of providing the adversary with access back into the infrastructure in the event that the other RATs were discovered during IR activities.

As in the above described instance, a great many of the incidents I (and others) have responded to are not terribly sophisticated at all.  In fact, the simple elegance of some of these incidents is impressive, given that the adversary knows more about the function of, say, Windows networking than what you can achieve through the GUI. 

For example, if an adversary modifies the "hosts" file on a Windows system, is that "highly sophisticated"?  Here is a MS article on host name resolution order; it was updated just shy of a year ago (as of this writing), but take a close look at the OS versions referenced in the article.  So, are you not seeing suspicious domain queries in your DNS logs because there is no malware on your infrastructure, or is it because someone took advantage of Microsoft capabilities that go back over 20 yrs? 
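As a quick illustration of the triage side of this, here's a minimal sketch that flags non-comment entries in the "hosts" file...anything beyond the stock localhost entries deserves a second look:

from pathlib import Path

# Name resolution consults the hosts file before DNS, so entries here
# never show up as queries in DNS logs.
hosts = Path(r"C:\Windows\System32\drivers\etc\hosts")
for line in hosts.read_text().splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip blanks and comments
    fields = line.split()
    if fields[0] not in ("127.0.0.1", "::1"):
        print("Review:", line)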

Speaking of archaic capabilities, anyone remember this one?

My point is, what is truly a "highly sophisticated" attack?

Monday, October 30, 2017

Updates

Case Studies
Brett Shavers posted a fascinating article recently, in which he discussed the value of case studies throughout his career, going back to his LEO days. I've thought for some time now that this is one of the great missed opportunities of the DFIR community: that more analysts don't share what they've seen or done.  At the same time, I've seen time and again the benefit that comes from doing something like this...we get a different perspective, or we learn a little bit more, or something new, *OR* we learn that we were incorrect about something, and find out what the real deal is...and this can have a huge impact on our future work.  Throughout the time I've been involved in the community, I've heard "...but I don't want to be corrected..." more than once, and to be honest, I'm not at all sure why that is.  God knows that I don't know everything, and if I've gotten something wrong, I'd love to know what it is so that I don't keep making the same mistake over and over again.

Taking Brett's blog post a bit further (whether he knew it or not), Phill Moore shared a post about documenting his work.

Cisco Talos Intelligence - Decoy Documents
The Cisco Talos Intel team recently blogged regarding the use of decoy documents during a real cyber conflict.  What I found fascinating about this write-up was the different approach that was taken with the documents; specifically, the payload is base64-encoded and spread across the document metadata.  When the document is opened, a macro reportedly extracts the segments from the metadata variables and concatenates them together, resulting in a base64-encoded PE file.  The folks at Talos were kind enough to provide hashes for three decoy documents, which were all available on VirusTotal (searching for some of the metadata items returned this Hybrid Analysis link).  As such, I took the opportunity to run them through my own tools (wmd.pl, oledmp.pl) to see what I could see.
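To make the reassembly technique concrete, here's a rough sketch of the general idea; the specific metadata fields read here (comments, subject, keywords) and the file name are assumptions purely for illustration, since the actual documents spread the payload across their own set of metadata variables:

import base64
import olefile  # third-party module for OLE/compound files

ole = olefile.OleFileIO("decoy.doc")  # hypothetical sample name
meta = ole.get_metadata()

# Candidate fields that could carry payload segments, in order;
# the real documents used their own metadata variables.
segments = [meta.comments, meta.subject, meta.keywords]
blob = b"".join(s if isinstance(s, bytes) else s.encode()
                for s in segments if s)
pe = base64.b64decode(blob)
print("MZ header found:", pe[:2] == b"MZ")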

The first interesting aspect of the documents is that even though the three documents were different in structure, some of the metadata (including time stamps) were identical.

For example, from wmd.pl, the three documents all contained the same information illustrated below:

Authress     : Rafael Moon
LastAuth     : Nick Daemoji
RevNum       : 7
AppName      : Microsoft Office Word
Created      : 03.10.2017, 01:36:00
Last Saved   : 04.10.2017, 14:20:00

From oledmp.pl, even though the first document had markedly different streams from the other two, the root entry and directory entries all had the same time stamp and CLSID:

Root Entry  Date: 04.10.2017, 14:20:11  CLSID: 00020906-0000-0000-C000-000000000046

So, given the combination of identical metadata and time stamps, it's possible that the values were modified or manipulated.  I'd say that this is supported by the fact that some of the metadata values were modified to include the base64-encoded payload segments.

Remember, you can also use Didier Stevens' oledump.py to extract the compressed VBA macros.  For example, to view the complete VBA script from the second document, I just typed the following command:

oledump.py d:\cases\maldoc\apt28_2 -s 8 -v

Shoop*, there it is...the decompressed macro script.  Very nice.  Maybe Cory and I should reprise DFwOST and do a second edition, including stuff like this...going beyond just tools for forensics, and looking at tools that allow you to parse documents, etc.

Something I've mentioned before is the set of values I saw listed in the output of 'strings' run across the documents:

CMG="6B69C77E3682368232863286"
DPB="D6D47AEB8E3DE45AE45A1BA6E55A4B8B26955AAD8AB69D9BB0FF733C04402A5B44526BEF14E0D0"
GC="4143EDEEEEEEEEEE"

Again, I've seen these before in Office documents that contain macros, most times with different values.  The definition of these values can be found in the MS-OVBA documentation, under the PROJECT stream example; I haven't yet been able to determine how these fields are populated.

LNK Files
I've posted several times over the past couple of months regarding parsing metadata from LNK files that are part of an adversary's toolkit; those either sent to a target as an email attachment, or embedded in another document format, etc.  US-CERT posted an alert recently that I found to be very interesting along the same lines, particularly regarding the fact that the adversary modified Windows shortcut files as a means of credential theft. The big take-away from this information (for me) is that the adversary took something that rudimentary about how the operating system functions and used it as a means for collecting credentials.

Of course, the "elephant in the room" is, why are organizations still allowing out-bound SMB traffic?
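On the hunting side, below is a rough triage sketch that flags LNK files embedding a UNC path, which is the crux of the credential theft trick...Windows will attempt SMB authentication in order to fetch, say, a remote icon.  A naive string scan like this is just a stand-in for a proper LNK parser:

import re
import sys
from pathlib import Path

# UNC paths (\\host\share\...) in both ASCII and UTF-16LE encodings
ascii_unc = re.compile(rb"\\\\[\w.\-]+\\[^\x00]+")
utf16_unc = re.compile(rb"(?:\\\x00){2}(?:[\w.\-]\x00)+")

for lnk in Path(sys.argv[1]).rglob("*.lnk"):
    data = lnk.read_bytes()
    hits = ascii_unc.findall(data) + utf16_unc.findall(data)
    if hits:
        print(lnk, "->", hits[0][:80])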

USB Devices
There's been a good bit of activity online recently regarding USB devices and Windows 10 systems.  Eric Zimmerman blogged about some changes to the AmCache.hve file that include a significant amount of information about devices.  Matt Graeber tweeted that "Microsoft-Windows-Partition/Diagnostic EID 1006 gives you a raw dump of the partition table, MBR, and VBR upon drive insertion", and @Requiem_fr tweeted that USB device info can be found in the Microsoft-Windows-Kernel-PnP%4Configuration.evtx Windows Event Log.

A little over 12 years ago, Cory Altheide and I published a paper on tracking USB devices on Windows XP systems, and it's been pretty amazing to see not only how this sort of thing has grown over time, but also to see the number of artifacts that continue to be generated as new versions of Windows have been released.

*Please excuse my taking artistic liberties to work in a Deadpool reference...

Saturday, October 14, 2017

Stuff

Powershell
In preparing to do some testing in a Windows 7 VM, I decided to beef up PowerShell to ensure that artifacts are, in fact, created.  I wanted to make sure anything hinky that was done in PowerShell was recorded in some way.

The first step was to upgrade PowerShell to version 5.  I also found a couple of sites that recommended Registry settings to ensure that Module Logging and Script Block Logging were enabled, as well.
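For reference, here's a minimal sketch (run elevated) that sets the documented Group Policy values using Python's winreg module; these are the ModuleLogging and ScriptBlockLogging policy paths those sites recommend:

import winreg

BASE = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell"

def set_dword(subkey, name, value):
    # Create the policy key if necessary and set a REG_DWORD value
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, BASE + "\\" + subkey)
    winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
    winreg.CloseKey(key)

set_dword("ModuleLogging", "EnableModuleLogging", 1)
set_dword("ScriptBlockLogging", "EnableScriptBlockLogging", 1)

# Log all modules: a "*" value under ModuleNames
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                       BASE + r"\ModuleLogging\ModuleNames")
winreg.SetValueEx(key, "*", 0, winreg.REG_SZ, "*")
winreg.CloseKey(key)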

The idea behind this is that there have been a number of cases I've worked that have involved some sort of obfuscated PowerShell...Meterpreter, stuff loaded from the Registry, stuff that's come in via LNK files attached to emails (or embedded in email attachments), etc.  Heck, not just cases I've worked...look at social media on any given day and you're likely to see references to this sort of thing.  So, in an effort to help clients, one of the things I want to do is to go beyond just recommending "update your PowerShell" or "block PowerShell all together", and be able to show what the effect of updating PowerShell will likely be.

DDE
There's been a good bit of info floating around on Twitter this past week regarding the use of DDE in Office documents to launch malicious activity.  I first saw this mentioned via this NViso blog post, then I saw this NViso update (includes Yara rules), and anyone looking into this will usually find this SensePost blog article pretty quickly.  And don't think for a second that this is all there is...there's a great deal of discussion going on, and all you have to do is search for "dde" on Twitter to see most of it.

David Longenecker also posted an article on the DDE topic, as well.  Besides the technical component of his post, there's another aspect of David's write-up that may go unnoticed...look at the "Updated 11 October" section.  David could have quietly updated the information in the post, but instead went ahead and highlighted the fact that he'd made a mistake and then corrected it.

USB Devices
Matt Graeber recently tweeted about data he observed in the Microsoft-Windows-Partition/Diagnostic Windows Event Log, specifically events with ID 1006; he said, "...gives you a raw dump of the partition table, MBR, and VBR upon drive insertion."  Looking at records from that log, in the Details view of Event Viewer, there are data items such as Capacity, Manufacturer, Model, SerialNumber, etc.  And yes, there's also raw data from the partition table, MBR, and VBR, as well.

So, if you need to know something about devices connected to a Windows 10 system, try parsing the data from this *.evtx file.  What you'll end up with is not only which devices were connected, but when, and how often.
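For example, here's a minimal sketch using Willi Ballenthin's python-evtx module to pull the EID 1006 records; the regex-based field extraction is a crude stand-in for proper XML parsing, and the field names reflect what appears in the Details view:

import re
from Evtx.Evtx import Evtx  # pip install python-evtx

LOG = (r"C:\Windows\System32\winevt\Logs"
       r"\Microsoft-Windows-Partition%4Diagnostic.evtx")

with Evtx(LOG) as log:
    for record in log.records():
        xml = record.xml()
        if ">1006<" not in xml:  # only EID 1006 records
            continue
        fields = dict(re.findall(r'<Data Name="(\w+)">([^<]*)</Data>', xml))
        print(fields.get("Manufacturer"), fields.get("Model"),
              fields.get("SerialNumber"))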

Registry
Eric Zimmerman recently tweeted about the RecentApps key in the NTUSER.DAT hive; once I took a look at the key contents, I was pretty sure I was looking at something not too different from the "old" UserAssist data...pretty cool stuff.  I also found via Twitter that Jason Hale had blogged about the key, as well.

So, I wrote a recentapps.pl and a recentapps_tln.pl plugin, and uploaded them to the repository.  I only had one hive for testing, so YMMV.  I do like the TLN plugin...pushing that data into a timeline can be illuminating, I'm sure, for any case involving a Windows 10 system where someone interacted with the Explorer shell.  In fact, creating a timeline using just the UserAssist and RecentApps information is pretty illuminating...using information from my own NTUSER.DAT hive file (extracted via FTK Imager), I see things like:

{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (3)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (0)
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE RecentItem: F:\ch5\notes.txt

...and...

{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe RecentItem: C:\Users\harlan\Desktop\speak.vbs
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (7)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (0)

The three entries in each of the two listings above occurred within the same second, and provide a good bit of insight into the activities of the user.  For example, this augments the information provided by the RecentDocs key, by providing the date and time at which files were accessed, rather than just that of the most recently accessed file.  Add to this timeline entries from the DestList stream from JumpLists, as well as entries from AmCache.hve, etc., and you have a wealth of data regarding program execution and file access artifacts for a user, particularly where detailed process tracking (or some similar mechanism, such as Sysmon) is not enabled.
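For anyone who hasn't worked with TLN data before, merging the plugin output into a sorted timeline is straightforward; here's a minimal sketch that sorts events from one or more five-field TLN files (time|source|system|user|description, with the time as a Unix epoch):

import sys
from datetime import datetime, timezone

events = []
for path in sys.argv[1:]:
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            if len(fields) == 5 and fields[0].isdigit():
                events.append((int(fields[0]), fields[1:]))

# Sort on the epoch time and print a human-readable timeline
for epoch, rest in sorted(events):
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print(ts.strftime("%Y-%m-%d %H:%M:%S Z"), "|".join(rest))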

Eric also posted recently about changes to the contents of the AmCache.hve file...that file has, so far, been a great source of artifacts, so I'll have to go dig into Eric's post and update my own parser.  From reading Eric's findings, it appears that there's been some information added regarding devices and device drivers, which can be very valuable.  So, a good bit of the original data is still there (Eric points out that some of the MFT data is no longer in the hive file), and some new information has been added.

Friday, October 06, 2017

Updates

Ransomware
Eric over at Carbon Black recently posted regarding the Kangaroo ransomware.   Here are some cool things that Eric points out about the ransomware:

1. It's GUI-based, and the folks using it are infecting un-/under-protected RDP servers.

2. The ransomware time-stomps itself.  While on the surface this may seem to make the ransomware difficult to find during DFIR, that's not really the case at all, and to be honest, I'm not at all sure why this step was taken.

3. The ransomware clears the System and Security Event Logs, and removes VSCs.  As with the time stomping, I'm sure that clearing the Event Logs is intended to make things difficult, but to be honest, most folks who've done this kind of work know (a) where to look for other artifacts, and (b) how to recover cleared Windows Event Logs.

Eric's technical analysis doesn't mention a couple of things that are specific to ransomware.  For example, while Eric does state that the ransomware is deployed manually, there's no discussion of the time frame in which the ransomware is deployed after the RDP server is accessed, nor whether there were any attempts at network mapping or privilege escalation.  I'm sure this is the result of the analysis being based on samples of the ransomware, rather than on responding to engagements involving this ransomware.  Earlier this spring, I saw two different ransomware engagements that were markedly different.  While both involved compromised RDP servers, in one, the bad guy got in, mucked about for a week (7 days total, albeit not continuously) installing Opera, Firefox, and a GIF image viewer, and then launched ransomware without ever attempting to escalate privileges.  As such, only the files in the compromised profile were affected.  On the other hand, in the second instance, the adversary accessed the RDP server and within 10 minutes escalated their privileges and launched the ransomware.  In this case, the entire RDP server was affected, as were other systems within the infrastructure.

Types of Ransomware
Speaking of ransomware, I ran across this article from CommVault recently, which discusses "5 major types" of ransomware.  Okay, that sparked my interest...that there are "5 major types". 

Once I started reading the article, I became even more interested, particularly in the fourth type, identified as "Samsam".  Okay, this is the name of a variant or family, not so much what I'd refer to as a "type" of ransomware...but okay.  Then I read this statement:

Once inside the network, the ransomware looks for other systems to attack.

I've worked with Samsam, or "Samas" ransomware for a while.  For example, I authored this blog post (note: prior employment) based on analysis of about half a dozen different ransomware engagements where Samas was deployed.  In all of those cases, a JBoss server was exploited (using JexBoss), and an adversary mapped the network (in several instances, using Hyena) before choosing the systems to which Samas was then deployed.  More recently (i.e., this spring), the engagements I was aware of involved RDP servers being compromised (credentials guessed), and a much shorter timeframe between initial access and the ransomware being deployed. 

My point is, from what I've seen, the Samas ransomware doesn't do all the things that some folks say it does.  For example, I haven't yet seen where the ransomware looks for other systems.  Further, going back to Microsoft's own description of the ransomware modus operandi, I saw no evidence that the Samas ransomware "scans the network"...I did, however, find very clear evidence that the adversary did so.  So, a lot of what is attributed to the ransomware itself is, in reality (and based on looking at the data), the actions of a person at a keyboard.

If you want to see some really excellent information about the Samas ransomware, check out Kevin Strickland's blog post on the topic.  Kevin did some really great work, and I really can't say enough great things about the work he did, and what he shared.

Windows Registry
Over on the Follow the White Rabbit blog, @_N4rr34n6_ has an interesting article discussing the Windows Registry.  The article addresses setting up and using RegRipper and its various components, as well as other tools such as Corey Harrell's auto_rip and Phill Moore's RegRipper GUI, both of which clearly provide a different workflow placed over the basic code. 

Podcasts
I've had the honor and privilege to be asked to be involved on a couple of podcasts recently, and I thought I'd share the links to all of them in one place, for those who are interested in listening:

Doug Brush's CyberSecurity Interviews - I've followed Doug's CyberSecurity Interviews from the beginning, and greatly appreciated his invitation and opportunity to engage

Down the Security Rabbithole with Rafal and James; thanks to both of these fine gentlemen for offering me the opportunity to be part of the work they're doing

Nuix Unscripted - Corey did a really great job moderating Chris and me, which brought things full circle; not only had Chris and I worked together, but Chris was one of the very first folks interviewed by Doug Brush...

Investigations
Chris Woods over at Nuix (transparency: this is my employer) posted an excellent article regarding three best practices for increasing the efficiency of examinations.  Interestingly enough, these are all things that I've endorsed over the years...defining clear analysis goals, collaboration, and using what has been learned from previous investigations.  I want to say something about "great minds", but the simple fact is that these are all "best practices" that simply make sense.  It's as simple as that.

WifiPasswordReveal
I ran across something really fascinating today..."wait," you ask, "more fascinating than making your computer recite lines from the Deadpool movie??"  Well...no...but almost!  Here is a fascinating article that not only illustrates the steps for revealing Wifi passwords on a Win7+ computer, but also provides a batch file for doing so!  How cool is that?
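The batch file boils down to a couple of netsh commands; here's a rough sketch of the same approach in Python (elevated prompt required, and the regexes assume English-locale netsh output):

import re
import subprocess

out = subprocess.run(["netsh", "wlan", "show", "profiles"],
                     capture_output=True, text=True).stdout
for name in re.findall(r"All User Profile\s*:\s*(.+)", out):
    name = name.strip()
    # key=clear causes netsh to display the stored key material
    detail = subprocess.run(
        ["netsh", "wlan", "show", "profile", "name=" + name, "key=clear"],
        capture_output=True, text=True).stdout
    key = re.search(r"Key Content\s*:\s*(.+)", detail)
    print(name, "->", key.group(1).strip() if key else "(no key shown)")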

LNK Metadata
A bit ago, I'd taken a look at a Windows shortcut/LNK file from a campaign someone had blogged about, and then submitted a Yara rule to detect submissions to VirusTotal, based on the MAC address, volume serial number, and SID embedded in the LNK file.  This was based on an LNK file that had been sent to victims as an attachment.

The Yara rule I submitted a while back looks like this:

rule ShellPhish
{
    strings:
        $birth_node = { 08 D4 0C 47 F8 73 C2 }
        $vol_id     = { 7E E4 BC 9C }
        $sid        = "2287413414-4262531481-1086768478" wide ascii

    condition:
        all of them
}

So, pretty straightforward.  The thing is, over the past few days, I've seen a pretty significant up-tick in responses from the retro hunt, indicating a corresponding up-tick in submissions to VT.  Up to this point, I'd been seeing maybe one or two detections (again, based on submissions) a week; I've received a few dozen in the past two days alone.  This up-tick in responses is an interesting change, particularly because I'm not seeing a corresponding mention of campaigns utilizing LNK files as attachments (to emails, or embedded in documents, etc.).

A couple of things I haven't yet done are to note the first submission dates for the items, as well as the countries from which they were submitted, and then to download the LNK files themselves to parse out the command lines and note the differences.

So, why am I even mentioning this?  Well, this goes back to Jesse Kornblum's premise of using every part of the buffalo, albeit not directly associated with memory analysis in this case.  The metadata in file formats such as documents and LNK files can be used to develop insight based on relationships, which can lead to attribution based on further developing the threat intelligence you already have available.

Thursday, September 28, 2017

Something on the fun/irreverent side

A while back, I read about some ransomware that, instead of leaving a ransom note, accessed the speech functionality of Windows systems to tell the user that the files on their system had been encrypted.  Hearing that, I started doing some research and put together a file that can play selected speech through the speakers of my laptop.  I thought it might be fun to take a different approach with this blog post and share the file.

Copy-paste the below file into an editor window, and save the file as 'speak.vbs' or (as I did) 'deadpool.vbs' on your desktop. Then simply double-click the file.

Dim sapi
Set sapi = CreateObject("SAPI.SpVoice")
Set sapi.Voice = sapi.GetVoices.Item(0)   ' select the first installed voice
sapi.Rate = 2                             ' speaking rate: -10 (slowest) to 10 (fastest)
sapi.Volume = 100                         ' volume: 0 to 100
sapi.Speak "This shit's gonna have NUTS in it!"
sapi.Speak "It's time to make the chimichangas!"
sapi.Speak "hashtag drive by"

Windows 7 has just one 'voice', so there's no real need for line 3; Windows 10 has two voices by default, so change the '0' to a '1' to switch things up a bit.

The cool thing is that you can attach a file like this to different actions on your system, or you can have fun with your friends (a la the days of SubSeven) and put a file like this in one of the autorun locations on their system.  Ah, good times!

Tuesday, September 26, 2017

Updates

ADS
It's been some time since I've had an opportunity to talk about NTFS alternate data streams (ADS), but the folks at Red Canary recently published an article where ADSs take center stage.  NTFS alternate data streams go back a long way, all the way to the first versions of NTFS, and were a 'feature' included to support resource forks in the HFS file system.  I'm sure that with all of the other possible artifacts on Windows systems today, ADSs are not something that is talked about at great length, but it is interesting how applications on Windows systems make use of ADSs.  What this means to examiners is that they really need to understand the context of those ADSs...for example, what happens if you find an ADS named "Zone.Identifier" attached to an MS Word document or to a PNG file, and it is much larger than 26 bytes?
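For anyone who hasn't played with ADSs, here's a quick sketch showing how trivial they are to create and read on NTFS; the file name is made up for illustration, and the size check reflects the point above about a Zone.Identifier stream that's far larger than it should be:

# Run on an NTFS volume; the base file name is hypothetical.
open("report.docx", "a").close()  # ensure the base file exists

with open("report.docx:Zone.Identifier", "w") as f:  # write the ADS
    f.write("[ZoneTransfer]\nZoneId=3\n")

with open("report.docx:Zone.Identifier") as f:       # read it back
    data = f.read()

print("stream length:", len(data))
if len(data) > 100:
    print("unusually large Zone.Identifier stream...take a closer look")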

Equifax
Some thoughts on Equifax...

According to the Equifax announcement, the breach was discovered on 29 July 2017.  Having performed incident response activities for close to 20 years, it's no surprise to me at all that it took until 7 Sept for the announcement to be made.  Seriously.  This stuff takes time to work out.  Something that does concern me is the following statement:

The company has found no evidence of unauthorized activity on Equifax's core consumer or commercial credit reporting databases.

Like I said, I've been responding to incidents for some time, and I've used that very same language when reporting findings to clients.  However, most often that's followed by a statement along the lines of, "...due to a lack of instrumentation and visibility."  And that's the troubling part of this incident to me...here's an organization that collects vast amounts of extremely sensitive data in one place, and they have a breach that went undetected for 3 months.

Unfortunately, I'm afraid that this incident won't serve as an object lesson to other organizations, simply because of the breaches we've seen over the past couple of years...and more importantly, just the past couple of months...that similarly haven't served that purpose.  For a while now, I've used the analogy of a boxing ring, with a line of guys mounting the stairs one at a time to step into the ring.  As you're standing in line, you see that these guys are all getting into the ring, and they apparently have no conditioning or training, nor have they practiced...and each one that steps into the ring gets pounded by the professional opponent.  And yet, even seeing this, no one thinks about defending themselves, through conditioning, training, or practice, to survive beyond the first punch.  You can see it happening in front of you, with 10 or 20 guys in line ahead of you, and yet no one does anything but stand there in the line with their arms at their sides, apparently oblivious to their fate.

Threat Intelligence
Sergio/@cnoanalysis recently tweeted something that struck me as profound...that threat intelligence needs to be treated as a consumer product.

He's right...take this bit of threat intelligence, for example.  This is solely an example, and not at all intended to say that anyone's doing anything wrong, but it is a good example of what Sergio was referring to in his short but elegant tweet.  While some valuable and useful/usable threat intelligence can be extracted from the article, as is the case with articles from other organizations that are shared as "threat intelligence", this comes across more as a research project than a consumer product.  After all, how does someone who owns and manages an IT infrastructure make use of the information in the various figures?  How do illustrations of assembly language code help someone determine if this group has compromised their network?

Web Shells
Bart Blaze created a nice repository of PHP backdoors, which also includes links to other web shell resources.  This is a great resource for DFIR folks who have encountered such things.

Be sure to update your Yara rules!

Sharing is Caring
Speaking of Yara rules, the folks at NViso posted a Yara rule for detecting CCleaner 5.33, which is the version of the popular anti-forensics tool that was compromised to include a backdoor.

Going a step beyond the Yara rule, the folks at Talos indicate in their analysis of the compromised CCleaner that the malware payload is maintained in the Registry, in the path:

HKLM\Software\Microsoft\Windows NT\CurrentVersion\WbemPerf\001 - 004

Unfortunately, the Talos write-up doesn't specify if 001 is a key or value...yes, I know that for many this seems pedantic, but it makes a difference.  A pretty big difference.  With respect to automated tools for live examination of systems (Powershell, etc.), as well as post-mortem examinations (RegRipper, etc.), the differences in coding the applications to look for a key vs. a value could mean the difference between detection and not.
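To illustrate why the distinction matters, here's a minimal sketch using Python's winreg module that checks whether "001" (and friends) exists as a subkey versus as a value beneath WbemPerf; a tool coded to look for one will walk right past the other:

import winreg

PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\WbemPerf"

def check(name):
    try:  # does it exist as a subkey?
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                       PATH + "\\" + name))
        return "key"
    except OSError:
        pass
    try:  # or as a value beneath WbemPerf?
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH)
        winreg.QueryValueEx(key, name)
        winreg.CloseKey(key)
        return "value"
    except OSError:
        return "not found"

for name in ("001", "002", "003", "004"):
    print(name, "->", check(name))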

Ransomware
The Carbon Black folks had a couple of interesting blog posts on the topic of ransomware recently, one about earning quick money, and the other about predictions regarding the evolution of ransomware.  From the second Cb post, prediction #3 was interesting to me, in part because this is a question I saw clients ask starting in 2016.  More recently, just a couple of months ago, I was on a client call set up by corporate counsel when one of the IT staff interrupted the kickoff of the call and wanted to know if sensitive data had been exfiltrated; rather than seeing this as a disruption of the call, I took it as an illustration of the paramount concern behind the question.  However, the simple fact is that even in 2017, organizations that are hit with these breaches (evidently some regulatory bodies are considering a ransomware infection to be a "breach") are neither prepared for a ransomware infection, nor are they instrumented to answer the question themselves. 

I suspect that a great many organizations are relying on their consulting staffs to tell them if the variant of ransomware has demonstrated an ability to exfiltrate data during testing, but that assumption is fraught with issues, as well.  For example, it assumes that someone else has seen and tested that variant of ransomware, particularly when you (as the "victim") are unable to provide a copy of the ransomware executable.  Further, what if the testing environment did not include any data or files that the variant would have wanted to, or was programmed to, exfil from the environment?

Looking at the Cb predictions, I'm not concerned with tracking them to see if they come true or not...my concern is, how will I, as an incident responder, address questions from clients who are not at all instrumented to detect the predicted evolution of ransomware?

On the subject of ransomware, Kaspersky identified a variant dubbed "nRansom", named as such because instead of demanding bitcoin, the bad guys demand nude photographs of the victim.

Attack of the Features
It turns out that MS Word has another feature that the bad guys have found and exploited, once again leaving the good folks using the application to catch up.

From the blog post:
The experts highlighted that Microsoft's Office documentation provides basically no description of the INCLUDEPICTURE field.

Nice. 

Tuesday, September 05, 2017

Stuff

QoTD
The quote of the day comes from Corey Tomlinson, content manager at Nuix.  In a recent blog post, Corey included the statement:

The best way to avoid mistakes or become more effective is to learn from collective experience, not just your own.

You'll need to read the entire post to get the context of the statement, but the point is that this is something that applies to SO much within the DFIR and threat hunting communit(y|ies).  Whether you're sharing experiences solely within your team, or you're engaging with others outside of your team and cross-pollinating, this is one of the best ways to extend and expand your effectiveness, not only as a DFIR analyst, but as a threat hunter, as well as an intel analyst.  None of us knows nor has seen everything, but together we can get a much wider aperture and insight.

Hindsight
Ryan released an update to hindsight recently...if you do any system analysis and encounter Chrome, you should really check it out.  I've used hindsight several times quite successfully...it's easy to use, and the returned data is easy to interpret and incorporate into a timeline.  In one case, I used it to demonstrate that a user had bypassed the infrastructure protections put in place by going around the Exchange server and using Chrome to access their AOL email...launching an attachment infected their system with ransomware.

Thanks, Ryan, for an extremely useful and valuable tool!

It's About Time
I ran across this blog post recently about time stamps and Outlook email attachments, and that got me thinking about how many sources and formats for 'time' there are on Windows systems.

Microsoft has a wonderful page available that discusses various times, such as File Times.  From that same page, you can get more information about MS-DOS Date and Time, which we find embedded in shell items (yes, as in Shellbags).

If nothing else, this really reminds me of the various aspects of time that we have to consider and deal with when conducting DFIR analysis.  We have to consider the source, and how mutable that source may be.  We have to consider the context of the time stamp (for example, what the time stamp in AppCompatCache data actually represents).
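To make a couple of those formats concrete, here's a minimal sketch (with made-up sample values) that decodes a 64-bit FILETIME (100-nanosecond intervals since 1601-01-01 UTC) and a 16-bit MS-DOS date/time pair (local time, 2-second resolution), the latter being the format we find embedded in shell items:

from datetime import datetime, timezone, timedelta

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def from_filetime(ft):
    # FILETIME: 100ns ticks since 1601-01-01 UTC
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

def from_dosdatetime(d, t):
    # DOS date: 7 bits year-1980, 4 bits month, 5 bits day
    # DOS time: 5 bits hour, 6 bits minute, 5 bits seconds/2
    return datetime(1980 + (d >> 9), (d >> 5) & 0xF, d & 0x1F,
                    t >> 11, (t >> 5) & 0x3F, (t & 0x1F) * 2)

print(from_filetime(131495040000000000))   # sample FILETIME value
print(from_dosdatetime(0x4B6F, 0x5ACF))    # sample DOS date/time pair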

Using Every Part of The Buffalo
Okay, so I stole that section title from a paper that Jesse Kornblum wrote a while back; however, I'm not going to be referring to memory, in this case.  Rather, I'm going to be looking at document metadata.  Not long ago, the folks at ProofPoint posted a blog entry that discussed a campaign they were seeing that seemed very similar to something they'd seen three years ago.  Specifically, they looked at the metadata in Windows shortcut (LNK) files and noted something that was identical between the 2014 and 2017 campaigns.  Reading this, I thought I'd take a closer look at some of the artifacts, as the authors included hashes for the .docx ("need help.docx") file, as well as for a LNK file in their write-up.  I was able to locate copies of both online, and began my analysis.

Once I downloaded the .docx file, I opened it in 7Zip and exported all of the files and folders, and quickly found the OLE object they referred to in the "word\embeddings\oleObject.bin" file.  Parsing this file with oledmp.pl, I found a couple of things...first, the OLE date embedded in the file is "10.08.2017, 15:46:51", giving us a reference time stamp.  At this point we don't know if the time stamp has been modified, or what...so let's just put that aside for the moment.
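As an aside, since a .docx file is just a zip archive, the extraction step above can be scripted with nothing but the Python standard library:

import zipfile

with zipfile.ZipFile("need help.docx") as z:
    for name in z.namelist():   # list the internal files and folders
        print(name)
    z.extract("word/embeddings/oleObject.bin", path="extracted")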

Next, I looked at the available streams in the OLE file:

Root Entry  Date: 10.08.2017, 15:46:51  CLSID: 0003000C-0000-0000-C000-000000000046
    1 F..       6                      \ ObjInfo
    2 F..   44511                  \ Ole10Native

Hhhmmm...that looks interesting.


[Figure: Excerpt of oleObject.bin file]
Okay, so we see what they were talking about in the ProofPoint post...right there at offset 0x9c is "4C", the beginning of the embedded LNK file.  Very cool.

This document appears to be identical to what was discussed in the ProofPoint blog post, at figure 16.  In the figure above, we can see a reference to "VID_20170809_1102376.mp4.lnk", and the "word\document.xml" file contains the text, "this is what we recorded, double click on the video icon to view it. The video is about 15 minutes."

I'd also downloaded the file from the IOCs section of the blog post referred to as "LNK object", and parsed it.  Most of the metadata was as one would expect...the time stamps embedded in the LNK file referred to the PowerShell executable from that system, so they were uninteresting.  However, there were a couple of items of interest:

machineID          john-win764
birth_obj_id_node  00:0c:29:ac:13:81 (VMWare)
vol_sn             CC9C-E694

We can see the volume serial number that was listed in the ProofPoint blog, and we see the MAC address, as well.  An OUI lookup of the MAC address tells us that it's assigned to a VMWare interface.  Does this mean that the development environment is a VMWare guest?  Not necessarily.  I'd done research in the past and found that LNK files created on my host system, when I had VMWare installed, would "pick up" the MAC address of the VMWare interface on the host.  What was interesting in that research was that the LNK file remained and functioned correctly long after I had removed VMWare and installed VirtualBox.  Not surprising, I know...but it did verify that at one point, when the LNK file was created, I had VMWare installed on my system.

As a side note, I have to say that this is the first time I've seen an organization publicizing threat intel that incorporates metadata from artifacts sent to the victim.  I'm sure this may have been done before, and honestly, I can't see everything...but I did find it extremely interesting that the authors would not only parse the LNK file metadata, but tie it back to a previous (2014) campaign.  That is very cool!

In the above metadata, we also see that the NetBIOS name of the system on which the LNK object was created is "john-win764".  Something not visible in the metadata but easily found via strings is the SID, S-1-5-21-3345294922-2424827061-887656146-1000.
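Translating "easily found via strings" into something repeatable, here's a quick sketch that scans the raw LNK bytes for SID-shaped strings in both ASCII and UTF-16LE (the file name is hypothetical):

import re

data = open("sample.lnk", "rb").read()

sid_ascii = re.compile(rb"S-1-5-21(?:-\d+){3,}")
sid_utf16 = re.compile(
    rb"S\x00-\x001\x00-\x005\x00-\x002\x001\x00(?:-\x00(?:\d\x00)+){3,}")

for m in sid_ascii.findall(data):
    print(m.decode())
for m in sid_utf16.findall(data):
    print(m.decode("utf-16-le"))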

This also gives us some very interesting elements that we can use to put together a Yara rule and submit as a VT retrohunt, and determine if there are other similar LNK files that originated from the same system.  From there, hopefully we can tie them to specific campaigns.

Okay, so what does all this get us?  Well, as an incident responder in the private sector, attribution is a distraction.  Yes, there are folks who ask about it, but honestly, when you're having to understand a breach so that you can brief your board, your shareholders, and your clients as to the impact, the "who" isn't as important as the "what", specifically, "what is the risk/impact?"  However, if you're in the intel side of things, the above elements can assist you with attribution, particularly when it's been developed further through not only your own stores, but also via available resources such as VirusTotal.

Ransomware
On the ransomware front, there's more good news!! 

Not only have recently-observed Cerber variants been seen stealing credentials and Bitcoin wallets, but Spora is reportedly now able to also steal credentials, with the added whammy of logging key strokes!  The article also goes on to state that the ransomware can also access browser history.

Over the past 18 months, the ransomware cases that I've been involved with have changed directions markedly.  Initially, I thought folks wanted to know the infection vector so that they could take action...with no engagement beyond the report (such is the life of DFIR), it's impossible to tell how the information was used.  However, something that started happening quite a bit was that questions regarding access to sensitive (PHI, PII, PCI) data were being asked.  Honestly, my first thought...and likely the thought of any number of analysts...was, "...it's ransomware...".  But then I started to really think about the question, and I quickly realized that we didn't have the instrumentation and visibility to answer that question.  Only with some recent cases did clients have Process Tracking enabled in the Windows Event Log...while capture of the full command line wasn't enabled, we did at least get some process names that corresponded closely to what had been seen via testing.

So, in short, without instrumentation and visibility, the answer to the question, "....was sensitive data accessed and/or exfiltrated?" is "we don't know."

However, one thing is clear...there are folks out there who are exploring ways to extend and evolve the ransomware business model.  Over the past two years we've seen evolutions in ransomware itself, such as this blog post from Kevin Strickland of SecureWorks.  The business model of ransomware has also evolved, with players producing ransomware-as-a-service.  In short, this is going to continue to evolve and become an even greater threat to organizations.

Saturday, September 02, 2017

Updates

Office Maldocs, SANS Macros
HelpNetSecurity had a fascinating blog post recently on a change in tactics that they'd observed (actually, it originated from a SANS handler diary post), in that an adversary was using a feature built in to MS Word documents to infect systems, rather than embedding malicious macros in the documents.  The "feature" is one in which links embedded in the document are updated when the document is opened. In the case of the observed activity, the link update downloaded an RTF document, and things just sort of took off from there.

I've checked my personal system (Office 2010) as well as my corp system (Office 2016), and in both cases, this feature is enabled by default.

This is a great example of an evolution of behavior, and illustrates that "arms race" that is going on every day in the DFIR community.  We can't detect all possible means of compromise...quite frankly, I don't believe that there's a list out there that we can use as a basis, even if we could.  So, the blue team perspective is to instrument in a way that makes sense so that we can detect these things, and then respond as thoroughly as possible.

WMI Persistence
TrendMicro recently published a blog post that went into some detail discussing WMI persistence observed with respect to cryptocurrency miner infections.  While such infections aren't necessarily damaging to an organization (I've observed several that went undetected for months...), in the sense that they don't deprive or restrict the organization's ability to access its own assets and information, they are the result of someone breaching the perimeter and obtaining access to a system and its resources.

Matt Graeber tweeted that on Windows 10, the creation of the WMI persistence mechanism appears in the Windows Event Logs.  While I understand that organizations cannot completely ignore their investment in systems and infrastructure, there needs to be some means by which older OSs are rolled out of inventory as the manufacturer ends support for them.  I have seen, or known that others have seen, active Windows XP and 2003 systems as recently as August, 2017; again, I completely understand that organizations have invested a great deal of money, time, and other resources into maintaining the infrastructure that they'd developed (or legacy infrastructures), but from an information security perspective, there needs to be an eye toward (and an investment in) updating systems that have reached end-of-life.

I'd had a blog post published on my previous employer's corporate site last year; we'd discovered a similar persistence mechanism as a result of creating a mini-timeline to analyze one of several systems infected with Samas ransomware.  In this particular case, prior to the system being compromised and used as a jump host to map the network and deploy the ransomware, it had been compromised via the same vulnerability and a cryptocoin miner had been installed.  A WMI persistence mechanism was created at about the same time, and another artifact on the system (i.e., the LastWrite time on the Win32_ClockProvider Registry key had been modified...) pointed us in that direction.
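As an aside, you don't need a timeline (or Windows 10) to look for this sort of persistence on a live system; enumerating the three components of a WMI event subscription is straightforward.  A minimal sketch:

    # list the filters, consumers, and bindings that make up WMI persistence
    Get-WmiObject -Namespace root\subscription -Class __EventFilter
    Get-WmiObject -Namespace root\subscription -Class __EventConsumer
    Get-WmiObject -Namespace root\subscription -Class __FilterToConsumerBinding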

InfoSec Program Maturity
Going back just a bit to the topic of the maturity of IT processes and, by extension, infosec programs, with respect to ransomware...one of the things I've seen a lot of over the past year to 18 months, beyond the surge in ransomware cases that started in Feb, 2016, is the questions being asked by clients who've been hit with ransomware.  These have actually been really good questions, such as, "...was sensitive data exposed or exfiltrated?"  In most ransomware cases, the immediate urge was to respond, "...no, it was ransomware...", but pausing for a bit, the real answer was, "...we don't know."  Why didn't we know?  Because the systems weren't instrumented, and we didn't have the necessary visibility to be able to answer the question.  Not just definitively...at all.

More recently, with the NotPetya issues, we saw cases where the client had Process Tracking enabled in the Windows Event Log, so that the Security Event Log was populated with pertinent records, albeit without the full command lines.  As such, we could see the sequence of commands associated with NotPetya, and we could say with confidence that no additional commands had been run; however, without the full command lines, we couldn't state definitively that nothing else untoward had also been done.

So, some things to consider when thinking about or discussing the maturity of your IT and infosec programs include asking yourself, "...what are the questions we would have in the case of this type of incident?", and then, "...do we have the necessary instrumentation and visibility to answer those questions?"  Anyone who has sensitive data (PHI, PII, PCI, etc...) is going to have the question of "...was sensitive data exposed?", so the question would be, how would you determine that?  Were you tracking full process command lines to determine if sensitive data was marshaled and prepared for exfil?

Another aspect of this to consider is, if this information is being tracked because you do, in fact, have the necessary instrumentation, what's your aperture?  Are you covering just the domain controllers, or have you included other systems, including workstations?  Then, depending on what you're collecting, how quickly can you answer the questions?  Is it something you can do easily, because you've practiced and tweaked the process, or is it something you haven't even tried yet?

Something that's demonstrated (to me) on a daily basis is how mature the bad guy's process is, and I'm not just referring to targeted nation-state threat actors.  I've seen ransomware engagements where the bad guy got into an RDP server, and within 10 minutes escalated privileges (his exploit included the CVE number in the file name), deployed ransomware, and got out.  There are plenty of blog posts that talk about how targeted threat actors have been observed reacting to stimulus (i.e., attempts at containment, indications of being detected, etc.), and returning to infrastructures following eradication and remediation.

WEVTX
The folks at JPCERT recently (June) published their research on using Windows Event Logs to track lateral movement within an infrastructure.  This is really good stuff, but is dependent upon system owners properly configuring systems in order to actually generate the log records they refer to in the report (we just talked about infosec programs and visibility above...).

This is also an inherent issue with SIEMs...no amount of technology will be useful if you're not populating it with the appropriate information.

New RegRipper Plugin
James shared a link to a one-line PowerShell command designed to detect the presence of the CIA's AngelFire infection.  After reading this, it took me about 15 min to write a RegRipper plugin for it and upload it to the Github repository.

Tuesday, August 22, 2017

Beyond Getting Started

I blogged about getting started in the industry back in April (as well as here and here), and after having recently addressed the question again on an online forum, I thought I'd take things a step further.  Everyone has their own opinion as to the best way to 'get started' in the industry, and if you look wide enough and far enough, you'll start to see how those who post well-thought-out articles have some elements in common.

In the beginning...
We all start learning through imitation and repetition, because that's how we are taught.  Here's the process, follow the process.  This is true in civilian life, and it's even more true in the military.  You're given some information as to the "why", and then you're given the "how".  You do the "how", and you keep doing the "how" until you're getting the "how" right.  Once you've gotten along for a bit with the "how", you start going back to the "why", and sometimes you find out that based on the "why", the "how" that you were taught is pretty darned good.   Based on a detailed understanding of the "why", the "how" was painstakingly developed over time, and it's just about the best means for addressing the "why".

In other cases, some will start to explore doing the "how" better, or different, questioning the "why".  What are the base assumptions of the "why", and have they changed?  How has the "why" changed since it was first developed, and does that affect the "how"?

This is where critical thinking comes into play.  Why am I using this tool or following this process?  What are my base assumptions?  What are my goals, and how does the tool or process help me achieve those goals?  The worst thing you could ever do is justify following a process with the phrase, "...because this is how we've always done it."  That statement clearly shows that neither the "why" nor the "how" is understood, and you're just going through the motions.

Years ago, when I had the honor and the pleasure of working with Don Weber, he would regularly ask me "why"...why were we doing something and why were we doing it this way?  This got me to consider a lot about the decisions I was making and the actions I was taking as a team leader or lead responder, and I often found that my decisions were based not just on the technical aspects of what we were doing, but also the business aspects and the impact to the client.  I did not take offense at Don's questions, and actually appreciated them.

Learn to program
Lots of folks say it's important to learn a programming language, and some even go so far as to specify the particular language.  Thirty-five years ago, I started learning BASIC, programming on an Apple IIe.  Later, it was Pascal, then MATLAB and Java, and then Perl.  Now it seems that Python is the "de facto standard" for DFIR work...or is it?  Not long before NotPetya rocked the world, the folks at RawSec posted an article regarding carving EVTX records, and released a tool written in Go.  If you're working on Windows systems or in a Windows environment, PowerShell might be your programming language of choice...it all depends on what you want to do.

There is a great deal of diversity of opinion on this topic, and I'd suggest that the programming language you choose should be based on your needs.  The main point is that learning to program helps you see big problems as a series of smaller problems, some of which must be performed in a serial fashion.  What we learn from programming is how to break bigger problems into smaller, logical steps.
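As a trivial illustration of that decomposition, take a question like, "...what ran on this system?"  Broken into smaller steps...collect, extract, reduce...a sketch might look like the following (assuming 4688 records are available, and that the default field ordering applies, with the new process name at index 5):

    # step 1: collect process creation events from the Security Event Log
    $events = Get-WinEvent -FilterHashtable @{ LogName = "Security"; Id = 4688 }

    # step 2: extract the new process name from each record
    $procs = $events | ForEach-Object { $_.Properties[5].Value }

    # step 3: reduce to a frequency count, rarest first
    $procs | Group-Object | Sort-Object Count | Select-Object Count, Name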

Engage in the community
Within the DFIR "community", there's too much "liking" and retweeting, and not enough doing, asking of questions, and actively engaging with others.  Not long ago, James Habben posted an excellent article on his blog about "being present", and he made a lot of important points that we can all learn from.  Further, he put a name to something that I've been aware of for some time: when presenting at a conference, there's often that one person who completely forgets that they're in a room full of other people, and hijacks and dominates the presenter's time.  There are also those who attend the presentation (or training session) but spend the majority of their time engaged in something else entirely.

Rafal Los recently posted a fascinating article on the SecurityWeek web site.  I found it well-considered, insightful, and extremely relevant.  It's also something I can relate to...like others, I get connection requests on LinkedIn from folks who've done nothing more than click a button.  I also find that after having accepted most connection requests, I never hear from the requester again.  I find that if I write a blog post (like this one) and share the link on Twitter and LinkedIn, I'll get "likes" and retweets, but not much in the way of comments.  If I ask someone what they "liked" about the article...and I have done this...more often than not the response is that they didn't actually read it; they wanted to share it with their community.  Given that, there's effectively no difference between having written and published the article, and not having done so.

Engaging in the community is not only a great way to learn, but also a great way to extend the community itself.  A friend recently asked me which sandbox I use for malware analysis, and why.  For me to develop a response beyond just, "I don't", I really had to think about the reasons why I don't use a sandbox.   I learned a little something from the engagement, just as I hope my friend did, as well.

An extension of engaging in the community is to write your own stuff.  Share your thoughts.  Instead of clicking "like" on a link to a blog post, add a comment to the post, or ask a question in the comments.  Instead of just clicking "like" or retweeting, share your reasons for doing so.  If it takes more than 140 characters to do so, write a blog post or comment, and share *that*.

I guess the overall point is this...if you're going to ask the question, "how do I get started in the DFIR industry?", the question itself presupposes some sort of action.  If you're just going to follow others, "like" and retweet all the things, and not actually read, engage, and think critically, then you're not really going to 'get started'.

Friday, August 18, 2017

Updates, New Stuff

Specificity
The folks at Talos recently posted an interesting article, "On Conveying Doubt".  While some have said that this article discusses "conveying doubt" (which it does), it's really about specificity of language.  Too often in our industry, while there is clarity of thought, there's a lack of specificity in the language we use to convey those thoughts, and this applies to all manner of communication...not only reports, but presentations and blog posts as well.  After all, it doesn't matter how good you or your analysis may be if you cannot communicate your methodology and findings.

Ransomware
Ransomware is a pretty significant issue in the IT community, particularly over the past 8 months or so.  My first real encounter with ransomware, from a DFIR perspective, started last year with a couple of Samas ransomware cases that came in; outside of the casework itself, this resulted in two corporate blog posts, one of which was written by a colleague/friend and ended up being one of the most quoted and referenced posts we published.  Interestingly enough, a lot of aspects of ransomware, in general, have continued to evolve since then, at an alarming rate.

Vitali Kremez recently posted an interesting analysis of the GlobeImposter ".726" ransomware.  From an IR perspective, where one has to work directly with clients, output from IDAPro and OllyDbg isn't entirely useful in most cases.

However, in this case, there are some interesting aspects of the ransomware that Vitali shared, specifically in section VI.(b).; that is, the command line that the ransomware executes.

Before I go any further, refer back to Kevin's blog post (linked above) regarding the evolution of the Samas ransomware.  At first, the ransomware included a copy of a secure deletion tool in one of its resource sections, but later versions opted to reduce the overall size of the executable.  Apparently, the GlobeImposter ransomware does something similar, relying on tools native to the operating system to run commands that 'clean up' behind it.  From the embedded batch file, we can see that it deletes VSCs, deletes the user's Registry keys related to lateral movement via RDP, and then enumerates and clears *all* Windows Event Logs.  The batch file also includes the use of "taskkill" to stop a number of processes, which is interesting, as several of them (e.g., Outlook, Excel, Word) would immediately alert the user to an issue.
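Commands like these are also exactly the sort of thing you can hunt for, if you've instrumented your systems the way we discussed earlier.  A sketch, assuming 4688 records with full command lines are available:

    # hunt process creation events for ransomware 'clean up' command lines
    $regex = "vssadmin.*delete.*shadows|wevtutil\s+cl\s|taskkill"
    Get-WinEvent -FilterHashtable @{ LogName = "Security"; Id = 4688 } |
        Where-Object { $_.Message -match $regex } |
        Select-Object TimeCreated, Message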

FortiNet recently published an analysis of a similar variant.

Online research (including following #globeimposter on Twitter) indicates that the ransomware is JavaScript-based, and that the initial infection vector (IIV) is spam email (i.e., "malspam").  If that's the case (and I'm not suggesting that it isn't), why does the embedded command line include deleting Registry keys and files associated with lateral movement via RDP?
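For reference, the RDP artifacts being wiped are easy enough to see on a live system; a quick sketch of where to look (for the current user):

    # MRU of systems the user has connected to via the RDP client
    Get-ItemProperty "HKCU:\Software\Microsoft\Terminal Server Client\Default"

    # per-server entries, including UsernameHint values
    Get-ChildItem "HKCU:\Software\Microsoft\Terminal Server Client\Servers"

    # the default RDP connection file (hidden by default)
    Get-Item "$env:USERPROFILE\Documents\Default.rdp" -Force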

Marco Ramilli took a look at some RaaS ransomware that was apparently distributed as an email attachment.  He refers to it as "TOPransomware", due to a misspelling in the ransom note instructions that tell the victim to download a TOP browser, as opposed to a TOR browser.  This is interesting, as some of the Samas variants I've seen this year include a clear-text misspelling in the HTML ransom note page, setting the font color to "DrakRed".

I also ran across this analysis of the Shade ransomware, and aside from the analysis itself, I thought that a couple of comments in the post were very interesting.  First, "...has been a prominent strand from around 2014."  Okay, this one is new to me, but that doesn't really say much.  After all, as a consultant, my "keyhole" view is based largely on the clients who call us for assistance.  I don't claim to have seen everything and know everything...quite the opposite, in fact.  But this does illustrate the differences in perspective.

Second, "...spread like many other ransomware through email attachments..."; this is true, many other ransomware variants are spread as email attachments.  The TOPransomware mentioned previously was apparently discovered as a .vbs script attached to an email.  However, it is important to note that NOT ALL ransomware gets in via email.  This is an important distinction, as all of the protection mechanisms that you'd employ against email-borne ransomware attacks are completely useless against variants such as Samas and LeChiffre, which do not propagate via email.  My point is, if you purchase a solution that only protects you from email-borne attacks, you're still potentially at risk for other ransomware attacks.  Further, I've also seen where solutions meant to protect against email-borne ransomware attacks do not work when a user uses Chrome to access their AOL email, and the system/infrastructure gets infected that way.

On August 7, authorities in Ukraine arrested a man for distributing the NotPetya malware (not ransomware, I know...) in order to assist tax evaders.  According to the article, he isn't thought to be the author of the destructive malware, but was instead found to be distributing a copy of it via his social media account so that companies could presumably infect themselves and avoid paying taxes.

Recently, this Mamba ransomware analysis was posted; more than anything else, this really highlights one of the visible gaps in this sort of analysis.  As the authors found the ransomware through online searches (i.e., OSINT), there's no information as to how this ransomware is distributed.  Is it as an email attachment?  What sort?  Or, is it deployed via more manual means, as has been observed with the Samas and LeChiffre ransomware?

The Grugq recently posted thoughts as to how ransomware has "changed the rules".  I started in the industry, specifically in the private sector, 20 yrs ago this month...and I have to say, many/most of the recommendations as to how to protect yourself against and recover from ransomware were recommendations from that time, as well.  What's old is new again, eh?

Cryptocurrency Miners
Cylance recently posted regarding cryptocurrency mining malware.  Figure 7b of the post provides some very useful information in the way of a high-fidelity indicator..."stratum+tcp".  If you're monitoring process creation on endpoints and threat hunting, looking for this string will provide an indicator on which you can pivot.  Clearly, you'll want to determine how the mining software got there, and performing that root cause analysis (RCA) will lead you to the attacker's access method.
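If you've got Sysmon deployed (or 4688 records with full command lines), a sketch of such a hunt might look like the following:

    # hunt Sysmon process creation events for the mining pool protocol scheme
    Get-WinEvent -FilterHashtable @{
        LogName = "Microsoft-Windows-Sysmon/Operational"
        Id      = 1
    } | Where-Object { $_.Message -match "stratum\+tcp" } |
        Select-Object TimeCreated, Message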

Web Shells
The PAN guys had a fascinating write-up on the TwoFace web shell, which includes threat actor TTPs.  In the work I've done, I've most often found web shells on perimeter systems, and that initial access was exploited to move laterally to other systems within the network infrastructure.  I did, however, once see a web shell placed on an internal web server; that web shell was then used to move laterally to an MS SQL server.
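When it comes to finding web shells in the first place, one approach is to simply stack recently modified script files in the web root, and then look for the usual constructs.  A rough sketch (the web root path is just an example, and the patterns are nowhere near comprehensive):

    # stack recently modified script files in the web root (path is an example)
    $webroot = "C:\inetpub\wwwroot"
    Get-ChildItem $webroot -Recurse -Include *.aspx, *.asp, *.php |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 20 FullName, LastWriteTime

    # look for common web shell constructs
    Get-ChildItem $webroot -Recurse -Include *.aspx, *.asp, *.php |
        Select-String -Pattern "eval\s*\(\s*Request", "ProcessStartInfo" -List |
        Select-Object Path, Pattern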

Speaking of web shells, there's a fascinating little PHP web shell right here that you might want to check out.

Lifer
Retired cop Paul Tew recently released a tool he wrote called "lifer", which parses LNK files.  One of the things that makes tools like this really useful is the variety of use cases; Paul comes from an LE background, so he very likely has a completely different usage for such tools, but for me, I've used my own version of such tools to parse LNK files reportedly sent to victims of spam campaigns.
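If all you need is quick triage, Windows will hand you the basic fields from an LNK file via the WScript.Shell COM object...keep in mind that this exposes only a handful of fields (a full parser like lifer gets you the embedded volume serial number, MAC address, etc.), and that you should only be doing this with a copy of the file in an isolated analysis VM.  A sketch, with an example path:

    # pull the basic fields from an LNK file via COM (path is an example)
    $shell = New-Object -ComObject WScript.Shell
    $lnk   = $shell.CreateShortcut("C:\cases\samples\invoice.lnk")
    $lnk | Select-Object TargetPath, Arguments, WorkingDirectory, IconLocation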

Want to generate your own malicious LNK files?  Give LNKup a try...

Supply Chain Compromises
Supply chain compromises are those in which a supplier or vendor is compromised in order to reach a target.  We've seen compromises such as this for some time, so it's nothing new.  Perhaps the most public example of a supply chain compromise was the Target breach.

More recently, we've seen the NotPetya malware, often incorrectly described as "ransomware".  Okay, my last reference to ransomware (for now...honest) comes from iTWire, in which Maersk was able to estimate the cost of a "ransomware" attack.  I didn't include this article in the Ransomware section above because if you read it, you'll see that it refers to NotPetya.

Here is an ArsTechnica article that discusses another supply chain compromise, where a backdoor was added to a legitimate server-/network-management product.  What's interesting about the article is that it states that covert data collection could have occurred, which I'm sure is true.  The questions are, did it, and how would you detect something like this in your infrastructure?

Carving for EVTX
Speaking of NotPetya, here's an interesting article about carving for EVTX records that came out just days before NotPetya made its rounds.  If you remember, NotPetya included a command line that cleared several Windows Event Logs; as widely reported, the malware ran "wevtutil cl" against the Setup, System, Security, and Application logs, and deleted the USN change journal via "fsutil usn deletejournal".

The folks who wrote the post also included a link to their GitHub site for the carver they developed, which includes not only the source code, but pre-compiled binaries (they're written in Go).  The site includes a carver (evtxdump) as well as a tool to monitor Windows Event Logs (evtxmon).