Monday, September 09, 2019

The Ransomware Economy

There's no doubt about it...cybercrime, and especially ransomware, is an entire economy in and of itself.

Don't believe me?

Read through this ProPublica article, not just once, but a couple of times.  And take notes.  Then go back and read the notes.  Here's what I got from the article:
  1. Organizations are looking to insurance policies to defray the costs of incidents.  Rather than investing in prevention, detection, and response, they're accepting (to some degree) that these incidents are going to happen, and seeking to establish a means to minimize their financial risk.  Hence, insurance policies.
  2. A ransomware incident occurs, and the policy kicks in.  Depending upon how the policy was set up, and what it covers, the deductible may be much less than the ransom.  Financial risk minimized.
  3. Insurance providers are more interested in getting ransoms paid quickly; getting the encryption keys and recovering files minimizes down time, and therefore any additional costs incurred as a result of services not being available.  So, insurance providers want the ransom paid, in order to minimize their financial exposure.
  4. There's also an entire economy that's popped up around ransom payment brokers, organizations that act as intermediaries between victim organizations, insurance providers, and the bad guys.
But is that the end?  Is this just about encrypting data and getting paid to unlock it?  I wouldn't think so, and here's why.  One of the things I've always been curious about is whether any data exfiltration goes on prior to data encryption.  Are bad guys taking anything before encrypting files?  In most cases, it's hard to tell...I'm aware of ransomware cases where the bad guys were actually in the environment for weeks or even months before encrypting files, and the artifacts of data staging and exfiltration may be fleeting at best, and may be gone entirely by the time incident response begins.

Not long ago, a fellow responder shared that many of the ransomware cases he works include an element of data exfiltration.  A recent 60 Minutes segment on ransomware includes a similar statement; if you watch until 9:50 in the segment, you'll see mention of the bad guys further extorting an organization by threatening to leak its "internal data".

Let's look at some of the reporting on ransomware, such as this article from The Conversation. At one point in the article, we see the statement:

Ransomware usually spreads via phishing emails or links...

Perhaps "usually", yes, but not always.  The 60Minutes segment mentioned the Samsam ransomware; during the first half of 2016, these guys were seen using the publicly available JexBoss exploit to gain access to organizations through JBoss CMS servers.  At that time, the average time between initial access to the organization and deploying the ransomware was 4 months. In 2017, in some cases, they switched to Terminal Services servers, gaining access via easily-guessed passwords.  Yes, some ransomware (some Ryuk incidents, for example) incidents begin with a phishing email, and then branch off into deploying remote access tools, internal reconnaissance, possibly privilege escalation, networking mapping, and finally, deploying the ransomware.

Another quote from the article:

Offenders will do their homework before launching an attack, in order to create the most severe disruption they possibly can.

Yes, they will.  But what does this mean?  It means a couple of things; first, they decide who to target, and when.  Employees within companies have targets against which they're judged; sales reps, for example, usually hit crunch time at the end of a quarter.  So, what the bad guys will do is send a sales rep something that looks legit, something that they need to open.  Yes, they're targeting individuals.

What does this look like, you ask?  While not related to ransomware, the Mia Ash story is a good illustration of what targeting looks like.  Going after sales reps, the finance department, or legal counsel...all of these are targets within an organization, and very often the "lure" looks attractive enough to defeat phishing awareness training.  However, this is only the beginning.  In the Mia Ash story, the adversary developed a relationship with their targets, to the point where, when it came time to send a weaponized document for the target to open, the target had no doubt that they were dealing with "Mia".

Something that isn't stated in the media is that, for some ransomware cases, once an adversary gains initial access to an infrastructure, there are a number of actions that must take place in order for them to have such an impact as to make paying the ransom the obvious choice going forward.  They need to observe and orient to where they are, collect information about the infrastructure, make decisions (that's the easy part, they're often quite practiced at this), and then act.  This is Col Boyd's OODA loop.  In some cases, this can take weeks, and in others, months.  Unfortunately, one of the things missing from public reporting of ransomware incidents, in addition to the observed initial access method, is the time that the adversary is on target before deploying ransomware.  It's not an easy task to go into a completely new infrastructure and find those files and systems that, if unavailable, would bring the organization to a halt.

With visibility, these actions can be detected, and responded to in a timely manner.  When I say, "responded to", I mean determining the initial infection vector and following a containment and eradication plan early in the adversary's process.  Let's say that you detect a new account being created on a system, because you have the visibility to do so...which user account was used to create the new one?  How did that user account gain access to the system on which the command was run?  Follow the tracks back to the starting point, and determine how the adversary got on the system, and then search your infrastructure for other, similar artifacts. 
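To make that pivot concrete, here's a minimal sketch, assuming the Security Event Log has been exported from the system in question and the python-evtx library is installed; the file path is hypothetical:

from Evtx.Evtx import Evtx  # pip install python-evtx

# Event ID 4720 is "A user account was created".  The rendered XML includes
# SubjectUserName (the account that created the new one) and TargetUserName
# (the new account)...exactly the pivot points described above.
with Evtx(r"F:\case\files\Security.evtx") as log:
    for record in log.records():
        xml = record.xml()
        if ">4720</EventID>" in xml:
            print(xml)

From there, SubjectUserName gives you the account to trace backwards...which logon session did it belong to, and how did it get to that system?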

It all starts with visibility.  Don't address ransomware by trying to figure out if you should restore systems from backup or pay the ransom; instead, catch the adversary early in their process and stop them before they encrypt their first file.

A Brief History of DFIR Time, pt I

Whether we like it or not, we're all time travelers. We're all moving through time, caught in the flow. In the western world, we're moving left-to-right, going along with the flow of time, from point A to point B. 

Sometimes it's interesting to look back at where we've been, what we've been witness to, and to reflect on and appreciate it.  Here's an abridged version of my take...

As a kid, my parents purchased a Timex-Sinclair 1000 computer.  I started out by following instructions for writing programs and saving them to a cassette tape...or trying to, as the case may be.  This wasn't the most reliable means (although it was the only one) for saving programs, and sometimes things would get corrupted, and I'd have to start all over.  As I learned a little bit of coding, I'd try different things...I'd start with the basic (no pun intended) recipe, and then make small modifications to see what happened.

In the early '80s, I was programming BASIC on the Apple IIe during a summer course.  Later, my parents purchased an Epson QX-10, which my father used for word processing.  During my senior year in high school, I took AP Computer Science, which involved programming PASCAL on the TRS-80 systems at the school.  My folks found a copy of Turbo PASCAL, which meant I could easily compile my programs at home in minutes, rather than trying to schedule time to get access to one of the TRS-80 systems at school, and get in before lunch, because compilation took over half an hour for some programs.

When I went to college (circa '85) we had a BASIC programming course, and we were still using the TRS-80 systems.  There were some mainframe systems in the physics building, and while I didn't get a real introduction to networking, some of us did have fun sending messages to each other using the "wall" command.

After I got commissioned and went on active duty, I really didn't have a great deal of contact with computers.  In the Marine Corps at that time, Communications was a separate MOS from Data Processing, and as such, officers (and enlisted) for the MOSs attended separate schools.  For officers, both school houses were located on Quantico, at the time.  After training, I found that there was a great deal of cross-training in the fleet; quite often, CommOs were sent to data processing courses by their units.  The Marine Corps later combined the MOSs, along with the school houses and the curricula.

In the mid-'90s, I had the opportunity to attend graduate school, and I really got much more involved with computers.  I showed up with a 486DX desktop system that I used at home, and one of the first things I did was add a hard drive.  At the time, that meant putting it in the right location on the ribbon cable, and setting the correct jumpers on the drive chassis.  I later saved up and purchased additional RAM, going from 4MB to 16MB.  Yes, with an "M".  I also began going beyond Windows for Workgroups 3.11, and expanding into OS/2 2.1, and then later, OS/2 3.0 Warp.  At the time, I was using a SLIP/PPP script to dial into a local ISP, and then connecting remotely to the school systems.

Interestingly enough, I found someone in my local community who was running a BBS based on Amiga systems, and got a look at his setup.  That was a big deal at the time, because the town I lived in was close to a LATA border, meaning that while I could dial a number that was physically located about 10 miles south of me for no extra charge, the closest AOL POP was two miles north, and therefore, a long distance charge.  Eventually an airline pilot who lived in the local community set up an ISP, and I used that to access the Internet.

At school, I was working on SparcStations, using the Netscape browser.  I was learning about UseNet, SunOS, *nix-based systems, etc., none of which had anything to do with the curriculum.  I was the student rep to the sysadmin council when SATAN was released.  During the course of my "studies", I learned a little bit of C and C++ coding, a lot of MatLab, and a good bit of Java, at a time before Java was 1.0 GA status.  I played around with a bunch of different things with Java...I wrote programs to query fingerd, wrote an email spoofing program, and I wrote some code that connected the chargen port to the echo port...that was fun!

When I first started at graduate school, I didn't know it at the time, but I spent about 4 months walking by Gary Kildall's office every day on my way out of the building.  His office was next to one of the main doors that led out to the quad, where I'd go sit to eat lunch. I never met Gary, nor took one of his courses, and again, it wasn't until much later that I found out who he was, and the role he played (or depending on your perspective, didn't play...) in the history of computing.  In one of my courses, I learned about the Hamming distance, and later took a seminar from Dr. Hamming himself.

As part of my master's thesis, I set up a lab; it consisted of two Cisco 2514 routers that I cross-connected, and from which I ran two small networks.  One was 10BaseT, the other 10Base2, and both had one Windows NT 3.51 server and three Windows 95 workstations.  The entire set up was connected to the campus backbone via a 10Base5 "vampire tap".  To collect data for my thesis, I wrote an SNMP polling application in Java, and processed the data using various statistical techniques in MatLab.

While I was in graduate school, one of my favorite courses was a new class in neural networks.  Part of the reason I liked it was due to how it was structured; the first half of the course was some instruction and small projects to get our feet wet, but the projects were small enough to allow us to stretch a bit, as well.  In many of the courses available at the time, the labs were such that it took most, if not all of the week to get them done, so there was very little learning beyond just finishing the minimum requirements for the lab.  In this course (and a few others), a different approach was taken, one that allowed the students to engage, experiment, and learn.  The second half of the course was a project, which was really cool to work on.  As it turned out, several of the students used that course as the basis for their master's thesis...one wrote a program that could discern 'dirty' images of six consecutive Cyrillic characters (something you'd see in a satellite photo of Red Square, for example).  Another student created a neural network to assist with sonar identification.

So, how does all this matter?  Well, 24+ years later, I can discern what's behind the terms "ML" and "AI" that we see with respect to cyber security products.  ;-)

My time in grad school was also when I started brushing up against "information security" in the world of computers.  During a C programming course, I finished my assigned labs and wanted to learn a bit more, so I downloaded a file called 'crack.c' to see what it did.  All I ever did was open it in an editor, but the senior sysadmin for the department got upset.  She even told me that I had "violated security policies".  When I asked her to see the policies, knowing that I had never signed such a policy, I learned that there really was no written "policy".  That was to change more than a year later when a new Admiral took over the school, but at the time, there was no written security policy that any students read or signed.

After I graduated, I spent 8 months processing out of the military, and during that time was assigned to the Marine detachment at the Defense Language Institute (DLI).  While there, one of the things I did was get the detachment's computer systems connected to the DLI campus area network (CAN), which was token ring.  Also during that time, the Commandant of the Marine Corps (Gen. Krulak) had stated that Marines were authorized to play "Marine DOOM"; the setup at the detachment was six Gateway systems connected via 10Base2, running IPX.  I was able to use what I had learned just down the street (literally) to help get the "network" up and running.

Wednesday, August 28, 2019

DFIR Open Mic Night

So, here's a thought...

At a well-attended DFIR conference, there should be a DFIR open mic comedy night.  In the evening, after the event is done for the day, use the venue for something a little light-hearted. For example, OSDFCon has had a mixer at the end of one of the days, where there've been finger foods and adult beverages, and there's a bar right there in the lobby.  Since the venue already has chairs and a mic, why not use them? 

So, Chatham House rules apply, as well as:

  • No sales pitches
  • No shaming anyone, any company or organization, product, etc.
  • Use no names, unless you're sending praise/shout-outs
  • Be cool - a little light profanity isn't an issue, but don't be vulgar or disgusting

I think a lot of folks would enjoy something like this...it's a great way to engage, and provide some folks who maybe didn't get their talk accepted a chance to get up on stage and see how they do with public speaking.

After all, there are a LOT of weird or funny things that go on during an IR engagement.  Some may not be funny at the time, but with a little embellishment ("don't let the facts get in the way of a good story...") some time later, they're freakin' hilarious!  So why not share these with everyone else?

For example, I did an IR years ago with a global organization, one that had multiple tools available.  We'd seen Mimikatz being run in the environment; in fact, the SOC (which was located in another country) had alerted the headquarters organization to this finding.  A manager in charge of one of the other tools was running searches for "mimikatz" across the infrastructure; unfortunately, one of the detections being used was, "any command line that includes "mimikatz"".  This was intended to catch things like "Invoke-Mimikatz", but it also caught the manager's own searches.  The first couple of times this happened, the SOC would send an alert, and the local SOC manager (a guy about 6' 7", 275 lbs, bodybuilder type) would go nuts, telling (well, not "telling", per se...) us that the bad guy was back, while the veins in his head and neck were popping out.

So, not funny at the time, but something we can laugh about now...

Thoughts?  Go?  No go?  Is this something you'd participate in, or just want to watch?  Or watch until maybe you got your nerve up (liquid courage) and then got up on stage? 

Friday, August 16, 2019

Program Execution...Or Not

Over the years, different means have been used to discuss the DFIR analysis process, and one of those has been artifact categories.  This is where categories are created and artifacts placed in the various columns, as they relate to those categories.  One such example is the SANS IR poster, which provides a great visual reminder for folks looking to employ this approach.  Honestly, it is a good way to approach analysis...even looking at a single system image, the number of available artifacts has grown over the years as Windows has progressed from XP to Win7, and through to Win10, and as such, it benefits a large portion of the community to have a repeatable approach to analysis.

However, when using approaches such as this, we need to keep in mind the context of the category titles.  Yes, the artifacts of "program execution" provide an indication of applications that were launched on a system, but what does that mean?

At this point, I can guess what you're thinking...wait, what?? And believe me, I get it.  "Program execution" means exactly that...that a program was executed.  End of story.  But wait...is that what it really means?

Consider this for a second...someone unfamiliar with a program or application might "open" it on their first try, to see what it does.  Command line tools, for example, often contain information about their usage, which we can see by either typing the name of the program at the command prompt (no arguments), or by adding "/?" or "-h" ("--help" for Linuxphiles).  This causes the program to run, but not in the sense that the functionality of the program is actually used.

As an example, I opened a command prompt, and changed directories to the C:\Windows\Prefetch folder.  Most analysts who've been in the industry for some time are familiar with application prefetch files, often referred to as simply "Prefetch". Specifically, these files are widely known as artifacts of "program execution".

I first typed "dir ftp*.pf" to see if there were any Prefetch files that appeared to point to the use of ftp.exe, and got the expected result: File Not Found.  Next, I typed "ftp /?" at the prompt, which displayed the usage syntax of the application.

I then retyped (actually, I hit the up arrow twice...) the 'dir' command, and this time, I found that there was a file named FTP.EXE-7BA637EA.pf, which was 2,685 bytes in size.

So, what happened?  I ran the program, but only to the point where I could read the usage syntax. I didn't actually use the program to transfer files or exfil data in any way.  However, the artifacts of program execution were populated.
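If you'd like to reproduce the experiment, a minimal sketch follows; it simply looks for the resulting Prefetch file and prints its size and last-modified time.  Note that reading the contents of C:\Windows\Prefetch typically requires administrator rights:

import datetime
import glob
import os

# After running "ftp /?" at a command prompt, a Prefetch file for ftp.exe
# should appear, even though no files were ever transferred.
for pf in glob.glob(r"C:\Windows\Prefetch\FTP.EXE-*.pf"):
    st = os.stat(pf)
    print(pf, st.st_size, "bytes, last modified",
          datetime.datetime.fromtimestamp(st.st_mtime))

This only checks for the file's existence; parsing the .pf file itself for the run count and last run times is a separate exercise.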

Now, the same thing applies to GUI applications, maybe even more so.  You can launch a GUI application, look around at the interface, maybe click a few of the options to see what functionality is available, and then close the UI without ever having employed the functionality provided by the application.

Case in point...consider this analysis of the DefCon 2018 CTF file server image.  Other publicly available write-ups addressed the question of interest (which application was used to delete forensic artifacts?) with various findings.  One was based on the output of the itempos.pl RegRipper plugin; that's not an artifact normally associated with program execution, but rather an indication that the application was resident on the desktop.  The two other write-ups went with the UserAssist artifacts, widely associated with program execution; however, there was no verification that the application was actually used to, as stated in the CTF question, delete forensic artifacts.  As such, the GUI application could have been launched, closed, and then something else could have been used to take the specified actions. In fact, the actions in question were never verified.

As such, something to consider going forward is, when artifacts of program execution are found, what do they really mean?

Finally, a question...there is a way to make use of the FTP protocol on Windows workstations (XP, 7, 8, 10) that does not leave the 'normal' artifacts of program execution (i.e., a Prefetch file, a UserAssist entry), and that does not involve disabling any default functionality.  What is it, and how would you determine/verify it?

Addendum, 18 Aug: So far, there's only been one attempt to answer the final question.  I know that there's more out there...check the comments to see the answer, but there's at least one more, and maybe even more than one!

Chasing the DFIR Cure, pt II

Following my first post on this topic, an interesting comment was shared that I thought would really benefit the discussion, as well as benefit from a further look.

To paraphrase, the comment was along the lines of, "...how do you justify the additional cost of a second (or third) look when the results are coming out the same?"

In my experience, that hasn't been an issue.

First, when someone has decided that additional eyes on the data is a justified step to take, the value is already understood, and the cost-benefit analysis has already been done.  Take the pro bono case I mentioned in my previous post; in that case, the value was understood prior to the attorney contacting me, and the decision was made after contact and initial discussions/scoping to do the work pro bono.

Second, while I have been aware of and party to "second looks", in my experience, the results are rarely the same as the "first look". The comment above assumes that the results would be the same, but I simply haven't found that to be the case. 

I did some pro bono work (different case from the one previously mentioned) involving someone leaving a firm, and "salting the earth" on their way out.  What I mean by this is that someone submitted their letter of resignation, and after they left the building, it was found that their system was infected with ransomware.  In this case, the system they were assigned was "communal", in that there was a critical application installed on the system, with just one license, so the CEO needed to have access to it at all times.  My analysis was actually the third look, and I was able to demonstrate that the evening prior to the "incident", someone had logged in at 9pm and browsed the web for about 6 minutes.  During that time, the system became infected, and a persistence mechanism was established, which led to the ransomware launching when the user (not the CEO) logged in the next morning and wrote out their letter of resignation.  The case was thrown out, and a countersuit for libel went forward, based on my report.

This is just one example, but the findings were different enough from the law enforcement officer's report and another consultant's report that the outcome was significantly impacted, and the direction of the case altered. 

Tuesday, August 13, 2019

Chasing the DFIR Cure

I've wondered for some time now, how do "customers" (recipients of DFIR services) determine the quality and accuracy of the rendered product?  Throughout my time in the industry, I've known some customers to "seek a second opinion", but is this something that's really pervasive?

I was watching a new medical mystery show recently called "Chasing the Cure".  The show is all about folks with severe, debilitating illnesses that have gone un- or mis-diagnosed for extended periods of time. This subject is near and dear to me because about 25 years ago, I went through a similar situation, although the duration of my issue was a few months, rather than years.  In one of the show segments, a woman was suffering from a debilitating ailment that had gone misdiagnosed for several years, and she was able to receive a confident diagnosis on the show (i.e., PCOS).  What I found interesting was that the doctors who made the diagnosis (on the show) were looking at the results of tests that had been ordered by previous doctors, meaning that several medical professionals had the same data in front of them; in this case, the woman had been told by previous doctors that she did not have the condition for which she was finally diagnosed.  So, this was not just a matter of two (or more) professionals having the same data and arriving at different conclusions, as much as it was a professional in the field specifically discounting a diagnosis.  The result was that the woman and her family suffered for several more years, when she could have received treatment much earlier.  However, not being satisfied with the answer she was given led her to continue seeking a diagnosis.

That got me to thinking...how do those who contract for DFIR services know if the analysis and findings they're receiving are correct and accurate?  Sure, if the findings don't go their way, they can seek a second (or third) opinion, but how do they know that what they receive is correct?

Several years ago, I was contacted by an attorney who had a case that hinged on a person being in a specific location, based on computer evidence.  In short, the case had to do with someone claiming that they were at a convenience store, and this attorney's case would be held up if that person had been behind the computer keyboard.  The attorney had first 'contracted' with a part-time IT sysadmin who serviced their office to conduct analysis of the computer data, and the sysadmin had reportedly found evidence that the person had, in fact, been behind the keyboard.  The attorney asked me to confirm the finding, which I was able to do.

However, is the question, "...are these findings correct?", ever asked?  Does it matter?  I believe it does, and I also believe that there are a number of circumstances where it may behoove a customer/recipient to seek a second opinion:

PCI Investigations - the "window of compromise" is a variable in the "potentially how many credit card numbers were compromised" equation.  This then leads to corrective or punitive actions, such as fines, and as such, is significantly impacted by the findings from the investigation.  For example, the time stamp associated with the AppCompatCache data is the file's last modification time, not the time the file was executed; misinterpreting it as an execution time can extend the "window of compromise" from weeks to years, and severely impact the merchant, who receives a much greater fine.

Compliance - this goes back to things like PCI investigations, as during the course of analysis, the analyst may find that the merchant was not compliant with the PCI DSS (or standards set by another regulatory body) at the time of the breach.

Cyber Insurance - the results of an investigation can significantly impact the results of a claim; issues with data collection and interpretation may lead to findings that indicate considerable gaps in "due diligence", and claims may not be paid.

HR - findings as a result of a DFIR investigation in support of HR can significantly impact the employee, or the company.  Misinterpretations of the data may lead to an employee being unjustly accused or dismissed; I worked a pro bono case to this effect several years ago.

Ransomware - something we see reported quite often in the media is that "...there was no evidence of data exfiltration found...", and that's a good thing which fits the desired narrative.  But is it correct?  Were the DFIR analysts aware of the artifact locations within Windows systems that might provide a different view of that answer?  After all, the actors behind Samas and Ryuk ransomware deployments have been observed spending months (yes, I did spell that correctly...) within an infrastructure before deploying the ransomware, so...yeah...

I'm not suggesting that the industry is rampant with errors in data collection and interpretation, not at all.  There are a lot of great analysts out there doing a lot of great work, and providing accurate results and findings to their customers.  However, like any industry, these things do happen, and like other situations, when they happen they can have a serious impact. We also have to look at the fact that operating systems are getting more sophisticated all the time, and applications are becoming more numerous and more complex.  This is all to say that things are much more complex than they were 20 years ago, and with the number of people coming into the DFIR field, how do we keep everyone at a common level of knowledge?

Another aspect of the industry that I've seen change over time is the use of collection, parsing, and pre-processing frameworks.  When I started out in the industry, even if a DFIR analyst collected a dozen or more images, they analyzed those images themselves.  Over time, as there's been a move to cover and address the enterprise, there's been a subsequent increase in the amount of available data.  As such, in a move to establish a level of consistency, a lot of DFIR teams have developed means for the enterprise-level collection and pre-processing of data.  All of this can add an additional layer of abstraction between the data and the analyst.

I'm also fully aware that over time, we learn things.  I was talking to Brett Shavers recently, and he brought up the scenario of going back and looking at previous cases.  Like Brett, when I've done this, I've marveled at how far I've come since that case; what are some of the things I've learned since then that I could apply to the case if I were to address the issue today?

I would think, then, that without some compelling reason, most who purchase DFIR services accept the findings they receive as correct and accurate.  In this age of legal and regulatory requirements that both impact and depend on the results of DFIR analysis, the correct and accurate collection and interpretation of digital data is paramount, and there are a number of cases where the "customer" may benefit from a second, or even a third opinion.  After all, we do this with medical issues, don't we?

To that point, that 'compelling reason' would likely consist of findings that are markedly contrary and contradictory to the desired narrative.  There is likely a 'threshold' that some may accept; for example, consider the PCI example above...there are likely merchants who receive information about those findings and are able to absorb whatever judgement is levied by the bank or the PCI Council.  However, there are also those merchants for whom the judgement is what I've referred to as a "cratering fine"; that is, once the fine is levied, the business (a small mom-and-pop restaurant, for example) ceases to exist.  I've seen this happen.  In such cases, given what's at stake, it may behoove the merchant to seek a second opinion.

Thursday, July 04, 2019

IWS Review

A while back, I sent a signed copy of IWS to someone who was willing to write a review, and they've gone about trying to post their review with some difficulty, albeit not for a lack of trying on their part.  I greatly appreciate the effort, not just in reading and engaging with the book, and writing a review, but also trying to get it posted and shared!  So much so, in fact, that I offered to host the review here. 

While the book hasn't been out for a full year yet, I'm still just as nervous as I was the first day, regarding how the book will be viewed by the community.  This book is a pretty radical departure from my previous books, as well as from any other DFIR book I could find.  As such, with something this new (I had pretty much the same feelings regarding Windows Registry Forensics...), I greatly appreciate hearing how folks found the book, as well as if they were able to get anything from it...did it provide value?

Thanks, Dimitri, for taking the time to read the book, and for putting in the additional effort to write down and share your thoughts! 

So...a review, by Dimitri...

Investigating Windows Systems - Review

I've read several books by Harlan, and I've never been disappointed. I love his direct way of writing. IWS is thinner and smaller than his other books, but no less important; on the contrary.

Harlan writes that IWS is not for beginners.  I still see myself as a beginner, and I'd contradict Harlan here: IWS is an important book for any beginner as well.  Although some pieces in the book are not so easy, with some effort from the reader and a search on the Internet, everything becomes understandable.

The book is well organized. It teaches you from the beginning that a good analysis plan is important. It teaches you to distinguish between 'nice to know' and 'need to know'.

The book is divided into several cases (finding malware, user activity, web server compromise). Harlan explains how he would deal with these cases himself, and then teaches you how to reflect on your own work: what did you learn from the case, and how would you tackle it next time?

The book is not about the analysis of images themselves, nor about which tools you should use, but about how you should do the analysis, and what plan you make. He teaches you to distinguish between a targeted approach and an automated approach.

In the last part, Harlan teaches you how to set up a testing environment, and convinces you that testing changes to the file system yourself (deleting files, installing programs, and so on) is often more instructive than just asking for help on the net.

I really enjoyed the book.

Dimitri Deryckere
1st Inspector 
Computer Crime Unit – Pz Regio Tielt - Belgium

Tuesday, June 04, 2019

What's New

A New Blogger
I ran across a fascinating blog post during my Sunday morning reading, one that was interesting in a number of ways.  First off, this is apparently Charity's first blog post...so, congrats!  Thank you for stepping up and sharing your insight. It is refreshing and oh-so-important for those newer to the field to speak up and share their perspective, as those who've been around for a while may find this perspective valuable; I know I do.

I don't know Charity, we've never met.  I simply ran across a tweet announcing her first blog post, and thought, cool...I'll take a look.  I like to see what folks are bringing to the table, in particular due to the fact that when I was in college and 'coming up' in my career, there were no resources such as college programs for infosec or "the cybers".  It's been fascinating to watch the evolution of the field over time, and to see folks like Charity bring a new perspective to the field.

I took martial arts training for a bit while I was in junior high school, and then again during my first deployment to Okinawa.  In both instances, sparring was involved...when I was studying shorin ryu, a good bit of the sparring came during testing before the grandmaster.  One of the take-aways for me from that sparring was that while there are rules for such things, they only apply to those who follow them.  Admittedly, I did learn this truism while playing soccer and wrestling in high school, but I think it really became a bit more obvious while I was sparring.  For example, we didn't wear head gear during the sparring, because one of the rules was that the head was not a target.  I seem to remember that this was a test of your understanding and use of the technique, as well as self-control, under controlled circumstances. During my training sessions prior to testing, everyone adhered to that rule...likely because we were all from the same unit.  However, during the testing, there were athletes from different locations on the island, and as such, you never knew how they trained.  During one sparring session, the first shot my opponent took was directly to my head.

The point is that as an IR consultant, you may find yourself sparring not only with the threat actor with whom you are engaged, but also with the culture of the folks you are assisting.  Agreed-upon rules, such as not forcing the password change until everyone is ready, only work if everyone who agrees to them actually adheres to and follows them.

To one of Charity's points, during IR engagements, I also got to see how those I was working with reacted to stress, and very often, my first goal was to bring down that stress level so that we could move and respond in more of a coordinated fashion, and less based on adrenaline. During one IR engagement, I arrived on-site at 10:30pm on a Friday night, knowing that the team had already been working 20+ hr days for the last 10 days or so.  My first action was to get everyone to go home; it took some time, but we got the last person out the door by 11:30pm.  Even the following day, within the first few hours, some of the folks at the site were having trouble typing commands, and would click the wrong buttons in a UI.  Too many mistakes were being made, making simple actions take far too long to accomplish.  Having everyone get some rest, and then focusing their efforts once we were back in the fight got us to the point where we were making some headway, and once we were able to build up some forward momentum, the stress levels started to come down.

Also, it makes me wonder if Charity and Lesley or Richard have had a chance to meet yet.

Roll-Ups
Roll-ups are a great way to get an overview of what happened during a given week or month, and to maybe see something that you might have missed.  ThisWeekIn4n6 is a great resource for just this purpose, but like other such roll-ups and consolidations, the descriptions of various articles or events can sometimes fall short of the mark.

Okay, before we go any further, I am not knocking the effort that goes into producing a roll-up like this every week, along with a monthly overview.  Not at all.  It's a lot of work and greatly appreciated, as I know I've seen things in the weekly roll-up that I would not have seen otherwise.  As such, I do not expect a full-blown review of each of the resources listed; however, I will say that I'm no different from anyone else, in that I may not look at something that contains some real solid information, based on the description.

One example is Ali's recent blog post titled Creating a Hidden Prefetch File to Bypass Normal Forensic Analysis. Ali addresses what happens on a system when an executable is launched from within an ADS, and the effect that may have on "normal" DF analysis. I was interested in the article because (a) Ali is a friend, and (b) I've been interested in ADSs since about 1998.  As such, I was enticed to read the full content of the article, and I found value in it.

In the roll-up description, there is the statement, "This prefetch file was not detected by forensic tools". Should it have been?  Most tools, particularly the commercial ones, present data on which the examiner then needs to perform analysis; however, is it incumbent upon a tool to 'detect' this sort of thing?

As another example, a co-worker and I had a blog post published on our corporate web site recently; the roll-up description simply states that we "looked at increased TrickBot activity from GRIM SPIDER".  Really, it is SO much more than that! The purpose of the article that Eric and I worked on (Eric did the lion's share of the work) was to stand on the shoulders of the CS Intel team and illustrate what was seen "on the ground" across a number of IR engagements.  Yes, the article started out mentioning the described increase in activity, in order to tie it to the previous blog post from our Intel team.  However, it then extended that information by incorporating what was observed through incident response, given the aperture and collection bias of the IR team. Not only was this deep dive discussed in the article, but we wanted to make it easier to digest by presenting a table of observables (table 1 in the article) that maps MITRE ATT&CK tactics and techniques to the actual data from the engagement, to illustrate what each technique "looks like".

My point is simply this...sometimes the description of an article isn't in line with the content, and sometimes it isn't even enticing.  However, there is a lot of great content out there, content that shouldn't be passed over because of a synopsis that perhaps doesn't generate interest.

LNK Files
Huntress Labs recently posted an article regarding an LNK file they'd seen, and the deep dive that they took into the LNK file and the PowerShell that it launched. While the article includes 'deep dive' in the title, there is no mention of LNK file metadata.  Now, this may be due to the fact that the LNK file was created on, rather than sent to, the target system, but there isn't enough detail within the article for a reader to know.

The technique described, while very interesting, is not new.  I mentioned Felix's post regarding booby-trapped LNK files.  It is fascinating to see other examples of how this technique is used, and it would be very interesting to see some of the other TTPs surrounding this particular use of the technique, for context.  Also, it would be very interesting to see what this 'looks like' with respect to EDR tooling...I suspect that the parent process would be the Windows Explorer shell, shown launching PowerShell, with a child process of mshta.exe; however, there doesn't seem to be any information regarding where the file was found within the file system, how it is launched, etc.

I think that this was a great dive into the technique used, and I enjoy reading technical analyses such as this, as they show the lengths that an adversary will go to in order to avoid detection, as well as buy themselves some time as these things are unraveled.  However, in this particular case, there's a good bit that isn't said; for example, if the LNK file was created on the system where it was found, then one can assume that there was a bit of a 'noisy' process to create it.  After all, this isn't something that you can create directly via the API.  Yes, you can get it started via the API, but additional tooling is required to append data to the end of the LNK file AND include a command in the LNK file to read that data; this isn't 'normal'.  Yes, it can be accomplished via scripting, but that script would have had to have been copied over and executed, TTPs which can be detected.  If the LNK file was created on another system, such as the bad guy's system, then there is likely metadata in the file that can tell us about that system, as well as toolmarks that can tell us something about how the LNK file was created.

The folks at Huntress Labs did a great job of unraveling the LNK file, but for whatever reason, a great deal was left unsaid.  I, for one, would be very interested in knowing more about the file, as there is undoubtedly valuable information that can be used to develop threat intelligence.

Speaking of unraveling LNK files, the folks at JPCERT/CC recently blogged about weaponized LNK files being used against targets.  This one is interesting due to the fact that targeted users receive an email that contains a link to the weaponized LNK file, which is hosted in a cloud service.  As such, the LNK file is not an attachment to the email, nor is it in a zipped archive that is attached to the email; it's downloaded to the target system and executed after the user clicks a link.

Unfortunately, the article only gives an overview of the LNK file; there are some aspects of this issue that bear closer examination.  I understand that some things may have been presented from a high-level view due to the sensitivity of the case being worked, translating to English, etc.  However, based on prior experience, there may be a good deal of intelligence that can be gleaned from the samples.

I did a search for several (albeit not all) of the hashes listed, and could not find samples. I think that there may be something interesting there, if we're able to look at the full metadata of the LNK files, especially if it's plotted across time (when the phishing emails were sent) or campaigns.  I'd like to see more, in part because I think that there's more to see...

Wednesday, May 29, 2019

Troubleshooting and Deep Knowledge

What do we do when a tool that we're using doesn't return what we expect to see?

How often does this happen?  How often do we run a tool in the course of our DFIR investigation process and for some reason, we don't see something in the output of the tool that we thought we would see?  How often do we look for something, only to find that it's not there?  More importantly, what do we do about it?

I can't speak for anyone else, but I tend to want to figure out why, particularly if the data I'm trying to parse is important to my overall analysis, based on my goals.  If it's just a curiosity, I'll leave it until later, but if the data itself is paramount to the analysis, I generally tend to want to know why I'm seeing (or not, as the case may be) what I'm seeing. 

Several years ago, while I was QSA certified and working on PCI cases (sssshhhh...don't let that little secret out...), we ran across a case where we knew that two specific brands of credit cards were being used at the site, but for some reason, our scans were coming up empty for those specific brands.  We were getting lots of search results for the major card brands, but nothing for these two brands that we expected to see.  We started peeling back the layers of the process, and got to the point where we were running scans of just straight text files that contained test data (several of the brands provided a small number of "valid" but unassigned credit card numbers), but we were still coming up empty on the brands.  As we investigated further, we narrowed the issue down to a built-in API function, and determined that it was not, in fact, returning all "valid" credit card numbers...it was returning some, just not all of them; it did not consider these two brands 'valid' for some reason.  We got some programming assistance and built our own replacement function, one that was a bit slower, but more complete.
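For illustration, the first-pass test that scanning tools typically apply to a candidate number is the Luhn (mod 10) check; a minimal sketch is below.  This is not the function we built...the point is that the Luhn check is only half the story.  A number that passes still has to match a known issuer prefix before it's reported as a hit, and an incomplete prefix table silently drops entire brands, which is exactly what we observed:

def luhn_valid(candidate: str) -> bool:
    # Luhn (mod 10) check, the usual first-pass filter for candidate PANs
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 19:   # PANs run 13 to 19 digits
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0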

The moral of the story is that we looked at the components involved...the data, the tool, and the examiner...and worked methodically to figure out where the issue lay.  Was it an assumption on our part, that these two brands were actually being used at the site?  How did we know?  Was it an assumption, or was it information that came from a specific, identifiable source?  What about the data?  Was there an issue with the data?  Or was it the tool?

As it turned out, it was an issue of deep knowledge.  We knew what the data should look like, and we had an understanding of where the data should reside.  As a bit of a follow-on to my post on Deep Knowledge in DFIR work,  I wanted to take a few moments to discuss an issue I ran into with the DefCon 2018 CTF File Server image, and what I did to address it.

I was reviewing some of the data, and I wanted to check and see if I could see a reference to the counter-forensics tool in the AppCompatCache data; as such, I ran the RegRipper appcompatcache.pl plugin, and saw the following:

Launching appcompatcache v.20190112
appcompatcache v.20190112
(System) Parse files from System hive AppCompatCache

Signature: 0x0

That was it...nothing else.  No listing of file paths and names, no time stamps...nada.

Okay, so...now what?  Do I throw up my hands and assume that the tool doesn't work?

No, not at all.  I start troubleshooting the issue.

As  I mentioned, there are three components to this process...the data, the tool, and the operator.  Any one of the three could have an "issue".  It's entirely possible that the tool does not function properly.  For example, when looking at a Registry hive via a viewer, I like to use the MiTeC Windows Registry Recovery (WRR) tool, even though I know that the tool has a deficiency.  Specifically, for value data over a certain size, a different data type is used to store that data (similar to non-resident files within the MFT using a run list), and WRR does not handle those data types. The AppCompatCache data is exactly that data type.  However, I could navigate through the Registry structure to the key in question, even though I could not directly view the raw data value via WRR.

First step...check the tool.  Is the plugin working correctly?  It works correctly against the System hive from the HR server, but that's a different version of Windows.  I don't have a System hive available that matches the Windows version from the file server image, but for the most part, the plugin seems to be working correctly.

RegRipper plugins are Perl scripts, which means that they're essentially text files.  Opening the appcompatcache.pl plugin in Notepad++, we see that lines 96 and 97 are commented out (that is, the lines start with "#"):

# ::rptMsg("Length of data: ".length($app_data));
# probe($app_data);

This is essentially some troubleshooting code; when it's not commented out, it will tell me the length of the data, and then print a hex dump of the data (via the probe() function).

Assuming that the issue may have been with the plugin, I removed those "#" symbols, saved the file, and re-ran the plugin in hopes that I'd get some more information.  For example, if the plugin wasn't parsing the data correctly, I'd see a line telling me the length of the data, and then a hex dump of the data itself.  However, when I ran the plugin, I saw simply "Length of data:" and nothing else; no length value, no hex dump.

Next step...check the data itself.  I opened the hive in WRR, and went to the Control\Session Manager key to check the AppCompatibility subkey (the path is listed in the RegRipper plugin), and...nothing.  I couldn't find the AppCompatibility subkey.  It didn't seem to exist.
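A quick way to cross-check a finding like this is to throw a second parser at the hive.  Here's a minimal sketch using the python-registry library; the library and this particular check are my additions to the troubleshooting process:

from Registry import Registry  # pip install python-registry

reg = Registry.Registry(r"f:\defcon2\files\system")

# Depending on the Windows version, the cache lives under a different
# subkey; the RegRipper plugin checks both paths, so we do the same here.
for subkey in ("AppCompatibility", "AppCompatCache"):
    path = "ControlSet001\\Control\\Session Manager\\" + subkey
    try:
        key = reg.open(path)
        print(path, "exists with", len(key.values()), "value(s)")
    except Registry.RegistryKeyNotFoundException:
        print(path, "not found in this hive")

With two parsers agreeing that the key is absent, the next question is whether it was deleted.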

I ran the RegRipper del.pl plugin against the hive file, redirecting the output to a file:

rip.pl -r f:\defcon2\files\system -p del > f:\defcon2\files\system_del.txt

From the output:

------------- Deleted Data ------------
  598220                          10 00 00 00 6f 6d 70 61          ....ompa
  598230  74 43 61 63 68 65 00 00                          tCache..

Okay, so let's try another tool.  Using yarp-print.py, I ran the following command line:

yarp-print.py --deleted e:\defcon2\files\system > e:\defcon2\files\system_del_yarp.txt

Opening the output file, there were quite a few recovered keys and values. I ran a search for "ompat", as well as one for "cache", and found nothing related to the AppCompatCache value.

Okay, so now we're pretty sure that the reason the RegRipper plugin didn't return what we expected is that the key and value in question don't exist within the hive file.  This is pretty unusual, so the next question would be, "why?"

I checked the LastWrite time of the ControlSet001\Control\Session Manager key via WRR, and then pivoted into the timeline of system activity.  Events near that time are shown below:

Wed Aug  8 03:51:12 2018 Z
  FILE                       - M... [37092] C:\ProgramData\Microsoft\Windows Defender\Support\MPLog-02112017-053110.log

Wed Aug  8 03:51:03 2018 Z
  REG                        - M... HKLM/Software/Microsoft/MpSigStub 
  FILE                       - M... [5732] C:\Windows\Temp\MpSigStub.log
  FILE                       - MA.. [4096] C:\ProgramData\Microsoft\Windows Defender\Definition Updates\{8DCEFA0C-BEA3-48DC-B17D-E363C91F2F5D}\$I30
  REG                        - M... HKLM/Software/Microsoft/Windows/CurrentVersion/WindowsUpdate/Auto Update/Results/Install 
  FILE                       - MA.. [48] C:\Windows\Temp\2A62F43A-0724-4D0C-8B60-BC284D249D64-Sigs\
  FILE                       - MA.. [48] C:\ProgramData\Microsoft\Windows Defender\Definition Updates\{8DCEFA0C-BEA3-48DC-B17D-E363C91F2F5D}\

Wed Aug  8 03:51:02 2018 Z
  REG                        - M... HKLM/Software/Microsoft/Windows Defender/Signature Updates 
  FILE                       - M... [5699701] C:\ProgramData\Microsoft\Windows Defender\Scans\mpcache-008087D650ED729E08CB8F27E1DE2E1889585057.bin
  FILE                       - MA.B [1999848] C:\ProgramData\Microsoft\Windows Defender\Scans\mpcache-008087D650ED729E08CB8F27E1DE2E1889585057.bin.83
  REG                        - M... HKLM/System/ControlSet001/Control/Session Manager 

Interesting.  It looks as if Windows Defender was being updated or a scan was being run at about that time.  Checking the support log from 03:51:12, we see the following at the end of the file:

[Tue Aug 07 2018 20:51:03] Process scan started.
[Tue Aug 07 2018 20:51:04] Process scan completed.

Checking the timezone settings for the system, we see that the ActiveTimeBias is 7 hrs, and that a scan was started and completed shortly after the Registry key in question was last modified.
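The conversion is simple enough to sanity-check in a few lines; ActiveTimeBias is stored in minutes, and local time is UTC minus the bias:

import datetime

active_time_bias = 7 * 60  # minutes, from this system's TimeZoneInformation key
utc = datetime.datetime(2018, 8, 8, 3, 51, 3)
print(utc - datetime.timedelta(minutes=active_time_bias))
# prints 2018-08-07 20:51:03, matching the Defender log entry above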

What about the other ControlSet?  Pivoting within the timeline to the point where the Control\Session Manager key within ControlSet002 was modified, we see the following:

Tue Aug  7 20:25:33 2018 Z
  FILE                       - MA.. [655360] C:\Windows\System32\$I30
  FILE                       - MA.. [56] C:\Program Files (x86)\
  FILE                       - MA.. [56] C:\Program Files\
  FILE                       - MA.. [56] C:\Program Files (x86)\$TXF_DATA
  FILE                       - MA.. [56] C:\Program Files\$TXF_DATA
  FILE                       - .A.. [20] C:\Windows\AppCompat\Programs\RecentFileCache.bcf
  EVTX     WIN-M5327EF98B9   - NetBT/4321;,WIN-M5327EF98B9:0,74.118.139.11,74.118.139.201
  REG                        - M... HKLM/Software/Wow6432Node/Microsoft/Windows/CurrentVersion/explorer 
  FILE                       - MA.. [520] C:\Users\mpowers\AppData\Local\Microsoft\Windows Mail\Backup\new\
  FILE                       - MA.. [56] C:\Windows\System32\
  FILE                       - MA.. [4096] C:\Windows\SysWOW64\config\systemprofile\AppData\LocalLow\Microsoft\CryptnetUrlCache\Content\$I30
  FILE                       - MA.. [4096] C:\Program Files\$I30
  FILE                       - MA.. [256] C:\Windows\System32\sysprep\Panther\IE\
  REG                        - M... HKLM/System/ControlSet002/Control/Session Manager

Looking nearby within the timeline, we see that there are a number of modifications to both the file system and Registry hives going on leading up to, as well as following that time.

Remember my previous blog post regarding the deletion of forensic artifacts? It appeared that the PrivaZer application was launched at approximately 20:21:51 Z on 7 Aug, and completed its work approximately 10 minutes later; the LastWrite time for the Session Manager key seen above is right in the middle of that activity. As such, it would appear that the reason why there is no AppCompatCache data available is due to the deletion of forensic artifacts by the PrivaZer application.  The later LastWrite time for the key found within ControlSet001 may be due to other factors, such as activity that occurred after the key of interest was deleted.

Summary
When engaged in analysis, there are a number of components at play, in particular the data, the tool being used, and the analyst.  Any one of these, or any combination thereof, could result in an "issue".  For example, in this case, an examiner could have easily come to the conclusion that the tool (i.e., the RegRipper plugin) was the issue. I can't remember ever seeing a System hive file that did not have an AppCompatCache value.  As such, why wouldn't it be an issue with the tool? 

It turns out that all of the components for running this issue down were right there.  We had an open source tool, one which we could literally open in Notepad and read what it does, that would start us down the troubleshooting road.  We had other tools available; without WRR, I could have just as easily imported the hive into RegEdit, and from there, seen that the AppCompatibility subkey didn't exist. 

Tuesday, May 21, 2019

DefCon 2018 CTF File Server Take-Aways

In my last post, I shared some findings with respect to analyzing an image for anti- or counter-forensics techniques, which was part of the DefCon 2018 CTF.  Working on the analysis for the question I pursued, writing it up, and reflecting on it afterwards really brought out a number of what I consider important take-aways that apply to DFIR analysis.

Version
The Windows version matters.  The first DefCon 2018 CTF image I looked at was Windows 2016, and the file server image was Windows 2008 R2.  Each had some artifacts that the other didn't.  For example, the file server image had a SysCache.hve file, and the HR server didn't.

Both of the images that I've looked at so far from the CTF were of Windows server systems, and by default, application prefetching is not enabled on server editions.

Something else to consider is, was the system 32- or 64-bit?  This is an important factor, as some artifacts associated with 32-bit applications will be found in a different location on a 64-bit system.

The version of Windows you're working with is very important, as it tells you what should and should not be available for analysis, and sort of sets the expectations.  Now, once you start digging in, things could change.  Someone could have enabled more detailed logging, or enabled application prefetching on a server edition of Windows; if that's the case, it's gravy.
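
Verifying that sort of setting is, again, a quick scripted check; a minimal sketch (python-registry, with the hive file name and ControlSet assumed) to check the prefetch setting might look like:

from Registry import Registry  # pip install python-registry

reg = Registry.Registry("SYSTEM")
pf = reg.open("ControlSet001\\Control\\Session Manager"
              "\\Memory Management\\PrefetchParameters")
# 0 = disabled, 1 = application launch, 2 = boot, 3 = both
print("EnablePrefetcher = %d" % pf.value("EnablePrefetcher").value())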

This is also true for applications, as well as other operating systems.  Versions and family matter. 

Execution
A file existing on the system does not definitively mean it was executed; the same is true with a GUI application.  Just because a GUI application was launched by a user, does that mean that it was actually run, that the functionality available through the GUI was run?  That's the challenge with GUI applications, isn't it?  How do you tell when they're actually run beyond just the GUI being opened?  Did the user select various options in the UI and then click "Ok", or did they simply open the GUI and then close it?

Yeah, I get it...why open the application and launch the UI if you don't actually intend to run it?  We often think or say the same thing when it comes to data staging and exfiltration, don't we?  Why would an actor bother to stage an archive if they weren't going to exfil it somehow?  So, when we see data staged, we would assume (of course) that it was exfiltrated.  However, we may end up making that statement about exfiltration in the absence of any actual data to support it. What if the actor tried to exfil the archive, but failed? 

If you don't have the data available to clearly and definitively illustrate that something happened, say that.  It's better to do that, than it is to state that something occurred, only to find out later that it didn't.

Consider this...an application running on the system can be looked at as a variation of Locard's Exchange Principle; in this case, the two "objects" are the application and the eco-system in which it is executing.  In the case of an application that reportedly performs anti- or counter-forensics functions, one would expect "forensic artifacts" to be removed somehow; this would constitute the "material" exchanged between the two objects.  "Contact" would be the execution of the application, and in the case of a GUI application, specifically clicking on "Ok" once the appropriate settings have been selected.  In the case of the file server CTF question, the assumption in the analysis is that "forensic artifacts" were deleted.

Why does it matter that we determine if the executable was actually launched, or if the functionality within the GUI tool was actually launched by the user?  Isn't it enough to simply say that the file exists on the system and that it had been run, and leave it at that?  We have to remember that, as experts, our findings are going to be used by someone to make a decision, or are going to impact someone.  If you're an IR consultant, it's likely that your findings will be used to make critical business decisions; i.e., what resources to employ, or possibly even external notification mandated by compliance regulations or even legislation.  I'm not even going to mention the legal/law enforcement perspective on this, as there are plenty of other folks out there who can speak authoritatively on that topic.  However, the point remains the same; what you state in your findings is going to impact someone.

Something else to consider is that throughout my time in the industry, I've seen more than a few times when a customer has contacted my team, not to perform DFIR work, but rather to review the DFIR work done by others.  In one instance, a teammate was asked by the client, "what questions would you ask?"  He was given the report produced by the other team, and went through it with a fine-tooth comb. I'm also aware of other instances where a customer has provided the same data to two different consultants, and compared the reports.  I can't say that this has happened often, but it has happened.  I've also been in situations where the customer has hired two different consulting companies, and shared the reports they (the customer) received with the other company.

Micro-Timelines and Overlays
One thing that was clear to me in both of the posts regarding the DefCon CTF images was the value and utility of micro-timelines.  Full timelines of system activity are very valuable, as well, because they provide a significant amount of context around activity at a given point in time.  However, full system timelines are also very noisy, because Windows systems are very noisy.  As we saw in the last blog post, there was a Windows update being installed around the same time that "forensic artifacts" were being deleted.  This sort of circumstance can create confusion, and lead to the misinterpretation of data.

However, by using micro-timelines, we can focus on specific sets of artifacts and start building out a skeleton analysis.  We can then use that skeleton as an overlay, and pivot into other micro-timelines, as well as into the full timeline.  Timeline analysis is an iterative process, adding and removing layers as the picture comes more into focus. 

This process can be especially valuable when dealing with activity that does not occur in an immediately sequential manner.  In my last blog post, the user launched an application, and then enabled and launched some modicum of the application's functionality.  This occurred in a straightforward fashion, albeit with other system activity (i.e., Windows update) occurring around the same time.  But what if that hadn't been the case?  What if the activity had been dispersed over days or weeks?  What if the user had used RegEdit to delete some of the forensic artifacts, and had done a few at a time?  What if the user had also performed online searches for tips on how to remove indications of activity, and then those artifacts were impacted, but over a period of days?  Having mini- and micro-timelines to develop overlays and use as pivot points would make the analysis so much easier and more efficient than scrolling through a full system timeline.
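
To be clear, building a micro-timeline doesn't require anything sophisticated; a minimal sketch (assuming events in a pipe-delimited, TLN-style format, and hypothetical file names) might be as simple as:

import re

# Pivot terms of interest: a user name, a tool name, an artifact, etc.
PIVOTS = re.compile(r"mpowers|PrivaZer|RecentDocs", re.I)

# Keep only the matching events; the result is a much smaller timeline
# that can be used for overlays and as pivot points into the full timeline
with open("events.txt") as full, open("micro_events.txt", "w") as micro:
    for line in full:
        # TLN-style record: time|source|host|user|description
        if PIVOTS.search(line):
            micro.write(line)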

Deep Knowledge
Something that working and reflecting on my previous post brought to mind is that deep knowledge has considerable value.  By that, I mean deep knowledge not only of data structures, but also of the context of the available data.

Back when I was performing PCI investigations (long ago, in a galaxy far, far away...), one of the analysts on our team followed our standard process for running searches for credit card numbers (CCNs) across the images in their case.  We had a standardized process for many of the functions associated with PCI investigations, due not only to what was mandated for the investigations, but also to the time frame in which the information had to be provided.  By having standardized, repeatable processes, no one had to figure out what steps to take, or what to do next.  In one instance, the search resulted in CCNs being found "in" the Software hive.  Closer examination revealed that the CCNs were not value names, nor were they stored in value data; rather, they were located in unallocated space within the hive file, part of sectors added to the logical file as it "grew".  Apparently, the bad guy had run tools to collect CCNs into a text file, and then at one point in their process, deleted that text file.  The sectors that had contained that data were later added to the Software hive as it grew in size.
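
For reference, the heart of that sort of search isn't complicated; here's a rough sketch (a hypothetical stand-in, not our actual process) of scanning a raw hive file, slack and unallocated space included, for Luhn-valid digit runs:

import re

def luhn_ok(digits):
    # Standard Luhn check: double every second digit from the right
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

with open("Software", "rb") as f:   # the raw hive file, read as binary
    blob = f.read()

for m in re.finditer(rb"\d{15,16}", blob):
    s = m.group().decode()
    if luhn_ok(s):
        # Mask the hits; report only the offset and the first six digits
        print("possible CCN at offset %d: %s..." % (m.start(), s[:6]))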

Having the ability to investigate, discover, and understand this was critical, as it had to be reported to Visa (Visa ran the PCI Council at the time) and there were strong indications that someone at Visa actually read the reports that were sent in...often, in great detail.  In fact, I have no doubt in my mind that there were other DFIR analysts who reviewed the reports, as well.  As such, being able to tie our findings to data in a reproducible manner was absolutely critical.  Actually, it should always be paramount, and foremost on your mind.

MITRE ATT&CK
One of the aspects of the MITRE ATT&CK framework that I've been considering for some time now is something of a reverse mapping for DFIR. What I mean by this is that right now, using the DefCon 2018 CTF file server image, we can map the activity from the question of interest (counter-forensics) to the Defense Evasion tactic, and then to the Indicator Removal on Host technique.  From there, we might take things a step further and add the findings from my last blog post as "observables"; that is, the user's use of the PrivaZer application led to specific "forensic artifacts" being deleted, which included artifacts such as the UserAssist value name found to have been deleted, the "empty" keys we saw, etc.

From there, we can then say that in order to identify those observables for the technique, we would want to look for the artifacts, or look in the locations, identified in the blog post.  This would be a reverse mapping, going from the artifacts back to the tactic. 

Let's say that you were examining an instance of data exfiltration, one that you found had occurred through the use of a bitsadmin 'upload' job.  Within the Exfiltration tactic, the use of bitsadmin.exe (or PowerShell) to create an 'upload' job might be tagged with the Automated Exfiltration or the Scheduled Transfer technique (or both), depending upon how the job is created.  The observable would be the command line (if EDR monitoring was in place) used to set up the job, or in the case of DFIR analysis, a series of records in the BitsAdmin Client Event Log.

Reversing this mapping, we know that one way to identify either of the two techniques would be to look in the BitsAdmin Client Event Log.  You might also include web server logs as a DFIR artifact that might help you definitively identify another aspect of data exfiltration.

In this way, we can extend our usual mapping from tactic to technique, adding "observables" and data sources, which then allows us to do a reverse mapping back to the tactic.  Sharing this across the DFIR team means that now analysts don't have to have had the experience of investigating a case of data exfiltration in order to know what to look for.
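
As a rough sketch of what that reverse mapping might look like as a data structure (the technique names and IDs are from the public ATT&CK matrix; the artifact strings are just examples drawn from these posts, not a definitive list):

# Artifact/data source -> (tactic, technique) reverse mapping
REVERSE_MAP = {
    "BitsAdmin Client Event Log": [
        ("Exfiltration", "T1020 - Automated Exfiltration"),
        ("Exfiltration", "T1029 - Scheduled Transfer"),
    ],
    "Deleted UserAssist value names": [
        ("Defense Evasion", "T1070 - Indicator Removal on Host"),
    ],
    "Empty RecentDocs/ComDlg32 keys": [
        ("Defense Evasion", "T1070 - Indicator Removal on Host"),
    ],
}

def techniques_for(artifact):
    # Given an observed artifact, return the tactic/technique pairs it may indicate
    return REVERSE_MAP.get(artifact, [])

print(techniques_for("BitsAdmin Client Event Log"))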

Saturday, May 18, 2019

DefCon 2018 File Server

After engaging with the first image from the DefCon 2018 CTF, I thought it would be fun, and instructive, to take a look at the second image in the CTF, the File Server.

As before, not having signed up for the CTF itself, I found the questions associated with the image at the following sites:
HackStreetBoys
InfoSecurityGeek
Caffeinated4n6

One of the CTF questions that caught my attention was, What tool was used to delete forensic artifacts?  I've long been interested in two aspects of DFIR work that are directly associated with that question: anti- (or counter-) forensics, and what something looks like in the data (i.e., how is the behavior represented in the data?).

I found the answers to the question interesting.  The HackStreetBoys response referenced the output of the itempos.pl plugin, which listed the contents of the mpowers user desktop; a reference to a program was found, one determined to be used for counter-forensics purposes, and that program was the response to the CTF question.  The other two responses referenced the UserAssist data; similarly, an entry was found that, based on Google searches, appeared to be an anti-forensics tool, and that was the answer.

However, beyond that, there was little in the way of verification that the program had actually been used to perform the specified task.  This is not to say that any analysis was incorrect; in the publicly available write-ups I reviewed, the answers to this question were reasonable guesses based on some modicum of the available data.

We know that the system was running Windows 2008 R2, and that tells us a good bit in and of itself.  For example, being a server version of Windows, application prefetching is not enabled by default, something we can easily verify both via the Registry, as well as via visual inspection of the image.  Something else that the version information tells us is that we won't have access to other artifacts associated with program execution, such as the contents of the BAM subkeys.  Further, there are other artifact differences with respect to the first image in the CTF; for example, the File Server image contains a SysCache.hve file, but not an AmCache.hve file.

As such, the InfoSecurityGeek and Caffeinated4n6 blogs focused on the UserAssist data, and rightly so.  If you look closely at the Registry Explorer screen capture in the InfoSecurityGeek blog, you'll see that the identified value in the UserAssist data does not have a time stamp associated with it.  I've seen this happen, where the value data in the Registry either does not contain the time stamp associated with when the user launched the program, or the data itself is all zeros.

As a bit of a side note...and this is more of a personal/professional preference...I tend to prefer not to hang a finding on a single data point.  What I try to do is find multiple data points, understanding the context of each, that help build out the story of what happened.  For example, the fact that a program file existed on a system does not directly correlate to the fact that it was executed or launched.  Similarly, the fact that a GUI program was launched does not definitively state that it had been run.  I can open RegEdit and browse the Registry (or not) and then close it, but that does not definitively demonstrate that I changed anything in the Registry through the use of RegEdit.

From reviewing all three blog posts, we know that a file exists on the mpowers user desktop that, based on Google searches, is capable of taking anti- or counter-forensics actions (i.e., deleting forensic artifacts).  We also know that it is a GUI program, and that indications are that the user launched the GUI.  But what we don't know is, was the program actually run, in order to "delete forensic artifacts"?

Or do we?

Based on the question, the assumption is that forensic artifacts had been deleted, and as such, would not be found via our 'normal' processes.  This is illustrated by the fact that a good bit (albeit not all) of the data we'd normally look to with regard to user activity appears to have been removed; the RecentDocs key for the user isn't populated, there are no visible LNK files in the user's Recent folder, and there are only two JumpLists in the user's AutomaticDestinations folder.  Not definitive, but it's something.

So, my approach was to look at deleted keys and values within the user's NTUSER.DAT hive. One of the values found was:

P:\Hfref\zcbjref\Qrfxgbc\CevinMre.rkr

Knowing that the value names are Rot-13 encoded, that entry decodes to:

C:\Users\mpowers\Desktop\PrivaZer.exe
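
Rot-13 is trivial to undo; in Python, for example, it's a one-liner:

import codecs

name = r"P:\Hfref\zcbjref\Qrfxgbc\CevinMre.rkr"
print(codecs.decode(name, "rot_13"))   # -> C:\Users\mpowers\Desktop\PrivaZer.exe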

Tracking the value data based on the offset listed in the value node structure, and then parsing the data at that location, here is what I found:

00 00 00 00 01 00 00 00 02 00 00 00 E4 6B 01 00   .............k..
00 00 80 BF 00 00 80 BF 00 00 80 BF 00 00 80 BF   ................
00 00 80 BF 00 00 80 BF 00 00 80 BF 00 00 80 BF   ................
00 00 80 BF 00 00 80 BF FF FF FF FF 30 DA 50 46   ............0.PF
8C 2E D4 01 00 00 00 00                           ........

As you can see, there are 8 bytes toward the end of the data that are a FILETIME object. With a little scripting-action, I was able to extract the data and translate the time stamp into something human-readable:

Tue Aug  7 20:21:51 2018
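
For those interested, the conversion itself is straightforward; here's a minimal sketch (not the code I used, which was lifted from the userassist.pl plugin) that works from the eight FILETIME bytes visible in the dump above:

import struct
from datetime import datetime, timedelta, timezone

raw = bytes.fromhex("30DA50468C2ED401")   # the 8 bytes from the value data above
(ft,) = struct.unpack("<Q", raw)          # little-endian, 64-bit FILETIME

# FILETIME counts 100-nanosecond intervals since 1 Jan 1601 UTC
ts = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)
print(ts.strftime("%a %b %d %H:%M:%S %Y"))   # -> Tue Aug 7 20:21:51 2018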

The process of getting the above time stamp involved getting the offset to the data from the value structure, locating the data within the binary contents of the NTUSER.DAT file (via a hex editor), and then writing (well, not writing so much as copy-pasting, right from the RegRipper userassist.pl plugin...) a small bit of code to go in and extract and process the data. Remember, the "active" value within the hive file contained data for which the time stamp was all zeros; in this case, we now have a time stamp value that we can work with.  Within a micro-timeline of user activity, we see the following:

Tue Aug  7 20:24:14 2018 Z
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/StreamMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/CIDSizeMRU 

Tue Aug  7 20:24:13 2018 Z
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/OpenSavePidlMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/LastVisitedPidlMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/RecentDocs 

So, we see a number of Registry keys modified, and from our use of the appropriate RegRipper plugins (as well as a viewer, to verify), we can see that all of these keys are empty; that is, none is populated with any values.  This may be the "delete forensic artifacts" activity that we were looking for...

Pivoting into a more comprehensive timeline of system activity based on the time stamp, we see the following activity:

Tue Aug  7 20:23:53 2018 Z
  FILE      - MA.B [43680] C:\Users\mpowers\AppData\Local\Temp\000\717000000000000000000_p.0x0

Tue Aug  7 20:22:18 2018 Z
  FILE      - M... [221] C:\Users\mpowers\AppData\Local\Temp\000\new_version.txt

Tue Aug  7 20:22:17 2018 Z
  FILE      - .A.B [221] C:\Users\mpowers\AppData\Local\Temp\000\new_version.txt

Tue Aug  7 20:22:15 2018 Z
  FILE       - MA.. [4096] C:\Users\mpowers\Downloads\$I30
  FILE       - MA.. [56] C:\Users\mpowers\Downloads\


Tue Aug  7 20:21:56 2018 Z
  FILE       - .A.B [314] C:\Users\mpowers\AppData\Local\Temp\000\data.ini

Tue Aug  7 20:21:55 2018 Z
  FILE       - MA.B [3096] C:\$Extend\$RmMetadata\$Txf\00000000000060CC\
  FILE       - MA.B [870278] C:\Users\mpowers\AppData\Local\Temp\000\sqlite3.dll
  FILE       - MA.B [56] C:\$Extend\$RmMetadata\$Txf\00000000000060CC\$TXF_DATA
  FILE      - ...B [4096] C:\Users\mpowers\AppData\Local\Temp\000\$I30
  FILE      - ...B [48] C:\Users\mpowers\AppData\Local\Temp\000\

Following the activity illustrated above, we see a significant series of events within the timeline that may be indicative of artifacts being "cleaned up"; specifically, folders and Registry keys being modified (presumably emptied), and files being deleted.

Something else that is very instructive from the system timeline is that while the apparent deletion activity was going on, there was also an update being applied.  This is a great example of how verbose Windows systems can be, particularly when it comes to creating events on the system.  When reviewing the timeline, I noticed that there were more than a few files being created, rather than modified, that appeared to be associated with a Windows update.  There were also some Registry keys that were being "modified".  I then checked the following log file:

C:\Windows\Logs\CBS\DeepClean.log

The entries in this log file indicated not only that there were updates being applied, but also which updates were applied, and which were skipped.  This is a great illustration of why micro-timelines, pivot points, and overlays are so valuable in timeline analysis; throwing everything into a single file or view would lead to a massive amount of data, and an analyst might miss important artifacts, or "signal", buried amongst the "noise".

Finally, there was a whole swath of events similar to the following:

 - MA.B [0] \[orphan]\00000000000000000000000000000000552.0x0

After all of that activity, we see the following:

Tue Aug  7 20:31:36 2018 Z
  FILE      - M... [314] C:\Users\mpowers\AppData\Local\Temp\000\data.ini

Tue Aug  7 20:30:59 2018 Z
  FILE      - M... [0] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\00000000000000.0x0
  FILE      - MA.. [48] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\
  REG       - M... HKLM/Software/Microsoft/Windows/CurrentVersion/RunOnce 
  FILE      - M... [0] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\000000000000000000.0x0

We see that the data.ini file associated with the PrivaZer tool was last modified in the timeline extract above.  The final 2 lines in the file are:

[last_erase_date2]
717=131781474597770000

That long string of numbers could be a FILETIME object; assuming it is and converting it to something human readable, we get:

2018-08-07T20:31:00+00:00
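
That conversion is the same FILETIME math as the earlier sketch, just starting from a decimal string rather than raw bytes:

from datetime import datetime, timedelta, timezone

ft = int("131781474597770000")   # the value from data.ini
ts = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)
print(ts.isoformat())   # -> 2018-08-07T20:30:59.777000+00:00, i.e., ~20:31 Z

Interestingly, at full precision the value lands at 20:30:59.777 Z, right in line with the last modification time we see for data.ini in the timeline extract above.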

Okay, the converted value displayed here is granular only to the minute, even though it's stored in a 100-nanosecond-epoch format, but what it seems to give us is the last date and time that the program was run.  What's interesting is that within the timeline of system activity, there doesn't seem to be any further activity following that time stamp associated with the sort of mass deletion or counter-forensics activity attributed to the PrivaZer program.  There is a significant amount of failed login activity that picks up shortly thereafter within the overall timeline; that activity could be seen as counter-forensics in nature.

Why is this important?  Well, about 3 minutes prior to the above activity kicking off, there were two attempts to install CCleaner on the system. I say "attempts", because the timeline contains several application error or crash events related to the following file:

C:\Users\mpowers\Downloads\ccsetup544pro.exe

Following these crashes, there does not appear to be any timeline data indicative of the program being installed (i.e., files and Registry keys being created or updated, etc.), nor does there seem to be any indication of the use of CCleaner.

By correlating the time that the PrivaZer program was launched to additional events that occurred on the system, we now have much more solid information that the program was used to "delete forensic artifacts".  This is important because some of the deleted artifacts could have been removed with native tools such as RegEdit or reg.exe; having an apparent privacy tool on the user desktop does not necessarily mean that it was used to perform the actions in question.  Yes, it is a logical guess, based on the data.  However, by looking a bit closer at the data, we can see further activity associated with program execution.

Conclusion
Again, my reason for sharing this isn't to point out anything with anyone else's analysis...not at all.  But something I've seen repeatedly during my time in the industry has been statements made regarding findings that are not tied to data.  One example of this is the question of data exfiltration; often, exfiltration is assumed when data has been staged and archived.  After all, why would an actor go through the work of collecting, staging, and archiving data if they weren't going to exfiltrate it?  But why assume data exfiltration when the data that would illustrate it (i.e., packet captures, netflow data, logs, etc.) isn't available; why not simply state that the data was not available?

That's what I attempted to illustrate here.  Yes, there is a file on the user's desktop called "PrivaZer.exe", and yes, per Google searches, this program can be used to "delete forensic artifacts".  But how do we know that it was, in fact, used to "delete forensic artifacts", if we don't look at the data to see if there were indications that the program was actually run?  Think about it...RegEdit could have been used to "delete forensic artifacts", specifically those within the Registry. The user could have used RegEdit to delete the contents of the keys themselves.

However, this activity would have left artifacts itself.  RegEdit is an "applet", in MS terms, and one of the artifacts retrieved by the RegRipper applets.pl plugin is the last Registry key that had focus when RegEdit was closed.  Not only was the appropriate key (i.e., the RegEdit subkey beneath the Applets key) not present within the 'active' Registry, I did not find any indication of the key having been deleted.
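
Checking for that key is, once again, a quick scripted check; a minimal sketch (python-registry, with the user's hive exported as "NTUSER.DAT"; both assumptions for illustration) would be:

from Registry import Registry  # pip install python-registry

reg = Registry.Registry("NTUSER.DAT")  # the mpowers user's hive
path = "Software\\Microsoft\\Windows\\CurrentVersion\\Applets\\Regedit"
try:
    key = reg.open(path)
    # the LastKey value holds the key that had focus when RegEdit was closed
    print("LastKey: %s" % key.value("LastKey").value())
except Registry.RegistryKeyNotFoundException:
    print("Regedit applet key not present in the active hive")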