Wednesday, August 22, 2018

Updates

RegRipper
Not all RegRipper plugins come from external sources; in fact, a good number of the plugins I've written started as something I ran across on the Internet, from various sources (most often Twitter).  Sometimes it's a blog post, other times it's a malware write-up, or it could be the result of working a forensic challenge.

Based on Adam's post, I created a plugin (named wsh_settings.pl) that outputs the values of the Settings key, and includes an Analysis Tip that references the Remote value.

I also updated the clsid.pl plugin to incorporate looking for the TreatAs value. On a sample hive that I have from a Win7 SP1 system, I ran the following command:

rip -r d:\cases\test\software -p clsid | find "TreatAs"

I got a total of 9 hits, 7 of which were all for the same GUID (i.e., {F20DA720-C02F-11CE-927B-0800095AE340}), which appears to refer to packager.dll.

I also created a TLN output version of clsid.pl (named clsid_tln.pl) so that this information can be used to create a timeline (in and of itself), or can be added to a timeline that incorporates other data sources.  I know from initial testing that under "normal" circumstances, the LastWrite times for the keys may be lumped together around the same time, but what we're looking for here is outliers, timeline entries that correspond with other suspicious activity, forming an artifact cluster.
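For reference, a TLN event is just a five-field, pipe-delimited line (time|source|system|user|description), with the time expressed as a Unix epoch.  A quick sketch of emitting one; the host name and description below are made up for illustration:

```python
from datetime import datetime, timezone

def tln_line(dt, source, system, user, description):
    """Format one TLN timeline event: Time|Source|System|User|Description."""
    epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
    return "|".join([str(epoch), source, system, user, description])

# A hypothetical CLSID key LastWrite time, as clsid_tln.pl might report it
lastwrite = datetime(2018, 8, 1, 12, 30, 0)
print(tln_line(lastwrite, "REG", "WIN7-SP1", "",
               "CLSID\\{F20DA720-C02F-11CE-927B-0800095AE340} key LastWrite [TreatAs]"))
```

Lines in this format sort numerically on the first field, which is what makes dropping them into a larger timeline alongside other data sources trivial.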

Book
I received an email from my publisher on 20 Aug 2018 telling me that Investigating Windows Systems had officially been published, and is available here through the publisher!  I'm not sure what that means with respect to the book actually being available or shipped (if you pre-ordered it) from Amazon; for me, it's a milestone, something I can mark off my list.  That's #9 down (as in, nine books that I've authored), and I'm currently working on Practical Windows Investigations, which is due out next year.

IWS is a bit of a departure from my previous books; instead of listing various artifacts that you could use in an investigation, and leaving it to the reader to figure out how to string them together, I used images available online to illustrate what an investigation might look like.  Using the images, I provide analysis goals that are more in line with what one might expect to see during a real-world IR investigation.  I then walk through the analysis of the image (based on the stated goals), providing decision pivot points along the way.  However, these investigations are somewhat naturally limited...they aren't enterprise level, don't involve things like lateral movement, etc.  As such, these things aren't addressed, but I did try to cover as much as I could with what was available.

I have a GitHub repo for the book - it doesn't contain a great deal at the moment, just links to the images used, and in the folder for chapter 4, code that I wrote for that particular chapter.  I'm sure I'll be adding material over time, either based on requests or based on interesting things from my notes and folders for the book.

Practical Windows Investigations is going to swing the pendulum back a bit, so to speak, in that rather than just looking at artifacts, I'm focusing on different aspects of investigations and addressing what can be achieved when pursuing those avenues.  The book is currently spec'd at 12 chapters, and the list is not too different from what was listed in this post from March.

The current chapters are:

Core Concepts
How to analyze Windows Event Logs
How to get the most out of RegRipper
Malware Detection
How to determine data exfiltration
File (LNK, DOCX/DOC, PDF) Analysis
How to investigate lateral movement
How to investigate program execution
How to investigate user activity
How to correlate/associate a device with a user (USB, Bluetooth)
How to detect/analyze the use of anti-forensics
Making use of VSCs

As with my previous books, tools used for analysis will be free and open source tools; this is due to the fact that I simply do not have access to commercial tools.  This is a topic that is continually brought up during prospectus reviews, and the reviewers simply do not seem to understand. 

Saturday, August 18, 2018

Updates

Win10 Notification Database
Leave it to MS to make our jobs as DFIR analysts fun, all day, every day!  Actually, one of the things I've always found fascinating about analyzing Windows systems is that the version of Windows, more often than not, will dictate how far you're able to go with your analysis.

An interesting artifact that's available on Win10 systems is the notification database, which is where those pop-up messages you receive on the desktop are stored.  Over the past couple of months, I've noticed that on my work computer, I get more of these messages, because it now ties into Outlook.  It turns out that this database is a SQLite database.  Lots of folks in the community use various means to parse SQLite databases; one of the popular ways to do this is via Python, and as a result, you can often find either samples via tutorials, or full-on scripts to parse these databases for you.

MalwareMaloney posted a very interesting article on parsing the write-ahead logging (.wal) file for the database.  Also, as David pointed out, anytime you're working with a SQLite database, you should consider taking a look at Mari's blog post on recovering deleted data.
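One trip-up when querying this database is that the timestamps are Windows FILETIME values (100-nanosecond ticks since 1 Jan 1601) stored in SQLite integer columns.  A minimal sketch, using an in-memory stand-in since I'm quoting the table and column names from memory (treat them as assumptions to verify against your own copy of the database):

```python
import sqlite3
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_utc(ft):
    """Convert a Windows FILETIME (100ns ticks since 1601) to a UTC datetime."""
    return datetime.fromtimestamp(ft / 10_000_000 - EPOCH_DELTA, tz=timezone.utc)

# Stand-in for the notification database; table/column names are assumptions
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Notification (Id INTEGER, Payload BLOB, ArrivalTime INTEGER)")
con.execute("INSERT INTO Notification VALUES (1, X'3C746F6173743E', 131770350000000000)")

for nid, payload, arrival in con.execute(
        "SELECT Id, Payload, ArrivalTime FROM Notification"):
    print(nid, filetime_to_utc(arrival).isoformat(), payload[:32])
```

The same conversion applies to any of the FILETIME-based columns you pull out of the database or carve from the .wal file.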

RegRipper
Based on input from a user, I updated the sizes.pl plugin in a way that you may find useful; it now displays a brief sample of the data 'found' by the plugin (default is 48 bytes/characters).  So, instead of just finding a value of a certain size (or above) and telling you that it found it, the plugin now displays a portion of the data itself.  The method of display is based on the data type...if it's a string, it outputs a portion of the string, and if the data is binary, it outputs a hex dump of the pre-determined length.  That length, as well as the minimum data size, can be modified by opening the plugin in Notepad (or any other editor) and modifying the "$output_size" and "$min_size" values, respectively.
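The plugin itself is Perl, but the display logic is simple enough to sketch in Python (the function and parameter names here are my own choosing, not the plugin's):

```python
def data_sample(value_type, data, output_size=48):
    """Preview value data the way sizes.pl does: strings as-is, binary as hex."""
    if value_type in ("REG_SZ", "REG_EXPAND_SZ"):
        return data[:output_size]
    return data[:output_size].hex()  # hex dump of the first output_size bytes

def meets_min_size(data, min_size=5000):
    """Only values at or above the minimum size get reported."""
    return len(data) >= min_size
```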

Here is some sample output from the plugin, run against a Software hive known to contain a malicious Powershell script:

sizes v.20180817
(All) Scans a hive file looking for binary value data of a min size (5000)

Key  : \4MX64uqR  Value: Dp8m09KD  Size: 7056 bytes
Data Sample (first 48 bytes) : aQBmACgAWwBJAG4AdABQAHQAcgBdADoAOgBTAGkAegBlACAA...

From here, I'd definitely pivot on the key name ("4MX64uqR"), looking into a timeline, as well as searching other locations in the Registry (auto start locations??) and file system for references to the name.
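That data sample also has the telltale look of base64-encoded UTF-16LE, the form PowerShell's -EncodedCommand switch takes; decoding the visible 48 characters confirms it's script code:

```python
import base64

sample = "aQBmACgAWwBJAG4AdABQAHQAcgBdADoAOgBTAGkAegBlACAA"
decoded = base64.b64decode(sample).decode("utf-16-le")
print(decoded)  # -> if([IntPtr]::Size
```

The [IntPtr]::Size check is a pattern commonly seen at the top of injected PowerShell, used to distinguish 32- from 64-bit processes, which fits the malicious-script finding.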

Interestingly enough, while working on updating this plugin, I referred back to pg 34 of Windows Registry Forensics for the table of value types.  Good thing I keep my copy handy for just this sort of emergency.  ;-)

Mari has an excellent example of how she has used this plugin in actual analysis here.

IWS
Speaking of books, Investigating Windows Systems is due out soon.  I'm really looking forward to this one, as it's a different approach altogether from my previous books.  Rather than listing the various artifacts that are available on Windows systems, folks like Phill Moore, David Cowen, and Ali Al-Shemery graciously allowed me access to the images that they put together so that I could work through them.  The purpose of the book is to illustrate a way of stringing the various artifacts together into a full-blown investigation, with analysis decisions called out and discussed along the way.

What I wanted to do with the book is present something more of a "real world" analysis approach.  Some of the images came with 30 or more questions that had to be answered as part of the challenge, and in my limited experience, that seemed a bit much.

The Github repo for the book has links to the images used, and for one chapter, has the code I used to complete a task.  Over time, I may add other bits and pieces of information, as well.

OSDFCon
My submission for OSDFCon was accepted, so I'll be at the conference to talk about RegRipper, and how you can really get the most out of it.

Here is the list of speakers at the conference...I'm thinking that my speaker bio had something to do with me being selected.  ;-)

Friday, August 10, 2018

Gaps and Up-Hill Battles

I've been thinking about writing a RegRipper plugin that looks for settings that indicate certain functionality in Windows has been disabled, such as maintaining JumpLists, as well as MRUs in the Registry.

Hold on for a segue...

David Cowen recently requested input via his blog, regarding the use of anti-forensics tools seen in the wild.  A little more than two years ago, Kevin S. wrote this really awesome blog post regarding the evolution of the Samas/Samsam ransomware, and I know you're going to ask, "great, but what does this have to do with anti-forensics tools?"  Well, about halfway down the post, you'll see where Kevin mentions that one of the early variants of Samas included a copy of sdelete in one of its resource sections.

As usual, Brett made some very poignant comments, in this case regarding seeing anti- or counter-forensics tools in the wild; specifically, just having the program on a system doesn't mean it was used, and with limited visibility, it's difficult to see how it was used.

Now, coming back around...

What constitutes counter-forensics efforts, particularly when it comes to user intent?  Do we really know the difference between user intent and operating system/application functionality?  Maybe more importantly, do we know that there is such a thing?

I've seen too many times where leaps (assumptions, speculation) have been made without first fully examining the available data, or even just looking a little bit closer.  Back in the days of XP, an issue I ran into was an empty Recycle Bin for a user, which might have been a bad thing, particularly following a legal hold.  So, an analyst would preview an image, find an empty Recycle Bin, and assume that the user had emptied it, following the legal hold announcement.  But wait...what was the NukeOnDelete setting?  Wait...the what?  Yes, with this functionality enabled, the user would delete a file as they normally would, but the file would not appear in the Recycle Bin. 
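In other words, the conclusion has to be conditioned on the setting.  A toy sketch of the reasoning (the wording and logic here are mine, just to make the decision explicit):

```python
def assess_empty_recycle_bin(nuke_on_delete):
    """An empty XP Recycle Bin means different things depending on NukeOnDelete."""
    if nuke_on_delete:  # nonzero DWORD: deletions bypass the Recycle Bin entirely
        return "inconclusive: deleted files never appear in the Recycle Bin"
    return "possible user action: emptying the bin is one explanation to test"
```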

Other functionality that's similar to this includes: did the user clear the IE history, or was it the result of the regularly scheduled purge?  Did the user delete files and run defrag after a legal hold order, or was defrag run automatically by the OS?

Skipping ahead to modern times, what happens if you get a "violation of acceptable use policies" case, or a harassment case, or any other case where user activity is a (or the) central focus of your examination, and the user has no JumpLists?  Yes, the automatic JumpLists folder is empty.  Does that seem possible?  Or did the user suspect someone would be looking at their system, and purposely delete them?  Well, did you check whether the tracking of JumpLists had been disabled via the Registry?

My point is that there is functionality within Windows to disable the recording and maintenance of various artifacts that analysts use to do their jobs.  This functionality can be enabled or disabled through the Registry.  As such, if an analyst does not find what they expect to find (i.e., files in the Recycle Bin, RecentDocs populated, etc.) then it's a good idea to check for the settings.
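A sketch of what such a settings sweep can look like; the key paths and value names below are from memory and intended as examples to verify, not an exhaustive list:

```python
# NTUSER.DAT settings that disable recording of common user-activity artifacts.
# Paths and value names are illustrative; verify them against your own hives.
CHECKS = [
    (r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer",
     "NoRecentDocsHistory", lambda v: v == 1, "RecentDocs MRU not recorded"),
    (r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
     "Start_TrackDocs", lambda v: v == 0, "Recently-opened item tracking disabled"),
    (r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
     "Start_TrackProgs", lambda v: v == 0, "Program launch tracking disabled"),
]

def disabled_artifacts(values):
    """Given {(key_path, value_name): data} from a hive, flag disabling settings."""
    return [msg for path, name, is_disabled, msg in CHECKS
            if (path, name) in values and is_disabled(values[(path, name)])]
```

Whatever tool extracts the values, the point is the same: when an expected artifact is missing, check whether recording it was turned off before assuming it was wiped.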

Oh, and yes, I did write the plugin, the current iteration of which is called disablemru.pl.  I'll keep adding to it as more information becomes available.

Friday, August 03, 2018

The Future of IR, pt I

I've been doing IR work for a while now, and I've had the great fortune of watching things grow over time.

When I started in the industry, there were no real training courses or programs available, and IR business models pretty much required (and still do) that a new analyst is out on the road as soon as they're hired.  I got started in the industry and developed some skills, had some training (initial EnCase training in '99), and when I got to a consulting position, I was extremely fortunate to have a boss who took an interest in mentoring me and providing guidance, which I greatly appreciate to this day.

However, I've seen this same issue with business models as recently as 2018.  New candidates for IR teams are interviewed, and once they're hired and go through the corporate on-boarding, there is little if any facility for training or supervising...the analysts are left to themselves.  Yes, they are provided with tools, software products, and report templates, but for the most part, that's it.  How are they communicating with clients?  Are they sending in regular updates?  If so, are those updates appropriate, and more importantly, are they technically correct?  Are the analysts maintaining case notes?

Over the years of doing IR work, I ran into the usual issues that most of us see...like trying to find physical systems in a data center by accessing the system remotely and opening the CD-ROM tray.  But two things kept popping into my mind; one was, I really wished that there was a way to get a broader view of what was happening during an incident.  Rather than the client sending me the systems that they thought were involved or the "key" systems, what if I could get a wider view of the incident?  This was very evident when there were indications on the systems I was examining that pointed to them being accessed from other systems, or accessing other systems, on the same infrastructure.

The other was that I was spending a lot of time looking at what was left behind after a process ran.  I hadn't seen BigFoot tromp across the field because, due to the nature of IR, all I had to look at were footprints that were several days old.  What if, instead of telling the client that there were gaps in the data available for analysis (because I'm not going to guess or speculate...), I actually had a recording of the process command lines?

During one particularly fascinating engagement, it turned out that the client had installed a monitoring program on two of the affected systems.  The program was one of those applications that parents use to monitor their kid's computers, and what the client provided was 3-frame-per-second videos of what went on.  As such, I just accessed the folder, found all of the frames with command prompts open, and could see exactly what the adversary typed in at the prompt.  I then went back and watched the videos to see what the adversary was doing via the browser, as well as via other GUI applications.

How useful are process command lines?  Right now, there are considerable artifacts on systems that give analysts a view into what programs were run on a system, but not how they were run.  For instance, during an engagement where we had established process creation monitoring across the enterprise, an alert was triggered on the use of rar.exe, which is very often used as a means of staging files for exfiltration.  The alert was not for "rar.exe", as the file had been renamed, but was instead for command line options that had been used, and as such, we had the password used to encrypt the archives.  When we received the image from the system and recovered the archives (they'd been deleted after exfil), we were able to open the archives and show the client exactly what was taken.
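That kind of detection hinges on the switches, not the file name.  A hedged sketch of such a rule (the regex and function are mine; real rule syntax will depend on your monitoring product):

```python
import re

# An "a" (add) command plus "-hp" (encrypt data and headers) is a rar tell,
# whatever the binary has been renamed to; -hp's argument is the password.
RAR_STAGING = re.compile(r"\s+a\s+.*-hp(\S+)", re.IGNORECASE)

def check_cmdline(cmdline):
    """Return the recovered archive password if the command line matches."""
    m = RAR_STAGING.search(cmdline)
    return m.group(1) if m else None
```

Run against a renamed binary's command line, e.g. `check_cmdline(r"C:\temp\svchost.exe a -hpP@ss1 out.rar C:\docs")`, the rule still fires and hands you the password.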

So, things have progressed quite a bit over the years, while some things remain the same.  While there have been significant in-roads made into establishing enterprise-wide visibility, the increase of device types (Windows, Mac, Linux, IoT, mobile, etc.) still requires us to have the ability to go out and get (or receive) individual devices or systems for collection and analysis; those skills will always be required.  As such, if the business model isn't changed in some meaningful way, we are going to continue to have instances where someone without the appropriate skill sets is sent out on their own.

The next step in the evolution of IR is MDR, which does more than just mash MSS and IR together.  What I mean by that is that the typical MSS functionality receives an alert, enriches it somehow, and sends the client a ticket (email, text, etc.).  This then requires that the client receive and understand the message, and figure out how they need to respond...or that they call someone to get them to respond.  While this is happening, the adversary is embedding themselves deeply within the infrastructure...in the words of Jesse Ventura from the original Predator movie, "...like an Alabama tick."

Okay, so what do you do?  Well, if you're going to have enterprise-wide visibility, how about adding enterprise-wide response and remediation?  If we're able to monitor process command lines, what if we could specify conditions that are known pretty universally to be "bad", and stop the processes?  For example, every day, hundreds of thousands of us log into our computers, open Outlook, check our email, and read attachments.  This is all normal.  What isn't normal is when that Word document that arrived as an email attachment "opens" a command prompt and downloads a file to the system (as a result of an embedded macro).  If it isn't normal and it isn't supposed to happen and we know it's bad, why not automatically block it?  Why not respond at software speeds, rather than waiting for the detection to get back to the SOC, for an analyst to review it, for the analyst to send a ticket, and for the client to receive the ticket, then open it, read it, and figure out what to do about it?  In that time, your infrastructure could be hit by a dedicated adversary, or by ransomware. 
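As a concrete example of the kind of condition that can be decided ahead of time, consider a parent/child process rule; the process lists here are illustrative, not complete:

```python
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe"}

def should_block(parent, child):
    """An Office app spawning a shell is 'known bad' enough to stop at software speed."""
    return parent.lower() in OFFICE_APPS and child.lower() in SHELLS
```

A rule like this encodes the decision once, in advance, so the response happens in-line rather than waiting on a ticket.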

If you stop the download from occurring, you prevent all sorts of bad follow-on things from happening, like having to report a "personal data breach", per GDPR. 

Of course, the next step would be to automatically isolate the system on the network.  Yes, I completely understand that if someone's trying to do work and they can't communicate off of their own system, it's going to hamper or even obviate their workflow.  But if that were the case, why did they say, "Yes, I think I will "enable content", thank you very much!", after the phishing training showed them why they shouldn't do that?  I get that it's a pain in the hind end, but which is worse...enterprise-wide ransomware that not only shuts everything down but requires you to report to GDPR, or one person having to have a new computer to get work done?

So, the overall point I'm trying to make here is that the future of IR is going to be to detect and respond faster.  Faster than we have been.  Get ahead of the adversary, get inside their OODA loop, and cycle through the decision process faster than they can respond.  I've seen this in action...the military has things called "immediate actions", which are actions that, when a condition is met, you perform immediately.  In the military, you train at these things until they're automatic muscle memory, so that when those things occur (say, your rifle jams), you perform the actions immediately.  We can apply these sorts of things to the OODA loop by removing the need to make decisions under duress because we made them ahead of time; we made the decision regarding a specific action while we had the time to think about it, so that we didn't have to try to make the decision during an incident.

Detecting and responding more quickly is going to require a few things:
- Visibility
- Intelligence
- Planning

I'll be addressing these topics in future blog posts.

Notes, etc.

Case Notes
@mattnotmax recently posted a blog on "contemporaneous notes".

But wait...there's more.

I agree with what Matt said regarding notes.  1000%.

I've been on engagements where things have gone sideways.  In one instance, I was assigned by the PoC to work with a network engineer to get access to network device logs.  We walked through a high-level network diagram on a white board, and at every point, I was told by the network engineer that there were no logs available from that device, nor that one, and definitely not that one.  I took a picture with my cell phone of the white board to document my "findings" in that regard, and when I met with the PoC the following morning, he seemed bothered by the fact that I hadn't gotten anywhere the previous day.  He called in the network engineer, who then said that he'd never said that logs were not available.  I pulled up the photo of the white board on my laptop and walked through it.  By the time we got to the end of the engagement, I never did get any logs from any of the network devices.  However, I had documented my findings (written, as well as with the photo) and had a defensible position, not just for myself, but also for my boss to take up the chain, if the issue ended up going that far.

As a side note, I make it a practice that if I get the sense that something is fishy, or that an engagement could go sideways, I'll call my boss and inform my management chain first, so that they hear it from me.  Better that than having the first they hear of an issue be an angry call from a client.  This is where notes really help...IR engagements are highly stressful events for everyone involved, and having notes is going to help you when it comes to who said/did what, and when.

Rules
One of the questions Matt addresses in his blog post is the 'rules' of the notes; whenever I've been asked about this, the question has always been, "what's the standard?"  My response has always been pretty simple...reproducibility.  You need to write your notes to the point that 6 months or a year later, you (or more importantly, someone else) can take the notes and the data, and reproduce the results.

Standards
In fact, I have been asked the question about the "standard" so much over the years that it's become a high-fidelity indicator that notes will not be taken.  So far, every single time I've been asked about the "standard" to which notes should be taken, notes have not been kept.

When I started with ISS in 2006, I was provided dongles for AccessData FTK, as well as for EnCase 4.22 and 6.19.  When I performed work, I noted which application I used right along with what I did...because that was important.  When we (our team) determined that one of the built-in functions for a commercial tool was not, in fact, finding all valid credit card numbers (kind of important for PFI work...), we worked with Lance Mueller to get our own function working, and made the use of the resulting script part of our standard operating (and repeatable) procedure.

Part of the PFI work at the time also included a search for hashes, file names, and a few other indicators.  Our notes had to have the date of the work, and our SOP required that, just prior to running the searches for those indicators, we pull down the latest-and-greatest copies of the indicator lists.

Why was this important?  Well, if someone found something in the data (or on the original system) six or eight months later, we had clear and concise documentation as to what steps were taken when.  That way, if the analyst was on leave or on another engagement, a team lead or the director could easily answer the questions.  With some simple, clear, and concise notes, the team lead could say, "Yes, but that case was worked 8 months ago, and that indicator was only made available as part of last month's list."  Boom.  Done.  Next.

Application
Another question that comes up is, what application should I use?  Well, I started with Notepad, because it was there.  I loved it.  When I received a laptop from a client and had to remove the hard drive for imaging, I could paste a link to the online instructions I followed, and I could download the instructions and keep them in a separate file, or print them out and keep them in a folder.  URL link or printed out, I had an "appendix" to my notes.  When I got to the point where I wanted to add photos to my notes, I simply used Write or Word.  Depending upon your requirements, you may not need anything schmancy or "high speed, low drag"...just what you've got available may work just fine.

If you're looking for something a little more refined when it comes to keeping notes, take a look at ForensicNotes.

Conclusion
Many of us say/talk about it...we keep notes because at some point in the future, someone's going to have questions.  I was in a role where we said that repeatedly...and then it happened.  Fully a year after working an engagement, questions came up.  Serious questions.  Through corporate counsel.  And guess what...there were no notes.  No one had any clue as to what happened, and by "no one", I'm only referring to those who actually worked the case.  Look, we say these things not because we're being silly or we're bored or we just want to be mean...we say them because they're significant, serious, and some of us have seen the damage that can occur to the reputation of a company or an individual when notes aren't maintained.

Even with all of this being said, and even with Matt's blog post, there is no doubt in my mind that many DFIR folks are going to continue to say, "I don't want my notes to be discoverable, because a defense attorney would tear them/me apart on the stand."  According to Brett Shavers, that's going to happen anyway, with or without notes, so better to have the notes and be able to answer questions with confidence, or at least something more than, "I don't know".  The simple fact is that if you're not keeping case notes, you're doing yourself, your fellow analysts, and your client all a huge disservice.

Thursday, August 02, 2018

Some New Items

DFIR Skillz
Brett Shavers posted another great article in which he discussed a much-needed skill in DFIR, albeit one that isn't taught in any courses.  That is, communicating to others. If you really think about it, this is incredibly, critically, vitally important.  What good is it to have a good, great, or even the best threat intel or DFIR analyst, if they are unable to communicate with others and share their findings?  And I'm not talking about just the end results, but also being able to clearly articulate what led to those findings.  What is it that you're seeing, for example, that indicates that there's an adversary active in an environment, versus a bunch of persistence mechanisms kicking off when systems are booted?  Can you articulate your reasoning, and can you articulate where the gaps are, as well?

Something to keep in mind...there is a distinct difference between being unable to clearly delineate or share findings, and simply being unwilling to do so.

DFIR Skillz - Tech Skillz
A great way to develop technical analysis DFIR skills is to practice technical analysis DFIR skills.  There are a number of DFIR challenge images posted online and available for download that you can use to practice skills.

The CFReDS data leakage case provides a great opportunity to work with different types of data from a Windows 7 system, as well as from external devices.

The LoneWolf scenario at the DigitalCorpora site is pretty fascinating, as it allows you to practice using a number of tools, such as hindsight, Volatility, bulk_extractor, etc.  The scenario includes an image of a Win 10 laptop with a user profile that includes browser (Chrome, IE) history, a hibernation file, a memory dump, a page file, a swap file, etc.  This scenario and the accompanying data was produced by Thomas Moore, as his final project for one of Simson Garfinkel's courses at GMU.  The challenge in this scenario will be learning from the image, having not taken the course.

Ali Hadi, PhD, makes a number of datasets available for download.  I especially like challenge #1, as it's a great opportunity to try your hand at various analysis tasks, such as using Yara to detect webshells, etc.  There are also a couple of other really cool things you can do with the available data; many thanks to @binaryz0ne for providing the scenarios and the datasets.

These are just a few examples of what's available; perhaps the best way to really get the most from these scenarios is to work with a mentor.  I can see that many enthusiasts will download the images, maybe start down the road a bit, but not really get anywhere meaningful due to road blocks of some kind.  Having someone that you can bounce ideas off ("how does this analysis plan look?"), seek guidance from, etc., would be a great way to move beyond where you are now, and really expand your skill sets.

Blogging
DFIRDudes (Hadar and Martin) have kicked off (here's the tweet announcing it) a new blog with an inaugural post on StartupInfo files.  This is a great idea, because as Brett had mentioned previously, there's a lot of great info that is peppered onto Twitter that really needs a much more permanent home someplace, one that's a bit roomier (and more persistent) than 280 characters.  The first sentence of the first post really sets the tone, and gives the blog a good push away from the dock.

If you're on the fence about starting a blog, check out Phill's post, because his answer is a resounding "yes".

Retail Breaches
HelpNetSecurity had a fascinating article recently that discusses a surge in retail breaches.  While this data is based on a survey, I still find it fascinating that in 2018, more organizations haven't pursued the implementation of instrumentation and visibility into their infrastructures that would provide for early detection and response.  And yes, I do understand that the focus of the survey (and as a result, the data) is retailers, organizations that wouldn't necessarily have the budget for such things.

Perhaps the most telling part of the article is that, "Security spending is up but not aligning with risk."

RegRipper updates
I've received some great contributions to the repository over the past month or so; many, many thanks to those who've contributed plugins!