Wednesday, May 04, 2016


RegRipper Plugins
I don't often get requests on Github for modifications to RegRipper, but I got one recently that was very interesting. Duckexmachina said that they'd run log2timeline and found entries in one ControlSet within the System hive that weren't in the one marked as "current", and as a result, those entries were not listed by the plugin.

As such, as a test, I wrote a plugin that accesses all available ControlSets within the System hive and displays the entries listed.  In the limited testing I've done with the new plugin, I haven't yet found differences in the AppCompatCache entries in the available ControlSets; in the few System hives that I have available for testing, the LastWrite times for the keys in the available ControlSets have been identical.
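The core of what such a plugin has to do first is sort out which ControlSet is "current" and which others exist; the Select key's "Current" value points to the ControlSet in use. Here's a minimal sketch of that logic in Python (an illustration only, not the actual Perl plugin code):

```python
# Minimal sketch (Python, not the actual Perl plugin): given the subkey
# names at the root of a System hive and the Select key's "Current" value,
# work out which ControlSet is current and which others are available.
def control_sets(root_subkey_names, select_current):
    current = "ControlSet%03d" % select_current
    available = sorted(n for n in root_subkey_names
                       if n.startswith("ControlSet"))
    return current, available

# Example: a hive with two ControlSets, where Select\Current == 1
current, available = control_sets(
    ["ControlSet001", "ControlSet002", "MountedDevices", "Select"], 1)
# current   -> "ControlSet001"
# available -> ["ControlSet001", "ControlSet002"]
```

A plugin that walks every ControlSet would then apply the same key/value parsing to each name in `available`, rather than only to `current`.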

As you can see in the below timeline excerpt, the AppCompatCache keys in both ControlSets appear to be written at shutdown:

Tue Mar 22 04:02:49 2016 Z
  FILE                       - .A.. [107479040] C:\Windows\System32\config\SOFTWARE
  FILE                       - .A.. [262144] C:\Windows\ServiceProfiles\NetworkService\NTUSER.DAT
  FILE                       - .A.. [262144] C:\Windows\System32\config\SECURITY
  FILE                       - .A.. [262144] C:\Windows\ServiceProfiles\LocalService\NTUSER.DAT
  FILE                       - .A.. [18087936] C:\System Volume Information\Syscache.hve
  REG                        - M... HKLM/System/ControlSet002/Control/Session Manager/AppCompatCache 
  REG                        - M... HKLM/System/ControlSet001/Control/Session Manager/AppCompatCache 
  FILE                       - .A.. [262144] C:\Windows\System32\config\SAM
  FILE                       - .A.. [14942208] C:\Windows\System32\config\SYSTEM
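For readers less familiar with the excerpt format above, the four-character flag field is the common MACB notation (M = modified, A = accessed, C = entry changed, B = born/created). A tiny sketch of how those flags decode:

```python
# Decode the MACB flag string seen in the timeline excerpt above; "M..."
# means only the modified time matched this entry, ".A.." only the
# last-accessed time, and so on.
def decode_macb(flags):
    names = ("modified", "accessed", "changed", "born")
    return [name for name, ch in zip(names, flags) if ch != "."]

# decode_macb("M...") -> ["modified"]
# decode_macb(".A..") -> ["accessed"]
```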

Now there may be instances where this is not the case, but for the most part, what you see in the above timeline excerpt is what I tend to see in the recent timelines I've created.

I'll go ahead and leave the plugin as part of the distribution, and see how folks use it.  I'm not sure that adding the capability of parsing all available ControlSets is something that is necessary or even useful for all plugins that parse the System hive.  If I need to see something from a historical perspective within the System hive, I'll either go to the RegBack folder and extract the copy of the hive stored there, or access any Volume Shadow Copies that may be available.

MS has updated their Sysmon tool to version 4.0.  There's also this great presentation from Mark Russinovich that discusses how the tool can be used in an infrastructure.  It's well worth the time to go through it.

A quick update to my last blog post about writing books...every now and then (and it's not very often), when someone asks if a book is going to address "what's new" in an operating system, I'll find someone who is actually able to add some detail to the request.  For example, the question may be about new functionality added to the operating system, such as Cortana, Continuum, new browsers (Edge, Spartan), new search functionality, etc., and the artifacts left on the system and in the Registry through their use.

These are all great questions, but something that isn't readily apparent to most folks is that I'm not a testing facility or company.  I'm one guy.  I do not have access to devices such as a Windows phone, a Surface device, etc.  I'm writing this blog post using a Dell Latitude E6510...I don't have a touch screen device available to test functionality such as...well...the touch screen, a digital assistant, etc.

RegRipper is open source and free.  As some are aware, I end up giving a lot of the new books away.  I don't have access to a device that runs Windows 10 and has a touch screen, or can run Cortana.  I don't have access to MSDN to download and test new versions of Windows, MSOffice, etc.

Would I like to include those sorts of artifacts as part of RegRipper, or in a book?  Yes, I would...I think it would be really cool.  But instead of asking, "...does it cover...", ask yourself instead, "what am I willing to contribute?"  It could be devices for testing, or the data extracted from said devices, along with a description of the testing performed, etc.  I do what I can with the resources I have available, folks.

I was pointed to this site recently, which begins a discussion of a technique for finding unknown malware on Windows systems.  The page is described as "part 1 of 5", and after reading through it, while I think that it's a good idea to have things like this available to DFIR analysts, I don't agree with the process itself.

Here's why...I don't agree that long-running processes (hash computation/comparison, carving unallocated space, AV scans, etc.) are the first things that should be done when kicking off analysis.  There is plenty of analysis that can be conducted in parallel while those processes are running, and the necessary data for that analysis should be extracted first.

Analysis should start with identified, discrete goals.  After all, imaging and analyzing a system can be an expensive (in terms of time, money, staffing resources, etc.) process, so you want to have a reason for going down this road.  "Find all the bad stuff" is not a goal; what constitutes "bad" in the context of the environment in which the system exists?  Is the user a pen tester, or do they find vulnerabilities and write exploits?  If so, "bad" takes on an entirely new context.

When tasked with finding unknown malware, the first question should be, what leads us to believe that this system has malware on it?  I mean, honestly, when a sysadmin or IT director walks into their office in the morning, do they have a listing of systems on the wall and just throw a dart at it, and whichever system the dart lands on suddenly has malware on it?  No, that's not the case at all...there's usually something (unusual activity, process performance degradation, etc.) that leads someone to believe that there's malware on a system.  And usually when these things are noticed, they're noticed at a particular time.  Getting that information can help narrow down the search, and as such should be documented before kicking off analysis.

Once the analysis goals are documented, we have to remember that malware must execute in order to do damage.  Well, that's most cases, anyway.  As such, what we'd initially want to focus on is artifacts of process execution, and from there look for artifacts of malware on the system.

Something I discussed with another analyst recently is that I love analyzing Windows systems because the OS itself will very often record artifacts as the malware interacts with its ecosystem.  Some malware creates files and Registry keys/values, and this functionality can be found within the code of the malware itself.  However, as some malware executes, there are events that may be recorded by the operating system that are not part of the malware code.  It's like dropping a rock in a pond...there's nothing about the rock, in and of itself, that requires that ripples be produced; rather, this is something that the pond does as a reaction to the rock interacting with it.  The same can very often be true with Windows systems and malware (or a dedicated adversary).

That being said, I'll look forward to reading the remaining four blog posts in the series.

Monday, May 02, 2016

Thoughts on Books and Book Writing

The new book has been out for a couple of weeks now, and already there are two customer reviews (many thanks to Daniel Garcia and Amazon Customer for their reviews).  Daniel also wrote a more extensive review of the book on his blog, found here.  Daniel, thanks for the extensive work in reading and then writing about the book, I greatly appreciate it.

Here's my take on what the book covers...not a review, just a description of the book itself for those who may have questions.

Does it cover ... ?
One question I get every time a book is released is, "Does it cover changes to ?"  I got that question with all of the Windows Forensic Analysis books, and I got it when the first edition of this book was released ("Does it cover changes in Windows 7?").  In fact, I got that question from someone at a conference I was speaking at recently.  I thought that was pretty odd, as most often these questions are posted to public forums, and I don't see them.  As such, I thought I'd try to address the question here, so that maybe people could see my reasoning, and ask questions that way.

What I try to do with the books is address an analysis process, and perhaps show different ways that Registry data can be incorporated into the overall analysis plan.  Here's a really good example of how incorporating Registry data into an analysis process worked out FTW.  But that's just one, and a recent one...the book is full of other examples of how I've incorporated Registry data into an examination, and how doing so has been extremely valuable.

One of the things I wanted to do with this book was not just talk about how I have used Registry data in my analysis, but illustrate how others have done so, as well.  As such, I set up a contest, asking people to send me short write-ups regarding how they've used Registry analysis in their case work.  I thought it would be great to get different perspectives, and illustrate how others across the industry were doing this sort of work.  I got a single submission.

My point is simply this...there really is no suitable forum (online, book, etc.) or means by which to address every change that can occur in the Registry.  I'm not just talking about between versions of Windows...sometimes, it's simply the passage of time that leads to some change creeping into the operating system.  For example, take this blog post that's less than a year old...Yogesh found a value beneath a Registry key that contains the SSID of a wireless network.  With the operating system alone, there will be changes along the way, possibly a great many.  Add to that applications, and you'll get a whole new level of change...how would that be maintained?  As a list?  Where would it be maintained?

As such, what I've tried to do in the book is share some thoughts on artifact categories and the analysis process, in hopes that the analysis process itself would cast a wide enough net to pick up things that may have changed between versions of Windows, or simply not been discussed (or not discussed at great length) previously.

Book Writing
Sometimes, I think about why I write books; what's my reason or motivation for writing the books that I write?  I ask this question of myself, usually when starting a new book, or following a break after finishing a book.

I guess the biggest reason is that when I first started looking around for resources that covered DFIR work and topics specific to Windows systems, there really weren't any...at least, not any that I wanted to use/own.  Some of those that were available were very general, and with few exceptions, you could replace "Windows" with "Linux" and have the same book.  As such, I set out to write a book that I wanted to use, something I would refer to...and specifically with respect to the Windows Registry Forensics books, I still do.  In fact, almost everything that remained the same between the two editions did so because I still use it, and find it to be extremely valuable reference material.

So, while I wish that those interested in something particular in a book, like covering "changes to the Registry in ", would describe the changes that they're referring to before the book goes to the publisher, that simply hasn't been the case.  I have reached out to the community because I honestly believe that folks have good ideas, and that a book that includes something one person finds interesting will surely be of interest to someone else.  However, the result has been...well, you know where I'm going with this.  Regardless, as long as I have ideas and feel like writing, I will.

Thursday, April 14, 2016

Training Philosophy

I have always felt that everyone, including DFIR analysts, needs to take some level of responsibility for their own professional education.  What does this mean?  There are a couple of ways to go about this in any industry: professional reading, attending training courses, engaging with others within the community, etc.  Very often, it's beneficial to engage in more than one of these, particularly as people tend to take in information and learn new skills in different ways.

Specifically with respect to DFIR, there are training courses available that you can attend, and it doesn't take a great deal of effort to find many of these courses.  You attend the training, sit in a classroom, listen to lecture and run through lab exercises.  All of this is great, and a great way to learn something that is perhaps completely new to you, or simply a new way of performing a task.  But what happens beyond that?  What happens beyond what's presented, beyond the classroom?  Do analysts take responsibility for their education, incorporating what they learned into their day-to-day job and then going beyond what's presented in a formal setting?  Do they explore new data sources, tools, and processes?  Or do they sign up again for the course the following year in order to get new information?

When I was on the IBM ISS ERS team, we had a team member tell us that they could only learn something if they were sitting in a classroom, and someone was teaching them the subject.  On the surface, we were like, "wait...what?"  After all, do you really want an employee and a fellow team member who states that they can't learn anything new without being taught, in a classroom?  However, if you look beyond the visceral surface reaction to that statement, what they were saying was, they have too much going on operationally to take the time out to start from square 1 to learn something.  The business model behind their position requires them to be as billable as possible, which ends up meaning that out of their business day, they don't have a great deal of time available for things like non-billable professional development.  Taking them out of operational rotation, and putting them in a classroom environment where they weren't responsible for analysis, reporting, submitting travel claims, sending updates, and other billable commitments, would give them the opportunity to learn something new.  But what was important, following the training, is what they did with it.  Was that training away from the daily grind of analysis, expense reports and conference calls used as the basis for developing new skills, or was the end of the training the end of learning?

Learning New Skills
Back in 1982, I took a BASIC programming class on the Apple IIe, and the teacher's philosophy was to provide us with some basic (no pun intended) information, and then cut us loose to explore.  Those of us in the class would try different things, some (most) of which didn't work, or didn't work as intended.  If we found something that worked really well, we'd share it.  If we found something that didn't work, or didn't work quite right, we'd share that, as well, and someone would usually be able to figure out why we weren't seeing what we expected to see.

Jump ahead about 13 years, and my linear algebra professor during my graduate studies had the same philosophy.  Where most professors would give a project and the students would struggle for the rest of the week to "get" the core part of the project, this professor would provide us with the core bit of code (we were using MatLab) to the exercise or lab, and our "project" was to learn.  Of course, some did the minimum and moved on, and others would really push the boundaries of the subject.  I remember one such project where I spent a lot of time observing not just the effect of the code on different shaped matrices, but also the effect of running the output back into the code.

So now, in my professional life, I still seek to learn new things, and employ what I learn in an exploratory manner.  What happens when I do this new thing?  Or, what happens if I take this one thing that I learned, and share it with someone else?  When I learn something new, I like to try it out and see how to best employ it as part of my analysis process, even if it means changing what I do, rather than simply adding to it.  As part of that, when someone mentions a tool, I don't wait for them to explain every possible use of the tool to me.  After all, particularly if we're talking about the use of native Windows tools, I can very often go look for myself.

So you wanna learn...
If you're interested in trying your skills out on some available data, Mari recently shared this MindMap of forensic challenges with me.  This one resource provides links to all sorts of challenges, and scenarios with data available for analysts to test their skills, try out new tools, or simply dust off some old techniques.  The available data covers disk, memory, pcap analysis, and more.

This means that if an analyst wants to learn more about a tool or process, there is data available that they can use to develop their knowledge base, and add to their skillz.  So, if someone talks about a tool or process, there's nothing to stop you from taking responsibility for your own education, downloading the data and employing the tool/process on your own.

Manager's Responsibility
When I was a 2ndLt, I learned that one of my responsibilities as a platoon commander was to ensure that my Marines were properly trained, and I learned that there were two aspects to that.  The first was to ensure that they received the necessary training, be it formal, schoolhouse instruction, via an MCI correspondence course, or some other method.  The second was to ensure that once trained, the Marine employed the training.  After all, what good is it to send someone off to learn something new, only to have them return to the operational cycle and simply go back to what they were doing before they left?  I mean, you could have achieved the same thing by letting them go on vacation for a week, and saved yourself the money spent on the training, right?

Now, admittedly, the military is great about training you to do something, and then ensuring that you then have opportunity to employ that new skill.  In the private sector, particularly with DFIR training, things are often...not that way.

The Point
So, the point of all this is simple...for me, learning is a matter of doing.  I'm sure that this is the case for others, as well.  Someone can point to a tool or process, and give general thoughts on how it can be used, or even provide examples of how they've used it.  However, for me to really learn more about the topic, I need to actually do something.

The exception to this is understanding the decision to use the tool or process.  For example, what led an analyst to decide to run, say, plaso against an image, rather than extract specific data sources, in order to create and analyze a timeline while running an AV scan?  What leads an analyst to decide to use a specific process or to look at specific data sources, while not looking at others?  That's something that you can only get by engaging with someone and asking questions...but asking those questions is also taking responsibility for your own education.

Tuesday, April 12, 2016


I ran across this corporate blog post regarding the Ramdo click-fraud malware recently, and one particular statement caught my eye, namely:

Documented by Microsoft in 2014, Ramdo continues to evolve to evade host-based and network-based detection.

I thought, hold on a second...if this was documented in April 2014 (2 yrs ago), what about it makes host-based detection so difficult?  I decided to take a look at what some of the AV sites were saying about the malware.  After all, the MSRT link indicates that the malware writes its configuration information to a couple of Registry values, and the Win32/Ramdo.A write-up provides even more information along these lines.

I updated the RegRipper plugin with checks for the various values identified by several sources, but because I have limited data for testing, I don't feel comfortable that this new version of the plugin is ready for release.

Speaking of RegRipper plugins, just a reminder that the newly published Windows Registry Forensics 2e not only includes descriptions of a number of current plugins, but also includes an entire chapter devoted just to RegRipper, covering topics such as how to use it, and how to write your own plugins.

Timeline Analysis
The book also covers, starting on page 53 (of the softcover edition), tools that I use to incorporate Registry information into timeline analysis.  I've used this methodology to considerable effect over the years, including very recently to locate a novel WMI persistence technique, which another analyst was able to completely unravel.
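The intermediate events file used in this methodology is a simple five-field, pipe-delimited "TLN" line (time|source|host|user|description). As a quick sketch, here's what emitting a Registry key LastWrite time in that form looks like; the host name and values below are made up for illustration:

```python
# Sketch: emit a Registry key LastWrite time as a five-field TLN line
# (epoch time|source|host|user|description); the example values are
# made up, and "REG" is the source tag used for Registry data.
def tln_line(epoch, source, host, user, description):
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, description)

line = tln_line(
    1458619369, "REG", "HOST1", "-",
    "M... HKLM/System/ControlSet001/Control/Session Manager/AppCompatCache")
```

Events from file system metadata, Event Logs, Registry hives, etc., all normalized to this one format, can then be sorted together into a single timeline.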

For those who may not be aware, mimikatz includes the capability to clear the Event Log, as well as reportedly stop the Event Log service from generating new events.

Okay, so someone can apparently stop the Windows Event Log service from generating event records, and then steal your credentials. If nothing else, this really illustrates the need for process creation monitoring on endpoints.

Addendum, 14 Apr: I did some testing last night, and found that when using the mimikatz functionality to clear the Windows Event Log, a Microsoft-Windows-EventLog/1102 event is generated.  Unfortunately, when I tried the "event::drop" functionality, I got an error.
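Since clearing the log leaves that 1102 record behind, it's something analysts can check for directly. Here's a minimal sketch of that check over parsed event records; the dicts are a simplified stand-in for whatever your Event Log parser actually produces:

```python
# Sketch: flag "audit log was cleared" records (EventID 1102) in a list
# of parsed event records.  The record dicts here are a simplified
# stand-in for real parser output, with made-up example timestamps.
def find_log_clears(records):
    return [r for r in records if r.get("EventID") == 1102]

events = [
    {"EventID": 4624, "Time": "2016-04-13T22:01:05Z"},
    {"EventID": 1102, "Time": "2016-04-13T22:15:40Z"},
]
cleared = find_log_clears(events)
# cleared -> just the single 1102 record
```

The timestamp on any hit gives you a pivot point: activity the adversary wanted to hide likely occurred shortly before it.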

Something else to keep in mind is that this isn't the only way that adversaries can be observed interacting with the Windows Event Log.  Not only are native tools (wevtutil.exe, PowerShell, WMI) available, but MS provides LogParser for free.

Ghost in the (Power)Shell
The folks at Carbon Black recently posted an article regarding the use of PowerShell in attacks.  As I read through the article, it wasn't abundantly clear to me what was meant by the adversary attempting to "cloak" attacks by using PowerShell, but due in part to the statistics shared in the article, it does give a view into how PowerShell is being used in some environments.  I'm going to guess that because many organizations still aren't using any sort of process creation monitoring, nor are many logging the use of PowerShell, this is how the use of PowerShell would be considered "cloaked".

Be sure to take a look at the United Threat Research report described in the Cb article, as well.

Tuesday, April 05, 2016

Cool Stuff, re: WMI Persistence

In case you missed it, the blog post titled, "A Novel WMI Persistence Implementation" was posted to the Dell SecureWorks web site recently.  In short, this blog post presented the results of several SecureWorks team members working together and bringing technical expertise to bear in order to run an unusual persistence mechanism to ground.  The specifics of the issue are covered thoroughly in the blog post.

What was found was a novel WMI persistence mechanism that appeared to have been employed to avoid not just detection by those who administered the infected system, but also by forensic analysts.  In short, the persistence mechanism used was a variation on what was discussed during a MIRCon 2014 presentation; you'll see what I mean when you compare figure 1 from the blog post to slide 45 of the presentation.

After the blog post was published and SecureWorks marketing had tweeted about the blog post, they saw that Matt Graeber had tweeted a request for additional information.  The ensuing exchange included Matt providing a command line for parsing embedded text from a binary MOF file:

mofcomp.exe -MOF:recovered.mof -MFL:ms_409.mof -Amendment:MS_409 binarymof.tmp

What this command does is go into the binary MOF file (binarymof.tmp), and attempt to extract the text that it was created from, essentially "decompiling" it, and placing that text into the file "recovered.mof".
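If mofcomp isn't handy, a rough first pass at a binary MOF is the same thing you'd do with any binary blob: carve out printable ASCII and UTF-16LE runs, strings-style. This is an assumption-level sketch of that approach, not a MOF decompiler:

```python
# Rough stand-in for recovering embedded text from a binary blob (such as
# a binary MOF) when mofcomp isn't available: pull out printable ASCII and
# UTF-16LE runs, the way a quick "strings" pass would.
import re

def printable_runs(data, minlen=6):
    # runs of printable ASCII bytes
    ascii_runs = [m.group().decode() for m in
                  re.finditer(rb"[\x20-\x7e]{%d,}" % minlen, data)]
    # UTF-16LE text: printable byte followed by NUL, repeated
    wide_runs = [m.group().decode("utf-16le") for m in
                 re.finditer(rb"(?:[\x20-\x7e]\x00){%d,}" % minlen, data)]
    return ascii_runs + wide_runs
```

Recovered fragments such as class names or consumer command lines can then point you back to what the original MOF text contained, even without a full decompile.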

It was no accident that Matt was asking about this; here is Matt's BlackHat 2015 paper, and his presentation.

Windows Registry Forensics, 2E

Okay, the book is out!  At last!  This is the second edition to Windows Registry Forensics, and this one comes with a good bit of new material.

Chapter 1 lays out what I see as the core concepts of analysis, in general, as well as providing a foundational understanding of the Registry itself, from a binary perspective.  I know that there are some who likely feel that they've seen all of this before, but I tend to use this information all the time.

Chapter 2 is again about tools.  I only cover available free and open-source tools that run on Windows systems, for the simple fact that I do not have access to the commercial tools.  Some of the old tools are still applicable, there are new tools available, and some tools are now under license, and in some cases, the strict terms of the license prevent me from including them in the book.  Hopefully, chapter 1 laid the foundation for analysts to be able to make educated decisions as to which tool(s) they prefer to use.

Chapters 3 and 4 remain the same in their focus as with the first edition, but the content of the chapters has changed, and in a lot of aspects, been updated.

Chapter 5 is my answer to anyone who has looked or is looking for a manual on how to use RegRipper.  I get that most folks download the tool and run it as is, but for my own use, I do not use the GUI.  At all.  Ever.  I use rip.exe from the command line, exclusively.  But I also want folks to know that there are more targeted (and perhaps efficient) ways to use RegRipper to your advantage.  I also include information regarding how you can write your own plugins, but as always, if you don't feel comfortable doing so, please consider reaching out to me, as I'm more than happy to help with a plugin.  It's pretty easy to write a plugin if you can (a) concisely describe what you're looking for, and (b) provide sample data.

Now, I know folks are going to ask about specific content, and that usually comes as the question, "do you talk about Windows 10?"  My response to that is to ask specifically what they're referring to, and very often, there's no response to that question.  The purpose of this book is not to provide a list of all possible Registry keys and values of interest or value, for all possible investigations, and for all possible combinations of Windows versions and applications.  That's simply not something that can be achieved.  The purpose of this book is to provide an understanding of the value and context of the Windows Registry, that can be applied to a number of investigations.

Thoughts on Writing Books
There's no doubt about it, writing a book is hard.  For the most part, actually writing the book is easy, once you get started.  Sometimes it's the "getting started" that can be hard.  I find that I'll go through phases where I'll be writing furiously, and when I really need to stop (for sleep, life, etc.), I'll take a few minutes to jot down some notes on where I wanted to go with a thought.

While I have done this enough to find ways to make the process easier, there are still difficulties associated with writing a book.  That's just the reality.  It's easier now than it was the first time, and even the second time.   I'm much better at the planning for writing a book, and can even provide advice to others on how to best go about it (and what to expect).

At this point, after having written the books that I have, I have to say that the single hardest part of writing books is not getting feedback from the community.

Take the first edition of Windows Registry Forensics, for example.  I received questions such as, "...are you planning a second edition?", and when I asked for input on what that second edition should cover, I didn't get a response.

I think that from a 50,000 foot view, there's an expectation that things will be different in the next version of Windows, but the simple fact is that, when it comes to Registry forensics, the basic principles have remained the same through all available versions. Keys are still keys, deleted keys are still discovered the same way, values are still values, etc.  From an application layer perspective, it's inevitable that each new version of Windows would include something "new", with respect to the Registry.  New keys, new values, etc.  The same is true with new versions of applications, and that includes malware, as well.  While the basic principles remain constant, stuff at the application layer changes, and it's very difficult to keep up without some sort of assistance.

Writing a book like this would be significantly easier if those within the community were to provide feedback and input, rather than waiting for the book to be published and then asking, "...did you talk about X?"  Even so, I hope that folks find the book useful, and that some who have received their copy of the book find the time to write a review.  Thanks.

Wednesday, March 23, 2016


RegRipper Plugin Update
Okay, this isn't so much an update as it is a new plugin.  Patrick Seagren sent me a plugin that he's been using to extract Cortana searches from Registry hives.  Patrick sent the plugin and some test data, so I tested the plugin out and added it to the repository.

Process Creation Monitoring
When it comes to process creation monitoring, there appears to be a new kid on the block.  NoVirusThanks is offering their Process Logger Service free for personal use.

Looking at the web site, the service appears to record process creation event information in a flat text file, with the date and time, process ID, as well as the parent process ID.  While this does record some basic information about the processes, it doesn't look like it's the easiest to parse and include in current analysis techniques.
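Whatever the exact layout, getting a flat-text process log into your analysis usually means normalizing each line first. Here's a hypothetical sketch; the pipe-delimited field layout and example values are assumptions for illustration, not NoVirusThanks' actual format:

```python
# Hypothetical sketch: normalize a flat-text process-creation log line
# (timestamp, PID, parent PID, image path) into a dict suitable for
# timelining.  The pipe-delimited layout here is an assumption, not the
# actual format used by any particular product.
import datetime

def parse_proc_line(line):
    ts, pid, ppid, image = line.split("|", 3)
    return {
        "time": datetime.datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"),
        "pid": int(pid),
        "ppid": int(ppid),
        "image": image,
    }

rec = parse_proc_line("2016-03-23 10:14:02|2216|668|C:\\Windows\\System32\\cmd.exe")
```

Once records are in a common structure like this, parent/child relationships and timestamps can be folded into the same timeline as other data sources.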

Other alternatives include native Windows auditing for Process Tracking (along with an update to improve the data collected), installing Sysmon, or going for a solution of a commercial nature such as Carbon Black.  Note that incorporating the process creation information into the Windows Event Log (via either process) means that the data can be pulled from live systems via WMI or Powershell, forwarded to a central logging server (Splunk?), or extracted from an acquired image.

Process creation monitoring can be extremely valuable for detecting and responding to things such as Powershell Malware, as well as providing critical information for responders to determine the root cause of a ransomware infection.

AirBusCyberSecurity recently published this post that walks through dynamic analysis of "fileless" malware; in this case, Kovter.  While it's interesting that they went with a pre-configured platform, pre-stocked with monitoring tools, the results of their analysis did demonstrate how powerful and valuable this sort of technique (monitoring process creation) can be, particularly when it comes to detection of issues.

As a side note, while I greatly appreciate the work that was done to produce and publish that blog post, there are a couple of things that I don't necessarily agree with in the content that begin with this statement:

Kovter is able to conceal itself in the registry and maintain persistence through the use of several concealed run keys.

None of what's done is "concealed".  The Registry is essentially a "file system within a file", and none of what the malware does with respect to persistence is particularly "concealed".  "Run" keys have been used for persistence since the Registry was first used; if you're doing any form of DFIR work and not looking in the Run keys, well, that still doesn't make them "concealed".

Also, I'm not really sure I agree with this "fileless" thing.  Just because persistence is maintained via Registry value doesn't make something "fileless".

Ransomware and Attribution
Speaking of ransomware engagements, a couple of interesting articles have popped up recently with respect to ransomware attacks and attribution.  This recent Reuters article shares some thoughts from analysts regarding attribution for observed attacks.  Shortly after this article came out, Val Smith expanded upon information from the article in his blog post, and this ThreatPost article went on to suggest that what analysts are seeing is really "false flag" operations.

While there are clearly theories regarding attribution for the attacks, there doesn't appear to be any clear indicators or evidence...not that are shared, anyway...that tie the attacks to a particular group, or geographic location.

This article popped up recently, describing how another hospital was hit with ransomware.  What's interesting about the article is that there is NO information about how the bad guys gained access to the systems, but the author of the article refers to and quotes a TrustWave blog post; is the implication that this may be how the systems were infected?  Who knows?

David Cowen recently posted a very interesting article, in which he shared the results of tool testing, specifically of several file carving tools.  I've seen comments and reviews from others who've read this same post saying that David ranked one tool or another "near the bottom", but to be honest, that doesn't appear to be the case at all.  The key to this sort of testing is to understand the strengths and "weaknesses" of the various tools.  For example, bulk extractor was listed as the fastest tool in the test, but David also included the statement that it would benefit from more filters, and BE was the only free option.

Testing such as this, as well as what folks like Mari have done, is extremely valuable in not only extending our knowledge as a community, but also for showing others how this sort of thing can be done, and then shared.

Malware Analysis and Threat Intel
I ran across this interesting post regarding Dridex analysis recently...what attracted my attention was this statement:

...detail how I go about analyzing various samples, instead of just presenting my findings...

While I do think that discussing not just the "what" but also the "how" is extremely beneficial, I'm going to jump off of the beaten path here for a bit and take a look at the following statement: "...the loader binary off virustotal..."

The author of the post is clearly stating where they got the copy of the malware that they're analyzing in the post, but this statement jumped out at me, for an entirely different reason altogether.

When I read posts such as this, as well as what is shared as "threat intel", I look at it from the perspective of a DF analyst and an incident responder, asking myself, "Can I use this on an engagement?"  While I greatly appreciate the effort that goes into creating this sort of content, I also realize that very often, a good deal of "threat intel" is developed purely through open source collection, without the benefit of context from an active engagement.  Now, this is not a bad thing...not at all.  But it is something that needs to be kept in mind.

In examples such as this one, understanding that the analysis relies primarily on a malware sample collected from VT should tell us that any mention of the initial infection vector (IIV) is likely going to be speculation, or the result of open source collection, as well.  The corollary is that the IIV is not going to be the result of seeing this during an active incident.

I'll say it again...information such as this post, as well as other material shared as "threat intel", is a valuable part of what we do.  However, at the same time, we do need to understand the source of this information.  Information shared as a result of open source collection and analysis can be used to create filters or triggers, which can then be used to detect these issues much earlier in the infection process, allowing responders to then get to affected systems sooner, and conduct analysis to determine the IIV.

Monday, March 14, 2016

Links and Stuff

I've spent some time discussing MS's Sysmon in this blog, describing how useful it can be, not only in a malware testing environment, but also in a corporate environment.

Mark Russinovich gives a great use case for Sysmon, from RSA in this online PowerPoint presentation.  If you have any questions about Sysmon and how useful it can be, this presentation is definitely worth the time to browse through it.

Ransomware and Computer Speech
Ransomware has been in the "news" quite a bit lately; not just the opportunistic stuff like Locky, but also what appears to be more targeted stuff.  IntelSecurity has an interesting write-up on the Samsam targeted ransomware, although a great deal of the content of that PDF is spent on the code of the ransomware, and not so much the TTPs employed by the threat group.  This sort of thing is a great way to showcase your RE skillz, but may not be as relevant to folks who are trying to find this stuff within their infrastructure.

I ran across a write up regarding a new type of ransomware recently, this one called Cerber.  Apparently, this particular ransomware, as part of the infection, drops a *.vbs script on the system that makes the computer tell the user that it's infected.  Wait...what?

Well, I started looking into it and found several sites that discussed this, and provided examples of how to do it.  It turns out that it's really pretty simple, and depending upon the version of Windows you're using, you may have a greater range of options available.  For example, per this site, on Windows 10 you can select a different voice (a female "Anna" voice, rather than the default male "David" voice), as well as change the speed or volume of the speech.
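Purely for illustration, here's a Python sketch that assembles the sort of SAPI-based VBScript a sample like Cerber reportedly drops.  The SAPI.SpVoice COM object, and its Rate, Volume, and Speak members, are the documented Windows speech interface; the exact script contents vary by sample, so this is a generic reconstruction, not the actual ransomware payload:

```python
def build_speech_vbs(message, rate=0, volume=100):
    """Assemble a VBScript snippet that uses the Windows SAPI.SpVoice
    COM object to speak a message aloud.  Rate runs -10..10, volume
    0..100; embedded double quotes are doubled per VBScript rules."""
    return "\r\n".join([
        'Set voice = CreateObject("SAPI.SpVoice")',
        "voice.Rate = {}".format(rate),
        "voice.Volume = {}".format(volume),
        'voice.Speak "{}"'.format(message.replace('"', '""')),
    ])
```

Saving the returned text as a .vbs file and double-clicking it on a Windows box is all it takes...which is exactly why this makes for such a cheap scare tactic.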

...and then, much hilarity ensued...

Document Macros
Okay, back to the topic of ransomware, some of it (Locky, for example) ends up on systems as a result of the user opening a document that contains macros, and choosing to enable the content.

If you do find a system that's been infected (not just with ransomware, but anything else, really...), and you find a suspicious document, this presentation from Decalage provides a really good understanding of what macros can do.  Also, take a look at this blog post, as well as Mari's post, to help you determine if the user chose to enable the content of the MS Office document.

Why bother with this at all?  That's a great question, particularly in the face of ransomware attacks, where some organizations are paying tens or hundreds of thousands of dollars to get their documents back...how can they then justify paying for a DFIR investigation?  Well, my point is this...if you don't know how the stuff gets in, you're not going to stop it the next time it (or something else) gets in.

You need to do a root cause investigation.

Do not...I repeat, do NOT...base any decisions made after an infection, compromise, or breach on assumption or emotion.  Base them on actual data, and facts.  Base them on findings developed from all the data available, not just some of it, with the gaps filled in with speculation.

Jump Lists
One way to determine which files a user had accessed, and with which application, is by analyzing Jump Lists.  Jump Lists are a "new" artifact, as of Windows 7, and they persist through to Windows 10.  Eric Zimmerman recently posted on understanding Jump Lists in depth; as you would expect, his post is written from the perspective of a tool developer.

Eric noted that the format for the DestList stream on Windows 10 systems has changed slightly...that an offset changed.  It's important to know and understand this, as it does affect how tools will work.
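As a hedged sketch of how a parser might account for that change, here's some Python that keys the fixed entry size off the version number at the start of the DestList stream.  The sizes below are taken from public write-ups of the format (114 bytes for the Windows 7/8 format, 130 for Windows 10) and should be verified against Eric's post before being relied upon:

```python
import struct

# Fixed-size portion of a DestList entry before the path-string length,
# keyed by the stream's version number; values per public write-ups of
# the format, so treat them as assumptions to verify.
ENTRY_FIXED_SIZE = {1: 114, 3: 130, 4: 130}

def destlist_entry_size(destlist_bytes):
    """Read the 4-byte version number at the start of the DestList
    stream and return the fixed entry size to use when walking it."""
    (version,) = struct.unpack_from("<I", destlist_bytes, 0)
    if version not in ENTRY_FIXED_SIZE:
        raise ValueError("unrecognized DestList version: %d" % version)
    return ENTRY_FIXED_SIZE[version]
```

This is exactly the sort of thing Eric's point is about...a tool that hard-codes the Windows 7 offset will silently misparse Windows 10 Jump Lists.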

Mysterious Records in the Index.dat File
I was conducting analysis on a Windows 2003 server recently, and I found that a user account created in Dec 2015 contained activity within the IE index.dat file dating back to 2013...and like any other analyst, I thought, "okay, that's weird".  I noted it in my case notes and continued on with my analysis, knowing that I'd get to the bottom of this issue.

First, parsing the index.dat.  Similar to the Event Logs, I've got a couple of tools that I use, one that parses the file based on the header information, and the other that bypasses the header information altogether and parses the file on a binary basis.  These tools provide me with visibility into the records recorded within the files, as well as allowing me to add those records to a timeline as necessary.  I've also developed a modified version of Jon Glass's WebCacheV01.dat parsing script that I use to incorporate the contents of IE10+ web activity database files in timelines.
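The "binary basis" approach can be sketched in a few lines of Python: walk the file on its 128-byte block boundaries looking for "URL " record signatures, ignoring what the header claims.  The signature and block size come from public documentation of the MSIE cache file format; a production parser would go on to decode the timestamps and URL string within each record:

```python
import struct

def carve_url_records(data, block_size=0x80):
    """Scan an index.dat image for 'URL ' record signatures on block
    boundaries, skipping the header block, and return a list of
    (offset, record length in bytes) tuples."""
    records = []
    offset = block_size  # the first block holds the file header
    while offset + 8 <= len(data):
        if data[offset:offset + 4] == b"URL ":
            # The 4 bytes after the signature give the record length,
            # expressed as a count of 128-byte blocks
            (num_blocks,) = struct.unpack_from("<I", data, offset + 4)
            records.append((offset, num_blocks * block_size))
        offset += block_size
    return records
```

Because this ignores the allocation tables in the header, it will also surface records the header no longer references...which is precisely why a second, header-blind pass is worth having.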

So, back to why the index.dat file for a user account (and profile) created in Dec 2015 contained activity from 2013.  Essentially, there was malware on the system in 2013 running with System privileges and utilizing the WinInet API, which resulted in web browser activity being recorded in the index.dat file within the "Default User" profile.  As such, when the new user account was created in Dec 2015, and the account was used to access the system, the profile was created by copying content from the "Default User" profile.  As IE wasn't being used/launched via the Windows Explorer shell (another program was using the WinInet API), the index.dat file was not subject to the cache clearance mechanisms we might usually expect to see (by default, using IE on a regular basis causes the cache to be cleared every 20 days).

Getting to the bottom of the analysis didn't take days or weeks of work...it just took a few minutes to finish up documenting (yes, I do that...) what I'd already found, and then circling back to confirm some findings, based on a targeted approach to analysis.