Thursday, April 14, 2016

Training Philosophy

I have always felt that everyone, including DFIR analysts, needs to take some level of responsibility for their own professional education.  What does this mean?  There are a couple of ways to go about this in any industry; professional reading, attending training courses, engaging with others within the community, etc.  Very often, it's beneficial to engage in more than one of these, particularly as people tend to take in information and learn new skills in different ways.

Specifically with respect to DFIR, there are training courses available that you can attend, and it doesn't take a great deal of effort to find many of these courses.  You attend the training, sit in a classroom, listen to lectures, and run through lab exercises.  All of this is a great way to learn something that is perhaps completely new to you, or simply a new way of performing a task.  But what happens beyond that?  What happens beyond what's presented, beyond the classroom?  Do analysts take responsibility for their education, incorporating what they learned into their day-to-day job and then going beyond what's presented in a formal setting?  Do they explore new data sources, tools, and processes?  Or do they sign up again for the course the following year in order to get new information?

When I was on the IBM ISS ERS team, we had a team member tell us that they could only learn something if they were sitting in a classroom, and someone was teaching them the subject.  On the surface, we were like, "wait...what?"  After all, do you really want an employee and a fellow team member who states that they can't learn anything new without being taught, in a classroom?  However, if you look beyond the visceral surface reaction to that statement, what they were really saying was that they had too much going on operationally to take the time out to start from square 1 to learn something.  The business model behind their position required them to be as billable as possible, which meant that they didn't have a great deal of time in their business day for things like non-billable professional development.  Taking them out of operational rotation, and putting them in a classroom environment where they weren't responsible for analysis, reporting, submitting travel claims, sending updates, and other billable commitments, would give them the opportunity to learn something new.  But what was important, following the training, was what they did with it.  Was that training away from the daily grind of analysis, expense reports and conference calls used as the basis for developing new skills, or was the end of the training the end of learning?

Learning New Skills
Back in 1982, I took a BASIC programming class on the Apple IIe, and the teacher's philosophy was to provide us with some basic (no pun intended) information, and then cut us loose to explore.  Those of us in the class would try different things, some (most) of which didn't work, or didn't work as intended.  If we found something that worked really well, we'd share it.  If we found something that didn't work, or didn't work quite right, we'd share that, as well, and someone would usually be able to figure out why we weren't seeing what we expected to see.

Jump ahead about 13 years, and my linear algebra professor during my graduate studies had the same philosophy.  Where most professors would give a project and the students would struggle for the rest of the week to "get" the core part of the project, this professor would provide us with the core bit of code (we were using MATLAB) for the exercise or lab, and our "project" was to learn.  Of course, some did the minimum and moved on, and others would really push the boundaries of the subject.  I remember one such project where I spent a lot of time observing not just the effect of the code on different shaped matrices, but also the effect of running the output back into the code.

So now, in my professional life, I still seek to learn new things, and employ what I learn in an exploratory manner.  What happens when I do this new thing?  Or, what happens if I take this one thing that I learned, and share it with someone else?  When I learn something new, I like to try it out and see how to best employ it as part of my analysis process, even if it means changing what I do, rather than simply adding to it.  As part of that, when someone mentions a tool, I don't wait for them to explain every possible use of the tool to me.  After all, particularly if we're talking about the use of a native Windows tool, I can very often go look for myself.

So you wanna learn...
If you're interested in trying your skills out on some available data, Mari recently shared this MindMap of forensic challenges with me.  This one resource provides links to all sorts of challenges, and scenarios with data available for analysts to test their skills, try out new tools, or simply dust off some old techniques.  The available data covers disk, memory, pcap analysis, and more.

This means that if an analyst wants to learn more about a tool or process, there is data available that they can use to develop their knowledge base, and add to their skillz.  So, if someone talks about a tool or process, there's nothing to stop you from taking responsibility for your own education, downloading the data and employing the tool/process on your own.

Manager's Responsibility
When I was a 2ndLt, I learned that one of my responsibilities as a platoon commander was to ensure that my Marines were properly trained, and I learned that there were two aspects to that.  The first was to ensure that they received the necessary training, be it formal, schoolhouse instruction, via an MCI correspondence course, or some other method.  The second was to ensure that once trained, the Marine employed the training.  After all, what good is it to send someone off to learn something new, only to have them return to the operational cycle and simply go back to what they were doing before they left?  I mean, you could have achieved the same thing by letting them go on vacation for a week, and saved yourself the money spent on the training, right?

Now, admittedly, the military is great about training you to do something, and then ensuring that you then have opportunity to employ that new skill.  In the private sector, particularly with DFIR training, things are often...not that way.

The Point
So, the point of all this is simple...for me, learning is a matter of doing.  I'm sure that this is the case for others, as well.  Someone can point to a tool or process, and give general thoughts on how it can be used, or even provide examples of how they've used it.  However, for me to really learn more about the topic, I need to actually do something.

The exception to this is understanding the decision to use the tool or process.  For example, what led an analyst to decide to run, say, plaso against an image, rather than extract specific data sources, in order to create and analyze a timeline while running an AV scan?  What leads an analyst to decide to use a specific process or to look at specific data sources, while not looking at others?  That's something that you can only get by engaging with someone and asking questions...but asking those questions is also taking responsibility for your own education.

Tuesday, April 12, 2016

Links

Ramdo
I ran across this corporate blog post regarding the Ramdo click-fraud malware recently, and one particular statement caught my eye, namely:

Documented by Microsoft in 2014, Ramdo continues to evolve to evade host-based and network-based detection.

I thought, hold on a second...if this was documented in April 2014 (2 yrs ago), what about it makes host-based detection so difficult?  I decided to take a look at what some of the AV sites were saying about the malware.  After all, the MSRT link indicates that the malware writes its configuration information to a couple of Registry values, and the Win32/Ramdo.A write-up provides even more information along these lines.

I updated the RegRipper malware.pl plugin with checks for the various values identified by several sources, but because I have limited data for testing, I don't feel comfortable that this new version of the plugin is ready for release.

Book
Speaking of RegRipper plugins, just a reminder that the newly published Windows Registry Forensics 2e not only includes descriptions of a number of current plugins, but also an entire chapter devoted just to RegRipper, covering topics such as how to use it, and how to write your own plugins.

Timeline Analysis
The book also covers, starting on page 53 (of the softcover edition), tools that I use to incorporate Registry information into timeline analysis.  I've used this methodology to considerable effect over the years, including very recently to locate a novel WMI persistence technique, which another analyst was able to completely unravel.

Mimikatz
For those who may not be aware, mimikatz includes the capability to clear the Event Log, as well as reportedly stop the Event Log service from generating new events.

Okay, so someone can apparently stop the Windows Event Log service from generating event records, and then steal your credentials. If nothing else, this really illustrates the need for process creation monitoring on endpoints.

Addendum, 14 Apr: I did some testing last night, and found that when using the mimikatz functionality to clear the Windows Event Log, a Microsoft-Windows-EventLog/1102 event is generated.  Unfortunately, when I tried the "event::drop" functionality, I got an error.

Something else to keep in mind is that this isn't the only way that adversaries can be observed interacting with the Windows Event Log.  Not only are native tools (wevtutil.exe, PowerShell, WMI) available, but MS provides LogParser for free.
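As a quick check, those "log cleared" records are easy to sweep for.  A minimal PowerShell sketch (the Security/1102 record is the standard "audit log was cleared" event; event ID 104 in the System log covers other Windows Event Logs being cleared) might look like this:

# List "audit log cleared" records from the Security Event Log
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=1102} | Select-Object TimeCreated, Message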

Ghost in the (Power)Shell
The folks at Carbon Black recently posted an article regarding the use of PowerShell in attacks.  As I read through the article, it wasn't abundantly clear to me what was meant by the adversary attempting to "cloak" attacks by using PowerShell, but due in part to the statistics shared in the article, it does give a view into how PowerShell is being used in some environments.  I'm going to guess that because many organizations still aren't using any sort of process creation monitoring, nor are many logging the use of PowerShell, this is how the use of PowerShell would be considered "cloaked".

Be sure to take a look at the United Threat Research report described in the Cb article, as well.

Tuesday, April 05, 2016

Cool Stuff, re: WMI Persistence

In case you missed it, the blog post titled, "A Novel WMI Persistence Implementation" was posted to the Dell SecureWorks web site recently.  In short, this blog post presented the results of several SecureWorks team members working together and bringing technical expertise to bear in order to run an unusual persistence mechanism to ground.  The specifics of the issue are covered thoroughly in the blog post.

What was found was a novel WMI persistence mechanism that appeared to have been employed to avoid not just detection by those who administered the infected system, but also by forensic analysts.  In short, the persistence mechanism used was a variation on what was discussed during a MIRCon 2014 presentation; you'll see what I mean when you compare figure 1 from the blog post to slide 45 of the presentation.

After the blog post was published and SecureWorks marketing had tweeted about it, they saw that Matt Graeber had tweeted a request for additional information.  The ensuing exchange included Matt providing a command line for parsing embedded text from a binary MOF file:

mofcomp.exe -MOF:recovered.mof -MFL:ms_409.mof -Amendment:MS_409 binarymof.tmp

What this command does is go into the binary MOF file (binarymof.tmp), and attempt to extract the text that it was created from, essentially "decompiling" it, and placing that text into the file "recovered.mof".

It was no accident that Matt was asking about this; here is Matt's BlackHat 2015 paper, and his presentation.


Windows Registry Forensics, 2E

Okay, the book is out!  At last!  This is the second edition to Windows Registry Forensics, and this one comes with a good bit of new material.

Chapter 1 lays out what I see as the core concepts of analysis, in general, as well as providing a foundational understanding of the Registry itself, from a binary perspective.  I know that there are some who likely feel that they've seen all of this before, but I tend to use this information all the time.

Chapter 2 is again about tools.  I only cover available free and open-source tools that run on Windows systems, for the simple fact that I do not have access to the commercial tools.  Some of the old tools are still applicable, there are new tools available, and some tools are now under license, and in some cases, the strict terms of the license prevent me from including them in the book.  Hopefully, chapter 1 laid the foundation for analysts to be able to make educated decisions as to which tool(s) they prefer to use.

Chapters 3 and 4 remain the same in their focus as with the first edition, but the content of the chapters has changed, and in a lot of aspects, been updated.

Chapter 5 is my answer to anyone who has looked or is looking for a manual on how to use RegRipper.  I get that most folks download the tool and run it as is, but for my own use, I do not use the GUI.  At all.  Ever.  I use rip.exe from the command line, exclusively.  But I also want folks to know that there are more targeted (and perhaps efficient) ways to use RegRipper to your advantage.  I also include information regarding how you can write your own plugins, but as always, if you don't feel comfortable doing so, please consider reaching out to me, as I'm more than happy to help with a plugin.  It's pretty easy to write a plugin if you can (a) concisely describe what you're looking for, and (b) provide sample data.

Now, I know folks are going to ask about specific content, and that usually comes as the question, "do you talk about Windows 10?"  My response to that is to ask specifically what they're referring to, and very often, there's no response to that question.  The purpose of this book is not to provide a list of all possible Registry keys and values of interest or value, for all possible investigations, and for all possible combinations of Windows versions and applications.  That's simply not something that can be achieved.  The purpose of this book is to provide an understanding of the value and context of the Windows Registry that can be applied to a number of investigations.

Thoughts on Writing Books
There's no doubt about it, writing a book is hard.  For the most part, actually writing the book is easy, once you get started.  Sometimes it's the "getting started" that can be hard.  I find that I'll go through phases where I'll be writing furiously, and when I really need to stop (for sleep, life, etc.), I'll take a few minutes to jot down some notes on where I wanted to go with a thought.

While I have done this enough to find ways to make the process easier, there are still difficulties associated with writing a book.  That's just the reality.  It's easier now than it was the first time, and even the second time.  I'm much better now at planning for writing a book, and can even provide advice to others on how to best go about it (and what to expect).

At this point, after having written the books that I have, I have to say that the single hardest part of writing books is the lack of feedback from the community.

Take the first edition of Windows Registry Forensics, for example.  I received questions such as, "...are you planning a second edition?", and when I asked for input on what that second edition should cover, I didn't get a response.

I think that from a 50,000 foot view, there's an expectation that things will be different in the next version of Windows, but the simple fact is that, when it comes to Registry forensics, the basic principles have remained the same through all available versions. Keys are still keys, deleted keys are still discovered the same way, values are still values, etc.  From an application layer perspective, it's inevitable that each new version of Windows will include something "new" with respect to the Registry.  New keys, new values, etc.  The same is true with new versions of applications, and that includes malware, as well.  While the basic principles remain constant, stuff at the application layer changes, and it's very difficult to keep up without some sort of assistance.

Writing a book like this would be significantly easier if those within the community were to provide feedback and input, rather than waiting for the book to be published and then asking, "...did you talk about X?"  Even so, I hope that folks find the book useful, and that some who have received their copy of the book find the time to write a review.  Thanks.

Wednesday, March 23, 2016

Links

RegRipper Plugin Update
Okay, this isn't so much an update as it is a new plugin.  Patrick Seagren sent me a plugin called cortana.pl, which he's been using to extract Cortana searches from the Registry hives.  Patrick sent the plugin and some test data, so I tested the plugin out and added it to the repository.

Process Creation Monitoring
When it comes to process creation monitoring, there appears to be a new kid on the block.  NoVirusThanks is offering their Process Logger Service free for personal use.

Looking at the web site, the service appears to record process creation event information in a flat text file, with the date and time, process ID, as well as the parent process ID.  While this does record some basic information about the processes, it doesn't look like it's the easiest to parse and include in current analysis techniques.

Other alternatives include native Windows auditing for Process Tracking (along with an update to improve the data collected), installing Sysmon, or going with a commercial solution such as Carbon Black.  Note that incorporating the process creation information into the Windows Event Log (via either method) means that the data can be pulled from live systems via WMI or PowerShell, forwarded to a central logging server (Splunk?), or extracted from an acquired image.
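For example, here's a minimal sketch (the computer name is made up, and it assumes Process Tracking auditing is enabled so that Security/4688 records are being generated) of pulling process creation records from a live system via PowerShell:

# Pull recent process creation (Security/4688) records from a remote, live system
Get-WinEvent -ComputerName SERVER01 -FilterHashtable @{LogName='Security'; Id=4688} -MaxEvents 100 | Select-Object TimeCreated, Message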

Process creation monitoring can be extremely valuable for detecting and responding to things such as PowerShell malware, as well as providing critical information for responders to determine the root cause of a ransomware infection.

AirBusCyberSecurity recently published this post that walks through dynamic analysis of "fileless" malware; in this case, Kovter.  While it's interesting that they went with a pre-compiled platform, pre-stocked with monitoring tools, the results of their analysis did demonstrate how powerful and valuable this sort of technique (monitoring process creation) can be, particularly when it comes to detection of issues.

As a side note, while I greatly appreciate the work that was done to produce and publish that blog post, there are a couple of things that I don't necessarily agree with in the content that begin with this statement:

Kovter is able to conceal itself in the registry and maintain persistence through the use of several concealed run keys.

None of what's done is "concealed".  The Registry is essentially a "file system within a file", and none of what the malware does with respect to persistence is particularly "concealed".  "Run" keys have been used for persistence since the Registry was first used; if you're doing any form of DFIR work and not looking in the Run keys, well, that still doesn't make them "concealed".
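Checking those keys on a live system takes all of a few seconds; for example, in PowerShell:

# List autostart entries from the most commonly used Run keys
Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run'
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run'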

Also, I'm not really sure I agree with this "fileless" thing.  Just because persistence is maintained via Registry value doesn't make something "fileless".

Ransomware and Attribution
Speaking of ransomware engagements, a couple of interesting articles have popped up recently with respect to ransomware attacks and attribution.  This recent Reuters article shares some thoughts from analysts regarding attribution for observed attacks.  Shortly after this article came out, Val Smith expanded upon information from the article in his blog post, and this ThreatPost article went on to suggest that what analysts are seeing is really "false flag" operations.

While there are clearly theories regarding attribution for the attacks, there doesn't appear to be any clear indicators or evidence...not that are shared, anyway...that tie the attacks to a particular group, or geographic location.

This article popped up recently, describing how another hospital was hit with ransomware.  What's interesting about the article is that there is NO information about how the bad guys gained access to the systems, but the author of the article refers to and quotes a TrustWave blog post; is the implication that this may be how the systems were infected?  Who knows?

Carving
David Cowen recently posted a very interesting article, in which he shared the results of tool testing, specifically several file carving tools.  I've seen comments and reviews from others who've read this same post that've said that David ranked one tool or another "near the bottom", but to be honest, that doesn't appear to be the case at all.  The key to this sort of testing is to understand the strengths and "weaknesses" of the various tools.  For example, bulk extractor was listed as the fastest tool in the test, but David also included the statement that it would benefit from more filters, and BE was the only free option.

Testing such as this, as well as what folks like Mari have done, is extremely valuable in not only extending our knowledge as a community, but also for showing others how this sort of thing can be done, and then shared.

Malware Analysis and Threat Intel
I ran across this interesting post regarding Dridex analysis recently...what attracted my attention was this statement:

...detail how I go about analyzing various samples, instead of just presenting my findings...

While I do think that discussing not just the "what" but also the "how" is extremely beneficial, I'm going to jump off of the beaten path here for a bit and take a look at the following statement:

...got the loader binary off virustotal...

The author of the post is clearly stating where they got the copy of the malware that they're analyzing in the post, but this statement jumped out at me for an entirely different reason altogether.

When I read posts such as this, as well as what is shared as "threat intel", I look at it from the perspective of a DF analyst and an incident responder, asking myself, "..how can I use this on an engagement?"  While I greatly appreciate the effort that goes into creating this sort of content, I also realize that very often, a good deal of "threat intel" is developed purely through open source collection, without the benefit of context from an active engagement.  Now, this is not a bad thing...not at all.  But it is something that needs to be kept in mind.

In examples such as this one, understanding that the analysis relies primarily on a malware sample collected from VT should tell us that any mention of the initial infection vector (IIV) is likely going to be speculation, or the result of open source collection, as well.  The corollary is that the IIV is not going to be the result of seeing this during an active incident.

I'll say it again...information such as this post, as well as other material shared as "threat intel"...is a valuable part of what we do.  However, at the same time, we do need to understand the source of this information.  Information shared as a result of open source collection and analysis can be used to create filters or triggers, which can then be used to detect these issues much earlier in the infection process, allowing responders to then get to affected systems sooner, and conduct analysis to determine the IIV.

Monday, March 14, 2016

Links and Stuff

Sysmon
I've spent some time discussing MS's Sysmon in this blog, describing how useful it can be, not only in a malware testing environment, but also in a corporate environment.

Mark Russinovich gives a great use case for Sysmon, from RSA in this online PowerPoint presentation.  If you have any questions about Sysmon and how useful it can be, this presentation is definitely worth the time to browse through it.

Ransomware and Computer Speech
Ransomware has been in the "news" quite a bit lately; not just the opportunistic stuff like Locky, but also what appears to be more targeted stuff.  Intel Security has an interesting write-up on the Samsam targeted ransomware, although a great deal of the content of that PDF is spent on the code of the ransomware, and not so much on the TTPs employed by the threat group.  This sort of thing is a great way to showcase your RE skillz, but may not be as relevant to folks who are trying to find this stuff within their infrastructure.

I ran across a write-up regarding a new type of ransomware recently, this one called Cerber.  Apparently, this particular ransomware, as part of the infection, drops a *.vbs script on the system that makes the computer tell the user that it's infected.  Wait...what?

Well, I started looking into it and found several sites that discussed this, and provided examples of how to do it.  It turns out that it's really pretty simple, and depending upon the version of Windows you're using, you may have a greater range of options available.  For example, per this site, on Windows 10 you can select a different voice (a female "Anna" voice, rather than the default male "David" voice), as well as change the speed or volume of the speech.
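The Cerber script is reportedly VBScript, but the same SAPI COM object is reachable from PowerShell, so if you want to hear it for yourself, a one-line sketch looks like this:

# Make the computer speak via the SAPI SpVoice COM object
(New-Object -ComObject SAPI.SpVoice).Speak("Your files have been encrypted")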

...and then, much hilarity ensued...

Document Macros
Okay, back to the topic of ransomware, some of it (Locky, for example) ends up on systems as a result of the user opening a document that contains macros, and choosing to enable the content.

If you do find a system that's been infected (not just with ransomware, but anything else, really...), and you find a suspicious document, this presentation from Decalage provides a really good understanding of what macros can do.  Also, take a look at this blog post, as well as Mari's post, to help you determine if the user chose to enable the content of the MS Office document.

Why bother with this at all?  That's a great question, particularly in the face of ransomware attacks, where some organizations are paying tens or hundreds of thousands of dollars to get their documents back...how can they then justify paying for a DFIR investigation?  Well, my point is this...if you don't know how the stuff gets in, you're not going to stop it the next time it (or something else) gets in.

You need to do a root cause investigation.

Do not...I repeat, do NOT...base any decisions made after an infection, compromise, or breach on assumption or emotion.  Base them on actual data, and facts.  Base them on findings developed from all the data available, not just some of it, with the gaps filled in with speculation.

Jump Lists
One way to determine which files a user had accessed, and with which application, is by analyzing Jump Lists.  Jump Lists are a "new" artifact, as of Windows 7, and they persist through to Windows 10.  Eric Zimmerman recently posted on understanding Jump Lists in depth; as you would expect, his post is written from the perspective of a tool developer.

Eric noted that the format for the DestList stream on Windows 10 systems has changed slightly...that an offset changed.  It's important to know and understand this, as it does affect how tools will work.
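If you'd like some of your own data to test a Jump List parser against, the automatic Jump List files live right in the user profile; for example:

# Automatic Jump List files for the current user
Get-ChildItem "$env:APPDATA\Microsoft\Windows\Recent\AutomaticDestinations"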

Mysterious Records in the Index.dat File
I was conducting analysis on a Windows 2003 server recently, and I found that a user account created in Dec 2015 contained activity within the IE index.dat file dating back to 2013...and like any other analyst, I thought, "okay, that's weird".  I noted it in my case notes and continued on with my analysis, knowing that I'd get to the bottom of this issue.

First, parsing the index.dat.  Similar to the Event Logs, I've got a couple of tools that I use, one that parses the file based on the header information, and the other that bypasses the header information altogether and parses the file on a binary basis.  These tools provide me with visibility into the records recorded within the files, as well as allowing me to add those records to a timeline as necessary.  I've also developed a modified version of Jon Glass's WebCacheV01.dat parsing script that I use to incorporate the contents of IE10+ web activity database files in timelines.

So, back to why the index.dat file for a user account (and profile) created in Dec 2015 contained activity from 2013.  Essentially, there was malware on the system in 2013 running with System privileges and utilizing the WinInet API, which resulted in web browser activity being recorded in the index.dat file within the "Default User" profile.  As such, when the new user account was created in Dec 2015, and the account was used to access the system, the profile was created by copying content from the "Default User" profile.  As IE wasn't being used/launched via the Windows Explorer shell (another program was using the WinInet API), the index.dat file was not subject to the cache clearance mechanisms we might usually expect to see (by default, using IE on a regular basis causes the cache to be cleared every 20 days).

Getting to the bottom of the analysis didn't take days or weeks of analysis...it just took a few minutes to finish up documenting (yes, I do that...) what I'd already found, and then circling back to confirm some findings, based on a targeted approach to analysis.

Sunday, March 13, 2016

Event Logs

I've discussed Windows Event Log analysis in this blog before (here, and here), but it's been a while, and I've recently been involved in some analysis that has led me to believe that it might be a good idea to bring up the topic again.

Formats
I've said time and again...to the point that many of you are very likely tired of hearing me say it...that the version of Windows that you're analyzing matters.  The available artifacts and their formats differ significantly between versions of Windows, and any discussion of (Windows) Event Logs is a great example of this fact.

Windows XP and 2003 use (I say "use" because I'm still seeing these systems in my analysis; in the past month alone I've analyzed a small handful of Windows 2003 server images) a logging format referred to as "Event Logs".  MS does a great job in documenting the structure of the Event Log/*.evt file format, header, records, and even the EOF record structure.  In short, these Event Logs are a "circular buffer" to which individual records are written.  The limiting factor for these Event Logs is the file size; as new records are written, older records will simply be overwritten.  These systems have three main Event Logs; Security (secevent.evt), System (sysevent.evt), and Application (appevent.evt).  There may be others but they are often application specific.

Windows Vista systems and beyond use a "binary XML" format for the Windows Event Log/*.evtx files.  Besides the different format structure for event records and the files themselves, perhaps one of the most notable aspects of Windows Event Logs is the number of log files available.  On a default installation of Windows 7, I counted 140+ *.evtx files; on a Windows 10 system, I counted 289 files.  Now, this does not mean that records are written to these logs all the time; in fact, some of the Windows Event Log files may never be written to, based on the configuration and use of the system.  However, if you're following clusters of indicators as part of your analysis process (i.e., looking for groups of indicators close together, rather than one single indicator or event, that indicate a particular action), it's likely that you'll find more indications of the event in question.
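Counting the log files on your own system is trivial; for example:

# Count the Windows Event Log files on this system
(Get-ChildItem C:\Windows\System32\winevt\Logs -Filter *.evtx).Count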

Tools
Of the tools I use (and provide along with the book materials) in my daily work, there are two specifically related to (Windows Event) Logs.

First, there is evtparse.exe.  This tool does not use the Windows API; it parses Event Log/*.evt files on a binary basis, bypassing the header information and basically "carving" the files for valid records.

The ability to parse individual event records from *.evt files, regardless of what the file header says with respect to the number of event records, etc., is valuable.  I originally wrote this tool after I ran into a case where the Event Logs had been cleared.  When this occurred, the "current" *.evt files were deleted (sectors comprising the files became part of unallocated space) and "new" *.evt files were created from available sectors within unallocated space.  What happened was that one of the *.evt files contained header information that indicated that there were no event records in the file, but there was clearly something there.  I was able to recover or "carve" valid event records from the file.  I've also used evtparse.pl as the basis for a tool that would carve unstructured data (pagefile, unallocated space, even a memory dump) for *.evt records.

The other tool I use is evtxparse.exe.  Note the "x" in the name.  This is NOT the same thing as evtparse.exe.  Evtxparse.exe is part of a set of tools, used with wevtx.bat, LogParser (a free tool from MS), and eventmap.txt to parse either an individual *.evtx file or multiple *.evtx files into the timeline/TLN format I use for my analysis.  The wevtx.bat file launches LogParser to parse the file(s), writing the parsed records to a temporary file, which is then parsed by evtxparse.exe.  During that parsing, the eventmap.txt file is used to apply a modicum of "threat intel" (in short, stuff I've learned from previous engagements...) to the entries being included in the timeline events file, so that it's easier for me to identify pivot points for analysis.
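To give a sense of the first step in that pipeline, the LogParser command looks something like the following; this is just a sketch using LogParser's EVT input format fields, not the exact query from wevtx.bat:

logparser -i:EVT -o:CSV "SELECT RecordNumber,TimeGenerated,EventID,SourceName,ComputerName,Strings INTO evtx_temp.csv FROM 'C:\case\System.evtx'"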

A major caveat to this is that LogParser relies on the native DLLs/API of the system on which it's being run.  This means that you can't successfully run LogParser on Windows XP while trying to parse *.evtx files, nor can you successfully run LogParser on Windows 10 to parse Windows 2003 *.evt files (without first running wevtutil to change the format of the *.evt files to *.evtx).
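That conversion, by the way, is a one-liner; the /lf switch tells wevtutil that the first argument is a path to a log file rather than the name of a live log:

wevtutil epl C:\case\SysEvent.Evt C:\case\SysEvent.evtx /lf:true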

Both tools are provided as Windows executables, along with the Perl source code.

When I run across *.evtx files that LogParser has difficulty parsing, my go-to tool is Willi Ballenthin's EVTXtract.  There have been several instances where this tool set has worked extremely well, particularly when the Windows Event Logs that I'm interested in are reported by other tools as being "corrupt".  In one particular instance, we'd found that the Windows Event Logs had been cleared, and we were able to not only retrieve a number of valid event records from unallocated space, but we were able to locate THE smoking gun record that we were looking for.

Gaps
Not long ago, I was asked a question about gaps in Windows Event Logs; specifically, is there something out there that allows someone to remove specific records from an active Windows Event Log on a live machine?  Actually, this question has come up twice since the beginning of this year alone, in two different contexts.

There has been talk about there being, or that there have been, tools for removing specific records from Windows Event Logs on live systems, but all the talk comes back to the same thing...no one I've ever spoken to has any actual data showing that this actually happened.  There's been mention of a tool called "WinZapper" likely having been used, but when I've asked if the records were parsed and sorted by record number to confirm this, no one has any explicit data to support the fact that the tool had been used; it all comes back to speculation, and "it could have been used".

As I mentioned, this is pretty trivial to check.  Wevtx.bat, for example, contains a LogParser command line that includes printing the record number for each event.  You can run this command line on a Windows 7 (or 10) system to parse *.evtx files, or on a Windows XP system to parse *.evt files, and get similar results.

Evtparse.exe (note that there is no "x" in the tool name...) includes a switch for listing event records sequentially, displaying only the record number and time generated value for each event record.  This output can then easily be sorted to look for gaps, or parsed via a script to do the same thing.  For example, using either tool, you can then simply import the output into Excel and sort based on the record numbers and search it manually/visually, or write a script that looks for gaps in the record numbers.
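As an illustration of how simple the check is, here's a PowerShell sketch that pulls record numbers directly from a *.evtx file (rather than from evtparse.exe output) and reports any gaps:

# Pull record numbers in order and report any gaps
$ids = Get-WinEvent -Path C:\case\Security.evtx -Oldest | Select-Object -ExpandProperty RecordId
for ($i = 1; $i -lt $ids.Count; $i++) {
    if ($ids[$i] - $ids[$i - 1] -gt 1) { "Gap between record $($ids[$i - 1]) and record $($ids[$i])" }
}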

So, when someone asks me if it's possible that specific event records were removed from a log, the first question I would ask in response would be, were records removed from the log?  After all, this is pretty trivial to check, and if there are no gaps, then the question itself becomes academic.

Creating Event Records
There are a number of ways to create event records on live systems, should you be inclined to do so.  For example, MS includes the eventcreate.exe tool, which allows you to create event records (with limitations; be sure to read the documentation).

Visual Basic can be used to write to the Event Log; for an example, see this StackOverflow post.  Note that the post also links to this MSDN page, but as is often the case on the InterWebs, the second response goes off-topic.

You can also use PowerShell to create new Windows Event Logs, or create event records.
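For example, a couple of PowerShell lines will register a test source and write a record to the Application log (the source name here is made up, and registering it requires admin rights):

# Register a test source (one time), then write a record to the Application log
New-EventLog -LogName Application -Source "TestSource"
Write-EventLog -LogName Application -Source "TestSource" -EventId 9999 -EntryType Information -Message "Test record for timeline testing"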

Wednesday, February 24, 2016

Links: Plugin Updates and Other Things

Plugin Updates
Mari has done some fascinating research into MS Office Trust Records and posted her findings here. Based on her sharing her findings and sample data, I was able to update the trustrecords.pl plugin.  Further, Mari's description of what she had done was so clear and concise that I was able to replicate what she did and generate some of my own sample data.

The last update to the trustrecords.pl plugin was from 16 July 2012; since then, no one's used it or apparently had any issues with it or questions about what it does.  For this update, I added a check for the VBAWarnings value, and added parsing of the last 4 bytes of the TrustRecords value data, printing "Enable Content button clicked" if the data is in accordance with Mari's findings.  I also changed how the plugin determines which version of Office is installed, and made sure to update the trustrecords_tln.pl plugin accordingly, as well.

So, from the sample data that Mari provided, the output of the trustrecords.pl plugin looks like this:

**Word**
----------
Security key LastWrite: Wed Feb 24 15:58:02 2016 Z
VBAWarnings = Enable all macros

Wed Feb 24 15:08:55 2016 Z : %USERPROFILE%/Downloads/test-document-domingo.doc
**Enable Content button clicked.

...and the output of the trustrecords_tln.pl plugin looks like this:

1456326535|REG|||TrustRecords - %USERPROFILE%/Downloads/test-document-domingo.doc [Enable Content button clicked]
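If you want to eyeball that value data on a live system yourself, here's a quick PowerShell sketch that dumps the last four bytes of each TrustRecords value; it assumes Office 2010 (hence the "14.0" in the key path), and you'd compare the output against the data Mari describes in her post:

# Dump the last four bytes of each TrustRecords value for Word 2010
$key = 'HKCU:\Software\Microsoft\Office\14.0\Word\Security\Trusted Documents\TrustRecords'
(Get-Item $key).Property | ForEach-Object {
    $data = (Get-ItemProperty $key).$_
    "{0} : 0x{1:X8}" -f $_, [BitConverter]::ToUInt32($data, $data.Length - 4)
}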

Addendum, 25 Feb
Default Macro Settings (MSWord 2010)
After publishing this blog post yesterday, there was something that I ran across in my own testing that I felt was important to point out.  Specifically, when I first opened MSWord 2010 and went to the Trust Center, I saw the default Macro Settings, illustrated in the image to the right; this is with no VBAWarnings value in the Registry.  Once I started selecting other options, the VBAWarnings value was created.

What this seems to indicate is that if the VBAWarnings value exists in the Registry, even if the Macro Settings appear as seen in the image above (the data for the value would be "2"), someone specifically changed the value at some point.  So, if the VBAWarnings value doesn't exist in the Registry, it appears (based on limited testing) that the default behavior is to disable macros with a notification.  If the setting is changed, the VBAWarnings value is created.  If the VBAWarnings value is set to "2", then it may be that the Macro Settings were set to something else, and then changed back.

For example, take a look at the plugin output I shared earlier in this post.  You'll notice that the LastWrite time of the Security key is 50 min later than the TrustRecords time stamp for the document.  In this case, this is due to the fact that Mari produced the sample data (hive) for the document, and then later modified the Macro Settings because I'd reached back out to her and said that the hive didn't contain a VBAWarnings value.

Something else to think about...has anyone actually used the reading_locations.pl plugin?  If you read Jason's blog post on the topic, it seems like it could be pretty interesting in the right instance or case.  For example, if an employee was thought to have modified a document and claimed that they hadn't, this data might show otherwise.
**end addendum**

Also, I ran across a report of malware using a persistence mechanism I hadn't seen before, so I updated termserv.pl to address the "new" key.

Process Creation Monitoring
My recent look into and description of PECapture got me thinking about process creation monitoring again.

Speaking of process creation monitoring, Dell SecureWorks recently made information publicly available regarding the AdWind RAT.  If you read through the post, you'll see that the RAT infection process spawns a number of external commands, rather than using APIs to do the work.  As such, if you're recording process creation events on your endpoints, filters can be created to watch for these commands in order to detect this (and other) activity.
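For instance, if Sysmon is deployed, a sketch of that sort of filter might look like the following; the "cscript" string here is just a placeholder, and the actual commands to watch for would come from the SecureWorks write-up:

# Look for Sysmon process creation events with a command line of interest
Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Sysmon/Operational'; Id=1} |
    Where-Object { $_.Message -match 'cscript' } | Select-Object TimeCreated, Message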

Malicious LNK
Wait, what?  Since when did those two words go together?  Well, as of the morning of 24 Feb, the ISC handlers have made it "a thing" with this blog post.  Pretty fascinating, and thanks to the handlers for walking through how they pulled things out of the LNK file; it looks as if their primary tool was a hex editor.

A couple of things...

First, process creation monitoring of what this "looks like" when executing would be very interesting to see.  One thing that I've found interesting of late is how DFIR folks can nod their heads knowingly at something like that, but when it comes to actual detection, that's another matter entirely.  Yes, the blog post lists the command line used, but the question is, how would you detect this if you had process creation monitoring in place?

Second, in this case, the handlers report that "the ACE file contains a .lnk file"; so, the ACE file doesn't contain code that creates the .lnk file, but instead contains the actual .lnk file itself.  Great...so, let's grab Eric Zimmerman's LECmd tool, or my own code, and see what the NetBIOS name is of the system on which the LNK file was created.  Or, just go here to get that (I see the machine name listed, but not the volume serial number...).  But I'd like to parse it myself, just to see what the shell items "look like" in the LNK file.

As a side note, it's always kind of fascinating to me how some within the "community" will have data in front of them, and for whatever reason, just keep it to themselves.  Just as an example (and I'm not disparaging the work the handlers did, but commenting on an observation...), the handlers have the LNK file, but they're not sharing the vol SN, NetBIOS name, or shell items included in the LNK file, just the command line and the embedded payload.  I'm sure that this is a case of "this is what we feel is important, and the other stuff isn't...", but what happens when others find something similar?  How do we start correlating, mapping and linking similar incidents if some data that might reveal something useful about the author is deemed unnecessary by some?

Like I said, not disparaging the work that the handlers did, just thinking out loud a bit.

8Kb One-Liner
There was a fascinating post over at Decalage recently regarding a single command line that was 8Kb long.  They did a really good walk-through for determining what a macro was up to, even after the author took some pretty significant steps to make getting to a human-readable format tough.

I think it would be fascinating to get a copy of this sample and run it on a system with SysMon running, to see what the process tree looks like for something like this.  That way, anyone using process creation monitoring could write a filter rule or watchlist to monitor for this in their environment.

From the Trenches
The "From the Trenches" stuff I've been posting doesn't seem to have generated much interest, so I'm going to discontinue those posts and move on to other things.