Saturday, February 13, 2016

From the Trenches

I had an idea recently...there are a lot of really fascinating stories from the infosec industry that aren't being shared or documented in any way.  Most folks may not think of it this way, but these stories are sort of our "corporate history"; they're what led to us being who and where we are today.

Some of my fondest memories from the military were sitting around with other folks, telling "war stories".  Whether it was at a bar after a long day (or week), or we were just sitting around a fire while we were still out in the field, it was a great way to bond and share a sort of "corporate history".  Even today, when I run into someone I knew "back in the day", the conversation invariably turns to one of us saying, "hey, do you remember when...?"  I see a lot of value in sharing this sort of thing within our community, as well.

While I was still on active duty, I applied for and was assigned to the graduate program at the Naval Postgraduate School.  I showed up in June, 1994, and spent most of my time in Spanagel Hall (bldg 17 on this map).  At the time, I had no idea that every day (for about a month), I was walking by Gary Kildall's office.  It was several years later, while reading a book on some of the "history" behind MS/DOS and Silicon Valley, that I read about Digital Research and made the connection.  I've always found that kind of thing fascinating...getting an inside view of things from the people who were there (or, in Gary's case, allegedly not there...), attending the meetings, etc.  Maybe that's why I find the "Live To Tell" show on the History Channel so fascinating.

As a bit of a side note, after taking a class where I learned about "Hamming distance" while I was at NPS, I took a class from Richard Hamming.  That's like reading Marvel Comics and then talking to Stan Lee about developing Marvel Comics characters.

So, my idea is to share experiences I've had within the industry since I started doing this sort of work (infosec consulting), in hopes that others will do the same.  My intention here is not to embarrass anyone, nor to be negative...rather, to just present humorous things that I've seen or experienced, as a kind of "behind the scenes" sort of thing.  I'm not sure at this point if I'm going to make these their own standalone posts, or include shorter stories along with other posts...I'll start by doing both and see what works.

War Dialing
One of the first civilian jobs I had after leaving active duty was with SAIC.  I was working with a small team...myself, a retired Viet Nam-era Army Colonel, and two other analysts...that was trying to establish itself in performing assessment services.  If anyone's ever worked for a company like this, they know it was often described as "400 companies all competing with each other for the same work", and in our case, that was true.  We would sometimes lose work to another group within the company, and then be asked to assist them as their staffing requirements for the work grew.

This was back in 1998, when laptops generally came with a modem and a PCMCIA expansion slot, and your network interface card came in a small hard plastic case.  Also, most laptops still had 3.5" floppy drives built in, although some came with an external unit that you connected to a port.

One particular engagement I was assigned to was to perform "war dialing" against a client located in one of the WTC towers.  So, we flew to New York, went to the main office and had our introductory meeting.  During the meeting, we went over our "concept of operations" (i.e., what we were planning to do) again, and again requested a separate area where we could work, preferably something out of view of the employees, and away from the traffic patterns of the office (such as a conference room).  As is often the case, this wasn't something that had been set up for us ahead of time, so two of us ended up piling into an empty cubicle in the cube farm...not ideal, but it would work for us.

At the time, the tools of choice for this work were ToneLoc and THC Scan.  I don't remember which one we were using at the time, but we kicked off our scan using a range of phone numbers, without randomizing the list.  As such, two of us were hunkered down in this cubicle, with normal office traffic going on all around us.  We had turned on the speakers of the laptop we were using (being in a cubicle rather than a conference room meant we only had access to one phone line...), and leaned in really close so we could hear what was going on over the modem.  It was a game for us to listen to what was going on and try to guess if the system on the other end was a fax machine, someone's desk phone or something else, assuming it picked up.

So, yeah...this was the early version of scanning for vulnerabilities.  This was only a few years after ISS had been formed, and the Internet Scanner product wasn't yet well known, nor heavily used.  While a scan was going on, there really wasn't a great deal to do, beyond monitoring the scan for problems, particularly something that might happen that we needed to tell the boss about; better that he hear it from us first, rather than from the client.

As we were listening to the modem, every now and then we knew we'd hit a desk phone (rather than a modem in a computer) because the phone would pick up and you'd hear someone saying "hello...hello..." on the other end.  After a while, we heard echoes...the sequence of numbers being dialed had reached our part of the office, and we could hear the person answering both through the laptop speakers and over the din of the office noise.  We knew that the numbers were getting closer, so we threw jackets over the laptop in an attempt to muffle the noise...we were concerned that the person who picked up the phone in the cubicles on either side of us would hear themselves.

Because of the lack of space and phone lines available for the work, it took us another day to finish up the scan.  After we finished, we had a check-out meeting with our client point of contact, who shared a funny story about our scan with us.  It seems that there was a corporate policy to report unusual events; there were posters all over the office, and apparently training for employees, telling them what an "unusual event" might look like, to whom to report it, etc.  So, after about a day and a half of the "war dialing", only one call had come in.  Our scan had apparently dialed two sequential numbers that terminated in the mainframe room, and the one person in the room felt that having to get up to answer one phone, then walk across the room to answer the other (both of which hung up), constituted an "unusual event"...that's how it was reported to the security staff.

About two years later, when I was working at another company, we used ISS's Internet Scanner, run from within the infrastructure, to perform vulnerability assessments.  This tool would tell us if the computer scanned had modems installed.  No more "war dialing" entire phone lists for me...it was deemed too disruptive or intrusive to the environment.

Friday, February 12, 2016

Links

Presentations
I recently ran across this presentation from Eric Rowe of the Canadian Police College (presentation hosted by Bloomsburg University in PA).  The title of the presentation is Volume Shadow Copy and Registry Forensics, so it caught my eye.  Overall, it was a good presentation, and something I've done myself, more than once.

Education and Training
Now and again, I see posts to various lists and forums asking about how to get hands-on experience.  If you can't afford the courses that provide this, it's still not difficult to get it.  There are a number of sites available online that provide access to images, and tools are available just about...well...everywhere.

For images, Lance's first practical is still available online, having been originally posted about 8 years ago; the image is of a Windows XP system, and it includes System Restore Points.  Want to work with some Volume Shadow Copies (VSCs) in a Win7 image?  David has an image available with this blog post.  For other sorts of images, there's the Digital Corpora site, the CFReDS "hacking case" from NIST, and the InfoSec Short Takes competition scenario and image, to name a few.

From these images, you can select individual artifacts to extract and parse in order to get familiar with the data and the tools; you can follow the scenario provided with each image (answer the questions, etc.); or you can conduct analysis under the tutelage of a mentor.  All of these provide great opportunities for education and training.

If I were in a position to hire for an opening, and an applicant stated that they'd downloaded and analyzed one of these images, I would ask to see their case notes and report, or even a blog post they'd written, something to show that they had a thought process.  I'd also ask questions about various investigative decisions they'd made throughout the process.

Tools
OLETools - this is a (Python) package of tools for analyzing MS structured storage files, the old style Office docs (more info/links available here).  Tools such as these are most helpful when performing malware detection against documents that use this format.  To upgrade or install the tools, you can use 'pip' within your Python installation.
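
As a rough illustration of the kind of triage these tools support, here's a minimal sketch using olevba from the OLETools package to check a document for VBA macros; the file name is just a placeholder.

from oletools.olevba import VBA_Parser

vbaparser = VBA_Parser("suspicious.doc")  # placeholder file name
if vbaparser.detect_vba_macros():
    # dump each macro stream found in the document
    for (fname, stream_path, vba_filename, vba_code) in vbaparser.extract_macros():
        print("Macro stream:", stream_path)
        print(vba_code)
    # flag auto-exec entry points, suspicious keywords, IOCs, etc.
    for kw_type, keyword, description in vbaparser.analyze_macros():
        print(kw_type, keyword, description)
else:
    print("No VBA macros detected")
vbaparser.close()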

Jon Glass has released an updated version of his WebCacheV01.dat parser, written in Python.  I've been using esedbexport and esedbviewer, but this will be a great addition to my toolkit, because with the code available, I was able to modify it so that the information parsed from the WebCacheV01.dat file can be added directly into my timeline analysis process (that is, I modified the script to output in TLN format).  Thanks to David Cowen's blog, if you need to install (updated) libraries for use by Python, here's a great place you can go to get the compiled binaries.
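
For anyone curious what that looks like, the TLN format I use is just five pipe-delimited fields (time|source|host|user|description), with the time as a Unix epoch value.  Here's a minimal sketch of the idea (this is my own illustration, not Jon's code; the host, user and URL values are made up):

from datetime import datetime, timezone

def to_tln(dt, source, host, user, desc):
    # TLN format: time (Unix epoch, UTC) | source | host | user | description
    epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, desc)

# hypothetical record parsed from a WebCacheV01.dat file
visited = datetime(2016, 1, 15, 13, 22, 47)
print(to_tln(visited, "WebCacheV01", "HOSTNAME", "jdoe", "URL visited - http://www.example.com"))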

For converting Python scripts into standalone Windows executable files, py2exe appears to be the solution; at least, that's what I'm finding in my searches.
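
If that's the route you take, the basic usage (as I understand it) is a small setup.py run with the py2exe command; the script name below is a placeholder.

# setup.py - build a standalone Windows .exe (run: python setup.py py2exe)
from distutils.core import setup
import py2exe  # importing py2exe registers the "py2exe" command with distutils

setup(console=["myscript.py"])  # placeholder script name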

Speaking of Python, if you're into (or new to) Python programming for DFIR work, you might want to check out (Mastering) Python Forensics; Jon's review can be found here.  I'm considering getting a copy; from what I can see so far, it's similar to Perl Scripting for Windows Security.

Books

Speaking of books, Windows Registry Forensics, 2/e is due out in April.  I'm looking forward to this one for a couple of reasons; first, a lot of the material is completely rewritten, and there's not only some new material with respect to the hives themselves, but I added a chapter on using RegRipper.  My hope is that analysts will read the chapter, and get a better understanding of how to use RegRipper to further their investigations, and go beyond simply downloading the distribution and running everything via the GUI.

Second, this book has entirely new cover art!  This is awesome!  When the first edition came out, I took two copies to a conference to give away at the end of my presentation...I had received the copies the day before leaving for the conference, they were that new.  When I went to give one of the books away, the recipient said, "I already have that one."  But there was no way that they could have, because it was brand new.  The issue was that the publisher had decided to release several books (not just my titles) with the same color scheme!  Rather than reading the titles of the books, most folks were simply looking at the color and thinking, "...yeah, I've seen/got that one...".  Right now, I have 8 Syngress books on my bookshelf, in two color schemes...both of which are just slightly different shades of green!  Most folks don't know the difference between Digital Forensics with Open Source Tools and Windows Registry Forensics 1/e, because there is no difference in the color scheme.  It's great to see this change.

Wednesday, February 03, 2016

Updated samparse.pl plugin

I received an email from randomaccess last night, and got a look at it this morning.  In the email, he pointed out that there had been some changes to the SAM Registry hive as of Windows 8/8.1, apparently due to the ability to log into the system using an MSDN Live account.  Several new values appear to have been added to the user's RID key, specifically GivenName, SurName, and InternetUserName.  He provided a sample SAM hive and an explanation of what he was looking for, and I was able to update the samparse.pl plugin, send him a copy, and update the GitHub repository, all in pretty short order.
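
If you want to eyeball those values yourself outside of RegRipper, a quick sketch using the python-registry module might look like the following (this isn't the samparse.pl plugin, which is Perl; the hive path is the standard one for user accounts, and the UTF-16LE decoding is my assumption):

from Registry import Registry

reg = Registry.Registry("SAM")  # path to the SAM hive file
users = reg.open("SAM\\Domains\\Account\\Users")

for rid_key in users.subkeys():
    if rid_key.name() == "Names":
        continue
    for val in rid_key.values():
        if val.name() in ("GivenName", "SurName", "InternetUserName"):
            data = val.value()
            if isinstance(data, bytes):
                # assumption: stored as UTF-16LE string data
                data = data.decode("utf-16-le", "ignore").rstrip("\x00")
            print(rid_key.name(), val.name(), data)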

This is a great example of what I've said time and again since I released RegRipper; if you need a plugin and don't feel that you can create or update one yourself, all you need to do is provide a concise description of what you're looking for, and some sample data.  It's that easy, and I've always been able to turn a new or updated plugin around pretty quickly.

Now, I know some folks are hesitant to share data/hive files with me, for fear of exposure.  I know people are afraid to share information for fear it will end up in my blog, and I have actually had someone tell me recently that they were hesitant to share something with me because they thought I would take the information and write a new book around it.  Folks, if you take a close look at the blog and books, I don't expose data in either one.  I've received hive files from two members of law enforcement, one of whom shared hive files from a Windows phone.  That's right...law enforcement.  And I haven't exposed, nor have I shared any of that data.  Just sayin'...

Interestingly enough, randomaccess also asked in his email if I'd "updated the samparse plugin for the latest book", which was kind of an interesting question.  The short answer is "no", I don't generally update plugins only when I'm releasing a new book.  If you've followed this blog, you're aware that plugins get created or updated all the time, without a new book being released.  The more extensive response is that I simply haven't seen a SAM hive myself that contains the information in question, nor has anyone provided a hive that I could use to update and test the plugin, until now.

And yes, the second edition of Windows Registry Forensics is due to hit the shelves in April, 2016.

Wednesday, January 27, 2016

The Need for Instrumentation

Almost everyone likes spies, right?  Jason Bourne, James Bond, that sort of thing?  One of the things you don't see in the movies is the training these super spies go through, but you have to imagine that it's pretty extensive, if they can pop up in a city that they maybe haven't been to before and transition seamlessly into the environment.

The same thing is true of targeted adversaries...they're able to seamlessly blend into your environment.  Like special operations forces, they learn how to use tools native to the environment in order to get the information that they're after, whether it's initial reconnaissance of the host or the infrastructure, locating items of interest, moving laterally within the infrastructure, or exfiltrating data.

I caught this post from JPCERT/CC that discusses Windows commands abused by attackers.  The author takes a different approach from previous posts, sharing not only some of the command lines used but also the frequency of use for each tool.  There's also a section in the post that recommends using GPOs to restrict the use of unnecessary commands.  An alternative approach might be to track attempts to use the tools, by creating a trigger to write a Windows Event Log record (discussed previously in this post).  When incorporated into an overall log management (SIEM, filtering, alerting, etc.) framework, this can be an extremely valuable detection mechanism.

If you're not familiar with some of the tools that you see listed in the JPCERT/CC blog post, try running them, starting by typing the command followed by "/?".

TradeCraft Tuesday - Episode #6 discusses how Powershell can be used and abused. The presenters (one of whom is Kyle Hanslovan) strongly encourage interaction (wow, does that sound familiar at all?) with the presentation via Twitter.  During the presentation, the guys talk about Powershell being used to push base64 encoded commands into the Registry for later use (often referred to as "fileless"), and it doesn't stop there.  Their discussion of the power of Powershell for post-exploitation activities really highlights the need for a suitable level of instrumentation in order to achieve visibility.

The use of native commands by an adversary or intruder is not new...it's been talked about before.  For example, the guys at SecureWorks talked about the same thing in the articles Linking Users to Systems and Living off the Land.  Rather than talking about what could be done, these articles show you data that illustrates what was actually done; not might or could, but did.

So, what do you do?  Well, I've posted previously about how you can go about monitoring for command line activity, which is usually manifest when access is achieved via RATs.

Not all abuse of native Windows commands and functionality is going to be as obvious as some of what's been discussed already.  Take this recent SecureWorks post for example...it illustrates how GPOs have been observed being abused by dedicated actors.  An intruder moving about your infrastructure via Terminal Services won't be as easy to detect using command line process creation monitoring, unless and until they resort to some form of non-GUI interaction.

Saturday, January 23, 2016

Training

On the heels of my Skills Dilemma blog post, I wanted to share some thoughts on training.  Throughout my career, I've been on both sides of that fence...in the military and in private sector consulting, I've received training, as well as developed and conducted it, at various levels.  I've attended presentations, and developed and conducted presentations, at a number of levels and at a variety of venues.

Corey's also had some pretty interesting thoughts with respect to training in his blog.

Purpose
There are a lot of great training options out there.  When you're looking at the various options, are you looking to use up training funds, or are you looking for specific skills development?  What is the purpose of the training?  What's your intent in either attending the training, or sending someone, a staff member, to that training?

If you're just looking to use up training funds so that you'll get that money included in your budget next year, well, pretty much anything will do.  However, if you're looking for specific skills development, whether it's basic or advanced skills, you may want to look closely at what's being taught in the course.

What would really help with this is a Yelp-like system for reviewing training courses.  Wait...what?  You think someone should actually evaluate the services they pay for and receive?  What are you, out of your mind?

So, here's my thought...as a manager, you sit down with one of your staff and develop performance indicators and goals for the coming year, as well as a plan for achieving those goals.  The two of you decide that in order to meet those goals, one step is to attend a specific training course.  Your staff member attends, and then you both write a review of the course based on what you agreed you wanted to achieve by attending it; the staff member based on attending the course, and you (as the manager) based on your observation of your staff member's use of their new skills.

Accountability
I'll say it again...there are a lot of great training options out there, but in my experience, what's missing is accountability for that training.  What I mean by that is, if you're a manager and you send someone off to training (whether they obtain a certification or not), do you hold them accountable for that training once they return?

Here's an example...when I was on active duty and was stationed overseas, there was an NBC (nuclear, biological, chemical) response course being conducted, and being the junior guy, I was sent.  After I'd passed the course and received my certificate, I returned to my unit, and did absolutely NOTHING with respect to NBC.  Basically, I was simply away for a week.  I was sent off to the training for no other reason than I was the low guy on the totem pole, and when I returned, I was neither asked about, nor required to use or implement that training in any way.  There was no accountability.

Later in my tenure in the military, I found an opening for some advanced training for a Sgt who worked for me.  I felt strongly that this Sgt should be promoted and advance in his career, and one way to shore up his chances was to ensure that he advanced in his military occupational specialty.  I got him a seat in the training course, got his travel set up, and while he was gone, I found a position in another unit where he would put his new-found skills to good use.  When he returned, I informed him of his transfer (which had other benefits for him, as well).  His new role required him to teach junior Marines about the device he'd been trained on, and to train the new officers attending the Basic Communication Officers Course on how to use and deploy the device.  He was held accountable for the training he'd received.

How often do we do this?  Be honest.  I've seen an analyst that had attended some pretty extensive training, only to return and within the next couple of weeks, not know how to determine if a file had been time stomped or not.  I know that the basics of how to conduct that type of analysis were covered in the training they'd attended.

Generalist vs. Specialist
What kind of training are you interested in?  Basic skills development, or advanced training in a very specific skill set?  What specific skills are you looking for?  Are they skills specific to your environment?

There's a lot of good generalist training out there, training that provides a broad range of skills.  You may want to start there, and then develop an in-house training program ("brown bag" lunch presentations, half- or full-day training, mentoring, etc.) that reinforces and extends those basic skills into something that's specific to your environment.

Analysis

A bit ago, I posted about doing analysis, and that post didn't really seem to get much traction at all.  What was I trying for?  To start a conversation about how we _do_ analysis.  When we make statements to a client or to another analyst, on what are we basing those findings?  Somewhere between the raw data and our findings is where we _do_ analysis; I know what that looks like for me, and I've shared it (in this blog, in my books, etc.), and what I've wanted to do for some time is go beyond the passivity of sitting in a classroom, and start a conversation where analysts engage and discuss analysis.

I have to wonder...is this even possible?  Will analysts talk about what they do?  For me, I'm more than happy to.  But will this spark a conversation?

I thought I'd try a different tack this time around.  In a recent blog post, I mentioned that two Prefetch parsers had recently been released.  While it is interesting to see these tools being made available, I have to ask...how are analysts using these tools?  How are analysts using these tools to conduct analysis, and achieve the results that they're sharing with their clients?

Don't get me wrong...I think having tools is a wonderful idea.  We all have our favorite tools that we tend to gravitate toward or reach for under different circumstances.  Whether it's commercial or free/open source tools, it doesn't really matter.  Whether you're using a dongle or a Linux distro...it doesn't matter.  What does matter is, how are you using it, and how are you interpreting the data?

Someone told me recently, "...I know you have an issue with EnCase...", and to be honest, that's simply not the case.  I don't have an issue with EnCase at all, nor with FTK.  I do have an issue with how those tools are used by analysts, and the issue extends to any other tool that is run auto-magically and expected to spit out true results with little to no analysis.

What do the tools really do for us?  Well, basically, most tools parse data of some sort, and display it.  It's then up to us, as analysts, to analyze that data...interpret it, either within the context of that and other data, or by providing additional context, by incorporating either additional data from the same source, or data from external sources.

RegRipper is a great example.  The idea behind RegRipper (as well as the other tools I've written) is to parse and display data for analysis...that's it.  RegRipper started as a bunch of scripts I had sitting around...every time I'd work on a system and have to dig through the Registry to find something, I'd write a script to do the actual work for me.  In some cases, a script was simply to follow a key path (or several key paths) that I didn't want to have to memorize. In other cases, I'd write a script to handle ROT-13 decoding or binary parsing; I figured, rather than having to do all of that again, I'd write  a script to automate it.

For a while, that's all RegRipper did...parse and display data.  If you had key words you wanted to "pivot" on, you could do so with just about any text editor, but that's still a lot of data.  So then I started adding "alerts"; I'd have the script (or tool) do some basic searching to look for things that were known to be "bad", in particular, file paths in specific locations.  For example, an .exe file in the root of the user profile, or in the root of the Recycle Bin, is a very bad thing, so I wanted those to pop out and be put right in front of the analyst.  I found...and still find...this to be an incredibly useful functionality.
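
To give a sense of what those alerts amount to (this is just a Python sketch of the idea, not RegRipper's actual Perl code), the check is essentially a handful of path patterns that should never look normal:

import re

# example patterns: .exe in the root of a user profile, in the Recycle Bin,
# or in the root of the volume
SUSPICIOUS = [
    re.compile(r"^[a-z]:\\users\\[^\\]+\\[^\\]+\.exe$", re.I),
    re.compile(r"\\(\$recycle\.bin|recycler)\\.*\.exe$", re.I),
    re.compile(r"^[a-z]:\\[^\\]+\.exe$", re.I),
]

def alerts(paths):
    for path in paths:
        if any(rx.search(path) for rx in SUSPICIOUS):
            yield "ALERT: " + path

for line in alerts([r"C:\Users\bob\svchost.exe", r"C:\Windows\system32\svchost.exe"]):
    print(line)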

Here's an example of what I'm talking about with respect to analysis...I ran across this forensics challenge walk-through recently, and just for sh*ts and grins, I downloaded the Registry hive (NTUSER.DAT, Software, System) files.  I ran the appcompatcache.pl RegRipper plugin against the system hive, and found the following "interesting" entries within the AppCompatCache value:

C:\dllhot.exe  Tue Apr  3 18:08:50 2012 Z  Executed
C:\Windows\TEMP\a.exe  Tue Apr  3 23:54:46 2012 Z  Executed
c:\windows\system32\dllhost\svchost.exe  Tue Apr  3 22:40:25 2012 Z  Executed
C:\windows\system32\hydrakatz.exe  Wed Apr  4 01:00:45 2012 Z  Executed
C:\Windows\system32\icacls.exe  Tue Jul 14 01:14:21 2009 Z  Executed

Now, the question is, for each of those entries, what do they mean?  Do they mean that the .exe file was "executed" on the date and time listed?

No, that's not what the entries mean at all.  Check out Mandiant's white paper on the subject.  You can verify what they're saying in the whitepaper by creating a timeline from the shim cache data and file system metadata (just the $MFT will suffice); if the files that had been executed were not deleted from the system, you'll see that the time stamp included in the shim cache data is, in fact, the last modification time from the file system (specifically, the $STANDARD_INFORMATION attribute) metadata.
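
Here's a bare-bones sketch of that verification, using the a.exe entry from the output above; the two dictionaries are hand-built stand-ins for whatever your shim cache parser and $MFT parser actually produce:

from datetime import datetime

# stand-ins for parsed tool output: path -> timestamp
shim_cache = {
    r"C:\Windows\TEMP\a.exe": datetime(2012, 4, 3, 23, 54, 46),
}
mft_si_mtimes = {  # $STANDARD_INFORMATION last-modified times from the $MFT
    r"C:\Windows\TEMP\a.exe": datetime(2012, 4, 3, 23, 54, 46),
}

for path, shim_time in shim_cache.items():
    si_time = mft_si_mtimes.get(path)
    if si_time is None:
        print("%s: not found in the $MFT (deleted?)" % path)
    elif si_time == shim_time:
        print("%s: shim cache time matches $SI last-modified, as expected" % path)
    else:
        print("%s: times differ...worth a closer look" % path)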

I use this as an example, simply because it's something that I see a great deal of; in fact, I recently experienced a "tale of two analysts", where I reviewed work that had previously been conducted by two separate analysts.  The first analyst did not parse the Shim Cache data, and the second parsed it, but assumed that the data meant that the .exe files of interest had been executed at the time displayed alongside the entry.

Again, this is just an example, and not meant to focus the spotlight on anyone.  I've talked with a number of analysts, and in just about every conversation, they've either known someone who's made the same mistake misinterpreting the Shim Cache data, or they've admitted to misinterpreting it themselves.  I get it; no one's perfect, and we all make mistakes.  I chose this one as an example, because it's perhaps one of the most misinterpreted data sources.  A lot of analysts who have attended (or conducted) expensive training courses have made this mistake.

Pointing out mistakes isn't the point I'm trying to make...it's that we, as a community, need to engage in a community-wide conversation about analysis.  What resources do we have available now, and what do we need?  We can't all attend training courses, and when we do, what happens most often is that we learn something cool, and then don't see it again for 6 months or a year, and we forget the nuances of that particular analysis.  Dedicated resources are great, but they (forums, emails, documents) need to be searched.  What about just-in-time resources, like asking a question?  Would that help?


Wednesday, January 20, 2016

Resources, Link Mashup

Monitoring
MS's Sysmon was recently updated to version 3.2, with the addition of capturing opens for raw read access to disks and volumes.  If you're interested in monitoring your infrastructure and performing threat hunting at all, I'd highly recommend that you consider installing something like this on your systems.  While Sysmon is not nearly as fully-featured as something like Carbon Black, employing Sysmon along with centralized log collection and filtering will provide you with a level of visibility that you likely hadn't even imagined was possible previously.

This page talks about using Sysmon and NXLog.

The fine analysts of the Dell SecureWorks CTU-SO recently had an article posted  that describes what the bad guys like to do with Windows Event Logs, and both of the case studies could be "caught" with the right instrumentation in place.  You can also use process creation monitoring (via Sysmon, or some other means) to detect when an intruder is living off the land within your environment.

The key to effective monitoring and subsequent threat hunting is visibility, which is achieved through telemetry and instrumentation.  How are bad guys able to persist within an infrastructure for a year or more without being detected?  It's not that they aren't doing stuff, it's that they're doing stuff that isn't detected due to a lack of visibility.

MS KB article 3004375 outlines how to improve Windows command-line auditing, and this post from LogRhythm discusses how to enable Powershell command line logging (another post discussing the same thing is here).  The MS KB article gives you some basic information regarding process creation, and Sysmon provides much more insight.  Regardless of which option you choose, however, all are useless unless you're doing some sort of centralized log collection and filtering, so be sure to incorporate the necessary and appropriate logs into your SIEM, and get those filters written.
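
As a quick way to check whether that command-line auditing setting appears to be in place on a system, something like the following works; the registry path below is the one I understand the KB 3004375 policy to use, so treat it as an assumption and verify against the KB article:

import winreg

path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        value, _ = winreg.QueryValueEx(key, "ProcessCreationIncludeCmdLine_Enabled")
        print("Command line included in process creation events:", value == 1)
except FileNotFoundError:
    print("Policy value not set")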

Windows Event Logs
Speaking of Windows Event Logs, sometimes it can be very difficult to find information regarding various event source/ID pairs.  Microsoft has a great deal of information available regarding Windows Event Log records, and I very often can easily find the pages with a quick Google search.  For example, I recently found this page on Firewall Rule Processing events, based on a question I saw in an online forum.

From Deus Ex Machina, you can look up a wide range of Windows Event Log records here or here.  I've found both to be very useful.  I've used this site more than once to get information about *.evtx records that I couldn't find any place else.

Another source of information about Windows Event Log records and how they can be used can often be one of the TechNet blogs.  For example, here's a really good blog post from Jessica Payne regarding tracking lateral movement...

With respect to the Windows Event Logs, I've been looking at ways to increase instrumentation on Windows systems, and something I would recommend is putting triggers in place for various activities, and writing a record to the Windows Event Log.  I found this blog post recently that discusses using PowerShell to write to the Windows Event Log, so whatever you trap or trigger on a system can launch the appropriate command or run a batch file that contains the command.  Of course, in a networked environment, I'd highly recommend a SIEM be set up, as well.
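
The post I linked uses PowerShell's Write-EventLog; if Python is more your speed, a rough equivalent via pywin32 (the source name and event ID below are made up for illustration) looks something like this:

import win32evtlog
import win32evtlogutil

win32evtlogutil.ReportEvent(
    "MyTripwire",                          # event source name (hypothetical)
    1001,                                  # event ID (hypothetical)
    eventType=win32evtlog.EVENTLOG_WARNING_TYPE,
    strings=["Suspicious native tool usage observed"],
)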

One thought regarding filtering and analyzing Windows Event Log records sent to a SIEM...when looking at various Windows Event Log records, we have to look at them in the context of the system, rather than in isolation, as what they actually refer to can be very different.  A suspicious record related to WMI, for example, when viewed in isolation may end up being part of known and documented activity when viewed in the context of the system.

Analysis
PoorBillionaire recently released a Windows Prefetch Parser, which is reportedly capable of handling *.pf files from XP systems all the way up through Windows 10 systems.  On 19 Jan, Eric Zimmerman did the same, making his own Prefetch parser available.

Having tools available is great, but what we really need to do is talk about how those tools can be used most effectively as part of our analysis.  There's no single correct way to use the tool, but the issue becomes, how do you correctly interpret the data once you have it?

I recently encountered a "tale of two analysts", where both had access to the same data.  One analyst did not parse the ShimCache data at all as part of their analysis, while the other did and misinterpreted the information that the tool (whichever one that was) displayed for them.

So, my point is that having tools to parse data is great, but if the focus is tools and parsing data, but not analyzing and correctly interpreting the data, what have the tools really gotten us?

Creating a Timeline
I was browsing around recently and ran across an older blog post (yeah, I know it's like 18 months old...), and in the very beginning of that post, something caught my eye.  Specifically, a couple of quotes from the blog post:

...my reasons for carrying this out after the filesystem timeline is purely down to the time it takes to process.

...and...

The problem with it though is the sheer amount of information it can contain! It is very important when working with a super timeline to have a pivot point to allow you to narrow down the time frame you are interested in.

The post also states that timeline analysis is an extremely powerful tool, and I agree, 100%.  What I would offer to analysts is a more deliberate approach to timeline analysis, based on what Chris Pogue coined as Sniper Forensics.

Speaking of analysis, the folks at RSA released a really good look at analyzing carrier files used during a phish.  The post provides a pretty thorough walk-through of the tool and techniques used to parse through an old (or should I say, "OLE") style MS Word document to identify and analyze embedded macros.

Powershell
Not long ago, I ran across an interesting artifact...a folder with the following name:

C:\Users\user\AppData\Local\Microsoft\Windows\PowerShell\CommandAnalysis\

The folder contained an index file, and a bunch of files with names that follow the format "PowerShell_AnalysisCacheEntry_GUID".  Doing some research into this, I ran across this BoyWonder blog post, which seems to indicate that this is a cache (yeah, okay, that's in the name, I get it...), and possibly used for functionality similar to auto-complete.  It doesn't appear to illustrate what was run, though.  For that, you might want to see the LogRhythm link earlier in this post.

As it turned out, the folder path I listed above was part of legitimate activity performed by an administrator.


Tuesday, January 19, 2016

More Registry Fun

Once, on a blog far, far away, there was this post that discussed the use of the Unicode RLO control character to "hide" malware in the Registry, particularly from GUI viewers that processed Unicode.

Recently, Jamie shared this Symantec article with me; figure 1 in the article illustrates an interesting aspect of the malware when it comes to persistence...it apparently prepends a null character to the beginning of the value name.  Interestingly, some seem to think that this makes tools like RegEdit "broken".

So, I wrote a RegRipper plugin called "null.pl" that runs through a hive file looking for key and value names that begin with a null character.  Jamie also shared a couple of sample hives, so I got to test the plugin out.  The following image illustrates the plugin output when run against one of the hives:

Figure 1

All in all, the turn-around time was pretty quick.  I started this morning, and had the plugin written, tested, and uploaded to Github before lunch.
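
For anyone who'd rather poke at hives like this from Python, the same check (not the null.pl plugin itself, just a sketch of the idea using the python-registry module) boils down to walking the hive and flagging any key or value name that starts with a null character:

from Registry import Registry

def walk(key):
    if key.name().startswith("\x00"):
        print("Key:", key.path())
    for value in key.values():
        if value.name().startswith("\x00"):
            print("Value:", key.path(), repr(value.name()))
    for subkey in key.subkeys():
        walk(subkey)

reg = Registry.Registry("NTUSER.DAT")  # path to the hive file to check
walk(reg.root())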

Later in the day, Eric Zimmerman followed up by testing the hive that Jamie graciously shared with me against the Registry Explorer.  I should also note that MiTeC WRR has no issues with the value names; it displays them as follows:

Figure 2

Addendum, 20 Jan: On a whim, I ran the fileless.pl plugin against the hive, and it detected the two values with the "fileless" data seen in figure 2.