Sunday, July 26, 2015

BSidesCincy Follow up

I had the distinct honor of speaking at @BSidesCincy this past weekend, and I greatly appreciate the opportunity that Justin, Josh, and the entire crew provided for me to speak.

In my time, I've been to a number of conferences.  I started with Usenix back in '99, and that experience was a bit different for me, in that the vast majority of my public speaking to that point had been in the military (during training, and while providing training).  Over the years, I've attended (I prefer to speak at conferences in an attempt to keep costs to my employer down...) several conferences that left me wondering what the point was.  Sometimes, what the speaker actually spoke about had little or nothing to do with the advertised title of their talk, and in a few cases, with the conference theme itself.  With BSidesCincy, it was clear that folks from the same community, with similar experiences and concerns, were coming together to share and discuss those experiences and concerns, and IMHO, that's the real way to make forward progress in this community.

Unfortunately due to flights (or rather, the lack of direct flights), I had to depart the venue after scarfing down lunch.  I did get to see John Davidson's presentation, and I've got to say, it was pretty fascinating.  Even though I'm more of a DFIR guy, my day job does include hunting (albeit largely from a host-based perspective), so a lot of what John talked about made complete sense, as at one point or another, I had some similar thoughts.  John took those thoughts and then took them further.  From a 50,000 ft view, John's presentation was about "here's the issue I ran into and here's how I addressed it...", which resulted in a really good presentation.  Further, it addressed something that I've heard over the years throughout the community...that folks don't want to hear, "...this is what you need to do...", as much as they want to hear, "...this is the issue I faced, and here's what I did to address it...", illustrating their methodology and thought processes throughout.

Thanks to Adrian, here's a link to the video of my presentation.

Again, to Justin, Josh, Adrian, and everyone on the BSidesCincy crew...thanks so much for having me out and for giving me the opportunity to share a bit with everyone.  I had a really great time engaging with everyone I got to speak with and meet.  I hope that for you, it was worth it and that some folks came away with something that they could use.

Addendum, 30 July: I was asked recently via Twitter if I was going to post the slides somewhere, and to be honest, I really don't see the point.  First, I don't read off of my slides...most of what I talk about during a presentation isn't in the slides (on the screen or in the notes).  Second, if you're looking to use the slides as a reference, the information that is posted in the slides is already available in my blog, or someplace else.  Many of the event IDs mentioned in this presentation were taken from eventmap.txt.

Finally, slide 4 of the presentation is to the left.  I used this slide to illustrate a point I was trying to make...that is, the annual reports that we see from some security companies tell us that when these consultants respond to a breach, they're able to see something (some indicators, artifacts, etc.) that allow them to populate the "dwell time" statistics.  The thought that I shared was that if the consultants can find this, what's stopping the local IT security staff from finding these things earlier in the game?

I didn't get many comments on this thought at the conference...am I on target, or am I way off and clearly making things up?  Does it make sense?  Does it generate any additional or follow-on thoughts?

So, what would be the point of releasing the slides?  I talk to this slide in the video, and so far, haven't heard any comments or feedback on the point, so why release the slides, just to get...well...still NOT get comments or thoughts?  So far, there have been a good number of RTs and "Favorites" for the original version of this post, but as of yet, no comments regarding the content of the video.

Monday, July 13, 2015

Ghost Busting

First, read Jack's post, Don't wait for an intrusion to find you.

Next, read this post (The Blue Team Myth).

Notice any similarities...not in content, but in the basic thought behind them?

Yeah, me, too.  Great minds, eh?  Okay, maybe not...but still.

So, you're probably wondering what this has to do with Ghostbusters...well, to many, an intruder within the infrastructure may seem like a ghost, moving between systems and through firewalls, apparently like an apparition.  One thing I've seen time and again during incident response is that an intruder is not encumbered by the artificialities of an infrastructure, where someone shouldn't be able to access systems, due either to policies and roles, or to local laws (European privacy laws, etc.). Yeah, that doesn't stop an adversary.

Jack's right about a number of things.  First, the old adage about an intruder needing to be right once, and a network defender needing to be right all the time is...well...wrong.  Consider this...prevention, by itself, is ineffective.  In defending a network, you need to include prevention, detection, and response in your security plan.  Given that, what is the adversary's definition of success?  What is their goal?  Once you arrive at what you believe to be the adversary's goal, you'll realize that there are plenty of opportunities for defenders and responders to "win", to get inside the adversary's OODA loop and disrupt/hamper/impede their activities.

Jack is exactly right...an intruder needs to accomplish five stages in order to succeed, and all five of those stages require one thing in common...execution of commands.  Something has to run on the systems.  Doesn't it then make sense to have some sort of process creation monitoring in place, such as Bit9's Carbon Black, or MS's Sysmon?

Here's another way to look at it...at the beginning of my blog post, I mention two annual security/threat reports, and describe what some of the statistics mean.  In short, one metric that the investigators report on is dwell time...how long (as far as they can tell based on the artifacts) a targeted actor was embedded within the infrastructure before being detected.  What this means is that when investigators look at the available data, they're able to determine (at least up to a point) the earliest indicators of the adversary's activities, be it early indicators (use of web shells), or the actual initial infection vector (IIV), such as a strategic web compromise, or an email with a link to a malicious site, or with a weaponized document attached.  The point is that the investigators are able to find indicators...Registry keys/values, Windows Event Log records, etc...of the adversary's activities.  And all of these are indicators that could have been used to detect the adversary's activities much sooner.

Finally, one other thought...Jack's steps 4 and 5 are cyclic.  Wait...what?  What I mean by that is that following credential theft and establishing persistence, the adversary needs to orient to where they are and begin taking steps to locate data.  What are they looking for, what are they interested in, and where is it located?  Is the data that the adversary is interested in on a server someplace (file server, database server), or are there bits of the data that they're interested in located on workstations, in emails, reports, spreadsheets, etc.?  So, what you may see (assuming you have the instrumentation to observe it) is the adversary collecting data for analysis in order to assist them in targeting the specific information they're interested in; this might be directory listings, emails, etc.  You may see this data exfiltrated so that the adversary can determine what it is they want.

Tuesday, June 23, 2015

The Blue Team Myth

The 2015 M-Trends Report states that the median number of days that threat groups were present in a victim's network before detection was 205 days, which is down 24 days from 2013.  But that's the median number of days...the longest persistence was 2,982 days.  That's over 8 years.  The report also states that 69% of organizations were informed of the breach by an external entity.

The 2015 TrustWave Global Security Report indicates that 81% of companies did not detect breaches on their own, and that the median length of time it took to detect a breach was 86 days.

What do these numbers tell us?  They tell us that the organizations in question did not detect the breaches, but when they called in consultants to assist, those consultants found evidence of the breaches that allowed them to identify how long (to a point, of course) the intruders had persisted within the infrastructure.  I'm going to go out on a limb here and just say that I recognize that the consultants may not have been able to trace the breach information back to the initial infection vector (IIV) in every case, so the numbers in the report are likely just a little fuzzy.
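That "fuzziness" can be made concrete: reported dwell time runs from the earliest *recoverable* indicator to detection, not necessarily from the true IIV.  Here's a minimal sketch of the arithmetic (the dates are hypothetical, not drawn from either report):

```python
from datetime import date

def dwell_time_days(indicator_dates, detection_date):
    # Dwell time as the reports measure it: detection minus the
    # EARLIEST indicator the analysts could find.  If the true initial
    # infection predates the oldest surviving artifact, this
    # under-reports the actual dwell.
    return (detection_date - min(indicator_dates)).days

# Hypothetical engagement: earliest recoverable indicator in January,
# breach detected in August.
indicators = [date(2015, 3, 10), date(2015, 1, 4), date(2015, 6, 22)]
print(dwell_time_days(indicators, date(2015, 8, 1)))  # 209
```

If artifacts older than January had rolled over or been deleted, the reported number stays at 209 days even though the intruder may have been in place longer...which is exactly why the report figures are a floor, not a ceiling.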

What the numbers tell us is that the consultants, based on their combined experience, knowledge, and methodologies, were able to find indicators and artifacts that others did not.  This means that the local IT staff either wasn't looking, or didn't have a complete understanding of what to look for.

The numbers in the reports are supported by my own experiences, as well as those of a myriad of other responders.  A while back, I worked on an engagement during which we were able to trace the adversary's activities back to the original phishing email and attached weaponized PDF.  The emails had been received 21 months before we had been called.  I've done analysis on breaches where indicators went back two years or more, and we were never able to track down the IIV.

The numbers also tell us that the adage that attackers need to be right 1 out of 100 times and defenders need to be right 100 out of 100 times doesn't hold, and that it's a myth.  What artifacts did the consultants who provided input to the reports find, and could administrators of the compromised infrastructure have found those artifacts, as well, if they'd been looking?  Based on my own personal experiences, I would say that the answer is a definite yes.

By now, we are all well aware that prevention doesn't work, that regardless of how many technical measures we put in place, that as long as people are involved in some way, at some point someone will make their way into an infrastructure.  So what we need to do is include detection and response; put our protection mechanisms in place, monitor them, look/hunt for issues throughout our infrastructure, and be ready to take some sort of action when issues are found.

In my previous blog post, I shared some thoughts on hunting, in short, describing, in general terms, what folks can look for within their environments.  One person commented (via Twitter) that I should spend less time talking about it, and more time listing those artifacts that hunters should look for.

So what should you look for? Check out this video of Lee's presentation...he shares a lot of great information.  Much like the information here, or in any of the HowTo blog posts from July, 2013.  That's a start.  Eventmap.txt has some interesting tips, as do many RegRipper plugins.  Want more?  Take a hard look at your threat intelligence...if you're getting it from a vendor, does it give you what you need?  How about the threat intel you're developing internally?  Similarly, does it meet your needs?  MD5 hashes and IP addresses aren't going to get you as far as TTPs, if you have the processes and methodologies to find those TTPs.
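The eventmap.txt idea mentioned above...mapping event source/ID pairs to analyst-meaningful descriptions...is easy to automate for timeline enrichment.  A hedged sketch follows; the "source/ID:description" line layout and the sample entries are illustrative assumptions, not the actual contents of the file:

```python
# Sketch of an eventmap.txt-style lookup: map "source/ID" keys to
# descriptions an analyst can act on.  Format and entries are assumed
# for illustration.
def load_eventmap(lines):
    emap = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        key, _, desc = line.partition(":")
        emap[key.strip().lower()] = desc.strip()
    return emap

sample = [
    "# source/eventID:description",
    "Microsoft-Windows-Security-Auditing/4624:Successful logon",
    "Service Control Manager/7045:Service installed",
]
emap = load_eventmap(sample)
print(emap["service control manager/7045"])  # Service installed
```

The point isn't the dozen lines of code; it's that once a mapping like this exists, every event record in a timeline can carry context, so the hunter doesn't have to remember what 4624 or 7045 means.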

Monday, June 22, 2015

Hunting, and Knowing What To Hunt For

Like many others of my generation, when I was a kid I'd go and play outside for hours and hours.  Sometimes, while running through the woods, I'd see trash...but seeing it often, I wouldn't think much of it.  Sometimes it was a tire in the creek, and other times it might be bottles or cigarette butts in a small cluster.

When I went through my initial military training, we spent a lot of time outdoors, but during the first 6 months, there was no real "this is what to look for" training.  It wasn't until a couple of years later, when I returned as an instructor, that the course taught new officers the difference between moving through the woods during the day and at night.

Much later, after my military service ended, I got engaged in horseback riding, particularly trail riding.  Anyone who's ever done this knows that you become more aware of your surroundings, for a variety of reasons, but most importantly, the safety of your horse, you, and those you may be riding with.  You develop an awareness of your surroundings; what's moving near you, and what's further away.  Are there runners or walkers with dogs (or children in strollers) down the trail?  Sometimes you can be warned of the impending approach of others not so much visually, as by what you hear or smell (some riders like to smoke while they're riding).  You can tell what is or might be in the area by visual observation (scat, droppings, etc.), as well as listening, and even smell.  Why is this important?  There's a lot that can spook a horse...horses will smell a carcass well before a human will, and as you ride up, a gaggle of vultures might suddenly take flight.  Depending on your horse's temperament, a flock of turkey hens running through a field might spook them or just catch their attention.

If it's recently rained in the area where I'm riding, I'll visually sweep the ground looking for footprints and signs of animals and people.  If I see what appear to be fresh impressions of running shoes with dog paw prints nearby, I might be looking for a walker or runner with a dog.  Most runners have a sense of courtesy to slow down to a walk and maybe even say something when approaching horses from the rear.  Most...not all.  And not everyone who walks a dog really thinks about whether their dog is habituated to horses, or even how the dog will react when they see horses.  I'll also look for signs of deer, because they usually (albeit not always) move in groups.  More than once in the springtime, I've come across a fawn coiled up in the tall grass...deer teach their fawns to remain very still, regardless of the circumstances, while they go off to forage.  More than once I've seen the fawn well before the fawn has lost its nerve and suddenly bolted from its hiding place.

Now, because of this level of awareness, I had an interesting experience several years ago while hiking with friends at the base of Mount Rainier.  At the park entrance, there was a small lake with signs prohibiting fishing, and a sign telling hikers what to do if they spotted a bear.  We hiked about 3 1/2 miles to a small meadow with wild blueberries growing.  The entire way out and back, there were NO signs of animals, no insects, no sounds of birds or squirrels.  No scat or markings of any kind.  The absence of any and all fauna was extremely evident, and more than a bit odd.

So what?  What's my point?  If you're familiar with the environment, aware of your surroundings while performing DFIR work, and know what should be there, then you know what to look for, as well as what data sources to go to if you're looking for suspicious activity.  It's all about knowing what to hunt for when you're hunting.  This is particularly true if you're performing hunting operations in your own environment; however, it can also be applied in cases where you're a consultant (like me), engaging with an unfamiliar infrastructure.

In a recent presentation  (BrightTalk, may require registration) at InfoSecurity Europe 2015, Lee Lawson discussed some of the indicators that are very likely available to you right now, particularly if you've done nothing to modify the default audit configuration on Windows systems.

Not long ago, the folks at Rapid7 shared this video, describing four indicators you could use to detect lateral movement within your infrastructure.  Unfortunately, two of them are not logged as a result of the default audit configuration of Windows systems, and the presenter doesn't mention alternate Windows Event Log source/ID pairs you can look for instead.

There are a lot of ways to hunt for indications of activity, such as lateral movement, but what needs to happen is that admins need to start looking, even if it means building their list of indicators (and hopefully automating as much of it as makes sense to do...) over time.  If you don't know what to look for, ask someone.
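The "build your list of indicators and automate it" idea can be sketched simply.  In the sketch below, the (source, ID) pairs are commonly cited lateral-movement-related events, and the record layout is a stand-in assumption for whatever your log export actually produces:

```python
# Watch-list of (provider, event ID) pairs often tied to lateral
# movement; grow this list over time as you learn your environment.
WATCHLIST = {
    ("Microsoft-Windows-Security-Auditing", 4624),  # logons (check types 3/10)
    ("Microsoft-Windows-Security-Auditing", 4697),  # service install (if audited)
    ("Microsoft-Windows-Security-Auditing", 4698),  # scheduled task created
    ("Service Control Manager", 7045),              # service install, System log
}

def hunt(records):
    # records: iterable of dicts; the field names here are assumed,
    # and would be adapted to your export format.
    return [r for r in records if (r["provider"], r["id"]) in WATCHLIST]

sample = [
    {"provider": "Service Control Manager", "id": 7045, "host": "ws12"},
    {"provider": "Microsoft-Windows-Security-Auditing", "id": 4688, "host": "ws12"},
]
for hit in hunt(sample):
    print(hit["host"], hit["id"])  # ws12 7045
```

Note the Service Control Manager/7045 entry...that record lands in the System Event Log by default, which makes it a useful fallback when Security auditing for 4697 isn't enabled.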

If someone were to ask me (this is my opinion, and should not be misconstrued as a statement of my employer's opinion or business model) what was the one thing they could do to increase their odds of detecting malicious behavior, I'd say, "install Sysmon", but that's predicated on someone actually looking at the logs.

Addendum: @Cyborg_DFIR over on Twitter read the original version of this post (prior to the addendum) and felt that I spent more time talking about hunting than actually describing what to hunt for.  Fair enough.  I don't dispute that.  But what I will say is that I, and others, have talked about what to look for.  A lot.  Does that mean it should stop?  No, of course not.  Just because something's been talked about before doesn't mean that it doesn't bear repeating or being brought up again.

During July 2013, I wrote 12 articles whose titles started with "HowTo:" and went on to describe what to look for, if you were looking for specific things.  More recently, I addressed the topic of detecting lateral movement within the last month.

So, to @Cyborg_DFIR and others who have similar thoughts, questions or comments...I'm more than willing to share what I know, but it's so much easier if you can narrow it down a bit.  Is there something specific you're interested in hearing about...or better yet, discussing?

Thursday, June 11, 2015

RegRipper plugin update

I just pushed out an update to the appcompatcache.pl plugin, and committed it to the Github archive.  The update was based on a request I'd received to make the output of the tool a bit more manageable, and that was right along the lines of something I'd been thinking about doing for some time, anyway.

In short, the update simply puts the output for each entry on a single line, with the app path and name first, then the date, then other data (that's specific to 32-bit XP, actually), and finally, the executed flag.  Each segment is separated by two spaces.

So, what does this mean?  Well, the format puts everything on one line, so if you redirect the output to a file, any searches you run will give you the entire line, not just the first line (as with the old format).

Another way to use this new format is the way I like to use it, to determine if there's anything in the ShimCache data that requires my attention:

rip.exe -r d:\cases\local\system -p appcompatcache | find "programdata" /i

Done.  That's it.  Using DOS commands, it's quick and easy to run through a data sample to see if there's anything of interest.
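If you've redirected the plugin output to a file, the same quick triage works outside of DOS, too.  Here's a short Python sketch doing the equivalent case-insensitive filtering; the sample lines are made up, and the two-space field separator follows the one-line-per-entry format described above:

```python
# Case-insensitive substring filter over saved one-line-per-entry
# output (fields separated by two spaces); equivalent to the DOS
# find /i one-liner.  Sample lines are illustrative.
def grep(lines, needle):
    needle = needle.lower()
    return [ln for ln in lines if needle in ln.lower()]

output = [
    "C:\\Windows\\system32\\calc.exe  2015-05-26 05:31:59  Executed",
    "C:\\ProgramData\\foo\\bad.exe  2015-05-26 05:35:05  Executed",
]
for hit in grep(output, "programdata"):
    print(hit.split("  ")[0])  # C:\ProgramData\foo\bad.exe
```

Because each entry is on a single line, a match gives you the whole record...path, date, and flag...rather than just a fragment.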

What techniques do you use during Registry analysis?

Sunday, June 07, 2015


If you haven't heard, the new SANS DFIR "Evidence of..." poster is available.  I can see looking at the second page of the poster that they've added some items of interest that have been talked about recently.  For example, under "Browser Usage", I see mention of Mari's Google Analytics Cookies, and under "Program Execution", I see a reference to the AmCache.hve and the RecentFileCache.bcf files.

What's New in Windows 10
Speaking of "evidence of...", one question I see quite often is, "...what's new in Windows {insert newest version number}?"  Some folks at Champlain College have written up a Windows 10 Forensics guide.

The Problem with RegRipper 
I was reading herrcore's blog post, in which he describes malware that persists via a user's shell extensions, and came across the following statement in the post:

The problem with the two tools I mentioned; RegRipper ( plugin) and Autoruns is that they rely on the Shell Extension to be registered using the standard method with HKEY_CLASSES_ROOT.

This quote is taken from the first sentence under a header that says, "A Blind Spot in our Incident Response Tools".

Since I first released RegRipper, my intention was that it would be community-based.  That is, by providing a framework such as RegRipper, the strength of the tool would be not simply that it could be downloaded and blindly pointed at Registry hives, but rather that analysts would either write and contribute their own plugins, or request (providing sample data, as well) that plugins be developed.  That's right.  I totally get that not everyone programs in Perl...which is why I provide Windows executable files for the framework.  This is also why, if someone doesn't program in Perl but strongly believes that they have a good idea for a plugin, all they need to do is write up a concise description of what it is they'd like to see, and email it to me along with some sample data.  When this has happened, I've been able to turn around a functioning plugin pretty quickly...usually within 4 hrs, and often within 1 hr.

This sort of thing has happened before.  At the SANS DFIR Summit in 2012, I sat in a presentation, during which the speaker stated, "...RegRipper does not have a plugin for {insert functionality}...".  That speaker also stated that RegRipper "does not scale to the enterprise" (which it was NEVER designed to do); however, that speaker had never reached out to me to say, "...here's a plugin I wrote...", nor did they ever ask me for such functionality in a plugin.  I did find out after the presentation that the presentation had been written several months prior, and that a plugin had been written...the speaker had simply not updated their materials.

I get that there are some roadblocks to having a community-based tool...I completely understand that not everyone programs (nor wants to learn), and that of those who do, not all program in Perl.  That's okay.  I also understand that a major roadblock for a lot of DFIR analysts is simply...communicating.  There are a lot of reasons for this, but the fact is that most DFIR analysts simply do not want to communicate with others within the community.

I would suggest that the real "blindspot" or "problem" isn't whether or not RegRipper (or any other tool) contains specific functionality, but rather, what we, as analysts, are willing to do to communicate with each other.

That being said, I contacted herrcore, and as a result of an email exchange, he sent me some sample data.  I'm researching his findings a little bit more to see if I can modify a current plugin, or if it would make sense to write a new one.

In the meantime, here are some links where I've discussed RegRipper in this blog:
Mar, 2011 - Using RegRipper
Jul, 2012 - Thoughts on RegRipper Support
Aug, 2012 - RegRipper Updates

Addendum, 8 June: This morning, I updated a plugin, and created two others, and committed them to the plugin repository.

inprocserver.pl - I updated this plugin by adding "programdata" to the @alerts list in the alertCheckPath() function, and ran it against the test data that herrcore provided.  The result was:

ALERT: inprocserver: programdata found in path: c:\programdata\{9a88e103-a20a-4ea5-8636-c73b709a5bf8}\ieapfltr.dll

cached.pl - new plugin; run it against the NTUSER.DAT hive to get the values from the \Shell Extensions\Cached key.  Running it via rip.exe displays the time value, and the two available CLSID values; I added a simple lookup for the second one.  As such, the output appears as follows:

Tue May 26 05:31:59 2015  First Load: {BDFA381D-D8C6-441A-BA17-95EB7FEBEA81} (IDriveFolderExt)
Tue May 26 05:34:57 2015  First Load: {9C73F5E5-7AE7-4E32-A8E8-8D23B85255BF} (IShellFolder)
Tue May 26 05:34:57 2015  First Load: {7007ACC7-3202-11D1-AAD2-00805FC1270E} (IShellFolder)
Tue May 26 05:35:05 2015  First Load: {F6BF8414-962C-40FE-90F1-B80A7E72DB9A} (IDriveFolderExt)

Simple enough.  However, if you use your command line fu, you can do something like this:

rip.exe -r d:\cases\herrcore\ntusertest2.dat -p cached | find "F6BF"
Launching cached v.20150608
Tue May 26 05:35:05 2015  First Load: {F6BF8414-962C-40FE-90F1-B80A7E72DB9A} (IDriveFolderExt)

cached_tln.pl - new plugin; similar to cached.pl, except that this plugin's output can be added directly to a timeline.  Also, this plugin does not have the lookup that cached.pl has...below is an example of the output:

rip.exe -r d:\cases\herrcore\ntusertest2.dat -p cached_tln -u keydet89 | find "F6BF"
Launching cached_tln v.20150608
1432618505|REG||keydet89|Cached Shell Ext First Load: {F6BF8414-962C-40FE-90F1-B80A7E72DB9A}
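The "_tln" output above uses the five-field TLN format (epoch time|source|host|user|description), which is what lets it drop straight into a timeline.  As a quick sanity check, here's a short sketch that parses that exact line and confirms the epoch matches the human-readable timestamp from the non-TLN output:

```python
from datetime import datetime, timezone

# Parse the TLN line shown above: epoch|source|host|user|description.
# Converting the epoch (UTC) should reproduce the human-readable
# timestamp from the other plugin's output.
line = ("1432618505|REG||keydet89|Cached Shell Ext First Load: "
        "{F6BF8414-962C-40FE-90F1-B80A7E72DB9A}")
epoch, source, host, user, desc = line.split("|", 4)
when = datetime.fromtimestamp(int(epoch), tz=timezone.utc)
print(when.strftime("%a %b %d %H:%M:%S %Y"))  # Tue May 26 05:35:05 2015
```

The empty third field is simply the host name, which wasn't supplied on the command line (only -u for the user was).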

So, there you have it.  Typical caveats apply...such as "..these are based on extremely limited test data..", etc.

Also, something I wanted to mention as a result of looking through the hives herrcore sent me was that there were other indicators added to the USRCLASS.DAT hive, as seen in the image to the left.  You can see that in addition to the CLSID\{GUID}\InprocServer32 key, another key was added with the path Drive\ShellEx\FolderExtensions\{GUID}.

Thursday, May 28, 2015

Detecting Lateral Movement

Almost two years ago, I posted this article that addressed how to track lateral movement within an infrastructure.  At the time, I'd been using this information successfully during engagements, and I still use it today.

This morning, I saw this video from Rapid7, and I thought that Mike did a great job with the presentation.  Mike made some very good points during his presentation.  For example, "SMB" is native to a Windows infrastructure, and with the right credentials, an adversary can go just about anywhere they please.

There were some things missing in the presentation, some caveats that need to be mentioned; I do understand that they were likely left out for the sake of time.  However, they are important.  For example:

Security-Auditing/4698 events - Scheduled Task creation; under Advanced Security Audit Policy settings, for Object Access, you need to have Audit Other Object Access Events enabled for this event to appear in your Windows Event Logs.

Security-Auditing/4697 events - Service installation; similar to the previous events, systems are not configured by default to audit for service installation via the Security Event Log.

So, the take-away here is that in order for these (and other) events to be useful, what admins need to do is properly configure auditing on systems, as well as employ a SIEM with some sort of filtering capability.  Increasing auditing alone will not be useful...I've seen that time and time again when an incident is identified; auditing is ramped up suddenly, and the Security Event Logs start filling up and rolling over in a matter of a few hours, causing valuable information to be lost.  The best thing to do is to enable auditing that makes sense within your infrastructure ahead of time, employing the appropriate settings (what to audit, increasing the default size of the Windows Event Log files, etc.) before an incident occurs.

Also, consider the use of MS's Sysmon, sending the collected data to a SIEM (Splunk??).  Monitoring process creation (including the command line) is extremely valuable, and not just in incident response.  For IR, having the process creation information available (along with a means to monitor it in a timely manner) reduces IR engagements from days or weeks to hours or even minutes.  If setting up Sysmon, Splunk, and filters is too daunting a task, consider employing something like Carbon Black.
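As an example of why process creation data is so valuable, here's a hedged sketch of triaging Sysmon Event ID 1 (process creation) records for a suspicious parent/child pairing, such as an Office application spawning a shell.  The dict layout and field names are assumptions standing in for whatever your SIEM or export actually produces:

```python
# Sketch: flag process-creation records where an Office app spawns a
# shell or script host...a pattern commonly associated with weaponized
# document attachments.  Field names are assumed for illustration.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe"}

def triage(events):
    hits = []
    for e in events:
        # Reduce full paths to bare executable names for matching.
        parent = e["parent_image"].rsplit("\\", 1)[-1].lower()
        child = e["image"].rsplit("\\", 1)[-1].lower()
        if parent in SUSPICIOUS_PARENTS and child in SUSPICIOUS_CHILDREN:
            hits.append(e)
    return hits

sample = [{"parent_image": "C:\\Program Files\\Microsoft Office\\WINWORD.EXE",
           "image": "C:\\Windows\\System32\\cmd.exe"}]
print(len(triage(sample)))  # 1
```

A filter this simple obviously isn't the whole answer...but it illustrates how having the process creation data in the first place turns "did anything run?" from a dead end into a query.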

Thanks to Rapid7 for sharing the video...there's some great information.

Description of Security Events in Windows 7/Windows Server 2008 R2