Thursday, October 08, 2015

Threat Intel and Threat Hunting Conferences in 2016

I'm looking for input from the community, with respect to conferences in 2016 that cover DFIR, threat intelligence, and threat hunting.

Members of the team I work with have some pretty significant content that we're looking to share, so I thought I'd reach to the community and see what conferences are out there that folks are looking to (for content) in 2016.

So, if you're looking at conferences in 2016 that cover digital forensics, incident response, as well as targeted threat hunting and response, I'd appreciate hearing about them.


Tuesday, October 06, 2015

Tools, Links - ESEDB, etc.

CapLoader - This looks like an interesting tool...and another one...for carving packets from memory dumps.  I'm not clear as to how this tool differs from the Volatility modules, in particular netscan and Jamaal's ethscan, but it does seem interesting.

If you're curious as to how to install Rekall on Windows systems, see this blog post.

I'm not entirely sure how this Python script differs from RegRipper's plugin, but it does offer an interesting alternative, if you're looking for one.  The output does appear similar to what that plugin produces.

MFT Parsers
Mari posted an excellent blog article based on her review of several MFT parsing tools.  One of the key factors to Mari's post is that she understands what it is she's looking at (MFT records), so she's able to evaluate the various tools for her own use, rather than simply running them and not questioning the output.

This is the sort of thing that the DFIR community would benefit from greatly, if more practitioners engaged in it, or if more practitioners responded to it.  Honestly evaluating tools can be of significant benefit to the community, particularly when they're evaluated in light of a particular practitioner's analysis process.  It's even more beneficial if that process is shared, so that others can determine the usefulness of the tools for themselves, and decide if those tools can/should be incorporated into their own analysis process.

Yara is a tool that's been available for some time, and is worth mentioning again.  A while back, Corey Harrell asked a question via social media regarding how to detect the presence of web shells on a system, using only the HTTP access logs.  Shortly thereafter, Aaron Shelmire posted this article to the ThreatStream blog.  In the article, Aaron mentions a couple of methods for detecting web shells using only the HTTP access logs, in accordance with Corey's question, and his suggestions are very valuable.  One method of detection he mentioned, as well, was using a change control and inventory process, or a file system monitoring tool, to detect new pages being added to web directories.  Aaron also mentions that searching for patterns within files can produce false positives, but if you use something like Yara, you can reduce those false positives.  Yara can be used by sysadmins who want to keep ahead of things or those who want to perform hunting, as well as by analysts and responders engaged in DFIR analysis.
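For those who want to experiment with the log-only approach, here's a minimal Python sketch of one of the detection ideas: flag URIs that only one or two client IPs ever request, since legitimate pages tend to draw many visitors while a web shell is typically accessed only by the attacker.  The regex assumes an Apache-style log format, and the threshold is my own starting point, not anything from Aaron's article...adjust both for your environment.

```python
import re
from collections import defaultdict

# Matches the client IP and requested URI in an Apache-style combined log
# line; this pattern is an assumption, adjust it for IIS W3C or other formats.
LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def possible_shell_uris(log_lines, max_clients=2):
    """Return URIs requested by very few distinct client IPs.  A page that
    only one or two IPs ever touch is worth a closer look as a potential
    web shell; legitimate content tends to draw many visitors."""
    clients = defaultdict(set)
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m:
            ip, uri = m.groups()
            clients[uri].add(ip)
    return {uri for uri, ips in clients.items() if len(ips) <= max_clients}
```

Anything this flags still needs to be run down by hand; the point is to shrink the haystack, not to render a verdict.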

An excellent resource for web shell rules can be found here.

ESEDB Databases
Windows systems utilize ESE format databases more and more, and as such, it's imperative that analysts be able to identify the databases, as well as extract data for incorporation into their analysis processes.

Examples of ESE format databases include the IE10+ WebCacheV01.dat web history database. Also, Brent Muir recently posted regarding Windows 10 Cortana Notification Center Forensics.

A note on Brent's post...I have a Windows 10 laptop that was upgraded from Win7; I opened the notification center as he described towards the end of the post.  Even after rebooting the system, the value containing the timestamp does not exist in my NTUSER.DAT.  However, I'm also not running Cortana, as this is an older laptop, so that may have an impact as to whether or not the value gets created.

There's also the Search Index database; reading through the post responses, you can see how the contents of this database can be useful to a forensic analyst.  One of the responses lists tools that can be used to parse the database.

So, at this point, that's three databases that use the ESE format, all of which can provide significant value to a forensic analyst.
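Identifying ESE format databases is straightforward, because the file header carries a documented signature (0x89ABCDEF, stored little-endian at offset 4, immediately after the 4-byte header checksum).  A minimal Python sketch (the function names are mine) might look like:

```python
import struct

ESE_SIGNATURE = 0x89ABCDEF  # documented ESE database header signature

def is_ese_header(data):
    """Check the 4-byte signature stored little-endian at offset 4 of the
    header (the first 4 bytes of the file are a header checksum)."""
    return len(data) >= 8 and struct.unpack("<I", data[4:8])[0] == ESE_SIGNATURE

def is_ese_database(path):
    """Convenience wrapper: read the first 8 bytes of a file and test them."""
    with open(path, "rb") as f:
        return is_ese_header(f.read(8))
```

Running this across a directory of unknown files is a quick way to catch ESE databases that don't carry an obvious extension.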

Parsing Tools
WoanWare ESEDBViewer - deprecated, but still a good visual tool.

libesedb - needs to be compiled for the Windows platform; most of the web sites that I've found that mention the tool don't provide a compiled copy

esentutl - native tool that lets you copy and/or recover an ESEDB file.  While one method of obtaining a copy of an ESEDB file for analysis might be to create a VSC and copy the file in question from the shadow copy, this option might not always be available.  As such, this native tool may be of significant use to an analyst.

It looks as if Jon Glass has updated his Python script for parsing the WebCacheV01.dat file.
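As a hedged sketch of how that copy might be scripted: esentutl's /y switch copies a database file, with /d naming the destination, and newer Windows builds add a /vss switch for snapshotting locked files...switch availability varies by OS version, so verify against the esentutl /? output on the system you're working on.  The wrapper below deliberately separates building the command line from running it, so the construction can be sanity-checked off-box:

```python
import subprocess

def esentutl_copy_cmd(src, dst, use_vss=False):
    """Build the esentutl command line to copy a database file.
    /y copies; /d names the destination.  The /vss switch (to snapshot a
    locked file) exists only on newer Windows builds...verify locally."""
    cmd = ["esentutl.exe", "/y", src, "/d", dst]
    if use_vss:
        cmd.append("/vss")
    return cmd

def run_cmd(cmd):
    # Kept separate from command construction so the logic above can be
    # tested on a non-Windows analysis box.
    return subprocess.run(cmd, capture_output=True, text=True)
```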

Addendum, 7 Oct: I found this post over at Sanderson Forensics this morning...if you're analyzing Windows systems, you want to be sure to read the post and include the tool in your toolkit...

Saturday, September 12, 2015

Updates, Links, etc.

RegRipper Plugin Updates
I updated a plugin recently and provided a new one, and thought I'd share some information about those updates here...

The updated plugin was originally written in 2011; the update that I added to the code was to specifically look for and alert on the value described in this blog post.  So, four years later, I added a small bit of code to the plugin to look for something specific in the data.

The new plugin can be run against any hive; it has specific sections in its code that describe what's being looked for, in which hive, along with references as to the sources from which the keys, values or data in question were derived...that is, why was I looking for them in the first place?  Essentially, these are all artifacts I find myself looking for time and again, and I figured I'd just keep them together in one plugin.  If you look at the plugin contents, you'll see that I copied code from another plugin and included it.

There are a couple of other plugins I thought I'd mention, in case folks hadn't considered using them....

The plugin was written to address malware maintaining configuration information in a Registry value, as described in this Symantec post.  You can run this plugin against any hive.

The rlo plugin is an interesting one; its use was illustrated in this Secureworks blog post.  As you can see in the figure to the left, there are two Registry keys that appear to have the same name.

In testing for this particular issue, I had specifically crafted two Registry key names, using the method outlined in the Secureworks blog post.  This allowed me to create some useful data that mimicked what we'd seen, and provided an opportunity for more comprehensive testing.

As you can see from the output of the plugin listed below, I had also crafted a Registry value name using the same method, to see if the plugin would detect that, as well.

C:\Perl\rr>rip -r d:\cases\local\ntuser.dat -p rlo
Launching rlo v.20130904
rlo v.20130904
(All) Parse hive, check key/value names for RLO character

RLO control char detected in key name: \Software\gpu.etadp  [gpupdate]
RLO control char detected in key name: \Software\.etadpupg  [gpupdate]
RLO control char detected in value name: \Software\.etadpupg :.etadpupg [gpupdate]

Now, when running the plugin, analysts need to keep in mind that it's looking for something very specific; in this case, indications of the RLO Unicode control character.  What's great about plugins like this is that you can include them in your process, run them every time you're conducting analysis, and they'll alert you when there's an issue.
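The check the plugin performs is conceptually simple: look for the Unicode Right-to-Left Override control character (U+202E) in key and value names.  A minimal Python equivalent (a sketch, not the plugin's actual Perl code) would be:

```python
RLO = "\u202e"  # Unicode Right-to-Left Override control character

def find_rlo(names):
    """Return any key/value names containing the RLO character.  When
    rendered, everything after U+202E displays reversed, so a name like
    'gpu' + RLO + 'etadp' is shown to the user as 'gpupdate'."""
    return [n for n in names if RLO in n]
```

The hard part isn't the check itself, it's enumerating every key and value name in the hive so the check can be applied; that's what a tool like RegRipper gives you for free.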

Just as a PSA, I have provided these plugins but I haven't updated any of the profiles...I leave that up to the users.  So, if you're downloading the plugins folder and refreshing it in place, do not expect to see the new plugins appear in any of the profiles.

Anti-Forensic Malware
I ran across this InfoSecurity Magazine article recently, and while the title caught my attention, I was more than a bit surprised at the lack of substance.

There are a couple of statements in the blog post that I wanted to address, and share my thoughts on...

Increasingly, bad actors are using techniques that leave little trace on physical disks. And unfortunately, the white hats aren’t keeping up: There’s a shortage of digital forensics practitioners able to investigate these types of offensives.

As to the first sentence, sometimes, yes.  Other times, not so much.

The second statement regarding "white hats" is somewhat ambiguous, don't you think?  Who are the "white hats"?  From my perspective, if "white hats" are the folks investigating these breaches, it's not so much that we aren't keeping up, as it is that the breaches themselves aren't being detected in a timely manner, due to a lack of instrumentation.  By the time the "white hats" get the call to investigate the breach, a great deal of the potential evidence has been obviated.

Finally, I don't know that I agree with the final statement, regarding the shortage of practitioners.  Sometimes, there's nothing to investigate.  As I described in a recent blog post, when putting together some recent presentations, I looked at the statistics in annual security trends reports.  One of the statistics I found interesting was dwell time, or median time to detection.  The point I tried to make in the presentations was that when consultants go on-site to investigate a breach, they're able to see indicators that allow them to identify these numbers.  For example, in the M-Trends 2015 report, there was an infrastructure that had been compromised 8 years before the compromise was detected.

I would suggest that it's not so much a shortage of practitioners able to investigate these breaches, it's a lack of management oversight that prevents the infrastructure from being instrumented in a manner that provides for timely detection of breaches.  By the time some breaches are detected (many through external, third party notification), the systems in question have likely been rebooted multiple times, potentially obviating memory analysis altogether.

If a crime is committed and the perpetrator had to walk across a muddy field to commit that crime (leaving footprints), and that field is dug up and paved over with a parking lot before the crime is reported, you cannot then say that there aren't enough trained responders able to investigate the crime.

...seen a rise in file-less malware, which exists only in volatile memory and avoids installation on a target’s file system.

"File-less malware"?  Like Poweliks?  Here's a TrendMicro blog post regarding PhaseBot, which references a TrendMicro article on Poweliks.  Sure, there may not be a file on disk, but there's something pulled from the Registry, isn't there?

Malware doesn't magically appear out of nowhere.  If you take a system off of the main network and reboot it, and find indications of malware persisting, then it's somewhere on the system.  Just because it runs in memory, with no obvious indications of the malware within the file system, doesn't mean that it can't be found.
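If you want to hunt for Poweliks-style payloads stashed in Registry value data, one hedged starting point is to flag values whose data is unusually long and high-entropy; legitimate configuration data tends to be short, low-entropy strings like file paths.  The thresholds below are arbitrary assumptions on my part and will need tuning to your environment:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Bits of entropy per character/byte of the given sequence."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_packed(value_data, min_len=512, min_entropy=4.5):
    """Crude heuristic: very long, high-entropy Registry value data is
    unusual for configuration settings, and is how Poweliks-style malware
    stashes its payload.  The thresholds are arbitrary starting points."""
    return len(value_data) >= min_len and shannon_entropy(value_data) >= min_entropy
```

As with any heuristic, what this returns is a lead, not a finding; the analyst still has to pull the value data and figure out what it actually is.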

At the recent HTCIA 2015 Conference, I attended Ryan's presentation on "Hunting in the Dark", and I found it fascinating that at a sufficient level of abstraction, those of us who are doing "hunting" are doing very similar things; we may use different terms to describe it (what Ryan refers to as "harvesting and stacking", the folks I work with call "using strategic rules").

Ryan's presentation was mostly directed at folks who work within one environment, and was intended to address the question of, "How do I get started?"  Ryan had some very good advice for folks in that position...start small, take a small bite, and use it to get familiar with your infrastructure to learn what is "normal", and what might not be normal.

Along those lines, a friend of mine recently asked a question regarding detecting web shells in an environment using only web server logs.  Apparently in response to that question, ThreatStream posted an article explaining just how to do this.  So this is an example of how someone can start hunting within their own environment, with limited resources.  If you're hunting for web shells, there are a number of other things I'd recommend looking at, but the original question was how to do so using only the web server logs.

The folks at ThreatStream also posted this article regarding "evasive maneuvers" used by a threat actor group.  If you read the article, you will quickly see that it is more about the obfuscation techniques used in the malware and its communications, which can significantly affect network monitoring.  Reading the article, many folks will likely take a look at their own massive lists of C2 domain names and IP addresses, and append those listed in the article to that list.  So, like most of what's put forth as 'threat intelligence', articles such as this are really more a means for analysts to say, "hey, look how smart I am, because I figured this out...".  I'm sure that the discussion of assembly language code is interesting, and useful to other malware reverse engineers, but how does a CISO or IT staff utilize the contents of the third figure to protect and defend their infrastructure?

However, for anyone who's familiar with the Pyramid of Pain, you'll understand how little efficacy there is in a bigger list of items that might...and do...change quickly.  Instead, if you're interested in hunting, I'd recommend looking for items such as the persistence mechanism listed in the article, as well as monitoring for the creation of new values (if you can).

Like I said, I agree with Ryan's approach to hunting, if you're new to it...start small, and learn what that set of artifacts looks like in your environment.  I did the same thing years ago, before the terms "APT" and "hunting" were in vogue...back then, I filed it under "doing my job".  Essentially, I wrote some small code that would give me a list of all systems visible to the domain controllers, and then reach out to each one and pull the values listed beneath the Run keys, for the system and the logged in user.  The first time I ran this, I had a pretty big list, and as I started seeing what was normal and verifying entries, they got whitelisted.  In a relatively short time, I could run this search during a meeting or while I was at lunch, and come back to about half a page of entries that had to be run down.
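The whitelisting step of that sweep is easy to sketch.  Assuming you've already pulled the Run key values from each system (via WMI, remote Registry access, or an agent...the collection mechanism is out of scope here), the triage logic reduces to a diff against known-good entries.  The function and structure names below are my own illustration, not the code I wrote back then:

```python
def triage_run_entries(host_entries, whitelist):
    """host_entries: {hostname: {value_name: command}} collected from the
    HKLM and HKCU Run keys; whitelist: commands already verified as normal.
    Returns only the entries that still need a human to run them down."""
    findings = {}
    for host, entries in host_entries.items():
        odd = {name: cmd for name, cmd in entries.items()
               if cmd not in whitelist}
        if odd:
            findings[host] = odd
    return findings
```

The first sweep returns a mountain; every entry you verify and whitelist shrinks the next sweep, until what's left fits on half a page.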

I ran across this post over at PostModernSecurity recently, and I think that it really illustrates some things about the #DFIR community beyond just the fact that these tools are available for use.

The author starts his post with:

...I covet and hoard security tools. But I’m also frugal and impatient,..

Having written some open source tools, I generally don't appreciate it when someone "covets and hoards" what I've written, largely because in releasing the tools, I'd like to get some feedback as to if and how the tool fills a need.  I know that the tool meets my needs...after all, that's why I wrote it.  But in releasing it and sharing it with others, I've very often been disappointed when someone says that they've downloaded the tool, and the conversation ends right there, at that point...suddenly and in a very awkward manner.

Then there's the "frugal and impatient" part...I think that's probably true for a lot of us, isn't it?  At least, sometimes, that is.  However, there are a few caveats one needs to keep in mind when using tools like those the author has listed.  For instance, what is the veracity of the tools? How accurate are they?

More importantly, I saw the links to the free "malware analysis" sites...some referenced performing "behavioral analysis".  Okay, great...but more important than the information provided by these tools is how that information is interpreted by the analyst.  If the analyst is focused on free and easy, the question then becomes, how much effort have they put into understanding the issue, and are they able to correctly interpret the data returned by the tools?

For example, look at how often the ShimCache or AppCompatCache data from the Windows Registry is misinterpreted by analysts.  That misinterpretation then becomes the basis for findings that then become statements in reports to clients.

There are other examples, but the point is that if the analyst hasn't engaged in the academic rigor to understand something and they're just using a bunch of free tools, the question then becomes, is the analyst correctly interpreting the data that they're being provided by those tools?

Don't get me wrong...I think that the list of tools is a good one, and I can see myself using some of them at some point in the future.  But when I do so, I'll very likely be looking for certain things, and verifying the data that I get back from the tools.

Saturday, September 05, 2015

Registry Analysis

I gave a presentation on Registry analysis at the recent HTCIA2015 Conference, and I thought that there were some things from the presentation that might be worth sharing.

What is Registry analysis?  
For the purposes of DFIR work, Registry analysis is the collection and interpretation of data and metadata from Registry keys and values.

The collection part is easy...it's the interpretation part of that definition that is extremely important.  In my experience, I see a lot of issues with interpretation of data collected from the Registry.  The two biggest ones are what the timestamps associated with ShimCache entries mean, and what persistence via a particular key path really means.

Many times, you'll see the timestamps embedded in the ShimCache data referred to as either the execution time, or "creation/modification" time.  Referring to this timestamp as the "execution time" can be very bad, particularly if you're using it to demonstrate the window of compromise during an incident, or the time between first infection and discovery.  If the file is placed on a system and timestomped prior to being added to the ShimCache, or the method for getting it on the system preserves the original last modification time, that could significantly skew your understanding of the event.  Analysts need to remember that for systems beyond 32-bit XP, the timestamp in the ShimCache data is the last modification time from the file system metadata; for NTFS, this means the $STANDARD_INFORMATION attribute within the MFT record.
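For reference, the ShimCache timestamp is stored as a 64-bit FILETIME (100-nanosecond intervals since January 1, 1601 UTC), so parsers convert it along these lines; the comment in the sketch is the part analysts tend to get wrong:

```python
from datetime import datetime, timedelta

EPOCH_1601 = datetime(1601, 1, 1)

def filetime_to_datetime(ft):
    """Convert a 64-bit FILETIME (100-nanosecond intervals since
    1601-01-01 UTC) to a datetime.  On systems beyond 32-bit XP, the
    ShimCache timestamp decoded this way is the file's
    $STANDARD_INFORMATION last modification time...NOT a record of when
    the file executed."""
    return EPOCH_1601 + timedelta(microseconds=ft // 10)
```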

Ryan's slides include some great information about the ShimCache data, as does the original white paper on the subject.

With respect to persistence, I see a lot of write-ups that state that malware creates persistence by creating a value beneath the Run key in the HKCU hive, and the write-up then states that that means that the malware will be started again the next time the system reboots.  That's not the case at all...because if the persistence exists in a user's hive, then the malware won't be reactivated following a reboot until that user logs in.  I completely understand how this is misinterpreted, particularly (although not exclusively) by malware analysts...MS says this a lot in their own malware write-ups.  While simple testing will demonstrate otherwise, the vast majority of the time, you'll see malware analysts repeating this statement.

The point is that not all of the persistence locations within the Registry allow applications and programs to start on system start.  Some require that a user log in first, and others require some other trigger or mechanism, such as an application being launched.  It's very easy...too easy...to simply make the statement that any Registry value used for persistence allows the application to start on system reboot, because there's very little in the way of accountability.  I've seen instances during incident response where malware was installed only when a particular user logged into the system; if the malware used a Registry value in that user's NTUSER.DAT hive for persistence, the system was rebooted, and the user account was not used to log in, then the malware would not be active.  Making an incorrect statement about the malware could significantly impact the client's decision-making process (regarding AV licenses), or the decisions made by regulatory or compliance bodies (i.e., fines, sanctions, etc.).

Both of these items, when misinterpreted, can significantly impact the overall analysis of the incident.

Why do we do it?
There is an incredible amount of value in Registry analysis, and even more so when we incorporate it with other types of analysis.  Registry analysis is rarely performed in isolation; rather, most often, it's used to augment other analysis processes, particularly timeline analysis, allowing analysts to develop a clearer, more focused picture of the incident.  Registry analysis can be a significant benefit, particularly when we don't have the instrumentation in place that we would like to have (i.e., process creation monitoring, logging, etc.), but analysts also need to realize that Registry analysis is NOT the be-all-end-all of analysis.

In the presentation, I mention several of the annual security trend reports that we see; for example, from TrustWave, or Mandiant.  My point of bringing these up is that the reports generally have statistics such as dwell time or median number of days to detection, statistics which are based on some sort of empirical evidence that provides analysts with artifacts/indicators of an adversary's earliest entry into the compromised infrastructure.  If you've ever done this sort of analysis work, you'll know that you may not always be able to determine the initial infection vector (IIV), tracking back to say, the original phishing email or original web link/SWC site.  Regardless, this is always based on some sort of hard indicator that an analyst can point to as the earliest artifact, and sometimes, this may be a Registry key or value.

Think about it...for an analyst to determine that the earliest date of compromise was...for example, in the M-Trends 2015 Threat Report, 8 yrs prior to the team being called in...there has to be something on the system, some artifact that acts as a digital muddy boot print on a white carpet.  The fact of the matter is that it's something that the analyst can point to and show to another analyst in order to get corroboration.  This isn't something where the analysts sit around rolling D&D dice...they have hard evidence, and that evidence may often be Registry keys, or value data.

Wednesday, September 02, 2015

HTCIA2015 Conference Follow up

I spoke at the HTCIA 2015 conference, held in Orlando, FL, on Mon, 31 Aug.  In fact, I gave two presentations...Registry analysis, and lateral movement.  You can see the video for the lateral movement presentation I gave at BSides Cincy here...many thanks to the BSides Cincy guys and Adrian.

I haven't spoken at, or attended an HTCIA conference in quite a while.  I had no idea if I was going to make it to this one, between airline delays and tropical storms.  This one was held at the Rosen Shingle Creek Resort, a huge ("palatial" doesn't cover it) conference center..."huge", in the manner of Caesar's Palace.  In fact, there was an Avon conference going on at the same time as the HTCIA conference, and there very well could have been other conferences there, as well.  Given the humidity and volume of rain, having everything you'd need in one location was a very good thing.  In fact, the rain was so heavy on Monday afternoon, after the final presentation, that there were leaks in the room.

After presenting on Monday, I attended Mari's presentation, which I've seen before...however, this is one of those presentations that it pays to see again.  I think that many times when we're deeply engaged in forensic analysis, we don't often think about other artifacts that may be of use...either we aren't aware of them, due to lack of exposure, or we simply forgot.  However, if you're doing ANYTHING at all related to determining what the user may have done on the system, you've got to at least consider what Mari was talking about.  Why?  Well, we all know that browsers have an automatic cache clean-up mechanism; if the user is right at about 19 days since the last cache clean-up in IE, and they do something bad, it's likely that the artifacts of activity are going to be deleted...which doesn't make them impossible to find, just harder.  The cookies that Mari has researched AND provided a tool to collect can illustrate user activity long after the fact, either in specific activity, or simply illustrating the fact that the user was active on the system at a particular time.
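As a hedged illustration of why those cookies matter: the Google Analytics __utma cookie value is six dot-separated fields, three of which are Unix timestamps (first, previous, and current visit).  A minimal parser (field names per the commonly documented layout, not Mari's actual tool) looks like this:

```python
from datetime import datetime, timezone

def parse_utma(value):
    """Split a Google Analytics __utma cookie into its fields.  The three
    timestamps are Unix epoch seconds, so they can place user activity on
    the system long after the browser cache has been cleaned up."""
    parts = value.split(".")
    if len(parts) != 6:
        raise ValueError("unexpected __utma layout: %s" % value)
    dh, visitor, first, prev, cur, count = parts
    to_dt = lambda s: datetime.fromtimestamp(int(s), tz=timezone.utc)
    return {
        "domain_hash": dh,
        "visitor_id": visitor,
        "first_visit": to_dt(first),
        "previous_visit": to_dt(prev),
        "current_visit": to_dt(cur),
        "session_count": int(count),
    }
```

Even when the cached page itself is long gone, those three timestamps can still show that the user was active on a given site at a given time.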

Also, Mari is one of the very few denizens of the DFIR community who finds something, digs into it, researches it and runs it down, then writes it up and provides a tool to do the things she talked about in her write-up.  This is very rare and unique within the community, and extremely valuable.  Her presentation on Google Analytics cookies could very well provide coverage of a gap that many don't even know exist in their analysis.

I was also able to see Ryan's presentation on Tuesday morning.  This one wasn't as heavily attended as the presentations on Monday, which is (I guess) to be expected.  But I'll tell you...a lot of folks missed some very good information.  I attended for a couple of reasons...one was that Ryan is a competitor, as much as a compatriot, within the community.  We both do very similar work, so I wanted to see what he was sharing about what he does.  I'm generally not particularly interested in presentations that talk about "hunting", because my experience at big conferences has often been that the titles of presentations don't match up with the content, but Ryan's definitely did so.  Some of what I liked about his presentation was how he broke things down...rather than going whole hog with an enterprise roll-out of some commercial package, Ryan broke things down with, "here are the big things I look for during an initial sweep...", and proceeded from there.  He also recommended means for folks who want to start hunting in their own organization, and that they start small.  Trying to do it all can be completely overwhelming, so a lot of folks don't even start.  But taking just one small piece, and then using it to get familiar with what things look like in your environment, what constitutes "noise" vs "signal"...that's the way to get started.

What's interesting is that what Ryan talked about is exactly what I do in my day job.  I either go in blind, with very little information, on an IR engagement, or I do a hunt, where a client will call and say, "hey, I don't have any specific information that tells me that I've been compromised, but I want a sanity check...", and so I do a "blind" hunt, pretty much exactly as Ryan described in his presentation.   So it was interesting for me to see that, at a certain level of abstraction, we are pretty much doing the same things.  Now, of course, there are some differences...the exact steps in the process, and even the artifacts that we're looking for or at, may be a little different.  But the fact of the matter is that just like I mentioned in my presentation, when a bad guy "moves through" an environment such as the Windows OS, there are going to be artifacts.  Looking for a footprint here, an over-turned stone there, and maybe a broken branch or two will give you the picture of where the bad guy went and what they did.  For me, seeing what Ryan recommended looking at was validating...because what he was talking about is what I do while both hunting and performing DFIR work.  It was also good to see him recommending ways that folks could start doing these sorts of things in their own environments.  It doesn't take a big commercial suite, or any special simply takes the desire, and the rest of what's needed (i.e., how to collect the information, what to look for, etc.) is all available.

All in all, I had a good time, and learned a lot from the folks I was able to engage with.

Addendum: While not related to the conference, here are some other good slides that provide information about a similar topic as Ryan's...

Friday, August 28, 2015

Updates & Links

HTCIA2015 Presentations
For those of you attending HTCIA2015 (or just interested), I printed my presentations to PDF format and uploaded them to my GitHub site.  Unfortunately, as you'll see, particularly with the Registry analysis presentation, there are slides that are just place holders, so you won't know what is said unless you're actually there.

I recently read this post at the SecurityIntelligence web site, and was more than just a little happy to see a malware write-up that contained host-based indicators that could be used by analysts to determine if a system had been affected by this malware.  The same could be extended to an image acquired from the system, or to the entire infrastructure.

However, something does concern me about the write-up, and is found in the section titled "Dyre's Run Key in Non-Admin Installations".  The write-up states:

Until a few weeks ago, these non-admin installations had Dyre register a run key in the Windows Registry, designed to have it automatically run as soon as the computer is rebooted by the user:

The write-up then goes on to list the user's Run key, located in the NTUSER.DAT hive file.  This goes back to what I've said before about specificity and clarity of language...the malware does not "register a run key"; it creates a value beneath the Run key.  When this occurs, the persistence only works to re-start the malware when the user logs in, not when the system is rebooted.

I know that this seems pedantic, but Registry keys and values have different structures and properties, and are therefore...well...different.  The commands to create or retrieve Registry keys via reg.exe are different from those for values.  If you approached a developer who had no DFIR background and asked them to create a tool to look for a specific Registry key on all systems within an infrastructure, when you really meant value, you'd get a piece of code that likely returned nothing, or incorrect information.

I understand that Registry analysis is one of the least understood areas of DFIR analysis work.  So many Registry indicators are misunderstood and misinterpreted, that I think that it's important that analysts from across the many fields in information security (malware RE, DFIR, etc.) accept a common structure and usage of terminology.

That same section does, however, include the command used to create the Scheduled Task, and what's listed in the write-up provides a great deal of information regarding how an analyst can detect this either on a system, within an acquired image, or across an enterprise.  It can also be used to detect the persistence mechanism being created, if you're using something like SysMon or Carbon Black.

I would say that I'm adding this one to my bag of tricks, but it's already there...the timeline analysis process that I use can already detect this "renovation".  I think that more than anything, I'm just glad to see this level of detail provided by someone doing malware analysis, as it's not often that you see such things.

Plugin Updates
I've recently written a RegRipper plugin that may prove to be helpful, and someone else has updated another plugin...

There is malware out there that modifies the "(Default)" value beneath the HKCR\Network\SharingHandler key, which essentially removes the hand icon from shared resources.  I wrote this plugin recently in order to help analysts determine if the value had been modified.  In the hives that I have available, the value simply points to "ntshrui.dll".

"randomaccess" made some updates to the plugin, and shared them, so I'm including the updated plugin in the distribution.  Thanks to "randomaccess" for providing the plugin...I hope that folks will find the information it provides valuable.

Windows 10
Many of you may have recently upgraded from Win7 to Win10 via the free upgrade...I did.

I know that when I present at conferences, one of the questions I get asked quite often is, "...what's the new hotness in Windows 10?"  Well, I'm providing some links, in part because my thought is that if you don't understand the old hotness (i.e., Registry analysis, ADSs, Jump Lists, etc.), what good is the new hotness?

Some Win10 Forensics Resources
Brent Muir's slides on SlideShare
PDF Document from Champlain
Zena Forensics - Win10 Prefetch files

Sunday, August 16, 2015


RegRipper Plugin Updates
I made some updates to a couple of plugins recently.  One was to the plugin; the update was to add collecting subkey names and LastWrite times from beneath the Nla\Cache\Intranet key.  At this point, I'm not 100% clear on what the dates refer to, but I'm hoping that clarity will come as the data is observed and added to timelines.

I also updated the plugin based on input from Yogesh Khatri.  Specifically, he found that in some cases, there's a string (REG_SZ) value named "DhcpNetworkHint" that, if you reverse the individual nibbles of the string, will spell out the SSID.

This is an interesting finding in a couple of ways.  First, Yogesh found that by reading the string in hex and reversing the nibbles, you'd get the SSID.  That in itself is pretty cool.  However, what this also says is that if you're doing a keyword search for the SSID, that search will not return this value.
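Based on the description above, a decoder is a one-liner: swap the two hex nibbles of each character's byte value.  The sketch below is my own illustration of that transform (the function name and sample strings are hypothetical, not from Yogesh's work or the plugin itself); note that the swap is its own inverse, so the same routine both encodes and decodes:

```python
def decode_dhcp_network_hint(hint: str) -> str:
    """Swap the high and low hex nibbles of each character's byte value.

    Example: 'F' is 0x46; swapping nibbles yields 0x64, which is 'd'.
    The transform is an involution, so applying it twice round-trips.
    """
    return "".join(
        chr(((ord(c) & 0x0F) << 4) | ((ord(c) & 0xF0) >> 4)) for c in hint
    )
```

This also makes clear why a keyword search misses the SSID: each byte of the stored string differs from the corresponding byte of the SSID, so no substring match is possible.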

Corey's most recent blog post, Go Against The Grain, is a pretty interesting read, and an interesting thought.  As a consultant, I'm not usually "in" an infrastructure long enough to try to effect change in this manner, but it would be very interesting to hear how others may have utilized this approach.

"New" Tools
Eric Zimmerman recently released a tool for parsing the AmCache.hve file, which is a "new" file on Windows systems.  The file follows the same format as the more "traditional" Registry hive files, but it's not part of the Registry that we see when we open RegEdit on a live system.

Yogesh Khatri blogged about the AmCache.hve file back in 2013 (here, and then again here).

While Eric's blog post focuses on Windows 10, it's important to point out that the AmCache.hve file was first seen on Windows 8 systems, and I began seeing it in images of Windows 7 systems around the beginning of 2015.  Volatility has a parser for AmCache.hve files found in memory, and RegRipper has had a plugin to parse the AmCache.hve file since Dec, 2013.

I applaud Eric for adding a winnowing capability to his tool; in an age where threat hunting is a popular topic for discussion, data reduction (or, "how do I find the needle in the haystack with no prior knowledge or experience?") is extremely important.  I've tried doing something similar with my own tools (including, to some extent, some RegRipper plugins) by including an alerting capability based on file paths found in various data sources (i.e., Prefetch file metadata, Registry value data, etc.).  The thought behind adding this capability was that items that would likely be of interest to the analyst would be pulled out.  However, to date, the one comment I've received about this capability has been, "...if it says 'temp was found in the path', what does that mean?"
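The alerting idea described above can be sketched in a few lines: match parsed file paths against a watch-list of fragments and surface only the hits.  This is a minimal illustration of the concept, not the actual logic in RegRipper or any of my tools; the fragments and sample paths below are made up for illustration:

```python
# Hypothetical watch-list of path fragments that often warrant a second
# look; a real list would be tuned to the analyst's own process.
SUSPICIOUS_FRAGMENTS = ("\\temp\\", "\\appdata\\", "recycle")

def alerts(paths):
    """Yield (fragment, path) pairs for any path containing a watched fragment."""
    for p in paths:
        lp = p.lower()
        for frag in SUSPICIOUS_FRAGMENTS:
            if frag in lp:
                yield frag, p

# Illustrative sample paths, not from any actual case data.
sample = [
    r"C:\Windows\System32\svchost.exe",
    r"C:\Users\bob\AppData\Local\Temp\a.exe",
]
hits = list(alerts(sample))
```

The second sample path trips two fragments, which hints at the "what does that mean?" problem: the alert is only as useful as the analyst's understanding of why the fragment is on the list in the first place.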

Again, Eric's addition of the data reduction technique to his tool is really very interesting, and is very likely to be extremely valuable.

Shell Items
I had an interesting chat with David Cowen recently regarding stuff he'd found in LNK files; specifically, Windows shortcut/LNK files can contain shell item ID lists, which can contain extremely valuable information, depending upon the type of examination you're performing.  Specifically, some shell item ID lists (think shellbags) comprise paths to files, such as those found on network shares and USB devices.  In many cases, the individual shell items contain not only the name of a folder in the path, but also timestamp information.  Many of the shell items also contain the MFT reference number (record number and sequence number combined) for the folder.  Using this information, you can build a historical picture of what some portion of the file system looked like, at a point in the past.
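On the MFT reference number mentioned above: in the on-disk NTFS FILE_REFERENCE layout, the low 48 bits hold the record number and the high 16 bits hold the sequence number, so splitting a packed 64-bit reference is simple bit arithmetic.  A minimal sketch (the function name and sample numbers are mine, for illustration):

```python
def split_mft_reference(ref: int):
    """Split a 64-bit MFT reference into (record number, sequence number).

    Per the NTFS FILE_REFERENCE layout, the low 48 bits are the MFT
    record number and the high 16 bits are the sequence number.
    """
    return ref & 0xFFFFFFFFFFFF, ref >> 48

# e.g., a hypothetical reference packing record 1234 with sequence 5:
ref = (5 << 48) | 1234
```

The sequence number matters for the historical picture: if the sequence number in a shell item no longer matches the live MFT record, the folder the shell item points to has since been deleted and the record reused.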

Conference Presentations And Questions
Many times when I present at a conference and open up for questions, one question I hear again and again is, "What's new in Windows (insert next version number)?"  I'm often sort of mystified by questions like this, as I don't tend to see the "newest hotness" as something that requires immediate attention if analysts aren't familiar with the "old hotness", such as ADSs, Registry analysis, etc.

As an example, I saw this tweet not long ago, which led to this SANS ISC Handler's Diary post.  Jump Lists are not necessarily "new hotness", and have been part of Windows systems since Windows 7.  As far as resources go, the ForensicsWiki Jump Lists page was originally created on 23 Aug 2011.  I get that the tweet was likely bringing attention back to something of value, rather than pointing out something that is "new".

As a bit of a side note, I tend to use my own tools for parsing files like Jump Lists, because they allow me to incorporate the data into the timelines I create, if that's something I need to do.