Wednesday, February 24, 2016

Links: Plugin Updates and Other Things

Plugin Updates
Mari has done some fascinating research into MS Office Trust Records and posted her findings here. Based on the findings and sample data she shared, I was able to update the trustrecords.pl plugin.  Further, Mari's description of what she'd done was so clear and concise that I was able to replicate her process and generate some of my own sample data.

The last update to the trustrecords.pl plugin was from 16 July 2012; since then, apparently no one has used it, had any issues with it, or had questions about what it does.  For this update, I added a check for the VBAWarnings value, and added parsing of the last 4 bytes of the TrustRecords value data, printing "Enable Content button clicked" if the data is in accordance with Mari's findings.  I also changed how the plugin determines which version of Office is installed, and updated the trustrecords_tln.pl plugin accordingly.  A rough sketch of the parsing logic appears after the sample output below.

So, from the sample data that Mari provided, the output of the trustrecords.pl plugin looks like this:

**Word**
----------
Security key LastWrite: Wed Feb 24 15:58:02 2016 Z
VBAWarnings = Enable all macros

Wed Feb 24 15:08:55 2016 Z : %USERPROFILE%/Downloads/test-document-domingo.doc
**Enable Content button clicked.

...and the output of the trustrecords_tln.pl plugin looks like this:

1456326535|REG|||TrustRecords - %USERPROFILE%/Downloads/test-document-domingo.doc [Enable Content button clicked]
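
For anyone who wants to see roughly what's going on under the hood, here's a quick sketch (in Python, purely for illustration...this is not the plugin code itself) of how the TrustRecords value data gets turned into the output above.  The layout is my reading of Mari's findings...the first 8 bytes of the value data appear to be a FILETIME, and the last 4 bytes get set to FF FF FF 7F once the Enable Content button has been clicked...so treat the offsets and the VBAWarnings mapping as assumptions and verify them against your own test data.

# Rough sketch (Python), NOT the plugin code: parse the data from a value
# beneath ...\Security\Trusted Documents\TrustRecords. Per my reading of
# Mari's findings (treat as an assumption), the first 8 bytes are a FILETIME
# and the last 4 bytes are FF FF FF 7F once "Enable Content" has been clicked.
import struct
from datetime import datetime, timedelta, timezone

# VBAWarnings data, as I understand the Office GPO documentation (verify):
VBA_WARNINGS = {1: "Enable all macros",
                2: "Disable all macros with notification (default UI setting)",
                3: "Disable all macros except digitally signed macros",
                4: "Disable all macros without notification"}

def filetime_to_dt(ft):
    # FILETIME = number of 100-nanosecond intervals since 1 Jan 1601 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_trustrecord(doc_path, data):
    (ft,) = struct.unpack_from("<Q", data, 0)                 # first 8 bytes
    (flag,) = struct.unpack_from("<I", data, len(data) - 4)   # last 4 bytes
    when = filetime_to_dt(ft)
    line = when.strftime("%a %b %d %H:%M:%S %Y Z") + " : " + doc_path
    if flag == 0x7FFFFFFF:
        line += "  **Enable Content button clicked."
    # TLN output uses a Unix epoch time in the first field
    epoch = int((when - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds())
    tln = "%d|REG|||TrustRecords - %s" % (epoch, doc_path)
    return line, tln

Feeding Mari's sample data through something like this gives the same 1456326535 epoch value you see in the TLN line above.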

Addendum, 25 Feb
Default Macro Settings (MSWord 2010)
After publishing this blog post yesterday, I ran across something in my own testing that I felt was important to point out.  Specifically, when I first opened MSWord 2010 and went to the Trust Center, I saw the default Macro Settings, illustrated in the image to the right; this is with no VBAWarnings value in the Registry.  Once I started selecting other options, the VBAWarnings value was created.

What this seems to indicate is that if the VBAWarnings value exists in the Registry, someone specifically changed the setting at some point, even if the Macro Settings appear as seen in the image above (in which case the value data would be "2").  So, if the VBAWarnings value doesn't exist in the Registry, it appears (based on limited testing) that the default behavior is to disable macros with a notification.  If the setting is changed, the VBAWarnings value is created.  If the VBAWarnings value is set to "2", then it may be that the Macro Settings were set to something else, and then changed back.

For example, take a look at the plugin output I shared earlier in this post.  You'll notice that the LastWrite time of the Security key is about 50 minutes later than the TrustRecords time stamp for the document.  In this case, that's because Mari produced the sample data (hive) for the document, and then later modified the Macro Settings after I'd reached back out to her and said that the hive didn't contain a VBAWarnings value.

Something else to think about...has anyone actually used the reading_locations.pl plugin?  If you read Jason's blog post on the topic, it seems like it could be pretty interesting in the right instance or case.  For example, if an employee was thought to have modified a document and claimed that they hadn't, this data might show otherwise.
**end addendum**

Also, I ran across a report of malware using a persistence mechanism I hadn't seen before, so I updated termserv.pl to address the "new" key.

Process Creation Monitoring
My recent look into and description of PECapture got me thinking about process creation monitoring again.

Speaking of process creation monitoring, Dell SecureWorks recently made information publicly available regarding the AdWind RAT.  If you read through the post, you'll see that the RAT infection process spawns a number of external commands, rather than using APIs to do the work.  As such, if you're recording process creation events on your endpoints, you can create filters to watch for these commands and detect this (and other) activity.
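
Just to illustrate what I mean by "filters"...the sketch below (Python, purely illustrative) runs process creation records (say, Sysmon event ID 1 data exported from your endpoints) against a simple command line watchlist.  The watchlist entries here are placeholders, not the actual commands from the SecureWorks write-up...you'd populate it from the report and from your own environment.

# Purely illustrative sketch: flag process creation records whose command
# lines match a watchlist. Records could come from Sysmon event ID 1, an EDR
# export, etc. The patterns below are placeholders, NOT the actual AdWind
# command lines (pull those from the SecureWorks write-up).
import re

WATCHLIST = [
    r"\bcscript(\.exe)?\b",
    r"\bwscript(\.exe)?\b",
    r"\breg(\.exe)?\s+add\b",
    r"\bschtasks(\.exe)?\s+/create\b",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]

def check_record(record):
    # record: dict with (at least) "CommandLine" and "ParentImage" fields
    cmd = record.get("CommandLine", "")
    hits = [p.pattern for p in PATTERNS if p.search(cmd)]
    if hits:
        print("ALERT: %s (parent: %s) matched %s" %
              (cmd, record.get("ParentImage", "?"), hits))
    return bool(hits)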

Malicious LNK
Wait, what?  Since when did those two words go together?  Well, as of the morning of 24 Feb, the ISC handlers have made it "a thing" with this blog post.  Pretty fascinating, and thanks to the handlers for walking through how they pulled things out of the LNK file; it looks as if their primary tool was a hex editor.

A couple of things...

First, it would be very interesting to see what this "looks like" in process creation monitoring when it executes.  If there's one thing I've found interesting of late, it's how DFIR folks can nod their heads knowingly at something like that, but when it comes to actual detection, that's another matter entirely.  Yes, the blog post lists the command line used, but the question is, how would you detect this if you had process creation monitoring in place?

Second, in this case, the handlers report that "the ACE file contains a .lnk file"; so, the ACE file doesn't contain code that creates the .lnk file, but instead contains the actual .lnk file itself.  Great...so, let's grab Eric Zimmerman's LECmd tool, or my own code, and see what the NetBIOS name is of the system on which the LNK file was created.  Or, just go here to get that (I see the machine name listed, but not the volume serial number...).  But I'd like to parse it myself, just to see what the shell items "look like" in the LNK file.
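
If you do want to parse it yourself, the tracker data block is a pretty easy place to start.  Here's a rough sketch (Python) that scans a LNK file for the block signature (0xA0000003, per my reading of the MS-SHLLINK documentation) and pulls out the 16-byte machine ID, i.e., the NetBIOS name of the system on which the file was created.  The offsets are my assumptions...verify them against the spec, or against LECmd's output.

# Rough sketch: find the TrackerDataBlock in a .lnk file and extract the
# machine ID (NetBIOS name). Block layout per my reading of MS-SHLLINK:
# BlockSize(4) BlockSignature(4=0xA0000003) Length(4) Version(4)
# MachineID(16) Droid(32) DroidBirth(32). Treat the offsets as assumptions.
import struct, sys

def tracker_machine_id(path):
    data = open(path, "rb").read()
    idx = data.find(struct.pack("<I", 0xA0000003))
    if idx < 4:
        return None                      # no tracker block found
    machine = data[idx + 12: idx + 28]   # 16-byte MachineID field
    return machine.split(b"\x00", 1)[0].decode("ascii", "replace")

if __name__ == "__main__":
    print(tracker_machine_id(sys.argv[1]))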

As a side note, it's always kind of fascinating to me how some within the "community" will have data in front of them and, for whatever reason, just keep it to themselves.  Just as an example (and I'm not disparaging the work the handlers did, but commenting on an observation...), the handlers have the LNK file, but they're not sharing the vol SN, NetBIOS name, or shell items included in the LNK file, just the command line and the embedded payload.  I'm sure that this is a case of "this is what we feel is important, and the other stuff isn't...", but what happens when others find something similar?  How do we start correlating, mapping, and linking similar incidents if some data that might reveal something useful about the author is deemed unnecessary by some?

Like I said, not disparaging the work that the handlers did, just thinking out loud a bit.

8Kb One-Liner
There was a fascinating post over at Decalage recently regarding a single command line that was 8Kb long.  They did a really good walk-through for determining what a macro was up to, even after the author took some pretty significant steps to make getting to a human-readable format tough.

I think it would be fascinating to get a copy of this sample and run it on a system with Sysmon running, to see what the process tree looks like for something like this.  That way, anyone using process creation monitoring could write a filter rule or watchlist to monitor for this in their environment.

From the Trenches
The "From the Trenches" stuff I've been posting doesn't seem to have generated much interest, so I'm going to discontinue those posts and move on to other things.

Friday, February 19, 2016

Links

Plugin Update
Thanks to input (and a couple of hives) from two co-workers yesterday, I was able to update the appcompatcache.pl RegRipper plugin to work correctly with Windows 10 systems.  In one case, the hive I was testing was reportedly from a Surface tablet.

Last year, Eric documented the changes that he'd observed in the structure format for Windows 10; they appear to be similar to Windows 8.1.

Something interesting that I ran across was similar to the last two images in Eric's blog post; specifically, the odd entries that appeared similar in format to (will appear wrapped):

00000000 0004000300030000 000a000028000000 014c 9E2F88E3.Twitter wgeqdkkx372wm

If you look closely at the entries in the images from Eric's blog, you'll see that the time stamp reads "12/31/1600 5:00:00pm -0700".  Looking at the raw data for one of the examples I had available, the 64-bit time stamp was "00 09 00 00 00 00 00 00".  The entry at that offset should be a 64-bit FILETIME object, but for some reason, with these oddly-formatted entries, what should be the time stamp field is...something else.  Eric's post is from April 2015 (almost a year ago), and as yet there doesn't appear to have been any additional research into what these entries refer to.

For the appcompatcache.pl plugin, the time stamp is not included in the output if it's essentially 0.  For the appcompatcache_tln.pl plugin, the "0" time stamp value is still included in TLN output, so you'll likely see a few entries clustered at 1 Jan 1970; the quick illustration below shows why.
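
If you're wondering where "12/31/1600 5:00:00pm -0700" comes from, it's just what a (near) zero FILETIME looks like when it's rendered in a UTC-7 display...and the same entries pile up at 1 Jan 1970 once they're forced into a Unix epoch for TLN output.  The arithmetic (not the plugin code) looks like this:

# Why a near-zero FILETIME displays as "12/31/1600 5:00:00pm -0700", and why
# the same entries cluster at 1 Jan 1970 in TLN output. Arithmetic only; this
# is not the plugin code.
import struct
from datetime import datetime, timedelta, timezone

raw = bytes.fromhex("0009000000000000")   # the "00 09 00 00 00 00 00 00" data
(ft,) = struct.unpack("<Q", raw)          # 0x900 = 2304 ticks of 100ns
dt = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)
print(dt)                                 # 1601-01-01 00:00:00.000230+00:00
print(dt.astimezone(timezone(timedelta(hours=-7))))   # 31 Dec 1600, 5:00 PM, -0700

# Forced into a Unix epoch (seconds since 1 Jan 1970), the value goes negative,
# so a tool that floors it at zero reports 1 Jan 1970.
epoch = (dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()
print(int(max(epoch, 0)))                 # 0 -> Thu Jan  1 00:00:00 1970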

Hunting for Executable Code in Windows Environments
I ran across this interesting blog post this morning.  What struck me most about it is that it's another means of "hunting" in a Windows environment that looks at processes executing on the endpoint.

This tool, PECapture (runs as a GUI or a service), captures a copy of the executable, as well as the execution time stamp and a hash.

I have to say that as much as I think this is a great idea, it doesn't appear to capture the full command line, which I've found to be very valuable.  Let's say an adversary is staging the data they've found for exfil, and uses a tool like WinRAR; capturing the command line would also allow you to capture the password they used.  In a situation like that, I don't need a copy of rar.exe (or whatever it's been renamed to...), but I do need the full command line.

I think that for the time being, I'll continue using Sysmon, but I'd add that if you're doing malware testing, having both Sysmon and PECapture running on your test system might be a very good idea.  One of the things that some malware will do is run intermediate, non-native executables, which are then deleted after use, so having the ability to capture a copy of the executable would be very useful.

I do think it's interesting that this is yet another tool that does part of what Carbon Black does...

Yet Another "From the Trenches"
I had to dig back further into the vault for one of my first "consulting" gigs...

Years and years ago (I should've started, "Once, in a galaxy far, far away...."), while I was still on active duty, I applied for and was able to attend the Naval Postgraduate School.  While preparing to conduct testing and data collection for my master's thesis, I set up a small network in an unused room; the network consisted of a 10-Base2 network (server, two workstations) connected to a 10-BaseT network (server, 2 workstations), connected to Cisco routers, and the entire thing was connected to the campus 10-Base5 backbone via a "vampire" tap.  The network servers were Windows NT 3.51, and the workstations were all Windows 95, running on older systems that I'd repurposed; I had spent considerable time searching the MS KnowledgeBase, just to get information on how to set up Win95 on most of the systems.

For me, the value of setting up this network was what I learned.  If you looked at the curriculum for the school at the time, you could find six classes on "networking", spread across three departments...none of which actually taught students to set up a network.  So for me, this was invaluable experience.

While I was processing out of the military, I spent eight months just hanging around the Marine Detachment at DLI.  I was just a "floater" officer, and spent most of my time just making the Marines nervous.  However, I did end up with a task...the Marine Commandant, Gen Krulak, had made the statement that Marines were authorized to play "Marine DOOM", which was essentially a Marine-specific WAD for DOOM.  So, in the spring of '97, the Marine Det had purchased six Gateway computer systems, and had them linked together via a 10BaseT network (the game ran on a network protocol called "IPX").  The systems were all set up on a circular credenza-type desk, with six individual stations separated by partitions.  I'd come back from exercising during lunch and see half a dozen Marines enthusiastically playing the game.

At one point, we had a Staff Sergeant in the detachment...I'm not sure why he was there, as he didn't seem to be assigned to a language class, but being a typical Marine SSgt, he began looking for an office to make his own.  He settled on the game room, and in order to make the space a bit more usable, decided to separate the credenza-desk in half, and then turn the flat of each half against the opposite wall.  So the SSgt got a bunch of Marines (what we call a "workin' party") and went about disassembling the small six-station LAN, separating the credenza and turning things around.  They were just about done when I happened to walk by the doorway, and I popped my head in just to see how things were going.  The SSgt caught my eye, and came over...they were trying to set the LAN back up again, and it wasn't working.  The SSgt was very enthusiastic, as apparently they were almost done, and getting the LAN working again was the final task.  So putting on my desktop support hat, I listened to the SSgt explain how they'd carefully disassembled and then re-assembled it EXACTLY as it had been before.  I didn't add the emphasis with the word "exactly"...the SSgt had become much more enthusiastic at that word.

So I began looking at the backs of the computer systems nearest to me, and sure enough, all of the systems had been connected.  When I got to the system that was at the "end", I noticed that the coax cable had been run directly into the connector for the network card.  I knew enough about networking and Marines that I had an idea of what was going on...and sure enough, when I moved the keyboard aside, I saw the t-connector and 50 ohm terminator sitting there.  To verify the condition of the network, I asked the SSgt to try the command to test the network, and he verified that there was "no joy".  I reached down into one of the credenza stations, behind the computer, where no one could see what I was doing...I quickly connected the terminator to the t-connector, connected it to the jack on the NIC, and then reconnected the coax cable.  I told the SSgt to try again, and was almost immediately informed (by the Marines' shouts) that things were working again.  The SSgt came running over to ask me what I'd done.

To this day, I haven't told him.  ;-)

Tuesday, February 16, 2016

Tools, Links, From the Trenches, part deux

There's been considerable traffic online in various forums regarding tools lately, and I wanted to take a moment to not just talk about the tools, but the use of tools, in general.

Tools List
I ran across a link recently that pointed to this list of tools from Mary Ellen...it's a pretty long list of tools, but there's nothing about the list that talks about how the tools are used.

Take the TSK tools, for example.  I've used these tools for quite some time, but when I look at my RTFM-ish reference for them, it's clear that I don't use them to the fullest extent of which they're capable.

LECmd
Eric Zimmerman recently released LECmd, a tool to parse all of the data out of an LNK file.

From the beginning of Eric's page for the tool:
In short, because existing tools didn't expose all available data structures and/or silently dropped data structures. 

In my opinion, the worst sin a forensics tool vendor can commit is to silently drop data without any warning to an end user. To arbitrarily choose to report or withhold information available from a file format takes the decision away from the end user and can lead to an embarrassing situation for an examiner.

I agree with Eric's statement...sort of.  As both an analyst and an open source tool developer, this is something that I've encountered from multiple perspectives.

As an analyst, I believe that it's the responsibility of the analyst to understand the data that they're looking at, or for.  If you're blindly running tools that do automated parsing, how do you know if the tool is missing some data structure, one that may or may not be of any significance?  Or, how do you correctly interpret the data that the tool is showing you?  Is it even possible to do correct analysis if data is missing and you don't know it, or the data's there but viewed or interpreted incorrectly?

Back when I was doing PFI work, our team began to suspect that the commercial forensics suite we were using was missing certain card numbers, something we were able to verify using test data.  I went into the user forum for the tool, asked about the code behind the IsValidCreditCard() function, and specifically asked, "what is a valid credit card number?"  In response, I was directed to a wiki page on credit card numbers...but I knew from testing that this wasn't correct.  I persisted, and finally got word from someone a bit higher up within the company that we were correct; the function did not consider certain types of CCNs valid.  With some help, we wrote our own function that was capable of correctly locating and testing the full range of CCNs required for the work we were doing.  It was slower than the original built-in function, but it got the job done and was more accurate.  It was the knowledge of what we were looking for, and some testing, that led us to that result.
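
I never did see the code behind IsValidCreditCard(), so I can't say what it actually did...but for reference, the core of any "is this even a plausible card number" test is a length check plus the Luhn mod-10 checksum, along the lines of the sketch below.  Real PFI work also needs the full, current set of brand prefix (BIN) and length ranges, which is where the built-in function fell short for us.

# Minimal "plausible card number" test: digits-only, sane length, and a Luhn
# mod-10 checksum. This is NOT the vendor's IsValidCreditCard(); a production
# check also needs current, brand-specific prefix (BIN) and length ranges.
import re

def luhn_ok(number):
    checksum = 0
    for i, c in enumerate(reversed(number)):
        d = int(c)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def plausible_ccn(candidate):
    number = re.sub(r"[\s-]", "", candidate)
    if not number.isdigit() or not 13 <= len(number) <= 19:
        return False
    return luhn_ok(number)

# e.g., plausible_ccn("4111 1111 1111 1111") returns True (a well-known test number)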

As a tool developer, I've tried to keep up with the available data structures as much as possible.  For example, take a look at this post from June, 2013.  The point is that tool development evolves, in part due to what becomes available (i.e., new artifacts), as well as in part due to knowledge and awareness of the available structures.

With respect to RegRipper, it's difficult to keep up with new structures or changes to existing structures, particularly when I don't have access to those data types.  Fortunately, a very few folks (Eric, Mitch, Shafik, to name a few...) have been kind enough to share some sample data with me, so that I can update the plugins in question.

Something that LECmd is capable of is performing MAC address lookups for vendor names.  Wait...what?  Who knew that there were MAC addresses in LNK files/structures?  Apparently, it's been known for some time.  I think it's great that Eric's including this in his tool, but I have to wonder, how is this going to be used?  I'm not disagreeing with his tool getting this information, but I wonder, is "more data" going to give way to "less signal, more noise"?  Are incorrect conclusions going to be drawn by the analyst, as the newly available data is misinterpreted?

I've used the information that Eric mentions to illustrate that VMware had been installed on a system at one time.  That's right...an LNK file had a MAC address associated with VMware, and I was able to demonstrate that at one point, VMware had been installed on the system I was examining.  In that case, it may have been that someone had used a VM to perform the actions that resulted in the incident alert.  As such, the information available can be useful, but it requires correct interpretation of the data.  While not the fault of the tool developer, there is a danger that having more information on the analyst's plate will have a detrimental effect on analysis.
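
For anyone wondering how you'd actually make that call, the MAC address lives in the node field (the last six bytes) of the version-1 UUID in the LNK file's tracker block, and the first three bytes (the OUI) identify the NIC vendor.  A rough sketch, with the VMware OUIs listed from memory (verify against the IEEE registry before relying on them):

# Rough sketch: pull the MAC address out of the "droid file" GUID from a LNK
# tracker block (a version-1 UUID keeps the MAC in its node field) and check
# the OUI against prefixes I recall being registered to VMware. Treat the OUI
# list as an assumption and verify it against the IEEE registry.
import uuid

VMWARE_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56"}

def mac_from_droid(guid_str):
    node = "%012x" % uuid.UUID(guid_str).node
    return ":".join(node[i:i + 2] for i in range(0, 12, 2))

def looks_like_vmware(guid_str):
    mac = mac_from_droid(guid_str)
    return mac, mac[:8] in VMWARE_OUIS

# e.g., with a made-up GUID for illustration:
# looks_like_vmware("a2b4c6d8-1234-11e5-9c2a-000c29abcdef")
#   -> ('00:0c:29:ab:cd:ef', True)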

My point is that I agree with Eric that information should be available to the analyst, but I also believe that it's the responsibility of the analyst to (a) recognize what information can be derived from various data sources, and (b) be able to correctly interpret and utilize that data.  After all, there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data, and believe that the time stamp associated with each entry is when that program was executed.

From the Trenches
I thought I'd continue with a couple of "war stories" from the early days of my commercial infosec consulting career...

In 1999, I was working on a commercial vulnerability assessment team with Trident Data Systems (TDS).  I say "commercial" because there were about a dozen of us on this team, with our offices on one side of the building, and the rest of the company was focused on fulfilling contracts for the federal government.  As such, we were somewhat 'apart' from the rest of the company, with respect to contracts, the tools we used, and our profit margin.  It was really good work, and I had an opportunity to use some of the skills I learned in the military while working with some really great folks, most of whom didn't have military experience themselves (some did).

One day, my boss (a retired Army LtCol) called me into his office to let me know that a client had signed a contract.  He laid out what was in the statement of work, and asked me who I wanted to take on the engagement.  Of course, that was a trick question, of sorts...instead of being told who was available to go, I had my pick of everyone.  Fortunately, I got to take three people with me, two of whom were my first choice; the third I picked when another person wasn't available.  I was also told that we'd be taking a new team member along to get them used to the whole "consulting" thing.  After all that was settled, I got my team together and let everyone know what we were doing, for whom, when we'd be leaving, etc.  Once that was done, I reached out to connect with my point of contact.

When the day came to leave, we met at the office and took a van to the airport.  Everyone was together, and we had all of our gear.  We made our way to the airport, flew to the city, got to our hotel and checked in, all without any incidents or delays.  So far, so good.  What made it really cool was that while I was getting settled in my room, there was a knock at the door...the hotel staff was bringing around a tray of freshly baked (still warm) cookies, and cartons of milk on ice, for all of the new check-ins.  Score!

The next morning, we met for breakfast and did a verbal walk-through of our 'concept of operations' for the day.  I liked to do this to make sure that everyone was on the same sheet of music regarding not just the overall task, but also their individual tasks that would help us complete the work for the client.  We wanted to start off looking and acting professional, and deliver the best service we could to the client.  But we also wanted to be sure that we weren't so willing to help that we got roped into doing things that weren't in the statement of work, to the point where we'd burned the hours but hadn't finished the work that we were there to do.

The day started off with an intro meeting with the CIO.  Our point of contact escorted us up to the CIO's office and we began our introductions, and a description of the work we'd be doing.  Our team was standing in the office (this wasn't going to take long), with our laptop bags at our feet.  The laptops were still in the bags, and hadn't been taken out, let alone even powered on.

Again, this was 1999...no 'bluetooth' (no one had any devices that communicated in that manner), and to connect a laptop to a network, you still had to plug a PCMCIA card into one of the slots.

About halfway through our chat with the CIO, his office door popped open, and an IT staff member stuck their head in, to share something urgent.  He said, "...the security guys, their scan took down the server."  He never even looked at us or acknowledged our presence in the room...after all, we were "the security guys".  The CIO acknowledged the statement, and the IT guy left.  No one said a word about what had just occurred...there seemed to be an understanding, as if our team would say, "...we hear that all the time...", and the CIO would say, "...see what I have to work with?"

Saturday, February 13, 2016

From the Trenches

I had an idea recently...there are a lot of really fascinating stories from the infosec industry that aren't being shared or documented in any way.  Most folks may not think of it this way, but these stories are sort of our "corporate history"; they're what led to us being who and where we are today.

Some of my fondest memories from the military were sitting around with other folks, telling "war stories".  Whether it was at a bar after a long day (or week), or we were just sitting around a fire while we were still out in the field, it was a great way to bond and share a sort of "corporate history".  Even today, when I run into someone I knew "back in the day", the conversation invariably turns to one of us saying, "hey, do you remember when...?"  I see a lot of value in sharing this sort of thing within our community, as well.

While I was still on active duty, I applied for and was assigned to the graduate program at the Naval Postgraduate School.  I showed up in June, 1994, and spent most of my time in Spanagel Hall (bldg 17 on this map).  At the time, I had no idea that every day (for about a month), I was walking by Gary Kildall's office.  It was several years later, while reading a book on some of the "history" behind MS-DOS and Silicon Valley, that I read about Digital Research and made the connection.  I've always found that kind of thing fascinating...getting an inside view of things from the people who were there (or, in Gary's case, allegedly not there...), attending the meetings, etc.  Maybe that's why I find the "Live To Tell" show on the History Channel so fascinating.

As a bit of a side note, after taking a class where I learned about "Hamming distance" while I was at NPS, I took a class from Richard Hamming.  That's like reading Marvel Comics and then talking to Stan Lee about developing Marvel Comics characters.

So, my idea is to share experiences I've had within the industry since I started doing this sort (infosec consulting) of work, in hopes that others will do the same.  My intention here is not to embarrass anyone, nor to be negative...rather, to just present humorous things that I've seen or experienced, as a kind of "behind the scenes" sort of thing.  I'm not sure at this point if I'm going to make these posts their own separate standalone posts, or include shorter stories along with other posts...I'll start by doing both and see what works.

War Dialing
One of the first civilian jobs I had after leaving active duty was with SAIC.  I was working with a small team...myself, a retired Viet Nam-era Army Colonel, and two other analysts...that was trying to establish itself in performing assessment services.  If you've ever worked for a company like this, you know they were often described as "400 companies all competing with each other for the same work", and in our case, that was true.  We would sometimes lose work to another group within the company, and then be asked to assist them as their staffing requirements for the work grew.

This was back in 1998, when laptops generally came with a modem and a PCMCIA expansion slot, and your network interface card came in a small hard plastic case.  Also, most of the laptops still had 3.5" disk drives built in, although some came with an external unit that you connected to a port.

One particular engagement I was assigned to was to perform "war dialing" against a client located in one of the WTC towers.  So, we flew to New York, went to the main office, and had our introductory meeting.  During the meeting, we went over our "concept of operations" (i.e., what we were planning to do) again, and again requested a separate area where we could work, preferably something out of view of the employees and away from the traffic patterns of the office (such as a conference room).  As is often the case, this wasn't something that had been set up for us ahead of time, so two of us ended up piling into an empty cubicle in the cube-farm...not ideal, but it would work for us.

At the time, the tools of choice for this work were ToneLoc and THC Scan.  I don't remember which one we were using at the time, but we kicked off our scan using a range of phone numbers, without randomizing the list.  As such, two of us were hunkered down in this cubicle, with normal office traffic going on all around us.  We had turned up the speakers on the laptop we were using (being in a cubicle rather than a conference room meant we only had access to one phone line...), and leaned in really close so we could hear what was going on over the modem.  It was a game for us to listen and try to guess whether the system on the other end was a fax machine, someone's desk phone, or something else, assuming it picked up.

So, yeah...this was the early version of scanning for vulnerabilities.  This was only a few years after ISS had been formed, and the Internet Scanner product wasn't yet well known, nor heavily used.  While a scan was going on, there really wasn't a great deal to do, beyond monitoring the scan for problems, particularly something that might happen that we needed to tell the boss about; better that he hear it from us first, rather than from the client.

As we listened to the modem, every now and then we knew that we'd hit a desk phone (rather than a modem in a computer) because the phone would pick up and we'd hear someone saying "hello...hello..." on the other end.  After a while, we heard echoes...the sequence of numbers being dialed had gotten close enough that we could hear the person answering both through the laptop speakers and above the din of the office noise.  We knew that the numbers were getting closer, so we threw jackets over the laptop in an attempt to muffle the noise...we were concerned that the person who picked up the phone in the cubicles on either side of us would hear themselves.

Because of the lack of space and phone lines available for the work, it took us another day to finish up the scan.  After we finished, we had a check-out meeting with our client point of contact, who shared a funny story about our scan with us.  It seems that there was a corporate policy to report unusual events; there were posters all over the office, and apparently training for employees, telling them what an "unusual event" might look like, to whom to report it, etc.  So, after about a day and a half of "war dialing", only one call had come in.  Our scan had apparently dialed two sequential numbers that terminated in the mainframe room, and the one person in the room felt that having to get up to answer one phone, then walk across the room to answer the other (both of which hung up), constituted an "unusual event"...that's how it was reported to the security staff.

About two years later, when I was working at another company, we used ISS's Internet Scanner, run from within the infrastructure, to perform vulnerability assessments.  This tool would tell us if the computer scanned had modems installed.  No more "war dialing" entire phone lists for me...it was deemed too disruptive or intrusive to the environment.

Wednesday, February 03, 2016

Updated samparse.pl plugin

I received an email from randomaccess last night, and got a look at it this morning.  In the email, he pointed out that there had been some changes to the SAM Registry hive as of Windows 8/8.1, apparently due to the ability to log into the system using a Microsoft Live account.  Several new values seem to have been added to the user RID key, specifically GivenName, SurName, and InternetUserName.  He provided a sample SAM hive and an explanation of what he was looking for, and I was able to update the samparse.pl plugin, send him a copy, and update the GitHub repository, all in pretty short order.
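
For anyone curious, the update itself is pretty simple...in the sample hive, the new values sit right alongside the other values in the user's RID key (under SAM\Domains\Account\Users), and the data appears to be UTF-16LE strings.  A minimal sketch of what the update amounts to (this is my reading of the sample data, not the plugin code itself, so verify against your own hives):

# Minimal sketch of the new logic: for a user RID key under
# SAM\Domains\Account\Users, decode the new values as UTF-16LE strings.
# The value names come from the sample hive; the encoding is my reading of
# that data, so treat it as an assumption and verify against your own hives.
def decode_account_values(rid_key_values):
    # rid_key_values: dict of value name -> raw value data (bytes)
    out = {}
    for name in ("GivenName", "SurName", "InternetUserName"):
        raw = rid_key_values.get(name)
        if raw is not None:
            out[name] = raw.decode("utf-16-le").rstrip("\x00")
    return out

# e.g., decode_account_values({"GivenName": "John".encode("utf-16-le")})
#   returns {'GivenName': 'John'}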

This is a great example of what I've said time and again since I released RegRipper; if you need a plugin and don't feel that you can create or update one yourself, all you need to do is provide a concise description of what you're looking for, and some sample data.  It's that easy, and I've always been able to turn a new or updated plugin around pretty quickly.

Now, I know some folks are hesitant to share data/hive files with me, for fear of exposure.  I know people are afraid to share information for fear it will end up in my blog, and I have actually had someone tell me recently that they were hesitant to share something with me because they thought I would take the information and write a new book around it.  Folks, if you take a close look at the blog and books, I don't expose data in either one.  I've received hive files from two members of law enforcement, one of whom shared hive files from a Windows phone.  That's right...law enforcement.  And I haven't exposed, nor have I shared any of that data.  Just sayin'...

Interestingly enough, randomaccess also asked in his email if I'd "updated the samparse plugin for the latest book", which was kind of an interesting question.  The short answer is "no"; I don't update plugins only when I'm releasing a new book.  If you've followed this blog, you're aware that plugins get created or updated all the time, without a new book being released.  The more extensive response is that I simply hadn't seen a SAM hive myself that contains the information in question, nor had anyone provided a hive that I could use to update and test the plugin, until now.

And yes, the second edition of Windows Registry Forensics is due to hit the shelves in April, 2016.