The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Saturday, October 20, 2018
OSDFCon Trip Report
Interestingly enough, one speaker could not make it at the last minute, and Brian simply shifted the room schedule a bit to better accommodate people. He clearly understood the nature of the business we're in, and the absent presenter suffered no apparent consequences as a result. This wasn't one of the lightning talks at the end of the day, this was one of the talks during the first half of the conference, where everyone was in the same room. It was very gracious of Brian to simply roll with it and move on.
The Talks
Unfortunately, I didn't get a chance to attend all of the talks that I wanted to see. At OSDFCon, by its very nature, you see people you haven't seen in a while, and want to catch up. Or, as is very often the case, you finally meet people you've only known from online...and sometimes they decide to drop in as a surprise.
However, I do like the format. Talk times are much shorter, which not only falls in line with my attention span, but also gets the speakers to focus a bit more, which is really great, from the perspective of the listener, as well as the speaker. I also like the lightning talks...short snippets of info that someone puts together quickly, very often focusing on the fact that they have only 5 mins, and therefore distilling it down, and boiling away the extra fluff.
My Talk
I feel my talk went pretty well, but then, there's always the bias of "it's my talk". I was pleasantly surprised when I turned around just before kicking the talk off to find the room pretty packed, with people standing in the back. I try to make things entertaining, and I don't want to put everything I'm going to say on the slides, mostly because it's not about me talking at the audience, as much as it's about us engaging. As such, there's really no point in me providing my slide pack to those who couldn't attend the presentation, because the slides are just placeholders, and the real value of the presentation comes from the engagement.
In short, the purpose of my talk was that I wanted to let people know that if they're just downloading RegRipper and running the GUI, they aren't getting the full power out of the tool. I added a command line switch to rip.exe earlier this year ("rip -uP") that will run through the plugins folder, and recreate all of the default profiles (software, sam, system, ntuser, usrclass, amcache, all) based on the "hive" field in the config headers of the plugin.
To-Do
Something that is a recurring theme of this conference is how to get folks new to the community to contribute and keep the community alive, as well as how to get established members of the community contributing. Well, a couple of things came out of my talk that might be of interest to someone in the community.
One way to contribute is this...someone asked if there was a way to determine for which version of Windows a plugin was written. There is a field in the %config header metadata that can be used for that purpose, but there's no overall list or table that identifies the Windows version for which a plugin was written. For example, there are two plugins that extract information about user searches from the NTUSER.DAT hive, one for XP (acmru.pl) and one for Vista+ (wordwheelquery.pl). There's really no point in running acmru.pl against an NTUSER.DAT from a Windows 7 system.
So, one project that someone might want to take on is to put together a table or spreadsheet that provides this list. Just sayin'...and I'm sure that there are other ideas as to projects or things folks can do to contribute.
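By way of illustration, a sketch of what generating such a table might look like is below. This is Python rather than Perl, and the %config field names it pulls out ("hive" and "version") are assumptions based on typical plugin headers; adjust the key names to whatever field actually carries the Windows version information in the plugins you have on hand.

```python
import re
from pathlib import Path

# Simple key => value pairs inside a RegRipper-style %config hash.
# Which key (if any) identifies the Windows version is an assumption;
# change the keys pulled out below to suit the plugins you're scanning.
CONFIG_KV = re.compile(r'(\w+)\s*=>\s*["\']?([^,"\'\)]+)["\']?')

def scan_plugin(path):
    """Return the key/value pairs from a plugin's %config block."""
    text = Path(path).read_text(errors="ignore")
    m = re.search(r'%config\s*=\s*\((.*?)\);', text, re.S)
    if not m:
        return {}
    return {k: v.strip() for k, v in CONFIG_KV.findall(m.group(1))}

def build_table(plugin_dir):
    """Build rows of (plugin, hive, version) for every .pl file found."""
    rows = []
    for pl in sorted(Path(plugin_dir).glob("*.pl")):
        cfg = scan_plugin(pl)
        rows.append((pl.stem, cfg.get("hive", "?"), cfg.get("version", "?")))
    return rows
```

From there, dumping the rows to a CSV or a wiki table is trivial, and the result is exactly the kind of contribution that doesn't require writing a single plugin.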
For example, some talks I'd love to see are ones on how folks (not the authors) use the various open source tools that are available in order to solve problems. Actually, this could easily start out as a blog post, and then morph into a presentation...how did someone use an open source tool (or several tools) to solve a problem that they ran into? This might make a great "thunder talk"...10 to 15 min talks at the next OSDFCon, where the speaker shares the issue, and then how they went about solving it. Something like this has multiple benefits...it could illustrate the (or, a novel) use of the tool(s), as well as give DFIR folks who haven't spoken in front of a group before a chance to dip their toe in that pool.
Conversations
Like I said, a recurring theme of the conference is getting those in the community, even those new to the community, involved in keeping the community alive, in some capacity. Jessica said something several times that hit home with me...that it's up to those of us who've been in the community for a while to lead the way, not by telling, but by doing. Now, not everyone's going to be able to, or even want to, contribute in the same way. For example, many folks may not feel that they can contribute by writing tools, which is fine. But a way you can contribute is by using the tools and then sharing how you used them. Another way to contribute is by writing reviews of books and papers; by "writing reviews", I don't mean a table of contents, but instead something more in-depth (books and papers usually already have a table of contents).
Shout Outz
Brian Carrier, Mari DeGrazia, Jessica Hyde, Jared Greenhill, Brooke Gottlieb, Mark McKinnon/Mark McKinnon, Cory Altheide, Cem Gurkok, Thomas Millar, the entire Volatility crew, Ali Hadi, Yogesh Khatri, the PolySwarm folks...I tried to get everyone, and I apologize for anyone I may have missed!
Also, I have to give a huge THANK YOU to the Basis Tech folks who came out, the vendors who were there, and to the hotel staff for helping make this conference go off without a hitch.
Final Words
As always, OSDFCon was well-attended. There was a Slack channel established for the conference (albeit not by Brian or his team, but it was available), and the Twitter hashtag for the conference seems to have been pretty well-used.
To follow up on some of the above-mentioned conversations, many of us who've been around for a while (or more than just "a while") are also willing to do more than lead by doing. Many of us are also willing to answer questions...so ask. Some of us are also willing to mentor and help folks in a more direct and meaningful manner. Never presented before, but feel like you might want to? Some of us are willing to help in a way that goes beyond just sending an email or tweet of encouragement. Just ask.
Monday, October 06, 2014
Stuff
Here's a really good...no, I take that back...a great blog post by Sean Mason on "IR muscle memory". Take the time to give it a read, it'll be worth it, for no other reason than because it's valuable advice. Incident response cannot be something that you talk about once and never actually do; it needs to be part of muscle memory. Can you detect an incident, and if so, how does your organization react? Or, if you receive an external notification of a security incident, how does your organization respond?
A couple of quotes from the blog post that I found interesting were:
...say, “Containment” without having any understanding of what is involved...
Yes, sometimes a consultant (or CISSP) will say this, and sometimes, there is that lack of understanding of how this will affect the business. This is why having IR built into the DNA of an organization is so important...understanding how the business will be affected by some response or containment procedure is critical.
There is also a modicum of patience and discipline required when it comes to containment, particularly when it comes to targeted threats. If the necessary instrumentation is not in place to monitor the environment, then prematurely pulling the trigger on some containment procedures, rather than taking the time to prepare and conduct them in a near-simultaneous manner, will likely cause the threat actors to react, changing what they do. When dealing with these incidents, having someone on the response team decide, "...hey, I can make that change now, so I'll go ahead and take care of it..." can lead to a lot more work.
Another comment from the blog post:
...as a leader and a technologist, you always want everyone to know everything wing-to-wing, and while this can work great in a small organization the reality is that it doesn’t scale for a number of reasons in larger orgs.
I agree wholeheartedly with this. For larger teams in particular, it doesn't scale well for everyone to be an expert in everything, but it does work well to have designated pockets of deep expertise.
I know that I'll never be as good a malware reverse engineer as some of the folks I've had the honor of working with. I can put a great deal of effort into becoming good at it, but that would be effort I wouldn't be spending becoming better at DFIR analysis. Also, I've found that an effective approach is to gather as much as I can about the malware...OS and version it's installed on, where it was found in the file system, persistence mechanism, any artifacts or indicators associated with the malware, etc. I provide these to the RE analyst, and continue my analysis while they dig deep into the malware itself. When the RE analyst finds something, they provide it back to me and I continue with my analysis.
A great example of this occurred a number of years ago. I had found some malware that was used to steal banking credentials (NOT Zeus) and shared it with the RE analyst, providing a second file and the information/intel needed to run the malware. The malware itself was obfuscated, and in return I got a mutex (I didn't have a memory dump, but I did have a hibernation file and the pagefile), the API used for off-system communications, and other valuable information. With that, I was able to nail down the specific user affected, the initial infection vector, when the infection occurred, etc.
On smaller teams, you won't be able to have those silos, but on larger teams, in larger organizations, it helps to have pockets of deep expertise, and someone you can reach out to for further assistance. This is particularly valuable during incidents, due to the ability to perform parallel analysis; rather than having one analyst who may not, say, analyze disk images on a regular basis try to wring as much information and intel out of an acquired image as they can while working an IR engagement, have that task run in parallel by someone with deeper expertise. You're likely to get the info you need (and more) in a much more timely manner, while not losing any time or focus on the engagement itself. On smaller teams, you're likely going to have a broader base of skill sets that aren't as deep as what you will find with individuals on larger teams. Larger teams can take advantage of pockets of skill sets, and even geographic dispersion, to keep the flow of the incident response going.
The rest of Sean's blog post is equally interesting. Sean goes on to provide his thoughts on people, process, and metrics, all with great insight.
To further Sean's thoughts, a great follow-on to his post is this article from WSJ; in particular, the following quote from that article:
“You are going to get hacked. The bad guy will get you. Whether you are viewed as a success by your board of directors is going to depend on your response.”
IR Fail?
Here's an interesting article from Kelly Jackson Higgins (DarkReading) that talks about Fortune 500 companies having IR teams, but many being pessimistic about their team's ability to handle a data breach. From my perspective, it's good to see that more firms are moving to having a computer security incident response plan, or CSIRP, and that these companies are actually thinking about things like, "...we need a plan...", and "...how good is our IR team?" Even if there is pessimism about the current team's effectiveness, at least there's thought going in that direction, and a realization and admission of the team's current state. From my perspective, this isn't really so much of a failure as it is a success that we've come this far.
From the article:
So why aren't Target, TJ Maxx, and others sharing their war stories to help the next potential victim?
Yeah, that's not going to happen. Sharing is not a natural reaction within the DFIR community. This doesn't mean that it doesn't happen...years ago, while working an IR with a client, I heard that there was a forum in the local area where IT folks from different organizations in the same vertical came together and discussed issues and solutions. In fact, the DLP solution that my client had in place, which proved to be extremely valuable during the IR engagement, had been purchased as a result of engaging with others in their community. My point is, sharing can be powerful, and sharing information or intel that helps the next guy when they're attacked doesn't necessarily give away 'secret sauce' or competitive advantage.
Having an IR plan in place isn't enough, either.
No, it's not. You can't have a plan written by consultants sitting on a shelf...that's worse than not having a plan at all, because the organization will see that binder sitting on the shelf (literally or figuratively) and think that they've checked a box and achieved some modicum of success. A CSIRP needs to be organic to an organization (remember Sean's blog post?); it needs to be owned and practiced by the organization. You can get assistance in writing it, reviewing it, and practicing the processes laid out in the CSIRP. Having an outside consulting firm come in and run an IR exercise...anything from a tabletop exercise (in the military, we called this a "tactical exercise without troops", or TEWT) to a full-on IR engagement...is a fantastic idea.
Over the years, I've seen a wide variety of organizations as a consultant. I've seen those that have been caught completely by surprise by a data breach, those that have IR plans but do not employ them, and I've seen those that have a practiced plan and want someone there to help guide them. Invariably, those organizations that have been thinking seriously about the need for incident detection and response end up faring much better than others, in a variety of metrics, including the overall cost of the incident.
RegRipper
In a few short weeks, I will be presenting at OSDFCon, talking about some changes to RegRipper that I've had in the works. I'll say right now that the changes I've been thinking about and working on are not ones that will significantly impact the use of the tool...so come on by and give it a listen.
OSDFCon and OMFW
I've attended and presented at OSDFCon before, and this has always been a really great conference to attend.
Whether you're going to be at OSDFCon or not, I highly recommend that you consider attending the Open Memory Forensics Workshop, or OMFW 2014. This is the premier conference on memory analysis, put on by the top minds in memory analysis, from the Volatility Foundation.
If you're attending OSDFCon, be sure to come see Mari DeGrazia's presentation!
RegRipper Tutorial
Speaking of RegRipper, this tutorial was posted recently regarding how to set up and use RegRipper...I have to say, I have somewhat mixed feelings about it. Part of me appreciates the interest in the tool, but...
In the name of full disclosure, the author did contact me and ask me to review the article after it was complete. I responded, but to be honest, at the time that the request came in, I didn't have the cycles to focus on reviewing the article, and I definitely didn't have the cycles to address everything that I read in the article. So what you're seeing now is what I've worked on a few minutes at a time, here and there, since the article was published. I'm not going to address everything in the article, because I simply don't have the time to do so, so what I opted to do was pull out just a couple of comments and address them here.
For example...
I have often heard RegRipper mentioned on forums and websites and how it was supposed to make examining event logs, registry files and other similar files a breeze.
I'm not sure which forums or websites state this, but this is not the case at all. RegRipper is named as it is because it's intended for use against the Windows Registry...and only the Registry. It's not intended for use against any other files, in particular the Windows Event Logs. Right after I first released RegRipper, I did receive a request to have it parse PST files, but that simply wasn't/isn't practical.
As I wrote earlier there is a huge community out there writing plugins for RegRipper.
First off, there is no mention of "a huge community" in the tutorial, up to that point. Second, there is not a "huge community out there writing plugins". Yes, some plugins have been submitted over time, and some folks have suggested modifications to plugins...but there is not a "huge community" by any means. In fact, my understanding is that the vast majority of users simply download the tool and run the GUI...and that's it. When I've asked users specifically, via email or in person, what they'd like to see done to make the tool more useful, the responses rarely include requests for new plugins.
I could continue with a lot of the different things I found to be amiss (such as in the Downloads section), but it is not my intent to deride this effort. Again, I greatly appreciate the interest in the tool, and I wanted to address a couple of the comments because I felt that they were wide-spread misconceptions that should be addressed. I'm not going to do a walk-through and correct everything I find...instead, I'll refer folks to the various blog posts I've written, as well as to Windows Registry Forensics.
Friday, October 30, 2015
OSDFCon follow-up

OSDFCon isn't so much a DFIR conference as it is a "tools and frameworks" conference, centered around the Autopsy toolkit. However, the folks who attend this conference are, for the most part, developers and DFIR practitioners. Many of the attendees are one or the other, while a number are both. This makes for a very interesting time.
Brian asked me to come by and, along with several other folks, give an update to a previous presentation. Last year, I talked about some updates I was considering/working on for RegRipper, and this time I gave a quick update on what I was looking at in the coming year. Based on that, my hope for next year's conference is to have something available to give a presentation, along with a demo, of what I talked about.
I really liked hearing about the new stuff in Volatility 2.5, as well as seeing the plugins that came out of the contest...and congrats to the contest winners!
Something I like about this particular conference is the type of folks that it brings together. Working on the types of cases I tend to work gives me a sort of myopic view of things, so it's good to meet up with others and hear about the kinds of cases they work, and the challenges they face.
Take-Aways
There are a lot of really smart people at this conference, and what I really like to see is frameworks and solutions to DFIR problems being created by DFIR practitioners, even if they are specific to those individuals' needs.
Many of the solutions...whether it be Turbinia, or Autopsy, or Willi's tools, or whatever...provide an excellent means for data collection and presentation. I think we still have a challenge to overcome...data interpretation. Sure, we now get data from an image or from across the enterprise much faster because we've put stuff in the cloud, or we've got a fast, multi-threaded design in our framework, and that's awesome. But what happens if that data is misunderstood and misinterpreted? This thought started to gel with me right after I registered for the conference and was talking to Christa about CyberTriage, and then during the conference, I made a comment to that effect to Cory...to which he responded, "Baby steps." He's right. But now that we can get to the data faster, the next step is to make sure that we're getting the right data, and that it's being interpreted and understood correctly. Maybe the data interpretation phase is beyond the scope of a conference that's about open source tools...although there may be space for an open source tool that incorporates threat intelligence. Just sayin'...
Maybe I've just given myself the basis for a presentation next year. ;-)
Finally, a huge thanks to Brian and his staff for continuing to put on an excellent conference, in both format and content. In fact, I still believe that this is one of the better conferences available today. The format is great, requiring speakers to focus on the guts of what they want to convey, and the breaks allow for interaction not only with speakers but with other attendees, as well.
Tuesday, September 20, 2022
ResponderCon Followup
I had the opportunity to speak at the recent ResponderCon, put on by Brian Carrier of BasisTech. I'll start out by saying that I really enjoyed attending an in-person event after 2 1/2 yrs of virtual events, and that Brian's idea to do something a bit different (different from OSDFCon) worked out really well. I know that there've been other ransomware-specific events, but I've not been able to attend them.
As soon as the agenda kicked off, it seemed as though the first four presentations had been coordinated...but they hadn't. It simply worked out that way. Brian referenced what he thought my content would be throughout his presentation, I referred back to Brian's content, Allan referred to content from the earlier presentations, and Dennis's presentation fit right in as if it were a seamless part of the overall theme. Congrats to Dennis, by the way, not only for his presentation, but also on his first time presenting. Ever.
During his presentation, Brian mentioned TheDFIRReport site, at one point referring to a Sodinokibi write-up from March, 2021. That report mentions that the threat actor deployed the ransomware executable to various endpoints by using BITS jobs to download the EXE from the domain controller. My presentation focused less on analysis of the ransomware EXE and more on threat actor behaviors, and Brian's mention of the above report (twice, as a matter of fact) provided excellent content. In particular, for the BITS deployment to work, the DC would have to (a) have the IIS web server installed and running, and (b) have the BITS server extensions installed/enabled, so that the web server knew how to respond to the BITS client requests. As such, the question becomes, did the victim org have that configuration in place for a specific reason, or did the threat actor modify the infrastructure to meet their own needs?
However, the point is that without prior coordination or even really trying, the first four presentations seemed to mesh nicely and seem as if there was detailed planning involved. This is likely more due to the particular focus of the event, combined with some luck delivered when the organizing team decided upon the agenda.
Unfortunately, due to a prior commitment (Tradecraft Tuesday webinar), I didn't get to attend Joseph Edwards' presentation, which was the one presentation I wanted to see (yes, even more than my own!). ;-) I am going to run through the slides (available from the agenda and individual presentation pages), and view the presentation recording once it's available. I've been very interested in threat actors' use of LNK files and the subsequent use (or rather, lack thereof) by DFIR and threat intel teams. The last time I remember seeing extensive use of threat actor-delivered LNK files was when the Mandiant team compared Cozy Bear campaigns.
Going through my notes, comments made during one presentation kind of stood out, in that "event ID 1102" within the RDPClient/Operational Event Log was mentioned when looking for indications of lateral movement. The reason this stood out, and why I made a specific note, was due to the fact that many times in the industry, we refer to simply "event IDs" to identify event records; however, event IDs are not unique. For example, we most often think of "event log cleared" when someone says "event ID 1102"; however, it can mean something else entirely based on the event source (a field in the event record, not the log file where it was found). As a result, we should be referring to Windows Event Log records by their event source/ID pairs, rather than solely by their event ID.
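To make the distinction concrete, here's a minimal sketch (standard-library Python only) that pulls source/ID pairs from event records exported to XML (e.g., via "wevtutil qe ... /f:xml"). Note that the function name and the wrapping approach are mine, for illustration; the point is simply that filtering on the pair, rather than the bare event ID, keeps an RDPClient/Operational record from being mistaken for a "log cleared" event.

```python
import xml.etree.ElementTree as ET

# Namespace used by Windows event XML exports.
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def source_id_pairs(xml_text):
    """Yield (provider, event_id) tuples from exported event XML.

    wevtutil emits a bare stream of <Event> elements, so wrap them
    in a synthetic root before parsing.
    """
    root = ET.fromstring("<Events>" + xml_text + "</Events>")
    for ev in root.findall("e:Event", NS):
        prov = ev.find("e:System/e:Provider", NS)
        eid = ev.find("e:System/e:EventID", NS)
        yield (prov.get("Name") if prov is not None else None,
               int(eid.text) if eid is not None else None)
```

With the pairs in hand, matching on ("Microsoft-Windows-Eventlog", 1102) versus a 1102 from any other provider becomes a one-line comparison instead of a source of confusion.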
Something else that stood out for me was that during one presentation, the speaker was referring to artifacts in isolation. Specifically, they listed AmCache and ShimCache each as artifacts demonstrating process execution, and this simply isn't the case. It's easy for many who do not follow this line of thought to dismiss such things, but we have to remember that we have a lot of folks who are new, junior, or simply less experienced in the industry, and if they're hearing this messaging, but not hearing it being corrected, they're going to assume that this is how things are, and should be, done.
What Next?
ResponderCon was put on by the same folks that have run OSDFCon for quite some time now, and it seems that ResponderCon is something a bit more than just a focused version of OSDFCon. So, the question becomes, what next? What's the next iteration, or topic, or theme?
If you have thoughts, insights, or just ideas you want to share, feel free to do so in the comments, or via social media, and be sure to include/tag Brian.
Sunday, October 11, 2020
#OSDFCON
The agenda for the 11th annual Open Source Digital Forensics Conference has been posted. I've attended OSDFCon before (several times); it's one of the conferences where I've enjoyed presenting over the years. Maybe someone reading this remembers the "mall-wear" incident from a number of years ago.
So, on 18 Nov, I'll be speaking on Effectively Using RRv3. This past spring, I shared some information about this new version of RegRipper (here, and here), as well as highlighting specific plugins. What I'd like to do is, in the same vein as the conference agenda, crowd-source some of the content for my 30 min presentation.
What would you like to see, hear, or learn about during my 30-ish minute presentation regarding RegRipper 3.0?
Thursday, November 07, 2013
Conferences
OMFW
I've always enjoyed the format that Aaron has used for the OMFW, going back to the very first one. That first time, there was a short presentation followed by a panel, and back and forth, with breaks. It was fast-moving, the important stuff was shared, and if you wanted more information, there was usually a web site that you could visit in order to download the tools, etc.
This time around, there was greater focus on things like upcoming updates to Volatility, and the creation of the Volatility Foundation. Also, a presentation by George M. Garner, Jr., was added, so there were more speakers, more variety in topics discussed, and a faster pace, all of which worked out well.
The presentations that I really got the most out of were those that were more akin to use cases.
Sean and Steven did a great job showing how they'd used various Volatility plugins and techniques to get ahead of the bad guys during an engagement, by moving faster than the bad guys could react and getting inside their OODA loop.
Cem's presentation was pretty fascinating, in that it all seemed to have started with a claim by someone that they could hide via a rootkit on Mac OSX systems. Cem's very unassuming, and disproved the claim pretty conclusively, apparently derailing a book (or at least a chapter of the book) in the process!
Jamie's presentation involved leveraging CybOX with Volatility, and was very interesting, as well as well-received.
There was more build-up and hype to Jamaal's presentation than there was actual presentation! ;-) But that doesn't take anything at all from what Jamaal talked about...he'd developed a plugin called ethscan that will scan a memory dump (Windows, Linux, Mac) and produce a pcap. Jamaal pointed out quite correctly that many times when responding to an incident, you won't have access to a pcap file from the incident; however, it's possible that you can pull the information you need out of the memory buffer from the system(s) involved.
What's really great about OMFW is that not only does Aaron get some of the big names that are really working hard (thanks to them!) to push the envelope in this area of study to present, but there are also a lot of great talks in a very short time period. I'll admit that I wasn't really interested in what goes into the framework itself (that's more for the developers), but there were presentations on Android and Linux memory analysis; there's something for everyone. You may not be interested in one presentation, but wait a few minutes...someone will talk about a plugin or a process, and you'll be glued to what they're saying.
Swag this year was a cool coffee mug and Volatility stickers.
Here's a wrap-up from last year's conference. You can keep up on new developments in Volatility, as well as the Volatility training schedule, at the Volatility Labs blog.
OSDFCon
I've attended this conference before, and just as in the past, there is a lot of great information shared, with something for everyone. Personally, I'm more interested in the talks that present how a practitioner used open source tools to accomplish something, solve a problem, or overcome a challenge. I'm not so much interested in academic presentations, nor in talks that simply describe open source tools that folks have developed. As in the past, I'd suggest yet again that there be multiple tracks for this conference...one for academics and developers, and another for practitioners, by practitioners.
As part of full disclosure, I did not attend any of the training or tutorials, and I could not attend all of the presentations.
You can see the program of talks here.
Thoughts and Take Aways
Visualization in DFIR is a sticky point...in some ways, it may be a solution without a problem. Okay, so the recommendation is, "don't use pie charts"...got it. But how does one use visualization techniques to perform analysis, when malware and intrusions follow the principle of Least Frequency of Occurrence? How can a histogram show an analyst when the bad guy or the malware compromised a system, when normal user activity, software and system updates, etc., make up the overwhelming majority of the available activity? Maybe there is a way to take a bite out of this, but I'm not sure that academics can really start to address this until there is a crossover into the practitioner's end of the pool. I only mention this because it's a recurring thought that I have each time I attend this conference.
As Simson pointed out, much of the current visualization occurs after the analyst has completed their examination and is preparing a report, either for a customer or for presentation in court. Maybe that's just the nature of the beast.
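For what it's worth, the Least Frequency of Occurrence idea is easy enough to sketch in code; rather than plotting a histogram that the noise dominates, rank observations rarest-first so the outliers (which is where intrusion artifacts tend to live) surface at the top. This is a generic illustration of the principle, not any particular tool's implementation:

```python
from collections import Counter

def least_frequent(items, limit=10):
    """Rank observations by how often they occur, rarest first.

    On a timeline dominated by routine activity (user logons, updates,
    service starts), the one-off entries are the ones worth a look.
    """
    counts = Counter(items)
    return sorted(counts.items(), key=lambda kv: kv[1])[:limit]
```

Feed it process names, loaded DLL paths, autostart entries, whatever...the rare entries float to the top instead of being buried under the routine bulk of the data.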
Swag this year was a plastic coffee cup for the car with the TSK logo, TSK stickers, and a DVD of Autopsy.
Resources
Link to Kristinn's stuff
Thanks
We should all give a great, big Thank You to everyone involved in making both of these conferences possible. It takes a lot of work to organize a conference...I can only imagine that it's right up there with herding cats down a beach...and providing a forum to bring folks together. So, to the organizers and presenters, to everyone who worked so hard on making these conferences possible, to those who sat at tables to provide clues to the clueless ("...where's the bathroom?")...thank you.
What else
There is another thing that I really like about DFIR-related conferences; interacting with other DFIR folks that I don't get to see very often, and even those who are not directly involved with what we do on a day-to-day basis. Unfortunately, it seems that few folks who attend these conferences want to engage and talk about DFIR topics, but now and again I find someone who does.
In this case, a good friend of mine wanted to discuss "...is what we do a 'science' or an 'art'?" at lunch. And when I say "discuss", I don't mean stand around and listen to others, I mean actively engaging in discussion. That's what a small group of us...there were only four of us at the table...did during lunch on Tuesday. Many times, finding DFIR folks at DFIR conferences that want to actively engage in discussion and sharing of DFIR topics...new malware autostart/persistence mechanisms seen, new/novel uses of tools, etc...is hard to do. I've been to conferences before where, for whatever reason, you just can't find anyone to discuss anything related to DFIR, or to what they do. In this instance, that wasn't the case, and some fascinating discussion ensued.
Saturday, August 18, 2018
Updates
Leave it to MS to make our jobs as DFIR analysts fun, all day, every day! Actually, one of the things I've always found fascinating about analyzing Windows systems is that the version of Windows, more often than not, will dictate how far you're able to go with your analysis.
An interesting artifact that's available on Win10 systems is the notification database, which is where those pop-up messages you receive on the desktop are stored. Over the past couple of months, I've noticed that I get more of these messages on my work computer, because it now ties into Outlook. It turns out that this database is a SQLite database. Lots of folks in the community use various means to parse SQLite databases; one of the popular ways to do this is via Python, and as a result, you can often find either samples via tutorials, or full-on scripts to parse these databases for you.
MalwareMaloney posted a very interesting article on parsing the write-ahead logging (.wal) file for the database. Also, as David pointed out, anytime you're working with a SQLite database, you should consider taking a look at Mari's blog post on recovering deleted data.
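For those who want to poke at the notification database directly, Python's standard sqlite3 module is enough to get started. A minimal sketch; the Notification table and its Id/Type/Payload columns reflect publicly documented copies of wpndatabase.db, so verify the schema against your own copy (via ".schema" in the sqlite3 shell) before relying on it:

```python
import sqlite3

def dump_notifications(db_path):
    """Pull rows from a copy of the Win10 notification database.

    Table and column names are assumptions based on documented copies
    of wpndatabase.db; check them against your own sample first.
    """
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT Id, Type, Payload FROM Notification ORDER BY Id")
        for row_id, ntype, payload in cur:
            # Payload is typically a blob of XML describing the toast/tile
            text = payload.decode("utf-8", "replace") if payload else ""
            yield row_id, ntype, text
    finally:
        conn.close()
```

As always, work from a copy of the database, not the live file.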
RegRipper
Based on input from a user, I updated the sizes.pl plugin in a way that you may find useful; it now displays a brief sample of the data 'found' by the plugin (the default is 48 bytes/characters). So, instead of just finding a value of a certain size (or above) and telling you that it found it, the plugin now displays a portion of the data itself. The method of display is based on the data type...if it's a string, it outputs a portion of the string, and if the data is binary, it outputs a hex dump of the pre-determined length. That length, as well as the minimum data size, can be modified by opening the plugin in Notepad (or any other editor) and modifying the "$output_size" and "$min_size" values, respectively.
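The display logic described above is easy to sketch in a few lines; this is an illustration of the idea, not the plugin's actual Perl code:

```python
import binascii

def data_sample(data, size=48):
    """Sketch of the sizes.pl display logic: show the first `size`
    characters of a string value, or a hex rendering of the first
    `size` bytes of a binary value, with '...' marking truncation."""
    if isinstance(data, str):
        return data[:size] + ("..." if len(data) > size else "")
    return (binascii.hexlify(data[:size]).decode()
            + ("..." if len(data) > size else ""))
```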
Here is some sample output from the plugin, run against a Software hive known to contain a malicious Powershell script:
sizes v.20180817
(All) Scans a hive file looking for binary value data of a min size (5000)
Key : \4MX64uqR Value: Dp8m09KD Size: 7056 bytes
Data Sample (first 48 bytes) : aQBmACgAWwBJAG4AdABQAHQAcgBdADoAOgBTAGkAegBlACAA...
From here, I'd definitely pivot on the key name ("4MX64uqR"), looking into a timeline, as well as searching other locations in the Registry (auto start locations??) and file system for references to the name.
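As an aside, value data like the sample above is a dead giveaway: base64-encoded UTF-16LE, the form PowerShell's -EncodedCommand switch expects. A couple of lines will confirm it:

```python
import base64

# First 48 characters of the value data shown above
sample = "aQBmACgAWwBJAG4AdABQAHQAcgBdADoAOgBTAGkAegBlACAA"
decoded = base64.b64decode(sample).decode("utf-16-le")
print(decoded)  # → "if([IntPtr]::Size " (truncated mid-script)
```

The "[IntPtr]::Size" check is a common opener for PowerShell shellcode loaders determining whether they're on a 32- or 64-bit host.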
Interestingly enough, while working on updating this plugin, I referred back to pg 34 of Windows Registry Forensics for the table of value types. Good thing I keep my copy handy for just this sort of emergency. ;-)
Mari has an excellent example of how she has used this plugin in actual analysis here.
IWS
Speaking of books, Investigating Windows Systems is due out soon. I'm really looking forward to this one, as it's a different approach altogether from my previous books. Rather than listing the various artifacts that are available on Windows systems, folks like Phill Moore, David Cowen and Ali Al-Shemery graciously allowed me access to the images that they put together so that I could work through them. The purpose of the book is to illustrate a way of stringing the various artifacts together into a full-blown investigation, with analysis decisions called out and discussed along the way.
What I wanted to do with the book is present something more of a "real world" analysis approach. Some of the images came with 30 or more questions that had to be answered as part of the challenge, and in my limited experience, that seemed a bit much.
The Github repo for the book has links to the images used, and for one chapter, has the code I used to complete a task. Over time, I may add other bits and pieces of information, as well.
OSDFCon
My submission for OSDFCon was accepted, so I'll be at the conference to talk about RegRipper, and how you can really get the most out of it.
Here is the list of speakers at the conference...I'm thinking that my speaker bio had something to do with me being selected. ;-)
Monday, November 23, 2020
Speaking at Conferences, 2020 edition
As you can imagine, 2020 has been a very "different" year, for a lot of reasons, and impacts of the events of the year have extended far and wide. One of the impacts is conference attendance, and to address this, several conferences have opted to go fully virtual.
The Open Source Digital Forensics Conference (OSDFCon) is one such conference. You can watch this year's conference via YouTube, or view the agenda with presentation PDFs. Brian and his team (and his hair!) at BasisTech did a fantastic job of pulling together speakers and setting up the infrastructure to hold this conference completely online this year.
Speakers submitted pre-recorded presentations, and then during the time of their presentation, accessed the Discord channel set up for their talk in order to answer questions and generally interact with those viewing the presentation.
I've attended (in person) and spoken at this conference in the past, and I've thoroughly enjoyed the mix of presentations and attendees. This time around, presenting was very different, particularly given that I wasn't doing so in a room filled with people. I tend to prefer speaking and engaging in person, as well as observing micro-expressions and using those to draw people out, as more often than not, what they're afraid to say or ask is, in reality, extremely impactful and insightful.
In many ways, an online virtual conference is no different from an in-person event. In both cases, you're going to have your vocal folks who overwhelm others. A good example of this was the Discord channel for my talk; even before I logged in for the presentation, someone had already posted a comment about textbooks for DFIR courses. I have to wonder, much like an "IRL" conference, how many folks were in the channel but were afraid to make a statement or ask a question.
Overall, I do think that the pandemic will have impacts that extend far beyond the widespread distribution of a vaccine. One thought is that this is an interesting opportunity for those doing event planning to re-invent what they do, if not their industry. Even after we move back to in-person meetings and conferences, there will still be significant value in holding virtual or hybrid events, and planning for such an event to be seamless and easy to access for the target audience will likely become an industry unto itself.
Addendum, 24 Nov: Here is the link to the video for my presentation.
Other videos:
Video for Brian's RDPiece presentation
Asif's Investigating WSL presentation
Linux Forensics for IoT
Addendum, 27 Nov: This morning, I uploaded my slides for the OSDFCon and Group-IB CyberCrimeCon 2020 presentations.
Sunday, October 28, 2018
Updates
While I was attending OSDFCon, I had a chance to (finally!) meet and speak with Jessica Hyde, a very smart and knowledgeable person, former Marine, and an all-around very nice lady. As part of the conversation, she shared with me some of her thoughts regarding IWS, which is something I sincerely hope she shares with the community. One of her comments regarding the book was that the price point put it out of reach for many of her students; I shared that with the publisher, and received the following as a response:
I’m happy to pass on a discount code that Jessica and her students, and anyone else you run across, can use on our website (www.elsevier.com) for a 30% discount AND we always offer free shipping. The discount code is: FOREN318.
What this demonstrates is that if you have a question, thought, or comment, share it. If action needs to or can be taken, someone will do so. In this case, my concern is the value of the book content to the community, and Jessica graciously shared her thoughts with me, and as a result, I did what I could to try and bring the book closer to where others might have an easier time purchasing it.
So how can you share your thoughts? Write a blog post or an email. Write a review of the book, and specify what you'd like to see. What did you find good, useful or valuable about the book content, and what didn't you like? Write a review and post it to the Amazon page for the book, or to the Elsevier page; both pages provide a facility for posting a review.
Artifacts of Program Execution
Adam recently posted a very comprehensive list of artifacts indicative of program execution, in a manner similar to many other blogs and even books, including my own. A couple of take-aways from this list include:
- Things keep changing with Windows systems. Even as far back as Windows XP, there were differences in artifacts, depending upon the Service Pack. In the case of the Shim Cache data, there were differences in data available on 32-bit and 64-bit systems. More recently, artifacts have changed between updates to Windows 10.
- While Adam did a great job of listing the artifacts, something analysts need to consider is the context available from viewing multiple artifacts together, as a cluster, as you would in a timeline. For example, let's say there's an issue where when and how Defrag was executed is critical; creating a timeline using the user's UserAssist entries, the timestamps available in the Application Prefetch file, and the contents of the Task Scheduler Event Log can provide a great deal of context to the analyst. Do not view the artifacts in isolation; seek to use an analysis methodology that allows you to see the artifacts in clusters, for context. This also helps in spotting attempts by an adversary to impede analysis.
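The clustering idea is straightforward to sketch: merge per-artifact event lists into one time-ordered view. The timestamps and descriptions below are invented for illustration:

```python
import heapq

def merge_timeline(*sources):
    """Merge per-artifact event lists into one time-ordered timeline.

    Each source is a list of (timestamp, source_name, description)
    tuples already sorted by timestamp; heapq.merge keeps the merge
    efficient no matter how many sources are added.
    """
    return list(heapq.merge(*sources, key=lambda e: e[0]))

# Hypothetical artifact cluster around a Defrag execution
userassist = [(1540700000, "UserAssist", "defrag.exe run count 1")]
prefetch   = [(1540700005, "Prefetch", "DEFRAG.EXE-xxxx.pf last run")]
tasksched  = [(1540699990, "TaskScheduler", "task started: Defrag")]
for ts, src, desc in merge_timeline(userassist, prefetch, tasksched):
    print(ts, src, desc)
```

Seeing the three sources interleaved is what provides the context; any one of them alone is far easier to misinterpret.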
So, take-aways...know the version of Windows you're working with because it is important, particularly when you ask questions, or seek assistance. Also, seek assistance. And don't view artifacts in isolation.
Artifacts and Evidence
A while back (6 1/2 yrs ago), I wrote about indirect and secondary artifacts, and included a discussion of the subject in WFA 3/e.
Chris Sanders recently posted some thoughts regarding evidence intention, which seemed to me to be along the same thought process. Chris differentiates intentional evidence (i.e., evidence generated to attest to an event) from unintentional evidence (i.e., evidence created as a byproduct of some non-attestation function).
Towards the end of the blog post, Chris lists six characteristics of unintentional evidence, all of which are true. To his point, not only may some unintentional evidence have multiple names, it may be called different things by the uninitiated, or those who (for whatever reason) choose to not follow convention or common practice. Consider NTFS alternate data streams, as an example. In my early days of researching this topic, I found that MS themselves referred to this artifact as both "alternate" and "multiple" data streams.
Some other things to consider, as well...yes, unintentional evidence artifacts often are quirky and have exceptions, which means they are very often misunderstood and misinterpreted. Consider the Shim Cache example from Chris's blog post; in my experience, this is perhaps the most commonly misinterpreted artifact to date, for the simple fact that the time stamps are commonly referred to as the "date of execution". Another aspect of this artifact is that it's taken as standalone, and it should not be...there may be evidence of time stomping occurring prior to the file being included as a Shim Cache record.
Finally, Chris is absolutely correct that many of these artifacts have poor documentation, if they have any at all. I see this as a shortcoming of the community, not of the vendor. The simple fact is that, as a community, we're so busy pushing ahead that we aren't stopping to consider the value to the community as a whole that we're leaving behind. Yes, the vendor may poorly document an artifact, or the documentation may simply be part of the source code that we cannot see, but what we're not doing as a community is documenting and sharing our findings. There've been too many instances during my years doing DFIR work when I would share something with someone who would respond with, "oh, yeah...we've seen that before", only to have no documentation, not even a Notepad file or something scribbled on a napkin, to refer me to. This is a loss for everyone.
Tuesday, May 13, 2014
Links

Tim Vidas has posted OpenLV, an update to the popular LiveView tool that many of us have used before. When conducting an investigation, there are a number of ways to access acquired images, such as via any number of analysis frameworks (DFF, ProDiscover, Autopsy, etc.) that provide a great deal of functionality for interacting with data. There are tools for mounting an acquired image as a read-only volume (FTK Imager, etc.), but OpenLV allows you to boot the acquired image. This can provide a great deal of visibility into the system, allowing the investigator to see what the intruder saw, interact with the system the way the intruder interacted with it, and even verify malware autostart functionality.
Be sure to check out the DFRWS Proceedings, written by Tim, Matthew Geiger, and Brian Kaplan.
EVTXtract
The other day I was answering a question about Windows Event Log analysis, and I ran across Willi Ballenthin's tool, EVTXtract (PDF here). This tool allows an analyst to recover deleted Windows Event Log records. The Windows Event Log (.evtx) files follow a binary structure that's much different from the Event Log (.evt) files on Windows XP and 2003, but deleted records can apparently be recovered, at least in some cases.
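The core of record carving like this is scanning for known signatures; .evtx files are built from chunks that begin with the magic bytes "ElfChnk\x00". A minimal sketch of that first step (EVTXtract itself does far more, including reconstructing the records within each chunk):

```python
def find_evtx_chunks(data):
    """Scan a byte blob (e.g., unallocated space) for EVTX chunk
    signatures. Each chunk in a .evtx file begins with the magic
    b"ElfChnk\x00"; finding those offsets is the first step in
    recovering deleted event log records."""
    magic = b"ElfChnk\x00"
    offsets = []
    pos = data.find(magic)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(magic, pos + 1)
    return offsets
```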
ThunderBird Parser
Mari has shared her ThunderBird Parser. Her blog post has some great information...she talks about what issue she faced and how she chose to address it by writing her own code. Doing this not only helped her understand the underlying data on a much more intimate level, but it also opened that understanding up to other analysts.
Conferences
My conference attendance changed recently, and I am no longer a member of Suzanne Widup's author panel at the SANS DFIR Summit in Austin, TX. I was really looking forward to speaking on the panel (I've written a book or two), and discussing various topics around writing DFIR books. In fact, we'd already started addressing some questions in my blog, and I was really looking forward to hearing and addressing other questions.
Brian Carrier has opened up the call for papers for the OSDFCon, to be held in Herndon, VA, on 5 Nov. This has always been a great conference to attend (see here), and needs more practitioners to submit presentations. In fact, I've recommended to Mari that she submit to the conference to give a presentation on the ThunderBird email parser, or any of the other tools she's written. I've already submitted two presentation ideas.
I'm also looking for thoughts and ideas on other conferences with a CfP I can submit to. CEIC is out because it's already come and gone. If anyone has any thoughts regarding a conference (or conferences) specific to DFIR, including topics on addressing targeted threats, I'd greatly appreciate it if you'd comment here or drop me an email. Thanks.
Monday, December 29, 2014
Final Post of 2014

I wanted to thank two people in particular for their contributions to the DFIR field during 2014. Both have exemplified the best in information sharing, not just in providing technical content but also in providing content that pushes the field toward better analysis processes.
Corey's most recent blog post continues his research into process hollowing, incorporating what he's found with respect to the Poweliks malware. If you haven't taken a good look at his blog post and incorporated this into your analysis process yet, you should strongly consider doing so very soon.
Maria's post on time stomping was, as always, very insightful. Maria doesn't blog often but when she does, there's always some great content. I was glad to see her extend the rudimentary testing I'd done and blogged about, particularly because very recently, I'd seen an example of what she'd blogged about during an engagement I was working on.
Maria's also been getting a lot of mileage out of her Google cookies presentation, which I saw at the OSDFCon this year. If you haven't looked at the content of her presentation, you really should. In the words of Hamlet, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy", and I'm sure Maria was saying, "There are more things in a Windows image than are dreamt of in your timeline."
Tying both Corey and Maria's contributions together, I was doing some analysis recently regarding a particular malware variant that wrote its files to one location, copied them to another, time stomped those files, and injected itself into the svchost.exe process. This variant utilized the keystroke logging capability of the malware, and the keystroke log file was re-stomped after each successive update. It was kind of like an early nerd Christmas gift to see what two well respected members of the community had talked about right there, in the wild. In the words of one of my favorite characters, "Fascinating."
Volatility
The year would not be complete without a huge THANK YOU to the Volatility folks for all they do, from the framework, to the book, to the training class. 2014 saw me not only attending the course, but also receiving a copy of the book.
Shellbags
On the whole, it might be fair to refer to 2014 (maybe just the latter half) as the "Year of the Shellbag Research". Eric Zimmerman (Shellbag Explorer), Willi Ballenthin, Dan Pullega, and Joachim Metz should be recognized for the work they've been putting into analyzing and documenting shellbags. To learn more about what Eric and others have done to further the parsing and analysis of shellbags, be sure to check out David Cowen's Forensic Lunch podcasts (28 Nov, 12 Dec).
TriForce
Speaking of David Cowen, I still think that TriForce is a great example of the outcome of research in the field of forensic analysis. Seriously. I don't always use things like the USN change journal in my analysis...sometimes, quite simply, it's not applicable...but when I have incorporated it into a timeline (by choice...), the data has proved to be extremely valuable and illuminating.
There are many others who have made significant contributions to the DFIR field over the past year, and I'm sure I'm not going to get to every one of them, but here are a few...
Ken Johnson has updated his file history research.
Basis Technology - Autopsy 3.1
Didier Stevens - FileScanner
Foxton Software - Free Tools
James Habben - Firefox cache and index parsers
Lateral Movement
Part of what I do puts me in the position of tracking a bad guy's lateral movement between systems, so I'm always interested in seeing what other analysts may be seeing. I ran across a couple of posts on the RSA blog that discussed confirming Remote Desktop Connections (part 1, part 2). I'm glad to see someone use RegRipper, but I was more than a little surprised that other artifacts associated with the use of RDP (either to or from a system) weren't mentioned, such as RemoteConnectionManager Windows Event Log records, and JumpLists (as described in this July, 2013 blog post).
One of the things that I have found...interesting...over time is the number of new sources of artifacts that get added to the Windows operating system with each new iteration. It's pretty fascinating, really, and something that DFIR analysts should really take advantage of, particularly when we no longer have to rely on a single artifact (a login record in the Security Event Log) as an indicator, but can instead look to clusters of artifacts that serve to provide an indication of activity. This is particularly valuable when some of the artifacts within the cluster are not available...the remaining artifacts still serve as reliable indicators.
Finally, as the year draws to a close, here's an update on the WRA 2/e Contest. To date (in over 2 months) there has been only a single submission. I had hoped that the contest would be much better received (no coding required), but alas, it was not to be the case.
Wednesday, February 11, 2015
Tools
If you do have Process Tracking enabled in your Windows Event Log, Willi Ballenthin has released a pretty fascinating tool called process-forest that will parse Windows Security Event Logs (Security.evtx) for process tracking events, and assemble what's found into a process tree, sorting by PID/PPID. Again, if you've enabled Process Tracking in your logging policy, this tool will be very valuable for displaying the information in your logs in a manner that's a bit more meaningful. If you're a consultant (like me) then having this tool as an option, should the client have the appropriate audit configuration, can provide a quick view of available data that may be very beneficial.
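The basic idea behind process-forest is simple to sketch: event ID 4688 records carry the new process ID, the creator process ID, and the image name, which is enough to reconstruct parentage. The events below are invented for illustration:

```python
from collections import defaultdict

def process_tree(events):
    """Render (pid, ppid, image) process-creation events as an
    indented tree. A sketch of the idea behind process-forest, not
    that tool's actual code."""
    parent = {pid: ppid for pid, ppid, _ in events}
    name = {pid: image for pid, _, image in events}
    kids = defaultdict(list)
    for pid, ppid in parent.items():
        kids[ppid].append(pid)

    def walk(pid, depth):
        lines = ["  " * depth + f"{name[pid]} ({pid})"]
        for child in sorted(kids[pid]):
            lines += walk(child, depth + 1)
        return lines

    # Roots are processes whose parent was never seen in the log
    roots = sorted(p for p in parent if parent[p] not in parent)
    out = []
    for r in roots:
        out += walk(r, 0)
    return out

events = [(400, 4, "services.exe"), (500, 400, "svchost.exe"),
          (600, 500, "evil.exe")]
print("\n".join(process_tree(events)))
```

Seeing evil.exe hanging off an unexpected parent is exactly the sort of thing that's obvious in a tree and invisible in a flat log listing.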
Willi has also released a Python script for parsing AmCache Registry hive files, which were new to Windows 8, and are available in Windows 10. To get more of an understanding of the information available in this hive file that was first available on Windows 8, check out Yogesh's blog post here, with part 2 here. RegRipper has had an amcache.pl plugin for over a year.
After reading Jon Glass's blog post on parsing the IE10+ WebCacheV01.dat web history file, I used his code as the basis for creating a script similar to destlist.py, in that I can now parse the history information from the file and include it in my analysis. This can be very helpful if I need to incorporate it into a timeline, or if I just want to take a look at the information separately. Thanks to Jon for providing the example code, and to Jamie Levy/@gleeda for helping me parse out the last accessed time stamp information. Don't expect anything spectacularly new from this code, as it's based on Jon's code...I just needed something to meet my needs.
The NCC Group has released a tool called "Windows Activity Logger", which produces (per the description on the web site), a three hour rolling window of insight into system activity by recording process creation, along with thread creation, LoadImage events, etc. The free version of the tool allows you to run it on up to 10 hosts. I'm not sure how effective a "3 hr rolling window" is for some IR engagements (notification occurs months after the fact) but it's definitely a good tool for testing within a lab environment. I can also see how this can be useful if you have some sort of alerting going on, so that you're able to respond within a meaningful time, in order to take advantage of the available data.
I was doing some reading recently regarding CrowdStrike's new modules in their CrowdResponse tool to assist with collecting application execution information from hosts. Part of this included the ability to parse SuperFetch files. As I dug into it a bit more, I ran across the ReWolf SuperFetch Dumper (read about the tool here).
Speaking of Windows 10, there was a recent post to an online forum in which the OP stated that he'd seen something different in the DestList stream of Windows 10 *.automaticDestinations-ms Jump Lists. I downloaded the Windows 10 Technical Preview, installed it in VirtualBox, and ran it. I then extracted the two Jump List files from the appropriate folder and started looking at a hex view of their DestList streams. Within pretty short order, I began to see that many of the offsets that I'd identified previously were the same as they were for Windows 7 and 8, so I ran my destlist.py/.exe tool, and found that they were pretty much the same...the tool worked just fine, exactly as expected. As yet, there's been nothing specific from the OP about what they'd seen that was different, but it's entirely possible. Whenever a new version of Windows comes out, DFIR folks seem to immediately ask, "...what's new?" Why not instead focus on what's the same? There seem to be more artifacts that don't change much between versions than there are wildly new structures and formats. After all, the OLE/structured storage format used by the Jump List files has been around for a very long time.
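One thing that hasn't changed across those versions is the timestamp format itself; DestList entries, like so many other Windows artifacts, store 64-bit FILETIME values (100-nanosecond ticks since 1601-01-01 UTC). A quick conversion sketch:

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_dt(ft):
    """Convert a Windows FILETIME (100ns ticks since 1601-01-01 UTC)
    to a Python datetime; integer-divide by 10 to get microseconds."""
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

# 116444736000000000 ticks is exactly the Unix epoch
print(filetime_to_dt(116444736000000000))  # → 1970-01-01 00:00:00+00:00
```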
Resources
Dell SecureWorks Tools
WindowsIR: FOSS Tools
Loki - Simple IOC Scanner
Writing Tools
I mentioned during my presentation at OSDFCon that tools like RegRipper come from my own use cases. I've discussed my motivation for writing DFIR books, but I've never really discussed why I write tools.
Why do I write my own tools?
First, writing my own tools allows me to become more familiar with the data itself. Writing RegRipper put me in a position to become more familiar with Registry data. Writing an MFT parser got me much more familiar with the MFT, and forced me to look really hard at some of the short descriptions in Brian's book (File System Forensic Analysis).
Sometimes, I want/need a tool to do something specific, and there simply isn't something available that meets my immediate need, my current use case. Here's an example...I was once asked to take a look at an image acquired from a system; during the acquisition process, there were a number of sector errors reported, apparently. I was able to open the image in FTK Imager, but could not extract a directory listing, and TSK fls.exe threw an error and quit before any output was generated. I wanted to see if I could add file system metadata to a timeline...I was able to use both FTK Imager and TSK icat.exe to extract most of the $MFT file. Using a Perl script I'd written for parsing the MFT, I was able to incorporate some file system metadata into the timeline...this was something I was not able to do with other tools.
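Getting started with an MFT parser is less daunting than it sounds; every FILE record opens with a fixed header. A minimal sketch, with offsets per the documented FILE record header (see Brian's book for the authoritative layout):

```python
import struct

def mft_record_info(record):
    """Pull a few header fields from an MFT FILE record.

    Offsets follow the documented FILE record header: signature at
    0x00, sequence number at 0x10, flags at 0x16 (0x01 = record in
    use, 0x02 = directory). A sketch, not a full parser...attribute
    walking and fixup values are where the real work starts.
    """
    if record[0:4] != b"FILE":
        return None  # not a valid (or a corrupted) record
    seq, = struct.unpack_from("<H", record, 0x10)
    flags, = struct.unpack_from("<H", record, 0x16)
    return {
        "sequence": seq,
        "in_use": bool(flags & 0x01),
        "directory": bool(flags & 0x02),
    }
```

Even this much is enough to walk an extracted $MFT 1024 bytes at a time and count allocated versus deleted records.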
Why do I share the tools I write?
I share the tools I write in the hopes that others will find them useful, and provide feedback or input as to how the tools might be more useful. However, I know that this is not why people download and use tools. So, rather than expecting feedback, I now put my tools up on GitHub for two reasons; one is so that I can download them for my own use, regardless of where I am. Two, there are a very small number of analysts who will actually use the tools and give me their feedback, so I share the tools for them.
One of the drawbacks of sharing free tools is that those who use them have no "skin in the game". The tools are free, so it's just as easy to delete them or never use them as it is to download them. However, there's nothing in the freely available tools that pushes or encourages those who use them to "develop" them further. Now, I do get "it doesn't work" emails every now and then, and when I can get a clear, concise description of what's going on, or actual sample data to test against, I can see if I can figure out what the issue is and roll out/commit an update. However, I have also heard folks say, "we couldn't get it to work"...and nothing else. Someone recently told me that the output of "recbin -d dir -c" was "messed up", and it turns out that what they meant was that the time stamp was in Unix epoch format.
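For what it's worth, when a tool emits Unix epoch values, making them human-readable is a one-liner; no change to the tool is required:

```python
from datetime import datetime, timezone

# A Unix epoch value is seconds since 1970-01-01 UTC; the 86400 here
# is just an example value (exactly one day after the epoch)
print(datetime.fromtimestamp(86400, tz=timezone.utc))  # → 1970-01-02 00:00:00+00:00
```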
Similarly, those who incorporate free tools into their distributions or courses seem to rarely contribute to the development and extension of those tools. I know that RegRipper is incorporated into several *nix-based forensic tools distributions, as well as used in a number of courses, and some courses incorporate scenarios and labs into their coursework; yet, it's extremely rare to get something from one of those groups that extends the use of the tool (i.e., even ideas for new or updated plugins).
I am very thankful to those folks who have shared data; however, it's been limited. The only way to expand and extend the capabilities of tools like RegRipper and others is to use them thoughtfully, thinking critically, and looking beyond the output to see what other data may be available. If this isn't something you feel comfortable doing, providing the data may be a better way to approach it, and may result in updates much faster.
Why do I write multiple tools?
I know that some folks have difficulty remembering when and how to use various tools, and to be honest, I get it. There are a lot of tools out there that do various useful things for analysts. I know that writing multiple tools means that I have to remember which tool to run under which circumstance, and to help me remember, I tend to give them descriptive names. For example, for parsing Event Log/*.evt files from XP/2003 systems, I called the tool "evtparse.pl", rather than "tool1.pl".
My rationale for writing multiple tools has everything to do with my use case, and the goals of my analysis. In some cases, I may be assisting another analyst, and may use a single data source (a single Event Log file, or multiple files) in order to collect or validate their findings. In such cases, I usually do not have access to the full image, and it's impractical for whomever I'm working with to share the full image. Instead, I'll usually get Registry hives in one zipped archive, and Windows Event Logs in another.
In other cases, I may not need all of the data in an acquired image in order to address my analysis goals. For example, if the question I'm trying to answer is, "did someone access this Win7 via Terminal Services Client?", all I need is a limited amount of data to parse.
Finally, if I'm giving a presentation or teaching a class, I would most likely not want to run a full application multiple times, for each different data source that I have available.
Tool Requests
Every now and then, I get requests to create a tool or to update some of the tools I've written. A while back, a good friend of mine reached out and asked me to assist with parsing Facebook chat messages that had been parsed out of an image via EnCase...she wanted to get them all parsed and reassembled into a complete conversation. That turned out to be pretty fun, and I had an initial script turned around in about an hour, with a final polished script finished by the end of the day (about four hours).
One tool I was asked to update is recbin, something I wrote to parse both XP/2003 INFO2 files as well as the $I* files found in the Vista+ Recycle Bin. I received a request to update the tool to point it to a folder and parse all of the $I* files in that folder, but I never got around to adding that bit of code. However, when that person followed up with me recently, it took all of about 5 minutes of Googling to come up with a batch file that would help with that issue...
@echo off
echo Searching %1 for new $I* files...
rem %~1 strips any quotes from the argument; re-quoting handles paths with spaces
for %%F in ("%~1\$I*") do (recbin -f "%%F")
This isn't any different from Corey Harrell's auto_rip script. My point is that getting the capabilities and functionality out of the tools you have available is often very easy. After I sent this batch file to person who asked about it, I was asked how the output could be listed in CSV format, so I added the "-c" switch to the recbin command, and sent the "new" batch file back.
Tool Updates
Many times, I don't get a request for new capabilities in a tool; instead, I find something interesting, and based on what I read, I update one of the tools myself. A great example of this is Brian Baskin's DJ Forensic Analysis blog post; the post was published on 11 Nov, and on the morning of 12 Nov, I wrote three RegRipper plugins, did some quick testing (with the limited data that I had available), and committed the new plugins to the GitHub repository...all before 8:30am. The three plugins can be used by anyone doing analysis to validate Brian's findings, and then hopefully expand upon them.
Sunday, September 19, 2021
Distros and RegRipper
Over the years, every now and then I've taken a look around to try to see where RegRipper is used. I noticed early on that it's included in several security-oriented Linux distros. So, I took the opportunity to compile some of the links I'd found, and I then extended those a bit with some Googling. I will admit, I was a little surprised to see how far RegRipper has gone over time, from a "here, look at this" perspective.
Not all of the below links are current; some are several years old. As such, they are not the latest and greatest; however, they may still apply and they may still be useful/valuable.
RegRipper on Linux (Distros)
Kali, Kali GitLab
SANS SIFT
CAINE
Installing RegRipper on Linux
Install RRv2.8 on Ubuntu
CentOS RegRipper package
Arch Linux
RegRipper Docker Image
Install RegRipper via Chocolatey
Forensic Suites
Something I've always been curious about is why incorporating RegRipper into (and maintaining it through) a forensic analysis suite isn't more of "a thing"; even so, that hasn't prevented RegRipper and tools like it from being extremely valuable in a wide range of analyses.
RegRipper is accessible via Autopsy
OSForensics Tutorial
Launching RegRipper via OpenText/EnCase
When I worked for Nuix, I worked with Dan Berry's developers to build extensions for Yara and RegRipper (Nuix RegRipper Github) giving users of the Workstation product access to these open source tools in order to extend their capabilities. While both extensions really do a great deal to leverage the open source tool for use by the investigator, I was especially happy to see how the RegRipper extension turned out. The extension would automatically locate hive files, regardless of the Windows version (including the AmCache.hve file), automatically run the appropriate plugins against the hive, and then automatically incorporate the RegRipper output into the case file. In this way, the results were automatically incorporated into any searches the investigator would run across the case. During testing, we added images of Windows XP, Windows 2008 and Windows 7 systems to a case file, and the extension ran flawlessly.
It seems that RegRipper (as well as other tools) has been incorporated into KAPE, particularly into the Registry and timelining modules. This means that whether you're using the free version of KAPE, or you're using the enterprise license, you're likely using RegRipper and other tools I've written, to some extent.
I look back on this section, and I really have to wonder why, given how I've extended RegRipper since last year, there is no desire to incorporate RegRipper into (and maintain it through) a commercial forensic analysis suite. Seriously.
Presentations/Courses
I've covered RegRipper as a topic in this blog, as well as in my books. I've also given presentations discussing the use of RegRipper, as have others. Here are just a few links:
OSDFCon 2020 - Effectively Using RegRipper (video)
PluralSight Course
RegRipper in Academia
Okay, I don't have a lot of links here, but that's because there were just so many. I typed "site:edu RegRipper" into a Google search and got a LOT of hits back; rather than listing the links, I'm just going to give you the search I ran and let you do with it what you will. Interestingly, the first link in the returned search results was from my alma mater, the Naval Postgraduate School; specifically, Jason Shaver's thesis from 2015.
Monday, June 30, 2014
RegRipper

Some folks have reached out to me recently and said, "I have the most recent download...", and that's apparently not been the case. I left the Google Code page for RegRipper populated in part because there is some information that I put in the Wiki pages that I still want to be able to access.
Just a note...if you think that the download link is broken, be sure to check to see if your infrastructure allows access to Dropbox.
If you want to see what's going to be new with RegRipper, be sure to vote for the presentation at OSDFCon.
Thursday, May 02, 2019
What's New...
The folks over at Magnet Forensics have several free tools available, which were discussed recently on the 13Cubed YouTube channel.
Analysis
Okay, so parsers abound (more on that later), but what then? Parsing is great, but that's just the first step toward analysis, and the true value of a tool, regardless of where it comes from, is how it integrates into your analysis process. Parsing is not analysis.
On a bit of a side note on that topic, IWS was released last fall, and I'm still somewhat curious to see how it's received by the community. In the book, I exposed my analysis process, in a manner that anyone can follow along with; with the exception of the first image (which seems to be no longer accessible), anyone can download the images, and perform their own analysis. This is a complete departure from my previous work; in all of my previous books, I'd followed a pretty standard formula...here's an artifact, here's how to parse it, here's what it can tell you...but like other works, I'd pretty much left it to the reader to figure out how to stitch everything together into a cohesive investigation. IWS is my first real shot at doing that, going beyond just throwing up my findings into a blog post. Yes, I've been told, "...it's good...", and "...I like it...", but like a brewer who's stepped outside their comfort zone and tried something radically different from their previous approach, I'm really curious to better understand what readers really think about the content of the book, how it impacts them, and how it shapes the way they approach their own analysis.
JumpList AppID
Not long ago, I saw a question about an Automatic JumpList application ID; the OP was asking which application the AppID referred to. I did a quick Google search and found no reference to that specific AppID, but it did get me thinking...how can someone go about determining the AppID of a JumpList, when said AppID is not publicly listed?
I came up with two means for doing so, and in hindsight, they are not mutually exclusive; that is, one supports the other.
The first method would be to parse the JumpList and get a list of files that it points to; you can get this information from both the individual LNK streams within the JumpList file, as well as the DestList stream. From there, you could then check the Registry for file associations; that is, which application is associated with the listed file's extensions.
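The first method amounts to tallying file associations against the files the JumpList points to. A minimal Python sketch of that tally, assuming you've already exported the relevant extension-to-ProgID and ProgID-to-application mappings from HKEY_CLASSES_ROOT (a tool like RegRipper, or winreg on a live Windows box, would provide these; the dictionary values in the usage below are hypothetical), might look like:

```python
from collections import Counter
from pathlib import PureWindowsPath

def likely_app(file_paths, ext_to_progid, progid_to_exe):
    """Given file paths recovered from a JumpList's LNK/DestList streams,
    tally which associated application accounts for the most files.
    ext_to_progid and progid_to_exe mirror HKEY_CLASSES_ROOT data,
    e.g. {".txt": "txtfile"} and {"txtfile": "notepad.exe"}."""
    tally = Counter()
    for p in file_paths:
        ext = PureWindowsPath(p).suffix.lower()   # normalize ".TXT" -> ".txt"
        exe = progid_to_exe.get(ext_to_progid.get(ext))
        if exe:
            tally[exe] += 1
    return tally.most_common()
```

If most of the files the JumpList points to resolve to the same application, that application is a strong candidate for the unknown AppID.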
The second method would be to create a timeline of system activity, and include the JumpList DestList stream as a data source. With additional information, such as UserAssist and BAM entries, RecentDocs values, Prefetch metadata, etc., you should be able to 'see' applications that were launched prior to at least some of the files in the JumpList DestList stream being accessed.
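The mechanics of the second method are just a merge-and-sort across artifact parsers. As a rough Python sketch (using the five-field TLN event format of Time|Source|Host|User|Description; the event values below are made up for illustration):

```python
from datetime import datetime, timezone

def tln_line(dt, source, host, user, desc):
    """Render one event as a five-field TLN line, time as a Unix epoch (UTC)."""
    return f"{int(dt.timestamp())}|{source}|{host}|{user}|{desc}"

def build_timeline(events):
    """Merge (datetime, source, host, user, description) events from
    multiple parsers (DestList, UserAssist, BAM, Prefetch, etc.) into
    one chronologically sorted list of TLN lines."""
    return [tln_line(*e) for e in sorted(events, key=lambda e: e[0])]
```

With UserAssist/BAM launch events and DestList access events in the same sorted view, an application launch showing up just before a run of DestList entries points you at the application behind the unknown AppID.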
Presentations
I've got a couple of upcoming presentations, the first one being at RVASec, coming up later this month. I've submitted to OSDFCon this year, as well, but that's in the fall.
A friend asked me to provide a presentation to her forensics class, which I did recently, via Zoom. I resurrected a presentation on Registry analysis, which I thought was appropriate given the audience, and it seemed to be pretty well-received. If anything, I hope that seeds were planted.