The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Wednesday, May 25, 2011
Tools
I've run across a number of tools recently, some directly related to forensics, and others more related to IR or RE work. I wanted to go ahead and put those tools out there, to see what others think...
Memory Analysis
There have been a number of changes recently on the memory analysis front. For example, Mandiant recently released their RedLine tool, and HBGary released the Community Edition of their Responder product.
While we're on the topic of memory analysis tools, let's not forget the venerable and formidable Volatility.
Also, if you're performing memory dumps from live systems, be sure to take a look at the MoonSol Windows Memory Toolkit.
SQLite Tools
CCL-Forensics has a trial version of epilog available for download, for working with SQLite databases (found on smartphones, etc.). One of the most notable benefits of epilog is that it allows you to recover deleted records, which can be extremely useful for analysts and investigators.
I'm familiar with the SQLite Database Browser...epilog would be interesting to try.
MFT Tools
Sometimes you need a tool to parse the NTFS $MFT file, for a variety of reasons. A version of my own mft.pl is available online, and Dave Kovar has provided his analyzemft.pl tool online, as well. Mark McKinnon has chimed in and provided MFT parsing tools for Windows, Linux, and Mac OS X.
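For those curious what these parsers are doing under the hood, here's a minimal sketch in Python: it pulls a few fields from a FILE record header using the documented NTFS offsets. It's illustrative only...not a substitute for mft.pl or analyzemft.pl, which handle the full record and attribute parsing.

```python
import struct

def parse_mft_record_header(record: bytes) -> dict:
    """Pull a few fields from an NTFS FILE record header.

    Offsets follow the published FILE record layout: 'FILE' signature
    at offset 0, sequence number at 16, hard-link count at 18, flags
    at 22 (0x01 = in use, 0x02 = directory).
    """
    if len(record) < 24 or record[0:4] != b"FILE":
        raise ValueError("not a FILE record")
    seq = struct.unpack_from("<H", record, 16)[0]
    links = struct.unpack_from("<H", record, 18)[0]
    flags = struct.unpack_from("<H", record, 22)[0]
    return {
        "sequence": seq,
        "links": links,
        "in_use": bool(flags & 0x0001),       # cleared when the record is deleted
        "is_directory": bool(flags & 0x0002),
    }
```

Run against each 1024-byte record carved from an exported $MFT, something like this lets you quickly separate in-use from deleted entries before digging into the attributes.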
Other Tools
HBGary also made their AcroScrub tool available, which uses WMI to reach across the enterprise and scan for older versions of Adobe Reader.
A very interesting tool that I ran across is Flash Dissector. If you deal with or even run across SWF files, you might want to take a look at this tool, as well as the companion tools in the SWFRETools set.
The read_open_xml.pl Perl script is still available for parsing metadata from Office 2007 documents.
From the same site as the SWFRETools are some malware write-ups including NiteAim, and Downloader-IstBar. As a complete aside, here's a very interesting Gh0stNet writeup that Chris pointed me to recently (fans of Ron White refer to him as "Tater Salad"...fans of Chris Pogue should refer to him as "Beefcake" or "Bread Puddin'"...).
ADSs
Alternate data streams aren't something that you see discussed much these days. I recently received a question about a specific ADS, and thought I'd include some tools in this list. I've used Frank's LADS, as well as Mark's streams.exe. Scanning for ADSs is part of my malware detection process checklist, particularly when the goal of the analysis is to determine if there's any malware on the system.
Also, I ran across this listing at MS of Known Alternate Stream Names. This is some very useful information when processing the output of the above tools, because what often happens is that someone uses one of the above tools and finds one of the listed ADSs, and after the panic that ensues, their attitude switches back to the other side of the spectrum, to apathy...and that's when they're most likely to get hit.
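To make that processing step concrete, here's a small, hypothetical Python filter along those lines: it strips the :$DATA suffix that tools like streams.exe emit, and flags only stream names that aren't on a known-benign list. The list below is a partial, illustrative stand-in for the MS page, not the authoritative listing.

```python
# Partial, illustrative list of benign stream names; see MS's "Known
# Alternate Stream Names" page for the authoritative listing.
KNOWN_BENIGN = {"Zone.Identifier", "encryptable", "OECustomProperty"}

def stream_name(entry):
    """Extract the stream name from a 'path:stream:$DATA' entry, the
    general form emitted by tools like streams.exe."""
    if entry.endswith(":$DATA"):
        entry = entry[:-len(":$DATA")]
    return entry.rpartition(":")[2]

def triage_ads(entries, benign=KNOWN_BENIGN):
    """Return only the entries whose stream names are NOT known-benign."""
    return [e for e in entries if stream_name(e) not in benign]
```

The point isn't the code itself...it's that a known-benign filter keeps you from panicking over Zone.Identifier, without swinging all the way over to apathy.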
Here are some additional resources from Symantec, IronGeek, and MS. Also, be sure to check out what I've written about these in WFA 2/e.
Scanners
Microsoft recently released their Safety Scanner, which is a one-shot micro-scanner...download it, run it, and it expires after 10 days, and then you have to download it again. This shouldn't replace the use of Security Essentials or other AV tools, but I'm pointing this out because it could be very useful when included as part of your malware detection process. For example, you could mount an acquired image via FTK Imager or ImDisk and scan the image. Also, the folks at ForensicArtifacts recently posted on accessing VSCs (their first research link actually goes back to my post by the same title...thanks to AntiForensics for reposting the entire thing...)...without having to have EnCase or PDE, you could easily scan the mounted VSC, as well.
Frameworks
The Digital Forensics Framework (DFF) is open source, and was recently updated to include support for the AFF format, as well as mailbox reconstruction via Joachim Metz's libpff.
Christopher Brown, of TechPathways, has made ProDiscover Basic Edition v6.10.0.2 available, as well. As a side note, Chris recently tweeted that he's just finished the beta of the full version of ProDiscover, adding the ability to image and diff VSCs. Wowzers!
Sites
TZWorks - free "prototypes" tools, including the Windows Shellbags parser, an EVTX file parser, and others. Definitely worth checking out.
WoanWare - several free forensics tools including a couple for browser forensics, and (like TZWorks) a "USBStor parser".
NirSoft - the link to the site goes to the forensics tools, but there are a lot of free tools available at the NirSoft site...too many to list.
The Open Source Digital Forensics site is a good source of tools, as well.
OSDFC
Speaking of tools, let's not forget that the OSDFC is right around the corner...
Addendum
Check out Phil Harvey's EXIFTool (comes with a standalone Windows EXE)...there's a long list of supported file types at the tool page.
Additional lists of tools include Mike's Forensic Tools, as well as the tools at MiTeC (thanks to Anonymous' comment). Also, Mark McKinnon has posted some freely available tools, as well.
Sunday, May 22, 2011
Brain Droppings
NoVA Forensics Meetup
The next NoVA Forensics Meetup will be held on Wed, 1 June 2011, from 7-8:30pm. As to a location, I met with the great folks at Reverse Space, a hacker space in Herndon where some of the folks have an interest in forensics. Thanks to Carl and Richard for taking the time to meet with me, and for offering to host our meetings.
I hope that we get a big turn-out for our currently scheduled presentation, titled "Build your own packet capture engine".
Our meetup in July will be scheduled for Wednesday, 6 July, and we've already got an offer of a presentation regarding setting up virtual machines to use for dynamic malware analysis.
As to further topics, I'd like to get suggestions regarding how we can expand our following; for example, Chris from the NoVA Hackers group told me that they follow the AHA participation model. I'd like the development of this group to be a group effort, and as such will be asking participants and attendees for thoughts, ideas, comments (and to even volunteer their own efforts) regarding how this group can expand. For example, do we need a mailing list or is the Win4n6 Group sufficient? If you have anything that you'd like to offer up, please feel free to drop me a line.
Breakin' In
Speaking of the NoVA Forensics Meetup, at our last meeting, one of our guests asked me how to go about getting into the business. I tried to give a coherent answer, but as with many things, this question is one of those that have been marinating for some time, not just in my brain housing group, but within the community.
From my own perspective, when interviewing someone for a forensics position, I'm most interested in what they can do...I'm not so much interested that someone is an expert in a particular vendor's application. I'm more interested in methodology, process, what problems have you solved, where have you stumbled and what have you learned. In short, are you tied to a single application, or do you fall back to a process or methodology? How do you go about solving problems? When you do something in particular (adding or skipping a step in your process), do you have a reason for doing so?
But the question really goes much deeper than that, doesn't it? How does one find out about available positions and what it really takes to fill them? One way to find available positions and job listings is via searches on Monster and Indeed.com. Another is to take part in communities, such as the...[cough]...NoVA Forensics Meetup, or online communities such as lists and forums.
Breaches
eWeek recently (6 May) published an article regarding the Sony breach, written by Fahmida Rashad, which started off by stating:
Sony could have prevented the breach if they’d applied some fundamental security measures...
Sometimes, I don't know about that. Is it really possible to say that, just because _a_ way was found to access the network, that including these "fundamental security measures" would have prevented the breach?
The article went on to quote Eugene Spafford's comments that Sony failed to employ a firewall, and used outdated versions of their web server. 'Spaf' testified before Congress on 4 May, where these statements were apparently made.
Interestingly, a BBC News article from 4 May indicates that at least some of the data stolen was from an "outdated database".
The eWeek article also indicates (as did other articles) that Data Forte, Guidance Software and Protiviti were forensics firms hired to address the breach.
As an aside, there was another statement made within the article that caught my interest:
“There are no consequences for many companies that under-invest in security,” Philip Lieberman, CEO of Lieberman Software, told eWEEK.
As a responder and analyst, I deal in facts. When I've been asked to assist in breach investigations, I have done so by addressing the questions posed to me through analysis of the available data. I do not often have knowledge of what occurred with respect to regulatory or legislative oversight. Now and again, I have seen news articles in the media that have mentioned some of the fallout of the incidents I've been involved with, but I don't see many of these. What I find interesting about Lieberman's statement is that this is the perception.
The Big Data Problem
I read a couple of interesting (albeit apparently diametrically opposed) posts recently; one was Corey Harrell's Triaging My Way (shoutz to Frank Sinatra) post where Corey talked about focusing on the data needed to answer the specific questions of your case. Corey's post provides an excellent example of a triage process in which specific data is extracted/accessed based on specific questions. If there is a question about the web browsing habits of a specific user, there are a number of specific locations an analyst can go within the system to get information to answer that question.
The other blog post was Marcus Thompson's We have a problem, part II post, which says, in part, that we (forensic analysts) have a "big data" problem, given the ever-increasing volume (and decreasing cost) of storage media. Now, I'm old enough to remember when you could boot a computer off of a 5 1/4" floppy disk, remove that disk and insert the storage disk that held your documents...before the time of hard drives that were actually installed in systems. This wealth of storage media naturally leads to backlogs in analysis, as well as in intelligence collection.
I would suggest that the "big data" problem is particularly an issue in the face of the use of traditional analysis techniques. Traditional techniques applied to Corey's example (above) dictate that all potential sources of media must be collected, and keyword searches run. Wait...what? Well, no wonder we have backlogs! If I'm interested in a particular web site that the user may have visited, why would I run a keyword search across all of the EXEs and DLLs in the system32 directory? While there may be files on the 1TB USB-connected external hard drive, what is the likelihood that the user's web browser history is stored there? And why would I examine the contents of the Administrator (or any other) account profile if it hasn't been accessed in two years?
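As a sketch of the question-driven approach Corey describes, consider a simple mapping from the investigative question to the handful of artifact locations that can actually answer it. The paths below are illustrative XP-era examples I've chosen for the sketch, not an exhaustive checklist.

```python
# Map the investigative question to the artifacts that can answer it,
# rather than sweeping the whole image. Paths are illustrative (XP-era
# profile layout); extend the table to suit your own process.
TRIAGE_MAP = {
    "web history": [
        r"Documents and Settings\{user}\Local Settings\History",
        r"Documents and Settings\{user}\Local Settings\Temporary Internet Files",
        r"Documents and Settings\{user}\NTUSER.DAT",  # e.g., TypedURLs key
    ],
    "usb devices": [
        r"Windows\system32\config\system",  # USBStor subkeys
        r"Windows\setupapi.log",
    ],
}

def artifacts_for(question, user="target_user"):
    """Return the artifact paths relevant to a question, or an empty
    list if the question isn't in the map yet."""
    return [p.format(user=user) for p in TRIAGE_MAP.get(question.lower(), [])]
```

The design choice here is the whole argument: the question drives the collection, so the 1TB external drive never enters the picture unless a question actually points at it.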
Another variant on this issue was discussed, in part, in Mike Viscuso's excellent Understanding APT presentation (at the recent AccessData User's Conference)...the presentation indicates that the threat isn't really terribly "advanced", but mentions that the threat makes detection "just hard enough".
Writing Open Source Tools
This is a topic that came up when Cory and I were working on DFwOST...Cory thought that it would be a good section to add, and I agreed, but for the life of me, I couldn't find a place to put it in the book where it just didn't seem awkward. I still think that it's important, in part because open source tools come from somewhere, but also because I think that a lot more folks out there really have something to contribute to the community as a whole.
To start off, my own motivation for writing open source tools is to simply solve a problem or address something that I've encountered. This is where RegRipper came from...I found that I'd been looking at many of the same Registry keys/values over and over again, and had built up quite a few scripts. As such, I wanted a "better" (that's sort of relative, isn't it??) way to manage these things, particularly when there were so many, and they seemed to use a lot of the same code over and over.
I write tools in Perl because it's widely available and there are a LOT of resources available for anyone interested in learning to use it...even if just to read it. I know the same is true for Python, but back in '98-'99 when I started teaching myself Perl, I did so because the network monitoring guys in our office were looking for folks who could write Perl, and infosec work was as hard for folks to sell back then as forensic analysis is now.
When I write Perl scripts, I (in most cases) try to document the code enough so that someone can at least open the script in Notepad and read the comments to see what the script does. I don't always try for the most elegant solution, reducing the number of keystrokes to accomplish a task, as making the steps available not only lets someone see more clearly what was done, but it also lets someone else modify the code to meet their needs...simply comment out the lines in question and modify the script to meet your own needs.
DFF
Speaking of open source tools, one of the tools discussed in DFwOST is the Digital Forensics Framework, of which version 1.1.0 was recently released. This version includes a couple of updates, as well as a bug fix to the ntfs module. I've downloaded it and got it running nicely on a Windows XP system...great work and a huge thanks to the DFF folks for their work. Be sure to check out the DFF blog for some tips on how you can use this open source forensic analysis application.
Thursday, May 05, 2011
Updates
NoVA Forensics Meetup
Last night's meetup went pretty well...there's nothing wrong with humble beginnings. We had about 16 people show up, and a nice mix of folks...some vets, some new to the community...but it's all good. Sometimes having new folks ask questions in front of those who've done it for a while gets the vets to think about/question their assumptions. Overall, the evening went well...we had some good interaction, good questions, and we gave away a couple of books.
I think that we'd like to keep this on a Wed or Thu evening, perhaps once a month...maybe spread it out over the summer due to vacations, etc. (we'll see). What we do need now is a facility with presentation capability. Also, I don't think that we want to have the presentations fall on just one person...we can do a couple of quick talks of a half hour each, or just have someone start a discussion by posing a question to the group.
Besides just basic information sharing, these can be good networking events for the folks who show up. Looking to add to your team? Looking for a job? Looking for advice on how to "break in" to the business? Just come on by and talk to folks.
So, thanks to everyone who showed up and made this first event a success. For them, and for those who couldn't make it, we'll be having more of these meetups...so keep your eyes out and don't hold back on the thoughts, comments, or questions.
Volatility
Most folks familiar with memory analysis know about the simply awesome work provided through the Volatility project. For those who don't know, this is an open source project, written in Python, for conducting memory analysis.
Volatility now has a Python implementation of RegRipper built-in, thanks to lg, and you can read a bit more about the RegListPlugin. Gleeda's got an excellent blog post regarding the use of the UserAssist plugin.
I've talked a bit in my blog, books, and presentations about finding alternate sources of forensic data when the sources we're looking for (or at) may be insufficient. I've talked about XP System Restore Points, and I've pulled together some really good material on Volume Shadow Copies for my next book. I've also talked about carving Event Log event records from unallocated space, as well as parsing information regarding HTTP requests from the pagefile. Volatility provides an unprecedented level of access to yet another excellent resource...memory. And not just memory extracted from a live running system...you can also use Volatility to parse data from a hibernation file, which you may find within a (laptop) image.

Let's say that you're interested in finding out how long that system has been compromised; i.e., you're trying to determine the window of exposure. One of the sources I've turned to is crash dump logs...these are appended (the actual crash dump file is overwritten) with information about each crash, and include a pslist-like listing of processes. Sometimes you may find references to the malware in these listings, or in the specific details regarding the crashing process. Now, assume that you're looking at a laptop, and find a hibernation file...you know when the file was created, and using Volatility, you can parse that file and find specifics about what processes were running at the time that the system went into hibernation mode.
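To illustrate that last idea, here's a hedged Python sketch: given timestamped, pslist-like process listings (one per crash-dump log entry, or one per hibernation file run through Volatility's pslist), find the earliest sighting of a process name to help bound the window of exposure. The (timestamp, text) input format is an assumption for the sketch, not tied to any particular tool's output.

```python
import re

def earliest_sighting(listings, process_name):
    """listings: iterable of (timestamp, text) pairs, where text is a
    pslist-like process listing and timestamp is any sortable value
    (a datetime, or an ISO-format date string). Returns the earliest
    timestamp whose listing mentions the process name, or None if the
    name never appears."""
    hits = [ts for ts, text in listings
            if re.search(re.escape(process_name), text, re.IGNORECASE)]
    return min(hits) if hits else None
```

If the malware shows up in the April crash dump entry but not March's, you've just narrowed your window of exposure considerably...without touching a full disk timeline.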
And that's not all you can use Volatility for...Andre posted to the SemperSecurus blog about using Volatility to study a Flash 0-day vulnerability.
If you haven't looked at Volatility, and you do have access to memory, you should really consider diving in and giving it a shot.
Best Tool
Lance posted to his blog, asking readers what they consider to be the best imaging and analysis tools. As of the time that I'm writing this post, there are seven comments (several are pretty much just "agree" posts), and even reading through some of the thoughts and comments, I keep coming back to the same thought...that the best tool available to an analyst is that grey matter between their ears.
This brings to mind a number of thoughts, particularly due to the fact that last week I had two opportunities to consider some things for topics of analyst training, education and experience...during one of these opportunities, I was considering the fact that when I (like many other analysts) "came up through the ranks", there were no formal schools available to non-LE analysts, aside from vendor-specific training. Some went that route, but there were others who couldn't afford it. For myself, I took the EnCase v.3.0 Introductory course in 1999...I was so fascinated by the approach taken to file signature analysis that I went home and wrote my own Perl code for this; not to create a tool, per se, but more to really understand what was happening "under the hood". Over the years, knowing how things work and knowing what I needed to look for really helped me a lot...it wasn't a matter of having to have a specific tool as much as it was knowing the process and being able to justify the purchase of a product, if need be.
Breaches
If the recent spate of breaches hasn't yet convinced you that no one is safe from computer security incidents, take a look at this story from The State Worker which talks about the PII/PCI data of 2000 LE retirees being compromised. I know, 2000 seems like such a small number, but hey...regardless of whether its 77 million or 2000, if you're one of those people who's data was compromised, it's everything.
While the story is light on details (i.e., how the breach was identified, when the IT staff reacted in relation to when the incident actually occurred, etc.), if you read through the story, you see a statement that's common throughout these types of announcements; specifically, "...taken steps to enhance security and strengthen [the] infrastructure...". The sequence of events for incidents like this (and keep in mind, these are only the ones that are reported) is, breach, time passes, someone is notified of the breach, then steps are taken to "enhance security". We find ourselves coming to this dance far too often.
Incident Preparedness
Not long ago, I talked about incident preparation and proactive IR...recently, CERT Societe Generale (French CERT) posted a 6 Step IRM Worm Infection cheat sheet. I think that things like this are very important, particularly when the basic steps necessarily assume certain things about your infrastructure. For example, look at step 1 of PDF includes several of the basic components of a CSIRP...if you have all of the stuff outlined in the PDF already covered, then you're almost to a complete CSIRP, so why not just finish it off and formalize the entire thing?
Step 3, Containment, mentions neutralizing propagation vectors...incident responders need to understand malware characteristics in order to respond effectively to these sorts of incidents.
One note about this, and these sorts of incidents...worms can be especially virulent strains of malware, so this applies to malware in general...relying on your AV vendor to be your IR team is a mistake. Incident responders have seen this time and again, and it's especially difficult for folks who do what I do, because we often get called after response efforts via the AV vendor have been ineffective, and have exhausted the local IT staff. I'm not saying that AV vendors can't be effective...what I am saying is that in my experience, throwing signature files at an infrastructure based on samples provided by on-site staff doesn't work. AV vendors are generally good at what they do, but AV is only part of the overall security solution. Malware infections need to be responded to with an IR mindset, not through an AV business model.
Firefighters don't learn about putting out a fire during a fire. Surgeons don't learn their craft during surgery. Organizations shouldn't hope to learn IR during an incident...and the model of turning your response over to an external third party clearly doesn't work. You need to be ready for that big incident...as you can see just from the media, it's a wave on the horizon headed for your organization.
Last night's meetup went pretty well...there's nothing wrong with humble beginnings. We had about 16 people show up, and a nice mix of folks...some vets, some new to the community...but it's all good. Sometimes having new folks ask questions in front of those who've done it for a while gets the vets to think about/question their assumptions. Overall, the evening went well...we had some good interaction, good questions, and we gave away a couple of books.
I think that we'd like to keep this on a Wed or Thu evening, perhaps once a month...maybe spread it out over the summer due to vacations, etc. (we'll see). What we do need now is a facility with presentation capability. Also, I don't think that we want to have the presentations fall on just one person...we can do a couple of quick talks of a half hour each, or just have someone start a discussion by posing a question to the group.
Besides just basic information sharing, these can be good networking events for the folks who show up. Looking to add to your team? Looking for a job? Looking for advice on how to "break in" to the business? Just come on by and talk to folks.
So, thanks to everyone who showed up and made this first event a success. For them, and for those who couldn't make it, we'll be having more of these meetups...so keep your eyes out and don't hold back on the thoughts, comments, or questions.
Volatility
Most folks familiar with memory analysis know about the simply awesome work provided through the Volatility project. For those who don't know, this is an open source project, written in Python, for conducting memory analysis.
Volatility now has a Python implementation of RegRipper built-in, thanks to lg, and you can read a bit more about the RegListPlugin. Gleeda's got an excellent blog post regarding the use of the UserAssist plugin.
I've talked a bit in my blog, books, and presentations about finding alternate sources of forensic data when the sources we're looking for (or at) may be insufficient. I've talked about XP System Restore Points, and I've pulled together some really good material on Volume Shadow Copies for my next book. I've also talked about carving Event Log event records from unallocated space, as well as parsing information regarding HTTP requests from the pagefile. Volatility provides an unprecedented level of access to yet another excellent resource...memory. And not just memory extracted from a live running system...you can also use Volatility to parse data from a hibernation file, which you may find within a (laptop) image. Let's say that you're interested in finding out how long that system has been compromised; i.e., you're trying to determine the window of exposure. One of the sources I've turned to is crash dump logs...these are appended (the actual crash dump file is overwritten) with information about each crash, and include a pslist-like listing of processes. Sometimes you may find references to the malware in these listings, or in the specific details regarding the crashing process. Now, assume that you're looking at a laptop, and find a hibernation file...you know when the file was created, and using Volatility, you can parse that file and find specifics about what processes were running at the time that the system went into hibernation mode.
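To make the crash-dump-log idea concrete, here's a minimal sketch of sweeping the task-list entries those logs append with each crash, flagging anything that isn't in a known-good baseline. The line format and the baseline below are assumptions for illustration (the actual layout varies by Windows version), and "wmiupd.exe" is a hypothetical suspicious name:

```python
import re

# Assumed baseline of expected process names; in practice you'd build this
# from a known-good system of the same Windows version.
KNOWN_GOOD = {"smss.exe", "csrss.exe", "winlogon.exe", "services.exe",
              "lsass.exe", "svchost.exe", "explorer.exe"}

def flag_processes(log_text):
    """Return process names from task-list lines that aren't in our baseline."""
    # Assumed line shape: leading spaces, a PID, then the image name.
    tasks = re.findall(r"^\s*\d+\s+(\S+\.exe)\s*$", log_text,
                       re.IGNORECASE | re.MULTILINE)
    return sorted({t.lower() for t in tasks} - KNOWN_GOOD)

sample = """*----> Task List <----*
 618 svchost.exe
1204 explorer.exe
1588 wmiupd.exe
"""
print(flag_processes(sample))
```

Run this across the crash entries in a log and you get a quick timeline of when an unfamiliar process first shows up...which speaks directly to that window of exposure.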
And that's not all you can use Volatility for...Andre posted to the SemperSecurus blog about using Volatility to study a Flash 0-day vulnerability.
If you haven't looked at Volatility, and you do have access to memory, you should really consider diving in and giving it a shot.
Best Tool
Lance posted to his blog, asking readers what they consider to be the best imaging and analysis tools. As of the time that I'm writing this post, there are seven comments (several are pretty much just "agree" posts), and even reading through some of the thoughts and comments, I keep coming back to the same thought...that the best tool available to an analyst is that grey matter between their ears.
This brings to mind a number of thoughts, particularly due to the fact that last week I had two opportunities to consider some things for topics of analyst training, education and experience...during one of these opportunities, I was considering the fact that when I (like many other analysts) "came up through the ranks", there were no formal schools available to non-LE analysts, aside from vendor-specific training. Some went that route, but there were others who couldn't afford it. For myself, I took the EnCase v.3.0 Introductory course in 1999...I was so fascinated by the approach taken to file signature analysis that I went home and wrote my own Perl code for this; not to create a tool, per se, but more to really understand what was happening "under the hood". Over the years, knowing how things work and knowing what I needed to look for really helped me a lot...it wasn't a matter of having to have a specific tool as much as it was knowing the process and being able to justify the purchase of a product, if need be.
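That file signature analysis exercise translates into very little code. Harlan's original was Perl; the Python sketch below is my own illustration, using only a tiny subset of magic numbers, to show what's happening "under the hood": compare the bytes at the start of the file against what the extension claims.

```python
# Minimal file signature analysis sketch: a small, illustrative table of
# magic numbers mapped to the extension they imply.
SIGNATURES = {
    b"\xFF\xD8\xFF": ".jpg",
    b"\x89PNG\r\n\x1a\n": ".png",
    b"MZ": ".exe",
    b"PK\x03\x04": ".zip",
}

def check_signature(name, header):
    """Return (claimed_ext, matched_ext); a mismatch flags the file."""
    matched = next((ext for magic, ext in SIGNATURES.items()
                    if header.startswith(magic)), None)
    claimed = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
    return claimed, matched

# An executable renamed to look like a picture:
claimed, actual = check_signature("vacation.jpg", b"MZ\x90\x00\x03")
print(claimed, actual)
```

The point isn't the tool...it's that once you've written this yourself, you know exactly what a "signature mismatch" hit in a commercial product actually means.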
Breaches
If the recent spate of breaches hasn't yet convinced you that no one is safe from computer security incidents, take a look at this story from The State Worker which talks about the PII/PCI data of 2000 LE retirees being compromised. I know, 2000 seems like such a small number, but hey...regardless of whether it's 77 million or 2000, if you're one of those people whose data was compromised, it's everything.
While the story is light on details (i.e., how the breach was identified, when the IT staff reacted in relation to when the incident actually occurred, etc.), if you read through the story, you see a statement that's common throughout these types of announcements; specifically, "...taken steps to enhance security and strengthen [the] infrastructure...". The sequence of events for incidents like this (and keep in mind, these are only the ones that are reported) is, breach, time passes, someone is notified of the breach, then steps are taken to "enhance security". We find ourselves coming to this dance far too often.
Incident Preparedness
Not long ago, I talked about incident preparation and proactive IR...recently, CERT Societe Generale (French CERT) posted a 6 Step IRM Worm Infection cheat sheet. I think that things like this are very important, particularly when the basic steps necessarily assume certain things about your infrastructure. For example, step 1 of the PDF includes several of the basic components of a CSIRP...if you have all of the stuff outlined in the PDF already covered, then you're almost to a complete CSIRP, so why not just finish it off and formalize the entire thing?
Step 3, Containment, mentions neutralizing propagation vectors...incident responders need to understand malware characteristics in order to respond effectively to these sorts of incidents.
One note about this, and these sorts of incidents...worms can be especially virulent strains of malware, but this applies to malware in general...relying on your AV vendor to be your IR team is a mistake. Incident responders have seen this time and again, and it's especially difficult for folks who do what I do, because we often get called after response efforts via the AV vendor have been ineffective, and have exhausted the local IT staff. I'm not saying that AV vendors can't be effective...what I am saying is that in my experience, throwing signature files at an infrastructure based on samples provided by on-site staff doesn't work. AV vendors are generally good at what they do, but AV is only part of the overall security solution. Malware infections need to be responded to with an IR mindset, not through an AV business model.
Firefighters don't learn about putting out a fire during a fire. Surgeons don't learn their craft during surgery. Organizations shouldn't hope to learn IR during an incident...and the model of turning your response over to an external third party clearly doesn't work. You need to be ready for that big incident...as you can see just from the media, it's a wave on the horizon headed for your organization.
Thursday, April 28, 2011
NoVA Forensic Meet-Up
I've scheduled a room at the Reston Public Library on Wed, 4 May 2011, for the first NoVA Forensic Meet-Up. We're scheduled for 7pm to 8:30pm, in meeting room 1.
I'll be talking about the presentation I'll give at the OSDF Conference in June, specifically, "Extending RegRipper".
Due to the short time frame, I'll try to put together and post some slides or something, but if not, it's pretty easy to do a discussion format.
Hope to see you there!
Addendum (1 May): I uploaded the presentation (in PDF) that I'll be working from; the library doesn't have any projection capabilities in the room we've reserved, so be sure to download a copy of the PDF before you come on out.
Tuesday, April 26, 2011
Proactive IR
There are a couple of things that are true about security, in general, and IR specifically. One is that security seems to be difficult for some folks to understand (it's not generally part of our culture), so those of us in the industry tend to use a lot of analogies in an attempt to describe things to others that aren't in our area of expertise. Sometimes this works, sometimes it doesn't.
Another thing that's true is that the current model for IR doesn't work. For consulting companies, it's hard to keep a staff of trained, dedicated, experienced responders available and on the bench, because if they sit unused, they get pulled off into other areas (because those guys "do security stuff"), and like many areas of information security (web app assessments, pen testing, malware RE, etc.) the hard-core technical skills are perishable. Most companies that need such skills simply don't keep these sorts of folks around, as they look to consulting companies to provide this service.
Why doesn't this work? Think about it this way...who calls emergency incident responders? Well, those who need emergency incident response, of course. Many of us who work (or have worked) as incident responders know all too well what happens...the responders show up, often well after the incident actually occurred, and have to first develop an understanding of not just what happened (as opposed to what the customer thinks may have happened), but also "get the lay of the land"; that is, understand what the network infrastructure "looks like", what logs may be available, etc. All of this takes time, and that time means that (a) the incident isn't "responded to" right away, and (b) the clock keeps ticking as far as billing is concerned. Ultimately, what's determined with respect to the customer's needs really varies; in fact, the questions that the customer had (i.e., "what data left our network?") may not be answered at all.
So, if it doesn't work, what do we do about this? Well, the first thing is that a cultural shift is needed. Now, follow me here...all companies that provide a service or product (which is pretty much every one of them) have business processes in place, right? There's sales, customer validation, provisioning and fulfillment, and billing and collections...right? Companies have processes in place (documented or otherwise) for providing their product or service to customers, and then getting paid. Companies also have processes in place for hiring and paying employees...because without employees to provide those products or services, where would you be?
Ever since I started in information security, one of the things I've seen across the board is that most companies do not have information security as a business process. Companies will process, store and manage all manner of sensitive data...PCI, PHI, PII, critical intellectual property, manufacturing processes and plans, etc...and not have processes for protecting that data, or responding to incidents involving the possible exposure or modification of that data.
Okay, how about those analogies? Like many, I consider my family to be critical, so I have smoke alarms in my home, fire extinguishers, we have basic first aid materials, etc. So, essentially, we have measures in place to prevent certain incidents, detect others, and we've taken steps to ensure that we can respond appropriately to protect those items we've deemed "critical".
Here's another analogy...when I went to my undergraduate education, we were required to take boxing. If you're standing in a class and see everyone in line getting punched in the face because they don't keep their gloves up, what do you do? Do you stand there and convince yourself that you're not going to get punched in the face? When you do get punched in the face because you didn't keep your gloves up, do you blame the other guy ("hey, dude! WTF?!?!") or do you accept responsibility for getting punched in the face? Or, do you see what's happening, realize that it's inevitable, listen to what you're being told, and develop a culture of security and get your gloves up? The thing about getting punched in the face is no matter what you say or do afterward, the fact remains...you got punched in the face.
Here's another IRL example...I recently ran across this WaPo article that describes how farms in Illinois are pre-staging critical infrastructure information in an easily accessible location for emergency responders; the intention is to "prevent or reduce property damage, injuries and even deaths" in the event of an incident. Variations of the program have reportedly been rolled out in other states, and seem to be effective. What I find interesting about the program is that in Illinois, aerial maps are taken to each farm, and the farmers (those who established, designed, and maintain the infrastructure) assist in identifying structures, etc. This isn't a "here's $40K, write us a CSIRP"...instead, the farmer has to take some ownership in the process, but I guess they do that because a one-hour or one-afternoon interview can mean the difference between minor damage and losing everything.
Sound familiar?
As a responder, I'm aware of various legislation and regulatory bodies that have mandated the need for incident response capabilities...Visa PCI, NCUA, etc. States have laws for notification in the case of PII breaches, which indirectly require an IR capability. Right now, who's better able to respond to a breach...local IT staff who know and work in the infrastructure every day (and just need a little bit of training in incident response and containment) or someone who will arrive on-site in anywhere between 6 and 72 hours, and will still need to develop an understanding of your infrastructure?
If the local IT staff knew how to respond appropriately, and was able to contain the incident and collect the necessary data (because they had the training and tools, and processes for doing so), analysis performed by that trusted third party adviser could begin much sooner, reducing response time and overall cost. If the local IT staff (under the leadership of a C-level executive, like the farmer) were to take steps to prepare for the incident...identify and correct shortfalls in the infrastructure, determine where configuration changes to systems or the addition of monitoring would assist in preventing and detecting incidents, determine where critical data resides/transits, develop a plan for response, etc...just as is mandated in compliance requirements, then the entire game would change. Incidents would be detected by the internal staff closer to when they actually occur...rather than months later, by an external third party. Incident response would begin much quicker, and containment and scoping would follow suit.
Let's say you have a database containing 650K records (PII, PCI, PHI, whatever). According to most compliance requirements, if you cannot explicitly determine which records were exposed, you have to report on ALL of them. Think of the cost associated with that...direct costs of reporting and notification, followed by indirect costs of cleanup, fines, lawsuits, etc. Now, compare that to the cost of doing something like having your DBA write a stored procedure (includes authorization and logging) for accessing the data, rather than simply allowing direct access to the data.
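Here's a sketch of that idea. Table and column names are hypothetical, and in production this would be a stored procedure in your RDBMS with real authentication behind it; sqlite3 stands in here only so the example is self-contained. The point is that every read goes through one function that checks authorization and writes an audit row:

```python
import sqlite3, datetime

# Hypothetical schema: a sensitive-records table plus an access log.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records(id INTEGER PRIMARY KEY, ssn TEXT);
    CREATE TABLE access_log(who TEXT, rec_id INTEGER, at TEXT);
    INSERT INTO records VALUES (1, '123-45-6789'), (2, '987-65-4321');
""")
AUTHORIZED = {"dba_jones", "hr_app"}

def get_record(who, rec_id):
    """Mediated access: authorize the caller, log the read, then return it."""
    if who not in AUTHORIZED:
        raise PermissionError(f"{who} is not authorized")
    conn.execute("INSERT INTO access_log VALUES (?, ?, ?)",
                 (who, rec_id, datetime.datetime.now().isoformat()))
    return conn.execute("SELECT ssn FROM records WHERE id = ?",
                        (rec_id,)).fetchone()

get_record("hr_app", 1)
# After an incident, the audit trail shows exactly which records were
# touched...so you report on those, not on all 650K.
print(conn.execute("SELECT who, rec_id FROM access_log").fetchall())
```

That one design choice...mediated, logged access instead of direct table access...is what lets you scope the exposure to specific records.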
Being ready for an incident is going to take work, but it's going to be less costly in the long run when (not if) an incident occurs.
What are some things you can do to prepare? Identify logging sources, and if necessary, modify them appropriately (add Process Tracking to your Windows Event Logs, increase log sizes, set up a means for centralized log collection, etc.). Develop and maintain accurate network maps, and know where your critical data is located. The problem with hiring someone to do this for you is that you don't have any ownership; when the job's done, you have a map that is an accurate snapshot, but how accurate is it 6 months later? Making incident detection and tier 1 response (i.e., scoping, data collection) a business process, with the help of a trusted adviser, is going to be quicker, easier and far less costly in the long run, and those advisers will be there when you need the tier 3 analysis completed.
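Once Process Tracking is enabled, the Security Event Log records process-creation events (event ID 592 on XP/2003, 4688 on Vista and later), which means you have a record of what ran. As a sketch of what you get out of that preparation, here's a tally of new-process events per image name from a log export; the CSV layout below is a simplified assumption, not the actual export format, and the file paths are made up:

```python
import csv, io, collections

# Hypothetical, simplified CSV export of Security log events.
export = io.StringIO("""EventID,Image
4688,C:\\Windows\\System32\\svchost.exe
4688,C:\\Users\\bob\\AppData\\Local\\Temp\\a.exe
4624,-
4688,C:\\Users\\bob\\AppData\\Local\\Temp\\a.exe
""")

# Count process-creation events (592 = XP/2003, 4688 = Vista+) per image.
counts = collections.Counter(
    row["Image"] for row in csv.DictReader(export)
    if row["EventID"] in ("592", "4688"))
print(counts.most_common())
```

An executable launching repeatedly out of a user's Temp directory is exactly the kind of thing this surfaces...but only if the logging was turned on before the incident.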
What about looking at things like Carbon Black? Cb has a number of uses besides just IR, and can help you solve a number of other problems. However, with respect to IR, it can not only tell you what was run and when, but it can keep a copy of it for you...so when it comes to determining the capabilities of the malware downloaded to your system, you already have a copy available; call that trusted adviser and have them analyze it for you.
Remember the first Mission: Impossible movie? After his team was wiped out, Ethan made it back to the safe house and as he reached the top of the stairwell, took the light bulb out of the socket and crushed it in his jacket, then spread the shards on the floor as he backed toward his room. What this does is provide a free detection mechanism...anyone approaching the room isn't going to know that the shards are there until they step on them and alert Ethan to their presence; incident detection.
So what are you going to do? Wait until an incident happens, or worse, wait until someone tells you that an incident happened, and then call someone for help? You'll have to find someone, sign contracts, get them on-site, and then help them understand your infrastructure so that they can respond effectively. When they're first there, you're not going to trust them (they're new, after all) and you're not going to speak their language. In most cases, you're not going to know the answer to their questions...do we even have firewall logs? What about DHCP...do we log that? What will happen is that you will continue to hemorrhage data throughout this process.
The other option is to have detection mechanisms and a response plan in place and tested, and have a trusted adviser that you can call for assistance. Your local IT staff needs to be trained to perform the initial response, scoping and assessment, and even containment. While the IT director is on the phone with that trusted adviser, designated individuals are collecting and preserving data...because they know where it is and how to get it. The questions that the trusted adviser (or any other consulting firm) would ask are being answered before the call is being made, not afterward ("Uh...we had no idea that you'd ask that..."). That way, you don't lose the whole farm, and if you do get punched in the face, you're not knocked out.
By the way...one final note. This doesn't apply solely to large companies. Small businesses are losing money hand over fist and some are even going out of business...you just don't hear about it as much. These same things can be done inexpensively and effectively, and need to be done. The difference is, do you get it done, even if you have to have a payment plan, or do you sit by and wait for an incident to put you out of business and lay off your employees?
Another thing that's true is that the current model for IR doesn't work For consulting companies, it's hard to keep a staff of trained, dedicated, experienced responders available and on the bench, because if they sit unused they get pulled off into other areas (because those guys "do security stuff") and like many areas of information security (web app assessments, pen testing, malware RE, etc.) the hard-core technical skills are perishable. Most companies that need such skills simply don't keep these sorts of folks around, as they look to consulting companies to provide this service.
Why doesn't this work? Think about it this way...who calls emergency incident responders? Well, those who need emergency incident response, of course. Many of us who work (or have worked) as incident responders know all too well what happens...the responders show up, often well after the incident actually occurred, and have to first develop an understanding of not just what happened (as opposed to what the customer thinks may have happened), but also "get the lay of the land"; that is, understand what the network infrastructure "looks like", what logs may be available, etc. All of this takes time, and that time means that (a) the incident isn't "responded to" right away, and (b) the clock keeps ticking as far as billing is concerned. Ultimately, what's determined with respect to the customer's needs really varies; in fact, the questions that the customer had (i.e, "what data left our network?") may not be answered at all.
So, if it doesn't work, what do we do about this? Well, the first thing is that a cultural shift is needed. Now, follow me here...all companies that provide a service or product (which is pretty much every one of them) have business processes in place, right? There's sales, customer validation, provisioning and fulfillment, and billing and collections...right? Companies have processes in place (documented or otherwise) for providing their product or service to customers, and then getting paid. Companies also have processes in place for hiring and paying employees...because without employees to provide those products or services, where would you be?
Ever since I started in information security, one of the things I've seen across the board is that most companies do not have information security as a business process. Companies will process, store and manage all manner of sensitive data...PCI, PHI, PII, critical intellectual property, manufacturing processes and plans, etc...and not have processes for protecting that data, or responding to incidents involving the possible exposure or modification of that data.
Okay, how about those analogies? Like many, I consider my family to be critical, so I have smoke alarms in my home, fire extinguishers, we have basic first aid materials, etc. So, essentially, we measures in place to prevent certain incidents, detect others, and we've taken steps to ensure that we can respond appropriately to protect those items we've deemed "critical".
Here's another analogy...when I went to my undergraduate education, we were required to take boxing. If you're standing in a class and see everyone in line getting punched in the face because they don't keep their gloves up, what do you do? Do you stand there and convince yourself that you're not going to get punched in the face? When you do get punched in the face because you didn't keep your gloves up, do you blame the other guy ("hey, dude! WTF?!?!") or do you accept responsibility for getting punched in the face? Or, do you see what's happening, realize that it's inevitable, listen to what you're being told, and develop a culture of security and get your gloves up? The thing about getting punched in the face is no matter what you say or do afterward, the fact remains...you got punched in the face.
Here's another IRL example...I recently ran across this WaPo article that describes how farms in Illinois are pre-staging critical infrastructure information in an easily accessible location for emergency responders; the intention is to "prevent or reduce property damage, injuries and even deaths" in the event of an incident. Variations of the program have reportedly been rolled out in other states, and seem to be effective. What I find interesting about the program is that in Illinois, aerial maps are taken to each farm, and the farmers (those who established, designed, and maintain the infrastructure) assist in identifying structures, etc. This isn't a "here's $40K, write us a CSIRP"...instead, the farmer has to take some ownership in the process, but I guess they do that because a 1 hour or one afternoon interview can mean the difference between minor damage and loosing everything.
Sound familiar?
As a responder, I'm aware of various legislation and regulatory bodies that have mandated the need for incident response capabilities...Visa PCI, NCUA, etc. States have laws for notification in the case of PII breaches, which indirectly require an IR capability. Right now, who's better able to respond to a breach...local IT staff who know and work in the infrastructure every day (and just need a little bit of training in incident response and containment) or someone who will arrive on-site in anywhere between 6 and 72 hours, and will still need to develop an understanding of your infrastructure?
If the local IT staff knew how to respond appropriately, and was able to contain the incident and collect the necessary data (because they had the training and tools, and processes for doing so), analysis performed by that trusted third party adviser could begin much sooner, reducing response time and overall cost. If the local IT staff (under the leadership of a C-level executive, like the farmer) were to take steps to prepare for the incident...identify and correct shortfalls in the infrastructure, determine where configuration changes to systems or the addition of monitoring would assist in preventing and detecting incidents, determine where critical data resides/transits, develops a plan for response, etc...just as is mandated in compliance requirements, then the entire game would change. Incidents would be detected by the internal staff closer to when they actually occur...rather than months later, by an external third party. Incident response would begin much quicker, and containment and scoping would follow suit.
Let's say you have a database containing 650K records (PII, PCI, PHI, whatever). According to most compliance requirements, if you cannot explicitly determine which records were exposed, you have to report on ALL of them. Think of the cost associated with that...direct costs of reporting and notification, followed by indirect costs of cleanup, fines, lawsuits, etc. Now, compare that to the cost of doing something like having your DBA write a stored procedure (includes authorization and logging) for accessing the data, rather than simply allowing direct access to the data.
Being ready for an incident is going to take work, but it's going to be less costly in the long run when (not if) an incident occurs.
What are some things you can do to prepare? Identify logging sources, and if necessary, modify them appropriately (add Process Tracking to your Windows Event Logs, increase logs size, set up a means for centralized log collection, etc.). Develop and maintain accurate network maps, and know where your critical data is located. The problem with hiring someone to do this for you is that you don't have any ownership; when the job's done, you have a map that is an accurate snapshot, but how accurate is it 6 months later? Making incident detection and tier 1 response (i.e., scoping, data collection) a business process, with the help of a trusted adviser, is going to be quicker, easier and far less costly in the long run, and those advisers will be there when you need the tier 3 analysis completed.
What about looking at things like Carbon Black? Cb has a number of uses besides just IR, and can help you solve a number of other problems. However, with respect to IR, it can not only tell you what was run and when, but it can keep a copy of it for you...so when it comes to determining the capabilities of the malware downloaded to your system, you already have a copy available; call that trusted adviser and have them analyze it for you.
Remember the first Mission: Impossible movie? After his team was wiped out, Ethan made it back to the safe house and as he reached the top of the stairwell, took the light bulb out of the socket and crushed it in his jacket, then spread the shards on the floor as he backed toward his room. What this does is provide a free detection mechanism...anyone approaching the room isn't going to know that the shards are there until they step on them and alert Ethan to their presence; incident detection.
So what are you going to do? Wait until an incident happens, or worse, wait until someone tells you that an incident happened, and then call someone for help? You'll have to find someone, sign contracts, get them on-site, and then help them understand your infrastructure so that they can respond effectively. When they're first there, you're not going to trust them (they're new, after all) and you're not going to speak their language. In most cases, you're not going to know the answer to their questions...do we even have firewall logs? What about DHCP...do we log that? What will happen is that you will continue to hemorrhage data throughout this process.
The other option is to have detection mechanisms and a response plan in place and tested, and have a trusted adviser that you can call for assistance. Your local IT staff needs to be trained to perform the initial response, scoping and assessment, and even containment. While the IT director is on the phone with that trusted adviser, designated individuals are collecting and preserving data...because they know where it is and how to get it. The questions that the trusted adviser (or any other consulting firm) would ask are being answered before the call is being made, not afterward ("Uh...we had no idea that you'd ask that..."). That way, you don't lose the whole farm, and if you do get punched in the face, you're not knocked out.
By the way...one final note. This doesn't apply solely to large companies. Small businesses are losing money hand over fist and some are even going out of business...you just don't hear about it as much. These same things can be done inexpensively and effectively, and need to be done. The difference is, do you get it done, even if you have to have a payment plan, or do you sit by and wait for an incident to put you out of business and lay off your employees?
Friday, April 22, 2011
Extending RegRipper (aka, "Forensic Scanner")
I'll be presenting on "Extending RegRipper" at Brian Carrier's Open Source Digital Forensics Conference on 14 June, along with Cory Altheide, and I wanted to provide a bit of background with regards to what my presentation will cover...
In '98-'99, I was working for Trident Data Systems, Inc., (TDS) conducting vulnerability assessments for organizations. One of the things we did as part of this work was run ISS's Internet Scanner (now owned by IBM) against the infrastructure; either a full, broad-brush scan or just very specific segments, depending upon the needs and wants of the organization. I became very interested in how the scanner worked, and began to note differences in how the scanner would report its findings based on the level of access we had to the systems within the infrastructure. Something else I noticed was that many of the checks the scanner ran were the result of the ISS X-Force vulnerability discovery team. In short, a couple of very smart folks would discover a vulnerability, add a means of scanning for that vulnerability via the Internet Scanner framework, and roll it out to thousands of customers. Within fairly short order, a check could be rolled out to hundreds or thousands of analysts, none of whom had any prior knowledge of the vulnerability, nor had to invest the time to investigate it. This became even more clear as I started to create an open-source (albeit proprietary) scanner to replace the use of Internet Scanner, due in large part to significant issues with inaccurate checks, and the need to adapt the output. I could create a check to be run, and give it to an analyst going on-site, and they wouldn't need to have any prior knowledge of the issue, nor would they have to invest time in discovery and analysis, but they could run the check and easily review and understand the results.
Other aspects of information security also benefit from the use of scanners. Penetration testing and web application assessments benefit from scanners that include frameworks for providing new and updated checks to be run, and many of the analysts running the scanners have no prior knowledge of the checks that are being run. Nessus (from Tenable) is a very good example of this sort of scanner; the plugins run by the scanner are text-based, providing instructions for the scanner. These plugins are easy to open and read, and provide a great deal of information regarding how the checks are constructed and run.
Given all of the benefits derived from scanners in other disciplines within information security, it just stands to reason that digital forensic analysis would also benefit from a similar framework.
The forensic scanner is not intended to replace the analyst; rather, it is intended as a framework for documenting and retaining the institutional knowledge of all analysts on the team, and remove the tedium of looking for that "low-hanging fruit" that likely exists in most, if not all, exams.
A number of commercially available forensic analysis applications (EnCase, ProDiscover) have scripting languages and scanner-like functionality; however, in most cases, this functionality is based on proprietary APIs, and in some cases, scripting languages (ProDiscover uses Perl as its scripting language, but the API for accessing the data is unique to the application).
A scanner framework is not meant to replace the use of commercial forensic analysis applications; rather, the scanner framework would augment and enhance the use of those applications, by providing an easy and efficient means for educating new analysts, as well as "sweeping up" the "low-hanging fruit", leaving the deeper analysis for the more experienced analysts.
This scanner framework would be based on easily available tools and techniques. For example, the scanner would be designed to access acquired images mounted read-only via the operating system (Linux mount command) or via freely available applications (Windows - FTK Imager v3.0, ImDisk, vhd/vmdk, etc.); that way, the scanner can make use of currently available APIs (via Perl, Python, etc.) in order to access data within the acquired image, and do so in a "forensically sound manner" (i.e., not making any changes to the original data).
The scanner is not intended to run in isolation; rather, it is intended to be used with other tools (here, here) as part of an overall process. The purpose of the scanner is to provide a means for retention, efficient deployment, and proliferation of institutional digital forensic knowledge.
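The core of the framework described above can be sketched very simply: plugins are just callables run against the root of a mounted, read-only image, and the scanner collects their findings into a report. This is a minimal illustration (the plugin shown, and the simulated image tree, are invented), not the actual implementation:

```python
import os
import tempfile

def check_prefetch(mount_point):
    """Example plugin: low-hanging fruit, counting Prefetch files."""
    pf_dir = os.path.join(mount_point, "Windows", "Prefetch")
    if not os.path.isdir(pf_dir):
        return "Prefetch: directory not present"
    hits = [f for f in os.listdir(pf_dir) if f.lower().endswith(".pf")]
    return "Prefetch: %d .pf files found" % len(hits)

def run_scan(mount_point, plugins):
    """Run every registered plugin against the mount point and collect
    the findings; the image is only ever read, never written."""
    return [plugin(mount_point) for plugin in plugins]

# Simulate a mounted image with a temporary directory tree.
mount = tempfile.mkdtemp()
pf = os.path.join(mount, "Windows", "Prefetch")
os.makedirs(pf)
open(os.path.join(pf, "CMD.EXE-087B4001.pf"), "w").close()

report = run_scan(mount, [check_prefetch])
print(report[0])
```

Because plugins only take a path, the same framework works against an image mounted via the Linux mount command, FTK Imager, ImDisk, or a system accessed via F-Response.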
Benefits
Some benefits of a forensic scanner framework such as this include, but are not limited to, the following:
1. Knowledge Retention - None of us knows everything, and we all see new things during examinations. When an analyst sees or discovers something new, a plugin can be written or updated. Once this is done, that knowledge exists, regardless of the state of the analyst (she goes on vacation, leaves for another position, etc.). Enforcing best practice documentation of the plugin ensures that as much knowledge as possible is retained along with the application, providing an excellent educational tool, as well as a ready means for adapting or improving the plugin.
2. Establish a career progression - When new folks are brought aboard a team, they have to start somewhere. In most cases, particularly with consulting organizations, skilled/experienced analysts are hired, but as the industry develops, this won't always be the case. The forensic scanner provides an ancillary framework for developing "home grown" expertise where inexperienced analysts are hired. Starting the new analysts off in a lab environment and having them begin learning the necessary procedures by acquiring and verifying media puts them in an excellent position to run the scanner. For example, the analyst either goes on-site and conducts acquisition, or acquires media sent to the lab, and prepares the necessary documentation. Then, they mount the acquired image and run the scanner, providing the more experienced analyst with the path to the acquired image and the report.
This framework also provides an objective means for personnel assessment; managers can easily track the plugins that are improved or developed by various analysts.
3. Teamwork - In many environments, development of plugins likely will not occur in a vacuum or in isolation. Plugins need to be reviewed, and can be improved based on the experience of other analysts. For example, let's say an analyst runs across a Zeus infection and decides to write a plugin for the artifacts. When the plugin is reviewed, another analyst mentions that Zeus will load differently based on the permissions of the user upon infection. The plugin can then be documented and modified to include additional conditions.
New plugins can be introduced and discussed during team meetings or through virtual conferences and collaboration, but regardless of the method, it introduces a very important aspect of forensic analysis...peer review.
4. Ease of modification - One size does not fit all. There are times when analysts will not be working with full images, but instead will only have access to selected files from systems. A properly constructed framework will provide the means necessary for accessing and scanning these limited data sets, as well. Also, reporting of the scanner can be modified according to the needs of the analyst, or organization.
5. Flexibility - A scanner framework is not limited to just acquired images. For example, F-Response provides a means of access to live, remote systems in a manner that is similar to an acquired image (i.e., much of the same API can be used, as with RegRipper), so the framework used to access images can also be used against systems accessed via F-Response. As the images themselves would be mounted read-only in order to be scanned, Volume Shadow Copies could also be mounted and scanned using the same scanner and same plugins.
Another means of flexibility comes about through the use of "idle" resources. What I mean by that is that many times, analysts working on-site or actively engaged in analysis may be extremely busy, so running the scanner and providing the output to another, off-site analyst who is not actively engaged frees up the on-site team and provides answers/solutions in a timely and efficient manner. Or, data can be provided and the off-site analyst can write a plugin based on that data, and that plugin can be run against all other systems/images. In these instances, entire images do not have to be sent to the off-site analyst, as this takes considerable time and can expose sensitive data. Instead, only very specific data is sent, making for a much smaller data set (KB as opposed to GB).
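The knowledge-retention and peer-review benefits above hinge on plugins carrying their own documentation. One way to sketch that (all field values here are invented for illustration, riffing on the Zeus example from the text):

```python
# A plugin that carries its own documentation, so the knowledge behind
# the check (and its review history) travels with the code rather than
# leaving when the analyst does. All values are illustrative.

PLUGIN = {
    "name": "zeus_artifacts",
    "description": "Look for artifacts associated with a Zeus infection; "
                   "artifact locations vary with the infected user's privileges.",
    "author": "analyst_a (reviewed by analyst_b)",
    "version": "20110422",
}

def run(mount_point):
    """The body of the check would examine the mounted image here;
    findings are returned as strings for the report."""
    return ["%s v%s ran against %s" % (PLUGIN["name"], PLUGIN["version"], mount_point)]

findings = run("/mnt/image")
print(findings[0])
```

A new analyst can open the plugin, read why it exists and who reviewed it, and a manager can track authorship and versions for personnel assessment.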
Updates
Book Update
I've received the counter-signed contract from Syngress for Windows Forensic Analysis 3/e, and I'm finishing up a couple of the chapters to get in for review. This book is NOT the same as 2/e, in that I did not start with the manuscript from that edition (the way I did when I started 2/e). Instead, 3/e is a companion edition...if you already have 2/e, you will want to have 3/e, as well. This is because the information in 2/e is still valid, and in many instances (in particular information such as the PE file format, etc.) hasn't changed. Also, 2/e focused primarily on XP, and those systems are still around...there hasn't been a huge corporate shift to Windows 7 yet. As such, 3/e will shift focus to Windows 7...it will also focus more on solving problems, rather than simply depositing technical information in your lap and leaving you to figure out what to do with it.
Another new aspect of WFA 3/e is that rather than providing an accompanying DVD, the tools (as with WRF) will be provided online. Providing the tools in this manner is just so much easier for everyone, particularly when someone purchases the ebook/Kindle version of the book, or leaves their DVD at home. As with my previous books, I will do my best to provide functioning, tested code along with the book, and provide links to other tools mentioned, described, or discussed in the book.
Accessing VSCs
I've posted before on accessing Volume Shadow Copies, but thanks to a recent blog post from Corey Harrell, I thought that it might be a good idea to revive the topic. In his post, A Little Help with Volume Shadow Copies, Corey walks through a means for automating access to several VSCs, as well as automating the collection of information from each. Corey does this through the use of a batch file.
Accessing VSCs in this manner is nothing new...it's been around for a while. This post appeared on the Forensics from the sausage factory blog over a year ago. In this post, copying of specific files via robocopy is demonstrated, showing how to use batch files to mount VSCs, copy files and then unmount the VSCs. Corey's script takes a different approach, in that rather than copying files, he rips Registry hives using RegRipper (more accurately, rip.exe). Corey was kind enough to provide a commented copy of one iteration of his batch file for inclusion in the materials associated with WFA 3/e (see above).
More than anything else, this is just the beginning. Corey had a need and used already-available information as a stepping stone to meeting it. Whether you use the VHD method for mounting images, or the VMWare method (described by Rob Lee and Jimmy Weg), or some other method, the fact is that once you mount the VSC, it's simply a matter of getting the job done. You can either copy out the Registry hives, or do as Corey's done, and run RegRipper (you'll still have the image and VSCs to access if you need the original data) on the hives. You can copy or query for other files, as well, or use other tools (some I'll mention later). In fact, with the right tools and a little bit of thought, you can do pretty much anything...compare files by hash, look for specific files, etc. You may need to build some tools (or reach out to someone for assistance), or download some tools, but you can piece some pretty decent automated (and self-documenting) functionality together and achieve a great deal.
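The "compare files by hash" idea is a good example of how little code this takes. A minimal sketch, assuming the image and a shadow copy are already mounted at two paths (the temp directories below stand in for the real mount points):

```python
import hashlib
import os
import tempfile

def hash_file(path):
    """Hash the file in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(current_root, vsc_root, rel_paths):
    """Return the relative paths whose contents differ between the
    current volume and the shadow copy."""
    diffs = []
    for rel in rel_paths:
        a = os.path.join(current_root, rel)
        b = os.path.join(vsc_root, rel)
        if os.path.isfile(a) and os.path.isfile(b) and hash_file(a) != hash_file(b):
            diffs.append(rel)
    return diffs

# Stand-ins for the mounted image and a mounted VSC.
cur, vsc = tempfile.mkdtemp(), tempfile.mkdtemp()
for root, data in ((cur, b"new contents"), (vsc, b"old contents")):
    with open(os.path.join(root, "NTUSER.DAT"), "wb") as f:
        f.write(data)

diffs = changed_files(cur, vsc, ["NTUSER.DAT"])
print(diffs)
```

Point it at a list of hives or other files of interest, and you immediately know which ones changed between the live volume and each shadow copy, and therefore which are worth ripping.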
Open Source Tools Book
Speaking of books, the book that Cory Altheide wrote (I was a minor co-author), Digital Forensics with Open Source Tools (aka, "DFwOST"), has been published and should be available to those who pre-ordered it soon. Also, a really good idea is to follow @syngress on Twitter...I've been asked a couple of times if I will be providing a discount; I didn't provide the discount, Syngress did via Twitter. I simply "RT'd" it. You should really check this book out. Cory's goal was to give folks with a basic understanding of digital forensics (and limited means) an understanding of some of the open source tools available to them, and how to get them installed and configured. And he did a great job of it!
My books have focused on the analysis of Windows systems, and have discussed/described free and open source tools that can assist an analyst. Cory's book focuses on the open source tools, and covers several that you can use to analyze Linux, MacOSX and Windows systems.
SANS Forensic Summit
I don't know if you've seen it, but Rob's posted the agenda for this year's SANS Forensic Summit, to be held on 7 and 8 June, in Austin, TX. Check it out...there are a number of great speakers, and several panels, which have proven to be an excellent format for conferences, as opposed to just having speaker after speaker.
It looks like Chris is gonna kick right off with his "Sniper Forensics" presentation, which has been getting him a LOT of mileage. Richard Bejtlich is also presenting, in addition to being on a panel on the second day. All in all, it looks like this will be another great opportunity to hear some good presentations, as well as to mingle with some of the folks in the business who are right there in the trenches.
OSDFC
I wanted to give another plug for Brian Carrier's OSDFC, the open source conference coming up on 14 June in McLean, VA. Cory Altheide and I will both be presenting; I'm presenting in the morning, and Cory's got clean-up in the afternoon; that's Brian's tactic to get everyone to stay, by saving the best for last! ;-) I hope that this will be another great opportunity to mingle with others in the community...I had several interesting conversations with attendees at last year's conference. Also, don't forget...DFwOST is out! Bring your copy and get both of us to sign it...although you may have to wait for the cocktail reception at the end for that!
Scalpel
There's an announcement over at the DFS Forensics blog that scalpel 2.0 is available. There are some interesting enhancements, and the download contains pre-compiled Windows binaries and the source code.
USBStor
I received another question today that I see time and again, via email and in the lists/forums, having to do with LastWrite times on the USBStor subkeys and how they apply to the time that a USB device was last connected to the system.
In this particular case, the person who emailed me had confiscated and secured the thumb drive, and then found that the LastWrite time (apparently, the system itself was still active) for the USBStor subkey had been updated recently.
Folks, I really don't understand how this can be written and talked about so much, published in books (WFA 2/e, WRF, etc.) and STILL be so misunderstood. Rob Lee's even made PDFs available that describe very clearly how to perform USB device analysis (XP, Vista/Win7).
If you want to know more about what may have caused the USBStor subkey LastWrite time to be updated when the device hadn't been connected, or more about why all of the USBStor subkeys have the same LastWrite time, put together a timeline. Seriously. I've seen both of these questions (some even include, "...I need to explain this in court..."), and a great way to answer them is to create a timeline of activity on the system and see what occurred around that time.
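The mechanics of the timeline are trivial; the value is in merging sources. A toy illustration of the idea (all events below are invented), sorting events from the Event Logs, Registry, and file system by timestamp so you can see what happened around the LastWrite time in question:

```python
# Merge events from multiple sources by timestamp and examine what
# occurred around the time in question. All events are invented.

events = [
    ("2011-04-20 09:15:03", "EVT", "Service Control Manager: service installed"),
    ("2011-04-20 09:15:01", "REG", "USBStor subkey LastWrite updated"),
    ("2011-04-20 09:14:59", "FILE", "setupapi.log modified"),
]

timeline = sorted(events)  # ISO-style timestamps sort correctly as strings
for ts, src, desc in timeline:
    print(ts, src, desc)
```

Once the events are normalized and sorted, the activity surrounding the USBStor key update (a driver event, a hardware re-enumeration, etc.) tends to explain the LastWrite time on its own.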
I've received the counter-signed contract from Syngress for Windows Forensic Analysis 3/e, and I'm finishing up a couple of the chapters to get in for review. This book is NOT the same as 2/e, in that I did not start with the manuscript from that edition (the way I did when I started 2/e). Instead, 3/e is a companion edition...if you already have 2/e, you will want to have 3/e, as well. This is because the information in 2/e is still valid, and in many instances (in particular information such as the PE file format, etc.) hasn't changed. Also, 2/e focused primarily on XP, and those systems are still around...there hasn't been a huge corporate shift to Windows 7 yet. As such, 3/e will shift focus to Windows 7...it will also focus more on solving problems, rather than simply depositing technical information in your lap and leaving you to figure out what to do with it.
Another new aspect of WFA 3/e is that rather than providing an accompanying DVD, the tools (as with WRF) will be provided online. Providing the tools in this manner is just so much easier for everyone, particularly when someone purchases the ebook/Kindle version of the book, or leaves their DVD at home. As with my previous books, I will do my best to provide functioning, tested code along with book, and provide links to other tools mentioned, described, or discussed in the book.
Accessing VSCs
I've posted before on accessing Volume Shadow Copies, but thanks to a recent blog post from Corey Harrell, I thought that it might be a good idea to revive the topic. In his post, A Little Help with Volume Shadow Copies, Corey walks through a means for automating access to several VSCs, as well as automating the collection of information from each. Corey does this through the use of a batch file.
Accessing VSCs in this manner is nothing new...it's been around for a while. This post appeared on the Forensics from the sausage factory blog over a year ago. In this post, copying of specific files via robocopy is demonstrated, showing how to use batch files to mount VSCs, copy files and then unmount the VSCs. Corey's script takes a different approach, in that rather than copying files, he rips Registry hives using RegRipper (more accurately, rip.exe). Corey was kind enough to provide a commented copy of one iteration of his batch file for inclusion in the materials associated with WFA 3/e (see above).
More than anything else, this is just the beginning. Corey's had a need and used already-available information as a stepping stone to meeting his needs. Whether you use the VHD method for mounting images, or the VMWare method (described by Rob Lee and Jimmy Weg), or some other method, the fact is that once you mount the VSC, it's simply a matter of getting the job done. You can either copy out the Registry hives, or do as Corey's done, and run RegRipper (you'll still have the image and VSCs to access if you need the original data) on the hives. You can copy or query for other files, as well, or use other tools (some I'll mention later). In fact, with the right tools and a little bit of thought, you can do pretty much anything...compare files by hash, look for specific files, etc. You may need to build some tools (or reach to someone for assistance), or download some tools, but you can piece some pretty decent automated (and self-documenting) functionality together and achieve a great deal.
Open Source Tools Book
Speaking of books, the book that Cory Altheide wrote (I was a minor co-author), Digital Forensics with Open Source Tools (aka, "DFwOST"), has been published and should be available to those who pre-ordered it soon. Also, a really good idea is to follow @syngress on Twitter...I've been asked a couple of times if I will be providing a discount; I didn't provide the discount, Syngress did via Twitter. I simply "RT'd" it. You should really check this book out. Cory's goal was to provide a means for folks with a basic understanding of digital forensics (and limited means) with an understanding of some of the open source tools available to them, and how to get them installed and configured. And he did a great job of it!
My books have focused on the analysis of Windows systems, and have discussed/described free and open source tools that can assist an analyst. Cory's book focuses on the open source tools, and covers several that you can use to analyze Linux, MacOSX and Windows systems.
SANS Forensic Summit
I don't know if you've seen it, but Rob's posted the agenda for this year's SANS Forensic Summit, to be held on 7 and 8 June, in Austin, TX. Check it out...there are a number of great speakers, and several panels, which have proven to be an excellent format for conferences, as opposed to just having speaker after speaker.
It looks like Chris is gonna kick right off with his "Sniper Forensics" presentation, which has been getting him a LOT of mileage. Richard Bejtlich is also presenting, in addition to being on a panel on the second day. All in all, it looks like this will be another great opportunity to hear some good presentations, as well as to mingle with some of the folks in the business who are right there in the trenches.
OSDFC
I wanted to give another plug for Brian Carrier's OSDFC, the open source conference coming up on 14 June in McLean, VA. Cory Altheide and I will both be presenting; I'm presenting in the morning, and Cory's got clean-up in the afternoon; that's Brian's tactic to get everyone to stay, by saving the best for last! ;-) I hope that this will be another great opportunity to mingle with others in the community...I had several interesting conversations with attendees at last year's conference. Also, don't forget...DFwOST is out! Bring your copy and get both of us to sign it...although you may have to wait for the cocktail reception at the end for that!
Scalpel
There's an announcement over at the DFS Forensics blog that scalpel 2.0 is available. There are some interesting enhancements, and the download contains pre-compiled Windows binaries and the source code.
USBStor
I received another question today that I see time and again, via email and in the lists/forums, having to do with LastWrite times on the USBStor subkeys and how they apply to the time that a USB device was last connected to the system.
In this particular case, the person who emailed me had confiscated and secured the thumb drive, and then found that the LastWrite time (apparently, the system itself was still active) for the USBStor subkey had been updated recently.
Folks, I really don't understand how this can be written and talked about so much, published in books (WFA 2/e, WRF, etc.) and STILL be so misunderstood. Rob Lee's even made PDFs available that describe very clearly how to perform USB device analysis (XP, Vista/Win7).
If you want to know more about what may have caused the USBStor subkey LastWrite time to be updated when the device hadn't been connected, or more about why all of the USBStor subkeys have the same LastWrite time, put together a timeline. Seriously. I've seen both of these questions (some even include, "...I need to explain this in court..."), and a great way to answer it is to create a timeline of activity on the system and see what occurred around that time.
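The "look around that time" step can be sketched very simply; assuming you've already parsed your data sources into (timestamp, description) pairs, a hypothetical Python filter might look like this (the events below are illustrative only):

```python
from datetime import datetime, timedelta

def events_near(events, pivot, window_minutes=5):
    """Return timeline events within +/- window of a pivot time.

    'events' is a list of (datetime, description) tuples, as you
    might have after parsing various sources into a timeline.
    """
    delta = timedelta(minutes=window_minutes)
    return [e for e in events if abs(e[0] - pivot) <= delta]

# Hypothetical events around a USBStor subkey LastWrite time
events = [
    (datetime(2011, 4, 1, 10, 0), "USBStor subkey LastWrite"),
    (datetime(2011, 4, 1, 10, 1), "Driver install event log record"),
    (datetime(2011, 4, 3, 9, 0), "Unrelated file access"),
]
hits = events_near(events, datetime(2011, 4, 1, 10, 0))
```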
Wednesday, April 13, 2011
Links
Book Review
Ken Pryor posted a review of Windows Registry Forensics over on his blog...I greatly appreciate the effort folks put into these reviews. Thanks, Ken, for taking the time to read the book and put your thoughts into a blog post!
If you're thinking about purchasing the book, take a look at Ken's review or any of the reviews on the Amazon site. I've also been fielding questions, which come in from time to time.
Book Sales Numbers
Speaking of books, I was able to get sales numbers for the foreign language editions of Windows Forensic Analysis; across the two editions, the book has been translated into Chinese, French, and most recently Korean. The numbers may be a bit off, as it took Elsevier (thanks, btw...) some time to get them, but here's how the books are doing so far:
Chinese - 4000 printed, 3281 sold to date
French - 1000 copies printed, 494 sold to date
Korean - 1000 copies printed, 700 sold to date
Pretty nifty.
DFwOST
Speaking of books, a hard copy of Digital Forensics with Open Source Tools showed up on my doorstep today! Cory Altheide was the primary author...heck, the entire book was his idea...and I have to tell you, he did a great job! Once, in a galaxy far, far away (actually, it was on the IBM ISS ERS team, but close enough...), I worked with Cory and saw firsthand that he's one of the most knowledgeable and capable forensicy folks I've ever worked with. Not only is Cory REALLY smart, but he also likes beer! Actually, I think his preference is single malt scotch...I know that sounds like some kind of personal ad but if you see him at a conference...you know what I'm sayin'!
At first glance, the book turned out really well. I was more interested in the formatting and how some of the images turned out than anything else; spelling issues weren't my primary focus. The book is chock full of some really good information, and the content is mostly directed at beginners; however, I think everyone will find something useful. For example, one of the open source tools that Cory described was the Digital Forensics Framework; I installed v1.0 on my Windows 7 analysis system today, and it fired up quite nicely (I'll be discussing DFF more in a later post).
Carbon Black
The guys over at Kyrus Tech are really moving along with Carbon Black. If you haven't heard of this product, you really should check it out! Cb is a lightweight sensor that monitors execution on systems, watching for new stuff being launched.
Kyrus recently sent out invitations to folks to download their latest version of Cb, and they've also set up a user forum (on Ning) for folks to engage with Kyrus and each other regarding the use of the sensor, and the resulting data.
Here's a good read on Cb vs. the RSA hack...
But Cb isn't just about security and IR...one of Kyrus' case studies involved cost reduction across an enterprise by determining how many employees were actually using the full breadth of an office application suite; by reducing the licenses in accordance with actual usage, and purchasing separate copies of the component applications for the employees who actually used them, the organization was able to realize a significant cost savings.
OMFW
Aaron Walters is back at it again! Prior to DFRWS 2008, Aaron had the first Open Memory Forensics Workshop, and I have to say, the format was a welcome change to many of the conferences I'd attended in the past. Having short talks followed by panels was a great way to break up the long periods of sitting and listening, and I found the format engaging and stimulating. Even better was the technical content based on who was there and presenting...all of the big names (Aaron, Moyix/Brendan, George M. Garner, Jr., etc) in memory acquisition and analysis were there, and it looks like Aaron's planning another OMFW soon!
Tuesday, April 12, 2011
Using RegRipper
I've received a couple of questions about RegRipper and its use, and I thought that I'd take the opportunity to provide some more information about the use of this free, open source tool.
First, let me say that Windows Registry Forensics (WRF) is something of a user guide for RegRipper. I found that even though I had provided a PDF document and several blog posts that talked about how to use RegRipper, and answered a lot of questions in various lists and forums, there were still questions and some confusion. In fact, in most cases, there seem to be the same questions again and again. In an attempt to address this situation, I thought that perhaps writing a bit more extensive user guide for RegRipper and providing it in one location, in WRF, would be useful.
One example of the questions I receive has to do with getting the UserAssist data from an NTUSER.DAT hive file collected from one of the versions of Windows. As it says on pg. 185 of WRF, the userassist.pl plugin was written specifically for Windows XP systems, while the userassist2.pl plugin was written to work on all versions of Windows. There is also a third RegRipper plugin, win7_ua.pl, which was written in 2008 in response to the use of Vigenère encryption (vice ROT-13) of the value names in the Windows 7 Beta. So, if you want to get UserAssist information from any version of Windows, except the Windows 7 Beta, you can use userassist2.pl.
Terminology
In short, RegRipper runs plugins, which are simply Perl scripts (the files that end in ".pl") located in the ".\plugins" directory of the installation. You can run a list of plugins against a hive file by selecting a plugins file or "profile", which is a flat text file, with NO extension, that has the plugins listed in order. Within the profile, lines that begin with "#" are treated as comment lines and skipped...this allows you to comment out specific plugins or add your own documentation.
So, again...RegRipper (both the GUI and the CLI "rip") is similar to the Nessus vulnerability scanner, in that each is simply an engine that runs plugins. The "plugins" are Perl scripts located in the ".\plugins" directory...files that end in the ".pl" extension. If you want to run more than one plugin against a particular hive at a time, you can create a "plugins file" or "profile", which is a file with NO extension located in the ".\plugins" directory; this file is simply a text file that contains a list of plugins to be run, in order, with one plugin (drop the ".pl" extension from the plugin name) listed on each line. You can comment the profile using "#"...RegRipper ignores lines that start with this character.
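For example, a short, hypothetical profile for an NTUSER.DAT hive might look like this (the plugin names are ones discussed elsewhere in this post; a real profile would list whatever plugins you need):

```
# Hypothetical profile for an NTUSER.DAT hive
# (one plugin per line, ".pl" extension dropped)
userassist2
acmru
#userassist    <- commented out, so RegRipper skips it
```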
Listing Plugins
To get a list of plugins (files with the ".pl" extension located in the ".\plugins" directory), there are a couple of things you can do. The package shipped with WRF, as well as the one provided online, includes the Plugin Browser, a GUI means not only for seeing details about the available plugins, but also for building or editing profiles. Or, if you like, you can run the following command from the command line:
C:\tools>rip -l
This command will provide a list of plugins right to STDOUT. Another option, to provide you with the same information in .csv format, would be to use the following command line:
C:\tools>rip -l -c > plugins.csv
Just open the resulting file in Excel or your favorite spreadsheet application, and sort to your heart's content!
Another thing...if you have any questions about the syntax for rip.pl/.exe, simply type the following command at the command prompt:
C:\tools>rip
or
C:\tools>rip -h
Other switches ("/?") will also work. And hey, if worst comes to worst and you just don't like the command prompt, open the rip.pl file in Notepad or another text editor! ;-)
Reporting Issues
When you do have what appears to be an issue, sometimes it's very helpful to look at a couple of things first. You can actually do a bit of troubleshooting on your own, and it doesn't require any programming ability to do so. When I first released RegRipper back in 2008, several people I knew ran it against the live NTUSER.DAT on their system. Don't do that...RegRipper is intended for "dead box analysis", meaning that it's designed to be run against hive files extracted from other systems, not against the hives from the system you're currently logged into. Others ran it against hive files from the systemprofile directory, and one person even ran it across a file named "NTUSER.DAT" that was 256K of zeros. So, if you have an issue...try looking at the file in a viewer (there's an excellent free one listed in WRF). Maybe the reason the plugin is telling you that a key or value doesn't exist is because...well...it doesn't exist (or RegRipper can't find it in the provided path). Also, look at the version of Windows you're running the plugin against. Where this can be important is, for example, the UserAssist data, as the UserAssist subkeys (those listed between the UserAssist and Count keys) are different from XP to Windows 7. Another one is the ACMru key...running the acmru.pl plugin against a Windows 7 NTUSER.DAT won't reveal any information, as that key isn't used on Windows 7.
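One quick sanity check along those lines can even be automated: a Windows Registry hive file begins with the 4-byte "regf" signature, so a "hive" that's really 256K of zeros can be caught before parsing. A minimal Python sketch (RegRipper itself is Perl):

```python
def looks_like_hive(path):
    """Quick sanity check before parsing: a Windows Registry hive
    file starts with the 4-byte signature 'regf'."""
    with open(path, "rb") as f:
        return f.read(4) == b"regf"

# e.g., a "hive" that is really all zeros would fail this check
```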
If, at this point, you still can't figure out what the problem is, please feel free to contact me, and include a concise, thorough description of the issue. For example, please be sure to include the version of Windows from which the hive was acquired, which hive you're working with, and which plugin you used. If possible, please provide a copy of the hive. Also, there are several plugins now available that I didn't write, so it might also be a good idea to provide the plugin itself.
Finally, remember...RegRipper is free, and open source. This means that you can write your own plugins (WRF explains how...) and you can see what various plugins do simply by opening them in a text editor. Many of the plugins I wrote and provided with the distribution contain links to references in the comments of the plugin, which can be very useful for validation, and even just as general interest. I know a lot of folks are going to say, "...but I don't program, nor do I understand Perl...", and that's okay...in many cases, there is some plain English in the comments of the plugin that tell you what it's trying to do.
A great big THANKS to Brett Shavers for setting up and maintaining the RegRipper.net site.
Monday, April 11, 2011
Links and Stuff
Digital Forensic Search
Corey Harrell's done some pretty interesting things lately...most recently, he set up a search mechanism that targets a specific subset of Internet resources that are specific to the digital forensics community. Sometimes when we're searching for something, we head off to our favorite search engine and cast a wide net...and we may not get that many hits initially that are pertinent to what we're looking for; by narrowing the field a bit, more relevant hits may be returned.
One of the issues with the community is that there's a lot of good information out there, but it's out there. Many analysts have expressed a bit of frustration that they can't seem to find what they're looking for when they need it, and that they don't know that they need it until...well...they need it. I've also talked to people who've done hours of research but not documented any of it, so when the issue they were working on comes up again, they have to go back and redo all of that research again.
Rootkit Evolution
Greg Hoglund posted to his Fast Horizon blog recently, and the title...Rootkit Evolution...sparked my curiosity. Sadly, when I read the post, it wasn't really anything more than a sales pitch for Digital DNA, whereas I had expected...well...something about the evolution of rootkits. I mean, that's kind of what the title suggested. One statement from the post did catch my interest, however:
...we are still ahead of the threat.
While I don't disagree with this, I would suggest that attackers may find that it isn't necessary to employ rootkit technology. Now, don't get me wrong...I'm sure that this does happen. But for the most part, is it really necessary? Look at many of the available annual reports, such as the Verizon Business Security report, M-Trends, or TrustWave's Global Security Report...some of the commonalities you may see across the board include considerable persistence without the need to deploy rootkits.
So...is the research important? Yes, it is. They're still being used (see the Chinese bootkit). Now and then, these things pop up (well, not really...someone finds one...) during an incident, as a well-designed rootkit can be very effective. It's just like NTFS alternate data streams...as soon as the security community considers them passé and stops looking for them, that's when they'll make a resurgence and be used more and more by the bad guys.
What a Tweet Looks Like
Ever wondered what a tweet looks like? I'm sure you have! ;-) By way of a couple of different links comes a very interesting write-up of what a tweet looks like from a developer's standpoint...click on the big picture in the middle of the post to enlarge the map-of-a-tweet (or "Twitter Status Object"). Most forensic analysts will likely look at the map and see the value right away.
Okay, so how would you get at this? This sort of information would likely be in some unstructured area of the disk, right...the pagefile or unallocated space. So, if you were to run strings or BinText against the pagefile or unallocated space extracted from an image via blkls, you would end up with a list of strings along with their offsets within the data. What I've done is write a Perl script that goes into the data at the offset that I'm interested in, and extracts however many bytes on either side of the offset that I specify. I've used this methodology to extract not only URIs and server responses from the pagefile, but also Windows XP/2003 Event Log records from unallocated space, translating them directly to timeline/TLN format. Doing this provided me with a capability that went beyond simply carving for files, as I needed to carve for specific, perhaps well-defined data structures.
Something like this could be used to quickly and easily extract tweets from unallocated space or the pagefile. Run strings/BinText, then search the output to see if you have any unique search terms, such as a user name or screen name. Then, run the script that goes to the offset of each search term and extracts the appropriate amount of data. This can be extremely valuable functionality to an examiner, and can be added to an overall data extraction process using free and open source tools.
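As a sketch of the approach described above (the actual tool is a Perl script; the function name and parameters here are hypothetical), going into the data at a reported offset and pulling the surrounding bytes might look like this in Python:

```python
def extract_context(path, offset, before=128, after=128):
    """Pull the bytes surrounding a strings/BinText hit.

    Given an offset reported by strings, seek into the raw data
    (pagefile, blkls output, etc.) and return the bytes on either
    side of the hit for further parsing.
    """
    start = max(0, offset - before)
    with open(path, "rb") as f:
        f.seek(start)
        return f.read((offset - start) + after)
```

The returned chunk can then be handed to a parser for whatever structure you're after...a URI, an event record, or a tweet.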
Writing Open Source Tools
The above section, the imminent publication of Digital Forensics with Open Source Tools (the book was Cory Altheide's idea and he was the primary author, and it is due to be published on 15 April), and the upcoming Open Source Forensics Conference (at which Cory and I will both be speaking...), all combine to make a good transition to some comments on writing open source tools. Also, this is a topic that Cory and I had considered addressing in the book, but had decided that it was too big for a sidebar, and didn't quite fit in anywhere in particular. After all, with all of the open source tools discussed in the book, we would really need to get input from others to really do the topic justice. As such, I thought I could post a few comments here...
For me, writing open source tools starts as a way to serve my own needs when conducting analysis. Throughout my career, I have had access to commercial forensic analysis applications, and each has served their purpose. However, as with any tool, these applications have their strengths and weaknesses. When conducting PCI forensic investigations, a commercial application made it easy to set up a process that all of the analysts could employ, but we also found out that some of the built-in functions were not exactly accurate, and that affected our results. The result of that was to seek outside assistance to rewrite the built-in functions, in order to get something that was more accurate and better suited to our needs. We would then export the results and run them through an open-source process to prepare an accurate count of unique numbers, etc.
So, sometimes I'd write an open source tool in order to massage some data into a format that is better suited to presentation or dissemination. However, there have been other times when no commercial application had the functionality I needed, so I wrote something to meet my needs. A great example of this is the MBR infector detector script. Another is the script I wrote to carve Windows XP/2003 Event Log records from unallocated space.
I can guess that one response to all this is going to be, "...but I don't know how to program...", and my response to that is, you don't have to...you just have to know someone who does. Not every analyst needs to know how to program, although many analysts out there can tell you that understanding programming (anything from batch files all the way to assembly...) can be extremely beneficial. However, having someone who understands what you do and can program can be extremely beneficial when it comes to DFIR work.
Too many times, when it comes to DFIR work, analysts are sort of left on their own. Business models often dictate the necessity for this...but having a support mechanism for engagements of all kinds can be an extremely effective means of extending your team's capabilities, as well as preserving corporate intellectual property.
Even if you aren't part of a DFIR team, you can still develop and take advantage of this sort of relationship. If you know someone within the community with programming skills, what does it hurt to seek their assistance? If they, in turn, provide you with effective, timely support, then you have a great opportunity to further the relationship by supporting them in some manner...even if that's just a "thank you" for their efforts. Many folks with some programming capabilities are simply seeking new challenges and new opportunities to learn, or employ their skills in new ways. So when it comes to writing open source tools, many times, the only real "cost" involved is a "thank you" and acknowledgement of someone's efforts to support you.
Scanners
Lenny's got a post up that lists three tools for scanning the file system for malware with custom signatures. These are all excellent tools; in fact, if you remember, I had provided instructions (from MHL) regarding how to install pescanner.py on Windows systems, and two of the tools that Lenny mentions (ClamAV, Yara) can be included in the setup for pescanner.py, with the signatures used to locate suspicious files.
Signatures are one way to locate malware and other suspicious files on a system. However, signatures change, and they must be kept up to date. You can also use signatures to locate all packed files, as well as files "hidden" using other obfuscation methods.
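This isn't how ClamAV or Yara work internally, but as a toy illustration of the basic signature-scanning idea, here's a Python sketch that walks a directory tree and flags files whose first bytes match entries in an (illustrative) signature table:

```python
import os

# Illustrative byte signatures only...real scanners (ClamAV, Yara)
# use far richer rule languages than a simple prefix match
SIGNATURES = {
    b"MZ": "DOS/PE executable header",
}

def scan_tree(root):
    """Walk a directory tree and flag files whose first bytes
    match a known signature."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    head = f.read(8)
            except OSError:
                continue
            for sig, desc in SIGNATURES.items():
                if head.startswith(sig):
                    hits.append((path, desc))
    return hits
```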
Keep in mind, however, that this is only part of the solution. Because signatures within malware files do change, we also need to consider the network, memory, and other parts of the system (i.e., Registry, etc.) to look for indicators of malware. In fact, many times, this may be our first indicator of malware. I've found previous infection attempts where malware had been loaded on a system, only to be detected and quarantined by the installed AV product. I could see the names of the files within the AV and Application Event Logs. Interestingly enough, files of the same name were created a couple of weeks later, indicating that the bad guy had obfuscated his malware so that the AV wouldn't detect it, and was able to get it successfully installed.
There's more to malware detection than just scanning files for signatures. If all you have is an acquired image from a system, and a malware infection was suspected, there are a number of other things you can look at in order to find ancillary indicators of an infection. Scanners should be part of the malware detection process.
Corey Harrell's done some pretty interesting things lately...most recently, he set up a search mechanism that targets a specific subset of Internet resources that are specific to the digital forensics community. Sometimes when we're searching for something, we head off to our favorite search engine and cast a wide net...and we may not get that many hits initially that are pertinent to what we're looking for; by narrowing the field a bit, more relevant hits may be returned.
One of the issues with the community is that there's a lot of good information out there, but it's out there. Many analysts have expressed a bit of frustration that they can't seem to find what they're looking for when they need it, and that they don't know that they need it until...well...they need it. I've also talked to people who've done hours of research but not documented any of it, so when the issue they were working on comes up again, they have to go back and redo all of that research.
Rootkit Evolution
Greg Hoglund posted to his Fast Horizon blog recently, and the title...Rootkit Evolution...sparked my curiosity. Sadly, when I read the post, it wasn't really anything more than a sales pitch for Digital DNA, whereas I had expected...well...something about the evolution of rootkits. I mean, that's kind of what the title suggested. One statement from the post did catch my interest, however:
...we are still ahead of the threat.
While I don't disagree with this, I would suggest that attackers may find that it isn't necessary to employ rootkit technology. Now, don't get me wrong...I'm sure that this does happen. But for the most part, is it really necessary? Look at many of the available annual reports, such as the Verizon Business security report, M-Trends, or TrustWave's Global Security Report...one of the commonalities you may see across the board is considerable persistence without the need to deploy rootkits.
So...is the research important? Yes, it is. Rootkits are still being used (see the Chinese bootkit). Now and then, these things pop up (well, not really...someone finds one...) during an incident, as a well-designed rootkit can be very effective. It's just like NTFS alternate data streams...as soon as the security community considers them passe and stops looking for them, that's when they'll make a resurgence and be used more and more by the bad guys.
What a Tweet Looks Like
Ever wondered what a tweet looks like? I'm sure you have! ;-) By way of a couple of different links comes a very interesting write-up of what a tweet looks like from a developer's standpoint...click on the big picture in the middle of the post to enlarge the map-of-a-tweet (or "Twitter Status Object"). Most forensic analysts will likely look at the map and see the value right away.
Okay, so how would you get at this? This sort of information would likely be in some unstructured area of the disk, right...the pagefile or unallocated space. So, if you were to run strings or BinText against the pagefile or unallocated space extracted from an image via blkls, you would end up with a list of strings along with their offsets within the data. What I've done is write a Perl script that goes into the data at the offset that I'm interested in, and extracts however many bytes on either side of the offset that I specify. I've used this methodology to extract not only URIs and server responses from the pagefile, but also Windows XP/2003 Event Log records from unallocated space, translating them directly to timeline/TLN format. Doing this provided me with a capability that went beyond simply carving for files, as I needed to carve for specific, perhaps well-defined data structures.
Something like this could be used to quickly and easily extract tweets from unallocated space or the pagefile. Run strings/BinText, then search the output to see if you have any unique search terms, such as a user name or screen name. Then, run the script that goes to the offset of each search term and extracts the appropriate amount of data. This can be extremely valuable functionality to an examiner, and can be added to an overall data extraction process using free and open source tools.
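The offset-extraction step described above is the core of the technique. My actual tool is a Perl script, but a minimal Python sketch of the same idea (the function name and default window sizes here are illustrative, not from the original script) looks like this:

```python
def context_at_offset(path, offset, before=64, after=64):
    """Given an offset reported by strings/BinText, pull the raw bytes
    surrounding that hit so the structure around it can be examined."""
    with open(path, "rb") as f:
        start = max(0, offset - before)  # clamp at the start of the file
        f.seek(start)
        return f.read((offset - start) + after)
```

From there, the extracted bytes can be parsed according to whatever structure (URI, Event Log record, Twitter Status Object) you expect at that offset.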
Writing Open Source Tools
The above section, the imminent publication of Digital Forensics with Open Source Tools (the book was Cory Altheide's idea, and he was the primary author; it is due to be published on 15 April), and the upcoming Open Source Forensics Conference (at which Cory and I will both be speaking...) all combine to make a good transition to some comments on writing open source tools. This is also a topic that Cory and I had considered addressing in the book, but we decided that it was too big for a sidebar and didn't quite fit in anywhere in particular. After all, with all of the open source tools discussed in the book, we would really need input from others to do the topic justice. As such, I thought I could post a few comments here...
For me, writing open source tools starts as a way to serve my own needs when conducting analysis. Throughout my career, I have had access to commercial forensic analysis applications, and each has served its purpose. However, as with any tool, these applications have their strengths and weaknesses. When conducting PCI forensic investigations, a commercial application made it easy to set up a process that all of the analysts could employ, but we also found that some of the built-in functions were not exactly accurate, and that affected our results. As a result, we sought outside assistance to rewrite those functions in order to get something more accurate and better suited to our needs. We would then export the results and run them through an open-source process to prepare an accurate count of unique numbers, etc.
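As an illustration of that kind of open-source post-processing (this is a hypothetical sketch, not the process we actually used), a script to count unique card numbers might extract digit runs from the exported data, validate them with the standard Luhn mod-10 check, and deduplicate with a set:

```python
import re

# Candidate PANs: 15- or 16-digit runs. Illustrative only; a real
# process handles separators, track data, and other card lengths.
CC_RE = re.compile(rb"\b\d{15,16}\b")

def luhn_ok(number):
    """Standard Luhn mod-10 check on a string of digits."""
    total, double = 0, False
    for ch in reversed(number):
        d = int(ch)
        if double:
            d *= 2
            if d > 9:
                d -= 9
        total += d
        double = not double
    return total % 10 == 0

def unique_valid_numbers(blob):
    """Count each Luhn-valid candidate once, no matter how many
    times it appears in the exported results."""
    return {m.group() for m in CC_RE.finditer(blob)
            if luhn_ok(m.group().decode())}
```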
So, sometimes I'd write an open source tool in order to massage some data into a format that is better suited to presentation or dissemination. However, there have been other times when no commercial application had the functionality I needed, so I wrote something to meet my needs. A great example of this is the MBR infector detector script. Another is the script I wrote to carve Windows XP/2003 Event Log records from unallocated space.
I can guess that one response to all this is going to be, "...but I don't know how to program...", and my response to that is, you don't have to...you just have to know someone who does. Not every analyst needs to know how to program, although many analysts out there can tell you that understanding programming (anything from batch files all the way to assembly...) can be extremely beneficial. However, having someone who understands what you do and can program can be invaluable when it comes to DFIR work.
Too many times, when it comes to DFIR work, analysts are sort of left on their own. Business models often dictate the necessity for this...but having a support mechanism for engagements of all kinds can be an extremely effective means of extending your team's capabilities, as well as preserving corporate intellectual property.
Even if you aren't part of a DFIR team, you can still develop and take advantage of this sort of relationship. If you know someone within the community with programming skills, what does it hurt to seek their assistance? If they, in turn, provide you with effective, timely support, then you have a great opportunity to further the relationship by supporting them in some manner...even if that's just a "thank you" for their efforts. Many folks with some programming capabilities are simply seeking new challenges and new opportunities to learn, or employ their skills in new ways. So when it comes to writing open source tools, many times, the only real "cost" involved is a "thank you" and acknowledgement of someone's efforts to support you.
Scanners
Lenny's got a post up that lists three tools for scanning the file system for malware using custom signatures. These are all excellent tools; in fact, if you remember the instructions I provided (from MHL) for installing pescanner.py on Windows systems, two of the tools Lenny mentions (ClamAV, Yara) can be included in the pescanner.py setup, and their signatures used to locate suspicious files.
Signatures are one way to locate malware and other suspicious files on a system; however, signatures change, and they must be kept up to date. You can also use signatures to locate packed files, as well as files "hidden" using other obfuscation methods.
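Dedicated scanners such as ClamAV and Yara are the right tools for this job, but the basic idea of signature-based scanning can be sketched with nothing more than the standard library (the signatures below are illustrative markers, not real detection logic):

```python
import os

# Illustrative byte signatures: an executable header and a common
# packer marker. Real signature sets are far larger and smarter.
SIGNATURES = {
    "MZ header": b"MZ",
    "UPX packed": b"UPX!",
}

def scan_bytes(data):
    """Return the names of any signatures found in a byte string."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_tree(root):
    """Walk a directory tree, scanning each file's first 4 KB."""
    hits = {}
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            try:
                with open(path, "rb") as f:
                    found = scan_bytes(f.read(4096))
            except OSError:
                continue  # unreadable file; skip it
            if found:
                hits[path] = found
    return hits
```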
Keep in mind, however, that this is only part of the solution. Because signatures within malware files do change, we also need to consider the network, memory, and other parts of the system (i.e., the Registry, etc.) when looking for indicators of malware. In fact, many times, these may be our first indicators. I've found previous infection attempts where malware had been loaded on a system, only to be detected and quarantined by the installed AV product; I could see the names of the files within the AV and Application Event Logs. Interestingly enough, files of the same name were created a couple of weeks later, indicating that the bad guy had obfuscated his malware so that the AV wouldn't detect it, and was then able to get it successfully installed.
There's more to malware detection than just scanning files for signatures. If all you have is an acquired image from a system and a malware infection is suspected, there are a number of other things you can look at in order to find ancillary indicators of an infection. Scanners should be part of the malware detection process, not the whole of it.
Thursday, April 07, 2011
Links
Digital Forensics Framework
The guys over at DFF have an open-source framework used as both a digital investigation and development platform. As this is an open-source tool, Cory did discuss a previous version of this tool in the Digital Forensics with Open Source Tools book.
The DFF guys recently posted on Time Filtering, using DFF to filter the image based on time stamp information.
While I think that this is a great step forward, I also think that this sort of data visualization is of limited value. As I've been creating timelines, I've been looking for ways to present the information visually that would make analysis easier and more efficient; to be honest, I have yet to find one. Others have talked about presentation methods such as a histogram showing volumes of activity, but in the same breath, they'll also talk about malware following the principle of Least Frequency of Occurrence (LFO) on systems; I'm not sure that showing spikes in activity necessarily lends itself to finding those things that occur least frequently on a system.
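One way to honor LFO rather than chase spikes is simply to invert the histogram: count occurrences and look at the rarest values first. A quick sketch against TLN-format events (the function name and the choice of field to count are my own, for illustration):

```python
from collections import Counter

def least_frequent(tln_events, field=1, n=10):
    """Count how often each value of a TLN field appears and return
    the n rarest, rarest first. TLN format assumed:
    time|source|host|user|description (field 1 = source)."""
    counts = Counter(line.split("|")[field] for line in tln_events)
    return counts.most_common()[:-n - 1:-1]  # slice off the tail, reversed
```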
AntiForensics
Craig Ball wrote this article for Law Technology News, on the use of antiforensics. Many times, measures taken to foil the work of forensic analysts were originally intended as a privacy measure, but even if those actions are intended specifically to hide the user's activities from the analyst, they are often not even a speed bump in the road to analysis.
During an investigation I determined that an "evidence eliminator" application had been used. This analysis was of an older case, and the image was from a system that had been acquired several years prior to my analysis. When I did some research on the version of the application, I found that it deleted specific Registry keys...but I was able to recover the most recently deleted keys from unallocated space within the hive file itself. Previous subkeys and values were recovered via the hives found in the System Restore Points.
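Recovering recently deleted keys from hive unallocated space relies on the hive format itself: cells with a positive size field are free, and free cells whose data still begins with the "nk" signature are deleted key nodes. A minimal sketch of that scan (not the actual tool used in the case; the function name is mine):

```python
import struct

HBIN_HEADER = 0x20  # each hive bin starts with a 32-byte header

def deleted_key_cells(hive):
    """Walk hive bins and return offsets of free cells (positive size
    field) whose data starts with the 'nk' key-node signature."""
    hits = []
    hbin_off = 0x1000  # first hbin follows the 4 KB base block
    while hbin_off + 12 <= len(hive) and hive[hbin_off:hbin_off + 4] == b"hbin":
        hbin_size = struct.unpack_from("<I", hive, hbin_off + 8)[0]
        cell = hbin_off + HBIN_HEADER
        end = hbin_off + hbin_size
        while cell + 4 <= end:
            size = struct.unpack_from("<i", hive, cell)[0]
            if size == 0:
                break  # padding; stop walking this bin
            if size > 0 and hive[cell + 4:cell + 6] == b"nk":
                hits.append(cell)
            cell += abs(size)
        hbin_off += hbin_size
    return hits
```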
Antiforensics techniques target the training of the analyst...that's pretty much it. For more information, see the Parsing EVT Records section below.
Images
The DFF guys also included a link to the Digital Corpora site, from which the NTFS image described in the DFF blog post was downloaded. This is a great place to go to get access to some images, one of which is of a Vista system, apparently.
Hacking
One issue that continues to be a threat is disgruntled former employees. Gucci was recently confronted with this issue. What's interesting about the post to the Sophos NakedSecurity blog is that the fired former employee reportedly gained access to the network by first creating an account for a fictional employee and then, after being fired, socially engineering the helpdesk into believing that he was that fictional employee. After that, he was able to return time and time again. This is just an example of how someone can use an organization's processes against it, taking full advantage of those processes to perform a wide range of malicious actions.
Using RegRipper
I recently received the following quote from someone who used RegRipper and the regtime.pl plugin, but asked to remain anonymous (permission was given to post the quote, however):
I have the date and time in which an IDS caught a piece of malware being downloaded on the network to a user's machine. I need/needed to look for clues to see if the exe actually executed or not. I was using FTK's registry viewer to create a timeline of last write times for Keys but Registry Viewer doesn't let you export in a format other than HTML, which is just not helpful.
RegRipper gives me a nice line by line way to see the time and date stamps in a way in which they are much more viewable, WHICH IS GREAT.
Now, I'm not posting this to poke fun at nor admonish AccessData...not at all. I'm also not saying that one tool is any better or worse than another...all tools have their strengths and weaknesses, and the real power of a tool is in who uses it. I wanted to post this publicly to show those who may not have used RegRipper, or be familiar with it, that even though it's not a commercial tool, it can still be very useful. I tend to think that a number of folks in the DFIR community use specific tools because they feel that they have to...their employer purchased a tool or set of tools, based on some ancillary knowledge of the industry or due to a customer requirement. As such, there's considerable reticence toward trying or incorporating new tools, and rather than seeking the best tool to solve the problem, the problem is redefined to conform to the tools being used.
Open Source Conference
Speaking of tools, Brian Carrier sent out an email recently announcing the Sleuth Kit and Open Source Digital Forensics Conference on 14 June 2011 in McLean, VA. The day before the conference, there will be "two half-day workshops that will allow you to get hands-on experience with analyzing web browser artifacts and making timelines with open source tools."
Speakers at the conference will include Cory Altheide, Brian Carrier and Jon Stewart. You had me at "Cory Altheide". ;-) While remaining a fairly brief conference, this still looks as if it will be a good one, and I'm hoping that Cory and I will have copies of Digital Forensics with Open Source Tools available.
Chinese Bootkit
There's a new post over on the ThreatPost blog that discusses a Chinese bootkit. There's some interesting information available, and a graphic that demonstrates the process by which systems are infected. Part of that process includes an MBR infector, something for which I'd written a Perl script to help me detect during forensic analysis. Unfortunately, there isn't a great deal of information available in the blog post about the MBR infector, but I will say that it appears that these sorts of malware are popping up more frequently, so this is definitely something you would want to include in your malware detection process. After all, with the right tools, it only takes a few seconds to check for the possibility of an MBR infector, so we're not talking about extending your process by a day or more. This Net-Security article indicates that the MBR infector copies the original MBR to the third sector, so the MBR infector detector would work very well in helping you find indications of this bit of malware.
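Per the Net-Security article, the infector copies the original MBR to the third sector, so one quick check (a sketch of the idea, not my actual Perl script; the function name is illustrative) is to look for the 55 AA boot signature in both sector 0 and sector 2 of an image:

```python
SECTOR = 512

def mbr_copy_check(image_path):
    """Read the first three sectors of an image and flag the 55 AA
    boot signature in sector 0 (the active MBR) and in sector 2,
    where this infector reportedly stashes the original MBR."""
    with open(image_path, "rb") as f:
        sectors = [f.read(SECTOR) for _ in range(3)]
    signed = [i for i in (0, 2) if sectors[i][510:512] == b"\x55\xaa"]
    findings = [f"sector {i}: 55 AA boot signature present" for i in signed]
    if signed == [0, 2]:
        findings.append("boot signature in both sectors 0 and 2: possible relocated MBR")
    return findings
```

A clean disk should report the signature in sector 0 only; two signed sectors warrant a closer look.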
Parsing EVT Records
Lance recently posted about an EnScript he provided to help parse "classic" Windows Event Log (.evt) records from unallocated space. This is very similar to my recent post about the same thing, although the approach I took uses only free and open-source tools; if you're a heavy EnCase user, you'll probably want to go with Lance's solution. More than anything else, I think that what this shows is that there's a need for these sorts of capabilities within the community...many times, there simply isn't one single way to accomplish something, and having multiple tools is a good thing.
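The core of carving "classic" .evt records is straightforward: each record begins with a 4-byte length followed by the "LfLe" magic, and ends with a copy of that length, which makes a handy sanity check. A minimal sketch (not my actual script; the size bounds are loose, illustrative choices):

```python
import struct

EVT_MAGIC = b"LfLe"

def carve_evt_records(data):
    """Scan a raw blob (pagefile, unallocated space) for classic .evt
    records: 4-byte length, 'LfLe' magic, and a copy of the length as
    the record's final DWORD."""
    records = []
    pos = data.find(EVT_MAGIC)
    while pos != -1:
        if pos >= 4:  # need room for the length field before the magic
            (length,) = struct.unpack_from("<I", data, pos - 4)
            start, end = pos - 4, pos - 4 + length
            # loose sanity bounds on the record size
            if 0x30 <= length <= 0x10000 and end <= len(data):
                if struct.unpack_from("<I", data, end - 4)[0] == length:
                    records.append(data[start:end])
        pos = data.find(EVT_MAGIC, pos + 1)
    return records
```

Each recovered record can then be parsed further (record number, TimeGenerated, event ID, etc.) and dropped straight into TLN format.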
I've recently received a number of requests to share this code and technique, and the first time I did so, I sent the script within 10-15 min of receiving the request. And then didn't hear a thing back until I followed up three days later. Really...is it so hard to thank someone for sending you something that you asked for, and just acknowledging that you received it?
Presentations
I was over reviewing the offerings on the "What's New" page at the e-Evidence.info web site, and found Nick Klein's presentation from RuxCon. Interestingly, slide 9 includes the bullet, "Be specific in defining the objectives and what evidence might assist in determining the facts." Slide 11 of that presentation is all about documenting what you do. This is interesting to me because it's very similar to what Chris talks about in his Sniper Forensics presentations.
Malware Analysis
MalwareAnalyzer 2.9 was released recently. This project is written in Python, but provided as a Windows executable. I haven't seen too much out there about this one, but projects like this are always worth a look.