Exploit Artifacts
The Journey into IR blog has an interesting post regarding exploit artifacts. This one is specific to CVE-2010-0094, and provides information on exploit artifacts for both admin and non-admin users. The post lists some excellent file system and Registry artifacts, and they're worth taking a look at and cataloging. One of the things I do in my malware detection process is look for PE files (with .tmp or .exe/.dll extensions) in the user's %TEMP% directory (based on the operating system version)...this blog post mentions several file system artifacts in that directory that would be of interest and worth checking out during a wide range of exams (CP/Trojan defense, intrusion, malware detection, etc.).
One thing I would say about the post is that the Registry artifact posted is confusing...it says that the path to the Registry key affected is "HKLM-Admin", but it should say "HKCU-Admin" for the admin user.
Be sure to take a look at the follow-on post, as well...
Addendum: Plugin complete...
SOP
One of the issues faced in corporate environments is that when something is detected, it's simply quicker to take the box offline, "nuke it from orbit", and reinstall the OS, apps, and data. This was noted in a recent SANS ISC post. The age-old issue with this approach is that if a thorough exam isn't performed, and the infection or compromise vector isn't determined (based on fact and not speculation), then the system remains open to exploitation and compromise all over again. So, while the argument appears to be that "it's quicker" to just nuke the box and move on, is it really quicker to keep doing it over and over again?
I completely understand the need to quickly reprovision systems and get them back into service. But there's such a thing as moving too quickly...to keep with the movie references from the SANS ISC post, consider Arnold S. in "Twins"; "You move too soon."
Anyway, there are a couple of things that you should consider adding to your standard operating procedures (SOP) for these situations. The SANS ISC post mentions using disk2vhd, which is free and is a good option to consider. Another is to download FTK Imager (free) and acquire a copy of the drive before reprovisioning. If you need to mount the image as a VHD file, MS provides the free vhdtool.exe.
The point is that "nuking from orbit" is indeed a quicker response...the first time. However, not understanding the overall issue of how the system became infected/compromised in the first place quickly reduces the value of this approach, as the system becomes re-infected. Consider the situation where the compromise is NOT a result of out-of-date patches...what good is setting the system up all over again with up-to-date patches?
If you still find yourself without time to pursue more than simply making a copy/image of the system, why not outsource the analysis? Contact someone you know and trust within the DFIR community and ask for assistance. If you're addressing this from a corporate perspective, consider establishing a retainer-based setup with a trusted advisor.
APIMonitor
Ever want a tool similar to strace, for Windows? Did you ever want to see more about what an application is doing, and how it's doing it? Check out APIMonitor. While this tool does appear similar to ProcessMonitor, there are some interesting differences, such as the buffer view, and the ability to monitor services.
Review
Eric J. Huber, of the AFoD blog, posted a review of Windows Registry Forensics on Amazon a bit ago, but I wanted to mention it again here, as this is what helps get the word about the book (and its potential value) out to the community.
Thanks again to Eric and the others who've posted reviews on Amazon, or on other publicly-accessible sites!
One thing I'd like to mention with respect to the content of the review is that the tools from the DVD that accompanies WRF are available online, and the link is continually available from the Books page associated with this blog.
Monday, March 21, 2011
Sunday, March 20, 2011
Using RegRipper
Sometimes I'll receive or see questions in a forum about RegRipper...does it do this, does it do that? Sometimes I get the sense from these questions that there's tire-kicking going on, and that's fine...but I've always thought that it's been pretty clear what RegRipper does, and what problem it tries to solve.
RegRipper is an open-source tool that allows the user (usually an analyst or responder) to extract specific information (keys, key LastWrite times, values, etc.) from the Registry.
As a side effect, RegRipper is also an excellent facility for retention of institutional knowledge. Let's say an analyst finds something that she hasn't seen before through Registry analysis, as a result of 10 hours of dedicated analysis. She can write a plugin, documenting everything, and then provide that plugin to other team members. Now, without having her knowledge or expertise, or having spent that same 10 hours digging, every one of the other analysts on her team can "find" and extract that same information. Two years later, after some analysts have left and new ones have been hired, we have the same effect, again without the new analysts having to spend 10 or more hours to find the same thing. And the data is extracted the same way every time.
RegRipper is NOT a search tool, although there are plugins that will parse through binary data to retrieve information that would not be found via any of the usual search tools. You can program the ability to do some searching into a plugin, sure...but RegRipper is not a tool you would use to perform general searches of Registry hives.
Below are some of the more popular questions I get:
Does RegRipper work with Windows 7?
This is one of those questions that I'm not sure I know how to answer. If I say, "yes", I'm afraid that there's an expectation that every possible plugin for Windows 7 that could ever be written has been written and is included with the distribution. If I say, "yes, but...", anything after the comma will get lost, and we're back at the last answer.
The fact of the matter is that RegRipper works with all versions of Windows from NT up through and including Windows 7. I've used it on everything from Windows 2000 through XP and on to Vista and Windows 7 systems. It works because the Registry structure, on a binary and data structure level, remains the same across all versions. Where things go haywire a bit is when a key or value has been added, moved or deleted...which happens quite often between Windows versions.
So, the long answer is that yes, RegRipper works on Windows 7, but the caveat is that it must have the plugin for the data in which you're interested.
Sometimes, the above question is more often asked as, Why doesn't RegRipper do X? The answer to that is usually, Because you haven't written the plugin yet, my friend. ;-) Folks, RegRipper is open-source, and free. It comes with a great deal of documentation on how to use it. For example, if you want to know what rip.exe can do, just type "rip -?" or "rip -h" at the command prompt.
Does RegRipper do X?
Much like Nessus, RegRipper is an engine that runs plugins. If you want it to do something, you can make it do it. The tool is open source, and is written in Perl.
One of the tools I included with RegRipper is "rip", either with the .pl or .exe extension, which is simply the command line version of RegRipper. Rip has some cool features. For example, you can run either single plugins or entire profiles from the command line, and capture the information to files using DOS redirection. The output from rip goes to STDERR and STDOUT, so use the appropriate redirection to capture everything.
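For instance, on the command line you might run something like rip -r NTUSER.DAT -p userassist > out.txt 2>&1 to capture both streams into one file. The split between STDOUT and STDERR can be demonstrated with a short Python sketch; the command below is just a stand-in that prints to both streams, since rip itself isn't assumed to be installed here:

```python
import subprocess
import sys

# Stand-in for a rip invocation: a child process that writes to BOTH streams.
# If you only redirect STDOUT, the STDERR half of the output is lost.
cmd = [sys.executable, "-c",
       "import sys; print('plugin output'); print('status message', file=sys.stderr)"]

result = subprocess.run(cmd, capture_output=True, text=True)
print("STDOUT:", result.stdout.strip())
print("STDERR:", result.stderr.strip())
```

The same principle applies at the cmd.exe prompt: `> out.txt` alone captures only STDOUT, while `> out.txt 2>&1` captures everything.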
If you want to know what plugins you have, use rip -l -c > plugins.csv, and open the resulting file in Excel. When the Registry forensics book was released, I included a GUI tool called "Plugin Browser" that lets you browse through the plugins one at a time.
Can you make RegRipper do X?
Yes. And so can you. RegRipper is open-source, based on Perl. There's very little in the way of a "proprietary" API...in fact, there isn't any at all. RegRipper encapsulates some regular Perl APIs, such as the print() function, but that's it...it's just encapsulated. RegRipper is based on the Parse::Win32Registry module by James McFarlane, which is easily installed into ActiveState Perl using the Perl Package Manager (PPM).
Need some help? No problem. There are a number of plugins available, and you can open any of these in an editor (or even NotePad) and use them as a basis. In fact, this is exactly how I do it.
If you need even more help, and would like me to write a plugin for you, all I need is a clear, concise description of what you want, and a sample hive that contains the data. That's it. If you give me those, and I have the time available, I can usually turn around a working plugin in very short order.
If you have any questions, or don't understand something, the best thing to do is ask. RegRipper is a powerful and very useful tool...I'm not saying this because I wrote it; I'm saying it because I wrote it and use it on every engagement. I use RegRipper to look at specific keys and values to provide insight into the system under analysis, as well as provide some context about the engagement overall. I also use it to locate malware that wasn't detected by AV. I've used RegRipper to catalogue new intrusion artifacts, as well as demonstrate that a user account was used to view (or in one instance, not view) specific files.
If you're using RegRipper, a new distribution of plugins (not so much new plugins as more...) was included along with Windows Registry Forensics, as well as online. I've updated a couple of the plugins, added a few more, and Brett's provided others via RegRipper.net.
Friday, March 11, 2011
Links and Notes
Forensic Meet-up
There are plans afoot for a forensics meet-up in the Northern VA area (Chantilly - Centreville - Herndon - Reston) on 31 Mar 2011. The meet-up will likely start around 6:30pm - 7pm, and the location is TBD for the moment...keep an eye here, or on the Win4n6 group. This first meet-up will be free-form, and I'll work up something of an informal agenda.
As more folks become aware of this meet-up, I guess my initial concern would be where to meet. I'd like this to be informal, and everyone to relax and have a beer. If the interest is for something a bit more formal, then we may move to a different agenda later. Eventually, my hope is that this becomes something useful to folks, as we can discuss and implement innovation in the DF and IR fields...
F-Response Patent
On Fri, 11 Mar 2011, Matt announced that F-Response had received a patent for remote forensic innovation! Congrats, Matt...this is very well deserved!
Of specific note is that F-Response provides, "...forensic grade write-protection..." for remote forensics and raw access to systems.
This is fantastic news for Matt, and for the community as a whole! Matt's contributions to the field have been phenomenal, to say the least.
RegRipper Plugins
I recently wrote up some new plugins (and updated the samparse.pl plugin)...
notify.pl - Parses the Notify subkeys within the Software hive for registered Winlogon Notification DLLs, based on Mark's Case of the SysInternals-Blocking Malware post
init_dlls.pl - Checks for keys similar to the one mentioned in Mark's Case of the Malicious AutoStart post
renocide.pl - Checks for an artifact key mentioned on the MMPC site for the Win32/Renocide malware
These plugins are meant to demonstrate a couple of things...first, that Registry analysis can be used in conjunction with other analysis methods to detect malware within acquired images, where AV scanners might fail. I've run AV scans before where two commercial and three free AV scanners didn't find anything, but the fourth free scanner found something. I've also seen where AV used by customers has failed, not due to having an incorrect DAT file, but due to having an incorrect scanning engine. We're all susceptible to this, and if you use AV as part of your malware detection process when you examine acquired images, then this is something that you'll need to be aware of, as well.
Second, all three of these plugins took me less than 30 minutes...total...to write and test. In fact, the only real slow-down was deciding how to make the output a bit more useful...for the notify.pl plugin, I copied code from the userassist.pl plugin to list all of the registered DLLs sorted based on their key LastWrite times. This means that if I want to deploy any of these plugins as part of my timeline creation toolkit, it's simply a matter of minutes for me to modify them. So in less than 30 minutes, I was able to add three new plugins to the library, and saved everyone who uses those plugins the time for researching and writing those plugins themselves. This serves not only as a force multiplier, but also as a library for institutional knowledge within the community as a whole.
You can get copies of these plugins from Brett's RegRipper.net site.
As a side note, running RegRipper is just part of the malware detection process that I use regularly, and what I'm writing about and detailing for my next book. Part of the supporting materials for this book will include a checklist, as well.
Wednesday, March 09, 2011
More Malware Detection
Given my last post which mentioned part of my malware detection process, I thought it would be a good idea to mention a couple of bits of malware that I've seen described online recently.
First, from Mark's blog comes The Case of the SysInternals-Blocking Malware; as the title would lead you to believe, the responder working on this one had some issues troubleshooting the malware, as it kept blocking his use of SysInternals tools. The malware was eventually identified as Swimnag, which apparently uses the Notify key as its persistence mechanism.
All told last night, it took me less than 10 minutes to write, test, and modify a RegRipper plugin to display the name, LastWrite time, and DLLName values of the Notify subkeys. I could put a few more minutes into manipulating the output a bit. Speaking of which, has anyone taken a shot at writing a plugin for the type of malware described in The Case of the Malicious AutoStart?
Addendum: Took me about 10-15 min, but I wrote up init_dlls.pl to locate value names (for the Malicious AutoStart issue) that end in Init_DLLs.
Another bit of joy mentioned on the MMPC this morning is Win32/Renocide. The write-up for this one is an interesting bit of reading, in that it spreads not just via local, removable and network drives (on a network, it can spread via NetBIOS), but it also looks for specific file sharing applications, and uses those to spread, as well. The persistence mechanisms are nothing new, but what I did notice is that one of the artifacts of an infection is a change to the firewall settings...this is one of those things that I encapsulate in "Registry analysis" when attempting to detect the presence of malware in an acquired image. Interestingly enough, this malware also maintains its configuration in a Registry key (Software\Microsoft\DRM\amty); if you locate this key in the Registry, the LastWrite time should give you an approximate time that the system was infected.
Monday, March 07, 2011
MBR Infector Detector
Now and again, I get those analysis gigs where someone suspects that a system may have been infected with some sort of malware, but they aren't sure, and don't really have anything specific (Event Log entry, AV alert, etc.) to point to. I know that others get these sorts of gigs as well, and like them, I have a process that I go through when examining images of these systems. This usually starts with checking for installed AV products (MRT, etc.) to review their logs, as well as checking for AV having been run before the system was taken offline...if logs are available, they can tell you a lot, particularly the product and version run. From there, I also mount the image and scan it with other AV tools.
One of the steps on my list is to also look for MBR infectors. What's an "MBR infector", you ask? Read on...
F-Secure "Hippie" Description (1996)
SecurityVibes - Mebroot (2008)
F-Secure - Mebroot (3 Mar 2008)
Symantec - Mebroot (30 July 2010)
Sunbelt - TDSS/TDL4 (15 Nov 2010)
F-Secure, 17 Feb 2011
MMPC - Sinowal, aka Mbroot, Mebroot (8 Feb 2011)
MMPC - Win32/Fibebol.A (7 Mar 2011)
If you read through the above links, particularly those that are AV vendor descriptions of MBR infectors, you'll notice some commonalities...in particular, when the MBR is infected, other sectors prior to the first partition (usually, sector 63) contain something...a copy of the MBR, code to be injected into the system, something. Now, this doesn't mean that this is the case for ALL MBR infectors, just those that have been mentioned publicly.
Usually, what I would do is load the image into FTK Imager and scan through the sectors manually...but why do that, when you can make the computer do it? That's right...I wrote a (wait for it!) Perl script (mbr.pl) to do this for me!
So, what the script does is scan through a range of sectors from an image file; by default, it will scan through sectors 0 through 63 inclusive, but the analyst can set different sectors to be scanned. When a sector that does NOT contain all zeros is found, the script will flag it. By "flag it", in summary mode, the script will just list the sector number. In a more detailed mode (which is the default), the script will print out the contents of the sector to STDOUT, in a hex viewer-like format. This way, it's real easy for the analyst to see, "hey, this sector just contains some strings associated with Dell installs", or "Hey, this sector is the start of a PE file!" Because the output goes to STDOUT, you can pipe it through "more" or redirect the output to a file.
Also, using another switch, the analyst can dump the raw sectors to disk. This allows you to generate MD5 or ssdeep hashes, run ssdeep hash comparisons, submit the raw dump to VirusTotal, etc.
Overall, it's pretty cool. I usually run mmls against the image anyway, and many times I'll see that the first partition starts at sector 63. Other times, I've found the starting sector for the first partition by searching the image via FTK Imager for "NTFS". Regardless, with the output of mmls, I can then run mbr.pl as part of my malware detection process, and just like other parts of the process, if nothing unusual is found, that's okay. If something is found, it's usually correlated against the output of other steps in the process. The overall goal is to do as thorough a job as possible.
Thursday, March 03, 2011
Cybercrime and Espionage
I recently finished reading through Cybercrime and Espionage: An Analysis of Subversive Multi-Vector Threats, by John Pirc and Will Gragido, and wanted to share my thoughts on the book.
First, a couple of points of clarification. For one, I'm reading the ebook on a Kindle. I don't have the book-book version, so I can't compare the formatting...but I do have a copy of Windows Registry Forensics on the Kindle, so I can make something of a comparison there. Also, I wasn't sent a copy of the book to review...I'm writing this review entirely of my own accord, and because I think that there are some very interesting statements in, and thoughts generated by, this book. Having a new book out myself right now, I think that this is something of a distinction.
The authors do a very good job of laying the groundwork early in the book, in particular pointing out that there is a lot about cybercrime that isn't new, but has instead been around for centuries. Wanting what others have, and securing it for one's own profit are age-old desires/motivators, and bits and bytes are simply the new medium.
I am somewhat familiar with most of the compliance standards that the authors discuss, such as the PCI Data Security Standard (I spent three years as a QSA-certified PCI examiner, part of a QIRA team), HIPAA, and others (the credit union NCUA wasn't mentioned by the authors, but would have fit within the chapter nicely).
The authors also spend considerable time in the cyber-realm, particularly in developing and describing their Subversive Multi-Vector Threat (SMT) taxonomy, in which they include the APT and even Spc. Manning. The authors build up to their taxonomy and provide examples, and then take the time to go beyond that and provide descriptions of intelligence gathering processes, as well as means that can be used to attempt to protect organizations.
Throughout the book, the authors provide considerable background and definitions; I think that this is helpful, as it provides both the uninitiated reader, as well as the more experienced (in the subject matter being addressed) with a common, level playing field. Through this development of background and supporting definitions, the reader should easily see where things such as insider threats come from, for example. In chapter 6, the authors spend considerable time explaining different avenues for gathering information and developing intelligence. At one point, the issue of "trust" is brought up; wouldn't it be easy for an operative (in search of, say, corporate intelligence) to single out a disgruntled employee and earn their "trust"?
This is not a technical book, but it's definitely something that will get you to think about what's really going on in the world around you. This should apply to the CIO, CISO, IT Director, even to the IT admin who's wondering if they've been "hacked". Books that provide solutions are good, but so are books that challenge your thinking and (as the authors describe in the MOSAIC process) base assumptions about your surroundings.
Thoughts
What I really liked about the book, in addition to what the authors presented, were the thoughts that reading the material generated. The following are thoughts that I had based on my reading, and viewing that material through the lens of my own experience, and are not things that you'll necessarily find stated specifically in the book.
What is deemed "adequate and reasonable" security is often decided by those with budgeting concerns/constraints, but with little understanding of the risk or the threat.
Compliance comes down to the auditor versus the attacker, with the target infrastructure as the stage. The attacker is not constrained by a specific compliance "standard"; in fact, the attacker may actually use that "standard" and compliance to it against the infrastructure itself.
Auditors are often not technical, and they do not see across the various domains of the "standard" to which they are auditing, and are not able to bring other factors into consideration (i.e., corporate culture, economics, business models, etc.). Auditing is a point-in-time assessment and usually based on a checklist of some kind; do you have a firewall, yes/no; do you have IDS/IPS, yes/no. While requiring an organization to meet a compliance standard will likely raise their level of "security", it's often a small step that's coming too late. "Compliance" is more of a band-aid, and an attempt to modify the corporate culture to take the threats seriously.
Good IR Work
Mark Russinovich recently posted The Case of the Malicious Autostart to his blog. I have to say, I think we are all very fortunate that Mark decided to post this; not only does it provide a very good demonstration of the use of the tools that Mark has written and made available, but it also demonstrates what others within the community are seeing. Chris Pogue recently did something similar with his Webcheck.dll post to the Spiderlabs Anterior blog, and it's good to see these kinds of things posted publicly.
Mark's post provides some really good information about what was found during a support call, and the tools and techniques used to find it, as well as to dig deeper. One thing that's interesting to point out is that the infection of the system may have included subversion of Windows File Protection (not that that's not trivial...), as it's mentioned that the user32.dll files in the system32 and dllcache directories were modified.
Posts like this give the rest of us an opportunity to see what others are facing and how they're addressing those challenges. Being the tech support in my household, I'm somewhat familiar with these tools and their use, but I can't say that I've seen something like this. What I like to do is see how this methodology fits into my own processes.
In the comments to the post, a user ("Mihailik") asks about determining the infection vector, to which Mark responds:
Unfortunately, that's a question just about anyone fighting a new malware infection will have a near impossible time of determining. Unless you actually see the infection as it takes place, you can't know - it could have been someone executing a malicious email attachment, opening an infected document, or via a network-spreading worm.
I would suggest that by using timeline analysis, many of us have been able to determine infection vectors. I know that folks using timelines have nailed down the original infection vector in some cases to phishing emails, attachments, browser drive-bys, etc. The timeline may give an indication of where you should look, and examination of the actual files (PDF or Word document, Java .jar file, etc.) will illuminate the issue further. Determining the infection vector may not have been something that could be easily done on this system, during this support engagement, but for more IR-specific engagements, this is often a question that analysts are asked to address.
Thursday, February 24, 2011
Webcheck.dll
I saw via Twitter today that a new post had gone up on the TrustWave SpiderLabs Anterior blog, regarding some malware that the TW folks (by that, I mean Chris Pogue) had detected during some engagements.
I think it's great when analysts and organizations share this kind of information, so that the rest of us can see what others are seeing. So, a big thanks goes out to TrustWave, and the next time you see Chris at a conference, be sure to say hi and buy him a beer...or better yet, treat him some bread pudding!
What I'd like to do is take a moment to go through the post and discuss some things that might add something of a different perspective or view to the issue.
As you can see from the post, Chris uses timeline analysis to locate the malware in question, and he's got some really good information in the post about creating the timeline for analysis (Chris uses the log2timeline tools). I'm sure that there's quite a bit about the engagement and the analysis that weren't mentioned in the post, as Chris jumps right to the target date within his timeline and locates the malware.
I like the fact that Chris uses multiple analysis techniques to corroborate and check his findings. For example, in the post, Chris mentions looking at the file's $STANDARD_INFORMATION and $FILE_NAME attributes in the MFT, and confirming that there were no indications of "time stomping" going on. This is a great example that demonstrates that anti-forensics techniques target the analyst and their training, and that a knowledgeable analyst isn't slowed down by these techniques. I think that the post also demonstrates how timelines can be used to add context to what you're looking at, as well as increase the level of confidence that the analyst has in that data.
One of the things that kind of struck me as odd in the post is that there's mention of the "regedit/1" entry in the RunMRU key, and then the post jumps right to discussing the InProcServer32 key, based on the timeline. The RunMRU information (ie, key LastWrite time) is from a user's hive, so another key of interest to check might be the following:
Software\Microsoft\Windows\CurrentVersion\Applets\Regedit
As you'd think, this key contains information about the key that had focus when the user closed RegEdit. Specifically, the LastKey value (mentioned in MS KB 244004) contains the name of that key. This value might be used to make a bit of a transition between the RunMRU data and the changes to the InProcServer32 key that's mentioned in the post, and possibly provide insight into how the malware was actually deployed on the system.
As Chris points out in the post, the value type being changed from "REG_SZ" (string value) to "REG_EXPAND_SZ" does allow for the use of unexpanded references to environment variables, such as %SystemRoot%. One statement that I don't really follow is:
So now the threading for webcheck.dll is no longer pointing to the legitimate file, but to the malware!
The threading model listed doesn't have anything to do with the path...I'm going to have to reach out to Chris and find out what he was referring to in that statement. He follows that up with this statement later in the post:
So not only did the attackers use a legitimate threading, but they made sure to use a shell extension that was trusted by Windows.
Again, I'm not clear on the "threading" part of that statement, but Chris is quite correct about the shell extension issue. Basically, the Windows shell (Explorer.exe), which is launched when a user logs into the system, will load the approved shell extensions, which includes this particular malware. Trust seems to be implicit, as there are no checks run when Explorer goes to load a shell extension DLL. This is a bit different from the shell extension issue I'd mentioned last August, in part because it doesn't use the DLL Search Order issue. Instead, it simply points Explorer directly to the malware through the use of an explicit path.
All in all, I'm glad to see Chris and the TrustWave folks sharing this kind of thing with the community. I do think that there's more that isn't being said, like how the malware actually got on the system (ie, how it was deployed), but hey, we all know that there are some things that can't be said about engagements. And that's okay.
Tuesday, February 22, 2011
Links
WFA Translations Thanks to the great folks at Elsevier and BJ Public, Windows Forensic Analysis 2/e is now available in Korean! Very cool! A copy of this goes on my shelf right next to the French translation, as well as the Chinese translation of WFA 1/e. I have to say, this is very, very cool...to look up at my bookshelf and see that someone thought enough of something I wrote to translate it. I have no idea what it says, but it's cool, just the same!
Security Ripcord
After a lengthy absence, Don is finally back blogging again...and he returns with a bang! Don's latest post, It will never be too expensive, takes a shot at the statement made by security "professionals" and "experts" to their customers for years...that one should employ/deploy enough "security" to make it too expensive for an attacker to get in, and they'll go away. It's pretty clear from Don's comments and reasoning that this is a statement that has been made over and over for years, without folks really thinking about what's being said, and how the threat itself has evolved. This statement may have been true in the early days of the Internet, but the Internet and cybercrime have changed and developed...but apparently, the thinking behind the statement hasn't. Don does a great job of pointing this out, and with all the talk about advanced persistent and subversive multi-vector threats, doesn't it stand to reason that for some folks, for the really dedicated folks, they're just gonna laugh at your expensive security as they blow right on through it? Much like he did when he was in the Corps, Don takes a shot at this statement and hits it squarely, center of mass.
Sniper Forensics
Speaking of snipers, Part 3 of Chris's Sniper Forensics series has been posted to the SpiderLabs Anterior blog, be sure to check it out. I think that one of the biggest things that Chris points out in this segment is defining the target...with respect to IR, why are you there? What are you there to do? When it comes to digital forensic analysis, the same thing applies; if someone were to ship me a drive (or image), with clearly defined goals of what they were looking for, I might usually estimate an analysis time of 24 hrs or less. However, if you were to ship me a drive and say, "...find all bad stuff...", I could spend several weeks looking and never find what you define as "bad"...in part, because you never defined it. I've seen analysts leave site with a drive, having been told to find "bad stuff"; some initial searching found clear indications of the existence and use of "hacker" tools. However, upon reporting that to the customer, the analyst was told that the former employee's job was to hack stuff, so the existence of hacking tools wasn't "bad". Hey, who knew.
Defining the goals of an engagement, and the parameters for success, is critical to that engagement. Having a contract that just says "do analysis" without any defined goals of that analysis leaves the analyst or the team with...what? Once the hours in the contract have been consumed, a report is delivered to the customer, who then says, "no, this isn't what we wanted."
This isn't just an important point for consultants, it's equally important (or perhaps even more so) for the customer. If you're seeking IR or DF services, ensure that you clearly define what you're looking for, and ensure that this is stated clearly in the contract before you sign it.
MFT Slack
Lance posted an Enscript recently that was written to extract the slack space from MFT records. This isn't something that I've had a need to do before, but it is interesting and worth taking a look at. I'm not entirely sure what context you can get from items found in MFT slack, as MFT records are 1024 bytes long; however, Lance thought enough of this to write an Enscript for it, so there must be something there!
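I can't speak to the Enscript itself, but the underlying idea is easy to sketch. Assuming a raw 1024-byte FILE record, the slack is whatever lies past the record's "used" length; here's an illustrative Python approximation (real records also contain an update sequence/fixup array, which this sketch ignores):

```python
import struct

MFT_RECORD_SIZE = 1024

def mft_slack(record):
    """Return the slack bytes of a single 1024-byte MFT FILE record:
    everything past the record's 'used' length. The used size is a
    32-bit little-endian value at offset 0x18 of the record header."""
    if record[0:4] != b"FILE":
        return b""  # not a valid FILE record signature
    used = struct.unpack_from("<I", record, 0x18)[0]
    if used > MFT_RECORD_SIZE:
        return b""  # corrupt header; nothing sensible to return
    return record[used:MFT_RECORD_SIZE]
```

Running this over each 1024-byte chunk of an extracted $MFT would collect the slack regions, which could then be scanned for strings, resident data remnants, etc.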
On that note, I recently updated the online repository for the Windows Registry Forensics tools, to include Jolanta's regslack tool for locating deleted keys and values in hive files. For usage and examples of how this tool has been used, check out the book.
Thursday, February 17, 2011
Links
WRF Book Reviews
Thanks to everyone who's purchased a copy of Windows Registry Forensics, and in particular to those who've taken the opportunity to post their thoughts, or more formally write a review. Paul Sanderson (@sandersonforens on Twitter), from across the pond, posted a review of the book to his blog recently. Paul has the Kindle version of the book, as that's all that's available over there at the moment.
Speaking of the book, it seems to be doing well, if the Amazon ranking is any indicator. There are a couple of things to point out about this book that might be helpful in understanding why it could be useful for you.
First, this is the first book of its kind. Ever.
Seriously...there are no books out there that come at the topic from the perspective of a forensic analyst. There are a number of books that include discussion of some Registry keys that are of use to an analyst, but none that discusses the Registry at the depth and breadth of WRF. My primary motivation for writing the book was to fill that gap, because I really feel that the Registry is a critical and too-often-overlooked source of forensic artifacts on a system.
Second, the book makes use of primarily free and open-source tools. This can be very important to smaller organizations, such as LE, professors and students at local community colleges, and even just to those learning or looking to develop new skills, as it puts the described capability within reach. I know that not everyone can (or wants to) learn Perl, but the tools are there...and to make it even easier, many of the tools I've written have been provided as standalone EXEs (compiled with Perl2Exe).
SPoE
I was bopping around the TwitterVerse recently and ran across a link to Mike Murr's blog, and from there found a two year old post on the Single Piece of Evidence (SPoE) myth. Mike's always been good for some well-thought-out opinions and comments, and this one is no different. What he mentions with regards to anti-forensics techniques is similar to the reactions surrounding the release of XP and then Windows 7..."OMG! What is this going to do to my investigations!" The fact is that, while each brings its own nuances and inherent anti-forensics, each also brings with it a whole bunch of new artifacts. As Mike points out, when a user interacts with a Windows system, there are a lot of artifacts...and the same is true, albeit to a somewhat lesser degree, for malware, as well.
Hacking
If you've been watching the news lately, you're probably aware that no one seems to be immune to being compromised. Even small boroughs in PA are susceptible. The linked article doesn't have a great deal of supporting information (no mention of the use of online banking, whether the bank saw a legit login from a different IP address, what the independent consultant determined, etc.). In the past, a lot of these types of issues have been attributed to the Zeus bot, and in some cases, there haven't been any indications of Zeus on what were thought to be the affected systems.
Mayor Mowen was just being a mayor when he said, "You guard against it by increasing your firewall, which is what we are in the process of doing."
Analysis
F-Secure recently posted this analysis of an MBR infector. The analysis includes a good number of hex views and listings of code in assembly language and lots of detail. It would be a good idea to read through this carefully and take note of what's said. For example, this baddy infects the MBR, and appears to make a copy (like Mebroot, here, and here) of the original MBR, placing it in sector 5.
The analysis includes the following:
Why an MBR File System Infector? Probably because it can bypass Windows File Protection (WFP). As WFP is running in protected mode, any WFP-protected file will be restored immediately if the file is replaced.
I'm not entirely sure that this is a good line of reasoning in and of itself, in part because the bad guys have done a good job at shutting WFP off for a minute, and installing their malware. WFP is not a polling service, so when it wakes back up, it has no way of knowing that a protected file was modified or replaced. However, it appears from the write-up that the malware infects userinit.exe at startup, possibly prior to WFP starting up, which may be the entire reason for infecting the MBR first.
I'm curious as to which version of Windows this malware was executed on, as the version of userinit.exe on my XP system, as well as a Windows XP VM I have, is 26Kb in size, slightly larger than what's shown in the write-up. However, the write-up does provide a couple of good tips that an analyst can use to detect the presence of this malware.
If you're a responder or forensic analyst, and this one is tough for you to read and follow, keep something in mind that Cory posted to Twitter today (17 Feb), which included (in part), "...you shouldn't rely on malware fetishists for incident response advice." Excellent point.
FakeAV
Another interesting analysis popped up...thanks to Ken for posting his analysis of a FakeAV bit of malware. In his write-up, Ken does a pretty thorough job of documenting what he did, how he did it, and what he found. For example, one of the Registry keys that was modified is:
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Associations
According to Ken, the "LowRiskFileTypes" value was added with the data ".exe". This is interesting, but how does it apply to the malware infection? MS KB article 883260 provides some hints to this. This same KB article provides a hint about why the following key might also be updated:
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments
In Ken's write-up, this key had the "SaveZoneInformation" value added, with "1" as the data. Remember how back with Windows XP SP2, files downloaded via Outlook or IE had a "zoneid" Alternate Data Stream (ADS) attached to the file (this is described on pg. 314 of WFA 2/e)? When analyzing a system with ADS on it, it's usually pretty easy to see them at a glance, as some of the commercial forensic analysis applications list them in red. However, this setting appears to tell the system to not create the ADS.
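For reference, the Zone.Identifier stream itself is just a small INI-style fragment (typically a "[ZoneTransfer]" section with a "ZoneId" entry; 3 is the Internet zone). A minimal, illustrative Python parse of a stream's contents might look like:

```python
import configparser

def zone_id(stream_text):
    """Parse the text of a Zone.Identifier alternate data stream and
    return the ZoneId as an int, or None if it can't be determined.
    ZoneId 3 corresponds to the Internet zone."""
    cp = configparser.ConfigParser()
    try:
        cp.read_string(stream_text)
        return cp.getint("ZoneTransfer", "ZoneId")
    except (configparser.Error, ValueError):
        return None
```

On a live Windows system, the stream contents would come from opening "somefile.exe:Zone.Identifier"; in an image, from the ADS as carved or exported by your forensic tool.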
Finally, Ken mentions that there was a change to the following key:
HKCU\Software\Microsoft\Internet Explorer\Download
In this case, the CheckExeSignatures value was changed from 'yes' to 'no'. TechNet has some information about the value (maybe...the path is a little different,using "Main" in the key path, rather than "Download") and what effect it may have on the system. Other malware seems to modify the same set of keys, as seen here, and with this FakeAV write-up from Sophos.
There are a couple of interesting take-aways from this write-up, I think. First of all, it's great that Ken took the time not only to run this analysis, but to share it with others. I think one of the biggest mistakes analysts make when it comes to sharing and collaboration is assuming that everyone else has already seen everything. I tried to shoot this one down at the WACCI conference last year, where I actually met Ken. I don't see many FakeAV issues at all, to be honest.
Another take-away is that publicly available information provided by the vendor is utilized to take some steps towards not only allowing the malware to run, but also towards anti-forensics. Think about it...by telling the system to not create the ADS for downloaded files, that removes one of the indicators that analysts could use to identify the malware's propagation mechanism.
Finally, this clearly demonstrates that there are steps analysts can use to detect malware on a system beyond running an AV scanner. For example, there are number of Registry keys that Ken identified, as well as what's listed in other write-ups, that could be used to detect the potential presence of malware, even if the malware itself is not detected by AV. The rather odd value in the user's Run key is something of a give-away, but RegRipper already has a plugin that dumps the contents of that key. Now we have other keys that can be used with RegRipper or a forensic scanner to comb through the appropriate hives and check for the possible existence of malware.
IR
Last, but certainly not least, Chris posted today regarding IR activities; in particular, using a batch file to dump Registry hives from a live system for analysis. Chris actually posted the batch file he used...take a look. This one shows that Chris's command line kung fu is very good!
Thanks to everyone who's purchased a copy of Windows Registry Forensics, and in particular to those who've taken the opportunity to post their thoughts, or more formally write a review. Paul Sanderson (@sandersonforens on Twitter), from across the pond, posted a review of the book to his blog recently. Paul has the Kindle version of the book, as that's all that's available over there at the moment.
Speaking of the book, it seems to be doing well, if the Amazon ranking is any indicator. There are a couple of things to point out about this book that may help explain why it might be useful for you.
First, this is the first book of its kind. Ever.
Seriously...there are no books out there that come at the topic from the perspective of a forensic analyst. There are a number of books that include discussion of some Registry keys that are of use to an analyst, but none that discusses the Registry at the depth and breadth of WRF. My primary motivation for writing the book was to fill that gap, because I really feel that the Registry is a critical and too-often-overlooked source of forensic artifacts on a system.
Second, the book makes use of primarily free and open-source tools. This can be very important to smaller organizations, such as LE, professors and students at local community colleges, and even just to those learning or looking to develop new skills, as it puts the described capability within reach. I know that not everyone can (or wants to) learn Perl, but the tools are there...and to make it even easier, many of the tools I've written have been provided as standalone EXEs (compiled with Perl2Exe).
SPoE
I was bopping around the TwitterVerse recently and ran across a link to Mike Murr's blog, and from there found a two-year-old post on the Single Piece of Evidence (SPoE) myth. Mike's always been good for some well-thought-out opinions and comments, and this one is no different. What he mentions with regards to anti-forensics techniques is similar to the reactions surrounding the release of XP and then Windows 7..."OMG! What is this going to do to my investigations!" The fact is that, while each brings its own nuances and inherent anti-forensics, each also brings with it a whole bunch of new artifacts. As Mike points out, when a user interacts with a Windows system, there are a lot of artifacts...and the same is true, albeit to a somewhat lesser degree, for malware, as well.
Hacking
If you've been watching the news lately, you're probably aware that no one seems to be immune to being compromised. Even small boroughs in PA are susceptible. The linked article doesn't have a great deal of supporting information (no mention of the use of online banking, whether the bank saw a legit login from a different IP address, what the independent consultant determined, etc.). In the past, a lot of these types of issues have been attributed to the Zeus bot, and in some cases, there haven't been any indications of Zeus on what were thought to be the affected systems.
Mayor Mowen was just being a mayor when he said, "You guard against it by increasing your firewall, which is what we are in the process of doing."
I bring this up because there's been traffic on Twitter recently with respect to the RSA conference, and the definition of "cyberwar", or more specifically, just "war". I'd suggest that any action that forces your adversary to turn away from his main objective (attacking, etc.) and focus resources elsewhere, even if this tactic is used purely for an attrition effect, should be considered part of "war", declared or otherwise.
Analysis
F-Secure recently posted this analysis of an MBR infector. The analysis includes a good number of hex views, listings of assembly code, and lots of detail. It would be a good idea to read through this carefully and take note of what's said. For example, this baddy infects the MBR, and appears to make a copy (like Mebroot, here, and here) of the original MBR, placing it in sector 5.
The analysis includes the following:
Why an MBR File System Infector? Probably because it can bypass Windows File Protection (WFP). As WFP is running in protected mode, any WFP-protected file will be restored immediately if the file is replaced.
I'm not entirely sure that this is a good line of reasoning in and of itself, in part because the bad guys have gotten good at shutting WFP off long enough to install their malware. WFP is not a polling service, so when it wakes back up, it has no way of knowing that a protected file was modified or replaced. However, it appears from the write-up that the malware infects userinit.exe at startup, possibly prior to WFP starting up, which may be the entire reason for infecting the MBR first.
I'm curious as to which version of Windows this malware was executed on, as the version of userinit.exe on my XP system, as well as a Windows XP VM I have, is 26Kb in size, slightly larger than what's shown in the write-up. However, the write-up does provide a couple of good tips that an analyst can use to detect the presence of this malware.
If you're a responder or forensic analyst, and this one is tough for you to read and follow, keep something in mind that Cory posted to Twitter today (17 Feb), which included (in part), "...you shouldn't rely on malware fetishists for incident response advice." Excellent point.
FakeAV
Another interesting analysis popped up...thanks to Ken for posting his analysis of a FakeAV bit of malware. In his write-up, Ken does a pretty thorough job of documenting what he did, how he did it, and what he found. For example, one of the Registry keys that was modified is:
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Associations
According to Ken, the "LowRiskFileTypes" value was added with the data ".exe". This is interesting, but how does it apply to the malware infection? MS KB article 883260 provides some hints about this. This same KB article provides a hint about why the following key might also be updated:
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments
In Ken's write-up, this key had the "SaveZoneInformation" value added, with "1" as the data. Remember how back with Windows XP SP2, files downloaded via Outlook or IE had a "zoneid" Alternate Data Stream (ADS) attached to the file (this is described on pg. 314 of WFA 2/e)? When analyzing a system with ADS on it, it's usually pretty easy to see them at a glance, as some of the commercial forensic analysis applications list them in red. However, this setting appears to tell the system to not create the ADS.
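For reference, the Zone.Identifier stream itself is just a tiny INI-style blob. Here's a minimal Python sketch of parsing one (the sample contents reflect the XP SP2-era format; zone 3 is the Internet zone):

```python
# Minimal sketch: parse the contents of a "Zone.Identifier" ADS.
# The stream is a small INI-style blob; a ZoneId of 3 means the
# file came from the Internet zone.

ZONE_NAMES = {
    0: "Local Machine",
    1: "Local Intranet",
    2: "Trusted Sites",
    3: "Internet",
    4: "Restricted Sites",
}

def parse_zone_identifier(data):
    """Return the numeric ZoneId from a Zone.Identifier stream, or None."""
    for line in data.splitlines():
        line = line.strip()
        if line.lower().startswith("zoneid="):
            try:
                return int(line.split("=", 1)[1])
            except ValueError:
                return None
    return None
```

On a live NTFS system, the stream can typically be read by opening the file as "somefile.exe:Zone.Identifier"; the point here is simply that when SaveZoneInformation is set, this blob never gets written in the first place.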
Finally, Ken mentions that there was a change to the following key:
HKCU\Software\Microsoft\Internet Explorer\Download
In this case, the CheckExeSignatures value was changed from 'yes' to 'no'. TechNet has some information about the value (maybe...the path is a little different, using "Main" in the key path rather than "Download") and what effect it may have on the system. Other malware seems to modify the same set of keys, as seen here, and with this FakeAV write-up from Sophos.
There are a couple of interesting take-aways from this write-up, I think. First of all, it's great that Ken took the time not only to run this analysis, but to share it with others. I think one of the biggest mistakes analysts make when it comes to sharing and collaboration is assuming that everyone else has already seen everything. I tried to shoot this one down at the WACCI conference last year, where I actually met Ken. I don't see many FakeAV issues at all, to be honest.
Another take-away is that publicly available information provided by the vendor is utilized to take some steps towards not only allowing the malware to run, but also towards anti-forensics. Think about it...by telling the system to not create the ADS for downloaded files, that removes one of the indicators that analysts could use to identify the malware's propagation mechanism.
Finally, this clearly demonstrates that there are steps analysts can use to detect malware on a system beyond running an AV scanner. For example, there are a number of Registry keys that Ken identified, as well as what's listed in other write-ups, that could be used to detect the potential presence of malware, even if the malware itself is not detected by AV. The rather odd value in the user's Run key is something of a give-away, but RegRipper already has a plugin that dumps the contents of that key. Now we have other keys that can be used with RegRipper or a forensic scanner to comb through the appropriate hives and check for the possible existence of malware.
IR
Last, but certainly not least, Chris posted today regarding IR activities; in particular, using a batch file to dump Registry hives from a live system for analysis. Chris actually posted the batch file he used...take a look. This one shows that Chris's command line kung fu is very good!
Monday, February 14, 2011
Links, Tools and Stuff
PDF Stream Dumper
From over at RE Corner comes the PDF Stream Dumper tool; actually, this one has been out for some time now. This tool was written in VB6, and comes with a number of automation scripts. Swing on by Lenny's blog for some great examples of how to use it, or check out this KernelMode page for some other examples of the dumper being used.
If you're not too put off by CLI tools, you might consider using this in conjunction with Didier's PDF tools. Didier's stuff is also in use by VirusTotal. That's not to say that one's better to use than the other...it's good to have both available.
While we're on the subject of document metadata, it's a good idea to mention that Kristinn Gudjonsson, creator of log2timeline, also created the read_open_xml.pl Perl script for extracting metadata from MS Word 2007 documents (use and output described at the SANS Forensic Blog).
TechRadar
There's an interesting article up on TechRadar about how to perform a forensic PC investigation, and it references using OSForensics, available from PassMark Software. I have to say, I'm a bit concerned about articles like this, even when they suggest early in the article that performing the actions described in the article can be "a little morally dubious".
The beta of OSForensics was recently made available for a limited time, for free. However, that offer was originally made as "LE only", but seems to have changed recently.
OSForensics
It looks like the folks at PassMark Software removed the LE-only restriction for downloading the OSForensics beta, so I downloaded the 32-bit version to my XP system this morning.
After installing OSForensics and looking around (noticed the nice icons and graphics), I created a new case, and then began looking for a way to load a test image into the tool. I didn't have much luck, so I went immediately to the Help, which is provided online, in HTML format. I went through the index and found the word "Image", and from there found this:
In many cases it may be desirable to work with data from a disk image rather than the physical disk itself. Whilst OSForensics does not deal with disk images directly itself Passmark provides a set of free external tools in order to support working with disk images.
So, it appears that OSForensics is not intended for dead-box/post-mortem analysis. Some of the available tools, such as System Information and Memory Viewer, pertain to the system on which OSForensics is running. PassMark does offer the OSMount program, which allows you to mount a raw/dd image as a drive letter, and from there you can use OSForensics in the intended fashion. As such, I'd guess that there'd be no issues using any of the various other mounting techniques and tools, including accessing VSCs.
Of all of the functionality, the one that really jumps out is the hash set comparison tools. PassMark provides a number of hash sets for known-good OS files at their download site; however, as with any similar functionality based on hash sets, I can easily see how this can become cumbersome very quickly. You either scan for all of the hashes, or you run into issues with analysts deciding which hash sets to run, and (more importantly) documenting those that they do run.
OSForensics also provides string and file name search functionality, logging of activity, and the ability to install OSForensics to a USB drive. I'm sure that this tool will be useful to examiners; for my own uses, however, it simply does not provide enough of the core functionality that I tend to use during my examinations. As a test, I mounted a test image as a read-only F:\ drive and opened OSForensics, and I have to say, moving through the interface wasn't the most intuitive, or easy to use. However, I may be somewhat biased, given my experience and usual work processes.
No Alternative
Eric's got a rather insightful post over at the AFoD blog. More and more folks are getting into the cell phone and smart phone market, and those little buggers are really very powerful when you take a look at them. They also tend to contain more and more storage space. Of course, we need to keep in mind that the tablet market is still there in that space between the smart phone and the laptop, as well.
I can see where Eric's going with the post, but I have to say from the private/corporate perspective, this isn't such a huge issue. I would expect that if it ever does become an issue, it'll be an emergency (for legal/compliance purposes) and a one-off, not something that gets done on a regular basis, with the cost of applications and training being amortized across multiple customers. However, from a public perspective, I can definitely see how this is going to be more and more of an issue...after all, how "gangsta" can you really be lugging around a Dell Latitude laptop?
Reading/Education
There are some great new resources over at the e-Evidence site, including stuff about MacOSX artifacts, iPhone and smart devices, Windows artifacts, etc. This site is always a great place to go and find lots of new and interesting stuff.
Network and Wireless
A question popped up on a list this morning regarding wireless assessments and tools. The original question asked about an alternative to NetStumbler that supports a specific NIC, and the first response was for ViStumbler. ViStumbler is open-source and was originally written for Vista, but apparently runs on Windows 7, as well.
If you're doing any network forensics, you might also consider NetworkMiner as a viable resource, and something to add to your toolkit right alongside Wireshark.
Tool Sites
ForensicCtrl had a listing of free computer forensics tools available.
List of Windows open source tools
Check out the Collaborative RCE Tools library for a wide range of tools.
Tuesday, February 08, 2011
Carving
I was looking at a Windows 2003 system, and found that I was somewhat short on Event Log entries, with respect to the incident window. As I looked and used my evtrpt.pl Perl script to get statistics on the Sec, Sys, and App Event Logs, I noticed that the Sec and Sys Event Logs only contained a few days of event records. The Application Event Log actually went back a while past the incident window. I looked a bit closer at the output of evtrpt.pl and noticed that the Security Event Log had an event ID 517 record, indicating that the Event Log had been cleared.
So the first thing I did was run TSK blkls against the image to extract the unallocated space from the image file. I then ran MS's strings.exe (with the "-o" and "-n 4" switches), and then had two files to work with...the unallocated space, and a list of strings found in unallocated space, along with the offset of each string. So I then wrote a Perl script that would go through the strings output and find each line that contained "LfLe", the "magic number" for Windows 2000/XP/2003 event records.
With this list, the script would then run through the unallocated space by first going to the offset of the "LfLe" string, and backing up 4 bytes (DWORD). According to the well-documented event record structure, this DWORD should be the size of the record. As values can vary, and there is no one specific value that is correct, the way to check for a valid event record is to advance through unallocated space for the length provided by the DWORD, and the last DWORD in this blob should be the same as the size of the record. For example, if the initial size DWORD is 124 bytes, you should be able to advance through the file 120 bytes, and the next DWORD should also be 124.
Using this approach, I was able to extract over 330 deleted event records. I've used similar techniques in the past, to extract 100 bytes on either side of a keyword from the pagefile. This is an excellent way to gather additional information that you wouldn't normally be able to 'see' through most tools, as well as to look for and carve well-defined structures from unstructured data.
Tools and Stuff
RegRipper
Brett Shavers, who maintains the RegRipper site, has compiled an archive of new plugins and posted them for download. Brett's done a fantastic service for the DF community, in not only setting up the site for RegRipper, but maintaining it, and posting this archive of plugins. A huge thanks to Brett...and if you see him at a conference, be sure to buy him a beer!
As a side note, along with the release of Windows Registry Forensics, I had posted the DVD contents here, as well. The archive contains what's on the DVD, so while you can get it, it's really most helpful when used in conjunction with the book.
El Jefe
Over at the HolisticInfoSec blog, Russ shared a little El Jefe love recently. Russ says that El Jefe is a Windows-based process monitoring tool that "intercepts native Windows API process creation calls, allowing you to track, monitor, and correlate process creation events. " Very cool. The tool is in version 1.1 and is available from the good folks at Immunity, and runs on Windows 2000/XP through Windows 7, reportedly in both 32- and 64-bit versions. This looks like a great tool not only for dynamic malware analysis, but perhaps also for incident preparation. I mean, wouldn't you like to know what ran on a system?
Anti-Rootkit
I haven't been doing a lot of live box forensics/IR work, but I ran across the Tuluka kernel inspector recently, and it caught my eye. If you've read my books, you know that I've used GMER in the past. I can't say that I've really had issues with rootkits, and many times I just get to do "dead box" forensics, but this looks like another tool that folks may find useful.
NetworkMiner
Erik sent out an email recently to say that NetworkMiner had gone to version 1.0. Congrats to Erik and all the folks who've worked on or used NetworkMiner! NM is an excellent complement to other network data analysis tools such as Wireshark. Per Erik, some of the new features include:
Here are some new features in NetworkMiner since the previous version:
* Support for Per-Packet Information header (WTAP_ENCAP_PPI) as used by Kismet and sometimes Wireshark WiFi sniffing.
* Extraction of Facebook as well as Twitter messages into the message tab. Added support to extract emails sent with Microsoft Hotmail (I.e. Windows Live) into Messages tab.
* Extraction of twitter passwords from when settings are changed. Facebook user account names are also extracted (but not Facebook passwords).
* Extraction of gmailchat parameter from cookies in order to identify users through their Google account logins.
* Protocol parser for Syslog. Syslog messages are displayed on the Parameter tab.
Pretty cool stuff! Check it out, and be sure to check out the NM Wiki if you have any questions! Along with tools like Wireshark and NetWitness Investigator, NetworkMiner can be extremely useful for IR from a network perspective.
EvtxParser
Andreas has released v1.0.7 of his EvtxParser, a Perl-based approach for parsing Vista and Windows 7 Windows Event Log/EVTX files.
Highlighter
Mandiant has released a new version of Highlighter. Not much else to say, really...if you use this tool, take a look at the updates. I know several folks who find Highlighter to be very useful.
PointSec
More of a process than a tool, the folks over at Digital Forensic Solutions have posted to their blog about how to go about examining PointSec-encrypted drives. I can't say that I've had issues with encrypted drives...I've either had the admin boot the system and we'd image it live, or I acquired images of the drives with the customer knowing full well that the images would be encrypted (imaging job, no analysis). However, DFS's post provides some great information.
Java
Also not a tool, but really kind of cool...Corey's written up a nice post about some analysis he did that involved looking into the Java cache folder. Corey walks through identification of the issue, going so far as to demonstrate decompiling a Java .jar file. What I really like about Corey's posts is how complete they are, without giving away any case specific information. This isn't something that you see very often in the IR/DF community...but Corey clearly demonstrates how easy it is to do this and provide a valuable teaching moment. Great job, Corey...thanks!
Brett Shavers, who maintains the RegRipper site, has compiled an archive of new plugins and posted them for download. Brett's done a fantastic service for the DF community, in not only setting up the site for RegRipper, but maintaining it, and posting this archive of plugins. A huge thanks to Brett...and if you see him at a conference, be sure to buy him a beer!
As a side note, along with the release of Windows Registry Forensics, I had posted the DVD contents here, as well. The archive contains what's on the DVD, so while you can get it, it's really most helpful when used in conjunction with the book.
El Jefe
Over at the HolisticInfoSec blog, Russ shared a little El Jefe love recently. Russ says that El Jefe is a Windows-based process monitoring tool that "intercepts native Windows API process creation calls, allowing you to track, monitor, and correlate process creation events. " Very cool. The tool is in version 1.1 and is available from the good folks at Immunity, and runs on Windows 2000/XP through Windows 7, reportedly in both 32- and 64-bit versions. This looks like a great tool not only for dynamic malware analysis, but perhaps also for incident preparation. I mean, wouldn't you like to know what ran on a system?
Anti-Rootkit
I haven't been doing a lot of live box forensics/IR work, but I ran across the Tuluka kernel inspector recently, and it caught my eye. If you've read my books, you know that I've used GMER in the past. I can't say that I've really had issues with rootkits, and many times I just get to do "dead box" forensics, but this looks like another tool that folks may find useful.
NetworkMiner
Erik sent out an email recently to say that NetworkMiner had gone to version 1.0. Congrats to Erik and all the folks who've worked on or used NetworkMiner! NM is an excellent complement to other network data analysis tools, such as Wireshark. Per Erik, some of the new features include:
Here are some new features in NetworkMiner since the previous version:
* Support for Per-Packet Information header (WTAP_ENCAP_PPI) as used by Kismet and sometimes Wireshark WiFi sniffing.
* Extraction of Facebook as well as Twitter messages into the Messages tab. Added support to extract emails sent with Microsoft Hotmail (i.e., Windows Live) into the Messages tab.
* Extraction of Twitter passwords when settings are changed. Facebook user account names are also extracted (but not Facebook passwords).
* Extraction of gmailchat parameter from cookies in order to identify users through their Google account logins.
* Protocol parser for Syslog. Syslog messages are displayed on the Parameter tab.
Pretty cool stuff! Check it out, and be sure to check out the NM Wiki if you have any questions! Along with tools like Wireshark and NetWitness Investigator, NetworkMiner can be extremely useful for IR from a network perspective.
EvtxParser
Andreas has released v1.0.7 of his EvtxParser, a Perl-based approach for parsing Vista and Windows 7 Windows Event Log/EVTX files.
Highlighter
Mandiant has released a new version of Highlighter. Not much else to say, really...if you use this tool, take a look at the updates. I know several folks who find Highlighter to be very useful.
PointSec
More of a process than a tool, the folks over at Digital Forensic Solutions have posted to their blog about how to go about examining PointSec-encrypted drives. I can't say that I've had issues with encrypted drives...I've either had the admin boot the system and we'd image it live, or I acquired images of the drives with the customer knowing full well that the images would be encrypted (imaging job, no analysis). However, DFS's post provides some great information.
Java
Also not a tool, but really kind of cool...Corey's written up a nice post about some analysis he did that involved looking into the Java cache folder. Corey walks through identification of the issue, going so far as to demonstrate decompiling a Java .jar file. What I really like about Corey's posts is how complete they are, without giving away any case specific information. This isn't something that you see very often in the IR/DF community...but Corey clearly demonstrates how easy it is to do this and provide a valuable teaching moment. Great job, Corey...thanks!
Thursday, January 27, 2011
WRF book available!!
It seems that the Windows Registry Forensics book is available, as it was shipped to the DoD CyberCrime Conference. I'm looking forward to getting my copy!
If you have the Kindle edition of this book, and want the DVD contents, go here. Also, I've added a Books page to the blog, so check there in the future.
Addendum: Reviews!
Brad and Dave have been nice enough to post reviews of the book thus far! Thanks so much, guys...your efforts are greatly appreciated!
Now, they're both up on the Amazon page for the book, as well...
Speaking of reviews, but specific to WFA 2/e, Eric Huber posted this So You'd Like to...Learn Digital Forensics page on Amazon. In it, he says:
Harlan Carvey's Windows Forensic Analysis DVD Toolkit, Second Edition
is the best book available on Windows digital forensics.
Thanks, Eric!!
Friday, January 21, 2011
New Tools and Links
ProDiscover
Chris Brown has updated ProDiscover to version 6.8. This may not interest a lot of folks, but if you haven't kept up with PD, you should consider taking a look.
If you go to the Resource Center, you'll find a couple of things. First off, there's a whitepaper that demonstrates how to use ProDiscover to access Volume Shadow Copies on live remote systems. There's also a webinar available that demonstrates this. Further down the page, ProDiscover Basic Edition (BE) v 6.8 is available for download...BE now incorporates the Registry, EventLog and Internet History viewers.
Chris also shared with me that PD v6.8 (not BE, of course) includes the following:
* Added full support for Microsoft BitLocker-protected disks on Vista and Windows 7. This means that users can add any BitLocker-protected disk/image to a project and perform all investigative functions, provided that they have the BitLocker recovery key.
* The image compare feature in the last update is very cool for getting the diffs on volume shadow copies.
* Added support for the Linux Ext4 file system.
* Added a Thumbs.db viewer.
These are just some of the capabilities he mentioned, and there are more updates to come in the future. Chris is really working hard to make ProDiscover a valuable resource.
MS Tool
Troy Larson reached out to me the other day to let me know that MS had released the beta of their Attack Surface Analyzer tool. I did some looking around with respect to this tool, and while there are a lot of 'retweets', there isn't much out there showing its use.
Okay, so here's what the tool does...you install the tool and run a baseline of the system. After you do something...install or update an app, for example...you rerun the tool. In both cases, .cab files are created, and you can then run a diff between the two of them. I see two immediate uses for something like this...first, analysts and forensic researchers can add this to their bag of tricks and see what happens on a system when an app is installed or updated, or when updates are installed. The second, which I don't really see happening, is that organizations can install this on their critical systems (after testing, of course) and create baselines of systems, which can be compared to another snapshot after an incident.
I'll admit, I haven't worked with this tool yet, so I don't know if it creates the .cab files in a specific location or the user can specify the location, or even what's covered in the snapshot, but something like this might end up being very useful. Troy says that this tool has "great potential for artifact hunters", and I agree.
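The baseline-then-compare workflow described above can be sketched in a few lines. To be clear, this is just an illustration of the concept, not the tool's actual behavior...I haven't looked inside ASA's .cab files, so the snapshot format here (a simple dict of artifact paths to values) and the function name are mine, invented for the example:

```python
# Sketch of the baseline/compare idea: snapshot the system, change
# something, snapshot again, then diff. The {artifact: value} snapshot
# format here is invented for illustration, NOT ASA's actual .cab layout.

def diff_snapshots(baseline, current):
    """Return artifacts added, removed, or changed between two snapshots."""
    added   = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return added, removed, changed

# Hypothetical before/after snapshots around an app install
before = {r"C:\Windows\svc.exe": "abc123",
          r"HKLM\...\Run\Updater": "upd.exe"}
after_ = {r"C:\Windows\svc.exe": "abc123",
          r"HKLM\...\Run\Updater": "evil.exe",
          r"C:\Users\Public\drop.dll": "d41d8c"}

added, removed, changed = diff_snapshots(before, after_)
```

For the artifact-hunter use case, the "added" and "changed" buckets are where the interesting findings will be.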
CyberSpeak is back!
After a bit of an absence, Ovie is back with the CyberSpeak podcast, posting an interview with Mark Wade of the Harris Corporation. The two of them talked about an article that Mark had written for DFINews...the interview was apparently based on pt. 1 of the article; there's now a pt. 2. Mark's got some great information based on his research into the application prefetch files generated by Windows systems.
During the interview, Mark mentioned being able to use time-based analysis of the application prefetch files to learn something about the user and their actions. Two thoughts on this...unless the programs that were run are in a specific user's profile directory (and in some cases, even if they are...), you're going to have to do more analysis to tie the prefetch files to when a user was logged in...application prefetch files are indirect artifacts generated by the OS, and are not directly tied to a specific user.
The second thought is...timeline analysis! All you would need to do to perform the analysis Mark referred to is generate a nano-timeline using only the metadata from the application prefetch files themselves. Of course, you could build on that, using the file system metadata for those files, and the contents of the UserAssist subkeys (and possibly the RecentDocs key) to build a more complete picture of the user's activities.
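A nano-timeline like the one described above can be sketched directly from the prefetch files' embedded metadata. A caveat on the offsets: the values below (last-run FILETIME at 0x78, run count at 0x90, executable name at 0x10) are for the XP/2003-format (version 17) .pf files; Vista and Windows 7 use different offsets, and you should verify against your own test data before relying on this:

```python
# Build a "nano-timeline" using only the metadata embedded in the
# prefetch files themselves. Offsets are for XP/2003 (version 17) .pf
# files and are an assumption to verify: exe name (UTF-16LE) at 0x10,
# last-run FILETIME at 0x78, run count at 0x90.
import struct
from datetime import datetime, timedelta

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

def parse_pf(buf):
    name = buf[0x10:0x10 + 60].decode("utf-16-le").split("\x00")[0]
    last_run = struct.unpack_from("<Q", buf, 0x78)[0]
    run_count = struct.unpack_from("<I", buf, 0x90)[0]
    return filetime_to_dt(last_run), name, run_count

def nano_timeline(pf_buffers):
    # Sort (last run, exe name, run count) tuples into a tiny timeline
    return sorted(parse_pf(b) for b in pf_buffers)
```

From there, the file system metadata for the .pf files and the UserAssist data can be merged in to build out the fuller picture.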
Gettin' Local
A recent article in the Washington Post stated that Virginia has seen a rise in CP cases. I caught this on the radio, and decided to see if I could find the article. The article states that the increase is a result of the growth of the Internet and P2P sharing networks. I'm sure that along with this has been an increase in the "I didn't do it" claims, more commonly referred to as the "Trojan Defense".
There's a great deal of analysis that can be done quickly and thoroughly to obviate the "Trojan Defense", before it's ever actually raised. Analysts can look to Windows Forensic Analysis, Windows Registry Forensics, and the upcoming Digital Forensics with Open Source Tools for solutions on how to address this situation. One example is to create a timeline...one that shows the user logging into the system, launching the P2P application, and then from there add any available logs of file down- or up-loads, launching an image viewing application (and associated MRU list...), etc.
Another issue that needs to be addressed involves determining what artifacts "look like" when a user connects a smart phone to a laptop in order to copy or move image or video files (or uploads them directly from the phone), and then share them via a P2P network.
Free Stuff
Ken Pryor has posted his second article about doing "Digital Forensics on a (less than) shoestring budget" to the SANS Forensic blog. Ken's first post addressed training options, and his second post presents some of the tools described in the upcoming Digital Forensics with Open Source Tools book.
What I like about these posts is that by going the free, open-source, and/or low-cost route for tools, we start getting analysts to understand that analysis is not about tools, it's about the process. I think that this is critically important, and it doesn't take much to understand why...just look around at all of the predictions for 2011, and see what they're saying about cybercrime becoming, and continuing to become, more sophisticated.
Tuesday, January 18, 2011
More VSCs
I was doing some writing last night, specifically documenting the process described in my previous blog post on accessing VSCs. I grabbed an NTUSER.DAT from within a user profile from the mounted image/VHD file, as well as the same file from within the oldest VSC available, and ran my RegRipper userassist plugin against both of the files.
Let me say that I didn't have to use robocopy to extract the files...I could've just run the plugin against the mounted files/file systems. However, I had some other thoughts in mind, and wanted the copies of the hive files to try things out. Besides, robocopy is native to Windows 7.
If the value of VSCs has not been recognized or understood by now, then we have a serious issue on our hands. For example, we know that the UserAssist key values can tell us the last time that a user performed a specific action via the shell (i.e., clicked on a desktop shortcut, followed the Start->Programs path, etc.) and how often they've done so. So, if a user has performed a certain action 15 times, we see only the information about that most recent instance, and not the previous times.
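For those curious what the userassist plugin is doing under the hood, here's a minimal sketch for XP-era hives. The function name is mine, and the 16-byte data layout (4-byte session ID, 4-byte run count biased by 5, 8-byte FILETIME) is an assumption drawn from public documentation of XP's UserAssist format...verify against your own test hives, as Vista and later differ:

```python
# Minimal sketch of decoding one XP-era UserAssist value: the value
# name is ROT13-encoded, and the 16-byte data is assumed to hold a
# 4-byte session ID, a 4-byte run count (biased by 5, so subtract 5
# for the real count), and an 8-byte FILETIME of the last run.
import codecs
import struct
from datetime import datetime, timedelta

def decode_userassist(name, data):
    decoded = codecs.decode(name, "rot_13")      # un-ROT13 the value name
    session, raw_count, ft = struct.unpack("<IIQ", data)
    last_run = datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)
    return decoded, max(raw_count - 5, 0), last_run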
By mounting the oldest VSC and parsing the user hive file, I was able to get additional historical information, including other times that applications (Quick Cam, Skype, iTunes, etc.) had been launched by the user. This provides some very significant historical data that can be used to fill in gaps in a timeline, particularly when there's considerable time between when an incident occurred and when it was detected.
Here's an excerpt of the UserAssist values from the NTUSER.DAT in the mounted VHD:
Thu Jan 21 03:10:26 2010 Z
UEME_RUNPATH:C:\Program Files\Skype\Phone\Skype.exe (14)
Tue Jan 19 00:37:46 2010 Z
UEME_RUNPATH:C:\Program Files\iTunes\iTunes.exe (296)
And here's an excerpt of similar values from the NTUSER.DAT within the mounted VSC:
Sat Jan 9 11:40:31 2010 Z
UEME_RUNPATH:C:\Program Files\iTunes\iTunes.exe (293)
Fri Jan 8 04:13:40 2010 Z
UEME_RUNPATH:C:\Program Files\Skype\Phone\Skype.exe (8)
Some pretty valuable information there...imagine how this could be used to fill in a timeline.
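Here's one simple way to use the two excerpts above together: subtracting the run count seen in the oldest VSC from the count in the current hive bounds how many launches occurred in the interval between the two snapshots. The dict values below are taken straight from the output excerpts:

```python
# Run counts pulled from the UserAssist excerpts above:
current    = {"iTunes.exe": 296, "Skype.exe": 14}  # NTUSER.DAT in mounted VHD
oldest_vsc = {"iTunes.exe": 293, "Skype.exe": 8}   # NTUSER.DAT in oldest VSC

# Launches that occurred between the VSC snapshot and "now"
launches = {app: current[app] - oldest_vsc.get(app, 0) for app in current}
```

So in this case, iTunes was launched 3 times and Skype 6 times between the two snapshots...exactly the sort of activity that would otherwise be invisible in the current hive alone.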
And the really interesting thing is that just about everything else you'd do with a regular file system, you can do with the mounted VSC...run AV scans, run RegRipper or the forensic scanner, etc.
Thursday, January 13, 2011
More Stuff
More on Malware
No pun intended. ;-)
The MMPC has another post up about malware, this one called Kelihos. Apparently, there are some similarities between Kelihos and Waledac, enough that the folks at the MMPC stated that there was likely code reuse. However, there's quite a bit more written about Waledac...and that's what concerns me. The write-up on Kelihos states that the malware "allows unauthorized access and control of an affected computer", but there's no indication as to how that occurs. The only artifact that's listed in the write-up is a file name and the persistence mechanism (i.e., the Run key). So how does this control occur? Might it be helpful to IT and network admins to know a little bit more about this?
Also, take a close look at the Kelihos write-up...it mentions a file that's dropped into the "All Users" profile and an entry in the HKLM\...\Run key...but that Run key entry apparently doesn't point to the file that's listed.
I understand that the MMPC specifically and AV companies in general aren't in the business of providing more comprehensive information, but what would be the harm, really? They have the information...and I'm not talking about complete reverse engineering of the malware, so there's no need to do a ton of extra work and then post it for free. Given that this affects Microsoft operating systems, I would hope that some organization within MS could provide information that would assist organizations that use those OSs in detecting and reacting to infections in a timely manner.
Interview
Eric Huber posted a very illuminating interview with Hal Pomeranz over on the AFoD blog. Throughout the interview, Hal addresses several questions (from his perspective) that you see a lot in lists and forums...in particular, there are a lot of "how I got started in the business" responses. I see this sort of question all the time, and it's good to see someone like Hal not only discussing what he did to "break into the business", as it were, but also what he looks for with respect to new employees. If you have the time, take a read through the questions and answers, and see what Hal has to offer...it will definitely be worth your time.
Personally, having received an iTouch for Christmas, I think that a podcast would be a great forum for this sort of thing. I'm just sayin'... ;-)
Artifacts
Corey Harrell posted the results of some research on his Journey into Incident Response blog; he's performed some analysis regarding locating AutoPlay and Autorun artifacts. He's done some pretty thorough research regarding this topic, and done a great job of documenting what he did.
Results aside, the most important and valuable thing about what Corey did was share what he found. Have you ever had a conversation with someone where maybe you showed them something that you'd run across, or just asked them a question, and their response was, "yeah, I've been doing that for years"? How disappointing is that? I mean, to know someone in the industry, and to have a problem (or even just be curious about something) and know someone who's known the answer but never actually said anything? And not just not said anything at that moment...but ever.
I think that's where we could really improve as a community. There are folks like Corey who find something, and share it. And there are others in the community who have things that they do all the time, but no one else knows until the topic comes up and that person says, "yeah, I do that all the time."
Process Improvement
I think that one of the best shows on TV now is Undercover Boss. Part of the reason I like it is because rather than showing people treating themselves and each other in a questionable manner, the show has CEOs going out and engaging with front line employees. At the end of the show, the employees generally get recognized in some way for their hard work and dedication.
One topic jumped out in particular from the UniFirst episode...that front line employees who were the ones doing the job were better qualified to suggest and make changes to make the task more efficient. After all, who is better qualified than that person to come up with a way to save time and money at a task?
When I was in the military, I was given training in Total Quality Management (TQM) and certified by the Dept of the Navy to teach it to others. Being a Marine, there were other Marines who told me that TQM (we tried to call it "Total Quality Leadership" to get Marines to accept it) would never be accepted or used. I completely agree now, just as I did then...there are some tasks for which process improvement won't provide a great deal of benefit, but there are others that will. More than anything else, the one aspect I found from TQM/TQL that Marines could use everywhere was the practice of engaging with the front line person performing the task in order to seek improvement. A great example of this was my radio operators, who had to assemble RC-292 antennas all the time; one of my Marines had used wire, some epoxy and the bottom of a soda can to create "cobra heads", or field-expedient antenna kits that could be elevated (and the radios operational) before other Marines could go to the back of the Hummer, pull out an antenna kit, and start putting the mast together. This improved the process of getting communications up and available, and it was a process developed by those on the "front lines" who actually do the work.
So what does that have to do with forensics or incident response? Well, one of the things I like to do now and again is look at my last engagement, or look back over a couple of engagements, and see what I can improve upon. What can I do better going forward, or what can I do if there's a slight change in one of the aspects of the examination?
While on the IBM team and performing data breach investigations, I tried to optimize what I was doing. Sometimes taking a little more time up front, such as making a second working copy of the image, would allow me to perform parallel operations...I could use one working copy for analysis, and the other would be subject to scans. Or, I could extract specific files and data from one working copy, start my analysis, and start scanning the two working images. Chris Pogue, a SANS Thought Leader who was on our team at the time, got really good at running parallel analysis operations, by setting up multiple VMs to do just that.
The point is that we were the ones tasked with performing the work, and we looked at the requirements of the job, and found ways to do a better, more comprehensive job in a more efficient manner, and get that done in fewer ticks of the clock. One thing that really benefited us was collaborating and sharing what we knew. For example, Chris was really good at running multiple VMs to complete tasks in parallel, and he shared that with the other members of the team. I wrote Perl scripts that would take the results of scans for potential credit card numbers, remove duplicate entries, and then separate the resulting list into separate card brands for archiving and shipping (based on the required process). We shared those with the team, and Chris and I worked together to teach others to use them.
So why does any of this matter? When I was taking the TQM training, we were told that Deming originally shared his thoughts on process improvement with his fellow Americans, who laughed him out of the country, but others (the Japanese) absorbed what he had to say because it made sense. In manufacturing processes, errors in the process can lead to increased cost, delays in delivery, and ultimately a poor reputation. The same is true for what we do. Through continual process improvement, we can move beyond where we are now, and provide a better, more comprehensive service in a timely manner.
In closing, use this as a starting point...a customer comes to you with an image, and says that they think that there's malware on the system, and that's it. Think about what you can provide them, in a report, at the end of 40 hours...5 days, 8 hrs a day of work. Based on what you do right now, and more specifically, the last malware engagement you did, how complete, thorough, and accurate will your report be?
No pun intended. ;-)
The MMPC has another post up about malware, this one called Kelihos. Apparently, there are some similarities between Kelihos and Waledac, enough that the folks at the MMPC stated that there was likely code reuse. However, there's quite a bit more written about Waledac...and that's what concerns me. The write-up on Kelihos states that the malware "allows unauthorized access and control of an affected computer", but there's no indication as to how that occurs. The only artifact that's listed in the write-up is a file name and the persistence mechanism (i.e., the Run key). So how does this control occur? Might it be helpful to IT and network admins to know a little bit more about this?
Also, take a close look at the Kelihos write-up...it mentions a file that's dropped into the "All Users" profile and an entry in the HKLM\...\Run key...but that Run key entry apparently doesn't point to the file that's listed.
I understand that the MMPC specifically and AV companies in general aren't in the business of providing more comprehensive information, but what would be the harm, really? They have the information...and I'm not talking about complete reverse engineering of the malware, so there's no need to do a ton of extra work and then post it for free. Given that this affects Microsoft operating systems, I would hope that some organization with MS could provide information that would assist organizations that use those OSs in detecting and reacting to infections in a timely manner.
Interview
Eric Huber posted a very illuminating interview with Hal Pomeranz over on the AFoD blog. Throughout the interview, Hal addresses several questions (from his perspective) that you see a lot in lists and forums...in particular, there are a lot of "how I got started in the business" responses. I see this sort of question all the time, and it's good to see someone like Hal not only discussing what he did to "break into the business", as it were, but also what he looks for with respect to new employees. If you have the time, take a read through the questions and answers, and see what Hal has to offer...it will definitely be worth your time.
Personally, having received an iTouch for Christmas, I think that a podcast would be a great forum for this sort of thing. I'm just sayin'... ;-)
Artifacts
Corey Harrell posted the results of some research on his Journey into Incident Response blog; he's performed some analysis regarding locating AutoPlay and Autorun artifacts. He's done some pretty thorough research regarding this topic, and done a great job of documenting what he did.
Results aside, the most important and valuable thing about what Corey did was share what he found. Have you ever had a conversation with someone where maybe you showed them something that you'd run across, or just asked them a question, and their response was, "yeah, I've been doing that for years"? How disappointing is that? I mean, to know someone in the industry, and to have a problem (or even just be curious about something) and know someone who's known the answer but never actually said anything? And not just not said anything at that moment...but ever.
I think that's where we could really improve as a community. There are folks like Corey who find something, and share it. And there are others in the community who have things that they do all the time, but no one else knows until the topic comes up and that person says, "yeah, I do that all the time."
Process Improvement
I think that one of the best shows on TV now is Undercover Boss. Part of the reason I like it is because rather than showing people treating themselves and each other in a questionable manner, the show has CEOs going out and engaging with front line employees. At the end of the show, the employees generally get recognized in some way for their hard work and dedication.
One topic jumped out in particular from the UniFirst episode...that front line employees who were the ones doing the job were better qualified to suggest and make changes to make the task more efficient. After all, who is better qualified than that person to come up with a way to save time and money at a task?
When I was in the military, I was given training in Total Quality Management (TQM) and certified by the Dept of the Navy to teach it to others. Being a Marine, there were other Marines who told me that TQM (we tried to call it "Total Quality Leadership" to get Marines to accept it) would never be accepted or used. I completely agree now, just as I did then...there are some tasks for which process improvement won't provide a great deal of benefit, but there are others that will. More than anything else, the one aspect I found from TQM/TQL that Marines could use everywhere was the practice of engaging with the front line person performing the task in order to seek improvement. A great example of this was my radio operators, who had to assemble RC-292 antennas all the time; one of my Marines had used wire, some epoxy and the bottom of a soda can to create "cobra heads", or field-expedient antenna kits that could be elevated (and the radios operational) before other Marines could go to the back of the Hummer, pull out an antenna kit, and start putting the mast together. This improved the process of getting communications up and available, and it was a process developed by those on the "front lines" who actually do the work.
So what does that have to do with forensics or incident response? Well, one of the things I like to do now and again is look at my last engagement, or look back over a couple of engagements, and see what I can improve upon. What can I do better going forward, or what can I do if there's a slight change in one of the aspects of the examination?
While on the IBM team and performing data breach investigations, I tried to optimize what I was doing. Sometimes taking a little more time up front, such as making a second working copy of the image, would allow me to perform parallel operations...I could use one working copy for analysis, and the other would be subject to scans. Or, I could extract specific files and data from one working copy, start my analysis, and start scanning the two working images. Chris Pogue, a SANS Thought Leader who was on our team at the time, got really good at running parallel analysis operations, by setting up multiple VMs to do just that.
The point is that we were the ones tasked with performing the work, and we looked at the requirements of the job, and found ways to do a better, more comprehensive job in a more efficient manner, and get that done in fewer ticks of the clock. One thing that really benefited us was collaborating and sharing what we knew. For example, Chris was really good at running multiple VMs to complete tasks in parallel, and he shared that with the other members of the team. I wrote Perl scripts that would take the results of scans for potential credit card numbers, remove duplicate entries, and then separate the resulting list into separate card brands for archiving and shipping (based on the required process). We shared those with the team, and Chris and I worked together to teach others to use them.
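The scripts mentioned above were written in Perl and aren't reproduced here, but the idea is simple enough to sketch. Below is a rough Python illustration of the same post-processing logic (the function names, the simplified prefix/length rules for each card brand, and the sample numbers are mine, not the original scripts'): remove duplicate entries from the scan results, discard candidates that fail the Luhn check, and then separate the survivors by brand.

```python
def luhn_valid(pan):
    """Standard Luhn check: from the rightmost digit, double every
    second digit (subtracting 9 if the result exceeds 9) and sum."""
    total = 0
    for i, c in enumerate(reversed(pan)):
        d = int(c)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def bucket_by_brand(candidates):
    """Dedupe candidate PANs, drop Luhn failures, and separate the
    rest by brand using simplified prefix/length rules."""
    brands = {"visa": set(), "mastercard": set(), "amex": set(),
              "discover": set(), "other": set()}
    for pan in set(candidates):            # set() removes duplicate entries
        if not pan.isdigit() or not luhn_valid(pan):
            continue
        if pan.startswith("4") and len(pan) in (13, 16):
            brands["visa"].add(pan)
        elif pan[:2] in ("51", "52", "53", "54", "55") and len(pan) == 16:
            brands["mastercard"].add(pan)
        elif pan[:2] in ("34", "37") and len(pan) == 15:
            brands["amex"].add(pan)
        elif pan.startswith("6011") and len(pan) == 16:
            brands["discover"].add(pan)
        else:
            brands["other"].add(pan)
    return brands
```

Scanners for card data throw a lot of false positives, so the Luhn filter and dedupe step alone can cut the review burden considerably before anything gets archived or shipped.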
So why does any of this matter? When I was taking the TQM training, we were told that Deming originally shared his thoughts on process improvement with his fellow Americans, who laughed him out of the country, but others (the Japanese) absorbed what he had to say because it makes sense. In manufacturing processes, errors in the process can lead to increased cost, delays in delivery, and ultimately a poor reputation. The same is true for what we do. Through continual process improvement, we can move beyond where we are now, and provide a better, more comprehensive service in a timely manner.
In closing, use this as a starting point...a customer comes to you with an image, and says that they think that there's malware on the system, and that's it. Think about what you can provide them, in a report, at the end of 40 hours...5 days, 8 hrs a day of work. Based on what you do right now, and more specifically, the last malware engagement you did, how complete, thorough, and accurate will your report be?
Friday, January 07, 2011
Links and stuff
Windows Registry Forensics
The folks at Syngress tweeted recently that Windows Registry Forensics is due to be published this month! It's listed here on Amazon, along with editorial reviews from Troy Larson and Rob Lee. I, for one, cannot wait! Seriously.
A word about the book...if you're interested in an ebook/Kindle version, or if you have trouble getting the contents of the DVD with your ebook purchase, please contact the publisher first. Once the book has been sent in for printing, I (and authors in general) have very little to do with the book beyond marketing it in ways that the publisher doesn't.
Rules
Jesse Kornblum posted his Four Rules for Investigators recently. I would say that it was refreshing to see this, but I've gotta say, I've been saying most of the same things for some time...I think the big exception has been #2, and not because I disagree, but as a consultant, I generally assume that that's already been addressed and handled.
Jesse's other rules remind me a great deal of some of the concepts I and others have been discussing:
Rule 1 - Have a plan...that kind of sounds like "what are the goals of your investigation and how do you plan to address it with the data you have?"
Rule 2 - Have permission...definitely. Make sure the contract is signed before you forensicate.
Rule 3 - Write down what you do...Documentation! Now, I know some folks have said that they don't keep case notes, as those would be discoverable, and they don't want the defense counsel using their speculation and musings against them. Well, I'd suggest that that's not what case notes are about, or for. Case notes let you return to a case 6 months or a year later and see what you did, and even why. They also let someone else pick up where you left off, in case you get sick, or hit by a bus. What I really don't like seeing is the folks who say that they spent hours researching something that was part of a case, but they didn't document it, so they can't remember it...they then have to re-do all of that research the next time they encounter that issue. Also, consider this...one person on a team conducts research that takes 10 hrs to complete. If they don't document and share the results of the research, then the other 9 people on the team are going to spend a total of 90 hrs doing that research themselves...when the original research could have been shared via email, or in a 1/2 hr brown bag training session.
Rule 4 - Work on a copy...Always! Never work on the original data. I've had instances where immediately after I finished making copies of the images, the original media (shipped by the customer) died. Seriously. Now, imagine where I'd've been had I not followed procedure and made the copies...my boss would've said, "...that's okay, because you made copies...right?" I'm generally one of those folks who follows procedure because it's the right thing to do, and I tend not to make arbitrary judgments as to when I will or won't follow the procedure.
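One practical piece of the "work on a copy" discipline is verifying that the working copy hashes identically to the original before you start forensicating on it. Here's a minimal Python sketch of that check (the function names are mine, not any particular tool's; FTK Imager and similar tools do this for you at acquisition time):

```python
import hashlib

def hash_file(path, algo="md5", chunk_size=1024 * 1024):
    """Hash a file in fixed-size chunks, so a multi-GB image
    doesn't have to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original, working_copy, algo="md5"):
    """True if the working copy's hash matches the original's."""
    return hash_file(original, algo) == hash_file(working_copy, algo)
```

Document the hash values in your case notes at the same time; that way, if the source media dies (see above), you can still demonstrate that your working copy is a faithful one.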
Jesse isn't the only one saying these things. Take a look at Sniper Forensics: Part 1 over at the SpiderLabs Anterior blog. Chris has gotten a lot of mileage out of the Sniper Forensics presentations, and what his talks amount to include putting structure around what you do, and the KISS principle. That's "keep it simple, stupid", NOT listening to Love Gun while you forensicate (although I have done that myself from time to time).
Is it StuxNet, or is it APT?
I found this DarkReading article about targeted attacks tweeted about over and over again. I do agree with the sentiment of the article, particularly as the days of joyriding on the Information Superhighway are over with, my friends. No one is really deploying SubSeven any longer, just to mess with someone and open and close their CD-ROM tray. There's an economic driver behind what's going on, and as such, steps are being taken to minimize the impact of unauthorized presence on compromised systems. One thing's for sure...it appears that these skilled, targeted attacks are going to continue to be something that we see in the news.
USB Issues and Timelines
Okay, this isn't about the USB issues you might be thinking of...instead, it's about a question I get now and again, which is, why do all of the subkeys beneath the USBStor key in the System hive have the same LastWrite time? While I have noticed this, it hasn't been something that's pertinent to my exam, so I really haven't pursued it. I have seen where others have said that they've looked into it and found that the LastWrite time corresponded with an update.
Rather than speculating as to the cause, I thought what I'd do is recommend that folks who see this create a timeline. Use the file system metadata, LastWrite times from the keys in the System and Software hives, and Event Log data, to start. This should give you enough granularity to begin your investigation. I'd also think about adding Prefetch file metadata (if you have any Prefetch files...), as well as data from the Task Scheduler log (that is, if it says anything besides the Task Scheduler service starting...).
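The mechanics of building such a timeline are straightforward: each artifact parser yields time-stamped events, and you merge them all into one chronologically sorted list. A minimal Python sketch (the event tuples and descriptions below are hypothetical examples, not output from any real parser):

```python
def build_timeline(*sources):
    """Merge (epoch_time, source, description) tuples from several
    artifact parsers into one chronologically sorted timeline."""
    events = []
    for source in sources:
        events.extend(source)
    return sorted(events, key=lambda e: e[0])

# Hypothetical events from the artifact sources mentioned above:
# file system metadata, Registry key LastWrite times, Event Log records.
fs_events  = [(1294400000, "FILE", "\\Windows\\inf\\setupapi.log modified")]
reg_events = [(1294400060, "REG",  "USBStor subkey LastWrite updated")]
evt_events = [(1294399000, "EVT",  "Service Control Manager event recorded")]

timeline = build_timeline(fs_events, reg_events, evt_events)
```

With everything on one normalized time scale, an update that rewrote every USBStor subkey at once should stand out as a tight cluster of Registry events surrounded by corroborating file system and Event Log activity.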