First...you've GOT to see this. Tell me that the main character doesn't look surprisingly like Marcus Ranum.
Second, a huge shout out to JT for her work on regslack.pl, which is available in the Downloads section of RegRipper.net. I was running a search across an image recently for some important data, and surprisingly, I got several hits in Registry hive files; specifically, the Software hive, a couple of NTUSER.DAT files, and even some UsrClass.dat files. This was odd, so I opened a couple of the hives in UltraEdit to view their guts, and didn't see any key or value structure information anywhere near the entries. To be sure, I ran JT's regslack.pl against the hive files...I had done so previously to check some of the hive files for deleted keys...and was able to verify that the sensitive data was, in fact, part of the unallocated space within the hive file and NOT part of any Registry structures. If you've ever found hits for your keywords within Registry hive files, you'll know that having this kind of definitive information can make a HUGE difference!
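If you want to try this yourself, the invocation is about as simple as it gets...a minimal example, assuming regslack.pl takes the hive file as its sole argument (the file names here are hypothetical):

C:\tools>perl regslack.pl NTUSER.DAT > ntuser_slack.txt

From there, you can run your keyword searches against the output and see whether your hits line up with deleted keys and values, or with unallocated space within the hive.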
Rich over at HBGary showed me a neat trick for tracking down data in memory dumps. In this same engagement, I had collected a memory dump from a Windows 2003 system using FastDump Pro, and had used some of the same tools I use to search images for sensitive data on the memory dump...and found stuff. Well, the next step was to nail this down to a specific process. Unfortunately, within Responder Field Edition, you can export the executable image for the process but not the memory pages it uses. That's where Rich came to the rescue...he told me to right-click on the imported memory snapshot, choose View Binary from the context menu, and after the binary contents of the memory dump appeared in the right-hand view pane, click on the binoculars in the menu bar above the memory dump and enter my search terms. I did this, and based on the output, was able to determine that the data I was searching for was not associated with a specific process. Interestingly, the strings associated with the process itself did not contain the information I was looking for (based on my search terms), which served to corroborate my findings. Thanks to Rich for his helping hand in showing me how to wring just a little bit more out of Responder!
The Windows Incident Response Blog is dedicated to the myriad of information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions) and "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Saturday, March 28, 2009
EventLog Parsing
It's a rainy day here in Northern VA, just the kind of day where you want to sit inside and code. Seriously. One of the things I've had to get back to is tweaking some of the issues I've had with the code for evt2xls.pl. For some reason, on smaller EVT files, it would rip right through them, but on larger files, particularly those around 16MB, it was having...issues.
Rather than try to wade through the mess of code I wrote two years ago, I decided to just rewrite the code from the ground up. Microsoft is nice enough to provide the EVENTLOGRECORD structure format, as well as the ELF_LOGFILE_HEADER structure and ELF_EOF_RECORD structure formats. Using this information, I completely rewrote the code that is the basis of evt2xls.pl. I will also be updating evtrpt.pl, which provides statistics about EVT files, such as the frequency of occurrence of various event sources and IDs, as well as the date range of all of the records listed in the EVT file. I plan to add some statistics for SIDs, as well.
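To give a sense of what the rewrite does under the hood, here's a minimal sketch (not the actual evt2xls.pl code) of walking the fixed-size portion of each event record, based on Microsoft's published EVENTLOGRECORD structure definition:

use strict;

# Sketch: walk the 56-byte fixed portion of each EVENTLOGRECORD in an EVT file
open(EVT, "<", $ARGV[0]) || die "Cannot open $ARGV[0]: $!\n";
binmode(EVT);
seek(EVT, 0x30, 0);    # skip the 48-byte ELF_LOGFILE_HEADER
my $hdr;
while (read(EVT, $hdr, 56) == 56) {
    my ($len, $magic, $rec_num, $time_gen, $time_wrt, $event_id,
        $type, $num_str, $cat, $flags, $closing, $str_ofs,
        $sid_len, $sid_ofs, $data_len, $data_ofs) = unpack("V6v4V6", $hdr);
    last unless ($magic == 0x654c664c);    # "LfLe" record signature
    my $tg = scalar(gmtime($time_gen));
    printf "Record %-6d ID: %-5d Generated: %s UTC\n",
        $rec_num, $event_id & 0xFFFF, $tg;
    seek(EVT, $len - 56, 1);    # jump to the next record
}
close(EVT);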
So the method for analyzing EVT files from Windows 2000, XP, and 2003 remains the same:
1. Run the auditpol plugin from RegRipper (using rip.pl) against the Security hive file to see what's being audited.
2. Run evtrpt.pl (the new one when it's out) against the EVT file(s) to see what you have; for example, if the date range of the EVT records doesn't cover your incident, then there may be little of value.
3. Run evt2xls.pl to extract the event records to XLS, CSV, or a timeline-specific format for analysis.
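For example, the three steps might look something like this (the rip.pl syntax is real; the switches for the rewritten evtrpt.pl and evt2xls.pl are hypothetical, as the new versions aren't out yet):

C:\tools>rip.pl -r F:\case\Security -p auditpol
C:\tools>evtrpt.pl F:\case\SecEvent.Evt
C:\tools>evt2xls.pl F:\case\SecEvent.Evt > secevent.csv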
A basic version of the rewritten evt2xls.pl will be available shortly. A more fully featured version will be available through some other means at a later date.
This is very useful, as the Perl script is platform-independent...it will run on any platform that has Perl, as no special or platform-specific modules are required (with the exception of Spreadsheet::WriteExcel, which can be easily installed on ActiveState Perl using PPM). Also, as the Windows API is not used, there's no worry about extracting event records from EVT files that other tools (particularly the Event Viewer) refer to as "corrupted", so there is no need to "fix" a corrupted EVT file (because it probably isn't corrupted at all).
Addendum: The next step is to create code for locating (and parsing) event records in memory dumps and unallocated space.
Friday, March 27, 2009
Case Studies
The guys over at the Hacking Exposed Computer Forensics blog have been posting a couple of entries that may be of interest to examiners and analysts, called What did they take when they left? In today's economy, and particularly with the news media talking about disgruntled employees taking data or information with them when they leave an employer, this information can be very helpful. Their first post refers to looking for artifacts of CD burning, and part 2 discusses what you might find in the UserAssist key. These are both excellent posts that present more than just dry technical data to the reader...they discuss how the data can be used.
There are some good points raised in part 2, particularly what might have happened if there are no entries in the UserAssist key for the user you're looking at. Based on my experience, one thing I'd point out is to be sure you're looking at the hive file for an ACTIVE user. I can't tell you the number of times that I've seen someone try to run extraction tools across an NTUSER.DAT file from the "Default User" or from the "All Users" profile. I've also seen seasoned examiners try to run tools against the ntuser.dat.log file.
Another point I'd like to expand a bit is that if there are no entries beneath the Count key, or if there don't seem to be a number of entries commensurate with the apparent user activity, be sure to check the LastWrite time of the Count key (particularly if the key has no values at all). Remember, the LastWrite time of a key is similar to the last modification times for files, and the time may correlate directly to when the entries were deleted.
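Checking a key's LastWrite time (and its value count) is trivial with Perl and James Macfarlane's Parse::Win32Registry module, which is what RegRipper itself is built on. A minimal sketch...the hive file name is hypothetical, and the GUID shown is the common XP UserAssist GUID:

use strict;
use Parse::Win32Registry;

# Sketch: report the LastWrite time and value count of a UserAssist Count key
my $reg  = Parse::Win32Registry->new("NTUSER.DAT");
my $root = $reg->get_root_key();
my $path = "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\UserAssist".
           "\\{75048700-EF1F-11D0-9888-006097DEACF9}\\Count";
if (my $key = $root->get_subkey($path)) {
    my @vals = $key->get_list_of_values();
    printf "LastWrite: %s  Values: %d\n",
        $key->get_timestamp_as_string(), scalar(@vals);
}

A Count key with zero values but a recent LastWrite time is exactly the sort of thing you'd want to be able to explain.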
Speaking of which, if you're examining a Windows XP system, don't forget to consider System Restore Points. While the user may have deleted the UserAssist entries from the current hive file, there may be a number of Restore Points that contain valuable data. The upcoming Windows Forensic Analysis second edition includes a discussion of a tool that I wrote to allow the examiner to run RegRipper plugins across the System Restore Points.
Be sure to continue following the posts over at the Hacking Exposed Computer Forensics blog. Folks love a good story, particularly something that they can follow and actually use, and the HECF guys are bringing it on!
Be sure to check out Matt's interview on the Forensic 4Cast podcast, Hogfly's use of HBGary Responder, and Christine's updates at the e-Evidence site.
Thursday, March 26, 2009
Timeline Analysis, pt V - First Steps
In order to really understand developing a timeline of activity on a system, a great place to start is with the file system. Well...okay...not great, per se...it's the traditional way, how's that? So, let's get some hands-on experience, and to do that, let's start with an image...pick one from the Available Images section below; I'm going to use the NIST "hacking" case image because it has some interesting things we'll take a look at. You may have to download the segments of the raw, dd-format image and reassemble them into a single image file using the "type" command, or download the .E0x file and recapture the image into a raw format using FTK Imager.
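For reference, reassembling the raw image segments on Windows is a one-liner...either of the following works from a command prompt (the segment names are hypothetical):

C:\hacking>type image.001 image.002 > image.dd
C:\hacking>copy /b image.001+image.002 image.dd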
Once you've downloaded the image, you can use FTK Imager to load the file and check out the partition table, or you can use the VDK file system driver (see the Resources section below) to view the partition table from the command line. Using the "vdk view" command, you can see the partition table, which gives us similar information (along with offsets) as what is available through FTK Imager:
Disk Capacity : 9514260 sectors (4645 MB)
Number Of Files : 1
Type Size Path
------- ------- ----
FLAT 9514260 d:\hacking\image.dd
Partitions :
# Start Sector Length in sectors Type
-- ------------ --------------------- ----
0 0 9514260 ( 4645 MB)
1 63 9510417 ( 4643 MB) 07h:HPFS/NTFS
Another tool you can use to collect similar information from an image is TSK's mmls tool. Using the command "mmls -t dos d:\hacking\image.dd", we see the following output:
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: Meta 0000000000 0000000000 0000000001 Primary Table (#0)
01: ----- 0000000000 0000000062 0000000063 Unallocated
02: 00:00 0000000063 0009510479 0009510417 NTFS (0x07)
03: ----- 0009510480 0009514259 0000003780 Unallocated
In the output of both "vdk view" and "mmls", the particular piece of information that we're looking for is the offset to the partition that we're interested in...in this case, sector 63.
This is why I chose this image in particular; it provides us with a good example to use in order to demonstrate the use of the tools, as the NTFS partition doesn't start at the first sector; rather, it starts at sector 63 (Note: you can get this same information by selecting the partition in FTK Imager and choosing View -> Properties). One of the tools that we'll want to use to obtain timeline information from our acquired image is the TSK tool 'fls' (see the link in the Resources section below). The 'fls' tool will allow you to extract timeline information for the file system from the acquired image. In order to create a bodyfile containing all of the timeline information, use the following command:
fls -r -p -o 63 -l -m C:/ d:\hacking\image.dd > bodyfile
Another great use for the tool is to get just a listing of all of the deleted files from the system using the following command:
fls -d -r -p -l -o 63 -m C:/ d:\hacking\image.dd > deleted
I won't go into detail on the uses of all of the various switches, as you can find those by typing just "fls" at the command prompt, or by accessing the appropriate link in the Resources section below. The output bodyfile from our first command contains all of the deleted files, as well.
The bodyfile created by fls lists 4 timestamps in Unix epoch time format; atime, mtime, ctime, and crtime. In this case, the crtime is the creation time, and the ctime value is the metadata change time, which are derived from the $STANDARD_INFORMATION NTFS attribute (for the NTFS file system, of course).
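If you open the bodyfile in a text editor, you'll see that each line follows TSK's pipe-delimited body format (the path and values below are made up for illustration):

MD5|name|inode|mode_as_string|UID|GID|size|atime|mtime|ctime|crtime
0|C:/WINDOWS/system32/config/software|1234-128-1|r/rrwxrwxrwx|0|0|8388608|1236717722|1236717722|1236717722|1029805115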
At this point, we have a body file that we can use with mactime to create a timeline of file system activity. We can also use this body file as an input to Michael Cloppert's ex-tip in order to incorporate other data sources into our timeline.
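Generating the timeline from the bodyfile is then as simple as the following (the -d switch produces comma-delimited output that opens nicely in Excel):

C:\tools>mactime -b bodyfile -d > timeline.csv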
Available Images
Lance's ForensicKB blog practicals
NIST "hacking" case
InfoSecShortTakes competition image
Resources
SleuthKit fls man page
SleuthKit Wiki: BodyFile
SleuthKit Wiki: Timeline
Forensic Wiki: How to analyze partitions
VDK file system driver
ForensicWiki: NTFS
Dark Reading: WFA 2/e
John Sawyer had an article published in Dark Reading that mentions WFA 2/e! It's nice to get some love, even though the book isn't out yet! Thanks, John!
Friday, March 20, 2009
Lessons in IR
Something in the news out of Tulsa, OK, this morning really provided an excellent lesson in IR.
Basically, the story goes that someone saw what they thought might be one of the deadliest spiders on the planet, panicked, and killed it. An expert in spiders asked to see the body of the spider, but it wasn't available...it had been destroyed.
How many times has this happened to you as a responder?
Caller: "Help! We were hit with the deadliest Windows worm known to man!"
You: "Okay, calm down. How do you know?"
Caller: "We received an alert on our AV console!"
You: "Okay, good. What did it say?"
Caller: "We don't know."
You: "Uhm...okay. Have you isolated any infected systems or preserved a sample of the malware?"
This is where things just kind of go downhill. But the news article is a great example of how things go wrong on a daily basis in IR...
Memory Analysis, for real!
A bit ago, Rich over at HBGary was nice enough to provide me with a dongle for Responder Field Edition, so that I could take a look at it and work with it from an incident response perspective. Having that kind of access to that kind of tool is really pretty amazing, as it provides a perspective to the overall environment that you don't get through reviewing web sites and testimonials.
First, some quick background...if you haven't been by the redesigned HBGary site, go by and take a look. Also, be sure to add the Fast Horizon blog to your RSS feed. The Fast Horizon blog gives some interesting insight into technical aspects of the Responder product (e.g., Digital DNA) that you won't get even by working with the tool. Also, as I'm working with the Field Edition, I don't have access to Digital DNA, so I won't be able to comment on that.
Okay, to make things even better when using tools like Responder to perform memory analysis, Hogfly has started a memory snapshot project, and has posted a memory snapshot from a VM guest running Windows XP (part II is posted here), with actual malware running. Pretty cool, and a great idea at that! This not only provides access to memory dumps with something to actually look for, but it also provides a great example of why you'd want to dump memory in the first place!
I'll be taking a look at Hogfly's first memory snapshot (.vmem file) with Responder and Volatility, as the snapshot is of a Windows XP system. I won't be using Memoryze, as I downloaded the MemoryzeSetup.msi file and ran it several times, and had nothing installed. I tried specifying a path for the installation, as well as using the default path. I even tried re-downloading the MSI file. Nothing worked.
Responder Field Edition
Responder provides a very easy to use GUI interface for working with memory dumps or snapshots. You simply select the memory snapshot to import, select a few options for what to do during processing, maybe add some keywords to search for, and then you let the tool do its thing.
A couple of very interesting things that Responder will do are look for "keys and passwords" (a string search for components that might include passwords) and for "Internet History", or URLs. When it locates either of these, the column display in Responder includes a field for the offset where the string is located; this is useful, as you may be able to map something interesting to a location in memory to give that string some context. In reviewing the recovered URLs from Hogfly's memory snapshot, I found a LOT of Microsoft- and MSN.com-related entries, as well as some others, such as:
http://192.168.30.129/malware/
http://*:2869/
http://www.iec.ch
http://%s:%u http://[::1]:8080/
There are a number of others (too many to list here), and there were also references to tunes.com and musicblvd.com.
For incident response purposes, one of the things I really like about Responder is the ability to quickly get the memory snapshot open and locate the issue. In this case, for me, the "offending" process stood out like a sore thumb...svhost.exe, PID: 1824, with a command line of "C:\Windows\msagent\svhost.exe". From there, you can expand the tree listing for that process in order to view specific information about and from that process. For example, we can see that the process has a couple of open file handles, to include the "foo" user's TIF index.dat file, and that the process also has an open TCP socket from port 1059 to a remote IP address on port 9899. This information can be extracted quickly once the memory snapshot has been collected, and the information available can then be used to correlate with other information, such as network packet captures, firewall or network device/appliance logs, etc.
From an incident response perspective, Responder is a great tool to really get you started in quickly identifying issues. I would like to be able to dump the binary, if possible...I selected the process in the Processes tree view (left-most pane of the UI), right-clicked and selected Package, View Binary from the context menu...some dialog boxes flashed, but I didn't end up with a binary, nor did I get anything stating that I couldn't have it. This may be due to the fact that the memory pages for the process executable had been paged out; in cases such as this, using FastDump Pro would've been the way to create a memory snapshot, incorporating the pagefile, as well, providing a more complete picture.
Overall, however, Responder lets you very quickly process information in a graphical format, providing speed and agility for a task for which, just a couple of years ago, there was no real process or methodology available.
Addendum: I heard from Rich with respect to parsing strings from binaries with Responder, and it's actually pretty simple. If you select a particular process in the Processes tree view, you then expand the process listing for the Modules (see the above image) and select the executable image, which is also listed as a module. Expanding that tree lets you see Bookmarks and Strings. So with a couple of mouse clicks, you can view the strings extracted from any particular binary. Similarly, viewing, analyzing, or saving a copy of the binary is simply a matter of right-clicking on the name of the EXE file in the Modules tree view, choosing Package, and selecting the option that you want (see figure).
Volatility
Analyzing the memory snapshot with Volatility was somewhat different, as Volatility is purely a command line interface (CLI) tool. However, that does not make Volatility any less powerful when it comes to memory analysis. I won't go through everything Volatility can do, as it's been discussed before (and is rather nicely documented), but suffice to say that I ran the following commands against the memory snapshot:
ident
datetime
pslist/psscan2
connections/connscan2
sockets
files
regobjkeys
The great thing about these commands is that they can be included in a batch file and run all at once, and finished off with tools such as JL's vol2html to tie everything together into a single report (examples here and here). Don't forget that you can also include other modules, such as malfind, moddump, or Moyix's Registry modules. JL's code is a great example of what you can do with these tools, showing the flexibility available through Volatility.
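As a quick illustration, such a batch file might look something like this (a sketch, assuming Volatility 1.3's "python volatility <command> -f <file>" syntax and a snapshot named xp.vmem):

@echo off
REM Sketch: run a standard set of Volatility commands against one snapshot
set SNAP=xp.vmem
for %%C in (ident datetime pslist psscan2 connections connscan2 sockets files regobjkeys) do (
    python volatility %%C -f %SNAP% > %%C.txt
)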
Correlating output like this can make it easier to identify suspicious processes. From there, once you identify a process of interest, you can then use the "procdump" and "memdmp" commands to collect as much of the executable image and process memory, respectively, as you can. In this case, the executable image data appears in the local directory as "executable.1824.exe" and the memory dump appears as "1824.dmp". You can then view the contents of these files in a hex editor, or simply extract the strings. When running the procdump command, however, I received a number of "Memory not accessible" responses, indicating that the memory pages had been paged out.
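Again assuming Volatility 1.3's syntax and the process identified above, those two commands would look something like:

python volatility procdump -f xp.vmem -p 1824
python volatility memdmp -f xp.vmem -p 1824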
As you can see, various memory analysis tools have strengths and weaknesses, and should be considered just as they are...tools to do a job. Rather than advocating one specific tool, I'd advocate understanding and using them all. Responder is more OS-complete than some of the other available tools, while Volatility provides a level of granularity not seen in the other tools. At this point, if someone asked me, "which tool do you recommend...Responder, Volatility, or Memoryze", I'd have to say "yes".
Another thing to remember is that looking at these tools in isolation only provides part of the answer, and only taps a small portion of the power available to you as a responder. For example, throw F-Response EE, FEMC, and RegRipper into your toolkit, and you're really expanding your capabilities.
Wednesday, March 18, 2009
Linkfest
This will be a short "linkfest"...
Ovie and Bret posted a new CyberSpeak podcast recently. In this podcast, they interviewed Drew Fahey of e-Fense, and talked about the reasons behind Helix becoming a commercial tool. I agree with Drew's reasoning...I can't say that I've been a huge user of Helix, although I do have copies of the CDs for various versions of Helix. I have run into folks that use Helix (in some cases, almost exclusively), so it behooves me to be a bit familiar with the tool sets, particularly when it's a customer and they're trying to provide me with some needed information.
One of the links from the show was for ADrive.com, an online storage and backup site. A while back, I blogged on the GMail Drive, an application that would allow you to use your GMail account as a backup/storage facility. Googling turns up a number of sites offering this kind of functionality, including VMN.NET, and an ExtremeTech article that lists six free online storage sites. Given some of the media attention that's been directed at insider threats, particularly in a down economy, this is yet another avenue of data leakage to be on the lookout for. When performing incident response or analysis in these situations, you may want to look for artifacts of online storage sites.
Sunday, March 15, 2009
Get in mah Kindle!!
Paraphrasing Fat Bastard, I thought it was kind of funny...I crack myself up!
Okay, enough of that. Rob Lee recently pointed me to the new Kindle, and this morning, Lance Mueller pointed me to something just as cool...Windows Forensic Analysis, on the Kindle! Right below the image of the book cover, there's a little box that says "Tell the Publisher"...you can tell the publisher that you want the book on your Kindle!
And yes, there is a similar box for WFA 2/e!
BTW...if you connect one of these to your Windows box to upload books, what artifacts does it leave?
Resources
With respect to incident management, and incident response and forensic analysis of Windows systems, what are your issues, concerns, and requirements?
What I mean by this is, what resources are out there that help you meet your needs and goals, and which ones simply are not available? What meets your needs, and what needs aren't being met?
These questions apply across the board, regardless of whether you're local, state, or federal LE, a consultant, FTE IT staff, college/university student, etc.
Is it a matter of the availability of information with respect to various or specific topics? If so, which ones? What about training? Is there information out there that may be useful, but is out of reach for some reason (aside from being classified)? What are your limitations in these regards? Time? Funding? How could your requirements in these areas be better met?
Have you come through an incident or completed some forensic analysis and been left with questions or concerns, such as "did I miss something?" or "what could I have done better?"
Are you looking around and simply not finding your needs being met? Have you sat down and figured out what those needs are, even if they're moving targets? Do you keep coming back to some of them over and over again?
Saturday, March 14, 2009
A bunch of stuff
Here's a bunch of stuff I've run across recently...
As an update to my Working with email post, I found this post from digfor about a couple of other useful tools for handling email. The post mentions Mail Viewer from MiTeC, as well as the MiTeC Outlook Express Reader.
Digfor also mentions the ImDisk virtual disk driver, something I've mentioned before and included in my book. I agree, this is an excellent tool for mounting images of disks.
Digfor's post led me to this one about an SFCList tool that lets you see all of the files protected by WFP. From there, I found this link to disabling SFC...something I've been aware of for a while, and the reason I wrote the wfpcheck tool discussed in the second edition of my book. The link to disabling SFC led me to this one, and I knew about this one. I think that it's important for all responders and analysts to be aware of this, as it can help you to find bad stuff on a system or within an acquired image. This came into play with MS's initial release of their description of W32/Virut.BM (hey, I didn't name the variant, but it's funny to me, too!). In that initial description, under the Analysis tab, they had no mention of SFC being disabled. To me, this was just another example of how AV vendors are missing the mark when it comes to helping their customers...rather than writing up malware in a way that helps the IT staffs that battle it, they write it up in a way that suits themselves.
Next - not a tool, per se, as much as a site...check out the Command Line Kungfu blog!
Friday, March 13, 2009
Incident Management 101
When responding to an incident, the single biggest, most important factor is information.
Some of you are going to read this and your first thought will be, "well...duh!" Well, think about it for a moment...if you're a consultant (like me), how many times have you received a call for assistance where the answer to all of your questions was either "I don't know" or "blue"? How many times have you responded to an incident where the customer had so much as a usable network diagram? Remember, like any (potentially) bleeding victim, they want answers NOW, but it's like trying to diagnose someone via email...whether you're on the phone or on-site, the first thing you need to do is orient yourself to the situation, which, of course, takes time.
In any incident, there are going to be unknowns; i.e., a lack of information. At first, you may not know what you're dealing with, so some data collection and analysis (you know..."investigation") will be required, and this should be based on a solid process.
One of the unknowns during an incident should not be your environment; if it is, you're in trouble. By environment, I mean everything, including very basic stuff like "TCP connections start with a three-way handshake." Laugh if you will, but I'm serious. Having JUST that basic piece of information and understanding how to apply it makes a tremendous difference during incident response, for anyone. Also, you need other information such as: where are systems located, and what are the paths that data is supposed to (or can) take? Where are applications located? Is there any logging, and if so, what is logged?
When it comes to responding to an incident, there are four main locations for data collection:
1. The network - classic packet sniffing; analyze with Wireshark, NetWitness, etc.
2. Network devices - routers, firewalls, any appliances (IPS, Damballa, etc.)
3. Host system/memory
4. Host systems/physical disk - includes not just the host OS, but any applications, application logs, etc.
Data can be collected from any of these sources to assist you in your incident, IF you know where they are, how to (or who can) access them, and where the necessary data is located.
During an incident, something that is more dangerous than a lack of information (because you can always fill the gap, often at the expense of time...) is misinformation. Sometimes the best thing a responder or incident manager can do is recognize that they don't know something, and then gather data and facts, and perform analysis, in order to fill the gap. The absolute worst thing that can happen is when those gaps are filled (or created) by speculation and blatantly incorrect information.
As an example, I've been involved in a number of malware-related incidents in which the corporate AV solution did not detect a bit of malware, or in which the customer was hit with a variant of a known malware family. In one instance, the IT staff captured a copy of the malware and ran "strings" on it, and then searched Google for one of the strings they found. On the second page of the Google search, they found a reference to a keystroke logger on a public forum. Assuming that this was 100% credible, they proclaimed that the malware had keystroke logging functionality...which immediately caused the Legal and Compliance department to fire off a brown star cluster (Marines will find that one humorous) and declare an emergency! After all, what started out as a quickly-eradicated malware issue now became one of potential sensitive data theft!
In this case, the information collected by the IT staff (i.e., the string) had no context. Where was the string located within the EXE file? In the resource section, or in the import table? Depending on which one, that string can affect incident response in vastly different ways.
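For example, GNU strings will print the offset of each string it finds (the -t x switch gives the offset in hex), which you can then compare against the PE section table to see where in the file a given string actually lives (the file name and search term here are hypothetical):

C:\tools>strings -t x malware.exe | find /i "logger"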
In another example, I was assisting an incident manager and providing advisory services when I was watching a couple of the IT staff assembled in the "war room". Two of the IT staff were talking about something related to the incident, and I heard one of them mention "keystroke logger". Given the incident, having a keystroke logger on systems would be very bad...you might even say, super bad. Another IT staffer who was working away on the other side of the room looked up when he heard this, and said, "the Trojan is a keystroke logger?" Right about that time, an IT manager walked into the room, heard this, and made the statement, "The Trojan is a keystroke logger." An hour later at a status meeting, the IT manager reported to the incident manager that a keystroke logger had been installed on systems on the network. Hint: during the ensuing hour, no one had done any examination of either the Trojan or any of the systems.
During incident response, the key to effectively managing an incident is knowing what you don't know, and doing what's necessary to fill that gap. Hint: Speculating ain't it!
Thursday, March 12, 2009
Bashing the competition....FAIL!!
A letter went out recently from someone at Guidance Software that...well...misrepresented some facts about the F-Response product. I understand that this is how some folks believe that business is done, and that's...well...sad. I'm not going to bash Guidance or their products; instead, I think that as someone who greatly appreciates the work that Matt has done, it is important to clear up some of the misrepresentations put forth in that letter, as some are a bit off, while others are just blatantly wrong.
The letter starts off with: F-Response is a utility that simply allows users to acquire the contents of remote computers and devices, but without any type of security framework, data analysis or forensic preservation capabilities.
F-Response is a tool-agnostic means of facilitating access to data on remote systems...Matt never intended for it to provide analysis or forensic preservation capabilities itself; the tools you already have do that. There are already enough bloated applications out there, why add another one? Why not instead simply provide a sound framework that allows you to do what you need to do? And don't get hung up on the term "sound"...if you're not willing to look into it for yourself, please don't argue the point.
Going on this way throughout the rest of the letter, point for point, would be obnoxious and boring. Instead, I'll address some of the other major points brought up in the letter, which include (but are not limited to):
Acquisition validation issues: Acquiring data using a new transfer method introduces an unknown into the acquisition that needs to be vetted by the industry and in the courts - How is new a bad thing? Of course things need to be vetted...EnCase needed to be vetted at one point. I'm not entirely sure I see the point to this "issue".
No logging capabilities - Of course F-Response doesn't have logging capabilities...that's not what it was designed for. This is like complaining that the hammer you brought can't be used to tighten or loosen bolts.
No end node processing - Again, F-Response wasn't designed to be yet another version of available tools; rather it was designed to give greater capabilities to those already possessing a number of the available tools; just watch the videos that are freely available.
Limited Volatile Data Collection - F-Response provides full access to physical memory, exposing it as a physical drive on the analyst's system. Mandiant's Memoryze is capable of directly accessing that physical drive. The contents of physical memory can also be acquired in raw (ie, "dd style") format and immediately imported into HBGary's Responder product with no conversion.
No Solaris, Mac, Linux, AIX, Novell: The solution is Windows only - F-Response currently supports Linux and Apple OSX 10.4, 10.5, with more coming. Characterizing F-Response as "Windows only" is blatantly incorrect.
Invasive compared to servlet - What is "invasive"? The F-Response Enterprise agent is only about 70KB. You're kidding, right?
Agent deployment is manual - Not with the F-Response Enterprise Management Console. It's easier for me to deploy F-Response EE to a dozen systems than it is for me to answer an email on my Blackberry.
No Encryption - F-Response can be used with Microsoft IPSec, and F-Response can be run over VPNs.
No compression - F-Response end points can be moved closer to the source machine, effectively reducing the need for compression. Also, compression is CPU-intensive, and wait a second, didn't the author of the letter just mention something about invasiveness??
All in all, the letter really goes a long way toward misrepresenting F-Response. Don't get me wrong...neither Matt nor F-Response need defending from me. Both are fully capable of standing on their own without any help from me. But when I see a misrepresentation as blatant as this, I really feel that it would be a disservice for this to go on without at least saying something.
Regardless of my opinions in the matter, I'll leave it to anyone reading this to choose for themselves.
Addendum: Looks like this post got picked up here (in Poland) and by Moyix, as well. Moyix raises some excellent points about the FUD surrounding Volatility...
Malware for Incident Responders - Examples
I thought that now would be a good time to take a step back and look at the malware characteristics that have been presented, and see how they can be used to understand malware so that such incidents can be prevented, detected, and better responded to, not only by first responders, but also by forensic analysts. During many malware incidents, you'll hear things like, "we cleaned the systems, but they keep getting infected", or "we don't know how the malware is infecting systems." My hope is that the characteristics will provide a framework that can be used to answer these and other questions.
Initial Infection Vector
One thing we're always curious about is how the malware originally got on a system or into an infrastructure. Many times we see malware on a system and find out that it makes its way through the network like a worm, exploiting a vulnerability or a specific set of vulnerabilities. However, many times, these vulnerabilities are mitigated at the perimeter, either by the fact that the ports are not accessible via the firewall, or the network is NAT'd behind the firewall, or something similar. So the question remains, how did the malware originally make its way into the network? Where is patient zero?
Initial infection vectors (IIVs) can come from a variety of sources. For example, a user opens an email attachment and either the worm kicks off or a downloader gets installed, which then reaches out and grabs the worm. Other IIVs can include the web browser, USB thumb drives, etc. Heck, even digital cameras can be the IIV for malware!
Propagation Mechanism(s)
Propagation mechanisms are how the malware "gets around" and infects other systems once it's in your infrastructure. Some malware...worms, in particular...like to propagate by exploiting vulnerabilities on the network. Some worms use just one exploit, and others actually use multiple exploits, going with the one that works. This may also be the initial infection vector, or it may be how the malware propagates once it's on your network.
Very often, malware will exploit operational business functionality in order to propagate. From the exploit example, worms will not only exploit vulnerabilities, but also exploit the operational business functionality of not patching systems in a timely manner. A lot of malware currently exploits the operational business functionality of having all users run with Administrator privileges. The release of Conficker has seen a move to exploit file sharing; by placing a copy of the malware at the root of a share, and adding an autorun.inf file, the malware exploits the operational business functionality of file servers (as well as the fact that MS dropped the ball with respect to the NoDriveTypeAutorun Registry setting).
Persistence Mechanism(s)
Persistence mechanisms are those artifacts that allow malware to survive a reboot. Jesse Kornblum's "Rootkit Paradox" paper points out that a rootkit (a form of malware) wants to remain hidden from view, but at the same time, it wants to run; extending that paradox just a bit, something similar can be said for most malware...it wants to run, but in most cases it also wants to remain persistent across reboots. As such, there are a finite number of ways that malware (any software, really) can do this on Windows systems. Most of us are familiar with some of the persistence mechanisms in the Registry, in particular the ubiquitous "Run" key.
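As a quick example of checking that mechanism during an exam, here's a minimal sketch using James Macfarlane's Parse::Win32Registry module (the same module RegRipper is built on) that dumps the Run key from a raw NTUSER.DAT file pulled from an image; the key path is the standard one, so adjust it if you're pointing this at a Software hive:

#!/usr/bin/perl
# runkey.pl - dump Run key entries from a raw NTUSER.DAT hive file
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "usage: runkey.pl NTUSER.DAT\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $key  = $reg->get_root_key
              ->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Run')
    or die "Run key not found\n";

print "LastWrite: ", scalar gmtime($key->get_timestamp), " Z\n";
foreach my $val ($key->get_list_of_values) {
    printf "  %-20s -> %s\n", $val->get_name, $val->get_data;
}

The LastWrite time of the key is worth grabbing while you're there...it can help anchor the infection in a timeline.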
Another popular means of ensuring persistence is to load the malware as a Windows Service, or in the case of kernel-mode rootkits, as a device driver (*.sys file). This can be further extended by not only loading the malware as a service, but loading it as a DLL under the Svchost service. Doing so has the effect of loading the malware and hiding it in plain sight; by that, I mean that if a responder lists the running processes, either through Task Manager or some other tool, the malware won't be readily visible, as it's running as part of another service.
A variant of this persistence mechanism that I've personally seen in the field is to install two services...one that's the actual malware, and another that checks to make sure that the malware is running after the system has been rebooted. I assisted a customer with a situation like this where they had identified one persistence mechanism, "cleaned" the system by removing the key and the files, and then rebooted...only to have the malware return. Identifying the other service allowed us to finally clean the systems.
Yet another persistence mechanism is to use Scheduled Tasks, also referred to as "AT jobs". Conficker is a recent example of malware that uses this persistence mechanism...the responder uses an AV vendor's tool to remove the malware file itself, as well as any Registry keys, but shortly after the malware is removed, it's back again!
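One quick way to spot that last one on a mounted image is simply to list the .job files under the Tasks folder along with their last-written times; a rough sketch (the X: mount point below is a made-up example):

#!/usr/bin/perl
# list .job files in the Tasks folder of a mounted image
use strict;
use warnings;

my $tasks = shift || 'X:/Windows/Tasks';   # hypothetical mount point
foreach my $job (glob("$tasks/*.job")) {
    my $mtime = (stat($job))[9];
    printf "%s  last written: %s\n", $job, scalar gmtime($mtime);
}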
Artifacts
Persistence mechanisms are themselves artifacts; what sets them apart is that they allow the malware to survive reboots and remain...well...persistent. Many times, malware will leave artifacts or "footprints" of its presence, either by design or simply by its interaction with the system. For example, some malware modifies the hosts file; this isn't a persistence mechanism, but it is a modification that the malware is designed to make. Some malware does this so that processes on the system that need to make network connections...AV updates, for example...can't successfully resolve the domains they need to reach for updates, etc.
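A quick check for that particular artifact doesn't take much; this sketch flags any hosts file entry beyond the default 127.0.0.1/localhost mapping, and the output is meant for human review, not automated verdicts:

#!/usr/bin/perl
# hostchk.pl - flag non-default hosts file entries (triage, not a verdict)
use strict;
use warnings;

my $hosts = shift or die "usage: hostchk.pl <extracted hosts file>\n";
open(my $fh, '<', $hosts) or die "$hosts: $!";
while (my $line = <$fh>) {
    $line =~ s/#.*//;                  # strip comments
    next if $line =~ /^\s*$/;
    my ($ip, @names) = split(' ', $line);
    foreach my $name (@names) {
        # the stock hosts file maps only localhost
        next if $ip eq '127.0.0.1' && lc($name) eq 'localhost';
        print "Check: $name -> $ip\n";
    }
}
close($fh);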
Other artifacts can include disabled or even deleted security applications and services, as well as modifications to the Windows Firewall to register the malware as an exception to the firewall rules, allowing it to communicate to the Internet.
Artifacts are not limited to host systems (ie, Registry entries, files created or modified, log file entries, etc.); artifacts can also be found in network traffic, or in log files on other systems or devices. For example, one of the big things I've seen in a number of malware write-ups is malware's "phone home" capability, reaching out either to an IRC server or requesting a URL so that the infected systems are logged in the web server logs. Many AV companies are listing the various URLs, and malware authors have been randomizing the domain selection, making it a full time job to develop and maintain blacklists on firewalls and web proxies. However, the point is that while these URLs or IP addresses do change, they are transient artifacts that are found in the network connections on the system, as well as in network traffic.
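On that note, even a crude keyword pass over web server or proxy logs can surface the "phone home" traffic; here's a bare-bones sketch that greps logs for a watchlist of domains (the watchlist entries below are placeholders...populate the list from the AV write-ups for the malware you're chasing):

#!/usr/bin/perl
# logchk.pl - scan web/proxy logs for watchlisted domains
# usage: logchk.pl access.log [more logs ...]
use strict;
use warnings;

my @watch = qw(bad-example-domain.cn another-example.tw);  # placeholders
my $re = join('|', map { quotemeta } @watch);

while (my $line = <>) {
    print $line if $line =~ /$re/io;
}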
The examples I've provided are not a complete list by any means. However, they are meant to illustrate a framework that can be used to understand and address malware from several perspectives, including initial triage of an incident, on-site or first response to a known or as-yet-unidentified malware incident, or forensic analysis following an incident. Once the framework is in place, there are a number of resources that can be used to fill in the gaps, if you will: vendor-provided analysis, static/dynamic analysis of the malware itself, or analysis of captured network traffic and log data.
Monday, March 09, 2009
RegRipper in Action!
I received an email from an enthusiastic user of RegRipper today, pointing me to a blog post he'd written on his experiences with the tool. I don't read Spanish, but I am really glad to see others using the tool. This post led me to another post explaining the use of RegRipper at NeoSysForensics.
Also, from his blog, I found a link to Moyix's example output of RegRipper, apparently run against hive files in memory using the RegRipper/Volatility prototype.
Saturday, March 07, 2009
Malware for Incident Responders - FakeXPA
I've mentioned here before that many times it's very difficult for experienced incident responders to really get their heads wrapped around a malware issue, such as an infection in a corporate infrastructure. As difficult as it is for folks who do this kind of work, imagine how difficult it can be for the IT staffers managing that environment and attempting to recover from such an infection. I recently spent some time working with someone who'd been hit with a variant of a 9 year old dropper. Yes, that's right...9 years old. The secondary download was about 2 years old, and their AV product was able to detect these baddies...well, as long as the AV product installed was up-to-date and enabled. No extra.dat files were required.
Again, as I've mentioned before, one of the issues faced by responders (IT staff, as well as guys like me) is a lack of information from AV companies...most do not provide necessary information to perform detection, eradication and recovery effectively, simply because they're not in the business to do so. A commercial AV company's goal is to get you to use their product, so their response is to attempt to get a copy of the variant, develop a definition or signature for it, and see if that signature works for you...all while you just want it gone!
As a side note, one of the things that software vendors, in general, don't realize is this...by now, most folks are aware of the fact that things won't always be perfect. All products will have flaws, and most consumers know that. So the key at this point is, when someone does have a problem, how do you respond? Customers are happiest with the first person that feels their pain and really helps them.
Okay, enough of that. I thought that after our last example, it might be a good idea to take a look at some of the malware that comes out, and take a look at it from the perspective of an incident responder (and subsequently, a forensic analyst). An example I ran across today was W32/FakeXPA. Let's take a look at this from the perspective of the framework we've already put together...
First off, W32/FakeXPA is a "rogue" security product that infects a system and makes the user believe that they have some sort of issue on their system (or, conversely, that they have no issues at all). Other similar malware includes Trojan:W32/AntivirusXP. Fortunately, Trojan:W32/FakeXPA was added to MRT in December 2008...so there's some protection there.
Initial Infection Vector
No specific infection vector is identified. MS says that "various methods" are used to infect systems, so responders should look to either email attachments or a secondary download based on an initial dropper via email or the web browser. However, this also does not rule out other infected media (thumb drives, CDs) as the transport mechanism.
Artifacts
MS's write-up on W32/FakeXPA states that among other things, this malware may add an "XP Antivirus" value to the HKCU\Software key. The most recent information available from the MMPC also states that a recent variant modifies the hosts file.
Other artifacts may include an entry in the user's Start Menu folder (ie, "%UserProfile%\Start Menu\XP Antivirus XXXX").
Some of the different variants have different artifacts. For example, from MS's write-up, the 2010 variant installs as a BHO, as well as adding a subdirectory and files to the "C:\Documents and Settings\All Users\Application Data" directory.
While the "under the hood" artifacts of an infection vary, the commonality is that the user will see the fake antivirus GUI giving alerts to various threats detected on the system. Most peoples reaction will be, "yes, get rid of all of it!!", but responders need to look to whether or not the alerts are coming from the legit installed AV product or not. See the MS write-ups for screenshots of some of the variants.
Propagation Mechanism
This particular bit of malware doesn't appear to propagate once it's on a system. This is classified as a Trojan, not a worm.
Persistence Mechanism
According to the MS write-up, W32/FakeXPA writes a file to the %ProgramFiles% or "%ProgramFiles%\XP Antivirus" directory and creates an entry in the user's (HKCU hive) Run key. This appears to be the common persistence mechanism across the variants (the 2010 variant also includes a BHO), and provides responders with a great way to check individual machines, systems within a domain, as well as acquired images for indications of an infection.
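Putting that to use, a responder could sweep NTUSER.DAT files pulled from images (or accessed live via F-Response) for Run key entries that reference "XP Antivirus"; here's a minimal sketch along the same lines as before, again using Parse::Win32Registry, with the string match based solely on the MS write-up:

#!/usr/bin/perl
# fakexpa.pl - flag Run key entries referencing "XP Antivirus"
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "usage: fakexpa.pl NTUSER.DAT\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $run  = $reg->get_root_key
              ->get_subkey('Software\\Microsoft\\Windows\\CurrentVersion\\Run')
    or die "Run key not found\n";

foreach my $val ($run->get_list_of_values) {
    my $data = $val->get_data;
    print "Possible FakeXPA autostart: ", $val->get_name, " -> $data\n"
        if $data =~ /XP\s*Antivirus/i;
}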
Interestingly, the latest MMPC write-up ends with a listing of SHA-1 hashes for the latest variant. Come on, guys...not only do many products rely on MD5 hashes (EnCase, Gargoyle, etc.), but using static hashes in the face of malware that varies just does not make sense! Why not use fuzzy hashes?
Resources
VirusTotal page for one variant of FakeXPA (see how other AV vendors are detecting it)
ThreatExpert (2009 variant, A360 variant)
Another rogue "security product"
Final Note
Look at some of the differences between the various write-ups...for example, compare the write-up on the ThreatExpert 2009 variant to the MMPC Security Portal write-up for the same variant. Both identify some of the files and Registry keys left by the malware, but ThreatExpert identifies the packer, which means another avenue of detection may be Scout Sniper, from the illustrious Don Weber!
Thursday, March 05, 2009
Working with emails
An interesting question popped up on one of the lists yesterday, and I just sort of watched it to see what happened, and how others would respond. The original question had to do with parsing emails, from one format to another. Like many examiners, I've had to deal with this sort of thing myself, looking at data exfiltration or misuse/abuse of corporate assets by an employee. Many times this sort of question really comes down to, how can you parse email messages from one format to another, in order to perform searches of either the email text or of attachments?
The results of the posts to the list are encapsulated below in a list of email conversion tools.
Email Conversion Tools
Emailchemy
Vound Software Intella
Aid4Mail
AvTech - Beware...Perl ahead!
Email Conversion Tools
Another respondent suggested Google searches for "eml to mbox bulk" and "msg to mbox bulk"
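For a quick-and-dirty "eml to mbox bulk" conversion, you don't necessarily need a commercial tool; the mbox format is simple enough that a few lines of Perl will do for basic cases. This sketch uses the common convention of prefixing "From " lines in the message body with ">", and doesn't attempt to parse dates out of the headers...so treat it as a starting point, not a finished utility:

#!/usr/bin/perl
# eml2mbox.pl - naive bulk .eml to mbox conversion
# usage: eml2mbox.pl outfile.mbox file.eml [more .eml files ...]
use strict;
use warnings;

my $out = shift or die "usage: eml2mbox.pl outfile.mbox file.eml ...\n";
open(my $mbox, '>', $out) or die "$out: $!";
foreach my $eml (@ARGV) {
    open(my $fh, '<', $eml) or do { warn "$eml: $!"; next };
    print $mbox "From eml2mbox ", scalar gmtime, "\n";   # mbox separator
    while (my $line = <$fh>) {
        $line =~ s/^(>*From )/>$1/;    # escape "From " lines in the body
        print $mbox $line;
    }
    print $mbox "\n";                  # blank line between messages
    close($fh);
}
close($mbox);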
So...what do you do? What tools do you use?
Addendum: An alert reader mentioned a free (as in "beer") tool called Mail Cure for Outlook Express (described here) that can reportedly recover emails from damaged or deleted dbx files. Very cool!
Wednesday, March 04, 2009
Malware Characteristics - An Example
In my last post, we took a look at some ways to do malware detection, and in that post, I presented four general characteristics of malware that can be used to detect and deal with many of the issues that we run into. I thought that a good way to get discussion on this started would be to run through an example, and the nice folks at the MMPC popped up a great one today...%LnkGet%. Let's take a look at the write-up on %LnkGet% and see how we can apply the four characteristics, and see if a response methodology (and even an analysis methodology) begins to evolve...
Initial Infection Vector
Okay, the MMPC describes this bit of malware as a Trojan Downloader that, when run, downloads other files. Also according to the MMPC, the initial infection vector is to arrive as an email attachment, so one way to begin looking for indications of this is to search for Windows shortcut (*.lnk) files in email attachment directories; remember, this may also include web-based email tools, as well, so looking for .lnk files in the web cache may be a good idea, too.
Artifacts
The artifacts of this malware are pretty straightforward...first off, the main file involved is a Windows shortcut/*.lnk file. Yes, there can be a great number of these on a Windows system, but as an analyst, what you'll be looking for is a .lnk file in an unusual place (ie, email attachment or web browser cache directory). In addition, this .lnk file downloads a .vbs script that then downloads additional malware.
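To put that into practice against a mounted image, a short File::Find loop will surface shortcut files under whatever attachment and cache directories you point it at (profile paths vary from system to system, so the directories are passed on the command line):

#!/usr/bin/perl
# lnkfind.pl - flag shortcut files under mail/browser cache directories
use strict;
use warnings;
use File::Find;

die "usage: lnkfind.pl <dir> [dir ...]\n" unless @ARGV;

find(sub {
    return unless -f $_ && /\.lnk$/i;
    my $mtime = (stat($_))[9];
    printf "%s  last written: %s\n", $File::Find::name, scalar gmtime($mtime);
}, @ARGV);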
From a network perspective, the downloading may leave artifacts in log files; as you can see from the write-up, there are a number of sites involved which resolve to .cn and .tw domains.
Propagation Mechanism
This bit of malware is described as a downloader, and doesn't propagate on its own. However, as a downloader, it may download additional malware that does propagate (ie, a worm of some kind).
Persistence Mechanism
This bit of malware doesn't seem to need a persistence mechanism, because once it's downloaded the additional malware, it's done. The MMPC does state that while the shortcut files do try to disguise themselves through the use of icons, they apparently do not delete themselves once they've completed their task(s).
What this tells us...
1. Block .lnk file attachments.
2. Have a written policy and educate users against launching arbitrary files that they receive, regardless of the source.
3. Develop a CSIRP and train your responders (this is a subject for a whole other series of posts).
My reason for looking at things like this is that AV companies are not incident responders. People who discover vulnerabilities and publish exploits are not incident responders. In most cases, companies such as these provide information that they think is important, but very often what they provide is not sufficient for their users and customers to incorporate into their risk management and incident response planning. In this regard, I really think that these companies are doing their customers a huge disservice.
Tuesday, March 03, 2009
Looking for "Bad Stuff", pt III (Malware Detection)
So far, I've posted twice (see Resources below) on this subject, each time giving sort of a general overview of a process, but without getting down-and-dirty. One of the difficult things about putting together a process or a "best practice" is that very often, what's "best" for one person (or group) may not be "best" for another. So, with respect to malware detection (not analysis...at this point, we're still looking for the "bad stuff"), there are several techniques you can use when going about this sort of process.
First, many times we may be looking for malware for different reasons. One reason would be that we suspect that a system may be infected with malware, while another may be that by looking for malware, we're attempting to nail down an intrusion or compromise (like following the trail of artifacts back to the original compromise vector), with the malware being a byproduct of the intrusion.
Another thing to keep in mind is that malware, in general, has four main characteristics:
1. An initial infection vector - how it got on the system in the first place; this can be through browser download (even on a secondary or tertiary level), email attachment, etc. Conficker, for example, can infect a system when the user opens Explorer on a drive (USB thumb drive, mapped share, etc.) that has been infected. In some SQL injection compromises, I've seen malware placed on a system by the intruder sending tftp commands or creating and launching an FTP script, all via SQL injection. I've also seen the bad guy load the malware into a database table in 512 byte chunks, and then have the database reassemble the file in the file system so they could launch it.
2. Artifacts - what actions does the malware take upon infection and what footprints does it leave? Many times, we can determine these ourselves through dynamic malware analysis, but often it's sufficient (and quicker) to use what's available from AV sites. Sometimes these "footprints" can be unique to a malware family (Conficker, for example). Also, these artifacts do not have to be restricted to a host; are there any network-based artifacts that you can use when analyzing logs?
3. Propagation Mechanism - How does the malware get about? Is it a worm that exploits a known (or unknown) vulnerability? Or is it like Conficker, infecting files at the root of drives and adding autorun.inf files? Understanding the propagation mechanism can help you fight the tide, as it were, or develop a mechanism to block or detect further infections.
4. Persistence Mechanism - As Jesse Kornblum points out in his "Rootkit Paradox" paper, malware likes to remain persistent, and the simple fact is that there are a finite number of ways to do that on a Windows system. The persistence mechanism can relate back to Artifacts; however, this would be an artifact specifically intended to allow the malware to survive reboots.
These characteristics act as a framework to help us visualize, understand, and categorize malware. Over the years, I have used these four characteristics to track down malware and help others do the same. In one instance in particular, after a customer had battled with a persistent (albeit fairly harmless) worm for over a month, I was told that they would delete certain files, reboot the system, and the files would be back. It occurred to me that they hadn't adequately tracked down the persistence mechanism, and once we found it, they were able to clean their systems!
Okay, so how can we go about tracking down malware, detecting its presence? I'm going to start with the idea that we have an acquired image, and we need to determine if there's malware on the system. I'm going to list several mechanisms for doing so, and these are not listed in order of priority. It will be incumbent upon you, the reader, to determine which steps work best for you, and in which order...that said, away we go!
Targeted Artifact Analysis
A lot of times, we may not know exactly what we're looking for, but if we know the persistence mechanism or other artifacts of malware, we can do a quick, surgical scan for that malware. Tools such as RegRipper can make this a fast and extremely easy process (remember, for live systems, you can use RegRipper in combination with F-Response!). Take Conficker...while there are changes in artifacts based on the variant, the set of unique artifacts is pretty limited. As the variants have changed so as to evade both AV scans and hash comparisons (at this point, everyone should be aware that hash comparisons for malware are marginally less effective than AV scanning with a single engine), artifacts have remained fairly static (Registry modifications) with some new ones (Scheduled Task) being added. The addition of unique artifacts helps narrow down the false positives.
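As an example of this kind of surgical check, here's a sketch that dumps the netsvcs group from a Software hive, so that an odd, random-looking service name (one artifact that has been described publicly for some Conficker variants) stands out. One caveat: I believe Parse::Win32Registry returns REG_MULTI_SZ data as a list in list context, but verify that against the version you have installed:

#!/usr/bin/perl
# netsvcs.pl - list the Svchost netsvcs group from a Software hive
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "usage: netsvcs.pl SOFTWARE\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Cannot parse $hive\n";
my $key  = $reg->get_root_key
              ->get_subkey('Microsoft\\Windows NT\\CurrentVersion\\Svchost')
    or die "Svchost key not found\n";
my $val  = $key->get_value('netsvcs') or die "netsvcs value not found\n";

# REG_MULTI_SZ data should come back as a list of service names
print "$_\n" for $val->get_data;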
Log Analysis
There are a number of logs on Windows systems that may provide some insight into malware detection. For example, maybe the installed AV product detected and quarantined a tertiary download...depending on the product, this may appear in the AV product logs as well as the Event Log. Or perhaps the AV scanner's real-time protection mechanism was disabled and the user ran a scan at a later time that detected the malware. Either way, check for an installed AV or anti-spyware product, and check the logs. Also, examine the Event Logs. And don't forget mrt.log!
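Log formats vary between AV products (and between versions of the tools), so sometimes the fastest first look is a crude keyword pass that surfaces lines of interest for a human to review...nothing more sophisticated than this:

#!/usr/bin/perl
# avlog.pl - crude triage pass over AV logs or mrt.log; review output by hand
# usage: avlog.pl mrt.log [more logs ...]
use strict;
use warnings;

while (my $line = <>) {
    print $line if $line =~ /detect|infect|quarantin|remov/i;
}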
Scans
Another way to go about detecting the presence of malware on systems is to scan for it using AV products. Yes, there are commercial AV products available, but as many have seen over the past couple of months, particularly with Conficker and Virut, sometimes using just one commercial AV product isn't enough. The key to running scans is to know what the scan is looking for so that you can better interpret the results.
For example, look at tools such as sigcheck and missidentify; both are extremely useful, but each tool looks for certain things. Another scanning tool that can be extremely useful is Yara, and anyone looking at using Yara should consider using the Yara-Scout Sniper release from the illustrious Don Weber! Yara can use packer rules (from the public PEiD signatures) to detect packed files, and Don has added fuzzy hashing to Scout Sniper.
As a side note, while fuzzy hashing is obviously predicated on having a sample of the malware to hash, it is still much preferable to "normal" hashing using MD5 or SHA-1 hashes. In one instance, I had two examinations about 8 months apart where I found files of the same name on both. Traditional (MD5) hashes didn't match, but using ssdeep, I was able to determine that the files were 99% similar.
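If you haven't tried ssdeep yet, the workflow is simple: hash the known sample with something like "ssdeep -b known.exe > known.txt", then on the later exam run "ssdeep -m known.txt suspect.exe". The -m switch matches against the file of known hashes and reports a similarity score, and -b just strips directory paths from the output; the file names here are made up for illustration.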
So, other than scanning for not-normal files (with "normal" being somewhat amorphous), there are other ways to scan for possible malware infections. With the amount of malware that subverts Windows File Protection (WFP) in some manner, tools like wfpcheck can be used to determine if something on the system modified any of the "protected" files.
But again, keep in mind that scanning in general is a broad-brush approach, and scans don't find everything. The idea is to have some idea of what you're looking for, and then to select the proper tool (or tools) to build a comprehensive process. As part of that process, you'll need to document what you did, what you looked for, and what tools you used...because without that documentation, how do you describe what you did in a repeatable manner, and how do you go about improving your process in the future?
Resources
Looking for "Bad Stuff", pt I and pt II
Claus's post on Portable AV tools
Free-av.com
Free AVG
PCTools
Monday, March 02, 2009
WFA 2/e is on its way!
Windows Forensic Analysis, second edition, is on its way!
Today is the day that everything was due, and for the most part I think that everything is in. At this point, all that's really left to do is for me to wait to see if the publisher sends me any mastered chapters in PDF format to review, but beyond that, it's simply a matter of waiting. As soon as I know when the book will be available, and in what formats, I'll let you know.
Eoghan Casey deserves a great big, huge thanks for his efforts as a technical editor. He put in a lot of work and had a lot of great suggestions, not all of which I had the time to really take advantage of; nonetheless, I greatly appreciate Eoghan's efforts in reviewing the materials, and I'm sure the readers will, too.
Now, a lot of you are going to ask me (and have been asking me), what's new in this edition? First off, this isn't a new book, it's a second edition, so I used the first edition as a starting point. All of the chapters have been updated to some degree; some just a bit, because the information still holds, and others were pretty heavily updated (ch. 3, 4, and 5) due to changes that have occurred since the first edition was published.
For example, there are a lot of references to and discussion of Matt Shannon's F-Response, particularly the Enterprise Edition. I spent a good deal of time writing a step-by-step process for deploying F-Response EE remotely, and then just as I was getting ready to send that chapter in to the publisher, Matt came up with the FEMC! With the FEMC, any analyst or responder with an F-Response EE dongle now has an enterprise capability that is as easy to deploy remotely (and in a stealthy manner) as it is to play Solitaire!
Chapter 3 on Memory Analysis has been heavily updated to include tools such as Volatility, HBGary's Responder and Mandiant's Memoryze. Unfortunately, all three went through some updates fairly recently, after the chapter was sent in to the publisher.
Chapter 4, Registry Analysis, has been very heavily updated, particularly since RegRipper plays such an important part in that chapter. Beware, Eoghan feels that this chapter is a bit of a "marathon" for the reader, and I agree...but there simply wasn't enough time to address that...so consider it a reference tome. ;-)
Chapter 5, File Analysis, was pretty heavily updated, to include more information on some topics (such as SQL injection in IIS web server logs), as well as information on files from Vista, etc.
Yes, I've added more information on Vista and even dipped a bit into Windows 7.
I've also added two additional chapters; chapter 8, Tying It All Together, is meant to bridge the gap imposed by many of the chapters. For example, one chapter talks about memory analysis, another about the Registry, and yet another about files on the system...but chapter 8 is where I've added case studies or war stories, illustrating how these different areas of analysis can be tied together to build a comprehensive picture of your incident or case.
Chapter 9, Performing Analysis on a Budget, isn't meant to tell the reader not to use commercial forensic analysis applications; not at all...I still like ProDiscover. However, the fact is that analysis isn't about the tool, it's about the process. Some folks need to see what tools are out there in order to expand their process...that's cool. Others may want to know what's possible, and then be able to pick from a list of tools (or like me, develop their own...). This chapter is not only meant for hobbyists who want to learn more, university students, and maybe LE, but it's also meant to show everyone that there are other things out there besides...well...fill in the name of your favorite application. ;-)
Now, some of the things that aren't in the book...first, any updates to the material that is in the book that occurred in the last week or so. This includes some of the stuff I've blogged about, such as Moyix's new and amazing feats! Another thing that really isn't in the book is the timeline analysis stuff I've been blogging about...I only got time to work on that after the manuscript was sent in. And finally, the stuff you're just now thinking about isn't in the book...sorry! ;-)
That being said, as soon as I get more information about when the book will actually be available and in bookstores, I'll let you know.
Today is the day that everything was due, and for the most part I think that everything is in. At this point, all that's really left to do is for me to wait to see if the publisher sends me any mastered chapters in PDF format to review, but beyond that, it's simply a matter of waiting. As soon as I know when the book will be available, and in what formats, I'll let you know.
Eoghan Casey deserves a great big, huge thanks for his efforts as a technical editor. He put in a lot of work and had a lot of great suggestions, not all of which I had the time to really take advantage of; nonetheless, I greatly appreciate Eoghan's efforts in reviewing the materials, and I'm sure the readers will, too.
Now, a lot of you are going to ask me (and have been asking me), what's new in this edition? First off, this isn't a new book; it's a second edition, so I used the first edition as a starting point. All of the chapters have been updated to some degree; some just a bit, because the information still holds, and others (ch. 3, 4, and 5) were pretty heavily updated due to changes that have occurred since the first edition was published.
For example, there are a lot of references to and discussion of Matt Shannon's F-Response, particularly the Enterprise Edition. I spent a good deal of time writing a step-by-step process for deploying F-Response EE remotely, and then just as I was getting ready to send that chapter in to the publisher, Matt came up with the FEMC (the F-Response Enterprise Management Console)! With the FEMC, any analyst or responder with an F-Response EE dongle now has an enterprise capability that is as easy to deploy remotely (and in a stealthy manner) as it is to play Solitaire!
Chapter 3, on Memory Analysis, has been heavily updated to include tools such as Volatility, HBGary's Responder, and Mandiant's Memoryze. Unfortunately, all three went through updates fairly recently, after the chapter was sent in to the publisher.
Chapter 4, Registry Analysis, has been very heavily updated, particularly since RegRipper plays such an important part in that chapter. Be forewarned: Eoghan feels that this chapter is a bit of a "marathon" for the reader, and I agree...but there simply wasn't enough time to address that, so consider it a reference tome. ;-)
Chapter 5, File Analysis, was pretty heavily updated to include more information on some topics (such as SQL injection in IIS web server logs), as well as information on files from Vista, etc.
Yes, I've added more information on Vista and even dipped a bit into Windows 7.
I've also added two additional chapters. Chapter 8, Tying It All Together, is meant to bridge the gaps between many of the chapters. For example, one chapter talks about memory analysis, another about the Registry, and yet another about files on the system...chapter 8 is where I've added case studies and war stories illustrating how these different areas of analysis can be tied together to build a comprehensive picture of your incident or case.
Chapter 9, Performing Analysis on a Budget, isn't meant to tell the reader not to use commercial forensic analysis applications; not at all...I still like ProDiscover. However, the fact is that analysis isn't about the tool, it's about the process. Some folks need to see what tools are out there in order to expand their process...that's cool. Others may want to know what's possible, and then be able to pick from a list of tools (or like me, develop their own...). This chapter is not only meant for hobbyists who want to learn more, university students, and maybe LE, but it's also meant to show everyone that there are other things out there besides...well...fill in the name of your favorite application. ;-)
Now, some of the things that aren't in the book...first, any updates to the material that is in the book that occurred in the last week or so. This includes some of the stuff I've blogged about, such as Moyix's new and amazing feats! Another thing that really isn't in the book is the timeline analysis stuff I've been blogging about...I only got time to work on that after the manuscript was sent in. And finally, the stuff you're just now thinking about isn't in the book...sorry! ;-)
That being said, as soon as I get more information about when the book will actually be available and in bookstores, I'll let you know.
Extreme Coolness
Moyix has done it again! Not only has he updated his Volatility modules for retrieving Registry data from memory, but he's also developed a means to run RegRipper against a memory image! This was also picked up on SANS ISC. Very, VERY cool! Check it out and give it a try...
Sunday, March 01, 2009
Timeline Analysis, pt IV
My last post on this subject was starting to get a bit long, so rather than add to it, I thought it would be best to write another post.
I think that in the long run, Michael was correct in his comment that a database schema would be needed. However, for the time being, there is enough here to provide a bit of insight to an analyst, albeit one with some programming ability. Okay, so here's how this can be used all together right now...
You have an acquired image, and you run Brian's TSK tool fls against it to get a body file. You can then modify the mactime utility to filter the body file into a secondary file containing the five fields described in part III; at this point, we have a file in the .tln format, built from the file system metadata. You can also extract files from within the acquired image itself (Event Logs, Registry hives, etc.) and run them through the same filtering process. For example, evt2xls has such a filtering capability for .evt files...I ran it against a SysEvent.evt file and got entries that look like the following:
1092790209|EVT|PETER||EventLog/6006;EVENTLOG_INFORMATION_TYPE;
1092790237|EVT|PETER||EventLog/6009;EVENTLOG_INFORMATION_TYPE;5.01.2600 Service Pack 1 Uniprocessor Free
1092790237|EVT|PETER||EventLog/6005;EVENTLOG_INFORMATION_TYPE;
As you can see from the three .tln file entries, we have fields for the date/time value based on the Unix epoch (GMT, of course), source (EVT), host or system ("Peter", taken from Ender's Game), user (blank, in this case), and the description of the event. All fields are pipe-separated.
Quick segue...the description field for the above events is semicolon-separated, starting with the event source and ID fields from the event record. I do this because when I have a question about Windows event records, I'll start by going to EventID.net, where you can look up events by...well...source and ID. There are other ways to look up what an event record may be indicating, including searching on MS or Google, but that's for a different post altogether.
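Going back to the file system step for a second...just to make it concrete, here's a minimal sketch (my quick-and-dirty illustration, NOT the modified mactime code itself) of what filtering a body file into the five-field .tln format might look like. I'm assuming the TSK 3.x body file layout here, and "bodyfile2tln.pl" is just a name I made up for the example:

#!/usr/bin/perl
# bodyfile2tln.pl - a minimal sketch, not the modified mactime code.
# Filters a TSK 3.x body file read from STDIN into the five-field .tln
# format; assumes the TSK 3.x layout:
#   MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
use strict;

my $host = shift || "HOST";   # host name for the third .tln field

while (<STDIN>) {
    chomp;
    my ($md5,$name,$inode,$mode,$uid,$gid,$size,
        $atime,$mtime,$ctime,$crtime) = split(/\|/);
    next unless (defined $crtime);   # skip malformed lines
    # one .tln entry per non-zero timestamp; tag the description so the
    # analyst can see which MACB value generated the entry
    print $mtime."|FILE|".$host."||M... ".$name."\n"  if ($mtime);
    print $atime."|FILE|".$host."||.A.. ".$name."\n"  if ($atime);
    print $ctime."|FILE|".$host."||..C. ".$name."\n"  if ($ctime);
    print $crtime."|FILE|".$host."||...B ".$name."\n" if ($crtime);
}

You'd run something like "fls -r -m C: image.dd | perl bodyfile2tln.pl PETER > fs.tln" to get your file system events into the same format as everything else.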
Now, say you wanted to write a filter for events from IIS web server logs or the like. Michael wrote such a filter for McAfee AV logs in ex-tip; converting the time values in the logs to Unix epoch times is pretty straightforward using the Perl Time::Local module, applying time zone settings as necessary to normalize everything to GMT. Doing so gives us an easy means of comparison.
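As an illustration, here's a minimal sketch of that conversion for a W3C extended format IIS log entry. The log line, the "WEBSRV" host name, and the field handling below are all just assumptions for the example...real entries carry whatever fields the #Fields header says they do:

#!/usr/bin/perl
# minimal sketch: convert the date and time fields from a W3C extended
# format IIS log entry into a Unix epoch value; W3C format logs are
# written in GMT by default, so Time::Local's timegm() handles the
# normalization with no additional time zone math
use strict;
use Time::Local;

# hypothetical log entry for the example
my $line = "2009-03-01 10:15:30 GET /default.htm 200";
my ($date,$time,$desc) = split(/ /,$line,3);
my ($yr,$mon,$day) = split(/-/,$date);
my ($hr,$min,$sec) = split(/:/,$time);
my $epoch = timegm($sec,$min,$hr,$day,$mon - 1,$yr);   # month is 0-based
print $epoch."|IIS|WEBSRV||".$desc."\n";               # WEBSRV = placeholder host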
So, after running filters and creating output files, you may end up with several such .tln files in a subdirectory within your analysis file structure. At that point, if you wanted to...say...locate all events within a specific time frame, you could enter the dates into a script, have the script do the conversion and searching, and then display all of the events appropriately, where "appropriately" could mean either a text file or some sort of graphical output, such as Simile.
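Again, just to illustrate...here's a minimal sketch of such a search script ("tlnsearch.pl" is a made-up name, and it assumes you've already converted your window boundaries to Unix epoch times):

#!/usr/bin/perl
# tlnsearch.pl - minimal sketch: given start and end times (Unix epoch),
# pull every entry from the .tln files in the current directory that
# falls within the window, and print them sorted by time
use strict;

my ($start,$end) = @ARGV;
die "usage: tlnsearch.pl <start> <end>\n" unless (defined $end);

my @events;
foreach my $file (glob("*.tln")) {
    open(FH,"<",$file) || next;
    while (<FH>) {
        chomp;
        my ($t) = split(/\|/,$_,2);
        push(@events,$_) if ($t =~ /^\d+$/ && $t >= $start && $t <= $end);
    }
    close(FH);
}
# numeric sort on the first (time) field
foreach my $e (sort { (split(/\|/,$a,2))[0] <=> (split(/\|/,$b,2))[0] } @events) {
    print $e."\n";
}

From there, dumping the sorted events into a text file or massaging them into something Simile can consume is just a matter of output format.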
You may be asking questions like, "What about specific events that I'd like to add to a .tln file that may not be something from within the acquired image, or that I just want to add myself?" No problem...GUIs are easy to write, whether you're using Perl or Python or whatever...heck, you could even add an event using Notepad! The key to all of this is that you still have the raw data, you've performed filtering and data reduction without modifying that raw data, and you're able to narrow down your analysis a bit.
The next step, I guess, would be to put something like this together as a complete package, and run it against an available image.
Addendum: This afternoon I updated recbin.pl, a Perl script I wrote for Windows Forensic Analysis to parse INFO2 files. The update I added was the ability to write timeline information in the format I'm using, as well as allow the analyst to add values for the system (host name) and the user (SID or username). Works like a champ!
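For anyone curious about what's going on under the hood, the key piece is converting the 64-bit FILETIME deletion time in each INFO2 record to a Unix epoch value. Here's a minimal sketch of one way to do it...the DWORD values, user name, and path below are made up for the example, and this isn't necessarily the exact code in recbin.pl:

#!/usr/bin/perl
# minimal sketch: a 64-bit FILETIME (100-nanosecond intervals since
# 1 Jan 1601) becomes a Unix epoch time (seconds since 1 Jan 1970)
use strict;

sub filetimeToEpoch {
    my ($lo,$hi) = @_;                   # DWORDs, e.g. from unpack("VV",...)
    my $ft = ($hi * 4294967296) + $lo;   # reassemble the 64-bit value
    my $t  = int($ft / 10000000) - 11644473600;  # 100-ns -> sec, 1601 -> 1970
    return ($t < 0) ? 0 : $t;
}

# hypothetical DWORD values; in practice these would come from the
# 8-byte deletion time field in each INFO2 record
my $epoch = filetimeToEpoch(0xd53e8000, 0x01c99a3b);
print $epoch."|INFO2|PETER|jdoe|DELETED - C:\\Documents and Settings\\jdoe\\file.txt\n";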