I recently ran across a very interesting paper by Dan Guido titled "A Case Study of Intelligence-Driven Defense". Dan's research points out the fallacy of the current implementation of the Defender's Paradigm, and how attackers, however unknowingly, are exploiting this paradigm. Dan's approach throughout the paper is to make his very valid points based on analysis of information, rather than on vague statements and speculation.
In the paper, Dan takes the stance, in part, that:
- Over the past year, MITRE's CVE identified and tracked more than 8,000 vulnerabilities
- Organizations expend considerable resources (man-power, money, etc.) attempting to address all of these vulnerabilities
- In 2010, only 13 vulnerabilities "were exploited to install SpyEye, Zeus, Gozi, Clampi and other info-stealing Trojans in massive exploitation campaigns".
As such, his point is that rather than focusing on compliance and having to address all 8K+ vulnerabilities, a more effective use of resources would be to put more focus on those vulnerabilities that are actually being included in the crimeware packs for mass malware distribution. Based on the information that Dan's included in the paper, this approach would also work for targeted attacks by many of the "advanced" adversaries.
Dan goes on to say:
"Analysis of attacker data, and a focus on vulnerabilities exploited rather than vulnerabilities discovered, might yield more effective defenses."
This sounds very Deming-esque, and I have to agree. Admittedly, Dan's paper focuses on mass malware, but in many ways, I think that the approach Dan advocates can be used across a number of other data breach and compromise issues. For example, many of the folks working PCI forensic audits are very likely still seeing a lot of the same issues across the board...the same or similar malware placed on systems that are compromised using some of the same techniques. So, Dan's approach to intel-driven defense can be applied just as effectively to other aspects of DFIR beyond mass malware infections.
Through the analysis he described in his paper, Dan was able to demonstrate how focusing on a few, low-cost (or even free), centrally-managed updates could have significantly reduced the attack surface of an organization and limited, inhibited, or even stopped mass malware infections. The same approach could clearly be applied to many of the more targeted attacks, as well.
Where the current approach to infosec defense falls short is the lack of adequate detection, response, and analysis.
Detection - Look at the recent reports available from TrustWave, Verizon, and Mandiant, and consider the percentage of their respective customers for whom "detection" consisted of third-party notification. Simply determining that an organization is infected or compromised at all can be difficult. Detection often requires that monitoring be put in place, which can seem daunting and expensive to those who don't already have it. However, tools like Carbon Black (Cb) can provide a great deal of ROI, as Cb is not just a security monitoring tool. When used as part of a security monitoring infrastructure, Cb retains copies of binaries (many of the binaries downloaded to infected systems are run and then deleted) as well as information about the processes themselves...information that persists after the process has exited and the binary has been deleted as part of the attack process. The next version of Cb will include Registry modifications and network initiations, which means that the entire monitored infrastructure can then be searched for other infected systems from a central location, without adding anything to the systems themselves.
Response - What happens most often when systems are found to be infected? The predominant response methodology appears to be that systems suspected to be compromised or infected are taken offline, wiped, and the operating system and data are reinstalled. As such, critical information (and intelligence) is lost.
Analysis - Analysis of mass malware is often impossible because a sample isn't collected. If a sample is available, critical information about "in the wild" artifacts is not documented, and AV vendor write-ups based on the provided samples give only a partial view of the malware's capabilities, and very often include self-inflicted artifacts. I've seen and found malware on compromised systems for which, had no intelligence been provided, the malware analyst would have had a very difficult time performing any useful analysis.
In addition, most often the systems themselves are not subject to analysis...memory is not collected, nor is an image acquired from the system. In most cases, this simply isn't part of the response process, and even when it is, these actions aren't taken because the necessary training and/or tools aren't available, or they are available but simply aren't on hand at the time. Military members are familiar with the term "immediate actions"...these are actions that are drilled into members so that they become part of "muscle memory" and can be performed effectively while under stress. A similar "no-excuses" approach needs to be applied as part of a top-down security posture that is driven by senior management.
Another important aspect of analysis is sharing. The cycle of developing usable, actionable intelligence depends on analysis being performed. That cycle can move much faster if the analysis methodologies are shared (and open to review and improvement), and if the results of the analysis are also shared. Consider the current state of affairs...as Dan points out, the cycle of developing the crimeware packs for mass malware infections is highly specialized and compartmentalized, and clearly has an economic/monetary stimulus. That is, someone who's really good at one specific step in the chain (e.g., locating vulnerabilities and developing exploits, or pulling the exploits together into a nice, easy-to-use package) performs that function, and then passes the work along to the next step, usually for a fee. As such, there's an economic motivation to provide a quality product and remain relevant. Ultimately, the final product in the chain is deployed against infrastructures with little monitoring in place (as evidenced by the prevalence of external third-party notification...). When the IT staff (who are, by definition, generalists) are notified of an issue, the default reaction is to take the offending systems offline, wipe them, and get them back into service. As such, analysis is not being done, and a great deal of valuable intelligence is being lost.
Why is that? Is it because keeping systems up and running, and getting systems back online as quickly as possible are the primary goals of infected organizations? Perhaps this is the case. But what if there were some way to perform the necessary analysis in a timely manner, either because your staff has the training to do so, or because you've collected the necessary information and have a partner or trusted adviser who can perform that analysis? How valuable would it be to you if that partner could then provide not only the results of the analysis in a timely manner, but also provide additional insight or intelligence to help you defend your organization due to partnership with law enforcement and other intel-sharing organizations?
Consider this...massive botnets (Kelihos, Zeus, etc.) have been taken down when small, dedicated groups have worked together, with a focused approach, to achieve a common goal. This approach has been proven to be highly effective...but it doesn't have to be restricted to just those folks involved in those groups. It has simply been a matter of a couple of folks saying, "we can do this", and then doing it.
The state of DFIR affairs is not going to get better unless the Defender's Paradigm is changed. Information needs to be collected and analyzed, and from that, better defenses can be put in place in a cost effective manner. From the defender's perspective, there may seem to be chaos on the security landscape, but that's because we're not operating from an intelligence-driven perspective...there are too many unknowns, but these unknowns are there because the right resources haven't been focused on the problem.
Here are Shawn Henry's (former exec. assistant director at the FBI) thoughts on sharing intel:
“We have to share threat data. My threat data will help enable you to be more secure. It will help you be predictive and will allow you to be proactive. Despite our best efforts we need to assume that we will get breached, hence we need to ensure our organisations have consequence management in its systems that allow us to minimise any damage.”
Resources
Dan's Exploit Intel Project video
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Friday, March 30, 2012
Wednesday, March 28, 2012
Malware Analysis
If you do malware analysis as part of your DFIR activities, check out this post from the System Forensics blog. What I really like about the post is not just that the testing environment is described in a pretty thorough manner, but also that here is someone doing malware analysis who runs PEView against the file to be tested, rather than simply running strings! In fact, not only is PEView used in the static analysis of the malware, so are PEiD and Dependency Walker, both very useful tools that are used to great effect in this post to illustrate some important artifacts of the EXE being analyzed. The post also lists an impressive set of tools used for dynamic analysis of the malware (including CaptureBAT).
The post continues with dynamic analysis of the malware...if you're new to this sort of thing, this is an excellent post to read in order to get yourself up to speed on how to go about performing dynamic analysis. It also illustrates some of the important artifacts and IOCs that can be derived, not just from analysis of the malware, but in communicating the analysis and results to another part of the IR team.
Some thoughts on what might prove to be very useful...
MFT analysis, particularly with respect to the batch files mentioned in the post. For example, if the MFT is extracted and parsed, and the record for the tmpe275a93c.bat file still exists (even if it's marked as not in use), it might be a good idea to see if the file is resident or not. If it is (batch files don't need to contain a great deal of text to be useful), then the contents of the file could be extracted directly from the MFT record.
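As a sketch of what that check looks like: the snippet below (standard-library Python only) walks the attributes of a single MFT record and, if the $DATA attribute is resident, pulls its content straight out of the record. The offsets follow the commonly documented NTFS FILE record layout; this is a minimal illustration that skips fixup handling, not a production parser.

```python
import struct

DATA_ATTR = 0x80        # $DATA attribute type identifier
END_MARKER = 0xFFFFFFFF

def resident_data(record):
    """Return the resident $DATA content of an MFT record, or None.

    'record' is one raw (typically 1024-byte) MFT record. A real parser
    would apply the update sequence (fixup) values first; that step is
    omitted here for brevity.
    """
    if record[:4] != b"FILE":
        return None
    # Offset to the first attribute lives at 0x14 in the record header.
    attr_off = struct.unpack_from("<H", record, 0x14)[0]
    while attr_off + 16 <= len(record):
        attr_type, attr_len = struct.unpack_from("<II", record, attr_off)
        if attr_type == END_MARKER or attr_len == 0:
            break
        if attr_type == DATA_ATTR:
            non_resident = record[attr_off + 8]
            if non_resident:
                return None  # content lives out in clusters, not the record
            size = struct.unpack_from("<I", record, attr_off + 0x10)[0]
            off = struct.unpack_from("<H", record, attr_off + 0x14)[0]
            return record[attr_off + off : attr_off + off + size]
        attr_off += attr_len
    return None
```

If this returns non-empty content for the deleted batch file's record, you've recovered the file's contents without ever leaving the MFT.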
While it may seem to provide redundant information from a purely malware analysis standpoint, enabling a greater level of auditing on the system (enabling Process Tracking, for example), as well as increasing the size of the Event Logs, would prove to be useful, particularly for those without the funding, budget, or time for more expansive tools. When it comes to response, having relevant data is critical...yet, even when shortcomings are identified (e.g., "I really could have used information about processes that had been launched..."), many times we're not able to get the tools we need in order to answer the critical questions next time. So, if you have come to realize the value of tracking which processes have been launched, but can't get something like Carbon Black, then enabling Process Tracking on systems and increasing the size of the Event Log files is something of a happy medium. After all, it doesn't cost anything, and may provide you with some valuable information.
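As a sketch of what that buys you: once Process Tracking is enabled (e.g., via auditpol) and the Security Event Log has been exported to XML (e.g., via wevtutil), a few lines of standard-library Python can pull out the process creation (event ID 4688) records. The XML shape below is my assumption about the export, wrapped in a root Events element; adjust the element names to whatever your export actually contains.

```python
import xml.etree.ElementTree as ET

# Namespace used by Windows Event Log XML.
NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"

def process_creations(xml_text):
    """Yield (timestamp, process path) for each 4688 event in the export."""
    root = ET.fromstring(xml_text)
    for event in root.iter(NS + "Event"):
        system = event.find(NS + "System")
        if system is None or system.findtext(NS + "EventID") != "4688":
            continue
        created = system.find(NS + "TimeCreated").get("SystemTime")
        # EventData holds named Data elements; NewProcessName is the image path.
        fields = {d.get("Name"): d.text for d in event.iter(NS + "Data")}
        yield created, fields.get("NewProcessName")
```

Even this crude filter answers the "what processes were launched, and when?" question that so often goes unanswered during response.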
With the transient nature of the processes listed in the post (particularly WinMail), I would think that something like Carbon Black would be an excellent tool to have installed in a malware testing environment, particularly the next version (due out next month) that includes monitoring of Registry modifications and network initiations.
There might be great benefit in more extensive Prefetch analysis, particularly with respect to some of the other processes that were created (e.g., WinMail). Corey recently took a second look at Prefetch file analysis, turned up some pretty interesting artifacts, and illustrated how there's more to Prefetch file analysis than just getting the last execution time and the run count.
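Even those basic fields (executable name, last run time, run count) can be pulled with a trivial parser, which is a useful starting point before digging into the richer artifacts. The offsets below are the commonly documented ones for XP (version 17) and Vista/7 (version 23) Prefetch files; treat them as assumptions and verify against your own samples.

```python
import struct
from datetime import datetime, timedelta, timezone

# version -> (last-run FILETIME offset, run-count offset); commonly
# documented values for XP (17) and Vista/7 (23) Prefetch headers.
OFFSETS = {17: (0x78, 0x90), 23: (0x80, 0x98)}

def prefetch_summary(data):
    """Return (exe name, last run time, run count) from raw .pf bytes."""
    if data[4:8] != b"SCCA":
        raise ValueError("not a Prefetch file")
    version = struct.unpack_from("<I", data, 0)[0]
    # Executable name: UTF-16LE string at 0x10, max 60 bytes.
    exe = data[0x10:0x4C].decode("utf-16-le").split("\x00")[0]
    t_off, c_off = OFFSETS[version]
    filetime = struct.unpack_from("<Q", data, t_off)[0]
    run_count = struct.unpack_from("<I", data, c_off)[0]
    # FILETIME is 100ns intervals since 1601-01-01 UTC.
    last_run = (datetime(1601, 1, 1, tzinfo=timezone.utc)
                + timedelta(microseconds=filetime // 10))
    return exe, last_run, run_count
```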
Something else to keep in mind when testing malware like this...you need to separate the malware IOCs from the "self-inflicted" artifacts; if you have a sample and no other information regarding the propagation mechanism of the malware, then there will likely be some artifacts that are created as a result of the testing environment, as well as of the method used to initiate the malware itself.
Finally, malware analysis can often provide far more value than just a list of indicators, particularly from an intel/counter-intel perspective...something that was recently discussed on the MalAnalysis blog.
Resources
MS Free Safety Scanner
Links, Thoughts, and Updates
Locard's
I don't often read the Security Ripcord blog, but when I do, it's because there's something interesting there to read. See what I did there? That opening line was just a sneaky way to work a graphic into this post. Seriously...I do read Don's blog...just not his tweets. Okay, okay, I'm kidding. Anyway, check out Don's post...as I read through it, the third paragraph really jumped out at me as a fantastic example of Locard's Exchange Principle, applied to the digital realm. Specifically:
It is also logical to conclude that their activities generated system and network-based artifacts that outlined their activity, even if that activity mimicked normal and authorized operational activity. Understanding these system and network-based artifacts is an important step to preventing and detecting attempts to infiltrate a network.
Now, I added my own emphasis to the above quote, but Don makes an excellent point. When an intruder interacts with your infrastructure, there's going to be digital material exchanged between their system, and the nodes on your infrastructure with which they're interacting. Also, the intruder is going to leave digital detritus on your infrastructure...in some cases, it may be extremely transient or volatile, and in other cases it may look very similar to legitimate traffic or activity. However, Don's point is extremely well-taken...you need to understand these artifacts so that you can detect them, and respond effectively, when an intrusion does occur.
IntoTheBoxes
Speaking of Don, he recently tweeted that he's interested in bringing the ITB back. When Don was producing this (two issues), it was a very good e-zine, as articles were contributed by folks from the DFIR field. It's one thing to have a list of links with no commentary or insight, but it's completely different...and I think, highly beneficial...to have something contributed by folks from the field who are not only doing the work, but doing some of the primary research, as well.
If you'd like to contribute to the ITB, take a look at the ITB WordPress site, maybe take a look at the previous editions, and contact Don at @cutaway on Twitter.
Artifacts and Presentations
While we're on the topic of artifacts...I've attended several conferences over the years that included presentations whose titles referred to network or system artifacts for various incidents (intrusions, APT, etc.). To date, I have yet to see a single presentation that actually gives those artifacts. I attended a presentation not long ago, the title of which referred to intrusion artifacts found in the Windows Registry; however, the presentation included nothing more than a bunch of "look here..." references, with no real explanation as to why an analyst would look at those keys or values. Several of the referenced Registry keys or values depend upon the intruder having shell-based access to your systems, such as via RDP or VNC (yes, this does happen), but that wasn't really discussed during the presentation.
In general, the artifacts left during an intrusion are heavily dependent upon how the intrusion occurred. For example, I have seen incidents where an employee's home system was compromised, and their remote access (RDP) credentials "stolen" via a keylogger. The intruder logged in via RDP, activated a dormant domain admin account, and began hopping to systems across the enterprise. We were able to track the activity via Registry analysis...the Remote Desktop client on Windows XP maintains an MRU list of systems connected to in the Registry (on Windows 7, the MRU is in the Jump Lists).
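As a rough illustration of how accessible that MRU data is: the XP-era Remote Desktop client stores the hostnames under the user's Terminal Server Client\Default key, in values named MRU0, MRU1, and so on. The sketch below parses a .reg export of that key; tools like RegRipper work against the NTUSER.DAT hive file directly and are what you'd actually use on an acquired image.

```python
import re

def rdp_mru(reg_text):
    """Hostnames from the Terminal Server Client\\Default key, in MRU order.

    'reg_text' is assumed to be the text output of something like:
    reg export "HKCU\\Software\\Microsoft\\Terminal Server Client" out.reg
    """
    hosts = []
    in_default = False
    for line in reg_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Track whether we're inside the ...\Default subkey.
            in_default = line.rstrip("]").endswith(
                r"\Terminal Server Client\Default")
            continue
        if in_default:
            m = re.match(r'"MRU(\d+)"="(.+)"', line)
            if m:
                hosts.append((int(m.group(1)), m.group(2)))
    return [host for _, host in sorted(hosts)]
```

In the case described above, a list like this, correlated with login times, was enough to scope which systems the intruder had touched.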
Now, if the intruder gains access to a system on your infrastructure (via an infected document in an email attachment, or a link to a malicious site in an email) and ultimately runs commands via the command line, most normal systems have very limited capability (e.g., insufficient Event Logging) for detecting this sort of activity. In these cases, you'd likely be looking for secondary or indirect artifacts (this term is discussed in chapter 1 of Windows Forensic Analysis 3/e).
GSR
As most folks on Twitter are aware, the TrustWave Global Security Report is out...and just looking at page 6, it's a very interesting read. Page 6 addresses the issue of detection, and states that 84% of the organizations included in the report relied on external third-party reporting in order to identify compromises to their infrastructure. That number is actually up slightly from the previous year...the report speculates that this may be a resource issue...I would also consider that it may reflect a lack of understanding of the threats.
Further, from reading through the information presented in the report, I can see a huge opportunity for IOCs. For example, simply looking at the packers used (pg 17), it should be easy to create a series of indicators that folks can look for with respect to these artifacts. The same is true with respect to the Aggregation section of the report (pg 9)...there is some pretty significant potential for IOCs in just that one paragraph.
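As one concrete example of turning the packer data into indicators: many packers leave characteristic section names in the PE header (UPX being the obvious case), which even a minimal standard-library parser can flag. This is a sketch of the idea under that assumption, not a production detector; plenty of packers are far less obliging.

```python
import struct

# Section names commonly associated with packers (illustrative, not exhaustive).
PACKER_SECTIONS = {"UPX0", "UPX1", ".aspack", ".petite"}

def pe_section_names(data):
    """Return the section names of a PE file given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("bad PE signature")
    num_sections = struct.unpack_from("<H", data, e_lfanew + 6)[0]
    opt_size = struct.unpack_from("<H", data, e_lfanew + 20)[0]
    first = e_lfanew + 24 + opt_size  # section table follows the optional header
    return [
        data[first + 40 * i : first + 40 * i + 8].rstrip(b"\x00")
            .decode("ascii", "replace")
        for i in range(num_sections)
    ]

def packer_indicators(data):
    """Section names in 'data' that match known packer signatures."""
    return sorted(set(pe_section_names(data)) & PACKER_SECTIONS)
```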
Thoughts on Report Stats and Revelations
Following right along with the TrustWave GSR, I ran across Wendy's blog post (via Twitter and LinkedIn) that shines some light on some of the realities behind these reports. When reading these reports, we need to keep in mind who the customers of the organization are...the TrustWave folks do a LOT of PCI forensic work, and some of that is done for smaller organizations. Wendy makes it clear in her post what some of the folks who get compromised are thinking, and why they don't appear to have any idea of how to do security.
Another area that isn't covered as much in Wendy's post is the larger infrastructures that get compromised...those that may not have security that's much better than a mom-and-pop shop, simply because they are so big. In such organizations, change is most often glacial, and can be mired in cultural and political issues within the infrastructure. A misconception about a lot of large organizations, including the federal government, is that they have the resources to throw at the information security problem...this simply isn't the case. Years ago, I supported a federal agency that had two DF analysts for the entire (global) organization.
Okay...but so what? Like Wendy said, don't let the statistics bother you, particularly if you're a security pro, because getting upset about it isn't going to change anything. As long as people are looking for the easiest way to achieve a "good enough" solution, there's going to be someone right there to take advantage of that, regardless of the technology used.
Case Studies
As I've worked to develop my writing style (for my books) over the years, one of the things that I've noticed is that folks in this community love case studies. I've also noticed that as much as folks ask for case studies, very few are willing to offer up case studies of their own. One of the forums recently had a discussion thread that involved posted case studies, and the forum admin started a different thread to gauge the interest in such things. Unfortunately, that thread didn't really go anywhere.
I'd like to encourage DFIR folks to consider either sharing case studies, or sharing and engaging in discussion regarding what they'd like to see. I know that some folks can't share their actual case findings (even if customer references are redacted or simply not mentioned) due to the nature of where they work...sadly, many can't even state where their primary source of evidence was found. However, for those who can't share those things, one of the things you may be able to share is requests such as, "...where would I look for...", or "...on Windows 7, what data sources would I look to in order to determine...".
Another example of sharing can be seen at Corey's blog...many of Corey's posts regarding exploit artifacts are very detailed, but are not specific to a case; rather, he's posted information about his research, which in turn is highly beneficial.
Tools
USB Write Protect is a nifty little GUI utility you can use to enable write protection on USB devices on your Windows systems. Even though this is software-based write protection and should not replace a hardware write-blocker, having the ability to quickly write-protect a USB device can still be very useful.
Speaking of USB devices, check out Mark Woan's USBDeviceForensics utility. You can use this utility to import or point to the Software, System, and NTUSER.DAT Registry hives (found in mounted images, or extracted from an acquired image), collect a lot of the information available in Rob Lee's USB Analysis profiles, and even some additional information not specifically listed in the profiles.
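For a sense of where some of that data lives: attached USB storage devices leave subkeys under the System hive's Enum\USBSTOR key, named for the device and its serial number. The sketch below pulls those pairs out of a .reg export of that key; tools like USBDeviceForensics and RegRipper parse the hive file itself and recover much more (timestamps, volume names, etc.).

```python
import re

# Matches ...\Enum\USBSTOR\<device id>\<serial> key lines in a .reg export.
USBSTOR_KEY = re.compile(r"\[.*\\Enum\\USBSTOR\\([^\\\]]+)\\([^\\\]]+)\]")

def usbstor_devices(reg_text):
    """Return sorted (device_id, serial) pairs found in a .reg export."""
    found = set()
    for line in reg_text.splitlines():
        m = USBSTOR_KEY.match(line.strip())
        if m:
            found.add((m.group(1), m.group(2)))
    return sorted(found)
```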
David Hull posted recently regarding an innovative approach to "finding evil"; using several tools and a bash script, he hashes what's found via AutoRuns and submits the hashes to VirusTotal. What I like about this is that David clearly identified/defined the issue he was facing, and then developed an innovative solution to it. Many times I've talked to analysts who talk about "doing Registry analysis" without any clear understanding of what they're looking for, or even whether what they're looking for is in the Registry at all. David has clearly decided that he wants to use automation to achieve a modicum of data reduction...of all of the data he has access to, his approach is, "just show me the bad stuff."
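The core of that approach is only a few lines. The sketch below (David's actual script is bash; this is my Python restatement of the idea) hashes whatever image paths an autostart enumeration produced, e.g., from an AutoRuns CSV export; submitting the resulting hashes to VirusTotal is left as a comment, since it requires an API key and network access.

```python
import hashlib
from pathlib import Path

def hash_autorun_images(paths):
    """Map each autostart image path to its SHA-256 hex digest.

    Missing files are skipped rather than treated as errors, since
    autorun entries frequently point at binaries that no longer exist
    (itself an interesting finding).
    """
    digests = {}
    for p in map(Path, paths):
        if p.is_file():
            digests[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return digests

# Each digest could then be checked against VirusTotal with your API key,
# and anything flagged by multiple engines surfaced for the analyst:
# "just show me the bad stuff."
```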
For anyone who's been watching, Corey's posted quite a bit on ripping VSCs, providing a lot of great information about not just what artifacts are available on a system, but what you can find in the VSCs on a Vista+ system, and how to get at those artifacts. Corey's taken a big step forward with this, tying free and open source tools together via batch files to make this data collection much more automated, and now Jason's thrown his hat in the ring by creating the VSC Toolset, a GUI interface for these tools. This is all very cool stuff and provides a capability that isn't necessarily available via some of the commercial products.
There was an excellent post from Christiaan Beek recently regarding using Volatility to analyze a hibernation file...check it out. Christiaan gives a step-by-step walk-through on how to analyze a hibernation file...something that should be strongly considered (and a skill that should be developed), particularly when the situation and goals of your exam warrant it.
This is a little off-topic, but if you're analyzing Android phones, you might want to check out Open Source Android Forensics. This is a bit more of an open source link than it is a Windows forensics link, but consider it in the context of DFwOST.
I don't often read the Security Ripcord blog, but when I do, it's because there's something interesting there to read. See what I did there? That opening line was just a sneaky way to work a graphic into this post. Seriously...I do read Don's blog...just not his tweets. Okay, okay, I'm kidding. Anyway, check out Don's post...as I read through it, the third paragraph really jumped out at me as a fantastic example of Locard's Exchange Principle, applied to the digital realm. Specifically:
It is also logical to conclude that their activities generated system and network-based artifacts that outlined their activity, even if that activity mimicked normal and authorized operational activity. Understanding these system and network-based artifacts is an important step to preventing and detecting attempts to infiltrate a network.
Now, I added my own emphasis to the above quote, but Don makes an excellent point. When an intruder interacts with your infrastructure, there's going to be digital material exchanged between their system, and the nodes on your infrastructure with which they're interacting. Also, the intruder is going to leave digital detritus on your infrastructure...in some cases, it may be extremely transient or volatile, and in other cases it may look very similar to legitimate traffic or activity. However, Don's point is extremely well-taken...you need to understand these artifacts so that you can
IntoTheBoxes
Speaking of Don, he recently tweeted that he's interested in bringing the ITB back. When Don was producing this (two issues), it was a very good e-zine, as articles were contributed by folks from the DFIR field. It's one thing to have a list of links with no commentary or insight, but it's completely different...and I think, highly beneficial...to have something contributed by folks from the field who are not only doing the work, but doing some of the primary research, as well.
If you'd like to contribute to the ITB, take a look at the ITB WordPress site, maybe take a look at the previous editions, and contact Don at @cutaway on Twitter.
Artifacts and Presentations
While we're on the topic of artifacts...I've attended several conferences over the years that include presentation titles that refer to network or system artifacts for various incidents (intrusions, APT, etc.). As of yet, I have yet to see a single presentation that actually gives those artifacts. I attended a presentation not long ago, the title of which referred to intrusion artifacts found in the Windows Registry; however, the presentation including nothing more than a bunch of "look here..." references, with no real explanations as to why an analyst would look at those keys or values. Several of the referenced Registry keys or values were dependent upon the intruder having shell-based access to your systems, such as via RDP or VNC (yes, this does happen), but that isn't something that was really discussed during the presentation.
In general, the artifacts left during an intrusion are heavily dependent upon how the intrusion occurred. For example, I have seen incidents where an employee's home system was compromised, and their remote access (RDP) credentials "stolen" via a keylogger. The intruder logged in via RDP, activated a dormant domain admin account, and began hopping to systems across the enterprise. We were able to track the activity via Registry analysis...the Remote Desktop client on Windows XP maintains an MRU list of systems connected to in the Registry (on Windows 7, the MRU is in the Jump Lists).
Now, if the intruder is able to gain access to a system on your infrastructure (via an infected document in an email attachment, or a link to a malicious site in an email), and ultimately gain access to the system and run commands via the command line, most normal systems may have very limited capability (i.e., lack of sufficient Event Logging, etc.) for detecting this sort of activity. In these cases, you'd likely be looking for secondary or indirect artifacts (this term is discussed in chapter 1 of Windows Forensic Analysis 3/e).
GSR
As most folks on Twitter are aware, the TrustWave Global Security Report is out...and just looking at page 6, it's a very interesting read. Page 6 addresses the issue of detection, and states that 84% of the organizations included in the report relied (or depended) on external third party reporting in order to identify compromises to their infrastructure. That number is actually up slightly from the previous year...the report speculates that this maybe a resource issue...I would also consider that it may include a lack of understanding of threats.
Further, from reading through the information presented in the report, I can see a huge opportunity for IOCs. For example, simply looking at the packers used (pg 17), it should be easy to create a series of indicators that folks can look for with respect to these artifacts. The same is true with respect to the Aggregation section of the report (pg 9)...there is some pretty significant potential for IOCs in just that one paragraph.
Thoughts on Report Stats and Revelations
Following right along with the TrustWave GSR report, I ran across Wendy's blog post (via Twitter and LinkedIn) that shines some light on some of the realities behind these reports. When reading these reports, we need to keep in mind who the customers of the organization are...the TrustWave guys do a LOT of PCI forensic work, and some of that is done for smaller organizations. Wendy makes it clear in her post what some of the folks who get compromised are thinking and why they don't appear to have any idea of how to do security.
Another area that isn't covered as much in Wendy's post is the larger infrastructures that get compromised...those that may not have security that's much better than a mom-and-pop shop, simply because they are so big. In such organizations, change is most often glacial, and can be mired down by cultural and political issues within the infrastructure. A misconception about a lot of large organizations, including the federal government, is that they have the resources to throw at the information security problem...this simply isn't the case. Years ago, I supported a federal agency that had two DF analysts for the entire (global) organization.
Okay...but so what? Like Wendy said, don't let the statistics bother you, particularly if you're a security pro, because getting upset about it isn't going to change anything. As long as people are looking for the easiest way to achieve a "good enough" solution, there's going to be someone right there to take advantage of that, regardless of the technology used.
Case Studies
As I've worked to develop my writing style (for my books) over the years, one of the things that I've noticed is that folks in this community love case studies. I've also noticed that as much as folks ask for case studies, very few are willing to offer up case studies of their own. One of the forums recently had a discussion thread that involved posted case studies, and the forum admin started a different thread to gauge the interest in such things. Unfortunately, that thread didn't really go anywhere.
I'd like to encourage DFIR folks to consider either sharing case studies, or sharing and engaging in discussion regarding what they'd like to see. I know that some folks can't share their actual case findings (even if customer references are redacted or simply not mentioned) due to the nature of where they work...sadly, many can't even state where their primary source of evidence was found. However, for those who can't share those things, one of the things you may be able to share is requests such as, "...where would I look for...", or "...on Windows 7, what data sources would I look to in order to determine...".
Another example of sharing can be seen at Corey's blog...many of Corey's posts regarding exploit artifacts are very detailed, but are not specific to a case; rather, he's posted information about his research, which in turn is highly beneficial.
Tools
USB Write Protect is a nifty little GUI utility you can use to enable write protection on USB devices on your Windows systems. Even though this is software-based write protection and should not replace a hardware write-blocker, having the ability to quickly toggle write protection on a USB device can still come in handy.
Speaking of USB devices, check out Mark Woan's USBDeviceForensics utility. You can use this utility to import or point to the Software, System, and NTUSER.DAT Registry hives (found in mounted images, or extracted from an acquired image), collect a lot of the information available in Rob Lee's USB Analysis profiles, and even some additional information not specifically listed in the profiles.
David Hull posted recently regarding an innovative approach to "finding evil"; using several tools and a bash script, he hashes what's found via AutoRuns and submits the hashes to VirusTotal. What I like about this is that David has clearly identified/defined the issue he was facing, and then developed an innovative solution to it. Many times I've talked to analysts who talk about "doing Registry analysis" without any clear understanding of what they're looking for, or whether what they're looking for is even in the Registry. David has clearly decided that he wants to use automation to achieve a modicum of data reduction...of all of the data he has access to, his approach is, "just show me the bad stuff."
For anyone who's been watching, Corey's posted quite a bit on ripping VSCs, providing a lot of great information about not just what artifacts are available on a system, but what you can find in the VSCs on a Vista+ system, and how to get at those artifacts. Corey's taken a big step forward with this, tying free and open source tools together via batch files to make this data collection much more automated, and now Jason's thrown his hat in the ring by creating the VSC Toolset, a GUI interface for these tools. This is all very cool stuff and provides a capability that isn't necessarily available via some of the commercial products.
There was an excellent post from Christiaan Beek recently regarding using Volatility to analyze a hibernation file...check it out. Christiaan gives a step-by-step walk-through on how to analyze a hibernation file...something that should be strongly considered (and a skill that should be developed), particularly when the situation and goals of your exam warrant it.
This is a little off-topic, but if you're analyzing Android phones, you might want to check out Open Source Android Forensics. This is a bit more of an open source link than it is a Windows forensics link, but consider it in the context of DFwOST.
Thursday, March 15, 2012
Prefetch Analysis, Revisited...Again...
I recently blogged on Prefetch Analysis, Revisited, and was poking around one of Corey's blog posts regarding exploit artifacts, this one associated with Java. Corey was kind enough to send me the Prefetch file that was created as part of his testing, and I ran it against the Perl script that I'd written. It pulled out some pretty interesting things, and with Corey's permission, I've pulled out some of the more interesting strings to share. Where the paths are really long and they have to wrap, I've added a space between the lines to cut down on confusion and make them a bit more readable.
First, we know from where Java was run:
\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\JAVA\JRE6\BIN\JAVA.EXE
There's also another EXE listed in the output:
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\CMD.EXE
That's interesting. Maybe an interesting analysis technique to add to the tools is to parse the module paths and find not just the path to the EXE for which the Prefetch file was created, but also any other EXE paths listed, and flag files with more than one. Flagging Prefetch files that include more than one EXE path may not find anything in most cases, but hey, it's automated and documented, only takes a quick second to run, and the time that it does find something will be a real winner.
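That check is simple to script; a minimal sketch (the function name is mine, and the paths are from the output above):

```python
def exe_paths(module_paths):
    """Return the module paths from a single Prefetch file that end in
    .EXE; more than one entry is worth flagging for a closer look."""
    return [p for p in module_paths if p.upper().endswith(".EXE")]

paths = [
    r"\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\JAVA\JRE6\BIN\JAVA.EXE",
    r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\CMD.EXE",
    r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\D3D9.DLL",
]
exes = exe_paths(paths)
if len(exes) > 1:
    print("FLAG: multiple EXEs referenced:", *exes, sep="\n  ")
```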
Okay, now back to our regularly scheduled program, already in progress...
Then I found these paths:
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\D3D9.DLL
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\D3D8THK.DLL
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\VMX_FB.DLL
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\D3D9CAPS.DAT
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\D3D9CAPS.TMP
I wasn't too interested in those (I had no idea what they are) until I found this one later on:
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\TEST\LOCAL SETTINGS\APPLICATION DATA\D3D9CAPS.TMP
I still have no idea what any of this is...remember, I only have the single Prefetch file...but this might be something worth investigating.
Here's a path to one of the logs that Corey mentioned in his post:
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\APPLICATION DATA\SUN\JAVA\DEPLOYMENT\DEPLOYMENT.PROPERTIES
We see something similar later in the output, for the Test user:
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\TEST\APPLICATION DATA\SUN\JAVA\DEPLOYMENT\DEPLOYMENT.PROPERTIES
Here's the path to another log file that Corey mentioned in his blog:
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\JAVA_INSTALL_REG.LOG
And here's the same thing, but for the Test user account (again):
\DEVICE\HARDDISKVOLUME1\DOCUME~1\TEST\LOCALS~1\TEMP\JAVA_INSTALL_REG.LOG
Remember that Corey was using MetaSploit, per his blog post:
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\~SPAWN7448022680969506475.TMP.DIR\METASPLOIT.DAT
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\TEMP\~SPAWN5710426834330887364.TMP.DIR\METASPLOIT\PAYLOAD.CLASS
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\TEMP\~SPAWN5710426834330887364.TMP.DIR\METASPLOIT.DAT
All in all, some pretty interesting data. Some of the new additions to the tools will automate the collection and correlation of this information, but the analysis will still be up to...well, the analyst.
I'd like to thank Corey for not only doing the "heavy lifting" of compromising a system with MetaSploit, but also providing the Prefetch file, from which the script I wrote was able to extract the above module paths.
Monday, March 12, 2012
Interesting Links, and Some Thoughts
Often we run across interesting posts or articles on the Internet, and this leads us to other sites that interest us and may often be applicable to the work that we do. As such, blogging provides a better approach to presenting and sharing this information, and the thoughts that are generated, because we not only have the site itself (as with a bookmark), but we can also include why we found it interesting, or some additional, supporting commentary to make the information a bit more useful and relevant.
Registry Artifacts
Andreas Schuster recently stood up a Scoop.It site for DFIR topics, and from that, I found this post at Pragmatic Forensics. The title of the post is somewhat off with respect to the actual content, but the content itself is interesting. In short, an analyst created a timeline and ran across an IRL example of the use of the Image File Execution Options Registry key as a persistence mechanism.
This key is addressed starting on page 133 of Windows Registry Forensics, and the RegRipper imagefile.pl plugin.
Along the same lines, I recently read this Cheeky4n6Monkey blog post, which discusses using SIFT tools to perform a diff of Registry hives. The example provided in the post is excellent, and can be easily translated for use in analyzing hives on a Windows 7 system, particularly when you want to know what changed following a VSC being created, or between VSCs.
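The Cheeky4n6Monkey post uses dedicated SIFT tools for the diff; purely to illustrate the idea, here's a sketch (with names I've made up) that diffs two text dumps of hive contents, such as the current hive versus the same hive pulled from a VSC:

```python
import difflib

def diff_hive_dumps(before_lines, after_lines):
    """Return only the lines added or removed between two text dumps
    of Registry keys and values (one line per key/value)."""
    return [line for line in difflib.unified_diff(
                sorted(before_lines), sorted(after_lines), lineterm="")
            if line[:1] in "+-" and line[:3] not in ("+++", "---")]
```

Anything that appeared between the two snapshots (say, a new Run key value) shows up prefixed with "+".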
Writing
Like many folks, I wasn't a big fan of writing. I know that's hard to believe...but the fact is that I didn't like to write. I wasn't good at it, and it was always a chore. Even when I did put effort into what I was writing, the results were often not what I'd hoped.
Over time, however, I was in positions and situations where I had to write. While I was in the military, there were JAG Manual investigations, fitreps, awards, log entries, etc. I had to write during my graduate program. When I got out of the military, and moved to private industry, I had analysis plans and reports to write.
Perhaps the most succinct description of why we (DFIR analysts) should write is encapsulated in the 7 Real Ways Writing Increases Expertise blog post. Take a look at the points mentioned...I think what's really powerful about this is that writing does allow us to clarify our thoughts, and you have to admit, you don't HAVE to hit the Send button until you're satisfied.
Self-Inflicted Artifacts
Over 6 years ago, I wrote a blog post regarding the MUICache key. At the time, I was wondering why I was seeing a reference to malware creating this key in AV vendor write-ups.
This System Forensics blog post provides an excellent example of self-inflicted artifacts. The post as a whole is an excellent resource for using Volatility in malware analysis, but very early on in the post, the author mentions a value being added to the MUICache key, as well as UserAssist subkey entries being created. These are self-inflicted artifacts, and a result of how the malware was executed for testing (i.e., place it on the Desktop, and double-click).
Okay...so why does any of this matter? I came across the following Registry key:
HKLM\SOFTWARE\Microsoft\RFC1156Agent\CurrentVersion\Parameters
According to the timeline I'd created, the key had been modified during a specific time period that was potentially of interest. When I checked the key, it had a single value named "TrapPollTimeMilliSecs". I am familiar with SNMP, so I know about traps and MIBs (per RFC1156), but when I ran a Google search for the value name, I got a number of hits associated with malware. In fact, I found one AV write-up that indicated that this was the only artifact associated with the malware. Ultimately, the analysis did not reveal indications of malware on the system (including scans with multiple AV scanners).
None of the sites I found could provide a clue as to how the value in question is used by malware. Many of the write-ups I found stated that the value (and its data) were created by the malware, and my timeline indicated that the key had been modified at a certain time, which could mean that the value was created...or it could mean something else. And no, I don't have a sample of the malware to analyze.
Overall, I'm beginning to think that the value is a "self-inflicted" artifact, and that the same might be true for some other artifacts that I've observed.
Shoutz to Cory Altheide for using "self-inflicted" in a sentence, and allowing me to bogart the term.
Vulture Research
I was reading this news article recently (yes, I admit it, I'm a closet CSI fan), and it got me to thinking...how often do we, as analysts, misinterpret what we have in front of us? How often do we draw conclusions based on the data that we have, given the possibility that the data is incomplete?
So...in case you didn't want to read the article I linked to, it's about research that was conducted with respect to scavenger predation of bodies, and how it can affect homicide investigations. If you watch the crime shows (CSI, NCIS, Bones, etc.), you can probably recount episodes in which a skeleton was found in the open and the conclusion was drawn that the body had been there for months. Well, this new research indicates that vultures can denude all flesh from a body in a couple of hours...and this can have a significant impact on the assumptions made about the timeline of the crime.
So, this got me to thinking...how often does this happen with what we do? This "new" research with respect to the vultures is simply something that's been happening for a while, but investigators perhaps haven't considered it. After all, if these birds come down and feed for a few hours, you'd think that there'd be some sort of evidence of this...tracks, feathers, etc. But what if these artifacts were not noted, or simply ignored, or obscured by weather?
Now, consider digital analysis. Who has received a Windows 7 system to analyze in the past year (or more) and their investigation involved tracking user activity? When you examined the system, did you include Jump Lists in your analysis? Registry artifacts? If you had to track USB devices connected to the system, did you include the EMDMgmt Registry key in that analysis?
Also, consider Corey Harrell's recent post on Jump Lists; even though he's worked backwards in our usual sequence (starting with a known task first...), he's shown what's possible on systems, and allows other analysts to connect the dots and fill in some gaps. If you'd looked at a Windows 7 system recently and had to determine authorship of a document, did you consider something along the lines of what Corey pointed out? Did you collect metadata from the document itself?
How often have we made and reported conclusions based on incomplete analysis, and how often have we used assumptions to fill in the gaps?
One of the ways to overcome this is to use checklists for certain types of data collection, particularly those that are complicated and repetitive. Notice I don't say "analysis"...that's because data gets collected, and then someone analyzes it; a checklist provides the ability to duplicate the data collection phase, and provides that data for analysis. The key to this is that a documented process can be updated and modified...you can't evaluate or replicate something that you don't have written down. Examples of where you might use a checklist include (but are not limited to) USB device analysis, malware detection, authorship of or access to documents, etc.
As a side note, I've taken some time recently to update my malware detection checklist, offline, including some input from some of the recent and excellent malware analysis books that have been published.
Thursday, March 08, 2012
Prefetch Analysis, Revisited
A while ago, I posted on Prefetch file analysis, and based on some recent events, I thought that I'd revisit this topic.
By now, most analysts are familiar with Prefetch files and how they can be useful to an examination. I think most of us tend to take a quick look over the names of the Prefetch files during analysis, just to see if anything jumps out at us (I've seen Prefetch files for "0.exe", and I'd call that odd and worth digging into). As described on the ForensicsWiki, Prefetch files also have a good bit of embedded metadata, and can be very useful during analysis. For example, you may look at the listing of files and something unusual may immediately jump out at you.
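For reference, the run count and last run time live at fixed offsets in the Prefetch file header. The offsets below are the documented values for format version 17 (XP/2003); version 23 (Vista/7) uses 0x80 and 0x98 instead. This is only a sketch of pulling those two fields, not a full parser:

```python
import struct
from datetime import datetime, timedelta

XP_LAST_RUN, XP_RUN_COUNT = 0x78, 0x90   # format version 17 offsets

def filetime_to_dt(ft):
    """Windows FILETIME: 100ns ticks since Jan 1, 1601 (UTC)."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

def pf_metadata(data):
    """Pull format version, last run time, and run count from the
    raw bytes of an XP/2003 .pf file."""
    version = struct.unpack_from("<I", data, 0)[0]
    last_run = struct.unpack_from("<Q", data, XP_LAST_RUN)[0]
    run_count = struct.unpack_from("<I", data, XP_RUN_COUNT)[0]
    return version, filetime_to_dt(last_run), run_count
```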
Also, if you include Prefetch file metadata in a timeline of an XP system, you should see a file access time for the executable "near" to when a Prefetch file for that executable is created or updated. If that's not the case, you may have an issue of time manipulation, or the original executable may have been deleted.
Thankfully, Prefetch files are an artifact of an EXE file engaging with its ecosystem, and not a direct artifact of the EXE itself. As such, if the EXE is deleted, the Prefetch file persists. Very cool.
Prefetch files also contain a number of strings, which are file names and full paths that point to modules used or accessed by the executable, and even some other files accessed by the executable. This stuff can be very interesting. For example, if you see two Prefetch files for Notepad, remember that the hash in the .pf file name is generated using the path to the file, as well as command line options. As most Windows systems have two copies of Notepad (one in the Windows dir, the other in the Windows\system32 dir), you're very likely to see two Prefetch files for that executable...and you will likely also find the full path to the executable right there within the embedded strings.
So, I got to thinking...how can we use this information? How about we do a little data reduction? Rather than running through all 128 or so .pf files manually, one at a time, we can let some code do that for us, and then display the paths to all of the executables. We might even want to add some checking, say grep() the file path for something like "temp", so we can see executables that were launched from a path that included "Temp" or even "Temporary Internet Files".
Another idea is to do a little Least Frequency of Occurrence analysis: run through all of the .pf files and list those module paths that only appear once across all of the files. This is a great bit of work that can be easily automated, leaving us to just look over the output. You can even expand your analysis to those modules that occur twice, or even three times.
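The LFO idea can be sketched as follows, given per-file module lists produced by any Prefetch parser (the function is hypothetical, not part of an existing tool):

```python
from collections import Counter

def rare_modules(pf_module_lists, max_count=1):
    """Return module paths that appear in no more than max_count of
    the supplied Prefetch files."""
    counts = Counter()
    for modules in pf_module_lists:
        counts.update(set(modules))   # count files, not occurrences
    return sorted(p for p, n in counts.items() if n <= max_count)
```

Raising max_count to 2 or 3 widens the net, per the post.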
The more I think of this, the more I'm thinking that this is something that would go really well in a forensic scanner.
Addendum, 3/11: Tonight I completed transitioning the pref.pl Perl script to a module, so that the functionality can be encapsulated in other tools.
Saturday, March 03, 2012
Immediate Response
Having recently posted on counter-forensics, I wanted to use that as a means for illustrating the need for immediate response. As discussed previously, one way to address the use of counter-forensics techniques or measures is by having knowledgeable analysts available. Another way to address these measures, particularly when you include the activities of an operating system and its applications over time as a counter-forensics measure, is through the use of immediate response, particularly by knowledgeable responders. Immediate response is something I've thought about a lot, and feel strongly enough about that it consumes an entire chapter of WFAT3e.
The use, and understanding, of counter-forensics measures is where an immediate response capability comes in...the sooner you detect and respond to something 'unusual', the more likely you are to be able to access and recover pertinent data. Why is that? Well, as we've said, computer systems are very active things, even when all we can see is a nice desktop with the mouse pointer just sitting there. Windows systems are probably the most active of all, with a considerable amount of activity going on under the hood. What this means is that anything that's deleted (cookies, Event Log records, files, etc.), with its sectors made available for re-allocation, will likely fall victim to the 'counter-forensics' measures "built into" the operating system.
What am I talking about? Anyone remember Windows XP? What happens, by default, every 24 hours on a Windows XP system? A Restore Point is created...and that can be a LOT of new files being created and a lot of previously unallocated sectors being consumed. As new files are created, older ones may be deleted. And then every three days, there's a limited defrag that runs. Windows 7 is subject to similar activity...and in some cases, more so. Windows 7 ships with a LOT of default Scheduled Tasks that do things like back up the main Registry hives every 10 days, consuming previously unallocated sectors. When you edit MSOffice files, temporary copies of the files are created, consuming previously unallocated sectors, and then the temp file is 'deleted' when you close the application. As such, there's a lot that goes on on a Windows system that we don't even see or think about. How about Windows Updates? Do you use iTunes or QuickTime? When those applications are installed, a Scheduled Task is created to run on a regular basis to look for updates, and these can be installed automatically.
As such, the key to a modern response capability, beyond having knowledgeable responders and analysts, is to maintain a proactive security posture that employs early detection and immediate response. One way to achieve both is through the use of Carbon Black. (For the sake of full disclosure, I should tell you that my employer is a Cb reseller, but anyone who's followed my blog or read my books knows that my endorsement of Cb predates my current employment.) Cb is a tool that can be used to solve a lot of problems, but from the perspective of early detection and immediate response, it's invaluable. From a detection perspective, once you have Cb up and running in your enterprise (before an incident occurs, hence 'proactive security'), you can issue queries on a regular basis that look for the execution of 'new' (haven't been seen before) applications. Remember the counter-forensics techniques previously mentioned in this post? The technique itself requires code to be run, meaning that something has to execute, something that Cb can detect and report on. Or, you may detect an incident through other means...log monitoring, user notification, etc., and Cb can be used to quickly gather more information about the incident, allowing an analyst to drill down into things like parent processes (from where did the malware originate?), file modifications (which files did the malware write to?), etc. Once IOCs have been identified, scoping can take place quickly, from a central location, allowing analysts to determine how many of the monitored systems are affected, and then execute immediate actions in response to the incident.
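That 'new application' check boils down to first-seen analysis against a baseline. Here's a generic sketch of the idea...this is NOT Cb's actual query interface, and the field names are hypothetical:

```python
def first_seen(baseline, observations):
    """Return executions whose file hash hasn't been seen before, and
    fold the new hashes into the baseline for the next query cycle."""
    new = [obs for obs in observations if obs['md5'] not in baseline]
    baseline.update(obs['md5'] for obs in new)
    return new
```

Each hit then becomes the pivot point for drilling into parent processes, file modifications, and the rest.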
The alternative (and in many cases, currently employed) approach is to, once an event has been identified, provide incomplete information to senior management, so that they can begin shopping around for a consulting firm that provides response services. While this is going on, we would hope that no one is doing anything on the systems (this isn't often the case) in question, but as we know, as time passes, things do happen all on their own. When a response firm is finally selected, additional time is required for contract negotiations, the responders need to travel on-site, and then they need to begin working with you to understand your infrastructure and scope the incident...all while data is (potentially, probably, most likely) leaving your infrastructure.
Consider this...is your organization subject to any compliance regulations or legislation? Many that are have little choice when it comes to notification...if you cannot explicitly show which records were exposed, you have to report on ALL records that were potentially exposed. Which would you rather do...report on the records that were exposed, or report on all records that may potentially have been exposed (because you don't know)?
After all, "knowing is half the battle!" Did you see what I did there, with the quote, and the "GI Joe" image? This also serves another purpose...one of the things that military folks are trained in is "immediate actions"...stuff that you're trained in over and over, so that when the situation arises for you to use that skill, your response is immediate and instinctual, based on muscle memory. That same concept can be easily applied to the need for immediate incident response, as well. Many security events can be quickly investigated and either deemed non-events, or escalated for a more thorough investigation. Why is it that many organizations simply wipe a system on which suspicious activity has occurred, and reload the operating system? Because it's the easiest thing to do...they have the installation media right there. It's their "immediate action". So, what if you were able to replace that with a new immediate action...pre-staging the necessary tools for dumping memory and acquiring an image, and providing the training so that it becomes easy to do?
So...in summary, on the surface, counter-forensics techniques may appear to pose significant challenges for analysts, but the fact is that many of those challenges can be overcome through early detection, and immediate response by knowledgeable analysts and responders. The more pertinent information early detection makes available to responders and analysts, the more effective that immediate response will be, taking you from "something happened on a bunch of systems" to "this is what happened, and only these systems were affected", drastically reducing the impact of an incident to your infrastructure.