Friday, January 29, 2010

Thoughts on APT

There's been a great deal of discussion lately about the advanced persistent threat, or APT, and I've seen list posts from folks adding their thoughts, or asking others to weigh in and provide any insight they may have. I see this as healthy, not only for customers, but also for the forensics community as a whole.

There are some things that are being said, quite clearly and repeatedly, about this threat. For example, take a look at Wendi's post on the Mandiant blog; she presents some statistics from the M-Trends report that can give you an idea of what to look for if you suspect you've been compromised. I also think that if you view it the right way, and perhaps have a bit of context from other sources, you'll see that this upholds the Least Frequency of Occurrence (LFO) principle that Pete Silberman has described. What this means is that responders and analysts need to look for the anomalies; not the massive spikes in activity, but the small, infrequent things that we may not notice in all the noise on a system or across an infrastructure. The Mandiant folks mention this, and so do the HBGary folks...so, whether you're using LFO or MRI (thanks again, Pete!), or you're looking at Digital DNA, you're looking for what stands out (or should stand out) as anomalous and infrequent.

Okay, so...what about APT?

As I see it, there are three major groups of actors here...the good, the bad, and the ugly victim. The victims are pretty clear. The bad guys are the developers, purveyors and operators of exploits and other mechanisms (i.e., code, malware, etc...call it what you will) for malicious purposes. The good guys are LE, responders, corporate consultants, etc...those folks trying to assist the victims, most often after a data breach.

Now, a number of the good guys have been (or started) posting reports (see the Reports section at the end of this post) illustrating statistics based on the incidents they've responded to and the work they've done. Reading through these, we see a lot of information much like what Wendi included in her post. Perhaps the most important thing, in my mind, is that the numbers and information from these reports indicate that there has been a cultural shift in the bad guys' realm. What I mean by that is that back "in the day", most of what we saw was malware that ran amok on networks, and folks blowing out SubSeven or NetBus to systems so that they could open and close the CD-ROM tray. No more. Systems are being targeted for either the access they provide or the data that they store and process. Malware is being modified just enough so that current AV products don't detect the new variants, and the footprints of that malware are minimized, using mutexes so that a system is only infected once. I attended a conference in Redmond back in November 2009, and in several of the presentations, LE stated that the bad guys are dedicated, patient, smart, well-funded, and that they have an economic goal behind what they're doing.

From my perspective as a responder and analyst, as well as from reading the reports and compiled statistics, what I'm not seeing is a corresponding paradigm shift on the part of the organizations that fall victim to these intrusions and compromises. Intrusions are still going undetected; victims are being notified by external third parties weeks or months after the fact. Systems are still being compromised via SQL injection and the use of poor passwords by administrators.

One thing that really stands out in my mind is that, looking at my own experience as well as the experience of others (via reports and postings on the web), the victims are not experiencing a cultural shift that corresponds to what the bad guys have gone through. Even in the face of information that indicates that the cost of data breaches has increased, organizations continue to be breached. In all fairness, breach attempts are going to happen; however, at least one report indicates that as many as 70% of data breach victims found out about the breach well after the fact, from an external third party.

The point is that the bad guys have identified targets and have an economic stimulus of some kind for attaining their goals. They're dedicated and compartmentalized...someone is dedicated to discovering vulnerabilities, and often it appears to be a different party altogether that employs the exploit and some new piece of malware. For the victims, we're still seeing incident prevention, detection, and response all being secondary or tertiary duties for overworked IT staff...so while the bad guys can dedicate time and resources toward getting into an organization, IF there are dedicated responders within the organization, and IF they have any recent training or experience, and IF anyone actually knows where the data resides...well, you can see my point. From the perspective of a historical military analogy, this appears to be akin to special operations forces attacking villages defended by farmers and shopkeepers.

Maybe I'm way off base here, but this whole discussion of APT seems to be showing us something that's a bit more of an expansive issue. My thinking here is that if those organizations that are storing and processing "sensitive data" (choose your definition du jour for "sensitive data") were to have a corresponding cultural paradigm shift, we might begin to see intrusions detected and responded to in a manner that would provide data and intel to law enforcement, such that there could ultimately be arrests. I know, this is easier said than done...look at the issues that have sprung up around compliance; all compliance really is...is an attempt to mandate or legislate minimum levels of security that organizations should already have had in place. I don't want to cloud the issue (no pun intended), but my overall point here is that maybe law enforcement would be able to make arrests if they had data and intel. As a responder, too often have I arrived on-site for an incident where the customer was informed of an issue by an outside third party; no one knows definitively where critical data resides, there are no logs available, and administrators have already done "nothing", which in reality amounts to an extensive list of actions: removing systems from the network, scanning them with AV, deleting files, and even wiping entire systems.

So we know that the bad guys are having fairly high rates of success compromising systems and infrastructures using, in some cases, well-known vulnerabilities that simply hadn't been patched. We know that in many cases, they don't need to use special privilege escalation exploits, because they get in with Administrator/root/superuser privileges. We know that in most cases, they don't upload massive sets of tools, but instead use native utilities or only one or two malware files. We know that rootkits simply don't have to be used to hide the bad guys' presence...why hide from someone who's not looking for you?

So the take away, for me, from these reports is simply that there needs to be a cultural shift on the part of those who store and process sensitive data, and it has to come from the top down. It's 2010, folks...do we still need to sell infosec to senior management? What should be the CEO's concern...that his email and IM are up and running, or that the sensitive data that his company stores and processes is secure, and his infrastructure monitored?

Reports
7Safe (UK)
Verizon
Mandiant

Addendum: There's a bit of a different perspective on APT and what it really means over at TaoSecurity (here, and commentary on the M-Trends report here). For another view or perspective on the M-Trends report, see what IntelFusion says.

One thing to keep in mind about the reports...remember that they're based on numbers compiled by the respective groups. Each group may have a different customer base and primary line of business when it comes to what they do. What this means is that each report is going to represent a slightly different culture, when it comes not only to the numbers but also to what they represent.

Tuesday, January 26, 2010

Links

Tools
David Kovar has written a tool, in Python, to parse the NTFS $MFT, called analyzeMFT. The tool can be downloaded from this site. I've been using Mark Menz's MFTRipper to parse this data, and having other tools to do this sort of thing available can only be a good thing.

MS article on NTFS $MFT
Lance's article on Detecting Timestamp Changing Utilities

Windows 7 XP Mode

One of the interesting aspects of Windows 7, from both a usability and a digital forensics point of view, is the addition of XP Mode. In short, if you have a system whose processor supports hardware virtualization (be sure to check that out!!), you can install a Windows XP SP3 virtual machine into VPC on Windows 7, and run tools that may not run (or may not run quite as well) on Windows 7. This sort of thing could be very useful from an analyst's perspective...with just one platform, you can run tools that don't rely on the Windows API to parse some data sources, and at the same time, you can run other tools that do require the Windows API, and even a specific version of it.

So, while this can be very useful, there's the question of virtualization and how it affects what the analyst needs to look for when examining a system. Diane Barrett has discussed, in presentations, the artifacts left when someone uses Moka5 or MojoPac, and we're all aware of other virtualization tools and platforms out there...but with XP Mode, it's built into the OS itself.

The key to all this, from a digital forensics perspective, is going to be in determining where the artifacts of interest exist.

XP Mode Resources
Tony Bradley's article
LifeHacker article

AV, Symantec and the Google Thang
Symantec posted something on the Trojan.Hydraq Incident, indicating that it is associated with the Google issue that popped up recently.

Something I find concerning about their write-up is the description of the artifacts. They mention that the Trojan is a DLL and installs as a Windows service with the name "RaS[4 random characters]". Well, that's easy enough to search for across the enterprise...look for any service name that starts with "RaS". The problem is, this isn't the whole story. If the executable file is a DLL, that would indicate that it installs "under" something else, like SvcHost. This would mean that there are other artifacts; specifically, if someone finds a service with the specified name, then they should look at the Parameters subkey for the ServiceDll value...what happens if the name of the file changes from what's listed in the write-up? How about checking the SvcHost key in the Software hive?
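To illustrate that check, here's a rough sketch of the logic a responder might script. This isn't any vendor's tool; the "registry" is mocked up as a nested dict, and the service and DLL names are entirely made up...but the same logic applies against HKLM\SYSTEM\CurrentControlSet\Services on a live system or a mounted hive:

```python
# Sketch: a service name matching the "RaS" pattern is easy to find, but the
# file the service actually runs is named in the Parameters\ServiceDll value,
# not in the service name itself. Registry content is mocked here as a dict;
# all names below are fictional examples.

MOCK_SERVICES = {
    "RaSabcd": {
        "ImagePath": r"%SystemRoot%\System32\svchost.exe -k netsvcs",
        "Parameters": {"ServiceDll": r"C:\WINDOWS\system32\rasmon.dll"},
    },
    "Dhcp": {
        "ImagePath": r"%SystemRoot%\system32\svchost.exe -k netsvcs",
        "Parameters": {"ServiceDll": r"%SystemRoot%\system32\dhcpcsvc.dll"},
    },
}

def suspicious_services(services, prefix="RaS"):
    """Return (name, ServiceDll) for services matching the naming pattern."""
    hits = []
    for name, keys in services.items():
        if name.startswith(prefix):
            # The ServiceDll value lives under the Parameters subkey
            dll = keys.get("Parameters", {}).get("ServiceDll", "(none)")
            hits.append((name, dll))
    return hits

for name, dll in suspicious_services(MOCK_SERVICES):
    print(f"{name} -> ServiceDll: {dll}")
```

The point of the sketch is exactly the point above: search on the ServiceDll value, not just the service name, because the name is the easy thing for the bad guy to change.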

Symantec isn't the only one who doesn't provide a great deal of useful information to folks, either. The MMPC has a write-up on rootkits, and mentions Trojan:W32/AproposMedia...here's their write-up on that one. Googling, I find that EmsiSoft, maker of the a-squared AV product, has something a bit more substantial.

SafeBoot
Didier Stevens has posted about restoring SafeMode with a .reg file, adding a bit more to his info about a virus that deletes the SafeBoot key, tricks to restore SafeBoot, and protecting the SafeBoot key from being deleted. While not an end-all, be-all security approach, it is a good idea to take a look at this and consider making it part of your system setup. After all, where would you be if you didn't have access to a bit of a safety net like SafeBoot?

Safe Mode Boot Options
Safe Mode Boot options for XP (here're the options for Windows 2000)

Interesting Request
I received an interesting request in my email this morning...someone wanted to use one of my Perl scripts in part of their courseware, and was asking if it was okay to do so. I appreciate when people do that, but I didn't recognize the script: sweep.pl. I followed the link provided in the email and downloaded the script...it's a port scanner/banner grabbing script I wrote in 1998! I wouldn't call my skillz 'l33t in any sense, even now...but back then, maybe imaginative. After all, I was doing stuff back then to see if I could, and to see if I really understood the mechanics of what was going on.

Wednesday, January 20, 2010

Interesting Analysis

I was analyzing a Windows 2003 R2 SP2 image recently, and I ran across some interesting things that I thought might be a good idea to share.

One of the things I was doing in my analysis was creating a timeline, and saw some Event Log event records that were interesting, so I created a mini-timeline using just the Event Logs. In this case, the system had 5 .evt files...the three usual ones, and Internet Explorer.evt and WindowsPowerShell.evt. I extracted all five files from the image, and began parsing them with my own tools. At first glance, I noticed that there were Security event records dating back to early October, 2009...logins, Event Logs being cleared, etc.

As my analysis progressed, I needed to drill down on something specific, so I created a micro-timeline comprised of just the Security Event Log records, and noticed that the earliest record was from the first couple of days of December, 2009. But wait...how did my other timeline have Security events going back two months before what was apparently in the Security Event Log?

So then I ran evtrpt.pl against the Security Event Log and confirmed the number of records in the log, as well as the date range. I then proceeded to do the same thing across the other Event Logs, as well, just to be sure...and suddenly, from the WindowsPowerShell.evt log, I found 185 Security records! Ah, that's something! So I opened the file in a hex editor and viewed the ELF_LOGFILE_HEADER structure; the StartOffset and EndOffset values were the same...0x0030...indicating that both values pointed to the end of the ELF_LOGFILE_HEADER structure (which is 48 bytes long), and that there were NO records in the Event Log. In addition, the CurrentRecordNumber value was set to 1, and the OldestRecordNumber was 0...again, no records. I then opened the Event Log in Event Viewer on my analysis system and was again told, no soup for you! I then went back to the hex editor and after scrolling down past a bunch of zeros, I began to see valid event records, all with "Security" as the source!

So what happened?

Well, apparently when the Event Logs were cleared and the Event Log files themselves were reallocated, at least some of the sectors allocated to the new files still contained valid event records. Also, there apparently hadn't been any Windows PowerShell events generated, so no event records were written to that log, and the Windows API (via the Event Viewer) duly reported "no records". However, the tools I've written and use do not rely on the Windows API, and instead parse the EVT files at a binary level; they search for the "magic number" for an event record, back up a DWORD to get the size, read that much information into a buffer, and then compare the initial size value to the last DWORD in the buffer. This is just the first check, but it allows the analyst to locate event record structures in unstructured data, such as within "unallocated space" within the EVT file. So, not only did the 64K file contain 185 valid event records, but I also found the "smoking gun" that turned supposition (up to that point) into cold, hard fact!
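For those curious, that carving logic can be sketched in a few lines of Python (my actual tools are Perl scripts; the "record" in the sample buffer below is fabricated, with only the size DWORDs and the magic number filled in):

```python
import struct

# Sketch of binary-level EVT record carving: find the event record "magic
# number" (LfLe), back up one DWORD to get the record size, and confirm that
# the size is repeated as the last DWORD of the record. This is only the
# first sanity check, but it locates record structures in unstructured data.

MAGIC = b"LfLe"

def carve_evt_records(data):
    """Yield (offset, size) for candidate EVT records in a raw byte buffer."""
    pos = data.find(MAGIC)
    while pos != -1:
        start = pos - 4                      # record begins one DWORD before the magic
        if start >= 0:
            size = struct.unpack_from("<I", data, start)[0]
            if size >= 0x30 and start + size <= len(data):
                tail = struct.unpack_from("<I", data, start + size - 4)[0]
                if tail == size:             # leading and trailing sizes match
                    yield (start, size)
        pos = data.find(MAGIC, pos + 1)

# A fabricated 56-byte "record": size DWORD + magic + padding + closing size DWORD,
# buried in zeroed "unallocated" space within the file.
size = 56
rec = struct.pack("<I", size) + MAGIC + b"\x00" * (size - 12) + struct.pack("<I", size)
buf = b"\x00" * 100 + rec + b"\x00" * 40

print(list(carve_evt_records(buf)))
```

Run against the fabricated buffer, this reports one candidate record at offset 100...which is exactly why carving works where the Event Viewer reports "no records": the API trusts the header, the carver trusts the bytes.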

A brief description of the tools (mentioned previously in this blog): evtrpt.pl is a script I use to tell me the total number of event records within an EVT file, as well as the frequency of sources/IDs, and the date range of all records in the file. I use this to get a quick view of how useful the EVT files may be, or to look for specific sources...like McLogEvent, etc.

An example of the frequency of sources and IDs looks as follows (from test data):
Security 528,2 2
Security 528,5 25
Security 538,3 1
Security 540,3 3
Security 540,8 1
Security 551 2

As you might imagine, that can be pretty useful.
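Generating that sort of listing is trivial once the records have been parsed; a rough Python equivalent of the counting step (the record tuples below are made up test data, not parsed from a real log) might look like:

```python
from collections import Counter

# Sketch: given parsed event records as (source, event_id, event_type) tuples,
# produce a frequency listing in the evtrpt.pl style shown above.
records = [
    ("Security", 528, 2), ("Security", 528, 2),
    ("Security", 540, 3), ("Security", 540, 3), ("Security", 540, 3),
    ("Security", 538, 3),
]

counts = Counter((src, f"{eid},{etype}") for src, eid, etype in records)
for (src, id_type), n in sorted(counts.items()):
    print(f"{src} {id_type} {n}")
```

The output has the same "source ID,type count" shape as the listing above, which makes low-frequency (and therefore interesting) source/ID pairs easy to spot.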

I also use evtparse.pl as part of my timeline tools; I can point it at a single file, or at a directory containing EVT files, and it will parse through them, listing the output in the five-field TLN format. From here, I can create a mini-timeline using just the EVT file output, or add the events to an overall event file that also contains file system metadata info (via TSK fls.exe), etc. I also have a switch that will list only the event record numbers, in sequential order, with their corresponding time_generated times...I use this primarily to see if the system clock has been changed.
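For reference, the five-field TLN format is simply pipe-delimited: time|source|host|user|description. A minimal sketch (the host name and event description below are fabricated examples, not output from the actual tools):

```python
# Sketch: emitting an event in the five-field TLN format used by the
# timeline tools. The time field is a Unix epoch value; source identifies
# the data source (EVT, FILE, REG, etc.).

def to_tln(epoch_time, source, host="", user="", desc=""):
    """Build one five-field TLN line: time|source|host|user|description."""
    return f"{epoch_time}|{source}|{host}|{user}|{desc}"

line = to_tln(1262304000, "EVT", "WEBSRV01", "", "Security/528;2;Successful logon")
print(line)
```

Because every source (EVT records, file system metadata via fls.exe, Registry key LastWrite times, etc.) normalizes to this one format, the mini-, micro-, and overall timelines are just different selections of lines sorted on the first field.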

What this demonstrates is that there can be more sources of data on a Windows system than we're really aware of initially. Because of how the scripts I use were written (i.e., not using the Windows API), with minor modifications, they can also be used to find and extract records from the pagefile, memory, and unallocated space. Evtrpt.pl and an earlier version of evtparse.pl are included in the timeline_tools archive in the Win4n6 Yahoo group, and were mentioned in my second timeline analysis article in the Hakin9 magazine.

As a side note, I also found some pretty cool artifacts in unallocated space within one of the Registry hive files...in this case, the SAM file contained indications of deleted user accounts (and when they had been deleted), which corresponded to other findings in the analysis.

Sunday, January 17, 2010

Analysis Stuff

Metadata
Didier has posted new versions of his PDFiD and pdf-parser tools. Didier's offerings really kind of run the gamut, don't they? Well, hey...it's all good stuff! I mean, really, other than the fact that he's updated these really great tools, what else needs to be said?

Malware
The MMPC posted some new malware descriptions recently, regarding Hamweq and Rimecud. Nice names. Signatures have been added to MRT, apparently.

An interesting aspect of Hamweq is that it apparently drops files in the Recycle Bin, and uses the Installed Components key in the Registry as a persistence mechanism. I wrote up a quick and dirty plugin for the key...entries with StubPath values that point to the Recycle Bin would be suspicious, even if they didn't indicate Hamweq was installed, specifically.
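The plugin logic itself is simple; here's a rough Python sketch (the real plugin is a Perl RegRipper plugin, and the Installed Components entries below are mocked up...the second GUID and its StubPath are entirely fictional):

```python
# Sketch: flag Installed Components entries whose StubPath value points into
# the Recycle Bin. The key/value names are the real ones from the Software
# hive; the dict content below is fabricated for illustration.

INSTALLED_COMPONENTS = {
    "{89820200-ECBD-11cf-8B85-00AA005B4383}": {
        "StubPath": r"C:\WINDOWS\system32\ie4uinit.exe",
    },
    "{00000000-aaaa-bbbb-cccc-000000000001}": {          # fictional entry
        "StubPath": r"C:\RECYCLER\S-1-5-21-1078081533-1036\msupd.exe",
    },
}

def flag_recycler_stubs(components):
    """Return (GUID, StubPath) pairs where the stub lives in the Recycle Bin."""
    hits = []
    for guid, values in components.items():
        stub = values.get("StubPath", "")
        if "recycle" in stub.lower():
            hits.append((guid, stub))
    return hits

for guid, stub in flag_recycler_stubs(INSTALLED_COMPONENTS):
    print(f"suspicious: {guid} -> {stub}")
```

As noted above, a StubPath pointing into the Recycle Bin is suspicious on its face, whether or not it turns out to be Hamweq specifically.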

Other malware has other artifacts and persistence mechanisms. Take Win32.Nekat as an example...this one adds entries to the user's Control Panel\don't load key, effectively removing the applets from view. While not overly sophisticated (I mean, it is native functionality...), something like this would be enough to slow down most users and many admins. And yes, there is a plugin for that (actually, it was pretty trivial to write...one of several that I wrote yesterday).

APT
With the Google thing, there's been considerable discussion of advanced persistent threat, or APT, lately. I'm not going to jump on that bandwagon, as there are a lot of folks smarter than me talking about it, and that's a good thing. Even Hogfly has talked about APT.

I get the "threat" thing, but what I'm not seeing discussed is the "advanced" part. Wendi over at Mandiant blogged about M-Trends and APT, noting some...uhm...trends, such as outbound connections and mechanisms used to avoid anomaly detection. One of the mechanisms listed is "service persistence", which sounds like the malware is installed as a Windows service, a persistence mechanism. While I do think that it's a good idea to talk about this kind of thing, what I'm not seeing a lot of right now is actionable intel. Wendi and Hogfly presented some very useful information, demonstrating that all of this talk still comes down to a few basic questions: Have I been breached, and am I infected? How do I find out? What do I do if I am? How do I protect myself? So someone looks at both posts and uses the information there to see if they've been breached. If they don't find anything, does that mean they're clean? No, not at all...what it means is that you didn't find what you searched for, and that's it. Both posts presented information that someone can use to scour systems, but is that all that's really available?

I think that a very important concept to keep in mind when doing this kind of analysis is what Pete Silberman said about malware; he was absolutely correct when he described it as having the least frequency of occurrence on a system. Think about it. Malware, particularly worms, no longer keeps infecting systems over and over again; instead, it uses a single, unique mutex to say, "hey, I'm infectin' here!". That way, the system doesn't get so massively infected that it stops functioning; not only would that alert folks that something's wrong, but it would also deprive the attacker of the use of the system. So, you can run handle.exe from MS on a live system, and then run the output through handle.pl and see mutants listed by least frequency of occurrence.
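As a rough illustration of what handle.pl does with that output, here's a Python sketch; the "handle.exe output" below is heavily simplified and fabricated (real output has different formatting and more columns), but the count-and-sort idea is the same:

```python
from collections import Counter

# Sketch: tally mutant (mutex) names from handle.exe-style output and list
# them by least frequency of occurrence. The lines and mutex names below
# are fabricated for illustration.

handle_output = """\
smss.exe pid: 524 Mutant BNOLINKS
winlogon.exe pid: 632 Mutant BNOLINKS
svchost.exe pid: 812 Mutant ShimCacheMutex
svchost.exe pid: 812 Mutant ShimCacheMutex
badproc.exe pid: 1337 Mutant x9f-single-infection
"""

# Count the mutant name (last token) on each Mutant line
counts = Counter(line.split()[-1] for line in handle_output.splitlines()
                 if "Mutant" in line)

# Least frequent first: the singletons float to the top of the listing
for name, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{n:4d}  {name}")
```

The mutex that appears exactly once...across one system, or better, across many...is the one worth a second look.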

I'm going to throw this out there...run it up the flagpole and see who salutes, as it were...but I think that the same sort of thing applies to intrusions and breaches, as well. For example, Windows systems have thousands of files, and intruders may be installing some tools to assist them in persistence and propagation, but the fact is that there are a number of native tools that are perfect for what most folks want to do. I mean, why install a tool to locate systems on the network when you can use native tools (ipconfig, netstat, nbtstat, etc.)? So intruders don't have to install or add something to the compromised system, unless the required functionality is not available in a native utility. Remember what Wendi said in her M-Trends post about using services for persistence? How many services do you think there are? Do intruders install their persistence mechanisms 50 or 100 times? No...likely, they only do it once. And they can either hide as svchost.exe, pointed at an executable in another location, or beneath the legit svchost.exe, using the ServiceDll entry (and adding an entry to the appropriate SvcHost key value in the Software hive).

Timeline Analysis
To illustrate my point about least frequency of occurrence, let's talk briefly about timeline analysis. Given the minimalist nature of malware and intrusions, how can we use timeline analysis to our advantage? The approach I've been advocating allows the analyst to see multiple events, from many sources, side-by-side for comparison and analysis.

One of the things that folks ask about with respect to timeline analysis is a graphical means for representing all of the data, in order to assist the analyst. IMHO, this simply does NOT work! I have yet to find a viable means of taking in all of the data from various sources, throwing it into a timeline and hoping to use a graphical representation to pick out those anomalies which happen least often. As an example, check out Geoff Black's CEIC 2007 Timeline Analysis presentation...the fourth slide in the pack illustrates a graphical file system metadata timeline within EnCase. I agree with Geoff's assessment...it's not particularly effective.

Overall, in order to get a grip on APT, we as responders and analysts need to change our mindset. We need to understand that we can't keep looking for spikes in behavior or activity, and there is no Find All Evidence button. When you respond to or analyze a single Windows system, and consider the changes in OS, various applications that can be installed, and the infrastructure itself, what constitutes an anomaly? Honestly, I don't think that this is something a software package can tell you.

I do, however, firmly believe that training and education are the key.

Friday, January 08, 2010

Links

e-Evidence
Christina's updated the e-Evidence What's New site...check it out! There's always some good reading here...

Tools
Speaking of tools, Don's been busy...no, Don's not a tool, but he's posted a tool update over on the SecRipcord blog. In addition to the tool update, Don has a little tip he got from some guys from Mandiant on detecting if malware embedded in or infecting CHM (Windows compiled Help) files might have been run on the system.

Don also links over to IronGeek's forensically interesting spots post. It's a very interesting post, and I noticed that a lot of the Registry locations that IronGeek mentions are covered by RegRipper plugins. Also, something else to think about...IronGeek has done a great job of presenting some interesting places to look for forensic data, but I noticed that under both IE and Firefox, there's no mention of Favorites or Bookmarks, respectively.

I digress for a moment, but I really think that examining some of the areas described in my earlier post can add a great deal of context to the information retrieved via more well-known analysis. For example, one of the first items on many checklists with respect to browser analysis is to check the contents of the TypedURLs key. That's fine, but what if the user's default browser isn't IE? That's right...no entries in this key would be explained by that fact. Now, what if there are some entries in the key, and the default browser is IE? Checking the Favorites list would provide some context to what was being seen in the browser history.

Where this can also be helpful is if some sort of anti-forensic tools or techniques have been employed on the system. I actually received a question along these lines just the other day...someone interested in getting into the field asked me how analysts "deal with" anti-forensic tools, and specifically mentioned timestomp. Well, there are a number of locations on a system that contain timestamps that are not affected by timestomp or other anti-forensic tools/techniques.

It seems that I'm not the only one thinking along these lines...Kristinn has updates to log2timeline that include adding this sort of information to a timeline. Thanks to gl33da for sending me the link to Kristinn's page.

Analysis
Speaking of tools, Daniel Wesemann has a SANS ISC post on the static analysis of malicious PDFs, which is pretty easy to follow along with...and starts out using Didier's PDF tools. Very cool!

Circling back around to browser analysis yet again, I picked this one up from the sausage factory...consider the browser's session restore capabilities, something that's been around for a bit. I use Firefox (I've been using Mozilla since 0.9, from as far back as...'94?) and version 3.5 would crash often. When it did, I'd have to decide whether to restore my sessions or just start new ones. Harry Parsonage posted a PDF on the topic...take a look.

Certs
Daniel Castro had a very interesting article in FCW regarding his thoughts on a national certification program. I think that Daniel made some very important points in his article, one being that an increase in certified individuals hasn't resulted in a corresponding decrease in vulnerabilities, incidents/breaches, and losses. I also agree with Daniel in that a national/federal certification program will result in a check-in-the-box mentality...there are plenty of examples out there to illustrate this occurring.

There are a number of excellent certification and training programs out there, and I really don't think that the issue is the programs themselves; no, it's the individual who gets the certification and what they do with it. Let's say you go away to a training program, work hard, and get the certification, and when you return to your job, you don't use any of the newly-developed skills for 6 or more months...then what? Do you go for a skills review after an incident occurs? If your company is going to send you off to this training, shouldn't there be a purpose for it...like, this person is going to be an integral part of our security plan or CIRT? Wouldn't it then be a good idea to have a plan that starts with getting someone trained, and once they return, get them involved in the program and using those newly-learned skills?

I've seen certified responders walk up to a compromised system, open the Event Viewer, start opening event records, and use Paint to save bitmaps of the entire desktop to the system hard drive as a means of documenting their findings. Seriously. Is that the fault of the certifying organization, or whomever wrote the materials? No, of course not. The material was provided to an individual and they were certified, but what happens after that is in part up to the individual, and in part the responsibility of their management.

Rootkits
Microsoft's Malware Protection Center recently posted some observations on rootkits for 2009...a pretty interesting read. However, I think that, like a lot of AV service write-ups, something like this falls short of what it could be. Many times, AV service write-ups are geared toward "use our product" information, and sometimes don't provide enough detail to be useful in an environment that may already be using that product.

Working as an incident responder over the past several years, a couple of the things I've seen are that (a) a malware infection can devastate an infrastructure, (b) AV products don't protect against everything, and (c) AV services are geared toward using the product.

As an example, just about a year ago, Conficker (and later in the spring, Virut) variants would bring an infrastructure to its knees...literally. Our team would show up, and often find someone from the AV vendor already on-site or engaged. Our observations of the malware characteristics would lead us to a diagnosis, so that we could assist in identifying infected systems, which would be pulled offline and scanned with the updated virus files provided by the vendor. Often things didn't really get rolling until we stepped in because the AV vendor required a sample to analyze, and the customer was drowning.

After the incident settled down, we would have discussions with the customer to help them understand that this was a variant of a previously detected malware family, and that the variant itself was sufficiently different to escape detection by their current, up-to-date AV product.

Anyway...enough of the soapbox, I guess...it's just too early in 2010 for that!

Monday, January 04, 2010

Browser Stuff

Many times during an examination, we'll take a look at the user's browser activity. That might include starting by getting the contents of the TypedURLs Registry key (I see that as a first step in a lot of analysis plans) and parsing an index.dat file or two; however, one thing that I rarely see in an analysis process, in case notes, or in a report is a statement along the lines of, "the first thing I did was determine the default browser, as well as any other browsers that may have been installed and used."

When exploring user activity via the browser, the analyst needs to be sure to:

1. Check to see what the default browser is...yes, there's a RegRipper plugin for that. But keep in mind, this value can change. If the user doesn't uncheck the one box in the default browser dialog that appears, they will be asked each time they launch a browser whether they want to make it their default, so the value can go back and forth.

2. Determine what other browsers may be installed; don't assume that someone's only going to use Firefox. Strike that...don't assume anything. Find out. An easy way to do this is to check the file associations for .htm(l) and .asp(x) files, as well as to just see what software is installed.
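As a sketch of that file association check, with the registry content mocked as a dict (the ProgID and path below are just examples...on a live system or mounted hive you'd query HKEY_CLASSES_ROOT instead):

```python
# Sketch: resolve the handler for .html files the way Windows does...the
# extension maps to a ProgID, and the ProgID's shell\open\command value
# names the executable. All registry content below is a mocked-up example.

MOCK_HKCR = {
    ".html": "FirefoxHTML",
    "FirefoxHTML\\shell\\open\\command":
        r'"C:\Program Files\Mozilla Firefox\firefox.exe" -url "%1"',
}

def html_handler(hkcr):
    """Return the shell\\open\\command for the .html file association."""
    progid = hkcr.get(".html", "")
    return hkcr.get(progid + r"\shell\open\command", "(not found)")

print(html_handler(MOCK_HKCR))
```

If the command here names firefox.exe while the TypedURLs key (an IE artifact) is empty, that's not anti-forensics...that's just the user's default browser.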

3. Other places to check (these are included in RegRipper plugins, by the way...) include the user's UserAssist keys (what have they been launching?), the Uninstall key (for software installations), and which MSIs have been run on the system.
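As a side note on the UserAssist keys: on XP-era systems the value names are ROT13-encoded, which is why a plugin (or a couple of lines of script) is needed to make them readable. The encoded string below is a constructed example:

```python
import codecs

# Sketch: decoding a ROT13-encoded UserAssist value name. The decoded name
# shows what the user launched; the example string here is made up.

encoded = r"HRZR_EHACNGU:P:\Jvaqbjf\flfgrz32\pzq.rkr"
decoded = codecs.decode(encoded, "rot_13")
print(decoded)   # -> UEME_RUNPATH:C:\Windows\system32\cmd.exe
```

Two lines of script, and an opaque value name becomes a record of program execution...which is exactly the kind of quick check that belongs in a browser (or any user activity) analysis plan.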

Many of these are quick checks...several folks who've used RegRipper have said that the tool has reduced Registry analysis from days to minutes, and thanks to the plugins, been more comprehensive than previous processes. So adding these checks to your analysis plan doesn't correlate to a significant increase in the time it takes to conduct your analysis.

One area that I rarely see discussed in browser analysis is the bookmarks file. For Firefox, the bookmarks.html file includes ADD_DATE and LAST_MODIFIED entries for folders, and ADD_DATE and LAST_VISIT entries for the URLs. For IE, you'd look to the Favorites folder, which contains InternetShortcut (.url) files that also include timestamps, in addition to the file MAC times themselves.
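Those ADD_DATE/LAST_VISIT values are Unix epoch times, so pulling them into a readable (or timeline-able) form is straightforward; a quick sketch against a made-up bookmarks.html line:

```python
import re
from datetime import datetime, timezone

# Sketch: extract the ADD_DATE Unix timestamp from a Firefox bookmarks.html
# entry and render it as a UTC time. The HTML fragment is a fabricated example.

line = ('<DT><A HREF="http://www.example.com/" ADD_DATE="1262304000" '
        'LAST_VISIT="1263500000">Example</A>')

m = re.search(r'ADD_DATE="(\d+)"', line)
added = datetime.fromtimestamp(int(m.group(1)), tz=timezone.utc)
print(added.strftime("%Y-%m-%d %H:%M:%S UTC"))
```

From here, the epoch value drops straight into a TLN-style timeline alongside the history entries, giving that "the site was bookmarked, too" context.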

It seems to me that including this information in a timeline...should the investigation necessitate doing so...might be a source of some valuable data. For example, let's say you found an entry of interest in the user's Internet history; would it add some additional (and perhaps significant) context to the overall investigation to know that the web site was in the user's bookmarks/Favorites?

Commercial tools like ProDiscover make it very easy to populate the IE Internet History view from an image, rather than from just a single user's profile. But keep in mind that it isn't IE itself that populates the Internet history artifacts for the user; it's the use of the WinInet APIs. What that means is that any application or tool that uses the WinInet APIs may leave similar artifacts, which is why, during some engagements, some of us have seen an Internet history populated for the Default User. In one particular instance, wget.exe had been launched with System-level privileges; because the tool uses the WinInet APIs, we found clear artifacts of its use in the Default User's Internet history. In that case, the intruder used SQL injection to gain access to the MS SQL Server and ran commands to create an FTP script file, which was then launched via ftp.exe. The script downloaded wget.exe, which the intruder verified was on the system and then used to download additional software.

Another aspect of browser analysis (specifically for IE) is to look for Browser Helper Objects, or "BHOs". From a forensic analysis perspective, some BHOs have been known to be spyware, or worse; Symantec identified BHOs as a common loading point for malware.
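BHOs are registered as CLSID subkeys beneath HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects, so enumerating them is straightforward even when you're working from a reg export rather than a live system. A sketch along those lines, with a made-up export fragment (the CLSIDs and values are illustrative only):

```python
import re

# Illustrative fragment of a "reg export" of the BHO key; a NoExplorer
# value, where present, is documented as keeping that BHO out of Explorer
REG_EXPORT = r'''
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects\{18DF081C-E8AD-4283-A596-FA578C2EBDC3}]
"NoExplorer"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects\{AA58ED58-01DD-4d91-8333-CF10577473F7}]
'''

def bho_clsids(text):
    """Return the CLSIDs of Browser Helper Objects found in the export."""
    pat = r'Browser Helper Objects\\(\{[0-9A-Fa-f-]+\})\]'
    return re.findall(pat, text)

print(bho_clsids(REG_EXPORT))
```

Each CLSID can then be looked up under HKCR\CLSID to identify the DLL it points to.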

This article discusses how to prevent BHOs from loading with the Explorer process, and only loading with IE.

On Firefox, Add-ons may be of interest. Here's a Symantec article that talks about BHOs (IE) and XPCOM (Mozilla).

My point is that sometimes just looking at the user's Internet browsing history may not be enough to really get a solid picture of what's going on. The existence of a particular web site that has been bookmarked or added to the user's Favorites may add valuable context to the examination. BHOs are loaded when the user starts IE, so any action taken by the BHO will be done in the user's context, and therefore will populate the user's Internet history.

So how might you use this in a real-world investigation? Well, if the user has their browser configured to delete the history when the browser is closed, or uses another tool to do so, you may find something of value in the bookmarks. Even if the history hasn't been deleted, you will be able to associate some artifacts with specific user activity.

What about the Trojan Defense? Well, with a comprehensive and thorough malware detection process, you might also include a specific check for BHOs or add-ons to the browser, further closing the door on that issue.

Resources
Firefox 3 Forensics
FoxAnalysis
Firefox Forensics (Machor Software - also Windows and Google Chrome Forensics)
NirSoft Browser Tools
WBF Tool

Addendum, 8 Jan
Opera Files - global history is kept in global.dat; entries have a format that looks similar to IE Favorites .url files:

Webpage Title - Something
http(s)://www.somewebsite.com/page
(possible *nix epoch timestamp)
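Assuming the three-line record layout above (title, URL, epoch timestamp), a quick parser sketch with made-up sample data:

```python
from datetime import datetime, timezone

# Illustrative records in the three-line global.dat layout described above
SAMPLE = """Example Domain
http://www.example.com/
1262304000
Some Other Page
https://www.somewebsite.com/page
1262908800"""

def parse_global_dat(text):
    """Group lines into (title, url, UTC visit time) records."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    for i in range(0, len(lines) - 2, 3):
        title, url, secs = lines[i:i + 3]
        yield title, url, datetime.fromtimestamp(int(secs), tz=timezone.utc)

for rec in parse_global_dat(SAMPLE):
    print(rec)
```

As with the Firefox bookmarks, the converted timestamps can go straight into a timeline.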

Linkilicious in 2010

RegRipper
Paul Stutz sent me a nice email recently, telling me that he'd not only used RegRipper's rip.pl on Linux, but he'd used it to work on the NIST CFReDS hacking case. Paul posted his write-up here, and gave me the go-ahead to post the link. Check it out! What Paul does is use rip.pl and some scripting on Linux to create something of a CLI version of RegRipper, automating a good deal of what RegRipper is capable of in order to solve the challenge. This is a really good view into what someone can do with RegRipper and the tools.

SafeBoot
Didier has written a program that creates an undeletable SafeBoot key. His point is dead on...there is malware that deletes the SafeBoot key so that the user or admin cannot boot into SafeMode and recover the system. This may not be such an issue on XP (thanks to Restore Points) but it can be a pain if your IR plan includes such steps.

I have to tell you, Didier really comes out with some pretty amazing things...check out his article in the first edition of the IntoTheBoxes newsletter, on Windows 7 UserAssist keys. Also, did you know that Didier's pdfid.py tool is used by VirusTotal?

Processes
Claus also has a very good post on tools you can use to view and manage processes on Windows. A number of the tools are GUI, but some are CLI tools, and all of them could be useful during incident response. Take a look at what Claus has compiled...there are some very good options available.

Windows Firewall and Open Ports
Speaking of Claus...he posted recently about opening ports in the Windows Firewall, and that got me to thinking about the times I've examined a Windows system following an incident (intrusion??) and found that there were custom entries in the Registry that allowed traffic to or from the system.
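Those custom entries typically appear as string values under the SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List key. Assuming the colon-delimited port:protocol:scope:mode:name layout seen in XP SP2-era firewall policy values, a small Python sketch for splitting them into fields (the sample values are illustrative):

```python
def parse_open_port(value):
    """Split a GloballyOpenPorts\\List value into its fields.
    Assumed form: port:protocol:scope:mode[:name], name optional."""
    parts = value.split(":")
    port, proto, scope, mode = parts[:4]
    name = ":".join(parts[4:]) if len(parts) > 4 else ""
    return {"port": int(port), "proto": proto, "scope": scope,
            "mode": mode, "name": name}

# Illustrative values as they might appear in the List key; an entry
# like the second one would stand out as anomalous during an exam
for v in ("3389:TCP:*:Enabled:Remote Desktop",
          "4444:TCP:*:Enabled:backdoor"):
    print(parse_open_port(v))
```

Sorting the parsed entries and comparing them against a known-good baseline is a quick way to surface the infrequent, anomalous exceptions.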

Why CIRTs Should Fail
David Bianco's blog post on why your CIRT should fail has some excellent points, and the post is well worth the read. Take a look and let me know what you think about what he says.

In short, if a CIRT reaches what appears to be "success", there's no need for "lessons learned", right? Wrong. Ninety-nine times out of a hundred, when reviewing an incident response performed by a customer (or another analyst), there isn't a logical progression of steps, from A to B to C. Most often, it's step A, make one or two findings, and then speculate your way to Z, at which point, the book is closed on the incident. Seriously. I've been in a customer's war room when someone who hasn't been involved in the incident up to that point walks in (while the incident manager is out of the room) and announces to those in the room that the malware includes a keystroke logger. When asked how he knows that, the response is usually, "that's what hackers do...". Hey, how about we do something different...like maybe collect and analyze a sample of the malware, as well as an infected system, and see if it actually includes keystroke logging capabilities? How about if we work on fact, rather than speculation?

This is the sort of thing that can come out of a lessons learned session: What did we do well, and what could have gone better? Did we go with speculation because it was quicker? Could we have performed the analysis better, or could we have been better prepared?

Finally, there's a lot of talk about "best practices", and going through a lessons learned is one of those best practices for CIRTs and should be part of the CSIRP. If you're going to talk about best practices, why not follow them?

DLP
Richard Bejtlich has an excellent post that addresses thoughts on some excerpts from Randy George's Dark Side of DLP. I have to say that having been involved in a good number of response engagements that involved the (potential or real) exfiltration of sensitive (pick your appropriate definition of "sensitive", be it PCI, HIPAA, NCUA, state notification laws, etc.) data, a lot of this post rings true. First, why bother scanning for where sensitive data is stored if you don't have a policy that states what's appropriate? Really. Okay, so some IT guy runs a scan and finds sensitive data stored on a system, used by a guy in marketing. The IT guy says, "Hey, you can't do that," and the marketing guy's response is, "Says who?"

I also think that Mr. George has a good point with the statement, Managing DLP on a large scale can drag your staff under like a concrete block tied to their ankles. However, I would suggest that "DLP" can be replaced with anything else that extends outside of IT, and doesn't have a policy to support it. Like incident response.

I have seen first-hand how data discovery tools have been used to great effect by organizations, just as I have seen how they've been misused. In one instance, the organization used a DLP tool to locate a very limited amount of PCI data on their network, and when they were subject to an intrusion, were able to use that information (along with other information and analysis) to demonstrate that it was extremely unlikely that the PCI data was discovered by the intruder or exposed. Thanks to the picture we were able to paint for Visa, the customer received only a small fine for having the data in the first place...rather than a huge fine for the data being exposed, plus notification costs and all of the other costs associated with that sort of activity.

One of the biggest things I've seen when responding to incidents is that when trying to prioritize response, no one seems to know where the sensitive data is stored or processed. Knowing what data you have and use, and having the necessary policies in place to describe its storage, processing, authorized access, etc., can go a long way toward preventing exposure during an incident, as well as helping you address your response when (not if) an incident occurs.

Friday, January 01, 2010

First post of 2010

For my first post of 2010, I'm pleased to announce that the first edition of the IntoTheBoxes newsletter is out! Check it out!

I have to say that Don did a great job of putting this together! I'm not a layout editor by any stretch of the imagination, and Don (of SecRipcord and Scout Sniper fame) has done a great job of putting this all together.

I also want to thank everyone who's contributed to this issue, as well as those whose contributions will appear in subsequent issues, for their time and effort in providing an article. Our hope is that we can provide quality, hands-on, practical information that is of use or interest to as many in the field as possible.

As a close-out to 2009, Richard Bejtlich ranked WFA 2/e as one of the three best books he read in 2009! Thanks, Richard...that's high praise, indeed! That sort of establishes a tradition, as WFA 1/e was ranked #3 in 2007!

I wish everyone the best in 2010!