Thursday, August 31, 2006

Getting WEP passphrases

Remember a bit ago when I blogged about getting wireless SSIDs out of the Registry? Well, a little while after that, the guys over at the secureme blog (which hasn't been updated in quite a while) posted about a Wireless Zero Configuration information disclosure issue. Aaron refers to this as a "local abuse" issue...I prefer to think of it as information gathering for the incident responder.

In the blog entry, there's a link to an executable that can be used to extract the WEP key right along with the SSID. The archive includes the source (in a .cpp file) so maybe we can use that information to further parse the contents of the Registry key.

However, the most interesting part of the blog entry is at the very end, where the ability to dump and decrypt the WEP keys has been added to Cain and Abel. The blog mentions version 2.77, but as of today, version 2.9 is available.

You may want to add something like this to your toolkit, particularly if you've already added things like the ability to view passwords in Protected Storage.

Getting user info from an image

A while ago, I posted on how to determine a user's group membership from the Registry files in an image. Andreas Schuster posted a glorious bit of eye candy on the subject, as well.

Sometimes, it pays to revisit or clarify things.

Windows 2000 and XP maintain user and group information in the SAM portion of the Registry, which is in the SAM file on disk. This file is a Registry file and can be parsed using tools such as the Offline Registry Parser. However, the information you would look for is encoded in binary Registry values and needs to be parsed into something understandable. The best source of information for this is the sam.h file included in the source code for Peter Nordahl's chntpw tool. This tool is Linux-based, so it doesn't use the Windows API. Also, it's written in C, which means that the structures it uses are easily translated into other languages, such as (you guessed it) Perl.

For information about users, go to the key SAM\Domains\Account\Users\, where you'll see subkeys that look like "000001F4" and "000003EA". Translate these from hex to decimal (1F4 = 500, 3EA = 1002) and you'll see that you're looking at user RIDs. Within each of these keys, you'll find F and V values, both of which are binary. The F value contains the type of account and various account settings, and the V value contains the user name, full name, comment, password hashes, etc.

The structures I use in my ProScript look like this:

V structure:
# $v is the binary contents of the V value
my $header = substr($v,0,44);
my @vals = unpack("V*",$header);
my $name = _uniToAscii(substr($v,($vals[3] + 0xCC),$vals[4]));
my $fullname = _uniToAscii(substr($v,($vals[6] + 0xCC),$vals[7])) if ($vals[7] > 0);
my $comment = _uniToAscii(substr($v,($vals[9] + 0xCC),$vals[10])) if ($vals[10] > 0);
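By the way, there's nothing magical about _uniToAscii()...the strings in the V value are stored in Unicode (UTF-16LE), so the helper just needs to strip out the nulls. A minimal sketch of such a helper might look like this:

# Strip the null bytes out of a UTF-16LE string from the Registry;
# good enough for ASCII-range user names and comments
sub _uniToAscii {
    my $str = shift;
    $str =~ s/\x00//g;
    return $str;
}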

F structure:
# unpack() string is a little messy, but used this way for clarity
# $f is the binary data of the F value
my @vals = unpack("x8V2x8V2x8V2Vx4vx6vvx12",$f);
# Note: Times maintained by Windows are 64-bit FILETIME objects
#$vals[0] and $vals[1] = lockout time
#$vals[2] and $vals[3] = creation time
#$vals[4] and $vals[5] = login time
my $rid = $vals[6];
my $acb = $vals[7];
my $failedcnt = $vals[8];
my $logins = $vals[9];
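Since those times are maintained as 64-bit FILETIME objects (100-nanosecond intervals since 1 Jan 1601), you'll want to translate each DWORD pair into something usable, such as a Unix epoch time. Here's a sketch of the usual Perl approach...the magic numbers are just the 11,644,473,600-second offset between 1601 and 1970, split into its high and low DWORDs:

# Translate a FILETIME, passed as its low and high DWORDs, into a
# Unix epoch time; e.g., my $created = getTime($vals[2], $vals[3]);
sub getTime {
    my ($lo, $hi) = @_;
    my $t;
    if ($lo == 0 && $hi == 0) {
        $t = 0;
    } else {
        $lo -= 0xd53e8000;
        $hi -= 0x019db1de;
        # 429.4967296 = 2**32 / 10**7
        $t = int($hi * 429.4967296 + $lo / 1e7);
    }
    $t = 0 if ($t < 0);
    return $t;
}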

The $acb value holds the ACB flags settings, which tell us things such as whether the account is locked out, disabled, etc.

In order to get the group membership, you would need to go to the SAM\Domains\Builtin\Aliases key, where you would find subkeys similar to the ones above, only these are the group RIDs. Each of these subkeys contains a C value, which is also a binary data type. The code that I use to parse this structure looks like this (it's messy, I know):

# $keyInfo->{strValueData} is the binary contents of the C value
my @users = ();
my $header = substr($keyInfo->{strValueData},0,0x34);
my @vals = unpack("V*",$header);
my $grpname = _uniToAscii(substr($keyInfo->{strValueData},(0x34 + $vals[4]),$vals[5]));
my $comment = _uniToAscii(substr($keyInfo->{strValueData},(0x34 + $vals[7]),$vals[8]));
my $num = $vals[12];

At this point, $num is the number of group members, and all that's left is to parse through the rest of the data, beginning at the offset located in $vals[10], collecting the SIDs of the users. Again, Andreas's blog entry has some nice graphics to go along with it.
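Translating each binary SID into the familiar S-1-5-21... notation is pretty straightforward, too. Here's a rough sketch, based on the layout of the SID structure itself (one byte each for the revision and subauthority count, a 6-byte big-endian identifier authority, then the subauthorities as little-endian DWORDs):

# Translate a binary SID into its display form; reading just the
# last 4 bytes of the 6-byte authority field is sufficient here
sub _translateSID {
    my $sid = shift;
    my ($rev, $cnt) = unpack("CC", substr($sid, 0, 2));
    my $auth = unpack("N", substr($sid, 4, 4));
    my @subauths = unpack("V$cnt", substr($sid, 8));
    return "S-" . $rev . "-" . $auth . "-" . join("-", @subauths);
}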

There's one other place to look for user account settings on the system. Locate the F value under the SAM\Domains\Account key...this is where the info is kept. Peter's sam.h file contains the actual structure...I haven't written code to parse it yet.

I hope that helps some folks out there. I should probably write a version of the Offline Registry Parser that just dumps this info out of the SAM file.

IR Tools review

I took a look at IR tools recently, and today ran across an SC Magazine review of forensic tools...as it turns out, the review is of IR tools. Overall, I wasn't surprised to see ProDiscoverIR rated as highly as it was...it's an excellent tool/toolkit, and for the price it can't be beat. However, while the article did look at some of the basic capabilities, such as information collection, things such as ProDiscover's use of Perl for scripting weren't addressed.

One thing that got me about the article, though, was something that was said at the beginning of the first paragraph, and then repeated at the beginning of the third paragraph:

Managing security incidents is essentially a problem of forensics.

...and...

Essentially, incident management is a forensic problem.

I can't say that I agree with this. My philosophy has always been that forensic analysis is part of an overall incident response capability, and that incident management is a security management problem.

Within most organizations, there are...people. Okay, a little simple, I know...but the point is that when an incident happens, the response isn't only about forensics. Depending upon the situation, you may have Legal, HR, and other departments involved. Many security incidents have a business impact, as well...compromises of critical servers, malware infections, etc., will at some point be addressed from the perspective of "how does this affect our business?"

Yes, you will need to perform forensics or incident analysis at some point, in order to determine the nature and extent of the issue. In fact, performing an analysis, lumping "forensics" and "incident response" together into "root cause analysis", is absolutely necessary. I can't say it often enough...too many times folks will make assumptions or SWAGs (see the second definition) about the issue, take the system offline, blow the base operating system away and reload it and the data from clean media. Why, then, are we so shocked when the system is p0wned all over again?

The problem with incident response (and IT security, in general) today is that security incidents that lead to IR/forensic activities and analysis are not being viewed as security management issues. Instead, they're being viewed as technical issues best addressed by IT folks...the same folks who are undertrained, understaffed, underpaid, and overworked. Management cannot wash their hands of a security incident by putting it on the shoulders of IT, after that same management infrastructure has made it a priority to do pretty much everything other than prepare for such issues.

In truth, incident management is a security management problem, and security management is a business issue.

Tuesday, August 29, 2006

LiveView

Ever wondered what was going on with a system while it was running? Have you ever been looking at an image in your analysis-tool-of-choice and thought that you'd get a lot more info if you could only boot this puppy?

Now, you can do that...well, at least for Windows systems. Check out LiveView, a Java-based tool from CMU that promises to let you boot your system image in VMWare. It looks very interesting, and I'm itching to give it a try.

If you do decide to try it out, remember this...send feedback in to the guys who produced the tool. Telling them it doesn't work isn't very useful; instead, give them as much information as you can, so that they can improve the tool and make it more useful to everyone.

Addendum: I downloaded and installed LiveView, and everything went pretty smoothly. The Java Runtime Environment (JRE) 5.0 was installed, as was the VMWare DiskMount utility. I happen to have an image available that I downloaded, so I ran LiveView and pointed it toward the image, accepting most of the defaults in the UI. I opted to have the resulting VM automatically run, and interestingly enough, it started right up! This is somewhat different from Richard's experience, but I didn't have any binaries that had been modified. I'm using VMWare Workstation 5.5.2, so I let the VM go through its "found new hardware" shenanigans, and then installed the VMWare Tools. I then rebooted the image and updated the display settings. The image I'm working with is an XP system that seems to have been set up with no password. I'll need to see how effective the NTPasswd disk is on systems like this. Either way...it's very cool. I can see what the running system looked like, and I can snapshot the system prior to installing tools or performing any analysis on it. In the end, I still have the dd image, as well.

Oops! Okay, I wanted to see if I could, in fact, snapshot the VM I was running, and the choices were greyed out on the menu bar. So I figured I would suspend the VM, and then see what's going on in the resulting .vmem file. I chose Suspend...and VMWare bombed. I restarted the VM and tried it again, and was able to get it to work, although it didn't seem to go as smoothly as a "normal" VM session does. Anyway, I got the .vmem file I was looking for, and it's about the right size. Now I have something to work with and run my tools against.

Word of warning...I wasn't able to modify the settings on the VM session, such as increase RAM from 256 MB to 512 MB. This is something to think about if you're setting up a system this way.

Monday, August 28, 2006

Interesting Event IDs

Every now and then, when I analyze a Windows box, the Event Logs end up being pretty important to the issue at hand, and not just because there's login information there. One of the things I tend to look for is Dr Watson events, indicating that there was a crash dump generated.

So, what other interesting things are there in the Event Log?

Steve Bunting has a great article on event IDs that correspond to changes in system time. You could correlate this to information found in the UserAssist key, as well. So, keep your eyes open for the event IDs 520 and 577, particularly if Privilege use is being audited.

Also, I recently exchanged email with a LEO, discussing the various event IDs associated with USB devices being connected to Windows systems. Win2K, for example, has event ID 134 for "arrival" events and ID 135 for "removal" events. This changed for XP, with a possible explanation here. Specifically, at the bottom of the KB article, it says, "After you install the hotfix, Netshell no longer listens for Plug and Play device arrival notifications. Therefore, you are not notified about new devices." Ah...okay.

What're some good ways to go about quickly locating these interesting event IDs in a .evt file?

I like to use the Perl module that I wrote called File::ReadEvt (I've blogged about it before). This module makes it really easy to gather statistics on what's in a .evt file, and it does so without using the MS API...yep, it can be run on a Linux or Mac OS X box. It parses through the binary file and extracts information based on what it finds there.

The archive that contains the module includes a couple of scripts that allow you to either gather statistics about the .evt file in question, or simply dump the contents into something readable. This is particularly useful if you're following the methodology of loading the .evt file into the Event Viewer on a live machine...sometimes you may encounter an error message saying that the .evt file is "corrupt". Run the script against it, and you'll be able to pull out the information.

One cool thing about scripts like this is that I've used them in the past to locate event records that the MS API did not recognize, and therefore did not display. This kind of thing is pretty neat, and may be useful in the course of an investigation.

The module doesn't, however, correlate the data from an event to the message strings located in the message library (DLL file on disk). But that's not really an issue, because you can get a subscription to EventID.net and find out more than you ever wanted to know about a particular event ID.

Okay, so, like, how do I tell, from an image, like, what was being audited while the system was running?

Well, to answer that, we need to go to the MS KB article that addresses this issue exactly. This KB article is specifically for NT 4.0, but there's a blog post here that shows things pretty much stayed the same up through XP. As it happens, I have an image from a Windows system that I downloaded and like to use to test things out. I extracted the Security file from the image, and ran my offline Registry parser against it. The odd thing was this...the KB article never tells you what data type (binary, dword, string, etc.) the value is; regp.pl told me that the type is "REG_NONE". Okay, so I made a quick modification to the script and reran it, and got the data for the value we're looking for:

01 17 f5 77 03 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 00 00 00 03 00 00 00 00 00 00 00 03 00 00 00 09 00 00 00

What does this tell us? Well, first off, auditing was enabled...that's the "01" at the beginning. The next three bytes (17 f5 77) mean...well, I don't know at this point. The last DWORD specifies how many different fields there are...you'll notice that between the first and last DWORDs (ie, 4-byte values) there are 9 DWORDs. From there we can map the different audit sections, and the auditing set for each section corresponds to 1 = success auditing, 2 = failure auditing, and 3 = both.
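Based on that layout (first byte = the enabled flag, last DWORD = the number of audit categories, and one DWORD per category in between), a quick bit of Perl to parse the raw value data might look something like this:

# $data holds the raw binary contents of the value shown above
my %audit = (0 => "None", 1 => "Success", 2 => "Failure", 3 => "Both");
my $enabled = unpack("C", substr($data, 0, 1));
my @dwords = unpack("V*", substr($data, 4));
my $count = $dwords[$#dwords];     # last DWORD = number of categories
print "Auditing enabled: " . ($enabled ? "Yes" : "No") . "\n";
foreach my $i (0 .. $count - 1) {
    print "Category $i : " . $audit{$dwords[$i]} . "\n";
}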

The blog article linked above provides a great way to perform some live testing, and you can use auditpol.exe (if anyone has a credible link to this tool, please share it) to verify this, as well.

Thursday, August 24, 2006

The State of IR

I blogged recently about an InformationWeek article in which Kevin Mandia was quoted; part of that article and one of his quotes is:

One of the worst things users can do if they think their systems have been compromised by a hacker is to shut off their PCs, because doing so prevents an investigator from analyzing the contents of the machine's RAM...

Like others in the IR arena, I'm sure Kevin and his guys are seeing this more and more...the IR activities performed by the on-site IT staff are often very knee-jerk, and little thought is given ahead of time to the questions that need to be answered when you've been compromised or p0wned. Whether the IT folks are doing wide-spread scanning (and filling log files), local scanning at the console (with destructive A/V scans), or simply killing processes and deleting files, these activities are making it difficult for follow-on IR/CF work to be effective.

Not being one to simply say, "hey, you're wrong!", and not offer some alternative or solution, here's my take on what needs to be done...

First, and most importantly, senior management needs to take security (in general, and IR/CF specifically) more seriously...and not just on paper. Back in the day when I was doing mostly vulnerability assessments and pen tests, some organizations actually had IR policies in place and wanted to test them. So they'd contact my employer and have a pen test scheduled...without telling the IT staff. That way, you could see how they'd react.

Right now, senior management (CEO, Board of Directors, shareholders, etc.) are primarily focused on uptime and accessibility...after all, without those, where does revenue come from? Now, what if, when planning for these, they brought security folks in at the ground floor, so that everything was not only accessible and available, but more secure, as well? Okay, that's not reality...but guess what? Security doesn't have to break stuff. No, really. Honest. Measures can be put in place to prevent and detect issues (to include security incidents) without breaking your current system setup.

Also, senior management should invest in their IT staff, through training, job progression, etc. I've seen some really bright, conscientious folks leave the IT arena because they were tired of being the back-office behind-the-scenes folks who got none of the credit and all of the blame. Along those lines, if you send some of your folks off to training, get them certified, and don't provide any avenue for advancement, they'll find someone who will. That's kind of a wasted investment, isn't it? I saw this in the early '90s at a trucking company that first trained some of the IT staff in SmallTalk, and as soon as the project they were on was completed, they all migrated to the local bank, advancing their positions and salaries. When I was involved in the Java training in the mid-90s, the same exact thing happened.

Second, to paraphrase Agent Mulder, "the tools are out there." The licenses for many of the available freeware tools state that you can run the tools as much as you want, as long as you own or administer the systems (be sure to check, either way). So, grab a CD, put the tools and a simple batch file on it, and include one argument...the directory to send the output of the tools to. This can be a thumb drive, USB-connected external HDD, or mapped drive. When something happens, fire up the batch file, and capture that valuable volatile data.
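If batch files aren't your thing, the same idea works just as well in Perl...here's a minimal sketch. The tools listed are examples only, not a definitive set:

#!perl
# Run each tool from the CD, redirecting its output to the directory
# passed as the sole argument (thumb drive, USB HDD, mapped drive)
use strict;

my $dir = shift || die "Usage: collect.pl <output dir>\n";
mkdir $dir unless (-d $dir);

my @tools = ("psloggedon.exe", "tlist.exe -c", "netstat -ano");
foreach my $cmd (@tools) {
    my $name = (split(/\s+/, $cmd))[0];
    $name =~ s/\.exe$//i;
    system("$cmd > $dir\\$name.log 2>&1");
}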

Don't know what tools to use, or where to get them? Check out my first book, or look for my next one coming this spring. Or watch this blog.

Don't know what the data you've collected is telling you? Check out my first book, or look for my next one coming this spring. Or watch this blog.

The bad guys are getting more and more sophisticated, so the training and knowledge of the good guys needs to keep pace in order to narrow the gap.

Thursday, August 17, 2006

Report Writing

I was responding to an email this morning, discussing what I'm including in my next book (or could include) with someone, and he brought up a subject that I hadn't thought of...there are no resources out there for writing reports. And we all know how much technical nerds and security geeks love to write reports.[/sarcasm mode=off]

My first thought on this was, well, everyone has different requirements. I mean, an internal security guy in an FTE position has different requirements for his reports than a consultant does (I've done both), and those are different from LE requirements.

As I started to think more about that, it occurred to me that there are some commonalities in how reports can/should be written, and that perhaps by addressing and discussing them, maybe we can clear up some of the mystery behind the topic.

Before we start, let me give you a bit of context. I started "learning" to write in the military, and the things I wrote most consistently there were personnel reviews. I had to do a thesis in graduate school, and in addition to reports I've written as a consultant and in FTE positions, I've written numerous articles and a book. However, this does NOT make me an expert, so I don't claim to be. Based on my experience, I have developed a style of writing, and when I'm writing a report for a client, I'm still going to feel better if I have someone review it.

That being said, there are a couple of general guidelines I follow when writing my reports.

Follow the KISS principle
Does this mean you should put on a wild, skin-tight suit, paint your face with black and white makeup, and present your report to the client in heavy metal? No. KISS = Keep It Simple, Stupid. This is a well-known, and yet oft-overlooked principle. Julius Caesar followed this principle...what's simpler than "Veni. Vidi. Vici."? I mean...really. How much more simple can you get?

A slight modification of this principle was passed down to me by a commanding officer. He held regular meetings, and as the CommO, I represented a technical discipline. He told me that when it was my turn to talk about issues in my department, if I couldn't summarize the issue on a 3x5 card in 3 (but no more than 5) bullet statements, then I didn't know the issue well enough. And you know something...15 yrs later, that's still true.

Keep the audience in mind
Remember who you're going to be talking to. I've found this to be an extremely valuable thing to remember, whether I was doing pen tests, vulnerability assessments, or now, when I perform forensic analysis. IT guys are usually pretty technical, but the thing about consultants is that many times, we're more specialized than IT guys. Forensic analysts can be even more technically focused, so spilling our grey matter out onto paper isn't going to do much more than confuse some IT guys...what do you think it will do to the IT managers?

I'm not saying IT guys are dumb...that's not the case at all. However, the reason guys like us (consultants, LEs, etc.) get called on-site is usually because the client doesn't have the skill set, nor do they have the time, to perform the work they're asking/paying us to do. So performing the work is only half the battle...the real challenge is to communicate the results (of an assessment or analysis) to the client in a way they can understand and use.

If you're sitting at home playing BuzzWord Bingo right now, put a marker over "value add".

That's our job as professionals. We provide "value add". Sure, we can image systems and dump spreadsheets and analysis in the client's lap, but why not simply tell them what we found?

Don't overwhelm the reader with technical information
This is a follow-on to the previous item. In knowing your audience and who's going to be reading the report, you don't want to overwhelm them with technical detail. For example, you can say "between these dates, these systems were subject to [un]successful XX attacks." You might even add "...a total of n times." That's much better than dumping a spreadsheet in the client's lap, with all of the data.

A good method of implementing this is to include an executive summary, something that someone at the management or senior management level can quickly read, and get an understanding of what happened. The executive summary is best kept short, no longer than a page (unless absolutely necessary), and to the point. Follow that up with some technical detail, describing what happened, and the order in which it happened. Remember, when performing analysis, timeframes add context to what we do and see. The same thing is true for the client. Now, you don't need to give away your whole bag of tricks in this section, but you can and should provide some level of detail, so that your report has credibility. Finally, if you must provide the real down-in-the-weeds technical stuff, put it in a tabbed appendix, ordered by relevance.

If you don't know, and can't prove it, say so
This goes right along with Avoid speculation and too much interpretation, so I'm just going to combine the two. A lot of the reports I've read over the years have included too much of what can be referred to as "artistic license". A lot of really, really smart guys (every time I see this sort of thing, it's come from a guy, not a woman) like to show how smart they are, and many times will do so in their report. I'll give you an example. I've seen reports that will say things like, "the attacker was Russian." Well, how do you know that? Sure, you see in the logs that the attack came from an IP address assigned to the Russian Federation, and you may have found a binary file with Cyrillic characters in the name or strings in the file...but is that definitive? Do these two facts combine to make the original statement true? No, they don't.

One thing we're seeing a lot of in the media is the loss of sensitive data. In his BlackHat presentation, Kevin Mandia mentioned that clients are asking the question, "was sensitive data copied from the system?" Well, if you're looking at an image of a hard drive, what is your answer? In most cases, it's "we don't know, because there isn't enough information." Yes, you may see the sensitive data on the drive, and the last access times on the file(s) may correspond to when the intruder was on the system...but is that definitive proof that the data was actually taken?

Just this summer, the Veterans Administration announced that they'd recovered a laptop that had been stolen...a laptop that contained a great deal of sensitive information. Media reports stated that the FBI had analyzed the laptop and determined that the data had not been compromised, yet the backlash in the public forums regarding this statement was incredible...and to some degree, correct. There are several ways, of varying technical sophistication, that could be used to copy the data from the hard drive without leaving obvious forensic artifacts.

So the point is, if you can't connect the dots, don't. There's an old saying that goes something like this..."it's better to keep your mouth shut and be thought a fool, than to open your mouth and remove all doubt." You may be able to fully justify your thought processes to yourself, and you may become indignant when a co-worker, team member, or even your boss calls you on them, but what do you think will happen when the defense attorney does it?

Well, I hope these meager thoughts have been helpful. What hints, tidbits, or pointers do you use when writing reports?

Wednesday, August 16, 2006

Artifacts

I attended an excellent presentation on IM Forensics at the recent GMU2006 conference, and that got me to thinking...are there resources out there that list forensic artifacts for various IM applications? I know that folks on the LE side dig into this quite often, but when I see/hear it discussed, it usually starts out with "...I read somewhere...". Hhhhmmm. I've thought that with the recent releases of AIM Triton and a new version of Yahoo, I'd take a look at these and document forensic artifacts. Of course, there are other IM applications out there (GAim, Trillian, etc.), so I'd like to start by compiling a list of sites that provide credible information on older versions.

I'd also like to see if there are any resources (sites, blogs, papers, etc.) regarding forensic artifacts for P2P applications, as well. I've looked at LimeWire in the past, but now and again I see questions regarding Bearshare, etc.

Finally, while we're on the topic of artifacts, I'm also interested in talking to (or hearing from) anyone who's willing to share information on artifacts regarding exploits and compromises. One of the questions I get very often is, "what do I look for if I suspect someone has used this exploit?" Sometimes we can determine this sort of thing through testing, and other times we can look at anti-virus web sites to get artifacts for worms and backdoors. Still other times, we stumble across these things by accident.

I'll give you an example of what I'm talking about. Take cross-site scripting and SQL injection attacks against IIS web servers. Sometimes during analysis (it really depends on the situation) I'll run a keyword search for xp_cmdshell across the web server logs. If I get hits, I then use Perl scripts to extract the information from the logs into an easy to manage format.
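A skeleton of that sort of script might look like the following...field positions in IIS W3C logs depend on the #Fields header, so treat the indexes here as placeholders:

# Pull lines containing xp_cmdshell out of an IIS log, skipping the
# W3C header lines; adjust which fields get printed to match the
# log's #Fields directive
use strict;

my $file = shift || die "Usage: xpcmd.pl <iis log file>\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
while (my $line = <FH>) {
    next if ($line =~ m/^#/);
    next unless ($line =~ m/xp_cmdshell/i);
    my @fields = split(/\s+/, $line);
    print $fields[0] . " " . $fields[1] . "," . $line;
}
close(FH);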

This is the sort of thing I'm interested in...mostly because I know others are interested, as well, because I hear the questions. Besides looking at Registry and file system artifacts, this might be an interesting avenue to pursue in Windows memory analysis, as well.

Blogs and Podcasts

I'm familiar with the Cyberspeak podcasts, and various security-related blogs such as TaoSecurity, etc.

What are some other podcasts out there that are related to forensics? I'm familiar with some of the more general-IT-oriented podcasts, but was wondering if there are others out there that cover topics in forensics.

The same is true with blogs...I'll do searches for blogs related to security and find quite a lot, but I'm interested in those focused on forensics.

Finally, don't forget webinars and even groups, such as the Windows Forensic Analysis group.

Thanks!

Where to start?

Ever have one of those days when you load up an image, check your notes...and simply have no clue where to begin? I'm not talking about your general brain cramp...no, you simply have no idea where to start. Kind of like when someone hands you a drive and tells you to find "all the bad stuff". Oy.

Many times, simply looking into the file system may get you started...what applications are installed (based on the directory structure), what users are there and seem to be active, etc. However, this sort of meandering may only get you so far, and if you're on an hourly clock, it's also going to get you in hot water with the boss (be it your boss-boss, or your customer/boss).

So, what do you do? The approach I generally take is to try and narrow down the time window for the incident, either by interviewing the client, or reviewing their incident report. Look for things within the report (or via the interview) that narrow down the time frame...when did you first notice something? Some folks have monitoring software on their network and/or systems that logs or emails them alerts...these generally have a time stamp of some kind. This is a good place to start, but one HUGE caveat is this...do not think that this sort of thing is the cornerstone of your analysis. I have seen time and time again where something will happen that makes the client think they've been compromised, and once you start your analysis, you see clearly that the system had really been compromised much earlier, perhaps even multiple times.

I know a lot of folks who check the "usual suspects" when it comes to Registry keys, primarily looking for recently viewed documents and checking autostart locations. Also, running searches of the Registry and file system, using keywords derived from process names and application pop-ups is a good idea.

One thing that I've found to be a really good step to add to my methodology is to go into the Registry and sort the Services subkey in the appropriate ControlSet, based on the LastWrite times of the keys. Most of the keys will show times from when the OS was installed, but I've used this several times to successfully locate backdoors and even rootkits.
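There's nothing fancy to the sorting itself...assuming you've already extracted the subkey names and LastWrite times into a hash (say, with something like the Offline Registry Parser), it's a couple of lines of Perl:

# %svc maps each Services subkey name to its LastWrite time (as a
# Unix epoch value); list the most recently modified keys first
foreach my $key (sort { $svc{$b} <=> $svc{$a} } keys %svc) {
    printf "%-40s %s\n", $key, scalar(gmtime($svc{$key}));
}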

Another thing I like to do is use the File::ReadEvt Perl module to generate statistics about the Event Logs. The module ships with two scripts, one to get simple statistics, and another to simply output the event records to STDOUT. Over time, I will be posting additional scripts (based on what I've developed for various analysis tasks), and I will also be including them in my next book. Another thing that's very helpful with parsing and analyzing the contents of the Event Logs is a subscription to EventID.net. Besides things like logins, etc., another event you may want to look for is Dr. Watson...many times, this may be an indication of an exploit being run.

Gotchas - when working with times, a big GOTCHA is accounting for time zones. One thing that can throw you is how your analysis tool displays the times...ProDiscover gets the UTC/GMT times from Windows FILETIME objects, and displays them using the time zone settings of the analysis workstation. Many of the tools (Perl scripts, as well as ProScripts) I've written display times in UTC/GMT format. One of the things on my personal TO-DO list is to update tools that display times so that the ActiveTimeBias can be retrieved from the Registry and passed as an argument (in the case of ProScripts, retrieve this value automatically). Also, many text based logs (IIS, A/V tools, etc.) write times to ASCII text files using the current settings on the system, including the time zone settings. If I have an image of a hard drive from, say, the Pacific Standard Time (PST) time zone, and I'm in EST, and some of my tools are displaying the times in UTC/GMT...while the times themselves are "accurate", they still need to be translated.
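The translation itself is simple enough once you have the bias. ActiveTimeBias (found under the TimeZoneInformation key in the System hive) is the number of minutes to add to local time to get UTC, so going the other way looks like this:

# $utc is a UTC epoch time; $bias is the ActiveTimeBias value in
# minutes (e.g., 300 for EST). Format the shifted value with gmtime()
# so Perl doesn't apply the analysis workstation's time zone on top.
my $local = $utc - ($bias * 60);
print "Local time: " . scalar(gmtime($local)) . "\n";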

So what do you do when you're stuck? What are some tips you have for getting out of a rut when analyzing an image of a Windows system?

Friday, August 11, 2006

Week in Review, plus some

I spent this past week at GMU2006...well, not all week, just parts of the days...mostly the mornings. I was originally scheduled for five presentations...an opening session, and two rounds each of Tracking USB Devices and Windows Memory Analysis. I later found out that I had also been scheduled for Windows Live Response, given twice on Friday morning.

Overall, I think the GMU2006 conference was good...like most conferences, you get out of it what you put in. It was a great opportunity for me to see some old faces, meet some new folks, and put faces to names. To my surprise, I got to meet AAron Walters, who came to town twice. AAron's a bright guy with a lot of really good ideas.

The downside of a conference like this is that while I'm presenting, a lot of good presentations are going on at the same time. I got to sit in on IM Forensics by Charles Giglia, but missed his MySpace Forensics presentation. I missed other presentations by folks like Jesse Kornblum, Cynthia Hetherington, and Terri Gudaitis. However, in my presentations, I got some good comments and questions, and had a couple of really good side conversations with some folks.

If you're the GWU student who talked to me on Friday morning...drop me a line. I'm always happy to help.

Anyway, I made a couple of comments during my opening session talk, and during my presentations that cybercrime is increasing in sophistication, and that there's a widening gap between what "we" (forensic analysts, first responders and sysadmins) do or are capable of doing, and what the bad guys do. I can't say that there was a strong reaction either way, but this morning I read this article, where Kevin Mandia was quoted. In the article, Kevin talks about the increased/ing sophistication of cybercrime, and the widening gap between the good guys and bad guys.

As I mentioned, I presented on Windows Memory Analysis, and interestingly this appeared in the article:

One of the worst things users can do if they think their systems have been compromised by a hacker is to shut off their PCs, because doing so prevents an investigator from analyzing the contents of the machine's RAM, which often contains useful forensic evidence, Mandia said.

The paragraph that followed this one also provided some interesting insight. This shows that this sort of thing is being done (ie, RAM is being collected and analyzed), it's being done by some smart folks, and valuable information is being used to solve cases. More importantly, the traditional approach most folks use doesn't include collecting this information.

If you have any questions about the conference, my presentations, or about anything...drop me a line.

Addendum, 17 Aug: During my presentations at GMU2006, I talked about live response and the need for it...in some of the presentations, the discussion was tangential, but in the actual Windows Live Response presentation, I addressed it directly. One of the reasons for conducting live response is that downtimes of systems aren't measured in minutes, but in dollars per minute. For the most part, it sort of looks like I'm making this up...I really don't have anything to reference, other than professional experience. Well, I found a link to an article at DarkReading this morning that talks about the cost of a hack. One of the bullet statements in the article references a Yankee Group survey that indicates that some companies measure downtime in thousands of dollars per hour.

Thursday, August 03, 2006

What is "forensically sound"?

Mike Murr over on the Forensic Computing blog posed an interesting question yesterday, surrounding the definition of "forensically sound". Mike made some interesting points...I suggest you read through them and ponder the idea for a bit.

This was also picked up by Richard Bejtlich at TaoSecurity.

My thoughts on this are that it's an important and timely question, really...what is "forensically sound" evidence? Given that potential sources of evidence no longer consist of simply hard drives, but now also include volatile memory, the network, and non-hard drive sources such as cell phones, PDAs, thumb drives, digital cameras, etc., maybe it's about time for another definition.

I'm not usually a big fan of massive cross-posting, but this is an important issue...the current definition doesn't bode well for live response and acquisitions. So, if you're so inclined, read up and add a comment.

Addendum, 4 Aug: It looks like we may be closer to a definition. I'm copying and pasting this definition from a comment I made on the TaoSecurity blog:

"A forensically sound duplicate is obtained in a manner that does not materially alter the source evidence, except to the minimum extent necessary to obtain the evidence. The manner used to obtain the evidence must be documented, and should be justified to the extent applicable."

The second sentence is something I added. Having been in the military, I'll just say that the placement of "must" and "should" was purposeful and intended.

Thoughts?

NoVA Sec Founded

Richard Bejtlich recently started NoVA Sec, billed as "Pure technical gatherings for security professionals in the northern Virginia area. Check your certifications at the door." Sounds cool. One of the folks who commented made reference to the 2600 meetings that used to occur in the area.

Unfortunately, yours truly is disqualified for admission, beginning with the very first meeting. According to the announcement, "The price of admission is a laptop running something other than Microsoft Windows."

Oh, well. Can someone tell me how the meeting goes? Thanks.

Addendum, 4 Aug: Okay, it looks like the definition of "running" isn't as explicit as I originally thought. I'll try to come up with something exotic and show up...

Looking at IR Tools

When performing incident response (most notably in a live situation), I find myself thinking, "There's gotta be an easier way." I've faced some of the very same situations you have...a multitude of systems that are physically/geographically distributed, but I can reach them via the network, Windows servers configured with no NetBIOS so you can't log into them remotely, etc. In all situations, however, the basic requirements are the same...the systems have to be examined live, can't be turned off, and you have to find out what, if anything, is going on. Basically, play "Where's Waldo" with malware.

The question has been...how best to do so? Well, I originally put together the FSP for this purpose. I wanted something that could be put together, was flexible and easy to use, and minimized the impact on the system. With the FSP, an investigator can put together a distribution CD, send it out to remote locations (or a first responder can download the files and burn them to CD), and the client will connect to the server over a TCP/IP connection to transfer the data that it collects.

What about other distribution methods, such as collecting information over the network? Well, if you're able to log in remotely to Windows systems, you can use a combination of tools such as psexec.exe and WMI to collect information remotely. In fact, some of the tools I've created for use with the FSP use Perl to access WMI.
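As a quick illustration (this isn't a complete collection script, just a sketch of what Perl and WMI look like together, via Win32::OLE):

# Connect to the WMI service on a remote system (or "." for the
# local host) and list the running processes
use strict;
use Win32::OLE qw(in);

my $host = shift || ".";
my $wmi = Win32::OLE->GetObject("winmgmts:\\\\$host\\root\\cimv2")
    || die "Could not connect to WMI on $host\n";
foreach my $proc (in($wmi->InstancesOf("Win32_Process"))) {
    print $proc->{ProcessId} . "\t" . $proc->{Name} . "\n";
}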

Recently, I took a look at other tools that are available in this space. Let me start by saying a couple of things. First, what I'm going to say is based only on my initial impressions, not on extensive testing. When I began looking at these tools, I didn't have a set of criteria to evaluate them against. Rather, I wanted to get familiar with them, see how they were set up, how they functioned, etc. More extensive testing will come after I have a better understanding of the tools themselves. Part of the reason is that while the tools generally fit into the IR space, they are all different...different capabilities, functions, deployment options, etc.

Second, I do not make any claims that this is a complete list of the tools that are available. These are just the ones I'm aware of. If you find others, please don't hesitate to comment here, or drop me a line.

Finally, when I set out on this exercise, my notional testing infrastructure consisted of an XP Home SP2 laptop running VMWare Workstation 5.x and a Windows 2000 SP4 VM client. This gives me a semblance of a "network".

Now, let's take a look at the tools:

Mandiant's First Response - When I first downloaded First Response, I was concerned by the size...23+MB, plus an additional 22MB for the .NET framework. That's a lot of code, but it gets installed on one system, from which the agent files are distributed. Setting things up was relatively easy, and I was presented with a nice GUI (hey, I'm a fan of the command line, but I have no illusions that in order to penetrate the market, you NEED a GUI). I created the agent files with no trouble, and then copied them to a thumb drive. I had taken some time to peruse the Mandiant forums and seen how Steve Bunting had posted about an issue with distributing the FRAgent files. In a nutshell, the tool uses SSL to encrypt the data it sends, and creates individual keys for each system - so if you run the tools from a thumb drive on one system, you have to then "clean up" some of the files written to the thumb drive prior to running the agent on another system (from the thumb drive). This is inherent to the current version of First Response, and Dave Merkel indicated that this may change in the future. So, I copied the agent files to a directory on the Windows 2000 "guest" and attempted to install the agent (it installs as a Win32 service on the system...something to keep in mind); I say "attempted" because I received an error message about not being able to find a DLL that was clearly visible in the PATH.

Hhhhmmm. Okay, I then installed the agent files on the XP Home SP2 host system, and used the console to connect to the FRAgent running on 127.0.0.1. Things ran fine at that point, and I told the console to perform a "General Audit". After a bit, I could view all of the information that was collected...which was a lot. First Response grabs a pretty comprehensive set of information (Processes, Ports, Registry, file info, etc.) during a General Audit, and everything can be stored locally, and even exported to other formats (.csv, .txt, etc.).

I think that First Response is best used if installed prior to an incident. The agent can be installed and running on servers, and if an incident occurs, the administrator can collect information from all of the systems, and then archive it and send it to someone for analysis.

Now, I should point out something very odd that I found. When I ran the General Audit on localhost, I noticed a connection in the Ports report...fragent.exe was connecting to an IP address off of my network, at nLayer/Akamai Technologies, via port 80. I posted this to the Mandiant forums, and will see if I can replicate this behaviour. At Matt Pepe's suggestion, if I see this again, I'll attempt to verify it with other tools, including WireShark.

Nigilant32 - Nigilant32 is a nice little tool from Agile Risk Management LLC...and I do mean little. It's a GUI tool, but still weighs in at under 1MB when you download the archive and unzip it. You can copy Nigilant32 to a thumb drive, and walk it around to various systems, launch the GUI, snapshot the system, save the retrieved data to the thumb drive, and move on.

The snapshot of data collected by Nigilant32 is comprehensive, but not easily parsed. Opening the snapshot in Notepad, you don't easily see all processes in relation to each other (due to how the information is formatted) and you have to go to another section of the file to see what ports are open.

Something nice about Nigilant32 is that you can dump the contents of RAM. I did that, and ran the dump through lsproc.pl and got the basic information about all the processes in RAM...very nice.

The one thing I didn't like about Nigilant32 is that when you launch the app, you're presented with a splashscreen. In my experience, the splashscreen should be visible as the app loads, or just for a couple of seconds. However, with Nigilant32, you have to click on the splashscreen to access the GUI. Annoying, but Nigilant32 is still simple and easy to use; one still needs some means of performing data reduction and correlation, though.

RPIER - This is an interesting little tool I came across from Intel, available on SourceForge. Unfortunately, I wasn't able to test it...I unzipped the archive and tried to run it, and the app complained that it couldn't find "mscoree.dll". I checked the archive, and didn't find any such file. I used Dependency Walker to verify that the EXE file did indeed rely on that DLL. The funny thing was, Mandiant's tool ships with a file by that name.

Looking at the files in the archive, it seems that RPIER is similar to the FSP, in that it uses and launches external tools to collect at least some of the information. Reading through the documentation on RPIER, it's a bit more mature than FSP, in that it has a GUI interface, comes pre-assembled, and the files can be uploaded to a properly configured web server, whereas the FSP doesn't write any files to the system being examined (it sends the information it collects out over a TCP/IP socket to the waiting server).

I've emailed one of the authors to see about resolving the issue...I'll test it out when the DLL issue is sorted out.

ProDiscover IR - This is the only non-free tool I looked at, and I'll tell you right up front...I've had a license for PD since around version 3.x. In fact, in the past year, I've been using it quite a bit, and it's my favorite tool for forensic analysis of images, by far. I've even gone so far as to convert EnCase files from their native format to dd-format so I can open them in ProDiscover.

PD/IR comes with an incident response capability...basically, an agent that can be either pushed out (via remote login) or distributed on CD, thumb drive, etc. Using the GUI to collect information can be complicated, but ProDiscover comes with the ProScript/Perl scripting language that allows you to automate sending the agent, collecting information, and then deleting the agent from the remote system. With multiple distribution methods and the ability to automate distribution and data collection, ProDiscover is the most flexible tool that I looked at.

Also, like Nigilant32, you can use ProDiscover IR to collect the contents of RAM from the remote system...only with PD, you can do it over the network.

In a nutshell, most of the tools I looked at have their uses and strong points. The one commonality amongst all of the tools is the lack of data reduction and correlation capabilities. However, that's not a bad thing...it's something that will come along as people use the tools, and as the tools mature. Collecting data is easy...analyzing the data is the hard part, and requires some skill and dedication. Like most incident responders, I've seen experienced administrators (and experienced responders, as well) look at the exact same data I'm looking at, and not see anything "unusual", or focus on the wrong thing simply because they aren't familiar with it. Data reduction and correlation tools can be crafted pretty easily...my preference and experience in doing so is with Perl...but at some point, a person needs to review the data and decide what's what. For example, if you're using any of these tools in a pretty static environment, such as a server farm or data center, you can pretty easily build a simple database (using MySQL or SQLite) of known-good processes, ports, etc. That way, after you run your collection tool(s), you can run the data through a parsing mechanism that filters out the known-good stuff...data reduction...leaving things that you need to look at (and may end up adding to the known-good list).
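To illustrate the known-good idea, here's a rough sketch using DBI and DBD::SQLite...the table name and input format are assumptions, just to show the shape of the thing:

# Read collected process data (CSV, process name in the first field)
# from STDIN and print only the lines whose name isn't already in
# the known-good table
use strict;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=knowngood.db", "", "",
                       { RaiseError => 1 });
my $sth = $dbh->prepare("SELECT COUNT(*) FROM procs WHERE name = ?");
while (my $line = <STDIN>) {
    chomp($line);
    my $name = lc((split(/,/, $line))[0]);
    $sth->execute($name);
    my ($known) = $sth->fetchrow_array();
    print $line . "\n" unless ($known);
}
$dbh->disconnect();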

Again, please keep in mind that this "review" of the tools is just an initial blush, and I haven't explored all of the capabilities of each of them.

As always, comments/questions are welcome.

Addendum, 3 Aug: I was informed that I'd missed a tool that should have been included, WFT, which seems to have gained its notoriety through SANS. I downloaded it from the web site, and at first blush it's very similar to the FSP, in that you have to go out and get the tools you want to use with it. I wasn't able to easily locate anything in the zipped archive or at the web site that indicated where you can go to get some of the tools...some are obvious to me (such as those from SysInternals.com), while others aren't. There is one...mac.exe...that looks familiar (in name only), and may be one I wrote a while back, as according to the WFT configuration file, it was "compiled" with Perl2Exe.

I haven't run the tool yet, but from presentations and screenshots available on the web site, it looks like WFT reports its output in a nice HTML format, in addition to saving the raw output of the commands that are run. Something like this is very useful, and is a step up from a simple batch file...WFT generates/verifies MD5 checksums of the tools listed in the configuration file.

All of these tools collect a good deal of information...some more than others. Tools such as WFT and FSP allow the user to configure how much information is collected (like the FSP, WFT is capable of using other configuration files besides the default), but the issue of data reduction, correlation, and analysis remains.

PSS: Okay, looking around, I've found other first response/IR toolkits; one at FIRST, and one called IRCR.

New Hashing

I blogged the other day regarding Jesse Kornblum's "fuzzy hashing" technique...I can't wait until Tuesday next week to see his presentation at GMU2006. I think this would be very useful in cases in which you're able to use tools such as lspi.pl to extract a binary image from a Windows memory dump.

Andreas posted last night on "authenticating" a reconstructed binary by hashing the immutable sections separately. This, too, is a good idea, but as with Jesse's technique, it changes how things are done now with regard to hashing. Right now, there are lists of file names, metadata, and hashes made available through various sources (NIST, vendors, etc.) that are used as references. There are lists of known-bad files, known-good files, malware, etc. These lists are good for static, old-school data reduction techniques (which are still valid). However, as we continue moving down the road to more sophisticated systems and crimes, we need to adapt our techniques, as well. Tools and techniques such as "fuzzy" hashing and "piecewise" hashing (on a by-section basis, based on the contents of the PE headers) will only help us.

Tuesday, August 01, 2006

All hail the FATKit!

It's times like this that I really need icons for my blog entries...while many would rate a nice Homer Simpson icon, this one rates an icon of Bender from "Futurama" (here's another good one).

Aaron Walters and Nick Petroni have put together something called the FATKit: The Forensic Analysis Toolkit. I've known about the site for a while, and while it looks interesting, there hasn't been a lot of stuff up there...until now. Aaron posted a whitepaper that, among other things, gives us a view of what the FATKit is all about...and to be honest, it looks AWESOME!

Something like this is incredibly useful, providing a layer of abstraction for the analyst while maintaining the integrity of the underlying data. Very cool...these guys have done a lot of great work. If you get a chance, take a look at the paper and either comment here, or send your comments directly to Aaron and Nick.

As you can see from the bottom of the page, Aaron will be at DFRWS, and is putting together a BoF with Andreas Schuster...makes me wish I could go! I only hope that someone attends, and sends in a write up on how things went.

GMU2006 Presentations Posted

I've posted my presentations for GMU2006, which starts next week. I'm posting them so that anyone attending can get a look at the presentations ahead of time, and so that those folks who aren't attending can see the same thing. One word of warning, though...I'm not a huge proponent of "death by PowerPoint"...meaning that my entire script isn't on the slides. However, there should be more than enough there to give you a really good idea of what's going to be said. Also, I tend to be very interactive, discussing topics with the attendees rather than at them.

So, the archive contains my presentations on Windows memory analysis and tracking USB devices across Windows systems. There is a third presentation, as well, that is for the opening session...the folks running the conference asked me late last week to speak, and I thought I'd fill the time talking about issues we're facing as a community.

If you download the presentations and have questions or comments, please feel free to share them here, or with me directly.

Addendum 1 Aug: Okay, here's my speaking schedule for the conference:

Mon, 7 Aug: Opening session, 9:30 - 10:30AM
Tues, 8 Aug: "Tracking USB Devices", 10AM
Wed, 9 Aug: "Windows Memory Analysis", 10AM
Thu, 10 Aug: "Tracking USB Devices", 8AM
Thu, 10 Aug: "Windows Memory Analysis", 10AM

In addition, there are a lot of great presentations going on...Cynthia Hetherington is a fantastic speaker, and Terri Gudaitis is giving her "Cybercrime Profiling" presentation again - it's always a winner. Also, I just spoke with Jesse Kornblum and he's only going to be on-site for his presentation on "Fuzzy Hashing" at 8am on Tues - boys and girls, Jesse's presentation on "Fuzzy Hashing" is a MUST SEE, even if you've listened to his Cyberspeak podcast interview!

I'd suggest buying Jesse a beer, but I don't know how he feels about imbibing that early in the day. I, however, am of the firm belief that it's 5 o'clock somewhere. ;-)