Monday, January 31, 2005

Registry Mining

You're probably asking, "What is Registry mining?" Well, have you heard of "data mining"? That's where companies hire consultants and build huge databases of business data, and then "mine" them for business intelligence, trending information, etc. I'd like to discuss how this sort of thing can be applied to the Registry.

One way to look at the Registry is the way Microsoft presents it. You know, that little warning at the beginning of every KB article that mentions the Registry, the one that reads something like, "Abandon hope, all ye who enter here." Hey, just kidding! The Registry is a binary database of information about the configuration of a Windows system. It replaces the text-based configuration files that you're familiar with from Windows 3.1. Yes, Virginia, there was a Windows before Windows 95! (As a historical note, I have actually seen Windows 3.0 running...seriously!)

In some ways, the Registry can also be considered to be a log file, of sorts. Before you think I've gone off of the deep end, consider this...when certain activity occurs on a Windows system, the Registry will be updated; keys will be added or deleted or simply updated. One way to track those changes is to take successive snapshots of the Registry over time. Since most of us don't do that, another way of tracking changes to the Registry is to get the LastWrite time from Registry keys. This value is similar to the last modification time on files, in that the time indicates when the key was last modified. This information can be retrieved (I haven't *yet* found a public API for setting or modifying this value) and used to correlate other times (ie, file MAC times, times Event Log entries are created, etc.) on your system. I wrote a Perl script called "keytime.pl" that's included in my book, and will retrieve the LastWrite time from keys that you pass to it. Also, the latest version of the First Responder Utility (FRU) includes this capability inherently as part of the functionality.
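
To illustrate what I mean, here's a minimal sketch (this is not keytime.pl from the book, and the key name is only an example) that uses the Win32API::Registry module to pull a key's LastWrite time and convert the FILETIME into something readable:

#! c:\perl\bin\perl.exe
# Sketch: print the LastWrite time of a Registry key.
# Not keytime.pl from the book; the key below is just an example.
use strict;
use Win32API::Registry qw(:ALL);

my $keypath = "SYSTEM\\MountedDevices";

my $hKey;
RegOpenKeyEx(HKEY_LOCAL_MACHINE, $keypath, 0, KEY_READ, $hKey)
    or die "Could not open $keypath: ".regLastError()."\n";

# The final parameter receives the key's LastWrite time as a packed FILETIME
my $ft;
RegQueryInfoKey($hKey, [], [], [], [], [], [], [], [], [], [], $ft)
    or die "Could not query $keypath: ".regLastError()."\n";
RegCloseKey($hKey);

# A FILETIME is a count of 100-nanosecond intervals since 1 Jan 1601 (UTC);
# convert it to seconds since the Unix epoch for gmtime()
my ($lo, $hi) = unpack("VV", $ft);
my $secs = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;
print "$keypath LastWrite: ".gmtime($secs)." UTC\n";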

Now, on to the mining portion of this entry. ;-) The Registry is full of data relating to how the system is configured. For example, the HKLM\System\MountedDevices key maintains a database of persistent volume names mounted to the NTFS file system. This includes CDs/DVDs, USB-connected storage devices, etc. Whenever a volume is mounted and assigned a drive letter, the assignment is listed in this key. If you want to tie that information to a specific user, check the HKEY_USERS key...specifically, HKEY_USERS\{SID}\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2. You'll find a lot of subkeys beneath this key, in many cases, and the names of those subkeys will be GUIDs (those long strings of characters encapsulated in curly brackets). You'll also find subkeys whose names are standard drive letters (C, D, etc.). There's a subkey named "CPC\Volume", beneath which are subkeys that look like GUIDs. If you open these keys in RegEdit and right-click on the Data value (choose "Modify Binary Data" to open a dialog that lets you see the...uh...data), you'll see information similar to what you saw in the MountedDevices key.
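
If you'd rather not click through RegEdit, a few lines of Perl will dump the same information. This is just a sketch using the Win32::TieRegistry module (the decoding of the value data is deliberately simplistic):

#! c:\perl\bin\perl.exe
# Sketch: dump value names and data from HKLM\System\MountedDevices.
# The "/" delimiter keeps the backslashes in value names from being split.
use strict;
use Win32::TieRegistry(Delimiter => "/");

my $md = $Registry->{"LMachine/SYSTEM/MountedDevices/"}
    or die "Could not open MountedDevices: $^E\n";

foreach my $name (keys %$md) {
    my $data = $md->{$name};
    (my $label = $name) =~ s{^/}{};        # value names are prefixed with the delimiter
    (my $text  = $data) =~ s/\x00//g;      # USB volumes store a UTF-16LE device string
    if ($text =~ /^[\x20-\x7e]+$/) {
        print "$label -> $text\n";                      # printable; show the string
    }
    else {
        print "$label -> ".unpack("H*", $data)."\n";    # otherwise, show the raw bytes in hex
    }
}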

There's more information like this in the Registry, but unfortunately, it's not really catalogued, per se. Part of the reason for this is that there is very little, if any, documentation of the myriad Registry keys and how, if at all, they relate to each other. Much of this kind of information is maintained in-house, either as proprietary research, or simply because the information is speculative, or not supported by vendor documentation (oops!).

When digging into some of the aspects of the Registry, I use Google and the Microsoft Advanced Search facility. Other tools I like to use on systems when I'm looking into autostart locations include AutoRuns, SilentRunner, and AutoStart Viewer.

Another tool that looks like it has some promise is Beau Monday's FirstOnScene.

One final note to forensics analysts...

On dead or imaged systems, some of the Registry keys won't be visible. This is due to the fact that some hives/keys are generated at run-time and are volatile. See KB256986, Description of the Windows Registry for more information.

File Version Information

Now and again, you'll find a file on your system that you don't recognize, or know nothing about. It may be sitting in the system32 directory, or anyplace else in the file system. You may find references to the file as a running process, or in one of the many autostart locations within the Registry or file system. So, how do you figure out what the file is?

There are a bunch of ways to figure these things out. While Google is a useful tool, I don't recommend starting there necessarily. Why? Well, everyone knows that files can be named anything. Well, okay...not everyone...bad guys and people who think like bad guys (security weenies) know this, but for the most part, many admins don't seem to understand this. I say this from personal experience, as well as from monitoring the various SecurityFocus lists I follow.

One of my favorite exercises in my incident response course is to copy netcat (ie, nc.exe) over to a system, and rename it to "inetinfo.exe". I then run a command line to launch the renamed netcat, bound to port 80. From the basic netstat.exe and Task Manager outputs, this looks like the IIS web server to most people.
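
If you want to try this in a lab, the setup is as simple as something like the following (the path is just an example; -L tells the Windows version of netcat to keep listening, -d detaches it from the console, and -p specifies the port):

c:\tools>copy nc.exe inetinfo.exe
c:\tools>inetinfo.exe -L -d -p 80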

I'm going to blog in more detail later about basic IR activities, but I wanted to take a second to present some information on looking into (literally) binary executable files (ie, file signature of 'MZ', or extensions of .exe, .dll, .sys, etc). These files will, in many cases, have file version information compiled into them, especially if they are products of commercial organizations, such as Microsoft, Symantec, etc. This information can help you narrow down "suspicious" files pretty quickly.

Below is a Perl script written using the Win32::File::VersionInfo module. This module provides convenient access to the file version information. The script takes a file name (with complete path) as its only argument, and attempts to retrieve the file version information. This is a basic script with limited functionality, but it is sufficient as an example.

#! c:\perl\bin\perl.exe
# Retrieves file version information from a binary executable (EXE, DLL, SYS, OCX, etc.)
use strict;
use Win32::File::VersionInfo;

my $file = shift || die "You must enter a filename.\n";

if (-e $file) {
    if (my $ver = GetFileVersionInfo($file)) {
        print "File Version : ".$ver->{FileVersion}."\n";
        print "Product Version : ".$ver->{ProductVersion}."\n";
        print "OS : ".$ver->{OS}."\n";
        print "Type : ".$ver->{Type}."\n";
        # The language-specific strings live in the Lang hash; grab the first language found
        if (my $lang = (keys %{$ver->{Lang}})[0]) {
            print "CompanyName : ".$ver->{Lang}{$lang}{CompanyName}."\n";
            print "FileDescription : ".$ver->{Lang}{$lang}{FileDescription}."\n";
            print "FileVersion : ".$ver->{Lang}{$lang}{FileVersion}."\n";
            print "InternalName : ".$ver->{Lang}{$lang}{InternalName}."\n";
            print "Copyright : ".$ver->{Lang}{$lang}{Copyright}."\n";
            print "Trademarks : ".$ver->{Lang}{$lang}{Trademarks}."\n";
            print "OrigFileName : ".$ver->{Lang}{$lang}{OriginalFilename}."\n";
            print "ProductName : ".$ver->{Lang}{$lang}{ProductName}."\n";
            print "ProductVersion : ".$ver->{Lang}{$lang}{ProductVersion}."\n";
            print "PrivateBuild : ".$ver->{Lang}{$lang}{PrivateBuild}."\n";
            print "SpecialBuild : ".$ver->{Lang}{$lang}{SpecialBuild}."\n";
        }
    }
}
else {
    die "$file not found.\n";
}

Again, this is just a basic script, but it provides the basis for a much more extensive search utility. I ran the script on several files on my system, and here's an example output (from svchost.exe):

C:\Perl>ver.pl c:\windows\system32\svchost.exe
File Version : 5.1.2600.2180
Product Version : 5.1.2600.2180
OS : NT/Win32
Type : Application
CompanyName : Microsoft Corporation
FileDescription : Generic Host Process for Win32 Services
FileVersion : 5.1.2600.2180 (xpsp_sp2_rtm.040803-2158)
InternalName : svchost.exe
Copyright : © Microsoft Corporation. All rights reserved.
Trademarks :
OrigFileName : svchost.exe
ProductName : Microsoft® Windows® Operating System
ProductVersion : 5.1.2600.2180
PrivateBuild :
SpecialBuild :

Pretty revealing, eh? Suffice to say, another file with the same name wouldn't include the same information. Microsoft used to have a tool available called "showbinarymfr.exe" that pulled some of this information from a file, but for some reason, that tool doesn't seem to be available any longer. However, who needs showbinarymfr.exe when you can use your own tools?

Anatomy of...

Robert Hensing has some nice updates to his blog entitled "Anatomy of...". It's worth taking a look to see what Robert looks for when dealing with incidents.

Robert is a Microsoft employee, and does incident response for customers. One caveat with regards to his blog, though...Robert mentions some things (ie, WOLF, rootkit detection) that he can only mention, and cannot provide details (or copies) of. Even so, he does a great job of walking through some of the things that he looks at with regards to incidents.

Friday, January 28, 2005

Monitoring and Incident Prevention

Something popped up on the Internet Storm Center Handler's Diary entry for today that struck a chord with me. In the entry, the handler talks about corporate infrastructures, and why they continue to get hit with the various malware that seems to be rampant on the Internet.

The reason this hit home with me is a combination of my book, and the fact that I'm currently doing a technical review of Ed Skoudis's second edition of Counter Hack. In one of the chapters I just reviewed, Ed does a great job of describing warwalking and wardialing, and presents some of the usual means of locating rogue APs and modems. In the case of wardialing, for example, Ed recommends wardialing yourself. On the other hand, or perhaps in addition, I would highly recommend that sysadmins learn a little programming and use WMI to implement monitoring tools and scanners. For example, using the Win32_POTSModem class, an admin can scan systems in her domain for installed modems, using VBScript, Perl, C#...whatever works. To me, it's quick, efficient, can be done during working hours, and isn't disruptive.
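
As a rough illustration (a sketch only...the host name is a placeholder, and error handling is minimal), here's how the Win32_POTSModem class can be queried from Perl via Win32::OLE:

#! c:\perl\bin\perl.exe
# Sketch: query a system for installed modems via WMI (Win32_POTSModem).
# "SERVER01" is a placeholder; loop over your own list of systems.
use strict;
use Win32::OLE qw(in);

my $host = shift || "SERVER01";

my $wmi = Win32::OLE->GetObject(
    "winmgmts:{impersonationLevel=impersonate}!\\\\$host\\root\\cimv2")
    or die "Could not connect to WMI on $host: ".Win32::OLE->LastError()."\n";

foreach my $modem (in $wmi->InstancesOf("Win32_POTSModem")) {
    print $host.": ".$modem->{Name}." attached to ".$modem->{AttachedTo}.
        " (status: ".$modem->{Status}.")\n";
}

Wrap the GetObject() call in a loop over a list of system names pulled from the domain, and you've got the beginnings of a modem scanner.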

Okay, back to the ISC post...what interested me about this is that it's someone else saying, "we still aren't doing all that we could." I agree. For the most part, corporate environments aren't doing what they can to protect their own infrastructure and data. Why this is, I don't know. The new MySQL worm has hit some folks...why TCP port 3306 is exposed to the Internet, I'll never know. I still haven't heard a convincing business case (I take that back...I haven't heard a business case, convincing or otherwise) for why MSSQL systems were exposed to the Internet back when the Slammer worm hit.

I'm not sure what I'm missing here, to be honest, because a lot of what can be done is non-disruptive and doesn't cost a lot. For example, IIS web servers that are already in place can have unnecessary script mappings disabled without affecting the function of the systems. Other things that may be a bit more disruptive, such as putting technical measures in place to require users to use strong passwords, may be a bit harder, but if upper management considers it a requirement, and documents it, what stops it from happening?

Something administrators can do is perform their own scanning. Nmap comes in a Windows-based binary, and can be used to detect rogue WAPs, identify systems running servers they shouldn't, etc. Don't want to have to comb through the glut of data returned by nmap when run against a large domain? No problem! Perl has modules for parsing the XML output of nmap, and yes, those modules install on Windows systems running ActiveState's Perl. Like I mentioned before, WMI can be used to scan for a wide variety of things, including services and device drivers that are installed, perform hardware and software inventories, etc.

The...uhm...tools are out there, folks. I'm still at a loss to understand why so many systems still get hit by these admittedly "dumb" bits of malware.

Thursday, January 27, 2005

Locard's Exchange Principle in the Digital World

Ever been watching CSI and hear Grissom say something about "Locard's" or Nick say "possible transfer"? Like most people, you've probably just shrugged it off and not given it a second thought.

What is "Locard's"? From the Virginia Institute of Forensic Science and Medicine:

"Locard's Exchange Principle - Whenever two human beings come into contact, something from one is exchanged to the other, ie dust, skin cells, hair etc. "

Many of us are probably familiar with practical applications of this...give your wife or girlfriend a hug, and you may have a couple of her hairs on your jacket. But what does this mean to us in the digital world? Well, in essence, whenever two computers come "into contact" and interact, they exchange something from each other. This may appear in log files, and be visible in the output of commands.

There're a couple of really good ways to demonstrate this. First, take two Windows computers. Go to one, and then map a share from the other. If the audit policy is set properly on the second computer, you should see a login entry in the Security Event Log. If you type 'net session' on the second computer, you will see information (ie, the IP address of the first computer, etc.). If you type 'net use' on the first computer, information about the second computer will appear in the output of the command.

Another interesting way to demonstrate this is to open a netcat listener on computer A, then go to computer B, and use netcat to connect to the listener on computer A. If you set up the netcat listener on computer A using the necessary switches to open a command prompt upon connection (ie, '-e cmd.exe'), you can type commands like 'dir' and get a directory listing from computer A. Now, use pmdump.exe to dump the memory used by the netcat processes on each machine. Then use strings.exe to parse through the memory dumps, and see what you find. On the memory dump from computer A, you should see the IP address of computer B. Pretty neat, eh?
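
The whole exercise boils down to something like this (the IP addresses, port, and PID are placeholders; computer A is 192.168.1.10 and computer B is 192.168.1.20 in this example):

c:\tools>nc -L -d -p 4444 -e cmd.exe          (listener on computer A)
c:\tools>nc 192.168.1.10 4444                 (client on computer B)
c:\tools>pmdump [PID of nc.exe] nc_a.dmp      (back on computer A)
c:\tools>strings nc_a.dmp | find "192.168.1.20"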

This is something to keep in mind when performing incident response and forensics investigations. On Windows systems in particular, there are many places where little bits and traces of 'contact' with other systems and devices are maintained. There are log files that few people know about, and there are Registry keys that hold information that can be of use. For example, on NT 4.0 (geez, remember NT??) the telnet client had a GUI. Every time you connected to a system using the client, the name or IP address of the target system, and the port you were connecting to, were logged in a Registry key. The key's LastWrite time (similar to the last modification time on files...in fact, the LastWrite time is maintained in a FILETIME structure) corresponded to the last system that the user tried to connect to.

This is just an example, and there are a lot of other keys that pertain to different activity. A lot of these different activities can be demystified with a little experimentation on your part, using a variety of monitoring tools.

Have you uncovered some interesting activity? How did you do this? Want to share your story? Better yet, want to get your research published?

Wednesday, January 26, 2005

Conducting Research

I've been wondering for a while now about the status of computer forensics research and publication. Regarding 'research', I'm not talking about the academic kind, where a couple of grad students complete projects for a PhD. What I'm talking about is folks in the industry following a rigorous methodology for discovering and verifying something, or answering a question.

I wonder about this, because I do some of this myself. In fact, I'm working with someone now on a project that involves the use of USB-connected storage devices on Windows systems. Some of the things we're doing that are holding the research up are verification of findings, as well as trying to get documentation from Microsoft to support our findings.

In conducting searches for information, it's become pretty clear that either this sort of research isn't going on, or it's simply not publicly available.

What're your thoughts on the topic? Have you published something that you've fully documented, and made it publicly available?

Monday, January 24, 2005

LogParser 2.2 is out!

Microsoft released LogParser 2.2. I haven't found any details at the MS site regarding what updates have been included in the new version...maybe there's something in the .msi file that describes the updates.

Friday, January 21, 2005

Backdoors to BotNets

Ever sat down and just thought about the history of computer intrusions? How about just a small part of it? Well, I did...and wanted to share some thoughts...

How long ago was it that Back Orifice was released? Amusing name, eh? Got to give those cDc guys some credit for their sense of humor. I suppose that at first no one really thought that opening and closing someone's CDRom tray was a big deal...until the helpdesk at various companies started getting calls and couldn't figure out what was causing it, or how to remove it.

Then came the updated version, BO2K, and the plugins. Again we see the sense of humor of the guys creating this stuff (in how they named their plugins), but yet again, I don't think that the real issues were taken seriously. While opening and closing someone's CDRom tray, or sending goofy 'net send' messages to the user was annoying, it wasn't viewed as a real threat. Okay, so some helpdesk hours are sucked up, just assign the ticket to the new guy. But it seemed as if no one was really looking at the big issue, which was that external, unauthorized software had been installed and was running on a (corporate) system. While the outwardly visible effect was that someone was annoying the user, what was going on "under the hood"? Was there a sniffer or a keylogger (or both) installed?

Many times, when faced with an incident, I'll ask the above question, and get an emphatic response. However, asking for proof is usually met with indignation. How do you know? Did you find a log file? Did you find an unusual process? If so, what made it "unusual"?

The idea behind backdoors is that the attacker gets one on a system, and then connects to it from a remote location. In some cases, as with the SubSeven backdoor, the remote attacker usually ended up with a greater level of control of the system than the person sitting at the keyboard. However, this could be easily blocked by enabling filtering on your perimeter routers and firewalls...allow only the traffic you specify, and then allow that traffic to go to specific machines or groups of machines that you designate. So...if someone inadvertently gets a copy of SubSeven on their corporate workstation, it's unlikely that someone from the Internet is going to connect to it.

So, moving on...

Desktop security is something of an arms race. Take a look at tanks (if anyone thinks for an instant that this is a blatant shout out to Lance Spitzner...you're right). Tanks had armor to protect them from bullets and grenades. So the TOW missile was produced. To offset the effects of the TOW, reactive armor was added to the tanks. And on and on. Someone designs a weapon, and someone else designs a countermeasure. Then the original designer designs a counter-countermeasure, then...well, I think you get the idea.

So, you're probably wondering how this applies to computer and network security. Well, once folks finally started to catch on and block inbound traffic from the Internet with their perimeter devices (routers, firewalls), it became clear that another means of gaining access and control of systems was necessary. Some attackers began targeting the servers they could reach, such as web servers. These attackers assumed that the systems were misconfigured in some way, and in a great many cases, they were right. Others started attacking users with a different kind of threat...one that would "phone home", rather than sit there and wait for a connection. One popular implementation of this was the IRCBotNet. Basically, an IRC client is dropped onto the system, which then makes an outbound connection to an IRC server. Once connected and logged into the specific channel used by the attacker, the attacker could easily control all systems connected to the IRC channel by sending a single command to the channel itself. For example, if someone wanted to conduct a massive denial of service attack against a target, they'd issue the command "ping server.example.com" to the channel, and all of the systems connected to the channel (in some cases, there have literally been thousands of infected systems, or 'zombies', connected) would start pinging the system, and the overwhelming traffic would result in a denial of service attack. However, not a single packet sent against the target would ever have been issued from the attacker's own system.

The crux of the issue is that now, the malware on the infected system reaches out from behind the firewall, and most places allow all traffic to leave their infrastructure, completely unrestricted. Know anyone who does egress filtering? I know of some that block only specific ports. Very few block all outbound traffic except that which is specifically allowed to leave the network. Knowing this, the attackers configure their IRCbots to use different ports, allowing them to bypass those firewalls that block specific traffic/ports only.

How do these bots get on systems in the first place? One way is through email. Another is to take advantage of flaws in the IE browser and get a downloader on the system, which then downloads the bot.

Remember the russiantopz bot? This bot was one of many that was able to sneak in past anti-virus applications, as it consisted of two legitimate applications, one being the mIRC32.exe IRC client. (Note: In the case of the russiantopz bot, this baddie was dumped on an IIS system using the directory traversal exploit.) The bot was dropped onto the system using the TFTP client residing on the system, and then launched. Arbitrary/unauthorized software was copied to the system and successfully executed. The necessary patch was provided long before the system was compromised, but had some of the online configuration guides (as well as common sense) been followed, the system would not have been vulnerable, even if it hadn't been patched.

Now, take a look at this recent post to the SecurityFocus Incidents list about a bot being dropped on a system via SQL Injection. Notice here that the same thing had happened, to the same site, just two weeks earlier. Ouch! In both cases, the attack seems to have been successful.

I think at one point, President Bush tried to say "fool me once, shame on you...fool me twice, shame on me" (see #5 on the list). 'Nuff said.

The point of all this is to demonstrate the growth and modification of attack techniques over time. When one door is closed, the attacker will try another. It's an arms race. However, the necessary protective measures that need to be used by the good guys have been around for a long time...defense in depth, the Principle of Least Privilege, minimalization, and good ol' common sense. But too many times, these measures aren't implemented, and the bad guys are right there to take advantage of it.

I won't go into too much detail, but the use of botnets has created a new economy for online crime, particularly extortion ("pay up or I'll crash your site"). Some folks are even renting botnets out to be used for spam, DDoS attacks, etc.

Thoughts?

Tuesday, January 18, 2005

MetaData

One topic that isn't covered in any great detail is metadata, particularly in files and documents on Windows systems.

The term metadata means "data about data". Searching the web, you'll find that this term has a variety of uses, depending upon the context. Within the context of this blog entry, I'm going to use "metadata" to refer to data associated with a file, either accompanying it, or being contained within it. This can include file MAC times, NTFS alternate data streams (ADSs), file attributes, etc. Metadata can also include information or data contained in the file, as a part of the file or document itself.

You can use Frank Heyne's LADS tool to retrieve information about ADSs (see my previous post on data hiding for tools to retrieve the information stored within ADSs). LADS will find all ADSs, so if you've right-clicked on a file and filled out the Summary Information in the Property tab, or you've downloaded files via IE on XP SP2, you'll be guaranteed to see some ADSs. It's probably the others that you need to worry about, though.

Windows Media Files also contain metadata, as seen in the Extracting MetaData from Windows Media Files blog entry.

You can retrieve metadata from image files, particularly those created on digital cameras, from here. If you're interested in a Perl-based approach, check out the Perl EXIFTool script. It's unclear to me exactly how detailed this information is, and I don't think that it can definitively tie a particular image to a specific camera. However, it is interesting to see, and where possible, to add comments to your files.

To view/modify resources (ie, icons, dialogs, etc.) within executable files on Windows systems, take a look at EXEScope and Resource Hacker.

Something I presented in my book was a Perl script that used the PDF::API2 module to retrieve metadata from PDF documents.
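
The book's script does a bit more, but the core of the approach looks something like this minimal sketch (not the script from the book) built around PDF::API2's info() method:

#! c:\perl\bin\perl.exe
# Sketch: dump the document information dictionary from a PDF file.
use strict;
use PDF::API2;

my $file = shift || die "You must enter a filename.\n";

my $pdf  = PDF::API2->open($file) || die "Could not open $file.\n";
my %info = $pdf->info();

foreach my $key (sort keys %info) {
    printf "%-15s : %s\n", $key, $info{$key};
}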

You can use the Win32::File::VersionInfo Perl module on Windows systems to retrieve file and product version information from within executable files. Many commercial companies (Microsoft, Adobe, HP, etc.) include this information in the EXE files, DLLs, and drivers they install on Windows systems. If you're running ActiveState Perl, this module is trivial to install via PPM. I used modules like this when I was performing my analysis of the russiantopz IRC bot.

Metadata embedded in Office documents has been a big issue for quite some time. In fact, my first real introduction to the harm (re: embarrassment, information disclosure, or both) that it can cause was from the ComputerBytesMan's page on the topic. This is very interesting stuff. I included a Perl script with my book that can retrieve metadata from Word documents, and with a bit of work, it can be modified to pull similar data from Excel spreadsheets. Richard Smith, aka ComputerBytesMan, used a program he wrote himself to pull the revision history from the document, but you should be able to find the same information using strings.exe. Hint: the information is stored in the file in Unicode.

For a while, Microsoft had an article posted that manually dealt with removing this "hidden" data from Word files. Now, if you're using OfficeXP/2003, you might want to download the "Remove Hidden Data" tool.

Basically, once you go looking, you'd be surprised what information can be found in a variety of file formats.

Capturing information from live systems

I'm sure that over time, I'll have plenty of opportunity to discuss various tools that can be used in the course of incident response activities...what they do, how they can be used, etc.

What I'd like to talk about now is various ways of getting the information off of systems. There are a variety of ways you can go about collecting information and sending it off of your systems...the one you choose to use depends on the situation, your policies, etc.

One way to collect the information is to run your tools, and have the output of each tool written to a file in a specific directory you've created on the system. Many times, even the most basic information won't fit on a 1.44MB diskette along with your tools, so you'll put the tools on a diskette, CD, or a shared drive, and run them. Once the tools are done, you can launch the FTP client that ships with Windows systems to transport the information off of the system. Of course, you can also archive the data you've collected first before sending it off the system. This is a good method to use in an environment that doesn't make use of domains, where you'd have mapped drives available for copying data to for storage.

If you're a domain admin, and/or you have remote access to a system for administrative purposes, you can use a combination of remotely-run and locally-run (can run them remotely via psexec.exe) tools to collect information and save it locally to your system.

Another method is to set up a netcat listener, and then use netcat in client mode on the "victim" system to pipe the output of the command to the listener. The listener would be set up on the server in this manner:

c:\netcat>nc -L -d -p 5150 > myfile.txt

The above command line starts netcat in detached mode, listening on port 5150 (yes, this is a Van Halen reference, in case you were wondering). Anything that the listener receives on this port is then redirected to 'myfile.txt'.

Then, on the "victim" system, you'd run netcat in this manner:

c:\> nc [IP address of server] 5150

For a listing of the various switches used by netcat, look here.

If you're concerned about encryption, then take a look at cryptcat from farm9. Cryptcat is netcat with TwoFish encryption.

The problem with using this method is that now you have to parse apart "myfile.txt" into its separate component files, as the listener command dumps everything into one file. The way around this is to create a new listener for each command you're going to run, either by retyping the command or creating a separate listener on a separate port. For example, if you know how many commands you're going to run, you can use a batch file to create separate listeners on sequential ports. Then, when you run your client-side batch file, simply pipe the output of each command to a different port.
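
Here's a rough sketch of what the server side of that might look like (the port range and file names are arbitrary):

@echo off
rem listeners.bat - start one netcat listener per port, 5150 through 5154,
rem each redirecting whatever it receives to its own output file
for /L %%p in (5150,1,5154) do start "nc_%%p" cmd /c "nc -L -d -p %%p > cmd_%%p.txt"

On the "victim" side, the corresponding batch file would simply pipe each command to its own port...e.g., tlist | nc -w 3 [IP address of server] 5150, pslist | nc -w 3 [IP address of server] 5151, and so on (the -w 3 tells the netcat client to time out once the data has been sent).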

My personal favorite (for obvious reasons) is to use the Forensic Server Project (FSP). The FSP is a framework for getting data off of a system. The tools (the FSP itself, and the client component, called the First Responder Utility, or FRU) are written in Perl, but provided as standalone executables (accompanied by Perl source). The FSP works like this...you start the server component (in this case, FSPC.exe) on the system you're using as a forensic server. You can choose the default settings for listening port, directory to store files (i.e., case management), etc. Then, you put your CD with your tools and the FRU (FRUC.exe) in the "victim" system, and enter the appropriate command line options. The FRU reads the fruc.ini file for the commands to run (executables for each command must be on the CD), as well as which Registry keys to query. The FRU sends all of this information over to the FSP automatically, where the data is stored, checksummed, and all activity is logged with timestamps.

There is also a file copying client available, though it was not included on the CD along with the book. This client gets a list of files to copy from the first responder via the GUI, and for each file, it gathers metadata, calculates a checksum, and copies the file bit-by-bit to the server. Once the file is copied, the server component will calculate a checksum for the newly-copied file, and verify the integrity of the file. Let me know if you want a copy of this file.

Because all of the data is stored in flat text files, scripting languages (batch files, Perl, Python, VBScript) can be used to implement an analysis suite on the forensic server, to quickly and easily parse the data and provide some level of data reduction (remember "artificial ignorance" from previous posts?) and analysis.

I designed the Forensic Server tools to be extensible. Not only is the Perl source included for both tools, but the design of the FRU allows you to add any tools you like to the fruc.ini file. For example, grab the DriveInfo tool from my Tools page, and run that on a system. Then, add it to the fruc.ini file. Very cool.

If you happen to be using the FSP, how about posting the fruc.ini file you use? Don't want to post it into a comment here? Got some interesting tools that you use in the fruc.ini file? Drop me a line and let me know.

One of the things on my rather extensive to-do list is to write a version of the FRU that writes to a mapped drive, or a USB-connected thumb drive. If you're interested in something like this, let me know.

A whole other area that needs to be addressed and discussed is the analysis of information collected, but that's best left to blogs yet to come...


Hiding on a Live System

Data hiding is a subject that I'll probably be posting on quite a bit as time goes on.

Robert Hensing's recent blog entry on advanced hiding techniques is a very interesting read, as he provides a lot of information about a real-world incident.

In his blog entry, Robert covers a topic I've presented on...file signature analysis. Several years ago, I took the EnCase Intro Course, and got interested in this thing called file signature analysis. Now, this doesn't mean generating MD5 hashes and comparing them. File signature analysis involves looking for specific sequences of bytes in the first 20 bytes of a file. As an exercise, I went home and wrote my own file signature analysis tool in Perl, and I've compiled it as "sigs.exe". The Perl source code for this tool is included on the CD with my book, and the EXE version is included on my web site...click on the "Tools" link and extract sigs.exe, p2x580.dll, and headersig.txt from the archive. The file signature database used by sigs.exe is available from TechPathways.com, home of the forensics tool ProDiscover.

The FILExt.com site is an excellent resource for file signature information. For example, do a search on the extension "exe", and this is what you'll find. Notice the section on "Identifying Characters". This is the information that is used to locate executable files on Windows systems. By "executable", I'm also referring to files that may have extensions such as .dll, .sys, .ocx, etc. Keep in mind that file association information is maintained in the Registry, and can be viewed via the 'assoc' and 'ftype' commands.

So, in a nutshell, just because a file has an extension of '.gif', that doesn't mean that the file itself is a GIF image file. In the case of Robert's blog entry, and from what others have relayed to me, it could be an executable file. This technique of changing the extension of the file seems to be one of the more popular methods of hiding things on a Windows system, right behind simply changing the name of the file.
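
To make the idea concrete, here's a bare-bones sketch of a signature check in Perl. This is not sigs.exe (which uses a much larger signature database); the handful of signatures below are just samples:

#! c:\perl\bin\perl.exe
# Sketch: report what a file's header bytes say it is, regardless of extension.
# The signature list is a small sample; sigs.exe uses headersig.txt instead.
use strict;

my %sigs = (
    'MZ'           => 'Windows executable (exe/dll/sys/ocx)',
    '%PDF'         => 'PDF document',
    'GIF8'         => 'GIF image',
    "\xFF\xD8\xFF" => 'JPEG image',
    "PK\x03\x04"   => 'ZIP archive',
);

my $file = shift || die "You must enter a filename.\n";

open(FH, "<", $file) || die "Could not open $file: $!\n";
binmode(FH);
read(FH, my $header, 20);      # the first 20 bytes is plenty for these checks
close(FH);

my $found = "unknown";
foreach my $magic (keys %sigs) {
    if (substr($header, 0, length($magic)) eq $magic) {
        $found = $sigs{$magic};
        last;
    }
}
print "$file : $found\n";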

Another search method to use when looking for suspicious files on a system is to look for file and product version information compiled into executables. Robert's blog shows examples of using strings.exe to view this information...tools written in Perl can pull this information from binary files, as well. Something like this can probably be added to the sigs.exe tool, as accessing the first 20 bytes of the file causes the last access time to be modified, so why not get everything you can? Or how about being able to search a system for all non-Microsoft executable files? That would be a pretty nifty tool to have, wouldn't it?

The HackerDefender rootkit can reportedly be detected using RKDetect, a VBS script that runs queries for services using the Service Control Manager via sc.exe, and also via WMI. The output of these two queries is compared for anomalies; ie, does a service appear in one listing that does not appear in the other? This is not a signature-based approach, but instead looks for anomalies. In the case of Robert's blog entry, the "Dynamic Storage Allocation" service would be the anomaly.

Robert goes on in his entry to show how file times can be searched and used to look for events that occurred on the system, even if they aren't logged in the Event Log.

Also, several tools are mentioned in the blog entry, some of which aren't publicly available. Checksym.exe can be found here.

One final note on SFC/WFP...after reading Robert's blog, I took a look on my XP Home system, and found that while Winlogon.exe is in the system32 directory, it is not in the system32\dllcache directory. The same is true on my XP Pro system. As Robert pointed out, SFC/WFP is for reliability, not security...and the process that implements SFC is not itself protected by SFC. This is definitely something to keep in mind.

Friday, January 14, 2005

Scanners

I was working up a blog entry on system monitoring, but it started to get lengthy. I thought I'd just start by breaking out some of the items I'd collected, in particular, scanners.

First, a caveat. Scanners are tools, and like any tool, they have their uses. Many freeware and open-source scanners are very powerful, but the real power lies in the fact that they're configurable. This means that rather than paying for and running a commercial tool with no idea of what it actually does, you can tailor these scanners to your own environment. Unfortunately, far too many people download the freeware and open-source stuff, run it, and if it doesn't pop up any high-risk vulnerabilities, they proclaim the system secure. Not good.

ATK 4.0
ATK is an open-source security scanner and exploit framework for Windows, from Marc Ruef. This tool is similar to Nessus, in that it provides a framework for running user-defined checks known as "plugins", but it runs on Windows. ATK even includes experimental support for Nessus plugins. Marc even provides a link to a separate tool for creating ATK plugins. I originally took a look at ATK when it was version 2...I definitely need to go back and take a look at the latest version.

NTOInsight
NTOInsight is a freeware web site crawler available from JD Glaser. NTOInsight lets you take a look at the architecture of your web site. By being able to see the content exposed by your web site, you have a better chance of mitigating exposures. While NTOInsight is a command line tool, the output is in a graphical format that's easier to analyze and understand. One other thing...if you've never seen JD present, sign up for a conference that he's going to be presenting at, and go. Seriously.

Wikto
This is a Windows-based tool that scans a host for all entries in the Google Hacking Database, which includes full web-based vulnerability scanning. Definitely a good place to start. Note that Wikto requires the .NET framework to run. Even so, Wikto has a lot of very powerful features. All I can say is, take a look.

Nessus/NeWT
NeWT is a Windows version of the Nessus scanner. It's a powerful tool for scanning systems...powerful in the sense that it provides a framework for writing plugins that test security. If you don't have a Linux system that you can install Nessus on, take a look at NeWT...it uses the same plugins that Nessus does. The freeware version is limited to the local subnet, but it's still worth the time to take a look.

Nessus is one of those tools I was talking about above. I've seen far too many people blindly run the tool, and send off the PDF output of the report. Folks, tools like this are configurable for a reason. I was in one environment in which Nessus scans, without credentials, would return 9 warnings, all relating to null session enumeration. Why not lower the noise, remove plugins 2 through 9, and rewrite the output of the first one? Nessus and ATK plugins are basically text files, which means that you can not only open and read them, but lo and behold, you can edit them as well!

SiteDigger 2.0
FoundStone recently released an updated version of their SiteDigger tool, which goes through the Google cache looking for "vulnerabilities, errors, configuration issues, proprietary information, and interesting security nuggets on web sites." It sounds very cool, and very useful.

Okay, this is just a cursory list, so please don't think that it's complete. I'm sure that there are more out there. But let me tell you, before you flood me with email, keep one thing in mind...I'm focusing on free- or shareware tools, preferably open source, that run on Windows. I'm not blogging just so that I can repeat what the Linux guys say, okay?

Data Reduction and Representation

Data reduction and representation or reporting is a huge issue in the Windows world, even in IR. Maybe more so, in some cases. By now, you're probably thinking, what's he talking about? What I'm referring to is the glut of data that is potentially available for troubleshooting and/or incident response purposes.

What kind of data am I talking about? Event Logs for one. My previous post pointed out ways of reducing the amount of noise and unusable data that goes into the Event Logs, so that you've got something really useful to work with. From there, there are a variety of tools available for parsing and managing the data, from Perl to LogParser. My focus is always freeware approaches, due to the advantages, such as being able to implement something with little to no monetary cost...the biggest 'cost' is the time it takes to learn something and develop a new skill.

One method of data reduction that I've used in the past is Marcus Ranum's "artificial ignorance". What this amounts to is that instead of using complicated heuristic algorithms to try to determine what's "bad" on a machine, you simply get a list of all the "known good" stuff and have the tool show you everything else. For example, I used to use a simple Perl script to retrieve the contents of the HKLM\..\Run key from all of the systems (workstations, servers, et al) from across the enterprise. I'd have a small text file containing known good entries, and I'd have the script filter those entries out of what was being retrieved, leaving only the unusual stuff for me to look at. This works for just about anything...Registry keys, services and device drivers, processes, etc. Using Perl or VBScript to implement WMI calls, and then filtering the output based on a list of stuff you know to be good for your environment is pretty easy. I'm sure that there are commercial products that may help, but as we all know, no two architectures are the same, so what you're really paying for is a nice GUI and a huge database of "known goods", most of which you're not using anyway. Besides, if, as a sysadmin, you can't look at a listing from one of your systems and know, or be able to quickly figure out, what's valid and what's not...well, let's just say that it would be a great learning experience. The same holds true for consultants and security engineers...with the right tools you can easily determine what's 'good' or 'bad' on a system. "AI" is easy to implement and adheres to the KISS principle.
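
A stripped-down sketch of that Run key example looks like this (the known-good file name and its contents are made up, and the real script walked a list of remote systems...this one just checks the local machine):

#! c:\perl\bin\perl.exe
# Sketch of "artificial ignorance": list Run key entries not found in a known-good file.
# knowngood.txt (a made-up name) holds one known-good value name per line.
use strict;
use Win32::TieRegistry(Delimiter => "/");

my %good;
open(KG, "<", "knowngood.txt") || die "Could not open knowngood.txt: $!\n";
while (<KG>) {
    chomp;
    $good{lc $_} = 1;
}
close(KG);

my $run = $Registry->{"LMachine/Software/Microsoft/Windows/CurrentVersion/Run/"}
    or die "Could not open the Run key: $^E\n";

foreach my $name (keys %$run) {
    (my $label = $name) =~ s{^/}{};        # strip the leading value-name delimiter
    next if (exists $good{lc $label});     # ignore the stuff we already know about
    print "UNKNOWN: $label -> ".$run->{$name}."\n";
}
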
The vague term "data munging" may apply to other forms of data reduction and representation. When performing IR activities, you will very often have many files to deal with for each machine, each file representing the output of a particular tool. In some cases, you may need to correlate the output from several tools, such as those that collect process information. If you run multiple tools (tlist, pslist, WMI scripts, openports/fport/portqry, etc) then you're going to have to process and analyze this data. Well, let me tell you...after having to print out all of these files, lay them out on a big table, and trace through each one with a ruler and a highlighter, I figured I'd write a Perl script to do the correlation (by PID) and presentation for me. I implemented this in the procdmp.pl script provided with my book. Taking it one step further, someone with even a little bit of Perl knowledge (or even just the desire) could add "artificial ignorance" to that, as well, reducing the data even further for their own infrastructure.

Let's say you do some security scanning, either on your own internal (corporate) network, or as part of a contract (you're a consultant). Your tool of choice may be nmap...if so, take a look at fe3d for visualization of the data. Fe3d may not be *the* answer you're looking for, as you may be more interested in that workstation that's running a web server than having pretty pictures to look at, but fe3d is an option, especially if you want to "wow" your customer or boss.

Finally, some things that go into data reduction...rather than reducing the data at the output end of the process (ie, you've already collected the data), try doing so at the front end. An example of this is the steps I mentioned in an earlier blog entry about configuring Event Logs. Another way of handling the data is to know exactly what you're looking for. Personally, when it comes to either incident response or vulnerability assessments, I'd rather collect all that I can, and then reduce it and use only what I need. However, this may not work for everyone. Perhaps looking only for specific things works for others.

Wednesday, January 12, 2005

Windows Audit/Logging

Eric posted a blog entry on Tuesday entitled "Keeping the noise down in your security log" that is a 'must read'! He makes some very good points about reducing the amount of noise in the Security Event Log, and presents some tips for doing exactly that.

I took a couple of interesting points away from the blog entry myself, the first being that enabling auditing on a system, and then auditing both success and failure events for every setting, is pretty much just as bad, if not worse, than not auditing at all. Doing so, and then complaining that there's too much noise in your Event Log, can be filed under the heading of "Duh".

Second, Eric's process is iterative. That means that you apply a template, let the system run for a bit, observe the changes, and then tweak the settings as necessary. My experience has been that most Windows admins don't have the time or interest in this sort of process...once the settings are changed, they're off to something else entirely. This is due in large part to how they are tasked by their managers, their individual knowledge/skill level, etc.

The third thing that I took away from Eric's entry is the need for documentation. Something like this is going to look odd to a new admin coming on board, so having documentation in the form of policies, procedures, and even a file stating why the settings were made (with references to pertinent MS KB articles, etc.) would be very beneficial.

So, once you've made your settings changes, then what? You've taken steps to reduce the noise in your logs, so then what do you do? Well, there are ways to parse the data. One is to use DumpEvt or PSLogList to export Event Log entries to a text-based format. From there, you can use scripts to parse the data.

Being a Perl user, I'm more likely to opt for Perl to parse the text-based output of the log dumping tools mentioned above. The Microsoft Script Center provides a wealth of resources for using a variety of scripting languages for all sorts of system administration needs. Take a look at the Script Repository, and choose your language (Perl, Python, JScript, VBScript, REXX, etc.) to get started.

Microsoft also provides a tool called LogParser, which can be used to parse Event Logs and even text-based logs using SQL queries. LogParser.com provides some pretty extensive support for the Microsoft tool. LogParser works not only with Event Logs, but also text-based logs, such as those created by IIS. The neat thing about this tool is that it ships as an EXE as well as a DLL COM object, meaning that you can script your searches and correlation activities via the COM object.
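
As a rough illustration of scripting against the COM object from Perl...the ProgIDs and field names below are my reading of the LogParser documentation, so treat this as a sketch and check the docs that ship with the tool. The query itself is just an example that pulls failed logons (event ID 529):

#! c:\perl\bin\perl.exe
# Sketch: query the Security Event Log through the LogParser 2.2 COM interface.
use strict;
use Win32::OLE;

my $lp = Win32::OLE->new("MSUtil.LogQuery")
    or die "Is LogParser installed? ".Win32::OLE->LastError()."\n";
my $fmt = Win32::OLE->new("MSUtil.LogQuery.EventLogInputFormat");

my $query = "SELECT TimeGenerated, EventID, SourceName FROM Security WHERE EventID = 529";

my $rs = $lp->Execute($query, $fmt);
while (!$rs->atEnd()) {
    my $rec = $rs->getRecord();
    print $rec->getValue("TimeGenerated")."  ".
          $rec->getValue("EventID")."  ".
          $rec->getValue("SourceName")."\n";
    $rs->moveNext();
}
$rs->close();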

EventComb is a graphical tool provided by MS for searching Security Event Logs.

There are a lot of freeware tools for managing Event Logs across several machines, or the enterprise. Again, Perl can be used, or a syslog client/agent can be added to machines, and Event Log entries can be stored in text files, Excel spreadsheets, a mySql database, etc. Perl can be used to create reports of events, and there are also modules you can use to provide graphical representations of the data.

Post a comment, or drop me a line, and let me know what your favorite method for managing Event Logs is, or send me any questions you may have.

Thursday, January 06, 2005

What's using that port?

Every now and again, I see posts to public lists asking for an "lsof for Windows", or "how do I find out which process is using which port?" The stock answer to this has, for quite a while, been to use fport. Well, there are quite a few options available at this point, above and beyond that one tool.

Let's take a look at some CLI tools. I like openports, personally, and highly recommend it. Unlike fport, openports does not require an admin account to run the tool. Or, if you're on XP, go to the command prompt and type "netstat /?". Take a look at the '-o' switch. (Note: I feel like I have to repeat myself on this, but if you try using the '-o' switch on 2K, do NOT come back to me and tell me it didn't work. I know that. It doesn't work on 2K.) The '-o' switch adds the PID to the output of netstat, in the far right column. For fun, look at the '-b' and '-v' switches, as well.

For GUI tools, take a look at TCPView, and ActivePorts.

Microsoft even provides a couple of useful tools of their own for this. Portqry started out as a scanner, but version 2 has some added functionality. For example, "On computers that support Process ID (PID) to port mappings" (re: XP and 2K3), you can get a listing of not only the network connections a la netstat, but the processes using the ports. I downloaded a copy of portqryv2 to an XP Pro machine today, and the output doesn't look at all like what's in the KB article. However, if you use the '-local' switch, you'll get quite a bit of information, to include not only the mappings, but more detailed information about each process, including the network connections. Add on the '-v' switch, and you'll get information about the modules used by the processes.

Another Microsoft tool is PortReporter. In a nutshell, PortReporter installs as a service and continually monitors the system as it communicates on the network. On XP and 2K3 systems, PortReporter monitors:
  • The ports that are used
  • The processes that use the port
  • Whether a process is a service
  • The modules that a process loaded
  • The user accounts that run a process

On Win2K systems, the tool only monitors the port used, and when it was used.

Like Portqry, PortReporter can generate a lot of data. After all, the service has three log files. Fortunately, there's a Parser tool available to help you go through all of the log data.

It might be a good idea to include PortReporter as part of your standard image for Win2K3 systems, at least.

So, why is this important? Well, if you're an IDS analyst, and you see some unusual traffic on the network (or in firewall logs), you may want to find out which application on that Windows machine is generating the traffic. Is it legit, or is it spyware/malware?

Wednesday, January 05, 2005

XP and USB Storage Devices

As a follow-up to my post on Mounted Devices, here's an MS KB article that talks about how to disable USB storage devices on XP. If you'd rather allow the devices, but make them read-only, take a look at this info from Microsoft.

Windows Rootkit Detection

I recently received an email from a fellow blogger asking about rootkit detectors for Windows. This was interesting, because there are quite a few for Linux/*nix, but almost (notice that I say "almost") none for Windows.

One such tool is RKDetect, a VBS script that launches both sc.exe (queries SCM) and a WMI query to get information about services on a system, and then by comparing the output of both queries, claims to be able to detect HackerDefender v1.0. (Note: I say "claims" because I haven't tested it myself).

The rootkit detection script I presented in my book takes a similar approach, but goes a bit further. The basic idea is that several methods of querying the "victim" system are used, in the hopes that one tool may use a different API to obtain its information, and that one may not be masked by the rootkit. For example, I took a look at AFX Rootkit 2003 in my book, and while Microsoft's tlist.exe didn't "see" the hidden process, SysInternals' pslist.exe did. By using Perl to obtain the output of both tools and then compare them, the "hidden" process was located.
Another example of a technique used by my script was to query the Registry, in particular, the ubiquitous HKLM\..\Run key. Using psexec.exe from SysInternals, an admin can run a local query on the system for the contents of the key. Then, he could run a remote query for the contents of the same key from another system...a system not infected with the rootkit and therefore one without its API calls (in the case of a DLL injection-type rootkit) being intercepted. Comparing the output of these two queries will reveal the "hidden" key entry.
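
In the same spirit, here's a rough cross-view sketch for services. This is neither RKDetect nor the script from the book; the sc.exe output parsing is simplistic (it just looks for "SERVICE_NAME:" lines), and it only checks the local system:

#! c:\perl\bin\perl.exe
# Sketch: compare the service list reported by sc.exe against a WMI query.
# A service visible in one view but not the other deserves a closer look.
use strict;
use Win32::OLE qw(in);

# View 1: the Service Control Manager, via sc.exe
my %sc;
foreach my $line (`sc query type= service state= all`) {
    $sc{lc $1} = 1 if ($line =~ /^SERVICE_NAME:\s+(.*?)\s*$/);
}

# View 2: WMI
my %wmi;
my $svc = Win32::OLE->GetObject("winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2")
    or die "WMI connection failed: ".Win32::OLE->LastError()."\n";
foreach my $s (in $svc->InstancesOf("Win32_Service")) {
    $wmi{lc $s->{Name}} = 1;
}

# Anything in one list but not the other?
foreach (sort keys %sc)  { print "In SCM but not WMI: $_\n" unless (exists $wmi{$_}); }
foreach (sort keys %wmi) { print "In WMI but not SCM: $_\n" unless (exists $sc{$_}); }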

Additional thoughts on rootkit detection techniques include scanning the "victim" system with nmap, saving the output to XML format. Run port-to-process mapping tools such as openports.exe and netstat.exe (on XP and above - I recommend that you use the '-o' and '-b' switches) and then parse the output with Perl. Perl modules exist for parsing the nmap output, specifically Nmap::Parse and Nmap::Parse::XML (and yes, Virginia, these modules are available for Windows via ActiveState's PPM)...and this can then be compared to the openports/netstat output to look for any disparities.

The point is that most of the currently available rootkits (I haven't seen or worked with them all) have design or implementation flaws that allow them to be detected, or perhaps even have their installation completely prevented. The AFX Rootkit 2003, for example, did nothing to hide its DLL files, which were visible in the file system, as well as in the output of listdlls.exe for the explorer.exe process.

Commercial tools (we've looked at freeware solutions so far) such as ProDiscover and EnCase have incident response capabilities, and use techniques for detecting rootkits. One such technique includes walking through the raw MFT (on NTFS drives) and then performing a directory listing via the operating system's utilities, and looking for disparities.

If you go to Rootkit.com, you'll find not only several rootkits, but also rootkit detection tools. I haven't tried tools like VICE yet, but I am looking into methods for detecting rootkits that implement direct kernel object manipulation (DKOM).
Some other links for information regarding rootkit detection on Windows include Scheinsicherheit and RKDScan (here, or here).

At this point, you're probably asking yourself, "how prevalent are rootkits, or rootkit-like functionality?" Well, a quick look at SARC.com today revealed Backdoor.Zins and W32.Protoride.B. A thread on the Full-Disclosure list from Sept, '04, discussed the use of rootkits to hide spyware. Perhaps the real question should be, when will we see the next bit of malware with rootkit functionality?