Monday, March 31, 2008

WFA slashdotted!

Yes, WFA was slashdotted! Thanks to Don for the review!

In the past (Aug 2004), Richard Bejtlich showed a correlation between a book review appearing on Slashdot.org and higher traffic to his blog, as well as "better" numbers at book vendor sites. Hopefully I can expect to see a similar effect...

Addendum, 1 April
Okay, this isn't a joke...here's a portion of the WFA page on Amazon from today (1 April 2008), prior to 3pm EST:

Comparatively speaking, "Windows Forensics and Incident Recovery" is listed as follows:

All in all, pretty cool. ;-)

Friday, March 28, 2008

Registry Analysis - What Is It??

That's right...what is this thing we call "Registry analysis"? When someone performs "Registry analysis", what are they doing?

Okay...raise your hand if, for you, Registry analysis consists of looking for strings (using strings, or BinText, or your favorite tool), or maybe using grep() to do regex searches for IP addresses, email addresses, or something else.

Great, thanks. You can put your hands down.

Now, raise your hand if when performing Registry analysis, you open the hive files you're interested in with one of the popular Registry viewers (EnCase, FTK, ProDiscover, or even good ol' RegEdit), and "look around for anything interesting". Keep it up there if you use some sort of checklist or spreadsheet of Registry keys that may be of interest for your case or exam.

Okay...great. Go ahead and put your hands down.

So, what's wrong with either of these methodologies? Cumbersome? Inefficient? In some cases, ineffective? Ever wonder what you're missing? How about...A LOT?

The fact of the matter is, I really believe that Registry analysis isn't being performed today nearly as much as it should be, because it isn't "easy". I mean, sure, you've got this file that contains all this data, all this potential "evidence" (depending upon the audience, of course), but you don't know (a) how to get it, and maybe even (b) how to interpret it. After all, Registry viewers don't give you what you need, do they? They just present the data as is...it's up to you, the investigator or analyst, to make heads or tails of it.

What if you just want to get the most recent document accessed...not just by the user, but via various applications, such as RealPlayer, maybe an image viewer, Excel, Adobe Reader, or even just by one of the common dialogs? If you're just looking for documents accessed, there are a LOT of places to look in the Registry...and working through a checklist can take a long time. Also, due to the encoding used by various vendors, regular ASCII/Unicode string searches won't work. So what if your "checklist" could be run against the Registry hive file itself?
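A concrete example of that encoding issue: the value names under the user's UserAssist key are ROT-13 "encrypted", which is exactly why a plain ASCII/Unicode search for a file path never hits them. A minimal sketch of the decode (Python here for illustration; my own tools are Perl):

```python
import codecs

def decode_userassist(value_name):
    """UserAssist value names (under Explorer\\UserAssist\\{GUID}\\Count
    in NTUSER.DAT) are ROT-13 encoded, so a plain string search for a
    path like C:\\Windows\\... will never match them."""
    return codecs.decode(value_name, "rot_13")

# An encoded name like this would never match an ASCII search for the path:
encoded = "HRZR_EHACNGU:P:\\Jvaqbjf\\abgrcnq.rkr"
print(decode_userassist(encoded))  # UEME_RUNPATH:C:\Windows\notepad.exe
```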

What about those times when you have to correlate between multiple Registry keys, such as when you're trying to find out about those installed BHOs, or trying to determine when a USB thumb drive was last plugged into the system? How cumbersome is that?
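Part of what makes that correlation cumbersome by hand is the time format itself: Registry key LastWrite times are 64-bit FILETIME values, counting 100-nanosecond intervals since 1 Jan 1601 UTC. A small sketch of the conversion to Unix epoch time (Python here for illustration):

```python
import struct
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_unix(raw8):
    """Convert a packed little-endian 64-bit FILETIME (100-ns ticks
    since 1601-01-01 UTC) to Unix epoch time in seconds."""
    (ft,) = struct.unpack("<Q", raw8)
    return ft // 10_000_000 - EPOCH_DELTA

# Round-trip 2008-03-28 00:00:00 UTC through a FILETIME for illustration:
unix = 1206662400
raw = struct.pack("<Q", (unix + EPOCH_DELTA) * 10_000_000)
print(datetime.fromtimestamp(filetime_to_unix(raw), tz=timezone.utc))
# 2008-03-28 00:00:00+00:00
```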

How would you like to rip through your Registry analysis, getting just what you need, presented in the way you need it, or at least a way that's usable? Forget spreadsheets and checklists...how about plugins (stuff like Nessus and Metasploit use plugins, right?) that reach out and get what you want? How about...in order to update this tool, plugins just need to be dropped into a directory, and they're ready to use? How about if this all came with a GUI and a nice "FindAllEvidence" button?
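For illustration, the drop-a-plugin-in-a-directory idea might look something like this rough sketch (Python here, and the get_data() convention is purely hypothetical...RegRipper itself is written in Perl):

```python
import importlib.util
import pathlib

def load_plugins(plugin_dir):
    """Discover plugins: any .py file dropped into plugin_dir that
    defines a get_data(hive) function becomes available by file name.
    No registration step...updating the tool means copying a file in."""
    plugins = {}
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        if callable(getattr(mod, "get_data", None)):
            plugins[path.stem] = mod.get_data
    return plugins

def run_all(plugins, hive):
    """Run every discovered plugin against the same hive file."""
    return {name: fn(hive) for name, fn in plugins.items()}
```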

What if you could also get timestamped data (ie, most recently accessed documents, UserAssist entries, etc.) so that you could import it into a format such as Excel, or even XML (for use with Simile TimeLine)?
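As a rough sketch of that kind of export (Python for illustration, with hypothetical field names)...collect (time, source, description) tuples from wherever you extracted them, sort, and write CSV that Excel (or a TimeLine XML generator) can consume:

```python
import csv
import io
from datetime import datetime, timezone

def timeline_csv(events):
    """events: iterable of (unix_time, source, description) tuples.
    Emit rows sorted by time, with an ISO-style timestamp column
    that imports cleanly into a spreadsheet."""
    out = io.StringIO()
    w = csv.writer(out)
    w.writerow(["time", "source", "description"])
    for ts, src, desc in sorted(events):
        stamp = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
        w.writerow([stamp, src, desc])
    return out.getvalue()
```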

Know what's really cool about the timestamped data? In order for it to be placed in the user's Registry hive file (NTUSER.DAT), the user account needs to be logged into the system. Sounds pretty simple, doesn't it? Hit most public lists, though, and you'll see questions such as, "how do I tell when a user was logged on if auditing of logon events isn't recorded in the Security Event Log?" (I'm paraphrasing, of course). Well, in most cases, when someone logs on, they do something...right? Look at all of the user activity that is recorded (I say "recorded" because in many ways, the Registry is a log file, of sorts) in the user's hive file...and then correlate that to other activity (Internet browser history, etc.) that may be available.

Sound pretty cool? How about flippin' sweet?!?

The fact is, there's "lookin' at" the Registry, and then there's doing real Registry analysis and getting the data you need.

Addendum
Some might be wondering, "What is it about Registry analysis that's so hot? After all, I get all of the information/evidence I need from the file system." Well, I can only speak to those things that I've determined through Registry analysis...logon history, files accessed, files NOT accessed, applications that had been installed, run, and then uninstalled, etc. There is a great deal of information...much of it historical, much of it associated in some way with a time stamp...right there in the Registry.

Addendum, 1 Apr
Okay, this isn't a joke...but I added three plugins to the RegRipper last night. One for the Uninstall key in the Software hive (all entries sorted based on the key LastWrite times), as well as one for the USBStor key, and another for the DeviceClasses keys...both in the System hive.

Adding a plugin for the Protected Storage System Provider is going to be problematic until I get some info on how to decrypt the data in the "Item Data" values.

Thursday, March 13, 2008

New WFA Review Posted

Rob Lee posted a review of Windows Forensic Analysis today...check it out!

I have to tell you, it's a good one! Rob really hits home with some very important points about the book, particularly regarding flow. That's something I'll have to work on for 2/e. That's right...a second edition. I plan to make it more than an update, more than just adding new stuff. One of the problems I see with the current edition is, like Rob said, flow. How does one sit down and find something more than just information about a tool or file? Sure, books have indexes (hint, hint) and that's a great place to start, but talking about how Prefetch files or a particular Registry key is useful will only get you so far. What I need to do is figure out a way to tie this all together into something that describes how to use this stuff in an actual...you know...examination. After all, that's the point, isn't it?

I do have some thoughts and ideas on where to go, but to be honest, I'd really like to hear from folks regarding what might work.

Wednesday, March 05, 2008

Event Log Analysis

In keeping with the Getting Started posts, I wanted to include something that may be of interest with regards to finding corroborating artifacts when performing computer forensic analysis.

Many times, when performing CF analysis, we end up trying to find out when a particular user may have logged into a system, or into a Windows domain. There may be other artifacts, as well, that may lead us to the Windows Event Log (right now, I'm just talking about the Windows 2000, XP, and 2003 Event Logs). There are a number of different ways to go about this, using the commercial tools such as EnCase and ProDiscover, but sometimes the analyst may want to extract the .evt files from the acquired image and parse them. In such instances, the Windows API (used by the Event Viewer and a number of other tools) may report that the .evt file is "corrupted".

This has happened enough to others that I don't even bother any longer, and instead resort to tools such as EvtUI, a GUI-enabled Perl script based on the Evt2Xls Perl script that I wrote to parse .evt files on a binary basis, by-passing the MS API and producing something a bit more usable. EvtUI runs against an .evt file and parses out all of the event records into an Excel binary-compatible spreadsheet. The Time_Generated field of the event record structure is formatted so that it can be used to sort on in the spreadsheet. EvtUI also produces a report file, which gives the analyst an overview of the .evt records based on the frequency of the various sources and event IDs. I found this particular functionality useful enough that I pulled it out into its own tool (I call it "evtrpt") and added a frequency count for event types (Info, Warning, Error, Success, and Failure). The report file also gives you the date ranges of all of the event records.
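The frequency-report idea itself is straightforward...as a rough sketch (Python for illustration, with my own field names; evtrpt is a Perl tool):

```python
from collections import Counter

def evt_report(records):
    """records: an iterable of parsed event records, each a dict with
    'source', 'event_id', 'type', and 'time_generated' keys (field
    names here are my own). Summarize frequency by source/event ID
    pair and by event type, plus the date range covered."""
    records = list(records)
    by_source_id = Counter((r["source"], r["event_id"]) for r in records)
    by_type = Counter(r["type"] for r in records)
    times = [r["time_generated"] for r in records]
    return {
        "by_source_id": by_source_id.most_common(),
        "by_type": dict(by_type),
        "range": (min(times), max(times)) if times else None,
    }
```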

Another thing that EvtUI lets the analyst do is enter exceptions. I've seen instances with really large .evt files (when combined with an extremely verbose audit configuration) where the .evt file will have more than 65,535 records...and this is the limit of entries for Excel. So, the analyst can run EvtUI once, and then check the report...if there are more than 65,535 records, she can choose event IDs to enter as exceptions and then re-run EvtUI.

Now, once you've gotten this far, the question then becomes, how do you analyze the data you've got? Well, what you look for depends not only on your case, but on what's being audited (which you can see very easily by parsing the PolAdtEv value from the Security Registry hive file). This is only a start, though...I suggest that anyone who does or wants to do Event Log analysis check out the following sites:

EventID.net (indispensable and well worth the $24/yr subscription)
Eric Fitzgerald's blog
Rob "Van" Hensing's Blog
Windows 2000 Security Event Descriptions (pt 1, 2)

Tips
There was an intrusion investigation where the intruder was suspected of having created an account (done in many cases in order to maintain persistence) within the domain. Auditing for logon events was not enabled, but auditing for account management events was...and I was able to quickly find an event ID 624 record showing the creation of the suspicious account.

Other Resources

EventLogRecord structure
Windows Event Log Reference (Vista, 2008)
GrokEVT (Python-based)
ScreenClean

Monday, February 25, 2008

Getting Started, pt II

Okay, in the face of recent (and completely bullsh*t) claims by Sen. Clinton that Sen. Obama plagiarized speeches (so the guy used some phrases...so what?), I thought that it would be best that I was up-front and came clean...I did not have se...oh, geez...wait a sec...

I was on the e-Evidence site this morning and saw a paper listed from Kennesaw State University, entitled "Digital Forensics on the Cheap: Teaching Forensics Using Open Source Tools", by Richard Austin. This paper goes right along with what I was referring to in my earlier post, but also takes it a step further with regards to using specific tools, in this case, Helix and Autopsy. This is a great read and definitely very useful.

So, you're probably wondering...what's the point? Well, lists of free and open-source tools, as well as documents that describe their use, can be used to provide a solid foundation in the fundamentals (and even in more advanced information and techniques) of computer forensic analysis. Some college (community college as well as university) courses may not have the budget for some of the more expensive tools, but can provide the time and impetus necessary for folks wanting to learn and develop skillz to do so.

The availability of images, of tools for creating and obtaining images, and of tools for analysis also provides a foundation for training programs aimed at developing more advanced skill sets. Not only that, but new areas of computer forensic analysis can be explored...for example, it's not entirely difficult to locate malware on a system, but one area that isn't often explored is how it got there in the first place. Training sessions and brown-bag or white-board discussions all lend themselves very well to advancing the knowledge base of any group of forensic analysts, and the availability of the tools and images puts these training sessions within reach of anyone with a Windows system and some storage space.

One final thought to close out this post (but not this subject)...has anyone thought about using these resources as part of an interview process? I can easily see three immediate ways of doing so...
  • 1. Query the interviewee with regards to familiarity with the tools and/or techniques themselves; if familiarity is mentioned or discovered during the interview process, ask probing questions about the use of the tools (Note: this requires the interviewer to be prepared).

  • 2. Prior to the actual interview, have a candidate perform an exercise...point them to a specific image, and give them instructions on what tools to use (or not to use). Part of the interview can then be a review of their process/methodology.

  • 3. If an interview is conducted on-site, with the candidate coming into the facility (rather than a remote interview), have the candidate sit down at a workstation and solve some problem.
The whole point of the use of these tools and techniques as training and evaluation resources would be to get analysts thinking and processing information beyond the point of "Nintendo forensics", going beyond pushing a button to get information...because how do you know if the information you receive is valid or not? Does it make sense? Is there a way to dig deeper or perhaps validate that information, or is there a technique that will provide validation of your data?

When First Responders Attack!!

It still happens...an "event" or "incident" occurs within an organization, and the initial response from the folks on-site (most often, the organization's IT staff) obliterates some or all of the "evidence" of the event. Consultants are called to determine "how they got in", "how far they got" into the infrastructure, and "what data was taken", and as such, are unable to completely answer those questions (if at all) due to what happened in the first hours (or in some cases, days) after the incident was discovered.

Check out Ignorance wrecking evidence, from AdelaideNow in Australia. It's an excellent read from the perspective of law enforcement, but a good deal of what's said applies across the board.

One of the things that consultants see very often is a disparity between what first responders say they did during an initial interview, and what the analyst sees during an examination. Very often, the consultant is told that the first responders took the system offline, but didn't do anything else. However, analysis of the image shows that anti-virus and -spyware tools were installed and run, files were deleted, and files were even restored from backup. A great deal of this can be seen once the approximate timeline of the incident is determined...and very often, you'll see an administrator log in, install or delete/remove stuff, etc., and then say that they didn't do anything.

Why would this matter? Let's take a look...

Many analysts still rely on traditional examination techniques, focusing on file MAC times, etc. So an admin logs into a system and runs an AV or anti-spyware scan (or both...or just "pokes around"...something that happens a LOT more than I care to think about...), and now all of the file access times on the system have been modified, and perhaps some files have been deleted. Anyone remember this article on anti-forensics that appeared in CIO Magazine? Why worry about that stuff, when there is more activity of this nature occurring due to either the operating system itself, or due to regular, day-to-day IT network ops?

So what's the solution? Education and training, starting with senior management. They have to make it important. After all, they're the ones that tell IT that systems have to stay up, right? If senior management were really aware of how many times (and how easily) their organization got punked or p0wned by some 15 yr old kid, then maybe they'd put some serious thought and effort into protecting their organization, their IT assets, and more importantly, the (re: YOUR) data that they store and process.

Thursday, February 21, 2008

Important Memory Update

I ran across this info today, and thought that I'd post it...it seems quite important, in that it pertains to the use of physical memory (RAM) to deal with whole disk encryption (WDE), referred to as "cold boot attacks on disk encryption".

This looks like very cool stuff. Give it a read, and let me know what you think.

Don't forget that TechPathways provides a tool called ZeroView, which can reportedly be used to detect WDE.

Wednesday, February 20, 2008

Getting started, or forensic analysis on the cheap

Quite often, I'll see posts or receive emails from folks asking about how to get started in the computer forensic analysis field. What most folks don't realize is that "getting into" this field really isn't so much about the classes you took at a college or the fact that you have a copy of EnCase. What it's about is how well you know your stuff, what you're capable of doing, and if you're capable of learning new stuff.

For example, who would you want to hire or work with...someone who only knows how to use one tool (for example, EnCase), or someone who can explain how EnCase does what it does (such as file signature analysis) and can come up with solutions for the problems and challenges that we all run into?

What I've decided to do is compile a list of free (as in "beer") resources that can be used by schools and individuals to develop labs, training exercises, etc., for the purposes of providing an educational background in the field of computer forensic analysis. With nothing more than a laptop and an Internet connection, anyone interested in computer forensics analysis can learn quite a lot without ever spending any $.

Imaging
FTK Imager 2.5.3 (and Lite 2.5.1)
George M. Garner, Jr's FAU
dcfldd - Wiki
dc3dd

Image/File Integrity Verification
MD5Deep

Images/Analysis Challenges
Lance's Forensic Practicals (#1 and #2) (no EnCase? Use FTK Imager to convert the .E0x files to dd format)
NIST Hacking Case
DFTT Tool Testing Images
HoneyNet Project Challenges
VMWare Appliances (FTK Imager will allow you to add these - most of which are *nix-based - as evidence items and create dd-format images)

Analysis Applications
TSK 2.51 (as of 10 Feb 2008...includes Windows versions of the tools, but not the Autopsy Forensic Browser - see the Wiki for how to use the tools)
NOTE: DFLabs is developing PTK, an alternative Sleuthkit interface, and they are reportedly working on a full Windows version, as well!
ProDiscover 4.9 Basic Edition
PyFlag

Mounting/Booting Images
VDK & VDKWin
LiveView (ProDiscover Basic will allow you to create the necessary .vmdk file for a dd-format image)
VMPlayer

Analysis Tools
Perl ('nuff said!!) - my answer for everything, it seems ;-)

File Analysis
MiTec Registry File Viewer - import Registry hive files
TextPad
Rifiuti - INFO2 file parser
BinText - like strings, but better
Windows File Analyzer

File Carving
Scalpel

Browser History
WebHistorian

Archive Utilities
Universal Extractor
jZip
PeaZip

AV and Related Tools
Miss Identify - identify Win32 PE files (different from an AV scan)
GriSoft AVG Free Edition anti-virus
Avira AntiVir PersonalEdition anti-virus
McAfee Stinger - standalone tool to scan for specific malware
ThreatFire (requires live system, best when used w/ AV)
GMER Rootkit Detection (requires live system)

Packet Capture and Analysis
PacketMon
WireShark

Other Tools
According to Claus at the GSD blog, Mozilla uses SQLite databases to store information, so if you're doing browser analysis, you may want to take a look at SQLite DB Browser, or SQLiteSpy. If you want to create your own databases in SQLite, check out SQLite Administrator. So, you can use these tools not only for analysis of the Mozilla files, but also for creating your own databases for use with other tools (ie, Perl).
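As a quick sketch of the sort of query involved (Python for illustration, and assuming the Firefox 3 places.sqlite schema with its moz_places table...earlier Mozilla versions stored history differently):

```python
import sqlite3

def recent_urls(places_db, limit=10):
    """Pull the most recently visited URLs from a Firefox places.sqlite.
    Assumes a moz_places table with url and last_visit_date columns."""
    con = sqlite3.connect(places_db)
    try:
        rows = con.execute(
            "SELECT url, last_visit_date FROM moz_places "
            "ORDER BY last_visit_date DESC LIMIT ?", (limit,)
        ).fetchall()
    finally:
        con.close()
    return rows
```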

Please keep in mind that this is just a list...and not an exhaustive one...of technical resources that are available. There are many, many other tools available.

Also, all of the technical tools and techniques are for naught if you (a) cannot follow a process, and (b) cannot document what you do.

Jesse rides again!

Jesse Kornblum has done it again! Jesse's one of those guys who releases some really amazing tools for use in the IR and forensic analysis space, and he's done it again with "Miss Identify".

Miss Identify is a tool to look for Win32 applications through the use of file signature analysis. By default, it looks for Win32 apps (per the PE header) that do not have executable file extensions. As with Jesse's other tools, Miss Identify is rich with features, all of which are configurable from the command line.

So, you're probably thinking...okay, so what? You can already do this sort of thing with other tools, right? What makes this tool so Super Bad, McLovin?? Well, right now, there are a number of ways that a forensic analyst can identify malware in an acquired image, including checking the logs of any AV app that is already installed, or mounting the image and running an AV scanner or hash set comparison tool. However, two issues arise with these approaches...one is that there are legitimate tools that can and are used for malicious purposes. The other is that signatures (AV signatures, hashes, etc.) don't always work. However, there is one thing that all malware must be, and that is executable!
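The signature check at the heart of this sort of tool is easy to sketch (this is the general technique, not Jesse's actual code): look for the "MZ" header, follow the e_lfanew pointer at offset 0x3C, and verify the "PE\0\0" signature (Python for illustration):

```python
def is_win32_executable(data):
    """File-signature check in the spirit of Miss Identify: 'MZ' at
    offset 0, e_lfanew pointer at offset 0x3C, 'PE\\0\\0' where the
    pointer says the PE header lives."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    pe_off = int.from_bytes(data[0x3C:0x40], "little")
    return data[pe_off:pe_off + 4] == b"PE\x00\x00"

# A renamed executable (say, readme.txt) still matches on content:
stub = bytearray(0x48)
stub[0:2] = b"MZ"
stub[0x3C:0x40] = (0x44).to_bytes(4, "little")
stub[0x44:0x48] = b"PE\x00\x00"
print(is_win32_executable(bytes(stub)))  # True
```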

Miss Identify can also print the strings found in the files. This is great because you may find an executable file in the system32 directory that has a Microsoft-sounding name, but does not contain the MS copyright info embedded in the resource strings. This would be a "clue".

The use of Miss Identify doesn't replace other analysis and data reduction techniques, but instead augments them. This is without a doubt a useful tool, and one that should be considered for use by all sysadmins, first responders, as well as forensic analysts.

A round of applause for Jesse, everyone!

Also, I love the "Hollywood teaser" Jesse used to let everyone know what was coming! Speaking of teasers, isn't IronMan coming out soon....? Can you think of a better way to get Marvel Comics and Black Sabbath to come together??? ;-)

Addendum: I reached out to Jesse and mentioned to him that it might be useful to parse out the file version information from an executable, rather than all of the strings. Also, reading through the comments to Jesse's blog, there are some very useful tips pointed out...for example, finding an executable file in a user's browser cache might be considered by some examiners to be a "clue"... ;-)

Friday, February 15, 2008

CIO article on the need for forensics

CIO Magazine out of the UK has an interesting article titled In-depth Investigation that discusses the need for computer forensics capabilities. While it is from across the pond, the message of the article is extremely applicable here in the US, as well.

I know that since I agree with it, many folks are going to think, "well, yeah, you're a consultant...of course you agree with this article, because it recommends that companies hire you!" And yes, that's true...I am a consultant, and in most cases a company would have to hire someone like me to come in and do the kind of work that is recommended.

However, even taking e-discovery out of the equation for a moment, with the increase in state notification laws (goin' federal in the near future...), as well as the regulatory stuff (SEC, PCI Council, FISMA, HIPAA, etc.), a forensics capability is being mandated. The decision has been left to organizations, and they've opted not to develop the capability...and now many organizations are being told that they have to have it.

My personal thought on this is that ideally what an organization would want to do is develop an in-house capability for tier 1 response...trained folks whose job it is to respond to, triage, and diagnose a technical IT incident. By "trained", I mean in the basics, such as NSM, incident response, troubleshooting, etc...enough to be able to triage and accurately diagnose level 1 and 2 incidents, as well as preserve data until outside professionals can respond to level 3 or 4 incidents.

That leads to one other thought...many times when folks like me recommend that an outside third-party be called to perform incident response and/or computer forensic activities, it's not so much because we want your money (well, that IS part of it...), but look at it this way...if your organization is mandated (by the PCI Council, for example) to have a pen test performed, how well do you think they're going to accept the results when your report says that your own IT employees performed the pen test against the systems they set up, and they found no way to get in? Having an outside third party do this kind of thing adds credibility to the report...besides, this is what we do all the time. ;-)

New Docs at SWGDE

The Scientific Working Group on Digital Evidence (SWGDE) has released some new documents, the most notable of which are the Vista Technical Notes, and the document on "Live Capture".

The document on Live Capture was very interesting! At only 5 pages in length (the first page is formal disclaimer stuff...), there isn't a whole lot of detail, and the timeliness of the document may be questionable, but the point is that the document does reference the benefits of performing "live capture"...a term which encompasses three different activities. The document spends only a small paragraph discussing RAM dumps, and in that paragraph refers to "DD" as a software tool that can be used for collecting the contents of memory...on Windows systems, this is no longer the case (unless you have an old copy of dd.exe sitting around). Further, this article in Forensic Magazine mentions the use of dcfldd (version 1.3.4 was reportedly used when writing the article) to dump RAM from a Windows system...however, the command line listed in the article no longer seems to work (although for some odd reason, on a Windows XP SP2 system, replacing "\\.\PhysicalMemory" with "/dev/mem" seems to get something). Oddly enough, the document doesn't mention ProDiscover (which had the ability to collect RAM and volatile data before EnCase), nor does it mention Nigilant32.

The section of the document that addresses live acquisition is also extremely short and bereft of any real content...I'd love to know what "careful planning" they are referring to, just as I'm sure others reading the document who've never done a live acquisition must be wondering.

But hey...don't get me wrong...I think it's a great thing that the document is out. The more these techniques and methodologies are discussed and presented, the more likely they are to be used and then become part of standard procedures.

Thursday, February 07, 2008

DFRWS 2008 Announcement

The DFRWS 2008 CfP and Challenge have been posted!

The CfP invites contributions on a wide range of subjects, including:
  • Incident response and live analysis
  • File system and memory analysis
  • Small scale and mobile devices
  • Data hiding and recovery
  • File extraction from data blocks (“file carving”)
And here's a couple that should be interesting:
  • Anti-forensics and anti-anti-forensics
  • Non-traditional approaches to forensic analysis
Submission deadline is 17 Mar, with author notification about 6 wks later.

I may submit something on Registry analysis...we'll have to see. This may be a good segue into a book...I've been thinking that based on some new tools I've been working on, as well as data collected since Windows Forensic Analysis was published, I may have enough to put together a book just on Registry analysis.

This year's challenge is similar to 2005's, except that this time the issue is Linux memory analysis.

This year, the conference is in Baltimore ("Bahlmer"), MD, 11-13 Aug 2008.

Thursday, January 31, 2008

Enter Sandman

You're probably wondering, "Since when did Metallica have anything to do with Windows forensics?"

My answer to that is...since ALWAYS!

Okay...enough of that. The Sandman I'm referring to isn't the one from the Metallica song. Rather, this one has to do with the Windows hibernation file (get it? "sleep". "Sandman". get it? no...you don't...). Evidently Nicholas and Matthieu have been working on a C library for reading/writing the Windows hibernation file. This sounds really cool, and it looks as if they're going to include Python bindings, as well as a couple of sample apps, one of which will reportedly convert a hibernation file into a dd-style memory dump. Very cool. Keep in mind, however, that a hibernation file doesn't contain the current contents of memory, but rather the contents of memory from when the file was created.

Sandman looks like a good tool to have in your kit, and I can't wait to try it out.

Artifact Repositories, part deux

I wanted to take my last post on this topic just a bit further...

I received an email from someone recently asking me about checklists for determining the attack vector of an incident. Yeah, I know...that's a pretty broad question, but I do see the issue here. Sure, some folks are "finding stuff", but the question is now becoming, how did it get there? That's the next logical question, I suppose, and it is being asked.

Over on F-Secure, I saw this post this morning about a PHP IRC Bot. This is just one example, but once the IRC bot is located on the infected system, how does one go about determining how it got there? One thought would be to look at the method you used to locate the bot...say, scanning with an AV scanner...and use that as a starting point. What did the AV scanner identify the malware as? Once you get a name, go to the vendor's web site and see what it says about infection vectors. Again...this is one way to go about the process, not the way.

Another example is this...I'm sure that finding a bot or backdoor isn't too difficult, particularly if you have volatile data or the malware is easily detected by the scanner you're using. But how did it get there? Was the attack vector a downloader from a malicious web site that pulled down a Trojan or bot that was then used to put the backdoor on the system?

At this point, you might be thinking..."who cares?" After all, you found the bad stuff, right? But is that really sufficient? If you don't discover the root cause, how do you protect yourself in the future? How do you protect other systems?

Another aspect to consider is application artifacts. Hogfly recently posted on intel gathering...what are the artifacts of the use of the three applications he listed in the post? Does anyone know? After all, one thing to be concerned about is USB devices...but how many are considering remote storage repositories? Anyone remember this three year old blog post regarding the GMail Drive? Some of the artifacts listed in the post are easily added to any sort of Registry scanning tool... ;-)

I'm thinking that it's about time to add a little something to our investigative process. Artifact repositories will very likely make it much easier to determine root causes, keeping in mind, though, that such a thing isn't really a comprehensive solution. Training or instruction in OS and application basics will provide a better understanding of how to determine root causes, particularly when there are multiple possibilities.

Sunday, January 27, 2008

Artifact Repositories

I see posts in a number of lists asking about (and for) forensic artifacts for P2P applications...lately, there have been several about LimeWire. For the most part, general questions regarding P2P apps drift toward...well...general questions, like "has anyone ever dealt with this" kind of questions. When specific apps are named, like LimeWire, specific questions are asked, such as "what are the contents of this file?" I can easily see how these issues would be relevant to cases involving files being shared, whether they are illicit images, or company proprietary information and IP.

It has occurred to me, time and again, that what is needed is a central repository of forensic artifact information. Something like a searchable database portal where you can login, type in a few keywords, and obtain a listing of relevant articles. These articles could be downloadable PDF documents...something that you can take with you, print out, etc. These articles would be written for forensic analysts, by forensic analysts...that way, they would contain relevant information, as well as have tips for techniques to use for data extraction and analysis, or even the tools themselves.

Now, the question becomes...if this repository were to contain more than just a few articles on forensic artifacts of P2P applications, but instead covered other areas, and even addressed other OSs, is this something you would pay for? Far too often in this community, when something is provided for free, it languishes unused...be it tools, information, or books. An annual subscription fee would be necessary to keep something like this up and running.

Now, articles would be updated, of course, and information would constantly be added to the library. Something like this could also have a forum where information could be exchanged and clarifying questions could be asked. Also, a subscriber could request additional information, or request that the latest version of an app be examined.

Is there anything else you'd like to see, or wouldn't like to see in something like this? Does this make any sense at all?

Tuesday, January 22, 2008

Free AV Scanners

Many times during an examination, you may want to do a little data reduction, by scanning your image for the presence of malware. While this should not be considered a 100% guarantee that there is no malware if there are no hits, this may lead you to something and narrow your search a bit. Again, this is just a tool, something that as a forensic analyst you can use.

Start by mounting the image as a read-only drive letter using Mount Image Pro or VDKWin. Then scan the drive letter with your AV scanner of choice. Some free AV scanner options include:

GriSoft AVG Free Edition
avast! Home Edition (free for home/non-commercial use)
ClamWin
Avira AntiVir PersonalEdition
Comodo AV
Windows Defender (spyware)

Some rankings reports (includes free and for-pay):
PCWorld
Top10 Reviews
GCN Lab
Top Windows AV

Note that some of the available AV products may include a command line interface (F-Prot, for example) which means that you can run the scanner after hours using a Scheduled Task.

So, what's in your wallet? What is your AV scanner of choice (free or otherwise)?

Sunday, January 13, 2008

IR Immediate Actions

As you may remember, in December 2007 I presented at the HTCIA conference in Hong Kong, the first HTCIA conference for this chapter. It was a great conference, and a great opportunity to meet a lot of folks in this field (see the Speaker's page).

While there, I presented on Registry Analysis and Windows Memory Analysis, both of which seemed to be well received. One question came up during discussions of Windows Memory Analysis...during the presentation, I talk about different tools and techniques that are available for extracting the contents of RAM, and also talk about the difference between tools I can use as a private individual and those I can use as a consultant. At one point, someone asked me which tools and techniques I use most often as a consultant...and my response was simply "none of them". The reason for this is that in most all of the responses I've dealt with, the system (or systems) have already been turned off, rebooted, or sufficiently imposed upon by others (AV scans, running multiple tools, etc.) that even if I, as a consultant, did have an option available for collecting the contents of RAM, doing so would be of little benefit to either my analysis, or any information I could provide to the customer.

Part of the issue is that as a consultant, I am extremely limited in the tools I can use to collect the contents of physical memory from a system, not so much due to the availability of a particular tool, but more so due to the license agreement associated with that tool.

However, a bigger issue is that the immediate actions performed by most first responders on-site are to take systems offline, shut them down, or "clean" and reboot them prior to any calls for outside assistance being made. While this may be pertinent for business preservation and continuity, it most often has a detrimental impact on any follow-on analysis that may take place.

If there is data leakage due to an intrusion (or this is suspected, or this is just a question that needs to be answered...), then the immediate reaction is (apparently) to shut the system down. This may be pertinent, particularly if there is no incident response plan in place that lets people know what they need to do, and time is required to notify and get approval for follow-on activities (such as calling consultants). This reaction appears to be fairly ingrained, and I'm not suggesting that we change it by saying DO NOT shut systems down. What I am going to suggest is that we modify those immediate actions such that pertinent information is collected from systems before they are shut down.

Wait...what? It looks like what I am saying is, go ahead and shut the systems down, or reboot them, or "clean" them...and I am. This is going to happen anyway. There's nothing I can say or do, either on this blog or as a paid consultant, to change that. In order to change that behavior, there has to be an incident response plan (CSIRP) in place that tells people what to do, and when someone shuts a system off inappropriately or incorrectly, that issue needs to be addressed. The issue is that some kind of overwhelming stimulus needs to be put in place to change this behavior, and until this happens, nothing anyone can say or do is going to change that. So...it's gonna happen anyway, and instead of changing it, let's go ahead and go with it...but throw something else in there along the way.

What can we do? Well, one thing is to put tools on systems (for a list of tools, check out my book) when they are set up, or at least at some point prior to an incident occurring. If you can't put tools and a simple batch file on systems (for whatever reason...the most common of which seems to be, "...we don't know how..."), then issue each admin/responder a CD with tools, and a thumb drive. Then make it a policy within the organization that a batch file (could be a DOS batch file, or WMI script, etc.) be run and the output saved to a thumb drive, shared drive, etc., prior to the system being taken offline.
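As a sketch of what that kind of pre-positioned collection might look like...here it's written in Python rather than a DOS batch file or WMI script, just to show the idea, and the Windows commands shown in the comment are examples, not a prescribed tool list...the whole thing boils down to running a set of trusted tools and saving timestamped output before the system is touched:

```python
import subprocess, datetime, pathlib

def collect(commands, outdir):
    """Run each (name, argv) pair and save its stdout to outdir/name.txt,
    recording a UTC timestamp for each collection step."""
    outdir = pathlib.Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    log = []
    for name, argv in commands:
        ts = datetime.datetime.utcnow().isoformat()
        try:
            out = subprocess.run(argv, capture_output=True,
                                 text=True, timeout=60).stdout
        except OSError as e:
            out = "ERROR: %s" % e
        (outdir / (name + ".txt")).write_text(out)
        log.append((ts, name))
    return log

# On a Windows system, run from CD with output to the thumb drive, the
# call might look something like (illustrative only):
# collect([("tasklist", ["tasklist", "/v"]),
#          ("netstat", ["netstat", "-ano"])], "E:\\ir_output")
```

Whatever form it takes...batch file, WMI script, or something like the above...the key is that it exists, it's trusted, and the policy says to run it first.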

Tuesday, January 08, 2008

Response to "antiforensics"

On one of the online forums this evening, someone posted about reading the latest issue of 2600 and finding an article that mentions the use of "antiforensics" techniques, specifically with regards to one forensic analysis application in particular.

My response was this:

[Most of] these techniques don't defeat tools...they defeat examiners.

Thoughts?

Saturday, January 05, 2008

Metadata, again...

I've blogged about metadata for various file types before, and the other day I saw a question regarding metadata in MS Works documents. That was pretty interesting, so I fired up my 'leet Google h4x0R skillz and entered in metadata + "MS Works" as my search terms, and I ended up finding something called Meta-Extractor from the folks at the National Library of New Zealand. This tool appears to be Java-based, is about a 10.7MB download, and appears to extract metadata from a variety of file formats...to include MS Works! That's interesting...I didn't even know that MS Works docs had metadata! My first real intro into Word metadata involved the Blair doc...and I'm aware that other Office OLE file formats have metadata, as well.

Another such tool is Metagoofil, from DarkNet. I haven't tried this one...but then I haven't had a great deal of need for things like metadata. When I have, I've written my own tools.

One of the more interesting ways to generate some cool metadata is to use MergeStreams to merge an Excel spreadsheet into a Word doc. I used to present on this at LE conferences all the time, along with things like NTFS alternate data streams, and hiding data in the Registry...but it looks like this stuff is just kewl nerd stuff and nothing more...

Tuesday, January 01, 2008

First Post of '08

So, here it is...the first post of 2008 on my blog...what to say, what to say? I'm not a big fan of the "predictions" posts, pontificating on what's going to happen in the coming year. For the most part, who knows? Anything we do see in the media regarding data breaches is...well...tainted by the media, so we're not going to have any idea of the validity of what we're seeing.

Let's do some highlights...

From the perspective of this blog and the subject matter, the highlights for 2007 were the release of Windows Forensic Analysis in May, followed at the end of the year by the release of Perl Scripting for IT Security (the cover on Amazon says "IT", but the book on my bookshelf says "Windows"...it was published by Elsevier).

Another highlight, as it relates to the WFA book, is that Richard Bejtlich posted his Best Books Bejtlich Read in 2007, and ranked WFA #3! High praise, indeed, considering that Richard is a *BSD guy!

Goals I'd like to achieve in the coming year include:
  1. Finish development on Windows memory parsing tools (or at least progress along in the stages....)
  2. Finish development of a Windows Registry preprocessor (basically, extract the Registry hive files from an image and drop them into a "thresher", and the wheat gets separated from the chaff...)
  3. Include more Vista- and Windows 2008-specific data in #1 and #2 above
  4. Do more codification and documentation of frameworks and processes related to my day job; things like live response, CSIRP development, documentation of data extraction and analysis processes for Windows platforms, etc.
I think that's about enough, don't you? Keep the goals achievable...there's nothing like looking back over a year (or a customer engagement!!) and realizing that the goals were too grandiose and voluminous, and simply weren't reached.

If you've got goals, thoughts, or comments that relate to the subject matter of this blog, feel free to post a comment...and have a great 2008!

Addendum:
Andrew Hay's Predictions for '08

Saturday, December 29, 2007

Who you gonna call?

Remember that old tag line from the '80s? It's right up there with "Where's the beef?" However, my question is directed more toward forensic analysis, including anything collected during live response.

Where do you go for thoughts, input or validation regarding your live response process? Do you just grab a copy of Helix and run the WFT tools? Or are you concerned about doing that blindly (believe me, there are folks out there who aren't...), and want some kind of validation? (I'm not saying that WFT and toolkits like it are bad...in fact, that's not the case at all. What I am saying is that running the tools without an understanding of what they're doing is a bad thing.)

What about analysis of an image? Do you ever reach out and ask someone for insight into your analysis, just to see if you've got all of your bases covered? If so, where do you go? Is it a tight group of folks you know, and you only contact them via email, or do you reach out to a listserv like CFID, or go on lists like ForensicFocus?

Another good example is the Linux Documentation Project and the list of HowTo documents. These are great sources of information...albeit not specific to forensic analysis...and something I've used myself.

NIST provides Special Publications in PDF format, and Security Horizon is distributed in PDF. CyberSpeak is a podcast. IronGeek posts videos, mostly on hacking. I included a couple of desktop video captures on the DVD with my book, showing how to use some of the tools.

While I agree that we don't need yet another resource to pile up on our desks and go unread, I do wonder at times why there isn't something out there specific to forensic analysis.

Friday, December 28, 2007

Deleted Apps

Another question I've received, as well as seen in online forums, has to do with locating deleted applications.

As Windows performs some modicum of tracking of user activities, you may find references to applications that were launched in the UserAssist keys in the user's NTUSER.DAT file. Not only would you find references to launching the application or program itself, but I've seen where the user has clicked on the "Uninstall" shortcut that gets added to the Programs menu of the Start Menu. I've also seen in the UserAssist keys where a user has launched an installation program, run the installed application, and then clicked on the Uninstall shortcut for the application.

You may also find references to the user launching the "Add/Remove Programs" Control Panel applet.

If you're dealing with an XP system you may find that if the application was originally installed via an MSI package, a Restore Point was created when the application was installed...and one may have been created when the application was removed, as well. So, be sure to parse those rp.log files in the Restore Points.

An MFT entry that has been marked as deleted is great for showing that a file, or even several files, had been deleted. Analysis of the Recycle Bin/INFO2 file may show you something useful. But there are other ways to find more data, to include showing that at one time, the application had been used...such as by parsing the .pf files in the XP Prefetch directory, and performing Registry analysis.

Other Resources:
Intentional Erasure

The MAC Daddy

I received a question in my inbox today regarding locating a system's MAC address within an image of a system, and I thought I'd share the response I provided...
"The path to the key that tells you which NICs are/were in use on the system is:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkCards

Beneath this key, you will see a set of subkeys, each with different numbers;
on my system, I see "10", "12", and "2". Each of these keys contains values;
Description and ServiceName. The ServiceName value contains the GUID
for the interface.

Using the GUIDs, go to:
HKLM\SYSTEM\ControlSet00x\Services\Tcpip\Parameters
\Interfaces

*Be sure to use the ControlSet marked as "Current".

Beneath this key, you'll see subkeys with names that are GUIDs. You're
interested in the GUIDs you found beneath the previous key. Within each key,
you will find the values associated with that interface.

By default, Windows does not retain the MAC address in the Registry. I'm
aware that there are sites out there that say that it does, but they are incorrect...
at least, with regards to this key. If you *do* find an entry within the "Interfaces"
key above that contains a value such as "NetworkAddress", it is either specific
to the NIC/vendor, or it's being used to spoof the MAC address (this is a known
method).

Also check the following key for subkeys that contain a "NetworkAddress" value:
HKLM\SYSTEM\ControlSet001\Control\Class
\{4D36E972-E325-11CE-BFC1-08002bE10318}

Other places you can look for the MAC address:

*Sometimes* (not in all cases) if you find the following key, you may find a value
named "MAC", as well:
HKLM\SOFTWARE\Microsoft\Windows Genuine Advantage

Another place to look is Windows shortcut (*.lnk) files...Windows File Analyzer
is a GUI tool that parses directories worth of *.lnk files and one of the fields that
may be populated is the MAC address of the system."

I thought others might find this helpful as well...
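To make the GUID-correlation step concrete, here's a hedged Python sketch; it assumes the two keys have already been dumped into dictionaries by whatever hive-parsing tool you use, and the NIC data shown is made up purely for illustration:

```python
# Simulated output of parsing the two Registry keys described above.
network_cards = {   # ...\Windows NT\CurrentVersion\NetworkCards\<n>
    "2": {"Description": "Broadcom 440x", "ServiceName": "{GUID-1}"},
}
interfaces = {      # ...\Services\Tcpip\Parameters\Interfaces\<GUID>
    "{GUID-1}": {"DhcpIPAddress": "192.168.1.10"},
}

def correlate(cards, ifaces):
    """Join each NIC's description to its TCP/IP interface values
    via the GUID found in the ServiceName value."""
    out = {}
    for num, card in cards.items():
        guid = card["ServiceName"]
        out[card["Description"]] = ifaces.get(guid, {})
    return out

print(correlate(network_cards, interfaces))
```

Walking the keys by hand works fine for one system; for a stack of images, something like the above saves a lot of clicking.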

Sunday, December 23, 2007

Perl Scripting Book

It looks like my Perl Scripting book made it out just in time for Christmas!

Oddly enough, it only seems to be available on Amazon...I can't seem to locate it on the Syngress site, but I did find it on the Elsevier site (Elsevier owns Syngress).

Perl Scripting for IT Security is not a follow-on or companion to my previous book, Windows Forensic Analysis. Rather, it goes more into showing what can be done, and how it can be done, in the world of Incident Response and Computer Forensics Analysis using an open-source solution such as Perl. The book, in part, shows that with a little bit of knowledge and skill, we are no longer limited to viewing only what our commercial forensic analysis tools show us.

Addendum, 28 Dec: A box showed up on my doorstep today, with several copies of the book! Woot!

Sunday, December 02, 2007

Windows Shortcut (LNK) files

I saw a question in a forum recently regarding Windows Shortcut Files, and thought I would post something that included my findings from a quick test.

The question had to do with the file MAC times found embedded in the .lnk file, with respect to those found on the file system (FAT32 in this case). The question itself got me curious, so I ran a quick test of my own.

I located a .lnk file in my profile's (on my own system) Recent folder (NTFS partition) and ran the lslnk.pl script from my book against it, extracting (in part) the embedded file times:

MAC Times:
Creation Time = Fri Nov 16 20:33:21 2007 (UTC)
Modification Time = Fri Nov 16 22:47:50 2007 (UTC)
Access Time = Fri Nov 16 20:33:21 2007 (UTC)

I then double-clicked the Shortcut file itself (from within Windows Explorer), launching the .pdf file. I closed the .pdf file and used "dir /ta" to verify that the last access time on the shortcut file and on the .pdf file itself had been updated to the current time (in this case, 7:29am EST). I re-ran the lslnk.pl script against the original shortcut file, and extracted the same MAC times as seen above.
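For anyone without the DVD, the time-extraction portion of a .lnk parser is straightforward; this is a minimal Python sketch (not lslnk.pl itself) that reads the three 64-bit FILETIME values from their fixed offsets in the shortcut file header (creation at 0x1C, access at 0x24, write at 0x2C):

```python
import struct, datetime

EPOCH_DIFF = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def lnk_times(data):
    """Extract the embedded MAC times from a shell link (.lnk) header.
    Each is a FILETIME: a 64-bit little-endian count of 100-ns
    intervals since 1601-01-01 UTC."""
    times = {}
    for name, off in (("created", 0x1C), ("accessed", 0x24),
                      ("modified", 0x2C)):
        (ft,) = struct.unpack_from("<Q", data, off)
        times[name] = datetime.datetime.utcfromtimestamp(
            ft / 10_000_000 - EPOCH_DIFF)
    return times
```

Feed it the raw bytes of a shortcut file (e.g., `lnk_times(open(path, "rb").read())`) and compare the results against the file system times, as in the test above.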

BTW...the creation date on the .pdf file in question is 11/16/2007 03:33 PM...which is identical to the creation date embedded in the shortcut file, with the time zone taken into account.

Based on extremely limited testing, it would appear that the MAC times in the shortcut file do not change following the initial creation of the shortcut file.

Can anyone confirm/verify this, on NTFS as well as FAT32?

Saturday, December 01, 2007

Forensic Analysis

Often (well, often enough) we run into instances where it becomes evident that someone has an unrealistic expectation of what answers forensic analysis can provide.

I know that right now, most of you are thinking, "dude...no way!" But I say, truly, it is so. There's even a term for it...it's called the CSI Effect. There are even articles written about it (here, and here).

Let's look at an example of what I mean. Our hero, the forensic analyst and incident responder Dude Diligence gets a call from a company, and the caller says that they've been "hacked" ("hacked" is the new "smurfed", by the way...a verb with many nuances and flavors...). Dude gets on site to find that even before he was called, a considerable amount of remediation was going on...systems were being accessed and "cleaned" (another one of those verbs with many nuances and flavors...) by administrators, with no real focus as to who was doing what, when, or why.

I'm sure that by now, some of you who are consultants like our hero are now weeping into your beers, sobbing, "dude..." under your breath...but our story continues.

Dude does the best he can to get the story of what happened, and what systems were affected. In the end, he acquires images of about half a dozen systems and returns to the Dude Lab to do analysis. Before leaving, however, he makes sure that he has a solid understanding of what questions need to be answered for the customer...specifically, was this a targeted attack, and was sensitive data (could be credit card numbers, could be PII, PHI, etc.) compromised?

To make a long story short, ultimately what Dude finds is that...he can't find anything. Systems had not been configured for any sort of viable logging (the system, as well as applications), and what logs were there had been removed from the system. Systems had been scanned with AV applications, etc., etc. Traces of the intruder's activity (if there was one) had been completely obliterated by the actions of those who "responded" to the incident. Even if Dude had determined that sensitive information was, in fact, on the system, he isn't able to provide a definitive response to the question of, does that information now, as a result of the intrusion/compromise, exist somewhere it shouldn't? Was it exposed?

Even under the best of circumstances, there are just some questions that forensic analysis cannot answer. One question that comes up time and time again, particularly in some of the online forensic forums, is, from an imaged system, can you tell what files were copied off of the system? Dude has found artifacts of files being uploaded to web-based email systems such as GMail, and found other artifacts, but what computer forensic artifacts are there if someone opens a Word or PDF document on their screen and copies information out of it onto a piece of paper, using a pen? How about if someone connects a removable storage device to the system and copies the files off onto that device? Sure, there are artifacts of the device being connected to the system (particularly on Windows systems), but without actually imaging and analyzing that removable storage device, how would Dude determine what files were copied from the system to the device?

I've talked about alternative analysis techniques before, and the solutions I'm looking toward include, for example, how you may be able to show names of files that someone viewed, and possibly the dates, even if the user deleted and overwrote the files themselves, or viewed them from removable media, etc. There are lots of ways to get additional information through analysis of the Registry, Event Logs, and even of the contents of RAM captured from the system...but there are just some questions that computer forensics can not answer.

That being said, how does our hero Dude Diligence go about his dude-ly analysis? Well, to begin with, Dude is part-sysadmin. This means that he understands, or knows that he needs to understand, the interrelation between the different components of a system...be that the interrelation between the external router, firewall, and the compromised system within the network infrastructure, or the interrelation between the network, the apparently compromised host (i.e., the operating system), and the applications running on the host.

When it comes to analyzing intrusions, Dude doesn't have to be a pen-tester or "ethical hacker"...although it may help. Instead, Dude needs to shift his focus a bit and not so much concentrate on breaking into or compromising a system, but instead concentrate "around" it...what artifacts are left, and where, by various actions, such as binding a reverse shell to the system through the buffer overflow of an application or service? For example, when someone logs into a system (over the network via NetBIOS or ssh or some other application, or at the console), where would Dude expect to see artifacts? What does it mean to Dude if the artifacts are there? What if they're not there?

Remember Harlan's Corollary to Jesse's First Law of Computer Forensics?

This leads us back to the first statement in this post...there are some actions for which the artifacts are not available to forensic analysts like Dude when he's performing a post-mortem analysis, and there are some questions that simply cannot be answered by that analysis. There are some questions that can be answered, if Dude pursues the appropriate areas in his analysis...

Wednesday, November 21, 2007

Alternative Methods of Analysis

Do I need to say it again? The age of Nintendo Forensics is gone, long past.

Acquiring a system is no longer as simple as removing the hard drive, hooking it up to a write blocker, and imaging it. Storage capacity is increasing, and devices capable of storing data are diversifying and becoming more numerous (along with the data formats)...all of which are becoming more ubiquitous and commonplace. As the sophistication and complexity of devices and operating systems increase, examinations require additional resources, and the solution to the resulting backlog is training and education.

Training and education lead to greater subject-matter knowledge, allowing the investigator to ask better questions, and perhaps even make better requests for assistance. Having a better understanding of what is available to you and where to go to look leads to better data collection, and more thorough and efficient examinations. It also leads to solutions that might not be readily apparent to those that follow the "point and click execution" methodology.

Take this article from Police Chief Magazine, for example. 1stSgt Cohen goes so far as to specifically mention the collection of volatile data and the contents of RAM. IMHO, this is a HUGE step in the right direction. In this instance, a law enforcement officer is publicly recognizing the importance of volatile data in an investigation.

It's also clear from the article that training and education has led to the use of a "computer forensics field triage", which simply exemplifies the need for growth in this area. It's also clear from the article that a partnership between law enforcement, the NW3C and Purdue University has benefited all parties. It would appear that at some point in the game, the LEs were able to identify what they needed, and were able to request the necessary assistance from Purdue and NW3C...something known in consulting circles as "requirements analysis". At some point, the cops understood the importance of volatile memory, and thought, "we need this...now, how do we collect it in the proper manner?"

So what does this have to do with alternative methods of analysis? An increase in knowledge allows you to seek out alternative methods for your investigation.

For example, take the Trojan Defense. The "purist" approach to computer forensics...remove the hard drive from the system, acquire an image, and look for files...appears to have been less than successful, in at least one case in 2003. The effect of this decision may have set the stage for other similar decisions. So, let's say you've examined the image, searched for files, even mounted the image as a file system and hit it with multiple AV scanners, anti-spyware tools, and hash comparisons, and still haven't found anything. Then let's assume you had collected volatile data...active process list, network connections, port-to-process mapping, etc. Parsing that data, wouldn't the case have been a bit more iron-clad? You'd be able to show that at the time the system was acquired, here are the processes that were running (along with their command line options, etc.), installed modules, network connections, etc.

At that point, the argument may have been that the Trojan included a rootkit component that was memory-resident and never wrote to the disk. Okay, so let's say that instead of running individual commands to collect specific elements of memory (or, better yet, before doing that...), you'd grabbed the contents of RAM? Tools for parsing the contents of RAM do not need to employ the MS API to do so, and can even locate an exited or unlinked process, and then extract the executable image file for that process from the RAM dump.

What if the issue had occurred in an environment with traffic monitoring...firewall, IDS and IPS logs may have come into play, not to mention traffic captures gathered by an alert admin or a highly-trained IR team? Then you'd have even more data to correlate...filter the network traffic based on the IP address of the system, isolate that traffic, etc.

The more you know about something, the better. The more you know about your car, for example, the better you are able to describe the issue to a mechanic. With even more knowledge, you may even be able to diagnose the issue and be able to provide something more descriptive than "it doesn't work". The same thing applies to IR, as well as to forensic analysis...greater knowledge leads to better response and collection, which leads to more thorough and efficient analysis.

So how do we get there? Well, someone figured it out, and joined forces with Purdue and NW3C. Another way to do this is through online collaboration, forums, etc.

Tuesday, November 20, 2007

Windows Memory Analysis

It's been a while since I've blogged on memory analysis, I know. This is in part due to my work schedule, but it also has a bit to do with how things have apparently cooled off in this area...there just doesn't seem to be the flurry of activity that there was in the past...

However, I could be wrong on that. I had received an email from someone telling me that certain tools mentioned in my book were not available (of those mentioned...nc.exe, BinText, and pmdump.exe, only BinText seems to be no longer available via the FoundStone site), so I began looking around to see if this was, in fact, the case. While looking for pmdump.exe, I noticed that Arne had released a tool called memimager.exe recently, which allows you to dump the contents of RAM using the NtSystemDebugControl API. I downloaded memimager.exe and ran it on my XP SP2 system, and then ran lsproc.pl (a modified version of my lsproc.pl for Windows 2000 systems) against it and found:

0 0 Idle
408 2860 cmd.exe
2860 3052 memimager.exe
408 3608 firefox.exe
408 120 aim6.exe
408 3576 realsched.exe
120 192 aolsoftware.exe(x)
1144 3904 svchost.exe
408 2768 hqtray.exe
408 1744 WLTRAY.EXE
408 2696 stsystra.exe
244 408 explorer.exe

Look familiar? Sure it does, albeit the above is only an extract of the output. Memimager.exe appears to work very similarly to the older version of George M. Garner, Jr.'s dd.exe (the one that accessed the PhysicalMemory object), particularly where areas of memory that could not be read were filled with 0's. I haven't tried memimager on a Windows 2003 (no SPs) system yet. However, it is important to note that Nigilant32 from Agile Risk Management is the only other freely available tool that I'm aware of that will allow you to dump the contents of PhysicalMemory from pre-Win2K3SP1 systems...it's included with Helix, but if you're a consultant thinking about using it, be sure to read the license agreement. If you're running Nigilant32 from the Helix CD, the AgileRM license agreement applies.

I also wanted to followup and see what AAron's been up to over at Volatile Systems...his Volatility Framework is extremely promising! From there, I went to check out his blog, and saw a couple of interesting posts and links. AAron is definitely one to watch in this area of study, and he's coming out with some really innovative tools.

One of the links on AAron's blog went to something called "Push the Red Button"...this apparently isn't the same RedButton from MWC, Inc. (the RedButton GUI is visible in fig 2-5 on page 50 of my first book...you can download your own copy of the "old skool" RedButton to play with), but is very interesting. One blogpost that caught my eye had to do with carving Registry hive files from memory dumps. I've looked at this recently, albeit from a different perspective...I've written code to locate Registry keys, values, and Event Log records in memory dumps. The code is very alpha at this point, but what I've found appears fairly promising. Running such code across an entire memory dump doesn't provide a great deal of context for your data, so I would strongly suggest first extracting the contents of process memory (perhaps using lspm.pl, found on the DVD with my book), or using a tool such as pmdump.exe to extract the process memory itself during incident response activities. Other tools of note for more general file carving include Scalpel and Foremost.
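The signature-scanning part of that kind of carving is simple enough to sketch. Registry hive files begin with the "regf" signature (and hive bins with "hbin"), so a first pass over a memory dump is just a substring search...this is a toy Python sketch, not my actual code, and a real tool would go on to validate and parse each candidate:

```python
def find_sig_offsets(dump, sig=b"regf"):
    """Return the offset of every occurrence of a signature in a
    memory dump; 'regf' marks a hive base block, 'hbin' a hive bin.
    Hits are only candidates until the surrounding structure is
    validated."""
    offs, i = [], dump.find(sig)
    while i != -1:
        offs.append(i)
        i = dump.find(sig, i + 1)
    return offs

# Usage: find_sig_offsets(open("memdump.img", "rb").read(), b"hbin")
```

As noted above, running this across an entire dump gives you hits but very little context, which is why carving from extracted process memory is the better approach.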

So...more than anything else, it looks like it's getting to be a good time to update processes and tools. I mentioned an upcoming speaking engagement earlier, and I'm sure that there will be other opportunities to speak on Windows memory analysis in the future.

Sunday, November 18, 2007

Jesse's back!

Jesse blogged recently about being over at the F3 conference in the UK, and how he was impressed with the high SNR. Jesse shared with me that during a trivia contest, one of the teams chose the name, "The Harlan Carvey Book Club". Thanks, guys (and gals, as the case may be)! It's a nice thought and very flattering...though I don't ever expect to be as popular as Jesse or The Shat. ;-)

Saturday, November 17, 2007

Upcoming Speaking Engagement

Next month, I'll be in Hong Kong, speaking to the local police, as well as at the HTCIA-HK conference. The primary topic I've been asked to present is Registry analysis, with some live response/memory analysis, as well. The presentations vary from day-long to about 5 hrs, with a 45 min presentation scheduled for Thu afternoon - I'll likely be summing things up, with that presentation tentatively titled "Alternative Analysis Methods". My thought is that I will speak to the need for such things, and then summarize my previous three days of talks to present some of those methods.

Sunday, November 11, 2007

Pimp my...Registry analysis

There are some great tools out there for viewing the Registry in an acquired image. EnCase has this, as does ProDiscover (I tend to prefer ProDiscover's ability to parse and display the Registry...) and AccessData's Registry Viewer. Other tools have similar abilities, as well. But you know what? Most times, I don't want to view the Registry. Nope. Most times, I don't care about 90% of what's there. That's why I wrote most of the tools available on the DVD that ships with my book, and why I continue to write other, similar tools.

For example, if I want to get an idea of the user's activity on a system, one of the first places I go is to the SAM hive, and see if the user had a local account on the system. From there, I go to the user's hive file (NTUSER.DAT) located in their profile, and start pulling out specific bits of information...parsing the UserAssist keys, etc...anything that shows not only the user's activities on the system, but also a timeline of that activity. Thanks to folks like Didier Stevens, we all have a greater understanding of the contents of the UserAssist keys.
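
As Didier documented, the UserAssist value names are ROT13-"encrypted", and (on XP) each value's data holds a session ID, a run count (stored offset by 5), and a FILETIME of the last execution. A quick sketch of the decoding, in Python here rather than the Perl tools from the DVD (the field layout noted in the comments is the XP-era format):

```python
import codecs
import struct
from datetime import datetime, timedelta

def decode_userassist(value_name: str) -> str:
    """UserAssist value names are stored ROT13-encoded; decode them."""
    return codecs.decode(value_name, "rot_13")

def parse_userassist_data(raw: bytes):
    """XP-era UserAssist value data: 4-byte session ID, 4-byte run
    count (counter starts at 5, so subtract 5), 8-byte FILETIME of
    the last execution (100ns intervals since 1601-01-01)."""
    session, count, ft = struct.unpack("<IIQ", raw[:16])
    last_run = (datetime(1601, 1, 1) +
                timedelta(microseconds=ft // 10)) if ft else None
    return session, max(count - 5, 0), last_run
```

For example, the value name "HRZR_EHACNGU" decodes to "UEME_RUNPATH", and pairing the decoded names with the embedded timestamps is what gives you that timeline of user activity.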

Now, the same sort of thing applies to the entire system. For example, one of the tools I wrote allows me to type in a single command, and I'll know all of the USB removable storage devices that had been attached to the system, and when they were last attached. Note: this is system-wide information, but we now know how to tie that activity to a specific user.
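
Purely as an illustration of the kind of parsing involved (this is a Python sketch, not the tool from the DVD): the device-class subkey names under HKLM\SYSTEM\ControlSet00x\Enum\USBSTOR encode the device type, vendor, product, and revision, and the "last attached" time comes from the key's LastWrite time:

```python
# Hedged sketch: split a USBSTOR device-class key name into its fields.
# The key's LastWrite time (not shown here) gives the last-attached time.

def parse_usbstor_name(key_name: str) -> dict:
    """Parse a name like 'Disk&Ven_WDC&Prod_WD1200UE&Rev_0000'."""
    parts = key_name.split("&")
    info = {"type": parts[0]}
    for p in parts[1:]:
        if p.startswith("Ven_"):
            info["vendor"] = p[4:]
        elif p.startswith("Prod_"):
            info["product"] = p[5:]
        elif p.startswith("Rev_"):
            info["revision"] = p[4:]
    return info
```

The subkey beneath each device-class key holds the device's unique instance ID (often the serial number), which is what lets you tie the device back to a user's MountPoints2 entries.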

On XP systems, we also have the Registry files in the Restore Points available for analysis. One great example of this is the LEO that wanted to know when a user had been moved from the Users to the Administrators group...by going back through the SAM hives maintained in the Restore Points, he was able to show approximately when that happened, and then tie that to other activity on the system, as well.

So...it's pretty clear that when it comes to Registry analysis, the RegEdit-style display of the Registry has limited usefulness. But it's also clear that there really isn't much of a commercial market for these kinds of tools. So what's the answer? Well, just like the folks who get their rides or cribs pimped out on TV, specialists bring a lot to the table. What needs to happen is greater communication of needs, and there are folks out there willing and able to fulfill that need.

Here's a good question to get discussion started...what's a good, easy-to-use, easy-to-access format for a guideline of what's available in the Registry (and where)? I included an Excel spreadsheet with my book...does this work for folks? Is the "compiled HTML" (ie, *.chm) Windows Help format easier to use?

If you can't think of a good format, maybe the way to start is this...what information would you put into something like this, and how would you format or organize it?

Pimp my...live acquisition

Whenever you perform a live acquisition, what do you do? Document the system, write down things like the system model (e.g., Dell PowerEdge 2960), maybe write down any specific identifiers (such as the Dell service tag), and then acquire the system. But is this enough data? Are we missing things by not including the collection of other data in our live acquisition process?

What about collecting volatile data? I've had to perform live acquisitions of systems that had been rebooted multiple times since the incident was discovered, as well as systems that could not be acquired without booting them (SAS/SATA drives, etc.). Under those circumstances, maybe I wouldn't need to collect volatile data...after all, what data of interest would it contain...but maybe we should do so anyway.

How about collecting non-volatile data? Like the system name and IP address? Or the disk configuration? One of the tools available on the DVD that comes with my book lets you see the following information about hard drives attached to the system:

DeviceID : \\.\PHYSICALDRIVE0
Model : ST910021AS
Interface : IDE
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x41ab2316
Serial No : 3MH0B9G3

DeviceID : \\.\PHYSICALDRIVE1
Model : WDC WD12 00UE-00KVT0 USB Device
Interface : USB
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x96244465
Serial No :

Another tool lets you see the following:

Drive Type File System Path Free Space
----- ----- ----------- ----- ----------
C:\ Fixed NTFS 17.96 GB
D:\ Fixed NTFS 38.51 GB
E:\ CD-ROM 0.00
G:\ Fixed NTFS 42.24 GB

In the above output, notice that there are no network drives attached to the test system...no shares mapped. Had there been any, the "Type" would have been listed as "Network", and the path would have been displayed.
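
The tool that produces the listing above uses WMI, and is on the DVD; as a rough, hedged analog of what it reports (free space per mounted volume), here's a Python sketch that works from whatever paths you hand it:

```python
# Hedged sketch: report free space per mounted path, loosely analogous
# to the drive-listing output above. The actual tool queries WMI on
# Windows; this just uses the standard library.
import shutil

def drive_report(paths):
    """Return (path, free space in GB) for each path given."""
    rows = []
    for p in paths:
        usage = shutil.disk_usage(p)
        rows.append((p, usage.free / 2**30))
    return rows
```

On a Windows system you'd hand it "C:\\", "D:\\", and so on; the point is simply that this non-volatile data is cheap to collect at acquisition time.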

Does it make sense to acquire this sort of information when performing a live acquisition? If so, is this information sufficient...or, what other information, at a minimum, should be collected?

Are there conditions under which I would acquire certain info, but not others? For example, if the system had not been rebooted, would I dump the contents of physical memory (I'd say "yes" to that one)...however, if the system had been accessed by admins, scanned with AV, and rebooted several times, would it do me any good at that point to dump RAM or should I simply document the fact that I didn't, and why?

Would this information be collected prior to the live acquisition, or immediately following?

What are your thoughts...and why?

PS: If you can think of a pseudonym I can use...think of it as a "hacker handle" (that thing that Joey from Hackers kept whining about), but REALLY cool, like "Xzibit", let me know. Oh, yeah...Ovie Carroll needs one, too. ;-)

Thursday, November 08, 2007

Pimp my...forensics analysis

How often do you find yourself in the position where, when performing forensic analysis, you don't have the tools you need (i.e., the tools you do have don't show you what you need, or don't provide you with useful output)? Many of the tools we use provide basic functionality, but very few go beyond that and are capable of providing what we need across a large number of cases (or in some instances, even from examination to examination). This leads to one of the major challenges (IMHO) of the forensic community...having the right tool for the job. Oddly enough, there just isn't a great market for tools that do very specific things like parse binary files, extract data from the Registry, etc. The lack of such tools is very likely due to the volume of work (i.e., case load) that needs to be done, and to a lack of training...commercial GUI tools with buttons to push seem to be preferred over open-source command line tools, but only if the need is actually recognized. Do we always tell someone when we need something specific, or do we just put our heads down, push through the investigation using the tools and skill sets we have at hand, and never address the issue because of our workload?

With your forensic-analysis-tool-of-choice (FTK, EnCase, ProDiscover, etc.), many times you may still be left with the thought, "...that data isn't in a format I can easily understand or use...". Know what I mean? Ever been there? I'm sure you have...extract that Event Log file from an image and load it up into Event Viewer on your analysis system, only to be confronted with an error message telling you that the Event Log is "corrupted". What do you do? Boot the image with LiveView (version 0.6 is available, by the way) and log into it to view the Event Log? Got the password?
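
There's a way around that "corrupted" Event Log message, and it's the same approach my Perl tools on the DVD take: parse the records directly, ignoring the file header Event Viewer chokes on. Each EVT record starts with a 4-byte length followed by the "LfLe" magic. As a hedged illustration (Python here, not the DVD tools), you can carve and parse records straight out of the file or even a memory dump:

```python
# Hedged sketch: carve EVT (Win2K/XP/2003) event records by their
# "LfLe" magic and parse the fixed portion of the EVENTLOGRECORD
# header, sidestepping Event Viewer's "corrupt file" complaint.
import struct
from datetime import datetime, timezone

def carve_evt_records(data: bytes):
    """Return parsed headers for every 'LfLe' record found in data."""
    records, pos = [], data.find(b"LfLe")
    while pos != -1:
        start = pos - 4                      # record length precedes magic
        if start >= 0 and start + 28 <= len(data):
            (length, _magic, rec_no, t_gen, t_wr,
             event_id, ev_type, n_strings) = struct.unpack_from(
                "<I4sIIIIHH", data, start)
            records.append({
                "record": rec_no,
                "generated": datetime.fromtimestamp(t_gen, tz=timezone.utc),
                "event_id": event_id & 0xFFFF,  # low word is the familiar ID
                "type": ev_type,
            })
        pos = data.find(b"LfLe", pos + 1)
    return records
```

The TimeGenerated and TimeWritten fields are plain Unix epoch seconds, so once you've carved the records, dumping them to a sortable spreadsheet is trivial.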

The answer to this dilemma is to take a page from Xzibit's book and "pimp my forensics analysis". That's right, we're going to customize, or "trick out", our systems with the tools and stuff we need to do the job.

One way to get started on this is to take a look at my book [WARNING: Shameless self-promotion approaching...], Windows Forensic Analysis; specifically at some of the tools that ship on the accompanying DVD. All of the tools were written in Perl, and all but a few have been "compiled" into standalone EXEs so that you don't need Perl installed to run them, or know anything about Perl -- in fact, based on the emails I have received since the book was released in May 2007, the real limiting factor appears to be nothing more than a lack of familiarity with running command line (CLI) tools (re: over-dependence on pushing buttons). The tools were developed out of my own needs, and my hope is that as folks read the book, they too will recognize the value in parsing the Event Log files, Registry, etc., as well as the value of the tools provided.

Another resource is the upcoming Perl Scripting for Forensic Investigation and Security, to be published in the near future by Syngress/Elsevier.

What do these two resources provide? In a nutshell, customized and customizable tools. While a number of tools exist that will let you view the Registry in the familiar (via RegEdit) tree-style format, how many of those tools will translate arbitrary binary data stored in the Registry values? How many will do correlation, not only between multiple keys within a hive, but across hives, as well?

How many of the tools will allow you to parse the contents of a Windows 2000, XP, or 2003 Event Log file into an Excel spreadsheet, in a format that is easily sorted and searched? Or report on various event record statistics?

How many of these tools provide the basis for growth and customization, in order to meet the needs of a future investigation? Some tools do...they provide base functionality, and allow the user to extend that functionality through scripting languages. Some are easier to learn than others, and some are more functional than others. But even these can be limiting sometimes.

The data is there, folks...what slows us down sometimes is either (a) not knowing that the data is there (and that a resource is a source of valuable data or evidence), or (b) not knowing how to get at that data and present it in a usable and understandable manner. Overcoming this is as simple as identifying what you need, and then reaching out to the Xzibits and Les Strouds of the forensic community to get the job done. Rest assured, you're not the only one looking for that functionality...

Saturday, October 27, 2007

Some new things...

I've been offline and not posting for a while, I know...not much time to post with so much going on during my day job (but that's a Good Thing).

A couple of new things have popped up recently that I wanted to share with everyone. First, Didier Stevens has produced an update to his UserAssist program, for parsing the UserAssist Registry keys on a live system. This update parses the GUIDs, giving you even more information about the user's activities. This is something that I'll have to add to my own tools that parse the same keys, but during post-mortem analysis.

Second, Peter Burkholder over at Ellipsis has produced a patch for running my Forensic Server Project (FSP) on *nix-variant systems, to include Mac OS X. I have said from the very beginning that this could be done, and Peter has gone and done it! Very cool!

Jesse Kornblum has released md5deep 2.0, which has some new features and bug fixes...check it out.

If I've missed anything, please drop me a line and let me know...