Monday, June 28, 2010

Skillz

Remember that scene from Napoleon Dynamite where he talks about having "skillz"? Well, analysts have to have skillz, right?

I was working on a malware exam recently...samples had already been provided to another analyst for reverse engineering, and it was my job to analyze acquired images and answer a couple of questions. We knew the name of the malware, and when I was reviewing information about it at various sites (to prepare my analysis plan), I found that when the malware files are installed, their MAC times are copied from kernel32.dll. Okay, no problem, right? I'll just parse the MFT and get the time stamps from the $FILE_NAME attribute.

So I received the images and began my case in-processing. I then got to the point where I extracted the MFT from the image, and the first thing I did was run David Kovar's analyzemft.py against it. I got concerned when, after running it for over an hour, all I had was a 9KB output file. I hit Ctrl-C in the command prompt and killed the process. I then ran Mark Menz's MFTRipperBE against the file, and when I opened the output .csv file and ran a search for the file name, Excel told me that it couldn't find the string. I even tried opening the .csv file in an editor and running the same search, with the same results. Nada.

Fortunately, as part of my in-processing, I had verified the file structure with FTK Imager, and then created a ProDiscover v6.5 project and navigated to the appropriate directory. From there, I could select the file within the Content View of the project and see the $FILE_NAME attribute times in the viewer.

I was a bit curious about the issue I'd had with the first two tools, so I ran my Perl code for parsing the MFT and found an issue with part of the processing. I don't know if this is the same issue that analyzemft.py encountered, but I made a couple of quick adjustments to my Perl script and was able to fairly quickly get the information I needed. I can see that the file has $STANDARD_INFORMATION and $FILE_NAME attributes, as well as a $DATA attribute, that the file is allocated (from the flags), and that the MFT sequence number is 2. Pretty cool.

The points of this post are:

1. If you run a tool and do not find the output that you expect, there's likely a reason for it. Validate your findings with other tools or processes, and document what you do. I've said (and written in my books) that the absence of an artifact where you would expect to find one is itself an artifact.

2. Analysts need to have an understanding of what they're looking at and for, as well as some troubleshooting skills, particularly when it comes to running tools. Note that I did not say "programming" skills. Not everyone can, or wants to, program. However, if you don't have the skills, develop relationships with folks who do. But if you're going to ask someone for help, you need to be able to provide enough information that they can help you.

3. Have multiple tools available to validate your findings, should you need to do so. I ran three tools to get the same piece of information, a need I had documented in my analysis plan prior to receiving the data. One tool hung, another completed without providing the information, and I was able to get what I needed from the third, and then validate it with a fourth. And to be honest, it didn't take me days to accomplish that.

4. The GUI tool that provided the information doesn't differentiate between "MFT Entry Modified" and "File Modified"...I just have two time stamps from the $FILE_NAME attribute called "Modified". So I tweaked my own code to print out the time stamps in MACB format, along with the offset of the MFT entry within the MFT itself. Now, everything I need is documented, so if need be, it can be validated by others.
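
For anyone who wants to poke at this sort of thing themselves, here's a rough sketch of pulling the flags, sequence number, and $STANDARD_INFORMATION/$FILE_NAME time stamps (in MACB order) out of a single MFT record. This is Python rather than the Perl I used, the file name is just a placeholder, and a production parser also needs to apply the update sequence fixups before trusting the last two bytes of each sector:

import struct
from datetime import datetime, timedelta

def filetime_to_str(ft):
    # FILETIME: count of 100-nanosecond intervals since 1 Jan 1601 (UTC)
    if ft == 0:
        return "0"
    return (datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)).strftime("%Y-%m-%d %H:%M:%S UTC")

def parse_record(record):
    # record = one 1024-byte MFT entry; update sequence fixups are NOT applied here
    if record[0:4] != b"FILE":
        return
    seq_num = struct.unpack("<H", record[0x10:0x12])[0]
    flags = struct.unpack("<H", record[0x16:0x18])[0]        # 0x01 = in use, 0x02 = directory
    print("Sequence: %d  Allocated: %s" % (seq_num, bool(flags & 0x0001)))
    offset = struct.unpack("<H", record[0x14:0x16])[0]        # offset to first attribute
    while offset < len(record) - 8:
        attr_type, attr_len = struct.unpack("<II", record[offset:offset + 8])
        if attr_type == 0xFFFFFFFF or attr_len == 0:
            break
        # resident $STANDARD_INFORMATION (0x10) or $FILE_NAME (0x30) only
        if record[offset + 8] == 0 and attr_type in (0x10, 0x30):
            c_off = struct.unpack("<H", record[offset + 0x14:offset + 0x16])[0]
            base = offset + c_off + (8 if attr_type == 0x30 else 0)   # $FN times follow the parent ref
            born, modified, mft_mod, accessed = struct.unpack("<4Q", record[base:base + 32])
            label = "$SI" if attr_type == 0x10 else "$FN"
            print("%s  M: %s" % (label, filetime_to_str(modified)))
            print("%s  A: %s" % (label, filetime_to_str(accessed)))
            print("%s  C: %s" % (label, filetime_to_str(mft_mod)))
            print("%s  B: %s" % (label, filetime_to_str(born)))
        offset += attr_len

with open("mft_record.bin", "rb") as f:   # a single carved 1024-byte record (hypothetical file name)
    parse_record(f.read(1024))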

Tuesday, June 22, 2010

Links and what not

Case Notes
Chris posted a bit ago on writing and maintaining case notes, using Case Notes. One of the things that is sorely overlooked is taking notes and documenting what you're doing during an examination.

This is perhaps one of the most overlooked aspects of what we (IR/DF folks) do...or perhaps more appropriately, need to be doing. So little of what we do is documented, and it needs to be, for a number of reasons. One that's mentioned often is, how are you going to remember the details of what you did six months later? Which tool version did you use? Not to pick on EnCase, but have you been to the user forum? Spend a little time there and you'll see why the version of the tool makes a difference.

Another aspect that few really talk about is process improvement...how do you improve upon something if you're not documenting it? As forensics nerds, we really don't think about it too much, but there are a lot of folks out there who have processes and procedures...EMTs, for example. Let's say you're helping someone with some analysis, and you've worked out an analysis plan with them. Now, let's say that they didn't follow it...they told you that...but they can't remember what it was they did do. How do you validate the results? How can you tell that what they did was correct or sufficient?

A good example is when a customer suspects that they have malware on a system, and all you have to work with is an acquired image. How do you go about finding "the bad stuff"? Well, one way is to mount the image read-only and scan it with AV. Oh, but wait...did you check to see which AV, if any, was already installed on the system? Might not be a good idea to run that, because apparently it didn't work in the first place. So what do/did you run? Which version? When was it updated? What else did you do? You will not ever reach 100% certainty, but with a documented process you can get close.

When you document what you do, one of the side effects is that you can improve upon that process. Hey, two months ago, here's what I did...that was the last time that I had a malware case. Okay, great...what've I learned...or better yet, what have other folks learned since then, and how can we improve this process? Another nice side effect is that if you document what you did, the report (another part of documentation that we all hate...) almost writes itself.

In short, if you didn't document what you did...it didn't happen.

Raw2vmdk
Raw2vmdk is a Java-based, platform-independent tool for mounting raw images as VMWare vmdk disks. This is similar to LiveView, and in fact, raw2vmdk reportedly uses some of the same Java classes as LiveView. However, raw2vmdk is a command line tool; thanks to JD for taking the time to try it out and describe it in a blog post.

MFT $FILE_NAME Attributes
Eric posted to the Fistful of Dongles blog, asking about tools that can be used to extract/retrieve $FILE_NAME attributes from the MFT. I mentioned two tools in my comment that have been around and available for some time, and another commenter mentioned the use of EnScripts.

Tool Updates
Autoruns was updated on 15 June; perhaps the most notable update is the -z switch, which specifies an offline Windows system to scan. I really like Autoruns (and its CLI companion, autorunsc.exe), as it does a great job of going through and collecting information about entries in startup locations, particularly the Registry. The drawback of the tool is that there isn't much of an explanation as to why some areas are queried, leaving it up to the investigator to figure that out. This is unfortunate, given the amount of expertise that goes into developing and maintaining a valuable tool like this, and it usually means that a great deal of the information the tool collects is simply overlooked.

If you're interested in USB device histories on your system, check out usbhistory.exe. The web page provides a very good explanation of how the tool works and where it goes to get its information. Overall, if this is information that you're interested in, this would be a very good tool to add to a batch file that collects information from a live system.

USB Devices
Speaking of USB device history, there's a topic I keep seeing time and time again in the lists that has to do with when a USB device was last connected to a system. In almost all cases, the question that is posed indicates that there has been some issue determining when devices were attached, as the subkeys beneath the Enum\USBStor key all have the same LastWrite times.

While there is clearly some process that is modifying these subkeys so that the LastWrite times are updated, these are not the keys we're interested in when it comes to determining when the devices were last attached to the system. I addressed this in WFA 2/e, starting on pg 209, and Rob Lee has covered it in his guides for profiling USB devices on various versions of Windows.
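
If you want to see this for yourself, here's a quick sketch using Willi Ballenthin's python-registry module against a SYSTEM hive exported from the image; it simply dumps the USBStor instance key names and their LastWrite times (the values folks keep mistaking for "last time connected"):

from Registry import Registry

# A sketch, not a substitute for the profiling process Rob documents.  The hive
# path and control set are placeholders; check the Select\Current value to find
# the control set actually in use on the system.
reg = Registry.Registry("SYSTEM")
usbstor = reg.open("ControlSet001\\Enum\\USBStor")
for device in usbstor.subkeys():
    for instance in device.subkeys():
        # These LastWrite times are the ones that often all look the same; they
        # are NOT a reliable "last time connected" -- that comes from other keys.
        print(device.name(), instance.name(), instance.timestamp())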

Sunday, June 20, 2010

Who's on...uh, at...FIRST?

I attended the FIRST conference in Miami last week. My employer is not a member of FIRST, but we were a sponsor, and we hosted the "Geek Bar"...a nice room with two Wiis set up, a smoothie bar (also with coffee and tea), and places to sit and relax. One of my roles at the conference was to be at the Geek Bar to answer questions and help sign folks up for the NAP tour on Thursday, as well as mingle with folks at the conference. As such, I did not get to attend all of the presentations...some were going on during my shift at the Geek Bar, for example.

Note: Comments made in this blog are my own thoughts and opinions, and do not necessarily reflect or speak for my employer.

Dave Aitel's presentation hit the nail on the head...defenders are behind and continue to learn from the attacker. Okay, not all defenders learn from the attacker...some do; others, well, not so much. Besides Dave's really cool presentation, I think that what he said was as important as what he didn't say. I mean, Dave was really kind of cheerful for what, on the surface, could be a "doom-and-gloom" message, but someone mentioned after the presentation that Dave did not provide a roadmap to fixing/correcting the situation. I'd suggest that the same things that have been said for the past 25 years...the same core principles...still apply; they simply need to be used. My big take-away from this presentation was that we cannot say that the defensive tactics, techniques, and strategies used by defenders have failed, because in most cases, they haven't been implemented properly, or at all.

I really liked Heather Adkins' presentation regarding Aurora and Google, and I think that overall it was very well received. It was clear that she couldn't provide every bit of information associated with the incident, and I think she did a great job of heading off some questions by pointing out what was already out there publicly and could be easily searched for...via Google.

Vitaly Kamluk's (Kaspersky/Japan) presentation on botnets reiterated Dave's presentation a bit, albeit not in so many words. Essentially, part of the presentation was spent talking about the evolution of botnet infrastructures, going through one-to-many, many-to-one, C2, P2P, and a C2/P2P hybrid.

Unfortunately, I missed Steven Adair's presentation, something I wanted to see. However, I flew to Miami on the same flight as Steven, one row behind and to the side of his seat, so I got to see a lot of the presentation development in action! I mean, really...young guy opens up Wireshark on one of two laptops he's got open...who wouldn't watch?

Jason Larsen (a researcher from Idaho National Labs) gave a good talk on Home Area Networks (HANs). He mentioned that he'd found a way to modify firmware on power meters to do such things as turn on the cellular networking of some of these devices. Imagine the havoc that would ensue if home power meters suddenly all started transmitting on cellular network frequencies. How about if the transmitting were on emergency services frequencies?

The Mandiant crew was in the hizz-ouse, and I saw Kris Harms at the mic not once, but twice! Once for the Mandiant webcast, and once for the presentation on Friday. I noticed that some parts of both of Mandiant's presentations were from previous presentations...talking to Kris, they were still seeing similar techniques, even as long as two years later. I didn't get a chance to discuss this with Kris much more, to dig into things like whether the customers against which these techniques were used had detected the incident themselves, or whether the call to Mandiant was the result of an external third party notifying the victim organization.

Richard Bejtlich took a different approach to his presentation...no PPT! If you've read his blog, you know that he's been talking about this recently, so I wasn't too terribly surprised (actually, I was very interested to see where it would go) when he started his time at the mic by asking members of the audience for questions. He'd provided a handout prior to kicking things off, and his presentation was very interesting because of how he spent the time.

There were also a number of presentations on cloud computing, including one by our own Robert Rounsavall, and another on Fri morning by Chris Day. It's interesting to see some folks get up and talk about "cloud computing" and how security, IR, and forensics need to be addressed, and then for others to get up and basically say, "hey, we have it figured out."

Take Aways from FIRST
My take aways from FIRST came from two sources...listening to and thinking about the presentations, and talking to other attendees about their thoughts on what they heard.

As Dave Aitel said, defenders are steps behind the attackers, and continue to learn from them. Kris Harms said that from what they've seen at Mandiant, considerable research is being put into malware persistence mechanisms...but when talking about this with some attendees, particularly those involved in incident response in some way within their organizations, there were a lot of blank stares. A lot of what was said by some was affirmed by others, and in turn, affirmed my experiences as well as those of other responders.

I guess the biggest take-away is that there are two different focuses with respect to business. Organizations conducting business most often focus on the business itself, and not so much on securing the information that is their business. The bad guys have a business model as well, one that is also driven by revenue...they are out to get your information, or access to your systems, and they are often better at using your infrastructure than you are. The drive or motivation of a business is to do business, and at this point, security is such a culture change that it's no wonder so many victims find out about intrusions and data breaches after the fact, via third-party notification. The road map, the solutions, to addressing this have been around for a very long time, and nothing will change until organizations start adopting those security principles as part of their culture. Security is ineffective if it's "bolted on"...it has to be part of what businesses do, just like billing and collections, shipping and product fulfillment, etc. Incident responders have watched the evolution of intruders' tactics over the years, while organizations that fall victim to these attacks are often stagnant and rooted in archaic cultures.

Overall, FIRST was a good experience, and a good chance to hear what others were experiencing and thinking, both in and out of the presentation forum.

Thursday, June 10, 2010

Linkz

Analysis
This post from the Digital Detective site discusses how to manually identify the time zone of a system from the image. This information is maintained in the Registry, and RegRipper has a plugin for this (as part of the default distro).
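
If you'd rather eyeball the raw data yourself, a minimal sketch along these lines will get you there; it uses the python-registry module against a SYSTEM hive exported from the image (the RegRipper plugin does essentially the same thing in Perl):

from Registry import Registry

# ControlSet001 is assumed here for brevity -- in a real exam, check the
# Select\Current value to determine the control set actually in use.
reg = Registry.Registry("SYSTEM")
tz = reg.open("ControlSet001\\Control\\TimeZoneInformation")
print("LastWrite time:", tz.timestamp())
for val in tz.values():
    # Bias and ActiveTimeBias are signed DWORDs holding the offset from UTC in minutes
    print("%-20s %s" % (val.name(), val.value()))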

Plugins
I saw this post recently on the SANS ISC blog, which has to do with software restriction policies on a system. I thought...hey, that's pretty cool, AND there's a Registry key listed. From there it was a simple matter to research the MS site and see what other information I could find, and I began to see the possible value of the data derived from the DefaultLevel value (called a "key" in the blog post) to an analyst. In a matter of minutes, I had a functioning RegRipper plugin.

Interestingly enough, the more I research this, the more I see the CodeIdentifiers key being of some level of importance, not only to forensic analysts, but also to system administrators. After all, if it weren't, why would so many bits of malware be modifying or deleting entries beneath this key?
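
For anyone curious what the plugin logic amounts to, here's a rough sketch of the same idea...not the plugin itself...written in Python against an exported SOFTWARE hive, with the level-to-name mapping taken from MS's Software Restriction Policy documentation:

from Registry import Registry

# Commonly documented DefaultLevel values for Software Restriction Policies
LEVELS = {0x0: "Disallowed", 0x20000: "Basic User", 0x40000: "Unrestricted"}

reg = Registry.Registry("SOFTWARE")
# Note: this key won't exist on a system with no SRP policy defined
key = reg.open("Policies\\Microsoft\\Windows\\Safer\\CodeIdentifiers")
print("LastWrite time:", key.timestamp())
level = key.value("DefaultLevel").value()
print("DefaultLevel = 0x%x (%s)" % (level, LEVELS.get(level, "unknown")))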

TSK/Open Source Conference

I have to say, when someone who's attended conferences sets out to create a conference, things tend to turn out pretty well. Aaron's OMFW (2008) turned out that way, and Brian's Open Source conference (9 June) was another excellent example.

There were seven presentations, all of high caliber (well, six of high caliber, and mine! ;-) ) and two time-slots for open discussion. I like the shorter talks (unless there's some kind of hands-on component), but that also requires the presenters to develop their presentations to meet the time constraint. For example, Cory had some great stuff (I know, because I was sitting next to him during earlier presentations when he developed it!) in his presentation, but had to skip over portions due to time.

Rather than walking through each presentation individually, I wanted to cover the highlights of the conference as a whole. In that regard, there were a couple of points that were brought up with respect to open source overall:

Tool Interoperability - Open source tools need to be able to interoperate better. During my presentation, I mentioned several times that due to the output provided by some tools and the format that I need, I use Perl as the "glue". Maybe there's a better way to do this.

Tool Storehouse - There are a number of open source (and free) tools out there that are very useful and very powerful...but they're out there and not accessible from a single location. It's difficult to keep up with so many things in forensics and IR, following blogs and lists...it can be too much. Having a centralized location where someone can go and search for information, kind of like an encyclopedia, would be more beneficial...maybe something like the Forensics Wiki.

Talking to a number of the other folks attending the conference, it was clear that everyone was at a different level. Some folks develop tools, others use and modify the tools, still others use the tools and submit bug reports or feature requests, and others simply use the tools. One of the benefits of these conferences is that all of these folks can meet, share thoughts and opinions, and discuss the direction of either individual tools, or open source in general. Some folks are very comfortable sharing ideas with a larger audience, and others do so on a more individual basis...conferences like this provide an environment conducive to both.

I think one of the biggest things to come out of the conference is that open source tools come from a range of different areas. Some come from academic research projects, others start as a need that one person has. Regardless of where they come from, they require work...a lot of work. Take memory acquisition and analysis...lots of folks have put a lot of effort into developing what's currently available, and if anyone feels that what's available right now isn't sufficient, then support the work that's being done. I think AW covered that very well; the best way to get your needs met is to communicate them, and then support those who are doing the work. Learn to code. Don't code? Can you write documentation? How else can you provide support? Loaning hardware? Providing samples for analysis? There's a great deal that can be done. We all have to remember that for most of this, the work is hard, such as providing a layer of abstraction for a very technical and highly fluid area of analysis (i.e., memory) or providing an easily accessible library for something else. It's hard work, and very often done on someone's own time, using their own resources. I completely agree with AW...folks that do those things should get recognition for their good work. As such, organizations (and individuals) that rely heavily on open source (and free) tools for their work should be offering some kind of support back to the developer, particularly if they want some additional capabilities.

On the organizational side of things...rather than operational...it's always good to see a conference where some of the things that you don't like about conferences are improved upon. For example, each attendee's badge wallet had the complete schedule listed on a small card right there in the back of the wallet. That way, you weren't always looking around for the schedule. Also, there were plenty of snacks, although I haven't yet been to an event like this where the coffee was any good. ;-(

Overall, this was a great conference, with lots of great information shared. One of the best things about conferences is that they bring folks together...folks that may "know" each other on the Internet or via email, but haven't actually met, or haven't seen each other in a while. Great content generates some great conversations, and the folks that share end up sharing some great ideas. I'm really hoping to get an invite to the next one...which means I should keep working with open source tools and "going commando"...

Monday, June 07, 2010

Anti-forensics

The term "anti-forensics" is used sometimes to describe efforts attackers use in order to foil or frustrate the attempts of the good guys to discover what happened, when it happened, and how it happened. Many times, what comes to mind when someone uses the term "anti-forensics" are things like steganography, rootkits, and timestomp.

Richard Bejtlich went so far as to make a distinction between the terms "anti-forensics" and "counterforensics". On Wikipedia, the term "anti-computer forensics" is used to refer to countermeasures against forensic analysis, while "counterforensics" refers to attacks directed against forensic tools. For additional clarification on the terms, see W. Matthew Hartley's paper on the topic here; the paper is three pages long, and makes a clear distinction between the two terms.

In short, from Hartley's paper, anti-forensics refers to techniques and technologies that "invalidate factual information for judicial review", whereas counterforensics refers to tools and techniques that "target...specific forensic technologies to directly prevent the investigator from analyzing collected evidence". Hartley's paper gives some very good examples to illustrate the differences between these two terms, so I strongly suggest reading the three pages.

Very often, we use these terms (in some manner) to describe what an attacker or intruder may do, but one thing that's not discussed often is how this applies to responders' actions, or inaction, as the case may be. In that regard, there are a couple of things that organizations and responders need to keep in mind:

1. You cannot just grab the disk and expect to get what you need. Live response, including memory acquisition, is becoming more and more important, particularly in the face of questions brought on by legislative and regulatory compliance. Too many times, a response team (consultants, not on-site staff) will be called in but not provided with clear goals during the course of the response. Then, after the fact, the victim organization will start with questions such as, "...was data exfiltrated?" and "...how much data was exfiltrated?" Fortunately, consulting responders have faced these questions often enough that they can often head them off at the pass, but what do you do when the victim organization has already taken systems offline before calling for help? Would it have been beneficial to the overall response if they'd captured memory, and perhaps some volatile data, first?

2. Temporal Proximity - lots of stuff happens on Windows systems, particularly since Windows XP was released, that has an anti-forensics effect...and much of it happens without any interaction from a user or administrator. Let's say that a bad guy gets into a system, uploads a couple of tools, runs them, copies data off of the system, then deletes everything and leaves. While the system is just sitting there, limited defrags are going on, System Restore Points and Volume Shadow Copies are being created and removed, operating system and application updates are being installed, etc. It won't be long before, due to a lack of responsiveness, all you're left with is an entry in an index file pointing to a file name. Once that happens, and you're unable to determine how many records were exposed, you will very likely have to report and notify on ALL records. Ouch!

The point is that intruders very often don't have to use any fancy or sophisticated techniques to remain undetected on a system or within a network infrastructure. What tends to happen is that responders and analysts may have only so much time (8 hrs, 16 hrs, etc.) to spend on investigating the incident, so as long as the intruder uses a technique that takes just a bit longer than that to investigate, they're good. There's no real need to use counter- or anti-forensics techniques. Very often, though, it's not so much the actions of the intruder, but the inaction of the closest responders that have the greatest impact on an investigation.

Thoughts?

Resources
Anti-forensics.com
Antiforensics.net
ForensicsWiki: Anti-forensics
Adrien's Occult Computing presentation
CSOOnline article: The Rise of Anti-Forensics
Lance Mueller: Detecting Timestamp Changing Utilities

Thursday, June 03, 2010

Book Review

I recently had an opportunity to read through Handbook of Digital Forensics and Investigation (ISBN-10: 0123742676), edited by Eoghan Casey.

One of the first things you notice about the book...aside from the heft and sense that you actually have something in your hands...is the fact that the book actually has 16 authors listed! 16! Many of the names listed are very prominent in the field of digital forensics, so one can only imagine the chore of not only keeping everyone on schedule, but getting the book into an overall flow.

Something I really like about the Handbook is that it starts off with basic, core principles. Digital forensics is one of those areas where folks usually want to dive right into the cool stuff (imaging systems, finding "evidence", running tools, etc.), but as with other similar areas, you're only going to get so far that way. These core principles are presented in an easy-to-understand manner, with little sidebars that illustrate or reinforce various points. Many of these principles are topics that are misunderstood, or talked about but not practiced in the wider community, and having luminaries such as Eoghan and Curtis and others present them in this sort of format brings them home again.

Some of those sidebars I mentioned, in my copy of the Handbook, are heavily highlighted, and the book itself has a number of notes in the margins of several chapters. One of the highlighted gems is in a "Practitioner's Tip" sidebar on page 23, where it says, "...apply the scientific method, seek peer review...". This one struck home, because too often in the community we see statements made that are assumptions, not based on supporting fact. IMHO, this is a result of not seeking peer review, not engaging with others, and not having someone who's able to ask the simple question, "why?"

Another aspect of the Handbook that I found very useful was that when a technique or something specific about some data was discussed, several times there were illustrations using not just commercial forensic analysis applications, but also free and open-source tools. For example, page 55 has an illustration of the use of the SQLite command line utility to examine the contents of the Skype main.db database file. My sense is that the overall approach of the Handbook is to move practitioners away from over-reliance on a specific commercial application, and toward an understanding that hey, there are other riches to be discovered if you disconnect the dongle and think for a minute.

I'll admit that I didn't spend as much time on some chapters as I did on others. Chapters 1 and 2 were very interesting for me, but chapter 3 got into electronic discovery. While this really isn't something I do a lot of, parts of the chapter (prioritizing systems, processing data from tapes, etc.) caught my eye. The pace picked back up again with chapter 4, Intrusion Investigation, particularly where there was discussion of "fact versus speculation" and "Reporting Audiences". In my experience, these are just two of the areas where any investigation can easily veer off course. Of course, without question, I spent a great deal of time in chapter 5, Windows Forensic Analysis! However, that doesn't mean that the other chapters don't have a great deal of valuable information.

The real value of the Handbook is that it does not focus on any one platform, or on any one commercial product. Analysis of Windows, as well as *nix/Linux, Mac, and embedded systems, is addressed, as is the network (including mobile networks). There is no singular focus on one commercial product, something you see in other books; instead, a combination of commercial, free, and open-source tools is used to illustrate various points. In one instance, three commercial applications were shown side-by-side to illustrate a point about deleted files. Even some of my own tools (RegRipper, wmd.pl, etc.) were mentioned!

Overall, I think that the book is an excellent handbook, and it definitely has a prominent place on my bookshelf. No one of us knows everything there is to know, and I even found little gems in the chapter on Windows forensic analysis. For anyone who doesn't spend much time in that area, or analyzing Macs, those chapters will be a veritable goldmine...and you're very likely to find something new, even if you are very experienced in those areas. The Handbook is going to be something that I refer back to time and again, for a long time to come. Thanks to Eoghan, and thanks to all of the authors who put in the time and effort to produce this excellent work.

Resources
Richard Bejtlich's review @TaoSecurity

Stuffz

Tool Updates
Paraben recently sent out an email about an updated version of their P2eXplorer tool being available. This is the product that allows you to mount a variety of acquired image formats as physical disks for viewing.

ImDisk is available for 32- and 64-bit versions of Windows, including Windows 2008. I've got an idea for trying it out on Windows 7...we'll have to see how it works.

The TSK tools are up to version 3.1.2. Be sure to update your stuff.

eZine
There's a new issue of Hakin9 magazine available...it's free now, which is kind of cool.

WindowsRipper
Matt posted about how he and Adam used RegRipper to create WindowsRipper. It's an interesting project and I have to say, I really like it when folks find ways to achieve their needs and get the tools to meet their goals, rather than the other way around. Great job, guys...I'm looking forward to seeing where this goes. Let me know what I can do to help.

WinFE
Speaking of RegRipper, it appears that RegRipper is included in WinFE! Brett Shavers set up the WinFE site (he's also the guy who set up the RegRipper site), and the list of tools includes RegRipper!

Podcasts
I was interviewed last night by the guys from the Securabit podcast (episode 58). Thanks, guys, for a great time..."hanging out" on Skype with a bunch of former sailors...truly a dream come true! ;-) I enjoy having the opportunity to talk nerdy with folks, as forensics is not just a job, it's an adventure!

Check out Chris Pogue's "Sniper Forensics" interview on the CyberJungle podcast. It's episode 141, and the hosts start mentioning SANS (as a lead-in to Chris's interview) at about 58:26 into the podcast. Chris talked about his sniper forensics, as well as the 4-step Alexiou Principle that he uses as a basis for analysis. Chris will be giving his "Sniper Forensics" presentation at the SANS Forensic Summit in July.

Wednesday, June 02, 2010

Timelines

With a couple of upcoming presentations that address timelines (TSK/Open Source Conference, SANS Forensic Summit, etc.), I thought that it might be a good idea to revisit some of my thoughts and ideas behind timelines.

For me, timeline analysis is a very powerful tool, something that can be (and has been) used to great effect, if it makes sense to do so. Not all of us are always going to have investigations that will benefit from timeline analysis, but for those cases where a timeline would be beneficial, it has proven to be an extremely valuable analysis step. However, creating a timeline isn't necessarily a point-and-click proposition.

I find creating the timelines to be extremely easy, based on the tools I've provided (in the Files section of the Win4n6 Yahoo group). For my uses, I prefer the modular nature of the tools, in part because I don't always have everything I'd like to have available to me. In instances where I'm doing some IR or triage work, I may only have selected files and a directory listing available, rather than access to a complete image. Or sometimes, I use the tools to create micro- or nano-timelines, using just the Event Logs, or just the Security Event Log...or even just specific event IDs from the Security Event Log. Having the raw data available and NOT being hemmed in by a commercial forensic analysis application means that I can use the tools to meet the goals of my analysis, not the other way around. See how that works? The goals of the analysis define what I look for, what I look at, and what tools I use...not the tool. The tool is just that...a tool. The tools I've provided are flexible enough to be able to be used to create a full timeline, or to create these smaller versions of timelines, which also serve a very important role in analysis.

As an example, evtparse.pl parses an Event Log (.evt, NOT .evtx) file (or all .evt files in a given directory) into the five-field TLN format I like to use. Evtparse.pl is also a CLI tool, meaning that the output goes to STDOUT and needs to be redirected to a file. But that also means that I can look for specific event IDs, using something like this:

C:\tools> evtparse.pl -e secevent.evt -t | find "Security/528" > sec_528.txt

Another option is to simply run the tool across the entire set of Event Logs and then use other native tools to narrow down what I want:

C:\tools>evtparse.pl -d f:\case\evtfiles -t > evt_events.txt
C:\tools>type evt_events.txt | find "Security/528" > sec_528.txt

Pretty simple, pretty straightforward...and, I wouldn't be able to do something like that with a GUI tool. I know that the CLI tool seems more cumbersome to some folks, but for others, it's immensely more flexible.
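
For folks who'd rather not chain CLI tools together, the same filtering can be done with a few lines of script. Here's a hypothetical Python version, assuming the evtparse.pl output is in the five-field TLN format (time|source|host|user|description) with the time stamp stored as a Unix epoch value:

import time

def grep_tln(path, needle):
    # Keep only TLN lines mentioning the event ID of interest, and print the
    # time stamp in human-readable form (normalized to UTC) with the description
    for line in open(path):
        fields = line.rstrip("\n").split("|")
        if len(fields) == 5 and needle in line:
            stamp = time.strftime("%a %b %d %H:%M:%S %Y UTC", time.gmtime(int(fields[0])))
            print("%s  %s" % (stamp, fields[4]))

grep_tln("evt_events.txt", "Security/528")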

One thing I'm not a huge proponent of when it comes to creating timelines is collecting all possible data automagically. I know that some folks have said that they would just like to be able to point a tool at an image (maybe a mounted image), push a button, and let it collect all of the information. From my perspective, I see that as too much, simply because you don't really want everything that's available, because you'll end up having a huge timeline with way too much noise and not enough signal.

Here's an example...a bit ago, I was doing some work with respect to a SQL injection issue. I was relatively sure that the issue was SQLi, so I started going through the web server logs and found what I was looking for. SQLi analysis of web server logs is somewhat iterative, as there are things that you search for initially, then you get source IP addresses of the requests, then you start to see things specific to the injection, and each time, you go back and search the logs again. In short, I'd narrowed down the entries to what I needed and was able to discard 99% of the requests, as they were normally occurring web traffic. I then created a timeline using the web server log excerpts and the file system metadata, and I had everything I needed to tell a complete story about what happened, when it started, and what was the source of the intrusion. In fact, looking at the timeline was a lot like looking at a .bash_history file, but with time stamps!
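
To give a flavor of what that first pass might look like, here's a simplified, hypothetical sketch...these are generic SQL injection indicators, not the actual strings from the engagement, and the log path is a placeholder. Subsequent passes pivot on the source IPs and strings found here and re-run the same loop with a refined expression:

import glob, re

# Generic indicators commonly seen in SQLi requests against IIS; refine as you go
INDICATORS = re.compile(r"(xp_cmdshell|declare\s+@|cast\(|varchar\(|exec\s*\()", re.I)

hits = []
for logfile in glob.glob(r"f:\case\weblogs\ex*.log"):
    for line in open(logfile, errors="replace"):
        if not line.startswith("#") and INDICATORS.search(line):
            hits.append(line.rstrip())

print("%d suspicious requests" % len(hits))
for line in hits[:25]:          # eyeball a handful before refining the search terms
    print(line)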

Now, imagine what I would have if I'd run a tool that had run through the file system and collected everything possible. File system metadata, Event Logs, Registry key LastWrite times, .lnk files, all of the web server logs, etc...the amount of data would have been completely overwhelming, possibly to the point of hiding or obscuring what I was looking for, let alone crashing any tool for viewing the data.

The fact is, based on the goals of your analysis, you won't need all of that other data...more to the point, all of that data would have made your analysis and producing an answer much more difficult. I'd much rather take a reasoned approach to the data that I'm adding to a timeline, particularly as it can grow to be quite a bit of information. In one instance, I was looking at a system which had apparently been involved in an incident, but had not been responded to in some time (i.e., it had simply been left in service and used). I was looking for indications of tools being run, so the first thing I did was view the audit settings on the system via RegRipper, and found that auditing was not enabled. Then I created a micro-timeline of just the available Prefetch files, and that proved equally fruitless, as apparently the original .pf files had been deleted and lost.

In another instance, I was looking for indications that someone had logged into a system. Reviewing the audit configuration, I saw that what was being audited had been set shortly after the system had been installed, and logins were not being audited. At that point, it didn't make sense to add the Security Event Log information to the timeline.

I do not see the benefit of running regtime.pl to collect all of the Registry key LastWrite times. Again, I know that some folks feel that they need to have everything, but from a forensic analysis perspective, I don't subscribe to that line of thinking...I'm not saying it's wrong, I'm simply saying that I don't follow that line of reasoning. The reason for this is threefold; first, there's no real context to the data you're looking at if you just grab the Registry key LastWrite times. A key was modified...how does that help you without context? Second, you're going to get a lot of noise...particularly due to the lack of context. Looking at a bunch of key LastWrite times, how will you be able to tell if the modification was a result of user or system action? Third, there is a great deal of pertinent time-based information within the data of Registry values (MRU lists, UserAssist subkeys, etc.), all of which is missed if you just run a tool like regtime.pl.

A lot of what we do with respect to IR and DF work is about justifying our reasoning. Many times when I've done work, I've been asked, "why did you do that?"...not so much to second guess what I was doing, but to see if I had a valid reason for doing it. The days of backing up a truck to the loading dock of an organization and imaging everything are long past. There are engagements where you have a limited time to get answers, so some form of triage is necessary to justify which systems and/or data (i.e., network logs, packet captures, etc.) you're going to acquire. There are also times where if you grab too many of the wrong systems, you're going to be spending a great deal of time and effort justifying that to the customer when it comes to billing. The same is true when you scale down to analyzing a single system and creating a timeline. You can take the "put in everything and then figure it out" approach, or you can take an iterative approach, adding layers of data as they make sense to do so. Sometimes previewing that data prior to adding it, performing a little pre-analysis, can also be revealing.

Finally, don't get me wrong...I'm not saying that there's just one way to create a timeline...there isn't. I'm just sharing my thoughts and reasoning behind how I approach doing it, as this has been extremely successful for me through a number of incidents.

Resources
log2timeline
Cutaway's SysComboTimeline tools

Saturday, May 29, 2010

Some more stuff...

Registry
I've been working on a book on forensic analysis of the Windows Registry, and I was adding something to my outline the other day when I ran across Chris's blog post on how to crack passwords using files from an acquired image. Nothing quite like freeware to get the job done, eh? I guess one of the issues is that there's a "cost" associated with everything...you either pay a lot of $$ for a commercial package, or you "pay" by having to learn something that doesn't include pushing the "find all evidence" button. Kind of makes me wish for Forensicator Pro! ;-)

This is pretty cool stuff, particularly when you use it in conjunction with the samparse plugin, and this information about User Account Analysis. I know I keep referring back to that post, but hey...there are a LOT of analysts out there who think that the "Password Not Required" flag in the SAM means that the account doesn't have a password, and that's not the case at all.

Two things about this: first, some things (like this) bear repeating...again and again. Second, this is why we need to engage and be part of the larger community. Sitting in an office somewhere with no interaction with others in the community leads to misconceptions and bad assumptions.

Contacts and Sharing
Speaking of communities and sharing, Grayson had an interesting post that caught my eye, with respect to sharing. Evidently, he recently found out about a group that meets in Helena to discuss security, hacking, etc. This is a great way to network professionally, share information...and apparently, to just get out and have a sandwich!

Speaking Engagements
I've blogged recently about some upcoming speaking engagements, conferences where I and others will be speaking or presenting. My next two presentations (TSK/Open Source and the SANS Forensic Summit) will cover creating timelines, and using them for forensic analysis. The content of these presentations will be slightly different, due to time available, audience, etc. However, they both address timelines in forensic analysis because I really feel that they're important, and I'm just not seeing them being used often enough, particularly where it's glaringly obvious that a timeline would be an immensely powerful solution.

Yes, I know of folks who are using SIFT and log2timeline...I've seen a number of comments over in the Win4n6 Yahoo group. That's some real awesome sauce. I've written articles for Hakin9, including this one, which walks the reader through using my tools to create a timeline. I've done analysis of SQL injection attacks where a timeline consisting of the web server logs and the file system metadata basically gave me a .bash_history file with time stamps. I've created and used timelines to map activity across multiple systems and time zones, and found answers to questions that could only be seen in a timeline.

So, at this point, for those of you who are not creating timelines regularly, what is the biggest impediment or obstacle for you? Is it lack of knowledge, lack of access to tools...what?

Podcasts
Speaking of speaking engagements...I'm scheduled to be on with the guys from the Securabit podcast on 2 June. I'm a big fan of Ovie and Bret's CyberSpeak podcast and these kinds of things are always interesting. Most recently, I listened to the interview that included Dr. Eric Cole...whom I once worked with when he was at Teligent (I was with a consulting firm), albeit only for a couple of weeks.

I've also been on Lee Whitfield's Forensic4Cast podcast. Lee and Simon are swinging the Forensic4Cast Awards 2010, which they started last year...if you're planning to be at the SANS Forensic Summit this July (and even if you're not), be sure to enter a nomination and vote. You can view the 2009 awards here.

CaseNotes
There's an updated version of CaseNotes available...you do keep case notes, right? Chris blogged on it, as well as the importance of keeping case notes.

Wednesday, May 26, 2010

More links...

WFA 2/e Book Review
Peter Sheffield posted a review of WFA 2/e over on the SANS Forensic Blog. What can I say, besides, "Thanks, Peter!" I really appreciate it when folks let me know what they think of the book, or the tools, but I appreciate it even more when they do so publicly, like what Peter did. Such things really help the sales of the book. More importantly, it's beneficial for me to see that others in the community have found the work and effort put into the books to be useful or valuable.

Quote
I received the following quote from Chris Perkins, CISSP, ACE (Hujarl), Digital Forensic Investigator, along with his authorization to share it:

Some years ago while at a tech conference I ran across your first edition of the Windows Forensics Analysis book. On my return flight I read it cover to cover, and read the Registry Analysis chapter twice! I had an interest in the forensic space previous to this experience with my work as a security analyst, but your book spurred my interest even further and helped drive me towards my current career.

Fast forward to today and I am still referencing that great book frequently in my work as a Digital Forensic Investigator. It is well worn and dog-eared throughout.


In addition, your RegRipper tool is used constantly in my investigations, especially in Intellectual Property work. The beauty of the tool is its quick, clean text reports and flexibility for additional plug-ins based on specific needs. It can be verified directly with other tools and methods, which is very necessary process to validate the data.


Thanks so much for the great work!


Thanks, Chris, for your words, as well as for allowing me to share your comments publicly.

MS goes Open Source
Microsoft recently released a tool for viewing the content structure of PST files called the PST Data Structure View Tool, or pstviewtool. MS has also released the PST file format SDK. These releases follow MS's release of the .pst structure specification earlier this year, and make it easier for programmers to access the contents of PST files without having to have Outlook or Exchange installed.

Date Formats
Working on some writing recently, I've been trying to figure out where a good place would be to fit in a discussion, or even just state, "here are the date formats used by MS". The Old New Thing blog has a very good post on time stamp formats. One that isn't mentioned in the post is the 128-bit SYSTEMTIME format; this one is used in Scheduled Task .job files, as well as in several Registry keys that have to do with wireless access on Vista and above. Please don't think that's a complete or comprehensive list of where the format is used in Windows...those are only two places that I'm aware of, and there are likely others.
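
For reference, here's what decoding those two formats looks like; this is a generic sketch that assumes you've already carved the raw little-endian bytes out of whatever artifact you're looking at, not something pulled from a specific file:

import struct
from datetime import datetime, timedelta

def filetime(raw):
    # 64-bit FILETIME: count of 100-nanosecond intervals since 1 Jan 1601 (UTC)
    ft = struct.unpack("<Q", raw)[0]
    return datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)

def systemtime(raw):
    # 128-bit SYSTEMTIME: eight 16-bit fields -- year, month, day-of-week, day,
    # hour, minute, second, milliseconds
    y, m, dow, d, h, mi, s, ms = struct.unpack("<8H", raw)
    return datetime(y, m, d, h, mi, s, ms * 1000)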

Metadata
I've recently seen and received a number of questions about Office 97-2003 metadata date formats, what the date values refer to (GMT vs. local system time), and where they're located in the binary format. Well, MS was nice enough to publish the formats, which you can use to verify findings from other tools. Click on the link in the "Date Formats" section above, and you'll see that the OLE date format is different from other formats, particularly the more recent Office (2007, 2010) formats.

User Account Analysis
The issue of user account analysis comes up time and again, and I thought that this would be worth repeating. I've seen the question of the "password not required" flag and what it means come up in various forums, most recently in the new RegRipper forums. I understand that this can be a bit tough to grasp, so I'd like to post it again.

With respect to the "password not required" flag in the output of the samparse.pl plugin, what I got from someone at MS is as follows:

That specifies that the password-length and complexity policy settings do not apply to this user. If you do not set a password then you should be able to enable the account and logon with just the user account. If you set a password for the account, then you will need to provide that password at logon. Setting this flag on an existing account with a password does not allow you to logon to the account without the password.

I hope that helps those of you doing analysis.

CyberSpeak
Ovie (sans Bret) has posted another CyberSpeak podcast...check it out!

TSK/Open Source Conference
Just a reminder about the TSK/Open Source Digital Forensics Conference coming up on 9 June! Check out the presentations!

SANS Forensic Summit
The SANS Forensic Summit is coming up, 8/9 July! Check it out!

Friday, May 21, 2010

Analysis Tips

I wanted to throw out a couple of things that I've run across...

MFT
I've worked a number of incidents where malware has been placed on a system and its MAC times "stomped", either through something similar to timestomp, or through copying the times from a legitimate file. In such cases, extracting the $FILE_NAME attribute times for the file from the MFT has been essential for establishing accuracy in a timeline. Once this has been done, everything has fallen into place, including aligning the time with other data sources in the timeline (Scheduled Task log, Event Logs, etc.).

$LogFile
In a number of instances, this NTFS metadata file has been a gold mine. I worked on an exam last year where we thought that the intruder had figured out the admin password for a web-based CMS. I had a live image of the drive, and found the contents of the password file in $LogFile, which clearly demonstrated that the admin password was blank. BinText worked great for this.

$I30
This index file appears in directories when you're using FTK Imager, and in a number of instances, it has been indispensable. During a recent exam, due to a lack of temporal proximity, I found the directory used by an intruder, but it was apparently empty. The $I30 file contained references to the artifacts in question.

Thoughts on Analysis
When working on an engagement...IR, forensic analysis, etc...we often lose sight of what the term "analysis" means. We're supposed to be the experts, and the "customer" is relying on us to extract, review, and distill extremely technical data into something that they can understand and use. Running through gigabytes of IIS logs, dumping out the entries that appear to indicate SQL injection, and handing those to the customer for them to research and interpret is NOT analysis.

Let me break it down so you can put this on a t-shirt:

Run tool + dump output != analysis

Make sense?

Temporal Proximity

A couple of years ago, I heard someone...a really smart someone...talk about "temporal proximity". I know that sounds Star Trek-y, but the reference was to initiating response activities as close to an incident as possible.

Several of the industry reports indicate that the majority of incidents that the vendors have responded to have been the result of third-party notification to the victim. That alone indicates a number of things, the lack of temporal proximity (perhaps a better description would be "temporal dispersion") being one of them.

Why is this so important? Well, a lot of good...no, strike that...a lot of critical information can exist in memory, making it very volatile. Processes complete, network connections terminate. The longer you wait, the more this happens...and the system is more likely to be rebooted.

Some intruders get on systems, run tools, and then leave with data. When they do so, they may delete their toolkits. I've seen batch files that include the "del" command for just that purpose. Well, the more temporal dispersion you have from the incident, the less likely you are to recover the deleted files...in case you haven't heard, Windows (especially XP) has its own built-in anti-forensics measures.

Okay, you're probably wondering what I'm talking about...so I'll tell you. For Windows systems, even if you don't interact with the system, stuff still happens, particularly with XP. Just let an XP system sit for a couple of days, and you'll see. Restore Points are created every 24 hours, and if the disk space available for the RPs is getting short, others will be deleted. A limited defrag is run every three days. And this is just for an XP system that sits there with no network connectivity and no one interacting with it. Now, add to that things like Windows software and application updates...you know, the stuff that just kind of happens automatically with a network connected system. Even with minimal auditing enabled, stuff still gets logged to the Event Log...more so on Vista and Windows 7, simply because there are so many more logs.

Now, add to the mix that no one within your infrastructure is aware of an incident (intruder, malware, etc.), and systems remain up, functioning, operational and in use. I've been on engagements where we collected data from a system and then three days later collected the same data...and you'd swear that they were two different systems. Prefetch files had been deleted, deleted files had been overwritten by OS and application updates, applications and tools being run, etc.

In order to achieve temporal proximity, you need a couple of things. First, visibility...if you don't have visibility into your infrastructure, how will you know when something occurs? You can't really expect to know when something goes wrong or changes if you're not monitoring, right?

Second, you need a plan. What's your IR plan? Acquire memory and disk, and then take the system offline? Or panic and not do anything at all until someone who has no idea what's going on makes a decision? I can't tell you the number of times I've responded and found out that the incident had been detected a month prior, and the infected/compromised system had been left up the entire time.

Me: "You know the intruder has been siphoning data off of this system for the past month, right?"

Them: "We didn't know what to do."

This happens more than you'd care to know, and not just to one vertical...not just PCI, but to many, many types of victims.

One final note...Marines in training learn what are referred to as "immediate actions". These are simple tasks that you use to clear a jammed weapon. They're simple when you're on the range, on a bright, sunny day after a good night's sleep. You can ask a range coach if you're doing it right. But we're trained on this over and over because you never need it in those conditions...when you're going to need that reaction to be programmed is during an assault, at 2:30am, after you've gone without sleep for two or more days and maybe haven't eaten in as long. And it's raining. And it's cold.

Are your IT assets critical to your business? If I were to back up a truck and take all of your computers...all desktops, laptops, servers, etc...how would that affect your business? It would disappear, wouldn't it? Well, if IT assets are so critical to your business, why not protect them? The bad guys aren't coming into your organization and walking out with boxes full of papers...they're coming into your network and stealing data that way. And they're successful because in many cases, they have greater visibility into your infrastructure than you do.

Friday, May 14, 2010

Linkity linkity

RegRipper.net
Brett updated RegRipper.net recently...take a look. I received some emails recently regarding "404 Not Found" messages for some stuff linked at the original site, and then received a couple of messages from Brett.

Brett's done a great job of maintaining the site, but for the site to really be of value, it takes more than folks in the community coming by to grab RegRipper and that's it. It takes contributions...thoughts, ideas, communication, etc. One particular project that's benefited from a very active community is Volatility.

Here's an interesting post from the Binary Intelligence blog, explaining how Matt went about modifying the current RegRipper to meet his own needs! Great job, Matt!

64-bit Software hives
Speaking of the Registry, does anyone have Software hives from well-used 64-bit systems that they're willing to share, for research purposes?

ProDiscover
Chris Brown recently released v6.5 of ProDiscover. Chris very graciously provided me with a license back when PD was at version 3, and I've used the framework ever since. Chris added Perl as the scripting language for PD, in the form of ProScripting, a while back and that proved to be very beneficial. Most times my case notes will start with something like, "Created a case project in ProDiscover v6.0, added the image, and populated the Registry and Internet History Views." This is a great way to get an initial view of things, particularly if you suspect malware has infected the system. One of the things I look for first is the Default User with a web browsing history.

Mutexes
When conducting IR or analyzing live IR data, I tend to lean toward a little Least Frequency of Occurrence (LFO) analysis as an approach to malware detection on systems. Most times, what I do is grab the output of handle.exe and run it through a Perl script (handle.pl, posted to the Files section of the Win4n6 Yahoo group) to get a list of the mutants/mutexes that are unique or appear least frequently on the system.
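
Handle.pl isn't reproduced here, but the core of the idea is easy to sketch in a few lines of Perl. This is a rough, minimal example (not the posted script) that assumes you've saved the output of handle.exe to a file, and that the mutant name is the last whitespace-delimited field on each line containing "Mutant":

#!/usr/bin/perl
# Minimal sketch (not the handle.pl from the Win4n6 Files section): tally
# mutant/mutex names from saved handle.exe output and list the least
# frequently occurring names first. Assumes lines of interest contain the
# word "Mutant" with the object name as the last whitespace-delimited field.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <handle_output.txt>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";

my %count;
while (my $line = <$fh>) {
    chomp $line;
    next unless $line =~ /\bMutant\b/;
    my @fields = split(/\s+/, $line);
    my $name = $fields[-1];
    next unless defined $name && $name =~ /\\/;   # skip unnamed mutants
    $count{$name}++;
}
close($fh);

# Least frequently occurring names first
foreach my $name (sort { $count{$a} <=> $count{$b} } keys %count) {
    printf "%-4d %s\n", $count{$name}, $name;
}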

When it comes to LFO analysis, some folks seem to think that means just running a tool like handle.exe, or just running a script that locates all of the mutexes that appear only once in the output of handle. But that's not analysis...that's just running a tool and getting data. Due to the way malware authors use mutexes, you have to look for something odd and out of place, so comparing the names with each other is one way to conduct LFO analysis. Another way to address this if you're looking at multiple systems is to compare your findings between systems. Let's say that you have some systems you know not to be infected, and others that you suspect may be infected...conducting LFO analysis on each system and comparing the output across all of them may provide some interesting findings.

My point is that running a tool and dumping the output into a report is NOT analysis, folks.

In addition to handle.exe, I ran across this post on Jamie Blasco's blog today that listed two tools, one of which would be of use during IR...enumeratemutex.exe. While this tool would enumerate the mutexes for you and allow you to do a really quick LFO analysis, it wouldn't necessarily allow you to tie a mutex to a specific process. However, it can be a good check.

TSK Open Source Conference
June 9th is the date for Brian Carrier's first TSK and Open Source Forensics Conference, right in my own backyard (well, almost...that would be kind of cool though...).

I'll be giving a presentation on using open source tools to create timelines for analysis.

Cory's apparently doing a presentation entitled Commando Forensics...is it wrong of me to want to go see Cory talk about doing forensics commando? I have to admit that there's a certain horrifying fascination there...but is it really so wrong? ;-)

The venue isn't far from a Dogfish Head Ale House, and Vintage 51 is close, as well. Look to one of those venues for the conference pre-party (I'm kind of proactive and not into after-parties...) the evening before.

SANS Forensic Summit
The SANS What Works in Forensics and Incident Response summit is coming up this summer in Washington, DC.

The agenda looks like another good one this year. Jesse will be talking about fuzzy hashing, Troy will be talking about Windows 7, and Richard will be presenting on the CIRT-level response to APT. Between the presentations and panels, this looks like it will be another great opportunity.

I'll be giving a workshop on adding pertinent Registry data to a timeline (can you see a trend developing here, with my presentation at the TSK conference?), and how doing so can really help develop context around, and confidence in, the data you're looking at.

Looks like I'm on a panel again this time around...those are always a good time. Troy Larson will be there...everyone should come on by and check out his Sharky lazer pointer. No, that's not a euphemism for anything.

Giggity.

Friday, April 30, 2010

F-Response & Some Mo Stuffz

F-Response
I know I've mentioned this already, but this one is worth repeating...Matt's up to v3.09.07, which includes support for access to physical memory on x64 systems, and a COM scripting object for Windows systems.

I have to say, back when Chris opted to add Perl as the scripting language for ProDiscover, I was really excited, and released a number of ProScripts for use. Matt's really big on showing how easy it is to use F-Response (Matt is incredibly responsive and has released Mission Guides), and has released a number of samples that demo the ability to script tasks with F-Response EE through VBScript, Python, and yes...Perl! My favorite part of his demos is where he has the comment, "Do Work". ;-)

Last year, Matt released the FEMC, taking the use of F-Response EE to an entirely new level, and with the release of the scripting object, he's done it again. Imagine being able to provide a list of systems (and the necessary credentials) to a script and then sitting back while it runs: it reaches out to each system, performs some sort of RegRipper-like queries, looks for (and possibly copies) some files...and then you come back from doing other work and review a log file.

I've done some work, with Matt's help, to get a Perl version of his script working against a Windows system, just to get familiar with some of the functions his COM object exposes. I have VMWare Workstation and a couple of Windows VMs available, and the biggest issues I ran into were the firewall running on the target (turn it off for testing) and an issue with connecting to remote shares, even default ones...I found the answer here. I made the change described just so that I could map a share as a means of testing, and then found, by flipping the setting from "Classic" back to "Guest Only", that an untrapped error would be thrown. Once I had the F-Response License Manager running on my analysis system and the adjustment made on my target test system, the script ran just fine and returned the expected status.
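
For reference, the setting being flipped between "Classic" and "Guest only" is the local security policy "Network access: Sharing and security model for local accounts"; on XP it's backed (as far as I know...verify the value name on your own systems) by the ForceGuest value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa. A quick, hedged Perl sketch for checking it on a test VM:

#!/usr/bin/perl
# Hedged sketch: check the "Network access: Sharing and security model for
# local accounts" setting on the local test VM. On XP this policy appears to
# be backed by the ForceGuest DWORD under HKLM\SYSTEM\CurrentControlSet\
# Control\Lsa (0 = Classic, 1 = Guest only); verify on your own systems.
use strict;
use warnings;
use Win32::TieRegistry( Delimiter => "/", ArrayValues => 0 );

my $val = $Registry->{"LMachine/SYSTEM/CurrentControlSet/Control/Lsa//ForceGuest"};

if (!defined $val) {
    print "ForceGuest value not found (the policy may not apply on this OS).\n";
}
else {
    # REG_DWORD data may come back as a "0x00000000"-style hex string or as
    # packed binary, depending on Win32::TieRegistry options; handle both.
    my $guest = ($val =~ /^0x/i) ? hex($val) : unpack("V", $val);
    if ($guest == 0) {
        print "Sharing model: Classic (local users authenticate as themselves)\n";
    }
    else {
        print "Sharing model: Guest only (network logons map to Guest)\n";
    }
}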

The script I wrote, with Matt's help, runs through the process of connecting to and installing F-Response, running it, enumerating targets and then uninstalling F-Response. The output of the script looks like:

Status for 192.168.247.131: Avail, not installed
Status for 192.168.247.131: Installed, stopped
Status for 192.168.247.131: Installed, started

iqn.2008-02.com.f-response.harlan-202f63d5:disk-0
iqn.2008-02.com.f-response.harlan-202f63d5:vol-c
iqn.2008-02.com.f-response.harlan-202f63d5:pmem

Status for 192.168.247.131: Avail, not installed

I should note that in order to get pmem as a target, I had to open FEMC and check the appropriate box in the Host Configuration dialog.

As you can see, the script cycled through the various states with respect to the target system, from the system being available with no F-Response installed, to installing and viewing targets, to removing F-Response. With this, I can now completely automate accessing and checking systems across an enterprise; connect to systems, log into the appropriate target (vol-c, for example), and as Matt says in his sample scripts, "do work". Add in a little RegRipper action, maybe checking for files, etc.

Awesome stuff, Matt! For more awesome stuff, including a video and Mission Guide for the COM scripting object, check out the F-Response blog posts!

Awesome Sauce Addendum: The script can now access/mount the volume from the remote system, use some WMI and Perl hash magic to get the mounted drive letter on the local system, and then determine whether Windows\system32 can be found. Pretty awesome sauce!
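
For anyone curious about the "WMI and Perl hash magic" piece, here's a rough sketch (my own illustration, not Matt's code) using Win32::OLE: enumerate logical drives via Win32_LogicalDisk, stuff them into a hash keyed by drive letter, and check each one for a Windows\system32 directory. The real script presumably snapshots the drive list before and after the target volume is logged in and diffs the two hashes to find the newly mounted letter:

#!/usr/bin/perl
# Rough sketch (not Matt's code): enumerate logical drives via WMI, stash
# them in a hash keyed by drive letter, and check each one for a
# \Windows\system32 directory.
use strict;
use warnings;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
    or die "Cannot connect to WMI: " . Win32::OLE->LastError() . "\n";

my %drives;
foreach my $disk (in $wmi->InstancesOf("Win32_LogicalDisk")) {
    $drives{$disk->{DeviceID}} = $disk->{DriveType};   # e.g. C: => 3 (local disk)
}

foreach my $letter (sort keys %drives) {
    my $sys32 = $letter . "\\Windows\\system32";
    if (-d $sys32) {
        print "$letter : Windows\\system32 found\n";
    }
    else {
        print "$letter : no Windows\\system32\n";
    }
}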

Documents
Didier has updated his PDFid code recently to detect and disarm the "/Launch" functionality. Didier mentions that this is becoming more prevalent, something important for analysts to note. When performing root cause analysis, we need to understand what's possible with respect to initial infection vectors. Many times, the questions that first responders need to answer (or assist in answering) include how did this get on the system, and what can we do to prevent it in the future? Often, the actual malware is really the secondary or tertiary download, coming after the initial infection through some sort of phishing attack.

Over on the Offensive Computing blog, I saw that JoeDoc, a novel runtime analysis system for detecting exploits in documents such as PDF and DOC files, has been released in beta. Check out JoeDoc here...it currently supports the PDF format.

If you're trying to determine if there's malware in Flash or Javascript files, you might want to check out wepawet.

Viewers
In the first edition of the ITB, Don wrote an article on potential issues when using an Internet-connected system to view files in FTK Imager. John McCash recently posted to the SANS Forensic Blog regarding a similar topic that came up at ThotCon.

Okay, so what's the issue here? Well, perhaps rather than making artifacts difficult to find, an intruder could put out a very obviously interesting artifact in hopes that the analyst will view it, resulting in the infection of or damage to the analysis system.

Also check out iSEC's Breaking Forensic Software paper...

Friday, April 23, 2010

Links...and whatnot

There seems to be a theme to this post...something along the lines of accessing data through alternate means or sources...and whatnot...

Blog Update
- Mounting EWF files on Windows
Over in the Win4n6 Yahoo group, a question was posted recently regarding mounting multiple (in this case, around 70) .E0x files, and most of the answers involved using the SANS SIFT v2.0 Workstation. This is a good solution; I had posted a bit ago regarding mounting EWF (Expert Witness Format, or EnCase) files on Windows, and Bradley Schatz provided an update, removing the use of the Visual Studio runtime files and using only freely available tools, all on Windows.

ImDisk
Speaking of freeware tools, if you're using ImDisk, be sure to get the updated version, available as of March of this year. There have been some updates to allow for better functionality on Windows 7, etc.

Also, FTK Imager (as well as the Lite version) is up to version 2.9.

...Taking Things A Step Further...
If you've got segmented image files, as with multiple .E0x or raw/dd format .00x files, and you want to get file system metadata for inclusion in a timeline, you have a number of options available to you using freely available tools on Windows.

For the raw/dd format files, one option is to use the 'type' command to reassemble the image segments into a full image file. Another option...whether you've got a VMWare .vmdk file, or an image composed of multiple EWF or raw/dd segments...is to open the image in FTK Imager. Once the image is open and you can see the file system, you can (a) re-acquire the image to a single, raw/dd format image file, or (b) export a directory listing.
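
If you'd rather not lean on the shell, the same reassembly can be done with a few lines of Perl in binary mode (the file names below are hypothetical...adjust the glob to match your own segments):

#!/usr/bin/perl
# Minimal sketch: concatenate split raw/dd segments (image.001, image.002,
# ...) into a single image file, reading and writing in binary mode.
# File names are hypothetical; adjust the glob to match your segments.
use strict;
use warnings;

my @segments = sort glob("image.0*");
die "No segments found\n" unless @segments;

open(my $out, '>', "image.dd") or die "Cannot create image.dd: $!\n";
binmode($out);

foreach my $seg (@segments) {
    open(my $in, '<', $seg) or die "Cannot open $seg: $!\n";
    binmode($in);
    my $buf;
    while (read($in, $buf, 1024 * 1024)) {   # 1MB chunks
        print $out $buf;
    }
    close($in);
    print STDERR "Appended $seg\n";
}
close($out);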

You can also use FTK Imager to export file system metadata from live systems, but this can be a manual process, as you have to add the physical drive via the GUI, etc. This process may be a bit more than you need. To meet the needs of a live IR script, I created a CLI tool called mt.exe (short for MACTimes) that is a compiled Perl script. Mt.exe will get the MAC times of files in a directory, and can recurse directories...it will also get MD5 hashes (it gets the MAC times before computing a hash) for the files, and has the option to output everything in TSK v3.x bodyfile format. I plan to use this to get file listings for specific directories, in order to optimize response and augment follow-on analysis.
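
Mt.exe itself isn't posted here, but the basic approach is straightforward enough to sketch: walk the directory tree with File::Find, stat() each file before hashing it, and print TSK v3.x bodyfile records. This is a minimal illustration of the idea, not the tool itself:

#!/usr/bin/perl
# Sketch of the mt.exe idea (not the tool itself): recurse a directory,
# grab MAC times via stat() *before* hashing each file, compute an MD5,
# and print TSK v3.x bodyfile records:
#   MD5|name|inode|mode_as_string|UID|GID|size|atime|mtime|ctime|crtime
# Note: on Win32 builds of Perl, the stat() ctime field is generally the
# creation time, so it goes in the crtime slot below and the ctime (MFT
# change) slot is left as 0.
use strict;
use warnings;
use File::Find;
use Digest::MD5;

my $dir = shift or die "Usage: $0 <directory>\n";

find({ wanted => \&process, no_chdir => 1 }, $dir);

sub process {
    my $path = $File::Find::name;
    return unless -f $path;

    # stat first, so reading the file for the hash doesn't update the atime
    my @st = stat($path);
    my ($size, $atime, $mtime, $crtime) = @st[7, 8, 9, 10];

    my $md5 = "0";
    if (open(my $fh, '<', $path)) {
        binmode($fh);
        $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
        close($fh);
    }

    # inode and permissions aren't meaningful here, so placeholders are used
    print join("|", $md5, $path, 0, "r/rrwxrwxrwx", 0, 0,
               $size, $atime, $mtime, 0, $crtime), "\n";
}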

Into The Shadows
Lee Whitfield posted his SANS EU Forensics Summit presentation, Into The Shadows, for your listening/viewing pleasure. In the presentation, Lee presents what he refers to as "the fourth way" to analyze Volume Shadow Copies. Watching the video, it appears that Lee is deciphering the actual files created by the Volume Shadow Service, and using that information to extract meaningful data.

You should also be able to work with Volume Shadow Copies as we discussed earlier, but like Lee says (and was mentioned by Troy Larson), if you're going to image the entire VSC, you're going to need to have additional space available. However, what if you were to mount the VSC in question and only extract selected files? Sure, this would require knowledge of what you were attempting to achieve and how you'd go about doing it, but you wouldn't require the additional space, and you would still have the VSC available to be mounted later, if need be.

EVTX Parsing
SANS has postponed the Forensics Summit in London due to the Krakatoa-like volcanic eruption that has been obscuring the airspace over Europe. As such, Andreas has posted his slides regarding Vista (and above) Event Log format. Very cool stuff, and very useful.

Articles
Finally, Christa pointed me to an interesting article at CSOOnline about how fraud is no longer considered by banks and financial institutions to be just a cost of doing business. Very interesting, and it demonstrates how incident preparation, detection, and response are becoming more visible as business processes.

From that article, I found a link to another article, this one discussing the basics of incident detection, response, and forensics with Richard Bejtlich. Very much worth the read...

Wednesday, April 14, 2010

More Links

RegRipper in Use
Simon posted to the Praetorian Prefect blog recently regarding WinPE. In his post, Simon described installing and using RegRipper from the Windows Forensic Environment (WinFE). Very cool! RegRipper (well, specifically ripXP) was designed for XP System Restore Point analysis, and RegRipper itself has been used via F-Response, and on an analysis system to extract Registry data from mounted Volume Shadow Copies.

F-Response
Speaking of WinFE, Matt posted recently on creating a WinFE bootable CD with F-Response pre-installed! Matt shows you how to use Troy's WinFE instructions to quickly make the installation F-Response ready! Along with the Linux bootable CDs Matt's put together, it looks like he's building out a pretty complete set.

XP Mode Issues
I found this on Securabit: how the installation and use of XP Mode in Windows 7 exposes the system to XP OS vulnerabilities, as well as to vulnerabilities in applications run through XP Mode. This is going to be a bit more of an issue now that MS has removed the hardware virtualization requirement for XP Mode to run, making it accessible to everyone. Core Security Technologies announced a VPC hypervisor memory protection bug...wait, is that bad?

Well, I like to look at this stuff from an IR/DF perspective. Right now, we've got enough issues trying to identify the initial infection vector...now we've got to deal with it in two operating systems! I've installed XP Mode, and during the installation, the XP VM gets these little icons for the C: and D: drives in my host...hey, wait a sec! So XP can "see" my Windows 7? Uh...is that bad?

Installing and using Windows 7 isn't bad in and of itself...it's really no different from when we moved from Windows 2000 to XP. New challenges and issues were introduced, and the IT community, as well as those of us in the IR/DF community, learned to cope. In this case, IT admins need to remain even more vigilant, because now we're adding old issues in with the new...don't think that we've closed the hole by installing Windows 7, only to be running a legacy app...with its inherent vulnerabilities...through XP Mode.

Volume Shadow Copies
Found this excellent post over on the Forensics from the Sausage Factory blog, detailing mounting a Volume Shadow Copy with EnCase and using RoboCopy to grab files. Rob Lee has posted on creating timelines from Volume Shadow Copies, and accessing VSCs has been addressed several times (here, and here). With Vista and now Windows 7 systems becoming more pervasive (a friend of mine in LE has already had to deal with a Windows 7 system), accessing Volume Shadow Copies is going to become more and more of an issue...and by that I mean requirement and necessity. So it's good that the information is out there...

MFT Analysis
Rob Lee posted to the SANS Forensic Blog regarding Windows 7 MFT Entry Timestamp Properties. This is a very interesting approach, because there's been some discussion in other forums, including the Win4n6 Yahoo group, around using information from the MFT to create or augment a timeline. For example, using most tools to get file system metadata, you'll get the entries from the $STANDARD_INFORMATION (SIA) attribute, but the information in the $FILE_NAME (FNA) attribute can also be valuable, particularly if the creation dates are different.

When tools are used to alter file time stamps, you'll notice the differences in the SIA and FNA time values, as Lance pointed out. Brian also mentions this like three times in one of the chapters of his File System Forensic Analysis book. So, knowing how various actions can affect file system time stamps can be extremely important to creating or adding context to a timeline, as well as to the overall analysis.
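
As a simple illustration of why this matters, here's a minimal sketch that flags entries where the $STANDARD_INFORMATION creation time is earlier than the $FILE_NAME creation time...one common indicator of backdated SIA times. The CSV layout (filename,si_crtime,fn_crtime, with epoch values) is hypothetical...adjust it to whatever your MFT parser actually produces:

#!/usr/bin/perl
# Minimal sketch: flag entries where the $STANDARD_INFORMATION creation
# time is earlier than the $FILE_NAME creation time, one common indicator
# that the SIA times have been backdated ("timestomped"). Legitimate
# activity (certain moves/copies) can produce the same pattern, so treat
# hits as leads, not proof. Assumes a hypothetical CSV of
# filename,si_crtime,fn_crtime with the times as Unix epoch values.
use strict;
use warnings;

my $csv = shift or die "Usage: $0 <mft_times.csv>\n";
open(my $fh, '<', $csv) or die "Cannot open $csv: $!\n";

while (my $line = <$fh>) {
    chomp $line;
    my ($name, $si_cr, $fn_cr) = split(/,/, $line);
    next unless defined $fn_cr && $si_cr =~ /^\d+$/ && $fn_cr =~ /^\d+$/;
    if ($si_cr < $fn_cr) {
        printf "Possible backdating: %s (SI created %s, FN created %s)\n",
               $name, scalar(gmtime($si_cr)), scalar(gmtime($fn_cr));
    }
}
close($fh);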

The Future
Rob's efforts in this area got me to thinking...how long will it be before forensic analysts sit down at their workstations to begin analysis, and have a Kindle or an iPad or similar device right there with them, to assist with their analysis workflow? Given the complexity and variety of devices and operating systems, it would stand to reason that an organization would have a workflow with supporting information (docs like what Rob's putting together, etc.), possibly even online in one location. The analyst would access the online (internal, of course) app, enter their information, and begin their case, and they would be presented with the workflow, processes, and supporting information. In fact, something like that could also provide for case management, as well as case notes and collaboration, and even ease reporting.

Is the workflow important? I'd suggest that yes, it is...wholeheartedly. I've seen a number of folks stumble over what they were looking for and spend a lot of time doing things that really didn't get them any closer to their goals...if they had and understood their goals! This would not obviate the need for training, of course, particularly in basic skills, but having some kind of Wiki-ish framework with a workflow template for an analyst to follow would definitely be beneficial...that is, aside from its CSI coolness (I still yell, "Lt Dan!" at the TV whenever Gary Sinise comes on screen in CSI:NY).

Sunday, April 11, 2010

Links...and whatnot

Security Ripcord
Don posted recently on his experiences attending the Rob Lee's SANS SEC 508 course. Don has some very interesting insights, so take a look. Read what he says carefully, and think about it before reacting based on your initial impression. Don's an experienced responder that I have the honor of knowing, and the pleasure of having worked with...he's "been there and done that", likely in more ways than you can imagine. Many times when we read something from someone else, we'll apply the words to our own personal context, rather than the context of the author...so when you read what Don's said, take a few minutes to think about what he's saying.

One example is Don's statement regarding the court room vs. the data center. To be honest, I think he's absolutely right. For far too long, what could possibly go on in the court room has been a primary driver for response, when that shouldn't be the case. I've seen far too many times where someone has said, "I won't do live response until it's accepted by the courts." Okay, fine.

Another one that I see a lot is the statement, "...a competent defense counsel could ask this question and spread doubt in the mind of the jury." Ugh. Really. I saw on a list recently where someone made that statement with respect to using MD5 hashes to validate data integrity, and how a defense attorney could bring up "MD5 rainbow tables". Again...ugh. There are more issues with this than I want to go into here, but the point is that you cannot let what you think might happen in court deter you from doing what you can, and what's right.

DFI Newsletter
I subscribe to the DFI Newsletter, and I found a couple of interesting items in the one I received on Fri, 9 April. Specifically, one of my blog posts appears in the In The Blogs section. Okay, that was pretty cool!

Also, there was a link to an FCW article by Ben Bain regarding how Bill Bratton "said local police departments have been behind the curve for most of their history in tackling computer-related crime and cybersecurity" and that "it's a resource issue."

I know a couple of folks who have assisted local LE in their area, and that seems to be something of a beneficial relationship, particularly for LE.

File System Tunneling
Okay, this is a new one on me...I ran across this concept on a list recently, and thought I'd look into it a bit. It seems that there's long been functionality built into NTFS that allows, under specific conditions and within a short window (15 seconds by default), file metadata...specifically, the file creation time...to be reused. In short, if a file with a specific name is deleted, and then another file with the same name is created in that directory within 15 seconds, the first file's metadata will be reused. Fortunately (note the sarcasm...), this functionality can be extended or disabled.
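
If you want to see this for yourself on a test box, here's a quick demo. This assumes a Win32 build of Perl, where the stat() ctime field reports the file creation time, and that tunneling hasn't been extended or disabled via the registry:

#!/usr/bin/perl
# Quick demo of file system tunneling on a Windows test box. Assumes a
# Win32 build of Perl, where the stat() ctime field ([10]) reports the
# file creation time. If the file is recreated within the tunneling window
# (15 seconds by default), expect it to inherit the original creation time.
use strict;
use warnings;

my $file = "tunnel_test.txt";

open(my $fh, '>', $file) or die "Cannot create $file: $!\n";
print $fh "first\n";
close($fh);
my $first_crtime = (stat($file))[10];
print "Original creation time:  " . scalar(localtime($first_crtime)) . "\n";

sleep(5);   # wait a few seconds so a difference would be visible without tunneling
unlink($file) or die "Cannot delete $file: $!\n";

# Recreate the file immediately, well inside the 15-second default window
open($fh, '>', $file) or die "Cannot recreate $file: $!\n";
print $fh "second\n";
close($fh);
my $second_crtime = (stat($file))[10];
print "Recreated creation time: " . scalar(localtime($second_crtime)) . "\n";

if ($first_crtime == $second_crtime) {
    print "Creation time reused...tunneling in effect.\n";
}
else {
    print "Creation times differ...tunneling not observed.\n";
}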

Okay, so what does this mean to forensic analysts? Under most conditions, probably not a lot. But this is definitely something to be aware of and understand. I mean, under normal circumstances, time stamps are hard enough to keep up with...add into that tunneling, anti-forensics, and the fact that on Vista and above, updating of last access times is disabled by default.

More than anything else, this really illustrates how important it is, when considering potential issues or asking questions about systems, to identify things like the OS, the version (i.e., XP vs. Win7), the file system, etc.

Resources
MS KB 172190
MS KB 299648
Daniel Schneller's thoughts
Old New Thing blog post
MSDN: File Times

eEvidence
The eEvidence what's new site was updated a bit ago. Christina is always able to find some very interesting resources, so take some time to browse through what's there. Sometimes there are case studies, sometimes some really academic stuff, but there's always something interesting.

MoonSol
Matthieu has released the MoonSol Windows Memory Toolkit, with a free community edition. Check it out.