Monday, November 14, 2011

Tool Update - WiFi Geolocation

I wanted to let everyone know that I've updated the maclookup.pl Perl script which can be used for WiFi geolocation; that is, taking the MAC address for a WAP and performing a lookup in an online database to determine if there are lat/longs available for that address.  If there are, then you can convert the lat/long coordinates into a Google map for visualization purposes.

A while back I'd posted the location of WiFi WAP MAC addresses within the Vista and Windows 7 Registry to ForensicArtifacts.com.  This information can be used for intelligence purposes, particularly WiFi geolocation, that is, if the WAP MAC address has been mapped and the lat/longs added to an online database, they can then be looked up and plotted on a map (such as Google Maps).  I've blogged about this, and covered it in my upcoming Windows Forensic Analysis 3/e.  I also wrote maclookup.pl, which used a URL to query the Skyhook Wireless database to attempt to retrieve lat/longs for a particular WAP MAC address.  As it turns out, that script no longer works, and I've been looking into alternatives.

One alternative appears to be WiGLE.net; there seems to be a free search functionality that requires registration to use.  Registration is free, and you must agree to non-commercial use during the registration process.  Fortunately, there's a Net::Wigle Perl module available, which means that you can write your own code to query WiGLE, get lat/longs, and produce a Google Map...but you have to have Wigle.net credentials to use it. I use ActiveState Perl, so installation of the module was simply a matter of extracting the Wigle.pm file to the C:\Perl\site\lib\Net directory.

So, I updated the maclookup.pl script, using the Net::Wigle module (thanks to the author of the module, as well as Adrian Crenshaw, for some assistance in using the module).  I wrote a CLI Perl script, macl.pl, which performs the database lookups, and requires you to enter your Wigle.net username/password in clear text at the command line...this shouldn't be a problem, as you'll be running the script from your analysis workstation.  The script takes a WAP MAC address, or a file containing MAC addresses (or both), at the prompt, and allows you to format your output (lat/longs) in a number of ways:

- tabular format
- CSV format
- Each set of lat/longs in a URL to paste into Google Maps
- A KML file that you can load into Google Earth

All output is sent to STDOUT, so all you need to do is add a redirection operator and the appropriate file name, and you're in business.
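
The conversion itself is trivial, if you want to roll your own output handling; here's a minimal Perl sketch (this is not the macl.pl code...the MAC address and coordinates below are just placeholders) that turns lat/long pairs into Google Maps URLs and a bare-bones KML file:

use strict;
use warnings;

# Hypothetical results: WAP MAC address => [latitude, longitude]
my %results = ("00:11:22:33:44:55" => [38.889484, -77.035278]);

# One Google Maps URL per set of coordinates
foreach my $mac (keys %results) {
  my ($lat, $lon) = @{$results{$mac}};
  print "$mac : http://maps.google.com/maps?q=$lat,$lon\n";
}

# Bare-bones KML; note that KML lists coordinates as longitude,latitude
print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
print "<kml xmlns=\"http://www.opengis.net/kml/2.2\">\n<Document>\n";
foreach my $mac (keys %results) {
  my ($lat, $lon) = @{$results{$mac}};
  print "<Placemark><name>$mac</name><Point><coordinates>$lon,$lat</coordinates></Point></Placemark>\n";
}
print "</Document>\n</kml>\n";

In practice, you'd pick one output format at a time and redirect STDOUT to the appropriate file.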

The code can be downloaded here (macl.zip).  The archive contains a thoroughly-documented script, a readme file, and a sample file containing WAP MAC addresses.  I updated my copy of Perl2Exe in order to try to create/"compile" a Windows EXE from the script, but some more work needs to be done with respect to modules that "can't be found". 

Getting WAP MAC Addresses
So, the big question is, where do you get the WAP MAC addresses?  Well, if you're using RegRipper, the networklist.pl plugin will retrieve the information for you.  For Windows XP systems, you'll want to use the ssid.pl plugin.
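
For example, if you've extracted the Software hive from an image, the rip.pl command lines look something like the following (the paths are just examples):

rip.pl -r F:\case\Software -p networklist
rip.pl -r F:\case\Software -p ssid

Either plugin will give you the WAP information recorded in the hive, which you can then feed to macl.pl.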


Addendum: On Windows 7 systems, information about wireless LANs to which the system has been connected may be found in the Microsoft-Windows-WLAN-AutoConfig/Operational Event Log (event IDs vary based on the particular Task Category).
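
If you're looking at a live Windows 7 system (or one booted from an image), wevtutil will give you a quick look at that log; something along these lines (the event count is arbitrary):

wevtutil qe "Microsoft-Windows-WLAN-AutoConfig/Operational" /c:20 /rd:true /f:text

For an acquired image, you'd pull the corresponding .evtx file from the Windows\System32\winevt\Logs directory and parse it with your tool of choice.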

Important Notes
Once again, there are a couple of important things to remember when running the macl.pl script.  First, you must have Perl and the Net::Wigle Perl module installed.  Neither is difficult to obtain or install.  Second, you MUST have a Wigle.net account.  Again, this is not difficult to obtain.  The readme file in the archive includes simple instructions, as well.

Resources
Adrian wrote a tool called IGiGLE.exe (using AutoIT) that allows you to search the Wigle.net database (you have to have a username and password) based on ZIP code, lat/longs, etc.

Here is the GeoMena.org lookup page.

Here is a review of some location service APIs.  I had no idea there were that many.

Thursday, November 10, 2011

PFIC 2011

I just returned from PFIC 2011, and I thought I'd share my experiences.  First, let me echo the comments of a couple of the attendees that this is one of the best conferences to attend if you're in the DFIR field.

What I Liked
Meeting people.  I know what you're thinking..."you're an ISTJ...you don't like people."  That isn't the case at all.  I really enjoyed meeting and engaging with a lot of folks at the conference...I won't name them all here, as many don't have an open online presence, and I want to respect their privacy.  Either way, it's always great to put a face to a name or online presence, and to meet new people, especially fellow practitioners.

The content.  I didn't get to attend many presentations (unfortunately), but those that I did get to attend were real eye-openers, in a number of ways.  I didn't get to sit in on anything the first day (travel, etc.), but on Tuesday, I attended Ryan's presentation on how hiding indications of activity leaves artifacts, and Amber's mobile devices presentation.  Ryan's presentation was interesting due to the content, but also due to the reactions of many of the attendees...I got the sense from looking around the room (even from my vantage point) that for some, Ryan's presentation was immediately useful...which is a plus in my book.

Amber's presentation was interesting to me, as I really haven't had an opportunity to this point to work with mobile devices.  Who knew that an old microwave oven (with the cord cut) was an acceptable storage facility for mobile devices?  As an electrical engineer, I know that a microwave oven is a Faraday cage, but like I said...I haven't had a chance to work with mobile devices.  Amber also brought up some very interesting points about clones, and even demonstrated how a device might look like an iPhone, but not actually be one, requiring careful observation and critical thinking.

Another great thing about the content of presentations is that there were enough presentations along a similar vein that you could refer back to someone else's presentation in order to add relevance to what you were talking about.  I referred to Ryan Washington's presentation several times, as well as to an earlier presentation regarding the NTFS file system.  In a lot of ways, this really worked well to tie several presentations together.

After-hours event.  I attended the PFIC After Dark event this year...The Spur bar had been shut down just for the event, and we had shuttle transportation between the hotel and the bar.  It was a great time to meet up with folks you hadn't had a chance to talk to, or to just talk about things that you might not have had a chance to talk about before.  I greatly appreciated the opportunity to talk to a number of folks...including those who took the opportunity to buy me a Corona! 

My room.  I got to the venue and found that I had a complimentary upgrade to another room.  Wow!  The original room would have been awesome, but the upgraded room was right by the slopes where they were creating snow for the upcoming ski season.  I really like how ski resorts get business in the off-season through conferences and other events...it's a great use of facilities and brings a good deal of business to the local area.

What I'd Do Differently
This section is really a combination of what I'd do differently, as well as what I think, based on my experience, would make the event a better experience overall...

Adjust my travel.  I flew in on the Monday of the conference, got in, got cleaned up from my time in airports, grabbed a bite to eat, and then gave my first presentation.  Next year, I think I'd like to see about getting to the conference site a bit earlier, and maybe being able to participate in some more things.  For example, I was invited to speak on the panel that took place on Wed morning, but my flight out left about an hour before the panel started.

Encourage more tweeting.  Social media is a great way to get the word out about upcoming events, but I've also found that live tweeting during the event is a great way to generate buzz and encourage participation.  I did a search this morning for "#PFIC" and turned up only 20 tweets, some in Spanish.  I know that Mike Murr wasn't at this year's conference...

Contests.  In addition to the tweeting, Amber mentioned an idea for next year...a forensic challenge of some kind, complete with each team delivering their findings and being judged/graded.  I think that would encourage some great participation.  I think that these sorts of things attract attention to the blog.

Presentations.  One thing I saw and heard others talk about was the fact that there were several good presentations going on at the same time.  For example, I had wanted to attend Chad's presentation, but couldn't because I was presenting.  On Tues morning, there were two presentations on what appeared to be similar topics that I wanted to attend, and I chose to attend Ryan's. 

On the topic of presentations, as the "I" in the conference name stands for "innovation", I think next year would be a fantastic time to hear from the Carbon Black guys.

My Presentations
I gave two presentations this year...thanks again to Amber and Stephanie for allowing me to do so.  As the presentation materials don't really convey what was said in the presentation itself, I wanted to share some of my thinking in developing the presentations, as well as the gist of what was said...

Scanning for Low-hanging Fruit: This presentation centered on the forensic scanner I've been working on, both the concept (as in, why would you want to do this...) and the actual implementation (still very proof-of-concept at this point).  The presentation even included a demo, which actually worked pretty well. 

The idea of the presentation, which may not be apparent from the title, was that once we've found something that we've never seen before (either a new variant of something, or an entirely new thing...), that becomes low-hanging fruit that we can check for each time via automation.  The idea would then be to free the analyst to do analysis, rather than having the analyst spend time performing a lot of manual checks, and possibly forgetting some of them in the process.  As I mentioned, the demo went over very well, but there's still work to be done with respect to the overall project.  Up until now, I haven't had a great deal of opportunity to really develop this project, and I hope to change that in the future.

Introduction to Windows Forensics:  When developing this presentation, I really had to think about what constitutes an introduction to Windows forensics.  What I decided on...and this seemed to work really well, based on the reactions of the attendees...was to assume that most everyone in the presentation already understood the basics of forensic analysis, and we'd progress on to the forensic analysis of Windows systems.  The distinction at that point was that the introduction included some discussion of analysis concepts, and then went into discussing analysis of a Windows system, based on the premise that we'd be analyzing a complex system.  So we started out with some concepts, and went into discussing not just the forensic potential of various artifacts and sources (the Registry, Event Log, Prefetch files, etc.), but also the value of considering multiple sources together in order to develop context and a greater relative confidence in the data itself.

Overall, I think that this presentation went well, even though I went really fast (without any RedBull, I should mention...) and finished almost exactly on time.  I spoke to Stephanie after the presentation, and hope to come back next year and give a longer, hands-on version of this presentation.  I think a bootcamp or lab would be great, as I really want to convey the information in this presentation in a much more "use this right away" format.  Also, Windows Forensic Analysis 3/e is scheduled to be published early in 2012, and will provide a great foundation for the lab.

Slide Decks
I put the PDF versions of my presentations (in a zipped archive) up on Google Docs...you can find them here.  I've also shared the malware detection checklist I mentioned at the conference; keep in mind that this is a living document, and I'd greatly appreciate feedback.

Links to attendees' blogs:
Girl, Unallocated - It was great to put a face to a name, and hear how some folks name their blogs...
Journey into IR - It was great to finally meet Corey in person...
ForensicMethods - I'm looking forward to seeing Chad in Atlanta at DC3.

Friday, November 04, 2011

DF Analysis Lifecycle

In an effort to spur some interest within the DFIR community (and specifically with the NoVA Forensics Meetup group) in engaging and sharing information, I thought it would be a good idea to point out "forensic challenges" or exercises that are available online, as well as to perhaps setup and conduct some exercises of our (the meetup group) own.

As I was thinking about how to do this, one thing occurred to me...whenever I've done something like this as part of a training exercise or engagement, many times the first thing folks say is that they don't know how to get started.  When I've conducted training exercises, they've usually been for mixed audiences..."mixed" in the sense that the attendees often aren't all just DF analysts/investigators; some do DF work part-time, some do variations of DF work (such as "online forensics"), and others are SOC monitors and may not really do DF analysis.

As such, what I wanted to do was lay out the way I approach analysis engagements, and make that process available for others to read and comment on; I thought that would be a good way to get started on some of the analysis exercises that we can engage in going forward.  I've included some additional resources (by no means is this a complete list) at the end of this blog post.

Getting Started
The most common scenario I've faced is receiving either a hard drive or an image for analysis.  In many cases, it's been more than one, but if you know how to conduct the analysis of one image, then scaling it to multiple images isn't all that difficult.  Also, acquiring an image is either one of those things that you can gloss over in a short blog post, or you have to write an entire blog post (or series of posts) on how to do it...so let's just start our examination based on the fact that we received an image.

Documentation
Documentation is the key to any analysis.  It's also the hardest thing to get technical folks to do.  For whatever reason, getting technical folks to document what they're doing is like herding cats down a beach.  If you don't believe me...try it.  Why it's so hard is up for discussion...but the fact of the matter is that proper documentation is an incredibly useful tool, and when you do it, you'll find that it will actually allow you to do more of the cool, sexy analysis stuff that folks like to do.

Document all the things!

Most often when we talk about documentation during analysis, we're referring to case notes, and as such, we need to document pretty much everything (please excuse the gratuitous meme) about the case that we're working on.  This includes when we start, what we start with, the tools and processes/procedures we use, our findings, etc. 

One of the documentation pitfalls that a lot of folks run into is that they start their case notes on a "piece of paper", and by the end of the engagement, those notes never quite make it into an electronic document.  It's best to get used to (and start out) documenting your analysis in electronic format, particularly so your notes can be stored and shared.  One means of doing so is to use Forensic CaseNotes from QCC.  You can modify the available tabs to meet your needs.  However, you can just as easily document what you're doing in MS Word; you can add bold and italics to the document to indicate headers, and you can even add images and tables (or embed Visio diagrams) to the document, if you need to.

The reasons why we document what we do are (1) you may get "hit by a bus" and another analyst may need to pick up your work, and (2) you may need to revisit your analysis (you may be asked questions about it) 6 months or a year later.  I know, I know...these examples are used all the time and I know folks are tired of hearing them...but guess what?  We use these examples because they actually happen.  No, I don't know of an analyst who was actually "hit by a bus", but I do know of several instances where an analyst was on vacation, in surgery, or had left the organization, and the analysis had to be turned over to someone else.  I also know of several instances where a year or more after the report was delivered to the customer, questions were posed...this can happen when you're engaged by LE and the defense has a question, or when you're engaged by an organization, and their compliance and regulatory bodies have additional questions.  We often don't think much about these scenarios, but when they do occur, we very often find ourselves wishing we'd kept better notes.

So, one of the questions I hear is, "...to what standard should I keep case notes?"  Well, consider the two above scenarios, and keep your case notes such that (1) they can be turned over to someone else or (2) you can come back a year later and clearly see what you did.  I mean, honestly...it really isn't that hard.  For example, I start my case notes with basic case information...customer point of contact (PoC), exhibits/items I received, and most importantly, the goals of my exam.  I put the goals right there in front of me, and have them listed clearly and concisely in their own section so that I can always see them, and refer back to them.  When I document my analysis, I do so by including the tool or process that I used, and I include the version of the tool I used.  I've found this to be critical, as tools tend to get updated.  Look at EnCase, ProDiscover, or Mark Woan's JumpLister.  If you used a specific version of a tool, and a year later that tool had been updated (perhaps even several times), then you'd at least have an explanation as to why you saw the data that you did.

Case notes should be clear and concise, and not include the complete output from every tool that you use or run.  You can, however, include pertinent excerpts from tool output, particularly if that output leads your examination in a particular direction.  By contrast, dumping the entire output of a tool into your case notes and including a note that "only 3 of the last 4 lines in the output are important" is far from clear or concise.  I would consider including information about why something is important or significant to your examination, and I've even gone so far as to include references, such as links to Microsoft KnowledgeBase articles, particularly if those references support my reasoning and conclusions.

If you keep your case notes in a clear and concise manner, then the report almost writes itself.

Now, I will say that I have heard arguments against keeping case notes; in particular, that they're discoverable.  Some folks have said that because case notes are discoverable, the defense could get ahold of them and make the examiner's life difficult, at best.  And yet, for all of these comments, no one has ever elaborated on this beyond the "maybe" and the "possibly".  To this day, I do not understand why an analyst, as a matter of course, would NOT keep case notes, outside of being explicitly instructed to do so (i.e., to not keep case notes) by whomever you're working for. 

Checklists
Often, we use tools and scripts in our analysis process in order to add some level of automation, particularly when the tasks are repetitive.  A way to expand that is to use checklists, particularly for involved sets of tasks.  I use a malware detection checklist that I put together based on a good deal of work that I'd done, and I pull out a copy of that checklist whenever I have an exam that involves attempting to locate malware within an acquired image.  The checklist serves as documentation...in my case notes, I refer to the checklist, and I keep a completed copy of the checklist in the case directory along with my case notes.  The checklist allows me to keep track of the steps, the tools (and versions) I used, any significant findings, and any notes or justification I may have for not completing a step.  For example, I won't run a scan for NTFS ADSs if the file system of the image is FAT. 

The great thing about using a checklist is that it's a living document...as I learn and find new things, I can add them to the checklist.  It also allows me to complete the analysis steps more thoroughly and completely, and in a timely manner.  This, in turn, leaves me more time for things like conducting deep(er) analysis.  Checklists and procedures can also be codified into a forensic scanner, allowing the "low hanging fruit" and artifacts that you've previously found to be searched for quickly, thereby allowing you to focus on further analysis.  If the scanner is designed to keep a log of its activity, then you've got a good deal of documentation right there.

Remember that when using a checklist or just conducting your analysis, no findings can be just as important as an interesting finding.  Let's say your malware detection checklist includes 10 steps (a purely arbitrary number, used only as an example), and when you follow all 10, only the ADS detection step finds anything of interest, but it turns out to be nothing.  If you choose to not document the steps that had no significant findings, what does that tell another analyst who picks up your case, or what does it tell the customer who reads your report?  Not much.  In fact, it sounds like all you did was run a scan for ADSs...and the customer is paying how much for that report?  Doing this makes whoever reads your report think that you weren't very thorough, when you were, in fact, extremely thorough.

One final note about checklists and procedures...they're a good place to start, but they're by no means the be-all-end-all.  They're tools...use them as such.  Procedures and checklists often mean the difference between conducting "Registry analysis" and getting it knocked out, and billing a customer for 16 hrs of "Registry analysis", with no discernible findings or results.  If you run through your checklist and find something odd or interesting (for example, no findings), use that as a launching point from which to continue your exam.

Start From The End
This is advice that I've given to a number of folks, and I often get a look like I just sprouted a third eye in the middle of my forehead.  What do you mean, "start at the end"?  Well, this goes back to the military "backwards planning" concept...determine where you need to be at the end of the engagement (clear, concise report delivered to a happy customer), and plan backwards based on where you are now (sitting at your desk with a drive image to analyze).  In other words, rather than sitting down with a blank page, start with a report template (you know you're going to have to deliver a report...) and work from there.

Very often when I have managed engagements, I would start filling in the report template while the analyst (or analysts) was getting organized, or even while they were still on-site.  I'll get the executive summary knocked out, putting the background and goals (the exact same goals that the analyst has in their case notes) into the report, and replicating that information into the body of the report.  That leaves the analyst to add the exhibits (what was analyzed) and findings information into the report, without having to worry about all of the other "stuff", and allows them to focus on the cool part of the engagement...the analysis.  Using a report template (and using the same one every time), they know what needs to be included where, and how to go about writing their findings (i.e., clear and concise).  As mentioned previously, the analysis steps and findings are often taken directly from the case notes.

What's the plan, Stan?
Having an analysis plan to start with can often be key to your analysis.  Have you ever seen someone start their analysis by loading the image into an analysis application and start indexing the entire image?  This activity can take a great deal of time, and we've all seen even commercial applications crash during this process.  If you're going to index an entire image, why are you doing so?  In order to conduct keyword searches?  Okay...what's your list of keywords?

My point is to think critically about what you're doing, and how you're going to go about doing it.  Are you indexing an entire image because doing so is pertinent to your analysis, or "because that's what we've always done"?  If it's pertinent, that's great...but consider either extracting data from the image or making an additional working copy of the image before kicking off the indexing process.  That way, you can be doing other analysis during the indexing process.  Also, don't waste time doing stuff that you don't need to be doing.

Report Writing
No one likes to write reports.  However, if we don't write reports, how do we get paid?  How do we communicate our findings to others, such as the customer, or the prosecutor, or to anyone else?   Writing reports should not be viewed as a necessary evil, but instead as a required skill set.

When writing your report, as with your case notes, be clear and concise.  There's no need to be flowery and verbose in your language.  Remember, you're writing a report that takes a bunch of technical information and very often needs to translate that into something a non-technical person needs to understand in order to make a business or legal decision.  It's not only harder to make up new verbiage for different sections of your report, it also makes the finished product harder to read and understand.

When walking through the analysis or findings portion of the report (leading up to my conclusions), I've found that it's best to use the same cadence and structure in my writing.  It not only makes it easier to write, but it also makes it easier to read.  For example, if I'm analyzing an image in order to locate suspected malware, in each section, I'll list what I did ("ran AV scan"), which tools I used ("AV scanner blah, version X"), and what I found ("no significant/pertinent findings", or "Troj/Win32.Blah found").  I've found that when trying to convey technical information to a non-technical audience, using the same cadence and structure over and over often leaves the reader remembering the aspects of the report that you want them to remember.  In particular, you want to convey that you did a thorough job in your analysis.  In contrast, having each section worded in a significantly different manner not only makes it harder for me to write (I have to make new stuff up for each section), but the customer just ends up confused, and remembering only those things that were different. 

Be professional in your reporting.  You don't have to be verbose and use $5 words; in fact, doing so can often lead to confusion because you've used a big word incorrectly.  Have someone review your report, and for goodness sake, run spell check before you send it in for review!  If you run spell check and see a bunch of words underlined with red squiggly lines, or phrases underlined with green squiggly lines, address them.  Get the report in for review early enough for someone to take a good look at it, and don't leave it to the last minute.  Finally, if there's something that needs to be addressed in the report, don't tell your reviewer, "fine, if you don't like it, fix it yourself."  Constructive criticism is useful and helps us all get better at what we do, but the petulant "whatever...fix it yourself" attitude doesn't go over well.

The report structure is simple...start with an executive summary (ExSumm).  This is exactly as described...it's a summary for executives.  It's not a place for you to show off how many really cool big words you know.  Make it simple and clear...provide some background info on the incident, the goals of the analysis (as decided upon with the customer) and your conclusions.  Remember your audience...someone non-technical needs a clear and concise one-pager (no more than 2) with the information that they can use to make critical business decisions.  Were they compromised?  Yes or no?  There's no need to pontificate on how easily they had been compromised...just be clear about it.  "A successful SQL injection attack led to the exposure of 10K records."

The body of the report should include background on the incident (with a bit more detail than the ExSumm), followed by the exhibits (what was analyzed), and the goals of the analysis.  From there, provide information on the analysis you conducted, your findings, and your conclusions.  The goals and conclusions from the body of the report should be identical...literally, copy-and-paste...from the ExSumm.

Finally, many reports include some modicum of recommendations...sometimes this is appropriate, other times it isn't.  For example, if you're looking at 1 or 10 images, does that really give you an overall view into the infrastructure as a whole?  Just because MRT isn't up-to-date on 5 systems, does that mean that the organization needs to develop and implement a patch management infrastructure?  How do you know that they haven't already?  This is the part of the report that is usually up for discussion, as to whether or not it's included.

Summary
So, my intention with this post has been to illustrate an engagement lifecycle, and to give an overview of what an engagement can look like, cradle-to-grave.  This has by no means been intended to be THE way of doing things...rather, this is a way of conducting an engagement that has been useful to me, and I've found to be successful.

Resources
Chris Pogue's "Sniper Forensics: One Shot, One Kill" presentation from DefCon18
Chris Pogue's "Sniper Forensics v.3" from the most recent SecTor (scroll down)
TrustWave SpiderLabs "Sniper Forensics" blog posts (five posts in the series)
Girl, Unallocated On Writing
UnChained Forensics Lessons Learned
Brad Garnett's tips on Report Writing (SANS)
Computer Forensics Processing Checklist

Useful Analysis Tidbits
Corey's blog posts on exploit artifacts

Thursday, November 03, 2011

Stuffy Updates

Meetup
We had about 15 or so folks show up for last night's NoVA Forensics Meetup.  I gave a presentation on malware characteristics, and the slides are posted to the NoVA4n6Meetup Yahoo group, if you want to take a look.  Sorry about posting them the day of the meetup...I'm trying to get slides posted beforehand so that folks can get them and have them available.

One of the things I'd like to develop is interest in the meetup, and get more folks interested in showing up on a regular basis, because this really helps us develop a sense of community.  Now, one of the things I've heard from folks is that the location isn't good for them, and I understand that...not everyone can make it.  However, I do think that we likely have enough folks from the local area to come by on a regular basis, as well as folks who are willing to attend when they can.  The alternative to the location issue is that instead of saying that the drive is too far, start a meetup in your local area.  Seriously.  The idea is to develop a sense of community, which we don't get with "...I can't make it to the meetup because it's too far..."; starting a local meetup grows the community, rather than dividing it.

I've also received some comments regarding what folks are looking for with respect to content.  I like some of the ideas that have been brought up, such as having something a bit more interactive.  However, I'd also like to see more of a community approach to this sort of thing...one person can't be expected to do everything; that's not "community".  I really think that there are some good ideas out there, and if we have more folks interested in attending the meetups and actually showing up, then we can get the folks who want to know more about something in the same room as others who know more about that subject and may be willing to give a presentation.

Next month (7 Dec), we're going to be blessed with a presentation on mobile forensics from Sam Brothers.  In order to bring more folks in, Cory Altheide suggested that we have a Google Plus (G+) hangout, so I'm going to look at bringing a laptop for that purpose, and also see about live tweeting during the presentation (and getting others to do so).

Finally, we confirmed that adult beverages are permitted at the ReverseSpace site, as long as everyone polices their containers.  There didn't seem to be any interest this month in meeting for a pre-meetup warm-up at a nearby pub, so maybe for next month's meetup, some folks would consider bringing something to share.  I know from experience what Sam likes, so maybe we can make the event just a bit more entertaining for everyone. 

A couple of things to think about regarding the future of the meetups and the NoVA forensics community.  First, I've talked to the ReverseSpace folks about the possibility of holding a mini forensics-con at their facility.

Second, what would be the interest in forensic challenges?  We could use online facilities and resources to post not only the challenges, but also the results, and folks could then get together to discuss tools and techniques used.  The great thing about having these available online is that folks who may not be able to make it to the meetups can also participate.

Finally, the last thing I wanted to bring up regarding the meetups is this...what are some thoughts folks have regarding available online resources for the meetups?  I set up the Yahoo group, and I post meetup reminders to that group, as well as the Win4n6 group, to my blog, LinkedIn acct, and Twitter.  After the Oct meetup, two LinkedIn groups were set up for the meetup.  Even so, I just saw a tweet today where someone said that they just found out about the meetups via my blog.  I'd like to hear some thoughts on how to get the word out, as well as get things posted (slide decks, challenges, reminders, announcements) and available in a way that folks will actually get the information.  What I don't want to do is have so many facilities that no one knows what to use or where to go.

Memory Analysis
Melissa's got another post up on the SketchyMoose blog regarding Using Volatility: Suspicious Process.  She's posted a couple of videos that she put together that are well worth watching.  You may need to turn up the volume a bit (I did)...if you want to view the videos in a larger window, check out the SketchyMoose channel on YouTube.

Something I like about Melissa's post is that she's included reference material at the end of the post, linking to further information on some of what she discussed in the videos.

While we're on the topic of memory analysis, Greg Hoglund posted to the Fast Horizon blog; his topic was Detecting APT Attackers in Memory with Digital DNA.  Yes, the post is vendor-specific, but it does provide some insight into what you can expect to see from these types of attackers.

Attack Vectors/Intel Gathering
When investigating an incident or issue, analysts are often asked to determine how the bad guy got in or how the infection occurred.  Greg's post (mentioned above) refers to a threat that often starts with a spear phishing attack, which is based on open source intelligence gathering.  The folks over at Open Source Research have posted on real-world pen-testing attack vectors, and believe me, it really is that easy.  Back in '98-'99 when I was doing this kind of work myself, we'd use open source intel collection (which is a fancy way of saying we used Lycos and DogPile...the pre-Google stuff...searches) to start collecting information.

I think that if folks really started to look around, they'd be pretty surprised at what's out there.  Starting at the company executive management site will give you some names to start with, and from there you can use that information and the company name itself to search for things like speaker bios, social networking profiles, etc.  As suggested in one of the comments to the post, you can also check for metadata in documents available via the corporate site (also consider checking P2P networking infrastructures...you might be surprised at what you find...).

Documents aren't the only sources of information...keep in mind that images also contain metadata.
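
The Image::ExifTool Perl module makes pulling that metadata (including GPS coordinates, if the device recorded them) pretty painless; here's a quick sketch, with the file name and tag list being arbitrary:

use strict;
use warnings;
use Image::ExifTool;

my $file = shift || "photo.jpg";   # image to examine
my $exif = Image::ExifTool->new();

# Ask for a few tags of interest; the GPS tags only appear if the device recorded them
my $info = $exif->ImageInfo($file, "Make", "Model", "CreateDate", "GPSLatitude", "GPSLongitude");

foreach my $tag (sort keys %$info) {
  print "$tag : $info->{$tag}\n";
}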


Intel Collection During Analysis
Funny how writing this post is progressing this morning...one section of the post leads to another.  As I mentioned, during analysis we're often asked to determine how a system became compromised in the first place..."how did it happen?", where "it" is often a malware infection or someone having obtained unauthorized access to the system.  However, there are often times when it is important to gather intelligence during analysis, such as determining the user's movements and activities.  One way of doing this is to see which WAPs the system (if it's a laptop) had connected to...another way to determine a user's movements is through smart phone backups.  I recently posted some tools to the FOSS page for this blog that might help with that.

In addition, you can use Registry analysis to determine if a smart phone had been connected to the system, even if a management (iPhone and iTunes, BB and the BB Desktop Manager) application hadn't been used.  From there you may find pictures or videos that are named based on the convention used by that device, and still contain metadata that points to such a device.  In cases such as this, the "intelligence" may be that the individual had access to a device that had not been confiscated or collected during the execution of a search warrant. 

OpenIOC
I recently commented on Mandiant's OpenIOC site, and what's available there.  One of the things that they're sharing via this site is example IOCs, such as this one.  There are a couple of things that I like about this sharing...one is that the author of the IOC added some excellent comments that give insight into what they found.  I know a lot of folks out there in the DFIR community like that sort of thing...they like to see what other analysts saw, how they found it, tools and techniques used, etc.  So this is a great resource for that sort of thing.

The IOCs are also clear enough that I can write a plugin for my forensic scanner that looks for the same thing.  The scanner is intended for acquired images and systems accessed via F-Response, and doesn't require visibility into memory.  However, the IOCs listed at the OpenIOC site have enough disk-based information in them (file system, Registry, etc.) that it's fairly easy to create a plugin to look for those same items.
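
As a purely hypothetical example (this is not the scanner's actual plugin format, and the file name and Registry key below are only illustrative), a disk-based check might boil down to looking for a file under the mounted image and enumerating a Run key from the Software hive via Parse::Win32Registry:

use strict;
use warnings;
use Parse::Win32Registry;

# Hypothetical inputs: mount point of the image and path to the Software hive
my $mount = "F:";
my $hive  = $mount."\\Windows\\System32\\config\\SOFTWARE";

# File-based indicator (example path only)
my $dll = $mount."\\Windows\\System32\\shelldc.dll";
print "File indicator found: $dll\n" if (-e $dll);

# Registry-based indicator: list the Run key values and compare them against the IOC
my $reg  = Parse::Win32Registry->new($hive);
my $root = $reg->get_root_key();
if (my $key = $root->get_subkey("Microsoft\\Windows\\CurrentVersion\\Run")) {
  foreach my $val ($key->get_list_of_values()) {
    print "Run value: ".$val->get_name()." -> ".$val->get_data()."\n";
  }
}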

Wednesday, November 02, 2011

Stuff

NoVA Forensics Meetup Reminder
Don't forget about the meetup tonight...and thanks to David for pointing out my typo on the Meetup Page.

I haven't received any responses regarding a pre-meetup warm-up at a local pub, so I'll look forward to seeing everyone who's attending tonight at 7pm at our location.

I posted the slides for tonight's presentation to the NoVA Forensics Meetup Yahoo group.

SSDs
I was recently asked to write an article for an online forum regarding SSDs.  Up until now, I haven't had any experience with these, but I thought I'd start looking around and see what's already out there so I can begin learning about solid state drives, as they're likely to replace more traditional hard drives in the near future.

In Windows 7, if the drive is an SSD, ReadyBoot and SuperFetch are disabled.
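
If you want to check for this during an exam, one place to look appears to be the PrefetchParameters key in the System hive...I'm making an assumption about the value names here, so verify them against a test system.  A quick Parse::Win32Registry sketch:

use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift || die "Please provide the path to a System hive\n";
my $reg  = Parse::Win32Registry->new($hive);
my $root = $reg->get_root_key();

# ControlSet001 used for simplicity; check the Select key for the control set actually in use
my $path = "ControlSet001\\Control\\Session Manager\\Memory Management\\PrefetchParameters";
if (my $key = $root->get_subkey($path)) {
  foreach my $name ("EnablePrefetcher", "EnableSuperfetch") {
    if (my $val = $key->get_value($name)) {
      print $name." = ".$val->get_data()."\n";
    }
  }
}

A data value of 0 would generally indicate that the function has been turned off.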

Resources
SSDTrim.com
Andre Ross' post on the DigFor blog.

OpenIOC
With this M-unition blog post, Mandiant announced the OpenIOC framework web site.  I strongly suggest that before going to the OpenIOC.org site that you read through the blog post thoroughly, so that you understand what's being presented and offered up, and to set your expectations when you go to the site.

What I mean by this is that the framework itself has been around and discussed for some time, particularly through the Mandiant site.  Here is a presentation from some Mandiant folks that includes some discussion/slides regarding the OpenIOC.  There's also been an IOC editor available, which allows you to create IOCs, and now, with the OpenIOC.org site being released, the command line IOC finder tool has been released.  This tool (per the description in the blog post) allows a responder to check one host at a time for the established IOCs.

Fortunately, several example IOCs are also provided, such as this shelldc.dll example.  I tend to believe that this is where the real power of this (or any other) framework will come from; regardless of the type of framework, schema, or standard used to describe indicators of compromise, that power comes from the ability of #DFIR folks to understand and share these IOCs.  Having a standard for this sort of thing raises the bar for DFIR...not for admission, but it tells everyone where they have to be with respect to their understanding of DFIR activities, because not only will they have to understand what's out there, but they'll have to really understand it in order to be part of the community and share their own findings.

So, in a lot of ways, this is a step in the right direction.  I hope it takes off...as has been seen with GSI's EnScripting and the production of RegRipper plugins, sometimes no matter how useful something is to a small subset of analysts, it's not really picked up by the larger community.

Breach Reporting
There's been some interesting discussion in various forums (G+, Twitter, etc.) lately regarding breach reporting.  Okay, not so much discussion as folks posting links...I do think that there needs to be more discussion of this topic.

For example, much of the breach reporting is centered around actually reporting that a breach occurred. Now, if you read any of the published annual reports (Verizon, TrustWave, Mandiant), you'll see historically that a large percentage of breach victims are notified by external third parties.  These numbers appear to be across the board, as each of the organizations publishing these reports target slightly different customer bases and respond predominantly to different types of breaches (PCI/PII, APT, etc.).

Maybe a legislative requirement for reporting a breach, regardless of how it was discovered, is just the first step.  I mean, I've seen PCI breaches where a non-attorney executive stated emphatically that their company would not report a breach, but I tend to think that was done out of panic and a lack of understanding/information regarding the breach itself.  However, if breaches start getting reported, there will be greater visibility into the overall issue, and from there, intelligent metrics can be developed, followed by better detection mechanisms and processes.

With respect to PII, it appears that there are 46 states with some sort of breach notification requirements, and there's even a bill put forth by Sen. Leahy (D-VT) and others regarding a national standard requiring reporting of discovered breaches.

Resources
Leahy Personal Data Privacy and Security Act

Sunday, October 30, 2011

Stuff

Speaking
I've got a couple of speaking engagements coming up that I thought I'd share...

7-9 Nov - PFIC 2011 - I'll be giving two presentations, one on the benefits of using a forensic scanner, and the other, an Introduction to Windows Forensics.  I attended the conference last year, and had a great time, and I'm looking forward to meeting up with folks again.

30 Nov - CT HTCIA - I'm not 100% sure what I'm going to be presenting on at this point...  ;-)  I'm thinking about a quick (both presentations are less than an hour) presentation on using RegRipper, as well as one on malware characteristics and malware detection within acquired images.  I think that both are topical, and both are covered in my books.

Jan 2012 - DoD Cybercrime Conference (DC3) - I'll be presenting on timeline analysis.  I gave a presentation on Registry analysis (go figure, right??) here a long time ago, and really enjoyed the portions of the conference that I was able to attend.  I know that Rob Lee recently gave an excellent webinar on Super Timeline Analysis, but rest assured, this isn't the same material.  While I have provided code to assist with log2timeline, I tend to take a slightly different approach when presenting on timeline analysis.  Overall, I'm looking forward to having a great time with this conference and presentation.  Also, timeline analysis has its own chapter in the upcoming Windows Forensic Analysis 3/e.

Reading
I've had an opportunity to travel recently, and when I do, I like to read.  Being "old skul" (read: I don't own a tablet...yet), I tend to go with hard copy reading materials, such as magazines and small books.  I happened to pick up a copy of Entrepreneur recently, for a couple of reasons.  First, it's easy to maneuver the reading material into my seat and stow my carry-on bag.  Second, I think it's a great idea to see how other folks in other business areas solve problems and address issues that they encounter, and to spur ideas for how to recognize and address issues in my own area of interest.  For example, the October issue of the magazine has an article on how to start or expand a business during a recession, addressing customer needs.  In the technical community, this is extremely important.

In that same issue, Jonathan Blum's article titled Hack Job (not the same title in the linked article, but the same content) was interesting...while talking about application security, the author made the recommendation to "choose an application security consultant".  I completely agree with this, because it falls right in line with my thoughts on DFIR work...rather than calling an IR consultant or firm in an emergency, find a "trusted adviser" ahead of time who can help you address your DFIR needs.  What are those needs?  Well, in any organization, regardless of size, just look around.  Do you have issues or concerns with violation of acceptable use policies, or any other HR issues?  Theft of intellectual property? 

If you call a consulting firm when you have an emergency, it's going to cost you.  The incident has already happened, and then you have to work through contracting issues, getting a consultant (or a busload of consultants) on-site, and having to help the responders understand your infrastructure, and then start collecting data.  You may be paying for more consultants than you need initially, because after all, it is an emergency, and your infrastructure is unknown.  Or, you may be paying for more consultants later, as more information about the incident is discovered.  Also, keep in mind emergency/weekend/holiday rates, the cost of last minute travel, lodging, etc.  And we haven't even started talking about anything the consultants would need to purchase (drives for imaging), or fines you may encounter from regulatory bodies.

Your other option is to work with a trusted adviser ahead of time, someone who's seen a number of incidents, and can help you get ready.  You'll even want to do this before increasing your visibility into your infrastructure...because if you don't have a response capability set up prior to getting a deep view into what's really happening on your infrastructure, you can very easily be overwhelmed once you start shining a light into dark corners.  Work with this trusted adviser to understand the threats you're facing, what issues need to be addressed within your infrastructure and business culture, and establish an organic response capability.  Doing this ahead of time is less expensive, and with the right model, can be set up as a budgeted, predictable business expense, rather than a massive, unbudgeted expenditure.  Learning how an incident responder would address your issue and doing it yourself (to some extent) is much faster (quicker response time because you're right there) and less expensive (if you need analysis done, FedEx is much less expensive than last minute plane flights, lodging, rental cars, parking, etc., for multiple consultants).  Working with a trusted adviser ahead of time will help you understand how to do all of this in a sound manner, with confidence (and documentation!).

MBR Infectors
I've posted on MBR infectors before, and even wrote a Perl script to help me detect one of the characteristics of this type of malware (i.e., modifying the MBR, and then copying the original MBR to another sector, etc.).

Chad Tilbury recently posted an MBR malware infographic that is extremely informative!  The infographic does a great job of illustrating the threat posed by this type of malware, not just in what it does and how it works, but being a graphic, you can see the sheer number of variants that are out there, as well as how they seem to be increasing. 

This stuff can be particularly insidious, particularly if you've never heard of it.  I've given a number of presentations where I've discussed NTFS alternate data streams (ADSs), and the subject matter freaks Windows admins out...because they'd never heard of ADSs!  So, imagine something getting on a system in such a way as to bypass security protections on the system during the boot sequence.  More importantly, as a DFIR analyst, do you have checks for MBR infectors as part of your malware detection process?
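
If you don't, a basic check isn't hard to put together; here's a minimal Perl sketch (not the script I mentioned above) that walks the sectors ahead of the first partition and flags any that end in the 55 AA boot signature...a copied/backup MBR stashed by this type of malware is one of the things that can make a sector stand out this way:

use strict;
use warnings;

my $img = shift || die "Please provide the path to an image file\n";
my $max = shift || 63;   # number of sectors to scan; use 2048 for Vista/Win7-era partition alignment

open(my $fh, "<", $img) || die "Could not open $img: $!\n";
binmode($fh);

foreach my $sector (0..$max - 1) {
  seek($fh, $sector * 512, 0);
  my $bytes = read($fh, my $data, 512);
  last if (!defined $bytes || $bytes < 512);
  # A sector ending in 55 AA may be an MBR, a VBR, or a copy the malware tucked away
  if (unpack("v", substr($data, 510, 2)) == 0xAA55) {
    printf "Sector %d ends in 55 AA\n", $sector;
  }
}
close($fh);

From there, comparing any flagged sectors to sector 0 (a simple hash comparison works) will tell you whether one of them is a copy of the MBR.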

Blogs
Melissa's posted a couple of great blog posts on a number of topics, including (but not limited to) using Volatility and John the Ripper to crack passwords (includes a video), and examining partition tables.  She's becoming more prolific, which is great because she's been posting some very interesting stuff and I hope to see more of it in the future.

Tools
I've seen some tweets over the past week or so that have mentioned updates to the Registry Decoder tool...sounds like development is really ripping along (no pun intended...).  If you do any analysis of Windows systems and you haven't looked at this tool as a resource...what's wrong with you?  Really?  ;-)

Evidence Collection
A long time ago, while I was on the IBM ISS ERS team, we moved away from using the term "evidence" to describe what we collected.  We did so because the term "evidence" has the connotation of having to do with courts, and there was an air of risk avoidance in much of the IR work that we did...I'm not entirely sure where that came from, but that's how it was.  And if a customer (or someone higher up the food chain) says, "don't call it 'evidence' because it sounds like we're taking it to court...", then, well...to me, it doesn't matter what you call it.  Now, this doesn't mean that we changed what we did or how we did it...it simply means that we didn't call the digital data that we collected "evidence". 

This recent SANS ISC post caught my eye.  The reason it caught my eye was that it started out talking about having a standard for evidence handling, listed the requirements, and then...stopped.  Most often when talking with on-site IT staff during an incident, there's an agreement with respect to the need for collecting data, but when you start talking about what type of evidence is admissible in court, that's when most folks stop dead in their tracks and paralysis sets in, as often the "how" is never addressed...at least, not in a way that the on-site IT staff remembers or has implemented.

Here are a couple of thoughts...first, make data collection part of your incident response process.  The IR process should specify the need to collect data, and there should be procedures for doing so.  Each of these procedures can be short enough to easily understand and implement.

One of the things that I learned while preparing for the CISSP exam way back in 1999 was that business records...those records and that data collected as part of normal business processes...could be used as evidence.  I am not a lawyer, but I would think that this has, in part, to do with whether or not the person collecting the data is acting as an agent for law enforcement.  But if collecting that data is already part of your IR process and procedures, then it's documented as being part of your normal business processes.

And right there is the key to collecting "evidence"...documentation.  In some ways, I have always got the impression that this is the big roadblock to data collection...not that we don't know how to do it (there is a LOT of available information regarding how to collect all sorts of data from computer systems), but that we (technical folks) just seem to naturally balk at documenting anything.  And to be honest, I really don't know why that is...I mean, if a procedure states to follow these steps, and you do so, what's the problem?  Is it the fear of having done something wrong?  Why?  If you followed the steps in the procedure, what's the issue?

This really goes back to what I said earlier in this post about finding and working with a trusted adviser, someone with experience in IR who is there to help you help them to help you (that was completely intentional, by the way...).  For example, let's say you have a discussion and do some hands-on work with your trusted adviser regarding how to collect and preserve "evidence" from the most-often encountered systems in your infrastructure...laptops, desktops, and servers in the server room.  Then, let's say you have an incident and have to collect evidence from a virtual system, or a boot-from-SAN device?  Who better to assist you with this than someone who's probably already encountered these systems?  Or better yet, someone who's already worked with you to identify the one-off systems in your infrastructure and how to address them?

So, working with an adviser would help you address the questions in the SANS ISC blog post, and ensure that if your goal (or one of your goals) is to preserve evidence for use by law enforcement, then you've got the proper process, procedures, and tools in place to do so.

Saturday, October 29, 2011

NoVA Forensics Meetup

Reminder - our next NoVA Forensics Meetup is Wed, 2 Nov 2011...same Bat-time, same Bat-place.

Drop me an email or comment here if you're interested in meeting for a warm up at or just before 6pm.

Thursday, October 27, 2011

Tools and Links

Not long ago, I started a FOSS page for my blog, so I didn't have to keep going back and searching for various tools...if I find something valuable, I'll simply post it to this page and I won't have to keep looking for it.  You'll notice that I really don't have much in the way of descriptions posted yet, but that will come, and hopefully others will find it useful.  That doesn't mean the page is stagnant...not at all.  I'll be updating the page as time goes on.

Volatility
Melissa Augustine recently posted that she'd set up Volatility 2.0 on Windows, using this installation guide, and using the EXE for Distorm3 instead of the ZIP file.  Take a look, and as Melissa says, be sure to thoroughly read and follow the instructions for installing various plugins.  Thanks to Jamie Levy for providing such clear guidance/instructions, as I really think that doing so lowers the "cost of entry" for such a valuable tool.  Remember..."there're more things in heaven and earth than are dreamt of in your philosophy."  That is, performing memory analysis is a valuable skill to have, particularly when you have access to a memory dump, or to a live system from which you can dump memory.  Volatility also works with hibernation files, from whence considerable information can be drawn, as well.
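
Once everything is installed, getting started looks something like the following (the file name and profile are just examples):

python vol.py -f d:\cases\memory.dmp imageinfo
python vol.py -f d:\cases\memory.dmp --profile=Win7SP1x86 pslist

The imageinfo plugin suggests a profile for the memory dump, which you then pass to the other plugins (pslist, etc.).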

WDE
Now and again, you may run across whole disk encryption, or encrypted volumes on a system.  I've seen these types of systems before...in some cases, the customer has simply asked for an image (knowing that the disk is encrypted) and in others, the only recourse we have to acquire a usable image for analysis is to log into the system as an Admin and perform a live acquisition.

TCHunt
ZeroView from Technology Pathways, to detect WDE (scroll down on the linked page)

You can also determine if the system had been used to access TrueCrypt or PGP volumes by checking the MountedDevices key in the Registry (this is something that I've covered in my books).  You can use the RegRipper mountdev.pl plugin to collect/display this information, either from a System hive extracted from a system, or from a live system that you've accessed via F-Response.
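
For example, against a System hive extracted from an image (the path is just an example):

rip.pl -r F:\case\System -p mountdev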

Timelines
David Hull gave a presentation on "Atemporal timeline analysis" at the recent SecTorCA conference (you can find the presentation .wmv files here), and posted an abridged version of the presentation to the SANS Forensic blog (blog post here).

When I saw the title, the first thing I thought was...what?  How do you talk about something independent of time in a presentation on timeline analysis?  Well, even David mentions at the beginning of the recorded presentation that it's akin to "asexual sexual reproduction"...so, the title is meant to be an oxymoron.  In short, what the title seems to refer to is performing timeline analysis during an incident when you don't have any sort of time reference from which to start your analysis.  This is sometimes the case...I've performed a number of exams with very little information from which to start my analysis, but finding something associated with the incident often gives me a point of reference within the timeline, providing a significant level of context to the overall incident.

In this case, David said that the goal was to "find the attacker's code".  Overall, the recorded presentation is a very good example of how to perform analysis using fls and timelines based solely on file system metadata, and using tools such as grep to manipulate (as David mentions, "pivot on") the data.  Note that the SANS blog post doesn't really address the use of "atemporal" within the context of the timeline...you really need to watch the recorded presentation to see how that term applies.
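
If you haven't done this sort of pivoting before, here's a minimal sketch of the idea (my own illustration, not David's code): filter an fls bodyfile on a keyword so you can zero in on entries of interest before building out the full timeline.  The bodyfile and keyword arguments are just examples.

#!/usr/bin/perl
# pivot.pl - minimal sketch: "pivot" on an fls bodyfile by filtering on a keyword
# and printing the modified and born (creation) times for each matching entry.
use strict;
use warnings;
use POSIX qw(strftime);

my ($bodyfile, $keyword) = @ARGV;
die "Usage: pivot.pl <bodyfile> <keyword>\n" unless $bodyfile && defined $keyword;

open(my $fh, '<', $bodyfile) or die "Cannot open $bodyfile: $!\n";
while (my $line = <$fh>) {
    chomp $line;
    next unless $line =~ /\Q$keyword\E/i;
    # bodyfile format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    my @f = split /\|/, $line;
    next unless @f >= 11;
    my ($name, $mtime, $crtime) = ($f[1], $f[8], $f[10]);
    printf "m: %s  b: %s  %s\n",
        strftime("%Y-%m-%d %H:%M:%S", gmtime($mtime)),
        strftime("%Y-%m-%d %H:%M:%S", gmtime($crtime)),
        $name;
}
close($fh);

From there, the times on the hits give you the reference points from which to anchor the rest of the timeline.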

Sniper Forensics
Also, be sure to check out Chris Pogue's "Sniper Forensics v3.0: Hunt" presentation, which is also available for download via the same page.  There are a number of other presentations that would be very good to watch, as well...some talk about memory analysis.  The latest iteration of Chris's "Sniper Forensics" presentations (Chris is getting a lot of mileage from these things...) makes a very important point regarding analysis...in a lot of cases, whether an artifact appears relevant to a case depends on the analyst's experience.  A lot of analysts find "interesting" artifacts, but many of these artifacts don't relate directly to the goals of their analysis.  Chris gives some good examples of an "expert eye"; in one slide, he shows an animal track.  Most folks might not even really care about that track, but to a hunter, or someone like me (I ride horses in a national park), the track tells me a great deal about what I can expect to see.

This applies directly to "Sniper Forensics"; all snipers are trained in observation.  Military snipers are trained to quickly identify military objects, and to look for things that are "different".  For example, snipers will be sent to observe a route of travel, and will recognize freshly turned earth or a pile of trash on that route when the sun comes up the next day...this might indicate an attempt to hide an explosive device.

How does this apply to digital forensic analysis?  Well, if you think about it, it is very applicable.  For example, let's say that you happen to notice that a DLL was modified on a system.  This may stand out as odd, in part because it's not something that you've seen a great deal of...so you create a timeline for analysis, and see that there wasn't a system or application update at that time. 

Much like a sniper, a digital forensic analyst must be focused.  A sniper observes an area in order to gain intelligence...enemy troop movements, civilian traffic through the area, etc.  Is the sniper concerned with the relative airspeed of an unladen swallow?  While that artifact may be "interesting", it's not pertinent to the sniper's goals.  The same holds true with the digital forensic analyst...you may find something "interesting" but how does that apply to your goals, or should you get your scope back on the target?

Data Breach 'Best Practices'
I ran across this article recently on the GovernmentHealthIT site, and while it talks about breach response best practices, I'd strongly suggest that all four of these steps need to be performed before a breach occurs.  After all, while the article specifies PII/PHI, the regulatory and compliance bodies covering those and other types of data (such as PCI) specifically state the need for an incident response plan (PCI DSS para 12.9 is just one example).

Item 1 is taking an inventory...I tell folks all the time that when I've done IR work, one of the first things I ask is, "Where is your critical data?"  Most folks don't know.  A few who did know also claimed (incorrectly) that the data was encrypted at rest.  I've only been to one site where the location of sensitive data was known and documented prior to a breach, and that information not only helped our response analysis immensely, it also reduced the overall cost of the response (in fines, notification costs, etc.) for the customer.

While I agree with the sentiment of item 4 in the article (look at the breach as an opportunity), I do not agree with the rest of that item; i.e., "the opportunity to find all the vulnerabilities in an organization—and find the resources for fixing them." 

Media Stuff
Brian Krebs has long followed and written on the topic of cybercrime, and one of his recent posts continues that coverage.  I had a number of take-aways from the post that may not be intuitively obvious:

1.  "Password-stealing banking Trojans" is ambiguous, and could be any of a number of variants.  The "Zeus" (aka, Zbot) Trojan  is mentioned later in the post, but there's no information presented to indicate that this was, in fact, a result of that specific malware.  Anyone who's done this kind of work for a while is aware that there are a number of malware variants that can be used to collect online banking credentials.

2.  Look at the victims mentioned in Brian's post...none of them is a big corporate entity.  Apparently, the bad guys are aware that smaller targets are less likely to have detection and response capabilities (*cough*CarbonBlack*cough*).  This, in turn, leads directly to #3...

3.  Nothing in the post indicates that a digital forensics investigation was done of systems at the victim location.  With no data preserved, no actual analysis was performed to identify the specific malware, and there's nothing on which law enforcement can build a case.

Finally, while the post doesn't specifically mention the use of Zeus at the beginning, it does end with a graphic showing detection rates of new variants of the Zeus Trojan over the previous 60 days; the average detection rate is below 40%.  While the graphic is informative, without any actual analysis of the affected systems, it doesn't tell us that Zeus was, in fact, the malware used in these incidents.

More Media Stuff
I read this article recently from InformationWeek that relates to the recent breach of NASDAQ systems; I specifically say "relates" to the breach, as the article specifies, "...two experts with knowledge of Nasdaq OMX Group's internal investigation said that while attackers hadn't directly attacked trading servers...".  The title of the article includes the words "3 Expected Findings", and the article is pretty much just speculation about what happened, from the get-go.  In fact, the article goes on to say, "...based on recent news reports, as well as likely attack scenarios, we'll likely see these three findings:".  That's a lot of "likely" in one sentence, and this much speculation is never a good thing.  


My concern is that the overall take-away is going to be "NASDAQ trading systems were hit with SQL injection", and folks are going to be looking for this sort of thing...and some will find it.  But others will miss what's really happening while they're looking in the wrong direction.

Other Items
F-Response TACTICAL Examiner for Linux now has a GUI
Lance Mueller has closed his blog; old posts will remain, but no new content will be posted

Thursday, October 20, 2011

Stuff in the Media

Now and again, I run across some interesting articles available through various media sources.  Back in the days when I was doing vulnerability assessments ('98-ish), we used to listen to what our contact said when we went onsite, and try to guess which magazines and journals he had open in his office...usually, we'd hear our contact using keywords from recent articles.

Terry Cutler, CTO of the Canadian firm Digital Locksmiths, had an interesting article published in SecurityWeek recently.  The article is titled, "You've been hacked.  Now what?", and provides a fictional...albeit realistic...description of what happens when an incident has been identified.  A lot of what is described in the article appears to have been pulled from either experience (IR is not listed as an available service on the company web site) or from "best practices".  For example, the article appears to assume that if a compromise occurs, corporate cell phones must be treated as having been compromised (with respect to calls...email wasn't mentioned).

The article talks about not disconnecting systems, which in many cases is counter to what most victims of a compromise want to do right away.  However, I completely agree with this...unfortunately, the article doesn't expand beyond that statement to say what you should do.

Now, what I do NOT agree with is the statement in the article that you should "get help from an ethical hacker".  First off, given the modern usage of the term "hacker", the phrase "ethical hacker" is an oxymoron...like "jumbo shrimp".  While I do agree that some of the folks performing "ethical hacking" are good at getting into your network (as stated in the article, "Ethical hackers are experts at breaking into your system the same way a hacker will."),  I don't agree that this necessarily makes them experts at protecting networks, or more importantly, scoping the incident and determining where the attack came from.

In the years that I have been an incident responder, the one thing that consistently makes me cringe is when I hear someone say, "...if I were the hacker, this is what I would have done."  Folks, where that thinking takes you can be irrelevant, or worse, can send your responders chasing down rabbit holes.  Think CSI, and go where the evidence takes you.  I've seen instances where the intruder had no idea what organization he'd compromised and simply meandered about, leaving copious and prolific artifacts of his activity on all systems he touched.  I've also seen SQL injection attacks where, once in, the intruder was very focused in what they were looking for.  Sometimes, it's not so much about the corporate assets as it is loading keystroke loggers on user systems in order to harvest online banking credentials.

What you should be doing is collecting data and following the evidence, using the information you've collected to make educated, reasoned determinations as to where the intruder is going and what they are doing.  Do not assume that you can intuit the attacker's intentions...you may never know what those are, and you may chase down rabbit holes that lead nowhere.  Instead, focus on what the data is telling you.  Is the intruder going after the database server?  Were they successful?

The best way to go about establishing an organic capability for this sort of work (at least, for tier 1 and/or 2 response) is to establish a relationship with a trusted adviser, someone who has experience in incident response and digital forensics, and can guide you through the steps to building that organic capability for immediate response.

At this point, you're probably wondering what I mean by "organic", and why "immediate response" is something that seems so necessary.  Well, consider what happens during a "normal" incident response; the "victim" organization gets notified of the incident (usually by an external third party), someone is contacted about providing response services, contract negotiations occur, and then at some time in the future, responders arrive and start to learn about your infrastructure so that they can begin collecting data.

The way this should be occurring is that data collection begins immediately, with incident identification as the trigger...if this doesn't happen, critical data is lost and unrecoverable.  The only way to do this is to have someone onsite trained in how to perform the data collection.


A lot of local IT staff look at consultants as the "experts" in data collection, and very often don't realize that before collecting data, those "experts" ask a LOT of questions.  Most often, the consultants called onsite to provide IR capabilities are, while knowledgeable, not experts at networking, and they are definitely not experts in YOUR infrastructure and environment.

I'm not even talking about getting to prosecution at this point...all I'm talking about is that the data necessary to determine what happened, and what data may have been compromised, decays quickly; if steps are not taken to immediately collect and preserve this data, there will very likely be a significant detrimental impact on the organization.  The only reason this isn't being done now is that onsite IT staff don't have the training.  So, work with that trusted adviser to develop a process and a means for collecting the necessary data, and for documenting it all.

Going back to the SecurityWeek article, I completely agree...don't disconnect the system as your first act.  Instead, have the necessary tools in place and your folks trained in what to do...for example, collect the contents of physical memory first, and then do what you need to do.  This may mean disconnecting the system from the network (leaving it powered on), or making an emergency modification to a switch or firewall rule in order to isolate the system in another manner.  If the system is boot-from-SAN, you may also want to (for example) have a means in place for acquiring an image of the system before shutting it down.  Regardless of what needs to be done, be sure that you have a documented process for doing it, one that allows for pertinent data, as well as business processes, to be preserved.

Ever wondered, during an incident, what kind of person (or people) you're working against?  This eWeek article indicates that the impression that hackers are isolated, socially-inept "lone wolf" types is incorrect; in fact, according to the article, "hackers" are very social, sharing exploits, techniques and even providing tutorials.  Given this, is it any wonder why folks on the other side of the fence are constantly promoting sharing?  The bad guys do it because it makes sense, and makes them better...so why aren't we doing more of it?

Wednesday, October 19, 2011

Links, Updates, and WhatNot

Malware
Evild3ad has an excellent writeup of the Federal (aka, R2D2) Trojan via memory analysis using Volatility.  The blog post gives a detailed walk-through of the analysis conducted, as well as the findings.  Overall, my three big take-aways:

1.  An excellent example of how to use Volatility to conduct memory analysis.
2.  An excellent example of case notes.
3.  Detailed information that can be used to create a plugin for either RegRipper, or a forensic scanner.

There is also a link at the site to a RAR archive containing the memory image, so you can download it and try running the commands listed in the blog post against the same data.

M-Trends
The Mandiant M-Trends 2011 report is available...I received a copy yesterday and started looking through it.  Very interesting information in the report...as a host-based analysis guy, I found some of the information on persistence mechanisms (starting on pg 11 of the report) to be very interesting.  Some may look at the use of Windows Services and the ubiquitous Run key as passe, but the fact is that these persistence mechanisms work.  After all, when the threat actors compromise an infrastructure, they are not trying to remain hidden from knowledgeable and experienced incident responders.

Interestingly, the report includes a side note that the authors expect to see more DLL Search Order Hijacking used as a persistence mechanism in the future.  I tend to agree with the statement in the report, given that (again, as stated in the report) this is an effective technique that is difficult to detect.
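
To make the detection side a bit more concrete, here's a rough sketch (my own illustration, not anything from the M-Trends report) of one way to sweep a mounted image for a classic indicator of search order hijacking: a DLL sitting in the same directory as an executable while sharing its name with a DLL in system32.  The mount point and path layout are assumptions, and any hit is a lead to investigate, not a finding by itself.

#!/usr/bin/perl
# dllhijack.pl - minimal sketch: flag DLLs that sit beside an EXE and share a
# name with a DLL in system32 (a common DLL Search Order Hijacking indicator).
# Assumes the image is mounted read-only at the path given on the command line.
use strict;
use warnings;
use File::Find;

my $mount = shift @ARGV or die "Usage: dllhijack.pl <mounted image root>\n";
my $sys32 = "$mount/Windows/System32";

# Build a lookup of DLL names that live in system32
my %sys32_dll;
opendir(my $dh, $sys32) or die "Cannot open $sys32: $!\n";
$sys32_dll{lc $_} = 1 for grep { /\.dll$/i } readdir($dh);
closedir($dh);

# Walk the image; in any directory that holds an EXE, report DLLs that
# shadow a system32 DLL of the same name.
my %checked;
find(sub {
    return unless /\.exe$/i;
    my $dir = $File::Find::dir;
    return if lc($dir) eq lc($sys32);
    return if $checked{lc $dir}++;
    opendir(my $d, $dir) or return;
    for my $f (grep { /\.dll$/i } readdir($d)) {
        print "$dir/$f shadows a system32 DLL of the same name\n"
            if $sys32_dll{lc $f};
    }
    closedir($d);
}, $mount);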

Another interesting persistence mechanism described in the report was services.exe being modified (without changing the size of the binary) to point to an additional (and malicious) DLL.  This technique has also been seen with other binaries, including other DLLs.

A major section (section III) of the report discusses visibility across the enterprise; I think that this is an extremely important issue.  As I've performed incident response over the years, a common factor across most (if not all) of the incidents I've responded to has been a lack of any infrastructure visibility whatsoever.  This has been true not only for initial visibility into what happens on the network and on hosts, but it has also affected response capabilities.  Network- and host-based visibility of some kind needs to be achieved by all organizations, regardless of size, etc.  I mean, think about it...any organization that produces something has some sort of visibility into processes that are critical to the business, right?  A company that manufactures widgets has controls in place to ensure that the widgets are produced correctly, and that they're shipped...right?  I mean, wouldn't someone notice if trucks weren't leaving the loading docks?  So why not have some sort of visibility into the medium where your critical information assets are stored and processed?

Looking at the information provided in the M-Trends report (as well as other reports available from other organizations), I can see the beginning of an argument for incident preparation being built; that is to say that while the report may not specifically highlight this (the M-Trends report mentions the need for "...developing effective threat detection and response capabilities..."), it's clear that the need for incident preparation has existed for some time, and will continue to be an issue.

Addendum: Pg 13 of the M-Trends report mentions some "interesting" persistence mechanisms being used, one of which is "use of COM objects"; however, the rest of the report doesn't provide much of a description of this mechanism.  Well, I ran across this post on the ACROS Security Blog that provides some very good insight into using COM objects for persistence.  Both attacks described are something of a combination of the use of COM objects and DLL Search Order hijacking, and are very interesting.  As such, there need to be tools, processes, and analyst education around these techniques so that they can be recognized, or at least discovered, through analysis.  I would suggest that these techniques have been used for some time...it's simply that most of us may not have known to (or "how to") look for them.

Resources
Verizon DBIR
TrustWave GSR

Incident Preparation
I recently gave a talk on incident preparation at ETCSS, and overall, I think it was well received.  I used a couple of examples to get my point across...boxing, fires in homes...and as the gears have continued to turn, I've thought of another, although it may not be as immediately applicable or understandable for a great many folks out there.

Having been a Marine, and knowing a number of manager- and director-types that come from prior military experience, I thought that the USS Cole would be a great example of incident preparation.  The USS Cole was subject to a bombing attack on 12 October 2000, and there were 56 casualties, 17 of which were fatalities.  The ship was struck by a bomb amidships, and a massive hole was torn in her side, part of which was below the waterline.  However, the ship did not sink.

By contrast, consider the RMS Titanic.  On 15 April 1912, the Titanic struck an iceberg and shortly thereafter, sank.  According to some sources, a total of six compartments were opened to the sea; however, the design of the Titanic was for the ship to remain afloat with only the first four compartments opened to the sea.  As the weight of the water pulled the ship down, more water was allowed to flood the ship, which quickly led to her sinking.

So, what does this have to do with incident preparation and response?  Both ships were designed with incidents in mind; i.e., it was clear that the designers were aware that incidents, of some kind, would occur.  The USS Cole had some advantages; better design due to a better understanding of threats and risk, a better damage control team, etc.  We can apply this thinking to our current approach to infrastructure design and assessments.

How would the USS Cole have fared if, at the time of the bombing, she had not had damage control teams and sailors trained in medical response and ship protection?  What do you think would have happened if they'd instead done nothing, and gone searching for someone to call for help?

My point in all this goes right back to my presentation; who is better prepared to respond to an incident - the current IT staff on-site, who live and work in that environment every day, or a consultant who has no idea what your infrastructure looks like?

Determining Quality
Not long ago, I discussed competitive advantage and how it could be achieved, and that got me to thinking...when a deliverable is sent to a customer of DFIR services, how do they (the customer) judge or determine the quality of the work performed?

Over the years, I've had those engagements where a customer says, "this system is infected", but when asked for specifics regarding why they think it was infected, or what led them to think it was infected, they most often don't have anything concrete to point to.  I'll go through, perform the work based on a malware detection checklist, and very often come up with nothing.  I submit a report detailing my work activity and findings, which leads to my conclusion of "no malware found", and I simply don't hear back.

Consulting is a bit different from the work done in LE circles...in LE, the work you do is very often going to be reviewed by someone.  The prosecution may review it, looking for information that can be used to support their argument, and the defense may review it, possibly to shoot holes in your work.  This doesn't mean that there's any reason to do the work or the reporting any differently...it's simply a difference in the environments.

So, how does a customer (of consulting work) determine the quality of the work, particularly when they've just spent considerable money, only to get an answer that contradicts their original supposition?  When they receive a report, how do they know that their money has been well-spent, or that the results are valid?  For example, I use a checklist with a number of steps, but when I provide a report that states that I found no indication of malware on the system, what's the difference between that and another analyst who simply mounted the image as a volume and scanned it with an AV product?

Attacks
If you haven't yet, you should really consider checking out Corey's Linkz about Attacks post, as it provides some very good information regarding how some attacks are conducted.  Corey also provides summaries of some of the information, specifically pointing out artifacts of attacks.  Most of them are Java-based, similar to Corey's exploit artifact posts.

This post dovetails off of a comment that Corey left on one of my posts...

I've seen and hear comments from others about how it's difficult (if not impossible) and time consuming to determine how malware ended up on the system.

Very often, this seems to be the case.  The attack or initial infection vector is not determined, as it is deemed too difficult or time consuming to do so.  There are times when determining the initial infection vector may be extremely difficult, such as when the incident is months old and steps have been taken (either by the attacker or local IT admins) to clean up the indicators of compromise (IoCs).  However, I think that the work Corey has been doing (and providing the results of publicly) will go a long way toward helping analysts narrow down the initial infection vector, particularly those who create detailed timelines of system activity.

Consulting
Hal Pomeranz has an excellent series of posts regarding consulting and issues that you're likely to run into and have to address if you go out on your own.  Take a look at part 1, 2, 3, and 4.  Hal's provided a lot of great insight, all of which comes from experience...which is the best teacher!  He also gives you an opportunity to learn from his mistakes, rather than your own...so if you're thinking about going this route, take a look at his posts.

Friday, October 14, 2011

Links

Carbon Black
I recently gave a presentation at ETCSS, during which we discussed the need for incident preparedness in order to improve the effect of incident response efforts.  In that presentation, I mentioned and described Carbon Black (Cb), as well as how it can be used in other ways besides IR.

While I was traveling to the venue, Cb Enterprise was released.  Folks, if you don't know what Carbon Black is, you really should take a look at it.  If you use computers in any capacity beyond simply sitting at a keyboard at your house...if you're a dentist's office, hospital, law firm, or a national/global business...you need to take a good hard look at Cb.  Cb is a small, light-weight sensor that monitors execution on a system...remember Jesse Kornblum's Rootkit Paradox paper?  The paradox of rootkits is that they want to hide, but they must run...the same is true with any malware.  Cb monitors program execution on Windows systems.  The guys at Cb have some great examples of how they've tracked down a three-stage browser drive-by infection in minutes, where it may have taken an examiner doing just disk forensics days to locate the issue.

If you have and use computers, or you have customers who do, you should really take a hard look at Cb and consider deploying it.  Seriously...check out the site, give the Kyrus Tech guys a call, and take a good hard look at what Cb can do for you.  I honestly believe that Cb is a game changer, and the Kyrus Tech guys have demonstrated that it is...and not just for IR work.

Timeliner
Jamie Levy has posted documentation and plugins for her OMFW talk (from last July) regarding extracting timeline data from a memory dump using the Volatility framework.  This is a great set of plugins for a great memory analysis framework, folks.  What's really cool is that with a little bit of programming effort,  you can modify the output format of the plugins to meet your needs, as well.  A greatbighuge THANKS to Jamie for providing these plugins, and for the entire Volatility team/community for a great memory analysis framework.

Exploit Artifacts
Speaking of timelines...Corey has posted yet another analysis of exploit artifacts, this one regarding a signed Java applet.  This is a great project that Corey works on, and a fantastic service that he's providing.  Using available tools (i.e., MetaSploit), he compromises a system, and then uses available tools and techniques (i.e., timeline analysis) to demonstrate what the artifacts of the exploit "look like" from the perspective of disk analysis.  Corey's write-up is clear and concise, and to be honest, this is what your case notes and reports should look like...not exactly, of course, but there are a lot of folks that use "...I don't know what standard to write to..." as an excuse to not do anything.  Look at what Corey's done here...don't you think that there's enough information to replicate what he did?  Does that work as a standard?

Also, take a look at the technique Corey used for investigating this issue...rather than posting a question online, he took steps to investigate the issue himself.  Rather than starting with an acquired image and a question (as is often the case during an exam), he started with just a question, and set out to determine an answer.  Information like this can be extremely valuable, particularly when it comes to determining things such as the initial infection vector of malware or a bad guy, and a good deal of what he's provided can be added to an exam checklist or a plugin for a forensic scanner.  I know that I'm going to continue to look for these artifacts...a greatbighuge THANKS to Corey, not just for doing this sort of work, but for posting his results, as well.

DFF
DFF 1.2 is available for download.  Take a look at this for a list of the updates; check out batch mode.  Sorry, I don't have more to write...I just haven't had a chance to dig into it yet.

Community
One of the things I see a great deal of, whether it's browsing the lists or reading questions that appear in my inbox, is that when asking questions regarding forensic analysis, many of us still aren't providing any indication of the operating system that we're analyzing.  Whether it's an application question (P2P, FrostWire, a question about MFT entries, etc.), many of us are still asking the questions without identifying the OS, and if it's Windows, the version.

Is this important at all?  I would suggest that yes, it is.  The other presentation I gave at ETCSS (see the Carbon Black entry above) was titled What's new in Windows 7: An analyst's perspective.  During this presentation, we discussed a number of differences, specifically between Windows XP and Win7, but also between Vista and Win7.  Believe it or not, the version of Windows does matter...for example, Windows 2003 and 2008 do not, by default, perform application prefetching (although they can be configured to do so).  With Windows XP, the searches a user executed from the desktop were recorded in the ACMru key; with Vista, the searches were NOT recorded in a Registry key (they were/are maintained in a file); with Windows 7, the search terms are maintained in the WordWheelQuery key.
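
As a quick illustration of why the version matters for your tooling as much as for your questions, here's a minimal sketch (assuming the Parse::Win32Registry module and an NTUSER.DAT hive exported from a Windows 7 system) that pulls the desktop search terms from WordWheelQuery; point it at an XP user's hive and the key simply won't be there, because those searches live in ACMru instead:

#!/usr/bin/perl
# wordwheel.pl - minimal sketch: dump Windows 7 desktop search terms from the
# WordWheelQuery key in a user's NTUSER.DAT hive.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift @ARGV or die "Usage: wordwheel.pl <NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Not a Registry hive: $hive\n";
my $root = $reg->get_root_key;

my $path = 'Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\WordWheelQuery';
my $key  = $root->get_subkey($path)
    or die "WordWheelQuery not found...is this really a Win7 NTUSER.DAT?\n";

foreach my $val ($key->get_list_of_values) {
    my $name = $val->get_name;
    next if $name =~ /^MRUListEx$/i;    # ordering data, not a search term
    my $data = $val->get_data;
    next unless defined $data;
    $data =~ s/\x00//g;                 # crude Unicode (UTF-16LE) to ASCII strip
    print "$name: $data\n";
}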

Still not convinced?  Try analyzing a Windows 7 memory dump with Volatility, but don't use the Windows 7 profile.  

So, if you're asking a question that has to do with file access times, then the version of Windows is very important...because as of Vista, updating of last access times on files is disabled by default.  This functionality is controlled by a Registry value, which means that it can also be disabled on Windows XP systems.
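
And rather than relying on the default, you can check the setting on the system you're examining; here's a similar sketch (again assuming Parse::Win32Registry and a System hive exported from the image) that resolves the current ControlSet and reports the NtfsDisableLastAccessUpdate value:

#!/usr/bin/perl
# lastaccess.pl - minimal sketch: report the NtfsDisableLastAccessUpdate setting
# from a System hive exported from the system being examined.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift @ARGV or die "Usage: lastaccess.pl <System hive>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Not a Registry hive: $hive\n";
my $root = $reg->get_root_key;

# Resolve the current ControlSet via the Select key
my $select  = $root->get_subkey('Select') or die "Select key not found\n";
my $current = $select->get_value('Current') or die "Select\\Current value not found\n";
my $ccs     = sprintf "ControlSet%03d", $current->get_data;

my $fs  = $root->get_subkey("$ccs\\Control\\FileSystem")
    or die "$ccs\\Control\\FileSystem key not found\n";
my $val = $fs->get_value('NtfsDisableLastAccessUpdate');

if (defined $val) {
    printf "NtfsDisableLastAccessUpdate = %d (1 means last access updates are disabled)\n",
        $val->get_data;
} else {
    print "Value not present; the OS default applies (disabled on Vista and later)\n";
}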

I also see a number of questions referring to various applications, many of which are specific to P2P applications.  Different applications behave differently...so saying, "I'm doing a P2P investigation" doesn't really provide much information if you're looking for assistance.  I mean, who's going to write an encyclopedic if/then loop with all of the possibilities?  Not only is the particular application important, but so is the version...for the same reasons that the OS version is important.  I've dealt with older versions of applications, and what those applications do, or are capable of doing, can be very important to an investigation...that is, unless you're planning to fill in the gaps in your investigation with speculation.

In short, if you've got a question about something, be sure to provide relevant background information regarding what you're looking at...it can go a long way toward helping someone answer that question and provide you with assistance.


Tools
I've started a new page for my blog, listing the FOSS forensic tools that I find, come across, get pointed to, and use.  It's a start...I have a good deal of catching up to do.  I've started listing the tools, and provided some descriptions...I'll be updating the tools and descriptions as time goes on.  This is mostly a place for me to post tools and frameworks so that I don't have to keep going back and searching through my blog for something, but feel free to stop by and take a look, or email me a tool that you like to use, or a site with several tools.

Endorsements
One final thing...and this is for Mr. Anonymous, who likes to leave comments on some of my blog posts...I get no benefit, monetary or otherwise, from my comments on or endorsement of Volatility, nor of DFF...or any other tool (FOSS or otherwise), for that matter.  I know that in the past, you've stated that you "...want to make sure that it is done with the right intentions".  Although you've never explicitly stated what those intentions are, I just wanted to be up front and clear...I have used these tools, and I see others discovering great benefit from them, as well...as such, I think that it's a great idea to endorse them as widely as possible, so that others don't just see the web site, but also see how they can benefit from using these tools.  I hope that helps.