Tuesday, June 28, 2011

Meetup, Tools and other stuff

Meetup
Just a reminder about our next NoVA Forensic Meetup, scheduled for 6 July.  Tom Harper will be presenting, and he's posted his slides on his site.  Take a look at his slides, and download them if you want to bring them along.  Also, taking a look at them ahead of time will make the presentation itself go a bit more smoothly, and leave more time for questions and interaction.

Location: ReverseSpace, 13505 Dulles Technology Drive, Herndon, VA.

Time: 7:00 - 8:30pm

Review
Corey Harrell posted his review of DFwOST.  Thanks for the review, Corey, and for taking the time to put your thoughts out there for others to see.

MBR
I've mentioned previously that I include a check for MBR infectors in my malware detection process, and I even have it on my checklist.  I discussed this recently with Chris Pogue ("@cpbeefcake" on Twitter), and he pointed me to the mbrparser.pl Perl script that Gary Kessler makes available via his site (the script can be found in the Boot Record Parsers archive).  Gary's mbrparser.pl script does more than just display the MBR; it also parses the partition table in a very readable manner, providing a bit more information than mmls from the TSK tools.
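For reference, the partition table lives at offset 0x1BE of sector 0 and holds four 16-byte entries; a minimal Perl sketch (not Gary's code, just an illustration of the on-disk layout) would look something like this:

#!/usr/bin/perl
# Minimal sketch: parse the four primary partition table entries from
# sector 0 of an image file.  This is not mbrparser.pl; it's just an
# illustration of the on-disk layout.
use strict;
use warnings;

my $file = shift || die "Usage: $0 <image file>\n";
open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);
read($fh, my $sector, 512) == 512 || die "Short read\n";
close($fh);

printf "Boot signature: 0x%04X\n", unpack("v", substr($sector, 0x1FE, 2));

foreach my $i (0..3) {
  my $entry = substr($sector, 0x1BE + ($i * 16), 16);
  my ($flag, $type, $lba, $count) = unpack("C x3 C x3 V V", $entry);
  printf "Entry %d: flag = 0x%02X  type = 0x%02X  start LBA = %-10u  sectors = %u\n",
    $i, $flag, $type, $lba, $count;
}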

My own Perl script is used for something other than just displaying the MBR, and it doesn't parse the partition table...instead, it looks for sectors that are not all zeros, and will either list those sectors (in summary mode) or display their contents (in full mode).  The thought process behind this approach is that many of the identified MBR infectors will modify the MBR with a jmp instruction, and copy the original MBR to another sector.  Some may even include executable code within sectors.  This is particularly useful when you have an image acquired from a physical disk and the first active partition is located at sector 63; usually, I will see sectors 0 and 63 as non-zero, and maybe one or two other sectors.  Running the script in summary mode gives me an idea of how many sectors I will need to look at, and then running it in full mode gives me something that I can redirect to a file and even include in my report, should the need arise.  Even if nothing is found, that's useful, too...because now my malware detection process includes an additional step beyond simply, "I ran an AV scanner against the mounted image...".
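A stripped-down sketch of the general idea (not the actual script, just the approach) might look something like this:

#!/usr/bin/perl
# Sketch of the approach described above: walk the sectors ahead of the
# first partition (0 through 62, by default) and report any that are not
# all zeros.  This is just the idea, not a production script.
use strict;
use warnings;

my $file    = shift || die "Usage: $0 <image file> [sectors]\n";
my $sectors = shift || 63;

open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);

foreach my $s (0 .. $sectors - 1) {
  seek($fh, $s * 512, 0);
  read($fh, my $data, 512) || last;
  next if (($data =~ tr/\x00//) == length($data));   # all zeros; skip it
  printf "Sector %d is not all zeros\n", $s;
  # "Full" mode would hex-dump $data here, to be redirected to a file
  # and included in a report if need be.
}
close($fh);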

Here are some links on disassembling the MBR:
MBR disassembly
More MBR disassembly

Also be sure to check out Gary's page for other utilities, particularly if you're into SCUBA.

Speaking of MBR infectors, a recent MS Technet blog post mentioned a new MBR infector/bootkit called "Trojan:Win32/Popureb.E", which is apparently an update to previous versions of the identified malware, in that it infects the MBR.  Unfortunately, MS's recommendation is to fix your MBR using your recovery disk - I say "unfortunately" because without a root cause analysis, what is the likelihood that you "fix" the system, only to have the bad guy un-fix your fix?  Pretty high.  I've seen other MBR infectors get delivered via Java exploits...so if you don't figure out how the MBR got infected, how are you going to protect your infrastructure?  I'd suggest that the steps should be more along the lines of acquiring memory and an image of the physical disk; then, if you must re-provision the system, clean the MBR as suggested, but tag the system as such, and put a high priority on determining an overall fix.

Tools
Nick Harbour of Mandiant maintains a page with a couple of interesting tools, including nstrings and pestat.  Nick's also posted some FileInsight plugins, as well as interesting presentations, on that same page.

The Malicious Streams site includes a Python-based LNK file parser (as well as some interesting MacOSX-specific tools).

David Kovar has moved his analyzeMFT tool to a new location; take note, and keep an eye out for new tools/libraries...

What is "APT"?
Jason Andress published an article for ISSA about APT, what it is, and what the threat really means.  While I do think that the article gets a little too mired in the origins and meaning of the term "APT", it does make a very valid point...regardless of what side of the fence you're on, APT is a very real threat.  Jason makes some very important points with respect to defending against this threat, but he mentions tools without specifically identifying any particular tools to use.  I think that in doing this, he makes a point...that is, there is no security silver bullet, and one tool (or combination of tools) may work very well for one organization, but not at all for another.

I think that the most important point that Jason makes regarding defense is the idea of "defense-in-depth".  Now, I've seen where some have suggested that this doesn't work...but the fact of the matter is that you can't deem something a failure if it was never employed.

Jason mentions "security awareness"...this is important, but it has to begin at the top.  If you spend the time and effort to train your employees and raise security awareness, but your C-level executives aren't in attendance, how effective do you think that training will be?  If it's really that important, wouldn't the boss be there?  I attended two days of manager-and-above training for an employer, which was kicked off with a message from the CEO.  And guess what...every time I looked around throughout the two days, I saw the CEO there.  More than anything else, that told me that this stuff was important.  Security has to be top-down.

Something not mentioned in Jason's article is that many of the things he discusses are either directly or indirectly covered in most of the legal and regulatory compliance standards.  Testing...training...having the right tools in place...many of these map directly to the PCI DSS.  Even defense-in-depth, when you include the computer security incident response team (CSIRT), maps to paragraph 12.9 of the DSS.

Jump List DestList Structure
Jump lists are one of those new things in Windows 7 that many analysts are trying to wrap their heads around.  Jump lists provide similar functionality to the user as shortcuts in the Recents folder, as well as the RecentDocs Registry key, albeit in a different manner.  First, jump lists come in two flavors: Automatic and Custom destinations.  I'm not going to go into great depth on jump lists here, but from a binary point of view, custom destinations are apparently SHLLNK structures run together consecutively, one after another, whereas Automatic destinations are streams within an OLE container (that is, each *.automaticdestination-ms file is an OLE container).  Each of the streams within the container is numbered (with the exception of one called "DestList"), and follows the SHLLNK structure. You can open the automatic destinations jump list using the MiTeC Structured Storage Viewer, extract each numbered stream, and view the contents of the stream in your favorite LNK viewer.

There is very little (i.e., no) information available regarding the structure of the "DestList" stream, but looking at it in a hex editor seems to indicate that it may serve as a most recently used (MRU) or most frequently used (MFU) list.  We know that the SHLLNK structure in the numbered streams contains time stamps, but from the structure definition we know that those are the MAC times of the target file.  So how does Windows 7 know how to go about ordering the entries in the jump list?

Well, I spent some time with some jump lists, a hex editor and a number of colored pens and highlighters, and I think I may have figured out a portion of the DestList structure.  First, each DestList structure starts with a 32-byte header, which contains (at offsets 0x03 and 0x18) an 8-byte number that corresponds to the number of numbered streams within the jump list.

Following the header, each element of the structure is 114 bytes, plus a variable-length Unicode string.  The table below presents those items within each element that I've been able to extract on a consistent basis, albeit via limited testing.

Offset  Size      Description
0x48    16 bytes  NetBIOS name of the system; padded with zeros to 16 bytes
0x58    8 bytes   Stream number; corresponds to the numbered stream within the jump list
0x64    8 bytes   FILETIME object
0x70    2 bytes   Number of Unicode characters in the string that follows

While the information represented in the above table clearly isn't all that's available, I do think that it's a good start.  Once further testing is able to determine what the FILETIME object represents, this is enough information to allow for inclusion of jump lists into timelines.
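To make the table a bit more concrete, here's a minimal sketch of walking a DestList stream that has already been exported from its OLE container (say, via the MiTeC viewer), using only the offsets above; treat it as illustrative, since the structure is still being worked out:

#!/usr/bin/perl
# Minimal sketch of walking an exported DestList stream.  Offsets come
# straight from the table above; this is illustrative, not authoritative.
use strict;
use warnings;

my $file = shift || die "Usage: $0 <exported DestList stream>\n";
open(my $fh, '<', $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

my $ofs = 32;                                    # skip the 32-byte header
while ($ofs + 114 <= length($data)) {
  my $entry     = substr($data, $ofs, 114);
  my $netbios   = unpack("Z16", substr($entry, 0x48, 16));
  my $stream    = unpack("V",   substr($entry, 0x58, 4));   # low DWORD only
  my ($lo, $hi) = unpack("VV",  substr($entry, 0x64, 8));   # FILETIME
  my $strlen    = unpack("v",   substr($entry, 0x70, 2));
  my $str       = substr($data, $ofs + 114, $strlen * 2);
  $str =~ s/\x00//g;                             # crude UTF-16LE to ASCII
  # Convert the FILETIME (100-ns intervals since 1601) to a Unix epoch value
  my $epoch = int(($hi * 4294967296 + $lo) / 10000000) - 11644473600;
  printf "%-24s  stream %-4d  %-16s  %s\n",
    scalar gmtime($epoch), $stream, $netbios, $str;
  $ofs += 114 + ($strlen * 2);
}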

Monday, June 27, 2011

Links and Updates

CyberSpeak
Ovie's got a new podcast up; be sure to check it out. In this podcast, Ovie interviews John Goldfoot, lawyer and author of the new paper, The Physical Computer and the 4th Amendment, published in the Berkeley Journal of Criminal Law.  The overall idea of the paper is to no longer view the computer as a system of containers, but simply as a container in itself.

Ovie's podcasts are great listening and very educational.  If you're one of those people who has to have a bit of coffee before becoming coherent in the morning, the CyberSpeak podcast is a great accompaniment for your Starbucks.

I especially like John's reference to the dog that did not bark, a Sherlock Holmes reference to the absence of evidence that should be present if something had happened, thereby suggesting that it did not happen.  This is very similar to something I've said and published in my books for a long time...the absence of an artifact where you would expect to find one is, itself, an artifact.

Take away from the podcast...don't use the S-word!

Anti-Malware Tools of Note
Klaus is back, with a great post pointing out anti-malware tools of note!  I've always found Klaus's posts to be chock full of great information about the various tools that he finds and, in many cases, has tried and reviewed in his own sort-of-Consumer-Reports fashion.  I find posts like this to be very useful, particularly as I tend to get a lot of "...we think this computer has/had malware on it; find it..." cases, and as such, I have what I believe to be a comprehensive process, in a checklist, that includes (but is not limited to) running AV scans.  That process specifically includes steps that require me to check the timeline of the system to determine which AV scanner(s), if any, have already been run on the system.  Many times, the system will have a fairly up-to-date scanner (Symantec or McAfee) running (look for Application Event Log records, AV logs, etc.), and in a good number of cases, I've found that an Administrator had also installed and run Spybot Search & Destroy, and maybe even the MalwareBytes Anti-Malware scanner.  If none of the tools that have already been used on the system found any threats, then it's incumbent upon the analyst NOT to simply reuse those tools; after all, if scanner A didn't find anything, how much value are you really providing to the customer by running your copy of scanner A against the mounted image and finding no threats?

Compromises
I recently ran across this little gem, which states some pretty interesting "statistics".  In this case, I use quotes around that last term, as I'm not entirely sure that something wasn't edited out of the article prior to publication.  For example, the biggest issue I see with the article is that I don't understand how you can survey 583 businesses, and then extrapolate their responses to "90% of US businesses"; this suggests that the 583 respondents, without any indication of their makeup, are a representative sampling of all US businesses.  The article does point to this Ponemon Institute report.

Based on the survey results, there seem to be some interesting, albeit not surprising, findings...

"...59% of respondents claimed that the worst consequences of the attacks were theft of information, followed by business disruption in second place."

This is interesting, as more and more organizations seem to be coming to this conclusion, that data theft is an issue.  It wasn't so long ago that data theft was something of an afterthought for many of the organizations I encountered.

Another interesting quote from the article:

"Employee mobile devices and laptops are believed to be the most likely entry point through which serious attacks are unleashed."

I found this particularly interesting, because as a responder, in most of the cases I've worked, there is very often little done to attempt to determine the "most likely entry point".  I'm guessing that terms such as "believed" and "most likely" are intended to convey speculation, rather than assertions supported by fact.

One particular quote stood out from the article; Dr. Larry Ponemon reportedly stated, “conventional network security methods need to improve in order to curtail internal and external threats.”

Now, I realize that this is likely taken out of context, but "conventional network security methods" are likely not those things that security professionals have been harping on for years; these security methods are most likely what the respondents have in place, not what they should have in place.  The fact is that many organizations may not have the ability to detect and effectively respond to incidents, which not only makes the threats appear to be extremely sophisticated, but also ends up costing much more in the long run.

The Value of Data
My last blog post, in which I asked, who defines the value of data, generated some interesting comments and discussion.  Based on the discussion and further thought, I'm of the opinion that, right or wrong, and due to several factors, the value of data is entirely subjective and most often up to the analyst.

Think about it...how do most analysis engagements start?  Ultimately, regardless of where or how the overall engagement originates, an analyst is presented with data (an acquired image, logs, etc.) and a set of requirements or goals.  Where did these goals come from?  Many times, the "customer" (business organization, investigator, etc.) has stated their goals, from their perspective, with little to no translation or understanding of/expertise in the subject (beyond what they've read in the industry trade journal du jour, that is...).  These are then passed to the analyst via a management layer that often similarly lacks expertise, and the analyst is left to discern the goals of the analysis without any interaction or exchange with the "customer" or "end user". 

Now, consider the issue of context...who determines the contextual value of the data?  Again...the analyst.  Isn't it the analyst who needs to know about the data in the first place?  And then isn't it the analyst who presents the data to someone else, be it a "customer" or "end user"?  If the analyst's training has made them an expert at calculating MFT data runs, but that training didn't address the Windows Registry, then who determines the contextual value of all of the available data if not all of the data is analyzed and/or presented?

No, I'm not busting on analysts, not at all...in fact, I'm not making derogatory remarks about anyone.  What I am suggesting is that we take a look at the overall DFIR process and consider the effect that it has on studies and surveys such as the one mentioned earlier in this post.  Incidents of all kinds are clearly causing some pretty significant, detrimental effects on organizations, but IMHO and based on my experience over the years, as well as talking with others in the industry, it's pretty clear to me that incidents and compromises are becoming more targeted and focused, and that the battle that ensues is between the attacker and the target's management.

Earlier this year, I read and wrote a review for CyberCrime and Espionage, by John Pirc and Will Gragido.  Two statements I made in that review are:

What is deemed "adequate and reasonable" security is often decided by those with budgeting concerns/constraints, but with little understanding of the risk or the threat.

Compliance comes down to the auditor versus the attacker, with the target infrastructure as the stage.  The attacker is not constrained by a specific compliance "standard"; in fact, the attacker may actually use that "standard" and compliance to it against the infrastructure itself.

I believe that these statements still apply, and do so, in fact, quite well.  However, I'd like to combine them into one: a data breach comes down to management versus the attacker.  Attackers are neither concerned with nor constrained by budgets.  Attackers pry the cover off of technologies such as Windows XP, Windows 7, Windows Media Player, Adobe Reader, etc., and dig deep into their internal workings, knowing that their targets don't do the same.  Management determines the overall security culture of their organization, as well as the use of resources to implement security.  Management also determines whether their organization will focus on protecting their data, or simply meeting the letter of the law with respect to compliance.

Don Weber, a former US Marine, recently tweeted, "In their minds #LULZSEC ~== WWII French Resistance".  That's very appropriate...when the Germans invaded France, they didn't know the lay of the land the way the resistance fighters did; sure, they could navigate from point to point with a compass, and move in force with tanks and troops, but the resistance fighters knew the land, and knew how to move from point to point in a stealthy manner, remaining out of sight by using the terrain to their advantage.

Tool Update
John the Ripper, a password cracker for Unix, Windows, DOS, BeOS, and OpenVMS, has seen an update intended to make it a bit faster with respect to cracking passwords.  There's a great deal of information available in the OpenWall wiki for the tool.

Selective Imaging
I ran across a reference to this thesis paper from Johannes Stüttgen via Twitter this morning.  The paper is well worth reading, and very informative, but at the same time, I can say from experience that there are a LOT of responders and analysts out there (myself included) who have already gone through the same sort of thought process and come up with our own methods for addressing the same issues raised by the author.  Now, that is not to say that the paper doesn't have value...not at all.  I think that, having read through it, the paper is going to provide something for everyone; for example, it presents a thought and exploration process to the new analyst who doesn't have confidence in his or her own abilities and experience.  We often work in isolation, on our teams, and may not have someone more experienced to "bounce" things off of, and if you don't feel that you can reach out to someone else in the community, this paper is a great resource.  I also think that the paper may open the eyes of some more experienced analysts, providing insights into things that may not have been considered during the development of their own processes.

Wednesday, June 22, 2011

Defining "Forensic Value"

Who defines "forensic value" when it comes to data?  How does a string of 1s and 0s become "valuable", "evidence" or "intelligence"?

These are questions I've been asking myself lately.  I've recently seen purveyors of forensic analysis applications indicate that a particular capability has recently been added (or is in the process of being added) to their application/framework, without an understanding of the value of the data that is being presented, or how it would be useful to a practitioner. Sure, it's great that you've added that functionality, or that you will be doing so at some point in the very near future, but what is the value of the data that the capability provides, and how can it be used?  Do your users recognize the value of the data that you're providing?  If not, do you have a way of educating your users?

I was also thinking about these questions during my presentation at OSDFC...I was talking about extending RegRipper into more of a forensic scanner, and found myself looking out across a sea of blank stares.  In fact, at one point I asked the audience if what I was referring to made sense, and the only person to react was Cory.  ;-)  As a practitioner, I believe that there is significant value in preserving and sharing the collective knowledge and experience of a group of practitioners.  I believe that being able to quickly determine the existence (or absence) of a number of artifacts and removing that "low hanging fruit" (i.e., things we've seen before) is and will be extremely valuable.  Based on the reaction of the attendees, it appears that Cory and I may be the only ones who see the value of something like this.  Does something like this have value?

Also, at the conference, there were a number of academics and researchers in attendance (and speaking), along with a number of practitioners.  Speaking to some of the practitioners between sessions and after the conference, there was a common desire to have more practical information available, and possibly even separate tracks for practitioners and developers/academics.  There seemed to be a common feeling that while developing applications to parse data and run on multiple cores was definitely a good thing, this only solved a limited number of issues and did not address issues that were on the plates of most practitioners right now.  It would be safe to say that many of the practitioners (those that I spoke with) didn't see the value in some of the presentations.

One example of this is bulk_extractor (previous version described here), which Simson L. Garfinkel discussed during the conference.  This is a tool (Windows EXE/DLLs available) that can be run against an image file, and it will extract a number of items by default, including credit card numbers and CCN track 2 data, and includes the offset to where within the image file the data was found.  Something like this may seem valuable to those performing PCI forensic exams, but one of the items required for such exams is the name of the file in which the credit card number/track data were located.  As such, where a tool like bulk_extractor might have the most value during a PCI forensic exam is if it were run against the pagefile and unallocated space extracted from the image.  Even so, using three checks (Luhn formula, length, and BIN) only gives you the possibility that you've found a CCN...we found that there are a lot of MS DLLs with embedded GUIDs that appear to be Visa CCNs, even passing all three checks.  In this case, there is some value in what Simson discussed, although perhaps not at its face value.
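For reference, the Luhn (mod 10) check is simple enough to sketch in a few lines of Perl; this isn't bulk_extractor's code, just an illustration of why a hit that passes the check still needs the length and BIN tests (and, really, file-level context):

# Minimal, stand-alone sketch of a Luhn (mod 10) check; just the math
# behind one of the three tests mentioned above.
sub luhn_ok {
  my $num = shift;
  return 0 unless ($num =~ m/^\d+$/);
  my ($sum, $dbl) = (0, 0);
  foreach my $d (reverse(split(//, $num))) {
    if ($dbl) {
      $d *= 2;
      $d -= 9 if ($d > 9);
    }
    $sum += $d;
    $dbl = !$dbl;
  }
  return (($sum % 10) == 0);
}

print luhn_ok("4111111111111111") ? "passes\n" : "fails\n";   # standard test number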

As a side note, another thing you might want to do before running the tool is to contact Simson and determine which CCNs the tool searches for, to ensure that all of the CCNs covered by PCI are addressed.  When I was doing this work, we had an issue with a commercial tool that wasn't covering all the bases, so to speak...so we rolled our own solution.

Recently, I began looking at Windows 7 Jump Lists, and quickly found some very good information about the structure of both the automatic and custom "destinations" files.  One thing I could not find, however, was information regarding the structure of the DestList stream located in the automatic destinations file; to me, this seemed to be of particular value, as the numbered streams followed the MS-SHLLNK file format and contained MAC time stamps for the target file, but nothing about what led to the creation of the stream in the first place.  Looking at the contents of the DestList stream in a hex editor, and noticing a number of familiar data structures (FILETIME, etc.), it occurred to me that the DestList stream might act like a most recently used (MRU) or most frequently used (MFU) list.  More research is needed, but at this point, I think I may have figured out some of the basic elements of the DestList structure; so far, my parsing code is consistent across multiple DestList streams, including from multiple systems.  As a practitioner, I can see the value in parsing the jump list numbered streams, and I believe that there may be more value in the contents of the DestList stream, which is why I pursued examining this structure.  But again...who determines the value of something like this?  The question is, then...is there any value to this information, or is it just an academic exercise?  Simply because I, as a practitioner, look at some data and believe that it is valuable, is that a universal assignment, or is it solely my own purview?

Who decides the forensic value of data?  Clearly, during an examination the analyst would determine the relative value of data, perhaps based on the goals of the analysis.  But when not involved in an examination, who decides the potential or relative value of data?

Monday, June 20, 2011

Awards

Not long ago, at the SANS Forensic Summit, I received two awards during the Forensic 4Cast Awards ceremony.  I was not able to attend this summit, but got to catch part of it on LiveStream.  Cory was kind enough to drop the pictured awards off the following week when he came to NoVA for OSDFC.

The top award is for Outstanding Contribution to Digital Forensics - Individual, and the lower one is for Best Digital Forensics Book, which I'm told applies to Windows Registry Forensics.

I'd like to thank everyone involved for these honors; the folks who nominated me, those who voted for me, as well as Lee Whitfield for setting up the awards, and Rob Lee and SANS staff for arranging and conducting the summit.

I'm not entirely sure what I did to receive these awards, but I'm very grateful and appreciative to the community and to everyone involved in the Forensic 4Cast awards process.

Links and Updates

Reviews
Eric Huber posted an excellent review of the DFwOST book that Cory wrote and I assisted on (full disclosure: I'm a minor co-author of the book).  Eric has even added DFwOST to his "Learn Digital Forensics" list on Amazon, along with WFA 2/e, WRF, and others.  Eric's list is intended to help others figure out which books to get if they're...well...interested in learning digital forensics.

Here's another review from Little Mac (aka Frank), who apparently went about the review process the old-fashioned way...by purchasing the book first.

Jesse Kornblum, the man behind tools such as md5deep and ssdeep (and truly a man with the gift of beard), has kindly posted his review of DFwOST, as well. It's short and sweet, although no less appreciated.  As I was reading the review, however, I got to the third paragraph, which started, "There was something I felt was missing from the book."  I read this with great interest, as I've written a number of books myself, and I'm working on one now...so anything I could take away from what Jesse said to improve my book, I was anxious to read.  Jesse went on to mention getting free support or work from someone you've never met...that is, the person or people maintaining the open source (and very often free) tool that you're using.

Procedures
Speaking of Jesse, one of the great things he's produced (perhaps beyond his papers and tools) is his "Four Rules for Investigators" blog post.  If you've never seen it, check it out.  Then think about this list in relation to what you do.  Yes, I know that there are many of us out there who say, "I do all of them...every time!"  I've heard folks say this...and then a month later the same folks have said, "I was working on an exam and 'lost' the image"...apparently having forgotten #4.

#3 is a big one, because most of us simply don't do it.  Examiners will often say, "I don't know what standard to write to..." as a reason for NOT taking notes.  But the fact is...you do.  What is the purpose for taking the notes?  It's so that you can come back a year later, after having done a couple of dozen other investigations, and clearly see what you did.  But...it's also so that if you get hit by a bus leaving work one day, another examiner can pick up your notes and see exactly what you did.  In short, if you didn't document it, it didn't happen.

Thoughts on Conferences
I recently attended (and spoke at) OSDFC, and while I was there, I had a chance to meet and speak with a lot of great folks from the community.  While doing so...and while talking to folks after the conference...there seemed to be a common theme with regard to the talks and presentations at conferences, in general.  OSDFC is interesting because there are talks by developers and academics, as well as by practitioners; in some cases, the developer who is speaking is him- or herself a practitioner, as well.  The common theme seemed to be that there's a desire for more meaty presentations; that is, give someone a 30 minute time frame in which to discuss the problem or obstacle that they encountered, and their thought process/reasoning for selecting the solution at which they ultimately arrived.

Don't get me wrong...I thoroughly enjoyed OSDFC, and I greatly appreciate the effort that the organizers and presenters put into making the conference a success.  Brian asked at the end of the conference what the attendees thought could be improved for next year, and ended up saying that he'd just flip a coin.  I'm going to go out on a limb and say that there need to be two tracks...one predominantly focused toward developers and academics, and the other toward practitioners.  This doesn't mean that someone can't present in either track, or attend either track...but I met a number of LE examiners whose eyes glassed over when an academic talked about running a tool on 4, 16, then 64 cores; sure, that's cool and all, but how does that help me put bad guys in jail?

So, again...I'm not saying that there need to be conferences that focus on one side of the equation to the exclusion of the other.  In fact, it's quite the opposite...I think that all aspects of the community are important.  However, listening to folks after the presentations, there was usually some "how" that most wished the presenter had discussed in a bit more detail.

This is an approach that I'd like to take with the NoVA Forensics Meetup.  At our next meeting on 6 July, Tom Harper will be giving a presentation, and I'm sure that it will be a great success.  Going forward, I'd like to have shorter talks, perhaps two each meeting, running about 20-30 min each.  To do this, I'd like to ask the folks attending (and even some of those not attending) to offer up ideas for what they'd like to hear about...and for some folks to step up and give presentations.

Tools
Along the lines of the above thoughts on presentations, one of the things about the free and open source tools that are available is that they're out there...and that's the problem, they're just out there.  Now, I'm NOT saying that we need to be inundating each other with links to sites for tools...what I am saying is that we should take advantage of the sites that we do have available for linking to the tools, as well as for providing use cases and success (or failure) stories regarding these tools.

For example, in addition to the ForensicsWiki site, we also have the ForensicArtifacts.com site.  One of the benefits of conferences such as OSDFC is that, if you can attend, you can find out about tools that perhaps you didn't know about, and see how someone has used them to achieve a goal or overcome an obstacle in their investigation.

So...there are some great tools out there and available, but we need more folks within the community to pass along their impressions and encounters with the tools.  It's great that we have folks within the community who blog, but very often, what's posted is missed because most of us either don't know about their blog, or simply don't have time to keep up with our RSS feeds.  There needs to be a way to make this information available without having to wait for, or attend a conference.

CyberSpeak
Ovie's back with a new CyberSpeak podcast, which includes an interview with the Kyrus Tech guys regarding their Carbon Black product.  If you've never heard of Carbon Black, you should really take a listen to the interview and hear how the tool is described.  I've looked at Cb, but any description I could provide wouldn't do justice to the interview.  Great job to Ovie for yet another great podcast, and the Kyrus Tech guys for the interview and the information about Cb.

Thursday, June 16, 2011

Thoughts on IR

Those of us in the security community like to share ideas through analogy; I'm sure that's to convey technical issues in an understandable (to others) manner.  As a former military member, there are a number of models that I use and refer to in analogies, particularly when communicating with other former military members.

Along those lines, something struck me the other day...the Internet is very much like the ocean, and organizations connected to the Internet are like ships on the ocean.  For the most part, ships (and submarines) are designed to do well on the ocean, within their limitations.  However, like the Internet, there are risks involved and potentially negative events (attacks, etc.) can originate from anywhere (above, or on or below the surface of the ocean, and from any direction) at any time.  There are a lot of events that occur on the Internet all the time, and many have no effect at all on organizations, both large and small.  Some only affect smaller organizations, while larger organizations are not affected at all by these events.

However, the Internet is not the only origination point for negative events.  Devastating events can also originate internally within a ship, just as they can within an organization or company.  As such, internal and external threats are well understood by the captain, and that understanding is subsequently conveyed to the crew.  So, not only do ships have things like radar, sonar and manned watches to protect them from external threats, but there are internal monitors, as well...gauges to monitor pressure and flow rates in pipes, etc.  There are also crew members who monitor these gauges, and keep track of the state of various functions aboard the ship at all times.  Minor events can be detected and addressed early, before they become major incidents, and more significant events are detected and the appropriate individuals warned.

Another thing to consider is this...the risks of operating on the ocean are well understood, and that understanding has guided the construction of the ships themselves.  US Navy ships have, among other things, watertight compartments that can be sealed, preventing fire or flooding (the two primary threats to most ships at sea) from expanding.  For an example of this, consider the sinking of the RMS Titanic (read the first paragraph of the Collision section) versus the bombing of the USS Cole - even with a hole in the hull at the waterline, the Cole did not sink.

Even though all of these risks are understood and planned for, Navy ships still have damage control (DC) teams.  These are members of the crew with regular jobs on the ship, but they are also trained to respond effectively when an incident occurs.  That's right...most naval personnel get some training or familiarity with damage control and what it takes, but there are individuals specifically designated with DC duties.  The leaders and members of the DC teams are identified by name, they all have specific responsibilities, and they also understand each other's responsibilities...not to critique what the other team members do, but to understand where each team member fits in the response process, and to be able to take over another's role, if necessary (due to injury, etc.).  These teams have designated, pre-staged equipment and conduct regular training drills, the idea being that a missile or torpedo strike against a ship, or even a fire breaking out in the galley, doesn't necessarily wait for the most opportune moment for the crew...as such, the DC team must be able to respond under the worst of conditions.

The purpose of the DC team is to control the situation and minimize the effect of the incident on the health and operation of the ship and its crew.  The Executive Officer (second-in-command) of the ship is usually the person responsible to the captain for the training of the DC team, while the Engineering Officer is usually designated as the damage control officer (ref).

So, where does this model fit in with today's organizations?  Do the risks of operating interconnected IT equipment appear to be understood?  Who within the organization is the DC team leader?  Who are the members of the DC team, and how often do they drill?  Perhaps more importantly, what type of monitoring is in place?  Where are the gauges, and who monitors them? Sure, these may be questions asked by some guy with a blog, but they're also asked by those responsible for assessing regulatory compliance, be it the PCI DSS (para. 12.9 specifies the requirement for an IR team), HIPAA, FISMA, NCUA, etc.  Further, state notification laws (what are we up to...46 states with notification laws at this point?) such as California's SB-1386 imply a response capability; after all, how would the required questions be answered without someone to get the answers?

Thoughts?  Does the DC team model fit?

Wednesday, June 15, 2011

OSDFC Follow-up

I had the honor and privilege of speaking at OSDFC yesterday, and wanted to provide something of a follow-up or review of how the conference went.  But first, I want to thank everyone involved in setting up and arranging the conference, as well as presenting and even just attending the conference, for making this event a real success.

This is the second time that Brian has held the conference, which is run in conjunction with a conference put on by his company, Basis Technology, for their government customers.  This year, Brian said that there were about 160 attendees, and from what I saw, we had folks from different parts of the DFIR community...private industry, public sector, LE, etc.  There were even international speakers and attendees.

The format of the conference was to have a series of talks in the morning, and then split off into two tracks after lunch.  After three presentations in each track, we went back to everyone meeting in one room for another presentation (from Cory Altheide), finishing up with several lightning talks.  This seemed to work very well, but having been to several conferences over the years, I've found that any presentation that's longer than 20-30 minutes and doesn't directly engage the audience is going to hit a bit of a slow-down right around the 30 minute mark.  At the end of the conference, Brian did ask some questions with respect to the format for next year's conference, and I'll return to some thoughts on that at the end of the post.

Rather than going over each presentation that I attended individually, I want to say that they were all excellent, and I want to thank everyone...presenters as well as the organizers...for the time and effort they put into this conference.  Also, one of the things that really makes these conferences a success, and is often overlooked, is the people within the community who, while not presenting, come to attend the conference.  Being able to interact with your peers in the community, engage and exchange ideas (or even just some jokes) is one of the biggest benefits of events like this...so a huge thanks to everyone who drove or flew in to attend the conference!

During the conference, there were a number of presentations that mentioned JSON, from Jon Stewart discussing scripting with the TSK tools, all the way through to Cory's "Making it Rain" presentation, where he talked about browser artifacts (and we all sang "Happy Birthday" to his daughter!).  So, for what it's worth, this is likely going to be part of a LOT of examinations.

As I attended various presentations, it occurred to me that the common theme of the presentations seemed to be, "I had a problem and here's how I solved it with open source tools".  Now, this isn't specifically about using an open source tool that is already available, although there were a number of presentations that did just this (Cory Altheide of Google, and Elizabeth Schweinsberg, soon to be of Google).  There were other presentations (my own, etc.) that discussed creating something open source to solve a problem.  Like Brian said, last year's conference was a "call for frameworks", and this year's conference was an answer to that call, as a number of frameworks were described.

Now, conference format.  While I have an academic background (I earned my MSEE from NPS), I'm more of a practitioner or engineer.  There seem to be two types of folks who are drawn to conferences like this...practitioners and academics.  Now, I'm not presenting this as a division within the community, because...well...I don't see it that way.  To be honest, we need both within the community.  At the conference, we had presentations from academics who were looking at solving some pretty big problems, and I was very thankful to see them taking this on.  I really think that there is a lot of benefit in doing so.  However, when it comes to how the presentation is viewed, practitioners and academics look at things differently.  Academics ask questions about testing corpus, parallelization, and will the application scale from 4 to 64 processors.  Practitioners ask, how soon can I get this application, and will it run on the system(s) that I have in my lab?

So...Brian took a vote at the end of the conference, asking the attendees what they thought about the format for next year.  I have to say that given who attends this type of conference, it would probably be best to keep the two-track format...have a developer's track, where developers and academics can discuss developer stuff.  Keep the practitioner's track, but you don't have to keep it separate...there may be very good reason to have a developer present in both tracks, including giving a more practitioner-oriented presentation to the guys and gals in the trenches.  I think that it's important that we all come together at an event like this, even if we don't all mix all the time.

So, I'm going to throw my hat into the ring for two tracks and shorter, more narrowly-focused presentations.  I think that the shorter presentations will allow for more of them, and focusing them a bit more probably wouldn't be all that hard.

Something else that I think would be hugely beneficial is something like Cory's presentation from the first conference, where he just ran through a list of open source tools and projects, and how they were useful.  There are sites out there that maintain information like this, but they don't seem to be regularly maintained to any significant degree.  For example, there's Open Source Forensics, as well as the ForensicsWiki, but as a community, I think we need to come together and come up with a way to bring all of this information together.  That aside, however, while various open source projects were discussed, there are a number of others out there that would be beneficial to many examiners, if they knew about them.  Now, this doesn't have to be a presentation, as it can be a web page or entries at one of the sites above...but I think what really gets folks over the hump regarding using this stuff is recommendations from others.

Another benefit of this conference that I hadn't realized is the reach of open source tools.  Within the US, we have various institutions, ranging from large private, academic and governmental organizations, to local community colleges and law enforcement shops.  We can all benefit from open source projects, but the smaller organizations are limited in background knowledge, training, etc.  Joshua James opened my eyes to the fact that this is an international issue...that LE in Ireland is limited by funds, as is LE in Africa.  Joshua mentioned a department in Africa that has 4 staff members and a total of 2000 Euro available annually for training, hardware, etc.  This made clear to me the need not only for open source tools that meet the needs of these departments, but also for free, easily available training...both for knowledge transfer and for how to use the tools and really get the most out of them.

Stuff
Simson's bulk-extractor
Cory's write-up and insights on OSDFC

Saturday, June 11, 2011

Updates, Links, Etc.

OSDFC
I'm slotted to speak at the Open Source Digital Forensics Conference (put on by Brian Carrier) in McLean, VA, on 14 June.  I'll be presenting on "Extending RegRipper", and I've posted the slides to the WinForensicAnalysis site.

You'll notice something a bit uncharacteristic (for me) about the presentation...I put a LOT of text in the slides.  I sorta figured I needed to do this, as when I've talked about this topic to others, I think I get a lot of head nodding and "okays" just so that I'll stop talking.  I wanted to have the text on the slides for folks to look at, so that when someone is noodling these ideas over (as opposed to deciding whether or not to stay for the rest of the presentation...), they can refer back to what's on the slides, and not have to go with, "...but you said...".

Fresh off of attending the SANS Forensic Summit, Cory Altheide will be providing training on 13 June, and speaking at OSDFC on 14 June.  Cory and I will also be available to sign your copy of DFwOST, if you'd like.

RegRipper
Speaking of open source tools, Mark Morgan posted "Using RegRipper inside EnCase Enterprise"...jokes aside, I think that this is a great use of resources!  I mean, it wasn't part of the design of RegRipper, but hey, I think things worked out pretty well, and Mark (like others before him) found a way to get the most out of the tools he's using.  Great job!

VSCs
Stacey Edwards has a great post up over on the SANS Forensic blog that demonstrates how to extract file system metadata/MAC times from files in Volume Shadow Copies, using LogParser.  The cool thing about what she discussed is that it can all be scripted through batch files...that's right, you can have a batch file that will run through designated VSCs, mount each one, run the LogParser command against it, unmount the VSC, and move on to the next one.  Pretty cool stuff.  Think of all of the other stuff you could script into a batch file, too...like RegRipper/rip, etc. 
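A rough sketch of that approach (shown here in Perl rather than a batch file; the mount points and the number of shadow copies are assumptions, so check "vssadmin list shadows" first) might look like this:

#!/usr/bin/perl
# Rough Perl take on the batch file approach described above: link each
# Volume Shadow Copy to a mount point, run a tool against it, remove the
# link, and move on.  The mount point and the range of shadow copies are
# assumptions; run "vssadmin list shadows" first to see what's there.
use strict;
use warnings;

foreach my $n (1 .. 4) {
  my $dev   = "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy$n\\";
  my $mount = "C:\\vsc$n";
  if (system("cmd /c mklink /d $mount $dev") != 0) {
    print "Could not link shadow copy $n; skipping.\n";
    next;
  }
  # Run LogParser, rip.pl, etc. against $mount here, e.g.:
  # system("logparser ... $mount ...");
  system("cmd /c rmdir $mount");
}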

MetaData
Corey Harrell has another really good post over at the jIIr blog, regarding why certain document metadata could possibly look the way it does.  He's taken the time to do some testing, lay out and describe what he did, and then present it clearly.  That kind of sounds like what we all have to do in our jobs, right?  ;-)

EVTX
Andreas has updated his EVTX parser code to v1.0.8.  He's put considerable effort into figuring out the binary format for the Windows Event Logs ("new" as of Vista) and provided his open source code for parsing them.  I've also found LogParser to be very helpful with this.

NoVA Forensics Meetup
Keep 6 July on your calendar for our next meetup.  Tom Harper has graciously offered to present on setting up a dynamic malware analysis platform, so come on by, bring a friend, and join us!

Monday, June 06, 2011

Updates

DLL Search Order Issue
Nick Harbour recently put together another great, very informative post over on the Mandiant blog that has to do with the DLL search order issue he'd discussed last year ("Malware Persistence without the Windows Registry").  His recent post has to do with fxsst.dll, which appears to pertain to the Fax Service.  The difference in fxsst.dll with respect to the earlier issue that Nick mentioned (re: ntshrui.dll) is that while ntshrui.dll was loaded directly by Windows Explorer (as an approved shell extension), fxsst.dll is actually loaded by stobject.dll (the System Tray component for Windows Explorer).

The DLL search order issue is something that's been around for a while (11 years), and as Nick mentioned, allows for malware persistence without the use of the Registry.  The analysis technique that I've used to track down issues like this is timeline analysis...putting a timeline together and looking at various aspects of the incident (timeframe, files involved, etc.) has been a very revealing process, and really turned up some good information.  Nick used an interesting approach to track down how fxsst.dll was loaded...I'd suggest taking a look at what he did, and seeing where you could use a technique similar to his in your examinations.

In short, if you find a copy of fxsst.dll in the Windows or Windows\system32 directory, take a very careful look at it.  However, be sure that when you do look at it, you understand what's going on...just because you find a file with this name on the system, it doesn't necessarily follow that the file has anything to do with the incident.
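As a starting point, a quick sketch along these lines...checking a mounted image (the drive letter here is just an assumption) for the file in both locations and grabbing size and hash for comparison...might look like:

#!/usr/bin/perl
# Quick sketch: given the root of a mounted image (the drive letter is
# just an assumption), report whether fxsst.dll exists in \Windows and/or
# \Windows\system32, along with size and MD5 hash...a starting point for
# a closer look, not a verdict.
use strict;
use warnings;
use Digest::MD5;

my $root = shift || "F:";

foreach my $dir ("$root\\Windows", "$root\\Windows\\system32") {
  my $dll = $dir."\\fxsst.dll";
  if (-e $dll) {
    open(my $fh, '<', $dll) || next;
    binmode($fh);
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest();
    close($fh);
    printf "%-40s %10d bytes  MD5: %s\n", $dll, -s $dll, $md5;
  }
  else {
    print $dll." not found\n";
  }
}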

NoVA Forensics Meetup Slides
Chris has been kind enough to post his presentation slides from this month's meetup presentation.  Chris provided a lot of great information in his presentation...take a look and see what you think, and feel free to send him questions.

Jump Lists
Jump lists are something new to Windows 7, a nice little feature that appears to be similar to the Windows shortcuts in the user's Recent folder.  Here's more information about Jump Lists, and how they're used, from MS.

A while back, I was at a Microsoft cybercrime conference in Redmond, and Troy Larson mentioned during his presentation that the "old" OLE "structured storage" file structure that was used in MS Office documents prior to Office 2007 was again used in Windows 7, and one of the locations was the Jump Lists.  I made a note of it then, but really hadn't pursued it.  As I've been using Windows 7 more and more, and looking into forensic artifacts, I thought I'd take a look at them.  Troy had also mentioned that not only did the Jump Lists make use of the OLE "structured storage" mechanism, but the streams within the "file system within a file" were based on the shortcut/LNK file format, so that was something to go on...

ProDiscover v6.11 (I use the IR edition) has a Jump List viewer, and over on the Win4n6 list, Rob Lee said that he uses MiTeC's Structured Storage Viewer and a Windows shortcut/LNK file viewer (MiTeC WFA) to parse the Jump List information.

I used code from wmd.pl and lslnk2.pl to develop a Perl script to parse Jump Lists.  Wmd.pl uses the OLE::Storage module, and lslnk2.pl is completely Perl-based, using no Win32-specific modules, but instead parses the LNK file information on a binary level based on the shortcut file format. I've just got the code working, so it's not ready for prime time, and I still have to figure out how I want to display the information.  I'm considering the TLN format as one means of displaying the information, using something similar to how I recently updated/modified regtime.pl and rip.pl...maybe .csv will be an option, as well.
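As an illustration of the first step (enumerating the streams in the OLE container), here's a quick sketch using the OLE::Storage_Lite module; note that this is a different (though related) CPAN module from the OLE::Storage module that wmd.pl uses:

#!/usr/bin/perl
# Quick sketch using OLE::Storage_Lite (a related, but different, CPAN
# module from the OLE::Storage module used by wmd.pl) to list the streams
# in an *.automaticDestinations-ms file.  Each numbered stream can then
# be carved out and handed to a LNK parser.
use strict;
use warnings;
use OLE::Storage_Lite;
use Encode;

my $file = shift || die "Usage: $0 <automaticDestinations-ms file>\n";
my $ole  = OLE::Storage_Lite->new($file);
my $root = $ole->getPpsTree(1) || die "$file does not appear to be an OLE file\n";

foreach my $pps (@{$root->{Child}}) {
  next unless ($pps->{Type} == 2);                      # 2 = stream
  my $name = Encode::decode("UTF-16LE", $pps->{Name});  # names are stored in Unicode
  my $size = defined($pps->{Data}) ? length($pps->{Data}) : 0;
  printf "%-12s %8d bytes\n", $name, $size;
}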

Addendum:  Using this resource from MS, I was able to identify and parse the ExtraData blocks, and extract the NetBIOS name of the system from the TrackerDataBlock. 


Tools

I ran across this one by accident recently...I'm not really a *nix person (and I don't claim to be), and haven't made wide use of awk, but I thought that this post on using awk to address clock skew in the regtime bodyfile output was worth sharing.  Clock skew on a system, as well as between systems, is definitely an issue when performing analysis, particularly if you're putting things into timelines.  At the OSDFC conference last year, I talked about timelines, and was informed (not asked, but told...) that my technique for developing timelines did not allow for clock skew...and that simply isn't/wasn't the case (dude, it's open source...).  My point is that things like time zones and clock skew are very important when it comes to performing analysis on multiple systems, particularly when they're geographically dispersed.
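For folks who'd rather stay in Perl, the same clock skew adjustment can be sketched out against the bodyfile; this is just an illustration, not the script from the post:

#!/usr/bin/perl
# Perl version of the clock skew adjustment: apply an offset (in seconds)
# to the four time fields of a bodyfile read from STDIN.  Assumes the TSK
# 3.x body format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
use strict;
use warnings;

my $skew = shift;              # e.g., -3600 if the system clock was an hour fast
die "Usage: $0 <skew in seconds> < bodyfile\n" unless (defined($skew));

while (<STDIN>) {
  chomp;
  my @fields = split(/\|/, $_, -1);
  foreach my $i (7 .. 10) {    # atime, mtime, ctime, crtime
    $fields[$i] += $skew if (defined($fields[$i]) && $fields[$i] =~ m/^\d+$/);
  }
  print join('|', @fields)."\n";
}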


News
I had posited a while back as to when something like this would happen...the Unveillance CEO faced extortion (JadedSecurity has a different take on the matter).  I added this to this post, as this is something I discussed with others recently via email, and the results were pretty much what they'd suggested would happen...

Thursday, June 02, 2011

Updates

NoVA Forensics Meetup
Last night's meetup, our second such get-together, was a rousing success!  Chris Witter gave a great presentation that covered a lot of the different aspects of building a packet capture engine "on da cheap", and all told, we had a total of 38 attendees!

*For those interested, there's a new version of Wireshark available.

I think that we're doing well enough to begin working on a format for these events, so that's something I'll be coming up with for the next meetup (on 6 July).  Things will probably look like this: an intro, then everyone goes around and introduces themselves, and then we kick off the presentation around 7:30pm.  This will give a bit more time for folks to show up.

Also, it might be a good idea to have two shorter talks, as opposed to one presentation...perhaps not every time, but every now and again.  Maybe something like a demo of a very specific tool or technique that you can present in about 20 min or so.

Finally, I'd like to ask everyone to take a moment and think about what they might like to hear about in a presentation, or a topic for a presentation that they'd be willing to give.  Also, try to think of a question or two that you might have for the group...we had a very diverse group of folks at last night's meeting, ranging from very experienced DFIR folks to IT staff and folks who are very new to the industry.

So...a big thanks to everyone who attended last night...we hope to see you again, and hey...bring a friend!  And a huge thanks to Christopher for stepping up and giving an excellent presentation!

DFwOST Reviews
If you're on the fence regarding purchasing "Digital Forensics with Open Source Tools", take a look at these reviews on Amazon.  They're all pretty glowing, and there's even one from a "shark tamer" (is that even possible??).


Upcoming Speaking Events
I have a couple of opportunities for speaking coming up over the next months (and I'll be submitting to other CfPs), and I find myself making some changes to how I'm going to be presenting the material in question.

My next speaking engagement is OSDFC on 14 June; I will be giving a presentation entitled "Extending RegRipper".  As I've been preparing the presentation, I'm finding that I'm putting more text into the slides than I normally would, but I think that under the circumstances, that makes pretty good sense.  After all, what I'm talking about in the presentation is kind of new; I've found (not surprisingly) that there are a good number of forensic analysts who have neither heard of nor used RegRipper, and I'm talking about extending the tool into a forensic scanner framework (which is, itself, a little something different).  I don't usually read from my slides, but in this case, I think that it's important to put the actual text on the screen so that attendees can see it, read it, and marinate on it.  I still plan to do a good deal of discussion and delve into things not explicitly listed in the slides (unfortunately, the setup doesn't allow for a demo...), but my hope is that by putting some of the explicit text in the slides, this will ultimately generate some discussion.

I'm also giving a presentation in August on timeline creation and analysis; similar to the above presentation, this one is going to have some slides in it that contain a good deal of text, but again, I think that there's a very good reason for doing this; some of the things I talk about in the presentation may be somewhat new to many attendees.  As such, I'd like to have certain things phrased in an explicit manner, so that when (not "if"...) discussion ensues, and someone says, "But you said...", we can go right back to the slide and address the question.  Also, I feel like I'd like to have the statement(s) being discussed sitting up there for folks to refer back to during the discussion, so that it has a better chance of crystallizing in their minds.  While I prefer the "hit-and-run" tactic of just having bullet statements (or not even using slides at all), there are really some things that are important enough to not only have explicitly stated on the slides, but to also include in the slide pack for later reference.

Blogs
I've run across a couple of interesting blogs recently.  For example, the guys at Crucial Security still have a blog up and running, even though they're now part of Harris Corp.  There are a couple of very useful posts on that blog, such as this one regarding VM files essential for forensic investigations.  There's also this one regarding malware issues from an operational perspective...which, IMHO, would be far more useful when tied in with malware characteristics.

I've found the Girl, Unallocated blog to be a good read, and I have to say, I've really enjoyed the slant the author takes on some of the topics she presents in her posts.  Sometimes it's good to be less serious, even while remaining on-point...taking a whimsical approach can be a good thing at times.  I have also found some of her posts thought-provoking, such as this one on structured analysis...I've often heard forensicators state, "...sometimes we don't know what we're looking for...", and I have to say, that's bull.  No one just acquires images from systems at random, so when a system is acquired, there's a reason for it, and that reason can lead us to what we're looking for, or trying to prove or disprove.

Monday, May 30, 2011

NoVA Forensic Meetup

Reminder: NoVA Forensic Meetup, Wed, 1 June, at the ReverseSpace location in Herndon (7pm - 8:30pm)


Wednesday, May 25, 2011

Tools

I've run across a number of tools recently, some directly related to forensics, and others related more to IR or RE work.  I wanted to go ahead and put those tools out there, to see what others think...

Memory Analysis
There have been a number of changes recently on the memory analysis front.  For example, Mandiant recently released their RedLine tool, and HBGary released the Community Edition of their Responder product. 

While we're on the topic of memory analysis tools, let's not forget the venerable and formidable Volatility.

Also, if you're performing memory dumps from live systems, be sure to take a look at the MoonSols Windows Memory Toolkit.

SQLite Tools
CCL-Forensics has a trial version of epilog available for download, for working with SQLite databases (found on smartphones, etc.).  One of the most noticeable benefits of epilog is that it allows you to recover deleted records, which can be very beneficial for analysts and investigators.

I'm familiar with the SQLite Database Browser...epilog would be interesting to try.
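
epilog's real value is in recovering deleted records, which the standard libraries won't do for you; still, if all you need is a quick look at the live contents of a SQLite file, a few lines of Perl with DBI/DBD::SQLite will get you there.  This is just a sketch, and the example filename is an assumption:

  # List tables and row counts in a SQLite database - a quick sketch
  use strict;
  use DBI;

  my $file = shift || "places.sqlite";   # hypothetical example database
  my $dbh  = DBI->connect("dbi:SQLite:dbname=".$file, "", "",
                          { RaiseError => 1 }) || die $DBI::errstr;

  # sqlite_master holds the schema, including the table names
  my $tables = $dbh->selectcol_arrayref(
      "SELECT name FROM sqlite_master WHERE type='table'");

  foreach my $t (@{$tables}) {
      my ($count) = $dbh->selectrow_array("SELECT COUNT(*) FROM ".$t);
      print $t.": ".$count." rows\n";
  }
  $dbh->disconnect();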

MFT Tools
Sometimes you need a tool to parse the NTFS $MFT file, for a variety of reasons.  A version of my own mft.pl is available online, and Dave Kovar provided his analyzemft.pl tool online, as well.  Mark McKinnon has chimed in and provided MFT parsing tools for Windows, Linux, and MacOSX.
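
None of this replaces those tools, but if you're curious what they're doing under the hood, a minimal sketch along these lines will walk the fixed-size records in an extracted $MFT file and report which ones are in use.  The 1024-byte record size and the filename are assumptions:

  # Minimal $MFT record walker - a sketch, not a full parser
  use strict;

  my $file    = shift || "mft.raw";   # extracted $MFT (hypothetical filename)
  my $recsize = 1024;                 # most common MFT record size

  open(FH, "<", $file) || die "Could not open $file: $!\n";
  binmode(FH);

  my $rec;
  my $num = 0;
  while (read(FH, $rec, $recsize) == $recsize) {
      # Bytes 0-3 of a valid record contain the "FILE" signature
      if (substr($rec, 0, 4) eq "FILE") {
          # Flags live at offset 0x16: 0x01 = in use, 0x02 = directory
          my $flags = unpack("v", substr($rec, 0x16, 2));
          my $inuse = ($flags & 0x01) ? "in use" : "deleted";
          my $type  = ($flags & 0x02) ? "dir"    : "file";
          printf "Record %d: %s, %s\n", $num, $inuse, $type;
      }
      $num++;
  }
  close(FH);

Parsing out the $STANDARD_INFORMATION and $FILE_NAME attributes is where the real work (and the real value) is, but even this much tells you how many records you're dealing with and how many are marked deleted.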

Other Tools
HBGary also made their AcroScrub tool available, which uses WMI to reach across the enterprise and scan for older versions of Adobe Reader.

A very interesting tool that I ran across is Flash Dissector.  If you deal with or even run across SWF files, you might want to take a look at this tool, as well as the companion tools in the SWFRETools set.

The read_open_xml.pl Perl script is still available for parsing metadata from Office 2007 documents.
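
Since Office 2007 documents are just Zip archives, it doesn't take much to see where that sort of script gets its data.  The sketch below uses the Archive::Zip module to pull the core metadata out of a document...the filename is an assumption, and dumping the raw XML is a simplification rather than how read_open_xml.pl necessarily does it:

  # Dump the core document properties from an Office 2007 file - a sketch
  use strict;
  use Archive::Zip qw(:ERROR_CODES);

  my $file = shift || "document.docx";   # hypothetical filename
  my $zip  = Archive::Zip->new();
  die "Could not read $file\n" unless ($zip->read($file) == AZ_OK);

  # docProps/core.xml holds the author, created/modified dates, etc.
  my $member = $zip->memberNamed("docProps/core.xml");
  die "No core.xml found in $file\n" unless ($member);
  print $member->contents();             # raw XML; parse further as needed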

From the same site as the SWFRETools are some malware write-ups including NiteAim, and Downloader-IstBar.  As a complete aside, here's a very interesting Gh0stNet writeup that Chris pointed me to recently (fans of Ron White refer to him as "Tater Salad"...fans of Chris Pogue should refer to him as "Beefcake" or "Bread Puddin'"...).

ADSs
Alternate data streams aren't something that you see discussed much these days.  I recently received a question about a specific ADS, and thought I'd include some tools in this list.  I've used Frank's LADS, as well as Mark's streams.exe.  Scanning for ADSs is part of my malware detection process checklist, particularly when the goal of the analysis is to determine whether there's any malware on the system.

Also, I ran across this listing at MS of Known Alternate Stream Names.  This is very useful information when processing the output of the above tools, because what often happens is that someone runs one of those tools, finds one of the listed ADSs, and panics; once the panic passes, their attitude swings to the other end of the spectrum, to apathy...and that's when they're most likely to get hit.
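
One way to keep both the panic and the apathy in check is to filter the tool output against the streams on that list before you start digging.  Here's a rough sketch...the seed list below is just a starting point (pull the rest from the MS page), and the assumption is that you're piping in the text output of LADS, streams.exe, or a similar tool:

  # Filter ADS listing output against a "known" stream name list - a sketch
  use strict;

  # Seed list - add entries from MS's Known Alternate Stream Names page
  my @known = ("Zone.Identifier", "AFP_AfpInfo", "AFP_Resource");

  while (<STDIN>) {
      chomp;
      my $line  = $_;
      my $match = 0;
      foreach my $k (@known) {
          $match = 1 if ($line =~ m/\Q$k\E/i);
      }
      print $line."\n" unless ($match);   # only show streams not on the list
  }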

Here are some additional resources from Symantec, IronGeek, and MS. Also, be sure to check out what I've written about these in WFA 2/e.


Scanners

Microsoft recently released their Safety Scanner, which is a one-shot micro-scanner...download it, run it, and it expires after 10 days, and then you have to download it again.  This shouldn't replace the use of Security Essentials or other AV tools, but I'm pointing this out because it could be very useful when included as part of your malware detection process.  For example, you could mount an acquired image via FTK Imager or ImDisk and scan the image.  Also, the folks at ForensicArtifacts recently posted on accessing VSCs (their first research link actually goes back to my post by the same title...thanks to AntiForensics for reposting the entire thing...)...without having to have EnCase or PDE, you could easily scan the mounted VSC, as well.


Frameworks
The Digital Forensics Framework (DFF) is open source, and was recently updated to include support for the AFF format, as well as mailbox reconstruction via Joachim Metz's libpff.

Christopher Brown, of TechPathways, has made ProDiscover Basic Edition v6.10.0.2 available, as well.  As a side note, Chris recently tweeted that he's just finished the beta of the full version of ProDiscover, adding the ability to image and diff VSCs.  Wowzers!

Sites
TZWorks - free "prototypes" tools, including the Windows Shellbags parser, an EVTX file parser, and others.  Definitely worth checking out.

WoanWare - several free forensics tools including a couple for browser forensics, and (like TZWorks) a "USBStor parser".

NirSoft - the link to the site goes to the forensics tools, but there are a lot of free tools available at the NirSoft site...too many to list.

The Open Source Digital Forensics site is a good source of tools, as well.

OSDFC
Speaking of tools, let's not forget that the OSDFC is right around the corner...

Addendum
Check out Phil Harvey's EXIFTool (comes with a standalone Windows EXE)...there's a long list of supported file types at the tool page.

Additional lists of tools include Mike's Forensic Tools, as well as the tools at MiTeC (thanks to Anonymous' comment).  Also, Mark McKinnon has posted some freely available tools, as well.

Sunday, May 22, 2011

Brain Droppings

NoVA Forensics Meetup
The next NoVA Forensics Meetup will be held on Wed, 1 June 2011, from 7-8:30pm.  As to a location, I met with the great folks at Reverse Space, a hacker space in Herndon where some of the folks have an interest in forensics.  Thanks to Carl and Richard for taking the time to meet with me, and for offering to host our meetings.

I hope that we get a big turn-out for our currently scheduled presentation, titled "Build your own packet capture engine".

Our meetup in July will be scheduled for Wednesday, 6 July, and we've already got an offer of a presentation regarding setting up virtual machines to use for dynamic malware analysis.

As to further topics, I'd like to get suggestions regarding how we can expand our following; for example, Chris from the NoVA Hackers group told me that they follow the AHA participation model.  I'd like the development of this group to be a group effort, so I'll be asking participants and attendees for thoughts, ideas, comments (and even volunteered efforts) regarding how we can grow.  For example, do we need a mailing list, or is the Win4n6 Group sufficient?  If you have anything that you'd like to offer up, please feel free to drop me a line.



Breakin' In
Speaking of the NoVA Forensics Meetup, at our last meeting, one of our guests asked me how to go about getting into the business.  I tried to give a coherent answer, but as with many things, this question is one of those that have been marinating for some time,  not just in my brain housing group, but within the community.

From my own perspective, when interviewing someone for a forensics position, I'm most interested in what they can do...I'm not so much interested in whether someone is an expert in a particular vendor's application.  I'm more interested in methodology and process: what problems have you solved, where have you stumbled, and what have you learned?  In short, are you tied to a single application, or do you fall back on a process or methodology?  How do you go about solving problems?  When you do something in particular (adding or skipping a step in your process), do you have a reason for doing so?

But the question really goes much deeper than that, doesn't it?  How does one find out about available positions and what it really takes to fill them?  One way to find available positions and job listings is via searches on Monster and Indeed.com.  Another is to take part in communities, such as the...[cough]...NoVA Forensics Meetup, or online communities such as lists and forums.

Breaches
eWeek recently (6 May) ran an article regarding the Sony breach, written by Fahmida Rashad, which started off by stating:

Sony could have prevented the breach if they’d applied some fundamental security measures...

Sometimes, I don't know about that.  Is it really possible to say that, just because _a_ way was found to access the network, including these "fundamental security measures" would have prevented the breach?

The article went on to quote Eugene Spafford's comments that Sony failed to employ a firewall, and used outdated versions of their web server.  'Spaf' testified before Congress on 4 May, where these statements were apparently made.

Interestingly, a BBC News article from 4 May indicates that at least some of the data stolen was from an "outdated database".   

The eWeek article also indicates (as did other articles) that Data Forte, Guidance Software and Protiviti were forensics firms hired to address the breach.

As an aside, there was another statement made within the article that caught my interest:

“There are no consequences for many companies that under-invest in security,” Philip Lieberman, CEO of Lieberman Software, told eWEEK. 

As a responder and analyst, I deal in facts.  When I've been asked to assist in breach investigations, I have done so by addressing the questions posed to me through analysis of the available data.  I do not often have knowledge of what occurred with respect to regulatory or legislative oversight.  Now and again, I have seen news articles in the media that have mentioned some of the fallout of the incidents I've been involved with, but I don't see many of these.  What I find interesting about Lieberman's statement is that this is the perception.

The Big Data Problem
I read a couple of interesting (albeit apparently diametrically opposed) posts recently; one was Corey Harrell's Triaging My Way (shoutz to Frank Sinatra) post where Corey talked about focusing on the data needed to answer the specific questions of your case.  Corey's post provides an excellent example of a triage process in which specific data is extracted/accessed based on specific questions.  If there is a question about the web browsing habits of a specific user, there are a number of specific locations an analyst can go within the system to get information to answer that question.

The other blog post was Marcus Thompson's We have a problem, part II post, which says, in part, that we (forensic analysts) have a "big data" problem, given the ever-increasing volume (and decreasing cost) of storage media.  Now, I'm old enough to remember when you could boot a computer off of a 5 1/4" floppy disk, remove that disk, and insert the storage disk that held your documents...before the time of hard drives actually installed in systems.  This explosion of storage media naturally leads to backlogs in analysis, as well as in intelligence collection.

I would suggest that the "big data" problem is particularly an issue when traditional analysis techniques are applied.  The traditional approach, applied to Corey's example (above), dictates that all potential sources of media must be collected and keyword searches run.  Wait...what?  Well, no wonder we have backlogs!  If I'm interested in a particular web site that the user may have visited, why would I run a keyword search across all of the EXEs and DLLs in the system32 directory?  While there may be files on the 1TB USB-connected external hard drive, what is the likelihood that the user's web browser history is stored there?  And why would I examine the contents of the Administrator (or any other) account profile if it hasn't been accessed in two years?
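
To put that in more concrete terms, if the question is about a specific user's browser activity on an XP system, a few lines of Perl run against a mounted image will tell you which of the usual IE history/cache locations even exist and are worth pulling, before anyone kicks off a keyword search across the entire disk.  This is only a sketch; the drive letter and profile paths are assumptions based on a default XP install:

  # Quick triage check: does this user profile even have IE history/cache?
  use strict;
  use File::Find;

  my $drive   = "F:";            # drive letter of the mounted image (assumption)
  my $user    = shift || die "Please provide a user profile name.\n";
  my $profile = $drive."\\Documents and Settings\\".$user;

  # Default XP locations for IE history, cache, and cookies
  my @paths = ($profile."\\Local Settings\\History",
               $profile."\\Local Settings\\Temporary Internet Files",
               $profile."\\Cookies");

  foreach my $p (@paths) {
      if (-d $p) {
          my $size = 0;
          find(sub { $size += -s $_ if (-f $_); }, $p);
          printf "%-60s %10d bytes\n", $p, $size;
      }
      else {
          print $p." not found\n";
      }
  }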

Another variant on this issue was discussed, in part, in Mike Viscuso's excellent Understanding APT presentation (at the recent AccessData User's Conference), which points out that the threat isn't really terribly "advanced", but that it makes detection "just hard enough".

Writing Open Source Tools
This is a topic that came up when Cory and I were working on DFwOST...Cory thought that it would be a good section to add, and I agreed, but for the life of me, I couldn't find a place to put it in the book where it just didn't seem awkward.  I still think that it's important, in part because open source tools come from somewhere, but also because I think that a lot more folks out there really have something to contribute to the community as a whole.

To start off, my own motivation for writing open source tools is simply to solve a problem or address something that I've encountered.  This is where RegRipper came from...I found that I'd been looking at many of the same Registry keys/values over and over again, and had built up quite a few scripts.  As such, I wanted a "better" way (that's sort of relative, isn't it??) to manage these things, particularly when there were so many of them and they seemed to use a lot of the same code over and over.

I write tools in Perl because it's widely available and there are a LOT of resources available for anyone interested in learning to use it...even if just to read it.  I know the same is true for Python, but back in '98-'99, when I started teaching myself Perl, I did so because the network monitoring guys in our office were looking for folks who could write Perl, and infosec work was as hard to sell back then as forensic analysis is now.

When I write Perl scripts, I try (in most cases) to document the code well enough that someone can open the script in Notepad and read the comments to see what it does.  I don't always aim for the most elegant solution or the fewest keystrokes; laying the steps out not only lets someone see more clearly what was done, but also makes it easier for someone else to comment out the lines in question and modify the script to meet their own needs.
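
As an example of what I mean, here's about the simplest Registry-oriented script I could put together...it's not a RegRipper plugin, just a sketch built on the same Parse::Win32Registry module that RegRipper uses, commented so that someone can open it in Notepad and follow along.  The hive filename and the hard-coded ControlSet001 path are assumptions made for the sake of the example:

  # Pull the computer name from a System hive - a commented sketch
  use strict;
  use Parse::Win32Registry;

  my $hive = shift || "system";    # exported System hive file (assumption)
  my $reg  = Parse::Win32Registry->new($hive) || die "Could not open $hive\n";
  my $root = $reg->get_root_key();

  # The ControlSet number can vary; ControlSet001 is used here for simplicity
  my $key_path = "ControlSet001\\Control\\ComputerName\\ComputerName";
  if (my $key = $root->get_subkey($key_path)) {
      # Each key carries a LastWrite time, which is often as useful as the data
      print "LastWrite time: ".gmtime($key->get_timestamp())." (UTC)\n";
      if (my $val = $key->get_value("ComputerName")) {
          print "ComputerName  : ".$val->get_data()."\n";
      }
  }
  else {
      print $key_path." not found.\n";
  }

The pattern...open the hive, walk to a key, grab the LastWrite time and the data...is pretty much all there is to it; everything else is wrapping that pattern in a consistent, reusable interface.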

DFF
Speaking of open source tools, one of the tools discussed in DFwOST is the Digital Forensics Framework, of which version 1.1.0 was recently released.  This version includes a couple of updates, as well as a bug fix to the ntfs module.  I've downloaded it and got it running nicely on a Windows XP system...great work and a huge thanks to the DFF folks for their work.  Be sure to check out the DFF blog for some tips on how you can use this open source forensic analysis application.

Thursday, May 05, 2011

Updates

NoVA Forensics Meetup
Last night's meetup went pretty well...there's nothing wrong with humble beginnings.  We had about 16 people show up, and a nice mix of folks...some vets, some new to the community...but it's all good.  Sometimes having new folks ask questions in front of those who've done it for a while gets the vets to think about/question their assumptions.  Overall, the evening went well...we had some good interaction, good questions, and we gave away a couple of books. 

I think that we'd like to keep this on a Wed or Thu evening, perhaps once a month...maybe spread it out over the summer due to vacations, etc. (we'll see).  What we do need now is a facility with presentation capability.  Also, I don't think that we want to have the presentations fall on just one person...we can do a couple of quick talks of a half hour each, or just have someone start a discussion by posing a question to the group.

Besides just basic information sharing, these can be good networking events for the folks who show up.  Looking to add to your team?  Looking for a job?  Looking for advice on how to "break in" to the business?  Just come on by and talk to folks.

So, thanks to everyone who showed up and made this first event a success.  For them, and for those who couldn't make it, we'll be having more of these meetups...so keep your eyes out and don't hold back on the thoughts, comments, or questions.


Volatility
Most folks familiar with memory analysis know about the simply awesome work provided through the Volatility project.  For those who don't know, this is an open source project, written in Python, for conducting memory analysis.

Volatility now has a Python implementation of RegRipper built-in, thanks to lg, and you can read a bit more about the RegListPlugin.  Gleeda's got an excellent blog post regarding the use of the UserAssist plugin.

I've talked a bit in my blog, books, and presentations about finding alternate sources of forensic data when the sources we're looking for (or at) may be insufficient.  I've talked about XP System Restore Points, and I've pulled together some really good material on Volume Shadow Copies for my next book.  I've also talked about carving Event Log event records from unallocated space, as well as parsing information regarding HTTP requests from the pagefile.  Volatility provides an unprecedented level of access to yet another excellent resource...memory.  And not just memory extracted from a live running system...you can also use Volatility to parse data from a hibernation file, which you may find within a (laptop) image.  Let's say that you're interested in finding out how long a system has been compromised; i.e., you're trying to determine the window of exposure.  One of the sources I've turned to is crash dump logs...these are appended (the actual crash dump file is overwritten) with information about each crash, and include a pslist-like listing of processes.  Sometimes you may find references to the malware in these listings, or in the specific details regarding the crashing process.  Now, assume that you're looking at a laptop and find a hibernation file...you know when the file was created, and using Volatility, you can parse that file and find specifics about what processes were running at the time the system went into hibernation mode.

And that's not all you can use Volatility for...Andre posted to the SemperSecurus blog about using Volatility to study a Flash 0-day vulnerability. 

If you haven't looked at Volatility, and you do have access to memory, you should really consider diving in and giving it a shot.  

Best Tool
Lance posted to his blog, asking readers what they consider to be the best imaging and analysis tools.  As of the time that I'm writing this post, there are seven comments (several are pretty much just "agree" posts), and even reading through some of the thoughts and comments, I keep coming back to the same thought...that the best tool available to an analyst is that grey matter between their ears.

This brings to mind a number of thoughts, particularly because last week I had two opportunities to think about analyst training, education, and experience.  During one of them, I was considering the fact that when I (like many other analysts) "came up through the ranks", there were no formal schools available to non-LE analysts, aside from vendor-specific training.  Some went that route, but others couldn't afford it.  For myself, I took the EnCase v.3.0 Introductory course in 1999...I was so fascinated by the approach taken to file signature analysis that I went home and wrote my own Perl code for it; not to create a tool, per se, but to really understand what was happening "under the hood".  Over the years, knowing how things work and knowing what I needed to look for really helped me a lot...it wasn't a matter of having to have a specific tool as much as it was knowing the process and being able to justify the purchase of a product, if need be.
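
If anyone wants to try the same exercise, the core of it comes down to a few lines of Perl...read the first few bytes of the file and compare them against what the extension claims.  This is just a sketch with a handful of signatures hard-coded; a real tool would carry a much longer signature list:

  # Bare-bones file signature check - compare header bytes to the extension
  use strict;

  # A few well-known signatures; a real list would be much longer
  my %sigs = ("4d5a"     => "exe/dll (MZ)",
              "ffd8ff"   => "jpg",
              "25504446" => "pdf",
              "d0cf11e0" => "doc/xls/ppt (OLE)");

  my $file = shift || die "Please provide a filename.\n";
  open(FH, "<", $file) || die "Could not open $file: $!\n";
  binmode(FH);
  read(FH, my $hdr, 4);
  close(FH);

  my $hex   = unpack("H*", $hdr);    # header bytes as a hex string
  my ($ext) = ($file =~ m/\.(\w+)$/);

  my $found = "unknown";
  foreach my $sig (keys %sigs) {
      $found = $sigs{$sig} if (lc($hex) =~ m/^$sig/);
  }
  print $file.": extension = ".(defined($ext) ? $ext : "none").", header says ".$found."\n";

Nothing fancy, but writing even that much teaches you more about file signature analysis than simply pushing a button in a commercial tool ever will.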

Breaches
If the recent spate of breaches hasn't yet convinced you that no one is safe from computer security incidents, take a look at this story from The State Worker, which talks about the PII/PCI data of 2000 LE retirees being compromised.  I know, 2000 seems like such a small number, but hey...regardless of whether it's 77 million or 2000, if you're one of the people whose data was compromised, it's everything.

While the story is light on details (i.e., how the breach was identified, when the IT staff reacted in relation to when the incident actually occurred, etc.), if you read through the story, you see a statement that's common throughout these types of announcements; specifically, "...taken steps to enhance security and strengthen [the] infrastructure...".  The sequence of events for incidents like this (and keep in mind, these are only the ones that are reported) is, breach, time passes, someone is notified of the breach, then steps are taken to "enhance security".  We find ourselves coming to this dance far too often.

Incident Preparedness
Not long ago, I talked about incident preparation and proactive IR...recently, CERT Societe Generale (the French CERT) posted a 6 Step IRM Worm Infection cheat sheet.  I think that things like this are very important, particularly because the basic steps necessarily assume certain things about your infrastructure.  For example, step 1 of the PDF includes several of the basic components of a CSIRP...if you already have all of the stuff outlined in the PDF covered, then you're almost to a complete CSIRP, so why not just finish it off and formalize the entire thing?

Step 3, Containment, mentions neutralizing propagation vectors...incident responders need to understand malware characteristics in order to respond effectively to these sorts of incidents.

One note about this and these sorts of incidents: worms can be especially virulent strains of malware, but this applies to malware in general...relying on your AV vendor to be your IR team is a mistake.  Incident responders have seen this time and again, and it's especially difficult for folks who do what I do, because we often get called after response efforts via the AV vendor have been ineffective and have exhausted the local IT staff.  I'm not saying that AV vendors can't be effective...what I am saying is that, in my experience, throwing signature files at an infrastructure based on samples provided by on-site staff doesn't work.  AV vendors are generally good at what they do, but AV is only part of the overall security solution.  Malware infections need to be responded to with an IR mindset, not through an AV business model.

Firefighters don't learn about putting out a fire during a fire.  Surgeons don't learn their craft during surgery.  Organizations shouldn't hope to learn IR during an incident...and the model of turning your response over to an external third party clearly doesn't work.  You need to be ready for that big incident...as you can see just from the media, it's a wave on the horizon headed for your organization.