Thursday, April 28, 2011

NoVA Forensic Meet-Up

I've scheduled a room at the Reston Public Library on Wed, 4 May 2011, for the first NoVA Forensic Meet-Up.  We're scheduled for 7pm to 8:30pm, in meeting room 1.

I'll be talking about the presentation I'll give at the OSDF Conference in June, specifically, "Extending RegRipper".

Due to the short time frame, I'll try to put together and post some slides or something, but if not, it's pretty easy to do a discussion format.

Hope to see you there!

Addendum (1 May): I uploaded the presentation (in PDF) that I'll be working from; the library doesn't have any projection capabilities in the room we've reserved, so be sure to download a copy of the PDF before you come on out.

Tuesday, April 26, 2011

Proactive IR

There are a couple of things that are true about security, in general, and IR specifically.  One is that security seems to be difficult for some folks to understand (it's not generally part of our culture), so those of us in the industry tend to use a lot of analogies in an attempt to describe things to others that aren't in our area of expertise.  Sometimes this works, sometimes it doesn't.

Another thing that's true is that the current model for IR doesn't work.  For consulting companies, it's hard to keep a staff of trained, dedicated, experienced responders available and on the bench; if they sit unused, they get pulled off into other areas (because those guys "do security stuff"), and as with many areas of information security (web app assessments, pen testing, malware RE, etc.), the hard-core technical skills are perishable.  Most companies that need such skills simply don't keep these sorts of folks around; they look to consulting companies to provide this service.

Why doesn't this work?  Think about it this way...who calls emergency incident responders?  Well, those who need emergency incident response, of course.  Many of us who work (or have worked) as incident responders know all too well what happens...the responders show up, often well after the incident actually occurred, and have to first develop an understanding of not just what happened (as opposed to what the customer thinks may have happened), but also "get the lay of the land"; that is, understand what the network infrastructure "looks like", what logs may be available, etc.  All of this takes time, and that time means that (a) the incident isn't "responded to" right away, and (b) the clock keeps ticking as far as billing is concerned.  Ultimately, what's determined with respect to the customer's needs really varies; in fact, the questions that the customer had (i.e., "what data left our network?") may not be answered at all.

So, if it doesn't work, what do we do about this?  Well, the first thing is that a cultural shift is needed.  Now, follow me here...all companies that provide a service or product (which is pretty much every one of them) have business processes in place, right?  There's sales, customer validation, provisioning and fulfillment, and billing and collections...right?  Companies have processes in place (documented or otherwise) for providing their product or service to customers, and then getting paid.  Companies also have processes in place for hiring and paying employees...because without employees to provide those products or services, where would you be?

Ever since I started in information security, one of the things I've seen across the board is that most companies do not have information security as a business process.  Companies will process, store and manage all manner of sensitive data...PCI, PHI, PII, critical intellectual property, manufacturing processes and plans, etc...and not have processes for protecting that data, or responding to incidents involving the possible exposure or modification of that data.

Okay, how about those analogies?  Like many, I consider my family to be critical, so I have smoke alarms in my home, fire extinguishers, we have basic first aid materials, etc.  So, essentially, we have measures in place to prevent certain incidents, detect others, and we've taken steps to ensure that we can respond appropriately to protect those items we've deemed "critical".

Here's another analogy...when I went to my undergraduate education, we were required to take boxing.  If you're standing in a class and see everyone in line getting punched in the face because they don't keep their gloves up, what do you do?  Do you stand there and convince yourself that you're not going to get punched in the face?  When you do get punched in the face because you didn't keep your gloves up, do you blame the other guy ("hey, dude! WTF?!?!") or do you accept responsibility for getting punched in the face?  Or, do you see what's happening, realize that it's inevitable, listen to what you're being told, and develop a culture of security and get your gloves up?  The thing about getting punched in the face is that no matter what you say or do afterward, the fact is that you got punched in the face.

Here's another IRL example...I recently ran across this WaPo article that describes how farms in Illinois are pre-staging critical infrastructure information in an easily accessible location for emergency responders; the intention is to "prevent or reduce property damage, injuries and even deaths" in the event of an incident.  Variations of the program have reportedly been rolled out in other states, and seem to be effective.  What I find interesting about the program is that in Illinois, aerial maps are taken to each farm, and the farmers (those who established, designed, and maintain the infrastructure) assist in identifying structures, etc.  This isn't a "here's $40K, write us a CSIRP"...instead, the farmer has to take some ownership in the process, but I guess they do that because a one-hour or one-afternoon interview can mean the difference between minor damage and losing everything.

Sound familiar?

As a responder, I'm aware of various legislation and regulatory bodies that have mandated the need for incident response capabilities...Visa PCI, NCUA, etc.  States have laws for notification in the case of PII breaches, which indirectly require an IR capability.  Right now, who's better able to respond to a breach...local IT staff who know and work in the infrastructure every day (and just need a little bit of training in incident response and containment) or someone who will arrive on-site in anywhere between 6 and 72 hours, and will still need to develop an understanding of your infrastructure?

If the local IT staff knew how to respond appropriately, and was able to contain the incident and collect the necessary data (because they had the training and tools, and processes for doing so), analysis performed by that trusted third party adviser could begin much sooner, reducing response time and overall cost.  If the local IT staff (under the leadership of a C-level executive, like the farmer) were to take steps to prepare for the incident...identify and correct shortfalls in the infrastructure, determine where configuration changes to systems or the addition of monitoring would assist in preventing and detecting incidents, determine where critical data resides/transits, develop a plan for response, etc...just as is mandated in compliance requirements, then the entire game would change.  Incidents would be detected by the internal staff closer to when they actually occur...rather than months later, by an external third party.  Incident response would begin much quicker, and containment and scoping would follow suit.

Let's say you have a database containing 650K records (PII, PCI, PHI, whatever).  According to most compliance requirements, if you cannot explicitly determine which records were exposed, you have to report on ALL of them.  Think of the direct costs of reporting and notification, followed by the indirect costs of cleanup, fines, lawsuits, etc.  Now, compare that to the cost of doing something like having your DBA write a stored procedure (includes authorization and logging) for accessing the data, rather than simply allowing direct access to the data.
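To put rough numbers on that tradeoff, here's a minimal back-of-the-envelope sketch; the per-record notification cost and the count of records shown in the access logs are hypothetical figures for illustration, not actual compliance or industry data:

```python
# Back-of-the-envelope comparison.  Both dollar figures below are
# hypothetical placeholders, not quoted industry numbers.
RECORDS_IN_DB = 650_000
COST_PER_NOTIFICATION = 2.00  # assumed cost (USD) to report/notify per record

# No access logging: you can't say which records were exposed,
# so you report on ALL of them.
cost_all = RECORDS_IN_DB * COST_PER_NOTIFICATION

# With a stored procedure that logs access: suppose the logs show
# only 1,200 records were actually touched during the incident window.
records_accessed = 1_200  # hypothetical figure from the access logs
cost_known = records_accessed * COST_PER_NOTIFICATION

print(f"Report all records:   ${cost_all:,.2f}")
print(f"Report known records: ${cost_known:,.2f}")
```

Even before the indirect costs of cleanup, fines, and lawsuits, the gap between those two numbers dwarfs the cost of having the DBA write the stored procedure in the first place.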

Being ready for an incident is going to take work, but it's going to be less costly in the long run when (not if) an incident occurs.

What are some things you can do to prepare?  Identify logging sources, and if necessary, modify them appropriately (add Process Tracking to your Windows Event Logs, increase log sizes, set up a means for centralized log collection, etc.).  Develop and maintain accurate network maps, and know where your critical data is located.  The problem with hiring someone to do this for you is that you don't have any ownership; when the job's done, you have a map that is an accurate snapshot, but how accurate is it 6 months later?  Making incident detection and tier 1 response (i.e., scoping, data collection) a business process, with the help of a trusted adviser, is going to be quicker, easier and far less costly in the long run, and those advisers will be there when you need the tier 3 analysis completed.
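On Vista and Windows 7 systems, that logging prep might look something like the following; the log size and the audit subcategory shown here are examples to tune for your environment, not a one-size-fits-all policy:

```shell
REM Sketch: preparing Windows (Vista/7) audit logging ahead of an incident.
REM Values are examples -- adjust sizes and categories to your environment.

REM Enable process-creation auditing (the Vista/7 analog of XP's
REM "Process Tracking" audit setting)
auditpol /set /subcategory:"Process Creation" /success:enable

REM Increase the Security Event Log maximum size to ~100MB
wevtutil sl Security /ms:104857600

REM Confirm the new settings
auditpol /get /subcategory:"Process Creation"
wevtutil gl Security
```

Doing this before an incident means that when you (or your trusted adviser) need process execution history, it's actually there to be collected.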

What about looking at things like Carbon Black?  Cb has a number of uses besides just IR, and can help you solve a number of other problems.  However, with respect to IR, it can not only tell you what was run and when, but it can also keep a copy of what was run; when it comes to determining the capabilities of the malware downloaded to your system, you already have a copy available...call that trusted adviser and have them analyze it for you.

Remember the first Mission: Impossible movie?  After his team was wiped out, Ethan made it back to the safe house and as he reached the top of the stairwell, took the light bulb out of the socket and crushed it in his jacket, then spread the shards on the floor as he backed toward his room.  What this does is provide a free detection mechanism...anyone approaching the room isn't going to know that the shards are there until they step on them and alert Ethan to their presence; incident detection.

So what are you going to do?  Wait until an incident happens, or worse, wait until someone tells you that an incident happened, and then call someone for help?  You'll have to find someone, sign contracts, get them on-site, and then help them understand your infrastructure so that they can respond effectively.  When they're first there, you're not going to trust them (they're new, after all) and you're not going to speak their language.  In most cases, you're not going to know the answers to their questions...do we even have firewall logs?  Do we log that?  What will happen is that you will continue to hemorrhage data throughout this process.

The other option is to have detection mechanisms and a response plan in place and tested, and have a trusted adviser that you can call for assistance.  Your local IT staff needs to be trained to perform the initial response, scoping and assessment, and even containment.  While the IT director is on the phone with that trusted adviser, designated individuals are collecting and preserving data...because they know where it is and how to get it.  The questions that the trusted adviser (or any other consulting firm) would ask are being answered before the call is being made, not afterward ("Uh...we had no idea that you'd ask that...").  That way, you don't lose the whole farm, and if you do get punched in the face, you're not knocked out.

One final note...this doesn't apply solely to large companies.  Small businesses are losing money hand over fist, and some are even going out of business...you just don't hear about it as much.  These same things can be done inexpensively and effectively, and need to be done.  The difference is, do you get it done, even if you have to have a payment plan, or do you sit by and wait for an incident to put you out of business and lay off your employees?

Friday, April 22, 2011

Extending RegRipper (aka, "Forensic Scanner")

I'll be presenting on "Extending RegRipper" at Brian Carrier's Open Source Digital Forensics Conference on 14 June, along with Cory Altheide, and I wanted to provide a bit of background with regards to what my presentation will cover...

In '98-'99, I was working for Trident Data Systems, Inc., (TDS) conducting vulnerability assessments for organizations.  One of the things we did as part of this work was run ISS's Internet Scanner (now owned by IBM) against the infrastructure; either a full, broad-brush scan or just very specific segments, depending upon the needs and wants of the organization.  I became very interested in how the scanner worked, and began to note differences in how the scanner would report its findings based on the level of access we had to the systems within the infrastructure.  Something else I noticed was that many of the checks being run were the result of the ISS X-Force vulnerability discovery team.  In short, a couple of very smart folks would discover a vulnerability, add a means of scanning for that vulnerability via the Internet Scanner framework, and roll it out to thousands of customers.  Within fairly short order, a check could be rolled out to hundreds or thousands of analysts, none of whom had any prior knowledge of the vulnerability, nor had to invest the time to investigate it.  This became even more clear as I started to create a home-grown (albeit proprietary) scanner to replace the use of Internet Scanner, due in large part to significant issues with inaccurate checks, and the need to adapt the output.  I could create a check to be run, and give it to an analyst going on-site, and they wouldn't need to have any prior knowledge of the issue, nor would they have to invest time in discovery and analysis, but they could run the check and easily review and understand the results.

Other aspects of information security also benefit from the use of scanners.  Penetration testing and web application assessments benefit from scanners that include frameworks for providing new and updated checks to be run, and many of the analysts running the scanners have no prior knowledge of the checks that are being run.  Nessus (from Tenable) is a very good example of this sort of scanner; the plugins run by the scanner are text-based, providing instructions for the scanner.  These plugins are easy to open and read, and provide a great deal of information regarding how the checks are constructed and run.

Given all of the benefits derived from scanners in other disciplines within information security, it just stands to reason that digital forensic analysis would also benefit from a similar framework.

The forensic scanner is not intended to replace the analyst; rather, it is intended as a framework for documenting and retaining the institutional knowledge of all analysts on the team, and remove the tedium of looking for that "low-hanging fruit" that likely exists in most, if not all, exams.

A number of commercially available forensic analysis applications (EnCase, ProDiscover) have scripting languages and scanner-like functionality; however, in most cases, this functionality is based on proprietary APIs, and in some cases, scripting languages (ProDiscover uses Perl as its scripting language, but the API for accessing the data is unique to the application).

A scanner framework is not meant to replace the use of commercial forensic analysis applications; rather, the scanner framework would augment and enhance the use of those applications, by providing an easy and efficient means for educating new analysts, as well as "sweeping up" the "low-hanging fruit", leaving the deeper analysis for the more experienced analysts.

This scanner framework would be based on easily available tools and techniques.  For example, the scanner would be designed to access acquired images mounted read-only via the operating system (Linux mount command) or via freely available applications (Windows - FTK Imager v3.0, ImDisk, vhd/vmdk, etc.); that way, the scanner can make use of currently available APIs (via Perl, Python, etc.) in order to access data within the acquired image, and do so in a "forensically sound manner" (i.e., not making any changes to the original data).

The scanner is not intended to run in isolation; rather, it is intended to be used with other tools (here, here) as part of an overall process.  The purpose of the scanner is to provide a means for retention, efficient deployment, and proliferation of institutional digital forensic knowledge.
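As a concrete (if greatly simplified) illustration of the idea, here's a minimal sketch of such a plugin-driven scanner in Python; the plugins directory layout and the run(mount_point) plugin interface are assumptions made for this example, not the design of any actual tool:

```python
import importlib.util
import sys
from pathlib import Path

def load_plugins(plugin_dir):
    """Load each .py file in plugin_dir as a plugin module.

    Each plugin is assumed to expose a run(mount_point) function that
    only *reads* from the mounted image; forensic soundness comes from
    mounting the image read-only, not from the scanner itself.
    """
    plugins = []
    for src in sorted(Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(src.stem, src)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        plugins.append(mod)
    return plugins

def scan(mount_point, plugin_dir="plugins"):
    """Run every plugin against the mounted image and collect findings."""
    report = {}
    for plugin in load_plugins(plugin_dir):
        report[plugin.__name__] = plugin.run(mount_point)
    return report

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: scanner.py <path-to-mounted-image>
    for name, findings in scan(sys.argv[1]).items():
        print(f"[{name}] {findings}")
```

The point of the sketch is the division of labor: the engine stays dumb and stable, while all of the institutional knowledge lives in the plugins, each of which can be written, reviewed, and documented independently.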

Some benefits of a forensic scanner framework such as this include, but are not limited to, the following:

1.  Knowledge Retention - None of us knows everything, and we all see new things during examinations.  When an analyst sees or discovers something new, a plugin can be written or updated.  Once this is done, that knowledge exists, regardless of the state of the analyst (she goes on vacation, leaves for another position, etc.).  Enforcing best practice documentation of the plugin ensures that as much knowledge as possible is retained along with the application, providing an excellent educational tool, as well as a ready means for adapting or improving the plugin.

2.  Establish a career progression - When new folks are brought aboard a team, they have to start somewhere.  In most cases, particularly with consulting organizations, skilled/experienced analysts are hired, but as the industry develops, this won't always be the case.  The forensic scanner provides an ancillary framework for developing "home grown" expertise where inexperienced analysts are hired.  Starting the new analysts off in a lab environment and having them begin learning the necessary procedures by acquiring and verifying media puts them in an excellent position to run the scanner.  For example, the analyst either goes on-site and conducts acquisition, or acquires media sent to the lab, and prepares the necessary documentation.  Then, they mount the acquired image and run the scanner, providing the more experienced analyst with the path to the acquired image and the report.

This framework also provides an objective means for personnel assessment; managers can easily track the plugins that are improved or developed by various analysts.

3.  Teamwork - In many environments, development of plugins likely will not occur in a vacuum or in isolation.  Plugins need to be reviewed, and can be improved based on the experience of other analysts.  For example, let's say an analyst runs across a Zeus infection and decides to write a plugin for the artifacts.  When the plugin is reviewed, another analyst mentions that Zeus will load differently based on the permissions of the user upon infection.  The plugin can then be documented and modified to include additional conditions.

New plugins can be introduced and discussed during team meetings or through virtual conferences and collaboration, but regardless of the method, it introduces a very important aspect of forensic analysis...peer review.

4.  Ease of modification - One size does not fit all.  There are times when analysts will not be working with full images, but instead will only have access to selected files from systems.  A properly constructed framework will provide the means necessary for accessing and scanning these limited data sets, as well.  Also, reporting of the scanner can be modified according to the needs of the analyst, or organization.

5.  Flexibility - A scanner framework is not limited to just acquired images.  For example, F-Response provides a means of access to live, remote systems in a manner that is similar to an acquired image (i.e., much of the same API can be used, as with RegRipper), so the framework used to access images can also be used against systems accessed via F-Response.  As the images themselves would be mounted read-only in order to be scanned, Volume Shadow Copies could also be mounted and scanned using the same scanner and same plugins.

Another means of flexibility comes about through the use of "idle" resources.  What I mean by that is that many times, analysts working on-site or actively engaged in analysis may be extremely busy, so running the scanner and providing the output to another, off-site analyst who is not actively engaged frees up the on-site team and provides answers/solutions in a timely and efficient manner.  Or, data can be provided and the off-site analyst can write a plugin based on that data, and that plugin can be run against all other systems/images.  In these instances, entire images do not have to be sent to the off-site analyst, as this takes considerable time and can expose sensitive data.  Instead, only very specific data is sent, making for a much smaller data set (KB as opposed to GB).


Book Update
I've received the counter-signed contract from Syngress for Windows Forensic Analysis 3/e, and I'm finishing up a couple of the chapters to get in for review.  This book is NOT the same as 2/e, in that I did not start with the manuscript from that edition (the way I did when I started 2/e).  Instead, 3/e is a companion edition...if you already have 2/e, you will want to have 3/e, as well.  This is because the information in 2/e is still valid, and in many instances (in particular, information such as the PE file format, etc.) hasn't changed.  Also, 2/e focused primarily on XP, and those systems are still around...there hasn't been a huge corporate shift to Windows 7 yet.  As such, 3/e will shift focus to Windows 7, and will also focus more on solving problems, rather than simply depositing technical information in your lap and leaving you to figure out what to do with it.

Another new aspect of WFA 3/e is that rather than providing an accompanying DVD, the tools (as with WRF) will be provided online.  Providing the tools in this manner is just so much easier for everyone, particularly when someone purchases the ebook/Kindle version of the book, or leaves their DVD at home.  As with my previous books, I will do my best to provide functioning, tested code along with the book, and provide links to other tools mentioned, described, or discussed in the book.

Accessing VSCs
I've posted before on accessing Volume Shadow Copies, but thanks to a recent blog post from Corey Harrell, I thought that it might be a good idea to revive the topic.  In his post, A Little Help with Volume Shadow Copies, Corey walks through a means for automating access to several VSCs, as well as automating the collection of information from each.  Corey does this through the use of a batch file.

Accessing VSCs in this manner is nothing new...it's been around for a while.  This post appeared on the Forensics from the sausage factory blog over a year ago.  In this post, copying of specific files via robocopy is demonstrated, showing how to use batch files to mount VSCs, copy files, and then unmount the VSCs.  Corey's script takes a different approach, in that rather than copying files, he rips Registry hives using RegRipper (more accurately, rip.exe).  Corey was kind enough to provide a commented copy of one iteration of his batch file for inclusion in the materials associated with WFA 3/e (see above).

More than anything else, this is just the beginning.  Corey had a need and used already-available information as a stepping stone to meeting his needs.  Whether you use the VHD method for mounting images, or the VMWare method (described by Rob Lee and Jimmy Weg), or some other method, the fact is that once you mount the VSC, it's simply a matter of getting the job done.  You can either copy out the Registry hives, or do as Corey's done, and run RegRipper (you'll still have the image and VSCs to access if you need the original data) on the hives.  You can copy or query for other files, as well, or use other tools (some I'll mention later).  In fact, with the right tools and a little bit of thought, you can do pretty much anything...check files by hash, look for specific files, etc.  You may need to build some tools (or reach out to someone for assistance), or download some tools, but you can piece some pretty decent automated (and self-documenting) functionality together and achieve a great deal.
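For reference, the core of that batch-file approach looks something like the following; the shadow copy number, paths, and user name are hypothetical, and this assumes rip.exe and an "ntuser" plugins profile are already in place:

```shell
REM Sketch of the VSC-access approach -- paths and numbers are examples.

REM 1. Enumerate the available shadow copies
vssadmin list shadows

REM 2. Link a shadow copy to a directory (the trailing backslash matters)
mklink /D C:\vsc1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\

REM 3. Run rip.exe against a hive within the shadow copy
rip.exe -r C:\vsc1\Users\user\NTUSER.DAT -f ntuser > ntuser_vsc1.txt

REM 4. Remove the symbolic link when finished
rmdir C:\vsc1
```

Wrap steps 2 through 4 in a loop over the shadow copy numbers and you have exactly the kind of automated, self-documenting collection Corey describes.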

Open Source Tools Book
Speaking of books, the book that Cory Altheide wrote (I was a minor co-author), Digital Forensics with Open Source Tools (aka, "DFwOST"), has been published and should be available to those who pre-ordered it soon.  Also, a really good idea is to follow @syngress on Twitter...I've been asked a couple of times if I will be providing a discount; I didn't provide the discount, Syngress did via Twitter.  I simply "RT'd" it.  You should really check this book out.  Cory's goal was to provide folks with a basic understanding of digital forensics (and limited means) an understanding of some of the open source tools available to them, and how to get them installed and configured.  And he did a great job of it!

My books have focused on the analysis of Windows systems, and have discussed/described free and open source tools that can assist an analyst.  Cory's book focuses on the open source tools, and covers several that you can use to analyze Linux, MacOSX and Windows systems.

SANS Forensic Summit
I don't know if you've seen it, but Rob's posted the agenda for this year's SANS Forensic Summit, to be held on 7 and 8 June, in Austin, TX.  Check it out...there are a number of great speakers, and several panels, which have proven to be an excellent format for conferences, as opposed to just having speaker after speaker.

It looks like Chris is gonna kick right off with his "Sniper Forensics" presentation, which has been getting him a LOT of mileage.  Richard Bejtlich is also presenting, in addition to being on a panel on the second day.  All in all, it looks like this will be another great opportunity to hear some good presentations, as well as to mingle with some of the folks in the business who are right there in the trenches.

I wanted to give another plug for Brian Carrier's OSDFC, the open source conference coming up on 14 June in McLean, VA.  Cory Altheide and I will both be presenting; I'm presenting in the morning, and Cory's got clean-up in the afternoon; that's Brian's tactic to get everyone to stay, by saving the best for last!  ;-)  I hope that this will be another great opportunity to mingle with others in the community...I had several interesting conversations with attendees at last year's conference.  Also, don't forget...DFwOST is out!  Bring your copy and get both of us to sign it...although you may have to wait for the cocktail reception at the end for that! 

There's an announcement over at the DFS Forensics blog that scalpel 2.0 is available.  There are some interesting enhancements, and the download contains pre-compiled Windows binaries and the source code.

I received another question today that I see time and again, via email and in the lists/forums, having to do with LastWrite times on the USBStor subkeys and how they apply to the time that a USB device was last connected to the system.

In this particular case, the person who emailed me had confiscated and secured the thumb drive, and then found that the LastWrite time (apparently, the system itself was still active) for the USBStor subkey had been updated recently.

Folks, I really don't understand how this can be written and talked about so much, published in books (WFA 2/e, WRF, etc.) and STILL be so misunderstood.  Rob Lee's even made PDFs available that describe very clearly how to perform USB device analysis (XP, Vista/Win7).

If you want to know more about what may have caused the USBStor subkey LastWrite time to be updated when the device hadn't been connected, or more about why all of the USBStor subkeys have the same LastWrite time, put together a timeline.  Seriously.  I've seen both of these questions (some even include, "...I need to explain this in court..."), and a great way to answer it is to create a timeline of activity on the system and see what occurred around that time.

Wednesday, April 13, 2011


Book Review
Ken Pryor posted a review of Windows Registry Forensics over on his blog...I greatly appreciate the effort folks put into these reviews.  Thanks, Ken, for taking the time to read the book and put your thoughts into a blog post!

If you're thinking about purchasing the book, take a look at Ken's review or any of the reviews on the Amazon site.  I've also been fielding questions, which come in from time to time.

Book Sales Numbers
Speaking of books, I was able to get sales numbers for foreign language editions of Windows Forensic Analysis; of the two editions, the book has been translated into Chinese, French, and most recently Korean.  The numbers may be a bit off, as it took Elsevier (thanks, btw...) some time to get the numbers, but here's how the books are doing so far:

Chinese - 4000 copies printed, 3281 sold to date
French - 1000 copies printed, 494 sold to date
Korean - 1000 copies printed, 700 sold to date

Pretty nifty.

Speaking of books, a hard copy of Digital Forensics with Open Source Tools showed up on my doorstep today!  Cory Altheide was the primary author...heck, the entire book was his idea...and I have to tell you, he did a great job!  Once, in a galaxy far, far away (actually, it was on the IBM ISS ERS team, but close enough...), I worked with Cory and saw firsthand that he's one of the most knowledgeable and capable forensicy folks I've ever worked with.  Not only is Cory REALLY smart, but he also likes beer!  Actually, I think his preference is single malt scotch...I know that sounds like some kind of personal ad, but if you see him at a conference...you know what I'm sayin'!

At first glance, the book turned out really well.  I was more interested in the formatting and how some of the images turned out more than anything else; spelling issues weren't my primary focus.  The book is chock full of some really good information, and the content is mostly directed at beginners; however, I think everyone will find something useful.  For example, one of the open source tools that Cory described was the Digital Forensics Framework; I installed v1.0 on my Windows 7 analysis system today, and it fired up quite nicely (I'll be discussing DFF more in a later post).

Carbon Black
The guys over at Kyrus Tech are really moving along with Carbon Black.  If you haven't heard of this product, you really should check it out!  Cb is a lightweight sensor that monitors execution on systems, watching for new stuff being launched.

Kyrus recently sent out invitations to folks to download their latest version of Cb, and they've also set up a user forum (on Ning) for folks to engage with Kyrus and each other regarding the use of the sensor, and the resulting data.

Here's a good read on Cb vs. the RSA hack...

But Cb isn't just about security and IR...one of Kyrus' case studies involved cost reduction across an enterprise by determining how many employees were actually using the full breadth of an office application suite; by reducing the licenses in accordance with actual usage, and purchasing separate copies of the component applications for the employees who actually used them, the organization was able to realize a significant cost savings.

Aaron Walters is back at it again!  Prior to DFRWS 2008, Aaron had the first Open Memory Forensics Workshop, and I have to say, the format was a welcome change to many of the conferences I'd attended in the past.  Having short talks followed by panels was a great way to break up the long periods of sitting and listening, and I found the format engaging and stimulating.  Even better was the technical content based on who was there and presenting...all of the big names (Aaron, Moyix/Brendan, George M. Garner, Jr., etc) in memory acquisition and analysis were there, and it looks like Aaron's planning another OMFW soon!

Tuesday, April 12, 2011

Using RegRipper

I've received a couple of questions about RegRipper and its use, and I thought that I'd take the opportunity to provide some more information about the use of this free, open source tool.

First, let me say that Windows Registry Forensics (WRF) is something of a user guide for RegRipper.  I found that even though I had provided a PDF document and several blog posts that talked about how to use RegRipper, and answered a lot of questions in various lists and forums, there were still questions and some confusion.  In fact, in most cases, there seem to be the same questions again and again. In an attempt to address this situation, I thought that perhaps writing a bit more extensive user guide for RegRipper and providing it in one location, in WRF, would be useful.

An example of the questions I receive has to do with getting the UserAssist data from an NTUSER.DAT hive file collected from one of the versions of Windows.  As it says on pg. 185 of WRF, the first UserAssist plugin was written specifically for Windows XP systems, while the second was written to work on all versions of Windows.  There is also a third RegRipper plugin, which was written in 2008 in response to the use of Vignere encryption (vice ROT-13) of the value names in Windows 7 Beta.  So, if you want to get UserAssist information from any version of Windows, except Windows 7 Beta, you can use the second plugin.

In short, RegRipper runs plugins, which are simply Perl scripts (the files that end in ".pl") located in the ".\plugins" directory of the installation.  You can run a list of plugins against a hive file by selecting a plugins file or "profile", which is a flat text file, with NO extension, that has the plugins listed in order.  Within the profile, lines that begin with "#" are treated as comment lines and skipped...this allows you to comment out specific plugins or add your own documentation.

So, again...RegRipper (both the GUI and the CLI version, "rip") is similar to the Nessus vulnerability scanner, in that it is simply an engine that runs plugins.  The "plugins" are Perl scripts located in the ".\plugins" directory...files that end in the ".pl" extension.  If you want to run more than one plugin against a particular hive at a time, you can create a "plugins file" or "profile", which is a file with NO extension located in the ".\plugins" directory; this file is simply a text file that contains a list of plugins to be run, in order, with one plugin (drop the ".pl" extension from the plugin name) listed on each line.  You can comment the profile using "#"...RegRipper ignores lines that start with this character.
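As a quick illustration, a profile for an NTUSER.DAT hive might look like the following (the plugin names here are examples for illustration; check your own ".\plugins" directory for what you actually have installed):

```
# Example profile - a flat text file with NO extension, saved in .\plugins
# One plugin per line, ".pl" extension dropped, run in the order listed
userassist
recentdocs
# the next plugin is commented out, so it will be skipped
#typedurls
```

You'd then select this profile in the RegRipper GUI, or point rip at it from the command line, and the listed plugins get run against the hive in order.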

Listing Plugins
To get a list of plugins (files with ".pl" extension located in the ".\plugins" directory), there are a couple of things you can do.  The package shipped with WRF, as well as provided online, includes the Plugin Browser, a GUI means not only for seeing details about the available plugins, but also building or editing profiles.  Or, if you like, you can run the following command from the command line:

C:\tools>rip -l

This command will provide a list of plugins right to STDOUT.  Another option, to provide you with the same information in .csv format, would be to use the following command line:

C:\tools>rip -l -c > plugins.csv

Just open the resulting file in Excel or your favorite spreadsheet application, and sort to your heart's content!
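If you'd rather script that inventory yourself, the idea is easy to sketch...the plugins are just files ending in ".pl", so listing them is a directory walk.  (This is a Python illustration of the concept, not part of RegRipper; the directory path is whatever your installation uses.)

```python
import os

def list_plugins(plugins_dir):
    """Return a sorted list of plugin names: the ".pl" files with the
    extension dropped, just as they would appear in a profile."""
    names = []
    for entry in os.listdir(plugins_dir):
        if entry.endswith(".pl"):
            names.append(entry[:-3])  # drop the ".pl" extension
    return sorted(names)
```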

Another thing...if you have any questions about the syntax for rip, simply type the following command at the command prompt:

C:\tools>rip -h

Other switches (such as "/?") will work, as well.  And hey, if worse comes to worst and you just don't like the command prompt, open the script itself in Notepad or a text editor!  ;-)

Reporting Issues
When you do have what appears to be an issue, sometimes it's very helpful to look at a couple of things first.  You can actually do a bit of troubleshooting on your own, and it doesn't require any programming ability to do so.  When I first released RegRipper back in 2008, several people I knew ran it against the live NTUSER.DAT on their system.  Don't do that...RegRipper is intended for "dead box analysis", meaning that it's designed to be run against hive files extracted from other systems, not against the hives from the system you're currently logged into.  Others ran it against hive files from the systemprofile directory, and one person even ran it across a file named "NTUSER.DAT" that was 256K of zeros.  So, if you have an issue...try looking at the file in a viewer (there's an excellent free one listed in WRF).  Maybe the reason the plugin is telling you that a key or value doesn't exist is that it doesn't exist (or RegRipper can't find it in the provided path).  Also, look at the version of Windows you're running the plugin against.  Where this can be important is, for example, with the UserAssist data, as the UserAssist subkeys (those listed between the UserAssist and Count keys) are different from XP to Windows 7.  Another one is the ACMru key...running the ACMru plugin against a Windows 7 NTUSER.DAT won't reveal any information, as that key isn't used on Windows 7.
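Some of that basic triage is easy to automate.  A legitimate hive file begins with the "regf" signature in its first four bytes, so a quick check like the following (a Python sketch of the idea, not something RegRipper does for you) will catch an all-zeros or otherwise bogus "hive" before you start blaming the plugin:

```python
def looks_like_hive(path):
    """Sanity check a suspected Registry hive file: real hives begin
    with the 4-byte 'regf' signature."""
    with open(path, "rb") as f:
        magic = f.read(4)
    return magic == b"regf"
```

If this returns False for that file named "NTUSER.DAT", the problem isn't the plugin.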

If, at this point, you still can't figure out what the problem is, please feel free to contact me, and include a concise, thorough description of the issue.  For example, please be sure to include the version of Windows from which the hive was acquired, which hive you're working with, and which plugin you used.  If possible, please provide a copy of the hive.  Also, there are several plugins that are now available that I didn't write, so it might also be a good idea to provide the plugin itself.

Finally, remember...RegRipper is free, and open source.  This means that you can write your own plugins (WRF explains how...) and you can see what various plugins do simply by opening them in a text editor.  Many of the plugins I wrote and provided with the distribution contain links to references in the comments of the plugin, which can be very useful for validation, and even just as general interest.  I know a lot of folks are going to say, "...but I don't program, nor do I understand Perl...", and that's okay...in many cases, there is some plain English in the comments of the plugin that tells you what it's trying to do.

A great big THANKS to Brett Shavers for setting up and maintaining the RegRipper site.

Monday, April 11, 2011

Links and Stuff

Digital Forensic Search
Corey Harrell's done some pretty interesting things lately...most recently, he set up a search mechanism that targets a specific subset of Internet resources that are specific to the digital forensics community.  Sometimes when we're searching for something, we head off to our favorite search engine and cast a wide net...and we may not get that many hits initially that are pertinent to what we're looking for; by narrowing the field a bit, more relevant hits may be returned.

One of the issues with the community is that there's a lot of good information out there, but it's out there.  Many analysts have expressed a bit of frustration that they can't seem to find what they're looking for when they need it, and that they don't know that they need it until...well...they need it.  I've also talked to people who've done hours of research but not documented any of it, so when the issue they were working on comes up again, they have to go back and redo all of that research.

Rootkit Evolution
Greg Hoglund posted to his Fast Horizon blog recently, and the title...Rootkit Evolution...sparked my curiosity.  Sadly, when I read the post, it wasn't really anything more than a sales pitch for Digital DNA, whereas I had expected...well...something about the evolution of rootkits.  I mean, that's kind of what the title suggested.  One statement from the blog did catch my interest, however:

...we are still ahead of the threat.

While I don't disagree with this, I would suggest that attackers may find that it isn't necessary to employ rootkit technology.  Now, don't get me wrong...I'm sure that this does happen.  But for the most part, is it really necessary?  Look at many of the available annual reports, such as the Verizon Business Security report, M-Trends, or TrustWave's Global Security Report...some of the commonalities you may see across the board include considerable persistence without the need to deploy rootkits.  Is the research important?  Yes, it is.  Rootkits are still being used (see the Chinese bootkit).  Now and then, these things pop up (well, not really...someone finds one...) during an incident, as a well-designed rootkit can be very effective.  It's just like NTFS alternate data streams...as soon as the security community considers them passe and stops looking for them, that's when they'll make a resurgence and be used more and more by the bad guys.

What a Tweet Looks Like
Ever wondered what a tweet looks like?  I'm sure you have!  ;-)  By way of a couple of different links comes a very interesting write-up of what a tweet looks like from a developer's perspective...click on the big picture in the middle of the post to enlarge the map-of-a-tweet (or "Twitter Status Object").  Most forensic analysts will likely look at the map and see the value right away.

Okay, so how would you get at this?  This sort of information would likely be in some unstructured area of the disk, right...the pagefile or unallocated space.  So, if you were to run strings or BinText against the pagefile or unallocated space extracted from an image via blkls, you would end up with a list of strings along with their offsets within the data.  What I've done is write a Perl script that goes into the data at the offset that I'm interested in, and extracts however many bytes on either side of the offset that I specify.  I've used this methodology to extract not only URIs and server responses from the pagefile, but also Windows XP/2003 Event Log records from unallocated space, translating them directly to timeline/TLN format.  Doing this provided me with a capability that went beyond simply carving for files, as I needed to carve for specific, perhaps well-defined data structures.

Something like this could be used to quickly and easily extract tweets from unallocated space or the pagefile.  Run strings/BinText, then search the output to see if you have any unique search terms, such as a user name or screen name.  Then, run the script that goes to the offset of each search term and extracts the appropriate amount of data.  This can be extremely valuable functionality to an examiner, and can be added to an overall data extraction process using free and open source tools.
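My script is written in Perl, but the core of the technique is simple enough to sketch in Python: take the offset reported by strings/BinText for a search hit, and pull back a window of surrounding bytes.  (The function name and window sizes here are mine, purely for illustration.)

```python
def extract_window(data, offset, before=64, after=64):
    """Extract the bytes surrounding a hit at 'offset', clamped so we
    never read outside the bounds of the data."""
    start = max(0, offset - before)
    end = min(len(data), offset + after)
    return data[start:end]

# e.g., find a unique search term in data pulled from the pagefile,
# then grab the surrounding structure for closer examination:
#   hit = data.find(b"screen_name")
#   chunk = extract_window(data, hit, before=256, after=1024)
```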

Writing Open Source Tools
The above section, the imminent publication of Digital Forensics with Open Source Tools (the book was Cory Altheide's idea and he was the primary author; it's due to be published on 15 April), and the upcoming Open Source Forensics Conference (at which Cory and I will both be speaking...) all combine to make a good transition to some comments on writing open source tools.  Also, this is a topic that Cory and I had considered addressing in the book, but had decided that it was too big for a sidebar, and didn't quite fit in anywhere in particular.  After all, with all of the open source tools discussed in the book, we would really need to get input from others to really do the topic justice.  As such, I thought I could post a few comments here...

For me, writing open source tools starts as a way to serve my own needs when conducting analysis.  Throughout my career, I  have had access to commercial forensic analysis applications, and each has served their purpose.  However, as with any tool, these applications have their strengths and weaknesses.  When conducting PCI forensic investigations, a commercial application made it easy to set up a process that all of the analysts could employ, but we also found out that some of the built-in functions were not exactly accurate, and that affected our results.  The result of that was to seek outside assistance to rewrite the built-in functions, in order to get something that was more accurate and better suited to our needs.  We would then export the results and run them through an open-source process to prepare an accurate count of unique numbers, etc.

So, sometimes I'd write an open source tool in order to massage some data into a format that is better suited to presentation or dissemination.  However, there have been other times when no commercial application had the functionality I needed, so I wrote something to meet my needs.  A great example of this is the MBR infector detector script.  Another is the script I wrote to carve Windows XP/2003 Event Log records from unallocated space.

I can guess that one response to all this is going to be, "...but I don't know how to program...", and my response to that is, you don't have to...you just have to know someone who does.  Not every analyst needs to know how to program, although many analysts out there can tell you that understanding programming (anything from batch files all the way to assembly...) can be extremely beneficial.  However, having someone who understands what you do and can program can be extremely beneficial when it comes to DFIR work.

Too many times, when it comes to DFIR work, analysts are sort of left on their own.  Business models often dictate the necessity for this...but having a support mechanism for engagements of all kinds can be an extremely effective means of extending your team's capabilities, as well as preserving corporate intellectual property.

Even if you aren't part of a DFIR team, you can still develop and take advantage of this sort of relationship.  If you know someone within the community with programming skills, what does it hurt to seek their assistance?  If they, in turn, provide you with effective, timely support, then you have a great opportunity to further the relationship by supporting them in some manner...even if that's just a "thank you" for their efforts.  Many folks with some programming capabilities are simply seeking new challenges and new opportunities to learn, or employ their skills in new ways.  So when it comes to writing open source tools, many times, the only real "cost" involved is a "thank you" and acknowledgement of someone's efforts to support you.

Lenny's got a post up that lists three tools for scanning the file system for malware with custom signatures.  These are all excellent tools; in fact, if you remember the instructions (from MHL) I had provided regarding installation on Windows systems, two of the tools that Lenny mentions (ClamAV, Yara) can be included in that setup, and the signatures used to locate suspicious files.

Signatures are one way to locate malware and other suspicious files on a system.  However, signatures change, and they must be kept up to date.  You can also use signatures to locate all packed files, as well as files "hidden" using other obfuscation methods.

Keep in mind, however, that this is only part of the solution.  Because signatures within malware files do change, we also need to consider the network, memory, and other parts of the system (i.e., Registry, etc.) to look for indicators of malware.  In fact, many times, this may be our first indicator of malware.  I've found previous infection attempts where malware had been loaded on a system, only to be detected and quarantined by the installed AV product.  I could see the names of the files within the AV and Application Event Logs.  Interestingly enough, files of the same name were created a couple of weeks later, indicating that the bad guy had obfuscated his malware so that the AV wouldn't detect it, and was able to get it successfully installed.

There's more to malware detection than just scanning files for signatures.  If all you have is an acquired image from a system, and a malware infection was suspected, there are a number of other things you can look at in order to find ancillary indicators of an infection.  Scanners should be part of the malware detection process.

Thursday, April 07, 2011


Digital Forensics Framework
The guys over at DFF have an open-source framework used as both a digital investigation and development platform.  As this is an open-source tool, Cory did discuss a previous version of this tool in the Digital Forensics with Open Source Tools book.

The DFF guys recently posted on Time Filtering, using DFF to filter the image based on time stamp information.

While I think that this is a great step forward, I also think at the same time that this sort of data visualization is of limited value.  As I've been creating timelines, I've been looking at ways to potentially present the information in a visual manner that would make analysis easier and more efficient; to be honest, I have yet to find something like this.  Others have talked about such presentation methods as a histogram showing volumes of activity, but in the same breath, they'll also talk about malware following the Least Frequency of Occurrence (LFO) on systems; I'm not sure that showing spikes in activity necessarily lends itself to finding those things that occur least frequently on a system.

Craig Ball wrote this article for Law Technology News, on the use of antiforensics.  Many times, measures taken to foil the work of forensic analysts were originally intended as a privacy measure, but even if those actions are intended specifically to hide the user's activities from the analyst, they are often not even a speed bump in the road to analysis.

During an investigation I determined that an "evidence eliminator" application had been used.  This analysis was of an older case, and the image was from a system that had been acquired several years prior to my analysis.  When I did some research on the version of the application, I found that it deleted specific Registry keys...but I was able to recover the most recently deleted keys from unallocated space within the hive file itself.  Previous subkeys and values were recovered via the hives found in the System Restore Points.
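The mechanism behind that recovery is worth sketching.  Within a hive file, every cell is preceded by a 4-byte signed size value; the size is negative for an allocated cell and positive for a free (i.e., deleted) one, and a key cell carries the "nk" signature four bytes into the cell.  The following Python fragment illustrates the idea (real tools walk the hbin structures properly rather than pattern-matching raw bytes, so treat this as a concept sketch):

```python
import struct

def find_deleted_key_cells(hive):
    """Scan raw hive bytes for 'nk' (key node) signatures that sit
    inside free cells, returning the offsets of the candidate cells."""
    hits = []
    pos = hive.find(b"nk")
    while pos != -1:
        if pos >= 4:
            # the 4 bytes immediately before the signature are the
            # cell's signed size; positive means free/deleted
            (size,) = struct.unpack("<i", hive[pos - 4:pos])
            if size > 0:
                hits.append(pos - 4)
        pos = hive.find(b"nk", pos + 1)
    return hits
```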

Antiforensics techniques target the training of the analyst...that's pretty much it.  For more information, see the Parsing EVT Records section below.

The DFF guys also included a link to the Digital Corpora site, from which the NTFS image described in the DFF blog post was downloaded.  This is a great place to go to get access to some images, one of which is of a Vista system, apparently.

One issue that continues to be a threat is disgruntled former employees.  Gucci was recently confronted with this issue.  What's interesting about the post to the Sophos NakedSecurity blog is that the fired former employee reportedly gained access to the network by first having created an account for a fictional employee, and then, after being fired, social engineering the helpdesk and convincing them that he was that fictional employee.  After that, he was able to return time and time again.  This is just an example of how someone can use an organization's process against itself, taking full advantage of that process to perform a wide range of malicious actions.

Using RegRipper
I recently received the following quote from someone who used RegRipper and one of its plugins, but asked to remain anonymous (permission was given to post the quote, however):

I have the date and time in which an IDS caught a piece of malware being downloaded on the network to a user's machine. I need/needed to look for clues to see if the exe actually executed or not. I was using FTK's registry viewer to create a timeline of last write times for Keys but Registry Viewer doesn't let you export in a format other than HTML, which is just not helpful.

RegRipper gives me a nice line by line way to see the time and date stamps in a way in which they are much more viewable, WHICH IS GREAT. 

Now, I'm not posting this to poke fun at nor admonish AccessData...not at all.  I'm also not saying that one tool is any better or worse than another...all tools have their strengths and weaknesses, and the real power of a tool is in who uses it.  I wanted to post this publicly to demonstrate to those who may not have used RegRipper, or be familiar with it, that even though it's not a commercial tool, it can still be very useful.  I tend to think that a number of folks in the DFIR community use specific tools because they feel that they have to...their employer purchased a tool or set of tools, based on some ancillary knowledge of the industry or due to a customer requirement.  As such, there's considerable reticence toward trying or incorporating new tools, and rather than seeking the best tool to solve the problem, the problem is redefined to conform to the tools being used.

Open Source Conference
Speaking of tools, Brian Carrier sent out an email recently announcing the Sleuth Kit and Open Source Digital Forensics Conference on 14 June 2011 in McLean, VA.  The day before the conference, there will be "two half-day workshops that will allow you to get hands-on experience with analyzing web browser artifacts and making timelines with open source tools."

Speakers at the conference will include Cory Altheide, Brian Carrier and Jon Stewart.  You had me at "Cory Altheide".  ;-)  While remaining a fairly brief conference, this still looks as if it will be a good one, and I'm hoping that Cory and I will have copies of Digital Forensics with Open Source Tools available.

Chinese Bootkit
There's a new post over on the ThreatPost blog that discusses a Chinese bootkit.  There's some interesting information available, and a graphic that demonstrates the process by which systems are infected.  Part of that process includes an MBR infector, something for which I'd written a Perl script to help me detect during forensic analysis.  Unfortunately, there isn't a great deal of information available in the blog post about the MBR infector, but I will say that it appears that these sorts of malware are popping up more frequently, so this is definitely something you would want to include in your malware detection process.  After all, with the right tools, it only takes a few seconds to check for the possibility of an MBR infector, so we're not talking about extending your process by a day or more.  This Net-Security article indicates that the MBR infector copies the original MBR to the third sector, so the MBR infector detector would work very well in helping you find indications of this bit of malware.
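If the Net-Security description is accurate, the check itself is trivial to sketch (this is a Python illustration of the concept, not my original Perl script; it assumes 512-byte sectors, with "the third sector" being index 2):

```python
SECTOR = 512

def mbr_relocated(image, copy_sector=2):
    """Compare sector 0 (the MBR) against another sector; an exact match,
    complete with the 0x55AA boot signature, suggests the original MBR
    was copied aside, as some MBR infectors do."""
    mbr = image[:SECTOR]
    other = image[copy_sector * SECTOR:(copy_sector + 1) * SECTOR]
    return mbr == other and mbr[510:512] == b"\x55\xaa"
```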

Parsing EVT Records
Lance recently posted about an EnScript he provided to help parse "classic" Windows Event Log (.evt) records from unallocated space.  This is very similar to my recent post about the same thing, albeit the fact that the approach I took uses only free and open-source tools; however, if you're a heavy EnCase user, you'll probably want to go with Lance's solution.  More than anything else, I think that what this shows is that there's a need for these sorts of things within the community...many times, there simply isn't one, single way to accomplish something, and having multiple tools is a good thing.
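For those curious about the approach, "classic" .evt records lend themselves to carving because each record begins with a 4-byte length, followed by the "LfLe" magic number, and repeats the length in its last four bytes.  Here's a simplified Python sketch of the idea (my actual script is Perl, and does more validation than this):

```python
import struct

MAGIC = b"LfLe"  # sits 4 bytes into every EVENTLOGRECORD

def carve_evt_records(data):
    """Carve candidate Windows 2000/XP/2003 Event Log records from a
    blob of raw data (e.g., unallocated space extracted with blkls)."""
    records = []
    pos = data.find(MAGIC)
    while pos != -1:
        start = pos - 4
        if start >= 0:
            (length,) = struct.unpack("<I", data[start:start + 4])
            end = start + length
            # sanity checks: plausible size, record fits in the buffer,
            # and the trailing length matches the leading one
            if 0x38 <= length <= 0x10000 and end <= len(data) \
                    and data[end - 4:end] == data[start:start + 4]:
                records.append(data[start:end])
        pos = data.find(MAGIC, pos + 1)
    return records
```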

I've recently received a number of requests to share this code and technique, and the first time I did so, I sent the script within 10-15 min of receiving the request.  And then I didn't hear a thing back until I followed up three days later.  Is it so hard to thank someone for sending you something that you asked for, and just acknowledge that you received it?

I was reviewing the offerings on the "What's New" page at the web site, and found Nick Klein's presentation from RuxCon.  Interestingly, slide 9 includes the bullet, "Be specific in defining the objectives and what evidence might assist in determining the facts."  Slide 11 of that presentation is all about documenting what you do.  This is interesting to me because it's very similar to what Chris talks about in his Sniper Forensics presentations.

Malware Analysis
MalwareAnalyzer 2.9 was released recently.  This project is written in Python, but provided as a Windows executable.  I haven't seen too much out there about this one, but projects like this are always worth a look.

Tuesday, April 05, 2011

Readin' and Writin'

Richard Bejtlich recently wrote an interesting blog post on reading, and followed that up with some answers to questions posted as comments to the first post.  In his first post, Richard discusses several types of reading; I tend to find myself reading mostly for information or entertainment, but when I'm writing a report or book, I will most often resort to a proofreading style of reading as I go back over what I've written.  Right now, my entertainment reading consists of a book on the life of George Washington and the ebook version of William Gibson's Zero History.

Many times, when I read for information, thoughts and ideas marinate and percolate, not just with respect to what I'm currently reading, but also including other sources...stuff heard in podcasts, other books or whitepapers read, etc.  For example, when I was reading Will Gragido and John Pirc's Cybercrime and Espionage, something that I read combined with work I'd done in the past...PCI breach investigations, and the QSA certification/re-cert classes I was required to attend...to ignite some interesting ideas.  So while I was reading, I would write down some notes, and then revisit those notes later after I'd finished reading, or even a day or two later.

Now, this is a good place to transition from types of reading to types of writing.  One type of writing, such as note taking, is meant for personal information retention.  Often, we'll take notes and jot down little missives as a way to remind ourselves of something, or simply to document what we might have thought was a good idea at the time.  Another type of writing (documenting case notes, report writing, book writing, etc.) is meant more for transmitting information to others.  This style of writing will encompass a variety of forms, but for the most part the overall goal is to preserve and transfer information for others to use.

One method of writing I made great use of while I was in the military is illustrated in how I would write fitness reports ("fitreps"), the military term for "personnel evaluations".  I would start the fitreps several weeks out by consulting my platoon commander's notebook, and jotting down some notes to myself with respect to key elements I would like to highlight in the report.  Then I would set aside specific time for myself to revisit these notes over the next couple of days or weeks, allowing my thoughts to "marinate" and crystallize a bit.  By focusing for an hour or two each week, I could get the reports completed in a manner that I was very comfortable with, rather than rushing at the last minute and submitting something that I wasn't comfortable with and wasn't complete, and might have a detrimental effect on the Marine's career at some point in the future.  I'd actually seen the effect that poor planning and writing had on a Marine's career; one of my Staff Sergeants was applying for the Warrant Officer program, and the selection committee had found a fitrep written on the SSgt in which the reviewing officer, a Marine Captain, had stated that the SSgt deserved to be awarded a medal...but never submitted a write-up for the medal.  In short, poor planning and execution could have a negative impact on someone's career later on down the road.

As a community, I don't think that we do enough reading or writing.  By reading, I mean really reading for comprehension, and by writing, I mean really writing to convey some sort of information.  Too many times, I see questions being asked in online forums, and the response that is received has no correlation to the question; it's as if whoever read the question had only read every third or fifth word, and just answered another question altogether.  Also, we all see emails and posts to list servs and forums that would benefit greatly from spellcheck or just a review for grammar.

It all comes down to thought processes.  One simple way to expand our horizons is to read something, and when we do, put more thought than just "neat!" into what we're reading.  Another way to expand and develop ourselves professionally is to write more for public consumption.  When you read something, do you think critically about what you're reading?  Or do you simply accept it without question because the person writing it is someone that you or others consider to be an "expert"? 

If you think that something you've read is a good idea, why do you think that?  Is it because it would be (or would have been) useful to you in some way?  How so?  Can you articulate that?

Ultimately, what we do ends up in some sort of written form.  When we perform analysis, many times the end result is a report to someone...our boss, a customer, etc.  The way we provide value through the reporting process is to think critically, and provide a clear, concise description of our findings to the customer, in a manner that they can understand and use. 

So how do we go about doing this?  A lot of us read...even if it's 140 characters or less at a time, we tend to read stuff, right?  So what do you do at that point?  Do you simply retweet, or do you use that as the beginning of a blog post?  If you are writing a blog post, are you simply providing a link to what you read, or are you writing a description of how the information was impactful or meaningful to you?

There are a lot of opportunities out there.  For example, look at what Corey's done over at the Journey into IR blog, with his exploit artifacts posts.  This is just one example of what can be done.  Regardless of the route you choose to take, or the path you choose to follow, reading something within the community (book, tweet, blog post, etc.) and not taking the opportunity to think critically about what was read and to open a discussion on the topic, sharing your thoughts with others...all of this is simply a missed opportunity.

More on Breaches...no pun intended.  No...wait.  I totally meant it.

I was reading through some news items recently, and came across an article on Yahoo! news that refers to the recent Epsilon breach, in which, it appears, a number of email addresses were exposed.  However, some things about the article I read caught my attention...

From the article:
Epsilon said that while hackers had stolen customer email addresses, a rigorous assessment determined that no other personal information was compromised. By itself, without passwords and other sensitive data, email addresses are of little use to criminals. But they can be used to craft dangerous online attacks.

Okay, is it just me, or are the last two sentences contradictory?  I mean, without other information, email addresses are useless to criminals...except when they are used to "craft dangerous online attacks."  What are "dangerous online attacks"?  Well, Uri Rivner, the Head of the Security Division at EMC, recently posted "Anatomy of an Attack" on the RSA blog, and states in that post that the breach to their systems started with a phishing attack...specifically crafted emails were sent to specific individuals, knowing that at least some of them would be likely to click on the attachment.

Now, dear reader, please do not assume that these two incidents are tied together in any way, as that's not what I'm saying, nor am I suggesting it.  But what I am saying is that there is a considerable disconnect between what some think online criminals want or need, and what they actually end up going after.  For example, what does the second sentence in the above quoted paragraph sound like to you?  To me, it sounds like a justification to NOT have to notify, based on state notification laws (the first of which was California's SB 1386).  Think about it.  By specifically stating that all of this other personally identifiable information (PII) was not exposed along with the email addresses, there's now justification that the breach laws don't come into play, and by extension/implication, neither do any compliance regulations.

Okay, but someone still accessed your systems and took this information.  And by "took", I'm referring to the fact that while you still "have" the information, so do they.  So this isn't like real-world theft where someone steals your car and you no longer have possession...this is an instance where the confidentiality of the information on which your business runs has been compromised.

In addition, the article goes on to indicate that more than just the email addresses themselves were exposed...the businesses (banks, hotels, etc.) that the owners of those email addresses frequent were also exposed.

Do you know what this reminds me of?  The designer drug trade.  Apparently, designer drugs are outlawed based on chemical structure, so once one drug is outlawed, a chemist comes up with a new, potentially more powerful drug with a different chemical structure, which remains legal until it's discovered by law enforcement and broken down enough to be uniquely identified.  This breach reminds me of that because an organization was breached, and the critical information on which its business relies was compromised...but not enough information to trigger notification under state breach laws, based on the specific definition of PII.  However, the information that was compromised...this email address belongs to someone who uses CitiBank, etc...can still be employed to devastating effect (refer back to the RSA breach).

So, Epsilon is notifying its customers...and folks like @briankrebs on Twitter are tracking notifications, with some individuals reporting that they've received five or more.  Overall, that's a good thing.  We see in the media all the time that those who get out in front of an incident and are open about it fare much better in the long run than those who bury it in legalese or just flat out deny that it happened.  But I wonder how things would have worked out had the organization taken a proactive approach and done a better job of preparing for an incident.  For example, I wonder what effect having Carbon Black installed on systems would have had on the overall incident detection, and ultimately, the response.

Folks, the fact is that the instant you think that you don't have anything anyone would want or could use, you've already lost...it's a total Sun Tzu thing.  Even if you cannot possibly imagine how someone would use or profit from the data that you process or store, someone else likely already has.  At the very minimum, you've likely got CPU cycles, RAM, and storage space that can be used as a staging area or jumping-off point.

So, what do you do about this?  One way to address the inevitable security incident or breach...after all, the last couple of months should have clearly demonstrated to everyone that NO ONE is immune...is to be ready for it to happen.  So why not seek out a trusted adviser, someone who has dealt with breaches and incidents across a wide range of clients, and cultivate a relationship?  Incident preparation is more about a change to your corporate culture than it is about purchasing devices and software; a great deal of preparation can be done without purchasing a single device.  The lack of visibility that most organizations have will likely need to be addressed by some sort of purchase, but we're not talking about dropping a truck-load of gear off at your doorstep.  There's a great deal that can be done beforehand; otherwise, you're likely to be sold whatever solution is offered by the first vendor you call.

Monday, April 04, 2011

Breaches, Links, and other stuff...

As many of you know, I am (quite thankfully) no longer involved in Visa PCI breach investigations.  However, something I saw in the news recently did attract my attention; specifically, Briar Group LLC is the first restaurant chain fined under the Massachusetts data breach law (here).

So why is this interesting?  Well, during most of the time I was doing this PCI work, I was rarely aware that any of the organizations I encountered got fined.  Yes, there were one or two that I heard about later, but like I said, it was rare.  Mostly, I'd go on-site, scope the incident, collect data, return to the lab to analyze the data and submit my report...and that was it.  Now, I didn't really need to know what happened, to be honest, but after the first year, organizations that had been breached would ask me, "What happens after you submit your report?", and I really didn't have much of an answer.  As far as I could tell, QIRA firms had only two obligations with respect to PCI: keep their analysts certified, and submit reports within the specified timeframe.

This article provides a good deal of information, and for me, a bit of deja vu.  For example, while the actual issue isn't described in any detail, the incident itself is described as having first occurred in April 2009, and went undetected until December 2009.  If you have reviewed any of the cumulative annual reports that have come out over the years, and in particular the Verizon Business Data Breach Investigations Report, you'll notice that this sort of thing isn't unusual at all; organizations rarely detect an intrusion while it's in progress, but rather get notified by an external third party, well after the incident occurred.  I think that this is in part due to a lack of visibility into their networks, but I also think that there is a lack of intelligence provided by the oversight organization (which collects all of the reports), and then acted upon by those organizations that fall victim to these breaches.

Restaurants aren't the only organizations affected by these breaches.  Other businesses, such as credit unions, are being breached, as well.  While it may appear that credit unions are subject to a different set of compliance measures, these measures really aren't all that different from what's been mandated for other organizations.

With respect to the RSA breach, Federal News Radio 1500 AM has an interview with Gene Spafford, Executive Director, CERIAS, that is well worth a listen.

Getting Help
I'm not going to pick on LE for this one, because this same sort of thing applies to others, as well...but even with "simple" customer service stuff, I tend to get names, titles, and such when talking to folks.  In this particular case, it appears that the information on the cell phone was thought to be important, so one would think that all due care in handling the device would be the order of the day.

I recently had an issue I was attempting to address and wanted to get some credible information before proceeding.  As such, I reached out to Eoghan and Terrence at cmdLabs, and I have to say that their response and support were very much appreciated.

The point of all this is something that I think has its roots in the community fragmentation discussion that took place recently.  For my money, if the situation were more than just idle curiosity, I'd seek credible assistance.  However, having observed the industry for a number of years, I still believe that many analysts are simply afraid (for whatever reason) to ask for or seek out assistance, and others are simply asking the wrong questions to begin with.

What are your thoughts?

APT Tabletop Exercise
This recent post (link above) over on the SANS ISC blog caught my attention.  One statement in particular that caught my eye was:

If you're not on the obvious-targets list already...

Kevin goes on in the post to point out that every organization has its secrets, and he's absolutely right.  More importantly, I would suggest that it's the height of folly to think that you don't have something someone wants, whether that something is intellectual property, or simply storage space and CPU cycles.  I mean, here in the US, it's getting close to tax time, so if you were to survey a wide range of systems, you would probably find tax-related software (TurboTax, etc.) and tax data on many of them.  Also, loading a key logger or 'data jacker' on a system will often result in online banking or shopping credentials being revealed.  Ultimately, a compromise may come down to simply the connectivity and processor power, with the compromised system becoming part of a botnet.

Speaking of APT and preparing, it would probably be a good idea to take a look at the RSA blog, and specifically the Anatomy of an Attack post, written by Uri Rivner.  The post does a very good job of covering the general methodology of how these attacks occur, and even provides information about the specific details regarding the spear phishing attack that led to the initial entry.

For folks who are more detail-oriented, such as myself, the rest of the post is light on actual details, with the exception of some domains used by the attackers.  A few specifics are mentioned, such as the use of FTP to transfer files, but elsewhere the methodology is discussed only at a high level.  That being said, there is enough information there to infer the nature of what occurs on systems compromised by these actors.

Looking at the details that were provided, I have to wonder what that post would look like had Carbon Black been installed on critical systems within the infrastructure.  Speaking of which, the early release of Carbon Black starts today...drop on by the Kyrus web page and check it out!

Brad Garnett moved his blog; it's now the Digital Forensic Source blog.  Brad hasn't been terribly prolific, but if you follow the Case Leads posts on the SANS Forensics Blog, you'll find Brad's posts to be similar, providing links to items you might otherwise have missed.  Sometimes you may run across something that will lead to another blog or blog post or media article that's of interest.

Speaking of blogs, Richard took the time to answer some questions from a previous post (on Reading) over on the TaoSecurity blog.  I thought that the previous post was interesting, but found the questions that Richard chose to respond to even more so.

Over the years, I've found Richard's reviews of various books to be very insightful, and it has been clear that he's put considerable effort into them.  I have always suspected that he does this because he puts his name on the review (i.e., doesn't post it under Anonymous), and realizes that the entire review reflects upon him and his position.  I've seen quite a number of reviews of digital forensics books that amounted to "It was good" or simply "It sucked."  I've seen other reviews that were really nothing more than a listing of chapters and a regurgitation of the content in each.  As such, I tend to put a lot of value in reviews such as those that Richard writes.

I especially enjoyed the first question (and response) in the blog post.  I won't copy-paste it here, as you really should head on over to Richard's blog and check it out.  Regardless, I have to say that the question is pretty typical; like others, Richard is doing something that he enjoys doing, and he's doing it entirely for free.  Yet, there's always going to be someone who will ask for something more, many times without offering anything of their own. 

With respect to the final question on the blog, I pretty much followed what Richard mentioned when I did a recent review of the ebook edition of Cybercrime and Espionage; while I took notes directly in the material using the Kindle functionality, I also took notes in a notebook.  I know that some will probably look at that as extra work, but when I read books such as the one from Gragido and Pirc, I not only get ideas and insights about the material presented, but also sometimes find tie-ins to other books or online materials.  Having handwritten notes is a great way of solidifying those thoughts in my mind, and it gives me something right in front of me to review later.

Windows 8
Finally, I caught an article on CNNMoney this morning that indicates that Microsoft is getting ready to release Windows 8.  I love job security.