Thursday, March 31, 2011

More Links...

Carbon Black
Every now and again, I talk about incident preparation.  For the most part, as a responder, this is something of a fantasy, because most folks that call people like me are simply not prepared for a computer security incident.  However, every once in a while, responders do get a chance to work with a customer who was either prepared for an incident, or in the process of getting there, and it's a whole different world!

So when I talk about incident preparation, many of the things I mention, like network segmentation, VLANs, enabling more appropriate logging, etc., require a good bit of work, because they mean changing how the organization has been doing business for some time.  However, many of the changes are just the first step; for example, increased logging requires additional storage space, as well as someone to collect and possibly even review the logs.

Well, the folks at Kyrus Technology have a solution called Carbon Black.  Cool name, I know...but what is it?  Here's a description from the FAQ:

Carbon Black is a software sensor we created that monitors key points on the operating system and gathers data that is useful to intrusion responders and system administrators for security and compliance functions.

Carbon Black is less than 100 KB and runs on 32- and 64-bit Windows systems, from XP through Windows 7. Given sufficient demand, Linux and OS X versions may be warranted in the future.

I've had an opportunity to install a standalone version of the sensor on a Windows XP SP3 VM, and so far, it's proven to be very valuable.  The sensor monitors execution on systems, watching what's being run, and logs this in a couple of different formats.  The full version logs to a remote system, either within the organization's infrastructure or to a designated SOC.  When a new executable is detected, CB will make a copy of it, which in itself is pretty cool.  Right now, the sensor has execution monitoring, but an update due out on 4 April will include file system modifications, socket creation, and a new UI.

Another update planned for this summer will include monitoring for Registry modifications, as well as a schema and API.

So, what can you do with Carbon Black?  Well, quite a bit...and not just IR.  One of the case studies that the Kyrus guys have involves cost reduction by looking out across an enterprise to determine the overall use of an office suite of applications; by determining that not all of the applications included in the suite were being used by all employees, the CIO was able to reduce the license costs and save the organization a good bit of money.

But what does all this mean to IR?  Well, CB is pretty lightweight...I'm running a standalone sensor in an XP SP3 VM with 1GB RAM, and there are no noticeable hitches or slowdowns.  As an analyst, there are a number of places that I look on systems for information.  For example, I will look to the Application Event Log for indications of running AV, and possibly any detections.  However, Event Logs (particularly on XP and Windows 2003) tend to "roll over", so I will also look for the text-based AV logs on the system, which tend to contain more historical information.  Sometimes, there is no AV installed on the system.  Other times, I will find that the infrastructure has Process Tracking enabled, so I see the corresponding events in the Event Logs...most times, this isn't the case.  Another option for getting some information about programs run on a system is Prefetch files, but application prefetching is not enabled by default on Windows 2003 and 2008.

CB would be useful in a variety of environments.  The first that comes to mind is a large data center; such systems usually have a small set of processes that actually run all the time on the servers, so finding new things being run on systems would, after a familiarization period, actually be pretty simple.  Remember Least Frequency of Occurrence?  But this would also apply to SMB infrastructures; if you follow Brian Krebs' blog at all, you've likely seen that a pretty significant number of SMBs have contacted him over the years to say that they got hit with Zeus and had funds stolen from their bank and transferred to money mules.  While Carbon Black isn't a preventative measure (it doesn't block stuff), having something like this in place, reporting to a SOC, would significantly improve detection, as well as response by law enforcement.

But again, Carbon Black isn't just about IR...IR is one of the uses of a sensor like this.  If any of this sounds interesting to you at all, get in touch with the Kyrus guys and ask them about their case studies.


Tools
The Malware Analyst's Cookbook tools are now online.  This is an extremely well-written and valuable book for analysts to have available, and the "recipes" that demonstrate the use of the tools really do a lot to show folks what can be done.  I have found that a lot of analysts really look for this sort of thing; "show me how it's used" does a great deal more to get them to actually use a tool than just posting it to the web.  The Cookbook is an invaluable resource, not just to malware analysts, but also to a wide range of infosec professionals, offering a great deal of insight into the nature of malware.

I have a blog post here that describes how to get pescanner.py running on Windows.

Registry Stuff
The folks over at the DFS blog have an interesting post about Registry backups on Vista, Win2008, and Win7.  Apparently, the backed-up copies of the Registry hives on Win7 are the result of the RegIdleBackup Scheduled Task.

This is definitely something to keep in mind when analyzing these systems.  For example, I have used regslack on a number of engagements, and found some pretty valuable information.  Running a 'diff' on the current, active hives and the backed-up versions might provide some useful insight, as well.  So this is yet another analysis technique to add to your bag of tricks.
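
As a rough illustration of what that 'diff' might look like, here's a sketch (my own, using the Parse::Win32Registry module; the script name and output format are just examples) that walks an active hive and its backed-up copy and reports keys whose LastWrite times differ, or that exist in only one of the two:

use strict;
use Parse::Win32Registry;

# usage: hivediff.pl <active hive> <backed-up hive>  (script name is illustrative)
my ($active, $backup) = @ARGV;
my $reg1 = Parse::Win32Registry->new($active) || die "Could not open $active\n";
my $reg2 = Parse::Win32Registry->new($backup) || die "Could not open $backup\n";
walk($reg1->get_root_key(), $reg2->get_root_key());

sub walk {
    my ($k1, $k2) = @_;
    # report keys whose LastWrite times differ between the two hives
    if (($k1->get_timestamp() || 0) != ($k2->get_timestamp() || 0)) {
        print $k1->get_path()."\n";
        print "  active : ".$k1->get_timestamp_as_string()."\n";
        print "  backup : ".$k2->get_timestamp_as_string()."\n";
    }
    my %sub2 = map { $_->get_name() => $_ } $k2->get_list_of_subkeys();
    foreach my $s1 ($k1->get_list_of_subkeys()) {
        if (my $s2 = delete $sub2{$s1->get_name()}) {
            walk($s1, $s2);
        }
        else {
            print "Only in active hive: ".$s1->get_path()."\n";
        }
    }
    foreach my $s2 (values %sub2) {
        print "Only in backup hive: ".$s2->get_path()."\n";
    }
}

Run against, say, the active Software hive and the copy created by RegIdleBackup, anything that's been added, deleted, or modified since the backup was made should pop right out.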

Sunday, March 27, 2011

Links and Stuff

VMWare Tip
Jimmy Weg shared a tip over on the Win4n6 group that may be useful to others.  Jimmy pointed out that in a sidebar on pg. 102 of Windows Registry Forensics, I mention trying several times to interrupt the VMWare boot process so that I can access the BIOS.  Jimmy suggests making this a bit easier by adding bios.bootDelay="4000" to the settings.ini file in order to create a 4-second delay.

MarkG added a suggestion for using the VMWare host menu to select Power on to BIOS rather than simply booting the VM.

Thanks to both Jimmy and Mark...these tips may be very helpful to someone performing research, or attempting to use the techniques described near pg. 102 in WRF.

Sniper Forensics
Chris has posted another installment of his Sniper Forensics series over on the SpiderLabs Anterior blog.  This one is called Part IV: Finding Evil.

Chris and I worked together on the IBM team a while ago.  Chris moved on to TrustWave, and before I left IBM, the team submitted a letter to Visa and dropped off of the PCI QIRA list.  So, while I worked PCI response engagements for about 3 yrs (which included the associated certification and annual re-certification), I haven't worked these engagements in a while, and Chris and his team continue to work them on a pretty regular basis.  As such, there's a good deal that Chris and his team see on a regular basis that I no longer see.

In this installment of the Sniper Forensics series, Chris discussed looking at systems which were part of a payment processing system; however, it appears that while the systems processed card holder data (CHD), they were apparently known (or thought) to NOT have been breached when Chris was called in to "investigate".  As Chris said in his post, we all tend to get analysis engagements like this..."I think this system may have malware or may have been breached, but I'm not sure".  In many cases, there are no definitive facts to point at, like a pop-up on the Desktop, AV detecting something, etc., and as such, it behooves us to have a thorough, documented process for addressing these types of engagements.

In short, the process used in this engagement appeared to be to acquire the necessary data (memory, volatile data, images, logs) based on scoping, and then perform analysis based on this data.  Rightly so, Chris draws on his past experience, as well as the customer specifications ("2.  What are you hoping that I don't find?"), to determine what to look for in memory (as well as on the disk), which appears to have consisted of four specific types of process artifacts, with the checks based on known items that the team had seen on previous engagements (process name variations, etc.).

Chris also described examining his timeline, going back 6 months to look at file "birth" dates ("B" in the "MACB" timeline listing).  In his post, Chris didn't mention where the 6-month time frame originated, but he did say that the installation date for the system was in 2009.

One thing I wasn't clear on is this statement...

Searching the timeline also helped me to identify potential dump files.  I know most modern RAM dumpers obfuscate the output files by either using some basic encryption or even just encoding (like with an XOR).  It doesn't have to be fancy – just put the stolen data in some format that a regex won't identify.

I think it would be interesting to know (if it's not TrustWave company proprietary IP...) how someone would go about searching their timeline for files that Chris describes in that statement.  I think it's great that companies such as TrustWave are sharing information in the manner that they are, and my hope would be that, in cases such as this, the sharing would go just a bit further.
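
Just to illustrate the point about encoded output defeating a straight regex search, here's one crude way (and this is purely my own illustration...NOT a description of anyone's actual process) to check a file that a timeline has flagged as suspicious for single-byte XOR'd track data; the regex is only a rough approximation of track 2 data:

use strict;

# usage: xorcheck.pl <suspect file>  (script name is just an example; slurping
# the whole file is fine for smaller suspect files, not multi-GB dumps)
my $file = shift;
open(my $fh, "<", $file) || die "Could not open $file: $!";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

foreach my $key (0..255) {
    # XOR the entire buffer with a single-byte key, then look for
    # something that resembles PAN=expiry/service code data
    my $decoded = $data ^ (chr($key) x length($data));
    while ($decoded =~ m/(\d{13,16}=\d{7,})/g) {
        printf "Possible track data w/ XOR key 0x%02x at offset 0x%x\n",
            $key, pos($decoded) - length($1);
    }
}

If the dumper used something stronger than a single-byte XOR, this obviously falls apart...which is really the point Chris was making.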

Some thoughts...
Reading through the post, it's clear that the scope was rather limited...and to be honest, the customer will many times limit the scope.  That seems to have happened here, as one of the scoping questions appears to have been, "2.  What are you hoping that I don't find?"  However, limiting the analysis in this manner can potentially be very dangerous, as the analyst may miss things that fall outside those specifications.

One example that comes to mind is a RAM dumper, which is something Chris mentioned (item 1 in his list) in the post.  I have seen these before...the ones I saw were a combination of tools that would be run by a service, and that service would have a random sleep() time (some had a specific time that they slept after system start); what this means is that the back-office point of sale (POS) payment processing system would be booted, and the "bad stuff" would sit and wait for a specific process's virtual memory to accumulate CHD.  At some point, it would "wake up", dump process virtual memory, and "scrape" track data out.  Due to some very specific artifacts associated with the variant I was familiar with, we were able to easily determine the window of exposure, or how long the system had been actively breached.

However, what this means is that you're not likely to find the processes for this stuff running in memory.  You should, of course, find the service that is waiting, but the other, ancillary processes will run based on a specific trigger, and they tend to run very quickly, so if you're not dumping memory during the exact time that those processes run, you won't "see" them in a memory dump.  And it's relatively simple to "hide" the service in a memory dump, or "hide" the file on the system using time stomping techniques.

Overall, the post discussed looking for some very specific items in the data that was collected; it's probably safe to assume that the overall documented process that Chris and his team use in instances like this wasn't completely listed in the blog post, simply for the sake of brevity.  There are a lot of other checks that could have been done quite easily and efficiently to provide a more thorough, comprehensive result, and I'm sure that in order to save space and not lose the reader, most of these process steps simply weren't mentioned.

Chris was exactly right in the post: "...as professionals, sometimes we have to improvise."  I agree.  I'm sure that, like other professionals, Chris and his team have a documented checklist of items to look for, and tools to use, when they encounter customer requests such as this, and that it simply wasn't discussed in detail for brevity's sake.

Reaching back into the dark recesses of my memory, Chris and I worked a case together where someone had broken into a company in a specific industry (not banking) and the intruder activity that we observed included searching for files that contained the word "banking"; so, sometimes a breach doesn't necessarily start out as being about CHD, or whatever critical data is maintained and processed by the customer.  However, a breach is a breach, and considerable work may need to be done to determine whether or not the intruder accessed that data.

So, I'm sure that a much more comprehensive, documented process was used in "finding evil", as there is a great deal of analysis that could have been done quickly and efficiently; it simply wasn't included, to keep the post from growing too large to be read.

Be sure to catch Chris at the SANS Forensic IR Summit this summer in Austin! 

WRF Review
Grayson posted his review of WRF on his An Eye on Forensics blog.  I would like to give a big THANKS to Grayson and others who've taken the time to read through the book and posted their thoughts.

Community
There have been a couple of posts recently regarding "community" that have garnered a bit of attention.  First is David Kovar's Fragmentation of the digital forensics community post, and the second is this one on Belonging and Community from David Sullivan.

Having viewed the infosec "community" for about 14 years and the DFIR community for about 10 of those years, I've experienced and seen a good deal of what David Kovar talks about in his post, and I have to agree wholeheartedly with many of his points.  Also, like David, I have written programs (Perl scripts) that I have freely provided to the community, and many of you may know that one of my pet peeves is responding to someone who says that they have an emergency need for a particular script or an update to a current script, only to have them never acknowledge receipt of that script or provide so much as a thank-you for the effort.  So much for "community", eh?

Mr. Sullivan starts his post off with the question, "...unless you are an established forensics professional is there even such thing as a community?"  I would suggest to Mr. Sullivan that a "community" is where you make it, and like other communities, some within the community create their own subcommunity.  By this, I don't mean an offshoot or tangential community; I mean that some simply create a community of trusted advisers that they reach out to and support through various means.


One of Mr. Sullivan's questions bears a response:  "I wonder if people on the inside realise how tough it is for newcomers to the area to gain any foothold into whatever community does exist.  Is it the nature of the business that makes professionals wary of new faces?"

I really don't think it's a matter of "new faces" as much as it is, what are you willing to do, most particularly for yourself?  How many times have you gone into a forum or seen a post to a listserv, and become immediately and acutely aware that the original poster (OP) made no discernible effort to search either the forum or Google for any information pertaining to their question?  Sure, there are a number of posts where the OP is looking for the solution to a homework assignment...those are pretty clear.  However, a community is about sharing, and dropping into a forum to post a question, rather than doing the research yourself and then posting your findings, does little to further the community.  I don't know for sure, but this may be what David Kovar was referring to when he mentioned in his post that bad behavior persists.  Some communities advertise themselves as being "noob friendly" and oriented toward learning, but do little to support that role.

Monday, March 21, 2011

Links

Exploit Artifacts
The Journey into IR blog has an interesting post regarding exploit artifacts.  This one is specific to CVE 2010-0094, and provides information with respect to exploit artifacts for Admin and non-admin users.  The post lists some excellent file system and Registry artifacts, and they're worth taking a look at and cataloging. One of the things I do in my malware detection process is look for PE files (with .tmp or .exe/.dll extensions) in the user %TEMP% directory (based on the operating system version)...this blog post mentions several file system artifacts in that directory that would be of interest and worth checking out during a wide range of exams (CP/Trojan Defense, intrusion, malware detection, etc.).
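
As a quick aside, that %TEMP% check is easy to script against a mounted image; here's a minimal sketch (the mount point and profile path are just placeholders) that flags any file in the user's Temp directory that starts with the "MZ" signature, regardless of extension:

use strict;
use File::Find;

# example mount point/profile path - adjust to your image and OS version
my $temp = shift || "F:\\Documents and Settings\\user\\Local Settings\\Temp";

find(sub {
    return unless (-f $_);
    open(my $fh, "<", $_) || return;
    binmode($fh);
    read($fh, my $sig, 2);
    close($fh);
    # flag anything carrying a PE signature, whether it ends in .tmp, .exe, .dll, etc.
    print $File::Find::name."\n" if (defined $sig && $sig eq "MZ");
}, $temp);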

One thing I would say about the post is that the Registry artifact posted is confusing...it says that the path to the Registry key affected is "HKLM-Admin", but it should say "HKCU-Admin" for the admin user.

Be sure to take a look at the follow-on post, as well...

Addendum: Plugin complete...

SOP
One of the issues faced in corporate environments is that when something is detected, it's simply quicker to take the box offline, "nuke it from orbit", and reinstall the OS, apps and data.  This was noted in a recent SANS ISC post.  The age-old issue with this approach is that if a thorough exam isn't performed, and the infection or compromise vector isn't determined (based on fact and not speculation), then the system remains open to exploitation and compromise all over again.  So, while the argument appears to be that "it's quicker" to just nuke the box and move on, is it really quicker to keep doing it over and over again?

I completely understand the need to quickly reprovision systems and get them back into service.  But there's such a thing as moving too quickly...to keep with the movie references from the SANS ISC post, consider Arnold S. in "Twins"; "You move too soon."

Anyway, there are a couple of things that you should consider adding to your standard operating procedures (SOP) for these situations.  The SANS ISC post mentions using disk2vhd, which is free and is a good option to consider.  Another is to download FTK Imager (free) and acquire a copy of the drive before reprovisioning.  If you need to mount the image as a VHD file, MS provides the free vhdtool.exe.

The point is that "nuking from orbit" is indeed a quicker response...the first time.  However, not understanding the overall issue of how the system became infected/compromised in the first place quickly reduces the value of this approach, as the system becomes re-infected.  Consider the situation where the compromise is NOT a result of out-of-date patches...what good is setting the system up all over again with up-to-date patches?

If you still find yourself without time to pursue more than simply making a copy/image of the system, why not outsource the analysis?  Contact someone you know and trust within the DFIR community and ask for assistance.  If you're addressing this from a corporate perspective, consider establishing a retainer-based setup with a trusted advisor.

APIMonitor
Ever want a tool similar to strace, for Windows?  Did you ever want to see more about what an application is doing, and how it's doing it?  Check out APIMonitor.  While this tool does appear similar to ProcessMonitor, there are some interesting differences, such as the buffer view, and the ability to monitor services.

Review
Eric J. Huber, of the AFoD blog, posted a review of Windows Registry Forensics on Amazon a bit ago, but I wanted to mention it again here, as this is what helps get the word about the book (and its potential value) out to the community.

Thanks again to Eric and the others who've posted reviews on Amazon, or on other publicly-accessible sites!

One thing I'd like to mention with respect to the content of the review is that the tools from the DVD that accompanies WRF are available online, and the link is continually available from the Books page associated with this blog.

Sunday, March 20, 2011

Using RegRipper

Sometimes I'll receive or see questions in a forum about RegRipper...does it do this, does it do that?  Sometimes I get the sense from these questions that there's tire-kicking going on, and that's fine...but I've always thought that it's been pretty clear what RegRipper does, and what problem it tries to solve.

RegRipper is an open-source tool that allows the user (usually an analyst or responder) to extract specific information (keys, key LastWrite times, values, etc.) from the Registry.

As a side effect, RegRipper is also an excellent facility for retention of institutional knowledge.  Let's say an analyst finds something that she hasn't seen before through Registry analysis, as a result of 10 hours of dedicated analysis.  She can write a plugin, documenting everything, and then provide that plugin to other team members.  Now, without having her knowledge or expertise, or having spent that same 10 hours digging, every one of the other analysts on her team can "find" and extract that same information.  Two years later, after some analysts have left and new ones have been hired, we have the same effect, again without the new analysts having to spend 10 or more hours to find the same thing.  And the data is extracted the same way every time.

RegRipper is NOT a search tool, although there are plugins that will parse through binary data to retrieve information that would not be found via any of the usual search tools.  You can program the ability to do some searching into a plugin, sure...but RegRipper is not a tool you would use to perform general searches of Registry hives.

Below are some of the more popular questions I get:

Does RegRipper work with Windows 7?

This is one of those questions that I'm not sure I know how to answer.  If I say, "yes", I'm afraid that there's an expectation that every possible plugin for Windows 7 that could ever be written has been written and is included with the distribution.  If I say, "yes, but...", anything after the comma will get lost, and we're back at the last answer.

The fact of the matter is that RegRipper works with all versions of Windows from NT up through and including Windows 7.  I've used it on everything from Windows 2000 through XP and on to Vista and Windows 7 systems.  It works because the Registry structure, on a binary and data structure level, remains the same across all versions.  Where things go haywire a bit is when a key or value has been added, moved or deleted...which happens quite often between Windows versions.

So, the long answer is that yes, RegRipper works on Windows 7, but the caveat is that it must have the plugin for the data in which you're interested.

Sometimes, the above question is asked as, Why doesn't RegRipper do X?  The answer to that is usually, Because you haven't written the plugin yet, my friend.  ;-)  Folks, RegRipper is open-source, and free.  It comes with a great deal of documentation on how to use it.  For example, if you want to know what rip.exe can do, just type "rip -?" or "rip -h" at the command prompt.

Does RegRipper do X?

Much like Nessus, RegRipper is an engine that runs plugins.  If you want it to do something, you can make it do it.  The tool is open source, and is written in Perl. 

One of the tools I included with RegRipper is "rip", either with the .pl or .exe extension, which is simply the command line version of RegRipper.  Rip has some cool features.  For example, you can run either single plugins or entire profiles from the command line, and capture the information to files using DOS redirection.  The output from rip goes to STDERR and STDOUT, so use the appropriate redirection to capture everything.
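
For example, assuming you've exported the hive files to a case directory, running a single plugin or an entire profile looks something like this (the paths here are just examples):

C:\rr>rip -r D:\case\ntuser.dat -p userassist > D:\case\userassist.txt 2>&1
C:\rr>rip -r D:\case\system -f system > D:\case\system.txt 2>&1

The -r switch points rip at the hive file, -p runs a single plugin, -f runs a profile, and the 2>&1 makes sure that anything sent to STDERR ends up in the same output file.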

If you want to know what plugins you have, use rip -l -c > plugins.csv, and open the resulting file in Excel.  When the Registry forensics book was released, I included a GUI tool called "Plugin Browser" that lets you browse through the plugins one at a time.

Can you make RegRipper do X?

Yes.  And so can you.  RegRipper is open-source and written in Perl.  There's very little in the way of a "proprietary" API...in fact, there isn't any at all.  RegRipper encapsulates some regular Perl APIs, such as the print() function, but that's it...it's just encapsulated.  RegRipper is based on the Parse::Win32Registry module by James Macfarlane, which is easily installed into ActiveState Perl using the Perl Package Manager (PPM).

Need some help?  No problem.  There are a number of plugins available, and you can open any of these in an editor (or even NotePad) and use them as a basis.  In fact, this is exactly how I do it.

If you need even more help, and would like me to write a plugin for you, all I need is a clear, concise description of what you want, and a sample hive that contains the data.  That's it.  If you give me those, and I have the time available, I can usually turn around a working plugin in very short order.

If you have any questions, or don't understand something, the best thing to do is ask.  RegRipper is a powerful and very useful tool...I'm not saying this because I wrote it; I'm saying it because I wrote it and use it on every engagement.  I use RegRipper to look at specific keys and values to provide insight into the system under analysis, as well as provide some context about the engagement overall.  I also use it to locate malware that wasn't detected by AV.  I've used RegRipper to catalogue new intrusion artifacts, as well as to demonstrate that a user account was used to view (or in one instance, not view) specific files.

If you're using RegRipper, a new distribution of plugins (not so much new plugins as more of them) was included along with Windows Registry Forensics, and is available online, as well.  I've updated a couple of the plugins, added a few more, and Brett's provided others via RegRipper.net.

Friday, March 11, 2011

Links and Notes

Forensic Meet-up
There are plans afoot for a forensics meet-up in the Northern VA area (Chantilly - Centreville - Herndon - Reston) on 31 Mar 2011.  The meet-up will likely start around 6:30pm - 7pm, and the location is TBD for the moment...keep an eye here, or on the Win4n6 group.  This first meet-up will be free-form, and I'll work up something of an informal agenda. 

As more folks become aware of this meet-up, I guess my initial concern would be where to meet.  I'd like this to be informal, and everyone to relax and have a beer.  If the interest is for something a bit more formal, then we may move to a different agenda later.  Eventually, my hope is that this becomes something useful to folks, as we can discuss and implement innovation in the DF and IR fields...

F-Response Patent
On Fri, 11 Mar 2011, Matt announced that F-Response had received a patent for remote forensic innovation!  Congrats, Matt...this is very well deserved!

Of specific note is that F-Response provides, "...forensic grade write-protection..." for remote forensics and raw access to systems.

This is fantastic news for Matt, and for the community as a whole!  Matt's contributions to the field have been phenomenal, to say the least. 

RegRipper Plugins
I recently wrote up some new plugins (and updated the samparse.pl plugin)...

notify.pl - Parses the Notify subkeys within the Software hive for registered Winlogon Notification DLLs, based on Mark's Case of the SysInternals-Blocking Malware post

init_dlls.pl - Checks for keys similar to the one mentioned in Mark's Case of the Malicious AutoStart post

renocide.pl - Checks for an artifact key mentioned on the MMPC site for the Win32/Renocide malware

These plugins are meant to demonstrate a couple of things...first, that Registry analysis can be used in conjunction with other analysis methods to detect malware within acquired images, where AV scanners might fail.  I've run AV scans before where two commercial and three free AV scanners didn't find anything, but a fourth free scanner found something.  I've also seen where AV used by customers has failed not because of an incorrect DAT file, but because of an incorrect scanning engine.  We're all susceptible to this, and if you use AV as part of your malware detection process when you examine acquired images, then this is something that you'll need to be aware of, as well.

Second, all three of these plugins took me less than 30 minutes...total...to write and test.  In fact, the only real slow-down was deciding how to make the output a bit more useful...for the notify.pl plugin, I copied code from the userassist.pl plugin to list all of the registered DLLs sorted by their key LastWrite times.  This means that if I want to deploy any of these plugins as part of my timeline creation toolkit, it's simply a matter of minutes for me to modify them.  So in less than 30 minutes, I was able to add three new plugins to the library, and save everyone who uses those plugins the time for researching and writing those plugins themselves.  This serves not only as a force multiplier, but also as a library of institutional knowledge within the community as a whole.

You can get copies of these plugins from Brett's RegRipper.net site.

As a side note, running RegRipper is just part of the malware detection process that I use regularly, and what I'm writing about and detailing for my next book.  Part of the supporting materials for this book will include a checklist, as well.

Wednesday, March 09, 2011

More Malware Detection

Given my last post, which mentioned part of my malware detection process, I thought it would be a good idea to mention a couple of bits of malware that I've seen described online recently.

First, from Mark's blog comes The Case of the SysInternals-Blocking Malware; as the title would lead you to believe, the responder working on this one had some issues troubleshooting the malware, as it kept blocking his use of SysInternals tools.  The malware was eventually identified as Swimnag, which apparently uses the Notify key as its persistence mechanism.

All told last night, it took me less than 10 minutes to write, test, and modify a RegRipper plugin to display the name, LastWrite time, and DLLName values of the Notify subkeys.  I could put a few more minutes into manipulating the output a bit.  Speaking of which, has anyone taken a shot at writing a plugin for the type of malware described in The Case of the Malicious AutoStart?

Addendum: Took me about 10-15 min, but I wrote up init_dlls.pl to locate value names (for the Malicious AutoStart issue) that end in Init_DLLs.
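
For anyone curious about what actually goes into a plugin like that, here's a stripped-down sketch of the core of a notify.pl-style plugin.  This is just an illustration (not a copy of the actual plugin), the usual RegRipper plugin boilerplate (the %config hash, getShortDescr(), version info, etc.) is omitted, and ::rptMsg() is RegRipper's reporting function:

# illustrative sketch only - not the actual notify.pl
sub pluginmain {
    my $class = shift;
    my $hive  = shift;    # path to the Software hive
    my $reg   = Parse::Win32Registry->new($hive);
    my $root_key = $reg->get_root_key();

    my $key_path = "Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Notify";
    if (my $key = $root_key->get_subkey($key_path)) {
        foreach my $s ($key->get_list_of_subkeys()) {
            # subkey name and LastWrite time
            ::rptMsg($s->get_name()."  [".$s->get_timestamp_as_string()."]");
            if (my $dll = $s->get_value("DLLName")) {
                ::rptMsg("  DLLName: ".$dll->get_data());
            }
        }
    }
    else {
        ::rptMsg($key_path." not found.");
    }
}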

Another bit of joy mentioned on the MMPC this morning is Win32/Renocide.  The write-up for this one is an interesting bit of reading, in that it spreads not just via local, removable and network drives (on a network, it can spread via NetBIOS), but it also looks for specific file sharing applications, and uses those to spread, as well.  The persistence mechanisms are nothing new, but what I did notice is that one of the artifacts of an infection is a change to the firewall settings...this is one of those things that I encapsulate in "Registry analysis" when attempting to detect the presence of malware in an acquired image.  Interestingly enough, this malware also maintains its configuration in a Registry key (Software\Microsoft\DRM\amty); if you locate this key in the Registry, the LastWrite time should give you an approximate time that the system was infected.
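
If all you're after is that one piece of information, you don't even need a full plugin; a few lines of Perl (again, just a sketch, run against an exported copy of the Software hive) will pull that LastWrite time for you:

use strict;
use Parse::Win32Registry;

# usage: amty.pl <path to Software hive>  (script name is just an example)
my $root = Parse::Win32Registry->new(shift)->get_root_key();
if (my $key = $root->get_subkey("Microsoft\\DRM\\amty")) {
    print "amty key found - LastWrite time: ".$key->get_timestamp_as_string()."\n";
}
else {
    print "Microsoft\\DRM\\amty key not found.\n";
}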

Monday, March 07, 2011

MBR Infector Detector

Now and again, I get those analysis gigs where someone suspects that a system may have been infected with some sort of malware, but they aren't sure, and don't really have anything specific (Event Log entry, AV alert, etc.) to point to.  I know that others get these sorts of gigs as well, and like them, I have a process that I go through when examining images of these systems.  This usually starts with checking for installed AV products (MRT, etc.) to review their logs, as well as checking for AV having been run before the system was taken offline...if logs are available, they can tell you a lot, particularly the product and version run.  From there, I also mount the image and scan it with other AV tools.

One of the steps on my list is to also look for MBR infectors.  What's an "MBR infector", you ask?  Read on...
F-Secure "Hippie" Description (1996)
SecurityVibes - Mebroot (2008)
F-Secure - Mebroot (3 Mar 2008)
Symantec - Mebroot (30 July 2010)
Sunbelt - TDSS/TDL4 (15 Nov 2010)
F-Secure, 17 Feb 2011
MMPC - Sinowal, aka Mbroot, Mebroot (8 Feb 2011)
MMPC - Win32/Fibebol.A (7 Mar 2011)

If you read through the above links, particularly those that are AV vendor descriptions of MBR infectors, you'll notice some commonalities...in particular, when the MBR is infected, other sectors prior to the first partition (usually, sector 63) contain something...a copy of the MBR, code to be injected into the system, something.  Now, this doesn't mean that this is the case for ALL MBR infectors, just those that have been mentioned publicly.

Usually, what I would do is load the image into FTK Imager, and scan through the sectors manually...but why do that, when you can make the computer do it?  That's right...I wrote a (wait for it!) Perl script (mbr.pl) to do this for me!

So, what the script does is scan through a range of sectors from an image file; by default, it will scan through sectors 0 through 63 inclusive, but the analyst can set different sectors to be scanned.  When a sector that does NOT contain all zeros is found, the script will flag it.  By "flag it", I mean that in summary mode, the script will just list the sector number.  In a more detailed mode (which is the default), the script will print out the contents of the sector to STDOUT, in a hex viewer-like format.  This way, it's really easy for the analyst to see, "hey, this sector just contains some strings associated with Dell installs", or "hey, this sector is the start of a PE file!"  Because the output goes to STDOUT, you can pipe it through "more" or redirect the output to a file.

Also, using another switch, the analyst can dump the raw sectors to disk.  This allows you to generate MD5 or ssdeep hashes, run ssdeep hash comparisons, submit the raw dump to VirusTotal, etc.
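
To give you an idea of just how simple the core of this approach is, here's a stripped-down sketch (this is not mbr.pl itself, just an illustration of the idea): read each 512-byte sector in the range from the image file, and report any sector that isn't all zeros.

use strict;

# usage: sectorscan.pl <image file> [start sector] [end sector]
my ($img, $start, $end) = @ARGV;
$start = 0  unless (defined $start);
$end   = 63 unless (defined $end);

open(my $fh, "<", $img) || die "Could not open $img: $!";
binmode($fh);

foreach my $sec ($start..$end) {
    seek($fh, $sec * 512, 0);
    read($fh, my $sector, 512);
    next unless (defined $sector && $sector =~ m/[^\x00]/);   # skip all-zero sectors
    print "Sector $sec contains data\n";
    # the full version would print a hex viewer-style dump of $sector here,
    # or write the raw sector out to a file for hashing/submission
}
close($fh);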

Overall, it's pretty cool.  I usually run mmls against the image anyway, and many times I'll see that the first partition starts at sector 63.  Other times, I've found the starting sector for the first partition by searching the image via FTK Imager for "NTFS".  Regardless, with the output of mmls, I can then run mbr.pl as part of my malware detection process, and just like other parts of the process, if nothing unusual is found, that's okay.  If something is found, it's usually correlated against the output of other steps in the process.  The overall goal is to do as thorough a job as possible.

Thursday, March 03, 2011

Cybercrime and Espionage

I recently finished reading through Cybercrime and Espionage: An Analysis of Subversive Multi-Vector Threats, by John Pirc and Will Gragido, and wanted to share my thoughts on the book.

First, a couple of points of clarification.  For one, I'm reading the ebook on a Kindle.  I don't have the book-book version, so I can't compare the formatting...but I do have a copy of Windows Registry Forensics on the Kindle, so I can make something of a comparison there.  Also, I wasn't sent a copy of the book to review...I'm writing this review entirely of my own accord, and because I think that there are some very interesting statements in, and thoughts generated by, this book.  Having a new book out myself right now, I think that this is something of a distinction.

The authors do a very good job of laying the groundwork early in the book, in particular pointing out that there is a lot about cybercrime that isn't new, but has instead been around for centuries.  Wanting what others have, and securing it for one's own profit are age-old desires/motivators, and bits and bytes are simply the new medium.

I am somewhat familiar with most of the compliance standards that the authors discuss, such as the PCI Data Security Standard (I spent three years as a QSA-certified PCI examiner, part of a QIRA team), HIPAA, and others (the credit union NCUA wasn't mentioned by the authors, but would have fit within the chapter nicely).

The authors also spend considerable time in the cyber-realm, particularly in developing and describing their Subversive Multi-Vector Threat (SMT) taxonomy, in which they include the APT and even Spc. Manning.  The authors build up to their taxonomy and provide examples, and then take the time to go beyond that and provide descriptions of intelligence gathering processes, as well as means that can be used to attempt to protect organizations.

Throughout the book, the authors provide considerable background and definitions; I think that this is helpful, as it provides both the uninitiated reader and the more experienced (in the subject matter being addressed) with a common, level playing field.  Through this development of background and supporting definitions, the reader should easily see where things such as insider threats come from, for example.  In chapter 6, the authors spend considerable time explaining different avenues for gathering information and developing intelligence.  At one point, the issue of "trust" is brought up; wouldn't it be easy for an operative (in search of, say, corporate intelligence) to single out a disgruntled employee and earn their "trust"?

This is not a technical book, but it's definitely something that will get you to think about what's really going on in the world around you.  This should apply to the CIO, CISO, IT Director, even to the IT admin who's wondering if they've been "hacked".  Books that provide solutions are good, but so are books that challenge your thinking and (as the authors describe in the MOSAIC process) base assumptions about your surroundings.

Thoughts
What I really liked about the book, in addition to what the authors presented, were the thoughts that reading that material generated.  The following are thoughts that I had based on my reading, viewed through the lens of my own experience, and are not things that you'll necessarily find stated specifically in the book.

What is deemed "adequate and reasonable" security is often decided by those with budgeting concerns/constraints, but with little understanding of the risk or the threat.

Compliance comes down to the auditor versus the attacker, with the target infrastructure as the stage.  The attacker is not constrained by a specific compliance "standard"; in fact, the attacker may actually use that "standard" and compliance to it against the infrastructure itself.

Auditors are often not technical, they do not see across the various domains of the "standard" against which they are auditing, and they are not able to bring other factors into consideration (i.e., corporate culture, economics, business models, etc.).  Auditing is a point-in-time assessment and usually based on a checklist of some kind: do you have a firewall, yes/no; do you have IDS/IPS, yes/no.  While requiring an organization to meet a compliance standard will likely raise their level of "security", it's often a small step that's coming too late.  "Compliance" is more of a band-aid, and an attempt to modify the corporate culture to take the threats seriously.

Good IR Work

Mark Russinovich recently posted The Case of the Malicious Autostart to his blog.  I have to say, I think we are all very fortunate that Mark decided to post this; not only does it provide a very good demonstration of the use of the tools that Mark has written and made available, but it also demonstrates what others within the community are seeing.  Chris Pogue recently did something similar with his Webcheck.dll post to the Spiderlabs Anterior blog, and it's good to see these kinds of things posted publicly.

Mark's post provides some really good information about what was found during a support call, and the tools and techniques used to find it, as well as to dig deeper.  One thing that's interesting to point out is that the infection of the system may have included subversion of Windows File Protection (not that that's particularly difficult...), as it's mentioned that the user32.dll files in the system32 and dllcache directories were modified.

Posts like this give the rest of us an opportunity to see what others are facing and how they're addressing those challenges.  Being the tech support in my household, I'm somewhat familiar with these tools and their use, but I can't say that I've seen something like this.  What I like to do is see how this methodology fits into my own processes.

In the comments to the post, a user ("Mihailik") asks about determining the infection vector, to which Mark responds:

Unfortunately, that's a question just about anyone fighting a new malware infection will have a near impossible time of determining. Unless you actually see the infection as it takes place, you can't know - it could have been someone executing a malicious email attachment, opening an infected document, or via a network-spreading worm. 

I would suggest that by using timeline analysis, many of us have been able to determine infection vectors.  I know that folks using timelines have nailed down the original infection vector in some cases to phishing emails, attachments, browser drive-bys, etc. The timeline may give an indication of where you should look, and examination of the actual files (PDF or Word document, Java .jar file, etc.) will illuminate the issue further.  Determining the infection vector may not have been something that could be easily done on this system, during this support engagement, but for more IR-specific engagements, this is often a question that analysts are asked to address.