
Wednesday, October 19, 2011

Links, Updates, and WhatNot

Malware
Evild3ad has an excellent write-up of the Federal (aka R2D2) Trojan, based on memory analysis with Volatility.  The blog post gives a detailed walk-through of the analysis conducted, as well as the findings.  Overall, my three big takeaways:

1.  An excellent example of how to use Volatility to conduct memory analysis.
2.  An excellent example of case notes.
3.  Detailed information that can be used to create a plugin for either RegRipper or a forensic scanner.

The site also links to a RAR archive containing the memory image, so you can download it and try running the commands listed in the blog post against the same data.
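If you want to script that and keep rudimentary case notes as you go, here's a minimal sketch (mine, not from the blog post) that runs a handful of Volatility 2.x plugins against the image and saves each plugin's output to its own file.  The image file name and profile are assumptions, so confirm the profile with the imageinfo plugin first.

import subprocess

# Minimal sketch: run several Volatility 2.x plugins against the memory image
# and save each plugin's output as rudimentary case notes.
# IMAGE and PROFILE are assumptions; run the imageinfo plugin first to confirm
# the correct profile for the image you downloaded.
IMAGE = "memory.vmem"
PROFILE = "WinXPSP2x86"
PLUGINS = ["pslist", "psscan", "dlllist", "connscan", "malfind"]

for plugin in PLUGINS:
    with open(plugin + ".txt", "w") as notes:
        # assumes vol.py is on the PATH (e.g., a Linux analysis box);
        # adjust to ["python", "/path/to/vol.py", ...] as needed
        subprocess.call(
            ["vol.py", "-f", IMAGE, "--profile=" + PROFILE, plugin],
            stdout=notes)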

M-Trends
The Mandiant M-Trends 2011 report is available...I received a copy yesterday and started looking through it.  There's some very interesting information in the report...as a host-based analysis guy, I found the material on persistence mechanisms (starting on pg 11 of the report) to be particularly interesting.  Some may look at the use of Windows Services and the ubiquitous Run key as passé, but the fact is that these persistence mechanisms work.  After all, when threat actors compromise an infrastructure, they are not trying to remain hidden from knowledgeable and experienced incident responders.

Interestingly, the report includes a side note that the authors expect to see more DLL Search Order Hijacking used as a persistence mechanism in the future.  I tend to agree with that statement, given that (again, as stated in the report) this is an effective technique that is difficult to detect.
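For what it's worth, one rough way to start hunting for this is to look for DLLs sitting in application directories that share a name with DLLs in system32.  Below is a minimal sketch of that idea (mine, not from the report); the directory list is an assumption, and any hits are leads that still need manual review (timestamps, digital signatures, etc.).

import os

# Rough triage sketch for possible DLL search order hijacking: flag DLLs in
# application directories that share a name with a DLL in system32.
# The directory list is an assumption; treat hits as leads, not findings.
SYSTEM32 = r"C:\Windows\System32"
APP_DIRS = [r"C:\Program Files", r"C:\Program Files (x86)"]

system_dlls = {f.lower() for f in os.listdir(SYSTEM32)
               if f.lower().endswith(".dll")}

for app_dir in APP_DIRS:
    for root, dirs, files in os.walk(app_dir):
        for name in files:
            if name.lower().endswith(".dll") and name.lower() in system_dlls:
                print(os.path.join(root, name))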

Another interesting persistence mechanism described in the report was services.exe being modified (without changing the size of the binary) to point to an additional (and malicious) DLL.  This technique has also been seen used with other binaries, including DLLs.
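One way to spot that kind of tampering is to compare the binary on disk against a known-good copy (install media, a clean system at the same patch level, or a vendor hash set).  Here's a minimal sketch of that comparison; both paths are assumptions used for illustration.

import hashlib

# Minimal sketch: hash the suspect binary and a known-good reference copy and
# compare.  Both paths are assumptions; a reference copy might come from
# install media or a clean system at the same patch level.
def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

suspect = sha256(r"C:\Windows\System32\services.exe")
reference = sha256(r"D:\clean\services.exe")
print("match" if suspect == reference else "MISMATCH - take a closer look")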

A major section (section III) of the report discusses visibility across the enterprise; I think that this is an extremely important issue.  As I've performed incident response over the years, a common factor across most (if not all) of the incidents I've responded to has been a lack of any infrastructure visibility whatsoever.  This has been true not only for initial visibility into what goes on within the network and on hosts, but it has also limited response capabilities.  Network- and host-based visibility of some kind needs to be achieved by all organizations, regardless of size.  I mean, think about it...any organization that produces something has some sort of visibility into the processes that are critical to the business, right?  A company that manufactures widgets has controls in place to ensure that the widgets are produced correctly, and that they're shipped...right?  I mean, wouldn't someone notice if trucks weren't leaving the loading docks?  So why not have some sort of visibility into the medium where your critical information assets are stored and processed?

Looking at the information provided in the M-Trends report (as well as other reports available from other organizations), I can see the beginnings of an argument for incident preparation being built; that is to say, while the report may not specifically highlight this (the M-Trends report mentions the need for "...developing effective threat detection and response capabilities..."), it's clear that the need for incident preparation has existed for some time, and will continue to be an issue.

Addendum: Pg 13 of the M-Trends report mentions some "interesting" persistence mechanisms being used, one of which is "use of COM objects"; however, the rest of the report doesn't provide much of a description of this mechanism.  Well, I ran across this post on the ACROS Security Blog that provides some very good insight into using COM objects for persistence.  Both attacks described are something of a combination of COM object abuse and DLL Search Order Hijacking, and both are very interesting.  As such, there need to be tools, processes, and analyst education around these techniques so that they can be recognized, or at least discovered through analysis.  I would suggest that these techniques have been used for some time...it's simply that most of us may not have known to (or "how to") look for them.
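On the "tools" side, here's a hedged sketch of one place to start on a live system: walk the registered CLSIDs and flag InprocServer32 server DLLs that load from outside the usual Windows and Program Files paths (note that HKCR presents the merged view of the machine and user Classes hives, with per-user registrations taking precedence).  The "usual paths" list is my own assumption and will generate false positives; the hits are leads, not findings.

import os
import winreg

# Sketch: enumerate CLSIDs and flag InprocServer32 DLLs that load from
# outside the "usual" paths.  The USUAL tuple is an assumption and will need
# tuning per environment; treat hits as leads for further analysis.
USUAL = (r"c:\windows", r"c:\program files")

clsid_root = winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID")
index = 0
while True:
    try:
        clsid = winreg.EnumKey(clsid_root, index)
    except OSError:
        break
    index += 1
    try:
        server = winreg.OpenKey(clsid_root, clsid + r"\InprocServer32")
        dll, _ = winreg.QueryValueEx(server, "")   # default value = server DLL
    except OSError:
        continue
    path = os.path.expandvars(str(dll)).lower()
    if path and not path.startswith(USUAL):
        print(clsid, dll)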

Resources
Verizon DBIR
TrustWave GSR

Incident Preparation
I recently gave a talk on incident preparation at ETCSS, and overall, I think it was well received.  I used a couple of examples to get my point across...boxing, fires in homes...and as the gears have continued to turn, I've thought of another, although it may not be as immediately applicable or understandable for a great many folks out there.

Having been a Marine, and knowing a number of manager- and director-types who come from prior military experience, I thought that the USS Cole would be a great example of incident preparation.  The USS Cole was subject to a bombing attack on 12 October 2000, and there were 56 casualties, 17 of which were fatalities.  The ship was struck by a bomb amidships, and a massive hole was torn in her side, part of which was below the waterline.  However, the ship did not sink.

By contrast, consider the RMS Titanic.  On 15 April 1912, the Titanic struck an iceberg and shortly thereafter, sank.  According to some sources, a total of six compartments were opened to the sea; however, the design of the Titanic was for the ship to remain afloat with only the first four compartments opened to the sea.  As the weight of the water pulled the ship down, more water was allowed to flood the ship, which quickly led to her sinking.

So, what does this have to do with incident preparation and response?  Both ships were designed with incidents in mind; i.e., it was clear that the designers were aware that incidents of some kind would occur.  The USS Cole had some advantages: a better design due to a better understanding of threats and risk, a better-trained damage control team, etc.  We can apply this thinking to our current approach to infrastructure design and assessments.

How would the USS Cole have fared if, at the time of the bombing, she had not had damage control teams and sailors trained in medical response and ship protection?  What would have happened, do you think, if they'd instead done nothing, and gone searching for someone to call for help?

My point in all this goes right back to my presentation; who is better prepared to respond to an incident - the current IT staff on-site, who live and work in that environment every day, or a consultant who has no idea what your infrastructure looks like?

Determining Quality
Not long ago, I discussed competitive advantage and how it could be achieved, and that got me to thinking...when a deliverable is sent to a customer of DFIR services, how does that customer judge or determine the quality of the work performed?

Over the years, I've had those engagements where a customer says, "this system is infected", but when asked for specifics regarding why they think it was infected, or what led them to that conclusion, they most often don't have anything concrete to point to.  I'll go through and perform the work based on a malware detection checklist, and very often come up with nothing.  I submit a report detailing my work activity and findings, which leads to my conclusion of "no malware found", and I simply don't hear back.

Consulting is a bit different from the work done in LE circles...in that world, many times, the work you do is going to be reviewed by someone.  The prosecution may review it, looking for information that can be used to support their argument, and the defense may review it, possibly to shoot holes in your work.  This doesn't mean that there's any reason to do the work or the reporting any differently...it's simply a difference in the environments.

So, how does a customer (of consulting work) determine the quality of the work, particularly when they've just spent considerable money, only to get an answer that contradicts their original supposition?  When they receive a report, how do they know that their money has been well-spent, or that the results are valid?  For example, I use a checklist with a number of steps, but when I provide a report that states that I found no indication of malware on the system, what's the difference between that and another analyst who simply mounted the image as a volume and scanned it with an AV product?

Attacks
If you haven't yet, you should really consider checking out Corey's Linkz about Attacks post, as it provides some very good information regarding how some attacks are conducted.  Corey also provides summaries of some of the information, specifically pointing out artifacts of attacks.  Most of them are Java-based, similar to Corey's exploit artifact posts.

This post dovetails off of a comment that Corey left on one of my posts...

I've seen and heard comments from others about how it's difficult (if not impossible) and time consuming to determine how malware ended up on the system.

Very often, this seems to be the case.  The attack or initial infection vector is not determined, as it is deemed too difficult or time consuming to do so.  There are times when determining the initial infection vector may be extremely difficult, such as when the incident is months old and steps have been taken (either by the attacker or local IT admins) to clean up the indicators of compromise (IoCs).  However, I think that the work Corey has been doing (and providing the results of publicly) will go a long way toward helping analysts narrow down the initial infection vector, particularly those who create detailed timelines of system activity.
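For anyone who hasn't put one together before, the core of a file system timeline can be sketched in just a few lines; a real timeline (built with fls from The Sleuth Kit, or with log2timeline to fold in Registry, Event Log, and browser artifacts) would incorporate far more data sources, and the mount point below is an assumption.

import os
import time

# Bare-bones sketch of the timeline idea: walk a mounted image and list file
# last-modified times in order, so activity clustered around the suspected
# infection window stands out.  MOUNT is an assumption; a real timeline would
# fold in many more sources (Registry, Event Logs, browser history, etc.).
MOUNT = "F:\\"
events = []
for root, dirs, files in os.walk(MOUNT):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        events.append((st.st_mtime, path))

for mtime, path in sorted(events):
    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(mtime)), path)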

Consulting
Hal Pomeranz has an excellent series of posts regarding consulting and issues that you're likely to run into and have to address if you go out on your own.  Take a look at part 1, 2, 3, and 4.  Hal's provided a lot of great insight, all of which comes from experience...which is the best teacher!  He also gives you an opportunity to learn from his mistakes, rather than your own...so if you're thinking about going this route, take a look at his posts.

3 comments:

  1. Anonymous 2:35 PM

    Thanks for the kind words about M-Trends 2011. I spoke with one of our malware guys about COM object persistence. He said Browser Helper Objects (BHO) in Internet Explorer are most commonly used to autorun malware, although the same method would work against any application using COM objects.

  2. Is there anything you can provide to expand on the use of BHOs for persistence? I'm sure that the readers would find it very useful.

  3. Anonymous 4:18 PM

    May want to check out Jon Larimer's Virus Bulletin material
    http://www.virusbtn.com/conference/vb2011/abstracts/Larimer.xml
