Saturday, June 02, 2007

AntiForensics Article

I read an interesting article recently that talks about antiforensics. At first glance, the article is interesting enough, but reading it a second time and thinking about what was actually being said really got me thinking. Not because the article addresses the use of antiforensics, but because it identifies an issue (or issues) that needs to be addressed within the forensics community. Yes, these tools are out there, and we should be thankful that they were made available by someone...otherwise, how could we address the issue? So, what do we need to do to update our methodologies accordingly? Perhaps more importantly, should we be trying to get ahead of the power curve, rather than playing catch-up?

I do feel that it is important to mention something else in the article that I found very concerning, though:
"...details of the TJX breach—called the biggest data heist in history, with more than 45 million credit card records compromised—strongly suggest that the criminals used antiforensics to maintain undetected access to the systems for months or years and capture data in real time."

Strongly suggest, how?

The article goes on to say:
"Several experts said it would be surprising if antiforensics weren’t used."

Several experts? Who? Were any of them involved in the investigation? If they were, what "expert" reveals this kind of information and keeps his or her job? If not...why are they speculating? This part of the article just seems out of place to me; viewed within the context of the entire piece, it breaks up the flow. The article has a logical progression...here's the issue, okay, we've identified it, let's get about fixing it...which all makes sense, but then this bit of speculation derails it.

Overall, though, it appears that the article points to some issues that should be addressed within the digital forensics community. Are the tools we have worthless? Not at all. We just have to make better use of the information we have at hand. The article mentions building layers of "evidence", using multiple sources of information to correlate and support what we find in our digital investigations.

Also, Harlan's Corollary to Jesse's First Law of Computer Forensics really seems to be applicable now more than ever! ;-)

13 comments:

hogfly said...

I just got that mag the other day and was starting to read the article. I take issue when they start to use blanket terms like antiforensics. I consider antiforensics a lot different from anti-detection, and they seem to do a good job of blurring that line.

H. Carvey said...

Hogfly,

Can you expand on that a little?

Thanks,

H

hogfly said...

Sure..
Anti-forensics refers to techniques designed to defeat forensic techniques and tools.

Antiforensics implies knowledge, on the attacker's part, of forensic techniques being used to track their activity. Many attackers give no forethought to this and simply download and use a tool off the 'web. I've seen plenty of tools used in attacks that have "antiforensic" switches, but they are never used by the attackers, because the original author of the tool is the one who considered forensics when creating it, not the attacker.

The mystery attack they describe at the aquarium is probably something not out of the ordinary - a rootkit with a sniffer and an untrained, unaware network administrator...that's not antiforensics.

The tools they mention are antiforensic in nature, and the authors designed them specifically for that purpose. However, the article mixes anti-detection in with the language.

Really...when did antivirus become a forensics tool? When did packers become an antiforensics tool?

The easiest way to distinguish the two (antiforensics vs. anti-detection) is to look at it from the attacker's perspective. Do I want to get in unseen and stay there (AD), or do I want to prevent someone from finding my tracks (AF)?


I do take away a few good points from the article though, and I think it validates some of my thinking..

We need to rely on criminology and real-world criminalistic techniques; investigators should fail Daubert because of the unreliability of digital evidence; and what's practiced is not a science and we need a hell of a lot more work before we reach that point.

H. Carvey said...

Hogfly,

Good points, all.

We need to rely on criminology and real-world criminalistic techniques; investigators should fail Daubert because of the unreliability of digital evidence...

How is digital "evidence" (I'm somewhat hesitant to use that term, as we're not collecting the data under the color of law at this point) "unreliable"?

...and what's practiced is not a science and we need a hell of a lot more work before we reach that point.

I see others saying the same thing, but as of yet, I don't see a great deal in the way of a roadmap or framework for getting there.

Any thoughts?

hogfly said...

No, we're not under color of law, but those who are under color of law use it in the same way we do, so let me clarify "evidence". I mean evidence as in proof of our claims.

I've discussed some of the reliability issues with digital evidence, but largely it's due to its circumstantial nature. How can we claim, with anything other than a degree of certainty, that what happened, happened in the way we claim?

Check my blog and the Windows forensics group...I'm trying to get people involved in creating a methodology for live response tools to improve the reliability of our claims during that phase of an investigation. The DFRWS and SWGDE have some material together, and the ACPO does as well, but we fall short in many of the ways Daubert needs us to succeed (testing, repeatability, agreement in the community). We have no taxonomy, we're surrounded by conjecture, and there's a host of other reasons...but that's a good start.

Anonymous said...

Many attackers give no forethought to this and simply download and use a tool off the 'web. I've seen plenty of tools used in attacks that have "antiforensic" switches but they are never used by the attackers...

Quite true, in my experience, which is primarily devoted to after-the-fact forensics on dead systems. I've seen my share of wipers, track hiders, and the like, but I've yet to see one case in which the user employed such a tool to its utmost. I will say, however, that "attackers" are probably a step or two up on those who aren't engaged in going after another system. I'd say that most of my targets don't realize that it takes some effort and ability to really get in there and permanently eradicate data. Perhaps they're too busy doing what led up to my exam :-)

H. Carvey said...

...it's due to its circumstantial nature. How can we claim, with anything other than a degree of certainty, that what happened, happened in the way we claim?

I believe that if this can be done with "real world" crimes, the same thing can be done in digital forensics as well. How is it that someone can attack someone else with a knife, for example, and later be convicted of the act (assault, attempted murder...whatever the charges)?

...I'm trying to get people involved in creating a methodology for live response tools to improve the reliability of our claims during that phase of an investigation.

I've seen your blog...I'm a frequent visitor...but a methodology for what?

...but we fall short in many of the ways Daubert needs us to succeed (testing, repeatability, agreement in the community).

Okay, not to be too bone-headed (sorry!!), but if I run tlist.exe on a system to get the list of active processes, what "testing" is required? Or if I dump the contents of physical memory and do a 'brute-force' scan of the dump file to locate all of the EPROCESS blocks, aren't they all right there? Couldn't anyone do the same thing, either on their own, using my tool, or using some other tool?
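(A minimal sketch of that kind of brute-force scan, assuming an XP-era dump and treating the "Proc" pool tag as the marker for process allocations; the tag value and the complete lack of field validation here are simplifying assumptions, not how any particular tool works:)

# Sketch: scan a raw memory dump for candidate EPROCESS allocations by
# looking for the assumed "Proc" pool tag. A real tool would validate each
# hit against known EPROCESS field offsets (plausible PID, image name, etc.).
import mmap, sys

POOL_TAG = b"Proc"   # assumed tag for process pool allocations on XP-era systems

def scan(dump_path):
    hits = []
    with open(dump_path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        offset = mm.find(POOL_TAG)
        while offset != -1:
            hits.append(offset)              # candidate only; needs validation
            offset = mm.find(POOL_TAG, offset + 1)
        mm.close()
    return hits

if __name__ == "__main__":
    for off in scan(sys.argv[1]):
        print(f"candidate process allocation near offset {off:#x}")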

I ask, because I keep hearing references to Daubert, and testing, etc., but nothing beyond that. I agree and understand that you're trying to get people involved, but involved doing what exactly?

Thanks!

hogfly said...

You're right, it can be and is done with real-world crimes. It's said that physical evidence can't lie; the failure is in the interpretation of the evidence.

With digital evidence, we can't trust it - it lies all the time. If you approach a computer that five users log in to, can we ever prove "who" did something? No, we can't. We can prove that someone with username X did Y at Z time, but it could have been any one of the five people (excluding alibis). It's too circumstantial to be relied on by itself. In that instance we need data from multiple sources that corroborate each other, and many times we don't have that information, so we're forced to make a claim based on data from a single source (typically the computer). All we can typically do is say, "I'm pretty sure it happened this way".

I'm looking to create a methodology to make incident response more scientific and improve the confidence level in our claims as investigators.

What testing...
How about testing what tlist will actually show you vs. what it won't? Are you 100% confident that the output of the tool is complete? Wouldn't your argument for using tlist and trusting its output be much better if you could say, for instance, "Out of 100 tests with random malware, I can claim that tlist will show 97% of malware-related processes"?
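(A rough sketch of how a quantified claim like that could be computed, assuming the tester already controls the ground truth; the trial data below is made up purely for illustration:)

# Sketch: given trials of (known process IDs, process IDs the tool reported),
# compute the fraction of known processes the tool actually showed.
def detection_rate(trials):
    detected = sum(len(truth & reported) for truth, reported in trials)
    total = sum(len(truth) for truth, _ in trials)
    return detected / total if total else 0.0

# three hypothetical trials; the tool misses one known process in the last one
trials = [({101, 102}, {101, 102}), ({201}, {201}), ({301, 302}, {301})]
print(f"{detection_rate(trials):.0%} of known processes reported")   # 80%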

To me, quantifying a claim seems to be a much better way of "proving" it.

If you dump memory, can you trust it with the advent of TLB displacement and the shadow walker techniques being developed? The trouble is, we get one shot to dump memory, maybe a second if you do what Garner suggests (get dumps from the debug and physical devices). Memory dumps are not repeatable, and therefore it's much harder to rely on them factually.
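(A trivial illustration of the repeatability point: two back-to-back dumps of the same running system will not hash the same, so one acquisition can't be verified against another the way two images of a quiescent disk can. The filenames below are placeholders.)

# Sketch: hash two successive memory dumps and compare.
import hashlib

def sha1(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# almost certainly prints False for two dumps of a live system
print(sha1("memdump_pass1.img") == sha1("memdump_pass2.img"))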

Controlled testing is required, and a methodology for repeatable testing is required, so that issues like the "trojan defense" and "I didn't knowingly store CP in my browser cache" don't keep popping up.

Wouldn't it be better if we could back up our claims? That's why I keep referring to Daubert and the fact that investigators should fail the challenge of being an expert because we don't test, have peer review or error rates, or agreement in the community. I'm trying to get the community involved in testing and validation of the methodology, once the methodology is created.

H. Carvey said...

hogfly,

Interesting comments...

With digital evidence, we can't trust it - it lies all the time. If you approach a computer that five users log in to, can we ever prove "who" did something? No, we can't.

But does that mean that the 'evidence' lies? No, I don't think it does. Even in the physical world, law enforcement runs into situations all the time where certain artifacts or evidence are simply not there.

It's too circumstantial to be relied on by itself.

I would suggest that it isn't circumstantial at all, but that yes, we do (and should always) rely on multiple sources of data and artifacts.

I'm looking to create a methodology...

I'd like to assist with that, and start by suggesting that the elements or components of that methodology already exist.

Wouldn't your argument for using tlist and trusting its output be much better if you could say, for instance...

I don't think that's the issue at all. Take the "russiantopz" bot example I've talked about in my book. The tools used at the time showed the existence of the malware...even Task Manager. Does the fact that the sysadmin did not recognize a 'suspicious' process make the tool unreliable? No.

My point is that perhaps you're looking at the wrong things. Tlist displays information about the active process list. You can perhaps validate this by the fact that other tools that also display the active process list show the same processes. But if the tools are all using the same API calls to collect their data, then does all that testing really validate anything?
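(As a rough illustration of that concern, here's a ctypes sketch of two "independent" process listings on Windows, one via EnumProcesses and one via a Toolhelp snapshot. Both are widely documented to bottom out in the same kernel query, so agreement between them proves less than it might appear to. This is only an illustration of the idea, not a description of how tlist or any other specific tool is built.)

# Sketch: enumerate processes two "different" ways and compare the PID sets.
import ctypes
from ctypes import wintypes

psapi = ctypes.WinDLL("psapi")
kernel32 = ctypes.WinDLL("kernel32")
kernel32.CreateToolhelp32Snapshot.restype = wintypes.HANDLE

def pids_via_enumprocesses():
    arr = (wintypes.DWORD * 4096)()
    needed = wintypes.DWORD()
    psapi.EnumProcesses(arr, ctypes.sizeof(arr), ctypes.byref(needed))
    count = needed.value // ctypes.sizeof(wintypes.DWORD)
    return set(arr[:count])

TH32CS_SNAPPROCESS = 0x00000002

class PROCESSENTRY32(ctypes.Structure):
    _fields_ = [("dwSize", wintypes.DWORD),
                ("cntUsage", wintypes.DWORD),
                ("th32ProcessID", wintypes.DWORD),
                ("th32DefaultHeapID", ctypes.c_size_t),
                ("th32ModuleID", wintypes.DWORD),
                ("cntThreads", wintypes.DWORD),
                ("th32ParentProcessID", wintypes.DWORD),
                ("pcPriClassBase", ctypes.c_long),
                ("dwFlags", wintypes.DWORD),
                ("szExeFile", ctypes.c_char * 260)]

kernel32.Process32First.argtypes = [wintypes.HANDLE, ctypes.POINTER(PROCESSENTRY32)]
kernel32.Process32Next.argtypes = [wintypes.HANDLE, ctypes.POINTER(PROCESSENTRY32)]

def pids_via_toolhelp():
    snap = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)
    entry = PROCESSENTRY32()
    entry.dwSize = ctypes.sizeof(PROCESSENTRY32)
    pids = set()
    ok = kernel32.Process32First(snap, ctypes.byref(entry))
    while ok:
        pids.add(entry.th32ProcessID)
        ok = kernel32.Process32Next(snap, ctypes.byref(entry))
    kernel32.CloseHandle(snap)
    return pids

# Agreement here only shows the two user-mode paths see the same kernel data;
# it says nothing about processes hidden from that kernel data in the first place.
print(pids_via_enumprocesses() == pids_via_toolhelp())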

If you dump memory, can you trust it with the advent of TLB displacement and the shadow walker techniques being developed?

Again, I would suggest that this is the wrong approach. I believe that we should be looking at the totality of the artifacts and seeing what the information is telling us. If we look at the totality, then are there any artifacts that would tell us whether or not shadow walker techniques had been used?

Controlled testing is required, and a methodology for repeatable testing is required, so that issues like the "trojan defense"...

I think you're mixing things here a little bit and maybe clouding the issue...or maybe I'm just missing some things. Controlled testing...okay, I'm on board with that...does a tool do what it says it does, or what it is supposed to do? However, proving or disproving the "Trojan Defense" isn't about a single tool being tested; it's about looking at the totality of the artifacts. If you find artifacts and/or logs that show the user's direct interaction with the system resulting in the artifacts that prompted the "Trojan Defense" claim, then you've got them...

Wouldn't it be better if we could back up our claims?

Of course.

That's why I keep referring to Daubert and the fact that investigators should fail the challenge of being an expert because we don't test,

We should...but test what, exactly?

have peer review

I agree, but "peer review" of what?

or error rates

Again, of what? A tool? Or the API functions that the tool calls? If I were to determine the error rate of a specific API call, wouldn't I then be able to apply that error rate to all tools that use it?

or agreement in the community. I'm trying to get the community involved

d00d, I thought you would've seen what I tried to do, and learned from that. ;-)

hogfly said...

But does that mean that the 'evidence' lies? No, I don't think it does. Even in the physical world, law enforcement runs into situations all the time where certain artifacts or evidence are simply not there.

Ok..poor example on my part.

...But if the tools are all using the same API calls to collect their data, then does all that testing really validate anything?

This is a good point. So perhaps that's a part of the method for tool testing and validation.

Again, I would suggest that this is the wrong approach. I believe that we should be looking at the totality of the artifacts and seeing what the information is telling us. If we look at the totality, then are there any artifacts that would tell us whether or not shadow walker techniques had been used?

This is where I'm applying a criminalistic approach. You're right - totality makes the ultimate difference, and once we have all available sources to look at, we can paint a clearer picture. But totality is made up of many sources. Each of these sources needs to be strong enough to withstand scrutiny, because if one piece unravels, then the total result is diminished, the claim becomes weaker, and it becomes that much easier to chip away at. The stronger the individual sources are, the stronger the claim or conclusion is.

I think you're mixing things here a little bit and maybe clouding the issue...or maybe I'm just missing some things

It's probably a little bit of both. What I'm trying to get across is that having a formal method of tool testing will help disprove things like the trojan defense, because the holes of uncertainty can be plugged, leaving very little wiggle room for doubt, mainly because there will be a greater degree of trust in the claims of the investigator.

I suppose we'll cover testing, peer review and error rates in time...but why don't I pick on Gargoyle as an example.
Gargoyle is a hash-matching tool, and it's advertised as a "malware investigation tool":

Testing - Testing for accuracy.
Peer Review - The methodology for using the tool.
Error Rates - Closely related to testing, since testing allows us to establish an error rate. In this case, how many false positives does the tool generate?

Now, it might be easier in this case to get by peer review, but testing and error rate...wow, I could punch a hole wide enough for a battleship to go through...
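(A back-of-the-envelope sketch of the kind of error-rate measurement being asked for here: run a hash-set match over a labeled test corpus and count the misses. The use of MD5, the directory layout, and the hash set are assumptions for illustration, not a description of how Gargoyle itself works.)

# Sketch: measure false-positive / false-negative rates for a hash-matching
# approach against a corpus of files already labeled benign or malicious.
import hashlib, pathlib

def md5(path):
    return hashlib.md5(path.read_bytes()).hexdigest()

def error_rates(known_bad_hashes, benign_dir, malicious_dir):
    benign = [p for p in pathlib.Path(benign_dir).iterdir() if p.is_file()]
    malicious = [p for p in pathlib.Path(malicious_dir).iterdir() if p.is_file()]
    false_pos = sum(1 for p in benign if md5(p) in known_bad_hashes)
    false_neg = sum(1 for p in malicious if md5(p) not in known_bad_hashes)
    return false_pos / len(benign), false_neg / len(malicious)

# fp_rate, fn_rate = error_rates(known_bad_md5s, "corpus/benign", "corpus/malware")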

d00d, I thought you would've seen what I tried to do, and learned from that. ;-)

Very, very true, I should have learned...but this idea got you interested, didn't it? And you're part of the community ;-)

H. Carvey said...

...if one piece unravels, then the total result is diminished...

Not necessarily. The idea is to use multiple pieces of data that support each other.

Let's say you have several pieces of evidence against a guy who used a computer to compromise gov't computer systems. You have all sorts of 'evidence'...witnesses saw him going into the building, keycard access, video surveillance, etc. He sits down at a Windows system and logs in...but from there he connects to another system, so your 'evidence' of his actions at that point becomes dubious, or perhaps harder to explain or describe to a jury.

Does this diminish the truthfulness of the other 'evidence'? No.

...having a formal method of tool testing will help disprove things...

I agree with that. However, I think that the reasons for (as well as the methodology of) the tool testing need to be clarified. What is it that you're 'testing'? Are you testing that tlist.exe or pslist.exe are capable of returning all active processes? If that's the case...how do you know? If I run pslist.exe 100, or 1,000, or 100,000 times, how do I know that each time it returned every active process on the system?

I'm not disagreeing that there needs to be a formal methodology...what I'm trying to do is understand what it is you're looking to 'test'.