There are just some things you can't say often enough. For example, tell the ones you love that you love them. Also, tell them that when they suspect an incident, do NOT shut the system off!
Think of Brad Pitt and Ed Norton (no, not that way!!) in Fight Club, but paraphrased..."the first rule of incident response is...don't panic. The second rule of incident response is...DO NOT PANIC!!"
Okay, let's go back to the beginning...well, maybe not that far. Let's go back to an InformationWeek article published in August 2006, in which Kevin Mandia was quoted. Here's one interesting quote:
One of the worst things users can do if they think their systems have been compromised by a hacker is to shut off their PCs, because doing so prevents an investigator from analyzing the contents of the machine's RAM, which often contains useful forensic evidence, Mandia said.
So, what do you think? Is he right? Don't say, "well, yeah, he must be...he's Kevin Mandia!" Think about it. While you're doing that, let me give you a scenario...you're an admin, and you detect anomalous traffic on your network. First, you see entries in firewall or IDS logs. You may even turn on a sniffer to capture network traffic information. So let's say that at that point, you determine that a specific system is connecting to a server on the Internet on an extremely high port, and the traffic appears to be IRC traffic. What do you do? More importantly, what do you want to know?
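For what it's worth, here's what that sniffer step might look like as a quick Python/scapy sketch...the port cutoff and the IRC keywords below are my own assumptions for illustration, not any kind of definitive signature:

```python
# Toy sniffer sketch: flag TCP traffic to high ports whose payload looks
# like IRC. The port cutoff and keyword list are illustrative assumptions,
# not a real signature. Sniffing requires admin/root privileges.
from scapy.all import IP, TCP, Raw, sniff

IRC_COMMANDS = (b"NICK ", b"USER ", b"JOIN ", b"PRIVMSG ")
HIGH_PORT = 1024  # assumed threshold for an "extremely high port"

def check_packet(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if pkt[TCP].dport > HIGH_PORT:
            payload = bytes(pkt[Raw].load)
            if any(cmd in payload for cmd in IRC_COMMANDS):
                print(f"possible IRC: {pkt[IP].src}:{pkt[TCP].sport} -> "
                      f"{pkt[IP].dst}:{pkt[TCP].dport}")

sniff(filter="tcp", prn=check_packet, store=0)
```

Something like this gets you from "anomalous traffic" to "this specific system is talking IRC to that specific server"...which is where the scenario picks up.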
The usual scenario continues like this...the system is taken offline, shut down, and stored in an office someplace. An outside third party may be contacted to provide assistance. At this point, once the incident is reported to management, the questions become...what is our risk or exposure? Was sensitive information taken? Were other machines affected? If so, how many?
This is where panic ensues, because not only do most organizations not know the answers to those questions, they don't know how to get the answers themselves, either. Here's another quote from the article:
...Mandia said rumors of a kernel-level rootkit always arise within the company that's being analyzed.
You'll see this in the public lists a lot..."I don't know what this issue is, and I really haven't done any sort of investigation...it's just easier to assume that it's a rootkit, because at the very least, that says the attacker is a lot smarter than me." Just the term rootkit implies a certain sexiness to the incident, doesn't it? After all, it implies that you've got something someone else wants, and an incredibly sophisticated attacker, a cyberninja, is coming after you. In some cases, it ends up being just a rumor.
The point of all this is that many times, the questions that management has about an incident cannot be answered, at least not definitively, if the first step is to shut down the system. At the very least, if you have detected anomalous traffic on your network, and traced it back to a specific system, rather than shutting the system off, take that additional step to collect process and process-to-port mapping info (for a list of tools, see chapter 5 of my first book) from the system. That way, you've closed the loop...you've not only tied the traffic to a system, but you've also tied it to a specific process on the system.
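Here's a rough sketch of what closing that loop can look like, in Python with psutil (the suspect address below is a hypothetical placeholder for whatever you saw in the logs...the tools in chapter 5 are the way to do this natively on a live Windows box):

```python
# Sketch: tie the suspicious traffic back to the process that owns the
# connection, without shutting the system down. The remote IP is a
# hypothetical stand-in for the server seen in the firewall/IDS logs.
# Needs elevated privileges to see other users' connections.
import psutil

SUSPECT_IP = "203.0.113.50"  # hypothetical address from the alert

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip == SUSPECT_IP and conn.pid:
        proc = psutil.Process(conn.pid)
        print(f"{conn.laddr.ip}:{conn.laddr.port} -> "
              f"{conn.raddr.ip}:{conn.raddr.port} "
              f"pid={conn.pid} name={proc.name()} exe={proc.exe()}")
```

Now the entries in the logs, the system, and a specific process on that system are all tied together...and you have something concrete to tell management.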
Of course, if you're going to go that far, you might as well use something like the Forensic Server Project to gather even more information.
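The idea behind the FSP, in a nutshell, is to get the volatile data off the suspect system without writing anything to its disk. A toy client like this illustrates the concept (to be clear, this is NOT the FSP itself or its protocol, and the server address is hypothetical):

```python
# Toy illustration of the Forensic Server Project *idea*: push volatile
# data to a listener on a forensic workstation instead of writing it to
# the suspect system's disk. This is NOT the FSP protocol, just the
# concept; the server address is hypothetical.
import socket
import subprocess

SERVER = ("10.0.0.5", 7070)  # hypothetical forensic workstation

def ship(label: str, data: bytes) -> None:
    with socket.create_connection(SERVER) as s:
        s.sendall(label.encode() + b"\n" + data)

# Example: netstat output goes straight over the wire, never to disk
output = subprocess.run(["netstat", "-ano"], capture_output=True).stdout
ship("netstat-ano", output)
```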
3 comments:
I like this post, but reality tends to go something like this in most of the companies, and with most of the admins, I have been exposed to:
1. an incident that needs attention is reported/detected
2. take the system offline
3. do some low-level checks to figure out what is wrong
4. save the user's information, wipe the system, re-image it, and get that user back to work with as little downtime as possible, while management pretends nothing big really happened (or could have happened) and moves on with life.
This is the pressure on any admin who isn't in a highly secure environment with absolutely defined processes that are supported by management and that trump the user himself or the user's manager.
In this case, I believe it is highly, highly important that all admins at least be exposed to how to properly gather information and preserve evidence as quickly as possible. Get some initial information on ports, processes, threads, and open files. Dump memory. Grab an image of the disk, with checksums. Then attend to the business need of getting that user back up, and when time permits, pursue restoring that image to an identical machine or otherwise examining it in more detail. Far too many of us don't know how to do any of those things.
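Something like this rough Python/psutil sketch is what I mean by that first pass (the output path is a hypothetical external drive, and memory dumps/disk images need dedicated tools):

```python
# First-pass collection sketch: processes (with threads and open files)
# and network connections via psutil, each output hashed with SHA-256 so
# it can be verified later. The output path is a hypothetical external
# drive; memory dumps and disk images need dedicated tools.
import hashlib
import json
import psutil

OUT_DIR = "E:/collection"  # hypothetical external drive, not the suspect disk

def save(name: str, text: str) -> None:
    with open(f"{OUT_DIR}/{name}.txt", "w") as f:
        f.write(text)
    print(f"{name}: sha256={hashlib.sha256(text.encode()).hexdigest()}")

procs = [p.info for p in psutil.process_iter(
    ["pid", "name", "exe", "num_threads", "open_files"])]
save("processes", json.dumps(procs, default=str, indent=2))
save("connections",
     "\n".join(str(c) for c in psutil.net_connections(kind="inet")))
```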
Anyway, you might cover that in your book, which I have already bought but which is still in my personal reading queue. ;)
LV,
I agree with you, to a point. My experience so far has been that #3 isn't done very well, or it's completely skipped.
I do think that your comment about "supported by management" is the key...if management supported/required a root cause analysis and supported training for the admins, particularly in the area of response, things would be different.
Exposing the admins to what they need won't be enough. I've worked with SANS-certified admins who, in a training scenario, haven't a clue what to do. For incident responders, IR needs to become as second nature as logging in in the morning.
Thanks for the comment.
You're right, it does need to be practiced and second nature. :)