Monday, February 25, 2008

When First Responders Attack!!

It still happens...an "event" or "incident" occurs within an organization, and the initial response from the folks on-site (most often, the organization's IT staff) obliterates some or all of the "evidence" of the event. Consultants are then called in to determine "how they got in", "how far they got" into the infrastructure, and "what data was taken", but are unable to completely answer those questions (if at all) due to what happened in the first hours (or, in some cases, days) after the incident was discovered.

Check out Ignorance wrecking evidence, from AdelaideNow in Australia. It's an excellent read from the perspective of law enforcement, but a good deal of what's said applies across the board.

One of the things that consultants see very often is a disparity between what first responders say they did during an initial interview, and what the analyst sees during an examination. Very often, the consultant is told that the first responders took the system offline, but didn't do anything else. However, analysis of the image shows that anti-virus and anti-spyware tools were installed and run, files were deleted, and files were even restored from backup. A great deal of this can be seen once the approximate timeline of the incident is determined...very often, you'll see an administrator log in, install or delete/remove stuff, etc., and then say that they didn't do anything.
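As a purely hypothetical sketch (not any particular tool or case), the idea behind that timeline is simple: normalize timestamped artifacts from different sources, sort them, and the post-discovery admin activity stands out right next to the intrusion itself. The event data below is made up for illustration.

```python
# Hypothetical timeline data...not from any real case. Once artifacts from
# different sources (file system metadata, Event Logs, etc.) are normalized
# and sorted by time, responder activity is easy to spot in context.
from datetime import datetime

events = [
    ("2008-02-20 03:12", "file system", "suspicious.exe created in C:\\Windows\\Temp"),
    ("2008-02-21 09:05", "event log",   "interactive logon by DOMAIN\\admin"),
    ("2008-02-21 09:10", "file system", "anti-spyware tool installed"),
    ("2008-02-21 09:45", "file system", "C:\\Windows\\Temp\\suspicious.exe deleted"),
]

# Sort by timestamp and print a simple, ordered timeline.
for ts, source, desc in sorted(events,
                               key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M")):
    print(f"{ts}  [{source:<11}]  {desc}")
```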

Why would this matter? Let's take a look...

Many analysts still rely on traditional examination techniques, focusing on file MAC times, etc. An admin logs into a system and runs an AV or anti-spyware scan (or both...or just "pokes around"...something that happens a LOT more often than I care to think about...), and now all of the file access times on the system have been modified, and perhaps some files have been deleted. Anyone remember this article on anti-forensics that appeared in CIO Magazine? Why worry about that stuff, when more activity of this nature occurs due to either the operating system itself or regular, day-to-day IT network ops?
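To illustrate (a minimal sketch, assuming the volume in question still tracks last-access times at all...NTFS and many Linux mounts can be configured not to), here's roughly what that well-meaning "scan" does to the access-time picture an examiner would otherwise rely on:

```python
# A minimal sketch of why a post-incident "scan" stomps on MAC times: simply
# reading every file updates its last-access time (where the file system is
# configured to track it...noatime/relatime mounts and NTFS settings may not).
# "evidence_dir" is a hypothetical directory, not a real mount point.
import os

def snapshot_atimes(root):
    """Record the last-access time of every file under root."""
    atimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atimes[path] = os.stat(path).st_atime
            except OSError:
                pass
    return atimes

def simulate_scan(root):
    """Stand-in for an AV/anti-spyware scan: open and read each file."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    f.read(4096)
            except OSError:
                pass

before = snapshot_atimes("evidence_dir")
simulate_scan("evidence_dir")
after = snapshot_atimes("evidence_dir")

stomped = [p for p in before if after.get(p, before[p]) != before[p]]
print(f"{len(stomped)} of {len(before)} files had their last-access times overwritten")
```

Run something like that against a test directory and the point makes itself...once those access times are overwritten, the pre-incident access pattern is gone.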

So what's the solution? Education and training, starting with senior management. They have to make it important. After all, they're the ones that tell IT that systems have to stay up, right? If senior management were really aware of how many times (and how easily) their organization got punked or p0wned by some 15-year-old kid, then maybe they'd put some serious thought and effort into protecting their organization, their IT assets, and more importantly, the data (read: YOUR data) that they store and process.

4 comments:

  1. Anonymous, 2:40 PM

    Interesting note: in more recent cases where I've been called on for forensic analysis relating to (potential) theft of IP, I'm getting asked by the CEOs/CIOs to ALSO determine "what did my IT admin(s) do?" ...sigh...

  2. Anonymous, 10:24 AM

    Many companies make the mistake of trying to hire small computer repair shops to do their investigation. The lawyers try to rip the techs from their work by threat of subpoena. The tech, without experience in forensics or any desire to backlog his repair work, is in a bad position and will probably act like a hostile witness. The only thing you should ask a local repair shop to do is image the drive, period.

  3. I would guess that the hard part is defining an incident and educating IT staff. How does one tell when a daily "my system is acting funny" request or "what's this traffic hitting the firewall?" question will turn into an incident? IT admins will probably always approach issues as normal issues, until they realize that something falls into the realm of an incident.

    One would definitely approach a pornography or data theft issue far differently than those two issues above. Kinda like if I have my hands on a system doing normal work and find child porn on it, I really have to take my hands off the keyboard, stop everything, and escalate.

    Issues like that are easy to define. But when IT staff are often evaluated on customer satisfaction, which is based on how quickly they get the customer back up and running, taking the time to put on the brakes is costly if they're wrong.

  4. I would guess that the hard part is defining an incident and educating IT staff.

    In my experience so far, it's not hard defining anything...especially "incidents". There are many sites that all define incidents pretty much the same way, be they CERT, FIRST, whatever. The hard part is getting senior management on board and making IR of any kind relevant to the business of the organization.

    How does one tell when a daily "my system is acting funny" request or "what's this traffic hitting the firewall?" question will turn into an incident?

    Basic troubleshooting 101...I learned it as a 2dLt years ago, and applied what I learned to the digital side of things. It's also something that not many IT admins are familiar with.

    ...based on how quickly they get the customer back up and running...

    Exactly my point! Where does this priority come from, if not senior management? And one doesn't have to "put on the brakes" if they know what they're doing...b/c they've been trained.
