
Tuesday, April 26, 2011

Proactive IR

There are a couple of things that are true about security in general, and IR specifically.  One is that security seems to be difficult for some folks to understand (it's not generally part of our culture), so those of us in the industry tend to use a lot of analogies in an attempt to describe things to others who aren't in our area of expertise.  Sometimes this works, sometimes it doesn't.

Another thing that's true is that the current model for IR doesn't work.  For consulting companies, it's hard to keep a staff of trained, dedicated, experienced responders available and on the bench, because if they sit unused they get pulled off into other areas (because those guys "do security stuff"), and like many areas of information security (web app assessments, pen testing, malware RE, etc.), the hard-core technical skills are perishable.  Most companies that need such skills simply don't keep these sorts of folks around, as they look to consulting companies to provide this service.

Why doesn't this work?  Think about it this way...who calls emergency incident responders?  Well, those who need emergency incident response, of course.  Many of us who work (or have worked) as incident responders know all too well what happens...the responders show up, often well after the incident actually occurred, and have to first develop an understanding of not just what happened (as opposed to what the customer thinks may have happened), but also "get the lay of the land"; that is, understand what the network infrastructure "looks like", what logs may be available, etc.  All of this takes time, and that time means that (a) the incident isn't "responded to" right away, and (b) the clock keeps ticking as far as billing is concerned.  Ultimately, what's determined with respect to the customer's needs really varies; in fact, the questions that the customer had (i.e., "what data left our network?") may not be answered at all.

So, if it doesn't work, what do we do about this?  Well, the first thing is that a cultural shift is needed.  Now, follow me here...all companies that provide a service or product (which is pretty much every one of them) have business processes in place, right?  There's sales, customer validation, provisioning and fulfillment, and billing and collections...right?  Companies have processes in place (documented or otherwise) for providing their product or service to customers, and then getting paid.  Companies also have processes in place for hiring and paying employees...because without employees to provide those products or services, where would you be?

Ever since I started in information security, one of the things I've seen across the board is that most companies do not have information security as a business process.  Companies will process, store and manage all manner of sensitive data...PCI, PHI, PII, critical intellectual property, manufacturing processes and plans, etc...and not have processes for protecting that data, or responding to incidents involving the possible exposure or modification of that data.

Okay, how about those analogies?  Like many, I consider my family to be critical, so I have smoke alarms and fire extinguishers in my home, we have basic first aid materials, etc.  So, essentially, we have measures in place to prevent certain incidents, detect others, and we've taken steps to ensure that we can respond appropriately to protect those items we've deemed "critical".

Here's another analogy...during my undergraduate education, we were required to take boxing.  If you're standing in a class and see everyone in line getting punched in the face because they don't keep their gloves up, what do you do?  Do you stand there and convince yourself that you're not going to get punched in the face?  When you do get punched in the face because you didn't keep your gloves up, do you blame the other guy ("hey, dude! WTF?!?!") or do you accept responsibility for getting punched in the face?  Or, do you see what's happening, realize that it's inevitable, listen to what you're being told, and develop a culture of security and get your gloves up?  The thing about getting punched in the face is that no matter what you say or do afterward, the fact remains...you got punched in the face.

Here's another IRL example...I recently ran across this WaPo article that describes how farms in Illinois are pre-staging critical infrastructure information in an easily accessible location for emergency responders; the intention is to "prevent or reduce property damage, injuries and even deaths" in the event of an incident.  Variations of the program have reportedly been rolled out in other states, and seem to be effective.  What I find interesting about the program is that in Illinois, aerial maps are taken to each farm, and the farmers (those who established, designed, and maintain the infrastructure) assist in identifying structures, etc.  This isn't a "here's $40K, write us a CSIRP"...instead, the farmer has to take some ownership in the process, but I guess they do that because a one-hour or one-afternoon interview can mean the difference between minor damage and losing everything.

Sound familiar?

As a responder, I'm aware of various legislation and regulatory bodies that have mandated the need for incident response capabilities...Visa PCI, NCUA, etc.  States have laws for notification in the case of PII breaches, which indirectly require an IR capability.  Right now, who's better able to respond to a breach...local IT staff who know and work in the infrastructure every day (and just need a little bit of training in incident response and containment) or someone who will arrive on-site in anywhere between 6 and 72 hours, and will still need to develop an understanding of your infrastructure?

If the local IT staff knew how to respond appropriately, and were able to contain the incident and collect the necessary data (because they had the training, tools, and processes for doing so), analysis performed by that trusted third party adviser could begin much sooner, reducing response time and overall cost.  If the local IT staff (under the leadership of a C-level executive, like the farmer) were to take steps to prepare for the incident...identify and correct shortfalls in the infrastructure, determine where configuration changes to systems or the addition of monitoring would assist in preventing and detecting incidents, determine where critical data resides/transits, develop a plan for response, etc...just as is mandated in compliance requirements, then the entire game would change.  Incidents would be detected by the internal staff closer to when they actually occur...rather than months later, by an external third party.  Incident response would begin much quicker, and containment and scoping would follow suit.

Let's say you have a database containing 650K records (PII, PCI, PHI, whatever).  According to most compliance requirements, if you cannot explicitly determine which records were exposed, you have to report on ALL of them.  Think of the cost associated with that...direct costs of reporting and notification, followed by indirect costs of cleanup, fines, lawsuits, etc.  Now, compare that to the cost of doing something like having your DBA write a stored procedure (one that includes authorization and logging) for accessing the data, rather than simply allowing direct access to the data.
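
To illustrate the idea (not any particular DBA's implementation), here's a minimal Python sketch of the same pattern at the application layer: every read of a sensitive record goes through a single gate that checks authorization and writes an audit row.  The table and column names (customer_pii, access_log) are hypothetical, and SQLite stands in for whatever database you actually run; the point is that the audit trail is what lets you scope an exposure to specific records later, instead of reporting on all 650K.

import sqlite3
from datetime import datetime, timezone

# Hypothetical principals allowed to read the sensitive data.
AUTHORIZED = {"claims_app", "dba_jane"}

def get_record(conn, user, record_id):
    # Single, gated path to the data: authorization check first...
    if user not in AUTHORIZED:
        raise PermissionError("{0} is not authorized to read these records".format(user))
    # ...then an audit entry recording who touched which record, and when,
    # before the data is ever returned.
    conn.execute(
        "INSERT INTO access_log (user, record_id, accessed_at) VALUES (?, ?, ?)",
        (user, record_id, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    return conn.execute(
        "SELECT id, name FROM customer_pii WHERE id = ?", (record_id,)
    ).fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer_pii (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE access_log (user TEXT, record_id INTEGER, accessed_at TEXT)")
    conn.execute("INSERT INTO customer_pii VALUES (1, 'Jane Doe')")
    print(get_record(conn, "claims_app", 1))   # allowed, and now logged
    print(conn.execute("SELECT * FROM access_log").fetchall())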

Being ready for an incident is going to take work, but it's going to be less costly in the long run when (not if) an incident occurs.

What are some things you can do to prepare?  Identify logging sources, and if necessary, modify them appropriately (add Process Tracking to your Windows Event Logs, increase log sizes, set up a means for centralized log collection, etc.).  Develop and maintain accurate network maps, and know where your critical data is located.  The problem with hiring someone to do this for you is that you don't have any ownership; when the job's done, you have a map that is an accurate snapshot, but how accurate is it 6 months later?  Making incident detection and tier 1 response (i.e., scoping, data collection) a business process, with the help of a trusted adviser, is going to be quicker, easier and far less costly in the long run, and those advisers will be there when you need the tier 3 analysis completed.
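
As a rough illustration of the logging changes mentioned above, here's a short Python sketch that shells out to the built-in Windows auditpol and wevtutil tools to enable Process Creation auditing and enlarge the event logs.  It assumes a Vista-or-later system and an administrative prompt, and the sizes shown are examples, not recommendations...tune them to your own retention needs.

import subprocess

def run(cmd):
    # Echo each command before running it, and stop on the first failure.
    print(">", " ".join(cmd))
    subprocess.check_call(cmd)

# Turn on auditing of process creation (success events) in the Security log.
run(["auditpol", "/set", "/subcategory:Process Creation", "/success:enable"])

# Bump the Security Event Log maximum size to 1 GB (the /ms: value is in bytes)
# so entries aren't overwritten before anyone has a chance to collect them.
run(["wevtutil", "sl", "Security", "/ms:1073741824"])

# Enlarge the Application and System logs as well (example: 256 MB each).
for log in ("Application", "System"):
    run(["wevtutil", "sl", log, "/ms:268435456"])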

What about looking at things like Carbon Black?  Cb has a number of uses beyond just IR, and can help you solve other problems as well.  However, with respect to IR, it can not only tell you what was run and when, but it can also keep a copy of it for you...so when it comes to determining the capabilities of the malware downloaded to your system, you already have a copy available; call that trusted adviser and have them analyze it for you.

Remember the first Mission: Impossible movie?  After his team was wiped out, Ethan made it back to the safe house and as he reached the top of the stairwell, took the light bulb out of the socket and crushed it in his jacket, then spread the shards on the floor as he backed toward his room.  What this does is provide a free detection mechanism...anyone approaching the room isn't going to know that the shards are there until they step on them and alert Ethan to their presence; incident detection.

So what are you going to do?  Wait until an incident happens, or worse, wait until someone tells you that an incident happened, and then call someone for help?  You'll have to find someone, sign contracts, get them on-site, and then help them understand your infrastructure so that they can respond effectively.  When they're first there, you're not going to trust them (they're new, after all) and you're not going to speak their language.  In most cases, you're not going to know the answers to their questions...do we even have firewall logs?  What about DHCP...do we log that?  What will happen is that you will continue to hemorrhage data throughout this process.

The other option is to have detection mechanisms and a response plan in place and tested, and have a trusted adviser that you can call for assistance.  Your local IT staff needs to be trained to perform the initial response, scoping and assessment, and even containment.  While the IT director is on the phone with that trusted adviser, designated individuals are collecting and preserving data...because they know where it is and how to get it.  The questions that the trusted adviser (or any other consulting firm) would ask are being answered before the call is made, not afterward ("Uh...we had no idea that you'd ask that...").  That way, you don't lose the whole farm, and if you do get punched in the face, you're not knocked out.

By the way...one final note.  This doesn't apply solely to large companies.  Small businesses are losing money hand over fist, and some are even going out of business...you just don't hear about it as much.  These same things can be done inexpensively and effectively, and need to be done.  The difference is, do you get it done, even if you have to have a payment plan, or do you sit by and wait for an incident to put you out of business and lay off your employees?

4 comments:

  1. Dave Nelson, 10:03 PM

    Compromise on client nodes is inevitable. Proactive incident response systems of the future are going to rely on things like automated indexing of binary hashes based on process creation, correlating least common occurrence in a network, and automating the transmission of the malware to sandboxes/repositories, with risk results used to flag a system for review or to kick off automated incident response scripts. Bonus points will be given for incorporating binary entropy (not at all perfect), certificates (proven fail), path info (for some reason this is still working and relevant) and event correlation, but most of those aren't really solid identifiers. Some of this is already being built into commercial ware. Some projects are being spun up that provide some of this functionality, such as El Jefe and Carbon Black (we even built our own), but no one is anywhere near 'arrived'. Unfortunately, I suspect the free projects will be outgunned by commercial ware in speed, marketing and integration. Prove me wrong and make me happy.

    With the growing number of mobile devices and remotely connected devices, it's more important than ever to get artifact information as close as possible to the actual event. Monitors and triggers should be in place either as low level hooking agents (that don't stomp on other low level hooks) or customized real-time log forwarding (based on process creation) to initiate incident response scripts within minutes of an event. Only then can you build a reliable system to determine the extent of compromise and whether a node needs to be reimaged, passwords reset and the event registered in the incident log. Automated network disconnects and network initiated reimages are already here at larger companies. Eventually it will be done even faster and more widespread with VDI implementations.

    For incident response, we created a hybrid of live forensics and tools normally only used post mortem on a DD image. Combining free tools such as FGET, AnalyzeMFT, Memoryze, Regextract (not the console version anymore), NirSoft tools and a full battery of other tools and scripts normally run on a DD image, but automated and adapted to run against a live production node, we centrally gather the information to gauge the extent of compromise while the client continues to work. We've built scripts to gather file indexes from Shadow Copy and MFT information. Automation of a persistent client popup, network card disabling and workstation account disabling are fairly easy, but we continue to look for a free tool or script to automate the parsing of the Windows Desktop Search database, either in full or by age. I am crossing my fingers for NirSoft to take up the flag since he already has a version for Live Messenger files.

    Malware incident response has come a long way but we are still woefully short of proactive. Sometimes I feel like the sand on the shore, pounded by the mafia waves as nodes are towed out to sea.

  2. Dave,

    I greatly appreciate your comments...

    Proactive incident response systems of the future...

    ...but I also have to disagree because I think you're way off base. Sorry, but I read your comments as you being a very technical responder, albeit perhaps an admin in an FTE position, and this is your "in an ideal world" wish list.

    Don't get me wrong...from a technical perspective, I agree. However, I've responded to enough incidents to know that it will be a LONG time before something like that gets added to an organization's infrastructure. It doesn't matter if there is a technically complete and perfect tool available that will do everything at the push of a button, if the corporate culture of the "victim" organization simply does not allow for its use or deployment.

    ...I suspect the free projects will be outgunned by commercial ware...

    Again, I disagree. Snort started out free; yes, NetWitness is commercial, but it doesn't have the availability. From a DF perspective, I'm looking at things like timeline analysis, which is something you can ONLY do with freeware tools...it's a powerful analysis technique, but commercial forensic applications are being built to their customers' requirements, rather than leading those analysts where they need to go.

    IR prep is NOT on people's plates at the moment, and based on our culture in the US, likely won't be for a while. "Compliance" standards have been out for a while, and rather than realizing that these are a necessary first step, most organizations are pushing back, even while other companies all around them are getting hit.

    My hope is to change the culture of just one organization...change their thinking. Get them to accept that yes, an incident WILL happen, and they need to take ownership of a response plan before they think about visibility...doing it the other way around is going to mean "seeing" all of the stuff that goes on, but not being able to respond to it, and the effort will fail. I would like to change the culture of one company, to get them to accept that the data that they're processing...PCI, PHI, PII, IP, whatever...is critical to their business and needs to be protected, just like their people and facilities.

    Can I count on getting your support? Again, I greatly appreciate your comments.

  3. Dave Nelson, 1:04 AM

    Organizational culture towards security is merely the sum of a few parts in a corporate cogwheel that makes up the power chain to the minds and monies of a few. Those individuals don't change their thinking about security for security's sake. You don't sell them an insurance plan. Compliance laws and regulations don't change the way they think about security. It's not about convincing them of D-Day, it's about showing them that even now, under their noses, bits are trickling away to far off places and they don't even know what's in the obfuscated and encrypted packets.

    It does start with visibility, both internal visibility and external visibility. Visibility by showing that compromise is affecting real companies all the time, from Google to RSA and HBGary Federal. Showing data on the corporate LAN being sent to China, Russia and the Cocos Islands and that the vast majority of compromises are using unmodified malware kits. Nothing stops a click.

    Visibility is the seed that allows you to grow incident response processes and systems. How can someone blindly build a response plan?

  4. I'm not suggesting that a CSIRP be built blindly...but visibility before having some ability to respond is going to do...what? Okay, I've got bleeding, but I know nothing about bandages or first aid...
