I've been doing IR work for a while now, and I've had the great fortune of watching things grow over time.
When I started in the industry, there were no real training courses or programs available, and IR business models pretty much required (and still do) that a new analyst be out on the road as soon as they're hired. I developed some skills, had some training (initial EnCase training in '99), and when I got to a consulting position, I was extremely fortunate to have a boss who took an interest in mentoring me and providing guidance, something I greatly appreciate to this day.
However, I've seen this same issue with business models as recently as 2018. New candidates for IR teams are interviewed, and once they're hired and go through the corporate on-boarding, there is little, if any, facility for training or supervision...the analysts are left to themselves. Yes, they are provided with tools, software products, and report templates, but for the most part, that's it. How are they communicating with clients? Are they sending in regular updates? If so, are those updates appropriate, and more importantly, are they technically correct? Are the analysts maintaining case notes?
Over the years of doing IR work, I ran into the usual issues that most of us see...like trying to find a physical system in a data center by accessing it remotely and opening the CD-ROM tray. But two things kept popping into my mind. One was that I really wished there were a way to get a broader view of what was happening during an incident. Rather than the client sending me the systems that they thought were involved, or the "key" systems, what if I could get a wider view of the incident? This became very evident when there were indications on the systems I was examining that they had been accessed from, or had accessed, other systems on the same infrastructure.
The other was that I was spending a lot of time looking at what was left behind after a process ran. I hadn't seen Bigfoot tromp across the field; due to the nature of IR, all I had to look at were footprints that were several days old. What if, instead of telling the client that there were gaps in the data available for analysis (because I'm not going to guess or speculate...), I actually had a recording of the process command lines?
During one particularly fascinating engagement, it turned out that the client had installed a monitoring program on two of the affected systems. The program was one of those applications that parents use to monitor their kids' computers, and what the client provided was a series of 3-frame-per-second videos of what went on. As such, I just accessed the folder, found all of the frames with command prompts open, and could see exactly what the adversary typed at the prompt. I then went back and watched the videos to see what the adversary was doing via the browser, as well as via other GUI applications.
How useful are process command lines? Right now, there are a number of artifacts on systems that give analysts a view into what programs were run on a system, but not how they were run. For instance, during an engagement where we had established process creation monitoring across the enterprise, an alert was triggered on the use of rar.exe, which is very often used as a means of staging files for exfiltration. The alert was not for "rar.exe", as the file had been renamed, but was instead for the command line options that had been used, and as such, we had the password used to encrypt the archives. When we received the image from the system and recovered the archives (they'd been deleted after exfil), we were able to open the archives and show the client exactly what was taken.
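To make that concrete, here's a minimal sketch of what a command-line-based detection might look like. The event structure, field names, and sample events are assumptions for illustration (loosely modeled on Sysmon-style process creation records), not any particular product's format. The point is that the rule keys on the rar.exe command line "shape"...the "a" (add to archive) command plus the -hp header-encryption switch...so renaming the binary doesn't help the adversary, and the switch hands us the archive password.

```python
import re

# Hypothetical process-creation events; the field names and samples here are
# placeholders for whatever your sensor (Sysmon, EDR agent, etc.) records.
events = [
    {"Image": r"C:\Users\Public\svchost.exe",
     "CommandLine": r"svchost.exe a -hpP@ssw0rd1 -m5 C:\Users\Public\d.rar C:\data\*"},
    {"Image": r"C:\Windows\System32\notepad.exe",
     "CommandLine": r"notepad.exe C:\temp\notes.txt"},
]

# Match on the rar.exe option "shape": the "a" (add) command plus the -hp
# switch, which encrypts file data and headers with the password that
# immediately follows it on the command line.
RAR_STAGING = re.compile(r"\s+a\s+.*-hp(\S+)", re.IGNORECASE)

for evt in events:
    match = RAR_STAGING.search(evt["CommandLine"])
    if match:
        print(f"[ALERT] possible archive staging via {evt['Image']}")
        print(f"        recovered archive password: {match.group(1)}")
```

A real deployment would obviously do this at the sensor or SIEM rather than in a script, but the logic is the same: alert on how the program was run, not on what it was named.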
So, things have progressed quite a bit over the years, while some things remain the same. While significant inroads have been made into establishing enterprise-wide visibility, the increase in device types (Windows, Mac, Linux, IoT, mobile, etc.) still requires us to have the ability to go out and get (or receive) individual devices or systems for collection and analysis; those skills will always be required. As such, if the business model isn't changed in some meaningful way, we are going to continue to have instances where someone without the appropriate skill sets is sent out on their own.
The next step in the evolution of IR is MDR, which does more than just mash MSS and IR together. What I mean by that is that the typical MSS functionality receives an alert, enriches it somehow, and sends the client a ticket (email, text, etc.). This then requires that the client receive and understand the message, and figure out how they need to respond...or that they call someone to get them to respond. While this is happening, the adversary is embedding themselves deeply within the infrastructure...in the words of Jesse Ventura from the original Predator movie, "...like an Alabama tick."
Okay, so what do you do? Well, if you're going to have enterprise-wide visibility, how about adding enterprise-wide response and remediation? If we're able to monitor process command lines, what if we could specify conditions that are known pretty universally to be "bad", and stop the processes? For example, every day, hundreds of thousands of us log into our computers, open Outlook, check our email, and read attachments. This is all normal. What isn't normal is when that Word document that arrived as an email attachment "opens" a command prompt and downloads a file to the system (as a result of an embedded macro). If it isn't normal and it isn't supposed to happen and we know it's bad, why not automatically block it? Why not respond at software speeds, rather than waiting for the detection to get back to the SOC, for an analyst to review it, for the analyst to send a ticket, and for the client to receive the ticket, then open it, read it, and figure out what to do about it? In that time, your infrastructure could be hit by a dedicated adversary, or by ransomware.
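As a sketch of what that pre-decided block might look like, the rule below flags an Office application spawning a shell or downloader. The parent/child sets and the function are illustrative assumptions, not any specific product's API; in practice the same logic would live in an EDR policy that kills or suspends the child process the moment the pair is observed.

```python
# Illustrative parent/child pairing; an EDR policy would enforce this at the
# sensor rather than in a script, but the decision logic is the same.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SUSPECT_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe",
                    "mshta.exe", "certutil.exe", "bitsadmin.exe"}

def should_block(parent_image: str, child_image: str) -> bool:
    """Return True when an Office process spawns a shell/downloader --
    the macro-opens-a-command-prompt case described above."""
    parent = parent_image.lower().rsplit("\\", 1)[-1]
    child = child_image.lower().rsplit("\\", 1)[-1]
    return parent in OFFICE_PARENTS and child in SUSPECT_CHILDREN

# The Word macro scenario: block it before the download completes.
print(should_block(r"C:\Program Files\Microsoft Office\WINWORD.EXE",
                   r"C:\Windows\System32\cmd.exe"))      # True
# A user opening a command prompt from Explorer: leave it alone.
print(should_block(r"C:\Windows\explorer.exe",
                   r"C:\Windows\System32\cmd.exe"))      # False
```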
If you stop the download from occurring, you prevent all sorts of bad follow-on things from happening, like having to report a "personal data breach", per GDPR.
Of course, the next step would be to automatically isolate the system on the network. Yes, I completely understand that if someone's trying to do work and they can't communicate off of their own system, it's going to hamper or even halt their workflow. But if that were the case, why did they say, "Yes, I think I will 'enable content', thank you very much!", after the phishing training showed them why they shouldn't do that? I get that it's a pain in the hind end, but which is worse...enterprise-wide ransomware that not only shuts everything down but also requires you to report under GDPR, or one person needing a new computer to get work done?
So, the overall point I'm trying to make here is that the future of IR is going to be to detect and respond faster. Faster than we have been. Get ahead of the adversary, get inside their OODA loop, and cycle through the decision process faster than they can respond. I've seen this in action...the military has things called "immediate actions", which are actions you take immediately when a specific condition is met. In the military, you train at these things until they're muscle memory, so that when those conditions do occur (say, your rifle jams), you perform the actions immediately. We can apply the same idea to the OODA loop by removing the need to make decisions under duress: we make the decision regarding a specific action ahead of time, while we have the time to think about it, so that we don't have to try to make it during an incident.
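In IR terms, an "immediate action" is just a response decision written down before it's needed. Here's a minimal sketch of that idea; the detection names, action names, and host name are entirely made up for illustration, and a real implementation would dispatch into whatever EDR/SOAR platform is in place.

```python
# Pre-decided responses: the mapping exists *before* the incident, so nobody
# has to make these calls under duress at 2am.
IMMEDIATE_ACTIONS = {
    "office_app_spawned_shell":   ["kill_process", "isolate_host", "notify_soc"],
    "rar_staging_command_line":   ["kill_process", "preserve_memory", "notify_soc"],
    "credential_dump_indicators": ["isolate_host", "force_password_reset", "notify_soc"],
}

def respond(detection: str, host: str) -> None:
    """Execute the pre-decided actions for a detection; anything we haven't
    decided on ahead of time falls back to a human analyst."""
    for action in IMMEDIATE_ACTIONS.get(detection, ["escalate_to_analyst"]):
        # In practice this would call into the EDR/SOAR platform's API.
        print(f"{host}: {action}")

respond("office_app_spawned_shell", "FINANCE-PC-042")
```

Note that isolating the host, as described above, becomes just one more pre-decided action in the list, rather than a judgment call someone has to make mid-incident.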
Detecting and responding more quickly is going to require a few things:
- Visibility
- Intelligence
- Planning
I'll be addressing these topics in future blog posts.
2 comments:
I'm astounded that SysInternals, or other similar tools for sale by other vendors, aren't widely deployed. We used Cb at my previous employer, and the visibility and responsiveness it granted us were incredibly valuable.