
Wednesday, January 19, 2022

How Do You Know What "Bad" Looks Like?

From the time I started in DFIR, one question was always at the forefront of incident responders' minds...how do you know what "bad" looks like? When I was heading on-site during those early engagements, that question was foremost in my mind, and very often the reason I couldn't sleep on the plane, even on the long, cross-country flights. As I gained experience, I started to have a sense of what "bad" might look like, and that question started coming from the folks around me (IT staff, etc.) while I was on-site.

How do you know what "bad" looks like?

The most obvious answer to the question is, clearly, "anything that's not 'good'". However, that doesn't really answer the question, does it? Back in the late '90s, I did a vulnerability assessment for an organization, and at one of the offices I visited, there were almost two dozen systems with the AutoAdminLogon value in the Registry. This was anomalous to the organization as a whole, an outlier within the enterprise, even though only a single system had the value set to "1" (all the others were set to "0"). The commercial scanning product we were using simply reported any system on which the value existed as "bad", regardless of what it was set to. However, for that office it was "normal", and the IT staff were fully aware of the value, and why it was there.
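For reference, the AutoAdminLogon value lives under the Winlogon key; the kind of check involved amounts to something like the following minimal sketch (my own illustration, not the commercial scanner's logic), which distinguishes the value merely existing from it actually being enabled:

```python
# Minimal sketch: check whether AutoAdminLogon exists vs. whether it is
# actually enabled ("1"). Assumes a live Windows system and the stdlib
# winreg module; not the scanner's actual implementation.
import winreg

WINLOGON_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def check_auto_admin_logon():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON_KEY) as key:
            value, _type = winreg.QueryValueEx(key, "AutoAdminLogon")
    except FileNotFoundError:
        return "value not present"
    # The value is typically stored as the string "0" or "1"
    return "enabled" if str(value).strip() == "1" else "present but disabled"

if __name__ == "__main__":
    print(check_auto_admin_logon())
```

The difference between "present" and "set to 1" is exactly the distinction the scanner glossed over.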

From this example, we can see that the universal answer applies here: "it depends". It depends upon the infrastructure in question, the IT staff engaged in managing that infrastructure, the "threat actor" (if there is one), etc. What's "bad" in an organization depends upon the organization. 

A responder from my team once went on-site and acquired the systems of one employee; the goal of the investigation was to determine whether there was anything "bad" on the systems, and the responder took that request, along with the images, and headed back to the lab. They began digging into the case, found all sorts of "hacking" tools, scripts, instructions on how to use some of the tools, etc., and reported on everything they found. The customer responded that none of this was "bad"; it was expected, as the employee's job was to test the security of their external systems.

While reviewing EDR telemetry over a decade and a half later, during an incident response engagement, I saw an Excel spreadsheet being opened by several employees, with macros being run. I thought I might have found the root cause, and we alerted the customer to the apparently nefarious activity...only to be told that, no, this was "normal" as well, part of a legitimate, standard business process and expected behavior. Talk about "crying wolf"...what happens the next time something like that is seen in telemetry, and reported? It's likely that someone in the chain, either on the vendor/discovery side or on the customer's side, is going to glance at the report, not look too deeply, and simply write it off as normal behavior.

The days of opening Task Manager during an engagement and seeing more svchost.exe processes than what is "usual" or "normal" are a distant memory, long faded in the rearview mirror. Threat actors, including insider threats, have grown more knowledgeable and cautious over the years. In more automated approaches to gaining access to systems, we're seeing scripts that include ping commands intended solely to delay execution, so that the impacts of the commands are not clustered tightly together in a timeline.
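As an illustration (my own heuristic, not drawn from any particular product), a simple check against command-line telemetry might flag loopback pings with a large echo count, since "ping -n <count> 127.0.0.1" is a common stand-in for "sleep" in batch scripts:

```python
# Rough heuristic: flag ping commands that target the loopback address with
# a large echo count, a common way to space out activity in a script.
import re

PING_DELAY = re.compile(
    r"ping(\.exe)?\s+(-n\s+(\d+)\s+)?(127\.0\.0\.1|localhost)", re.IGNORECASE
)

def looks_like_delay(command_line: str, min_count: int = 5) -> bool:
    m = PING_DELAY.search(command_line)
    if not m:
        return False
    count = int(m.group(3)) if m.group(3) else 4  # Windows default is 4 echoes
    return count >= min_count

# Example telemetry entries (made up for illustration)
for cmd in ["ping -n 30 127.0.0.1 > nul", "ping 8.8.8.8"]:
    print(cmd, "->", looks_like_delay(cmd))
```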

There are things...artifacts...that we universally know, or believe we know, to be bad. One of the best examples is the Registry value that tells the Windows OS to maintain credentials in memory in plain text; I cannot think of, nor have I seen, a legitimate business purpose for having this value (which doesn't exist by default) set to "1". I have, however, seen time and time again where threat actors created and set this value several days before dumping credentials. To me, this is something that I would consider to be a universally "bad" indicator. There are others; this is just one example.
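The value described is, presumably, WDigest's UseLogonCredential; a minimal live-response check (again, just a sketch assuming the stdlib winreg module on a live Windows system) might look like this:

```python
# Check for the WDigest UseLogonCredential value; it does not exist by
# default, and a data of 1 tells Windows to keep credentials in memory
# in plain text.
import winreg

WDIGEST_KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

def wdigest_plaintext_enabled() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WDIGEST_KEY) as key:
            value, _type = winreg.QueryValueEx(key, "UseLogonCredential")
    except FileNotFoundError:
        return False  # absent is the expected default
    return value == 1

if __name__ == "__main__":
    print("UseLogonCredential set to 1:", wdigest_plaintext_enabled())
```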

There are things that we believe are highly likely to be bad, such as the "whoami" command being run in an environment. I say "highly likely" because, while I have most often observed this command in EDR telemetry associated with threat actors, I have also seen those very rare instances where admins had accounts with different privilege levels and lost track of their current session; typing the "whoami" command allows them to orient themselves (just as it does the threat actor), or to troubleshoot their current permissions.
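A small triage sketch along these lines (the telemetry field names here are hypothetical) would pull "whoami" executions along with the context...user and parent process...that helps separate an admin checking their session from a threat actor orienting themselves:

```python
# Sketch only: iterate over process telemetry records (assumed to be dicts
# with 'image', 'parent_image', and 'user' keys) and surface whoami runs
# with the context needed to judge them.
def whoami_events(telemetry):
    for event in telemetry:
        if event.get("image", "").lower().endswith("whoami.exe"):
            yield {"user": event.get("user"), "parent": event.get("parent_image")}

# Made-up sample records for illustration
sample = [
    {"image": r"C:\Windows\System32\whoami.exe", "parent_image": "cmd.exe", "user": "CORP\\svc_web"},
    {"image": r"C:\Windows\System32\svchost.exe", "parent_image": "services.exe", "user": "SYSTEM"},
]
for hit in whoami_events(sample):
    print(hit)
```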

Finally, we need to be aware that something that is "bad" for the majority of an environment might not be bad for one outlier business unit. Back in 2000, I found a persistent route in the Registry on a system; further investigation revealed an FTP script that looked suspicious, and appeared to rely on the persistent route to work properly...I suspected that I'd found a data exfiltration mechanism. It turned out that this was a normal, legitimate business function for that department, something that was known, albeit not documented.
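For context, persistent routes added with "route -p add" are recorded as value names under the Tcpip PersistentRoutes key; a quick enumeration (my own sketch, again using the stdlib winreg module on a live system) looks like this:

```python
# Enumerate persistent routes; each value name is of the form
# "destination,mask,gateway,metric".
import winreg

ROUTES_KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\PersistentRoutes"

def list_persistent_routes():
    routes = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ROUTES_KEY) as key:
            num_values = winreg.QueryInfoKey(key)[1]
            for i in range(num_values):
                name, _data, _type = winreg.EnumValue(key, i)
                routes.append(name)
    except FileNotFoundError:
        pass
    return routes

if __name__ == "__main__":
    for route in list_persistent_routes():
        print(route)
```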

When determining "bad", a great deal comes into play: business processes and functions, acceptable use policies, etc. There's much to consider (particularly if your role is that of an incident response consultant) beyond just the analysis or investigative goals of the engagement. So, "it depends". ;-)
