Sunday, January 23, 2022

The Threat Landscape and Attribution

Over the years, changes in the threat landscape have made attribution more difficult. Attribution has always been challenging, but it has been, and can continue to be, eased through visibility. That is, if your view into an event or campaign is limited to resources such as malware samples pulled from public repositories, then attribution can be challenging. Even adding information regarding infrastructure extracted from the sample configs still provides a somewhat limited view. However, as visibility is expanded to include data from intrusions and incidents, attribution becomes clearer and more granular.

I ran across A Complex Threat Landscape Muddles Attribution recently and found it to be a fascinating, insightful read, one that anyone involved in threat intelligence, even peripherally, should take a good, thoughtful, intentional look at, marinate on, and then contribute to by sharing their own thoughts and insights. This isn't something that you should just read over once, check the box that you read it, and move on.

I thought I'd take a few minutes to share some of my own thoughts. I'm not what anyone would refer to as a "threat intelligence professional"; I don't have a military or LE intel background. However, what I am is a DFIR professional who has seen, and has demonstrated, the value of leveraging threat intel to further DFIR activities. I've also seen and demonstrated the value in leveraging DFIR activities (those that go beyond keyword and IOC searches) to further the threat intel picture, and subsequently, attribution. To that end, I spoke several times during 2021 on the topic of using DF analysis (specifically, analysis of the Windows Registry) findings to further the understanding of actor intent, "sophistication", and ultimately, attribution. One of those instances was a half-day training event for federal LE.

I should note that Joshua M. (of Proofpoint) is one of the folks quoted several times in the article. I mention this largely to illustrate his background (check out his LinkedIn profile) as a counterpoint to my own. Joshua is what one would likely refer to as a more traditional intelligence professional.

I hope that brief description level-sets the lens through which I'm viewing the content of the article. I get that we all have different perspectives, due in large part to each of us bringing our own experiences and knowledge to bear when reading and thinking about such things.

The first sentence of the article really sets the tone:

The sheer fluidity of the threat landscape is making the challenging process of attribution even more difficult for researchers.

IMHO, it's a good idea to unpack this a bit, not so much for the threat intel folks who've been doing this work for 20 yrs, but for everyone else. What this means is that "back in the day", there were some pretty clear delineations between activities. For example, "APT" (a.k.a., "nation-state" or "targeted") threat actors were distinctly different from 'e-crime' actors; different in how attacks and campaigns were organized and executed. Of course, if you look at the definitions used at the time, the Venn diagram included an overlap between "APT" and "e-crime" threat actors, in the form of credit card theft (PCI) attacks; these were considered 'e-crime' but were also clearly 'targeted'. However, the involvement of regulatory bodies (Visa, then later the PCI Council) served to move these attacks into an outlier status, the "everything else" column.

Over the years, we started to 'see' that the sharp lines of delineation between the activities of 'nation-state' and 'e-crime' threat actors were becoming blurred; at least, this is what was being stated at presentations and in open reporting. I'll admit, I was seeing it as well, but I was seeing it through a different lens; that is, while a lot of threat intel is based on analysis of malware samples, the team(s) I worked with were quite literally following the threat actors, through a combination of active monitoring (EDR telemetry) and DF analysis. The EDR technology we were using was capable of not only monitoring from the point of installation going forward, but also of taking a "historical" look at the system by extracting, parsing, and classifying data sources on the system that were populated prior to the installation of the EDR monitoring function. This provided us not only with unparalleled visibility into systems, but also allowed us to quickly scope the incident and quickly identify systems that required a closer look, a deeper DFIR examination. For example, on one engagement, there were over 150,000 total endpoints; we found that the threat actor had only "been on" a total of 8 systems, and from there, we only needed to do a deep forensic analysis of 4 of those systems.

Finally, the reference to "researchers" can be viewed as a way of separating those attempting to perform attribution. A "researcher" can be seen as different from a "practitioner" or "responder", in that a researcher may be interacting with a somewhat limited data source (e.g., malware samples downloaded from a public repository), where a "practitioner" or "responder" might be engaging with a much wider range of data, including (but not limited to) EDR telemetry, logs, endpoint data acquired for analysis, etc.

The second paragraph references a couple of things we've seen develop specifically with respect to ransomware over the past 2+ yrs. Beginning in Nov 2019, we started to see ransomware threat actors move to a "double extortion" model, where they announce that they've stolen sensitive data from the compromised environment, and then prove it by posting samples of the data on the web. Prior to that, most DFIR consulting firms published reports with statements akin to "...no evidence of data exfiltration was found...", without specifying what evidence was looked for, or examined. The point is, Nov 2019 was a point where we started to see real, significant changes in the "ransomware economy", specifically with respect to how the activities were being monetized. In the months since then, we've seen references to "triple extortion" (i.e., files encrypted, data stolen, and a threat to inform customers), and even more than a few references to "n-tuple extortion" (don't ask). There have even been references to some actor groups further monetizing their attacks through the stock market, shorting stocks of companies that they were planning to "hit", or already had. Depending upon how the attacks themselves are carried out, endpoints might include clear indications of the threat actor's activity, as well as their intent. Some systems will clearly contain data pertaining to the threat actor's initial access vector, and other systems will likely contain data pertaining to not just the tools that were used, but also how those tools were used (i.e., threat actors may not use all capabilities provided by a tool in their kit).

Also mentioned in the second paragraph, some ransomware groups have moved away from conducting attacks and deploying ransomware themselves, and shifted to a "ransomware-as-a-service" (RaaS) model where they provide the ransomware and infrastructure for a fee. Add to that access brokers who are selling access to compromised infrastructures, and the end result is that not only has there been an increase in monetization associated with such attacks, but there has also been a significant lowering of the barrier for entry into the "ransomware game".

For example, in May, 2021, Mandiant published a blog article regarding DarkSide RaaS, and specifically regarding how they'd identified five clusters of threat activity, providing details of three of these clusters or groups. Even a cursory overview of what is shared regarding these clusters illustrates how analysts need to go beyond things like malware samples and infrastructure to more accurately and completely perform attribution.

Anti-Forensics and "False Flags"
Over the years, we've seen threat actors take actions referred to as "anti-forensics", which are efforts to remove data or inhibit analysis and understanding of their techniques and actions. To some extent, this has worked. However, in some instances, what it has taken to achieve those goals has left behind toolmarks far more indelible, and far more directly applicable to attribution, than the data that was "removed".

There's also "false flags", intended to throw off and mislead the analyst. These can and have worked, but largely against analysts who are trying to move too quickly, or do not have sufficient visibility and are not aware of that fact.

From the article:

“Many attacker groups have been observed to fake one or more evidence types, like planting a string of a foreign language in their malware,” he said. “But there are no documented cases of attacker groups that took the effort to plant false flags consistently in all aspects of their attacks and in all evidence types.”

There's no elaboration on what constitutes "all evidence types", so it's unclear as to what was examined; presumably, malware samples were analyzed, comparing not just code but also embedded strings.

Something else to consider when researching malware samples is validation. Very often, "embedded strings" might consist of sequences of commands or batch files. For example, in the spring of 2020, one observed ransomware sample included 156 embedded commands for disabling services. Either way, the question becomes, do the commands actually work? 
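
As a quick, hedged sketch (the service name below is made up for illustration), one way to validate such an embedded command is to check whether the referenced service even exists, and is running, on the impacted system before crediting the command with any effect:

    rem Does the service referenced in the embedded command exist on this endpoint?
    sc query "SomeBackupSvc"

    rem If so, this is the general form of command embedded in the sample
    sc stop "SomeBackupSvc"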

Finally, the second paragraph ends with, "...analysis of the breadcrumbs left behind by threat actors - including those in the malware’s code and the infrastructure - that researchers look at during attack attribution."

I've long been of the opinion that, however unknowingly, threat actors often seem to target analysts as much as infrastructures; perhaps a more appropriate description would be that threat actors target analysts' visibility, or their access to those "breadcrumbs". After all, it's very often the breadcrumbs that provide us with that attribution.

As demonstrated via the Mandiant article, what's really missing from threat intel's ability to perform attribution is visibility. For example, from a great deal of public reporting, it's less that a threat actor "disabled Windows Defender" (or other security mechanisms) and more about how they did so, because the how and the timing can be leveraged to enable more granular attribution. While some threat actor activities appear to be consistent across monitored groups, at a high level, digging into the mechanics and timing of those activities can often reveal significant differences between the groups, or between individuals within a group.
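
As an illustration of why the "how" matters (a hedged sketch; neither command is tied to any specific group), consider two different ways an actor might arrive at "Defender was disabled". Which approach was used, how it was launched, and when, are the details that support more granular attribution:

    rem Via a direct Registry modification (may be ignored on newer builds with tamper protection enabled)
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v DisableAntiSpyware /t REG_DWORD /d 1 /f

    rem Via PowerShell launched from a command line
    powershell -Command "Set-MpPreference -DisableRealtimeMonitoring $true"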

Wednesday, January 19, 2022

How Do You Know What "Bad" Looks Like?

From the time I started in DFIR, one question was always at the forefront of incident responders' minds...how do you know what "bad" looks like? When I was heading on-site during those early engagements, that question was foremost on my mind, and very often, the reason I couldn't sleep on the plane, even on the long, cross-country flights. As I gained experience, I started to have a sense of what "bad" might or could look like, and that question started coming from the folks around me (IT staff, etc.) while I was on-site.

How do you know what "bad" looks like?

The most obvious answer to the question is, clearly, "anything that's not 'good'...". However, that doesn't really answer the question, does it? Back in the late '90s, I did a vulnerability assessment for an organization, and at one of the offices I visited there were almost two dozen systems with the AutoAdminLogon value in the Registry. This was anomalous to the organization as a whole, an outlier within the enterprise, even though only a single system had the value set to "1" (all others were set to "0"). The commercial scanning product we were using simply reported any system that had the value, by name, as "bad", regardless of what the value was set to. However, for that office, it was "normal"; the IT staff were fully aware of the value, and why it was there.
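
For reference, the value in question lives in the Software hive, under the Winlogon key; checking it on a live system is a one-liner (a sketch, using the standard location rather than anything specific to that engagement):

    rem AutoAdminLogon data of "1" enables automatic logon at boot
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon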

From this example, we can see that the universal answer applies here; "it depends". It depends upon the infrastructure in question, the IT staff engaging in and managing that infrastructure, the "threat actor" (if there is one), etc. What's "bad" in an organization depends upon the organization. 

A responder from my team went on-site and acquired the systems of one employee; the goals of the investigation were to determine if there was anything "bad" on the systems, and the responder took that request, along with the images, and headed back to the lab. They began digging into the case, and found all sorts of "hacking" tools, scripts, instructions regarding how to use some of the tools, etc., and reported on all of what they found. The customer responded that none of this was "bad"; it was expected, as the employee's job was to test the security of their external systems.

While reviewing EDR telemetry over a decade and a half later, during an incident response engagement, I saw where an Excel spreadsheet was being opened by several employees, and macros being run. I thought I might have the root cause, and we alerted the customer to the apparently nefarious activity...only to be told that, no, this was "normal" as well, part of a legitimate, standard business process and expected behavior. Talk about "crying wolf"...what happens the next time something like that is seen in telemetry, and reported? It's likely that someone in the chain, either on the vendor/discovery side or on the customer's side, is going to glance at the report, not look too deeply, and simply write it off as normal behavior.

The days of opening Task Manager during an engagement and seeing more svchost.exe processes than what is "usual" or "normal" are a distant memory, long faded in the rearview mirror. Threat actors, including insider threats, have grown more knowledgeable and cautious over the years. For more automated approaches to gaining access to systems, we're seeing scripts that include ping commands intended solely to delay execution, so that the impact of the commands is not clustered tightly together in a timeline.
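
A minimal sketch of that delay technique, as it commonly appears in batch files (the count is illustrative):

    rem Pinging the loopback address burns roughly one second per echo request,
    rem acting as a crude "sleep" so the next command lands several seconds later
    ping 127.0.0.1 -n 10 > nul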

There are things...artifacts...that we universally know, or believe we know, to be bad. One of the best examples is the Registry value that tells the Windows OS to maintain credentials in memory in plain text; I cannot think of, nor have I ever seen, a legitimate business purpose for having this value (which doesn't exist by default) set to "1". I have, however, seen time and time again where threat actors have created and set this value several days before dumping the credentials. To me, this is something that I would consider to be a universally "bad" indicator. There are others, and this is just one example.
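
The value most commonly associated with this behavior is UseLogonCredential, under the WDigest key; as a sketch, the command used to set it (and therefore something to look for in EDR telemetry or in the System hive) takes this form:

    rem Setting this value to 1 causes the WDigest provider to retain plain text
    rem credentials in memory following interactive logons
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" /v UseLogonCredential /t REG_DWORD /d 1 /f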

There are things that we believe are highly likely to be bad, such as the "whoami" command being run in an environment. I say, "highly likely", as while I have most often observed this command in EDR telemetry and associated with threat actors, I have also seen those very rare instances where admins had accounts with different privileges, and might have lost track of their current session; typing the "whoami" command allows them to orient themselves (just as it does the threat actor), or to troubleshoot their current permissions.

Finally, we need to be aware that something that might be "bad" might, in fact, be bad for the majority of an environment, but not for one outlier business unit. Back in 2000, I found a persistent route in the Registry on a system; further investigation revealed an FTP script that looked suspicious, and appeared to rely on the persistent route to work properly...I suspected that I'd found a data exfiltration mechanism. It turned out that this was a normal, legitimate business function for that department, something that was known, albeit not documented.

When determining "bad", there's a great deal that comes into play; business processes and functions, acceptable use policies, etc. There's a great deal to consider (particularly if your role is that of an incident response consultant), beyond just the analysis or investigative goals of the engagement. So, "it depends". ;-)

Monday, January 17, 2022

Registry Analysis - The "Why"

Why is Registry analysis important?
The Windows Registry, in part, controls a good bit of the functionality of a Windows system. As such, Registry analysis can help you understand why you're seeing something, or why you're not seeing something, as the case may be.

For example, Registry "settings" (i.e., keys, values, or combinations) can be/have been used to disable Windows Event Logs, enable or disable auditing (the content that goes into the Windows Event Log), disable access to security tools, enable or disable other functionality on Windows systems, etc. The Registry can be used to enable or disable application prefetching, which produces artifacts very commonly used by forensic analysts and incident responders. 

Registry values and data can provide insight into what an analyst is seeing, or why they're not seeing something they expect to see. For example, adding a specific Registry key to the System hive effectively disables logging to the Security Event Log; why clear the log when you can simply stop the event records from being written? Not seeing application prefetching files being generated on a Windows 10 or 11 system? Well, there's a Registry value for that, as well! 
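
For illustration, one well-documented example of each (a sketch, shown against a live system): the MiniNt key has been publicly documented as stopping records from being written to the Security Event Log, and application prefetching is controlled via the EnablePrefetcher value:

    rem Adding this key causes the system to behave as if it were WinPE;
    rem records stop being written to the Security Event Log
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\MiniNt" /f

    rem EnablePrefetcher data of 0 disables application prefetching (no new .pf files)
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnablePrefetcher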

Most analysts are aware that, particularly with respect to the file system, something "deleted" isn't always "gone". This is true for files, but even more so for records (elements within a file), and it's also true for the Registry. However, the Registry can be used to disable functionality, preventing activity from being recorded in the first place. As such, the records are never actually generated, and therefore, cannot be recovered. This is an important distinction to make, as over the past decade and a half, we've seen threat actors actively modify systems within an infrastructure to suit their own needs. For example, disabling or removing security tools, modifying operating system capabilities, and enabling or disabling functionality; whether made manually or via a batch file (or some other means), many of these modifications manifest themselves as changes in the Registry.

Case in point; in the spring of 2019, I was working with incident responders to track activities associated with the Ryuk ransomware group. Looking back to Sept, 2018, we'd seen where the actors had run mimikatz, and then within seconds, moved laterally. Closer examination showed that the actor had modified the Registry so that the operating system would maintain credentials in memory in plain text 6 to 9 days prior to running mimikatz. That way, they didn't have to copy the hashes so that they could crack them; the credentials were already in plain text, so all they had to do was choose the account with the highest level of privileges and move to subsequent systems. This is just one example of how threat actors have modified systems within a compromised infrastructure to meet their needs.

User Activity
In addition to "current state" configuration settings, the Registry "tracks" or maintains a history of activity on the system; updates, upgrades, devices connected, etc. This is also true for user actions; files opened or interacted with, applications launched, WiFi networks connected to, etc.

There used to be a few law enforcement officers who'd contact me with questions; "used to" because they've since retired. One in particular had worked out a process with me, where she'd share an NTUSER.DAT from an active case, and I'd process the hive and give her a comprehensive, detailed list of files interacted with by the user. I'd include the applications used and, if applicable, the dates and times of the interactions. I'd provide her with a complete description of how this information was found, so that she could replicate my findings and testify to them (if necessary). She'd then tie the file paths and names to specific files and content, take that to the pre-trial conference, and very often, she'd end up with a plea agreement.
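
To give a sense of where some of that information lives, here's a minimal sketch of loading an acquired NTUSER.DAT on an analysis workstation and looking at one of the sources; the file path and mount point name are arbitrary, and the MRU data is binary, so it still needs to be run through a parser (RegRipper, for example) to be readable:

    rem Load the acquired user hive under a temporary mount point (requires admin)
    reg load HKU\CaseHive C:\cases\NTUSER.DAT

    rem RecentDocs tracks files the user opened via the shell; the values are binary MRU data
    reg query "HKU\CaseHive\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs" /s

    rem Unload the hive when finished
    reg unload HKU\CaseHive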

In 2005, Cory Altheide and I published the first paper to address tracking USB devices across Windows systems. At the time, we were working with Windows XP, and since then, the Windows operating system has grown considerably in the records it generates when USB devices are connected, as well as in how it differentiates between various protocols. For example, connecting a smartphone or digital camera to a Windows computer via a USB cable appears in a different manner than connecting a USB thumb drive; whether a smartphone or a digital camera was connected to a computer can mean the difference between possession and production of illicit images.
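
As a sketch, on a live system, two of the locations that record those connections (against an acquired SYSTEM hive, you'd look under the appropriate ControlSet rather than CurrentControlSet):

    rem USB mass storage devices (thumb drives, external disks)
    reg query "HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR" /s

    rem All USB-connected devices, including those connecting via MTP/PTP (smartphones, cameras)
    reg query "HKLM\SYSTEM\CurrentControlSet\Enum\USB" /s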

Of course, a discussion of devices connected via USB doesn't address devices connected via Bluetooth, or the WiFi networks to which the computer has been connected.

"Proof of Life", Intent, Sophistication
Information derived from analysis of the Windows Registry can be leveraged to demonstrate "liveness", or to put a suspect "behind the keyboard". In addition, the totality of information derived from Registry analysis can be used to further the understanding of a user or threat actor's intent (what they're after) and their level of sophistication. For example, Registry analysis can show us that the threat actor added an account to a system as a means of persistence, and then added the account to different groups so that the account could be used to access the system remotely via RDP. From there, we can see if the icon for the account was hidden from the Welcome Screen, as well as if additional means of persistence were added.
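
A hedged sketch of what that activity might look like from the threat actor's side (the account name and password are made up); each of these commands leaves traces in the SAM and Software hives that Registry analysis can surface:

    rem Create a local account and add it to groups that allow remote access via RDP
    net user svc_backup P@ssw0rd123 /add
    net localgroup Administrators svc_backup /add
    net localgroup "Remote Desktop Users" svc_backup /add

    rem Hide the new account from the Welcome Screen
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v svc_backup /t REG_DWORD /d 0 /f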

Also, the timing between the various modifications we observe, in addition to the modifications themselves, can be leveraged to determine attribution. We have observed, for example, some threat actors making use of batch files or VBS scripts to make Registry modifications, either directly via reg.exe, or indirectly via other native tools such as sc.exe or schtasks.exe. The timing of the modifications, in relation to other modifications and system impacts, can be used to establish attribution; was this truly the threat group we suspect it to be, or someone else? Or, do the observables indicate an evolution of the threat group's capabilities?
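
For illustration only (the task name and path are invented), a fragment of the sort of batch file we've seen, mixing a direct Registry modification via reg.exe with indirect modifications via other native tools, both of which write their configuration to the Registry:

    rem Direct modification: enable Remote Desktop connections
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

    rem Indirect modifications: service configuration and scheduled task persistence
    sc config RemoteRegistry start= auto
    schtasks /create /tn "Updater" /tr "C:\ProgramData\update.bat" /sc onstart /ru SYSTEM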

We've also seen how RaaS providers embed dozens of preemptive commands in the executables they provide; in the spring of 2020, one RaaS sample contained 156 embedded commands to disable Windows services, via sc.exe and other means. Looking at the timing of the modifications that actually took effect (not all services may be active on the impacted system) alongside files being encrypted provides insight into attribution.
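
Those embedded commands generally take a form like the following (the service name is illustrative); whether each one "takes" depends on whether the service exists and is running on that particular endpoint, which is part of why the observable modifications, and their timing, vary from system to system:

    rem Stop the service, then set it to disabled so it doesn't restart at boot;
    rem both actions are reflected in the service's key within the SYSTEM hive
    sc stop "SomeBackupSvc"
    sc config "SomeBackupSvc" start= disabled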

Eco-System
A good bit of what we can find in the Registry isn't there as a result of a threat actor or malware directly interacting with the Registry; rather, there are more than a few keys and values that are "indirect artifacts", in that they're generated as a result of the actor's or malware's interaction with the "eco-system" (platform, OS, applications, etc.). A good example of this is the AppCompatCache, or "ShimCache"; the entries in the value data are not created as a result of malware directly writing to the value, but rather as a result of the file being on the system and subject to an application compatibility check.
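
For reference, the value itself lives in the SYSTEM hive; the data is a binary blob whose format varies by Windows version, so it needs a dedicated parser (a RegRipper plugin, for example) rather than a simple query:

    rem The ShimCache data lives here; on a live system, it's only serialized to the
    rem Registry at shutdown, and the binary format varies by Windows version
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\AppCompatCache" /v AppCompatCache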

From a configuration perspective, Registry (specific key, value, and data) settings can result in the OS being configured to generate additional "evidence" or tracks as a result of interaction with the eco-system. For example, there are settings to enable additional LSA protections; essentially adding protection to the LSASS.exe process. Setting the capability and rebooting generates an event record (in the Windows Event Log), and then interacting with the process in an incorrect manner will also generate Windows Event Log records (in the appropriate log).
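
A sketch of enabling that protection (RunAsPPL is the documented value for additional LSA protection; a reboot is required for it to take effect):

    rem Run lsass.exe as a protected process (PPL) after the next reboot; the change,
    rem and subsequent improper access attempts, generate Windows Event Log records
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RunAsPPL /t REG_DWORD /d 1 /f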

Registry Analysis
Registry analysis is not a stagnant thing. That is to say, you don't reach a point where you're "done" and don't have to go any further; you don't reach a point in, say, 2018 where you say, "I'm done", and stop evolving your analysis. This is due to the fact that, even in a corporate environment, it's extremely difficult to find two systems that are the same; that is, system "loads" or configurations (base operating system, applications, configurations, etc.) are often very different. As a DFIR consultant, I would encounter different base loads and configurations with every client, and even within a single client, systems had different applications installed (in part, due to mergers and acquisitions), were configured differently, etc. Even when there was a "common load", where every system was supposed to have some common configuration, they often didn't; the version of an application would be a rev or two behind on some systems, hadn't been updated in some time, etc.

Also, analysis goals very often differ; what you're looking to prove or disprove will vary depending upon your investigative goals. Are you dealing with a malware infection, a data breach, an insider threat, corporate espionage, access to illicit images or materials, or something else entirely?