
Sunday, February 26, 2023

Devices

This interview regarding one of the victims of the University of Idaho killings having a Bluetooth speaker in her room brings up a very important aspect of digital forensic analysis: technology that we know little about is very pervasive in our lives. While the interview centers on the alleged killer's smartphone, the same concept applies to Windows systems, and specifically to mobile systems such as laptops and tablets. Very often, there are remnants or artifacts left over as a result of prior activity (user interaction, connected devices, etc.) that we may not be aware of, and in more than a few instances, these artifacts may persist well beyond the deletion of applications.

Something I've mentioned previously in this blog is that where you look for indications of Bluetooth or other connections may depend upon the drivers and/or applications installed. Some laptops or tablets, for example, may come with Bluetooth chipsets, drivers, and their own control applications, while other systems may require an external adapter. Or...and this is a possibility...the internal chipset may have been disabled in favor of an external adapter, such as a USB-connected Bluetooth adapter. As such, we can cover a means for extracting the necessary identifying information, just as Brian did here in his blog in 2014, but that specific information may not apply to other systems. By way of example, participants in this analysis test would have found information about connected Bluetooth devices in an entirely different location. The publicly available RegRipper v3.0 includes three plugins for extracting information about Bluetooth-connected devices from the Registry, one of which is specific to certain Broadcom drivers.
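
As a quick illustration of the "where you look depends on the driver" point, here's a minimal sketch (assuming the python-registry module, an exported SYSTEM hive, and ControlSet001 as the active control set) that walks the key used by the Microsoft Bluetooth stack; systems using vendor-specific drivers, such as the Broadcom example above, may record this information under entirely different keys, so treat the key path as something to validate against your own test data.

# Minimal sketch: enumerate Bluetooth devices recorded by the Microsoft
# Bluetooth stack. Assumes the python-registry module, an exported SYSTEM hive,
# and ControlSet001 as the active control set; vendor-specific drivers may
# record paired devices under entirely different keys.
from Registry import Registry

reg = Registry.Registry("SYSTEM")
devices = reg.open("ControlSet001\\Services\\BTHPORT\\Parameters\\Devices")
for dev in devices.subkeys():
    print("Device MAC : %s" % dev.name())
    print("  LastWrite: %s" % dev.timestamp())
    for val in dev.values():
        if val.name() == "Name":
            # The "Name" value, where present, holds the device's friendly name
            print("  Name     : %r" % val.value())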

WiFi
Okay, not what we'd specifically consider "devices", but WiFi connections have long been valuable in determining the location of a system at a point in time, often referred to as geolocation. Windows systems maintain a good deal of information about the WiFi access points they've connected to, much like the smartphone discussed in the Bluetooth interview above. We "see" this when we take a system (a Windows laptop, or a smartphone) away from a WiFi access point for a period of time, and then return; once we're back within range, if the system is configured to do so, it will automatically reconnect to the access point.
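
For Windows endpoints, much of this information lives in the NetworkList key of the Software hive (the same data RegRipper's networklist plugin parses); the following is a minimal sketch, assuming the python-registry module and an exported SOFTWARE hive, of pulling the profile names and last-connected times. The last-connected timestamp is stored as a binary SYSTEMTIME structure.

# Minimal sketch: list wireless network profiles and last-connected times from
# the NetworkList key. Assumes the python-registry module and an exported
# SOFTWARE hive; DateLastConnected is a binary SYSTEMTIME structure.
import struct
from Registry import Registry

def systemtime(blob):
    # SYSTEMTIME: year, month, day-of-week, day, hour, minute, second, millisecond
    y, m, _, d, hh, mm, ss, _ = struct.unpack("<8H", blob)
    return "%04d-%02d-%02d %02d:%02d:%02d" % (y, m, d, hh, mm, ss)

reg = Registry.Registry("SOFTWARE")
profiles = reg.open("Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles")
for prof in profiles.subkeys():
    vals = {v.name(): v.value() for v in prof.values()}
    if "DateLastConnected" in vals:
        print("%s  last connected: %s" % (vals.get("ProfileName"),
                                          systemtime(vals["DateLastConnected"])))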

While I've done research into discovering and extracting information from the endpoint, others have used this information to determine the location of systems. I've talked to analysts who've been able to demonstrate that a former employee for their company met with a competitor prior to leaving the company and joining the competitor's team. In a few instances, those orgs have had DLP software installed on the endpoint, and were able to show that during that time, files were copied to USB devices, or sent off of the system via a personal email account.

USB Devices
Speaking of USB devices...

USB devices connected to Windows systems have long been an interest within the digital forensics community; in 2005, Cory Altheide and I co-authored the first peer-reviewed, published paper on the topic. Since then, there has been extensive writing on this topic. For example, Nicole Ibrahim, formerly of G-C Partners, has written about USB-connected devices and the different artifacts left by their use, based on the device type (thumb drive, external hard drive, smartphone) and the protocols used. I've even written several blog posts in the past year, covering artifacts that remain as a result not of USB devices being connected to a Windows system, but of changes in Windows itself (here, and here). Over time, as Windows evolves, the artifacts left behind by different activities can change; we've even seen this between Windows 10 builds. As a result, we need to keep looking at the same things, the same activities, and ensure that our analysis process is keeping up, as well.

To that end, Kathryn Hedley recently shared a very good article on her site, khyrenz.com. She's also shared other great content, such as what USB connections look like with no user logged into the system. While Kathryn's writing specifically covers USB devices, she does address the issue of validation by providing insight into additional data sources.
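
As a simple example of one such data source, the USBSTOR key in the System hive records a device class ID, unique instance ID, and friendly name for USB mass storage devices that have been connected; the sketch below is a minimal example, assuming the python-registry module, an exported SYSTEM hive, and ControlSet001 as the active control set. Keep in mind that this is just one artifact, to be validated against the additional data sources Nicole and Kathryn describe.

# Minimal sketch: list USB mass storage devices recorded under USBSTOR.
# Assumes the python-registry module, an exported SYSTEM hive, and ControlSet001
# as the active control set; this is one artifact among several, not "the" answer.
from Registry import Registry

reg = Registry.Registry("SYSTEM")
usbstor = reg.open("ControlSet001\\Enum\\USBSTOR")
for dev_class in usbstor.subkeys():           # e.g., Disk&Ven_...&Prod_...
    for inst in dev_class.subkeys():          # unique instance ID / serial number
        friendly = ""
        for val in inst.values():
            if val.name() == "FriendlyName":
                friendly = val.value()
        # The key's LastWrite time is a data point, not proof of "last used";
        # validate against setupapi.dev.log, Event Logs, and other sources.
        print(dev_class.name(), inst.name(), friendly, inst.timestamp())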

Saturday, February 25, 2023

Why Write?

I shared yet another post on writing recently; I say "yet another" because I've published blog posts on the topic of "writing" several times. But something I haven't really discussed is why we should write, or what we should write about.

In his book, Call Sign Chaos, Jim Mattis said, "If you haven't read hundreds of books, you are functionally illiterate, and you will be incompetent, because your personal experiences alone aren't broad enough to sustain you." While this is true for the warfighter, it is equally (and profoundly) true for other professions, and there's something else to the quote that's not as obvious: it's predicated on other professionals writing. In his book, Mattis described his reading as he moved into a theatre of operations, going back through history to learn what challenges previous commanders had faced, what they'd attempted in order to overcome those challenges, and what they'd learned.

While the focus of his book was on reading and professional development/preparation, the underlying "truth" is that someone...a previous commander, a historian, an analyst, someone...needs to write. This is what we need more of in cybersecurity...yes, there are books available, and lists available online, but what's missing is the real value, going beyond simple lists and instructions, to the how and the why, and perhaps more importantly, to what was learned.

So, if you are interested in developing content, what are some things you can write about? Here are some ideas...

Book Reviews
With all of the books out there covering topics in DFIR, one of the things we rarely see is book reviews.

A book review is not a listing of the chapters and what each chapter contains.

What I mean by a book review is your assessment of the book: was it well written, easy to follow? Was there something that could have made it better, perhaps more valuable, and if so, what was it? What impact did the contents have on your daily work? Is there something you'd like to see...perhaps a deeper explanation, more screen captures, or maybe exercises at the end of sections or chapters?

And, if you found something that could be improved, make clear, explicit recommendations. I've seen folks ask for "more screen captures" without saying of what, nor for what reason (i.e., what the goal or impact of doing so would be).

Conference Talks
Many times, particularly during 'conference season', we'll see messages on social media along the lines of "so-and-so is about to go on stage...", or we'll see a picture of someone on a stage, with the message, "so-and-so talking about this-and-that...", but what we don't see is commentary about what was said. So we know a person is going to talk about something, or did talk about something, but we know little beyond that, like how did what they say impact the listener/attendee? This is a great way to develop and share content, and is similar to book reviews...talk about how what you heard (or read) impacted you, or impacted your approach to analysis.

General Engagement 
Speaking of social media, this is a great way to get started with the habit of writing...articulate your thoughts regarding something you see, rather than just clicking "Like", or some other button offered by the platform. 

Monday, February 20, 2023

WEVTX Event IDs

Now and again, we see online content that moves the community forward, a step or several steps. One such article appeared on Medium recently, titled Forensic Traces of Exploiting NTDS. This article begins developing artifact constellations, and walks through forensic analysis of different means of credential theft on an Active Directory server.

We need to see more of these sorts of "how to investigate..." articles that go beyond just saying, "...look at the <data source>...". Articles like this can be very useful because they help other analysts understand how to go about investigating these and similar issues.

The sole shortcoming of this article is that the research was clearly conducted by someone used to looking at forensic artifacts in a list; each artifact is presented individually, isolated from others, rather than as part of an artifact constellation. Analysts who come from a background such as this tend to approach analysis in this way, because this is how they were taught. 

Further, about halfway through the article we see a reference to "Event ID 400"; the subsequent images illustrate the event source as being "Kernel-PNP", but this isn't specified in the text. If you Google for "event ID 400", you find event sources such as PowerShell, Microsoft-Windows-TerminalServices-Gateway, Performance Diagnostics, Veritas Enterprise Vault, and that's just on the first page.
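
This pairing of provider (source) and event ID is easy to enforce in your own tooling; the following minimal sketch, assuming the python-evtx module, a System.evtx file, and the "Microsoft-Windows-Kernel-PnP" provider suggested by the article's images, filters on both fields rather than on the event ID alone.

# Minimal sketch: event IDs are only meaningful when paired with the provider.
# Assumes the python-evtx module; the log file, provider name, and event ID are
# taken from the article's context and should be swapped for your own case.
import Evtx.Evtx as evtx

TARGET_PROVIDER = "Microsoft-Windows-Kernel-PnP"
TARGET_ID = 400

with evtx.Evtx("System.evtx") as log:
    for record in log.records():
        node = record.lxml()
        provider = node.xpath("*[local-name()='System']/*[local-name()='Provider']/@Name")[0]
        event_id = int(node.xpath("*[local-name()='System']/*[local-name()='EventID']/text()")[0])
        if provider == TARGET_PROVIDER and event_id == TARGET_ID:
            print(record.xml())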

About a third of the way down the article (sorry, the images aren't numbered for reference) there's an image with the caption "Event ID 4688". The important thing for readers to understand with this image is that these events do not appear in the Security Event Log by default. For these events to appear, successful Process Tracking needs to be enabled, and there's an additional step, a Registry modification, that needs to be made in order for full command lines to appear in the event records. This is important for analysts to understand, so that they do not expect the records to be present by default. Also, you can parse the Security Registry hive using the RegRipper auditpol.pl plugin to determine the audit configuration for the system, validating what you should expect to see in the Security Event Log.
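
The Registry modification in question is the "include command line in process creation events" policy setting; alongside the auditpol.pl check, you can look at the value directly. Here's a minimal sketch, assuming the python-registry module and an exported SOFTWARE hive; if the key or value is absent, expect 4688 records (where they exist at all) without full command lines.

# Minimal sketch: check whether full command lines are being recorded in
# event ID 4688 records. Assumes the python-registry module and an exported
# SOFTWARE hive; if the key or value is absent, the policy has not been set.
from Registry import Registry

reg = Registry.Registry("SOFTWARE")
try:
    audit = reg.open("Microsoft\\Windows\\CurrentVersion\\Policies\\System\\Audit")
    for val in audit.values():
        if val.name() == "ProcessCreationIncludeCmdLine_Enabled":
            print("Command line logging enabled: %s" % bool(val.value()))
            break
    else:
        print("Value not set; expect 4688 records without full command lines")
except Registry.RegistryKeyNotFoundException:
    print("Audit key not found; policy not configured")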

When examining the Windows Event Log as a data source during an investigation, what's actually available in the logs depends upon the version of Windows, the installed applications, the configuration of the Security Event Log, etc. Articles such as this one are profoundly useful, but don't assume that you're going to see the same log entries on the systems you engage with and examine.

Monday, February 13, 2023

Training and CTFs

The military has a couple of adages...one, "you fight like you train", and another being, "the more you sweat in peace, the less you bleed in war." The idea behind these adages is that progressive, realistic training prepares you for the job at hand, which is often one performed under "other than optimal" conditions. You start by learning in the classroom, then in the field, and then under austere conditions, so that when you do have to perform the function(s) or task(s) under similar conditions, you're prepared and it's not a surprise. This is also true of law enforcement, as well as other roles and functions. Given the pervasiveness of this style of training and familiarization, I would think it's safe to say that it's a highly successful approach.

The way DFIR CTFs, while fun, are being constructed and presented does those in the field a disservice, as they do not encourage analysts to train the way they should fight. In fact, they tend to cement, and even encourage, bad habits.

Let me say right now that I understand the drive behind CTF challenges, particularly those in the DFIR field. I understand the desire to make something available for others to use to practice, and perhaps rate themselves against, and I do appreciate the work that goes into such things. Honestly, I do, because I know that it isn't easy. 

Let me also say that I understand why CTFs are provided in this manner; it's because this is how many analysts are "taught", and it's because this is how other CTFs are presented. I also understand that presenting challenges in this manner provides for an objective measure against which to score individual participants; the time it takes to complete the challenge, the time between answering subsequent questions, and the number of correct responses are all objective measures that can be handled by a computer program, and really provide little wiggle room. So, we have analysts who "come up" in the industry, taking courses and participating in CTFs that are all structured in a similar manner, and they go on to create their own CTFs, based on that same structure.

However, the issue remains...the way DFIR CTFs are presented, they encourage something much less than what we should be doing, IRL. We continue to teach analysts that reviewing individual artifacts in isolation is "sufficient", and there's no direction or emphasis on concepts such as validation, toolmarks, or artifact constellations. In addition, there's no development of incident intelligence to be shared with others, both in the DFIR field, and adjacent to it (SOC, detection engineering, CTI, etc.).

Hassan recently posted regarding CTFs and "deliberate practice"; while I agree with his thoughts in principle, CTFs tend to fall short. Yes, CTFs are great because they offer the opportunity to practice, but they fall short in a couple of areas. One in particular is that they really aren't "deliberate practice"; perhaps a better way of saying it is that they're "deliberate practice" in the wrong areas, because we're telling those who participate in these challenges that answering obscure questions, in a manner that isolates that information from the other information needed to "solve" the case, is the standard to strive for, and that should never be the case.

Another way that these DFIR CTFs fall short is that they tend to perpetuate the belief that examiners should look at artifacts one at a time, in isolation from other artifacts (particularly others in the constellation). Given that Windows is an operating system with a lot going on, our old way of viewing artifacts...the way we've always done it...no longer serves us well. It's like trying to watch a rock concert in a stadium by looking through a keyhole. We can no longer open one Windows Event Log file in a GUI viewer, search for some things we think might be relevant, close that log file, open another one, and repeat. Regardless of how comfortable we are with this approach, it is terribly insufficient and leaves a great many gaps and unanswered questions in even what appears to be the most rudimentary case.

Let's take a look at an example: this CyberDefenders challenge, as Hassan mentioned CyberDefenders in a comment. The first thing we see is that we have to sign up, and then sign in, to work the challenge, and that none of the analysts' case notes describing how they solved the CTF are available. The same has been true of other CTFs, including (but not limited to) the 2018 DefCon DFIR CTF. Keeping case notes, and sharing them, is something that analysts should be deliberately practicing.

Second, we see that there are 32 questions to be answered in the CTF, the first of which is, "what is the OS product name?" We already know from one of the tags for the CTF that the image is Windows, so how important is the "OS product name"? This information does not appear to be significant to any of the follow-on questions, and seems to be solely for the purpose of establishing some sort of objective measure. Further, in over two decades of DFIR work, addressing a wide range of response scenarios (malware, ransomware, PCI, APT, etc.), I don't think I've ever had a customer ask more than 4 or 5 questions...max. In the early days, there was most often just one question customers were interested in:

Is there malware on this system?

As time progressed, many customers wanted to know:

How'd they get in? 
Who are they?
Are they still in my network?
What did they take?

Most often, whether engaging in PCI forensic exams, or in "APT" or targeted threat response, those four questions, or some variation thereof, were at the forefront of customers' minds. In over two decades of DFIR work, ranging from individual systems up to the enterprise, I never had a case where a customer asked 32 questions (I've seen CTFs with 51 questions), and I've never had a customer (or a co-worker/teammate) ask me for the $LogFile sequence number of an Excel spreadsheet. In fact, I can't remember a single case (none stands out in my mind) where the $LogFile sequence number of any file was a component or building block of an overall investigation.

Now, I'm not saying this isn't true for others...honestly, I don't know, as so few in our field actually share what they do. But from my experience, in working my own cases, and working cases with others, none of the questions asked in the CTF were pivotal to the case.

So, What's The Answer?
The answer is that forensic challenges need to be adapted, worked, and "graded" differently. CTFs should be more "deliberate practice", aligned to how DFIR work should be done, perpetuating and reinforcing good habits. Analysts need to keep and share case notes, being transparent about their analytic goals and thought processes, because this is how we learn overall. And I don't just mean that this is how that analyst, the one who shares these things, learns; no, I mean that this is how we all learn. In his book, Call Sign Chaos, retired Marine General Jim Mattis said that our own "personal experiences alone are not broad enough to sustain us"; while the quote applies to a warfighter reading, this portion of it applies much more broadly: if we're stuck in our own little bubble, not sharing what we've done and what we know with others, then we're not improving, adapting, and growing in our profession.

If we're looking to provide others with "deliberate practice", then we need to change the way we're providing that opportunity.

Additional Resources
John Asmussen - Case_notes.py
My 2018 DefCon DFIR CTF write-ups (part 1, part 2)

Monday, February 06, 2023

Why Lists?

So much of what we see in cybersecurity, in SOC, DFIR, red teaming/ethical hacking/pen testing, seems to be predicated on lists. Lists of tools, lists of books, lists of sites with courses, lists of free courses, etc. CD-based distros are the same way, regardless of whether they're meant for red- or blue-team efforts; the driving factor behind them is often the list of tools embedded within the distribution. For example, the Kali Linux site says that it has "All the tools you need". If you go to the SANS SIFT Workstation site, you'll see the description that includes, "...a collection of free and open-source incident response and forensic tools." Here's a Github site that lists "blue team tools"...but that's it, just a list.

Okay, so what's up with lists, you ask? What's the "so, what?" 

Lists are great...they often show us new tools that we hadn't seen or heard about, possibly tools that might be more effective or efficient for us and our workflows. Maybe a data source has been updated and there's a tool that addresses the new format, or maybe you've run across a case that includes the use of a different web browser, and there's a tool that parses its history for you. So, having lists is good, and familiar...because that's the way we've always done it, right? A lot of folks developing these lists came into the industry themselves at one point, looked around, and saw others posting lists. As such, the general consensus seems to be, "share lists"...either share a list you found, or share a list you've added to.

Lists, particularly checklists, can be useful. They can ensure that we don't forget something that's part of a necessary process, and if we intentionally and purposely manage and maintain that checklist, it can be our documentation; rather than writing out each step in our checklist as part of our case notes/documentation, we can just say, "...followed/completed the checklist version xx.xx, as of Y date...", noting any discrepancies or places we diverged. The value of a checklist depends upon how it's used...if it's downloaded and used because it's "cool", and it's not managed and never updated, then it's pretty useless.

Are lists enough?

I recently ran across a specific kind of list...the "cheat sheet". This specific cheat sheet was a list of Windows Event Log record event IDs. It was different from some other similar cheat sheets I'd seen because it was broken down by Windows Event Log file, with the "event IDs of interest" listed beneath each heading. However, it was still just a list, with the event IDs listed along with a brief description of what they meant.

However, even though this cheat sheet was "different", it was still just a list and it still wasn't sufficient for analysis today. 

Why is that?

Because a simple list doesn't give you the how, nor does it give you the why. Great, so I found a record with that event ID, and someone's list said it was "important", but that list doesn't tell me how this event ID is important to, or used in, my investigation, nor how I can leverage that event ID to answer my investigative questions. The cheat sheet didn't tell me anything about how that specific event ID fits into the context of an investigation, or how it relates to the other data sources around it.

We have our lists, we have our cheat sheets, and now it's time to move beyond these and start developing the how and why; how to use the entry in an investigation, and why it's important. We need to focus less on simple lists and more on developing investigative goals and artifact constellations, so that we can understand what that entry means within the overall context of our investigation, and what it means when the entry is absent. 

We need to share more about how the various items on our lists are leveraged to reach or further our investigative goals. Instead of a list of tools to use, talk about how you've used one of those tools as part of your investigative process, to achieve your investigative goals.

Having lists or cheat sheets like those we've been seeing perpetuates the belief that it's sufficient to examine data sources in isolation from each other, and that's one of the biggest failings of these lists. As a community, and as an industry, we need to move beyond these ideas of isolation and sufficiency; while they seem to bring about an immediate answer or immediate findings, the simple fact is that neither serves us well when it comes to finding accurate and complete answers.

Sunday, February 05, 2023

Validating Tools

Many times, in the course of our work as analysts (SOC, DFIR, etc.), we run tools...and that's it. But do we often stop to think about why we're running that tool, as opposed to some other tool? Is it because that's the tool everyone we know uses, and we just never thought to ask about another? Not so much the how, but do we really think about the why?

The big question, however, is...do we validate our tools? Do we verify that the tools are doing what they are supposed to, what they should be doing, or do we simply accept the output of the tool without question or critical thought? Do we validate our tools against our investigative goals?

Back when Chris Pogue and I were working PCI cases as part of the IBM ISS X-Force ERS team, we ran across an instance where we really had to dig in and verify our toolset. Because we were a larger team, with varying skill levels, we developed a process for all of the required searches, scans and checks (search for credit card numbers, scans for file names, paths, hashes, etc.) based on Guidance Software's EnCase product, which was in common usage across the team. As part of the searches for credit card numbers (CCNs), we were using the built-in function isValidCreditCard(). Not long after establishing this process, we had a case where JCB and Discover credit cards had been used, but these weren't popping up in our searches.

Chris and I decided to take a look at this issue, and we went to the brands and got test card numbers...card numbers that would pass the necessary checks (BIN, length, Luhn check), but were not actual cards used by consumers. We ran test after test, and none using isValidCreditCard() returned the card numbers. We tried reaching out via the user portal, and didn't get much in the way of a useful response. Eventually, we determined that those two card brands were simply not considered "valid" by the built-in function, so we overrode that function with one of our own, one that included 7 regexes in order to find all valid credit card numbers, which we verified with some help from a friend.
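
For anyone building or validating something similar today, the basic pieces are a set of brand patterns plus the Luhn check; the patterns in the sketch below are simplified illustrations (not the exact seven regexes we ended up using), so validate them against test numbers from the brands before relying on them.

# Minimal sketch: brand-aware CCN candidate matching plus a Luhn check.
# The patterns are simplified illustrations, not the exact regexes from the
# engagement described above; always validate against known test numbers.
import re

BRAND_PATTERNS = {
    "Visa":       re.compile(r"\b4\d{15}\b"),
    "MasterCard": re.compile(r"\b5[1-5]\d{14}\b"),
    "Amex":       re.compile(r"\b3[47]\d{13}\b"),
    "Discover":   re.compile(r"\b6(?:011|5\d{2})\d{12}\b"),
    "JCB":        re.compile(r"\b35\d{14}\b"),
}

def luhn_ok(number):
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_ccns(text):
    for brand, pattern in BRAND_PATTERNS.items():
        for match in pattern.findall(text):
            if luhn_ok(match):
                yield brand, match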

We learned a hard lesson from this exercise, one that really cemented the adage, "verify your tools". If you're seeing (or not, as the case may be) something that you don't expect to see in the output of your tools, verify the tool. Do not assume that the tool is correct, that the tool author knew everything about the data they were dealing with and had accounted for edge cases. This is not to say that tool authors aren't smart and don't know what they're doing...not at all. In fact, it's quite the opposite, because what can often happen is that the data changes over time (we see this a LOT with Windows...), or there are edge cases that the tool simply doesn't handle well.

So we're not just asking about the general "verify your tools" adage; what we're really asking about is, "do you verify your tools against your investigative goals?". The flip side of this is that if you can't articulate your investigative goals, why are you running any tools in the first place?

Not long ago, I was working with someone who was using a toolset built out of open source and free tools. This toolset included a data collection component, middleware (parsed the data), and a backend component for engaging with and displaying the parsed data. The data collection component included retrieving a copy of the WMI repository, and I asked the analyst if they saw any use of WMI persistence, to which they said, "no". In this particular case, open reporting indicated that these threat actors had been observed using WMI for persistence. While the data collection component retrieved the WMI repository, the middleware component did not include the necessary code to parse that repository, and as such, one could not expect to see artifacts related to WMI persistence in the backend, even if they did exist in the repository. 

The issue was that we often expect the tools or toolset to be complete in serving our needs, without really understanding those "needs", or the full scope of the toolset itself. In this case, investigative needs or goals had not been determined or articulated, and the toolset had not been validated against investigative goals, so assumptions were made, including ones that would have led to incomplete or incorrect reporting to customers.

Going Beyond Tool Validation to Process Validation
Not long ago, I included a question in one of my tweet responses: "how would you use RegRipper to check to see if Run key values were disabled?" The point of me asking that question was to determine who was just running RegRipper because it was cool, and who was doing so because they were trying to answer investigative questions. After several days of not getting any responses to the question (I'd asked the same question on LinkedIn), I posed the question directly to Dr. Ali Hadi, who responded by posting a YouTube video demonstrating how to use RegRipper. Dr. Hadi then posted a second YouTube video, asking, "did the program truly run or not?", addressing the issue of the StartupApproved\Run key.

The point is, if you're running RegRipper (or any other tool for that matter), why are you running it? Not how...that comes later. If you're running RegRipper thinking that it's going to address all of your investigative needs, then how do you know? What are your "investigative needs"? Are you trying to determine program execution? If so, the plugin Dr. Hadi illustrated in both videos is a great place to start, but it's nowhere near complete. 

You see, the plugin will extract values from the keys listed in the plugin (which Dr. Hadi illustrated in one of the videos). That version includes the StartupApproved\Run key as well, as it was added before I had a really good chance to conduct more comprehensive testing with respect to that key and its values. I've since removed the key (and the other associated keys) from the run.pl plugin and moved them to a separate plugin, with associated MITRE ATT&CK mapping and analysis tips.

As you can see from Dr. Hadi's YouTube video, it would be pretty elementary for a threat actor to drop a malware executable in a folder, and create a Run key value that points to it. Then, create a StartupApproved\Run key value that disables the Run key entry so that it doesn't run. What would be the point of doing this? Well, for one, to create a distraction so that the responder's attention is focused elsewhere, similar to what happened with this engagement.
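
Spotting that distraction amounts to cross-referencing the Run key values against the StartupApproved\Run entries. Here's a minimal sketch, assuming the python-registry module and an exported NTUSER.DAT hive; the interpretation of the flag DWORD (0x02 or 0x06 for enabled, 0x03 for disabled) is the commonly reported one, and is exactly the kind of thing you should verify through your own testing.

# Minimal sketch: cross-reference Run key values against StartupApproved\Run.
# Assumes the python-registry module and an exported NTUSER.DAT hive. The flag
# interpretation (0x02/0x06 = enabled, 0x03 = disabled) is commonly reported,
# but should be verified through your own testing.
import struct
from Registry import Registry

RUN = "Software\\Microsoft\\Windows\\CurrentVersion\\Run"
APPROVED = "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\StartupApproved\\Run"

reg = Registry.Registry("NTUSER.DAT")
run_vals = {v.name(): v.value() for v in reg.open(RUN).values()}
flags = {v.name(): struct.unpack("<I", v.value()[:4])[0]
         for v in reg.open(APPROVED).values()}

for name, cmd in run_vals.items():
    if flags.get(name) == 0x03:
        print("DISABLED (should not run at logon): %s -> %s" % (name, cmd))
    else:
        print("enabled: %s -> %s" % (name, cmd))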

If you are looking to determine program execution and you're examining the contents of the Run keys, then you'd also want to include the Microsoft-Windows-Shell-Core%4Operational Event Log, as the event records indicate when the key contents are processed, and when execution of the individual programs (pointed to by the values) began and completed. This is a great way to determine program execution (not just "maybe it ran"), as well as to see what may have been run via the RunOnce key.
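
Pulling those records is straightforward; the sketch below assumes the python-evtx module and the exported Microsoft-Windows-Shell-Core%4Operational.evtx file, and filters on event IDs 9707 and 9708, which are the IDs commonly cited for the launch of the commands listed in the Run/RunOnce values...verify the IDs and their meaning against your own test data.

# Minimal sketch: pull the Run/RunOnce execution events from the Shell-Core log.
# Assumes the python-evtx module; event IDs 9707 and 9708 are the ones commonly
# cited for these launches, but verify the IDs and their meaning through testing.
import Evtx.Evtx as evtx

LOG = "Microsoft-Windows-Shell-Core%4Operational.evtx"

with evtx.Evtx(LOG) as log:
    for record in log.records():
        node = record.lxml()
        event_id = int(node.xpath("*[local-name()='System']/*[local-name()='EventID']/text()")[0])
        if event_id in (9707, 9708):
            # The command launched from the Run/RunOnce value appears in the
            # EventData; printing the full record keeps the sketch simple.
            print(record.xml())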

The investigative goal is to verify program execution via the Run/RunOnce keys, from both the Software and NTUSER.DAT hives. A tool we can use is RegRipper, but even so, this will not allow us to actually validate program execution; for that, we need a process that incorporates the Microsoft-Windows-Shell-Core%4Operational Event Log, as well as the Application Event Log, looking for Windows Error Reporting or Application Popup events. For any specific programs we are interested in, we'd need to look at artifacts that include the "toolmarks" of that program, looking for any file system, Registry, or other impacts on the system.

Conclusion
If you're going to use a tool in SOC or DFIR work, understand the why; what investigative questions or goals will the tool help you answer/achieve? Then, validate that the tool will actually meet those needs. Would those investigative goals be better served by a process, one that addresses multiple aspects of the goal? For example, if you're interested in IP addresses in a memory dump, searching for the IP address (or IP addresses, in general) via keyword or regex searches will not be comprehensive, and will lead to inaccurate reporting. In such cases, you'd want to use Volatility, as well as bulk_extractor, to look for indications of network connections and communications.