Wednesday, December 22, 2021

On Writing DFIR Books, pt III

Editing and Feedback
When it comes to writing books, having someone you can trust to give you honest, thoughtful, insightful feedback is a huge plus. It can do a lot to boost your confidence and help you deliver a product that you're proud of.

When I first started writing books, the process of going from idea to a published book was pretty set...or so I thought, being new and naïve to the whole thing. I put together an idea for a book, and started on an outline; I did this largely because the publisher was asking for things like a "word count". Then they'd send me a questionnaire to complete regarding the potential efficacy of the book, and they'd send my responses to a panel of "experts" within the industry to provide their thoughts and insight.

However, there wasn't a great deal of insight in the responses that came back, and to be transparent, it got worse as the years went on; even so, it was part of the process. Having someone to bounce ideas off of, and to engage in thoughtful, insightful discussions, would have been beneficial, which is why I'm recommending it now.

The same is true when it comes to the tech editing. Early on, one of the tech editors assigned by the publisher would return chapters late (not good when you're operating on a schedule), and even then, the comments would be "needs work" at the top of the chapter; not entirely helpful. So, at first, I just ignored the tech editor comments, until I had built up enough capital as an author to go to the publisher and tell them who I wanted as a tech editor. By that point, I'd already worked with the person and received their agreement. That way, I knew that I had someone on board who would help me and keep me on track, so that I didn't lose the forest for the trees. Just like when conducting an investigation, while writing a book, it can be easy to go down a rabbit hole and go completely off the rails. As such, it's always good to have someone you trust to give you honest feedback during the course of your writing. It's particularly valuable to have someone in the same industry, so that they can provide thoughtful insights toward your writing, as well as make recommendations to help address certain areas or fill in any gaps.

In short, you can and should "take ownership" of the process, as much as you can. I had assumed early on that the publisher had a short list of folks they knew could provide quality edits and insight, but it turned out that they were just "checking the box". You're going to need someone to review your writing anyway, so why not work with someone you know and trust, when all it takes is for you to ask the publisher?

Marketing
Once the book is published, or even prior to that, there's the issue of marketing; how does the community know you've written a book?

Early on, I assumed that the publisher had this all locked up. I didn't see a great deal of "marketing" with the first couple of books, and simply "went with it" when the publisher said that they had a process. After the first few books, I started asking, "...what's the process?" 

At that point, I found that the publisher had a spreadsheet of 101 names of luminaries in the industry, and part of their marketing plan was to send a free copy of the book to each of these folks, hoping that they'd write a review. Yes, you read that right...hoping. None of the recipients was under any obligation to actually write the review. Of the 101 names on the spreadsheet, I only recognized one from the industry, and they had NO INTEREST in host-based forensics; everything they wrote about or discussed was focused on NSM.

A while later, I found out that several authors under the imprint...six of us who all had the word "forensics" in the title of our books...would be attending a significant training event, and it turned out that the publisher had a deal with the bookstore that handled such events. However, when I asked, it turned out that the publisher had NO PLANS to take advantage of this opportunity, despite the fact that there were six of us presenting; having copies of the books on hand would have made for a great book signing event, and moved a lot of books. I convinced my editor that this would be a great event to attend, and to her credit, she took two days out of her family vacation to bring books to the event; all she had at the end were empty boxes.

So, my point is that when it comes to writing books for a publisher, you may have to take ownership of your own marketing, as well. It's really not hard to do, to be honest, using social media, and getting friends to write reviews if you send them a free copy of the book. Not only is there a lot of great info available on marketing things like a book you've written, there are also a lot of podcasts available, and you can reach out to offer free copies, or to be available for an interview, etc.

Saturday, December 18, 2021

Reasons to go looking in the Registry

Chris Sanders tweeted out an interesting pair of questions recently, and the simple fact is that for me to fully answer them, the tweet thread would be just too extensive. The questions were:

What are the most common reasons you go looking in the Windows registry? What do you use it to prove most?

Like almost everything else in DFIR, my response to both questions is, it depends. Why? Well, it depends upon the goals of your investigation. What I use the Registry to prove depends heavily on what I'm trying to prove, or to disprove. This may sound pretty obvious, and even intuitive, but far too often in DFIR, we find ourselves all too easily chasing down rabbit holes that have little, if anything, to do with our investigative goals.

Configuration
The Windows Registry holds a great deal of configuration information, describing what functionality is enabled or disabled, what tools can be accessed, etc. There are many more options available than what we see when we first install the OS; many are not well documented, and many are undocumented. Many configuration settings are described as being accessible through a UI of some kind, and the end result is a Registry modification. For example, many of the settings in the Local Security Policy, as well as GPOs, result in Registry modifications. Several settings accessible via the Windows Security UI also result in Registry modifications.

Another example is that the Registry contains a great deal of information pertaining to auditing and logging, including not only the audit configuration of the system, but also which Windows Event Logs are enabled. There's even an undocumented Registry key that, if added to the system, results in the Security Event Log no longer being populated. As we saw with NotPetya, clearing the Windows Event Log only gets a threat actor so far; cleared records can be recovered from unallocated space. However, if the Windows Event Log in question is disabled, then the records are never actually written to disk. This can be very challenging for an incident responder, but it also means that a dearth or absence of Windows Event Logs is no longer a matter of guesswork or assumption; it's trivial to query the appropriate locations within the Registry, and base your findings on fact.
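
As an illustration, checking for that key in an offline SYSTEM hive is a one-off query; the following is a minimal sketch using Willi Ballenthin's python-registry module, and it assumes the key in question is the oft-cited "MiniNt" key (validate that assumption against your own testing):

from Registry import Registry

def check_minint(system_hive_path):
    # Minimal sketch: look for the "MiniNt" key in an offline SYSTEM hive.
    # ControlSet001 is assumed here; production code should resolve the
    # current control set from the Select key first.
    reg = Registry.Registry(system_hive_path)
    try:
        key = reg.open("ControlSet001\\Control\\MiniNt")
        print("MiniNt key present (LastWrite: %s)" % key.timestamp())
    except Registry.RegistryKeyNotFoundException:
        print("MiniNt key not found")

check_minint("SYSTEM")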

Keep in mind that the breadth of configuration information available in the Registry is dependent upon the use of the system (both the user, and how it's used), as well as the installed applications.

Persistence
When some analysts think of persistence, they think about the Run or RunOnce keys. Why not? These are pretty popular. In 2012, a Google researcher shared that they'd combed through a popular commercial AV web site and found that 51% of the Registry paths led to a Run key. The bad guys use these keys for persistence, because they work, and work reliably.

However, as Hamlet said, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." There are more ways to remain persistent on Windows systems, within the Registry, than just a small handful of keys. Different locations in the Registry allow for programs to be automatically executed on system boot, user login, user logoff, system shutdown, upon the launch of various applications, etc.
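
Even so, checking the popular locations should be quick and repeatable. Here's a minimal sketch of checking just one of them, again using python-registry against an offline NTUSER.DAT; extending this to the many other autostart locations is left to the reader:

from Registry import Registry

def list_run_values(hive_path):
    # Enumerate values beneath the Run key in an offline hive; each
    # name/data pair is a program launched automatically at login.
    reg = Registry.Registry(hive_path)
    try:
        key = reg.open("Software\\Microsoft\\Windows\\CurrentVersion\\Run")
    except Registry.RegistryKeyNotFoundException:
        return
    print("Run key LastWrite: %s" % key.timestamp())
    for value in key.values():
        print("%s -> %s" % (value.name(), value.value()))

list_run_values("NTUSER.DAT")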

Proof of Life
There are a number of Registry keys that contain values that are the direct result of user activity, and can be used to place the user at the keyboard, or not, as the case may be.

Another valuable aspect of the Registry is that in recording user activity, as a means of enhancing the user experience, information about installed and/or executed applications can remain in the Registry, even after the application is deleted or uninstalled from the system. The same is true for files; there are locations within the Registry that will record information about a user's interaction with files that remains long after the file is removed from the system. This is also true for shares and devices; remnants remain well beyond the "lifetime" of the connection.
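
The UserAssist data is a good example of "proof of life"; value names beneath the Count subkeys are ROT-13 "encrypted", and decoding them reveals programs and shortcuts the user actually interacted with via the shell. A minimal sketch follows (python-registry again; the binary value data, which includes run counts and time stamps, is left unparsed here):

import codecs
from Registry import Registry

def list_userassist(ntuser_path):
    # Decode the ROT-13 value names beneath the UserAssist\{GUID}\Count keys.
    reg = Registry.Registry(ntuser_path)
    ua = reg.open("Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\UserAssist")
    for guid_key in ua.subkeys():
        try:
            count = guid_key.subkey("Count")
        except Registry.RegistryKeyNotFoundException:
            continue
        for value in count.values():
            print(codecs.decode(value.name(), "rot_13"))

list_userassist("NTUSER.DAT")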

DEFAULT File
Most folks don't really look in the DEFAULT file (located in the system32\config folder), but Registry modifications made by commands executed via the LocalSystem account will end up in this file.

I've seen entries for PSEXEC.exe in this file, I've seen entries added to the Run key in this file, and I've seen where entries added to the Run key further populated the Run keys in NTUSER.DAT files for user profiles created after the value was added to the key in the DEFAULT file.

This particular file can be very interesting simply due to the fact that it's not modified via normal user activity; as such, any unusual or new keys or values that appear in this file should be of significant interest.
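
Checking it is no different from checking any other hive; for example, the list_run_values() sketch from the Persistence section above works here as well, simply pointed at the DEFAULT file:

# Path assumes a mounted image or a live system; adjust as appropriate.
list_run_values("C:/Windows/System32/config/DEFAULT")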

Finding New Things
Very often, I will go looking in the Registry to see if there were any changes in the threat actor's or malware's TTPs, based on previous cases, or on information available via open reporting. By casting a wide net across a finite, deterministic space, you can determine if the threat actor was using the same TTPs observed on previous cases, or if they'd opted for a new technique.

One of the challenges of relying on open reporting of threat intelligence is that very often, such reporting is not specifically intended for use by DF analysts and incident responders. This is illustrated by gaps in DF-specific information, as well as by the incorrect use of nomenclature. For example, this malware write-up* states:

The malware stores its configuration in ‘\\HKCU\Software\Microsoft\Windows\DWM\‘, using registry keys that consist of a uid generated from the serial number of the C: drive and appended with a single digit or character.

Where the statement mentions "registry keys", it should read "values". Some may consider this pedantic, but programmatically, Registry keys and values are very different; their structures are different, the information contained in those structures is different, and the code used to enumerate Registry keys beneath the DWM key is very different from the code used to enumerate values beneath the same key.
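
The distinction is plain in code; here's a quick sketch using python-registry and the DWM path from the quoted write-up:

from Registry import Registry

reg = Registry.Registry("NTUSER.DAT")
dwm = reg.open("Software\\Microsoft\\Windows\\DWM")

# Enumerating KEYS beneath DWM...
for subkey in dwm.subkeys():
    print("key:   %s" % subkey.name())

# ...is a very different operation from enumerating VALUES beneath it;
# different structures, different calls, different information returned.
for value in dwm.values():
    print("value: %s = %s" % (value.name(), value.value()))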

As such, I will often pursue examination of the Registry to identify the malware family, attribute activity to an identified threat actor (i.e., attribution), or determine if there were any differences between what I was seeing and the open reporting.

Registry keys, values, and data (or the lack thereof) can provide insight into a threat actor's intent, sophistication, & level of situational awareness, as well as provide insight into attribution. Very often, it's not just a single key or value, and context is king!

Conclusion
There are basically two ways to go about digging into the Registry; casting a wide net, or intentional extraction. As Boris the Animal said in MiB3, "I prefer to do both." By automating both techniques, it's a relatively simple process to validate previous findings and open reporting, as well as find new things. The key, however, is that this is a living, continual process, as new findings are baked right back into the process.

While this blog post is not all-inclusive, I wanted to take a shot at providing a more comprehensive response to Chris's questions than would make sense in a long series of tweets.

*To be fair, my pointing out the above malware write-up is not to single out the author; not at all. Microsoft is widely known for having made this mistake a great many times in their own write-ups, and we see it quite a bit more than I'd like to admit in open reporting.

Tuesday, December 14, 2021

Tips for DFIR Analysts, pt VI

Context & Finding Persistence
I was looking into an unusual mechanism for launching applications recently, and that research brought back a recurring issue I've seen time and again in the industry, specifically pivoting from one data point to another based on knowledge of the underlying system.

Very often, during SOC monitoring or live response, we'll find a process executing via EDR telemetry (or some other means) and have no clear understanding of the mechanism that launched that process. Sometimes, we may have the data available to assist us in discovering the root cause of the process launch; for example, in the case of processes launched via web shell, all you need to do is trace backward through the process tree until you get to the web server process (i.e., w3wp.exe, etc.).
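
In code, that walk-back is trivial; here's a minimal sketch, assuming a flat set of (pid, ppid, name) records pulled from telemetry (the field layout and the web server process names are assumptions; adjust for your data source):

WEB_SERVERS = {"w3wp.exe", "httpd.exe", "tomcat8.exe"}

def trace_to_origin(procs, pid):
    # procs: dict mapping pid -> (ppid, name), built from telemetry.
    # Walk parent links until we hit a known web server process (suggesting
    # a web shell) or run out of recorded ancestry.
    chain = []
    while pid in procs:
        ppid, name = procs[pid]
        chain.append(name)
        if name.lower() in WEB_SERVERS:
            break
        pid = ppid
    return chain

procs = {
    4321: (1234, "cmd.exe"),
    1234: (884, "w3wp.exe"),
    884: (600, "svchost.exe"),
}
print(trace_to_origin(procs, 4321))  # ['cmd.exe', 'w3wp.exe']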

Other times, however, it's not so easy to do this, as the process tree proceeds back through the parent and grandparent processes with no clear indication of the source. In such cases, we may need to seek other sources of data, such as Windows Event Log records, to determine a system boot, or a user login to the system. Then, we may need the Windows Registry hive files, or file system data, to determine persistence mechanisms. What this means is that understanding the context of a suspicious process can help us pivot to determining the origin of the process launch, and that context can be determined by an event's proximity to other events.

Tools
Over time, I've seen tools...forensic analysis suites...evolve and include more "things". For example, Paraben's E3 application (version 2.8) includes parsing of data from the Registry, and I've seen analysts use Magnet Axiom to do something similar. ForenSafe blogs about the ArtiFast application, including screen shots of the information available from the Registry. This is all in addition to tools that specifically parse various data sources, such as Eric Zimmerman's Registry Explorer, and other tools.

What analysts need to remember is that these are just tools; it is still incumbent upon the analyst to correctly interpret the information presented by the tools. For example, this ForenSafe article discusses printer device information available in the Registry, and it appears to present the information for the analyst to interpret (as many tools do). How would you, as an analyst, use this information to determine if the PrintNightmare exploit had been used by a threat actor, or if a threat actor had enabled the "keep print job" functionality on the printer, as a means of staging data for exfiltration?

This JumpSec Labs article does a good job of discussing data sources and tools that can be used in the absence of Windows Event Logs. However, there are a couple of items that do need to be mentioned; for example, artifacts should never be viewed in isolation from each other, as context is missed. In the article, the author mentions the use of Prefetch files, and then the AppCompatCache/ShimCache, to demonstrate program execution. Rather than being viewed separately, these should be viewed together whenever possible. The reason is that 32-bit Windows XP was the only version of Windows where the AppCompatCache data recorded a second time stamp, indicating the time of last execution; in all other instances, the available time stamp is the last modified time extracted from the $STANDARD_INFORMATION attribute within the MFT. I've seen cases over the years...not just my own cases, but also investigations discussed during conference presentations...where the threat actor placed the EXE on the system and "time stomped" it before executing it. In one instance (shared at a conference in 2015) the misinterpretation of this data had a profound, negative impact on the "victim", as it was a PCI case and the "window of compromise" was 4 yrs, rather than a couple of weeks.
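
One way to catch the "time stomping" scenario described above is to compare the $STANDARD_INFORMATION time stamps against the $FILE_NAME time stamps from a parsed MFT; the latter are much harder to modify via the Windows API. A minimal sketch (the field names assume you've already parsed the MFT into per-file records):

def possibly_time_stomped(record):
    # record: a parsed MFT entry with datetime fields "si_modified" and
    # "fn_modified". A $STANDARD_INFORMATION time stamp that predates the
    # corresponding $FILE_NAME time stamp is a classic stomping indicator.
    return record["si_modified"] < record["fn_modified"]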

All of this is to say that I highly recommend the intentional, purposeful use of tools to further forensic analysis. However, at the end of the day, it is still incumbent upon the analyst to correctly interpret the data presented by the tools, and this interpretation is often heavily impacted by the completeness of data collection and parsing.

Artifact Constellations
As discussed in the JumpSec Labs article, analysts may encounter systems upon which some data sources are not readily accessible. The article describes threads that analysts can pursue in the absence of Windows Event Logs, but something the article doesn't touch on is "artifact constellations"; that is, when some action occurs on a system, there are often secondary and tertiary artifacts created that, when considered together, provide much greater evidence of activity.

Articles like this one do a great job of illustrating some of the different artifacts available, but it is still up to the analyst to complete the constellation, and to correctly interpret the data. 

Spelling & Grammar
Take spelling and grammar seriously. I know a lot of folks don't, and that when someone points out poor spelling, they're referred to as the "grammar police" or as a "spelling Nazi". 

However, consider this...let's say you work for me, and don't care about spelling or grammar. Now, I have to review EVERYTHING you produce before it goes to the next level, be it an internal executive or a customer. I know that every time you send me a report or an email via Outlook, it's going to be full of words with red squiggly underlines...you know, MSWord's way of saying, "hey, you misspelled this word...". Once those are corrected, I still have to spend a great deal of time reviewing what you wrote, because now I have to deal with incorrect grammar, incomplete sentences, and correctly spelled but incorrect words.

Let's reduce it a little bit...what happens when you transpose or misspell an IOC? Let's say you garble an IP address or a domain name that's to be blocked...what happens then? Well, the IOC intended to be blocked isn't, and the organization isn't any better off; in fact, it's worse, because everyone thinks the correct IOC has been submitted.

What happens when you misspell a Registry value name (or the data) for a setting on a critical system? The system doesn't recognize the value (or data), and the intended functionality is not applied.

Apply the same thought to threat hunting, either as part of a proactive threat hunt or a DFIR threat hunt, or to looking for an IOC in logs...either way, if you misspell it and report, "...nothing found...", what's the result?

Now, let's look at it another way...I work for you, and I'm the one who's not spelling things correctly. In what position does that place you?

Thursday, November 25, 2021

Threat Hunting, IRL

While I worked for one company, I did a lot of public speaking on the value of threat hunting. During these events, I met a lot of folks who were interested to learn what "threat hunting" was, and how it could be of value to them.

I live in a very rural area, on just shy of 19 acres. One neighbor has 15 acres up front and another 20 in the back, and he adjoins a large property with just a trailer. My neighbor on the other side has 19 acres of...just 19 acres. We have animals, as well as more than a few visitors, which makes for a great analogy for threat hunting.

Within the borders of my property, we have three horses and a mini-donkey, and we have different paddocks and fields for them. We can restrict them to certain areas, or allow them to roam freely. We do this at different times of the year, depending upon weather, availability of hay, etc. For example, in the spring, when the grass is coming in really well, we don't want the horses on it too soon or for too long, because they can colic (which is a bad thing). And we may want to cut the grass (do maintenance), so we'll restrict the horses from that area.

I understand the normal comings and goings of the horses, because I have full visibility. I can not only see most of the areas (albeit not all) from the house, but I get out and walk around the property. I am familiar with the normal habits of the horses, and understand how they respond to various "events". I also know when something is amiss, simply by watching the horses. This is my "infrastructure".

Like most horse owners, we provide them with salt and mineral licks, in the form of 40 lb blocks. We make this available to them year-round, replacing blocks as they get diminished. Even so, we've also noticed that the horses will scratch at certain spots on the ground, and then spend a good bit of time happily licking the ground. Knowing this, we try to keep up on "pasture maintenance"; we pick up the poop, or drag the field, so that the horses don't get worms. We also know what the spots look like, and that they're different from where the horses like to roll. Where they scrape and lick, the ground is bare, and there are usually rounded marks where their hoof initially contacts the ground, before they drag it across the ground to break up the earth. Where they roll, there is usually still some semblance of grass left, and there's also hair left behind. In addition, the marks from their hooves, where the horses circle before lying down and then when they get back up, are different from where they scrape the ground. All in all, this is normal, expected "user behavior".

Walking the dog around the property this year, I noticed something I haven't seen before...there are bare spots on the trails where the leaves and grass have been scraped away and the ground exposed. This seems very similar to what I've seen the horses do, however, there are differences. First, these spots are in areas that the horses do not usually frequent unless we're riding them. Second, instead of rounded hoof marks, there are distinct two-toed marks in the ground. Third, these exposed areas are usually much smaller than what I've seen associated with the horses. From what I've observed, these are deer doing the same thing as the horses, and if the only IOC someone shared with me was "bare spot on the ground", I would assume that it was most likely from the horses (i.e., normal user activity). However, if I look beyond the IOC, and look at the specifics of the activity (i.e., TTPs), I'd then be able to clearly differentiate between "normal user activity" and "potentially unwanted/malicious activity". 

My neighbors recently shared "threat intel"...they'd seen a bear on their property. Now, they'd done more than just stated, "hey, we saw a bear!" That's right...they did more than just share an IOC. In fact, they took a picture, indicated the time of day, and the picture gave an indication of where the bear was located on their property. So we had other things to consider than just "a bear"...size, color, type, direction of movement, etc. As a result, I'm now on the look-out for these TTPs, as well as others known to be associated with this particular threat; scat, claw marks on trees, etc. We have persimmon trees in the area, and while I have seen scat from various animals that contains persimmon seeds, I have yet to see bear scat. But I am aware of the "threat", and looking for clear indications of that threat.

My neighbors did more than just share an IOC; they shared clear TTPs, and enough detail that I could search for indications of that threat within my infrastructure.

We've lived on this property for more than 4 1/2 yrs, and it's only been in the past year that I've found two turkey eggs. In both cases, they've been broken, and based on the condition of the eggs, I can't tell if the chick hatched, or if the egg was a meal for a raccoon. Regardless, it does tell me that there are very likely turkeys in the area. While we are knowledgeable and understand the nature of turkeys and the "risk" they bring to the "infrastructure", the horses have a completely different view. To the horses, a turkey scurrying through the underbrush may as well be a velociraptor released from its cage, right out of Jurassic Park. While sitting in my office, a turkey is not a threat to me; however, while I am on horseback, a turkey could be a "threat", in that spooking the horse I am riding might have a severely negative impact on my day. I have another threat that I'm aware of, and because I have detailed visibility of my environment, and because I understand the nature of the threat, I understand the risk.

For me, it's interesting to take a step back and look at how my IRL life parallels my work life. Or, maybe my IRL life is being viewed through the lens of my work life. Either way, I thought that there were some interesting parallels.

Tuesday, November 23, 2021

Tips for DFIR Analysts, pt. V

Over the years, I've seen DFIR referred to in terms of special operations forces. I've seen incident response teams referred to as "Cyber SEALs", as well as via various other terms. However, when you really look at it, incident response is much more akin to the US Army Special Forces, aka "Green Berets"; you have to "parachute in" to a foreign environment, and quickly develop a response capability making use of the customer's staff ("the natives"), all of whom live in a "foreign culture". As such, IR is less about "direct action" and "hostage rescue", and more about "foreign internal defense".

Analysis occurs when an analyst applies their knowledge and experience to data, and is usually preceded by a parsing phase. We can learn a great deal about an analyst's level of knowledge and experience by what data they collect, how they approach data NOT addressed during the initial collection phase, how they go about parsing the data, and then how they go about presenting their findings.

Analysts need to understand that there is NO SUCH THING as an attack that leaves no traces. You may hear pen testers and fellow DFIR analysts saying this, often with great authority, but the simple fact is that this is NOT THE CASE. EVER. I heard this quite often when I was a member of the ISS X-Force ERS team; at one point, the vulnerability discovery team told me that they'd discovered a vulnerability in Excel that left NO TRACES of execution on the endpoint. Oddly enough, they stopped talking to me altogether when I asked if the user had to open...wait for it...an Excel file. Locard's Exchange Principle tells us that there will be some traces of the activity. Some artifacts in the constellation may be more transient than others, but any "attack" requires the execution of code (the processing of instructions through the CPU) on the system, even those that include the use of "native tools" or "LOLBins".

Take what you learn from one engagement, and bake it back into your process. Automate it, so that the process is documented (or self-documenting) and repeatable.

Over on Twitter, Chris Sanders has a fascinating thread "discussing" the ethics around the release of OSTs. He has some pretty important points, one being that cybersecurity does not have any licensing body, and as such, no specific code of ethics or conduct to which one must adhere. Section 17 of the thread mentions releasing OSTs being a "good career move"; do we see that on the "blue" side? I'd suggest, "no". Either way, this is a really good read, and definitely something to consider...particularly the source of "data" in the thread. Chris later followed up with this article, titled, "A Socratic Outline for Discussing the OST Release Debate".

Tools
Not long ago, I ran across this LinkedIn post on SCCM Software Metering; this can potentially provide insight into program execution on a host (where other artifacts, such as AmCache and ShimCache, do not explicitly illustrate program execution). Remember, this should be combined with other validating artifacts, such as Prefetch files, EDR telemetry (if possible), etc., and not considered in isolation. Microsoft has a document available that addresses software metering, FireEye has some great info available on the topic, and David Pany has a tool available for parsing this information from the OBJECTS.DATA file.

Inversecos had a great Tweet thread on RDP artifacts not long ago, and in that thread, linked to an artifact parsing tool, BMC-Tools. Looking around a bit, I found another one, RdpCacheStitcher. Both (or either) can provide valuable insight into putting together "the story" and validating activity.

Endpoint Tools
I wanted to pull together a compilation of tools I'd been collecting, and just put them out there for inspection. I haven't had the opportunity to really use these tools, but in putting them together into the below rather loose list, I'm hoping that folks will see and use them...

DFIR Orc/Zircolite - LinkedIn message
DFIR Orc - Forensic artifact collection tool
Zircolite - Standalone SIGMA-based tool for EVTX
Chainsaw - tool for rapidly searching Windows Event Logs
Aurora from Nextron Systems (presentation) - SIGMA-based EDR agent
PCAP Parser (DaisyWoman) - HTTP/S queries & responses, VT scans
*This parser is great for use after you've used bulk_extractor to retrieve a PCAP file from memory, or other unstructured data
Kaspersky Labs parse_evtx.exe binary

Images/Labs
If you want to practice using any of the above tools, or you want to practice using other tools and techniques, and (like me) you're somewhat limited as to what you have access to, here are some resources you might consider exploring...

Digital Forensics Lab & Shared Cyber Forensic Intelligence Repository
CyberDefenders Labs - a fair number of labs available
Cado Security REvil Ransomware Attack Image/Data
Ali Hadi's DataSets - lots of good data sets available 
MUS2019 DFIR CTF (via David Cowen) 

2018 DefCon CTF (here, here)
DigitalCorpora Narcos Scenario (write-up)
CalPoly 2019 DF Downloads
DFIRMadness Stolen Szechuan Sauce (Twitter) (Answers)

Monday, November 01, 2021

Tips for DFIR Analysts, pt IV

Context is king, it makes all the difference. You may see something run in EDR telemetry, or in logs, but the context of when it ran in relation to other activities is often critical. Did it occur immediately following a system reboot or a user login? Does it occur repeatedly? Does it occur on other systems? Did it occur in rapid succession with other commands, indicating that perhaps it was scripted? The how and when of the context then leads to attribution.

Andy Piazza brings the same thoughts to CTI in his article, "CTI is Better Served with Context".

Automation can be a wonderful thing, if you use it, and use it to your advantage. The bad guys do it all the time. Automation means you don't have to remember steps (because you will forget), and it drives consistency and efficiency. Even at the micro-level, at the level of the individual analyst's desktop, automation means that data sources can be parsed, enriched, decorated and presented to the analyst, getting them to analysis faster. Admit it...parsing all of the data sources you have access to, the way you're doing it now, is terribly inefficient and error-prone. Imagine if you could use the system, the CPU, to do that for you, and have pivot points identified when you access the data, rather than having to discover them for yourself. Those pivot points could be based on your own individual experience, but what if it were based on the sum total experience of all analysts on the team, including analysts who were on the team previously, but are no longer available?
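
A minimal sketch of what that might look like at the individual desktop level follows; the event format and indicator list here are assumptions, and the point is that the team's collective experience lives in a shared data file rather than in any one analyst's memory:

import json

def decorate_timeline(events, indicators):
    # events: normalized, parsed timeline entries; indicators: the team's
    # accumulated "known bad" items, maintained as a shared data file.
    for event in events:
        hits = [i["note"] for i in indicators
                if i["pattern"].lower() in event["description"].lower()]
        if hits:
            event["pivot"] = hits  # pivot points surfaced automatically
        yield event

with open("indicators.json") as f:
    indicators = json.load(f)
with open("timeline.json") as f:
    events = json.load(f)

for e in decorate_timeline(events, indicators):
    if "pivot" in e:
        print(e["time"], e["description"], e["pivot"])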

Understand and use the terminology of your industry. All industries have their own common terminology because it facilitates communication, as well as identifying outsiders. Doctors, lawyers, and Marines all have terms and phrases that mean something very specific to the group or "tribe". Learn what those are for your industry or community, understand them, and use them.

Not all things are what we think they are...this is why we need to validate what we think are "findings", but are really assumptions. Take Windows upgrades vs. updates...an analyst may believe that a lack of Windows Event Logs is the result of an upgrade when they (a) do not have Windows Event Log records that extend back to the time in question, and (b) see that there was a great deal of file system and Registry activity associated with an update, "near" that time. Most often the lack of Windows Event Logs is assumed to be the threat actor "covering" their tracks, and this assumption will often persist despite the lack of evidence pointing to this activity (i.e., batch files or EDR telemetry illustrating the necessary command, specific event IDs, etc.). The next step, in the face of specific event IDs "missing", is to assume that the update caused the "data loss", without realizing the implication of that statement...that a Windows update would lead to data loss. How often do we see these assumptions, when the real reason for a dearth of Windows Event Logs covering a specific time frame is simply the passage of time?

Sunday, October 10, 2021

Data Exfiltration, Revisited

I've posted on the topic of data exfiltration before (here, etc.) but often it's a good idea to revisit the topic. After all, it was almost two years ago that we saw the first instance of ransomware threat actors stating publicly that they'd exfiltrated data from systems, using this as a secondary means of extortion. Since then, we've continued to see this tactic used, along with other tertiary means of extortion based on data exfiltration. We've also seen several instances where the threat actor ransom notes have stated that data was exfiltrated, but the public "shaming" sites were noticeably empty.

As long as I've been involved in what was first referred to as "information security" (later referred to as "cyber security"), data exfiltration has been a concern to one degree or another, even in the absence of clearly-stated and documented analysis goals. With the advent of PCI forensic investigations (circa 2007-ish), "data exfiltration" became a formalized and documented analysis goal for every investigation, whether the merchant asked for it or not. After all, what value was the collected data if the credit card numbers were extracted from memory and left sitting on the server? Data exfiltration was/is a key component necessary for the crime, and as such, it was often assumed without being clearly identified.

One of the challenges of determining data exfiltration is visibility; systems and networks may simply not be instrumented in a manner that allows us to determine if data exfiltration occurred. By default, Windows systems do not have a great deal of data sources and artifacts that demonstrate data exfiltration in either a definitive or secondary manner. While some do exist, they very often are not clearly understood and investigated by those who then state, "...there was no evidence of data exfiltration observed..." in their findings.

Many years ago, I responded to an incident where an employee's home system had been compromised and a keystroke logger installed. The threat actor observed through the logs that the employee had remote access to their work infrastructure, and proceeded to use the same credentials to log into the corporate infrastructure. These were all Windows XP and 2003 systems, so artifacts (logs and other data sources) were limited in comparison to more modern versions of Windows, but we had enough indicators to determine that the threat actor had no idea where they were. The actor conducted searches that (when spelled correctly) were unlikely to prove fruitful...the corporate infrastructure was for a health care provider, and the actor was searching for terms such as "banking" and "password". All access was conducted through RDP, and as such, there were a good number of artifacts populated when the actor accessed files.

At that point, data exfiltration could have occurred through a number of means. The actor could have opened a file, and taken a picture or screen capture of their own desktop...they could have "exfiltrated" the data without actually "moving" it.

Jump forward a few years, and I was working on an APT investigation when EDR telemetry demonstrated that the threat actor had archived files...the telemetry included the password used in the command line. Further investigation led us to a system with a publicly-accessible IIS web server, albeit without any actual formal web sites being served. Web server logs illustrated that the threat actor downloaded zipped archives from that system successfully, and file system metadata indicated that the archive files were deleted once they'd been downloaded. We carved unallocated space and recovered a dozen accessible archives, which we were able to open using the password observed in EDR telemetry. 

In another instance, we observed that the threat actor had acquired credentials and was able to access OWA, both internally and externally. What we saw the threat actor do was access OWA from inside the infrastructure, create a draft email, attach the data to be exfiltrated to the email, and then access the email from outside of the infrastructure. At that point, they'd open the draft email, download the attachment, and delete the draft email. 

When I first began writing books, my publisher had an interesting method for transferring manuscript files. They sent me instructions for accessing their FTP site via Windows Explorer (as opposed to the command line), which left remnants on the system well beyond the lifetime of the book itself.

My point is that there are a number of ways to exfiltrate data from systems, and detecting data exfiltration can be extremely limited without necessary visibility. However, there are data sources on Windows systems that can provide definitive indications of data exfiltration (i.e., BITS upload jobs, web server logs, email, network connections/pcaps in memory dumps and hibernation files, etc.), as well as potential indications of data exfiltration (i.e., shellbags, SRUM, etc.). These data sources are relatively easy (almost trivial) to check, and in doing so, you'll have a comprehensive approach to addressing the issue.
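
As one example, BITS upload jobs leave records in the Microsoft-Windows-Bits-Client/Operational Event Log. A rough sketch of checking for them using the python-evtx module follows; the event IDs (59/60, job start/complete) and the string matching are assumptions to validate against your own test data:

from Evtx.Evtx import Evtx

def bits_job_records(evtx_path):
    # Scan the BITS-Client Operational log for job start/complete records;
    # records referencing upload jobs can be definitive exfil indicators.
    with Evtx(evtx_path) as log:
        for record in log.records():
            xml = record.xml()
            if ">59<" in xml or ">60<" in xml:
                yield xml

for xml in bits_job_records("Microsoft-Windows-Bits-Client%4Operational.evtx"):
    if "upload" in xml.lower():
        print(xml)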

Friday, October 08, 2021

Tips for DFIR Analysts, pt III

Learn to think critically. Don't take what someone says as gospel, just because they say it. Support findings with data, and clearly communicate the value or significance of something.

Be sure to validate your findings, and never rest your findings on a single artifact. Find an entry for a file in the AmCache? Great. But does that mean it was executed on the system? No, it does not...you need to validate execution with other artifacts in the constellation (EDR telemetry, host-based effects such as an application prefetch file, Registry modifications, etc.).
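
One quick validation step is a check for a corresponding application prefetch file; a trivial sketch (assumes a live system or mounted image, and that Prefetch is enabled...it's often disabled on server SKUs):

import glob

def prefetch_for(exe_name, prefetch_dir="C:/Windows/Prefetch"):
    # Application prefetch files are named EXENAME-HASH.pf; a hit here is
    # one more artifact in the constellation supporting actual execution.
    return glob.glob("%s/%s-*.pf" % (prefetch_dir, exe_name.upper()))

print(prefetch_for("BADTOOL.EXE"))  # hypothetical file name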

Have a thorough process, one that you can add to and extend. Why? Because things are always changing, and there's always something new. If you can automate your process, then so much the better...you're not losing time to crushing inefficiencies. So what do you need to look for? Well, the Windows Subsystem for Linux has been around for some time, and has even been updated (to WSL2). There are a number of versions of Linux you can install via WSL2, including Parrot OS. As one would expect, there's now malware targeting WSL2 (Lumen Black Lotus Labs, TomsHardware, The Register).

Learn to communicate clearly and concisely. This includes both the written and spoken form. Consider using the written form to make the spoken form easier to communicate, by first writing out what you want to communicate.

Things are not always what they seem. Just because someone says something is a certain way doesn't make it the case. It's not that they're lying; more often than not, it's that they have a different perspective. Look at it this way...a user will have an issue, and you'll ask them to walk through what they did, to see if you can replicate the issue. You'll see data that indicates that they took a specific action, but they'll say, "I didn't do anything." What they mean is that they didn't do anything unusual or different from what they do on a daily basis.

There can often be different ways to achieve the same goal, different routes to the same ending. For example, Picus Security shared a number of different ways to delete shadow copies, which included resizing the VSC storage to be less than what was needed. From very cursory research, if a VSC does not fit into the available space, it gets deleted. This means that existing VSCs will likely be deleted, breaking brittle detections that look for vssadmin.exe being used to directly delete the VSCs. Interestingly enough, I found this as a result of this tweet asking about a specific size (i.e., 401Mb).
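
The resizing approach looks something like the following (a representative command line, not a verbatim capture from a case; note that it contains no obvious "delete" verb):

vssadmin resize shadowstorage /for=C: /on=C: /maxsize=401MB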

Another example of different approaches to the same goal is using sc.exe vs reg.exe to control Windows services, etc. Different approaches may be predicated upon the threat actor's skill set, or it may be based on what the threat actor knows about the environment (i.e., situational awareness). Perhaps the route taken was due to the threat actor knowing the blind spots of the security monitoring tools, or of the analysts responding to any identified incidents.
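
For example, both of the following disable a service, but they present very differently in telemetry and to brittle detections (the service name is hypothetical; a Start value of 4 means "disabled"):

sc config EvilSvc start= disabled

reg add "HKLM\SYSTEM\CurrentControlSet\Services\EvilSvc" /v Start /t REG_DWORD /d 4 /f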

There are also different ways to compromise a system...via email, the browser, any accessible services, even WSL2 (see above).

An issue within or challenge of DFIR has long been signal-to-noise ratio (SNR). In the early days of DFIR, circa Win2000/WinXP systems, the issue had to do with limited data...limited logging, limited tracking of user activity, etc. As a result, there are limited artifacts to tie to any particular activity, making validation (and to an extent, attribution) difficult, at best.

As DFIR has moved to the enterprise and analysts began engaging with EDR telemetry, we've also seen surges in host-based artifacts, not only between versions of Windows (XP, to Win7, to Win10) but also across builds within Windows 10...and now Windows 11 is coming out. With more people coming into DFIR, there's been a corresponding surge in the need to understand the nature of these artifacts, particularly within the context of other artifacts. This has all led to a perfect storm; increases in available data (more applications, more Windows Event Logs, more Registry entries, more data sources) and at the same time, a compounding need to correctly and accurately understand and interpret those artifacts. 

This situation can easily be addressed, but it requires a cultural change. What I mean is that a great deal of the parsing, enrichment, and decoration of available data sources can be automated, but without DFIR analysts baking what they've discovered...new constellations or elements of constellations...back into the process, the entire effort becomes pointless beyond the initial creation. What allows automation such as this to continue to add value over time is that it is developed, from the beginning, to be expanded; for new data sources to be added, but also for new findings to be added.

Hunting isn't just for threat hunters. We most often think of "threat hunting" as using EDR telemetry to look for "badness", but DFIR analysts can do the same thing using an automated approach. Whether full images are acquired or triage data is collected across an enterprise, the data sources can be brought to a central location, parsed, enriched, decorated, and then presented to the analyst with known "badness" tagged for viewing and pivoting. From there, the analyst can delve into the analysis much sooner, with greater context, and develop new findings that are then baked back into the automated process.

Addendum, 9 Oct: Early in the above blog post, I stated, "Find an entry for a file in the AmCache? Great. But does that mean it was executed on the system? No, it does not...you need to validate execution with other artifacts in the constellation...". After I posted a link to this post on LinkedIn, a reader responded with, "...only it does."

However, this tweet states, "Amcache entries are created for executables that were never executed. Executables that were launched and then deleted aren't recorded. Also, Amcache entries aren't created for executables in non-standard locations (e.g., "C:\1\2\") _unless_ they were actually executed." 

Also, this paper states, on the second half of pg 24 of 66, "The appearance of a binary in the File key in AmCache.hve is not sufficient to prove binary execution but does prove the presence of the file on the system." Shortly after that, it does go on to say, "However, when a binary is referenced under the Orphan key, it means that it was actually executed." As such, when an analyst states that they found an entry "in" the AmCache.hve file, it is important to state clearly where it was found...specificity of language is critical. 

Finally, in recent instances I've engaged with analysts who've stated that an entry "in" the AmCache.hve file indicated program execution, and yet, no other artifacts in the constellation (Prefetch file, EDR telemetry, etc.) were found. 

EDR Bypasses

During my time in the industry, I've been blessed to have opportunities to engage with a number of different EDR tools/frameworks at different levels. Mike Tanji offered me a look at Carbon Black before carbonblack.com existed, while it still used an on-prem database. I spent a very good deal of time working directly with Secureworks Red Cloak, and I've seen CrowdStrike Falcon and Digital Guardian's framework up close. I've seen the birth and growth of Sysmon, as well as MS's "internal" Process Tracking (which requires an additional Registry modification to record full command lines). I've also seen Nuix Adaptive Security up close (including seeing it used specifically for threat hunting), which rounds out my exposure. So, I haven't seen all tools by any stretch of the imagination, but more than one or two.
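
As an aside, here's the "additional Registry modification" mentioned above, for reference; in addition to enabling the process creation auditing policy, the following value needs to be set in order for event ID 4688 records to include full command lines:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f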

Vadim Khrykov shared a fascinating tweet thread regarding "EDR bypasses". In the thread, Vadim lists three types of bypasses:

1. Technical capabilities bypass - focusing on telemetry EDR doesn't collect
2. EDR configuration bypass - EDR config being "aggressive" and impacting system performance 
3. EDR detection logic bypass - EDR collects the telemetry but there is no specific detection to alert on the technique used

Vadim's thread got me to thinking about bypasses I've seen or experienced over the years....

1. Go GUI

Most EDR tools are really good about collecting information about new processes that are created, which makes them very valuable when the threat actor has only command line access to the system, or opts to use the command line. However, a significant blind spot for EDR tools is when GUI tools are used, because in order to access the needed functionality, the threat actor makes selections and pushes buttons, which are not registered by the EDR tools. This is a blind spot, in particular, for EDR tools that cannot 'see' API calls.

As such, this does not just apply to GUI tools; EXE and DLL files can either run external commands (which are picked up by EDR tools), or access the same functionality via API calls (which are not picked up by EDR tools).

This has the overall effect of targeting analysts who may not be looking at artifact constellations. That is to say that analysts should be validating tool impacts; if an action occurred, what are the impacts of that action on the ecosystem (i.e., Registry modifications, Windows Event Log records, some combination thereof, etc.)? This way, we can see the effects of an action even in the absence of telemetry specific to that action. For example, did a button push lead to a network connection, modify firewall settings, or establish persistence via WMI? We may not know that the button was pushed, but we would still see the artifact constellations (even partial ones) of the impact of that button push.

Take Defender Control v1.6, for example. This is a simple GUI tool with a couple of buttons that allows the user to disable or enable Windows Defender. EDR telemetry will show you when the process is created, but without looking further for Registry modifications and Windows Event Log records, you won't know if the user opened it and then closed it, or actually used it to disable Windows Defender.
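
A sketch of that follow-on check against a live system is below; the DisableAntiSpyware value shown is the one such tools have commonly set, which is an assumption to validate for the specific tool and version at hand:

import winreg

def defender_disabled():
    # Returns True if the commonly-abused policy value is set; pair this
    # with Windows Defender operational log records and other artifacts
    # rather than relying on it in isolation.
    path = r"SOFTWARE\Policies\Microsoft\Windows Defender"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "DisableAntiSpyware")
            return value == 1
    except FileNotFoundError:
        return False

print(defender_disabled())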

2. Situational awareness 

While I was with CrowdStrike, I did a lot of public presentations discussing the value of threat hunting. During these presentations I included a great deal of telemetry taken directly from the Falcon platform, in part demonstrating the situational awareness (or lack thereof) of the threat actor. We'd see some who didn't seem to be aware or care that we were watching, and we'd see some who were concerned that someone was watching (checking for the existence of Cb on the endpoint). We'd also see other threat actors who not only sought out which specific EDR platform was in use, but also began reaching out remotely to determine other systems on which that platform was not installed, and then moving laterally to those systems.

I know what you're thinking...what's the point of EDR if you don't have 100% coverage? And you're right to think that, but over the years, for a variety of reasons, more than a few organizations impacted by cyber attacks have had limited coverage via EDR monitoring. This may have to do with Vadim's reason #2, or it may have to do with basic reticence to install EDR capability (concerns about the possibility of Vadim's reason #2...).

3. Vadim's #2++

To extend Vadim's #2 a bit, something else I've seen over the years is customers deploying EDR frameworks, albeit only on a very limited subset of systems.

I've also seen where deploying EDR within an impacted organization's environment has been inhibited by...well...the environment, or the staff. I've seen the AD admin refuse to allow EDR to be installed on _his_ systems because we (my team) might remove it at the end of the engagement and leave a backdoor. I've seen users in countries with very strict privacy laws refuse to allow EDR to be installed on _their_ systems.

I've seen EDR installed and run in "learning" mode during an engagement, so that the EDR "learned" that the threat actor's actions were "good".

One of the biggest variants of this "bypass" is an EDR that is sending alerts to the console, but no one is watching. As odd as it may sound, this happens considerably more often than you'd think.

EDR is like any other tool...its value depends heavily upon how it's used or employed. When you look closely at such tools, you begin to see their "blind spots", not just in the sense of things that are not monitored, but also how DFIR work significantly enhances the visibility into a compromise or incident.

Thursday, September 23, 2021

Imposter Syndrome

This is something many of us have experienced to one degree or another, at various times. Many have experienced it, some have overcome it, and others may not be able to, and wonder why.

HealthLine tells us, "Imposter feelings represent a conflict between your own self-perception and the way others perceive you." I would modify that slightly to, "...the way we believe others perceive us." Imposter syndrome is something internalized, and has very little to do with the outside world.

I wanted to take the opportunity to share with you, the reader, what I've learned over the years about what's really happening in the world when we're having those feelings of imposter syndrome.

Perception: I don't want to present at a conference, or ask a question at a conference, because everyone knows more than I do, and will ask questions I don't know the answer to.

Reality: Have you ever stood in the back of the room at a conference and just...watched? Most people you see aren't even listening to the presentation. Some are on social media (Twitter, Discord, Slack, etc.), and some are on their phones.

When someone asks a question about a presentation, calm yourself so that instead of the ringing of fear in your ears, you actually hear the question. Does the question have anything to do with the presentation? I ask, because in my experience, I've gotten a few questions following a presentation, and some of those questions have been way off topic. Looking back, what seems to happen is that someone hears a term that they're familiar with, or something that strikes a chord, and they take it way out into left field.

My point is simply that, at conferences, the simple reality is that a lot of people aren't really paying attention. This can be very important, even critical, when the fear behind imposter syndrome is that you're under a microscope.

I've found that I've had to engage with people directly to get a reaction. When I worked for CrowdStrike, I was involved with a number of public speaking engagements, and for the last 5 or so months, the events were about 3 1/2 hrs long. Across the board, almost no one actually asked questions. For those who did, it took time for them to do so; some wouldn't ask questions until the 2 1/2 or 3 hr mark. Some would ask questions, but only after I'd goaded someone else into asking a question.

Perception: I could never write a book or blog post, because no matter how prepared I am, the moment it's published it'll be torn apart, and I'll be eviscerated and exposed as the fraud that I am.

Reality: Most people don't actually read books and blog posts. My experience in writing books is that some will purchase the books, but few are willing to actually read the books, and even fewer will write reviews. It's the same with blog posts...you put work into the topic and content, and if you don't share a link to the post on social media, very few actually see the post. If you do share it on social media, you may get "likes" and retweets, but no one actually comments, neither on the tweet nor the post.

My overall point is that imposter syndrome is an internal dialog that prevents us from being our best and reaching our potential. We can overcome it by replacing it with a dialog that is more based on reality, on what actually happens, in the real world. 

My perspective on this is based on experience. In 2005, Cory Altheide and I published a paper discussing the artifacts associated with the use of USB devices on Windows systems (at the time, primarily Windows XP). After the paper was published, I spoke on the topic at a conference in my area, one primarily attended by law enforcement, and got no questions. Several years later, I was presenting at another conference, and had mistakenly opened the wrong version of the presentation; about halfway through, there was a slide with three bullets that all said, "blah blah blah". At the end of the presentation, three attendees congratulated me on a "great presentation".



Sunday, September 19, 2021

Distros and RegRipper

Over the years, every now and then I've taken a look around to try to see where RegRipper is used. I noticed early on that it's included in several security-oriented Linux distros. So, I took the opportunity to compile some of the links I'd found, and I then extended those a bit with some Googling. I will admit, I was a little surprised to see how far RegRipper has gone over time, from a "here, look at this" perspective.

Not all of the below links are current, some are several years old. As such, they are not the latest and greatest; however, they may still apply and they may still be useful/valuable.

RegRipper on Linux (Distros) 
Kali, Kali GitLab
SANS SIFT 
CAINE  
Installing RegRipper on Linux 
Install RRv2.8 on Ubuntu 
CentOS RegRipper package 
Arch Linux  
RegRipper Docker Image 
Install RegRipper via Chocolatey 

Forensic Suites
Something I've always been curious about is why incorporating RegRipper into (and maintaining it through) a forensic analysis suite isn't more of "a thing"; even so, that doesn't prevent RegRipper and tools like it from being extremely valuable in a wide range of analyses.

RegRipper is accessible via Autopsy 
OSForensics Tutorial 
Launching RegRipper via OpenText/EnCase

When I worked for Nuix, I worked with Dan Berry's developers to build extensions for Yara and RegRipper (Nuix RegRipper Github) giving users of the Workstation product access to these open source tools in order to extend their capabilities. While both extensions really do a great deal to leverage the open source tool for use by the investigator, I was especially happy to see how the RegRipper extension turned out. The extension would automatically locate hive files, regardless of the Windows version (including the AmCache.hve file), automatically run the appropriate plugins against the hive, and then automatically incorporate the RegRipper output into the case file. In this way, the results were automatically incorporated into any searches the investigator would run across the case. During testing, we added images of Windows XP, Windows 2008 and Windows 7 systems to a case file, and the extension ran flawlessly.

It seems that RegRipper (as well as other tools) has been incorporated into KAPE, particularly into the Registry and timelining modules. This means that whether you're using KAPE for free or under the enterprise license, you're likely using RegRipper and other tools I've written, to some extent.

I look back on this section, and given how I've extended RegRipper since last year, I really have to wonder why there's no desire to incorporate RegRipper into (and maintain it through) a commercial forensic analysis suite. Seriously.

Presentations/Courses
I've covered RegRipper as a topic in this blog, as well as in my books. I've also given presentations discussing the use of RegRipper, as have others. Here are just a few links:

OSDFCon 2020 - Effectively Using RegRipper (video)
PluralSight Course 

RegRipper in Academia
Okay, I don't have a lot of links here, but that's because there were just so many. I typed "site:edu RegRipper" into a Google search and got a LOT of hits back; rather than listing the links, I'm just going to give you the search I ran and let you do with it what you will. Interestingly, the first link in the returned search results was from my alma mater, the Naval Postgraduate School; specifically, Jason Shaver's thesis from 2015.

On Writing DFIR Books, pt II

Part I of this series kicked things off for us, and honestly I have no idea how long this series will be...I'm just writing the posts without a specific plan or outline for the series. In this case, I opted to take an organic approach, and wanted to see where it would go.

Content
Okay, so you have an idea for a book, but about...what? You may have a title or general idea, but what's the actual content you intend to write about? Is it more than a couple of paragraphs; can you actually create several solid chapters without having to use a lot of filler and fluff? Back when I was actively writing books, this was at the forefront of my mind, not only because I was writing books, but also because I later got a question or two from others along these lines.

In short, I write about stuff I know, or stuff I've done. Not everything I've written about came from work; a good bit of it came from research I'd done, either following up on something I'd seen or based on an idea I had. For example, during one period not long after we'd transitioned to Windows 7, I wanted to follow up on creating, using and detecting NTFS alternate data streams (ADSs), and I'd found some content that provided alternate means for launching executables written to ADSs. I wanted to see if ADSs were impacted by scripting languages, and I added the results of what I found to the book content.
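
That kind of testing is easy to stand up, because on an NTFS volume, Windows lets you address "file:streamname" directly from a script. A quick sketch (the file and stream names are hypothetical):

import os

host = "notes.txt"

# Create the host file, then write a second stream "behind" it
with open(host, "w") as f:
    f.write("visible content")

with open(host + ":hidden", "w") as f:     # the ADS
    f.write("content tucked into an ADS")

# The file's reported size reflects only the primary stream...
print(os.path.getsize(host))

# ...but the ADS is still there, and readable if you know its name
# ("dir /r" will reveal it)
with open(host + ":hidden") as f:
    print(f.read())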

A number of years ago, I had access to an MSDN account, and that access was used to install operating systems and applications, and then to "see" the toolmarks or artifact constellations left behind by various activities, particularly identifying the impact of different applications and configurations. Unfortunately, the MS employee who'd provided me with the account moved on, and the license eventually expired; that access had allowed me to see the impact of different activities not only with respect to the operating system, but also with respect to different application suites.

Sources of content can include (and in my case, have included) just about anything; blog posts, blog post drafts, presentation materials, the results of ad hoc testing, etc. 

Outline
Something I've learned over the years is that the easiest way to get started writing book content is to start with an outline. The outline allows (or forces) you to organize your thoughts into a structured format, and allows you to see the flow of the book from a high level. This also allows you to start adding flesh to the bones, if you will, seeing where the structure is and adding that content we talked about. It also allows you to see where there is value in consistency, in doing the same or similar writing (or testing) in different locations in the book (i.e., "chapters") in order to be as complete as possible. A well-structured outline will allow you to see gaps.

Further, if you have a well-structured outline that you're working from, the book almost writes itself. 

Monday, September 06, 2021

Tips for DFIR Analysts, pt II

On the heels of my first post with this subject, I thought I'd continue adding tips as they came to mind...

I've been engaged with EDR frameworks for some time now. I first became aware of Carbon Black before it was "version 1.0", and before "carbonblack.com" existed. Since then, I've worked for several organizations that developed EDR frameworks (Secureworks, Nuix, CrowdStrike, Digital Guardian), and others that made use of frameworks created by others. I've also been very happy to see the development and growth of Sysmon, and used it in my own testing.

One thing I've been acutely aware of is the visibility afforded by EDR frameworks, as well as the extent of that visibility. This is not a knock against these tools...not at all. EDR frameworks and tools are incredibly powerful, but they are not a panacea. For example, most (I say "most" because I haven't seen all EDR tools) track process creation telemetry, but not process exit codes. As such, it can be detrimental to assume that because the EDR telemetry shows a process being created, the process successfully executed and completed. Some EDR tools can block processes based on specific criteria...I saw a lot of this at CrowdStrike, and shared more than a few examples in public speaking events. 

In other instances, the process may have failed to execute altogether. For example, it may have been detected by AV shortly after it started executing. I've actually been caught by Windows Defender; prior to initiating testing, I'll disable real-time monitoring, but leave Defender untouched other than that. Then, at some point during my testing (often around four hours in), Windows Defender recovers (real-time monitoring is automatically re-enabled), and what I was working on gets quarantined.

Did the executable throw an error shortly after launch, with a WER record, or an Application PopUp message, being generated in the Windows Event Log? 

Were you able to validate the impact of the executable or command? For example, if the threat actor was seen running a command that would impact the Windows Registry and result in Windows Event Log records being generated, were those artifacts validated and observed on the system?
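
One way to answer these questions during testing is to go straight to the Windows Event Log. The sketch below pulls the most recent Windows Error Reporting records (Event ID 1001 in the Application Event Log) via wevtutil; the query is standard wevtutil XPath syntax, run on the live test system:

import subprocess

query = "*[System[Provider[@Name='Windows Error Reporting'] and (EventID=1001)]]"
result = subprocess.run(
    ["wevtutil", "qe", "Application", f"/q:{query}",
     "/f:text", "/c:10", "/rd:true"],     # 10 most recent, newest first
    capture_output=True, text=True,
)
print(result.stdout)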

The overall point is that while EDR frameworks provide a tremendous amount of visibility, they do not provide ALL visibility.

What's Old Is New Again
Something a bit more on the deeper forensicy-technical side...

I ran across a couple of things recently that reminded me that what I found fascinating and dug into deeply twenty years ago may not be that well known today. For example, last year, Windows Defender/MpCmdRun.exe was identified as an LOLBin, and that was accompanied by the fact that files could be written to alternate data streams (ADSs). 

I also ran across something "new" regarding the "zone.identifier" ADSs associated with downloads (Edge, Chrome); specifically, the ZoneID ADSs are no longer 26 or 29 bytes in size. Now they're much larger, because they include more information, including HostURL and RefererURL entries, as illustrated in fig 1.

Fig 1: ZoneID ADS contents

This clearly provides some useful information regarding the source of the file. The ADS illustrated in fig 1 is from a PDF document I'd downloaded to my desktop via Chrome; as such, it wasn't found in my Downloads folder (a dead giveaway for downloads...), but the existence and contents of the ADS demonstrate that the file was indeed downloaded to the system.
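
Because the Zone.Identifier ADS is just INI-style text, it's trivial to read and parse programmatically. A minimal sketch, using a hypothetical file path (the ZoneId, ReferrerUrl and HostUrl field names are the ones commonly observed in browser-written streams):

import configparser

path = r"C:\Users\user\Desktop\report.pdf"     # hypothetical downloaded file

cp = configparser.ConfigParser()
with open(path + ":Zone.Identifier") as f:
    cp.read_string(f.read())

zt = cp["ZoneTransfer"]
print("ZoneId     :", zt.get("ZoneId"))        # 3 = Internet zone
print("ReferrerUrl:", zt.get("ReferrerUrl"))   # page the download came from
print("HostUrl    :", zt.get("HostUrl"))       # direct URL of the file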

Now, we'll just need to see if other means for downloading files...BITS jobs, LOLBins, etc...produce similar results.

On Writing DFIR Books, pt I

During my time in the industry, I've authored 9 books under three imprints, and co-authored a tenth.

There, I said it. The first step in addressing a problem is admitting you have one. ;-)

Seriously, though, this is simply to say that I have some experience, nothing more. During the latter part of my book writing experience, I saw others who wanted to do the same thing, but ran into a variety of roadblocks, roadblocks I'd long since navigated. As a result, I tried to work with the publisher to create a non-paid liaison role that would help new authors overcome many of those issues, so that a greater portfolio of quality books became available to the industry. By the time I convinced one editor of the viability and benefit of such a program, they had decided to leave their profession, and I had to start all over again, quite literally from the very beginning.

Authoring a book has an interesting effect, in that it tends to create a myth around the author, one that they're not aware of at first. It starts with someone saying, "...you wrote a book, so you must X..". Let "X" be just about anything. 

"Of course you're good at spelling, you wrote a book." Myth.

"You must be rolling in money, you wrote a book." Myth.

All of these things are assumptions, myths built up only to serve as obstacles. The simple fact is that if you feel like you want to write a book, you can. There's nothing stopping you, except...well...you. To that end, I thought I'd write a series of posts that dispel the myths and provide background and a foundation for those considering the possibility of writing a book.

There are a number of different routes to writing books. Richard Bejtlich has authored or co-authored a number of books, the most recent of which have been reprints of his Tao Security blog posts. Emma Bostian tweeted about her success with "side projects", the majority of which consisted of authoring and marketing her ebooks.

The Why
So, why write books at all? In an email authored by Gen Jim Mattis (ret) that later went viral, he stated:

By reading, you learn through others’ experiences, generally a better way to do business, especially in our line of work where the consequences of incompetence are so final for young men.

Yes, Gen Mattis was referring to warfighting, but the principle applies equally well to DFIR work. In his book, "Call Sign Chaos", Mattis further stated:

...your personal experiences alone aren't broad enough to sustain you.

This is equally true in DFIR work; after all, what is "analysis" but the analyst applying the sum total of their knowledge and experience to the amassed data? As such, the reason to write books is that no one of us knows everything, and we all have vastly different experiences. Even working with another analyst on the same incident response engagement, I've found that we've had different experiences due in large part to our different perspectives.

The simple fact is that these different perspectives and experiences can be profoundly valuable, but only if they're shared. A while back, I engaged in an analysis exercise where I downloaded an image and memory sample provided online, and conducted analysis based on a set of goals I'd defined. During this exercise, I extracted significantly different information from the memory sample using two different tools; I used Volatility to extract information about netstat-style network connections, and I also used bulk_extractor to retrieve a PCAP file, built from the remnants of actual packets extracted from memory. I shared what I'd done and found with one other analyst, and to be honest, I don't know if they ever had the chance to try it, or remembered to do so the next time the opportunity arose. Since then, I have encountered more than a few analysts to whom this approach had never occurred; while they haven't always seen significant value from the effort once they've tried it, it remains a part of their toolkit. I also included the approach in "Investigating Windows Systems", where it is available, and I assume more than one analyst has read it and taken note.
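
For reference, here's a sketch of that two-tool approach, assuming Volatility 2.x (vol.py with the netscan plugin) and bulk_extractor with its "net" scanner, which carves packet remnants from memory into a packets.pcap file; the profile and file names are illustrative:

import subprocess

MEM = "memory.img"

# Netstat-style connection listing from the memory sample
subprocess.run(["vol.py", "-f", MEM, "--profile=Win7SP1x64", "netscan"])

# Carve packet remnants into a PCAP for analysis in Wireshark, etc.
subprocess.run(["bulk_extractor", "-E", "net", "-o", "be_out", MEM])
# -> be_out/packets.pcap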

Speaking for myself, I began writing books because I couldn't find what I wanted on the shelves of the bookstore. It's as simple as that. I'd see a title with the words "Windows" and "forensics" in the title, and I'd open it, only to find that the dive did not go deep enough for me. At the time, many of the books related to Windows forensics were written by those who'd "grown up" using Linux, and this was clearly borne out in the approach taken, as well as the coverage, in the books.

The First Step
The first step to successfully writing a book is to read. That's right...read. By reading, we get to experience a greater range of authorship, see and decide what we enjoy reading (and what we pass on), and then perhaps use that in our own writing.

My first book was "Windows Forensics and Incident Recovery", published in 2004. The format and structure of chapter 6 of that book is based on a book I read while I was on active duty in the military titled "The Defense of Duffer's Drift". I liked the way that the author presented the material so much that I thought it would be a useful model for sharing my own story. As it turned out, that was the one chapter that my soon-to-be wife actually read completely, as it is the only chapter that isn't completely "technical".

With that, thoughts, comments and questions are, as always, welcome. Keep an eye open for more to come!


Friday, August 27, 2021

Building a Career in CyberSecurity

There's been a lot of discussion on social media around how to "break into" the cybersecurity field, not only for folks just starting out but also for those looking for a career change. This is not unusual, given what we've seen in the public news media around cyber attacks and ransomware; the idea is that cybersecurity is an exploding career field that is completely "green fields", with an incredible amount of opportunity.

Jax Scott recently shared a YouTube video (be sure to comment and subscribe!) where she provides five steps to level up any career, based on her "must read for anyone seeking a career in cybersecurity" blog post. Jax makes a lot of great points, and rather than running through each one and giving my perspective, I thought I'd elaborate a bit on one in particular.

Jax's first tip is to network. This is profound...really profound...for a number of reasons.

First, what I see a LOT of is folks on social media asking for advice on getting into the cybersecurity field, without realizing that the "cybersecurity field" is huge and expansive...there are a lot of different things you can do in the field. Networking lets you see what you may not see, and it affords you the opportunity to see different aspects of the field. For example, there are more technical (pen testing, digital forensics) aspects of "cybersecurity", as well as less technical (incident management, compliance, policies, etc.) aspects. Not everyone is suited to everything in this field...I once worked with/mentored an incident response consultant who got so anxious when it was their turn to go on-site that they had to check themselves into the hospital, and another analyst had to take the engagement.

Second, when you do network, make sure that it's purposeful and intentional. Clicking "like" or "follow", or just sending someone a blind connection request on LinkedIn, isn't really "networking", because it's too passive. If you're networking to develop an understanding of the field, and to find a (new) job, just following or connecting to someone isn't going to get you there.

Networking with intent affords us something else, as well. In his book, "Call Sign Chaos", retired Marine general Jim Mattis stated that "...your personal experiences alone aren't broad enough to sustain you." This is just as true in the cybersecurity field as it is to the warfighter, and intentional networking allows us to broaden our experiences through purposeful engagement with others.

I see recommendations on LinkedIn all the time with tips for how to develop your "brand", and most include things such as leaving a comment rather than liking a post, referring to/referencing other posts, as well as other activities that are active, rather than passive. All of these amount to the same thing...purposeful, intentional networking.

Be sure to check out and subscribe to Jax's YouTube videos for a lot of great insight and information, as well as follow the "Hackerz and Haecksen" podcast for some insightful interviews and content!

Thursday, August 26, 2021

Tips for DFIR Analysts

Over the years as a DFIR analyst...first doing digital forensics analysis, and then incorporating that analysis as a component of IR activity...there have been some stunningly simple truths that I've learned, truths that I thought I'd share. Many of these "tips" are truisms that I've seen time and time again; I've recognized that they made much more sense, and had more value, once they were "named".

Tips, Thought, and Stuff to Think About

Computer systems are a finite, deterministic space. The adversary can only go so far, within memory or on the hard drive. When monitoring computer systems and writing detections, the goal is not to write the perfect detection, but rather to force the adversary into a corner, so that no matter what they do, they will trigger something. So, it's a good thing to have a catalog of detections, particularly if it is based on things like, "...we don't do this here..".

For example, I worked with a customer who'd been breached by an "APT" the previous year. During the analysis of that breach, they saw that the threat actor had used net.exe to create user accounts within their environment, and this was something they knew they did NOT do; there were specific employees who managed user accounts, and they used a very specific third-party tool to do so. When they rolled out an EDR framework, they wrote a number of detection rules related to user account management via net.exe. I was asked to come on-site to assist them when the threat actor returned; this time, they almost immediately detected the presence of the threat actor. Another good example: how many of us log into our computer systems and type "whoami" at a command prompt? I haven't seen many users do this, but I've seen threat actors do this. A lot.
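
The detection logic itself doesn't have to be complicated. Here's a sketch written against generic process-creation telemetry (any EDR feed, or Sysmon Event ID 1, supplies similar fields); the field names are illustrative, and the rule simply encodes "we don't create accounts with net.exe here":

def is_suspect_account_creation(event):
    """Flag net.exe/net1.exe being used to add a user account."""
    image = event.get("image", "").lower()
    cmdline = event.get("command_line", "").lower()
    if not image.endswith(("\\net.exe", "\\net1.exe")):
        return False
    return "user" in cmdline and "/add" in cmdline

# Example telemetry record (hypothetical)
evt = {
    "image": r"C:\Windows\System32\net.exe",
    "command_line": "net user backdoor P@ssw0rd! /add",
}
print(is_suspect_account_creation(evt))    # True -> alert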

From McChrystal's "Team of Teams", there's a difference between "complexity" and "complicated". We often refer to computer systems and networks as "complex", when they are really just complicated, and inherently knowable. We, as humans, tend to make things that are complicated out to be complex.

A follow-on to the previous tip is that the term "sophisticated" is over-used to describe a significant number of attacks. When you look at the data, very often you'll see that attacks are only as sophisticated as they need to be, and in most cases, they really aren't all that sophisticated: an RDP server with an account password of "password" (I've seen this recently...yes, during the summer of 2021), or a long-patched vulnerability with a freely available published exploit (i.e., JexBoss was used by the Samas ransomware actors during the first half of 2016).

When performing DF analysis, the goal is to be as comprehensive and thorough as possible. A great way to achieve this is through automation. For example, I developed RegRipper because I found that I was doing the same things over and over again, and I wanted a way to make my job easier. The RegRipper framework allowed me to add checks and queries without having to write (or rewrite) entirely new tools every time, as well as provided a framework for easy sharing between analysts.
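
The core idea behind the framework is simple enough to sketch. RegRipper itself is written in Perl, so the Python below is purely conceptual, but it illustrates the pattern: each check is a small, self-contained plugin registered with the framework, so adding a new query never means rewriting the tool, and plugins can be shared between analysts as single files:

CHECKS = []

def plugin(func):
    """Register a check with the framework."""
    CHECKS.append(func)
    return func

@plugin
def run_key(data):
    return f"Run key entries: {data.get('run', [])}"

@plugin
def typed_paths(data):
    return f"TypedPaths: {data.get('typedpaths', [])}"

def rip(data):
    """Run every registered check against the parsed data source."""
    for check in CHECKS:
        print(check(data))

rip({"run": ["C:\\evil.exe"], "typedpaths": ["\\\\fileserver\\c$"]})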

TCP networking begins with a three-way handshake; UDP is "fire and forget". This one tip helped me a great deal during my early days of DFIR consulting, particularly when having discussions with admins regarding things like firewalls and switches.
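
The difference is visible in two calls, sketched here with Python's socket module (the host and ports are illustrative): TCP's connect() doesn't return until the SYN/SYN-ACK/ACK exchange completes or fails, while UDP's sendto() returns immediately, with no indication the datagram ever arrived:

import socket

# TCP: connect() performs the three-way handshake
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3)
try:
    tcp.connect(("192.0.2.10", 80))    # blocks until the handshake completes
    print("TCP: handshake completed")
except OSError as e:
    print("TCP: handshake failed:", e)
finally:
    tcp.close()

# UDP: "fire and forget"...sendto() succeeds whether or not anyone is listening
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello?", ("192.0.2.10", 514))
udp.close()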

Guessing is lazy. Recognize when you're doing it before someone else does. If there is a gap in data or logs, say so. At some point, someone is going to see your notes or report, and see beyond the veil of emphatic statements, and realize that there are gaping holes in analysis that were spackled over with a thin layer of assumption and guesswork. As such, if you don't have a data source...if firewall logs were not available, or Windows Event Logs were disabled, say so.

The corollary to the tip on "guessing" is that nothing works better than a demonstration. Years ago, I was doing an assessment of a law enforcement headquarters office, and I was getting ready to collect password hashes from the domain server using l0phtcrack. The admin said that the systems were locked down and there was no way I was going to get the password hashes. I pressed the Enter key down, and had the hashes almost before the Enter key returned to its original position. The point is, rather than saying that a threat actor could have done something, a demonstration can drive the point home much quicker.

Never guess at the intentions of a threat actor. Someone raised in the American public school system, with or without military or law enforcement experience, is never going to be able to determine the mindset of someone who grew up in the cities of Russia, China, etc. That is, not without considerable training and experience, which many of us simply do not have. It's easy to recognize when someone's guessing at the threat actor's intention, because they'll start off a statement with, "...if I were the threat actor...".

If no one is watching, there is no need for stealth, and the lack of stealth does not belie sophistication. I was in a room with other analysts discussing the breach with the customer when one analyst described what we'd determined through forensic analysis as, "...not terribly sophisticated...", in part because the activity wasn't very well hidden, nor did the attacker cover their tracks. I had to later remind the analyst that we had been called in a full 8 months after the threat actor's most recent activity.

The adversary has their own version of David Bianco's "Pyramid of Pain", and they're much better at using it. David's pyramid provides a framework for understanding what we (the good guys) can do to impact and "bring pain" to the threat actor. It's clear from engaging in hundreds of breaches, either directly or indirectly, that the bad guys have a similar pyramid of their own, and that they're much better at using theirs.

We're not always right, or correct. It's just a simple fact. This is also true of "big names", ones we imagine are backed by significant resources (spell checkers, copy editors, etc.), and as such, we assume are correct and accurate. As such, we shouldn't blindly accept what others say in open reporting, not without checking and critical thinking.

There are a lot of assumptions in this industry. I'm sure it's the same in other industries, but I can't speak to those. I've seen more than a few assumptions regarding royalties for published books; new authors starting out with big publishers may start at 8%, or less...and that's just for paper copies (not electronic), and only for English language editions. I had a discussion once with a big name in the DFIR community who assumed that because I worked for a big name company, of course I had access to commercial forensic suites; they'd assumed that my commenting on not having access to such suites was a load of crap. When I asked what made them think I would have access to these expensive tool sets, they ultimately admitted that yes, they'd simply assumed I would.

If you're new to DFIR, or even if you've been around for a while, you've probably found interviewing for a job to be a nerve-racking, anxiety-producing affair. One thing to keep in mind is that most of the folks you're interviewing with aren't terribly good at it, and are probably just as nervous as you. Think about it...how many times have you seen courses offered in how to conduct a job interview, from the perspective of the interviewer?

Friday, June 25, 2021

What We Know About The Ransomware Economy

Okay, I think we can all admit that ransomware has consumed the news cycle of late, thanks to high-visibility attacks such as Colonial Pipeline and JBS. Interestingly enough, there wasn't this sort of reaction the second time the City of Baltimore got attacked, which (IMHO) says more about the news cycle than anything else.

However, while the focus is on ransomware for the moment, it's a good time to point out that there's more to this than just the attacks that get blasted across news feeds. That is, ransomware itself is an economy, an eco-system, and that moniker goes a long way toward describing why victims of these attacks are impacted to the extent that they are. What I mean by this is that everything...EVERYTHING...about what goes into a ransomware attack is directed at the final goal of the threat actor...someone...getting paid. What goes further to making this an eco-system is that when a ransomware attack does result in the threat actor getting paid, there are also others in the supply chain (see what I did there??) who are getting paid.

I was reading Unit42's write-up on the Prometheus ransomware recently, and I have to say, a couple of things really stood out for me, one being the possible identification of a "false flag". The Prometheus group apparently made a claim that is unsupported by the data Unit42 has observed. Even keeping collection bias in mind, this is still very interesting. What would be the purpose of such a "false flag"? Does it tell us that the Prometheus group has insight into the workings of most counter threat intel (CTI) functions; have they "seen" CTI write-ups and developed their own version of the MITRE ATT&CK matrix?

Regardless of the reason or rationale behind the statement, Unit42 is...wait for it...relying on their data.  Imagine that!

Another thing that stood out is the situational awareness of the ransomware developer.

When Prometheus ransomware is executed, it tries to kill several backups and security software-related processes, such as Raccine...

Well, per the article, this is part of the ransomware itself, and not something the threat actors appear to be doing themselves. Having been part of more than a few ransomware investigations over the years, relying on both EDR telemetry and #DFIR data, I've seen different levels of situational awareness on the part of threat actors. In some cases where the EDR tool blocked a threat actor's commands, I've seen them either give up, or disable or remove AV tools. In other cases, the threat actor has removed AV tools without first performing any query, so the question becomes: was that tool even installed on the system?

This does, however, speak to how the barrier for entry has been lowered; that is, a far less sophisticated actor is able to be just as effective, or more so. Rather than having to know and manage all the parts of the "business", rather than having to invest in the resources required to gain access, navigate the compromised infrastructure, and then develop and deploy ransomware...you can just buy those things that you need. Just like the supply chain of a 'normal' business. Say that you want to start a business that's going to provide products to people...are you going to build your own shipping fleet, or are you going to use a commercial shipper (DHL, FedEx, UPS, etc.)?

Further, from the article:

At the time of writing, we don’t have information on how Prometheus ransomware is being delivered, but threat actors are known for buying access to certain networks, brute-forcing credentials or spear phishing for initial access.

This is not unusual. This write-up appears to be based primarily on OSINT, and does not seem to be derived from actual intrusion data or intelligence. The commands listed in the article for disabling Raccine are reportedly embedded in the ransomware executable itself, and not something derived from EDR telemetry, nor DFIR analysis. So what this is saying is that threat actors generally gain access by brute-forcing credentials (or purchasing them), or spear phishing, or by purchasing access from someone who's done either of the first two.

Again, this speaks to how the barrier for entry has been lowered. Why put the effort into gaining access yourself when you can just purchase access that someone else has already established?

We’ve compiled this report to shed light into the threat posed by the emergence of new ransomware gangs like Prometheus, which are able to quickly scale up new operations by embracing the ransomware-as-a-service (RaaS) model, in which they procure ransomware code, infrastructure and access to compromised networks from outside providers. The RaaS model has lowered the barrier to entry for ransomware gangs.

Purchasing access to compromised computer systems...or compromising computer systems for the purpose of monetizing that access...is nothing new. Let's look back 15+ years to when Brian Krebs interviewed a botherder known as "0x80". This was an in-person interview with a purveyor of access to compromised systems, which is just part of the eco-system. Since then, the whole thing has clearly been "improved upon".

This just affirms that, like many businesses, the ransomware economy, the eco-system, has a supply chain. This means not only that there are specializations within that supply chain and that the barrier to entry is lowered, but also that attribution of these cybercrimes is going to become much more difficult, and possibly tenuous, at best.

Thoughts on Assessing Threat Actor Intent & Sophistication

I was reading this Splunk blog post recently, and I have to say up front, I was disappointed by the fact that the promise of the title (i.e., "Detecting Cl0p Ransomware") was not delivered on by the remaining content of the post. Very early on in the blog post is the statement:

Ransomware is by nature a post-exploitation tool, so before deploying it they must infiltrate the victim's infrastructure. 

Okay, so at this point, I'm looking for something juicy, some information regarding the TTPs used to "infiltrate the victim's infrastructure" and to locate files of interest for staging and exfil, but instead, the author(s) dove right into analyzing the malware itself, through reverse engineering. Early in that malware RE exercise is the statement:

This ransomware has a defense evasion feature where it tries to delete all the logs in the infected machine to avoid detection.

The embedded command is essentially a "one-liner" used to list and clear all Windows Event Logs, leveraging wevtutil.exe. However, while used for "defense evasion", it occurred to me that this command is not, in fact, intended to "avoid detection". After all, with ransomware, the threat actors want to get paid, so they want to be detected. In fact, to ensure they're detected, the actors put ransom notes on the system, with clear statements, declarations, warnings, and time limits. In this case, the ransom note says that if action is not taken in two weeks, the files will be deleted. So, yes, it's safe to say that clearing all of the Windows Event Logs is not about avoiding detection. If anything, it's really nothing more than a gesture of dominance, the threat actor saying, "look at what I can do to your system". 

So, what is the purpose of clearing the Windows Event Logs? As a long-time #DFIR analyst, I can say that the value of the Windows Event Logs in such cases is to assist in a root-cause analysis (RCA) investigation, and clearing some Windows Event Logs (albeit not ALL of them) will hobble (but not completely remove) a responder's ability to determine aspects of the attack cycle, such as lateral movement. By tracing lateral movement, the investigator can determine the original system used by the threat actor to gain access to the infrastructure, the "nexus" or "foothold" system, and from there, determine how the threat actor gained access. I say "hobble" because clearing the Windows Event Logs does not obviate the investigator's ability to recover the event records; it simply requires a bit more effort. However, the vast majority of organizations impacted by ransomware are not conducting full investigations or RCAs, and #DFIR consulting firms are not publicly sharing ransomware trends and TTPs, anonymized through aggregation. In short, clearing the Windows Event Logs, or not, would likely have little impact either way on the response.

But why clear ALL Windows Event Logs? IMHO, it was done to ensure that the ransomware attack was, in fact, detected. Perhaps the assumption is that most organizations have some modicum of a detection capability, and even the most rudimentary SIEM or EDR framework should throw an alert of some kind in the face of the "wevtutil cl" command, or when the SIEM starts receiving events indicating that Windows Event Logs were cleared (especially if the Security Event Log was cleared!).
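
And that's the thing: clearing a log leaves its own record. Event ID 1102 is written to the Security Event Log when that log is cleared, and Event ID 104 is written to the System Event Log when other logs are cleared. A rudimentary sketch of checking for those records with wevtutil:

import subprocess

def recent_log_clears(log, event_id, count=5):
    """Return the most recent 'log cleared' records from the named log."""
    result = subprocess.run(
        ["wevtutil", "qe", log, f"/q:*[System[(EventID={event_id})]]",
         "/f:text", f"/c:{count}", "/rd:true"],
        capture_output=True, text=True,
    )
    return result.stdout

print(recent_log_clears("Security", 1102))   # "The audit log was cleared"
print(recent_log_clears("System", 104))      # other logs cleared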