Live Response
It's been a while since I posted anything on the topic of live response, but I recently ran across something that really needed to be shared as widely as possible.
Specifically, Hadar Yudovich recently authored an article on the Illusive Networks blog about finding time stamps associated with network connections. His blog post is pretty fascinating, as he says some things that are probably true for all of us; in particular, we'll see a native tool (such as netstat.exe) and assume that the data the tool presents is all that there is. We simply don't remember that MS did not create an operating system with DFIR in mind. However, Hadar demonstrates that there is a way to get time stamps for network connections on Windows systems, and he wrote a PowerShell script to do exactly that.
I downloaded and ran the PowerShell script from a command prompt (not "Run as administrator") using the following command line:
powershell -ExecutionPolicy Bypass .\Get-NetworkConnections.ps1
The output is in a table format, but for anyone familiar with PowerShell, I'm sure that it wouldn't be hard to modify the output to CSV, or some other format. Either way, this would be a great addition to any volatile data collection script.
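For example, something like the following might do it (a rough sketch, assuming the script writes objects to the pipeline rather than pre-formatted text; the output file name is arbitrary):

powershell -ExecutionPolicy Bypass -Command "& .\Get-NetworkConnections.ps1 | Export-Csv .\netconns.csv -NoTypeInformation"

If the script formats its own output instead, the Export-Csv piece would need to go inside the script, after the connection objects are built but before any Format-Table call.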
This is pretty cool stuff, and I can see it being included in volatile data collection processes going forward.
Processes and Procedures
Speaking of collecting volatile data...
A couple of things I've seen during my time as an incident responder are (a) a desire within the community for the "latest and greatest" volatile data collection script/methodology, and (b) a marked reticence to document processes and procedures. I mention these two specifically because, from my perspective, they seem to be diametrically opposed; after all, what is a volatile data collection script but a documented process?
Perhaps the most common argument I've heard against documented processes over the years is that having them "stifles creativity". My experience has been the opposite...by documenting my analysis process for activities such as malware detection within an acquired image, I'm able to ultimately spend more time on the fun and interesting aspects of analysis. Why is that? Well, for me, a documented process is a living document, one that is continually used and updated as necessary. Also, the documented process serves as a means for automation. As such, as I learn something new, I add it to the process, so that I don't forget it...God knows that most days, I can't remember what I had for breakfast, so how am I going to remember something that I read (rather than found or did as part of my own analysis) six months ago? The simple fact is that I don't know everything, and I haven't seen everything, but I can take those things I have seen, as well as what others have seen (culled from blog posts, etc.), and incorporate them into my process. Having the process automated means that I spend less time doing those things that can be automated, and more time actually investigating those things that I need to be...well...investigating.
An example of this is eventmap.txt; I did not actually work the engagement where the "Microsoft-Windows-TaskScheduler/709" event source/ID pair was observed. However, based on what it means, and the context surrounding the event being recorded, it was most definitely something I wanted to incorporate into my analysis process. Even if I never see the event again in the next 999 cases I work, that 1000th case where I do see it will make it worth the effort to document it.
Documenting processes and procedures for many is a funny thing. Not long ago, with a previous employer, I was asked to draft analysis processes and workflows, because other analysts said that they didn't "have the credibility". Interestingly enough, some of those who said this are now active bloggers. Further, after I drafted the workflows, they were neither reviewed, nor actually used. However, I found (and still find) a great deal of value in having a documented process or workflow, and I continue to use and develop my own.
OSINT, IOCs
All this talk of processes and workflows logically leads to the question: where do I get the data and information that I turn into intelligence and incorporate into my workflows? Well, for the most part, I tend to get the most intel from cases I work. What better way to go from GB or TB of raw data to a few KB of actual information, with the context to turn that information into intelligence, than to do so working actual DFIR cases? Ultimately, that's where it all starts, right?
So, we can learn from our own cases, but we can also learn from what others have learned and shared. Ah, that's the key, though, isn't it...sharing the information and intelligence. If I learn something, and keep it to myself, what good is it? Does it mean that I have some sort of "power"? Not at all; in fact, it's quite the opposite. However, if I share that information and/or intelligence with others, then I get questions and different perspectives, which allow us to develop and sharpen that intelligence. Then someone else can use that intelligence to facilitate their analysis, and perhaps include additional data sources, extending the depth and value of the intelligence. As such, pursuing OSINT sources is a great way to not only further develop your own intel, but to develop indicators that you can then use to further your analysis, and by extension, your intel.
This recent FireEye blog post is a great example (it's just one of many) of OSINT material. For example, look at the reference to the credential theft tool, HomeFry. This is used in conjunction with other tools; that alone is pretty powerful. In the past, we've seen a variety of sources say things like, "...group X uses Y and Z tools..." without any indication as to how the tools are used. In one of my own cases, it was clear that the adversary used Trojan A to get onto an initial jump system, installed Trojan B from there, and then used that system as a conduit to push Trojan B out to other systems. So, it's useful to know that certain tools are used, but yes, those tools are easily changed. Knowing how the tools are used is even more valuable. There's also a reference to lure documents that exploit the CVE-2017-11882 vulnerability; here's the Palo Alto Networks analysis of the exploit in the wild (they state that they skipped some of the OLE metadata...), and here are some apparent PoC exploits.
This write-up from ForcePoint is also very informative. I know a lot of folks in the industry would look at the write-up and say, "yeah, so the malware persists via the user's Run key...big deal." Yeah, it IS a big deal. Why? Because it still works. You may have seen the use of the Run key for persistence for years, and may think it's passe, but what does it say about the security industry as a whole that this still works, and that this persistence mechanism is still being found during post-mortem response?
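For what it's worth, checking that key proactively is trivial; the sketch below is just a quick look at the hives currently loaded under HKEY_USERS on the local system, listing the Run key values for each:

Get-ChildItem Registry::HKEY_USERS |
  ForEach-Object { Get-ItemProperty -Path "Registry::$($_.Name)\Software\Microsoft\Windows\CurrentVersion\Run" -ErrorAction SilentlyContinue }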
Here's a fascinating write-up on the QWERTY ransomware from BleepingComputer. Some of what I thought was fascinating about it is the use of batch files and native tools...see, the bad guys automate their stuff, so maybe we should, too...right? Part of what the ransomware does is use two different commands to delete Volume Shadow Copies, and wbadmin to delete backups. So how is this information valuable? Well, do you use any of these tools in your organization? If not, I'd definitely enable filters/alerts for their use in an EDR framework. If you do use these tools, do you use them in the same way as indicated in the write-up? No? Well, then...you have alerts you can add to your EDR framework.
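Even if you don't have an EDR platform in place yet, the native event logs can give you a sense of how "noisy" such an alert would be. The sketch below assumes process creation auditing (event ID 4688), and ideally command line logging, is enabled, and is run from an elevated prompt since it reads the Security Event Log; the tool names are taken from the behavior described above:

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4688} |
  Where-Object { $_.Message -match 'vssadmin|wbadmin|shadowcopy' } |
  Select-Object TimeCreated, Message

If legitimate admin activity in your environment uses those tools, you'll see it right away, and you can tune the alert (or your EDR filter) accordingly.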
Here's another very interesting blog post from Carbon Black, in part describing the use of MSOffice doc macros. I really like the fact that they used Didier's oledump.py to list the streams and extract the VBS code. Another interesting aspect of the post is something we tend to see a great deal of from the folks at Carbon Black, and that's the use of process trees to illustrate the context behind a suspicious or apparently malicious process. Where did it come from? Is this something we see a great deal of? For example, in image 7, we see that wmiprvse.exe spawns certutil.exe, which spawns conhost.exe; is this something that we'd want to create an alert for? If we do, and we run it in testing mode, or search the EDR data that we already have available, do we see a great deal of this? If not, maybe we'd want to put that alert into production mode.
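If you have Sysmon deployed, the same parent/child question can be asked of the data you already have; the sketch below (assuming Sysmon process creation events, ID 1, are being recorded) pulls events where wmiprvse.exe is the parent of certutil.exe:

Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Sysmon/Operational'; Id=1} |
  Where-Object { $_.Message -match 'ParentImage:.*\\wmiprvse\.exe' -and $_.Message -match 'Image:.*\\certutil\.exe' } |
  Select-Object TimeCreated, Message

Few or no hits over a reasonable period is a good indication that the corresponding alert would be high fidelity in your environment.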
Recently, US CERT published an alert, TA18-074A, providing intel regarding specific threat actors. As an example of what can be done with this sort of information and intelligence, Florian Roth published some revised Yara rules associated with the alert.
Something I saw in the alert that would be useful for both threat hunters and DFIR (i.e., dead box) analysis is the code snippets seen in section 2 of the alert. Specifically, modified PHP code appears as follows:
<img src="file[:]//aa.bb.cc[.]dd/main_logo.png" style="height: 1px; width: 1px;" />
Even knowing that the IP address and file name could change, how hard would it be to create a Yara rule that looks for elements of that line, such as "img src" AND "file://"? More importantly, how effective would the rule be? Would it be a high fidelity rule, in your environment? I think that the answer to the last question depends on your perspective. For example, if you're threat hunting within your own environment, and you know that (a) you have PHP code, and (b) your organization does NOT use "img src" in any of your code, then this would be pretty effective.
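Even before you get to a Yara rule, a quick-and-dirty version of the same check is easy enough to script; the sketch below (the web root path is just a placeholder) flags PHP files that contain both elements:

Get-ChildItem -Path C:\inetpub\wwwroot -Filter *.php -Recurse |
  Where-Object { ($raw = Get-Content $_.FullName -Raw) -match 'img src' -and $raw -match 'file://' } |
  Select-Object FullName

The same two strings would become the core of the Yara rule itself; the point is less the specific tool than deciding, based on what you know about your own code base, whether the logic will be high fidelity.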
Something else that threat hunters and DFIR analysts alike may want to consider adding to their analysis processes is from the section of the alert that talks about the adversary's use of modified LNK files to collect credentials from systems. Parsing LNK files and looking for "icon filename" paths to remote systems might not seem like something you'd see a great deal of, but I can guarantee you that it'll be worth the effort the first time you actually do find something like that.
Side Note: I updated my lnk parsing tool (as well as the source files) to display the icon filename path; the tool would already collect it, it just wasn't displaying it. It does now.
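As another side note, if you want to check for this sort of thing on a live system and don't have a parser handy, the shell itself will give you the icon location. The sketch below is just a quick look (a COM shortcut object won't show you everything a real LNK parser will), flagging shortcuts under the current user's profile whose icon path points to a remote (UNC) location:

$shell = New-Object -ComObject WScript.Shell
Get-ChildItem -Path $env:USERPROFILE -Filter *.lnk -Recurse -ErrorAction SilentlyContinue |
  ForEach-Object {
    $icon = $shell.CreateShortcut($_.FullName).IconLocation
    if ($icon -like '\\*') { "$($_.FullName) -> $icon" }
  }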
If you're looking into threat feeds as a "check in the compliance box", then none of this really matters. But if you're looking to really develop a hunting capability, either during DFIR/"dead box" analysis or on an enterprise-wide scale, the real value of threat intel is only realized in the context of your infrastructure and operational policies.
Harlan, thanks for the shout-out. It's an honor :)
I was wondering (also after reading your previous blog post about How-Tos) - do you use timelines when doing live response? If you do, then how do you do it?
I haven't had the opportunity to perform live response in a long time...when I went to SecureWorks, we had a different methodology for accessing systems and collecting information for analysis.
However, I did work with a customer who was doing exactly what you ask about...they were pulling data from systems and using that data to create timelines. ;-)
I don't know exactly how they pulled the data, but the process should be pretty easy/straightforward...use various tools:
fls.exe - file system metadata
wevtutil - Windows Event Logs
'reg save' - registry hive files
You may also want to copy or pull the AmCache.hve file
That's a lot of the very basic information you'd need for a simple timeline. Run that data through the process and you'll have a timeline.
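For example, something along these lines would do it (a rough sketch, run from an elevated command prompt; it assumes fls.exe from TSK is available alongside your other tools, and the drive letter and output file names are placeholders):

fls -r -m C: \\.\C: > bodyfile.txt
wevtutil epl System System.evtx
wevtutil epl Security Security.evtx
reg save HKLM\SYSTEM system.hiv /y
reg save HKLM\SOFTWARE software.hiv /y

The AmCache.hve file is generally locked on a live system, so pulling it usually means a raw copy tool (FTK Imager, RawCopy, etc.) rather than a simple copy command. From there, the fls output is already in bodyfile format, the exported .evtx and hive files get parsed into events, and it all goes into the timeline.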