What's old is new
Some discussions I've been part of (IRL and online, both recently and a while ago) have been about what it takes to get started in the DFIR field, and one of the activities I've recommended and pushed, over certifications, is running one's own experiments and blogging about the findings and the experience. The immediate push-back from many on that topic is often with respect to content, and my response is that I'm not looking for something new and innovative; I'm more interested in how well you express yourself.
That's right, the things we blog about don't have to be new or innovative. Not long ago, Richard Davis tweeted that he'd put together a video explaining shellbag forensics. And that's great...it's a good thing that we're talking about these things. Some of these topics...shellbags, ShimCache, etc...are poorly understood and need to be discussed regularly, because not everyone who needs to know this stuff is going to see it when they need it. I have blog posts going back almost seven and a half years on the topic of shellbags that are still valid today. Taking that even deeper, I also have blog posts going back almost as far about shell items, the data blobs that make up shellbags and LNK files (and, by extension, JumpLists), and that also provide the building blocks of a number of other important evidentiary items found in the Windows Registry, such as RecentDocs and ComDlg32 values.
My point is that DFIR is a growing field, and many of the available pipelines into the industry don't provide complete coverage of a lot of topics. As such, it's incumbent upon analysts to keep up on things themselves, something that can be done through mentoring and self-exploration. A great way to get "into" the industry is to pick a topic or area and start blogging about your own experience and findings. Develop your ability to communicate in a clear and concise manner.
It's NOT just for the military
Over the two decades that I've been working in the cybersecurity field, there've been a great many times when I've seen or heard something that sparked a memory from my time on active duty. When I was fresh out of the military, I initially found that a lot of folks, particularly in the private sector, were very reluctant to hear about anything that had to do with the military. If I, or someone else, started a conversation with, "...in the military...", the folks on the other side of the table would cut us off and state, "...this isn't the military, that won't work here."
However, over time, I began to see that not only would what we were talking about definitely work, but sometimes folks would talk about "military things" as if they were doing them, when nothing of the sort was actually being applied. Not at all. It was just something they were saying to sound cool.
"Defense-in-depth" is something near and dear to my heart, because throughout my time in the military, it was something that was on the forefront of my mind from pretty much the first day of training. Regardless of location...terrain model or the woods of Quantico...or the size of the unit...squad, platoon, company...we were always pushed to consider things like channelization and defense-in-depth. We were pushed to recognize and use the terrain, and what we had available. The basic idea was to have layers to the defense that slowed down, stopped, or drove the enemy in the direction you wanted them to go.
The same thing can be applied to a network infrastructure. Reduce your attack surface by making sure, for example, that the DNS server is only providing DNS services...not RDP and a web server as well. Don't make it easy for the bad guy, and don't leave "low-hanging fruit" lying around within easy reach.
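To make that a bit more concrete, here's a minimal Python sketch of the idea...checking that a server is exposing only the service it's supposed to. The host address and port list are hypothetical examples, not taken from any particular environment, and this only looks at TCP listeners.

#!/usr/bin/env python3
# Minimal sketch: confirm a server exposes only the services it should.
# The host and port list below are hypothetical examples.
import socket

HOST = "10.0.0.53"                 # hypothetical internal DNS server
EXPECTED_OPEN = {53}               # DNS only
CHECK = {22, 53, 80, 443, 3389}    # ports worth confirming are closed

def is_open(host, port, timeout=1.0):
    # Return True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in sorted(CHECK):
    state = is_open(HOST, port)
    note = ""
    if state and port not in EXPECTED_OPEN:
        note = "   <-- unexpected service; shut it down or justify it"
    print(f"{HOST}:{port} {'open' if state else 'closed'}{note}")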
A great deal of what the military does in the real world can be easily transitioned to the cyber world, and over the years that I've been working in this space, I have seen/heard folks say that "defense-in-depth has failed"...yet, I've never seen it actually employed. Things like the use of two-factor authentication, segmentation, and role-based access can make it such that a bad guy is going to be really noisy in their attempts to compromise your network...so put something in place that will "hear" them (i.e., EDR, monitoring).
Not a military example, but did you see the first "Mission: Impossible" movie? Remember the scene where Tom Cruise's character made it back to the safe house, and when he got to the top of the stairs, took the light bulb out of the socket and crushed it in his jacket? He then spread the shards out on the now-darkened hallway floor as he backed toward his room. This is a really good example for network defense, as he made the one pathway to the room more difficult to navigate, particularly in a stealthy manner. If you kept watching, you saw that he was awakened by someone stepping on a shard from the broken light bulb, alerting him to their presence.
Compartmentalization and segmentation are other things that security pros talk about often; if someone from HR has no need whatsoever to access information in, say, engineering or finance, there should be controls in place, but more importantly, why should they be able to access it at all? I've seen a lot of what I call "bolt-on M&As", where a merger and acquisition takes place and fat pipes with no controls are used to connect the two organizations. What was once two small, flat networks is now one big, flat network, where someone in marketing from company A can access all of the manufacturing docs in company B.
The US Navy understands compartmentalization very well; this is why the bulkheads on Navy ships go all the way to the ceiling. In the case of a catastrophic failure, where flooding occurs, sections of the ship can be shut off from access to others. Consider the fate of the USS Cole versus that of the Titanic. 'Nuff said!
Sometimes, the military examples strike too close to home. I've been reading Ben MacIntyre's Rogue Heroes, a history of the British SAS. In the book, the author describes the preparation for a raid on the port of Benghazi: while the raiders were practicing in a British-held port, a sentry noticed some suspicious activity, only to be informed, in quite colorful language, to mind his own business. And he did. According to the author, this was later repeated at the target port on the night of the raid...a sentry aboard a ship noticed something going on and inquired, only to be informed (again, in very colorful language) that he should mind his own business. I've seen a number of incidents where this very example has applied...in fact, I've seen it many times, particularly during targeted adversary investigations. During one particular investigation, while examining several systems, I noticed activity indicative of an admin logging into a system (during regular work hours, and from the console), "seeing" the adversary's RAT on the system, and removing it. Okay, I get that the admin might not be familiar with the RAT and would just remove it from a system, but when they did the same thing on a second system, and then failed to inform anyone of what they'd seen or done, there's no difference between those actions and what the SAS troopers had encountered in the African desert.
Intel
I recently ran across this WPScans blog post, which discusses finding PHP and WordPress "backdoors" using a number of methods. I took the opportunity to download the archive linked at the end of the blog post and run a Yara rule file I've been maintaining across it, and I got some interesting hits.
The Yara rule file I used started out as a collection of rules pulled in part from various rules found online (DarkenCode, Thor, etc.), but over time I have added rules (or modified existing ones) based on web shells I've seen on IR engagements, as well as shells others have seen and shared with me. For those who've shared web shells with me, I've shared either rules or snippets of what could be included in rules back with them.
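If you want to try the same sort of thing yourself, here's a minimal sketch using the yara-python module to run a rule file across an extracted archive of suspect PHP files. The rule file name and target directory are hypothetical; point them at your own rules and at wherever you unpacked the archive.

#!/usr/bin/env python3
# Minimal sketch: run a Yara rule file across a directory of suspect PHP
# files. Requires the yara-python package; the paths are hypothetical.
from pathlib import Path
import yara

RULE_FILE = "webshells.yar"          # hypothetical rule file
TARGET_DIR = Path("wp_archive")      # hypothetical extracted archive

rules = yara.compile(filepath=RULE_FILE)

for path in TARGET_DIR.rglob("*.php"):
    try:
        matches = rules.match(str(path))
    except yara.Error:
        continue    # unreadable file; skip it and move on
    for m in matches:
        print(f"{path}: {m.rule}")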
So, the moral of the story is that when finishing up a DFIR engagement, look for those things from that engagement that you can "bake back into" your tools (RegRipper, Yara rules, EDR filters, etc.) and analysis processes. This is particularly valuable if you're working as part of a team, because the entire team benefits from the experience of one analyst.
Additional Resources:
DFIR.IT - Webshells
Updates
I made some updates recently to a couple of tools...
I got some interesting information from a RegRipper user who'd had an issue with the shellbags.pl plugin on Windows 10. Thanks to the sample data they provided, I was (finally) able to dig into the data and figure out how to address the issue in the code.
I also updated the clsid.pl and assoc.pl plugins, based on input from a user. It's funny, because at least one of those plugins hadn't been updated in a decade.
I added an additional event mapping to the eventmap.txt file, not due to any new artifacts I'd seen, but as a result of some research I'd done into errors generated by the TaskScheduler service, particularly as they related to backward compatibility.
All updates were sync'd with their respective repositories.
Comments

I always find it amusing to hear non-military folks use military terms to describe what they do at their non-military work, when they have no idea of the context and purpose. However, if security folks use principles that have been practiced for hundreds of years, they can say it however they want (You can be Rambo! You can be Rambo! We can all be Rambo!). Still, it's cute to hear someone in DFIR talk up their job as being like the Marines when they have never worn the EGA.

Brett,
My favorite is, "...I was gonna join the Marines..."

Harlan, thanks. As always, your posts are great. I also find it useful to write Yara rules at the end of an engagement. Actually, a friend of mine just found a password dumper with a rule he wrote three years ago. I do have a question: how do you use your event mapping file? As a reference only?

The world is full of "was gonna" and "should of". Good article.