Like many others of my generation, when I was a kid I'd go and play outside for hours and hours. Sometimes, while running through the woods, I'd see trash...but seeing it often, I wouldn't think much of it. Sometimes it was a tire in the creek, and other times it might be bottles or cigarette butts in a small cluster.
When I went through my initial military training, we spent a lot of time outdoors, but during the first six months, there was no real "this is what to look for" training. It wasn't until a couple of years later, when I returned as an instructor, that the course began teaching new officers the difference between moving through the woods during the day and at night.
Much later, after my military service ended, I got engaged in horseback riding, particularly trail riding. Anyone who's ever done this knows that you become more aware of your surroundings, for a variety of reasons, but most importantly for the safety of your horse, yourself, and those you may be riding with. You develop an awareness of your surroundings: what's moving near you, and what's further away. Are there runners or walkers with dogs (or children in strollers) down the trail? Sometimes you can be warned of the impending approach of others not so much visually as by what you hear or smell (some riders like to smoke while they're riding). You can tell what is or might be in the area by visual observation (scat, tracks, etc.), as well as by listening, and even by smell. Why is this important? There's a lot that can spook a horse...horses will smell a carcass well before a human will, and as you ride up, a gaggle of vultures might suddenly take flight. Depending on your horse's temperament, a flock of turkey hens running through a field might spook them or just catch their attention.
If it's recently rained in the area where I'm riding, I'll visually sweep the ground looking for footprints and signs of animals and people. If I see what appear to be fresh impressions of running shoes with dog paw prints nearby, I might be looking for a walker or runner with a dog. Most runners have the courtesy to slow down to a walk and maybe even say something when approaching horses from the rear. Most...not all. And not everyone who walks a dog really thinks about whether their dog is habituated to horses, or even how the dog will react when it sees horses. I'll also look for signs of deer, because they usually (albeit not always) move in groups. More than once in the springtime, I've come across a fawn curled up in the tall grass...deer teach their fawns to remain very still, regardless of the circumstances, while they go off to forage. More than once I've spotted the fawn well before it lost its nerve and suddenly bolted from its hiding place.
Now, because of this level of awareness, I had an interesting experience several years ago while hiking with friends at the base of Mount Rainier. At the park entrance, there was a small lake with signs prohibiting fishing, and a sign telling hikers what to do if they spotted a bear. We hiked about 3 1/2 miles to a small meadow with wild blueberries growing. The entire way out and back, there were NO signs of animal life...no insects, no sounds of birds or squirrels. No scat or markings of any kind. The absence of any and all fauna was very evident, and more than a bit odd.
So what?
Okay...so what's my point? If you're familiar with the environment, and aware of your surroundings while performing DFIR work, and know what should be there, you know what to look for, as well as what data sources to go to if you're looking for suspicious activity. It's all about knowing what to hunt for when you're hunting. This is particularly true if you're performing hunting operations in your own environment; however, it can also be applied in cases where you're a consultant (like me), engaging with an unfamiliar infrastructure.
In a recent presentation (BrightTalk, may require registration) at InfoSecurity Europe 2015, Lee Lawson discussed some of the indicators that are very likely available to you right now, particularly if you've done nothing to modify the default audit configuration on Windows systems.
Not long ago, the folks at Rapid7 shared this video, describing four indicators you could use to detect lateral movement within your infrastructure. Unfortunately, two of them are not logged as a result of the default audit configuration of Windows systems, and the presenter doesn't mention alternate Windows Event Log source/ID pairs you can look for instead.
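To make this a bit more concrete, here's a minimal sketch (not from the video; the specific source/ID pairs below are my own illustrative picks, and the record format is assumed) of filtering exported Windows Event Log records for source/ID pairs commonly associated with lateral movement:

```python
# Watch list of (event source, event ID) pairs often associated with
# lateral movement. These are well-known Windows event IDs, but the
# list is illustrative, not exhaustive.
SUSPECT_PAIRS = {
    ("Microsoft-Windows-Security-Auditing", 4624),  # logon (check Logon Type 3/10)
    ("Microsoft-Windows-Security-Auditing", 5140),  # network share accessed (e.g., ADMIN$)
    ("Service Control Manager", 7045),              # new service installed
}

def hunt(records):
    """Return records whose (Source, EventID) pair is on the watch list."""
    return [r for r in records if (r["Source"], r["EventID"]) in SUSPECT_PAIRS]

# Hypothetical records, as if dumped from exported .evtx files.
events = [
    {"Source": "Microsoft-Windows-Security-Auditing", "EventID": 4624,
     "Data": "Logon Type: 3"},
    {"Source": "Microsoft-Windows-Security-Auditing", "EventID": 4608,
     "Data": "Windows is starting up"},
    {"Source": "Service Control Manager", "EventID": 7045,
     "Data": "PSEXESVC service installed"},
]
hits = hunt(events)
print(len(hits))  # 2
```

The point isn't the code itself; it's that once you know which source/ID pairs matter in your environment, filtering for them is trivial to automate.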
There are a lot of ways to hunt for indications of activity, such as lateral movement, but what needs to happen is that admins need to start looking, even if it means building their list of indicators (and hopefully automating as much of it as makes sense to do...) over time. If you don't know what to look for, ask someone.
If someone were to ask me (this is my opinion, and should not be misconstrued as a statement of my employer's opinion or business model) what was the one thing they could do to increase their odds of detecting malicious behavior, I'd say, "install Sysmon", but that's predicated on someone actually looking at the logs.
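As a sketch of what "actually looking at the logs" might mean, here's a minimal example of flagging suspicious parent/child process pairs in Sysmon Event ID 1 (process creation) records. The field names match Sysmon's schema, but the pair list and sample records are hypothetical:

```python
# Flag suspicious parent/child pairs in Sysmon Event ID 1 records.
# The pairs below are illustrative examples, not a complete list.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "cmd.exe"),
    ("winword.exe", "powershell.exe"),
    ("outlook.exe", "powershell.exe"),
}

def basename(path):
    """Extract the lowercase file name from a Windows path."""
    return path.replace("\\", "/").rsplit("/", 1)[-1].lower()

def flag(events):
    """Return events whose (parent, child) image pair is on the list."""
    return [e for e in events
            if (basename(e["ParentImage"]), basename(e["Image"])) in SUSPICIOUS_PAIRS]

sample = [
    {"ParentImage": r"C:\Program Files\Microsoft Office\winword.exe",
     "Image": r"C:\Windows\System32\cmd.exe",
     "CommandLine": "cmd.exe /c whoami"},
    {"ParentImage": r"C:\Windows\explorer.exe",
     "Image": r"C:\Windows\System32\notepad.exe",
     "CommandLine": "notepad.exe"},
]
flagged = flag(sample)
print(len(flagged))  # 1
```

Even a short list like this, run over process creation logs on a schedule, gets you much further than logs nobody reads.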
Addendum: @Cyborg_DFIR over on Twitter read the original version of this post (prior to the addendum) and felt that I spent more time saying that you should know what to look for than actually explaining what to look for. Fair enough. I don't dispute that. But what I will say is that I, and others, have talked about what to look for. A lot. Does that mean it should stop? No, of course not. Just because something's been talked about before doesn't mean that it doesn't bear repeating or being brought up again.
During July 2013, I wrote 12 articles whose titles started with "HowTo:", each describing what to look for if you were looking for specific things. More recently, within the last month, I addressed the topic of detecting lateral movement.
So, to @Cyborg_DFIR and others who have similar thoughts, questions or comments...I'm more than willing to share what I know, but it's so much easier if you can narrow it down a bit. Is there something specific you're interested in hearing about...or better yet, discussing?
I think equally important is knowing what whatever you are looking for looks like. I spend a lot of time in a lab environment testing different backdoors, webshells, and lateral movement techniques to understand the different processes that are spawned, files that are touched, and registry keys that are modified or created... Having this knowledge firsthand is important, I think, especially when you see something that may or may not be bad. That firsthand knowledge will often help you identify either the absence or presence of additional indicators that may tip the scale either way. Also, going through the steps of recreating some malicious activity will often give you other ideas for detection or hunting that you may not have otherwise thought about.
I have advocated for a long time that people writing detection need to be familiar with whatever it is they are writing detection for. They will understand the different ways they can detect the activity and hopefully employ multiple ways in case one thing changes. The same can also be said for hunting.
I guess what I'm trying to say is that people should take your advice as well as have an understanding of what bad looks like. It will make those hunting trips more productive.
Jack,
Thanks for sharing your thoughts...
...knowing what whatever you are looking for looks like.
Agreed, 1000%.
Having this knowledge first hand I think is important especially when you see something that may or may not be bad.
Agreed, again...1000%. I can't tell you how many times in the past 15 yrs of being engaged in DFIR work I've seen or heard someone draw a conclusion of malicious activity based on one artifact or indicator.
I would add that collaboration can have significant value in these efforts. Is everyone working on their own going to think of or see everything? Not likely. Collaborating and opening up what you've done for others to comment on and have input into can go a long way toward expanding our view and understanding, and extending our view into threat intelligence.
The thing is that if you have an understanding of what "bad" looks like, it becomes easier to understand why the old adage that defenders need to be right 100% of the time is a myth.
I concur with your post, Harlan. The way I've taught 'hunting' is that it's basically the same process folks go through when writing signatures or custom content. The steps of hunting (that we teach @ FGS) are:
1) Know your adversary
- spend time researching attack methodologies, new vulns/exploits, TTPs discussed in blog posts and white papers, etc.
2) Know your environment
- make sure you know what 'normal' looks like in your environment and what your visibility is like into various data sources
3) Know your tools/capabilities
- you need to know what your tools are capable of looking for in terms of queries, automated scans, whether they support wildcards, regular expressions, etc.
4) Apply all of the above to find bad stuff. If/when you find something (or before, if you're confident), generate custom content (rules, sigs, parsers, etc.) to detect said activity automatically. Make your tools work for you, otherwise you'll never realize any efficiencies.
Jeff,
Thanks for sharing your thoughts...
...spend time researching attack methodologies, new vulns/exploits, TTPs discussed...
Forgive me, but I really think that time, effort and resources could be better spent in other areas that would have more bang for the buck. But rather than just leaving that there, I'll explain why.
First, how does one "know your adversary"? No single organization or vertical has one adversary that's going to come after them. Those that "track" threat groups are really tracking clusters of indicators, and similar clusters of indicators are very often applied to compromised organizations in different verticals.
You can research attack methodologies, but what does that really get you? What I mean by that is, phishing is an example of an attack methodology. Someone sends an email to someone else, trying to entice them to click on a link or download and run something...and it works.
Very few blogs and white papers discuss TTPs. Most of what is referred to as "TTPs" or "threat intelligence" is malware-centric, but malware is a tool that can (and does) change. Things like C2 domains and IP addresses change, as well.
Now, if you *do* focus on TTPs, you'll not only be much more successful at detecting the malicious activity, but you'll also be far more successful at disrupting that activity, hopefully before significant damage is done.
I'm not saying that what you suggested isn't a good idea. What I am suggesting is that with limited resources (staff, time, etc.), efforts might be a bit better structured and not so open-ended. Once you do have a handle on things, and can see the width and breadth of the issue, then direct resources in those directions.
An alternative might be this...if you have a TI vendor, take a hard look at the TI you're receiving. Does it suit your needs? If you have folks who do monitoring (SOC/NOC) or hunting, or even IR, within your environment, take the next bit of TI you receive and have staff from each area write out what they'd look for, and then have them show you as they do so.
...Know your environment...
This, I completely agree with. However, it's also where (based on my experience as a consultant) the least emphasis is placed. I really think that what needs to happen is that IT admins and managers need to sit down with folks who do this sort of response work all the time, and really develop an understanding of what things "look like" under the hood. Do you know what an alternate data stream (ADS) is? Do you know how Windows uses ADSs? Do you know how adversaries can use ADSs?
What about 'at.exe'? Same questions apply. Do you use at.exe to manage your infrastructure? No? Okay...great. That's a REALLY good place to start. If the answer is, "we don't have 'at.exe' installed", run 'dir' on any system looking for it, and when you see it listed in the output, you know you've got your work cut out for you. ;-)
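To illustrate the sort of check I mean, here's a minimal sketch of hunting a triage file listing for at.exe itself and for the At#.job files that at.exe usage leaves behind in the Tasks folder. The listing and paths are hypothetical:

```python
import re

def check_at_usage(paths):
    """Given a file listing from a triage collection, flag the presence
    of at.exe and any At#.job task files (artifacts of at.exe usage)."""
    findings = []
    for p in paths:
        low = p.lower()
        if low.endswith("\\at.exe"):
            findings.append(("at.exe present", p))
        if re.search(r"\\tasks\\at\d+\.job$", low):
            findings.append(("At#.job file", p))
    return findings

# Hypothetical listing from a system under review.
listing = [
    r"C:\Windows\System32\at.exe",
    r"C:\Windows\Tasks\At1.job",
    r"C:\Windows\System32\notepad.exe",
]
results = check_at_usage(listing)
for kind, path in results:
    print(kind, path)
```

The same pattern applies to anything else your admins say they don't use: write down what the artifact looks like, then go see if it's there anyway.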
Something else I think is important to recognize is that for quite a while now, annual reports have been saying what a lot of folks who do this sort of work on a regular basis already know...that the blue team myth has been debunked. It's been said that attackers only need to be right 1 time out of 100, and that defenders (the blue team) need to be right every time. But I really think that if you look at the numbers in the annual reports, you'll see that this simply isn't the case.
Hey Harlan,
Allow me to clarify. I didn't intend to imply that organizations can/should identify individuals or groups by name, but in the broader sense of the term - know who the 'bad guys' are and what their TTPs look like. I equate 'knowing your adversary' with knowing what attackers (of all shapes and sizes) can do and actually do. Some of the best training we do is when we combine a red team assessment with what the blue team found, presenting each side of that coin to all involved.
Now, if truth be told, I think a better approach to security monitoring is when defenders take a risk-based approach rather than an attacker-based approach. When you know what your vulnerabilities are and the ways in which they can be exploited, you can prioritize your monitoring efforts better. As an example, if you know that your e-mail controls are locked down and that you have great sanitization in-line, and that you have great IT Security Awareness training and the individuals in your org are great at identifying and reporting phishes, then as a security analyst, you can probably prioritize 'hunting' for phishes a bit lower than some other things. But in my experience, most organizations don't conduct proper risk assessments (as it pertains to ITSEC) and if they do, they're not kept current. And for the orgs that DO have good risk management policies in place which they follow, they're unlikely to need advice on how to do security monitoring... ;)
But back to the point of the article - about hunting with direction...If you know what attackers are doing, both in the general sense and in the more specific sense as it might pertain to a particular group or tool or objective, that will inform the kinds of things you should look for. And I am specifically talking about TTPs here. In the example you provided about phishing being an attack methodology - it's CRITICAL that security analysts understand exactly what phishes/spear-phishes look like (in the broad sense) AND that they're current on specific types of phishes that are being conducted so they can find them. I've interviewed quite a few 'security analysts' who couldn't describe all of the attributes of a spear phish, and that's just not acceptable.
When it comes to conducting research and the value you get from that - as an example, there has been an increased trend in the last ~6-8 months in an attack which has been conducted for some time now, where attackers send specially crafted spear phishes, originating from domain names similar to the org's name, to the comptrollers of large organizations, with a spoofed e-mail chain between the CEO and CFO discussing the need for an immediate wire transfer. The spoofed e-mail thread lends credence to the request, of course. Amongst other things, knowing that this attack is occurring and what it looks like would enable security analysts to look for it more effectively. Perhaps by creating a watch-list of high-profile e-mail addresses (for folks like your CEO, CFO, CIO, COO, etc.) and then looking for e-mails to those folks, coming from external e-mail addresses and containing keywords such as 'wire' and/or 'transfer'. This (to me) would be an example of hunting - and it was informed by first conducting research and knowing that this type of attack is occurring with more frequency. Thoughts?
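The watch-list hunt described above could be sketched in a few lines. The addresses, domain, and message records here are all hypothetical, and a real implementation would read from your mail logs rather than a literal list:

```python
# Sketch: flag external e-mails to high-profile recipients containing
# wire-transfer keywords. All names and addresses are made up.
WATCH_LIST = {"ceo@example.com", "cfo@example.com", "comptroller@example.com"}
KEYWORDS = ("wire", "transfer")
INTERNAL_DOMAIN = "example.com"

def hunt_phish(messages):
    """Return messages from external senders, addressed to someone on the
    watch list, whose body contains any of the keywords."""
    hits = []
    for m in messages:
        external = not m["from"].endswith("@" + INTERNAL_DOMAIN)
        to_vip = bool(WATCH_LIST & set(m["to"]))
        body = m["body"].lower()
        if external and to_vip and any(k in body for k in KEYWORDS):
            hits.append(m)
    return hits

msgs = [
    # Look-alike domain ("examp1e" vs "example") sending a wire request.
    {"from": "ceo@examp1e.com", "to": ["cfo@example.com"],
     "body": "Per our discussion, please initiate the wire transfer today."},
    {"from": "hr@example.com", "to": ["cfo@example.com"],
     "body": "Benefits enrollment reminder."},
]
phish_hits = hunt_phish(msgs)
print(len(phish_hits))  # 1
```

Note that the first sample message also demonstrates the look-alike-domain detail: "examp1e.com" is external even though it reads like the org's own domain at a glance.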
...know who the 'bad guys' are and what their TTPs look like...
That's exactly what I was getting at...how do you know what a targeted, dedicated adversary's TTPs "look like"?
...combine a red team assessment with what the blue team found...
While I think that this is a great idea, something I've seen is that what our pen testers do is distinctly different from what I've seen targeted threat actors do. I think that part of this is due to the fact that guys like me are looking at logs, host-based artifacts, and process creation monitoring logs...and pen testers aren't often doing this. In all the companies I've ever worked for (including, but not limited to ISS and IBM ISS), the pen testers had NO contact with the IR team...and in most cases, didn't want any.
I've had an opportunity to see the TTPs of an authorized pen test...when held up side-by-side with what is understood to be threat actor activity, there are significant, distinct differences.
While risk assessments are a good idea, anything involving people is vulnerable. When I was a security engineer at a company, a senior VP came to me one day and told me that she'd received three spam emails, and even though she knew better (she actually used the words, "I knew better..."), she clicked "Reply" to one of them and responded, because three was annoying. She was coming to me because by the time she got out of a meeting, she had over 3000 spam emails in her inbox.
...couldn't describe all of the attributes to a spear phish...
I've conducted a number of investigations where we've been able to track the attack back to the IIV (initial infection vector), which was a phish. In one early case, it was a weaponized PDF. In other cases, it was a poorly worded email that to me was quite obviously a phish, but to the person who received it...well, they'd clicked on the link...
Thoughts?
I guess I'm curious about where the analysts are doing research to get that level of detail...mostly because I've only seen that level of detail in either threat intel produced by vendors, or by close (and closed) collaboration between folks within the industry.