
Sunday, February 24, 2019

LNK Files

Update, 28 Feb: Just a quick note...this post is not about tools.  I do not mention any tools in the post, and the content isn't about one tool being better than another. I do mention using a hex editor, but that is simply to verify findings from whichever tool is used.  In the first example below, every parsing tool available would've returned the exact same time stamps, and I would have used (and recommended using) a hex editor to confirm the finding. At the time I'm writing this update, this post has been viewed 1258 times, and not one question about tools has been posted as a comment. This post is about LNK file metadata, what it means, and how it can be used in analysis. Thanks.

I know, I know...yet another blog post on LNK files...yeah.  I get it.  But some new things popped up recently that I wanted to take advantage of, and they ended up here.

I had an opportunity to look at several LNK files recently, and I wanted to dig a bit deeper into their metadata.  Most available write-ups make mention of the LNK file, but tend to gloss over the file and move right to the embedded command and the next stage download.  I wanted to take a closer look at what's in the LNK file, because after all, this is something created within the adversary's environment that they're sending out.  In a lot of ways, it's like free money, so why not make the best use of what you have available, and squeeze every little bit out of the data that you have?  Maybe it's just me, but when the adversary gives you something, it's hardly a "little bit".

Consider this: several years ago, the folks at JPCERT/CC said that LNK files give us a view into the attacker's development environment; this was, and still is, a great idea, so let's see if we can not only replicate what they did (with respect to metadata extraction), but also squeeze just a bit more out of the files and see what we can learn about the adversary's development environment, and maybe even about the tools they use to create the LNK files.

Steroids
I ran across two write-ups on LNK files recently; perhaps it would be more correct to say two write-ups on the same (or very similar) LNK file.  I found this write-up at D3xt3r's Malware Lab (blog includes a hash), and this write-up on Max's blog (blog includes a link to download the sample).  While D3xt3r's write-up is dated 16 Feb 2019, Max's doesn't have a date as part of the published material, but it does appear to have been published in Feb 2019, as well.  This is to say that they're both fairly recent posts, and as such, the LNK file itself may be fairly recent, as well.

Anyway, I wanted to take a look at a few things associated with the LNK file metadata that weren't discussed in either blog post, specifically what we could learn by going really deep into parsing the files.  One of the first things that I looked at was the shell item ID list, shown below:

shitemidlist       My Computer/C:\/Windows/system32/cmd.exe
**Shell Items Details (times in UTC)**
  C:0                   M:0                   A:0                  Windows (8)
  C:0                   M:0                   A:0                  system32 (8)
  C:0                   M:0                   A:0                  cmd.exe (8)

As we can see from the above output, I parsed the shell item ID list, but the time stamps embedded within the individual shell items (the creation, modification, and last accessed times of the resources) appear to all be zeros.  Using a hex editor, I verified this quite easily, and in fact, the bytes at the locations for the time stamps within each of the shell items are all zeros.  Interestingly enough, the LNK file mentioned here also has the shell item time stamps zero'd out, so this is likely not a unique finding.  However, it's unclear whether this is the result of the tool used to create the LNK file, or of anti-forensic/obfuscation efforts taken after the LNK file was created.

We also see that the version information for the shell items is "8", indicating that the platform is Windows 2008, 7, or 8 (per section 6.5 here).  This gives us some additional (beyond what the JPCERT/CC folks discussed) insight into the development environment used by the adversary.
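
As an aside, decoding (or verifying) these values by hand is pretty straightforward.  Below is a minimal Python sketch (the names are my own, for illustration) that decodes the 32-bit FAT/MS-DOS date-time values embedded in the shell items, assuming the usual 16-bit date followed by 16-bit time ordering, along with a map of the extension block versions discussed in this post; the offset of the raw bytes within each shell item varies by item type, so the function simply takes the four raw bytes.

import struct

# Map the shell item extension block version to a Windows family,
# per the values discussed in this post (section 6.5 of the referenced format doc)
VERSION_MAP = {3: "Windows XP/2003", 8: "Windows 2008/7/8", 9: "Windows 8.1/10"}

def decode_fat_timestamp(raw):
    # raw: 4 bytes from the shell item; 16-bit FAT date followed by 16-bit FAT time
    dosdate, dostime = struct.unpack("<HH", raw)
    if dosdate == 0 and dostime == 0:
        return None            # zero'd out, as in the sample above
    year   = ((dosdate >> 9) & 0x7F) + 1980
    month  = (dosdate >> 5) & 0x0F
    day    = dosdate & 0x1F
    hour   = (dostime >> 11) & 0x1F
    minute = (dostime >> 5) & 0x3F
    second = (dostime & 0x1F) * 2
    return "%04d-%02d-%02d %02d:%02d:%02d" % (year, month, day, hour, minute, second)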

Now, let's take a look at the contents of the PropertyStoreDataBlock, shown below:

***PropertyStoreDataBlock***
GUID/ID pairs:
{46588ae2-4cbc-4338-bbfc-139326986dce}/4       
         SID: S-1-5-21-1243223150-2135741377-3118647425-500
{f29f85e0-4ff9-1068-ab91-08002b27b3d9}/6

As we can see, there are two GUID/ID pairs, the first of which points to a Unicode string that is a SID.  This is fairly common, in that we not only see the GUID/ID pair populated with a SID in a good number of LNK files, but we can also use native tools (PowerShell, etc.) on Windows systems to create LNK files that contain this property set.  In this instance, the RID points to the default Administrator account for the system.  For the second property, the GUID indicates that it's a Summary Information Property Set, and the ID indicates that it's a comment.  Looking at the contents of the property set itself (via a hex editor), there don't appear to be any visible strings.
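
For anyone tracking these across samples, it doesn't take much to build a lookup of the GUID/ID pairs as they're encountered.  A minimal sketch is below; the table only contains the two pairs from this sample, and the descriptions are my own shorthand for what's discussed above.

# GUID/ID (FMTID/PID) pairs seen so far, and what they pointed to in this sample
KNOWN_PAIRS = {
    ("{46588ae2-4cbc-4338-bbfc-139326986dce}", 4): "SID of the account used to create the LNK file",
    ("{f29f85e0-4ff9-1068-ab91-08002b27b3d9}", 6): "Summary Information Property Set: comment",
}

def describe_pair(guid, prop_id):
    # normalize the GUID so lookups aren't case-sensitive
    return KNOWN_PAIRS.get((guid.lower(), prop_id), "unknown - worth researching and tracking")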

Interestingly, the LNK file does not appear to have a TrackerDataBlock, and as such, there is no machine ID (system NetBIOS name) embedded in the LNK file.  Again, this is pretty easy to verify via a hex editor.

What we have so far is a pretty interesting view into the manufacture of this LNK file.  There are some 'normal' elements of the file structure that exist, and there are other elements that we expect to see, but have been removed.  This is only one file, so without more insight, anything regarding the "why" is simply speculation and assumption.  What we do know at the moment is that steps were taken, either during the manufacturing process or afterward, to remove elements of the file structure, and the removal of those elements did not impact the function of the LNK file itself.

Finally, the D3xt3r post also includes a link (no pun intended) to this TrendMicro post from 2017.  The TM post indicates that, at the time, the use of LNK files to download malware was a "rising trend".

Cb
I also ran across this write-up from Carbon Black, dated 11 Feb 2019. The LNK file discussed in the Cb blog post contains some interesting metadata.  For example, this file happens to contain a description field, as shown below:

description        AVI

I can't say that I've often seen a description field within an LNK file that's been populated.  However, I should note that the LNK file from the 2018 campaign described here did have a description field of "ds7002.pdf"; the LNK file from the 2016 campaign did not.

In addition, we can see from the shell item ID list (below) that the shell items contain time stamps, and that the version numbers embedded within the shell items indicate that the system is a Windows 8.1 or 10 system.

shitemidlist       My Computer/C:\/Windows/System32/WindowsPowerShell/v1.0/powershell.exe
**Shell Items Details (times in UTC)**
  C:2018-04-11 21:04:34  M:2018-11-16 21:29:56  A:2018-11-16 21:29:56 Windows (9)
  C:2018-04-11 21:04:34  M:2018-11-21 17:32:02  A:2018-11-21 17:32:02 System32 (9)
  C:2018-04-11 23:38:22  M:2018-04-11 23:38:22  A:2018-06-28 23:05:46 WindowsPowerShell (9)
  C:2018-04-11 23:38:22  M:2018-06-28 23:04:20  A:2018-06-28 23:04:20 v1.0 (9)
  C:2018-04-11 23:35:28  M:2018-04-11 23:35:28  A:2018-04-11 23:35:28 powershell.exe (9)

Looking at the PropertyStoreDataBlock (below), we see a SID, as well as another property.

***PropertyStoreDataBlock***
GUID/ID pairs:
{446d16b1-8dad-4870-a748-402ea43d788c}/104
{46588ae2-4cbc-4338-bbfc-139326986dce}/4       
    SID: S-1-5-21-1607665944-3235443811-1991609163-1001

Looking at the SID, we see that in this case, the RID is "1001", indicating a user account was used to create the LNK file (as opposed to the built-in administrator account, with a RID of 500).  In fact, we know from the RID that this was the second user account created on the system.
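
Pulling that information out of the SID string is trivial; the classification below is my own rough shorthand, based on RID 500 being the built-in Administrator account and user-created accounts starting at RID 1000.

def classify_rid(sid):
    # the RID is the last sub-authority in the SID string
    rid = int(sid.split("-")[-1])
    if rid == 500:
        return rid, "built-in Administrator account"
    if rid >= 1000:
        # user-created accounts start at RID 1000, so 1001 is the second account created
        return rid, "user account (number %d created on the system)" % (rid - 999)
    return rid, "other well-known/built-in account"

print(classify_rid("S-1-5-21-1607665944-3235443811-1991609163-1001"))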

Looking up the other GUID, it appears to be a System.VolumeID property; I'm still looking for information on how to parse this property field.

Finally, we have the TrackerDataBlock (below):

***TrackerDataBlock***
Machine ID             : x10-slim
New Droid ID Time      : Sun Jul 29 22:57:38 2018 UTC
New Droid ID Seq Num   : 5969
New Droid Node ID      : e8:9e:b4:3a:a3:ea
Birth Droid ID Time    : Sun Jul 29 22:57:38 2018 UTC
Birth Droid ID Seq Num : 5969
Birth Droid Node ID    : e8:9e:b4:3a:a3:ea

As we can see from the TrackerDataBlock, the NetBIOS name of the system on which the LNK file was created is "x10-slim". Looking up the MAC address OUI (i.e., e8:9e:b4) indicates that it belongs to "Hon Hai Precision Ind. Co.,Ltd.", which was also mentioned in the JPCERT/CC article. Through the course of discussions of "toolmarks", one of the topics that came up was that a common virtual machine may be used by actors; the fact that a VM is used is supported by the OUI lookup, but something that hasn't been addressed is how common this may be.
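
The node ID itself comes out of the droid values, which are normally version 1 UUIDs; the last six bytes are the MAC address, and the first three octets are the OUI that gets looked up.  Below is a minimal sketch, assuming the droid is in fact a version 1 UUID (if the multicast bit of the node is set, the "MAC" is random rather than taken from a NIC).

import uuid

def droid_node(droid_str):
    u = uuid.UUID(droid_str)
    node_hex = "%012x" % u.node
    mac = ":".join(node_hex[i:i + 2] for i in range(0, 12, 2))
    oui = mac[:8]                                   # e.g., "e8:9e:b4" -> Hon Hai Precision
    is_random = bool(int(node_hex[:2], 16) & 0x01)  # multicast bit set = not a real NIC
    return mac, oui, is_random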

RAT
I also ran across a Proofpoint blog post from August 2017 that was very interesting, and not just because of the use of Game of Thrones as the lure.  In this particular case, the LNK file was embedded within the .docx lure file.

First, let's take a look at the shell item ID list (below):

shitemidlist       My Computer/C:\/WINDOWS/system32/WindowsPowerShell/v1.0/powershell.exe
**Shell Items Details (times in UTC)**
  C:2006-05-22 17:08:04  M:2007-12-06 05:29:12  A:2007-12-06 05:29:12 WINDOWS (3)
  C:2006-05-22 17:08:04  M:2007-12-04 00:44:06  A:2007-12-04 00:44:06 system32 (3)
  C:2007-09-21 00:56:32  M:2007-09-21 00:56:32  A:2007-09-21 00:56:32 WindowsPowerShell (3)
  C:2007-09-21 00:56:32  M:2007-12-03 06:20:46  A:2007-12-03 06:20:46 v1.0 (3)
  C:2009-07-13 23:49:08  M:2009-07-14 01:39:22  A:2009-07-13 23:49:08 powershell.exe (8)

In this case, the version of "3" from the shell items indicates a Windows XP or 2003 system.  And yes, I did catch (and verify) that the version info for the final shell item ("powershell.exe") is indeed "8".  This seems to indicate that the LNK file was created on an older platform.

Skipping past the encoded PowerShell command, we see a couple of items of interest.  First, the PropertyStoreDataBlock (below):

***PropertyStoreDataBlock***
GUID/ID pairs:
{0c570607-0396-43de-9d61-e321d7df5026}/3
{46588ae2-4cbc-4338-bbfc-139326986dce}/4       
   SID: S-1-5-21-3345294922-2424827061-887656146-1000
{dabd30ed-0043-4789-a7f8-d013a4736622}/100
{b725f130-47ef-101a-a5f1-02608c9eebac}/10
{28636aa6-953d-11d2-b5d6-00c04fd918d0}/30

Okay, this LNK file has 5 items in this data block, which is unusual in itself.  We see the SID, which, as with the LNK file from the Carbon Black post, is for a user account.  We also see a number of other GUID/ID pairs, and I think that this may be the result of the EnableTargetMetadata flag having been set in the LNK file.  I say this because if you read the description of the flag (here), it says:

The shell link attempts to collect target properties and store them in the PropertyStoreDataBlock (section 2.5.7) when the link target is set.

I'm not 100% clear on what this means, but it seems to say that target properties are stored when the target of the LNK file is "set".  I guess maybe another question to consider is, was the flag specifically set, and if so, what tool or application was used to create the LNK file?
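
One way to start answering that question is to simply check the LinkFlags field in the header of the samples you've collected.  A minimal sketch is below; the bit mask is based on my reading of the [MS-SHLLINK] LinkFlags layout (EnableTargetMetadata as bit 19), so verify it against the spec before relying on it.

import struct

ENABLE_TARGET_METADATA = 0x00080000   # bit 19 of LinkFlags, per my reading of [MS-SHLLINK]

def has_enable_target_metadata(path):
    with open(path, "rb") as f:
        header = f.read(0x4C)                          # fixed-size ShellLinkHeader
    # LinkFlags follows the 4-byte HeaderSize and the 16-byte LinkCLSID
    flags = struct.unpack_from("<I", header, 0x14)[0]
    return bool(flags & ENABLE_TARGET_METADATA)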

Moving on, we see that this particular LNK file has a code page value identified in the metadata, as shown below:

***ConsoleFEDataBlock***
Code page: 936

The code page is identified as "simplified Chinese", which makes sense because the campaign was thought to have a Chinese nexus.  However, what I have to wonder is how often a code page is seen within LNK files.  For example, neither of the LNK files discussed here appeared to contain code pages.  I rarely see LNK file write-ups that also mention a code page entry.
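
That's an empirical question, and one that's easy to start answering if you have a pile of archived samples; scan them for a ConsoleFEDataBlock and tally the code pages.  A quick-and-dirty sketch is below (a proper parser would walk the extra data blocks; the signature value is from my reading of [MS-SHLLINK], so verify it before relying on it).

import struct

CONSOLE_FE_SIG = struct.pack("<I", 0xA0000004)   # ConsoleFEDataBlock signature

def get_codepage(path):
    data = open(path, "rb").read()
    idx = data.find(CONSOLE_FE_SIG)              # quick scan rather than a full block walk
    if idx == -1:
        return None
    return struct.unpack_from("<I", data, idx + 4)[0]   # CodePage immediately follows the signature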

Is it possible that this field was modified as a "false flag" or "red herring"?  Sure.  When looking at any single metadata field within a file structure in isolation, the possibility exists that it was modified.  We've seen evidence, just in this blog post, of either a manufacturing process that creates the files in the state that we see them, or an "after market" process that removes elements from the file structure.  But what we're not seeing is the exploration of these findings in the context of the larger campaign.  Are the actors creating other "false flags"?  If so, how pervasive are they?  Are the LNK file elements linked to other campaigns, possibly even ones associated with different actors?

Finally, the TrackerDataBlock contains a machine ID of "john-win764".  Used in conjunction with the SID and volume serial number (in this case, "CC9C-E694"), a VirusTotal retro-hunt might turn up other examples of similar LNK files.

Final Thoughts
Some thoughts come to mind as a result of looking at these files.  First off, there's a good bit of metadata that can be used to do research on campaigns, particularly through individual archives, as well as VirusTotal (re: retro-hunt).

I know some folks have said, "...yeah, but it's easy to change those fields...".  Sure, I agree to an extent, but there would be a 'cost' associated with that.  The Proofpoint team addressed this in their post.  However, what I'm referring to is parsing files already collected, ones that have been archived by teams following the campaigns and collecting data, or are available from other archival sources (VirusTotal, etc.).  If those files are tied to specific campaigns, or attributed to specific actors, then you may have something there.  Not only can you see if and how the files have changed over time (like the FireEye folks did), but what if LNK files with similar metadata appear across different campaigns, and are used by different actors?  What does that tell us?

Further, what this sort of analysis shows is that there's a lot going on underneath the MITRE ATT&CK technique, specifically, "Initial Access, Spearphishing Attachment".  In fact, quite a few sub-techniques and even specific technical details seem to pop right out.  Just look at what we've seen so far; some LNK files are sent as attachments themselves, while in other cases, the LNK files are embedded in a lure document.  Embedded commands differ (WMIC, PowerShell, etc.), as does the level of obfuscation and encoding; some LNK files contain embedded commands with serious, multi-layered obfuscation, while others contain none whatsoever.  We could create an entire matrix just for weaponized attachments.

The JPCERT/CC article described some work done in clustering LNK files based on some of the available metadata around different attacks, which was very interesting.

Additional Resources
FireEye blog post, re: FIN7 - mentions LNK "toolmarks"
PodCast with Nick Carr, discussing "toolmarks"
2018 NCSC Advisory - discusses not only the use of iconfilename fields in LNK files to maintain persistence across password changes, but also includes a very basic Yara rule for detecting the 'weaponized' LNK file (I should note that none of the metadata was shared)

Saturday, February 23, 2019

Tribe of Hacker Responses

When I shared my review of Marcus and Jennifer's Tribe of Hackers, I suggested to the reader that there was value in responding to the questions themselves.  In this post, I'm sharing my first swag at my own responses to Marcus's questions.

1. If there is one myth that you could debunk in cybersecurity, what would it be?
That it's all about technology.  There is much more to "cyber" than technology, largely because technology is designed, purchased, employed, deployed, used, and abused by people.  When it comes to software products, it's people who use them.  When you're threat hunting, it's one person or team against another.  Policies are created, enforced, and abused by people.

There was an engagement that I was working on with several other team members, and we'd identified a number of systems from which we required images for analysis.  All of these systems were in the data center, and none of them were "easy".  Needless to say, none of us wanted to stay in the data center for the time it was going to take to acquire the images, so we set our processes up, and then covered EVERYTHING in tape and signage.  In some ways it was as necessary as it was ridiculous.  We did this because we knew that the technology and process were sound, but that someone would likely come in and remove everything.

If you have kids, particularly ones that are grown now, you'll know one aspect of what I'm referring to.  Kids can be like a hive mind; they're all texting and on social media, and when one of them finds something interesting or new in the technology they use, within seconds, they all know it.  It doesn't matter if it's some undocumented feature in the new phone someone got, or something that allows parents to monitor their kids' activities through the latest social media app; as soon as one knows, they all know.  If this doesn't illustrate that people are the key, I don't know what does.

2. What is one of the biggest bang-for-the-buck actions that an organization can take to improve their cybersecurity posture?
C- and E-suite executives, particularly the CEO, must take the cybersecurity posture of the organization seriously, and make it a priority.  And I mean really make it a priority, not say that it is and then go back to what they were doing.  You can't just talk the talk, you have to walk the walk.

If the CEO says that security is important, those within the company will know based on her actions, not her words.  If they see her tailgating into the office after issuing a "no tailgating" policy, even once, the policy is no longer effective.

You can hire all the "smart people" you want to help you with your security posture...a lot of organizations say that they do exactly that.  "We hire smart people, and they tell us what to do."  Sure, okay...but do you listen?  I know a lot of smart people who have left organizations out of frustration because they said what needed to be done ("...we need to implement multi-factor authentication on our remotely accessible resources..."), but it hasn't been done, even after events have occurred that illustrated the need.

Wrestlers and snakes know, where the head goes, the body follows.  So, make cybersecurity a business process.  Businesses have all kinds of processes, from payroll, to vendor and partner vetting, to fulfillment and customer service.  Making cybersecurity a business process makes it part of the business, not the clumsy, drunk uncle that shows up on the holidays.

3. How is it that cybersecurity spending is increasing but breaches are still happening?
Spending does not equate to security.  Years ago, I responded to a customer who'd purchased three copies of ISS's RealSecure IDS product.  You're thinking, "wow, that's oddly specific...", but the fact was that one copy, still in the shrinkwrap, was being used to prop open the door to the SOC. People get hired into fancy new positions with big salaries, and are completely ineffective against embedded corporate culture.

4. Do you need a college degree or certification to be a cybersecurity professional?
Absolutely.

I say this with the understanding that when you write your resume, as with anything else, you have to keep your audience in mind.  After all, being a "professional" implies that you're paid by someone, and in order to be paid by someone, you need to get a job.  Part of that is having a degree.  From my perspective as someone who has been a "gatekeeper" once or twice, when someone is looking to fill one position from 50 candidates, there has to be some way to trim the field, and a degree, any degree, is one way to do that.

Do you need a degree to be good at what you do?  No, not at all.  I've worked with some exceptional practitioners who are really, really good, and I've worked with people with advanced degrees who have left me shaking my head in wonder.

5. How did you get started in the cybersecurity field, and what advice would you give to a beginner pursuing a career in cybersecurity?
I got started in "cybersecurity" back in '95, while I was in graduate school.  I asked a question that someone refused to answer.  It wasn't that they couldn't answer the question; they looked at me, smiled, and walked away.  Had their answer been different, I might not be where I am today.

My advice...engage.  The "cybersecurity" field has grown to be so large as to be overwhelming. Engage with a mentor to help you narrow things down, specifically to help you determine what you want to do in this field.

One reason why I recommend engaging is that my career in cybersecurity started long before I was in cybersecurity, going back to times when I learned or did things that laid the foundation for what I do now, but even today, are not really discussed.  For example, I took public speaking in college (circa '86).  Throughout my military training, there were multiple times when I was given a limited amount of time to prepare a short "speech"; that is, learn to speak coherently on the fly, in public.  I was evaluated multiple times during training, had to use it in my job, and then when I went back to the training environment, I had to evaluate others on their ability to do the same.

There was also a great deal that I did with respect to planning, and then executing that plan.  I found that this helped me a great deal when planning and executing assessment exercises, incident response engagements, etc.

What I'm saying is that, as is the case with others who "grew up" in cybersecurity before there really was such a thing, there was a great deal that I brought with me when I moved into a field that was very much in its infancy.  A lot of these things are not included in the courses or programs of instruction today, but are indispensable nonetheless.  The only way folks today are going to "catch up" is to actively engage with those who have been in the field for some time.

6. What is your specialty in cybersecurity? How can others gain expertise in your specialty?
Early on, I was doing assessment work, including war dialing. Not too long afterward, I moved into digital forensics and incident response work, and I've been doing that for quite some time.  Over a decade ago, that work started including targeted attacks, by both ecrime and nation-state actors.

I'm probably most known for the DFIR side, particularly as it applies to Windows systems.  I've also spent some time looking at data structures within files found on Windows systems, with an eye toward using metadata to extend analysis, as well as to inform and extend the threat intelligence picture.

The field has grown significantly since I started, and has gotten to the point where it is almost impossible to keep up.  My recommendation to anyone is to pick someplace to start...just pick one.  Look at it the way you "eat an elephant", so to speak...you do so one bite at a time.  Are you interested in malware analysis?  Start small.  Focus on something to get started...say, the PE file structure.  Learn what it is, and what it should "look like".  Build from there.  Or, you may find out that that aspect of cybersecurity is not for you.

Regardless, the point is to not get overwhelmed by the enormity of it all; break it down into smaller chunks and start by taking that first step.  Then take another.  Then another.

7. What is your advice for career success when it comes to getting hired, climbing the corporate ladder, or starting a company in cybersecurity?
Engage.  Get to know people, both in the field, as well as in other fields and disciplines.

Getting to know others in the field is going to have a profound impact on you.  First, it's going to help you with whatever level or degree of "imposter syndrome" you may have inflicted on yourself.  Second, it's going to show that a lot of your assumptions about others are, again, self-inflicted, wild misconceptions.

Above all, actively engage.  Clicking "like" and posting pictures of your food, your pet, or your workout is not actively engaging.  It's great to have hobbies and interests outside of cybersecurity, and it's something I highly recommend; however, in the age of "social" media, I think we've really lost track of what it means to actively engage.

8. What qualities do you believe all highly successful cybersecurity professionals share?
A sense of humor, and a focus on the goals that really matter.

Also, highly successful cybersecurity professionals understand the value in documenting things.  Truly successful professionals don't hoard information or experiences, and don't hide behind the "I don't remember" excuse.  There is too much that we don't know in this field to not be sharing what we do know, and one of the biggest qualities I see that truly successful professionals in this field share is sharing. 

9. What is the best book or movie that can be used to illustrate cybersecurity challenges?
There are two books that come to mind: "Once an Eagle" by Anton Myrer, and "Leadership in the Shadows" by Kyle Lamb.  Both are books that address leadership, but from a perspective that may be somewhat different than what you're used to.

Myrer's book is a fictional account of two officers, one who rises through the ranks by his own hard work and dedication, and the other who is "born" to it.  In a lot of ways, I see a parallel between these two officers, and what we see in business today.

Lamb's book is much more practical, but no less impactful or important.  Lamb addresses and discusses leadership from the perspective of a career working in special operations, providing lessons learned the hard way.  Leaders in the business world would see their effectiveness explode if they started following just some of what he describes in the book.

I know, I know, I've heard the same thing throughout my career in the private sector; "...that's the military, it won't work here."  The simple fact is that it will work; following (and living) military-style leadership principles will have a profound effect not only on those around you, but on the business, as well.

10. What is your favorite hacker movie?
"Hackers", hands down.  I not only enjoyed it (after all, it was a movie), but there are some very quotable lines in the movie, and when I'm giving a presentation I tend to share quotes from pop culture that, to me, are funny in the moment.  Movies from the '80s, '90s ("Hackers" is circa '94), and the later 2000s ("Deadpool") are fodder for many presentations.

11. What are your favorite books for motivation, personal development, or enjoyment?
I've always been a fan of first person perspectives of historical events, specifically first person accounts shared by military special operations personnel.  It doesn't matter if the event is Vietnam, Iraq, Afghanistan, or any of the myriad smaller, undisclosed events, I find the "boots on the ground" perspective absolutely fascinating.  Having served in the military, and then worked in DFIR, it's interesting that in both cases, there is the "historical write-up" of an event from a macro-perspective, but there is also the perspective of the individual working in the trenches.  It's that worm's-eye view that is often missed.

For enjoyment, I've always leaned towards science fiction.  William Gibson and Orson Scott Card are two of my favorites.  One of William Gibson's books talked about "locative art", or digital renderings that existed in a place, dependent upon your location and which direction you were looking.  Interestingly enough, we're starting to see some of that in VR realms. Card's series of books that started with "Ender's Game" has provided me with some great reading on plane flights.

12. What is some practical cybersecurity advice you give to people at home in the age of social media and the Internet of Things?
Don't.

Simply put, if you don't want your drama on social media, don't put it there.

With respect to IoT, there's no reason why everything needs to be connected to the Internet.  Baby monitors do not need to be accessible to anyone and everyone. The simple fact is, when something is made "easier", it's made easier for everyone.  If you can search for accessible security cameras online with a simple query, why would a baby monitor or your refrigerator be any different?  When technology is developed and made widely accessible, the unspoken guarantee is that there is no security, and you don't need to be an "expert" to understand that fact, nor to abuse it.

Vehicles have all sorts of new "safety" features built in, not to protect the driver, but to offset and overcome all of the other distractions we've put in front of the driver.

13. What is a life hack that you’d like to share?
Don't listen to that inner dialog that prevents you from doing something.  A while back, I was going through a very dark time in my life, and was overwhelmed with the tasks I had before me.  To make things a bit worse, there were people actively working against me.  Let me be clear, this was not a perception based on the negativity that I'd wrapped myself in; these people were actively saying and doing things to make my life difficult.

However, in one moment of clarity, I had an epiphany.  I realized that if I broke the mountain in front of me down into manageable, compartmentalized components, I ended up saying to myself, "wait a minute...hundreds of people do each of these things every day, and do them successfully".  Why can't I?

That moment changed everything for me.

14. What is the biggest mistake you’ve ever made, and how did you recover from it?
Biggest?  Wow, where to begin?  I've made so many mistakes over my career that it's hard to pick just one. I've misplaced dongles.  I've said the wrong thing or reacted the wrong way in front of a customer.  I've had a small error in a script snowball into a much bigger mistake in my findings.

You can recover from mistakes, and if what we see in the news media on a regular basis is any indication, there is only one way to do so.  Own up.  Accept responsibility, and learn from the mistake.

One thing that I try very hard to do is recognize when I've made a mistake early on, own it, and most importantly, inform my boss as soon as possible.  Did you get a call from a customer, get on a late flight and fly all night (at considerable expense) and then miss the meeting because you overslept?  Bite the bullet, and tell your manager first.  Don't make excuses.  Yes, your manager will be upset, but not nearly as upset as they would be if they were hearing about the issue from the customer.

Mistakes are like breaches; you have to accept that they're going to happen. It's what you do about them that matters.  Own up, and learn from your mistakes.  Also, take every opportunity to learn from the mistakes of others, not just by passively following someone, but by actively engaging with them.  I guarantee you that if you get the opportunity to sit down with someone and engage over a beer, at some point, you'll find out what mistakes they learned from.

Note that others have taken up this mantle, as well.  For example, Mark Kelly shared his responses on LinkedIn.

Friday, February 22, 2019

Aperture

If you follow me on Twitter or LinkedIn, you will very likely have seen me mention "aperture" more than a few times.  My use of the term has been in reference to digital analysis work (DFIR), as well as to the production and use of threat intelligence, particularly pursuant to, or with respect to, DFIR work.

What is "aperture"?  What am I referring to when I say, "aperture"? A definition I found online states that an aperture is "a space through which light passes in an optical or photographic instrument". In the context that I'm using the term, the "space" is a lens shaped and polished by our own individual experiences, and the "light" is the data that we have available.  How we interpret an event is based on what we know about the event (the data), applied to and filtered through the lens our own experiences.

To illustrate "aperture" by example, most of my background has been in DFIR work.  As a consultant, I was usually deployed after (sometimes quite a while after...) a breach had occurred, and I usually had host-based data with which to work.  The data that I did have available was heavily dependent upon a number of factors, including (but not limited to) the version of the OS, how long it had been since the actual breach had occurred, actions admins had taken, etc.  All of this is to say that for the most part, I did not have access to time stamped process creation data, nor to real-time telemetry of any kind.  I may have had artifacts of files that existed on the system at one time, or artifacts left behind by an application being executed, but what I did not have was the full, complete sequence of commands run by the bad actor.  I may have had access to historical data of some kind (i.e., Registry data, VSCs, etc.), but for the most part, my aperture was limited to what was available in the image.

As my career evolved into targeted threat hunting and response, my aperture widened a bit.  Sometimes, a response engagement was based on something detected through monitoring (process creation events, etc.) of the customer's network.  As such, we would have access to quite a bit of information about the malicious actor's activities, via endpoint telemetry collected through monitoring.  Other times, my response started by deploying an endpoint solution, which means that the only historical data available prior to the customer's call was logs and whatever was still available on the endpoints.

However, in other instances, the monitoring infrastructure was not created or enabled until after the customer called us.  In those cases, the aperture was still limited to what happened after the sensors or agents were deployed; even so, deploying them allowed us to widen that aperture a bit.

Another example of aperture can be seen in the annual trend reports that we see published by security companies.  Not long ago, a friend of mine was at a company where about 90% of the work they did was PFI work; they did a lot of payment card industry (PCI) breach investigations.  In this example, the data that they had available was predominantly based on the PCI cases, interpreted through the analysis and experiences of the individual analysts.

We all have our own aperture; OSINT/Intel, EDR monitoring (via a SOC or MSS function), and DFIR all have their own aperture, based on the nature of the specific business model followed by each function, and their customer base.

Consider this example...OSINT tells us that a particular threat actor or group is targeting a specific vertical.  DFIR response to an engagement may show evidence that certain documents or information was collected, archived, and exfil'd.  However, very often, actors will encrypt the archives with a password, so while we may be able to determine the exfil mechanism (place the files on a web server, download them with a normal "GET" request, then delete them...), we may not be able to open the files to see exactly what was taken.  Having an EDR solution in place will show us a great deal about what the actor did, where they went within the infrastructure, what they took, and the password they used to encrypt the archives.

So what?  Why, or how, is aperture important?

For one, we have to acknowledge that we may not have a complete view of the incident or intrusion.  For example, if you're developing a "profile" of a threat group based on OSINT, which may include public reporting produced by others, you're going to have a much different view than if that intrusion were being actively monitored through the use of EDR technology.  The same is true for an after-the-fact DFIR engagement, particularly one that did not benefit from the use of EDR technology (actor is long gone...).

Aperture is also important to keep in mind when we're viewing available information, such as blog posts, reports, etc.  Some are very clear on the aperture; "..we saw this once, on one customer's infrastructure..."; here's a great example from Carbon Black.  With the annual reports that companies publish, we have to keep aperture in mind, as well.  For example, a number of years ago, a friend of mine worked at a company where a majority of the work they did was PFI response investigations; keeping that in mind, what they shared in their annual report was cast in a different light.  In that sense, the report clearly covered only a small fraction of the available connected systems, and those engagements that were discussed were, for the most part, somewhat specific.

As we engage and develop our understanding, this is something to keep in mind. 

Thursday, February 07, 2019

Review: Tribe of Hackers

I'm not a hacker.  Yes, I'm intensely curious about technology, and in particular, computing.  However, I don't consider myself a "hacker", in the sense that the term began to be used in the mid- to late-'90s.  That is, the term was co-opted by marketing teams and became somewhat equivalent to "pen tester", and even began to be distinguished by prepending it with "ethical".  No, I've been on the defense/investigation side of the infosec community for most of my career. But I still downloaded a copy of Marcus and Jennifer's book, Tribe of Hackers, the other day and so far it's been an enjoyable and extremely insightful read.

First, let me say that I like the idea that Marcus found something that impacted him while reading another book (Tribe of Mentors) and wanted to replicate that himself.  I did something similar in my book, Windows Forensics and Incident Recovery; in chapter 6, I tried to replicate the methodology for developing and building a process, based on The Defense of Duffers Drift, originally written by MajGen Swinton in 1904.  This book was required reading during my initial training as an officer in the United States military, and I still remembered the approach years later when I wrote my book.  I mention this because this is something Marcus states on the first page of the introduction to the book; as such, it's one of the first things I read, and it's the first thing that really resonated with me.  So here, I feel as if I already have a connection with Marcus, in that 15 years later, his approach validates my thought process.

Before I continue, there's something I wanted to get out of the way.  Yes, I read more than the introduction (much more), and as such, I recommend that not only do you read this book, but that you also strongly consider taking the time to answer the questions posed yourself, and perhaps even take a step beyond that and share your responses.  You can do this through a blog post, or take it one question at a time on LinkedIn, or simply use whichever medium with which you're comfortable.  I'm positive that not only will you find something in this book that resonates with you, someone with whom you connect, but if you share your responses to the questions posed in the book, you'll connect with someone else, as well.

Next, I found the book well-formatted and well-structured.  Yes, the approach is one of structured repetition, but in some ways, I think that's a good thing.  It gives a format to the approach, rather than just a free-form flow of disparate ideas.  I've seen what happens in the community, particularly at security conferences, when there's no structure and "famous" people are given a time slot and an open mic.  Hint: it doesn't go well.  The format used in the book lets the reader do something of an "apples-to-apples" comparison between respondents, as they each answer the same questions, albeit in their own way.

The book does not contain technical content, per se.  Its content is the views, opinions, experiences, reflections, and back-stories of those who responded.  If you're somewhat experienced in the information security field (say, 5 years or more), you'll find in this book a good bit of validation for what you've been thinking. If you're new to the field, I really think that this will open a door for you.  The first thing most people will really see when they flip through this book is the pictures.  Adding pictures of the respondents humanizes their words; pretty much anyone who picks up this book is going to see someone who looks like them.  This then gets them to take a step further and read the words; given the breadth of respondents, you're more than likely going to find someone in the book with whom you share a common background.

I don't recognize most of the 70 names in the table of contents; of those I do recognize, I've never met them in person, face-to-face.  We may have seen each other at a conference, in passing, but beyond that, I do not profess to "know" any of those who shared their background and insight.  But that doesn't mean that I don't find value in their words. 

This book can also present challenges for those new to the industry; in several instances, the recommendations include "keeping up on the latest things in the industry".  In this day and age, that's almost impossible.  Looking at the backgrounds of many of the respondents, someone new to the industry, or looking to move into this sort of work, may find it intimidating.  Remember, many of the respondents have been around for a while, and they had to start somewhere.  Many, like me, started well before there were classes or courses of instruction available in any of their areas of expertise.  Some may have seen a need and moved to fill it, while others picked a direction and began the process of building knowledge in that particular area.  In short, they all started somewhere, so don't let that intimidate you.  Don't look at what they've achieved and think, "oh, I'll never be as good, as smart, or as capable as they are..."; instead, take the first step.  Don't look at where they are now as the story; consider where and how they started out, and consider the journey.

At this point, I have not read every chapter of the book; I'm still working through it.  Sometimes I'll open the PDF and read for a while, other times I'll pick out a name and just read their responses.  However, I wanted to share my thoughts on the book now, because I knew that if I forced myself to wait until I finished the book, I might never write my thoughts down. 

Interestingly enough, with all of the different backgrounds and beginnings described in the book, so far, there's one common theme I've picked up on...people.  In short, if you isolate yourself in this industry, you're undermining your ability to grow and progress, regardless of where you want to go.  However, if you actively engage and develop those "soft skills" that are referred to more than once throughout the book, that's the key to growth and progression.  This is not a purely technical profession; at some point, you're going to have to engage with someone, be they a team member, manager, or customer.  In many cases, you're going to have to harness all of your technical capabilities and communicate with someone who's not technical at all, be it through reporting, a presentation, or just talking.

This theme continues to manifest itself, even from a technical perspective.  If you're into pen testing or web app testing, "the Googles" are only going to get you so far.  However, actively engaging with others is a force multiplier; more often than not, I've come away from a conversation with even just one person where we've actively engaged and come up with something much greater than either of us could have on our own.  And we see this repeated time and again throughout the book.  Many (and when I finish reading all 70 chapters, I'm sure I'll be saying "all"...) of the chapters include testimonies where active engagement with others has made the difference.

What does this book get me that I couldn't get someplace else?
This book provides something of an inside view into the considered thoughts of others in the industry, those who have gone before and in some cases, laid the groundwork and foundations for where we are today.  You only get so much from presentations or media sound bites; this book provides a deep dive into the minds of some who are not in the industry for the media presence or notoriety. 

Kudos to Marcus and Jennifer, and to all of those who were involved in this book.  Thank you so much for the time and effort you put into it.  Interestingly enough, I see this as just the first edition; in as short a time as two years, there could be another 70 or 100 respondents, and then at five years, another 100 more; by then, many of the respondents will be including, "...when I read the first edition..." in their words.  Great work, folks!

Monday, February 04, 2019

Data Points And Analysis

In DFIR and "threat intel" analysis, very often individual data points are dismissed out of hand, as they are thought to be easily mutable.  We see it all the time, don't we? We find a data point, and instead of just adding it to our picture of the incident, we ask, "Hey, what about this...?".  Very often, we hear in response, "...hackers change that all the time...", so we drop it.

Why do we do this?  Why do we not include "easily mutable" artifacts in our analysis?

A common example of this is the PE compile time, a time stamp value added to an executable file during the compilation process.  I'm not an expert in compilers or linkers, but this time stamp value is understood throughout the DFIR community to be easily mutable; that is, it doesn't "cost" an adversary much to change this value.  Many of us have seen where the PE compile time value, when converted, indicates that the file was compiled in 1980, or possibly even a date in the future.  This value is thought to be easily mutable precisely because many of us have either seen it changed, or have actually changed it ourselves.  A consequence of this is that when someone brings up, "...hey, this time stamp value says...", the value itself may be immediately dismissed out of hand.
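
Checking is cheap, though.  Below is a minimal Python sketch that pulls the TimeDateStamp out of the COFF header so the value can actually be examined (is it zero? 1980? a date in the future?) rather than dismissed out of hand.

import struct, datetime

def pe_compile_time(path):
    with open(path, "rb") as f:
        dos_header = f.read(0x40)
        e_lfanew = struct.unpack_from("<I", dos_header, 0x3C)[0]
        f.seek(e_lfanew + 8)                     # TimeDateStamp is 8 bytes past the "PE\0\0" signature
        ts = struct.unpack("<I", f.read(4))[0]
    return ts, datetime.datetime.fromtimestamp(ts, datetime.timezone.utc)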

However, there may be considerable value in including these values in our corpus of viable, relevant data points. What I mean is, just because a value is understood to be easily mutable, what if it wasn't changed?  Why are we excluding these values from our analysis because they could have been changed, without first checking to see whether they actually were changed?

Consider the FireEye blog post from Nov 2018 regarding the APT29/Cozy Bear phishing campaign; table 1 of the article illustrates an "operational timeline", which is a great idea.  The fourth row in the table illustrates the time at which the LNK file is thought to have been weaponized; this is a time stamp stored in a shell item, as an MS-DOS date/time value.  The specific value is the last modification time of the "system32" folder, and if you know enough about the format of LNK files, it's not hard at all to modify this time value, so it could be considered "easily mutable".  For example, open the file in binary mode, go to the location/offset within the file, and overwrite the 16-bit value with 0's.  Boom.  You don't even have to mess with issues of endianness, just write 0's and be done with it.
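
To make the point about how little this "costs", here's what that looks like as a script rather than a hex editor session; the offset is whatever your parser or hex editor told you, so the value shown in the usage comment is purely hypothetical.

def zero_fat_value(path, offset):
    # offset: location of the 16-bit FAT date within the LNK file,
    # located beforehand with a parser or hex editor
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(b"\x00\x00")     # zero the 16-bit value; write 4 bytes to zero both date and time

# purely hypothetical offset, for illustration only
# zero_fat_value("sample.lnk", 0x9A)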

However, in this case, the FireEye folks included the value in their corpus, and found that it had significant value.

Something else you'll hear very often is, "...yeah, we see that all the time...".  Okay, great...so why dismiss it?  Sure, you see it all the time, but in what context?  When you say that you "see it all the time", does that mean you're seeing the same data points across disparate campaigns?

Let's consider Windows shortcut/LNK files again.  Let's say we retrieve the machine ID from the LNK file metadata, and we see "user-pc" again, and again, and again.  We also see the same node ID (or "MAC address") and the same volume serial number across different campaigns.  Are these campaigns all related to the same threat actor group, or different adversaries?  Either way, this would tell us something, wouldn't it?

The same can be said for other file and document metadata, including that found in phishing campaign lure documents, particularly the OLE format documents.  You see the same metadata across different campaigns?  Great.  Are the campaigns attributed to the same actors?

What about the embedded macros?  Are they obfuscated?  I've seen macros with no obfuscation at all, and I've seen macros with four or five levels of obfuscation, each level being completely different (i.e., base64 encoding, character encoding, differences in string concatenation, etc.).

All of these can be useful pieces of information to build out the threat intel picture.  Threat intel analysts need to know what's available, so that they can ask for it if it's not present, and then utilize it and track it.  DFIR analysts need to understand that there's more to answering the IR questions, and a small amount of additional work can yield significant dividends down the road, particularly when shared with analysts from other disciplines.