Tuesday, February 16, 2016

Tools, Links, From the Trenches, part deux

There's been considerable traffic in various online forums regarding tools lately, and I wanted to take a moment to talk not just about the tools themselves, but about the use of tools in general.

Tools List
I ran across a link recently that pointed to this list of tools from Mary Ellen...it's a pretty long list, but nothing in it addresses how the tools are actually used.

Take the TSK tools, for example.  I've used these tools for quite some time, but when I look at my RTFM-ish reference for them, it's clear that I don't use them to the fullest extent of their capabilities.

LECmd
Eric Zimmerman recently released LECmd, a tool to parse all of the data out of an LNK file.

From the beginning of Eric's page for the tool:
In short, because existing tools didn't expose all available data structures and/or silently dropped data structures. 

In my opinion, the worst sin a forensics tool vendor can commit is to silently drop data without any warning to an end user. To arbitrarily choose to report or withhold information available from a file format takes the decision away from the end user and can lead to an embarrassing situation for an examiner.

I agree with Eric's statement...sort of.  As both an analyst and an open source tool developer, this is something that I've encountered from multiple perspectives.

As an analyst, I believe that it's the responsibility of the analyst to understand the data that they're looking at, or for.  If you're blindly running tools that do automated parsing, how do you know if the tool is missing some data structure, one that may or may not be of any significance?  Or, how do you correctly interpret the data that the tool is showing you?  Is it even possible to do correct analysis if data is missing and you don't know it, or if the data is there but is viewed or interpreted incorrectly?

Back when I was doing PFI work, our team began to suspect that the commercial forensics suite we were using was missing certain card numbers, something we were able to verify using test data.  I went into the tool's user forum, asked about the code behind the IsValidCreditCard() function, and posed the question, "what is a valid credit card number?"  In response, I was directed to a wiki page on credit card numbers...but I knew from testing that this wasn't correct.  I persisted, and finally got word from someone a bit higher up within the company that we were correct; the function did not consider certain types of CCNs valid.  With some help, we wrote our own function that was capable of correctly locating and testing the full range of CCNs required for the work we were doing.  It was slower than the original built-in function, but it got the job done and was more accurate.  It was the knowledge of what we were looking for, and some testing, that led us to that result.
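
To give a sense of what such a check involves, locating and validating card numbers generally comes down to finding digit runs of a plausible length and filtering them through the Luhn checksum.  The Python sketch below is purely illustrative (it is not the function we wrote; the names and the candidate pattern are mine), and the real work was in knowing which card types and number ranges were actually in scope for the engagement:

import re

# Loose candidate pattern: 13 to 19 digits, optionally separated by spaces or dashes.
CCN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(digits):
    # Luhn checksum: double every second digit from the right, sum, check mod 10.
    total = 0
    parity = len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_ccns(text):
    # Return digit strings that look like card numbers and pass the Luhn check.
    hits = []
    for m in CCN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

The checksum is the easy part; knowing which number ranges and lengths matter for a given engagement is where the analyst's knowledge comes in.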

As a tool developer, I've tried to keep up with the available data structures as much as possible.  For example, take a look at this post from June, 2013.  The point is that tool development evolves, in part due to what becomes available (i.e., new artifacts), and in part due to knowledge and awareness of the available structures.

With respect to RegRipper, it's difficult to keep up with new structures or changes to existing structures, particularly when I don't have access to those data types.  Fortunately, a very few folks (Eric, Mitch, Shafik, to name a few...) have been kind enough to share some sample data with me, so that I can update the plugins in question.

Something that LECmd is capable of is performing MAC address lookups for vendor names.  Wait...what?  Who knew that there were MAC addresses in LNK files/structures?  Apparently, it's been known for some time.  I think it's great that Eric's including this in his tool, but I have to wonder, how is this going to be used?  I'm not disagreeing with his tool getting this information, but I wonder, is "more data" going to give way to "less signal, more noise"?  Are incorrect conclusions going to be drawn by the analyst, as the newly available data is misinterpreted?

I've used the information that Eric mentions to illustrate that VMware had been installed on a system at one time.  That's right...an LNK file contained a MAC address associated with VMware, and I was able to demonstrate that at one point, VMware had been installed on the system I was examining.  In that case, it was possible that someone had used a VM to perform the actions that resulted in the incident alert.  As such, the information available can be useful, but it requires correct interpretation of the data.  While not the fault of the tool developer, there is a danger that having more information on the analyst's plate will have a detrimental effect on analysis.
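
For anyone wondering where a MAC address comes from in an LNK file: the TrackerDataBlock stores object identifiers as version-1 UUIDs, and the node field of a version-1 UUID carries the MAC address of the system that generated it.  Below is a minimal Python sketch (this is not Eric's code; the example object ID and the OUI list are illustrative) of pulling the MAC out of an object ID and comparing it against OUI prefixes commonly associated with VMware:

import uuid

# OUI prefixes commonly associated with VMware virtual NICs (illustrative, not exhaustive).
VMWARE_OUIS = {"00:0c:29", "00:50:56", "00:05:69", "00:1c:14"}

def mac_from_object_id(object_id):
    # Version-1 UUIDs embed a 48-bit node field, which typically holds a MAC address.
    u = uuid.UUID(object_id)
    if u.version != 1:
        return None
    node = "%012x" % u.node
    return ":".join(node[i:i + 2] for i in range(0, 12, 2))

# Example with a made-up object ID whose node field carries a VMware-style OUI.
mac = mac_from_object_id("d91c9e4c-4a34-11e6-8ccc-000c29a1b2c3")
if mac and mac[:8] in VMWARE_OUIS:
    print("MAC %s maps to a VMware OUI" % mac)

The MAC address by itself isn't the finding; it's the analyst tying that OUI back to VMware, and then to the question of whether a VM was in play, that makes the data useful.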

My point is that I agree with Eric that information should be available to the analyst, but I also believe that it's the responsibility of the analyst to (a) recognize what information can be derived from various data sources, and (b) be able to correctly interpret and utilize that data.  After all, there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data, and believe that the time stamp associated with each entry is when that program was executed.  That time stamp is drawn from the file's last modification time, not from the program being run.

From the Trenches
I thought I'd continue with a couple of "war stories" from the early days of my commercial infosec consulting career...

In 1999, I was working on a commercial vulnerability assessment team with Trident Data Systems (TDS).  I say "commercial" because there were about a dozen of us on this team, with our offices on one side of the building, while the rest of the company was focused on fulfilling contracts for the federal government.  As such, we were somewhat 'apart' from the rest of the company, with respect to contracts, the tools we used, and our profit margin.  It was really good work, and I had an opportunity to use some of the skills I learned in the military while working with some really great folks, most of whom didn't have military experience themselves (some did).

One day, my boss (a retired Army LtCol) called me into his office to let me know that a client had signed a contract.  He laid out what was in the statement of work, and asked me who I wanted to take on the engagement.  Of course, that was a trick question, of sorts...instead of being told who was available to go, I had my pick of everyone.  Fortunately, I got to take three people with me, two of whom were my first choice; the third I picked when another person wasn't available.  I was also told that we'd be taking a new team member along to get them used to the whole "consulting" thing.  After all of that, I got my team together and let everyone know what we were doing, for whom, when we'd be leaving, etc.  Once that was done, I reached out to connect with my point of contact.

When the day came to leave, we met at the office and took a van to the airport.  Everyone was together, and we had all of our gear.  We made our way to the airport, flew to the city, got to our hotel and checked in, all without any incidents or delays.  So far, so good.  What made it really cool was that while I was getting settled in my room, there was a knock at the door...the hotel staff was bringing around a tray of freshly baked (still warm) cookies, and cartons of milk on ice, for all of the new check-ins.  Score!

The next morning, we met for breakfast and did a verbal walk-through of our 'concept of operations' for the day.  I liked to do this to make sure that everyone was on the same sheet of music regarding not just the overall task, but also their individual tasks that would help us complete the work for the client.  We wanted to start off looking and acting professional, and deliver the best service we could to the client.  But we also wanted to be sure that we weren't so willing to help that we got roped into doing things that weren't in the statement of work, to the point where we'd burned the hours but hadn't finished the work that we were there to do.

The day started off with an intro meeting with the CIO.  Our point of contact escorted us up to the CIO's office, and we made our introductions and described the work we'd be doing.  Our team was standing in the office (this wasn't going to take long), with our laptop bags at our feet.  The laptops were still in the bags; they hadn't been taken out, let alone powered on.

Again, this was 1999...no Bluetooth (no one had any devices that communicated in that manner), and we were still using laptops that required a PCMCIA card plugged into one of the slots in order to connect to a network.

About halfway through our chat with the CIO, his office door popped open, and an IT staff member stuck his head in to share something urgent.  He said, "...the security guys, their scan took down the server."  He never even looked at us or acknowledged our presence in the room...after all, we were "the security guys".  The CIO acknowledged the statement, and the IT guy left.  No one said a word about what had just occurred...there seemed to be an understanding, as if our team would say, "...we hear that all the time...", and the CIO would say, "...see what I have to work with?"

11 comments:

Joachim Metz said...

> After all, there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data, and believe that the time stamp associated with each entry is when that program was executed.

How should this be interpreted in your opinion? Tell me how do you interpret the different insert flags?

H. Carvey said...

@Joachim,

I'm not sure what you mean...I was referring to the time stamps, not the flags.

How do you interpret the flag?

Joachim Metz said...

What do the insert flags mean? How do they influence the interpretation of the Shim Cache (or, to be more precise, Application Compatibility Cache) entries?

H. Carvey said...

I can see that this isn't going to get us anywhere...thanks.

Joachim Metz said...

You are the one claiming "there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data"

So I'm asking you to tell these senior folks: how should they "interpret ShimCache data"?

H. Carvey said...

@Joachim,

The statement from the blog that you quoted was:

After all, there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data, and believe that the time stamp associated with each entry is when that program was executed.

Since I made no statement about the Insert Flags, I don't understand the point of your original question.

As such, I asked you, how do you interpret the flags, and your answer was to ask the question again.

Why don't you just share how folks should interpret the flag?

H. Carvey said...

For the record, I base my interpretation of the InsertFlag value on this FireEye whitepaper.

Joachim Metz said...

Where do you see that whitepaper mention "InsertFlag" values?

Joachim Metz said...

If you mean the "exec flag" from the whitepaper:

Microsoft designed the Shimcache in Windows Vista, 7, Server 2008 and Server 2012 to incorporate a "Process Execution Flag" category for each entry. The actual name and true purpose of this flag is still unknown, however, we have observed that the Client/Server Runtime Subsystem (CSRSS) process will set this flag during process creation/execution on those operating systems.

should I take it that you base your interpretation on an assumption?

H. Carvey said...

Seriously? The whitepaper is 5 pages long, and InsertFlags are discussed on pages 3 and 4.

Joachim Metz said...

> Seriously? The whitepaper is 5 pages long, and InsertFlags are discussed on pages 3 and 4.

https://www.fireeye.com/blog/threat-research/2015/06/caching_out_the_val.html

has no pages, and searching for "insertFlags" or "insert flags" returns no hits.

Assuming you referred to the wrong link, and meant this one:


https://dl.mandiant.com/EE/library/Whitepaper_ShimCacheParser.pdf

When processes are created by CSRSS, the BaseSrvSxsCheckManifestAndDotLocal function in
basesrv.dll adds an entry to the Shim Cache and bitwise OR’s the “InsertFlags” field with a value of 2.
The true purpose of this flag is currently unknown however, during testing,

same comment "The true purpose of this flag is currently unknown"