Friday, September 02, 2011

Friday Updates

Prefetch Analysis
I received an email recently that referred to an older post in this blog regarding Prefetch file analysis.  The sender mentioned that while doing research into Prefetch files, he'd run across this post (in Polish) that indicated that under certain circumstances, the run count in the Prefetch file "isn't precise".  So, being curious, I ran the text of the site through Google Translate, and got the following (in part):

It turns out that the run counter in the .pf file is not as accurate as you might think. Once its value reaches 0x0A, it is no longer so "eager" to increase on subsequent runs of the program, and it also stops updating the last-run date. There is a correlation here: if the counter field [0x90] updates, the last-run date [0x78] updates as well (in fact, this is not merely an implication but an equivalence).

I ran some small tests, and it turns out that if the difference between the current date and the last-run date stored in the .pf file is less than 2 minutes (120 seconds), the counter is not updated. So if a program (even malware) runs many times in a short period and we would like to know the date of its last run, and the number of runs, the file in the Prefetch folder can mislead us.


Another interesting fact is that by changing the system date (even via the clock in the taskbar) we can easily fool the .pf files. Suppose program X was last launched in July of this year. Someone gains physical access to our computer and wants to run program X, but of course would not want the Prefetch contents to betray the unauthorized launch. The method is trivial: the attacker changes the date (say, to the year 2002) and runs the program. The difference between 2002 and 2011 is "less than 2 minutes" (sounds strange, but subtract the larger number from the smaller and you get a negative value). The .pf file remains unchanged, and from Prefetch's standpoint program X has run without a trace.


If someone wants a really thorough analysis, it appears that the files in the Prefetch folder may not help much.

In short, what this says is that if someone runs an application several times in quick succession (i.e., within 120 seconds, or 2 minutes), the Prefetch metadata isn't modified accordingly.  Interesting stuff, and worth a further look; if this information truly pans out and can be replicated, it would likely have a significant impact on analysis and reporting.  One thing I have thought about, however, is...does this happen?  I mean, if a user launches Solitaire, what would be the purpose of launching it again within 2 minutes?  What about malware?  Let's say an intruder gains access to a system, and copies over some malware...what would be the purpose of launching it several times within a 2 minute period?
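To make the claimed behavior concrete, here's a minimal Python sketch built on the two fields the quoted post references: the last-run FILETIME at offset 0x78 and the run counter at offset 0x90.  Those offsets apply to the XP/2003 Prefetch header layout per the quoted research (Vista and Windows 7 place these fields elsewhere), and the 120-second rule modeled below is the post's claim, not something verified here.

```python
import struct
from datetime import datetime, timedelta, timezone

# Offsets per the quoted post, for the XP/2003 .pf header layout;
# Vista and Windows 7 Prefetch files place these fields elsewhere.
LAST_RUN_OFFSET = 0x78   # 64-bit FILETIME (100-ns ticks since 1601-01-01 UTC)
RUN_COUNT_OFFSET = 0x90  # 32-bit run counter

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def parse_pf_metadata(data):
    """Pull the embedded last-run time and run count out of raw .pf bytes."""
    (filetime,) = struct.unpack_from("<Q", data, LAST_RUN_OFFSET)
    (run_count,) = struct.unpack_from("<I", data, RUN_COUNT_OFFSET)
    last_run = FILETIME_EPOCH + timedelta(microseconds=filetime // 10)
    return last_run, run_count

def metadata_updates(last_run, now):
    """The update rule described in the quoted research: the counter and
    last-run date are only rewritten when at least 120 seconds have passed.
    Note the signed comparison: rolling the clock *back* makes the delta
    negative, which also fails the test -- the clock-rollback trick."""
    return (now - last_run).total_seconds() >= 120
```

For example, a launch with the system clock rolled back to 2002 against a July 2011 last-run date yields a negative delta, so metadata_updates() returns False and the .pf metadata would be left untouched, exactly the trick the quoted post describes.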

Books
I've known for some time that various courses make use of my books, either as recommended reading or as required texts.  For example, I understand from some recent emails that Utica College uses some of my books in their Cyber Security curriculum.  As an author, this is pretty validating, and in a way, better than a review; rather than posting a review that says what's in the book, the instructors are actually recommending it or using it. Also, it's great marketing for the books.

Below is a recommendation for Windows Registry Forensics from Andy Spruill (Senior Director of Risk Management/FSO, GSI), posted here with his permission:

I don’t know anyone who is on the fence about your book.  As far as I am concerned, it is a mandatory item for anyone in this field. 


I have a copy sitting in the lab here at Guidance and another sitting in the lab at the Westminster Police Department, where I am a reserve officer with their high-tech crimes unit.  I have another personal copy that I use as an Adjunct Instructor at California State University, Fullerton, where I teach a year-long certificate program in Computer Forensics. 


As an author, I usually sit back after a book has been out for a while and wonder if the information is of use to folks out there in the community; is the content of any benefit?  I see the reviews posted to sites (Amazon, blogs, etc.) but many of them simply reiterate the table of contents without going into whether the reviewer found the information useful or not.  I get sporadic emails from people saying that they liked the book, but don't often get much of a response when I ask what they liked about it.  So when someone like Andy, with his background, experience, and credibility, uses and recommends the book, that's much better than a review.  This isn't me suggesting to folks that it's a resource...after all, I'm the author, so what else am I going to say?  It's when someone like Andy...a practitioner and an instructor, teaching up-and-coming practitioners...says that it's a resource that the statement carries real credibility.  So, a great big "thanks" to Andy, and to all of the other instructors, teachers, mentors, and practitioners out there who recommend books like WRF and DFwOST to their charges and colleagues.

Analysis
I recently posted on Jump List and Sticky Notes analysis, and also released a Sticky Notes parsing tool.  As of 11am, 31 Aug, there were just 10 downloads.  One of the folks who downloaded the tool has apparently actually used it, and sent me an email...I received the following in that email from David Nides (quoted here with his permission):

Time after time I see examiners that aren't performing what I would consider comprehensive analysis because they don't go beyond push buttons forensics.

This is something I've mentioned time and again, using the term "Nintendo forensics".  Chris Pogue also discusses this in his Sniper Forensics presentations.  When developing the tools I wrote for parsing Jump Lists and Sticky Notes, I didn't find a great number of posts on the Interwebs from folks asking for assistance in parsing these types of files...in fact, I really didn't find any.  But I do know folks who are currently (and have been) analyzing Windows 7 systems; when doing so, do they understand the significance of Jump Lists and Sticky Notes, and are these artifacts being examined?  Or is most of the analysis that's being done out there simply a matter of loading the acquired image into a commercial forensic analysis application and clicking a button?
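For anyone curious what going beyond push-button forensics looks like at the byte level, here's a small sketch: Windows 7 Jump List *.automaticDestinations-ms files are OLE compound documents, and the DestList stream inside them starts with a short header.  The layout below (version, entry count, pinned-entry count as the first three 32-bit fields) comes from community reverse engineering, not Microsoft documentation, so treat it as an assumption to verify against your own known-good samples.

```python
import struct

def parse_destlist_header(data):
    """Parse the first three fields of a Jump List DestList stream header.

    Layout per community reverse engineering of Windows 7 (unofficial;
    verify against known-good samples before relying on it):
      offset 0x00: format version (1 on Windows 7)
      offset 0x04: number of entries in the stream
      offset 0x08: number of pinned entries
    """
    version, entries, pinned = struct.unpack_from("<III", data, 0)
    return {"version": version, "entries": entries, "pinned": pinned}
```

Note that this function only interprets the raw stream bytes; you'd first extract the DestList stream from the compound file with an OLE-aware library (olefile, for instance).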

Windows 8
What?  Windows 8?!?  We were just talking about Windows 7, and you've already moved on to Windows 8...and perhaps rightly so.  It's coming folks...and I ran across this interesting post regarding improvements in the file operations (copy, move, etc.) experience.  There are some interesting statistics described in the blog post, which were apparently derived from analysis of anonymous data provided by Windows 7 users (anyone remember Dr. W. Edwards Deming??).  The post indicates that there's some significant tracking and optimization within the new version of Windows with respect to these file operations, and that users are granted a more granular level of control over these operations.

Okay, great...but in the words of Lon Solomon (who's a fantastic speaker, by the way...), "so what?"  Well, if you remember when Windows XP came out, there was some trepidation amongst the DFIR community, with folks up in arms, screaming, "what is this new thing?!?"...yet over time, we've come to realize that for the sake of the "user eXPerience", there are significantly more artifacts for analysts.  The same is true with Windows 7...so should we (DFIR analysts) expect anything less from Windows 8?

CDFS
If you have had any thoughts or questions regarding the CDFS, or why you should join, here's another resource that provides an excellent view into answering that question.  This is a timely post, considering this post that rehashes issues with accreditation and certification in the DFIR industry.  Yes, I joined this week, and I'm looking forward to the opportunity to have a say in the direction of my chosen profession.

Google Artifacts
Imagine a vendor or software developer actually providing forensic artifacts...yeah, it's just like that!  It seems that Google is doing us DFIR folks a favor and providing offline access to GMail.  Looking at some of the reviews for the app, it doesn't look as if there's overwhelming enthusiasm for the idea, but this is definitely something to look for and take advantage of if you find it.

13 comments:

Anonymous said...

Could it be related to NTFS tunneling effect?

Keydet89 said...

Anonymous,

Could what be related to NTFS tunneling?

troy said...

Regarding the prefetch, this is correct. I relearned this the hard way. I had forgotten about it when I was doing my Vista research and couldn't see the fields updating in the prefetch files during my testing. After pulling out my hair for a while, I remembered to wait two minutes. Fields updated.

Some of the reasoning for this behavior is that there should not be significant change to record between two quick, successive launches--particularly when such activity is often the result of problems.

Keydet89 said...

Troy,

Thanks for that...that's really helpful.

Anonymous said...

I mean the discrepancy in prefetch metadata.

Keydet89 said...

No, b/c this doesn't apply to the creation dates stored in the MFT entry...it has to do with the embedded metadata time stamp, stored within the file itself.

HTH

Stefan said...

Harlan,

What about malware? Let's say an intruder gains access to a system, and copies over some malware...what would be the purpose of launching it several times, within a 2 min period?

I could think of click-fraud malware but am unsure as to how prevalent this really is.

Keydet89 said...

Stefan,

Is that implemented through the browser? I ask, because I don't think I've seen Prefetch artifacts for click-fraud malware...are there any? Just because I haven't seen them doesn't mean that there aren't any, which is why I'm curious if there would be any Prefetch artifacts for this type of malware...

Tom Harper said...

Harlan - good point about Win 8. Watching the Windows Forensics cycle mature has been interesting:

while Windows OS exists, do:

1. OSDF examiner finds an artifact and publishes;
2. OSDF examiner writes a FOSS script or tool to harvest the information from an artifact in the current version of Windows;
3. Commercial vendor incorporates the artifact into a product which automates the recovery process;
4. New examiners become reliant on reading the data from the commercial tool;
5. Windows OS changes and new/more forensic artifacts become available (or old artifacts are replaced or deleted) - but the examiners only know how to read the commercial product output so the potential information is out there untapped...

Loop


This process is complicated further by:

- Increasing computing power leading to a larger computing "experience" and personally customizable computing environment, thereby adding artifacts;

- Increasing data storage leading to exponential increase in the volume of artifacts;

- Decreasing attenuation between OS version releases, which decreases the amount of "discovery" time for new artifacts and incorporation into commercial tools;

- Increasing number of platforms/environments where artifacts may be found.

So what does this mean to the DFIR practitioner? Get back to basics!

1. Understand how a computer operates (and how to make it do things, i.e., start programming);
2. Know what you are looking for;
3. Know how to find it, regardless of the artifact it resides in;
4. Know how to extract it from the artifact;
5. Know how to render and interpret the extraction to explain its relevance to the investigation.

This is important because the stages of metamorphosis in computing are starting to blur and the change is becoming more constant. We must adapt to the paradigm or go hungry. We are no longer pulling fish we can see from a clear, still pool; instead, we are relying on what the rod and the line tell us in order to snatch them from the whitewater rapids. Commercial vendors can't keep up with that type of change.

Learn HOW to FISH, not just how to operate a rod and reel.

Are you an Angler or an Operator?

Keydet89 said...

Tom,

Thanks for the comment. Great to see that you're still alive. ;-)

Decreasing attenuation between OS version releases, which decreases the amount of "discovery" time for new artifacts and incorporation into commercial tools

Okay, I don't take exception to this, but I do think that if this is the way an analyst is going to think and be, then that just ain't a good thing. ProDiscover was the first commercial analysis app that I'm aware of to add parsing of Jump Lists, but even so, it still doesn't parse the DestList stream. But that's just one example.

I think numbers 2-5 on your list are what's holding up a lot of analysts. What ARE you looking for? Where do the artifacts normally reside? What do you do when something isn't "normal"? Extracting, rendering, and interpreting artifacts are a huge challenge, as well.

Tom Harper said...

Okay, I don't take exception to this, but I do think that if this is the way an analyst is going to think and be, then that just ain't a good thing.

???

Maybe I used a poor choice of words when I said "decreasing attenuation." The point was, the new versions of the Windows OS are being released with increasing frequency, so there is not enough time to assimilate the artifacts as there was with WinXP. Add to that the increasing number of platforms (mobile, tablet, etc.) and we get a situation where commercial products won't be able to keep up with the artifacts. We are practically there already just with Win OS artifacts - and we haven't even talked about proprietary software application artifacts yet.

The folks solving the cases will be FOSS practitioners who are using the proper techniques, not the latest tool.

So,

"We should get our techniques straight." That's all I was trying to say.

Keydet89 said...

Tom,

...so there is not enough time to assimilate the artifacts as there was with WinXP.

I think your choice of words was fine, and I got what you were trying to say. But I think that there has been more than enough time to begin identifying Windows 7 artifacts...it simply hasn't been done. I see and hear the question all the time..."what's new in Windows 7??". What I'm not hearing/seeing a great deal of is, "hey, look what I found..." (to be clear, I am seeing it, but not nearly as much as I hear the previous question).

Add to that the increasing number of platforms (mobile, tablet, etc.) and we get a situation where commercial products won't be able to keep up with the artifacts.

Analysts cannot expect commercial applications to "keep up", nor to provide guidance and direction as to the analysis techniques they employ.

Commercial apps tend to get their direction and feature list from the predominance of their customer base, and if those users aren't aware of the artifacts/techniques and don't ask for those to be added, they won't appear. This, I think, is the real strength of open source tools.

...and we haven't even talked about proprietary software application artifacts yet.

Such as?

The folks solving the cases will be FOSS practitioners who are using the proper techniques, not the latest tool.

Agreed. Now, if we can just get those folks to share and talk about their successes, and their failures.

A. Thulin said...

Your notes on the prefetch analysis help point out that if computer forensics claims to be one of the forensic sciences, research (such as that mentioned) needs to be performed and published following a baseline scientific protocol.

Without an established test protocol, there's no reliable way to repeat the research -- we don't know the test platform, we don't know how it was prepared prior to the test, we don't even know if the analysis was done on live files or dead files, all of which may influence the result. Nor is there a base to start from when a new OS release comes along.

And consequently, there is no way to examine the test methodology for flaws -- it could so easily be that the tester overlooked something in the functionality of the prefetcher or the rest of the operating system or in test result collection that makes the entire analysis meaningless.

While anecdotal evidence has its place as a starting point for research into CF artifacts, I very strongly believe that if there is no critical verification, the information cannot be trusted as evidence one way or another.

Incidentally, it does happen that an intruder launches multiple instances of the same program. Password sniffers, one per network interface, are an example I have seen myself. It's unlikely to be affected by a 120-second limit, though, as this is typically done by a script within a single second or so.