
Monday, April 11, 2011

Links and Stuff

Digital Forensic Search
Corey Harrell's done some pretty interesting things lately...most recently, he set up a search mechanism that targets a subset of Internet resources specific to the digital forensics community.  Sometimes when we're searching for something, we head off to our favorite search engine and cast a wide net...and many of the initial hits may not be pertinent to what we're looking for; by narrowing the field a bit, more relevant hits may be returned.

One of the issues with the community is that there's a lot of good information out there, but it's scattered.  Many analysts have expressed frustration that they can't seem to find what they're looking for when they need it, and that they don't know they need it until...well...they need it.  I've also talked to people who've done hours of research but not documented any of it, so when the issue they were working on comes up again, they have to go back and redo all of that research.

Rootkit Evolution
Greg Hoglund posted to his Fast Horizon blog recently, and the title...Rootkit Evolution...sparked my curiosity.  Sadly, when I read the post, it wasn't much more than a sales pitch for Digital DNA, whereas I had expected...well...something about the evolution of rootkits, which is what the title suggested.  One statement from the post did catch my interest, however:

...we are still ahead of the threat.

While I don't disagree with this, I would suggest that attackers may find that it isn't necessary to employ rootkit technology.  Now, don't get me wrong...I'm sure that rootkits do get used.  But for the most part, is it really necessary?  Look at many of the available annual reports, such as the Verizon Business Data Breach Investigations Report, M-Trends, or TrustWave's Global Security Report...one commonality you may see across the board is considerable persistence without the need to deploy rootkits.

So...is the research important?  Yes, it is.  Rootkits are still being used (see the Chinese bootkit), and now and then one pops up (well, not really...someone finds one...) during an incident, as a well-designed rootkit can be very effective.  It's just like NTFS alternate data streams...as soon as the security community considers them passé and stops looking for them, that's when they'll make a resurgence and be used more and more by the bad guys.

What a Tweet Looks Like
Ever wondered what a tweet looks like?  I'm sure you have!  ;-)  By way of a couple of different links comes a very interesting write-up of what a tweet looks like from a developer's standpoint...click on the big picture in the middle of the post to enlarge the map-of-a-tweet (or "Twitter Status Object").  Most forensic analysts will likely look at the map and see the value right away.

Okay, so how would you get at this?  This sort of information would likely be in some unstructured area of the disk, right...the pagefile or unallocated space.  So, if you were to run strings or BinText against the pagefile or unallocated space extracted from an image via blkls, you would end up with a list of strings along with their offsets within the data.  What I've done is write a Perl script that goes into the data at the offset that I'm interested in, and extracts however many bytes on either side of the offset that I specify.  I've used this methodology to extract not only URIs and server responses from the pagefile, but also Windows XP/2003 Event Log records from unallocated space, translating them directly to timeline/TLN format.  Doing this provided me with a capability that went beyond simply carving for files, as I needed to carve for specific, perhaps well-defined data structures.

Something like this could be used to quickly and easily extract tweets from unallocated space or the pagefile.  Run strings/BinText, then search the output to see if you have any unique search terms, such as a user name or screen name.  Then, run the script that goes to the offset of each search term and extracts the appropriate amount of data.  This can be extremely valuable functionality to an examiner, and can be added to an overall data extraction process using free and open source tools.
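The original script was written in Perl, but the idea translates directly; here's a minimal Python sketch of the extract-a-window-around-an-offset step described above (the file name, offset, and window sizes are hypothetical):

```python
# Minimal sketch: given the byte offset of a search hit (taken from
# strings/BinText output), pull a window of surrounding raw bytes so
# the data structure around the hit can be examined.

def extract_window(path, offset, before=128, after=128):
    """Return up to `before` bytes preceding and `after` bytes
    following `offset` in the file at `path`."""
    with open(path, "rb") as f:
        start = max(0, offset - before)
        f.seek(start)
        return f.read((offset - start) + after)

# Hypothetical usage: pull 128 bytes either side of a hit at offset
# 0x1A2B3C in data extracted from an image via blkls:
#   chunk = extract_window("unalloc.bin", 0x1A2B3C)
```

From there, parsing the extracted window (for a tweet, a URI, or an Event Log record) depends entirely on the structure you're carving for.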

Writing Open Source Tools
The above section, the imminent publication of Digital Forensics with Open Source Tools (the book was Cory Altheide's idea, he was the primary author, and it is due to be published on 15 April), and the upcoming Open Source Forensics Conference (at which Cory and I will both be speaking...) all combine to make a good transition to some comments on writing open source tools.  This is also a topic that Cory and I had considered addressing in the book, but we decided that it was too big for a sidebar and didn't quite fit anywhere in particular.  After all, with all of the open source tools discussed in the book, we would really need input from others to do the topic justice.  As such, I thought I could post a few comments here...

For me, writing open source tools starts as a way to serve my own needs when conducting analysis.  Throughout my career, I have had access to commercial forensic analysis applications, and each has served its purpose.  However, as with any tool, these applications have their strengths and weaknesses.  When conducting PCI forensic investigations, a commercial application made it easy to set up a process that all of the analysts could employ, but we also found that some of the built-in functions were not exactly accurate, and that affected our results.  We ended up seeking outside assistance to rewrite those functions, in order to get something more accurate and better suited to our needs.  We would then export the results and run them through an open source process to prepare an accurate count of unique card numbers, etc.
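As an illustration of that kind of open source post-processing step, here's a hedged Python sketch that deduplicates exported number hits and counts only those passing the Luhn check; the function names are mine, and the actual process used isn't described in the post:

```python
# Sketch: count unique candidate card numbers that pass the Luhn
# checksum, filtering out false positives among the exported hits.

def luhn_valid(number):
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def unique_valid_count(hits):
    """Count unique all-digit hits that pass the Luhn check."""
    return len({h for h in hits if h.isdigit() and luhn_valid(h)})
```

A step like this is trivial to script, but surprisingly easy to get wrong inside a closed commercial tool where you can't inspect the logic.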

So, sometimes I'd write an open source tool in order to massage some data into a format better suited to presentation or dissemination.  However, there have been other times when no commercial application had the functionality I needed, so I wrote something to meet my needs.  A great example of this is the MBR infector detector script.  Another is the script I wrote to carve Windows XP/2003 Event Log records from unallocated space.

I can guess that one response to all this is going to be, "...but I don't know how to program...", and my response to that is, you don't have to...you just have to know someone who does.  Not every analyst needs to know how to program, although many analysts out there can tell you that understanding programming (anything from batch files all the way to assembly...) can be extremely beneficial.  Likewise, having someone who understands what you do and can program can be extremely valuable when it comes to DFIR work.

Too many times, when it comes to DFIR work, analysts are sort of left on their own.  Business models often dictate the necessity for this...but having a support mechanism for engagements of all kinds can be an extremely effective means of extending your team's capabilities, as well as preserving corporate intellectual property.

Even if you aren't part of a DFIR team, you can still develop and take advantage of this sort of relationship.  If you know someone within the community with programming skills, what does it hurt to seek their assistance?  If they, in turn, provide you with effective, timely support, then you have a great opportunity to further the relationship by supporting them in some manner...even if that's just a "thank you" for their efforts.  Many folks with some programming capabilities are simply seeking new challenges and new opportunities to learn, or employ their skills in new ways.  So when it comes to writing open source tools, many times, the only real "cost" involved is a "thank you" and acknowledgement of someone's efforts to support you.

Scanners
Lenny's got a post up that lists three tools for scanning the file system for malware with custom signatures.  These are all excellent tools; in fact, if you remember the instructions (from MHL) I provided for installing pescanner.py on Windows systems, two of the tools that Lenny mentions (ClamAV, Yara) can be included in the pescanner.py setup, with their signatures used to locate suspicious files.
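To illustrate the general idea, and only the idea (ClamAV and Yara use far richer rule formats than this), here's a toy Python sketch of signature-based scanning; the byte patterns and labels are purely illustrative:

```python
import os

# Purely illustrative byte signatures; real scanners use far more
# sophisticated rule formats (e.g., Yara rules, ClamAV databases).
SIGNATURES = {
    b"MZ": "DOS/PE executable header",
    b"UPX!": "UPX packer marker",
}

def scan_file(path, signatures=SIGNATURES):
    """Return the labels of all signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [label for sig, label in signatures.items() if sig in data]

def scan_tree(root, signatures=SIGNATURES):
    """Walk a directory tree, returning {path: [labels]} for hits."""
    hits = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            found = scan_file(path, signatures)
            if found:
                hits[path] = found
    return hits
```

The real tools add the critical pieces this toy omits: maintained signature databases, wildcard and condition logic, and the ability to scan inside archives and packed files.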

Signatures are one way to locate malware and other suspicious files on a system.  However, signatures change and must be kept up to date.  You can also use signatures to locate packed files, as well as files "hidden" using other obfuscation methods.

Keep in mind, however, that this is only part of the solution.  Because signatures within malware files do change, we also need to consider the network, memory, and other parts of the system (i.e., the Registry, etc.) to look for indicators of malware.  In fact, many times, these may be our first indicators of malware.  I've found previous infection attempts where malware had been loaded on a system, only to be detected and quarantined by the installed AV product; I could see the names of the files within the AV and Application Event Logs.  Interestingly enough, files of the same name were created a couple of weeks later, indicating that the bad guy had obfuscated his malware so that the AV wouldn't detect it, and was then able to get it successfully installed.

There's more to malware detection than just scanning files for signatures.  If all you have is an acquired image from a system, and a malware infection is suspected, there are a number of other things you can look at in order to find ancillary indicators of an infection.  Scanners should be part of the malware detection process, not the whole of it.
