Thursday, August 03, 2006

New Hashing

I blogged the other day about Jesse Kornblum's "fuzzy hashing" technique...I can't wait until Tuesday of next week to see his presentation at GMU2006. I think this would be very useful in cases where you're able to use tools such as lspi.pl to extract a binary image from a Windows memory dump.

Andreas posted last night on "authenticating" a reconstructed binary by hashing its immutable sections separately. This, too, is a good idea, but as with Jesse's technique, it changes how things are currently done with regard to hashing. Right now, there are lists of file names, metadata, and hashes made available through various sources (NIST, vendors, etc.) that are used as references. There are lists of known-bad files, known-good files, malware, etc. These lists are good for static, old-school data reduction techniques (which are still valid). However, as we continue moving down the road to more sophisticated systems and crimes, we need to adapt our techniques, as well. Tools and techniques such as "fuzzy" hashing and "piecewise" hashing (on a by-section basis, based on the contents of the PE headers) will only help us.
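To make the "piecewise" idea concrete, here's a toy sketch (my own illustration, not Jesse's actual implementation or Andreas's method): instead of one hash over the whole file, hash fixed-size blocks separately, so a small edit only disturbs the hashes of the blocks it touches. A real by-section approach would use the section offsets from the PE headers instead of fixed block boundaries.

```python
import hashlib

def piecewise_hashes(data: bytes, block_size: int = 512):
    """Hash each fixed-size block of data separately.
    A one-byte change alters only the hash of the block containing it."""
    return [hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def block_match_ratio(hashes_a, hashes_b):
    """Toy similarity metric: fraction of aligned blocks whose hashes match."""
    matches = sum(a == b for a, b in zip(hashes_a, hashes_b))
    return matches / max(len(hashes_a), len(hashes_b))

# Example: flip one byte in a 2048-byte buffer (4 blocks of 512 bytes);
# only 1 of the 4 block hashes changes, so 3/4 still match.
original = b"A" * 2048
modified = bytearray(original)
modified[100] = 0x42
ratio = block_match_ratio(piecewise_hashes(original),
                          piecewise_hashes(bytes(modified)))
```

A whole-file hash would simply report "different" here; the block-level hashes show that most of the binary is unchanged, which is exactly why this style of hashing helps when comparing a reconstructed image against a reference copy.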

2 comments:

Geoff said...

What happened to Jesse Kornblum this morning? I was in the classroom anticipating his presentation on new hashing, but he didn't show up.
I hope he's here for the afternoon class.

Keydet89 said...

I'm not 100% sure what happened...ask him when he gives his presentation.