Saturday, May 19, 2007

Litchfield on Oracle Live Response

Thanks to Richard Bejtlich, I learned this morning that David Litchfield, famed security researcher with NGSSoftware, has released a paper entitled Oracle Forensics Part 4: Live Response. In that paper, David starts off by discussing live response in general, which I found to be very interesting, as he addresses some of the questions that we all face when performing live response, particularly those regarding trust and assurance...trusting the operating system, trusting what the tools are telling us, etc.

David's paper highlights some of the aspects of live response that every responder needs to be aware of...in particular, when the first responder arrives on-scene and wants to collect volatile data, she will usually start by assessing the situation and then, when she's ready to collect that volatile data, insert a CD full of tools into the CD-ROM drive. From David's paper:

When they insert the CD and run one of the tools, due to the way Windows launches new processes, the tool will have key system dynamic link libraries in its address space, i.e. the memory the tool uses.

Great point...but keep in mind that at this point in time, during live response, there really isn't any way to avoid this situation. It happens, and it has to happen. The key to live response isn't how to keep it from happening...rather, it's to have a thoroughly documented process that lets you address the situation head on.

One of the main concerns about live response is often, if we do live response and have to take the information to court, how do we prove that our investigation did not modify the "scene" in any way, and that everything is pristine? The fact is...we can't. Nor should we try. Instead, we need to have a thorough, documented process, and be able to show that our actions did modify the "scene" (via the application of Locard's Exchange Principle or Heisenberg's Uncertainty Principle...), just as an EMT's actions will modify a real-world crime scene; as investigators, we should be looking at the totality of the information or evidence that we're able to collect and examine.

So, in a nutshell, while it is possible that the tools we loaded and ran on the system to collect volatile data were themselves compromised by a patched version of ntdll.dll in memory, what does the totality of the information tell us?

One thing I would suggest is that when you're reading David's excellent paper and you get to the General Steps of Live Response section, refer back to the Order of Volatility. Dave is correct in that the application-specific information (about Oracle, in this case) should be collected last, but IMHO the first thing that should be collected, as soon as possible, is a complete snapshot of physical memory (check out the sample chapter of my book, Windows Forensic Analysis). The reason I would suggest collecting the contents of physical memory first has to do with David's description of process creation...when a process is created, an EPROCESS block and all of the other necessary structures (at least one ETHREAD block) are created, consuming memory. This means that as processes are created, pages used by other processes may be paged out to the pagefile. Knowing this, we should collect as much of the contents of RAM as possible before moving on to collect specific items, such as running processes, or the memory contents (RAM + pagefile) of those processes, etc.

Okay, enough about live response for now...this is a topic that deserves its own space.

I found David's paper to be particularly interesting, as some of the work I've been involved with (and will likely continue to be involved with) has had to do with databases: was the database compromised, and if so, was sensitive information extracted from it? I'm not a database guy (i.e., a DBA), but I do need to know some things about databases; per David's suggestion, it's often best for an incident responder to work shoulder-to-shoulder with an experienced DBA, bringing the forensics mindset (and requirements) to the table.

If you're interested in database security in general, check out David's databasesecurity.com site for more information and books related to database security. For additional information about other database topics, I picked up a link at Andrew Hay's blog pointing to the Comprehensive SQL Injection Cheat Sheet (well, the cheat sheet is actually here). This resource is invaluable to anyone performing forensic analysis of a potentially compromised system, particularly if it either has a web server installed or acts as the database back-end for a web-based system. Hint: any reference in web server logs to SQL stored procedures is worth looking at!
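
For instance, here's a purely hypothetical example of the kind of thing to look for: a request parameter such as id=1;EXEC+master..xp_cmdshell+'dir'-- appearing in a web server log decodes to a piggy-backed query along these lines (the table and column names here are made up):

    -- the application's intended query (hypothetical)
    SELECT name, price FROM products WHERE id = 1
    -- ...with the attacker's call to an extended stored procedure appended
    ;EXEC master..xp_cmdshell 'dir' --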

5 comments:

  1. I'm in the midst of reading the paper but I just had a thunk.

    Using Vista, I wonder if there is a way to write code that uses ReadyBoost instead of physical memory to execute itself. Would this spare the necessary tainting of physical memory...

  2. Hogfly,

    Excellent thought! However, I don't know enough about ReadyBoost (yet) to be able to answer this question thoroughly. I would think that there has to be some correlation between ReadyBoost and main memory, so that the memory manager knows where everything is. Perhaps such a technique would minimize the impact of the responder's actions on the system.

    There definitely is a correlation, as ReadyBoost isn't used the way main memory is, but like you say it *may* minimize the impact, which is what I was thinking...

    Oh yeah, in the paper David seems, to a degree, to validate your Corollary:

    "if a Live Response pulls no suspicious information but other networked devices such as firewalls or NSMs indicate there has been a compromise then this becomes evidence in and of itself"

  4. Validation from a respected figure in the community...wow, can't beat that!

    An excellent paper. A few things I thought should have been added (from a DBA's point of view):

    1. Selecting from v$database and v$instance for uptime and other information about the database (example queries are sketched after this list).

    2. Selecting from v$datafile to potentially get the last hot backup time, if any.

    3. Selecting from v$loghist, v$logfile, etc., to get information about the redo logs that have been written; you can potentially spot activity when none is expected, as well as have a list in case you decide you want to restore.

    4. Talk about using log mining (LogMiner) and restoring the database from hot or cold backups to potentially compare against (checksums, etc.).

    5. Use of perfstat in 8i and 9i databases, if it is installed.
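
    For example, queries along these lines (just a sketch...the exact columns vary by Oracle version) would cover items 1 through 4:

        -- 1. Uptime and general information about the database and instance
        SELECT name, created, log_mode FROM v$database;
        SELECT instance_name, host_name, startup_time FROM v$instance;

        -- 2. Last hot backup time, if any
        SELECT file#, status, time FROM v$backup;
        SELECT file#, name, checkpoint_time FROM v$datafile;

        -- 3. Redo log history, to spot activity when none is expected
        SELECT thread#, sequence#, first_time FROM v$loghist ORDER BY first_time;
        SELECT group#, member FROM v$logfile;

        -- 4. Checksum stored source now so it can be compared against a
        --    restored backup later (dbms_utility.get_hash_value is just one
        --    option for this)
        SELECT owner, name, type,
               SUM(dbms_utility.get_hash_value(text, 1000, 1073741824)) AS src_hash
          FROM dba_source
         GROUP BY owner, name, type;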

    Now, there is no mention of preparing for an incident, like getting a listing of objects, privileges, etc. I will use the following example as to why you might want to do this.

    In an Oracle Applications 11i database, running the SQL in the following sections of David's paper yields the following counts:

    Getting a list of object privileges: 257,629 entries
    Getting a list of all objects: 241,277 entries
    Getting a list of block changes: 241,277 entries

    Now this is a lot of data to wade through, especially if you have not made any preparations for going through it. I also ran some SQL to get counts of the following types of objects, to show how huge the task of running some of David's SQL will be (getting sources, transferring them over the wire, and getting checksums):
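
    A query along these lines (a sketch only...dba_objects grouped by type) produces such counts:

        SELECT object_type, COUNT(*)
          FROM dba_objects
         GROUP BY object_type
         ORDER BY object_type;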

    Counts of Objects

    FUNCTION 121
    JAVA CLASS 10798
    JAVA DATA 293
    JAVA SOURCE 9
    MATERIALIZED VIEW 386
    PACKAGE 38078
    PACKAGE BODY 36990
    PROCEDURE 260
    TABLE PARTITION 461
    TRIGGER 2695
    VIEW 28358


    Now, this is only one of the many packages out there, and they keep on getting bigger and bigger.

    Another thing to mention is that you talk about how, once a system is compromised, you cannot trust it, and that you should rebuild it from scratch so you can then trust it. How would you do this with a database? You have information that has been out there for the entire life of the system, and you might not be able to trust it. How do you rebuild it? If you have a data warehouse, you may be able to cross-reference and compare some of the data, but that may take a while, and can your organization afford for the system to be down? How do you explain this to the auditors who walk through the front door? Some food for thought.

    Mark
