For the past couple of days, I've been writing Perl scripts to parse binary data on Windows systems...I've been staring at 1s and 0s, hex, parsing strings, etc.
My first exercise was to parse PE headers...and the script works very well for legit PE files. By reviewing the information available on MSDN, and correlating that information with other sources (sorry, guys, but there are some holes in your docs!), I have been able to parse all the way down past the data directories and into the section headers. Very cool! There's still a lot of work to do to make this script really useful, but developing it has been very beneficial in understanding PE headers and malware.
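For anyone who wants to try this themselves, here's a stripped-down sketch of the mechanics (this is not my full script, just the idea): read e_lfanew from offset 0x3C of the DOS header, seek to the "PE\0\0" signature, and unpack() a few IMAGE_FILE_HEADER fields. Treat the file name as a placeholder.

    #!/usr/bin/perl
    # Sketch: locate the PE header and pull a few IMAGE_FILE_HEADER fields
    use strict;

    my $file = shift || die "Usage: pe.pl <file>\n";
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);

    my $dos;
    seek(FH, 0x3c, 0);            # e_lfanew lives at offset 0x3C of the DOS header
    read(FH, $dos, 4);
    my $e_lfanew = unpack("V", $dos);

    my $data;
    seek(FH, $e_lfanew, 0);
    read(FH, $data, 24);          # "PE\0\0" plus the start of IMAGE_FILE_HEADER
    my ($sig, $machine, $num_sect, $timedate) = unpack("V v v V", $data);
    die "PE signature not found\n" unless ($sig == 0x4550);
    printf "Machine: 0x%-6x Sections: %-4d TimeDateStamp: 0x%x\n",
        $machine, $num_sect, $timedate;
    close(FH);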
Now, I'm working on a Perl script to parse .evt files manually, by opening the file in binmode() and parsing the byte stream. The problem I'm having is that even though the EVENTLOGRECORD structure is well documented at the MSDN site, I have not been able to find any information about the data located between offset 0 of the file and the offset of the first record (which itself seems variable, depending on log type, operating system, etc.). Byte alignment is important, so I know that the API has some inherent method for locating the various records. However, I'm trying to read in the file, basically, a byte at a time...does anyone have any information about the .evt file header? I'd like to figure out how to parse and make sense of this data.
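In the meantime, here's the approach I'm playing with for the records themselves...a sketch only; it sidesteps the header question entirely and just scans for the "LfLe" magic number from the documented EVENTLOGRECORD structure:

    #!/usr/bin/perl
    # Sketch: scan an .evt file for the "LfLe" magic number that starts each
    # EVENTLOGRECORD and unpack a few of the documented fields
    use strict;

    my $file = shift || die "Usage: evt.pl <file>\n";
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);
    my $data;
    {
        local $/;                 # slurp the entire file
        $data = <FH>;
    }
    close(FH);

    my $ofs = 0;
    while (($ofs = index($data, "LfLe", $ofs)) > -1) {
        # The record Length (DWORD) sits 4 bytes before the magic number;
        # RecordNumber, TimeGenerated, TimeWritten and EventID follow it.
        # Note: the file header may also contain this signature, so
        # sanity-check the unpacked values.
        my ($len, $magic, $recnum, $timegen, $timewrt, $eventid) =
            unpack("V6", substr($data, $ofs - 4, 24));
        printf "Record %-6d EventID %-6d %s\n",
            $recnum, $eventid & 0xffff, scalar(gmtime($timegen));
        $ofs += 4;
    }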
While this whole thing seems like a pointless exercise, there is a method to my madness. For example, I can use scripts like this with ProDiscover 4.0 to reduce the time it takes to analyze a system. If you know what it is you're looking for, you can automate the activity, increasing efficiency, reducing mistakes, etc. So let's say I have a ProScript (ProDiscover uses a Perl module called ProScript to implement Perl as its scripting language) that I use to identify and copy files from an image. I can then run scripts against those files using...you guessed it...Perl in order to help me find only the suspicious things. This is referred to as "data reduction". By using the ProScript API, I can automate adding this information to my reports, as well.
Thursday, June 30, 2005
Friday, June 24, 2005
More on memory dump analysis, and some other stuff
First, I'm back from the MISTI "Cracking eFraud" conference. It was great to meet a lot of the folks there, both other presenters and attendees.
I attended Brian Carrier's presentation entitled "Live Forensic Analysis". Brian was kind enough to sign my copy of his book for me. I also need to get a copy of Dan Farmer's book, as well.
While I enjoyed Brian's presentation, I have to say that there wasn't a great deal of "analysis" discussed. Yes, Brian covered tools and techniques, talked about rootkits...but analysis techniques weren't really discussed. However, I don't think that a one-hour presentation was really the venue for that sort of topic. I gave a presentation earlier in the morning on looking in the Registry for specific data, and went over by a few minutes...even though I felt that I was glossing over a lot of things.
Maybe the whole idea of presenting on "analysis" really needs to be a full-day presentation, with hands-on exercises, etc.
Anyway, I wanted to blog on memory dump analysis a bit more this morning. What I'm talking about here is using tools such as dd.exe to grab the contents of physical memory; i.e., RAM. As I blogged earlier, on Windows systems, if you want to grab an "image" of physical memory, you have to generate a crash dump, in which the system halts and the contents of physical memory are written to a file. However, most of the systems we would likely see aren't set up for this...so lots of folks, including LEOs, are using dd.exe to dump the contents of physical memory.
Once you have this file, what do/can you do with it? Well, the first and most obvious thing is to run strings.exe against the file or open it up in BinText. You might also run scripts against the file, looking for specific types of strings, such as email addresses, IP addresses, etc. Such things might be useful.
Another thing you might do is to break the dump down into 4K pages and generate hashes for those pages, and then parse through the file system, doing the same thing for the files. Matches would let you know if the pages were loaded in memory (credit for that one goes to Dan Farmer).
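A quick sketch of what that page-hashing idea might look like in Perl, assuming a raw dd-style dump file (file name is a placeholder):

    #!/usr/bin/perl
    # Sketch: hash a memory dump in 4096-byte pages
    use strict;
    use Digest::MD5 qw(md5_hex);

    my $file = shift || die "Usage: pagehash.pl <dumpfile>\n";
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);

    my ($page, $ofs) = ("", 0);
    while (read(FH, $page, 4096)) {
        printf "0x%08x  %s\n", $ofs, md5_hex($page);
        $ofs += 4096;
    }
    close(FH);

Run the same hashing against 4K chunks of files on the image, and matching hashes point you at pages that were in memory.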
But what about parsing kernel structures? Microsoft has many of the available structures documented in MSDN, so we know what they look like. We know, for instance, that a particular structure is so many bytes long, based on the values in the structure, and we can parse these with computer code, either in C, or in scripting languages such as Perl (using unpack()). Knowing this, we only have one thing left that we need to know...the offsets. The image file itself starts at 0 (or 0x00000000)...we need some way of finding out where the Waldo structure begins in that file, before we can start parsing it.
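Just to illustrate the mechanics I'm talking about...if we knew the offset, the parsing itself is trivial. The structure layout below is a made-up placeholder, not an actual kernel structure:

    #!/usr/bin/perl
    # Sketch: seek to a known offset in a memory image and unpack a structure;
    # the field layout here is a hypothetical placeholder
    use strict;

    my ($file, $offset) = @ARGV;
    die "Usage: struct.pl <image> <hex offset>\n" unless (defined $offset);
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);

    my $buf;
    seek(FH, hex($offset), 0);
    read(FH, $buf, 16);
    # pretend the structure is two DWORDs followed by two WORDs
    my ($field1, $field2, $flag, $count) = unpack("V2 v2", $buf);
    printf "field1=0x%x  field2=0x%x  flag=%d  count=%d\n",
        $field1, $field2, $flag, $count;
    close(FH);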
So...what are your thoughts? Am I off base? Am I close? Do you have any input at all on how to determine the offsets...other than "ask Microsoft"? Are there any developers out there who can comment on this?
Saturday, June 18, 2005
Upcoming conferences
Next week, I'll be presenting at the MISTI "Cracking eFraud" conference in Boston. My presentation is on using the Windows Registry as a forensic resource. I'm presenting at (get this!) 0730 on Wed. If you're there, stop by and say hi...I promise that it'll be worth your time.
In August, I'll be giving a couple of presentations at GMU2005. I'll be presenting on document metadata on Windows systems, and on tracking USB devices across Windows systems. It looks like quite a bit of time is scheduled for each of these presentations, and I'm giving them both twice...so I may sneak in my MISTI presentation, as well...particularly since the USB data is incorporated in that presentation.
Once again, if you're there, stop by and say hi. At the end of my presentations, I like to give a pop quiz, with copies of my book as prizes.
Also, if you've got or know of a conference coming up where topics like this (or other topics concerning the forensic analysis of Windows systems) would be beneficial to the audience, let me know.
Memory collection and analysis follow-up
This topic is by no means closed...in fact, I think that the discussion of dealing with physical memory, memory analysis, etc., for forensics and/or malware analysis is just beginning. Given that I'm one of those folks who has spent the past couple of years nodding my head about using dd.exe to image RAM, simply because I didn't know any better, and given the number of folks I've run into who have done the same thing, it's pretty clear to me that this is an issue that needs to be addressed.
IMHO, the best way to approach this is to provide the knowledge, weigh the pros and cons, and let the user/reader make the decision on how to proceed. For example, sometimes, running tools such as the FSP would be the way to go...correlating and analyzing the results can get you a long way. However, those are user-mode tools and things might be missed...but then again, with the right combination of tools and analysis, you may be able to identify those things that are missed, at least...not what the data is, but just the fact that it's missing (think of this as akin to identifying the wind...we can't see it or taste it, but we know it's there based on how it affects the environment).
In other cases, such as malware analysis (and forensics involving specific processes running on the victim system), using the debugger tools to grab the contents of process memory, and then using the same debugger tools to analyze the information retrieved, might be more desirable. You'd get some of the same information about the process as you would with the user-land tools from the FSP (i.e., loaded modules, handles, etc.), but this might be the preferred approach. Putting the tools on a CD and writing the process memory dump file to a thumb drive would be a great way to handle this.
Now, when it comes to a full-out memory dump, so far as I've been able to determine, the only real way to do this is with a crash dump. Crash dumps can be triggered manually, as mentioned in my previous post (via the MS KB article), but doing so requires advance planning. I'd highly recommend incorporating these changes into malware analysis systems. I'd also recommend that the settings be incorporated into critical systems, as at the very least, analyzing the memory dump would make root cause analysis much easier. One of the methods MS mentions for performing troubleshooting is "send us your crash dump"...Oracle does this as well for their products. For forensic purposes, one has to take this into consideration, but I tend to believe that the benefits may outweigh the risks (i.e., overwriting exculpatory evidence when the full crash dump file is written to disk)...depending upon the situation.
I'd appreciate hearing your thoughts...considering that in the past couple of days, I've completely thrown out the idea of using dd.exe to image physical memory, and thrown out pmdump.exe for getting process memory. Now, I've got to go back and rewrite my already-published articles...
Thursday, June 16, 2005
RAM, memory dumps, and debuggers...oh, my!
One of my recent posts on obtaining the contents of physical memory using userdump.exe attracted the attention of some folks at MS, and I ended up having a long conversation with Robert Hensing on the topic of obtaining and analyzing the contents of (physical) memory.
The long and short of it is this...the tools and techniques you use all depend upon what you want to do. How's that for a direct answer? No, that wasn't Robert's response...he provided much more information than that...I'm just summing it up for you. I'm going to give you a short and sweet explanation, opening this topic up for discussion.
If you want to grab the contents of physical memory, you can do so with dd.exe...keeping in mind that tools like this, as well as LiveKD, don't lock kernel memory prior to performing their dump. Therefore, what you get is a smear, of sorts, as the contents of physical memory are changing all the time, while you're dumping those contents. Also, keep in mind that dd.exe does not produce output that is compatible with MS debuggers.
Now, if you're looking to get the memory used by a process, then the way to go is to download the MS debugging tools and run those from a thumb drive or CD. The debugging tools have a plethora of switches, but Robert was nice enough to write a VBScript called "adplus.vbs", included with the tools, that will let you easily dump the memory contents of multiple processes. The debugging tools do this by first suspending the process; after the dump is complete, you can then either resume or kill the process. The output can then be analyzed using the debugging tools.
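As an example of what I mean (the exact switches may vary with the version of the Debugging Tools you have installed, so check the documentation...treat this as a sketch; the path, PID, and output directory are placeholders), dumping a running process by PID in "hang" mode looks something like this:

    C:\debuggers>cscript adplus.vbs -hang -p 1234 -o e:\dumps

The "-hang" mode suspends the process, writes the dump to the output directory, and lets the process continue.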
I have to say that this makes a lot more sense than using just pmdump.exe and strings. Robert even went so far as to point out that this can be used when hunting for rootkits, as some rootkits will insert code into memory used by all processes...when attempting to perform analysis of the memory dump, the debugger will crash when it attempts to reference memory that doesn't exist or is hidden.
Now, what if you want to capture a snapshot of the system? Well, the only real way to do that is with a crash dump (ie, Blue Screen). And when I say "real", I'm referring to the most forensically sound method of doing this...that being said, being able to do this requires some preparation. I'd highly recommend that if you have critical systems you're really concerned about, you put a lot of thought into these options. First, take a look at KB254649 for an overview of dump file options on 2000, XP, and 2003, and then KB244139 for a feature that allows a memory dump file to be created from the keyboard. Go through the second KB article thoroughly and consider adding the recommended modifications to critical systems. I'm told that some folks out there have done this.
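For reference, my reading of those two KB articles boils down to a couple of Registry values (this applies to PS/2 keyboards, and you'll need a reboot...test it on a non-critical box before you rely on it):

    Windows Registry Editor Version 5.00

    ; Enable a complete memory dump (per KB254649)
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
    "CrashDumpEnabled"=dword:00000001

    ; Allow a crash dump to be triggered from the keyboard (per KB244139);
    ; after a reboot, hold the right CTRL key and press SCROLL LOCK twice
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters]
    "CrashOnCtrlScroll"=dword:00000001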
I'd also consider setting up testing systems with the same options, so that when you're testing malware, you can generate dumps of process memory or even crashdumps as part of your testing and then use the MS debugging tools for your analysis.
Robert added that there are debugger plugins that allow you to view and analyze very useful information, such as TCP/IP connection information, from a full crashdump, as this information is handled by the kernel.
So...the long and short of this is that some recommendations that work on Linux systems aren't exactly advisable on Windows boxen, depending upon what you want to do. Sure, you can use dd.exe to image memory, but the output isn't compatible with the MS debugging tools, and since there don't seem to be any tools available for really parsing and analyzing this information, it's of limited use.
Other useful resources include MS KB articles on debugging.
Wednesday, June 15, 2005
Dumping and analyzing physical memory
Well, my research into dumping and analyzing physical memory is progressing. I can't say that I'm finding a positive answer...all I can say is that the research is going well. ;-)
I got in touch with Joanna Rutkowska over at invisiblethings.org about a presentation she gave in Oct '04, in which she made reference to the fact that dd.exe output and memory dumps (i.e., crashdumps) created by Windows tools are not compatible. This has been confirmed via other sources.
MS has a tool called userdump.exe (1, 2) that you can use to collect process memory, but it requires that you run a setup program that installs a kernel-mode driver, so it has to be done ahead of time.
An alternative to this kind of crashdump analysis and debugging is LiveKD.
Memory Dump Analysis
One of the things I'm seeing, or should I say, have been seeing for a while, is a move away from the purist approach to forensics, in that actual practitioners are moving away from the thinking that the process starts by shutting off power to the system. I've corresponded with folks using the FSP, and other similar toolkits, be they homebrew or the WFT.
Besides collecting volatile data, one of the things that's talked about is imaging memory, or collecting a memory dump. For the most part, this has been talked about, but when I've asked the question (as I did at HTCIA2004), "what would you do with it?", most people respond with a blank stare. Then someone way, way in the back (or it sounds like they're way, way in the back...) says, "run strings on it." Okay, that's fine...but what then? How do you associate anything that you find in memory with the case you're working on?
Well, as a start, Mariusz Burdach, over at seccure.net, has released a white paper entitled, "Digital forensics of the physical memory". Now, some of the grammar may throw you off a bit, but keep in mind that this is a start in the right direction. The table of contents of the paper is pretty impressive, addressing some of the issues faced in performing analysis of memory dumps. Overall, I think that the paper is a really good contribution, and a definite step in the right direction. The collection and analysis of information (re: evidence) from live systems is going to become even more important as time goes on.
However, one thing that really threw me was the fact that the author started the paper off by using the FU Rootkit and SQL Slammer worm as examples to justify performing memory dump analysis. The issue I have with this is that the author then goes into analyzing memory from Linux systems...the FU Rootkit and SQL Slammer worm affect Windows systems. The author makes no mention of Windows systems other than to say that the analysis of memory dumps "can be done".
In order to collect the contents of physical memory from a Windows system, you should look into the Forensic Acquisition Utilities from George Garner. The web site includes examples of how to use dd.exe to obtain an image of PhysicalMemory.
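By way of example, the command line looks something like this (the exact syntax and options are covered on the FAU site, so treat this as a sketch; the output path is a placeholder, and you'd write to external media, not the victim's drive):

    D:\tools>dd if=\\.\PhysicalMemory of=F:\images\memdump.img bs=4096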
One needs to keep in mind, however, that while the system is running, pages are swapped out of memory and into the pagefile (ie, pagefile.sys). Therefore, if you were able to identify the area of memory used by a process, it would be only a fragment of the total amount of memory (RAM + pagefile) used by the process. If you're interested in a particular process, I'd recommend using pmdump, instead.
MS does have some documentation regarding using debugger tools to analyze memory dumps. According to MS documentation:
A complete memory dump file contains the entire contents of physical memory when the Stop error occurred. The file size is equal to the amount of physical memory installed plus 1 MB. When a Stop error occurs, the operating system saves a complete memory dump file to a file named systemroot\Memory.dmp and creates a small memory dump file in the systemroot\Minidump folder.
So, maybe that gets us on our way a bit. This is intended for analyzing Stop errors, but perhaps it can be used to analyze memory dumps, as well.
It looks like more research is required, as well as some testing. As I'm digging into this, I'd appreciate hearing from folks with regards to what they did, what worked, what didn't work, etc.
Sunday, June 12, 2005
Shooting oneself in the foot...
I haven't found any really good malware analysis postings lately, but higB (*secureme blog) came to my rescue and posted about a recent, and personal, incident.
In a nutshell, he infected himself with a Trojan, and then went about figuring out what it did. Reading through it, I see that he did a lot of things right.
One of his comments in particular seemed interesting to me: "system.exe looked normal to me." I'm sure this is the case a lot of times, to a lot of admins. I'm on XP Home right now, and don't see "system.exe", though I do see "System" and "System Idle Process" (via Task Manager). Even using tlist.exe, I don't see anything called "system.exe".
Take a look at his post...what would you have done differently? What things would you have done that higB didn't do? What do you think of his tools and techniques for analyzing the file?
Thursday, June 09, 2005
Some help needed with PE headers
The day before yesterday, I started digging into PE headers while looking at some malcode. One of the tools I've been using is PEView, and another is FileAlyzer. Both tools have proven extremely useful in viewing the PE Headers, as well as other information about the file, breaking things down a little more beyond a simple hex editor.
Here's my question, though. Just prior to the PE headers (ie, before the "PE\0\0") is the MS-DOS stub program, and according to everything I've been able to find on the topic, this is put in place by the linker. Evidently, this is a holdover dating back to MS-DOS 2.0, and for some reason is still in use today. This is the section of code that produces the "This program cannot be run in DOS mode." message you see when you open a PE file in a hex editor. I've seen variations of this...which leads to the question. Between the notification about running in DOS mode and the PE header, you'll oftentimes see lots of binary data. Sometimes (as with netsh.exe on XP Pro, for example), you'll see the letters spelling out "Rich".
Does anyone know what this is?
I'm fairly sure that it's the contents of the stub program added by the linker, but I'd like to get confirmation on that, and perhaps even see if it's possible to tie a PE file to a particular linker or development environment.
Here's an example of how I'm thinking that this could be used...let's say you've got a case where you're pretty sure that someone developed a program. You've got a copy of the executable (worm, whatever...) and you think that the suspect created it. You may or may not find bits and pieces of code in slack space. But let's say you find a development environment, such as Cygwin or Borland or MS Visual C++. Would it be possible to tie the stub program added by the linker to the development environment, by comparing the MS-DOS stub program in the PE file to the version of "winstub.exe" (or whatever the default is) on the suspect's machine?
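Here's a rough sketch of how I'd start testing that idea...carve out everything between the DOS header and the PE signature and hash it, so it can be compared against a linker's default stub (the 0x40 starting offset is my assumption about where the stub begins):

    #!/usr/bin/perl
    # Sketch: carve the region between the DOS header and the PE signature
    # and hash it, for comparison against a linker's default stub
    use strict;
    use Digest::MD5 qw(md5_hex);

    my $file = shift || die "Usage: stubhash.pl <pefile>\n";
    open(FH, "<", $file) || die "Could not open $file: $!\n";
    binmode(FH);

    my $dword;
    seek(FH, 0x3c, 0);            # e_lfanew = offset to the "PE\0\0" signature
    read(FH, $dword, 4);
    my $e_lfanew = unpack("V", $dword);

    # Assumption: the stub (and any "Rich" data) runs from 0x40 up to e_lfanew
    my $stub;
    seek(FH, 0x40, 0);
    read(FH, $stub, $e_lfanew - 0x40);
    close(FH);
    printf "Stub region: %d bytes, MD5 %s\n", length($stub), md5_hex($stub);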
Bonus Question: The "magic number" for a Windows executable is "0x5A4D" (or "MZ"). What is the significance of "MZ", and were did it come from?
Wednesday, June 08, 2005
Schneier on Attack Trends
Bruce Schneier posted an interesting blog entry the other day on attack trends seen by Counterpane's monitoring service. The post seems to be excerpted from his essay, which is an interesting read. His blog entry was also /.'d, along with some supporting information.
Some of the interesting trends that Bruce talks about include such things as "hacking" moving from a notoriety-based, hobbyist activity to out-and-out economic crime. Examples of this include extortion, as well as the rental/sale of botnets.
The data Counterpane collected supports what others have been seeing. Worms and other malcode used to be written as proof of concept, and some of it even got released into the wild. Now, some malware authors are writing code for demonstration, but providing private versions of the code, with greater capabilities, to those willing to pay for it. It seems that folks are learning from history...while writing something that's annoying can be fun and you can get your 15 minutes of fame amongst your friends, one wrong step and you could end up in jail (just ask this guy). So why not take a targeted approach to your attack, remain quiet and patient, and collect information/data for later use? Some malware now has built-in rootkit capabilities in order to hide activity (Trojan.Blubber, Trojan.Drivus, Backdoor.Ryejet).
Another trend Bruce mentions is the increased sophistication in malware. There's evidence that shows worms becoming more intelligent in their reconnaissance and propagation techniques. The Win32.spybot.KEG worm, for example, includes multiple capabilities, in that it performs scans for specific vulnerabilities, communicates its findings over IRC, includes a backdoor, grabs the contents of the clipboard, captures images from a web cam, etc.
Rather than looking at these as separate trends, consider them together. Attacks are coming quicker, and the attacks and malware are becoming more sophisticated. Malware is getting onto the network via some exposed gateway or rogue (ie, forgotten) system, and scanning for specific vulnerabilities. These tools are becoming less "noisy" (ie, looking for specific things, rather than taking a shotgun approach), moving quicker, and include the necessary capabilities to hide from all but the most sophisticated investigator.
Combine this with the continuing trend of IT as a rapidly growing industry (ie, more and more people moving into IT everyday) - which means that every day, there are new/green/un- or under-trained administrators - and you've got a pretty interesting scenario.
The scary part is that the growth of cybercrime combined with the growth of (excuse me for saying this) "security-challenged" administrators and IT managers opens up the investigative arena for explosive expansion. What does this lead to? An old friend of mine recently told me about an issue he had with a computer system, where he had to determine whether certain documents were on the system. He took the Windows XP system to a "forensic expert" who was really just an expert Mac user...who also never found the documents. Also, my friend gave the "forensic expert" explicit instructions to NOT connect the system to a network under any circumstances...and when he went by the expert's office, he found the system connected to the expert's network via an RJ-45/Cat-5 cable, and the Ethernet activity lights on the system blinking furiously.
The point of all this is that the attackers continue to be light years ahead of the victims. The need for training and education in order to (a) recognize that an incident has occurred or is occurring, and (b) do something about it is paramount.
MS Security Document
I ran across an interesting document today at the MS Download Center entitled, "The Security Monitoring and Attack Detection Planning Guide" (in PDF).
So far, all I've given it is a quick glance, but it looks like it has some fairly good information in it. For example, chapter 2 discusses tools for correlating security events. But there's a big "uh-oh" in there, too. The document mentions the Event Comb MT tool used for correlating Security Event Log entries (and ONLY Security Event Log entries) from across machines...but then goes on to state that Event ID 12294 (account lockout threshold exceeded on the default Administrator account) is reported to the System Event Log. Doh!
For the most part, it looks like the document really addresses a lot of the common sense things that MS has been pushing for years...things like taking a look at who has Admin privileges in your organization (and why they have it), taking a system-wide approach to design (rather than a band-aid, patch it up approach), etc.
Overall, it does look like a good resource, if for no other reason than for providing Appendix A, "Exclude Unnecessary Events". This is one of those sections that made me go "hhhhmmmm"...if an event is "typical behaviour" and deemed "unnecessary", why was it included at all? Well, at least MS has provided some kind of an explanation of various events, so rather than knocking them, I'll thank them.
Tuesday, June 07, 2005
Case studies, and T&E again
I was going through the E-Evidence.info site again this morning...the site is updated with new stuff each month...and saw the Internal Investigations Case Study presentation by Curtis Rose. This is a very informative read, even for technical weenies such as myself who really love the "down in the weeds" stuff.
Something really jumped out at me on slide number 10, though. When I say "jumped out", I mean deja vu, because I know I've been here before. The specific statements surround an internal investigation conducted by a sysadmin (and please don't think I'm using this as an opportunity to bust on sysadmins, because I'm not...not this time, anyway):
The investigative memorandum generated by the system administrator was biased and clearly written to substantiate the suspect was responsible
Really? Go figure. I've been here before, where a sysadmin reports on an incident in such a way as to support his original hypothesis...the one he developed shortly after receiving the first pager alert. At 2am.
A basis for much of the document was information from connection logs, which the memorandum indicated were manipulated
I can't begin to tell you how many times I've seen this, particularly in public lists. In the same post, someone who has "conducted" an "investigation" will state the source of evidence as being authoritative, but also suspect.
People, you can't have it both ways. Is it just me?
One final bullet that I'll comment on is:
What limited analysis was conducted was performed directly on the victim systems
Again, I can't tell you how many times I've seen or heard of this..."Task Manager didn't show any unusual processes."
"So that one process that looks like 'svchost', but is really called 'scvhost'...that one isn't 'unusual' to you?"
So what's with the rant? It's a need for education, folks! Education of whom? Well...get ready for this one...of IT Managers, from the C-level down. If your organization isn't hiring the right people, and the right number of people, to staff your IT department, they're doing themselves a disservice. Of course, some places may choose to do this as a sort of self-imposed governor (like one of those things they used to put on U-Haul truck engines so they wouldn't go over 65 mph, no matter how hard you pushed on the gas pedal).
When I say, "the right people", I'm referring to folks who don't necessarily look at their day job as just that. It seems sometimes that the job market is tight...so having someone who doesn't even try to keep up on things, even on their own time, doesn't make a great deal of sense when there're lots of people out there who do, and would want that job.
But you can't rely simply on self-education and -training. Hiring the right numbers of people will allow for things like taking time off for training and continuing education. One of the approaches I found to be very effective was to come on-site to provide my training. This way, admins were out of the office and engaged in training, but they weren't completely out of pocket. In fact, in at least one case, an Exchange admin used the new skills he'd learned to solve an issue over lunch during the second day of training. I've even recommended splitting the training into "port-starboard"...instead of sending everyone off-site for two days, I'd come on-site for four days and teach the course. The first two days, I'd train half of the staff, and then train the other half during the second two days.
My point is that there are a variety of options available for training and education...it just depends on where you choose to look, and how badly you want it.
Finally, for the guys and gals who are in those positions where continuing education and advancement in the IT field is non-existent...can I recommend Monster.com?
Monday, June 06, 2005
IR Tools
I ran across something this morning over on Mike Howard's blog about the use of netsh in troubleshooting the XP SP 2 firewall. The blog entry points to an MS KB article entitled, "Troubleshooting Windows Firewall settings in Windows XP Service Pack 2". I'm always on the lookout for good tools to use, and this one looks great for getting some pretty good information from XP systems, particularly concerning the firewall.
It may not be abundantly clear what I'm talking about if you read the above KB article, so start by taking a look at netsh.exe on an XP box by typing "netsh /?" at the command prompt. There are several options available, but if you're interested in just collecting information about the XP firewall, type "netsh firewall show /?".
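For instance, a couple of the commands I've found useful (run "netsh firewall show /?" to see the full list available on your system):

    C:\>netsh firewall show config
    C:\>netsh firewall show state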
Training, time, and job responsibilities
As sort of a crossover from my last post, I thought I'd blog about some of the responses I received directly to my email inbox...
The biggest thing I'm seeing, from the responses as well as my own personal/professional experience, is that security takes time...and time is usually something in short supply. IT managers (and by this I mean all the way up to C-level folks) don't count on the fact that staying abreast of security issues takes time. Even if you're in an all-MS shop, with only Cisco routing equipment (as I've been), staying up on the latest viruses, effects of patches (and the systems they apply to), etc., all take time. Usually what ends up happening is that either (a) "security" is an ambiguous assignment to an already-overtasked and under-trained admin, or (b) the security guy/gal is seen sitting around staring at their monitor, so they're asked to help out with everything from helpdesk to router installations.
Now and again, I get perturbed at the content I see in posts to public forums. One of my pet peeves is the admin who posts to one of the SecurityFocus lists, and respondents ask questions for clarification...yet through the life of the thread, the original poster (OP) never responds. In the few instances where I've been able to track the OP down via direct email, 99.99% of the time, the reason for the disappearance is that something else more important came up (ie, Shiny Object Syndrome).
Another pet peeve is the OP who will ask a question that could have been answered, or perhaps simply better phrased, had the OP done some research of their own prior to posting. Sometimes a simple Google search is sufficient, other times simply putting together their own test would have answered their question.
So, what do these peeves of mine have to do with the subject at hand? Well, the first has to do with time...in many cases, it seems that admins are turning to public forums for their answers, but don't want to give out too much information about their networks or the situation they're dealing with. In many cases, troubleshooters/respondents need simple things such as the operating system/application name and version...which the OP may not feel comfortable giving out. However, the real issue is the fact that the OP is posting in the first place...they obviously have an issue they're dealing with but don't have the time to learn basic troubleshooting skills, troubleshoot the problem themselves, or get on the phone with tech support for the application in question.
The second peeve has to do with education and training, at least indirectly. Well, now that I think about it, they both do. In a lot of cases, when I've asked people why they haven't done their own research, the ones that don't feel like they're being bullied (and they're not) will tell me that they don't have the time. There's no harm in asking questions, but sometimes questions can be answered or better phrased if you reason things through, do a little basic research, and even try something for yourself.
Taking training and education a step further, I've often wondered why there are so many people who lurk on public lists, some who post, and so very few who publish anything. When I say publish, I'm not necessarily talking about writing a book or getting an article published...what I'm talking about is doing some testing, documenting the methodology so that it can be duplicated and verified, and then writing up your results. I think that if there were more of this, the computer forensics community as a whole would be better served. However, when I've asked about this, I've been told that the "requirements" are too rigorous...it takes too much time, and many people write so poorly that they don't want to go through the headache of constant editing (and the sense of rejection they may feel when someone corrects their spelling and/or grammar).
You know what? You don't have to be a PhD to get something published. Actually, it's pretty easy...of course, I'm saying that as someone who's already done it (and tried to help others do the same). Not only is the act of getting something published educational in and of itself, but just going through the process of discovery teaches us a lot. I've set up a testing methodology before and then had to go back and redo everything, because once I was done I realized there was something else I could have done. And once we put our methodology and findings out there, we're all better for it.
[rant off]
Friday, June 03, 2005
The need for training
I ran across something interesting this morning...it's not new, but it's the first time I've seen it. I was checking out what's new over on the E-Evidence site and somehow made it to an article that quoted Kate Seigfried about a study she'd conducted. The article said that cyberforensics is a discipline still in its infancy.
Here's an interesting quote from the article:
In academia, Purdue University’s Center for Education and Research in Information Assurance and Security recently produced a study on the state of the computer forensics’ science. The study found forensic investigative procedures at present were still constructed in an informal manner that could impede the effectiveness or integrity of the investigation. Unfortunately, the study pointed out informal nature of the procedures could prevent verification of the evidence collected and might diminish the value of the evidence in legal proceedings.
Forensic investigative procedures are still constructed in an informal manner? What? The article isn't explicit enough to really say a whole lot, but I know that several law enforcement agencies will document their procedures, which other agencies will use as the model.
The article goes on to say that Eugene Spafford sees two key questions:
1. How do we formalize the process of cyber forensic evidence gathering and analysis using appropriate and rigorous scientific method.
Evidence gathering is the easy part. For the most part, there are formalized processes out there for imaging drives. There are issues that need to be addressed, such as terabyte storage capacities (after all, where are the golden eggs kept these days but in ginormous databases??), RAID, etc., but these can be overcome.
Now, finding evidence is a different matter...that involves search and analysis techniques that haven't been formalized. Why is that? Well, I have a couple of thoughts on that, but would like to hear from you with regards to your thoughts on the matter.
2. How do we augment information systems so as to produce better audit and evidentiary trails while at the same time not exposing them to additional compromise.
I'm not sure, but it would seem to me that making use of the inherent capabilities of the system would be a good start. What I find odd is that there are so many "hardening guides" out there for Windows systems, and we still see these systems being compromised. When you talk to admins, they don't seem to have the knowledge themselves, and some say that there are just too many guides out there - which one or ones are "authoritative"? Point them to the NSA guides (after all, who's more "authoritative" than the NSA??) and many of them will blindly install the settings, and then wonder why they can't do anything.
I think what he's referring to is designing and building systems (remember the Orange Book of the Rainbow Series??) with more robust auditing built in...don't make it something the admin has to add or configure separately, because it won't happen.
[rant]
On a side note, it still mystifies me why MS would produce a "network operating system" that has NO inherent capability to get audit logs (i.e., Event Logs) off of systems. Even with the old NT-style domains, BDCs wouldn't automatically send their logs to the PDC or a designated server...you had to install separate software, or script the collection yourself (see the sketch after this rant). How is that a "network" OS?
[/rant]
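For what it's worth, you can script around that gap with Perl's Win32::EventLog module. Here's a minimal sketch (not production code, and certainly not "the" MS-sanctioned way) that walks the System Event Log on a target machine and prints a one-line summary per record; the server name is a placeholder passed on the command line, and an empty name reads the local box.

#!/usr/bin/perl
# Minimal sketch: enumerate System Event Log records on a (possibly remote)
# system via Win32::EventLog. Pass the target as \\servername; default is
# the local machine. Server names used here are examples only.
use strict;
use warnings;
use Win32::EventLog;

my $server = shift || "";
my $log = Win32::EventLog->new("System", $server)
    or die "Unable to open the System Event Log on '$server'\n";

my ($num, $oldest) = (0, 0);
$log->GetNumber($num);     # total number of records in the log
$log->GetOldest($oldest);  # record number of the oldest record

my %event;
my $count = 0;
while ($count < $num
    && $log->Read(EVENTLOG_FORWARDS_READ | EVENTLOG_SEQUENTIAL_READ, 0, \%event)) {
    printf "%s  %-20s  EventID %d\n",
        scalar localtime($event{TimeGenerated}),  # seconds since the epoch
        $event{Source},
        $event{EventID} & 0xFFFF;                 # mask off severity/facility bits
    $count++;
}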
But I digress...
I started looking and found the study referred to in the article (from 2003) entitled, "The Future of Computer Forensics: A needs analysis survey".
This study, conducted by Marcus Rogers and Ms. Seigfried, provides some interesting information that I would think is still true today, almost two years later. Their survey found that training, education, and certification was the top issue mentioned by the respondents, while lack of funding was the least reported issue.
Training, education, and certification? Lack of formalization? Well, they're probably right. I've been to conferences before where one presenter will have "???" in an area of their presentation (the specific example in mind involved NTFS ADSs, OLE documents, and where file summary information is kept...), while another presenter at the same conference had detailed information and even a demonstration that answered the question. The first presenter was a LEO, the second was a private citizen.
A couple of years ago, I was talking to a guy who provided computer forensics training to LEOs. He asked me if NTFS ADSs could be transferred over the network. I told him yes, via file sharing (SMB)...but not via other protocols, such as FTP and HTTP. He bet me that they could, so we set up a demonstration. Turns out I was right...I knew the answer because I'd already done the research.
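If you want to run that kind of test yourself, creating and reading an ADS takes only a few lines of Perl on an NTFS volume. The file and stream names below are just examples; the point is that the "filename:streamname" syntax works with a normal open(), which is all you need to stage a file for a transfer test.

#!/usr/bin/perl
# Minimal sketch: create an NTFS alternate data stream (ADS) and read it back.
# "ads_test.txt:hidden.txt" addresses a named stream attached to ads_test.txt;
# both names are examples. Run this on an NTFS volume.
use strict;
use warnings;

my $file   = "ads_test.txt";
my $stream = "$file:hidden.txt";    # ADS syntax: filename:streamname

# Create the visible file...
open(my $fh, ">", $file) or die "Cannot create $file: $!";
print $fh "Nothing to see here.\n";
close($fh);

# ...then write data into an alternate stream attached to it.
open(my $ads, ">", $stream) or die "Cannot create $stream: $!";
print $ads "This data rides along in the ADS.\n";
close($ads);

# Reading the stream back works the same way; a plain "dir" or "type"
# won't show it, which is why ADSs keep coming up in forensic discussions.
open($ads, "<", $stream) or die "Cannot open $stream: $!";
print while <$ads>;
close($ads);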
My point is that there are folks out there doing reproducible, verifiable research...but it doesn't seem to be getting out there, even when it's presented at conferences, or written into papers, articles, and books. Why is that?