I was grabbing some tools from the Foundstone site tonight, for a presentation I'm working on, when I ran across something called the "Remote Forensics System". That sounded very interesting, so I Googled it and found a PDF at the Foundstone site that describes the RFS.
I read through the document and found a lot of very good information. In fact, I was envious...to be a grad student again, and have the time to do this kind of work [heavy sigh]. At least part of the system runs on the JRE, and they've incorporated triggers into the overall system, so that if something pops up on the IDS, for example, data will be automatically retrieved from systems. Cool!
I'll need to read through the document again to get some idea of how the data collected is actually analyzed. However, it does sound like a great idea! My hat's off to Chris Prosise, listed in the Acknowledgements as the advisor.
Imagine my surprise when I found the Forensic Server Project mentioned in section 5 of the document! Wow! Someone actually went by the site and took a look at what I'd done. Unfortunately, the bibliography
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Wednesday, September 28, 2005
System Clock
Recently, a post in a public forum appeared, asking about how to verify the accuracy of the system clock. That got me to thinking...how do you verify the accuracy of the system clock, given nothing more than an image of a Windows system?
I decided to start some of my own research, and to post in other lists to see what others were doing. I received several responses, but most have been along the lines of:
- Completely missing the part about an "image"
- Using email headers
So I set about performing my own experiments to see what happened. First, I ran InControl5 to baseline my system. When the baseline was complete, I double-clicked the time display on the far right of my Task Bar, and that opened the "Date and Time Properties" window, which is essentially the timedate.cpl Control Panel applet. I modified the system time, clicked "OK", then ran the second half of the InControl5 process.
Viewing the results, I ran across a couple of interesting things. First off, the following UserAssist key had been updated.
HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{75048700-EF1F-11D0-9888-006097DEACF9}/Count
I ran my "uassist.pl" Perl script to translate the values under this key, and found "timedate.cpl". Unfortunately, this key doesn't maintain its values along with an MRU list, so we don't know which was the last value, and therefore, the LastWrite time of the key is of little value to us.
When I actually opened the Control Panel and double-clicked the "Date and Time" applet, another entry occurred in the above UserAssist key...specifically, "UEME_RUNCPL:"C:\WINDOWS\system32\timedate.cpl",Date and Time".
Since I'm on an XP system, I thought I'd take a look in the Prefetch directory and see if there were any tidbits lurking around. Well, the thought didn't simply occur to me...the output of InControl5 told me that the file named "Rundll32.exe-randomstuff.pf" had been changed in some way. Knowing that rundll32.exe is a legit MS app, and that it's used to run things like Control Panel applets, I opened the .pf file in BinText and found Unicode strings that referenced timedate.cpl and w32time.dll, amongst other things.
All this is fine, in that it gives us clues, but I don't think that it's all that definitive. Another place to look, however, is in the Event Log. Specifically, within the Security Event Log there may be an event ID 520 (Category is "System Event") that states that the system time was changed, and includes the username, previous time, and new time. Very helpful!
Barring that (pretty definitive, isn't it??), there may be discrepancies in the actual times associated with the event records themselves. For example, if an event record with a higher number (more recent event) has a generated or written time that's before a previous event, then you may be on to something.
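That record-number-versus-timestamp check can be sketched in a few lines, assuming the event records have already been parsed out into (record number, time generated) pairs:

```python
def find_time_anomalies(records):
    """Flag event records whose 'time generated' is earlier than that of a
    lower-numbered (i.e., older) record -- a hint that the system clock may
    have been rolled back. Each record is a (record_number, time_generated)
    tuple; timestamps here are just integers for illustration."""
    anomalies = []
    max_time = None
    for num, ts in sorted(records):          # order by record number
        if max_time is not None and ts < max_time:
            anomalies.append(num)            # time went backwards
        else:
            max_time = ts
    return anomalies

# Record 3 was generated "before" record 2 -- worth a closer look
print(find_time_anomalies([(1, 100), (2, 200), (3, 150), (4, 300)]))   # -> [3]
```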
So far, many of the responses I've seen have said something along the lines of "look at your watch when you're standing in front of the system, before you unplug it and image it"...but, like I said, answers like that sort of miss the point of the question.
So...thoughts?
Tuesday, September 27, 2005
Projects
Every now and again, I poke my head up for a breath of fresh air and a look around...and I wonder if others face the same issues and challenges I do. For example, knowing where to look during forensic analysis for information relevant to the case at hand.
So, my question to all of you out there is this...what issues do you face? What things do you see, need more information/documentation about? What are the things during a case (or just after you've completed one) that leave you wondering? What are those things that would make great research projects?
Sometimes, those things you are wondering about may already have been solved, addressed, or encountered by someone else.
Please feel free to post a comment here, or email me directly...whichever works. If you email me, I might post your idea, but I won't use your name without your consent.
Addendum 28 Sept: Well, I've received a single email so far in response to this post, and the ideas are (a) case studies, and (b) challenges. I'll see what I can do about posting both, but I'm sure that it would be extremely beneficial to hear from others.
Besides the usual suspects that provide forensic challenges (ie, HoneyNet, DFRWS), there are others available. Try TigerTools (the page has links to three different challenges; Feb, March, and July). I'm sure there are others...
Monday, September 26, 2005
Some way cool visualization stuff
F-Secure has a way cool visualization presentation on the Bagle worm...check it out. Scroll down to the Fri, 23 Sept entry entitled, "A different look at Bagle". Very cool.
I know that there are visualization tools available for social network analysis. Raytheon's SilentRunner (who owns it now??) uses n-gram analysis to build context and create a basis for its mapping, and is very interesting. I wonder if the above malware visualization will eventually include details of the actual functions themselves...
Friday, September 23, 2005
Creating a timeline analysis tool
I'm not a DBA, and I don't play one on TV. So when it comes to writing a timeline analysis tool, I'm not going to be able to do it myself. But here's what I propose...
I'm most comfortable on Windows platforms, and I know folks that prefer Linux, so that's cool. I know that there are people out there who are familiar with databases, and others that are good graphics programmers. What I'd like to see about doing is opening a project on SourceForge, and see if we can't get a decent start on developing a solution. Here's how I envision it going...and please keep in mind that this isn't the be-all-and-end-all, just my impressions:
- Identify data sources (we've already gotten started with this, but we don't have to be restricted to Windows systems)
- Identify data sets (data source/log normalization, etc.)
- Identify a database format, using mySql (table definitions, etc.)
- Identify and create a graphic component for presenting the data sets
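As a rough sketch of the database piece of those steps, here's a minimal table definition and query in Python, using sqlite3 in place of mySql purely for illustration; the table and column names are made up, not part of any actual design:

```python
import sqlite3

# A minimal normalized event table; every data source would be reduced
# to rows of this shape before going into the database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE timeline (
        event_time  INTEGER,   -- seconds since epoch, UTC
        source      TEXT,      -- 'mactime', 'registry', 'evtlog', ...
        host        TEXT,
        description TEXT
    )
""")

rows = [
    (1127683200, "registry", "victim1",
     "LastWrite: HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run"),
    (1127683260, "evtlog", "victim1",
     "Event ID 520: system time changed"),
]
conn.executemany("INSERT INTO timeline VALUES (?, ?, ?, ?)", rows)

# The graphics front-end would issue queries like this one
for row in conn.execute("SELECT * FROM timeline ORDER BY event_time"):
    print(row)
```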
As far as the graphics programming goes, I'm not really sure where to go with that one. Would it be better to go with some of the stuff that's already out there, or create something new with Java or some other cross-platform solution?
So...who's interested? If I get enough interest, I'll go ahead and see about creating a project on SourceForge. Like I said, I can provide various means of extracting the data. For example, most of my Perl scripts that parse raw binary files from Windows can be made cross-platform, and easily modified to dump the information into a database. I can also write the necessary scripts for ProDiscover. Providing solutions to pull data from a source and populate the database is something that will be on-going, I'm sure.
Now, there's no doubt in my mind that this sort of project will take a while...if it were easy, everyone would be doing it. I don't expect it to be done overnight. However, I would like to see it get done, b/c I do think that it would be extremely useful to a lot of people.
So...thoughts?
Addendum: I just did a search on SourceForge for "timeline" and there are several projects listed, but most don't have anything more than a simple page...no project files, nothing. There are some projects with files, but I'm not entirely sure that they'd be suitable.
However, there is something promising...check out the Timeline view of GenealogyJ. My concern would be the volume of data, but it does look like a good start.
Also, my hope is that this will be compartmentalized...meaning that the graphics component won't be database-dependent. That way, there can be several different interfaces for presenting the data, and they can develop over time.
ISC Rootkit Discovery
This post appeared on the Incidents.org (ISC) blog two days ago, and is a very interesting read. The handler, Tom Liston, who works with Ed Skoudis over at IntelGuardians, writes the post in a humorous, Ian Fleming-esque style.
Take a look at the section marked "A view to a kill". Here, Tom mentions a couple of .sys files that seem to be a rootkit. I've run across this before...specifically a file named "rdriv.sys".
Tom's write-up in the final section of the post that describes what actions the malware takes on a system is very interesting, and an excellent read. The one big thing I took away from all this is that the good guys really need to get off their butts and start sharing information like this...tools, techniques, what's been found, etc. This needs to really start happening, because the bad guys are obviously doing it...and doing it much better than the good guys. It's pretty clear that the bad guys are moving away from the old days of "hacking" and writing malware as pranks, and this sort of activity is now driven, at least in part, by economics and financial gain.
Thursday, September 22, 2005
Visualization
I've started to see that this issue of "timeline analysis" really isn't one of getting data as much as it is one of visualization. Graphically representing data in some manner for presentation to an audience can be very powerful. It's been said that a "picture is worth a thousand words", and in many cases, this is true. So, the question becomes, how does one best present a timeline of activity on a system?
For starters, let's simply consider any system. One would hope that any solution would provide for multiple systems, with the Windows host-based data sources having been covered in previous posts. We could incorporate firewall logs, logs from other systems, syslog, etc., all under the same case heading. There would need to be some sort of normalization process, of course, before the data was incorporated into a database. Along these lines, I'd met with the MountainWave folks many moons ago, in a previous life (...once, in a galaxy far, far away...), before they were purchased by Symantec. Their CyberWolf product was pretty cool, and performed normalization of logs, in a manner similar to what I'm referring to here.
Okay...so once you've populated your database, what next? Ah, yes...queries.
For presenting your data, there are many freeware visualization toolkits available, such as VTK, OpenDX, and GraphViz...but how useful are these...really? Well, GraphViz may have some potential.
One of the commercial tools I've been told is being used is CaseMap from CaseSoft. From what I've been told, though, getting the data into CaseMap can be almost as much of a manual process as Analyst's Notebook. A caveat, though...I haven't worked a great deal with either of these products, so I don't know if the issue of manually entering data is one of operator error or not.
This is all still kind of up in the air...how do you present the data? I think that culling information from a database and presenting a scalable view is still a viable option. The analyst can choose a date and time, and the tool will provide a zoomable view of the data, much in the same way as when you do a search on
Tuesday, September 20, 2005
Cross-platform scripts
I've heard back from one or two people who've run the lsevt.pl, lsreg.pl, and regp.pl scripts I posted a bit ago. For the most part, I've heard pretty positive comments...things have worked well for most folks. The scripts seem to work just fine so far, regardless of the operating system they're run on...as long as it's running on an x86 processor. Yep, you guessed it...endianness is an issue.
However, one astute user was running the scripts on a G5 (PPC processor) and let me know that if you change the arguments of the unpack() function, the scripts work just fine, regardless of which microprocessor they're run on. The change comes in replacing all of the "S" (short, WORD, 2 bytes) and "L" (long, DWORD, 4 bytes) with "v" and "V", respectively...both of which are explicitly little-endian, rather than following the host's native byte order. So, take the regp.pl script for example...in the _getNodeType() subroutine, you'll see:
return unpack("S",$record);
Change that to:
return unpack("v", $record);
In the readNkRecord() subroutine, you'll find:
my (@recs) = unpack("SSL3LLLLLLLLLL4LSS",$record);
Change that to:
my (@recs) = unpack("vvV3VVVVVVVVVV4Vvv",$record);
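To see why the change works, here's the same idea expressed with Python's struct module for comparison: Perl's "v" and "V" correspond to struct's explicitly little-endian "&lt;H" and "&lt;L" formats, while "S" and "L" follow the host's native byte order:

```python
import struct

record = b"\x2c\x01"   # a WORD as it sits on disk, little-endian

# Perl's "S" (like struct's native-order "H") would read this
# differently on a big-endian PPC than on x86; "v" (like "<H" here)
# pins the interpretation to little-endian on every platform.
value = struct.unpack("<H", record)[0]
print(value)   # -> 300, regardless of host byte order
```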
I won't be making these changes to the scripts myself...at least not right away. However, I am working on another book, so I will include those changes in the scripts before I add them to the CD.
Issues with timeline analysis
I've been doing some searches regarding timeline analysis, delving deeper into this. I still don't think that scatter plots and histograms are the way to report on and present this sort of information...there is just too much information that can and needs to be presented, and too many possible ways that it can be viewed.
In my Googling, I'm finding a good deal of references to "timeline analysis", as well as to the term "reconstruction". A lot of this is being offered as a service. There are also products available that can assist you in your timeline development and analysis, but many seem to be limited strictly to file MAC times. Given the various sources for timeline analysis that are available on a Windows system, relying simply on the file MAC times is not doing anyone any good.
So, I think that we're doing pretty well when considering sources of information, and now the question is, how do we present this information so that it's understandable? Is there an "access view" that looks at last access times of files, where a "modified view" would look at last modification times of files, as well as Registry key LastWrite times? What about things like times maintained in the document properties of OLE docs (ie, Word docs, Excel spreadsheets, etc.)? At what point is there too much information? How does one winnow out the valid, normal, usual stuff to get to the interesting stuff?
There's definitely some room for thought and development here. At this point, my thoughts are that there'd be some sort of database involved, along with perhaps a Java interface for issuing queries and displaying the information. I haven't written Java GUIs in a while, but perhaps a zoomable, scalable depiction of a timeline would be a good start...the investigator could pick a time of interest, and the initial queries would display a view of the timeline of activity, plus and minus a certain amount of time (determined by the investigator). Perhaps an interface into the database that shows which information is available, and lets the investigator select items to add and subtract from the view would be helpful. Add to that color coding of event records from the Event Log, and some other graphical representations of severity of events, and we may have something useful.
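The "pick a time of interest, plus and minus some delta" query could look something like this minimal sketch; the record format and names are made up for illustration:

```python
def window_view(events, center, delta):
    """Return the events within +/- delta seconds of a chosen time of
    interest, sorted by time -- the core of the 'zoomable' timeline view.
    Events are (timestamp, description) tuples with integer timestamps."""
    return sorted(e for e in events if abs(e[0] - center) <= delta)

events = [
    (100, "file accessed"),
    (500, "key LastWrite"),
    (520, "logon event"),
]
# Zoom in on time 510, +/- 30 seconds
print(window_view(events, 510, 30))   # -> [(500, ...), (520, ...)]
```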
I mentioned Java above so that the whole thing would be cross-platform, but I can easily see where HTML or XML would work as well. With a mySql database, and the necessary filters to parse out any sort of information that is available to the investigator and get it into the database, I think we may have a pretty serious tool on our hands.
Thoughts? I know for my own part, I still have a lot of thinking to do, regarding such things as anomaly detection and anti-forensics. However, I think that this may be best handled by discussion within the community.
Addendum 21 Sept: As I think about this more and more, and even go so far as to draw out diagrams on a sheet of paper, trying to conceptualize what a "timeline" should look like, I'm even more convinced that a scatter plot isn't the way to go. Why? Well, a horizontal line (representing a time scale) with a bunch of little dots is meaningless...it has no context. Even if you gave different sources separate icons, and even color-coded them, without more information, it can be useless. Let's say you have a cluster of events around a particular time...Registry key LastWrite times, file last access times, and event records. Well, this could be a boot event. But you don't know that until you dig into things and take a look at the event records from the Event Logs.
Somehow I think that a scatter plot in which each of the dots has some identifying information would be just too much...the graph would be far too busy.
Something that may be of value in this effort is something like fe3d. Yes, I know that it's intended to provide visualization for nmap scans, but I think that something like this, with modifications, would be of value. Take a look at some of the screenshots and try to imagine how to map timeline information to this sort of representation. Of course, there are other freeware visualization tools out there...something else may be easier to use.
I will say this...one of the things that caught my eye with fe3d is the different nodes. Given information dumped into a database, one could include sources from other systems, such as Event Log records from a file server, or data from an IDS, or even firewall logs or syslog data...and have that timeline information represented on a different node, but correlated at the same time scale as the system(s) you're investigating.
Monday, September 19, 2005
Sources for timeline analysis
I just wanted to take a moment and list out some of the sources for timeline analysis on a Windows system:
- MAC file times
- Registry key LastWrite times
- Event Logs
- Other logs (i.e., setupapi.log, SchedLgU.txt, etc.)
- INFO2 files
Are there any other sources that should be added?
On a side note, does anyone have any credible/supported information regarding which Registry key maintains the audit policy? This may be something that's very important to check.
Friday, September 16, 2005
Timeline analysis
I've blogged about different forms of analysis in the past, and thought I'd take a look at timeline analysis. I had an opportunity to dig around in EnCase v5.0 recently and noticed a nice graphical timeline visualization tool. Very cool. This would have been helpful in a recent situation, had the issue been with filetimes, and not Registry key LastWrite times.
I know that the TimeLine view is not a new feature for EnCase...I simply haven't had to get really deeply involved in using EnCase in a while. However, you don't need EnCase for this...I've released lsevt.pl and regp.pl, scripts that parse the Windows Event Log and Registry files, respectively. Lsevt.pl contains instructions within the comments of the code for formatting the output in semicolon-delimited format, suitable for opening in Excel. Let's say we did that, and then sorted everything on the column with the "Time Generated" field from the event records. Then we could do easy (albeit non-graphical) analysis of events that occurred at a certain time.
The regp.pl script doesn't do this specifically, but minor modifications to the file will print out only the key name and LastWrite times, in semi-colon- or comma-delimited format. Again, open the file in Excel, sort on the LastWrite time column, etc.
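If you'd rather not depend on Excel for the sort, the same thing can be done in a few lines of script. Here's an illustrative Python sketch; the column layout of the rows below is an assumption for demonstration (the actual delimited output of lsevt.pl may order its fields differently):

```python
from datetime import datetime

# Hypothetical semicolon-delimited event rows, with the "Time Generated"
# field in the third column: record number; source; time; event ID
rows = [
    "10877;EventLog;Tue Aug 2 02:12:31 2005;6006",
    "10876;Service Control Manager;Tue Aug 2 02:12:25 2005;7036",
]

def time_generated(row):
    # Parse the ctime-style timestamp used in the sample output
    return datetime.strptime(row.split(";")[2], "%a %b %d %H:%M:%S %Y")

# Sort chronologically on the Time Generated field
rows.sort(key=time_generated)
```

The same key function works no matter how many log sources you merge into the list, which is the whole point of a consolidated timeline.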
I think that it would be interesting to have a tool that does all of that, don't you? One that presents a timeline in a spreadsheet or graphical format, but incorporates not only file/directory MAC times, but Registry key LastWrite times, Event Log event record "time generated" times, etc. By tracing back to the specific date and time that, say, a file was created on the system, you might see the progression of files installed, Registry keys created, etc., as well as any preceding events, such as failed login attempts to an Admin-level account, etc.
Addendum 17 Sept: The more I think about it, the more I find that I'm not really sure what I mean when I say "timeline analysis". I received an email from someone that pointed out limitations in Excel (i.e., the number of lines it can handle) and recommended that I look at gnuplot. That's good information, but how useful is it, really? Think about it. A colorful histogram might be nice, but what does it tell me?
When I've had to use timeline analysis of some sort, I've had a date in mind...usually from a complaint or incident report. In some cases, I've noticed "interesting" events that occurred around the same time, such as the LastWrite time on the Registry key for a service called "rdriv.sys". At that point, what I'd like to be able to do is (a) get a snapshot of everything else that occurred around that time...file changes, other Registry keys, events from the Event Log, etc....within a pretty immediate time frame (within seconds), (b) get another snapshot, but with a bit wider scope (hours, maybe less than a day), and (c) see "interesting" events that occurred following the initial event.
In my mind, I'm not entirely sure that this is something that is suitable, particularly during the initial phase of the investigation, to be displayed in a candlestick or even a histogram plot. In some cases, I think it would be way too messy. In others, I'm not sure that sorting on groupings of activities or concentrations of events would be necessarily informative, either...you'd see events like reboots.
Don't get me wrong, though...I do think that perhaps something like gnuplot would be useful in the presentation phase of the investigation. During the investigation, a plot of the frequency of certain types of events, such as failed login attempts, network logins, or the types of queries that appear in IIS logs, would be useful, I think.
With the glut of files on a Windows system, one would need some method for winnowing the wheat from the chaff. In any sort of plot based on the last accessed times of files, the "bad things" you're looking for will likely be hidden behind the noise of all of the "normal things" that go on on a system.
So...at this point, my thoughts are along these lines...some sort of database will need to be used in order to facilitate searching...I'm thinking MySQL. At this point, I'm not entirely sure what kind of table structure would be most useful, but I do have an idea of the data that would need to go into the database. From files, you'd want things like full path, MAC times, hashes, file version information (useful in narrowing searches down to all non-MS files), file signature status (i.e., pass or fail), any alternate data streams, etc. I think I'd want to populate the database with information from Event Logs, and other logs on the system...SchedLgU.txt, IIS logs, setupapi.log, etc. Of course, I'd also want to include Registry key LastWrite times, as well.
Once all this information was in the database, I'd start with some sort of default search queries that allow the investigator to target specific dates and look for anything "interesting" within certain windows of time, around the target event. I don't see simply having a dot showing a file change, or a number representing how many files changed, as useful. For me, I'd have to see which files changed, by name and location, as well as Registry keys that may have changed just prior to or immediately following those file changes/additions. I think that used in combination with the file hashes and versioning information, something like this might be very useful to investigators, and help in narrowing down the time it takes to find "interesting" events.
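Just to make the idea of a table structure concrete, the file-related fields listed above could map onto something like the following. This is only a sketch in SQLite (standing in for MySQL), and every table name, column name, and row of sample data here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE files (
        full_path     TEXT,
        mtime         INTEGER,   -- last modified (Unix time)
        atime         INTEGER,   -- last accessed
        ctime         INTEGER,   -- created
        md5           TEXT,
        file_version  TEXT,      -- NULL for files with no version info
        sig_status    TEXT,      -- file signature check: pass/fail
        ads           TEXT       -- alternate data stream names, if any
    )
""")
conn.execute(
    "INSERT INTO files VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("C:\\WINDOWS\\system32\\backdoor.exe", 1122984416, 1122984416,
     1122984416, "d41d8cd9...", None, "fail", None),
)
# One possible "default search query": files lacking version info
# (i.e., likely non-MS) that changed within a window around a target time
hits = conn.execute(
    "SELECT full_path FROM files WHERE file_version IS NULL "
    "AND mtime BETWEEN ? AND ?", (1122984000, 1122985000),
).fetchall()
```

Similar tables for event records and Registry key LastWrite times, all keyed on time, would let one set of windowed queries pull from every source at once.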
Addendum 19 Sept: I received the following email over the weekend: "I've long desired a consolidated timeline view of 'what happened on this box and when'. The filesystem MAC times tell a big part of the story, and now with your perl scripts, I can add two more important pieces: events and registry key writes. In a single view, I can see 'DCOM service unexpectedly restarted, file backdoor.exe was created, and the CurrentVersion\Run key was last written'. That is powerful!"
Combining MAC times with Event Logs, Registry key LastWrite times, and other log resources, along with explanations (i.e., what the Event Log entries mean) would be a pretty valuable source of information, wouldn't it?
Thursday, September 15, 2005
Not your everyday Perl on Windows
I program Perl. I program Perl on Windows systems. My Perl scripts don't usually do general stuff like file manipulation; i.e., open a file containing lines of text, read in the lines, sort/manipulate the lines in some manner.
I've written scripts that implement Windows Management Instrumentation (WMI), and the Windows Driver Model (WDM).
I've written scripts that open and parse files in binary mode; i.e., the Registry and Event Log files, as well as PE headers.
I've written scripts that use the Perl API, or directly access the Windows API, and I've completely bypassed the Windows API altogether.
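As a taste of what "bypassing the Windows API" looks like, parsing a raw Registry file in binary mode starts with checking its signature...raw NT hive files begin with the four bytes "regf". A minimal sketch (in Python here rather than Perl, and with a fabricated sample header just for demonstration):

```python
import struct

def is_raw_registry_file(data):
    """Raw NT Registry hive files begin with the 4-byte signature 'regf'."""
    return data[:4] == b"regf"

# Fabricate the first 8 bytes of a header for demonstration: the
# signature followed by a little-endian DWORD (real hive base blocks
# carry sequence numbers, a timestamp, root cell offset, and more)
sample = b"regf" + struct.pack("<I", 1)
```

Everything past the signature is just more offsets and structures to walk, read with the same `struct`-style unpacking, no MS API required.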
I'm not an expert...I see myself simply as trying really hard. And I find myself wondering if others want to see what I've done...not just the code, but the process I've gone through.
Is this something others are interested in seeing? If so, in what format?
Offline Registry parser on Linux
I thought I'd post this, as I found it pretty interesting...
I got some feedback from someone who'd used the offline Registry parsing script...he told me that it worked well for him. What was interesting to me was that he was running the script in Perl v5.8.4, on Debian Stable, with kernel 2.6.5 for the i686 architecture.
He pulled down the script, and copied some Registry files over from a Windows VMWare session, and things worked fine.
Wow. That's great feedback! And positive, too.
Monday, September 12, 2005
Rewriting the MS API
I've released lsevt.pl, a Perl script that I developed to parse through raw Event Log files. As I mentioned in my GMU2005 presentation on the subject of Event Log files, I had one instance in which parsing through an Event Log file manually revealed a "hidden" event record, one not seen by the MS API.
I released my initial script earlier, but it was proof-of-concept, and I received requests to complete the script and return all available information from the event records. So I added parsing of the event source, computername, message strings, data, etc.
To run the script, simply pass in the path to the Event Log file that you're interested in, and redirect the output to a file:
C:\Perl>perl lsevt.pl c:\windows\system32\config\sysevent.evt > sys.log
An example of the output that the script generates is:
Record Number : 10876
Source : Service Control Manager
Computer Name : ENDER
Event ID : 7036
Event Type : EVENTLOG_INFORMATION_TYPE
Event Category: 0
Time Generated: Tue Aug 2 02:12:25 2005
Time Written : Tue Aug 2 02:12:25 2005
Message Str : iPod Service stopped
Record Number : 10877
Source : EventLog
Computer Name : ENDER
Event ID : 6006
Event Type : EVENTLOG_INFORMATION_TYPE
Event Category: 0
Time Generated: Tue Aug 2 02:12:31 2005
Time Written : Tue Aug 2 02:12:31 2005
Message Data : ff 00 00 00
I've included directions in the script itself for those who prefer the output in a semi-colon delimited format, suitable for opening in Excel.
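For anyone curious how the raw parsing works without the MS API: each record in an .evt file begins with a fixed-size EVENTLOGRECORD header, and the first several fields can be unpacked directly as little-endian DWORDs (length, the "LfLe" signature, record number, time generated, time written, event ID). A Python sketch, run against a fabricated header for demonstration:

```python
import struct
import time

def parse_evt_record_header(data):
    """Unpack the first six fields of an EVENTLOGRECORD structure."""
    length, sig, rec_num, time_gen, time_written, event_id = \
        struct.unpack("<IIIIII", data[:24])
    assert sig == 0x654C664C  # the b"LfLe" magic, little-endian
    return {
        "record_number": rec_num,
        "time_generated": time.gmtime(time_gen),  # 32-bit Unix time
        "event_id": event_id & 0xFFFF,  # low word is the familiar ID
    }

# A fabricated record header matching the first sample record above
hdr = struct.pack("<IIIIII",
                  56, 0x654C664C, 10876, 1122948745, 1122948745, 7036)
rec = parse_evt_record_header(hdr)
```

The source, computer name, strings, and data follow the fixed header at offsets given later in the structure, which is why the full script can recover everything the Event Viewer shows...and records the API never returns.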
As always, I hope someone finds this useful.
Memory dumps revisited
A while back, I blogged on Memory Collection and Analysis. Since then, the results of the DFRWS Memory Challenge have been posted, and the results look promising. I haven't had a chance to work with either of the tools, as they don't seem to be available, but they do look interesting.
John H. Sawyer has commented in his blog. One of his more interesting comments, with regards to the MS Debugging Tools, is, "The tools weren't intuitive, I'm not a programmer and you have to have the machine preconfigured to make the dump that the debugging tools can read. LAME!" I think that his sentiment sums up the issue quite nicely...most of the folks using the MS Debugging Tools likely aren't programmers, haven't had the opportunity to work with and learn how to use the tools, and simply haven't configured their systems to use the MS tools...which, like any tools, have their own inherent strengths and weaknesses.
It does look as if this issue is taking a step in the right direction...we'll see how useful this sort of thing is as long as the tools remain private.
Saturday, September 10, 2005
Updated Registry parsing tool
Well, this isn't so much an update as it is a modification. I've released lsreg.pl, a Perl script that allows the administrator/investigator to search raw Registry files (i.e., NTUSER.DAT, system32\config\SYSTEM, system32\config\SOFTWARE) for specific keys and values.
The Perl script takes two arguments...the path to the raw Registry file, and the path to the file containing the keys/values you're looking for. An example of the output is:
Key -> CurrentControlSet\Control\Windows\ShutdownTime
LastWrite : Tue Aug 2 12:06:56 2005
Value : ShutdownTime;REG_BINARY;c4 96 a0 ad 5a 97 c5 01
Key -> Select
LastWrite : Wed Feb 23 09:37:25 2000
Value :Current;REG_DWORD;1
Value :Default;REG_DWORD;1
Value :Failed;REG_DWORD;0
Value :LastKnownGood;REG_DWORD;2
Key -> Setup
LastWrite : Tue Apr 29 21:33:53 2003
Value :SetupType;REG_DWORD;0
Value :SystemSetupInProgress;REG_DWORD;0
Value :CmdLine;REG_MULTI_SZ;setup -newsetup -mini
Value :SystemPrefix;REG_BINARY;d2 03 00 00 00 00 39 80
Value :SystemPartition;REG_SZ;\Device\HarddiskVolume1
Value :OsLoaderPath;REG_SZ;\
Value :CloneTag;REG_MULTI_SZ;Wed Feb 23 01:44:25 2000
The script uses no MS APIs (so basically, I rewrote the API), but instead parses the Registry files in binary mode. Notice that the output includes the LastWrite time of the keys. If a key is being searched for, the script returns all of the values in that key, if there are any. If a value is being searched for, the script returns the value and data, if found.
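A note on those timestamps: key LastWrite times are stored as 64-bit FILETIMEs, counts of 100-nanosecond intervals since 1 Jan 1601 UTC, and the ShutdownTime REG_BINARY data in the sample output is a FILETIME as well. A Python sketch of the conversion, decoding that exact value:

```python
import struct
from datetime import datetime, timezone

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_datetime(raw):
    """Convert a little-endian 64-bit FILETIME to a UTC datetime."""
    (ft,) = struct.unpack("<Q", raw)
    return datetime.fromtimestamp(ft / 10**7 - EPOCH_DELTA, tz=timezone.utc)

# The ShutdownTime REG_BINARY data from the sample output above
shutdown = bytes.fromhex("c496a0ad5a97c501")
when = filetime_to_datetime(shutdown)
# -> 2005-08-02 12:06:56 UTC, matching the key's LastWrite time above
```

That the value data decodes to the same moment as the key's LastWrite makes sense...the value is written at shutdown, which is what updates the key.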
As always, comments and questions are welcome.
Thursday, September 08, 2005
The Windows Registry as a Forensic Resource
The subject article is now online at ScienceDirect. I wrote this article back in July. In the article, I walk through some of the basics of the Registry and its structure, and then get into where the investigator can look in the Registry for certain information that may help with a case.
Besides addressing autostart locations, the article also discusses Registry entries that pertain to USB removable storage devices and the key/values that contain information on wireless SSIDs that the system has connected to.
Comments are welcome and appreciated.
Tuesday, September 06, 2005
Updated offline Registry parsing script
I've updated the offline Registry parsing script...it's here. The updates include:
- Cleaner, more modular code
- Better documentation of the code
- Better handling of binary data types
- Translation (i.e., "decrypting" Rot-13 encoding) of UserAssist Registry key value names
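On that last item: the "decryption" of UserAssist value names is nothing more than ROT-13 on the name string. A one-liner sketch in Python:

```python
import codecs

def decode_userassist_name(name):
    """UserAssist value names are obfuscated with simple ROT-13."""
    return codecs.decode(name, "rot_13")

# "HRZR_EHACNGU" decodes to "UEME_RUNPATH", the entry type that
# records paths of programs launched through the Explorer shell
decoded = decode_userassist_name("HRZR_EHACNGU")
```

Since ROT-13 is its own inverse, the same function re-encodes a decoded name, handy for building search strings to look up in the raw hive.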
To run the script, simply use a command-line similar to this:
C:\Perl>perl regp.pl [path to Reg file] > regp.log
For example, I have a couple of raw Registry files in C:\reg, so my command line looks like:
C:\Perl>perl regp.pl C:\reg\software > regp.log
As with the earlier version, this file is also easily compiled into a stand-alone executable for Windows systems.
Be forewarned...this script will take a while when being run against the "Software" file...the one that holds the HKLM\Software hive. This is due in part to the fact that the Classes key is HUGE!
The next step for me with this project is to complete a script that will allow the investigator to search for arbitrary Registry keys and values. It's been challenging so far, but I've got a pretty good handle on it, so hopefully I'll be able to post something soon.
A couple of final notes...
1. Testing has been limited, as I have only a limited number of VMWare images to pull test files from.
2. This script is intended for raw Registry files (i.e., system32\config\software, system32\config\system, ntuser.dat) from NT/2K/XP/2K3 systems.
3. I haven't tested this script on Linux systems, because I don't have regular, unimpeded access to such systems. I am trying to get some things tested on a Mac, but that's someone else's machine.
4. If things don't work as expected, please feel free to let me know. When you do that, though, please give me as much info as you can. I received one email from someone who said that things "looked wonky"...I have no idea what that means. Can you send me the output file, and maybe post the file you ran the script against somewhere (i.e., on a web or FTP site??)? That would be helpful in troubleshooting.
Thanks!
USB device descriptors and serial numbers
Whenever I've talked about USB storage devices and their serial numbers, someone always seems to ask me, "If I have an image of a thumb drive, can I find the serial number somewhere in the image?"
My answer to this question has been, "Why not go back and take a look at the image with a hex editor...or at least, say, the first megabyte of the image, and tell me if you find the serial number listed?" After all, you can see if a USB device has a serial number (and if so, what it is) using tools like UVCView. Well, so far, I haven't received any email from someone who's done this, so I started taking a look into this myself.
The answer to the question is simply, "No." The reason for this answer is that the device descriptor is usually stored in EPROM, Flash, or some form of ROM, and is not read when the device is imaged using tools like 'dd'. Generally speaking, you wouldn't want to allow the device descriptor to be modified, as a user could alter some of the data, causing an incorrect driver to be loaded, and the device could potentially then be unusable.
Therefore, I'd like to make a suggestion to law enforcement (and everyone else) when it comes to imaging USB storage devices. Make sure that you have a tool like UVCView (or whatever is suitable or available for your platform) on hand as part of your imaging kit, and copy the device descriptor from the device as part of the imaging process.
One of the elements of the device descriptor is the vendor ID, which is assigned by the fine folks at USB.org. This information can be used in a manner similar to the first couple of octets of a MAC address; i.e., to identify the vendor of the product.
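For reference, the standard USB device descriptor is an 18-byte structure, with idVendor and idProduct as little-endian words and iSerialNumber as the index of the serial-number string. A sketch of pulling those fields out of raw descriptor bytes; the sample descriptor below is fabricated for demonstration (0x0781 is SanDisk's assigned vendor ID):

```python
import struct

def parse_device_descriptor(desc):
    """Extract key fields from a standard 18-byte USB device descriptor."""
    fields = struct.unpack("<BBHBBBBHHHBBBB", desc)
    return {
        "idVendor": fields[7],        # assigned by USB.org, like a MAC OUI
        "idProduct": fields[8],
        "iSerialNumber": fields[12],  # 0 means no serial number string
    }

# A fabricated descriptor: bLength=18, type=1 (DEVICE), USB 2.0,
# class/subclass/protocol 0, max packet 64, then the IDs and indices
desc = struct.pack("<BBHBBBBHHHBBBB",
                   18, 1, 0x0200, 0, 0, 0, 64,
                   0x0781, 0x5151, 0x0100, 1, 2, 3, 1)
info = parse_device_descriptor(desc)
```

Note that a zero iSerialNumber is how a device says it has no serial number at all...which is exactly the kind of thing worth recording at imaging time, since it never makes it into the 'dd' image.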
Monday, September 05, 2005
Upcoming Speaking Engagement
I received word this weekend that I will be presenting at the DoD Cyber Crime Conference 2006 in Palm Harbor, FL, in Jan 2006. I will be presenting a case study, a walk-through of an investigation that takes place on a Windows system. I will be basically tying a couple of my previous presentations (i.e., "The Windows Registry as a Forensics Resource", "Tracking USB Storage Devices on Windows Systems", etc.) together with some "practical" application.
The agenda for the 2005 conference looks pretty interesting, with a good deal of emphasis on LEO-type information (i.e., courses/presentations on ILook, law, etc.).
It seems that Richard Bejtlich will be there, as well. I look forward to meeting him, and listening to his presentations.
Thursday, September 01, 2005
NTFS ADSs, again
I was browsing over on CPAN today, as I do a couple of times a week, to see if there were any new and interesting Perl modules available. I found one called Win32::StreamNames, and decided to take a look. This module, by Clive Darke, takes a filename as an argument and lists any alternate data stream (ADS) names associated with it. I don't see the StreamNames module on the list of Windows packages at ActiveState, but hopefully, it won't be long until it appears.
I haven't installed this module yet, but I will try to do so once (if??) it appears on the ActiveState site. It looks as if it might be useful for automatically parsing and categorizing ADSs, and flagging what's bad, and what Windows just does normally.
Subscribe to:
Posts (Atom)