In my last post, I mentioned that the FRU and FSP had been demoed as part of the CERT VTE. Very cool. If you've read this blog for any period of time, you'll know that I've been interested in live response and forensic analysis of Windows systems for a while. One thing that the VTE demo showed me is that I really have to write up a user guide or manual for using the FSP tools, and then add a GUI to them.
That being said, I've been perusing the usual blogs for write-ups regarding the recent BlackHat Federal conference. When you can't attend, reading others' impressions is the next best thing to being there. Kevin Mandia's presentation caught my eye, so I downloaded it and read through it. One of the things mentioned in the presentation is a "Live Response" tool that should be released by Kevin's company, Red Cliff Consulting, in January. After some discussion in the presentation of how incidents are detected, the things that need to be collected are listed on slide 55 (there are 90 slides, folks, but the presentation is well worth the time). The next slide contains a list of tools that can be run - while I agree with the tools for the most part, there are a couple I'm not sure I'd run, but that's all covered in my book.
Slide 66 shows an image of the Live Response tool...it looks very interesting, and I really wish I'd been able to make it to the conference. I really like what I see in the presentation, overall...Kevin evidently went over several things (at least, in the slides) that I've been thinking about for some time now, such as the fact that live response is evolving due to the notification laws, with California's SB1386 being one of the first. In essence, companies need to know if client data has been compromised in any way.
Thoughts? Where do we go from here? Is live response viable, particularly on Windows systems?
The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st through 4th editions) and "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Sunday, January 29, 2006
Thursday, January 26, 2006
In search of...
No, this isn't a flashback to that old Leonard Nimoy show...it's about training. I got an email last night (don't have permission yet to say from whom) that pointed me to CERT's Virtual Training Environment. It's an online classroom, of sorts, where you can go, select a class, and watch it.
So I checked it out this morning. I went to the "Welcome to VTE" page, and clicked on "Launch VTE". Within seconds, I was looking at a list of topics. I saw "Forensics and Incident Response" and dove right in!
Once the choices of "classes" appeared, I saw that I could choose from documents, demos, and labs. What you get when you run one of the demos is basically a movie. Someone has a screen capture utility running while they narrate what they're doing, and they walk through things. The first one I looked at was "Analyzing Log Files with Notepad". It was pretty basic, but also pretty straightforward and really easy to follow.
There were a lot of other demos available, not all for Windows...there are some for Linux, as well. What I found most interesting, though, is that there is a "Configuration and Setup of the FCU" demo (it's my FRU, just misspelled), and a "Configuration and Setup of the FSP" module!
There's quite a bit of info at this site. The "Forensics" topic includes demos on EnCase, Autopsy, the use of dd, etc. It's very informative...take a look when you get a chance.
Wednesday, January 25, 2006
Cool Magazines
Recently, with the demise of Phrack, I've run across a couple of very interesting online magazines, or e-zines.
From Richard Bejtlich's TaoSecurity blog, I learned of Uninformed and the CodeBreakers Journal. I haven't really taken the time to dig into either of these, but on the surface, they look like promising technical e-zines.
I ran across CheckMate today...pretty interesting first issue. This e-zine specifically targets computer forensics and incident response, and looks as if it may be a pretty good read. I went through one of the first articles on examining a user's browsing activities, and it provided pretty thorough coverage of the topic. One of the things I like most about the article is that it gave the reader information on how to interpret what they saw, rather than just pointing the reader to tools.
Know of any others?
Tuesday, January 24, 2006
"Ooops, I did it again..."
No, this isn't a post about what would be on my iPod if I had one...it's even better.
Recently, the NSA posted a document entitled, "Redacting with Confidence: How to Safely Publish Sanitized Reports Converted From Word to PDF". I downloaded the document, because, well...I like metadata. I decided to see what metadata is in the PDF itself, and found this:
Title -> Redacting with Confidence: How to Safely Publish Sanitized Reports Converted from Word to PDF
Author -> SNAC
CreationDate -> D:20060110111526Z
ModDate -> D:20060120090543-05'00'
Creator -> activePDF DocConverter
Keywords -> word, pdf, redaction, metadata
Producer -> 5D PDFLib
Subject -> I333-015R-2005
Very nice. I pulled this out of the document using the pdfmeta.pl script listed on page 254 of my book.
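The pdfmeta.pl script itself isn't reproduced here, but the general idea is straightforward and worth a quick sketch. In many older PDFs, the document information dictionary (the /Title, /Author, /Subject entries and so on) is stored uncompressed as literal strings, so a simple pattern match over the raw bytes can pull it out. This is a minimal illustration, not the script from the book, and it only handles that simple literal-string case:

```python
import re

def extract_pdf_info(pdf_bytes):
    """Pull simple /Key (value) entries from a PDF's Info dictionary.

    A minimal sketch: it only handles uncompressed, literal-string
    metadata values, which is enough for many older PDFs.
    """
    # Match entries like /Title (Some Title) or /Author (SNAC)
    pattern = re.compile(rb'/(\w+)\s*\(([^)]*)\)')
    info = {}
    for key, value in pattern.findall(pdf_bytes):
        info[key.decode('ascii')] = value.decode('latin-1')
    return info

# A fragment of an Info dictionary, as it might appear in the raw file
sample = b'<< /Title (Redacting with Confidence) /Author (SNAC) /Subject (I333-015R-2005) >>'
print(extract_pdf_info(sample))
```

Real PDFs can store metadata in compressed streams, hex strings, or XMP XML, in which case a proper PDF parser is needed; but for documents like the one above, even this crude approach surfaces the interesting fields.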
Thursday, January 12, 2006
The need for IR training
This ComputerWorld article caught my eye this morning...I've done vulnerability assessments before, and I fully agree with this list of mistakes. They do, in fact, occur.
Each of the mistakes has earned its rightful place in the order of precedence. I started doing vulnerability scanning, commercially, in 1997. This has always included much more than simply running a scanning tool, but what occurred then continues today...we'd deliver our report, and most customers would thank us...and that was it. In a very few cases, some of the issues would be fixed, but mostly due to infrastructure changes, upgrades, etc.
The big one that jumps out at me now, though, is number 5 - "Being unprepared for the unknown". To me, this is an issue of being prepared for incidents. The real world has all sorts of incident response capabilities. I've been on planes with rescue dogs that go to Washington, DC, once a year for testing...they'd been involved in the 9/11 search and rescue, and their condition is being tracked for any signs of health issues. We see cops and firefighters on the news. Ever hear of "smoke jumpers"? Heck, even the military is an "incident response capability" in and of itself.
So look around your IT shop right now. What's your incident response capability? If you're reading this, it's probably you. Are you prepared? How do you recognize or receive notification that an incident has occurred? Is it based on known signatures?
Are you prepared for zero-day exploits?
And don't think for an instant that you can't be prepared for these. I know what some of you are saying...that, by definition, one can't be prepared for a "zero-day", because it isn't known. Well, I'm here to tell you...you're wrong. You're prepared if you know that not all malware processes appear in the Task Manager as "danger.exe" or "malware.exe". Do you know how to get more information about processes, about what's running on a system? Can you triage a system? Can you gather specific information from a system, so that you (or someone else) has a fairly complete snapshot of the system's state and can at least begin to figure out what's going on?
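One way to make that point concrete: instead of hunting for obviously evil process names, compare what's running against a known-good baseline for that class of system and flag anything unexpected for closer review. The baseline and the sample process listing below are hypothetical, purely for illustration; a real baseline would be built from your own documented systems:

```python
# A sketch of baseline-driven process triage: rather than hunting for
# obviously evil names, flag anything NOT in a known-good baseline.
# The baseline and sample listing here are hypothetical examples.

KNOWN_GOOD = {
    'system', 'smss.exe', 'csrss.exe', 'winlogon.exe',
    'services.exe', 'lsass.exe', 'svchost.exe', 'explorer.exe',
}

def triage_processes(process_names):
    """Return the processes that aren't in the baseline, for closer review."""
    return sorted(
        name for name in process_names
        if name.lower() not in KNOWN_GOOD
    )

# On a live Windows box you might feed this a list of process names
# collected from the system; here we use a canned listing.
running = ['System', 'smss.exe', 'svchost.exe', 'iexplore.exe', 'r_server.exe']
print(triage_processes(running))
```

The flagged names aren't automatically malicious...they're simply the ones you haven't accounted for, which is exactly where triage should start.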
Here's another consideration...what do you watch at night? Are you a CSI fan? How about House? Watch a couple episodes of House and start thinking about how you'd perform a "differential diagnosis" of a system.
Looking at the ComputerWorld article one last time, I guess, in a way, my mind ties all of the mistakes back to training issues and misconceptions...chicken. Egg.
Wednesday, January 11, 2006
What is "security"??
Good question. We each approach topics like this differently, based on our background, experiences, etc. This thread on Slashdot caught my eye this morning, as did Brian Krebs' latest blog entry on SecurityFix (careful folks, it's a long blog entry, but an excellent read nonetheless).
My background in security started with pen tests, war-dialing, and vulnerability assessments. I've also done policy development, etc. I've had a hand in incident response and forensics, as well. This is very different from Bill Gates' background...so we view security differently. The Microsoft stance has been to support better security practices, and there have been initiatives with regard to security...so in light of things like SoBig, Code Red, Nimda, and the more recent WMF issues, can we say that Microsoft has failed?
Before giving you my thoughts, let me tell you about something that happened to me back in '00. I was working at a now-defunct telecomm company, as part of the corporate security staff. There was a rogue group of guys who claimed that they had security responsibilities, but you could never really tie them down...they were like kids who were told not to do something, but did it anyway. So, at one point, one of the guys from the team comes over and tells me how his group had identified an issue and confiscated a system. When they confronted the employee (without the presence of or even notifying HR, BTW...), the employee denied any knowledge of the issue. So these guys hired an outside consulting firm to come in and do forensic analysis of the hard drive...and the tasking they gave was to locate any files specific to the SubSeven Trojan/backdoor. That's it.
So this guy tells me that he looked at the hard drive and found a hidden DOS partition. He told me that we shouldn't deal with this company b/c in his mind, they didn't know what they were doing.
We (my boss and I) sat down and talked to the forensic analyst from the company. He showed us the tasking, and their final report. The documents clearly stated that the sole tasking was to locate files associated with SubSeven, which the company did (and to be honest, pretty much anyone could have done at the time).
So the question is, did the company "fail" or perform poorly? The analyst said that he'd identified the hidden DOS partition, but that partition did not contain the files in question. Since it wasn't part of the tasking, and the company was never given any information regarding the overall case or issue, they provided what they were asked for.
Now, back to the issue with Microsoft. I think what this all boils down to is a matter of expectations. When someone high up the food chain within Microsoft gets up on stage, most of the security guys in the audience hear "blahblahblahsecurityblahblah". They then fill in the gaps surrounding "security" with their own expectations, and feel justified pointing out failures. But wait a second...had they listened to the speaker, they might have heard him (or her) set those expectations and define "success" in their own context.
So, on the one hand, you can look at what Microsoft has done to improve security with things like a firewall for XP, and IIS 6.x functionality that's "off" by default, as successful steps toward better "security". But does the recent WMF exploit issue really show that Microsoft has failed overall? Perhaps not. Microsoft's stance seems to be, "yeah, we know that this issue has been around since Windows 3.0, but there haven't been any publicly available exploits until now, and we had higher priority things to work on." Can you get mad at them for that? Really? I mean, don't we do the exact same thing every day? Don't we have limited resources (time, money, etc.) and make decisions about what's important to us? How do we then feel when someone comes back to us and says that we "failed", but their determination of success is different from our own?
Maybe the approach that needs to be taken is different. Maybe what needs to happen is that more of Microsoft's customers need to get together and say, "hey, this stuff you've done is all well and good, but you know, malware, worms and rootkits are really kicking our butts...can you help us out?" Maybe if enough customers said this, Horton would hear the Who (NOT Roger Daltrey). After all, haven't customers gotten Microsoft to redefine "success" before? Didn't someone from Microsoft say back in the early '90s that the Internet would never become what it has, in fact, become today?
Thursday, January 05, 2006
How are you spending your security dollars?
I was reading some comments on another blog today, and one comment in particular caught my eye. A retired CIO lamented the fact that "millions of dollars" were spent providing security to Windows systems...but he put a Mac on the network and never had a problem. Yeah, you read it right..."millions", with an M.
Ugh. My dooty detector goes off, screaming like a banshee, whenever a C-level executive makes comments like that. No, I'm not going to break things down to a para-religious argument over whose OS is better. That's not where I'm going with this one. What I am leading to here is, this sounds like a training issue to me. It sounds like the knowledge level of the IT staff has...shall we say...room for improvement.
Now, I have no doubt that there are some really bright, very knowledgeable IT guys and gals out there, so if this doesn't apply to you, feel free to leave the room.
I was a security weenie at a company once, and the senior admin guy had a bunch of guys working for him. He went to his desktop support guy, a legitimate Dude Among Dudes, and told him, "we can't promote you to an administrator, like I promised, until you get your MCSE." I heard that and thought it was funny...not laughing funny, but "here, try this, it tastes like crap" funny...because none of the current IT administrator staff had an MCSE. Yep, you read that right..."none", with an N.
My point of all this is that this Dude, who'd helped me with virus eradication, was knowledgeable and had a good head on his shoulders (and still does). The admins who wouldn't let him play in their reindeer games botched pretty much every incident they responded to, had no documentation, and had no network diagram. They didn't even know where the egress points were...the ones that bypassed the firewall...even though they'd set them up.
Okay, getting back on track here...where was I? Oh, yeah...training. My thought is this...when it comes to securing any network, regardless of operating systems and applications, you need to start with documentation. If you don't have it, then getting it is going to be a very necessary exercise. This is a good place to start when identifying your risks. Why? Because you have to know what your risks are so that you can start mitigating them...right?
What I'm getting at here is that spending "millions of dollars" to secure Windows systems probably wasn't necessary. If you don't know what you're doing, then of course trying to secure a network is going to be expensive. I simply think that that kind of money would be better spent on things like hiring better qualified personnel, and training the ones you have.
Oh, one other thing...if you're reading this blog entry, then let me throw this out. There are training opportunities available from a variety of sources, at a variety of price points. But what would you say if you could get training for your entire staff (like, 12 - 20 people), with that training targeted to your needs (for your environment), for less than it takes to send 2 or 3 people away to some of the bigger training events? How about if that training led to follow-on training and services that continued to apply specifically to your needs? Would you be interested in something like this?
New NIST document (draft)
I was reviewing the updated E-evidence.info site this morning, and one of the interesting things I came across was the draft NIST SP800-86, Guide to Computer and Network Data Analysis: Applying Forensic Techniques to Incident Response.
As I read through the document for the first time, it's clear that this is a great place to start. From my perspective, I'm glad to see a short, two-paragraph discussion of NTFS alternate data streams on page 4-5 of the document. The authors did provide footnotes with links to URLs for more information. There's also a section on collecting volatile data from systems.
It's a good resource, that's for sure. Take a look when you get a chance.
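Since alternate data streams come up in the document, the naming convention is worth a quick illustration. On NTFS, a data stream is addressed as file:stream:$DATA, and ADS-listing tools typically report streams in that form. This little parser is just a sketch of the convention; the sample file and stream names are made up:

```python
def parse_stream_name(full_name):
    """Split an NTFS stream specifier into (file, stream).

    NTFS addresses alternate data streams as file:stream:$DATA;
    a plain file name comes back with an empty stream component.
    """
    parts = full_name.split(':')
    if len(parts) == 3 and parts[2] == '$DATA':
        return parts[0], parts[1]
    return full_name, ''  # plain file name, default data stream

# Hypothetical sample names, e.g. as reported by an ADS-listing tool
print(parse_stream_name('notes.txt:hidden.exe:$DATA'))  # → ('notes.txt', 'hidden.exe')
print(parse_stream_name('notes.txt'))                   # → ('notes.txt', '')
```

The key point the NIST discussion makes is that tools like Explorer and dir won't show the stream at all...which is exactly why responders need to know the notation.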