The subject thread I've been following and contributing to has proved to be interesting, to say the least, and the most interesting aspects have little to do with the subject...
First off, I guess I really shouldn't be surprised at how many respondents go almost immediately off-topic without bothering to change the subject line. Bad etiquette, guys, but I guess that's to be expected.
I want to take a second or two to summarize some of the most popular responses I've seen in watching this thread, and comment on them...
There's a lot of talk about "protecting against currently unknown exploits"...and I have to ask, if it's unknown, how do you protect against it and how do you know A/V will help you? Really. One example came up of a new exploit in which the bad guy took control of the system and dropped an already-known rootkit (ie, HackerDefender, etc.) on the system...in such a case, A/V would work. Yes, and you'd be very lucky, b/c the bad guy was very stupid. If the bad guy gained such access, why would he not bother to disable or even uninstall the A/V first? Or roll back the .dat file? Or why use something that's old? If he's using a zero-day exploit, why would he then use an old rootkit?
There have been mentions of Code Red, SQL Spida, and SQL Slammer. To me, it's odd that someone would use those examples and say, "I've seen A/V come to the rescue with these viruses", when the infections could have been easily prevented in the first place with configuration settings. With Code Red, all the IIS admin had to do was disable the .ida/.idq script mappings...something not many folks used anyway. With SQL Spida, all the SQL admin had to do was put a password on the 'sa' account. Need I say it? Duh!
Another biggie is that A/V protects against the things that have been missed, the things that haven't been done. Good point, but I would suggest that perhaps the security process itself needs to be revisited. If something wasn't done, why was that? Was it part of the process and the setting wasn't verified before putting the server into production? Or had it been changed? Yet again, A/V software is justified...but as a band-aid approach.
There have been respondents who are in positions where the web servers are administered by multiple folks, and perhaps even developers have admin access to the servers, for adding updates. This falls right into the same category as malware that gains SYSTEM-level access...A/V isn't going to help, b/c at that point, it's GAME OVER, guys! Someone with that level of access can simply disable your A/V software.
So...am I being arrogant? I don't think so. I've managed IIS 4.0/5.0 web servers and watched the logs fill up with failed Code Red/Nimda attempts, etc. I've had NT 4.0 boxes (my own) connected to a DSL hookup with no firewall, and no A/V software...and the only time I've ever gotten a virus, worm, or rootkit on my system is when I put it there myself.
Am I saying that A/V software shouldn't be used? Not at all...I think that, like every tool, it has its place. However, I do think that if a web server admin is doing their job, then it's not necessary to put A/V software on a web server. That was the original question I responded to.
The Windows Incident Response Blog is dedicated to the myriad of information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books: "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics", as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".
Thursday, July 21, 2005
Training
I recently had someone contact me, someone who had read my book and wanted to see about attending my course. Well, one way to do that would be to sign up at MegaMind.org (note: the course actually covers much more than just Windows 2000). Another way to do it is to have me come on-site and teach it at your facility.
This person suggested that I contact Amazon for a list of names of people who are interested in the training, but to my knowledge, Amazon doesn't maintain anything like that.
So I thought I would pose the question via the blog and see what kind of response I got...so, here it is...
If you're interested in incident response training specific to Windows systems, what sort of program do you prefer? Would you prefer traveling to another location to take the course, or having me come to your facility and teach it? How about the material? The course is specific to incident detection, verification, and resolution on live systems...but I have tailored it to the specific needs of certain organizations (i.e., I've extended the 2-day course to 5 days, added or removed material as needed, etc.).
How about forensic analysis of Windows systems? As I've already started working on another book along those lines, I can easily see the necessary material for a detailed, hands-on course coming together.
So...what are your thoughts, preferences, comments?
Tuesday, July 19, 2005
A/V software on web servers
There's an interesting thread over on the SecurityFocus Focus-MS list, regarding the installation of anti-virus software on IIS 6.x web servers. I attempted to ignore it, but this morning felt the need to interject.
In a nutshell, my point has simply been that if a web server is just that...a web server...then for the most part, it's serving up content via port 80 (and possibly port 443). If the system is properly configured, why then is anti-virus software needed? If it's just a web server, and your LAN/infrastructure is properly configured and administered, what is the purpose of adding yet another software package to a system that's going to generate yet another set of logs that will be ignored?
One post mentioned the fact that you can't possibly know all of the threats you'll face. Well, that's true...if you've got your head stuck in the sand. Remote attacks come down to two basic types...those that exploit poor configurations (ie, weak passwords, running unnecessary services, etc.), and those that exploit improper bounds checking in the software itself (ie, buffer overflows). Understanding this makes it easier to reduce the attack surface by greatly restricting the avenues available to an attacker. In fact, it is possible to reduce the attack surface to the point where you still have your necessary functionality, but pretty much the only way to compromise the system is to be the administrator sitting at the console...and at that point, no amount of anti-virus software is going to do you any good.
Another post mentioned that the web server is the public interface to the rest of the world, and anything that gets on the LAN can make it over to the web server, and you may possibly end up with an embarrassing situation (ie, defacement, malware proliferation, etc.). This may be the case...if you've got your web server installed on the same LAN segment as your employee desktops. However, the most likely avenue of attack at that point would be via NetBEUI/file sharing, and if that's enabled on your web server, then it's no longer just a web server, is it?
Take a look at some of the stuff that's hit the Internet. Remember Code Red? Well, some folks at MS set up an IIS 4.0 web server, and it was not susceptible to Code Red a full year before Code Red was launched. In fact, if everyone had simply disabled the proper script mapping the day before Code Red came out, they would not have been vulnerable.
If you're considering putting anti-virus software on a web server, I'd suggest that you look at the root cause as to why you want to do that. Perhaps a better investment would be in training for your staff (ie, support staff, as well as management), or in another product altogether.
Am I saying that web servers should never have A/V installed? No, not at all...what I am saying is that before doing so, you should take a hard look at the reasons why you're doing so. Sometimes just setting some ACLs and removing unnecessary services will go a lot further than installing another software package that needs to be maintained.
As a side note, it seems to me that a lot of folks out there are of the mind, "if you're running Windows, you must have anti-virus installed." To me, this seems to be a very uneducated and misinformed position. I do not use A/V software on any of my personal systems, and have never been infected when I haven't done so intentionally. At work, there are no log entries from the corporate A/V software to indicate an infection of any kind on my workstation. Sure, if your kids are using a home system, you'd want to have anti-virus software installed, but that applies regardless of the platform.
Addendum: I was checking out Bruce Schneier's blog this morning and noticed something that rang true with this blog entry. Specifically, in one of his entries about turning off cell phones in tunnels, Bruce says, "This is as idiotic as it gets. It's a perfect example of what I call "movie plot security": imagining a particular scenario rather than focusing on the broad threats." Wow. Bruce gets lauded as a security expert when he says things like that (and I happen to think he's right). However, when I say something like, "There's no point in installing A/V software on a web server", which is pretty much along the same lines as what Bruce said, only mapped to the digital world, I end up getting emails that are better not quoted in public.
Most known threats can be protected against without the use of A/V software on web servers. New threats won't be caught because they're new, and not yet subject to scrutiny by the A/V community.
Oh, and one other thing...more than one person sent me email telling me that they "saw" A/V software protect systems from being infected with the SQLSpida worm. All I can say is, thanks guys, but you made my point for me.
Monday, July 18, 2005
Bots writing Registry entries
As I've perused some of the anti-virus sites of late, I've noticed a trend that malware...specifically, bots...are writing two particular Registry entries:
[HKCU|HKLM]\System\CurrentControlSet\Control\Lsa
and
[HKCU|HKLM]\Software\Microsoft\OLE
I'm seeing this with several bots...W32.Bropia, W32.MyTob, etc. Some A/V sites point out that these are variations of SD-Bot, which wrote to the keys, as well...but why? A/V companies do a great job of saying which keys get created or modified, but it's tough to figure out *why*.
What's the purpose for writing to these keys? Does it have something to do with the LSASS vulnerability in MS04-011? Is this another autostart location?
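If you want to see what's actually sitting in those keys on one of your own systems, a quick-and-dirty Perl sketch along these lines (using Win32::TieRegistry; only the HKLM versions of the two keys above are shown, and you'd want to compare the output against a known-good baseline) will dump the value names and data:

use strict;
use Win32::TieRegistry ( Delimiter => "/" );

# The two keys mentioned in the A/V write-ups (HKLM versions shown here)
my @keys = (
    "LMachine/System/CurrentControlSet/Control/Lsa/",
    "LMachine/Software/Microsoft/OLE/",
);

foreach my $path (@keys) {
    my $key = $Registry->{$path};
    if (!defined $key) {
        print "Could not open $path\n";
        next;
    }
    print "$path\n";
    # List each value name and its data; binary data is shown in hex
    foreach my $name ($key->ValueNames()) {
        my ($data, $type) = $key->GetValue($name);
        $data = unpack("H*", $data) if ($type == 3);   # REG_BINARY
        printf "  %-25s %s\n", ($name eq "" ? "(Default)" : $name), $data;
    }
    print "\n";
}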
Wednesday, July 13, 2005
Prefetch file metadata
I exchanged emails with the anonymous poster from my previous metadata entry, and got an interesting perspective. Specifically, not enough of the folks actually performing forensic analysis of Windows XP systems are aware of the Prefetch directory and what it contains.
This reminds me of a very brief exchange I had w/ one of the virus writers from the group 29A a while back. Specifically, Benny and Ratter had written some viruses that took advantage of NTFS alternate data streams, and I asked them where they saw things going. The response I got back stated, in brief, that it was a dead end b/c everyone knows about ADSs.
Hhhhmmm...so why is it that when I talk about them at conferences, attendees sit up and say things like, "Okay...go back a sec..."??
My point is that just b/c some of us know something, we have to realize that not everyone does. Just b/c someone is, say, a forensic analyst for local, state, or even federal law enforcement, that doesn't mean that they know all of the ins and outs of Windows XP.
Keeping that in mind, .pf files within the Prefetch directory have certain metadata associated with them; specifically, the file contains several Unicode strings (view them using strings.exe or BinText from Foundstone), one of which is the path to the executable image. So you can see where the executable was launched from.
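If you don't have those tools handy, a few lines of Perl will pull the same sort of thing out of a .pf file...this is just a rough sketch that carves ASCII-range Unicode (UTF-16LE) strings, nothing more:

use strict;

# Usage: perl pf_strings.pl <file.pf>
my $file = shift || die "Usage: $0 <file.pf>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $data;
{ local $/; $data = <$fh>; }
close($fh);

# Runs of 4 or more printable ASCII chars, each followed by a null byte,
# which is how the paths and filenames appear in the .pf file
while ($data =~ /((?:[\x20-\x7e]\x00){4,})/g) {
    (my $str = $1) =~ s/\x00//g;
    print "$str\n";
}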
So...outside of strings and MAC times...what is there? Has anyone ever seen an ADS associated with a .pf file?
Tuesday, July 12, 2005
"Hidden" event records
I can't release the info I have yet, and I'm not trying to tease anyone...honest. However, I can say this...the method I'm using to retrieve event records from an Event Log file is very interesting. In my single-minded simplicity, I've found a way to locate "hidden" event records!
Seriously. I have a test Event Log file from a system, and using the MS API (ie, Event Viewer, psloglist.exe, and the Perl Win32::EventLog module), I can "see" 2363 total event records, running from 11239 to 13601, inclusive. However, using my Perl script, and verifying it by hand, I can also "see" event record number 11238, which wasn't overwritten yet...the header information for the Event Log file simply tells the MS API to start with record number 11239 (by giving not only the record number, but the offset where it's located within the file).
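To give you a feel for it (without giving away the paper), here's a bare-bones sketch that just reads the file header, assuming the commonly documented ELF_LOGFILE_HEADER layout of 32-bit fields...anything sitting on disk with a record number below the "oldest" value is invisible to the API but may still be intact in the file:

use strict;

# Usage: perl evt_header.pl <file.evt>
my $file = shift || die "Usage: $0 <file.evt>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
read($fh, my $hdr, 48) == 48 || die "Short read on header\n";
close($fh);

# ELF_LOGFILE_HEADER: header size, "LfLe" signature, versions, then the
# offsets and record numbers the API uses as its starting point
my ($size, $sig, $maj, $min, $start_ofs, $end_ofs, $cur_rec, $oldest_rec)
    = unpack("VA4VVVVVV", $hdr);
die "No LfLe signature...not an Event Log file?\n" unless ($sig eq "LfLe");

print  "Oldest record number      : $oldest_rec\n";
print  "Next record number        : $cur_rec\n";
printf "Offset of oldest record   : 0x%x\n", $start_ofs;
printf "Offset of end-of-file rec : 0x%x\n", $end_ofs;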
Very interesting stuff.
Saturday, July 09, 2005
File Metadata
I've been working on my GMU2005 presentation regarding file metadata on Windows systems...basically, showing the types of metadata that are in and associated with various files on Windows boxen.
The stuff I've covered includes Office documents (I even include my MergeStreams demo, b/c it's way cool), PDF documents, Event Log files, and PE file headers. I also cover NTFS Alternate Data Streams and MAC times.
Am I missing anything really obvious here? My goal with this presentation is to tell the audience, "hey, guys and gals...there're all these files on Windows systems, and they're usually there by default, in many environments. There's a lot more information you can pull from them than just the fact that they exist."
I'm just trying to do a sanity check. I went back through my book to see if there's anything I really missed, and I think I've got it. Sometimes, you get to doing this stuff so often that you stop seeing how important it is for others in the field to know it...you stop seeing the forest for the trees, so to speak.
Oh, and I emailed the guy in charge of the GMU2005 conference, and asked if I could be squeezed in at the last minute with a presentation on the Event Log file format. I specifically asked to get a slot during prime time...not at 5pm on the last day. We'll see how it goes...but I'll be putting the actual presentation together next week and getting it in the approval pipeline. That'll give me 5 presentations at a single conference. Ouch!
And yes, once the presentations have been approved for public release, I'll post them.
Friday, July 08, 2005
Where, oh, where did my little SSID go...?
Got wireless? Ever go to the Control Panel, to the Network Connections applet, open up the properties for the Wireless Network Connection, click on the Wireless Networks tab, and see a whole bunch of SSIDs listed in the "Preferred Networks" box? You probably know how they got there, and you can easily get rid of them...but have you ever wondered where they're kept on the system?
Ever imaged a Windows XP drive and wondered what wireless networks the suspect connected to?
Well, I've been digging around and I've found it. Open up RegEdit and navigate to the following key:
HKLM\Software\Microsoft\WZCSVC\Parameters\Interfaces
See a subkey that looks like a GUID there? I've got one on my system, you may have more. Well, click on the subkey and look over in the right-hand panel. If you don't see values named "ActiveSettings", "Static#0000", etc., then move on to the next GUID.
If you find one of these values, right-click it and choose "Modify". See the SSID in the binary data?
Now, my laptop is a Dell system and uses Broadcom software. If you don't see the values I mentioned in your Registry, check your client application for your wireless stuff, and let me know what you've got. I've read on the 'Net that Cisco and 3Com client apps keep the SSIDs in the Registry in plain text.
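If you'd rather not click through RegEdit, here's a rough Perl sketch of the same walk-through using Win32::TieRegistry...it just enumerates the GUID subkeys and carves printable strings out of the binary data, which is where the SSID showed up on my setup (your mileage may vary, depending on your wireless client software):

use strict;
use Win32::TieRegistry ( Delimiter => "/" );

my $base = "LMachine/Software/Microsoft/WZCSVC/Parameters/Interfaces/";
my $interfaces = $Registry->{$base} || die "Cannot open $base\n";

# Each GUID subkey corresponds to a wireless interface
foreach my $guid ($interfaces->SubKeyNames()) {
    my $key = $interfaces->{$guid . "/"} || next;
    print "Interface $guid\n";
    foreach my $name (grep { /^(ActiveSettings|Static#\d+)$/ } $key->ValueNames()) {
        my $data = $key->GetValue($name);
        # Crude carve: pull runs of printable characters out of the binary blob
        my @strings = ($data =~ /([\x20-\x7e]{3,})/g);
        print "  $name: @strings\n";
    }
    print "\n";
}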
Thursday, July 07, 2005
Rootkit Detection, and a prediction (of sorts)
I was over on Rootkit.com again today, reading up on some of the recent entries...if you're at all interested in Windows security, you should really consider signing up. Anyway, I was reading an article by the erstwhile Joanna Rutkowska on crossview-based rootkit detection, and was really fascinated by what I was reading. Her article on Rootkit.com discusses various issues and does a good job of outlining the "war of attrition" as the good guys develop new ways to detect rootkits, and the rootkit authors (some can be good guys, too...) develop new ways to avoid detection.
Joanna makes some interesting comments, in particular:
One may ask a simple question now: why bother to hide files at all? Isn’t the idea of “hide in the crowed” equally stealth? The answer, fortunately, is no...
and
The answer is no, because the current antivirus technology is able to find all (unhidden) executable files and then perform some kind of analysis if the given PE file looks like a potential rootkit/malware installer (for e.g. check if it uses functions like OpenProcess(), OpenSCManager(), ZwSetSystemInformation() and similar). When designing such scanner we need to remember that rootkit executable can comprises of two parts, one being an actual malware loader and the other being a (polymorphic) decoder.
I'm not sure that I entirely agree with her statement, though her reasoning is certainly sound. First off, let me just say that we all come from different places and have different opinions based on different experiences. For example, I have military training in my background, which includes the concept of Maneuver Warfare (as practiced by experts). Given that, and also given that we're seeing more and more attacks in the media that seem to take a more economic or financial focus, my thought is that we're going to see more targeted attacks.
What does this mean? Well, rather than going for mass infections, we'll likely see programs installed on fewer, but targeted, machines. Am I saying that this is the death of Internet worms? Not at all...we'll have those around for a long while yet. But what I am saying is that it's very likely that the worms will be test cases...what works, how "noisy" is it, how quickly is something detected and turned over to the anti-virus vendors for analysis and signature creation? With this kind of information, the attacker can target his approach...and all without rootkit technology.
I'll give you an example. In the military, it's commonly known that during inspections, you give the inspector something to find...b/c if you don't, he won't leave until he's gone to some very dark, uncomfortable places with a microscope and a pen light. So, you give him something to find...not too significant, but enough to satisfy him so he'll...well...go away. Well, map that sort of thing over to what we've been seeing since the inception of viruses, and especially since backdoors like Back Orifice were released...when the incident occurs, it's detected b/c it has a significant and often immediately noticeable impact on systems. Well, what if the attacker decided to be really stealthy, and not give the inspector (Administrator, in this case) any cause to even look around in the first place?
Why are rootkits used? To hide the attacker's presence when the administrator or investigator comes looking. So...don't do anything to cause the investigator to look in the first place.
Where are attacks going? Think about maneuver warfare...one of the concepts is to bypass strongpoints. Marines assaulting a beach will bypass a bunker that's facing the beach, and cut it off from the rear, choking off the supply routes that keep the guys in the bunker in beans, bullets, and band-aids. The same holds true with crime...bad guys are going to attack the easy targets first...the unlocked cars and houses, the unescorted children and women, etc. Online, something that looks like it's fairly unattended/unmanaged will be attacked first. Why go after the heavily protected server, where *if* you do get in, you'll create a lot of noise in doing so (in my book, I used the example of Ethan Hunt in Mission: Impossible crushing up a light bulb and spreading the shards outside the apartment in the safe house...), and someone's going to come looking?
Attacks are likely going to be targeting less well protected systems, and the attacks are likely going to have less of an impact on the systems overall. The attacks will be more subtle, and the attacker is going to take great pains to stay stealthy and hidden, by not attracting attention to the fact that he's there. Do you need rootkit technology for this? No. It's been widely seen that it doesn't take a lot of effort to remain hidden from most administrators, even if you're hiding in plain sight (no disrespect intended, guys and gals). Adding a program to a system that isn't going to be detected by anti-virus software (all that takes is something new), isn't going to create a lot of noise, and isn't going to crash or overwhelm the system is all it takes.
Are you like me, and need examples and specifics? No problem. Anyone remember
Event Log file format
As a follow-up to my earlier post on the EventLogRecord structure, I wanted to mention that after no small effort, and with some assistance from someone involved with PyFlag, I was able to figure out the format of Event Log files.
Okay, at this point, you're probably thinking...so what? Well, consider this...you're analyzing a Windows system, but you're on Linux. Or you have a corrupted Event Log file, and the Event Viewer (and even psloglist.exe) can't open it. Or you're looking for event records in slack space. In any one of these instances, knowing the structure of the Event Log file would be very helpful. I've drafted a paper, and I need to see about getting it through my employer for public release. If that doesn't work, or it gets stalled, I'll see about releasing the information via another (albeit acceptable) means.
So why am I telling you this, only to say, "I can't release it yet?" Well, my thought was that if anyone has a pressing need to know something about the Event Log file format now (and I mean right now), send me an email and I'll see what I can do to answer your question(s). Otherwise, hang tight and I'll see about getting the information out.
The other reason I'm posting this is to ask about forums/magazines suitable for posting this sort of information. I've been working on articles for the Digital Investigation Journal, but it does take a while for the article to be available to the public. If there's a forum similar to the DIJ, but quicker...let me know.
Addendum: I wanted to add a couple of comments with regards to my effort in this project. First off, since I started this project, a tool called GrokEVT was released. A post hit the SF Forensics list, and I initially caught wind of it there, and since I'd posted asking about this subject, I got a couple of emails from folks pointing it out. There are also some other materials out there, but I can't provide links, b/c right now the links seem to be broken.
Anyway...GrokEVT looks like an excellent tool. It seems to do pretty much everything: extract the event records from the file, search the Registry for message files, then extract the message strings from the file. However, the documentation does state that some of these functions are "unstable". Well...it's a good start.
The one thing that the package doesn't seem to do is explain the format of the Event Log file itself. Yes, I've only looked at a small piece of the puzzle, and no, my solution isn't as comprehensive as GrokEVT or PyFlag. However, the little bit that I've done does provide the forensic analyst with the necessary information to locate event records in slack space, and extract and interpret those records. What I've also done is create a Perl script that uses several functions to retrieve event records from a file. These functions can be used to retrieve records from a corrupted or partially deleted Event Log file.
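Just to illustrate the general idea (this isn't the script mentioned above, just a bare-bones sketch that assumes the commonly documented EVENTLOGRECORD layout), scanning a chunk of raw data...a dd image, slack space you've carved out, or a corrupted .evt file...for the "LfLe" signature and reading the fixed fields looks something like this:

use strict;

# Usage: perl carve_evt.pl <raw data file>
my $file = shift || die "Usage: $0 <file>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $data;
{ local $/; $data = <$fh>; }
close($fh);

# Each EVENTLOGRECORD starts with a 4-byte length, then the "LfLe" signature,
# record number, TimeGenerated, TimeWritten, and EventID (all 32-bit LE)
while ($data =~ /LfLe/g) {
    my $ofs = pos($data) - 8;                 # back up over the length field
    next if ($ofs < 0);
    my ($len, $sig, $rec_num, $time_gen, $time_wri, $event_id)
        = unpack("VA4VVVV", substr($data, $ofs, 28));
    # Skip the file header and anything too short/long to be a real record
    next if ($len < 0x38 || $ofs + $len > length($data));
    printf "Offset 0x%08x  Record %-8u  EventID %-6u  Generated %s UTC\n",
        $ofs, $rec_num, ($event_id & 0xFFFF), scalar gmtime($time_gen);
}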
Knowledge of the Event Log file format is also useful in understanding and detecting anti-forensics techniques involving the Event Log.
My hope is that someone finds this information useful.
System Analysis
Ever notice how you go to a conference or view something online that talks about "analysis", and all you see/hear/read about is data collection?
I was at a conference a couple of weeks ago, and one of the speakers was giving a presentation on "live data analysis". The speaker did a great job talking about collecting data...but that's not analysis. Or is it?
I have to say at this point, I'm confused. I've been thinking for a while that data collection and analysis are two different things. After all, we see it all around us as two separate actions. The men and women on TV shows like "CSI" do collection...and then they perform their analysis.
At the beginning of every month, I like to drop by the E-Evidence.info site to see what wonderful new papers and presentations are posted. I love reading through some of the stuff that appears on the site. At one point, I even went back through the archives from previous months and years. There's always something interesting posted here.
Yesterday, I ran across a paper about checking Windows boxen for signs of compromise. Reading through the paper, I think it's extremely useful to the right audience, but it doesn't say anything about analysis...it's all about running tools. Some of the descriptions of tools and why they are used are pretty bare, to say the least. But like I said, it's a great paper for the right audience.
I then read through a presentation on the "live analysis" of a Linux system. Again...go here, get this tool, run it. While the presentation does present issues such as Tor networks used for anonymity and privacy (if you're interested in that kind of thing, check out VPM), it really doesn't do much to cover "analysis". Even Dana Epp's 2004 presentation includes the word "analysis" in the title, but the presentation itself only glosses over any actual analysis.
My point is that data collection is easy...it's the analysis that's the hard part, and what we need to start focusing on. The tools are there. Techniques and methodologies for collecting data abound. But I'm not saying that we shouldn't keep presenting and writing on data collection...what I am saying is that if the presentation or paper has the word "analysis" in the title, then analysis should be discussed.
Okay, I know what you're going to say..."hey, dude, chill! There are just too many possible things that someone can look for when doing analysis." And I'd agree with you. But I also think that a really cool way to do presentations is to pick something...something you've seen or done, or something that's of interest to your audience (I know, it's really hard to get information on what others are interested in...)...and go over that in detail. I've decided that for my part, I'm going to start doing more to cover the actual analysis...now that you have the data, what is it telling you...in my papers and presentations. In fact, I submitted an abstract for the DoD CyberCrime Conference for a paper/presentation that does exactly that. I don't know if the proposal has been accepted yet or not, but my intention is to walk through the analysis of a corporate case, specifically the theft of proprietary information. It should be interesting...not only in putting the presentation together, but also in the audience's reaction.
Friday, July 01, 2005
The media, and how they skew "attacks"
I read an article in the Bozeman Daily Chronicle today about how a database system housing personal information about hunters in Montana was compromised.
I have to say, I'm extremely disappointed, not only with the state of IT, but also with the popular media.
Reading through the article, it's pretty clear what happened. A system owned and administered by folks in Montana was compromised, and someone tried to turn it into a warez server. In fact, the activity that appeared in the logs could have been completely automated. Many of us have seen this sort of thing before...an automated script scans for FTP servers and tries to log into the "anonymous" account. If it's able to do so, it tries to create a directory, in order to see if it has write access. If the script is successful, it either logs the IP address of the vulnerable system and moves on, or it creates the necessary directory structure and starts uploading files. Of course, this is one of many ways that this kind of activity can occur.
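By the way, the check those automated scripts perform is trivial to run against your own FTP server before the bad guys do...here's a rough sketch using Net::FTP (the hostname argument and scratch directory name are just placeholders):

use strict;
use Net::FTP;

# Usage: perl ftp_anon_check.pl <host>
my $host = shift || die "Usage: $0 <host>\n";

my $ftp = Net::FTP->new($host, Timeout => 20)
    || die "Cannot connect to $host: $@\n";

if ($ftp->login("anonymous", "anon\@example.com")) {
    print "[$host] anonymous login accepted\n";
    # Try to create (and immediately remove) a scratch directory to test write access
    my $dir = "_writetest_" . $$;
    if ($ftp->mkdir($dir)) {
        print "[$host] anonymous WRITE access...this is exactly what the warez scanners look for\n";
        $ftp->rmdir($dir);
    }
    else {
        print "[$host] anonymous access appears to be read-only\n";
    }
}
else {
    print "[$host] anonymous login refused\n";
}
$ftp->quit();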
So what's my point? Well, if I were a hunter in Montana, I'd want to know why my personal information was on a database system that was accessible from the Internet, in a manner such that it could be attacked in this way. What service was attacked? The database is Oracle, and though that software has had its share of vulnerabilities, I don't get the sense (again, my source being the article in question) that the database itself was attacked...but that the system it was running on was attacked through another (possibly unnecessary) service. So...why was it connected to the Internet in such a way as to be accessed by this "attacker", and what (potentially unnecessary) services/daemons were running on it and why?
Speaking of questions, the author of the article had an excellent opportunity to make a mark by asking those tough questions. I believe that legislation is getting us to the point where incidents such as this must be reported. Now what needs to happen is that knowledgeable people need to ask the tough questions...why was the system connected to the Internet in this manner? Who is responsible for the design decision? Who is responsible for the administration of the system? Once these questions start to be asked, maybe the IT folks actually making the decisions will start thinking a bit harder about what they're doing.
So, while we don't...and probably never will...have all of the information about the attack and what actually occurred, articles like this tend to spread FUD amongst the parts of the population that aren't as familiar with security (this includes a lot of IT folks) issues as some of us. I'm not saying that I'm an expert, but I do know enough to recognize FUD like this...