Wednesday, November 23, 2011
Online Meetups
When I ask online for input into the NoVA Forensics Meetups, most of the responses I get back are from folks who haven't attended the meetups but want to...and who live too far away to do so, so they ask me when I'm going to start a meetup in their area. I had a chance to speak with Mike Wilkinson (who teaches at Champlain College) while we were both out at PFIC 2011, and not long ago, Mike posted a survey to see if folks would be interested in attending or presenting at online meetups. Mike recently posted the results of the survey, along with a schedule of presentations.
Based on the results, Mike will be running the online meetups on the third Thursday of each month, at 8pm EST, starting on 15 Dec 2011.
NoVA Meetup
While we're on the topic of the meetups, I thought I'd throw out a reminder to everyone about the next NoVA Forensics Meetup on Wed, 7 Dec. I'm looking forward to this one, as Sam Brothers will be presenting on mobile forensics.
Case Notes
Corey posted a narrative version of case notes for an exam he recently worked. Corey does a great job of walking the reader through the process of discovery during the exam, and if you look at what he's doing, you'll see his process pretty clearly, starting with his goal of determining the initial infection vector (IIV) for the malware infection. He even went so far as to post his investigative plan.
Another aspect of Corey's post that I really liked was this:
Some activities were conducted in parallel to save time.
I can't tell you the number of times I have seen examiners with several systems (a Mac Pro Server and two MacBook Pro systems) do nothing but start a credit card number (CCN) search against the one image they have, using EnCase, and state that they can't do anything else because the image is in use. When you've got multiple systems, you can easily extract designated data from within the image before subjecting it to a long-running process (AV or CCN scans, etc.)...or simply create a second working copy of the image. Or, instead of starting the long-running processes at the beginning of your day, start them when you know you're going to have some down-time, or even before you leave the lab.
As part of his analysis process, Corey did two other really impressive things; he made use of the tools he had available, and he created a timeline. One of the things Corey mentioned in his post was that he created a batch file to run specific RegRipper/rip.exe plugins and extract specific data; this is a great use of available tools - not just RegRipper, but also batch file scripting - to get the job done. Also, Corey walks through portions of the timeline he created, opening it in Excel and highlighting (in yellow or red) specific entries.
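As an aside, for anyone who hasn't tried scripting rip.exe this way, a minimal sketch of that kind of wrapper might look something like the following (I've used Perl here rather than a batch file, and the hive path and plugin names are examples only...not what Corey ran):

#! c:\perl\bin\perl.exe
# Sketch: run a handful of RegRipper plugins via rip.exe against a hive
# file and collect the output into a single report. The hive path and
# plugin names below are examples only.
use strict;

my $rip     = "rip.exe";
my $hive    = "F:\\case\\config\\SOFTWARE";      # example path to an exported hive
my @plugins = qw(uninstall networklist);         # example plugins
my $report  = "software_report.txt";

open(my $out, ">", $report) || die "Could not open $report: $!\n";
foreach my $plugin (@plugins) {
    print $out "---- $plugin ----\n";
    # rip.exe -r <hive> -p <plugin> sends its output to STDOUT
    print $out `$rip -r $hive -p $plugin`;
    print $out "\n";
}
close($out);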
I'll leave the rest of the post to the reader...great job, Corey!
Tools
Scalpel was updated a bit ago...if you do any file carving, check it out.
After I posted my macl.pl tool, I received an email from WiFiMafia regarding wwtool v0.1, a CLI tool for listing available wireless networks. While it's not specifically a DFIR tool, I can easily see how it would be useful for assessment work.
Friday, November 18, 2011
Good Stuff
Geolocation Information
Chad had an excellent post recently regarding geolocation data; besides mobile devices, Windows systems can potentially contain two sources of geolocation information. One is the WiFi MAC addresses that you can retrieve from the Registry...once you do, you can use tools like macl.pl to plot the location of the WAP on a map. Second, some users back up their smartphones to their desktop, using iTunes or the BlackBerry Desktop Manager...you may be able to pull geolocation information from these backups, as well. Check out the FOSS page for some tools that may help you extract that information.
Interviews
Like most analysts, I like to see or hear what other analysts are seeing, and how they're addressing what they're seeing.
Ryan Washington's CyberJungle interview (episode 238) - Ryan was interviewed about his PFIC 2011 presentation on how forensicators can discover artifacts of anti-forensic attempts. As with his presentation, Ryan discusses not just hiding from the user, but also how even seasoned pen testers leave tracks on systems, often when they try very hard to be stealthy.
I remember a discussion I had with members of the IBM ISS X-Force a while ago regarding an Excel exploit that allowed them access to a system. I asked about artifacts, and was told that there were none. I then asked: if the exploit included sending a malicious Excel file and having the user open it, wouldn't the Excel spreadsheet itself be an "artifact"? After all, many a forensicator has nailed down a phishing attack by locating the malicious PDF file in the email attachment archive.
Interestingly, Ryan also mentions digital "pocket litter", which isn't something that many folks who try to hide their activities are really aware of...
Chris Pogue's PaulDotCom interview - episode 267, starting about 56:33 into the video. Chris talks about Sniper Forensics: what it means, where we are now, and where we need to go, all with respect to DFIR. Chris also references some of the same topics that Ryan discussed, and in some cases goes into much more technical detail (re: the discussion of MFT attributes). Chris talks about some of the things that he and his team have seen, including MBR infectors, and memory analysis.
Another cool thing about the interview is that you get to see Chris's office, and hear his cell phone ring tone!
Wednesday, November 16, 2011
Stuff, Reloaded
More APT Confusion
I ran across an interesting article on TechTarget recently which states that confusion over the APT threat "...leads companies to often misappropriate resources, making unnecessary or uninformed investments."
Really? I remember going on-site to perform IR back in 2006, when I was with the ISS ERS Team, and learning how the customer knew to contact us: they had three copies of ISS RealSecure, all still in their shrink-wrap...one of which was being used to prop a door open. So what I'm saying is that, with respect to the TechTarget article, it isn't necessarily confusion over what "APT" means that leads to "uninformed investments", although I do think that the marketing most organizations find themselves inundated with does lead to significant confusion. Rather, it's a lack of understanding of threats in general, as well as the panic that follows an incident, particularly one that, when investigated, is found to have been going on for some time (weeks, months) prior to detection.
Context...no, WFP. Wait...what?
When presenting on timeline analysis, or, most recently at PFIC 2011, on Windows forensic analysis, one of the concepts I cover is context within your examination. Recently, Chris posted on the same topic, and gave a great example.
Something about the post, and in particular the following words, caught my eye:
"...manually went through the list of running services using the same methodology...right name, wrong directory, or slightly misspelled name, right directory (for the answer to why I do this, check this out... http://support.microsoft.com/kb/222193)."
Looking at this, I was a little confused...what does Windows File Protection (WFP) have to do with looking for the conditions that Chris mentioned in the above quote? I mean, if a malware author were to drop "svch0st.exe" into the system32 directory, or "svchost.exe" into the Windows directory, then WFP wouldn't come into play, would it?
What's not mentioned in the post is that, while both of the conditions are useful techniques for hiding malware (because they work), WFP is also easily "subverted". The reason I put "subverted" in quotes is that it's not so much a hack as it is using an undocumented MS API call. That's right! To break stuff, you don't have to break other stuff first...you just use the exit ramp that the vendor didn't post signs for. ;-)
Okay, to start, open WFA 2/e and turn to pg. 328. Just below the middle of the page, there's a link to a BitSum page (the page doesn't seem to be available any longer...you'll need to look here) that discusses various methods for disabling WFP...one that I've seen used is method #3; that is, disable WFP for one minute for a particular file. This is something that is likely used by Windows Updates. This CodeProject page has some additional useful information regarding the use of the undocumented SfcFileException API call.
To show you what I mean by "undocumented", take a look at the image to the right...this is the Export Address Table from sfc_os.dll from a Windows XP system, via PEView. If you look at the Export Ordinal Table, you'll see only the last 4 functions listed, by name. However, in the Export Address Table, you don't see names associated with several of the functions.
Note that at the top of the BitSum page (archived version), several tools are listed to demonstrate some of the mentioned techniques. As the page appears to be no longer available, I'm sure that the tools are not available either...not from this site, anyway.
Mandiant has a good example of how WFP "subversion" has been used for malware persistence; see slide 25 from this Mandiant The Malies presentation. W32/Crimea is another example of how disabling WFP may be required (I've seen the target DLL as a "protected" file on some XP systems, but not on others...). This article describes the WFP subversion technique and points to this McAfee blog post.
Yes, Virginia...it is UTC
I recently posted a link to some of my timeline analysis materials that I've used in previous presentations. I've mentioned before that I write all of my tools to normalize the time stamps to 32-bit Unix time format, based on the system's interpretation of UTC (which, for the most part, is analogous to GMT). In fact, if you open the timeline presentation from this archive, slide 18 includes a bullet that states "Time (normalized to Unix epoch time, UTC)".
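For anyone writing their own parsing code, the normalization itself is pretty straightforward; here's a minimal sketch of the FILETIME-to-Unix-epoch conversion, assuming you've already read the 64-bit FILETIME as two 32-bit DWORDs (which is how it's stored in the Registry and in a number of other Windows data structures):

# Sketch: convert a Windows FILETIME (100-nanosecond intervals since
# 1 Jan 1601, UTC) to 32-bit Unix epoch time (seconds since 1 Jan 1970, UTC)
use strict;

use constant EPOCH_DIFF => 11644473600;    # seconds between 1601 and 1970

sub filetime_to_unix {
    my ($hi, $lo) = @_;                    # high and low 32-bit DWORDs
    my $ft = ($hi * (2 ** 32)) + $lo;      # reassemble the 64-bit value
    return 0 if ($ft == 0);
    # Note: for exact precision on very large values, use Math::BigInt
    return int($ft / 10_000_000) - EPOCH_DIFF;
}

# Usage: my $unix = filetime_to_unix($high, $low); print scalar gmtime($unix), " UTC\n";

Since gmtime() is used rather than localtime(), everything stays in UTC; translating to local time (if you need to) becomes a presentation issue, not a storage issue.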
I hope this makes things a bit clearer to folks. Thanks!
Intel Sharing
Not long ago, I posted about OpenIOC.org, and recently ran across this DarkReading article that discusses intel sharing. Sharing within the community, of any kind, is something that's been discussed time and time again...very recently, I chatted with some great folks at PFIC (actually, at the PFIC AfterDark event held at The Spur in Park City) about this subject.
In the DarkReading article, Dave Merkel, Mandiant CTO, is quoted as saying, "There's no single, standardized way for how people to share attack intelligence." I do agree with this...with all of the various disparate technologies available, it's very difficult to express an indicator of compromise (IoC) in a manner that someone else can immediately employ within their infrastructure. I mean, how does someone running Snort communicate attack intel to someone else who monitors logs?
I'd suggest that it goes a bit beyond that, however...there's simply no requirement (nor, apparently, any desire) for organizations to collect attack intelligence, or even simply share artifacts. Most "victim" organizations are concerned with resuming business operations, and consulting firms are likely more interested in competitive advantage. At WACCI 2010, Ovie talked about the lack of sharing amongst analysts during his keynote presentation, and like others, I've experienced that myself on the teams I've worked with...I wouldn't have any contact with another analyst on my team for, say, three months, and after all that time, they'd have nothing to share from their engagements. We took steps to overcome that...Chris Pogue and I wrote a white paper on SQL injection, we developed some malware characteristics, and I even wrote plugins for RegRipper. I've seen the same sharing issue when I've talked to groups, not just about intel sharing, but also about the forensic scanner.
I think that something like OpenIOC does provide a means for describing IoCs in a manner that can be used by others...but only others with the same toolset. Also, it is dependent upon what folks find, and from that, what they choose to share. As an example, take a look at the example Zeus IOC provided at the OpenIOC.org site. It contains some great information...file names/paths, process handles, etc...but no persistence mechanism for the malware itself, and no Registry indicators. So, this IoC may be great if I have a copy of IOCFinder and a live system to run it against. But what happens if I have a memory dump and an acquired image, or just a Windows machine that's been shut off? Other IoCs, like this one, are more comprehensive...maybe with a bit more descriptive information and an open parser, an analyst could download the XML content and parse out just the information they need/can use.
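As a quick proof-of-concept of what I mean by an open parser, here's a rough sketch that simply dumps the indicator terms out of an IOC file so that an analyst can see which ones apply to the data sources they actually have; it assumes the OpenIOC 1.0 layout (IndicatorItem elements, each with a Context element carrying a "search" attribute and a Content element holding the value), and the file name is only an example:

# Sketch: list the search terms and values from an OpenIOC XML file.
# Assumes the OpenIOC 1.0 layout (IndicatorItem/Context/Content elements).
use strict;
use XML::LibXML;

my $file = shift || "zeus.ioc";            # example file name
my $doc  = XML::LibXML->new->parse_file($file);

# Use local-name() in the XPath so we don't have to register the schema namespace
foreach my $item ($doc->findnodes('//*[local-name()="IndicatorItem"]')) {
    my ($context) = $item->findnodes('./*[local-name()="Context"]');
    my ($content) = $item->findnodes('./*[local-name()="Content"]');
    next unless ($context && $content);
    printf "%-40s %s\n", $context->getAttribute("search"), $content->textContent;
}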
Now, just to be clear...I'm not saying that no one shares DFIR info or intel. I know that some folks do...some folks have written RegRipper plugins, but I've also been in a room full of people who do forensic analysis, and while everyone admits to having a full schedule, not one person has a single artifact to share. I do think that the IoC definition is a good start, and I hope others pick it up and start using it; it may not be perfect, but the best way to improve things is to use them.
DoD CyberCrime Conference
Thanks to Jamie Levy for posting the DC3 track agenda for Wed, 25 Jan 2012. It looks like there are a number of interesting presentations, many of which go on at the same time. Wow. What's a girl to do?
NoVA Forensics Meetup
Just a quick reminder about the next NoVA Forensics Meetup, scheduled for Wed, 7 Dec 2011, at the ReverseSpace location. Sam Brothers will be presenting on mobile forensics.
Tuesday, November 15, 2011
Stuff
Registry Parsing
Andrew Case, developer of Registry Decoder, recently posted regarding using reglookup for Registry analysis. There are a number of links in Andrew's post to some of Tim Morgan's papers regarding such topics as looking for deleted Registry keys, so be sure to take a look.
PFIC 2011
I had an opportunity to meet a lot of great folks in Park City, many of whom I had only known about via their online presence. One of those is fellow DFIR'er and fellow former Marine Corey Harrell. Corey's one of those impressive folks that you want to reach out to and engage with in the community; rather than just sitting quietly, or just clicking "+1" or "Like", Corey goes out and does stuff, a good deal of which he's posted to his blog.
Corey posted his PFIC 2011 Review to his blog recently (Girl, Unallocated posted her thoughts and experiences, as well)...this is great stuff, for a couple of reasons. First, some conferences, like PFIC, have a number of good topics and speakers, often during the same time slot. As such, you may not be able to get to all of the presentations that you'd like to, and having someone post their "take-aways" from the presentation you missed is a good way to get a bit of insight beyond simply downloading the slide pack. Taking that a step further, not everyone can attend conferences, so this gives folks who couldn't attend an opportunity to peek behind the curtain and see what's going on. Finally, this gets the word out about next year's conference, as well, and may get someone over the hump of whether to attend or not.
DoD CyberCrime
Speaking of presentations, I got word recently that my DoD CyberCrime Conference presentation on timeline analysis is scheduled for 25 Jan 2012, from 8:30-10:20am. The last (and first) time I attended DC3 was in 2007, and unfortunately, within less than an hour of finishing my presentation, I was on an incident call, and off the next day to another major city. Ah...such was the life of an emergency responder.
My timeline analysis presentation (an example of a previous presentation can be found here) is a bit different from most of those that I find available online, in part because I don't focus on using the SANS SIFT Workstation. That's not to say that SIFT isn't a great resource...because it is. Rob's done a great job of assembling a range of open source tools, and getting them all set up and ready to use. However, the approach I tend to take is to start by attempting to engage the audience and discussing with them the reasons why we'd want to do timeline analysis in the first place, discussing concepts such as context and increased relative confidence in the data. Understanding these concepts can often be what gets folks to see the value of creating a timeline, when "...because this guy said so..." just isn't enough. From there, we walk through using the tools, and demonstrate how timelines can be used as part of your analysis process...keeping in mind that like any other tool, this is just a tool and needs to be used accordingly. Creating a timeline when it doesn't make sense to do so simply...well...doesn't make sense.
Anyway, I'm really looking forward to this opportunity, and hopefully to seeing a bunch of really good presentations, as well. Looking at the conference agenda as it stands so far, it looks like there are a couple of good social events, too, which should lead to some great networking.
MMPC Updates
The Microsoft Malware Protection Center (MMPC) recently posted regarding some new MSRT definitions, including Win32/Cridex, another bit of malware that steals online banking credentials. Cridex uses the user's Run key for persistence, and apparently stores data in the Default value of the HKCU\Software\Microsoft\Windows Media Center\ key. Figure 3 of the MMPC post includes a screen capture of what this data looks like.
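If you wanted to check for this in an acquired image rather than on a live system, it's easy enough to do against the hive files themselves; here's a bare-bones sketch using Parse::Win32Registry against an NTUSER.DAT hive (this isn't a full RegRipper plugin...it just looks for data in the default value beneath the key mentioned in the MMPC write-up, and the hive path is whatever you point it at):

# Sketch: check an NTUSER.DAT hive for data in the (Default) value of the
# Software\Microsoft\Windows Media Center key, per the MMPC write-up on
# Win32/Cridex.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: cridex_check.pl <path to NTUSER.DAT>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Not a Registry hive: $hive\n";
my $root = $reg->get_root_key;

if (my $key = $root->get_subkey('Software\Microsoft\Windows Media Center')) {
    print "Key found; LastWrite time: ".gmtime($key->get_timestamp)." UTC\n";
    foreach my $val ($key->get_list_of_values) {
        next unless ($val->get_name eq '');          # the (Default) value
        printf "(Default) value present, %d bytes of %s data\n",
               length($val->get_data), $val->get_type_as_string;
    }
}
else {
    print "Key not found.\n";
}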
Duqu
Although I haven't had an opportunity to analyze a system infected with Duqu, as always, I remain interested in what's out there, particularly from a host-based perspective. I ran across a set of open source tools for detecting Duqu files (readme here). There's also the Symantec write-up on Duqu, which is very interesting, as it defines the Duqu "load point", which is a driver loaded as a Windows service, specifically HKLM\SYSTEM\CurrentControlSet\Services\JmiNET3. Apparently, configuration information is maintained in the FILTER subkey beneath this key.
Interestingly, the load point is described as "JmiNET7.sys", but the Symantec paper goes on to say that the service name is "JmiNET3".
The Symantec paper goes on to describe the loading techniques for the payload loader, and method 3 involves a section within a DLL called ".zdata".
Finally, the Diagnostics section of the paper includes another Registry key that is supposed to indicate an infected system; specifically, HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\4\"CFID".
Anyone interested in learning more about Duqu should take a look at the Symantec paper, as well as anything else that's out there. There seem to be some interesting (and possibly unique) indicators that you can use to scan your infrastructure for infected systems; per the Symantec paper, part of the Duqu threat involves infostealers.
Tool Updates
There've been some updates to the SysInternals tools recently, in particular to AutoRuns (now at v11.1), including some new autostart locations. Check them out.
Andreas has updated his Evtx Parser tool (written in Perl), as well.
ImDisk was recently updated to version 1.5.2.
I updated my maclookup.pl WiFi geolocation script to macl.pl. The previous version of the script used Skyhook to perform lookups, in an attempt to translate a WiFi WAP MAC address (found in the Windows Registry) to a lat/long pair. I found out recently that this stopped working, so I sought out...and found...a way to update the script.
Reading
The e-Evidence.info what's new site was updated recently, and as always, there's lots of great reading material. This presentation on using open source tools for digital forensic analysis spends a good couple of slides demonstrating how to use RegRipper. David Hull has a timeline presentation available that discusses the use of SIFT v2.0 to create super timelines.
Monday, November 14, 2011
Tool Update - WiFi Geolocation
I wanted to let everyone know that I've updated the maclookup.pl Perl script which can be used for WiFi geolocation; that is, taking the MAC address for a WAP and performing a lookup in an online database to determine if there are lat/longs available for that address. If there are, then you can convert the lat/long coordinates into a Google map for visualization purposes.
A while back I'd posted the location of WiFi WAP MAC addresses within the Vista and Windows 7 Registry to ForensicArtifacts.com. This information can be used for intelligence purposes, particularly WiFi geolocation: if the WAP MAC address has been mapped and the lat/longs added to an online database, they can be looked up and plotted on a map (such as Google Maps). I've blogged about this, and covered it in my upcoming Windows Forensic Analysis 3/e. I also wrote maclookup.pl, which used a URL to query the Skyhook Wireless database to attempt to retrieve lat/longs for a particular WAP MAC address. As it turns out, that script no longer works, and I've been looking into alternatives.
One alternative appears to be WiGLE.net; there seems to be a free search functionality that requires registration to use. Registration is free, and you must agree to non-commercial use during the registration process. Fortunately, there's a Net::Wigle Perl module available, which means that you can write your own code to query WiGLE, get lat/longs, and produce a Google Map...but you have to have Wigle.net credentials to use it. I use ActiveState Perl, so installation of the module was simply a matter of extracting the Wigle.pm file to the C:\Perl\site\lib\Net directory.
So, I updated the maclookup.pl script, using the Net::Wigle module (thanks to the author of the module, as well as Adrian Crenshaw, for some assistance in using the module). I wrote a CLI Perl script, macl.pl, which performs the database lookups, and requires you to enter your Wigle.net username/password in clear text at the command line...this shouldn't be a problem, as you'll be running the script from your analysis workstation. The script takes a WAP MAC address, or a file containing MAC addresses (or both), at the prompt, and allows you to format your output (lat/longs) in a number of ways:
- tabular format
- CSV format
- Each set of lat/longs in a URL to paste into Google Maps
- A KML file that you can load into Google Earth
All output is sent to STDOUT, so all you need to do is add a redirection operator and the appropriate file name, and you're in business.
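For anyone rolling their own output, the formatting side is trivial once the lookup returns a lat/long pair; here's a minimal sketch showing the Google Maps URL format and a bare-bones KML placemark (the MAC address and coordinates below are made up for illustration...note that KML lists coordinates in longitude,latitude order):

# Sketch: format lat/long pairs (keyed by WAP MAC address) as Google Maps
# URLs and as KML placemarks. The MAC address and coordinates are examples.
use strict;

my %wap = ( "00-11-22-33-44-55" => [ 38.95, -77.45 ] );    # MAC => [ lat, long ]

foreach my $mac (keys %wap) {
    my ($lat, $lon) = @{$wap{$mac}};
    print "$mac : http://maps.google.com/maps?q=$lat,$lon\n";
}

# KML output...note the longitude,latitude,altitude ordering
print "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
print "<kml xmlns=\"http://www.opengis.net/kml/2.2\">\n <Document>\n";
foreach my $mac (keys %wap) {
    my ($lat, $lon) = @{$wap{$mac}};
    print "  <Placemark><name>$mac</name><Point><coordinates>$lon,$lat,0</coordinates></Point></Placemark>\n";
}
print " </Document>\n</kml>\n";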
The code can be downloaded here (macl.zip). The archive contains a thoroughly-documented script, a readme file, and a sample file containing WAP MAC addresses. I updated my copy of Perl2Exe in order to try and create/"compile" a Windows EXE from the script, but there's some more work that needs to be done with respect to modules that "can't be found".
Getting WAP MAC Addresses
So, the big question is, where do you get the WAP MAC addresses? Well, if you're using RegRipper, the networklist.pl plugin will retrieve the information for you. For Windows XP systems, you'll want to use the ssid.pl plugin.
Addendum: On Windows 7 systems, information about wireless LANs to which the system has been connected may be found in the Microsoft-Windows-WLAN-AutoConfig/Operational Event Log (event IDs vary based on the particular Task Category).
Important Notes
Once again, there are a couple of important things to remember when running the macl.pl script. First, you must have Perl and the Net::Wigle Perl module installed. Neither is difficult to obtain or install. Second, you MUST have a Wigle.net account. Again, this is not difficult to obtain. The readme file in the provided archive provides simple instructions, as well.
Resources
Adrian wrote a tool called IGiGLE.exe (using AutoIT) that allows you to search the Wigle.net database (you have to have a username and password) based on ZIP code, lat/longs, etc.
Here is the GeoMena.org lookup page.
Here is a review of some location service APIs. I had no idea there were that many.
Thursday, November 10, 2011
PFIC 2011
I just returned from PFIC 2011, and I thought I'd share my experiences. First, let me echo the comments of a couple of the attendees that this is one of the best conferences to attend if you're in the DFIR field.
What I Liked
Meeting people. I know what you're thinking..."you're an ISTJ...you don't like people." That isn't the case at all. I really enjoyed meeting and engaging with a lot of folks at the conference...I won't name them all here, as many don't have an open online presence, and I want to respect their privacy. Either way, it's always great to put a face to a name or online presence, and to meet new people, especially fellow practitioners.
The content. I didn't get to attend many presentations (unfortunately), but those that I did get to attend were real eye-openers, in a number of ways. I didn't get to sit in on anything the first day (travel, etc.), but on Tuesday, I attended Ryan's presentation on how hiding indications of activity leaves artifacts, and Amber's mobile devices presentation. Ryan's presentation was interesting due to the content, but also due to the reactions of many of the attendees...I got the sense from looking around the room (even from my vantage point) that for some, Ryan's presentation was immediately useful...which is a plus in my book.
Amber's presentation was interesting to me, as I really haven't had an opportunity to this point to work with mobile devices. Who knew that an old microwave oven (with the cord cut) was an acceptable storage facility for mobile devices? As an electrical engineer, I know that a microwave oven is a Faraday cage, but like I said...I haven't had a chance to work with mobile devices. Amber also brought up some very interesting points about clones, and even demonstrated how a device might look like an iPhone, but not actually be one, requiring careful observation and critical thinking.
Another great thing about the content of presentations is that there were enough presentations along a similar vein that you could refer back to someone else's presentation in order to add relevance to what you were talking about. I referred to Ryan Washington's presentation several times, as well as to an earlier presentation regarding the NTFS file system. In a lot of ways, this really worked well to tie several presentations together.
After-hours event. I attended the PFIC After Dark event this year...The Spur bar had been shut down just for the event, and we had shuttle transportation between the hotel and the bar. It was a great time to meet up with folks you hadn't had a chance to talk to, or to just talk about things that you might not have had a chance to talk about before. I greatly appreciated the opportunity to talk to a number of folks...even those who took the opportunity to buy me a Corona!
My room. I got in to the venue, and found that I had a complimentary upgrade to another room. Wow! The original room was awesome (or would have been), but then I got a room right by the slopes where they were creating snow for the upcoming ski season. I really like how ski resorts get business in the off-season through conferences and other events...it's a great use of facilities and brings a good deal of business to the local area.
What I'd Do Differently
This section is really a combination of what I'd do differently, as well as what I think, based on my experience, would make the event a better experience overall...
Adjust my travel. I flew in on the Monday of the conference, got in, got cleaned up from my time in airports, grabbed a bite to eat, and then gave my first presentation. Next year, I think I'd like to see about getting to the conference site a bit earlier, and maybe being able to participate in some more things. For example, I was invited to speak on the panel that took place on Wed morning, but my flight out left about an hour before the panel started.
Encourage more tweeting. Social media is a great way to get the word out about upcoming events, but I've also found that live tweeting during the event is a great way to generate buzz and encourage participation. I did a search this morning for "#PFIC" and turned up only 20 tweets, some in Spanish. I know that Mike Murr wasn't at this...
Contests. In addition to the tweeting, Amber mentioned an idea for next year...a forensic challenge of some kind, complete with each team delivering their findings and being judged/graded. I think that would encourage some great participation. I think that these sorts of things attract attention to the blog.
Presentations. One thing I saw and heard others talk about was the fact that there were several good presentations going on at the same time. For example, I had wanted to attend Chad's presentation, but couldn't because I was presenting. On Tues morning, there were two presentations on what appeared to be similar topics that I wanted to attend, and I chose to attend Ryan's.
On the topic of presentations, as the "I" in the conference name stands for "innovation", I think next year would be a fantastic time to hear from the Carbon Black guys.
My Presentations
I gave two presentations this year...thanks again to Amber and Stephanie for allowing me to do so. As the presentation materials don't really convey what was said in the presentation itself, I wanted to share some of my thinking in developing the presentations, as well as the gist of what was said...
Scanning for Low-hanging Fruit: This presentation centered on the forensic scanner I've been working on, both the concept (as in, why would you want to do this...) and the actual implementation (still very proof-of-concept at this point). The presentation even included a demo, which actually worked pretty well.
The idea of the presentation, which may not be apparent from the title, was that once we've found something that we've never seen before (either a new variant of something, or an entirely new thing...), that becomes low-hanging fruit that we can check for each time via automation. The idea would then be to free the analyst to do analysis, rather than having the analyst spend time performing a lot of manual checks, and possibly forgetting some of them in the process. As I mentioned, the demo went over very well, but there's still work to be done with respect to the overall project. Up until now, I haven't had a great deal of opportunity to really develop this project, and I hope to change that in the future.
Introduction to Windows Forensics: When developing this presentation, I really had to think about what constitutes an introduction to Windows forensics. What I decided on...and this seemed to work really well, based on the reactions of the attendees...was to assume that most everyone in the presentation already understood the basics of forensic analysis, and we'd progress on to the forensic analysis of Windows systems. The distinction at that point was that the introduction included some discussion of analysis concepts, and then went into discussing analysis of a Windows system, based on the premise that we'd be analyzing a complex system. So we started out with some concepts, and went into discussing not just the forensic potential of various artifacts and sources (the Registry, Event Log, Prefetch files, etc.), but also the value of considering multiple sources together in order to develop context and a greater relative confidence in the data itself.
Overall, I think that this presentation went well, even though I went really fast (without any RedBull, I should mention...) and finished almost exactly on time. I spoke to Stephanie after the presentation, and hope to come back next year and give a longer, hands-on version of this presentation. I think a bootcamp or lab would be great, as I really want to convey the information in this presentation in a much more "use this right away" format. Also, Windows Forensic Analysis 3/e is scheduled to be published early in 2012, and will provide a great foundation for the lab.
Slide Decks
I put the PDF versions of my presentations (in a zipped archive) up on Google Docs...you can find them here. I've also shared the malware detection checklist I mentioned at the conference; keep in mind that this is a living document, and I'd greatly appreciate feedback.
Links to attendees' blogs:
Girl, Unallocated - It was great to put a face to a name, and hear how some folks name their blogs...
Journey into IR - It was great to finally meet Corey in person...
ForensicMethods - I'm looking forward to seeing Chad in Atlanta at DC3.
What I Liked
Meeting people. I know what you're thinking..."you're an ISTJ...you don't like people." That isn't the case at all. I really enjoyed meeting and engaging with a lot of folks at the conference...I won't name them all here, as many don't have an open online presence, and I want to respect their privacy. Either way, it's always great to put a face to a name or online presence, and to meet new people, especially fellow practitioners.
The content. I didn't get to attend many presentations (unfortunately), but those that I did get to attend were real eye-openers, in a number of ways. I didn't get to sit in on anything the first day (travel, etc.), but on Tuesday, I attended Ryan's presentation on how hiding indications of activity leaves artifacts, and Amber's mobile devices presentation. Ryan's presentation was interesting due to the content, but also due to the reactions of many of the attendees...I got the sense from looking around the room (even from my vantage point) that for some, Ryan's presentation was immediately useful...which is a plus in my book.
Amber's presentation was interesting to me, as I really haven't had an opportunity to this point to work with mobile devices. Who knew that an old microwave oven (with the cord cut) was an acceptable storage facility for mobile devices? As an electrical engineer, I know that a microwave oven is a Faraday cage, but like I said...I haven't had a chance to work with mobile devices. Amber also brought up some very interesting points about clones, and even demonstrated how a device might look like an iPhone, but not actually be one, requiring careful observation and critical thinking.
Another great thing about the content of presentations is that there were enough presentations along a similar vein that you could refer back to someone else's presentation in order to add relevance to what you were talking about. I referred to Ryan Washington's presentation several times, as well as to an earlier presentation regarding the NTFS file system. In a lot of ways, this really worked well to tie several presentations together.
After-hours event. I attended the PFIC After Dark even this year...The Spur bar had been shut down just for the event, and we had shuttle transportation between the hotel and bar. It was a great time to meet up with folks you hadn't had a chance to talk to, or to just talk about things that you might not have had a chance to talk about before. I greatly appreciated the opportunity to talk to a number of folks...even those who took the opportunity to buy me a Corona, which I greatly appreciated!
My room. I got in to the venue, and found that I had a complimentary upgrade to another room. Wow! The original room was awesome (or would have been), but then I got a room right by the slopes where they were creating snow for the upcoming ski season. I really like how ski resorts get business in the off-season through conferences and other events...it's a great use of facilities and brings a good deal of business to the local area.
What I'd Do Differently
This section is really a combination of what I'd do differently, as well as what I think, based on my experience, would make the event a better experience overall...
Adjust my travel. I flew in on the Monday of the conference, got in, got cleaned up from my time in airports, grabbed a bite to eat, and then gave my first presentation. Next year, I think I'd like to see about getting to the conference site a bit earlier, and maybe being able to participate in some more things. For example, I was invited to speak on the panel that took place on Wed morning, but my flight out left about an hour before the panel started.
Encourage more tweeting. Social media is a great way to get the word out about upcoming events, but I've also found that live tweeting during the event is also a great way to generate buzz and encourage participation. I did a search this morning for "#PFIC" and turned up only 20 tweets, some in Spanish. I know that Mike Murr wasn't at this
Contests. In addition to the tweeting, Amber mentioned an idea for next year...a forensic challenge of some kind, complete with each team delivering their findings and being judged/graded. I think that would encourage some great participation, and these sorts of things tend to attract attention to the conference.
Presentations. One thing I saw and heard others talk about was the fact that there were several good presentations going on at the same time. For example, I had wanted to attend Chad's presentation, but couldn't because I was presenting. On Tues morning, there were two presentations on what appeared to be similar topics that I wanted to attend, and I chose to attend Ryan's.
On the topic of presentations, as the "I" in the conference name stands for "innovation", I think next year would be a fantastic time to hear from the Carbon Black guys.
My Presentations
I gave two presentations this year...thanks again to Amber and Stephanie for allowing me to do so. As the presentation materials don't really convey what was said in the presentation itself, I wanted to share some of my thinking in developing the presentations, as well as the gist of what was said...
Scanning for Low-hanging Fruit: This presentation centered on the forensic scanner I've been working on, both the concept (as in, why would you want to do this...) and the actual implementation (still very proof-of-concept at this point). The presentation even included a demo, which actually worked pretty well.
The idea of the presentation, which may not be apparent from the title, was that once we've found something that we've never seen before (either a new variant of something, or an entirely new thing...), that becomes low-hanging fruit that we can check for each time via automation. The idea would then be to free the analyst to do analysis, rather than having the analyst spend time performing a lot of manual checks, and possibly forgetting some of them in the process. As I mentioned, the demo went over very well, but there's still work to be done with respect to the overall project. Up until now, I haven't had a great deal of opportunity to really develop this project, and I hope to change that in the future.
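To make the automation idea a bit more concrete, here's a minimal sketch of a plugin-driven scanner, written in Python purely for illustration. The actual scanner isn't built this way...the scanner_plugins package name and the check() interface are things I'm making up here...but the shape of the idea is the same: every new artifact you find becomes one more plugin, and nobody has to remember to run that check by hand again.

# Minimal, hypothetical sketch of a plugin-driven "low-hanging fruit" scanner.
import importlib
import pkgutil

def load_plugins(package_name="scanner_plugins"):
    # Each module in the package exposes a check(mount_point) function
    # that returns a list of findings (possibly empty).
    package = importlib.import_module(package_name)
    plugins = []
    for _, name, _ in pkgutil.iter_modules(package.__path__):
        plugins.append(importlib.import_module(package_name + "." + name))
    return plugins

def scan(mount_point):
    report = {}
    for plugin in load_plugins():
        # Each new "thing we've seen before" becomes one more plugin, so the
        # analyst never has to remember to run the check manually.
        report[plugin.__name__] = plugin.check(mount_point)
    return report

if __name__ == "__main__":
    import json, sys
    print(json.dumps(scan(sys.argv[1]), indent=2))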
Introduction to Windows Forensics: When developing this presentation, I really had to think about what constitutes an introduction to Windows forensics. What I decided on...and this seemed to work really well, based on the reactions of the attendees...was to assume that most everyone in the presentation already understood the basics of forensic analysis, and we'd progress on to the forensic analysis of Windows systems. The distinction at that point was that the introduction included some discussion of analysis concepts, and then went into discussing analysis of a Windows system, based on the premise that we'd be analyzing a complex system. So we started out with some concepts, and went into discussing not just the forensic potential of various artifacts and sources (the Registry, Event Log, Prefetch files, etc.), but also the value of considering multiple sources together in order to develop context and a greater relative confidence in the data itself.
Overall, I think that this presentation went well, even though I went really fast (without any RedBull, I should mention...) and finished almost exactly on time. I spoke to Stephanie after the presentation, and hope to come back next year and give a longer, hands-on version of this presentation. I think a bootcamp or lab would be great, as I really want to convey the information in this presentation in a much more "use this right away" format. Also, Windows Forensic Analysis 3/e is scheduled to be published early in 2012, and will provide a great foundation for the lab.
Slide Decks
I put the PDF versions of my presentations (in a zipped archive) up on Google Docs...you can find them here. I've also shared the malware detection checklist I mentioned at the conference; keep in mind that this is a living document, and I'd greatly appreciate feedback.
Links to attendees' blogs:
Girl, Unallocated - It was great to put a face to a name, and hear how some folks name their blogs...
Journey into IR - It was great to finally meet Corey in person...
ForensicMethods - I'm looking forward to seeing Chad in Atlanta at DC3.
Friday, November 04, 2011
DF Analysis Lifecycle
In an effort to spur some interest within the DFIR community (and specifically with the NoVA Forensics Meetup group) in engaging and sharing information, I thought it would be a good idea to point out "forensic challenges" or exercises that are available online, as well as to perhaps setup and conduct some exercises of our (the meetup group) own.
As I was thinking about how to do this, one thing occurred to me...whenever I've done something like this as part of a training exercise or engagement, many times the first thing folks say is that they don't know how to get started. When I've conducted training exercises, they've usually been for mixed audiences..."mixed" in the sense that the attendees often aren't all just DF analysts/investigators; some do DF work part-time, some do variations of DF work (such as "online forensics") and others are SOC monitors and may not really do DF analysis.
As such, what I wanted to do was lay out the way I approach analysis engagements, and make that process available for others to read and comment on; I thought that would be a good way to get started on some of the analysis exercises that we can engage in going forward. I've included some additional resources (by no means is this a complete list) at the end of this blog post.
Getting Started
The most common scenario I've faced is receiving either a hard drive or an image for analysis. In many cases, it's been more than one, but if you know how to conduct the analysis of one image, then scaling it to multiple images isn't all that difficult. Also, acquiring an image is either one of those things that you can gloss over in a short blog post, or you have to write an entire blog post (or series of posts) on how to do it...so let's just start our examination based on the fact that we received an image.
Documentation
Documentation is the key to any analysis. It's also the hardest thing to get technical folks to do. For whatever reason, getting technical folks to document what they're doing is like herding cats down a beach. If you don't believe me...try it. Why it's so hard is up for discussion...but the fact of the matter is that proper documentation is an incredibly useful tool, and when you do it, you'll find that it will actually allow you to do more of the cool, sexy analysis stuff that folks like to do.
Document all the things!
Most often when we talk about documentation during analysis, we're referring to case notes, and as such, we need to document pretty much everything (please excuse the gratuitous meme) about the case that we're working on. This includes when we start, what we start with, the tools and processes/procedures we use, our findings, etc.
One of the documentation pitfalls that a lot of folks run into is that they start their case notes on a "piece of paper", and by the end of the engagement, those notes never quite make it into an electronic document. It's best to get used to (and start out) documenting your analysis in electronic format, particularly so your notes can be stored and shared. One means of doing so is to use Forensic CaseNotes from QCC. You can modify the available tabs to meet your needs. However, you can just as easily document what you're doing in MS Word; you can add bold and italics to the document to indicate headers, and you can even add images and tables (or embed Visio diagrams) to the document, if you need to.
The reasons why we document what we do are (1) you may get "hit by a bus" and another analyst may need to pick up your work, and (2) you may need to revisit your analysis (you may be asked questions about it) 6 months or a year later. I know, I know...these examples are used all the time and I know folks are tired of hearing them...but guess what? We use these examples because they actually happen. No, I don't know of an analyst who was actually "hit by a bus", but I do know of several instances where an analyst was on vacation, in surgery, or had left the organization, and the analysis had to be turned over to someone else. I also know of several instances where a year or more after the report was delivered to the customer, questions were posed...this can happen when you're engaged by LE and the defense has a question, or when you're engaged by an organization, and their compliance and regulatory bodies have additional questions. We often don't think much about these scenarios, but when they do occur, we very often find ourselves wishing we'd kept better notes.
So, one of the questions I hear is, "...to what standard should I keep case notes?" Well, consider the two above scenarios, and keep your case notes such that (1) they can be turned over to someone else or (2) you can come back a year later and clearly see what you did. I mean, honestly...it really isn't that hard. For example, I start my case notes with basic case information...customer point of contact (PoC), exhibits/items I received, and most importantly, the goals of my exam. I put the goals right there in front of me, and have them listed clearly and concisely in their own section so that I can always see them, and refer back to them. When I document my analysis, I do so by including the tool or process that I used, and I include the version of the tool I used. I've found this to be critical, as tools tend to get updated. Look at EnCase, ProDiscover, or Mark Woan's JumpLister. If you used a specific version of a tool, and a year later that tool had been updated (perhaps even several times), then you'd at least have an explanation as to why you saw the data that you did.
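If it helps, even a few lines of script can enforce the habit. Here's a rough sketch in Python...the file name, fields, and example values are just illustrative, not any kind of standard:

# Append a timestamped entry to the case notes for each analysis step,
# including the tool and version used. All names/values here are examples.
import datetime

NOTES_FILE = "case_notes.txt"

def log_step(tool, version, action, finding):
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(NOTES_FILE, "a", encoding="utf-8") as notes:
        notes.write(f"{stamp} | {tool} {version} | {action} | {finding}\n")

# Recording the version is the point...a year from now it may explain
# why the output looked the way it did.
log_step("JumpLister", "1.x (illustrative)", "parsed automatic jump lists",
         "no pertinent findings")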
Case notes should be clear and concise, and not include the complete output from every tool that you use or run. You can, however, include pertinent excerpts from tool output, particularly if that output leads your examination in a particular direction. By contrast, dumping the entire output of a tool into your case notes and including a note that "only 3 of the last 4 lines in the output are important" is far from clear or concise. I would consider including information about why something is important or significant to your examination, and I've even gone so far as to include references, such as links to Microsoft KnowledgeBase articles, particularly if those references support my reasoning and conclusions.
If you keep your case notes in a clear and concise manner, then the report almost writes itself.
Now, I will say that I have heard arguments against keeping case notes; in particular, that they're discoverable. Some folks have said that because case notes are discoverable, the defense could get ahold of them and make the examiner's life difficult, at best. And yet, for all of these comments, no one has ever elaborated on this beyond the "maybe" and the "possibly". To this day, I do not understand why an analyst, as a matter of course, would NOT keep case notes, outside of being explicitly instructed not to keep them by whomever they're working for.
Checklists
Often, we use tools and scripts in our analysis process in order to add some level of automation, particularly when the tasks are repetitive. A way to expand that is to use checklists, particularly for involved sets of tasks. I use a malware detection checklist that I put together based on a good deal of work that I'd done, and I pull out a copy of that checklist whenever I have an exam that involves attempting to locate malware within an acquired image. The checklist serves as documentation...in my case notes, I refer to the checklist, and I keep a completed copy of the checklist in the case directory along with my case notes. The checklist allows me to keep track of the steps, as well as the tools (and versions) I used, any significant findings, as well as any notes or justification I may have for not completing a step. For example, I won't run a scan for NTFS ADSs if the file system of the image is FAT.
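One way to keep that kind of checklist in a form you can drop straight into the case directory is as structured data. Here's a sketch in Python...the steps, tools, and field names below are examples, not my actual checklist:

# Save a completed copy of the checklist alongside the case notes.
# Steps, tools, and field names below are illustrative only.
import json

checklist = [
    {"step": "AV scan of mounted image", "tool": "ClamAV", "version": "0.97",
     "completed": True, "findings": "no detections"},
    {"step": "NTFS ADS scan", "tool": "ads_check (hypothetical)", "version": "n/a",
     "completed": False, "findings": "",
     "justification": "file system is FAT; ADSs not applicable"},
]

with open("malware_checklist_completed.json", "w", encoding="utf-8") as out:
    json.dump(checklist, out, indent=2)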
The great thing about using a checklist is that it's a living document...as I learn and find new things, I can add them to the checklist. It also allows me to complete the analysis steps more thoroughly and in a timely manner. This, in turn, leaves me more time for things like conducting deep(er) analysis. Checklists and procedures can also be codified into a forensic scanner, allowing the "low hanging fruit" and artifacts that you've previously found to be searched for quickly, thereby allowing you to focus on further analysis. If the scanner is designed to keep a log of its activity, then you've got a good deal of documentation right there.
Remember that when using a checklist or just conducting your analysis, no findings can be just as important as an interesting finding. Let's say you follow all 10 steps of your malware detection checklist (10 being a purely arbitrary number, used only as an example), and only the ADS detection step finds anything of interest, but it turns out to be nothing. If you choose to not document the steps that had no significant findings, what does that tell another analyst who picks up your case, or what does it tell the customer who reads your report? Not much. In fact, it sounds like all you did was run a scan for ADSs...and the customer is paying how much for that report? Doing this makes whoever reads your report think that you weren't very thorough, when you were, in fact, extremely thorough.
One final note about checklists and procedures...they're a good place to start, but they're by no means the be-all-end-all. They're tools...use them as such. Procedures and checklists often mean the difference between conducting "Registry analysis" and getting it knocked out, and billing a customer for 16 hrs of "Registry analysis", with no discernible findings or results. If you run through your checklist and find something odd or interesting (for example, no findings), use that as a launching point from which to continue your exam.
Start From The End
This is advice that I've given to a number of folks, and I often get a look like I just sprouted a third eye in the middle of my forehead. What do you mean, "start at the end"? Well, this goes back to the military "backwards planning" concept...determine where you need to be at the end of the engagement (clear, concise report delivered to a happy customer), and plan backwards based on where you are now (sitting at your desk with a drive image to analyze). In other words, rather than sitting down with a blank page, start with a report template (you know you're going to have to deliver a report...) and work from there.
Very often when I've managed engagements, I've started filling in the report template while the analysts were getting organized, or even while they were still on-site. I'll get the executive summary knocked out, putting the background and goals (the exact same goals that the analyst has in their case notes) into the report, and replicating that information in the body of the report. That leaves the analyst to add the exhibits (what was analyzed) and findings to the report, without having to worry about all of the other "stuff", and allows them to focus on the cool part of the engagement...the analysis. Using a report template (the same one every time), they know what needs to be included where, and how to go about writing their findings (i.e., clearly and concisely). As mentioned previously, the analysis steps and findings are often taken directly from the case notes.
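Here's a sketch of what "starting from the end" can look like in practice, in Python; the template wording is invented for illustration and isn't an actual report template:

# "Backwards planning" sketch: generate the report skeleton (with the same
# goals that go into the case notes) before analysis begins.
REPORT_TEMPLATE = """Executive Summary
Background: {background}
Goals: {goals}
Conclusions: [TO BE COMPLETED]

Background
{background}

Exhibits
[TO BE COMPLETED BY ANALYST]

Goals
{goals}

Analysis and Findings
[TO BE COMPLETED BY ANALYST]

Conclusions
[TO BE COMPLETED - must match the Executive Summary]
"""

def start_report(background, goals, path="draft_report.txt"):
    with open(path, "w", encoding="utf-8") as report:
        report.write(REPORT_TEMPLATE.format(background=background, goals=goals))

start_report(
    background="Web server taken offline on 1 Nov following AV alerts.",
    goals="Determine whether malware is present and identify the initial infection vector.",
)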
What's the plan, Stan?
Having an analysis plan to start with can often be key to your analysis. Have you ever seen someone start their analysis by loading the image into an analysis application and start indexing the entire image? This activity can take a great deal of time, and we've all seen even commercial applications crash during this process. If you're going to index an entire image, why are you doing so? In order to conduct keyword searches? Okay...what's your list of keywords?
My point is to think critically about what you're doing, and how you're going to go about doing it. Are you indexing an entire image because doing so is pertinent to your analysis, or "because that's what we've always done"? If it's pertinent, that's great...but consider either extracting data from the image or making an additional working copy of the image before kicking off the indexing process. That way, you can be doing other analysis during the indexing process. Also, don't waste time doing stuff that you don't need to be doing.
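As a concrete example of working in parallel, here's a sketch of pulling designated files out of a mounted working copy before kicking off the long-running job. It's Python, and the mount point and target list are assumptions...adjust them to your own process:

# Pull designated files (Registry hives, Event Logs, Prefetch) out of a
# mounted working copy of the image *before* starting a long-running
# indexing or AV job, so other analysis can proceed in parallel.
import shutil
from pathlib import Path

MOUNT = Path("/mnt/image_copy")      # read-only mount of the working copy
EXTRACT_TO = Path("./extracted")

TARGETS = [
    "Windows/System32/config/SOFTWARE",
    "Windows/System32/config/SYSTEM",
    "Windows/System32/winevt/Logs",
    "Windows/Prefetch",
]

for rel in TARGETS:
    src = MOUNT / rel
    dst = EXTRACT_TO / rel
    if src.is_dir():
        shutil.copytree(src, dst, dirs_exist_ok=True)
    elif src.is_file():
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)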
Report Writing
No one likes to write reports. However, if we don't write reports, how do we get paid? How do we communicate our findings to others, such as the customer, or the prosecutor, or to anyone else? Writing reports should not be viewed as a necessary evil, but instead as a required skill set.
When writing your report, as with your case notes, be clear and concise. There's no need to be flowery and verbose in your language. Remember, you're writing a report that takes a bunch of technical information and very often needs to translate that into something a non-technical person needs to understand in order to make a business or legal decision. It's not only harder to make up new verbiage for different sections of your report, it also makes the finished product harder to read and understand.
When walking through the analysis or findings portion of the report (leading up to my conclusions), I've found that it's best to use the same cadence and structure in my writing. It not only makes it easier to write, but it also makes it easier to read. For example, if I'm analyzing an image in order to locate suspected malware, in each section, I'll list what I did ("ran AV scan"), which tools I used ("AV scanner blah, version X"), and what I found ("no significant/pertinent findings", or "Troj/Win32.Blah found"). I've found that when trying to convey technical information to a non-technical audience, using the same cadence and structure over and over often leaves the reader remembering the aspects of the report that you want them to remember. In particular, you want to convey that you did a thorough job in your analysis. In contrast, having each section worded in a significantly different manner not only makes it harder for me to write (I have to make new stuff up for each section), but the customer just ends up confused, and remembering only those things that were different.
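A trivial way to enforce that cadence is to generate each findings section from the same few fields. Here's a sketch in Python; the wording and the example steps are illustrative:

# Every findings section states what was done, which tool/version was used,
# and what was found. Structure and wording here are examples only.
SECTION = "{action}\nTool: {tool}, version {version}\nFindings: {findings}\n"

steps = [
    {"action": "Scanned the mounted image with an AV scanner",
     "tool": "AV scanner blah", "version": "X", "findings": "no pertinent findings"},
    {"action": "Scanned the file system for NTFS alternate data streams",
     "tool": "ADS scanner", "version": "X", "findings": "Troj/Win32.Blah found in one ADS"},
]

for step in steps:
    print(SECTION.format(**step))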
Be professional in your reporting. You don't have to be verbose and use $5 words; in fact, doing so can often lead to confusion because you've used a big word incorrectly. Have someone review your report, and for goodness sake, run spell check before you send it in for review! If you run spell check and see a bunch of words underlined with red squiggly lines, or phrases underlined with green squiggly lines, address them. Get the report in for review early enough for someone to take a good look at it, and don't leave it to the last minute. Finally, if there's something that needs to be addressed in the report, don't tell your reviewer, "fine, if you don't like it, fix it yourself." Constructive criticism is useful and helps us all get better at what we do, but the petulant "whatever...fix it yourself" attitude doesn't go over well.
The report structure is simple...start with an executive summary (ExSumm). This is exactly as described...it's a summary for executives. It's not a place for you to show off how many really cool big words you know. Make it simple and clear...provide some background info on the incident, the goals of the analysis (as decided upon with the customer) and your conclusions. Remember your audience...someone non-technical needs a clear and concise one-pager (no more than 2) with the information that they can use to make critical business decisions. Were they compromised? Yes or no? There's no need to pontificate on how easily they had been compromised...just be clear about it. "A successful SQL injection attack led to the exposure of 10K records."
The body of the report should include background on the incident (with a bit more detail than the ExSumm), followed by the exhibits (what was analyzed), and the goals of the analysis. From there, provide information on the analysis you conducted, your findings, and your conclusions. The goals and conclusions from the body of the report should be identical...literally, copy-and-paste...from the ExSumm.
Finally, many reports include some modicum of recommendations...sometimes this is appropriate, other times it isn't. For example, if you're looking at 1 or 10 images, does that really give you an overall view into the infrastructure as a whole? Just because MRT isn't up-to-date on 5 systems, does that mean that the organization needs to develop and implement a patch management infrastructure? How do you know that they haven't already? This is the part of the report that is usually up for discussion, as to whether or not it's included.
Summary
So, my intention with this post has been to illustrate an engagement lifecycle, and to give an overview of what an engagement can look like, cradle-to-grave. This has by no means been intended to be THE way of doing things...rather, this is a way of conducting an engagement that has been useful to me, and I've found to be successful.
Resources
Chris Pogue's "Sniper Forensics: One Shot, One Kill" presentation from DefCon18
Chris Pogue's "Sniper Forensics v.3" from the most recent SecTor (scroll down)
TrustWave SpiderLabs "Sniper Forensics" blog posts (five posts in the series)
Girl, Unallocated On Writing
UnChained Forensics Lessons Learned
Brad Garnett's tips on Report Writing (SANS)
Computer Forensics Processing Checklist
Useful Analysis Tidbits
Corey's blog posts on exploit artifacts
Thursday, November 03, 2011
Stuffy Updates
Meetup
We had about 15 or so folks show up for last night's NoVA Forensics Meetup. I gave a presentation on malware characteristics, and the slides are posted to the NoVA4n6Meetup Yahoo group, if you want to take a look. Sorry about posting them the day of the meetup...I'm trying to get slides posted beforehand so that folks can get them and have them available.
One of the things I'd like to develop is interest in the meetup, and get more folks interested in showing up on a regular basis, because this really helps us develop a sense of community. Now, one of the things I've heard from folks is that the location isn't good for them, and I understand that...not everyone can make it. However, I do think that we likely have enough folks from the local area to come by on a regular basis, as well as folks who are willing to attend when they can. The alternative to the location issue is that instead of saying that the drive is too far, you can start a meetup in your local area. Seriously. The idea is to develop a sense of community, which we don't get with "...I can't make it to the meetup because it's too far..."; starting a local meetup grows the community, rather than dividing it.
I've also received some comments regarding what folks are looking for with respect to content. I like some of the ideas that have been brought up, such as having something a bit more interactive. However, I'd also like to see more of a community approach to this sort of thing...one person can't be expected to do everything; that's not "community". I really think that there are some good ideas out there, and if we have more folks interested in attending the meetups and actually showing up, then we can get the folks who want to know more about something in the same room as others who know more about that subject and may be willing to give a presentation.
Next month (7 Dec), we're going to be blessed with a presentation on mobile forensics from Sam Brothers. In order to bring more folks in, Cory Altheide suggested that we have a Google Plus (G+) hangout, so I'm going to look at bringing a laptop for that purpose, and also see about live tweeting during the presentation (and getting others to do so).
Finally, we confirmed that adult beverages are permitted at the ReverseSpace site, as long as everyone polices their containers. There didn't seem to be any interest this month in meeting for a pre-meetup warm-up at a nearby pub, so maybe for next month's meetup, some folks would consider bringing something to share. I know from experience what Sam likes, so maybe we can make the event just a bit more entertaining for everyone.
A couple of things to think about regarding the future of the meetups and the NoVA forensics community. First, I've talked to the ReverseSpace folks about the possibility of holding a mini forensics-con at their facility.
Second, what would be the interest in forensic challenges? We could use online facilities and resources to post not only the challenges, but also the results, and folks could then get together to discuss tools and techniques used. The great thing about having these available online is that folks who may not be able to make it to the meetups can also participate.
Finally, the last thing I wanted to bring up regarding the meetups is this...what are some thoughts folks have regarding available online resources for the meetups? I set up the Yahoo group, and I post meetup reminders to that group, as well as the Win4n6 group, to my blog, LinkedIn acct, and Twitter. After the Oct meetup, two LinkedIn groups were set up for the meetup. Even so, I just saw a tweet today where someone said that they just found out about the meetups via my blog. I'd like to hear some thoughts on how to get the word out, as well as get things posted (slide decks, challenges, reminders, announcements) and available in a way that folks will actually get the information. What I don't want to do is have so many facilities that no one knows what to use or where to go.
Memory Analysis
Melissa's got another post up on the SketchyMoose blog regarding Using Volatility: Suspicious Process. She's posted a couple of videos that she put together that are well worth watching. You may need to turn up the volume a bit (I did)...if you want to view the videos in a larger window, check out the SketchyMoose channel on YouTube.
Something I like about Melissa's post is that she's included reference material at the end of the post, linking to further information on some of what she discussed in the videos.
While we're on the topic of memory analysis, Greg Hoglund posted to the Fast Horizon blog; his topic was Detecting APT Attackers in Memory with Digital DNA. Yes, the post is vendor-specific, but it does provide some insight into what you can expect to see from these types of attackers.
Attack Vectors/Intel Gathering
When investigating an incident or issue, analysts are often asked to determine how the bad guy got in or how the infection occurred. Greg's post (mentioned above) refers to a threat that often starts with a spear phishing attack, which is based on open source intelligence gathering. The folks over at Open Source Research have posted on real-world pen-testing attack vectors, and believe me, it really is that easy. Back in '98-'99 when I was doing this kind of work myself, we'd use open source intel collection (which is a fancy way of saying we used Lycos and DogPile...the pre-Google stuff...searches) to start collecting information.
I think that if folks really started to look around, they'd be pretty surprised at what's out there. Starting at the company executive management site will give you some names to start with, and from there you can use that information and the company name itself to search for things like speaker bios, social networking profiles, etc. As suggested in one of the comments to the post, you can also check for metadata in documents available via the corporate site (also consider checking P2P networking infrastructures...you might be surprised at what you find...).
Documents aren't the only sources of information...keep in mind that images also contain metadata.
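For example, here's a quick way to dump whatever EXIF metadata an image carries, using Python and the Pillow library. Older versions of Pillow expose the same data via _getexif() instead of getexif(), and the file name is just an example:

# Quick look at the metadata embedded in an image pulled from a public web site.
from PIL import Image, ExifTags

def dump_exif(path):
    img = Image.open(path)
    exif = img.getexif()
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

dump_exif("staff_photo.jpg")   # file name is just an example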
Intel Collection During Analysis
Funny how writing this post is progressing this morning...one section of the post leads to another. As I mentioned, during analysis we're often asked to determine how a system became compromised in the first place..."how did it happen?", where "it" is often a malware infection or someone having obtained unauthorized access to the system. However, there are often times when it is important to gather intelligence during analysis, such as determining the user's movements and activities. One way of doing this is to see which WAPs the system (if it's a laptop) had connected to...another way to determine a user's movements is through smart phone backups. I recently posted some tools to the FOSS page for this blog that might help with that.
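As a sketch of the WAP idea, here's one way to list the network profiles recorded in a SOFTWARE hive exported from a Vista/Win7 system, using Python and the python-registry module (RegRipper's networklist plugin pulls similar data):

# List the wireless/network profiles a system has connected to, from an
# exported SOFTWARE hive. This is a sketch, not a complete parser.
from Registry import Registry

def list_network_profiles(software_hive_path):
    reg = Registry.Registry(software_hive_path)
    key = reg.open("Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles")
    for profile in key.subkeys():
        name = profile.value("ProfileName").value()
        desc = profile.value("Description").value()
        # DateLastConnected is a binary SYSTEMTIME structure and needs
        # additional decoding, so it is skipped in this sketch.
        print(f"{name} ({desc})")

list_network_profiles("SOFTWARE")   # path to the exported hive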
In addition, you can use Registry analysis to determine if a smart phone had been connected to the system, even if a management (iPhone and iTunes, BB and the BB Desktop Manager) application hadn't been used. From there you may find pictures or videos that are named based on the convention used by that device, and still contain metadata that points to such a device. In cases such as this, the "intelligence" may be that the individual had access to a device that had not been confiscated or collected during the execution of a search warrant.
OpenIOC
I recently commented on Mandiant's OpenIOC site, and what's available there. One of the things that they're sharing via this site is example IOCs, such as this one. There are a couple of things that I like about this sharing...one is that the author of the IOC added some excellent comments that give insight into what they found. I know a lot of folks out there in the DFIR community like that sort of thing...they like to see what other analysts saw, how they found it, tools and techniques used, etc. So this is a great resource for that sort of thing.
The IOCs are also clear enough that I can write a plugin for my forensic scanner that looks for the same thing. The scanner is intended for acquired images and systems accessed via F-Response, and doesn't require visibility into memory. However, the IOCs listed at the OpenIOC site have enough disk-based information in them (file system, Registry, etc.) that it's fairly easy to create a plugin to look for those same items.
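As a purely hypothetical example, suppose a shared IOC includes a file name and a Run key reference. The indicator values below are placeholders rather than terms taken from any actual IOC (the file name simply echoes the shelldc.dll example mentioned elsewhere), and check(mount_point) is just a made-up plugin interface:

# Hypothetical scanner plugin derived from the disk-based indicators in a
# shared IOC: a file name and a Run key reference.
from pathlib import Path
from Registry import Registry

FILE_INDICATOR = "shelldc.dll"                       # placeholder indicator
RUN_KEY = "Microsoft\\Windows\\CurrentVersion\\Run"  # checked in the SOFTWARE hive

def check(mount_point):
    findings = []
    mount = Path(mount_point)
    for hit in (mount / "Windows" / "System32").glob(FILE_INDICATOR):
        findings.append(f"File indicator present: {hit}")
    software = mount / "Windows" / "System32" / "config" / "SOFTWARE"
    if software.is_file():
        reg = Registry.Registry(str(software))
        for value in reg.open(RUN_KEY).values():
            if FILE_INDICATOR.lower() in str(value.value()).lower():
                findings.append(f"Run value references indicator: {value.name()}")
    return findings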
Wednesday, November 02, 2011
Stuff
NoVA Forensics Meetup Reminder
Don't forget about the meetup tonight...and thanks to David for pointing out my typo on the Meetup Page.
I haven't received any responses regarding a pre-meetup warm-up at a local pub, so I'll look forward to seeing everyone who's attending tonight at 7pm at our location.
I posted the slides for tonight's presentation to the NoVA Forensics Meetup Yahoo group.
SSDs
I was recently asked to write an article for an online forum regarding SSDs. Up until now, I haven't had any experience with these, but I thought I'd start looking around and see what's already out there so I can begin learning about solid state drives, as they're likely to replace more traditional hard drives in the near future.
For example, in Windows 7, if the drive is detected as an SSD, SuperFetch, ReadyBoost, and boot and application launch prefetching are disabled.
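As a quick live-box check (an elevated prompt is required, and this is just a sketch), you can ask Windows whether it's sending TRIM/delete notifications to the drive:

# Minimal sketch: query whether TRIM (delete notification) is enabled on a live
# Windows 7+ system. Requires an elevated prompt; fsutil reports
# "DisableDeleteNotify = 0" when TRIM commands are being sent to the drive.
import subprocess

def trim_enabled():
    out = subprocess.check_output(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        universal_newlines=True,
    )
    return "= 0" in out

if __name__ == "__main__":
    print("TRIM enabled:", trim_enabled())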
Resources
SSDTrim.com
Andre Ross' post on the DigFor blog.
OpenIOC
With this M-unition blog post, Mandiant announced the OpenIOC framework web site. I strongly suggest that, before going to the OpenIOC.org site, you read through the blog post thoroughly, so that you understand what's being presented and offered up, and so that you can set your expectations for the site.
What I mean by this is that the framework itself has been around and discussed for some time, particularly through the Mandiant site. Here is a presentation from some Mandiant folks that includes some discussion and slides regarding OpenIOC. There's also been an IOC Editor available for some time, which allows you to create IOCs, and now, with the launch of the OpenIOC.org site, the command line IOC Finder tool has been released. This tool (per the description in the blog post) allows a responder to check one host at a time for the established IOCs.
Fortunately, several example IOCs are also provided, such as this shelldc.dll example. I tend to believe that this is where the real power of this (or any other) framework will come from; regardless of the schema or standard used to describe indicators of compromise, the real value lies in the ability of DFIR folks to understand and share those IOCs. Having a standard for this sort of thing raises the bar for DFIR...not as a bar for admission, but as an indication of where everyone needs to be with respect to their understanding of DFIR activities; to be part of the community and share their own findings, analysts won't just have to know what's out there, they'll have to really understand it.
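For folks who want to work with shared IOCs programmatically, here's a minimal sketch of pulling the indicator terms out of an IOC file with nothing but the standard library...the element and attribute names follow the published OpenIOC 1.0 schema as I understand it, and the file name is a made-up example:

# Minimal sketch: pull the search terms out of an OpenIOC 1.0 XML file using the
# standard library. Element/attribute names follow the published schema as I
# understand it; the file name is a hypothetical example.
import xml.etree.ElementTree as ET

NS = {"ioc": "http://schemas.mandiant.com/2010/ioc"}

def indicator_items(ioc_path):
    root = ET.parse(ioc_path).getroot()
    for item in root.iter("{%s}IndicatorItem" % NS["ioc"]):
        context = item.find("ioc:Context", NS)
        content = item.find("ioc:Content", NS)
        yield (item.get("condition"),
               context.get("search") if context is not None else None,
               content.text if content is not None else None)

if __name__ == "__main__":
    for condition, search, value in indicator_items("example.ioc"):
        print(condition, search, value)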
So, in a lot of ways, this is a step in the right direction. I hope it takes off...as has been seen with GSI's EnScripts and the production of RegRipper plugins, sometimes something that's very useful to a small subset of analysts never really gets picked up by the larger community.
Breach Reporting
There's been some interesting discussion in various forums (G+, Twitter, etc.) lately regarding breach reporting. Okay, not so much discussion as folks posting links...I do think that there needs to be more discussion of this topic.
For example, much of the discussion around breach reporting is centered on actually reporting that a breach occurred. Now, if you read any of the published annual reports (Verizon, TrustWave, Mandiant), you'll see that, historically, a large percentage of breach victims were notified of the breach by external third parties. These numbers appear to hold across the board, even though the organizations publishing these reports target slightly different customer bases and respond predominantly to different types of breaches (PCI/PII, APT, etc.).
Maybe a legislative requirement to report a breach, regardless of how it was discovered, is just the first step. I've seen PCI breaches where a non-attorney executive stated emphatically that their company would not report the breach, but I tend to think that was done out of panic and a lack of understanding of, or information about, the breach itself. However, if breaches start getting reported, there will be greater visibility into the overall issue; from there, intelligent metrics can be developed, followed by better detection mechanisms and processes.
With respect to PII, it appears that there are 46 states with some sort of breach notification requirements, and there's even a bill put forth by Sen. Leahy (D-VT) and others regarding a national standard requiring reporting of discovered breaches.
Resources
Leahy Personal Data Privacy and Security Act