Wednesday, November 21, 2007
Alternative Methods of Analysis
Do I need to say it again? The age of Nintendo Forensics is gone, long past.
Acquiring a system is no longer as simple as removing the hard drive, hooking it up to a write blocker, and imaging it. Storage capacity is increasing, and devices capable of storing data are diversifying and multiplying (along with the data formats they use)...all of them becoming more ubiquitous. As the sophistication and complexity of devices and operating systems increase, examinations require additional resources and backlogs grow; the solution is training and education.
Training and education lead to greater subject-matter knowledge, allowing the investigator to ask better questions, and perhaps even make better requests for assistance. Having a better understanding of what is available to you, and where to look for it, leads to better data collection and more thorough, efficient examinations. It also leads to solutions that might not be readily apparent to those who follow the "point and click execution" methodology.
Take this article from Police Chief Magazine, for example. 1stSgt Cohen goes so far as to specifically mention the collection of volatile data and the contents of RAM. IMHO, this is a HUGE step in the right direction. In this instance, a law enforcement officer is publicly recognizing the importance of volatile data in an investigation.
It's also clear from the article that training and education have led to the use of a "computer forensics field triage", which simply exemplifies the need for growth in this area. The article also makes clear that a partnership between law enforcement, the NW3C, and Purdue University has benefited all parties. It would appear that at some point in the game, the LEs were able to identify what they needed and request the necessary assistance from Purdue and NW3C...something known in consulting circles as "requirements analysis". At some point, the cops understood the importance of volatile memory, and thought, "we need this...now, how do we collect it in the proper manner?"
So what does this have to do with alternative methods of analysis? An increase in knowledge allows you to seek out alternative methods for your investigation.
For example, take the Trojan Defense. The "purist" approach to computer forensics...remove the hard drive from the system, acquire an image, and look for files...appears to have been less than successful in at least one case in 2003, and the effect of that decision may have set the stage for other, similar decisions. So, let's say you've examined the image, searched for files, even mounted the image as a file system and hit it with multiple AV scanners, anti-spyware tools, and hash comparisons, and still haven't found anything. Now let's assume you had collected volatile data...active process list, network connections, port-to-process mapping, etc. With that data parsed, wouldn't the case have been a bit more iron-clad? You'd be able to show that at the time the system was acquired, these were the processes that were running (along with their command line options, etc.), the installed modules, the network connections, and so on.
At that point, the argument may have been that the Trojan included a rootkit component that was memory-resident and never wrote to the disk. Okay, so let's say that instead of running individual commands to collect specific elements of memory (or, better yet, before doing that...), you'd grabbed the full contents of RAM? Tools for parsing the contents of RAM do not need to employ the MS API to do so, and can even locate an exited or unlinked process, and then extract the executable image file for that process from the RAM dump.
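As an illustration, here's a minimal sketch (in Perl, in keeping with the tools on my book's DVD) of the kind of signature-based scanning an lsproc-style tool performs against a raw RAM dump, with no use of the MS API. The EPROCESS offsets used (0x84 for the PID, 0x14c for the parent PID, 0x174 for ImageFileName) are assumptions for XP SP2 and differ across Windows versions; for clarity, the sketch also ignores structures that span chunk boundaries.

#!/usr/bin/perl
# procscan.pl - sketch: scan a raw memory dump for process objects by
# signature. Offsets below assume the XP SP2 EPROCESS layout and will vary.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <memory dump>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
binmode($fh);

my $sig = "\x03\x00\x1b\x00";   # DISPATCHER_HEADER: Type 0x03, Size 0x1b
while (read($fh, my $chunk, 1024 * 1024)) {
    my $pos = 0;
    while (($pos = index($chunk, $sig, $pos)) != -1) {
        # pool allocations are 8-byte aligned; require room for the fields
        if ($pos % 8 == 0 && length($chunk) - $pos >= 0x184) {
            my $pid  = unpack("V", substr($chunk, $pos + 0x84,  4));
            my $ppid = unpack("V", substr($chunk, $pos + 0x14c, 4));
            (my $name = substr($chunk, $pos + 0x174, 16)) =~ s/\x00.*//s;
            printf "%-6d %-6d %s\n", $ppid, $pid, $name
                if $name =~ /^[\x20-\x7e]+$/;
        }
        $pos += 4;
    }
}
close($fh);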
What if the issue had occurred in an environment with traffic monitoring...firewall, IDS and IPS logs may have come into play, not to mention traffic captures gathered by an alert admin or a highly-trained IR team? Then you'd have even more data to correlate...filter the network traffic based on the IP address of the system, isolate that traffic, etc.
The more you know about something, the better. The more you know about your car, for example, the better you are able to describe the issue to a mechanic. With even more knowledge, you may even be able to diagnose the issue and be able to provide something more descriptive than "it doesn't work". The same thing applies to IR, as well as to forensic analysis...greater knowledge leads to better response and collection, which leads to more thorough and efficient analysis.
So how do we get there? Well, someone figured it out, and joined forces with Purdue and NW3C. Another way to do this is through online collaboration, forums, etc.
Tuesday, November 20, 2007
Windows Memory Analysis
It's been a while since I've blogged on memory analysis, I know. This is due in part to my work schedule, but it also has a bit to do with how things have apparently cooled off in this area...there just doesn't seem to be the flurry of activity that there was in the past.
However, I could be wrong on that. I received an email from someone telling me that certain tools mentioned in my book were not available (of those mentioned...nc.exe, BinText, and pmdump.exe...only BinText seems to be no longer available via the FoundStone site), so I began looking around to see if this was, in fact, the case. While looking for pmdump.exe, I noticed that Arne had recently released a tool called memimager.exe, which allows you to dump the contents of RAM using the NtSystemDebugControl API. I downloaded memimager.exe and ran it on my XP SP2 system, then ran a modified version of my lsproc.pl (originally written for Windows 2000 systems) against the dump and found:
PPID   PID    Name
----   ----   ----
0      0      Idle
408    2860   cmd.exe
2860   3052   memimager.exe
408    3608   firefox.exe
408    120    aim6.exe
408    3576   realsched.exe
120    192    aolsoftware.exe(x)
1144   3904   svchost.exe
408    2768   hqtray.exe
408    1744   WLTRAY.EXE
408    2696   stsystra.exe
244    408    explorer.exe
Look familiar? Sure it does, albeit the above is only an extract of the output. Memimager.exe appears to work very similarly to the older version of George M. Garner, Jr.'s dd.exe (the one that accessed the PhysicalMemory object), particularly in that areas of memory that could not be read were filled with 0s. I haven't tried memimager on a Windows 2003 (no SPs) system yet. However, it is important to note that Nigilant32 from Agile Risk Management is the only other freely available tool I'm aware of that will allow you to dump the contents of PhysicalMemory from pre-Win2K3SP1 systems. It's included with Helix, but if you're a consultant thinking about using it, be sure to read the license agreement; if you're running Nigilant32 from the Helix CD, the AgileRM license agreement applies.
I also wanted to follow up and see what AAron's been up to over at Volatile Systems...his Volatility Framework is extremely promising! From there, I went to check out his blog, and saw a couple of interesting posts and links. AAron is definitely one to watch in this area of study, and he's coming out with some really innovative tools.
One of the links on AAron's blog went to something called "Push the Red Button"...this apparently isn't the same RedButton from MWC, Inc. (the RedButton GUI is visible in fig 2-5 on page 50 of my first book...you can download your own copy of the "old skool" RedButton to play with), but is very interesting. One blog post that caught my eye had to do with carving Registry hive files from memory dumps. I've looked at this recently, albeit from a different perspective...I've written code to locate Registry keys, values, and Event Log records in memory dumps. The code is very alpha at this point, but what I've found appears fairly promising. Running such code across an entire memory dump doesn't provide a great deal of context for your data, so I would strongly suggest first extracting the contents of process memory (perhaps using lspm.pl, found on the DVD with my book), or using a tool such as pmdump.exe to extract the process memory itself during incident response activities. Other tools of note for more general file carving include Scalpel and Foremost.
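For anyone curious about the mechanics, a bare-bones sketch of that sort of carving follows; it scans a dump for the "regf" signature that begins a hive base block and reports the embedded hive file name. The base-block layout (signature at offset 0, UTF-16LE file name at offset 0x30) is taken from the publicly documented hive format; the page-alignment assumption is mine.

#!/usr/bin/perl
# hivescan.pl - sketch: carve Registry hive base blocks from a raw memory
# dump by signature. Assumes base blocks land on page boundaries and that
# the hive file name sits at offset 0x30 of the base block (UTF-16LE).
use strict;
use warnings;

my $file = shift or die "Usage: $0 <memory dump>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
binmode($fh);

my $offset = 0;
while (read($fh, my $page, 0x1000)) {
    if (substr($page, 0, 4) eq "regf") {
        my $name = substr($page, 0x30, 64);
        $name =~ s/\x00//g;                 # crude UTF-16LE to ASCII
        printf "Possible hive base block at offset 0x%x: %s\n", $offset, $name;
    }
    $offset += 0x1000;
}
close($fh);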
So...more than anything else, it looks like it's getting to be a good time to update processes and tools. I mentioned an upcoming speaking engagement earlier, and I'm sure that there will be other opportunities to speak on Windows memory analysis in the future.
Sunday, November 18, 2007
Jesse's back!
Jesse blogged recently about being over at the F3 conference in the UK, and how he was impressed with the high SNR. Jesse shared with me that during a trivia contest, one of the teams chose the name, "The Harlan Carvey Book Club". Thanks, guys (and gals, as the case may be)! It's a nice thought and very flattering...though I don't ever expect to be as popular as Jesse or The Shat. ;-)
Saturday, November 17, 2007
Upcoming Speaking Engagement
Next month, I'll be in Hong Kong, speaking to the local police, as well as at the HTCIA-HK conference. The primary topic I've been asked to present is Registry analysis, along with some live response and memory analysis. The presentations vary from day-long to about 5 hrs, with a 45-minute presentation scheduled for Thursday afternoon - I'll likely be summing things up, with that presentation tentatively titled "Alternative Analysis Methods". My thought is that I will speak to the need for such things, and then summarize my previous three days of talks to present some of those methods.
Sunday, November 11, 2007
Pimp my...Registry analysis
There are some great tools out there for viewing the Registry in an acquired image. EnCase has this, as does ProDiscover (I tend to prefer ProDiscover's ability to parse and display the Registry...) and AccessData's Registry Viewer. Other tools have similar abilities, as well. But you know what? Most times, I don't want to view the Registry. Nope. Most times, I don't care about 90% of what's there. That's why I wrote most of the tools available on the DVD that ships with my book, and why I continue to write other, similar tools.
For example, if I want to get an idea of a user's activity on a system, one of the first places I go is the SAM hive, to see if the user had a local account on the system. From there, I go to the user's hive file (NTUSER.DAT) located in their profile, and start pulling out specific bits of information...parsing the UserAssist keys, etc...anything that shows not only the user's activities on the system, but also a timeline of that activity. Thanks to folks like Didier Stevens, we all have a greater understanding of the contents of the UserAssist keys.
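To make that concrete, here's a small sketch of decoding a UserAssist value. Per Didier's findings, the value names are ROT13-encoded; the 16-byte data layout assumed below (run count at offset 4, FILETIME at offset 8) is what's been documented for XP, and the sample value name and data are hypothetical.

#!/usr/bin/perl
# uadecode.pl - sketch: decode a ROT13'd UserAssist value name and its
# 16-byte data (assumed layout: session, run count, 64-bit FILETIME).
use strict;
use warnings;

sub rot13 {
    my $s = shift;
    $s =~ tr/A-Za-z/N-ZA-Mn-za-m/;
    return $s;
}

sub filetime_to_epoch {
    my ($lo, $hi) = @_;
    # FILETIME: 100ns intervals since 1 Jan 1601; convert to Unix epoch
    return int((($hi * 4294967296) + $lo) / 10000000) - 11644473600;
}

my $name = 'HRZR_EHACNGU:P:\Jvaqbjf\abgrcnq.rkr';      # hypothetical value name
print rot13($name), "\n";                              # UEME_RUNPATH:C:\Windows\notepad.exe

my $data = pack("V4", 1, 6, 0x9eb9b000, 0x01c82c1f);   # hypothetical value data
my (undef, $count, $lo, $hi) = unpack("V4", $data);
printf "Run count: %d  Last run: %s UTC\n", $count,
       scalar gmtime(filetime_to_epoch($lo, $hi));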
Now, the same sort of thing applies to the entire system. For example, one of the tools I wrote allows me to type in a single command and know all of the USB removable storage devices that have been attached to the system, and when they were last attached. Note: this is system-wide information, but we now know how to tie that activity to a specific user.
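Here's a sketch of how such a tool might approach it, run against a SYSTEM hive exported from an image, using James Macfarlane's Parse::Win32Registry module from CPAN. Going straight to ControlSet001 is a simplification of mine; the Select key should really be checked for the current control set.

#!/usr/bin/perl
# usbdev.pl - sketch: list USB storage devices recorded in an exported
# SYSTEM hive; key LastWrite times approximate last connection.
use strict;
use warnings;
use Parse::Win32Registry;

my $hive = shift or die "Usage: $0 <SYSTEM hive file>\n";
my $reg  = Parse::Win32Registry->new($hive) or die "Not a valid hive file\n";
my $root = $reg->get_root_key;

# simplification: assumes ControlSet001 is current (check the Select key)
my $usbstor = $root->get_subkey('ControlSet001\Enum\USBSTOR')
    or die "No USBSTOR key found\n";

foreach my $dev ($usbstor->get_list_of_subkeys) {
    print $dev->get_name, "\n";               # vendor/product identifier
    foreach my $inst ($dev->get_list_of_subkeys) {
        printf "  %s  LastWrite: %s UTC\n", $inst->get_name,
               scalar gmtime($inst->get_timestamp);
    }
}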
On XP systems, we also have the Registry files in the Restore Points available for analysis. One great example of this is the LEO who wanted to know when a user had been moved from the Users to the Administrators group...by going back through the SAM hives maintained in the Restore Points, he was able to show approximately when that happened, and then tied it to other activity on the system, as well.
So...it's pretty clear that when it comes to Registry analysis, the RegEdit-style display of the Registry has limited usefulness. But it's also clear that there really isn't much of a commercial market for these kinds of tools. So what's the answer? Well, just like the folks who get their rides or cribs pimped out on TV, specialists bring a lot to the table. What needs to happen is greater communication of needs, and there are folks out there willing and able to fulfill that need.
Here's a good question to get discussion started...what's a good, easy-to-use, easy-to-access format for a guide to what's available in the Registry (and where)? I included an Excel spreadsheet with my book...does this work for folks? Is the "compiled HTML" (i.e., *.chm) Windows Help format easier to use?
If you can't think of a good format, maybe the way to start is this...what information would you put into something like this, and how would you format or organize it?
Pimp my...live acquisition
Whenever you perform a live acquisition, what do you do? Document the system, write down things like the system model (e.g., a Dell PowerEdge 2960), maybe write down any specific identifiers (such as the Dell service tag), and then acquire the system. But is this enough data? Are we missing things by not including the collection of other data in our live acquisition process?
What about collecting volatile data? I've had to perform live acquisitions of systems that had been rebooted multiple times since the incident was discovered, as well as systems that could not be acquired without booting them (SAS/SATA drives, etc.). Under those circumstances, maybe I wouldn't need to collect volatile data...after all, what data of interest would it still contain?...but maybe we should do so anyway.
How about collecting non-volatile data? Like the system name and IP address? Or the disk configuration? One of the tools available on the DVD that comes with my book lets you see the following information about hard drives attached to the system:
DeviceID : \\.\PHYSICALDRIVE0
Model : ST910021AS
Interface : IDE
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x41ab2316
Serial No : 3MH0B9G3
DeviceID : \\.\PHYSICALDRIVE1
Model : WDC WD12 00UE-00KVT0 USB Device
Interface : USB
Media : Fixed hard disk media
Capabilities :
Random Access
Supports Writing
Signature : 0x96244465
Serial No :
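For the curious, output like the above can be pulled from WMI's Win32_DiskDrive class; here's a rough sketch using Win32::OLE. This is an illustration rather than the code from the book's DVD, and note that on XP the disk serial number actually comes from Win32_PhysicalMedia rather than Win32_DiskDrive.

#!/usr/bin/perl
# diskinfo.pl - sketch: enumerate physical disks via WMI. On XP, the
# serial number lives in Win32_PhysicalMedia, not Win32_DiskDrive.
use strict;
use warnings;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject(
    'winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2')
    or die "Cannot connect to WMI\n";

foreach my $disk (in $wmi->InstancesOf('Win32_DiskDrive')) {
    printf "DeviceID  : %s\n", $disk->{DeviceID};
    printf "Model     : %s\n", $disk->{Model};
    printf "Interface : %s\n", $disk->{InterfaceType};
    printf "Media     : %s\n", $disk->{MediaType};
    printf "Signature : 0x%x\n", $disk->{Signature} if defined $disk->{Signature};
    print  "Capabilities :\n";
    # CapabilityDescriptions is returned as a reference to an array
    print  "  $_\n" foreach @{ $disk->{CapabilityDescriptions} || [] };
    print  "\n";
}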
Another tool lets you see the following:
Drive   Type     File System   Path   Free Space
-----   ------   -----------   ----   ----------
C:\     Fixed    NTFS                 17.96 GB
D:\     Fixed    NTFS                 38.51 GB
E:\     CD-ROM                        0.00
G:\     Fixed    NTFS                 42.24 GB
In the above output, notice that there are no network drives attached to the test system...no shares mapped. Had there been any, the "Type" would have been listed as "Network", and the path would have been displayed.
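The drive listing above maps cleanly to WMI's Win32_LogicalDisk class, where DriveType 4 denotes a network drive and ProviderName holds the UNC path. Another rough sketch (again, an illustration rather than the DVD tool itself):

#!/usr/bin/perl
# drives.pl - sketch: list logical drives; network drives (DriveType 4)
# show their mapped UNC path in the ProviderName property.
use strict;
use warnings;
use Win32::OLE qw(in);

my %type = (2 => 'Removable', 3 => 'Fixed', 4 => 'Network',
            5 => 'CD-ROM',    6 => 'RAM disk');

my $wmi = Win32::OLE->GetObject(
    'winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2')
    or die "Cannot connect to WMI\n";

printf "%-6s %-10s %-12s %-22s %s\n",
       'Drive', 'Type', 'File System', 'Path', 'Free Space';
foreach my $drv (in $wmi->InstancesOf('Win32_LogicalDisk')) {
    my $free = $drv->{FreeSpace}
             ? sprintf("%.2f GB", $drv->{FreeSpace} / (1024 ** 3)) : '0.00';
    printf "%-6s %-10s %-12s %-22s %s\n",
           $drv->{DeviceID} . '\\',
           $type{ $drv->{DriveType} } || 'Unknown',
           $drv->{FileSystem}   || '',
           $drv->{ProviderName} || '',
           $free;
}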
Does it make sense to acquire this sort of information when performing a live acquisition? If so, is this information sufficient...or, what other information, at a minimum, should be collected?
Are there conditions under which I would acquire certain info, but not other info? For example, if the system had not been rebooted, would I dump the contents of physical memory? (I'd say "yes" to that one.) However, if the system had been accessed by admins, scanned with AV, and rebooted several times, would it do me any good at that point to dump RAM, or should I simply document the fact that I didn't, and why?
Would this information be collected prior to the live acquisition, or immediately following?
What are your thoughts...and why?
PS: If you can think of a pseudonym I can use...think of it as a "hacker handle" (that thing that Joey from Hackers kept whining about), but REALLY cool, like "Xzibit", let me know. Oh, yeah...Ovie Carroll needs one, too. ;-)
Thursday, November 08, 2007
Pimp my...forensics analysis
How often do you find yourself, when performing forensic analysis, not having the tools you need (i.e., the tools you do have don't show you what you need, or don't provide useful output)? Many of the tools we use provide basic functionality, but very few go beyond that and are capable of providing what we need across a large number of cases (or, in some instances, even from examination to examination). This leads to one of the major challenges (IMHO) of the forensic community: having the right tool for the job. Oddly enough, there just isn't a great market for tools that do very specific things like parse binary files, extract data from the Registry, etc. The lack of such tools is very likely due to the volume of work (i.e., case load) that needs to be done, and to a lack of training...commercial GUI tools with buttons to push seem to be preferred over open-source command-line tools, but only if the need is actually recognized. Do we always tell someone when we need something specific, or do we just put our heads down, push through the investigation using the tools and skill sets we have at hand, and never address the issue because of our workload?
With your forensic-analysis-tool-of-choice (FTK, EnCase, ProDiscover, etc.), many times you may still be left with the thought, "...that data isn't in a format I can easily understand or use...". Know what I mean? Ever been there? I'm sure you have...extract that Event Log file from an image and load it up into Event Viewer on your analysis system, only to be confronted with an error message telling you that the Event Log is "corrupted". What do you do? Boot the image with LiveView (version 0.6 is available, by the way) and log into it to view the Event Log? Got the password?
The answer to this dilemma is to take a page from Xzibit's book and "pimp my forensics analysis". That's right, we're going to customize, or "trick out", our systems with the tools and stuff we need to do the job.
One way to get started on this is to take a look at my book [WARNING: Shameless self-promotion approaching...], Windows Forensic Analysis; specifically, at some of the tools that ship on the accompanying DVD. All of the tools were written in Perl, but all but a few have been "compiled" into standalone EXEs so that you don't have to have Perl installed to run them, or know anything about Perl -- in fact, based on the emails I have received since the book was released in May 2007, the real limiting factor appears to be nothing more than a lack of familiarity with running command line (CLI) tools (read: over-dependence on pushing buttons). The tools were developed out of my own needs, and my hope is that as folks read the book, they too will recognize the value in parsing the Event Log files, Registry, etc., as well as the value of the tools provided.
Another resource is the upcoming Perl Scripting for Forensic Investigation and Security, to be published in the near future by Syngress/Elsevier.
What do these two resources provide? In a nutshell, customized and customizable tools. While a number of tools exist that will let you view the Registry in the familiar (via RegEdit) tree-style format, how many of those tools will translate arbitrary binary data stored in the Registry values? How many will do correlation, not only between multiple keys within a hive, but across hives, as well?
How many of the tools will allow you to parse the contents of a Windows 2000, XP, or 2003 Event Log file into an Excel spreadsheet, in a format that is easily sorted and searched? Or report on various event record statistics?
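The record format is documented well enough that this doesn't require the Windows API at all. A bare-bones sketch follows: it scans a .evt file for the "LfLe" record magic, unpacks the fixed EVENTLOGRECORD header fields, and writes CSV that drops straight into Excel. Field offsets follow the published EVENTLOGRECORD layout; robust code would do far more validation.

#!/usr/bin/perl
# evtparse.pl - sketch: parse a 2000/XP/2003 .evt file directly by scanning
# for the "LfLe" record magic; emits CSV for easy import into Excel.
use strict;
use warnings;

my $file = shift or die "Usage: $0 <evt file>\n";
open(my $fh, '<', $file) or die "Cannot open $file: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

my %types = (1 => 'Error', 2 => 'Warning', 4 => 'Info',
             8 => 'AuditSuccess', 16 => 'AuditFailure');

print "RecordNum,TimeGenerated,EventID,Type,Source\n";
my $pos = 0;
while (($pos = index($data, "LfLe", $pos)) != -1) {
    my $start = $pos - 4;            # the record length precedes the magic
    $pos += 4;                       # keep scanning past this hit
    next if $start < 0 or $start + 120 > length($data);
    my ($len, undef, $recnum, $timegen, $timewrt, $eventid, $evtype)
        = unpack("V6v", substr($data, $start, 26));
    next if $len < 56;               # the file header reuses the magic; skip it
    # SourceName: NUL-terminated Unicode string at offset 56 of the record
    my $src = substr($data, $start + 56, 64);
    $src =~ s/\x00//g;
    $src =~ s/[^\x20-\x7e].*$//s;
    printf "%d,%s,%d,%s,%s\n", $recnum, scalar gmtime($timegen),
           $eventid & 0xFFFF, $types{$evtype} || $evtype, $src;
}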
How many of these tools provide the basis for growth and customization, in order to meet the needs of a future investigation? Some tools do...they provide base functionality, and allow the user to extend that functionality through scripting languages. Some are easier to learn than others, and some are more functional than others. But even these can be limiting sometimes.
The data is there, folks...what slows us down sometimes is either (a) not knowing that the data is there (and that a resource is a source of valuable data or evidence), or (b) not knowing how to get at that data and present it in a usable and understandable manner. Overcoming this is as simple as identifying what you need, and then reaching out to the Xzibits and Les Strouds of the forensic community to get the job done. Rest assured, you're not the only one looking for that functionality...