So we got to talking about the effect that even the first tool will have on memory...whichever tool you choose to dump the contents of physical memory, be it F-Response, FastDump, mdd, win32dd, winen, Memoryze, or, if you choose to kick it old school on XP, an old, unsupported copy of dd.exe. Regardless, when you launch the tool, the EXE is loaded into memory to be run, establishing a presence in memory and a "footprint". This cannot be avoided...you cannot interact with a live system without having some effect on it. Responders should also be aware that the converse is true...during incident response for something like a major data breach (not naming names here...
When the EXE is loaded into memory, it consumes pages that have been marked as available; these pages are not currently in use, but may contain remnants that may be valuable later in the examination, much like artifacts found in unallocated space on the hard drive. This is somewhat intuitive, if you think about it...if Windows were to launch a new process using pages already in use by another process, then you'd have random crashes all over the place. Depending on the amount of memory in the system, the memory manager may need to write some of the pages being used by other processes out to the pagefile/swap file in order to free up space...but again, the new process will not consume pages currently being used by another process.
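To make that behavior concrete, here's a toy sketch (all names hypothetical; the real Windows memory manager is vastly more sophisticated, with working sets, standby lists, and LRU-style trimming) of an allocator that satisfies new allocations only from pages marked available, and pages an in-use page out to a simulated pagefile rather than overwriting it when free pages run short:

```python
from collections import deque

PAGE_SIZE = 4096  # standard x86 page size

class SimplePageManager:
    """Toy model, NOT the actual Windows memory manager: new allocations
    come only from pages marked available; if none are free, an in-use
    page is written out to a simulated pagefile first, so its contents
    are preserved rather than destroyed."""

    def __init__(self, total_pages):
        self.free = deque(range(total_pages))  # pages marked "available"
        self.owner = {}                        # page number -> owning process
        self.pagefile = []                     # (process, page) pairs swapped out

    def allocate(self, process, n_pages):
        granted = []
        for _ in range(n_pages):
            if not self.free:
                # Evict an in-use page to the pagefile to free up space;
                # its contents survive in the pagefile, not in RAM.
                victim, victim_owner = next(iter(self.owner.items()))
                self.pagefile.append((victim_owner, victim))
                del self.owner[victim]
                self.free.append(victim)
            page = self.free.popleft()
            # Any "remnant" data left in this available page is now overwritten.
            self.owner[page] = process
            granted.append(page)
        return granted

# A process already holds 3 of 4 pages; launching a (hypothetical) dump
# tool that needs 2 pages consumes the one free page and forces one
# in-use page out to the pagefile; no in-use page is simply overwritten.
mgr = SimplePageManager(total_pages=4)
mgr.allocate("existing.exe", 3)
tool_pages = mgr.allocate("mdd.exe", 2)
```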
I know that this is an overly simplistic view of the whole process, but it's meant to illustrate a point...that point being that there's a LOT of discussion about the "memory footprints" of various tools, but when I listen to the discussion, I'm struck by the fact that it hasn't changed a great deal in the past 5 years, and that it doesn't incorporate input from the experts in the field, or actual testing. In fact, what strikes me most about these conversations is that the primary concern about "memory footprints" in many cases comes from the folks who "don't have the time" to conduct the very research and quantitative analysis that they seem to be asking for. I think that for the most part, many of us accept the fact that...yes, Virginia, there is a Santa...I mean, there will be memory footprints of our live response actions, but that it is impossible to quantify the content that is overwritten.
I think that we can agree that memory pages (and smaller amounts of content, given that pool allocations are smaller than a 4K memory page) are consumed during live response. I think that we can also agree that, from a technical perspective, while some pages may be swapped out to the pagefile, the only pages that will actually be consumed or overwritten with new content (such that their old content is no longer available, either in physical memory or in the pagefile) are those marked as available by the operating system. From an analyst's perspective, this is similar to what happens when files are deleted from a disk and the sectors those files occupied are later overwritten. This leads to instances in which you can "see" data in a RAM dump (through the use of tools such as strings or grep), but because that content may reside in pages marked available for use, the data will have little context (if any), as it cannot be associated with a specific process, thread, etc.
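As a rough illustration of why such hits lack context, a minimal strings(1)-style scan (a hypothetical Python sketch; the sample "dump" bytes are entirely made up) gives you only a physical offset and the text itself; nothing ties a hit back to a process or thread:

```python
import re

# Runs of 4 or more printable ASCII characters, like `strings -n 4`
ASCII_RUN = re.compile(rb"[ -~]{4,}")

def dump_strings(raw: bytes):
    """Minimal strings(1)-style scan of a raw memory image.
    Returns (offset, text) pairs; the only context available is the
    physical offset, which by itself cannot be associated with any
    specific process or thread."""
    return [(m.start(), m.group().decode("ascii"))
            for m in ASCII_RUN.finditer(raw)]

# Simulated dump fragment: "remnants" left behind in pages marked available
fake_dump = b"\x00\x03" + b"password=hunter2" + b"\xff" * 8 + b"cmd.exe" + b"\x00"

for offset, text in dump_strings(fake_dump):
    print(hex(offset), text)
```

The hits are visible and may well be valuable, but whether `password=hunter2` belonged to a long-dead process or an active one is exactly the question a raw string search cannot answer; that attribution requires parsing the memory image's process and thread structures.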