Tuesday, July 31, 2012

Links and Updates

Blogosphere
Corey Harrell has another valuable post up, this one walking through an actual root cause analysis.  I'm not going to steal Corey's thunder on this...the post or the underlying motive...but suffice it to say that performing a root cause analysis is critical, and it's something that can be done in an efficient manner.  There's more to come on this topic, but this sort of analysis needs to be done, it can be done effectively and efficiently, and if you don't do it, it will end up being much more expensive for you in the long run.

Jimmy Weg started blogging a short while ago, and in very short order has already posted several very valuable articles.  Jimmy's posts so far are very tutorial in nature, and provide a great deal of information regarding the analysis of Volume Shadow Copies.  He has a number of very informative posts available, including a very recent post on using X-Ways to cull evidence from VSCs.

Mari has started blogging, and her inaugural post sets the bar pretty high.  I mentioned in this blog post that blogging is a great way to get started with respect to sharing DFIR information, and that even an initial blog post can lead to further research.  Mari's post included enough information to begin writing another parser for the Bing search bar artifacts, should you want one that presents the data in a manner that's usable in your reporting format.

David Nides recently posted to his blog regarding what he discussed in his SANS Forensic Summit presentation.  David's post focuses exclusively on log2timeline as the sole means for creating timelines, as well as on some of what he sees as the shortcomings with respect to analyzing the output of Kristinn's framework.  However, that's not what struck me about David's post...rather, what caught my attention were statements such as, "...for the average DFIR professional who is not familiar with CLI."

Now, don't think that I'm on the side of the fence that feels that every DFIR "professional" must be well-versed in CLI tools.  Not at all.  I don't even think that it should be a requirement that DFIR folks be able to program.  However, I do see something...off...about a statement that includes the word "professional" along with "not familiar with CLI".

I've worked with several analysts throughout my time in the infosec field, and I've taught (and teach) a number of courses.  I have worked with analysts who have used *only* Linux, and done so at the command line...and I have engaged with paid professionals tasked with analysis work who are only able to use one commercial analysis framework.  So, while I am concerned by repeated statements in a post that seem to say, "...this doesn't work, because Homey don't play that...", I am also familiar with the reality of it.

Speaking of the SANS Forensic Summit, the Volatility blog has a new post up that is something of a different perspective on the event.  Sometimes it can be refreshing to get away from the distractions of the cartoons, and it's always a good idea to get different perspectives on events. 

Tools
The folks over at TZWorks have put together a number of new tools.  Their Jump List parser works for both the *.automaticDestinations-ms Jump Lists and the *.customDestinations-ms files.  There's a Prefetch file parser, a USB storage parser, and a number of other very useful utilities freely available, for a variety of platforms (Win32 and 64-bit, Linux, Mac OS X).

If you're not familiar with the TZWorks.net site, take a look and bookmark it.  If you're downloading the tools for use, be sure to read the license agreement.  Remember, if you're reporting on your analysis properly, you're identifying the tools (and the versions) that you used, and relying on these tools for commercial work when the license doesn't permit that use may come back to bite you.

Andrew posted to the ForensicsArtifacts.com site recently regarding MS Office Trust Records, which appear to be generated when a user trusts content via MS Office 2010.  Andrew, co-creator of Registry Decoder, pointed out that Mark Woan's RegExtract parses this information; shortly after reading his post, I wrote a RegRipper plugin to extract the information, and then created another version of that plugin to extract the data in TLN format.

This information is very valuable, as it is an indicator of explicit user activity...when opening a document from an untrusted source, the user must click the "Enable Editing" button that appears in the application in order to proceed with editing it.  Clearly, this requires some additional testing to determine the actions that cause this artifact to be populated, but for now, it clearly demonstrates user access to resources (i.e., network drives, external drives, files, etc.).

In the limited testing that I've done so far, the time stamp associated with the data appears to be when the document was created on the system, not when the user clicked the "Enable Editing" button.  What I've done is download a document (MS Word .docx) to my desktop via Chrome, record the date and time of the download, and then open the file.  When the "Enable Editing" button is present in the warning ribbon at the top of the document, I will wait up to an hour (sometimes more) to click the button, and record the time I did so.  Once I do, I generally close the document.  I then reboot the system, use FTK Imager to get a copy of the NTUSER.DAT hive, and run the plugin.  In every case so far, the time stamp associated with the values in the key correlates to the creation time of the file, as further evidenced by running "dir /tc".
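For those interested, here's a rough sketch of the sort of parsing involved.  This is NOT the RegRipper plugin itself, just a minimal illustration using the Parse::Win32Registry module, and it assumes the Office 2010/Word key path and that the first 8 bytes of each value's data hold a FILETIME...both based on my limited testing, so verify against your own data:

    # trustrec_sketch.pl - minimal sketch; list TrustRecords values from
    # an NTUSER.DAT hive, with the apparent FILETIME from the value data
    use strict;
    use warnings;
    use Parse::Win32Registry qw(unpack_windows_time);

    my $hive = shift or die "Usage: trustrec_sketch.pl <NTUSER.DAT>\n";
    my $reg  = Parse::Win32Registry->new($hive) or die "Cannot open $hive\n";
    my $root = $reg->get_root_key;

    # Assumption: Office 2010 (14.0), Word; other versions/apps will differ
    my $path = "Software\\Microsoft\\Office\\14.0\\Word\\Security\\".
               "Trusted Documents\\TrustRecords";
    my $key  = $root->get_subkey($path) or die "Key not found: $path\n";

    foreach my $val ($key->get_list_of_values) {
        # Assumption: first 8 bytes of the data are a 64-bit FILETIME
        my $t = unpack_windows_time(substr($val->get_data, 0, 8));
        next unless (defined $t);
        printf "%s UTC  %s\n", scalar(gmtime($t)), $val->get_name;
    }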

Monday, July 30, 2012

Adding Value to Timelines

Timeline analysis is valuable to an analyst, in that a timeline of system events provides context, situational awareness, and an increased relative confidence in the data with which the analyst is engaged.

We can increase the value of a timeline by adding events to that timeline, but adding events for their own sake isn't what we're particularly interested in.  Timeline analysis is a form of data reduction, and adding events to our timeline for their own sake moves away from that premise.  What we want to do is add events of value, and we can do that in a couple of ways.

Categorizing Events
Individual events within a timeline, in and of themselves, can have little meaning, particularly if we're unfamiliar with those specific events.  We try to minimize the amount of information that's in an event, in order to get as many events as we can on our screen and within our field of vision, in order to get some context or situational awareness around that particular event.  As we see events over and over again, we develop something of an "expert" or "experience" recognition system in our minds...we recognize that some events, or groups of events, are most often associated with various system or user activities.  For example, we begin to recognize, through repetition and research, that one event (or a series of events) indicates that a USB device was connected to a system, or a program was installed, or that a user accessed a file with a particular program.  In our minds, we begin to group these events into categories.

Consider this...given the myriad of events listed in the Windows Event Log, particularly on Windows 7 and 2008 R2 systems, having the ability to map events to categories, based on event source and ID pairs, can be extremely valuable to an analyst.  An analyst can do the research regarding an event once, and then add the event source/ID pair and its category to the event mapping file, along with a credible reference.  From that point on, the event mapping file gets used over and over again, automatically mapping event source/ID pairs to the category that the analyst identified.  If there's any question about the meaning or context of a particular event, the reference is right there in the event mapping file.

As an example of this event mapping, we may find through analysis and research that the event source WPD-ClassInstaller with the ID 24576 within the System Event Log refers to a successful driver installation, and as such, we might give this event a category ID of "[Driver Install]" for easy reference.  We might also then know to look for events with source UserPnp and IDs 20001 and 20003 in order to identify the USB device that was installed.  This event mapping also allows us to identify specific events of interest, events that we may want to focus on in our exams.
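As a sketch of what such a mapping file and lookup might look like...the semicolon-delimited format here is something I've made up purely for illustration, not any particular tool's format:

    # eventmap.txt - source/ID;category;reference
    WPD-ClassInstaller/24576;Driver Install;(add a credible reference here)
    UserPnp/20001;Driver Install;(add a credible reference here)
    UserPnp/20003;Driver Install;(add a credible reference here)

    # eventmap_sketch.pl - load the mapping file, then tag events
    use strict;
    use warnings;

    my %map;
    open(my $fh, '<', "eventmap.txt") or die "eventmap.txt: $!\n";
    while (<$fh>) {
        chomp;
        next if (/^#/ || /^\s*$/);              # skip comments and blanks
        my ($pair, $cat, $ref) = split(/;/, $_, 3);
        $map{lc $pair} = { category => $cat, ref => $ref };
    }
    close($fh);

    # e.g., for an event record parsed elsewhere:
    my ($source, $id) = ("WPD-ClassInstaller", 24576);
    my $tag = $map{lc "$source/$id"};
    my $prefix = $tag ? "[".$tag->{category}."] " : "";
    print $prefix."$source/$id\n";   # "[Driver Install] WPD-ClassInstaller/24576"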

We can then codify this "expert system" (perhaps a better term is an "experience system") by adding category IDs to events.  One benefit of this is quicker recognition; we're no longer relying on memory, but instead adding our experience to our timeline analysis process, thereby adding value to the end result.

Note: In the above paragraph, I am not referring to adding category information to an event after the timeline has been generated.  Instead, I am suggesting that category IDs be added to events, so that they "live" with the event.
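To illustrate, here are a couple of events in the five-field TLN format (time stamp, source, system, user, description), with the category ID "living" at the front of the description field.  The system name and descriptions below are made up for illustration:

    1343779200|EVT|SYSTEM1|-|[Driver Install] WPD-ClassInstaller/24576;driver installation completed successfully
    1343779212|EVT|SYSTEM1|-|[Driver Install] UserPnp/20001;device driver installation started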

Another benefit is that by sharing this "experience system" with others, we reduce their initial cost of entry into analyzing timelines, and increase the ability of the group to recognize patterns in the data. By adding the ability to recognize patterns to the group as a whole, we then provide a greater capability for processing the overall data.

Now, some events may fit into several categories at once.  For example, the simple fact that we have an *.automaticDestinations-ms Jump List on a Windows 7 or 2008 R2 system indicates an event category of "Program Execution"; after all, the file would not exist unless an application had been executed.  Depending upon which application was launched, other event categories may also apply to the various entries found within the DestList stream of the Jump List file.  For MS Word, the various entries refer to files that had been accessed; as such, each entry might fall within a "File Access" event category.  As Jump Lists are specific to a user, events extracted from the DestList stream or from the corresponding LNK streams within the Jump List file may also fall within a more general "User Activity" event category.

Incorporating Metadata
One of the things missing from the traditional approach to creating timelines is the incorporation of file metadata into the timeline itself.

Let's say that we run the TSK tool fls.exe against an image in order to get the file system metadata for files in a particular volume.  Now we have what amounts to the time stamps from the $STANDARD_INFORMATION attribute (yes, we're assuming NTFS) within the MFT.  This is clearly useful, but depending upon our goals, we can potentially make this even more useful by accessing each of the files themselves and (a) determining what metadata may be available, and (b) providing the results of filtering that metadata.
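For reference, that first step might look something like the following (fls with -r to recurse, -m to emit the mactime bodyfile format with the given mount point, and -o for the volume's sector offset; the image name and offset are just examples):

    fls -r -m C: -o 63 image.dd > bodyfile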

Here's an example...let's say that you're analyzing a system thought to have been infected with malware of some kind, and you've already run an AV scan or two and not found anything conclusive.  What are some of the things that you could look for beyond simply running an AV scan (or two)?  If there are multiple items that you'd look for, what's the likelihood that you'll remember all of those items, for every single case?  How long does it take you to walk through your checklist by hand, assuming you have one?  Let's take just one potential step in that checklist...say, scanning users' temporary directories.  You open the image in FTK Imager, navigate in the tree view to the appropriate directory, and you see that the user has a lot of files in their temp directory, all with .tmp extensions.  So you start accessing each file via the FTK Imager hex view, and you see that some of these files appear to be executable files.  Ah, interesting.  Wouldn't it be nice to have that information in your timeline, to have something that says, "Hey, this file with the .tmp extension is really an executable file!"?
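One way to automate that check is to simply look for the "MZ" signature at the start of each file.  Here's a minimal sketch, assuming you've already exported the files of interest from the image into a local directory (the directory name below is hypothetical):

    # mztmp_sketch.pl - flag *.tmp files that begin with the "MZ"
    # signature of a Windows executable; point it at a directory of
    # files exported from the image
    use strict;
    use warnings;

    my $dir = shift || "exported_temp";
    opendir(my $dh, $dir) or die "$dir: $!\n";
    foreach my $file (grep { /\.tmp$/i } readdir($dh)) {
        open(my $fh, '<:raw', "$dir/$file") or next;
        read($fh, my $sig, 2);           # read the first two bytes
        close($fh);
        print "$file appears to be an executable!\n"
            if (defined $sig && $sig eq "MZ");
    }
    closedir($dh);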

Let's say you pick a couple of those files at random, export them, and after analysis using some of your favorite tools, determine that some of them are packed or obfuscated in some way.  Wouldn't it be really cool to have this kind of information in your timeline in some way, particularly within the context that you're using for your analysis? 

For an example of why examining the user's temporary folder might be important, take a look at Corey Harrell's latest Malware Root Cause Analysis blog post.

Benefits
The benefits of adding these two practices to our timeline creation and analysis process are that we automate the collection and presentation of low-hanging fruit, increase the efficiency with which we do so, and reduce the potential for mistakes (forgetting things, following the wrong path to various resources, etc.).  As such, root cause analysis becomes something that we no longer have to forego because "it takes too long".  We can achieve that "bare metal analysis".

Summary
When creating timelines, we want to look at adding value, not volume (particularly not for volume's sake).  Yes, there is something to be said regarding the value of seeing as much activity as possible that is related to an event, particularly when external sources of information regarding certain aspects of an event may fall short in their descriptions and technical details.  Having all of the possible information may allow you to find a unique artifact that will allow you to better monitor for future activity, to find indications of the incident across your enterprise, or to increase the value of the intelligence you share with other organizations.


Tuesday, July 17, 2012

Thoughts on RegRipper Support

One of the things I've considered recently is taking a more active role in supporting RegRipper, particularly when it comes to plugins.

When I first released RegRipper in 2009 or so, my hope was that it would be completely supported by the community.  For a while there, we saw some very interesting things being done, such as RegRipper being added as a viewer to EnCase.  Over the past year or so, largely thanks to the wonderful and greatly appreciated support of folks like Brett Shavers and Corey Harrell, RegRipper has sort of taken off.

From the beginning, I think that the message about RegRipper has been largely garbled, confused, or simply misunderstood.  As such, I'd like to change that.

When someone has wanted a plugin created or modified, I've only ever asked for two things...a concise description of what you're looking for, and a sample hive.  Now, over the past 3 or so years, I've received requests for plugins accompanied by either a refusal to provide a sample hive, or the sample hive simply being absent.  For those of you who have provided a sample hive, you know that I have treated that information in the strictest confidence and wiped the hive or hives after usage.  In addition, rather than being subjected to a barrage of emails to get more information about what you are looking for, those of you who have provided sample hives have also received the requested plugin in very short order, often as quickly as within the hour.

One of the things I've tried to do is be responsive to the community regarding needs.  For example, I provided a means for listing available plugins as part of the CLI component of RegRipper (i.e., rip.pl/.exe).  As this is CLI, some folks wanted a GUI method for doing the same thing, so I wrote the Plugin Browser.  Even so, to this day, I get questions about the available plugins; I was recently asked if two plugins were available, one of which had originally been written almost 3 years ago, while the other I'd written two months ago.

I'm not trying to call anyone out, but what I would like to know is, what is a better means for getting information out there and in the hands of those folks using RegRipper?

Recently, some confusion in the RegRipper message became very apparent to me, when word was shared across the community that another Perl script I had released was a RegRipper plugin.  It turned out that, in fact, that script had nothing whatsoever to do with either RegRipper or the Registry.

Speaking of plugins, there are a number of folks who've taken it upon themselves to write RegRipper plugins of their own, and share them with the public, and for that, I salute you.  Would it be useful to have a testing and review mechanism, or at least identify the state (testing, dev, beta, final) of plugins?

Finally, I've written a good number of plugins myself that I haven't yet provided to the general public.  I have provided many of those plugins to a couple of folks within the community who I knew would use them (and have), and provide feedback.  In some cases, I haven't released the plugins because of the amount of confusion there seems to be with regard to what a plugin is and how it's used by RegRipper; i.e., as it's currently written, you can't just drop a plugin into the RegRipper plugins directory and have it run by RegRipper (or via rip.pl/.exe).  Some effort is required on the part of the analyst to include a plugin in a profile in order to have it run by RegRipper.
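For those who haven't seen one, a profile is nothing more than a text file in the plugins directory that lists the plugins to be run, one per line.  The plugin names below are just examples:

    # sample "ntuser" profile - plugins are run in the order listed
    recentdocs
    userassist
    trustrecords

The profile is then passed to rip via the -f switch; i.e., "rip.pl -r NTUSER.DAT -f ntuser".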

As such, I've considered becoming more active in getting the message about RegRipper out to the DFIR community as a whole, and I'd like to know, from the folks who use RegRipper, how would we/I do a better job with RegRipper, as well as in supporting it?

Publishing DFIR Materials, pt II

After posting on this topic previously and getting several comments, along with comments via other venues, I wanted to follow up with some further thoughts on opportunities for publishing DFIR materials.

When it comes to publishing DFIR materials, there are a number of ways to publish or provide information to the community, from tweets and commenting on blog posts, all the way up to writing books.  As it turns out, this may be a good way to define the spectrum for publishing DFIR materials...starting with blog posts, and progressing through a number of media formats to book publishing.

Based on some previous thought and comments, I wanted to share some of the different publishing mechanisms within that spectrum, as well as what might be pros and cons of each of them.

Blogging
Blogging is a great way to make DFIR information (both technical and non-technical) available to the community.  It can be quick, with some bloggers posting within minutes of an event or of finding information.

One of the best examples of DFIR blogging that I've seen, in which the content is consistently excellent, is Corey Harrell's Journey Into IR blog.  Corey's posts are consistently well-written, insightful, and chock full of great technical information.  Corey has taken the opportunity a number of times to post not only his research set-up, but also a comprehensive write-up regarding the tools and techniques he used, as well as his findings.

Pros
Blogging is a great way to get information out on a particular topic, particularly if your goal is to get the initial information out to show the results of a tool or initial research.  This is a great way to see if anyone else is interested in what you're working on, either investigating or developing, and to see if there's interest in taking the research further.

This is also a great way to quickly put out information regarding findings, particularly if it requires more than 140 characters, but isn't too voluminous or extensive.

Cons
There can be a number of 'cons' associated with this publishing mechanism...one of which is the simple fact that it can be very difficult to keep up with all of the possible blogs that are out there.  Also, with blogs, it can be difficult to see the progression of information as (or 'if') it is updated.

Call me a spelling n@zi if you like, but another issue that I see with blogs is that pretty much anyone can create one, and if the author has little interest in such things as grammar and spelling, it can be difficult to read, if not find, such blogs.  If you're searching for something via Google as part of your research, and someone posted a great blog post but opted to not spell certain terms properly, or opted to not use the generally accepted terminology (i.e., Registry key vs. value), you might have some difficulty finding that post.

If you're interested in purely technical DFIR information, blogs may or may not be the best resource.  Some authors do not feel the need to research their information or provide things such as references, and some may not provide solely technical information via their blog, using it also as a personal diary or political platform.  There's nothing wrong with this, mind you...it's just that it may be difficult to find something if it's mixed in with a lot of other stuff.

Some blogs and blog posts provide nothing more than a list of links, with no insight or commentary from the author.  While this method of blogging can provide the information to a wider audience than would normally view the original blog post, it really doesn't do much to further the community as a whole.  If someone posts about finding and using a tool, and you feel as if you want to post a blog of your own, why not provide some insight into how you found the tool to be useful, or not, if that's the case?  What if it's not a tool, but information...wouldn't it be useful to others within the community if you shared how you put that information to use?

Wiki
I like wikis, as they can provide a valuable means for maintaining updated, accurate information, particularly on very technical subjects.  Most of the formats I've seen include the ability to add references and links to supporting information, which add credibility to the information being provided.  Blogs provide this as well, but a wiki allows you to edit the information, providing the latest and most up-to-date information in one location.

Pros
Wikis can be extremely beneficial resources, in that they can provide a single, updated repository of information on DFIR topics.

Perhaps the best use of a wiki is as an internal resource, one in which members of your team are the only ones who can access it and update it.

Cons
One of the primary cons I would associate with the use of Wikis is that a lot of folks don't seem to use them. One of the wikis I frequent is the ForensicsWiki; while I find this to be a valuable resource and have even posted information there myself, my experience within the public lists and forums is that most folks don't seem to consider going to sites such as this, or using them as a resource.  I know that schools and publishers, including my own, frown upon the use of wikis as references, but if the information is accurate (which you've determined through research and testing), what's really the issue?

PDFs
After I got involved in writing books, I started to see the value of providing up-to-date documents on specific analysis topics.  Rather than writing a book, take a single analysis technique (say, file extension analysis), or a series of steps to perform a specific type of analysis (i.e., determining CD burning by a user, etc.), write it up into a 6-10 page PDF document and release it.

To see an example of this publishing mechanism, go to my Google Code book repository, download the "RR.zip" archive, and look in the DVD\stuff subdirectory.  You'll find a number of PDF documents that I'd written up and provided as "bonus" material with the book.  Since releasing this information, I haven't heard from anyone how useful it is, or if it's completely worthless.  ;-)

Another excellent example of this sort of publishing is a newsletter such as the Into The Boxes e-zine.  It's unfortunate that the community support wasn't there to keep Don's efforts going.  Another excellent example of using this mechanism to publish DFIR information is the DFIR poster that the SANS folks made available recently.

Pros
This mechanism can be extremely valuable to analysts in a number of areas.  While I was on the IBM team, we wanted to have a way to provide analysts with information that they could download and take with them when they were headed out on a response engagement.  This was "just-in-time" familiarization and/or training that could get an analyst up to speed on a particular topic quickly, and could also be used as an on-site reference.  Our thinking was that if we had someone who had to go on-site in order to acquire a MacBook, or a boot-from-SAN device, or try to conduct a live acquisition of a system that has only USB 1.0 connections, we could provide extremely useful reference information so that the analyst could act with confidence, which is paramount when you're in front of a customer.  Many times, while we had other analysts who were just a phone call away, we would find ourselves either in a data center with no cell phone signal, or standing directly in front of a customer.

I talked to several LE analysts about this type of JIT training, and received some enthusiastic responses at the time.  Having 6-10 page PDFs that can be printed out and included in a binder, with updated PDFs replacing older information, was seen as very valuable.  I know that some folks have also expressed a desire to have something easily searchable.

Cons
This publishing mechanism depends on the expertise of the individual author, and their willingness to not only provide the information, but keep it up to date.  If this is something that someone just decides to do, then you have similar issues as with blogging...spelling, grammar, completeness and accuracy of information.  One way around this is to have a group available, either through volunteers or a publisher, that provides for reviews of submitted material, checking for clarity, consistency, and accuracy.

IOCs/Plugins
IOCs, or "indicators of compromise", should be included as a publishing mechanism, as they are intended for sharing information and/or intelligence, albeit following a specific structure or specification.  Perhaps the most notable effort along these lines is OpenIOC.org, which uses a schema developed by the folks at Mandiant.  The OpenIOC framework is intended to provide an extendable, customizable structure for sharing sophisticated indicators and intelligence in order to promote advanced threat detection.

I would also include plugins in this category, particularly (although not specifically) those associated with RegRipper.  I know that other tools have taken up an approach similar to RegRipper's plugins, and this is a good place to include them, even if they don't follow as structured a format as IOCs.

Pros
IOCs and plugins can be a great publishing mechanism, providing for the retention of corporate knowledge, as well as being a force multiplier.  Let's say someone finds something after 8 or 12 hrs of analysis, something that they hadn't seen before...then they write up an IOC or plugin, and share it with their 10 other team members.  With a few minutes of time, they've just saved their team at least 10 x 12 hrs, or 120 hrs of work, where each team member (assuming equal skill level across the team) would have had to spend 12 hrs of their own time to find that same indicator.  Now, each team member has 100% of the knowledge and capability for locating the indicator, while having spent no time attaining that knowledge.

IOCs and plugins put tools and capabilities in the hands of the analysts who need them, and using the appropriate mechanism for querying for the indicators provides for those indicators to be searched for every time, in an automated manner.

Cons
One 'con' I have seen so far with respect to IOCs is that there is either a limitation within the schema, or a self-imposed (by the IOC author) limitation of some kind.  What I mean by this is, I've seen several malware-specific IOCs released online recently, and in some cases, there is no persistence mechanism listed within the IOC.  I contacted the author, and was told that while the particular malware sample used the ubiquitous Run key within the Windows Registry for persistence, the value name used could be defined by the malware author and was, in essence, completely random.  As such, the author found no easy means for codifying this information via the schema, and felt that the best thing to do was to simply leave it out.  To me, this seems like a self-imposed blind spot and a gap in potential intelligence.  I'm not familiar enough with the OpenIOC schema to know whether it provides a means for identifying this sort of information, but I do think that leaving it out creates a significant blind spot.
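For what it's worth, here's a rough sketch of what I'd want to express in such a case...matching on the Run key path alone and ignoring the (random) value name.  I'm writing this from my own limited reading of the OpenIOC schema, so the context and term names (e.g., "RegistryItem/Path") are assumptions on my part and should be verified against the actual specification before use:

    <Indicator operator="OR">
      <!-- sketch only: flag any value beneath the Run key, regardless
           of the value name the malware author chose to use -->
      <IndicatorItem id="example-run-key" condition="contains">
        <Context document="RegistryItem" search="RegistryItem/Path" type="mir" />
        <Content type="string">\Microsoft\Windows\CurrentVersion\Run</Content>
      </IndicatorItem>
    </Indicator>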

Another 'con' associated with IOCs that I have heard others mention, particularly at the recent SANS Forensic Summit, is that no one is going to give away their "secret sauce", thereby giving up their competitive advantage.  I would go so far as to say that this applies to other publishing mechanisms, as well, and is not a 'con' that is specific solely to IOCs.  Like most, I am fully aware that while some sites (i.e., blogs, etc.) may provide DFIR information, not all of it is necessarily cutting edge.  In fact, there are a number of sites where, when DFIR information does appear, it is understood to be 6 months or more old, for that particular provider, even though others may not have seen it before.

As with IOCs, RegRipper plugins can be difficult for folks to write, or write correctly, on their own.  This can be particularly true if the potential author is new to either programming or to the response and analysis techniques that generally go hand-in-hand with, or precede, the ability to write IOCs and plugins.

Short Form
I recently had a discussion with a member of the Syngress publishing staff regarding a "new" publishing format that the publishing company is pursuing; specifically, rather than having authors write an entire book, have them instead write a "module", which is not so much a part or portion of a book, but more of a standalone publishing mechanism.  The idea with this "short form" of publishing is that DFIR information will be available to the community quicker, as the short form is easier for the author to write, and for the publisher to review and publish.

A very good example of short form publishing is the IACIS Quick Reference from Lock and Code, which is an excellent reference, and available in both a free and a for-fee form.

In a lot of ways, this is very similar to the PDF publishing mechanism I mentioned earlier, although this mechanism can run to over 100 pages; while it's longer than a PDF, it is still shorter than a complete book.

Pros
Benefits of this publishing mechanism are that the information is more complete than a blog post or PDF, is reviewed by someone for technical accuracy, as well as spelling and grammar, is formatted, and is available quicker than a full book.

Another benefit of this mechanism is that folks can pick the modules that they're interested in, rather than purchasing a full book of 8 or more chapters, when they're only interested in about half of the content.  Hopefully, this will also mean that folks who are interested in several modules and want a hard-copy version of the material can choose the modules that they want and have them printed to a soft-bound edition.

Cons
Even the short form publishing mechanism can take time to make it "to market" and be available to the community.  For example, in my experience, it can take quite a while for someone to write something that is 100 pages long, even if they are experienced writers.  Let's say that the author is focused, has some good guidance and motivation, and gets something through the authoring, review, and revision process in 90 days. How long will it take to then have that information available to the public?  At this point, everything is dependent upon the publisher's schedule...who is available to review the module and get it into a printer-ready format?  What about contracts with printers?  Will an electronic version of the module be ready sooner than the hard-copy version?

Books
Books are great resources for DFIR information, whether the author is going through a publishing company, or following a self-publishing route.

Pros
One of the biggest 'pros' of publishing books containing DFIR material is that a publishing company has a structure already set up for publishing books, which includes having the book technically reviewed by someone known within the field, as well as reviewed for consistency, grammar, spelling, etc., prior to being sent to the printer.

Writing a book can be an arduous undertaking, and keeping track of everything that goes into it...paragraph formats, side bars, code listings, figures, etc...can be a daunting task.  Working with a publisher means that you have a signed contract and schedule to meet, which can act as the "hot poker motivation" that is often needed to get an author to sit down and start writing.  As chapters are written, they're sent off to someone to perform a technical review, which can be very beneficial because the author may lose sight of the forest for the trees, and having someone who's not so much "in the weeds" review the material is a great way to keep you on track.  Finally, having someone review the finished product for grammar and spelling, and catching all of those little places where you put in the wrong word or left one out, can be very helpful.  Overall, this structure adds credibility to the finished product.

Cons
Publishing a book can take some time.  My first book literally took me a year to write, and from there, 3 - 3 1/2 months to go from an MS Word manuscript to the PDF proofs to a published book available for purchase.  Due to the amount of time and effort it takes, some authors who start down the road of writing a book, and even get as far as a signed contract, never reach the point of having a published book.  As I've progressed along in writing books, I've been able to reduce the total amount of time between the signed contract and the publication date, but the fact is that it can still take a year or more.

Another aspect of the book form is that different publishers may support different ebook formats.  When my first book was published with Syngress, there was a PDF version of the book available, and for a while after the soft-bound book was available, those purchasing the book via Syngress would also receive a PDF copy of the book, as well.  However, shortly thereafter, Syngress was purchased by Elsevier, a publishing company that does not support the PDF ebook format.

One of the benefits for some folks, believe it or not, of working with a publisher is that they have a schedule and the 'hot poker motivation' to get the work done.  As such, one of the detriments of self-publishing is that without the necessary internal stimulus to keep the author to a schedule, the finished product may never materialize.

Overall Pros
Publishing DFIR information can potentially make us all stronger examiners.  No single analyst knows everything that there is to know about DFIR analysis, but working together and sharing information and intel, we can all be much stronger analysts.  It's the sharing of not only information and intelligence, but digesting that information, providing insights on it, and engaging in discussions of it that makes us all stronger analysts.

Some information changes quickly, while other information remains pretty consistent over a considerable length of time.  Choosing the appropriate publishing mechanism can make the appropriate information available in a timely manner; for example, a blog post can raise awareness of an issue or indicator, which can lead to more research and the creation of a tool, an IOC, or a plugin.

Overall Cons
All publishing mechanisms rely on the interest and desire of the author(s) to provide information, to research it, and to keep it up to date.  Sometimes due to work, life, or simply lack of interest, information isn't kept up to date.  However, the 'pro' that can come out of this, the 'silver lining' if you will, is that perhaps all that is needed is to provide that initial information on a topic, and someone else may pick it up.

Another significant 'con' associated with publishing DFIR material to the community in general is a lack of support and feedback from the community.  Well, to be honest, this is a 'con' only if it's something that you're looking for; I happen to be of a mind that no one knows everything, and no one of us is as smart as all of us. As such, I honestly believe that the way to improve overall is to provide insightful commentary and feedback to someone who has provided something to the community, be it a tool or utility, or something published using any of the above means.  If someone provides DFIR information, I try to take the time to let them, or the community as a whole, know what was useful about it, from my perspective.  Feedback from the community is what leads to improvement in and expansion of the material itself.  Not everyone has the same perspective on the cases that they work, nor of the information that they look at on any particular case.  You may have two examiners with the same system, but as they're from different organizations, the goals of their exam, as well as their individual perspectives, will be different.

Goals
Finally, something needs to be said about the goals of publishing DFIR information; often, this is highly personal, in that an author's goals or reasons for publishing may be something that they do not want to discuss...and there's nothing wrong with that.

Usually, if someone is publishing DFIR information, it's because they wanted, first and foremost, to make the information available and contribute something to the community at large.  However, there can be other goals to publishing that motivate someone and direct them toward a particular publishing mechanism.  For example, writing blog posts that are of a consistently high quality (with respect to both the information and the presentation) will lead to that author becoming recognized as an expert in their field.  One follow-on to this that is often mentioned is that by being recognized as something of an expert, that author will be consulted for advice, or contracted with to perform work...I personally haven't encountered this, per se, but it is something that is mentioned by others.

Another goal for publishing information and choosing the appropriate mechanism is that the author(s) may want to be compensated for their time and work, and who can really blame them?  I mean, really...is it such a bad thing to, after sacrificing evenings and weekends to produce something that others find of value, want to take your wife to dinner?  How about to raise money to contribute to a charity?  Or to pay for the tools that you purchased in order to conduct your research, or tools you'll need to further that research?