Saturday, April 29, 2006

Future Trends

Would I be remiss if I were to NOT discuss future trends in computer forensics?

Every now and then you see the curious posting questions about future trends and challenges in the computer forensics field, and invariably, the responses include something to do with the increase in the density of storage media. For example, information was recently leaked from Seagate regarding 750GB drives. But is this really a "future trend"?

Think about it. Not long ago, those tasked with performing computer forensics were facing 100 or 200 MB drives...yes, "megabyte", with an M. Even today, larger capacity with a smaller form factor is just something we deal with. So...if this is something we've been dealing with from the beginning, does it really constitute a "future trend"?

Rather than sitting back and being driven by the course of events, IMHO, forensic analysts need to be the driving force in the future trends within the community. Specifically, there needs to be a greater level of education. I know that this is very easy for me to say, sitting here at oh-dark thirty, blogging away. However, I sincerely believe that this is the case. Let me provide some background and perhaps illuminate what I'm referring to...

Computer systems are becoming ever-more sophisticated. The bad guys are, too. Things that used to be done for fun are now being done for profit, or revenge. The face of computer crime itself is changing. While computer forensic analysis techniques are changing, they aren't being updated at anywhere near the same rate as the techniques used by those who end up becoming the focus of an investigation. There are still many folks out there, tasked with performing computer forensics, who firmly believe (through their initial training) that a computer forensics investigation begins with unplugging the affected system, securing it, and imaging the hard drive.

But what happens when you do this? Think of the massive amounts of data that are lost when power is removed from a system. Think of a fraud or sexual harassment investigation in which data was stored on the Clipboard. Think about the malware that only exists in memory. Personally, I'm reminded of a case from 2000 in which someone else determined that the SubSeven Trojan was on a system via a file search...after power had been removed from the system. Sure, the MAC times on the files would give the investigator some information, but no one could say for sure (a) if the backdoor was running when the system was unplugged, (b) if a bad guy was connected to the backdoor, or (c) if the "suspect" was connecting to another infected system somewhere on our corporate network.

One of the main techniques still in use today by forensic examiners is the keyword search. Don't get me wrong...there's nothing wrong with this technique; it's proven to be quite useful. However, it should be a tool, not the tool, in the investigator's toolbox. Keyword searches across file systems and sectors can be fruitful, but not everything is stored on a system in ASCII or Unicode. Take a look at the Windows Registry...many important pieces of information are stored in binary format, or via ROT-13 "encryption". Both of these will cause simple keyword searches to fail.
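To see how ROT-13 "encryption" defeats a keyword search, here's a quick Perl illustration using a UserAssist-style value name (the name shown is a typical example):

```perl
#!/usr/bin/perl
# quick illustration: ROT-13 "encryption" defeats a plain keyword search
use strict;

my $raw = "HRZR_EHACNGU";                  # a value name as stored in the Registry
my $hit = ($raw =~ m/RUNPATH/) ? 1 : 0;    # keyword search comes up empty
(my $clear = $raw) =~ tr/a-zA-Z/n-za-mN-ZA-M/;
print "Search hit : $hit\n";               # 0
print "Decoded    : $clear\n";             # UEME_RUNPATH
```

Run the decode first, then search...and suddenly the "hidden" data is right there.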

Another thing to think about is disk encryption software. Unplug the power and what are you left with? Okay, now think about it this way...if you acquired the system live, what would you be left with?

Let's get right to the point...perhaps there really is no "future trend" in computer forensics, but rather, we're going to simply be revisiting the same old trends that we've faced in the past. IMHO, increased storage density isn't a new's something we've had to deal with for a while. HOW we deal with it is what's going to change the face of forensic computing...greater education and training will drive forensic investigators to include live response techniques (live acquisition, volatile data collection and analysis, etc.) in their "bag of tricks", AND allow them to be able to testify about these techniques and data in court.

One final note...there are those who say that they would never perform a live investigation until there's case law and court decisions supporting the use of these techniques. Okay...we're back to the chicken or the egg argument. My response is to say that rather than waiting for the courts to make a change, the investigators need to start moving in that direction first, getting training and knowledge to not only perform live response but to also be able to present and explain that information in court. After all, many of us are already performing live response investigations, as well as Registry analysis, as a matter of course.


Thursday, April 20, 2006

New ProScripts

The user forums at TechPathways were recently revamped and updated (WRT functionality). In the process, previous threads were lost. I've added a couple of ProScripts that you may find helpful...

One of the scripts parses through the Registry (the script makes the assumption that there is only one Windows image in the project, and only one Registry) and pulls out user SIDs. From there, it goes through the HKEY_USERS hive and parses out the UserAssist keys...you know, the ones with the values that are ROT-13 "encrypted". If possible, the script also pulls out timestamps from the value data. Here's an example of the output when I ran the script against the hacking case image:

UEME_RUNPATH:C:\Program Files\Cain\Cain.exe --> Fri Aug 27 15:33:02 2004
UEME_RUNPATH:C:\Program Files\Whois\whois.exe --> Thu Aug 26 15:13:57 2004
UEME_RUNPATH:C:\WINDOWS\System32\telnet.exe --> Thu Aug 26 15:05:15 2004
UEME_RUNPATH:C:\Program Files\Network Stumbler\NetStumbler.exe --> Fri Aug 27 15:12:35 2004
UEME_RUNCPL:"C:\WINDOWS\System32\appwiz.cpl",Add or Remove Programs --> Fri Aug 27 15:14:44 2004
UEME_RUNPATH:C:\Documents and Settings\Mr. Evil\Desktop\WinPcap_3_01_a.exe --> Fri Aug 27 15:15:08 2004
UEME_RUNPATH:C:\Documents and Settings\Mr. Evil\Desktop\ethereal-setup-0.10.6.exe --> Fri Aug 27 15:28:36 2004
UEME_RUNPATH:C:\Program Files\Ethereal\ethereal.exe --> Fri Aug 27 15:34:54 2004

So why is this important? Well, for one, it ties activity such as running executables to a specific user. This info is pulled right out of the NTUSER.DAT file, and is visible in ProDiscover under the HKEY_USERS hive.
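Parsing the timestamps out of the UserAssist value data boils down to a FILETIME conversion; here's a minimal Perl sketch. The 64-bit FILETIME value is a count of 100-nanosecond intervals since 1 Jan 1601, so converting it to a Unix epoch time (suitable for gmtime()) looks something like this (for exact math on very large values, Math::BigInt is an option):

```perl
#!/usr/bin/perl
# sketch: convert a FILETIME (2 little-endian DWORDs) to a Unix epoch time
use strict;

sub filetime_to_epoch {
    my ($lo, $hi) = @_;
    my $ft = ($hi * 4294967296) + $lo;      # 100-ns ticks since 1 Jan 1601
    return int($ft / 10000000) - 11644473600;
}

# 0x019DB1DED53E8000 is the FILETIME for 1 Jan 1970 00:00:00 UTC
my $epoch = filetime_to_epoch(0xD53E8000, 0x019DB1DE);
print scalar(gmtime($epoch)), "\n";        # Thu Jan  1 00:00:00 1970
```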

This information can be correlated to the contents of the Prefetch directory (on XP systems, which perform application prefetching by default). I wrote a ProScript that would run through the Prefetch directory and list the .pf files. For each one, it retrieves and displays the last run timestamp and run count from the contents of the file. See the following excerpt from the output of the ProScript:

Name :
Last Run : Fri Aug 27 15:33:03 2004
Run Count : 2

Name :
Last Run : Fri Aug 27 15:12:35 2004
Run Count : 1

Name :
Last Run : Thu Aug 26 15:05:15 2004
Run Count : 1

Name :
Last Run : Fri Aug 27 15:15:08 2004
Run Count : 1

Notice how the "Last Run" times from the .pf file correlate with the same time from the UserAssist key? So now, if we find an interesting .pf file in the Prefetch directory, we have a way to correlate it and tie it to a particular user. Of course, we can also use the Security Event Log for further correlation...if it is configured to audit logins.

Note that the Prefetch ProScript requires ProDiscover 4.642 (which should now be available) or greater. This is due to updates in one of the APIs.

Yet another ProScript copies the Event Log files out of the Windows\system32\config directory so you can use File::ReadEVT to pull out the data, collect statistics, etc. And I reposted the ProScript that parses V and F values out of the SAM file to determine user information and group membership.

Saturday, April 15, 2006

Reassembling an image file from a memory dump

Andreas Schuster posted a tutorial on his blog a bit ago for reassembling the executable image of a process from a dump of physical memory. As I've already got Perl code for parsing PE headers, why not automate the process?

So far, I've completed the first step in reassembling the image...parsing the PE headers. To do this, locate the PEB for the process (from the information located in the EPROCESS block...use lsproc and then lspd from my SourceForge site to get the necessary information) and from there, get the value for the ImageBaseAddress (a DWORD located at offset 0x08 within the PEB). Convert that virtual address to a physical offset within the dump file (done by the code) and, if the physical offset is "present", read in the 4K page located at that address. Now, the PE header isn't usually 4K (4096 bytes) in size, so most of the page will be zeros. However, we can parse out the information we need. For example, I ran some tests using the process named "nc.exe"...

The initial information looked good:

DOS header located.
e_lfanew = 128 (0x00000080)
NT Header = 0x4550

The "DOS Header" is "MZ", and the value for e_lfanew is something that we're going to use for several calculations. The NT Header is valid, as it translates to "PE". Next, we read the Image File Header and find that there are 4 sections, and the image has the following characteristics:


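The DOS and NT header checks just described can be sketched in Perl; here, $page is assumed to hold the 4K page read at the ImageBaseAddress:

```perl
# sketch: validate the PE header in a 4K page read at the ImageBaseAddress;
# returns e_lfanew on success, 0 otherwise
sub check_pe_header {
    my $page = shift;
    # DOS header starts with "MZ" (0x5a4d, little-endian)
    return 0 unless (unpack("v", substr($page, 0, 2)) == 0x5a4d);
    # e_lfanew (DWORD at offset 0x3c) points to the NT header
    my $e_lfanew = unpack("V", substr($page, 0x3c, 4));
    # NT header should be "PE\0\0" (0x00004550)
    return 0 unless (unpack("V", substr($page, $e_lfanew, 4)) == 0x00004550);
    return $e_lfanew;
}
```

For the nc.exe page, this returns 128 (0x80), matching the e_lfanew value shown above.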
From the Image Optional Header, we get:

Opt Header Magic = 0x10b
Entry Pt Addr : 0x00004c00

Notice the value for the entry point address. This will be important later, particularly when we perform detailed analysis of the image file itself. Tools such as PEiD use entry point analysis to determine things such as packers and encryption used on obfuscated binaries (mostly malware). If you know what you're doing, you can get a lot of information by determining not only the location of the entry point, but also the contents (first 100 bytes or so) of the entry point.

Moving on, we pull out the Image Data Directories:

Data Directory RVA Size
-------------- --- ----
ResourceTable 0x00000000 0x00000000
DebugTable 0x00000000 0x00000000
BaseRelocTable 0x00000000 0x00000000
DelayImportDesc 0x00000000 0x00000000
TLSTable 0x00000000 0x00000000
GlobalPtrReg 0x00000000 0x00000000
ArchSpecific 0x00000000 0x00000000
CLIHeader 0x00000000 0x00000000
LoadConfigTable 0x00000000 0x00000000
ExceptionTable 0x00000000 0x00000000
ImportTable 0x00012000 0x0000003c
unused 0x00000000 0x00000000
BoundImportTable 0x00000000 0x00000000
ExportTable 0x00000000 0x00000000
CertificateTable 0x00000000 0x00000000
IAT 0x000121a0 0x00000164

We see from this that the only data directories are the Import (Name) Table and the Import Address Table (IAT). Once we reassemble the image, we will be able to use this information to determine which DLLs the image accesses, and which functions from those DLLs it imports.

Finally, we look at the Image Section Headers:

Name Virt Sz Virt Addr rData Ofs rData Sz Char
---- ------- --------- --------- -------- ----
.text 0x00009770 0x00001000 0x00000400 0x00009800 0x60000020
.data 0x00005244 0x0000c000 0x0000a200 0x00003e00 0xc0000040
.idata 0x0000075c 0x00012000 0x0000e000 0x00000800 0xc0000040
.rdata 0x00000417 0x0000b000 0x00009c00 0x00000600 0x40000040

Remember, from the Image File Header, we found that there were 4 sections. Now we have more detailed information about those sections.
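As an aside, the section information gives us what we need to map an RVA, such as the entry point address (0x4c00, above), to a raw offset in the reassembled file. Here's a sketch using the values from the nc.exe output (the hash layout is mine, not part of any tool mentioned here):

```perl
#!/usr/bin/perl
# sketch: map an RVA (such as the entry point) to a raw file offset
# using section header info (values taken from the nc.exe output above)
use strict;

sub rva_to_raw {
    my ($rva, @sections) = @_;
    foreach my $sec (@sections) {
        if ($rva >= $sec->{va} && $rva < $sec->{va} + $sec->{vsz}) {
            return $rva - $sec->{va} + $sec->{raw};
        }
    }
    return undef;    # RVA not backed by any section
}

my @sections = (
    { name => ".text",  vsz => 0x9770, va => 0x1000,  raw => 0x400  },
    { name => ".data",  vsz => 0x5244, va => 0xc000,  raw => 0xa200 },
    { name => ".idata", vsz => 0x075c, va => 0x12000, raw => 0xe000 },
    { name => ".rdata", vsz => 0x0417, va => 0xb000,  raw => 0x9c00 },
);

printf "Entry point at raw offset 0x%08x\n", rva_to_raw(0x4c00, @sections);
```

The entry point falls within .text, so the raw offset works out to 0x4c00 - 0x1000 + 0x400 = 0x4000.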

Using the code I have now, I took a look at the same process that Andreas used in his tutorial...dd.exe, PID 284. I extracted the same information he listed, down to the section headers. Now, to begin automatically reassembling the image, we simply need to follow the advice given by Andreas; specifically, given the information we've extracted from the PE header, go back into the physical dump file, extract those pages and reassemble them in order. The test will then be not only to view the complete PE header information, Import Table and resources of the file using something like File::ReadPE (or use tools such as pedump or PEView), but to also actually launch the image.

Friday, April 14, 2006

Why Perl?

I haven't actually been asked this question yet, but I'm anticipating it...why Perl? Why am I using Perl for the RAM dump parsing tools, and not C, Java, or C#?

Well, first of all, Perl is pretty ubiquitous's everywhere. Perl is not only used for system administration, but for Web-based (CGI) interfaces, and a wide variety of other activities. Perl is used in security projects such as The SleuthKit and the MetaSploit Framework, and is used extensively throughout my book and the Real Digital Forensics book.

Perl is great as a quick prototyping tool. Starting from scratch, or by using some code you've already got, you can put together a tool for performing a specific task, and run it quickly. If there are any errors (yes, I know I need to use "-w" more), you can quickly open the file up, locate the error and fix it. As Perl is an interpreted language, you don't have to rebuild/recompile the can run it as soon as you make modifications and save them.

Perl is pretty easy to read...when compared to other languages. I like to comment my code, and will sometimes have more comments in a script than lines of code. Perl is also a great educational tool, as you can clearly walk through your script and show where different things are done: opening files, writing to a file, accessing an API, closing a file, etc.

One other thing I like about Perl is that because Perl is available on other (Mac, Linux, etc.) systems besides just Windows (yeah, imagine that!), with a little care (predominantly toward endianness) the script I write on Windows can be run on those other platforms. What this means is that a forensic analyst isn't restricted to performing analysis of a Windows system on a Windows system. So, if you dump the contents of physical memory to a file and you're analyzing that file on a Linux system (or if you're Jesse Kornblum, on a Mac), then you're not stuck and restricted to only tools written for that platform. The same is true if you're analyzing an image of a Windows system, and you're on Linux.

Now, if you are on Windows and don't want to install Perl, I try to provide standalone executables of the Perl scripts via Perl2Exe or PAR. These aren't really executables in the sense of a C/C++ file compiled into a PE file, but more the Perl script wrapped up with the Perl interpreter. So, the EXE files will be larger than a "normal" EXE.

As with my last post, I hope this makes sense.


When it comes to using dd.exe to dump the contents of physical memory from a Windows system, one thing that's been asked is...why? Why is any of this important? Why use dd.exe to dump the contents of RAM, and then why all the effort to write tools to parse through the resulting file? After all, if it really were important, wouldn't someone have done all of this already?

For a long time, the "normal" forensics steps have been to document a scene, then remove power from the system before creating a forensic image of the hard drive. However, in recent years, there's been a realization, even among law enforcement, that there may be something of value found in the volatile memory of the system...perhaps even something evidentiary. For example, we know that the Clipboard occupies an area of memory designated by the system, and we can run a tool that will dump the contents of the Clipboard.

When I've talked to some law enforcement officers at conferences I've attended, I've asked them why they collect the contents of RAM, rather than say, run specific tools to get things like the contents of the Clipboard, memory used by specific processes, etc. In most cases, I'm simply told, "we want it all." When I ask, "what for?", I'm usually met with stares or responses like, "in case we need it."

Don't get me wrong...there may be something of value in memory that can be used to help further the investigation. Folks have found evidence of malware, passwords, etc., but most of the examination has been hit or miss...create the dump, then run strings on it to see what's there. At that point, you end up with a lot of output and you really have no way of tying what you found back to a particular process. Is that IP address or potential password you found in the output of strings from a malcode process, such as a backdoor, Trojan, or worm...or was it part of an email, or word processing document, or...? There's no way to know.

This is why I'm working on these tools. Not only because it's a challenge...and I have to thank several folks out there, particularly Andreas Schuster, for their help and assistance in moving this along. The other reasons are that (a) tools like this are needed...needed by folks who are working in the field and have a 512MB dump of RAM on their system, and no idea what to do next, and (b) knowledge of these things is needed. By talking to others, figuring things out, and presenting the information to more people, there might be a few people who get over that initial hurdle of "I don't know where to start", and start learning about this topic...and what we end up with is more folks with more knowledge, and we're all smarter.

I hope this makes sense.

Thursday, April 13, 2006

Updated lsproc

I've updated lsproc with some small changes. For one, I made a small change to the detection portion of the script.

The other change I made was to output the creation time of the process rather than the FLink/BLink values. In doing so, I ran across some interesting output. Take a look:

Proc 156 176 winlogon.exe 0x01045d60 Sun Jun 5 00:32:44 2005
Proc 156 176 winlogon.exe 0x01048140 Sat Jun 4 23:36:31 2005
Proc 144 164 winlogon.exe 0x0104ca00 Fri Jun 3 01:25:54 2005
Proc 156 180 csrss.exe 0x01286480 Sun Jun 5 00:32:43 2005
Proc 144 168 csrss.exe 0x01297b40 Fri Jun 3 01:25:53 2005
Proc 8 156 smss.exe 0x012b62c0 Sun Jun 5 00:32:40 2005

Looking at the output, most of the processes seem to have been started on Sun, Jun 5...and yet there are a couple of processes that were started well before then. Definitely something to look into.

As with the other tools, the Perl source and a standalone executable for Windows are available in the archive.

Prefetch files, revisited

I was listening to the latest CyberSpeak podcast (8 Apr) today, and picked up a little tidbit. With regards to those .pf files located in the Prefetch directory on Windows XP, Ovie and Bret stated that the DWORD located at offset 0x90 in the file records the number of times that particular application was launched, with the caveat that this does not apply to those applications autostarted (as via Registry entries). So this will tell you how many times the user launched that application.

Also, the guys said that the 2 DWORDs located at offset 0x78 make up the FILETIME object for the time that the application was last launched. This should probably correlate with the last write time on the .pf file itself.
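Those two tidbits translate into a short Perl sketch. Keep in mind that the offsets are per the podcast, apply to XP, and aren't officially documented by verify against your own systems:

```perl
#!/usr/bin/perl
# sketch: pull the last run time and run count from an XP .pf file
# (offsets 0x78 and 0x90 per the CyberSpeak podcast; XP-specific)
use strict;

sub pf_info {
    my $file = shift;
    my $data;
    open(my $fh, "<", $file) || die "Could not open $file: $!\n";
    binmode($fh);
    # last run time: FILETIME (2 little-endian DWORDs) at offset 0x78
    seek($fh, 0x78, 0);
    read($fh, $data, 8);
    my ($lo, $hi) = unpack("VV", $data);
    my $last_run = int((($hi * 4294967296) + $lo) / 10000000) - 11644473600;
    # run count: DWORD at offset 0x90
    seek($fh, 0x90, 0);
    read($fh, $data, 4);
    my $count = unpack("V", $data);
    close($fh);
    return ($last_run, $count);
}
```

Usage would look something like (the filename here is just an example): my ($last, $count) = pf_info("NTOSBOOT-B00DFAAD.pf"); print "Last Run : ", scalar(gmtime($last)), "\n";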

Anyone have any other tidbits like this that can be incorporated into a nice little Perl script? ;-)

Saturday, April 08, 2006

lspd posted

I've posted lspd (for "list process details") to the SourceForge site this morning.

First, I want to say that this is a tool written in Perl. The archive I posted includes the Perl source, as well as an executable compiled from the script using Perl2Exe (note: if you use this executable, you need to keep p2x587.dll with the exe at all times). If you don't want to use this executable, or want to modify the source and recompile it, take a look at the PAR module, which is available for Perl. PAR can produce either a portable package that you can run on other platforms (though this script does not use any additional modules), or an executable that is simply more portable.

I wanted to mention this up front, as you don't need Windows to run the can run it on any system that runs Perl. Well, take that with a caveat...I haven't actually tested this on some of the other platforms that run Perl, such as the TiVo.

So...on to the show.

lspd dumps the details of a process from a dd.exe-style dump file, such as those provided with the DFRWS 2005 Memory Challenge. The first thing you have to do is run the lsproc tool against the dump, and get the value of the offset for the process you're interested in.

For example, I ran lsproc against the first memory dump from the Memory Challenge, and found this:

Proc 1112 284 dd.exe 0x0414dd60 0x8046b980 0xff1190c0

From there, I ran the script against the dump file using the following command line:

C:\Perl> d:\hacking\dfrws-mem1.dmp 0x0414dd60

As you can see, lspd takes 2 arguments...the dump file, followed by the hex offset of the process you're interested in.

lspd ran very quickly, and dumped out process details, such as details from the EPROCESS block and process environment block (PEB), the command line used to launch the process, the Desktop name, the Window title, etc. lspd also extracts the modules and handles, if available.

Here's an example of the handles extracted for this process:

Type : File
Name = \Shells
Type : WindowStation
Type : WindowStation
Type : Desktop
Type : File
Name = \intrusion2005\audit.log
Type : File
Name = \intrusion2005\audit.log

lspd was also able to determine that the page located at the Image Base Address (from the PEB) contains a PE header. This will be addressed with another tool (tentative name: lspi, for "list process image").

One thing to keep in mind while using this tool...not all of the information that we're trying to extract is present in the physical memory dump file. When a virtual address is translated into a physical offset within the dump file, flags need to be checked to see if the 4K page is "present" (I put that in quotes b/c that's the name of the flag). Andreas Schuster has a very good tutorial about how this works, but I'm considering writing something specific to these tools, should I find the time to do so.
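For reference, here's a much-simplified sketch of that translation for non-PAE x86. It ignores large pages and the transition states Andreas describes, and assumes the file offset in the dump equals the physical address:

```perl
# sketch: x86 non-PAE virtual-to-physical translation against a raw dump,
# checking the "present" flag (bit 0) of the PDE and PTE along the way
sub v2p {
    my ($fh, $pgd_base, $va) = @_;
    my $pde = read_dword($fh, $pgd_base + (($va >> 22) & 0x3ff) * 4);
    return undef unless ($pde & 0x01);            # PDE not present
    my $pte = read_dword($fh, ($pde & 0xfffff000) + (($va >> 12) & 0x3ff) * 4);
    return undef unless ($pte & 0x01);            # PTE not present
    return ($pte & 0xfffff000) | ($va & 0xfff);   # page base + page offset
}

sub read_dword {
    my ($fh, $ofs) = @_;
    my $data;
    seek($fh, $ofs, 0);
    read($fh, $data, 4);
    return unpack("V", $data);
}
```

When v2p() returns undef, the data for that 4K page simply isn't in the dump file...which is exactly the situation described above.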

As always, if you have any comments or questions, please feel free to contact me.

Friday, April 07, 2006

lsproc released

I've released lsproc.exe to the WindowsIR site on SourceForge. This is a small tool that parses through a dd.exe-style dump of physical memory (RAM) from a Windows 2000 system, locating EPROCESS blocks. The program prints out some information about each process, as shown here (an excerpt from the output of lsproc.exe, run against the first DFRWS 2005 Memory Challenge dump file):

Type PPID PID Name Offset FLink BLink
---- ---- --- ---- ------ ----- -----
Proc 228 672 WinMgmt.exe 0x0017dd60 0xff1bab80 0xff22f820
Proc 820 324 helix.exe 0x00306020 0xff0e4e00 0xff16e460
Proc 0 0 Idle 0x0046d160 0x00000000 0x00000000
Proc 600 668 UMGR32.EXE 0x0095f020 0xff1916e0 0xff191ce0
Proc 324 1112 cmd2k.exe 0x00dcc020 0xff0dae00 0xff0e4e00
Proc 668 784 dfrws2005.exe(x) 0x00e1fb60 0x00000000 0x00000000
Proc 156 176 winlogon.exe 0x01045d60 0xff29d120 0xfcc69520
Proc 156 176 winlogon.exe 0x01048140 0xff29f520 0xfcc6c3a0
Proc 144 164 winlogon.exe 0x0104ca00 0xff2ae0c0 0xfcc7abe0
Proc 156 180 csrss.exe 0x01286480 0xfca28e00 0xfcc99360
Proc 144 168 csrss.exe 0x01297b40 0xfca2faa0 0xfcca50c0
Proc 8 156 smss.exe 0x012b62c0 0xfcc69520 0xfce00d00
Proc 0 8 System 0x0141dc60 0xfcc99360 0x8046b980
Proc 668 784 dfrws2005.exe(x) 0x016a9b60 0x00000000 0x00000000
Proc 1112 1152 dd.exe(x) 0x019d1980 0x00000000 0x00000000

Sorry about any issues with formatting...however, I have included the complete output from the first dump in the zipped archive provided at SourceForge.

Notice that some of the process names are appended with "(x)". This indicates that the process has exited; this also accounts for why the FLink and BLink values are 0x00 in those cases.

Lsproc.exe works by opening the dump file in binary mode, and searching through that file one DWORD (a DWORD is 4 bytes) at a time. On Windows 2000, the EPROCESS block (as well as the ETHREAD structure) has a specific signature, so by locating that signature and then performing certain follow-on checks, we can locate these structures. Again, lsproc.exe doesn't retrieve *all* of the data about the process...we're leaving that for other tools.
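The search loop itself can be sketched as follows. The signature value passed in is just a placeholder...the real script performs additional sanity checks on each candidate hit before accepting it as an EPROCESS block:

```perl
# sketch: step through a dump file one DWORD at a time, collecting the
# offsets of any DWORDs that match a signature value
sub scan_for_sig {
    my ($file, $sig) = @_;
    my ($data, $ofs, @hits) = ("", 0);
    open(my $fh, "<", $file) || die "Could not open $file: $!\n";
    binmode($fh);
    while (read($fh, $data, 4) == 4) {
        push(@hits, $ofs) if (unpack("V", $data) == $sig);
        $ofs += 4;
    }
    close($fh);
    return @hits;
}
```

Reading 4 bytes at a time is slow on a large dump; buffering the reads (say, 4K at a time) and unpacking DWORDs from each buffer would be an easy speedup.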

This will be the first of several tools for retrieving information from these dumps. The follow-on tools will make use of the information displayed in the output of lsproc.exe. I wanted to separate searching for processes from gathering process details, as the search can take a while. Once the offset to an EPROCESS block is located, dumping the process environment, memory pages, and image is relatively straightforward.

As with previous tools, this one (and the others to come) was created in Perl. The approach I've taken with these tools is to try to make them platform independent, meaning that even though the physical memory dump needs to be retrieved from a Windows 2000 system, the tools themselves don't need to be run on Windows. In fact, they can be run on Linux or even a Mac...the analyst is not restricted to a specific analysis platform.

Thursday, April 06, 2006

WindowsIR SourceForge site

Okay, based on a question that Bret posed in the CyberSpeak interview, I've set up a SourceForge site for my tools. Thinking about it, this is probably a better idea than using the site.

Right now, all I've got posted to the site is the RAMDump GUI I wrote, which is a wrapper around George M. Garner's version of dd.exe. The basic idea is to allow someone to capture/dump \\.\PhysicalMemory from a Windows 2000/XP system without needing a great deal of specialized knowledge. The GUI will tell you how much physical memory is on the system, and which drives (and type of drive...fixed, removable, network) are available, and once the dump process is started, will give you a status. It's pretty simple and straightforward, and the source is provided in case you want to modify the command line that is launched, or the messages, or whatever.

If you don't have Perl2Exe to create a standalone executable from the Perl script, look at installing Perl (if you haven't already) and the PAR module ("ppm install PAR" under Activestate) and using that to create the standalone EXEs.

Over time, I'll be adding the FSP and FRU tools, the tools I've created for processing dumps of physical memory, and other supporting tools for the FSP/FRU. This will include analysis/correlation tools for processing the data collected by the FRU/FSP.

Saturday, April 01, 2006


Okay, let's get this blog back on track, shall we?

I recently had an opportunity to use the FSP in an engagement, and to be honest, it worked very well. I was pretty happy with how things worked, but I did find some things that I want to improve upon.

In a general sense, here's how the engagement went...I plugged my laptop into the network and set up the FSP server component, listening on port 7070 (the default). After calling the MSSP SOC for the client to let them know that they'd see some traffic on this port and that we were responsible for it, I would walk to the server and put the FRU CD into the CD-ROM tray. I was working with a sysadmin, and he'd terminal into the server and launch the FRU command line...which I'd typed into a Notepad window, so that all he had to do was cut-n-paste it into the command prompt. Very quick, very smooth. Nothing was written to the hard drive, and things like the .NET framework weren't required to be on any of the systems.

So...I did mention some improvements. Well, it's been suggested (yes, Ovie, I did hear you) that I set up a site on SourceForge for the FSP and various other tools. So, I've set up an account and I'm waiting to hear back if they'll accept my submission for a site. Once I do get a site, I'll start posting the various tools I've put together. This includes the FSP, as well as the supporting tools, and others, like the Perl modules I've posted to CPAN.

I also need to start working on the analysis suite of tools, one of them being to correlate information collected by the FRU and sent to the FSP into a nice HTML format. That, and I need to put together a user guide. I keep thinking that the FSP has appeared on the CERT VTE and is included in Helix, but there's nothing in the way of a comprehensive user guide.

Oh, and one more thing...I've been working on a GUI for folks to use for launching dd.exe. A friend asked me to put this together, and I've just about got it done...I just have some minor adjustments to make, then I'll fully document the code and post it. The idea is to make it easy for folks who need to dump the contents of physical memory to do so, by identifying various drives (fixed, removable, network, etc.) to write to, etc.

...a couple of things, gents...

I just wanted to post a couple of thoughts and comments really quick...

First off, on Thu, I was interviewed by Ovie and Bret, the hosts of the CyberSpeak podcast. I'd heard about these podcasts before, but hadn't listened to them. Then a friend of mine heard that Ovie and Bret had mentioned my blog in their March 11 podcast, when they'd mentioned physical memory analysis. I've since listened to several of the podcasts (ok, all but one...) and they are pretty interesting. I don't own an iPod, because when I run, I don't feel comfortable cutting myself off from my surroundings like that. What I like to do is download the podcasts to my desktop and listen to them...that way I can have them on while I'm working, and easily pause them when I get up to do something else.

On a completely separate note, I received the numbers for my book from June through Dec 2005. The book seems to be doing okay, though, as I've said before, it's not setting the world on fire. I initially thought that was holding me back from getting the proposal for my next book approved, but it has turned out that the real issue has been reorgs at the publisher and reviewers simply being too busy.

So...kind of anti-climactic for a 200th post, eh?