Juking the stats in academic publishing

I recently heard of a case where someone got a paper back with revisions requested, along with a deadline for those revisions. They asked for a one-week extension, but the journal said they would instead have to make a fresh submission and redo the whole review cycle. I found this baffling, until that person pointed out that the journal has built a reputation on fast turnaround times: to keep their “sub-to-pub” numbers low, they refuse to give authors extensions. From the journal’s standpoint, it’s better to force a resubmission than to let the same “paper ID” linger in the system.

This is a classic example of juking the stats.

I just got a rejection from KDD 2012 which smacks of the same ominous reasoning:

We try to notify authors once a decision on a submission is concretely made, and hope that the early notifications can reduce the average review turn-over time.

But the real kicker is that “due to technical constraints” they can’t send us the reviews until May 4th. So I’m not sure what I’m supposed to do with this information: I can’t start on revisions without the reviews, so the “early notification” seems to exist mainly to make the organizers feel better about themselves. Or perhaps it lets them report that the reviewing was “more efficient.”

In any case, no harm is done, per se. But optimizing metrics like “sub-to-pub” seems as misguided as teaching to the test. What do we actually want out of our peer review process? Or should we abandon it altogether?