Juking the stats in academic publishing

I heard recently of a case where someone got a paper back with revisions requested, and a deadline for said revisions. They ended up asking for a one-week extension, but then the journal said they would have to do a fresh submission and redo the whole review cycle. I found this baffling — but then that person pointed out that the journal has built a reputation on fast turnaround times, and so to keep their “sub-to-pub” numbers low, they don’t want to give any extensions to the authors. From the journal’s perspective, it’s better to force a resubmission than to let the clock keep running on the same “paper ID” in the system.

This is a classic example of juking the stats.

I just got a rejection from KDD 2012, which smacks of the same ominous reasoning:

We try to notify authors once a decision on a submission is concretely made, and hope that the early notifications can reduce the average review turn-over time.

But the real kicker is that “due to technical constraints” they can’t give us the reviews until May 4th. So I’m not really sure what I am supposed to do with this information — I can’t really start on revisions without the reviews, so this “early notification” thing is really just to make them feel better about themselves, it seems. Or perhaps they can then report that the reviewing was “more efficient.”

In any case, no harm is done, per se. But optimizing metrics like “sub-to-pub” seems to be as misguided as teaching to the test. What do we really want out of our peer review process? Or should we abandon it?

Manuscript Central is annoying

The IEEE Transactions on Information Theory recently transitioned to using Manuscript Central from the old Pareja system, so now all of the IEEE journals for which I review seem to be managed by the same external management system. As a reviewer/author, I have a lot of complaints (small and large) about Manuscript Central:

  • Why oh why do I need to disable my popup blocker for your site to work?
  • Why can login information not be shared across different IEEE publications? I have a separate account for each journal, with a separate password. Thank goodness I have LastPass, but even that program gets confused sometimes.
  • What is the deal with the mandatory subject classifications for papers? One of the “topics” I could pick was “IEEE Transactions on Information Theory.” Really? That’s a topic?
  • Why must papers for review be emblazoned with that stupid pale blue “For Peer Review Only” running diagonally across each page? This causes PDF annotations such as highlighting to barf, making paperless reviewing of papers significantly more annoying than it needs to be.

The worst part is that I am sure the IEEE could implement a cheaper and just-as-effective system itself, but instead each Society is forking over money to Manuscript Central, which, as far as I can tell, is a shoddy product that offers significantly more annoyances for authors and reviewers. Perhaps it’s much better from the editor’s side (I imagine it is), but it seems like a bad deal overall.

Of course, now I sound curmudgeonly. Get off my lawn!

Do other people like MC? Or do you have other pet peeves?

IEEE page charges for Open Access

I just got an email saying my page proofs are ready for my paper with Alex Dimakis on mobility in gossip algorithms. If I want to make the paper open access, I have to shell out $3000. I think this is in addition to the $110 per page “voluntary” page charges. Now, I’m on the record as being a fan of Open Access, but $3k is a pretty hefty chunk of change! Has anyone else had experience with this?

A new uncertainty principle

During a recent Google+ conversation about the quality of reviews and how to improve them (more from the CS side), the sheer number of reviews required came up as a limiting factor. Given the window of time for a conference, there is not enough time to have a dialogue between reviewers and authors. By contrast, for journals (such as Trans. IT), I find that I’ve gotten really thorough reviews and my papers have improved a lot through the review process, but it can take years to get something published due to the length of each round of communication.

This points to a new fundamental limit for academic communications:

Theorem. Let R be the number of papers submitted for review, Q be the average quality of reviews for those papers, and T be the time allotted to reviewing the papers. Then

R Q / T = K,

where K is a universal constant.
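
Taking the “theorem” at face value, the tradeoff is easiest to see by solving for the quality:

Q = K T / R,

so doubling the number of submissions R while holding the review window T fixed cuts the average review quality Q in half. The only ways to recover Q are to stretch T (the journal regime, with its years-long turnaround) or to shrink R.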

Not really the digital divide

I started my new job here at TTI Chicago this fall and I’ve been enjoying the fact that TTI is partnered up with the University of Chicago — I get access to the library, a slightly better rate at the gym (still got to get on that), and some other perks. However, U of C doesn’t have an engineering school, so the library has a pretty minimal subscription to IEEE Xplore. This leaves me in a bit of a predicament — I’m a member of some of the IEEE societies, so I can get access to those Transactions, but otherwise I have to work a bit harder to get access to some papers. So far it hasn’t proved to be a problem, but I think I might run into a situation like the one recently mentioned by David Eppstein.

Clarification on reviewer incentives

I seem to have given the wrong impression (probably due to grumpiness) in the previous post about my views on the value of reviewing. I actually enjoy reviewing – I get a sneak preview of new results and techniques through the review process, and there are often many interesting tidbits. My perspective is skewed by the IEEE Transactions on Information Theory, which has a notoriously lengthy review process. For example, it took 15 months for me to get two reviews of a manuscript that I submitted. One of the top priorities for the IT Society has been to get the time from submission to publication down to something reasonable. That’s the motivation for my question about incentives for timely reviewing. So why should you submit a timely review?

Reviewing is service. Firstly, it’s your obligation to review papers if you submit papers. Furthermore, you should do it quickly because you would like the reviews of your own papers to come back quickly. This seems pretty fair.

Reviewing builds your reputation. There is the claim that you build reputation by submitting timely and thorough reviews. I think this is a much weaker claim — this reputation is not public, which is an issue that was raised in the paper by Parv and Anant that I linked to earlier. It’s true that the editorial board might talk about how you’re a good reviewer and that later on down the line, an Associate Editor for whom you did a fair bit of work may be asked to write you a tenure letter, but this is all a bit intangible. I’ve reviewed for editors whom I have never met and likely never will meet.

Doing a good review on time is its own reward. This is certainly true. As I said, I have learned a ton from reviewing papers and it has also helped me improve my own writing. Plus, as Rif mentioned, you can feel satisfied that you were true to your word and did a good job.

Isn’t all of this enough? Apparently not. There are a lot of additional factors that make these benefits “not enough.” Firstly, doing service for your intellectual community is good, but this only takes you as far as “you should accept reviews if the paper seems relevant and you would be a good reviewer.” I don’t think the big problem is freeloading; it’s that people accept reviews and then miss deadline after deadline. Most people don’t bother to say “no” when asked to do a review, leaving the AE (or TPC member) in limbo. There needs to be a way to make saying “no” acceptable, and even obligatory when you can’t commit.

The real issue with reputation-building is that it’s a slow process; the incentive to review a particular paper now is both incremental and non-immediate. One way out would be to hold submitted papers hostage until the authors review another paper, but that is a terrible idea. There should be a way for reviewers to benefit more immediately from doing a good and timely job. Cash payouts are probably not the best option…

Finally, the self-satisfaction of doing a good job is a smaller-scale benefit than the rewards from other activities. It is the sad truth that many submitted manuscripts are a real chore to review. These papers languish in the reviewer’s stack because working up the energy to review them is hard and because doing the review doesn’t seem nearly as important as other things, like finishing your own paper, or that grant proposal, etc. The longer a paper sits gathering dust on the corner of your desk, the less likely you are to pick it up. I bet that much more than half the reviews are not even started until the Associate Editor sends an email reminder.

It takes a fair bit of time to review a 47-page, 1.5-spaced, mathematically dense manuscript, and to do it right you often need to allocate several contiguous chunks of time. Those rare free chunks often seem better spent on writing grant proposals or doing your own research. The rewards for those activities are much more immediate and tangible than the (secret) approval and (self-awarded) congratulations you will get for writing a really helpful review. The benefits of doing a good, timely review are simply not on the same order as those of the other activities competing for one’s time.

I guess the upshot is that trusting the research community to make itself efficient at providing timely and thorough reviews may not be enough. Finding an appropriate solution or intervention requires looking at some data. What is the distribution of review times? (Cue the power-law brouhaha.) What fraction of contacted reviewers fail to respond? What fraction of reviewers accept? For each paper, how does the length/quality of the review correlate with delay? Knowing things like this might help get things back up to speed.

What is the reward for timely reviewing?

I know I complain about this all the time, but in my post-job-hunt effort to get back on top of things, I’ve been trying to manage my review stack.

It is unclear to me what the reward for submitting a review on time is. If you submit a review on time, the AE knows that you are a reliable reviewer and will ask you to review more things in the future. So you’ve just increased your reviewing load. This certainly doesn’t help you get your own work done, since you end up spending more time reviewing papers. Furthermore, there’s something disheartening about submitting a review and then a few months later getting BCC-ed on the editorial decision. Of course, reviewing can be its own reward; I’ve learned a lot from some papers. It struck me today that there’s no real incentive to get the review in on time. Parv and Anant may be on to something here (alternate link).

Responses to reviewers: raw TeX please

I am revising a paper now, and one of the reviewers sent their comments as a PDF of what looked like a Word document or RTF file containing raw LaTeX, rather than a rendered version. So it was a little annoying to read, what with all of the $’s and so on. The beauty of it was that I could just cut and paste from the PDF into my document of responses to the reviewers without having to reformat, and it (almost) rendered without a hitch. There were some lingering issues, though, with quotation marks (I hate smart quotes) and itemized/enumerated lists.
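
To make this concrete, here is a minimal sketch of the kind of response document I have in mind (the reviewer comment and the helper macros are made up for illustration): the reviewer’s raw TeX can be pasted straight into the argument of the comment macro and it compiles as-is, modulo replacing smart quotes with plain TeX quotes and fixing any list environments.

    \documentclass{article}

    % Hypothetical helper macros for a response-to-reviewers document.
    \newcommand{\reviewer}[1]{\medskip\noindent\textbf{Reviewer comment:} \emph{#1}\medskip}
    \newcommand{\response}[1]{\noindent\textbf{Response:} #1}

    \begin{document}

    \section*{Responses to Reviewer 1}

    % Pasted directly from the reviewer's raw .tex -- the $...$ math just
    % compiles, no reformatting needed. (Comment text is invented.)
    \reviewer{The bound in Theorem 2 seems to require $\delta < 1/2$; please
      clarify whether the result holds for all $\delta \in (0,1)$.}

    \response{We have added a remark after Theorem 2 explaining that the
      restriction $\delta < 1/2$ is only needed in the converse argument.}

    \end{document}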

As a recommendation, I think that the raw .tex file of the review should be uploaded instead. That will make it much easier for the authors to revise, no? I plan on doing this in the future. What do you do?