PLoS One and its absurdly short review times

I was asked to review a manuscript for PLoS One recently and declined because they asked for a review in 10 days. This might be standard for biology papers or something, but it seems absurd for a paper where the reviewer is asked to sign off on the technical correctness of work that may entail a fair bit of math. This sort of one-size-fits-all approach to academic practice drives me nuts. It’s the same kind of thing that leads to workshops on grant writing led by someone who has had a lot of success writing grants to one program at NIH/NSF/wherever and who then dispenses advice specific to that area with almost zero recognition that different programs/agencies have different priorities. Wow, context matters! Who knew?

Now, one could argue that 10 days at 8 hours a day is 80 hours, and that is plenty of time to check all the math in a paper, assuming I had nothing else to do with my time. A friend told me their advisor had a policy of declining a review if they couldn’t do it within the next week. This strikes me as an admirable approach that probably worked well in the 80s.

However, given that PLoS One accepts 50% of submissions under its pay-to-publish model, what is my prior belief that spending even 30 minutes reading the paper would be worthwhile? Far better to spend 10 minutes complaining about it on a nearly defunct blog, no?


ISIT Deadline Extended to Monday

Apparently not everyone got this email, so here it is. I promise this blog will not become PSA-central.

Dear ISIT-2015-Submission Reviewers:

In an effort to ensure that each paper has an appropriate number of reviews, the deadline for the submission of all reviews has been extended to March 2nd. If you have not already done so, please submit your review by March 2nd as we are working to a very tight deadline.

In filling out your review, please keep in mind that

(a) all submissions are eligible to be considered for presentation in a semi-plenary session — Please ensure that your review provides an answer to Question 11
(b) in the case of a submission that is eligible for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, the evaluation form contains a box at the top containing the text:
Notice: This paper is to be considered for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, even if the manuscript itself does not contain a statement to that effect.
– Please ensure that your review provides an answer to Question 12 if this is the case.

Thanks very much for helping out with the review process for ISIT, your inputs are of critical importance in ensuring that the high standards of an ISIT conference are maintained. We know that reviewing a paper takes much effort and we are grateful for all the time you have put in!

With regards,

Pierre, Suhas and Vijay
(TPC Co-Chairs, ISIT 2015)

NIPS 2014 Review Quality Control Procedure

I got this email yesterday:

Dear Author of a NIPS 2014 Submission,

You are in for a treat! This year we will carry out an experiment that will give us insight to the fairness and consistency of the NIPS reviewing process. 10% of the papers, selected at random, will be duplicated and handled by independent Area Chairs. In cases where the Area Chairs arrive at different recommendations for accept/reject, the papers will be reassessed and a final recommendation will be determined.

I welcome this investigation — as an author and reviewer, I have found the NIPS review process to be highly variable in terms of the thoroughness of reviews, discussion, and the consistency of scores. I hope that the results of this experiment are made more publicly available — what is the variance of the scores? How do score distributions vary by area chair (a proxy for area)? There are a lot of ways to slice the data, and I would encourage the organizing committee to take the opportunity to engage with the “NIPS community” to investigate the correlation between the numerical measures provided by the review process and the outcomes.
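If the organizers do release per-paper data from the duplicated 10%, the basic consistency numbers are easy to compute. Here is a rough sketch of what that analysis might look like in Python; the decisions, scores, and field layout below are entirely made up for illustration, since the email does not say what data (if any) will be shared.

from statistics import mean, variance

# Hypothetical data: each duplicated paper gets an accept/reject recommendation
# from two independent Area Chair committees, plus reviewer scores from each
# copy. All values and field names here are invented for illustration only.
duplicated = [
    ("paper_01", "accept", "reject", [7, 5, 6], [4, 5, 5]),
    ("paper_02", "reject", "reject", [3, 4, 2], [4, 3, 3]),
    ("paper_03", "accept", "accept", [8, 7, 8], [7, 8, 6]),
]

# Fraction of duplicated papers where the two committees disagreed on accept/reject.
disagree = sum(d1 != d2 for _, d1, d2, _, _ in duplicated) / len(duplicated)

# Average within-paper score variance, pooling the reviews from both copies:
# a rough measure of how much reviewers of the same paper disagree.
within_var = mean(variance(s1 + s2) for _, _, _, s1, s2 in duplicated)

print(f"AC disagreement rate: {disagree:.2f}")
print(f"mean within-paper score variance: {within_var:.2f}")

Even a crude breakdown like this would say more than a single headline consistency number, and splitting the scores out by area chair would get at the variability I mentioned above.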

Graham Cormode on how to not review a paper

In my time hanging out with machine learners, I think no topic has received as much attention as the quality of the review process at competitive conferences. My father passed along this paper by Graham Cormode on “the tools and techniques of the adversarial reviewer”, which should be familiar to many. I had not seen it before, but a lot of the “adversarial” techniques sounded familiar from reviews I have received. I also wonder to what extent reviews I have written could be interpreted as deliberately adversarial. I don’t go into the review process that way, but it’s easy to ascribe malign intent to negative feedback.

Cormode identifies four characteristics of the adversarial reviewer: grumpiness, elitism, peevishness, and arrogance. He then catalogs several boilerplate approaches to writing a negative review, specific strategies for attacking different sections of the paper, and the art of writing pros and cons for the summary. My favorite in this latter section is that the comment “paper is clearly written” really means “clearly, the paper has been written.”

As Cormode puts it himself at the end of the paper: “I am unable to think of any individual who consistently acts as an adversarial reviewer; rather, this is a role that we can fall into accidentally when placed under adverse conditions.” I think this is all too true. When reviewing the ninth paper for a conference with three weeks to do all nine, a reviewer’s patience may be worn a bit thin, and it’s easy to get lazy and not judge the paper on its own merits. What’s certainly true, however, is that “editors and PC members” often do not “realize when a review is adversarial.” In part this is because, as a research community, we don’t want to acknowledge that there are real problems with the review process that need fixing.