That review is so… meta

Reviewing has started for NeurIPS 2019 and this time around I am an area chair (AC). We’ve been given a lot of instructions and some tasks: bidding on papers, bidding on reviewers, adjusting reviewer assignments, identifying what we think are likely rejects in the batch of papers we are handling, and so on. It’s a little more involved than being an AC for ICML, but that’s to be expected, since the whole reviewing game has been evolving rapidly to adapt to the massive increase in submissions.

Since there is yet another tier of TPC above the ACs (the Senior ACs), how should one approach the meta-review? One view is that the meta-review is the AC’s decision/opinion informed by the reviews, the response, the discussion, and their own reading of the paper. This makes the AC a bit like an associate editor at a journal. This also gives the AC quite a bit of flexibility: if the discussion is limited or not particularly useful, the AC can fill in the gap by adding their own voice. The downside is that ACs might bring more of their own preferences (or biases) to the process.

A different approach is to make the meta-review akin to a panel summary as part of an NSF review. In the panels I have been on, there are N people who write reviews of each proposal, one of whom leads the discussion. There is also a scribe for the discussion who has not written a review: a dispassionate observer. The whole panel (even those who didn’t read the proposal) participates in the discussion. The scribe is supposed to draft a summary/synthesis of the discussion and run it past the panel for edits until they reach a consensus. The N reviews are still there though, with their diversity of opinion.

I think I might prefer the second model. The NeurIPS setup is a bit different, since authors get to respond to the reviews. The meta-review is supposed to augment the existing reviews by incorporating the discussion and author response. The AC is supposed to guide the discussion, a role shared by the lead discussant and program officer in the NSF model. The only problem is that the amount of discussion on each paper is highly variable. It’s sometimes like pulling teeth to get reviewers to respond/interact. Reviewers, for their part, might be participating in 5 different discussions, so context switching to each paper can be tough. But for papers with some reasonable discussion, the meta-review as panel summary might be a good way to go.

One complaint about panel summaries is that they often feel anodyne. However, I think this might be desirable in a meta-review, since it could lead to fewer angry authors. One aspect of the NSF model which I think could be adopted, regardless of how the AC views their job, is running the meta-review past the reviewers. I did this for ICML and got some edits and feedback from the reviewers that improved the final review.

PLoS One and its absurdly short review times

I was asked to review a manuscript for PLoS One recently and declined because they asked for a review in 10 days. This might be standard for biology papers or something, but seems absurd for a paper where the reviewer is asked to sign off on technical correctness for something which may entail a fair bit of math. This sort of one-size-fits-all approach to academic practice drives me nuts. It’s the same kind of thing that leads to workshops on grant writing led by someone who has had a lot of success writing grants to one program at NIH/NSF/wherever and then dispenses advice specific to that area with almost zero recognition that different programs/agencies have different priorities. Wow, context matters! Who knew?

Now, a reasonable claim is that 10 days at 8 hours a day is 80 hours and that is a totally reasonable amount of time to check all the math in a paper, assuming I had nothing else to do with my time. A friend told me their advisor had a policy to decline a review if they couldn’t do it in the next week. This strikes me as an admirable approach to things that probably worked well in the 80s.

However, given that 50% of papers are accepted to PLoS One under its pay-to-publish model, what is the prior belief that spending even 30 minutes of my time reading the paper is worthwhile? Far better to spend 10 minutes complaining about it on a nearly defunct blog, no?

ISIT Deadline Extended to Monday

Apparently not everyone got this email, so here it is. I promise this blog will not become PSA-central.

Dear ISIT-2015-Submission Reviewers:

In an effort to ensure that each paper has an appropriate number of reviews, the deadline for the submission of all reviews has been extended to March 2nd. If you have not already done so, please submit your review by March 2nd as we are working to a very tight deadline.

In filling out your review, please keep in mind that

(a) all submissions are eligible to be considered for presentation in a semi-plenary session — Please ensure that your review provides an answer to Question 11
(b) in the case of a submission that is eligible for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, the evaluation form contains a box at the top containing the text:
Notice: This paper is to be considered for the 2015 IEEE Jack Keil Wolf ISIT Student Paper Award, even if the manuscript itself does not contain a statement to that effect.
– Please ensure that your review provides an answer to Question 12 if this is the case.

Thanks very much for helping out with the review process for ISIT, your inputs are of critical importance in ensuring that the high standards of an ISIT conference are maintained. We know that reviewing a paper takes much effort and we are grateful for all the time you have put in!

With regards,

Pierre, Suhas and Vijay
(TPC Co-Chairs, ISIT 2015)

NIPS 2014 Review Quality Control Procedure

I got this email yesterday:

Dear Author of a NIPS 2014 Submission,

You are in for a treat! This year we will carry out an experiment that will give us insight to the fairness and consistency of the NIPS reviewing process. 10% of the papers, selected at random, will be duplicated and handled by independent Area Chairs. In cases where the Area Chairs arrive at different recommendations for accept/reject, the papers will be reassessed and a final recommendation will be determined.

I welcome this investigation — as an author and reviewer, I have found the NIPS review process to be highly variable in terms of the thoroughness of reviews, discussion, and the consistency of scores. I hope that the results of this experiment are made more publicly available — what is the variance of the scores? How do score distributions vary by area chair (a proxy for area)? There are a lot of ways to slice the data, and I would encourage the organizing committee to take the opportunity to engage with the “NIPS community” to investigate the correlation between the numerical measures provided by the review process and the outcomes.
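To make concrete the kind of slicing I have in mind, here is a minimal Python sketch of how one might summarize such duplicated-paper data; the records, field layout, and numbers below are entirely invented for illustration, since the actual NIPS data has not been released.

```python
# Hypothetical records for the duplicated submissions: one entry per
# (paper, copy), with the handling area chair, the mean review score,
# and the accept/reject recommendation. All values are made up.
import statistics
from collections import defaultdict

records = [
    ("P1", "AC_A", 6.3, "accept"), ("P1", "AC_B", 5.1, "reject"),
    ("P2", "AC_C", 7.0, "accept"), ("P2", "AC_D", 6.8, "accept"),
    ("P3", "AC_A", 4.2, "reject"), ("P3", "AC_C", 4.9, "reject"),
]

# Group the two independent copies of each paper together.
by_paper = defaultdict(list)
for pid, ac, score, decision in records:
    by_paper[pid].append((score, decision))

# How far apart are the scores, and how often do the decisions agree?
score_gaps = [abs(a[0] - b[0]) for a, b in by_paper.values()]
agreement = sum(a[1] == b[1] for a, b in by_paper.values()) / len(by_paper)
print(f"mean |score gap| between copies: {statistics.mean(score_gaps):.2f}")
print(f"accept/reject agreement rate:   {agreement:.0%}")

# Score distribution per area chair (a crude proxy for area).
by_ac = defaultdict(list)
for _, ac, score, _ in records:
    by_ac[ac].append(score)
for ac, scores in sorted(by_ac.items()):
    print(ac, f"mean={statistics.mean(scores):.2f}", f"n={len(scores)}")
```

Even this crude summary (score gaps between copies, decision agreement, per-AC score distributions) would go a long way toward quantifying how consistent the process actually is.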

Graham Cormode on how to not review a paper

I think during my time hanging out with machine learners, no topic has received as much attention as the quality of the review process for competitive conferences. My father passed along this paper by Graham Cormode on “the tools and techniques of the adversarial reviewer”, which should be familiar to many. I had not seen it before, but a lot of the “adversarial” techniques sounded familiar from reviews I have received. I also wonder to what extent reviews I have written could be interpreted as deliberately adversarial. I don’t go into the review process that way, but it’s easy to ascribe malign intent to negative feedback.

Cormode identifies 4 characteristics of the adversarial reviewer: grumpiness, elitism, peevishness, and arrogance. He then identifies several boilerplate approaches to writing a negative review, specific strategies for different sections of the paper, and the art of writing pros and cons for the summary. My favorite in this latter section is that the comment “paper is clearly written” really means “clearly, the paper has been written.”

As Cormode puts it himself at the end of the paper: “I am unable to think of any individual who consistently acts as an adversarial reviewer; rather, this is a role that we can fall into accidentally when placed under adverse conditions.” I think this is all too true. When reviewing the 9th paper for a conference with 3 weeks to do all 9, the patience of the reviewer may be worn a bit thin, and it’s easy to be lazy and not judge the paper on its own merits. What’s certainly true, however, is that “editors and PC members” often do not “realize when a review is adversarial.” In part this is because, as a research community, we don’t want to acknowledge that there are real problems with the review process that need fixing.