For those readers of the blog who have not submitted papers to machine learning (or related) conferences, the conference review process is a bit like a mini-version of a journal review. You (as the author) get the reviews back and write a response; the reviewers then discuss the paper and (possibly, though in my experience rarely) revise their reviews. They are, however, generally supposed to take the response into account during the discussion. In some cases people even adjust their scores; when I’ve been a reviewer I often adjust mine, especially if the author response addresses my questions.
This morning I had the singular experience of having a paper rejected from ICML 2014 in which all of the reviewers specifically marked that they did not read and consider the response. Based on the initial scores the paper was borderline, so the rejection is not surprising. However, we really did try to address their criticisms in our rebuttal. In particular, some of the reviewers misunderstood our claims. Had they bothered to read our response (and proposed edits), perhaps they would have realized this.
Highly selective (computer science) conferences often tout their reviews as being just as good as a journal, but in both outcomes and process, it’s a pretty ludicrous claim. I know this post may sound like sour grapes, but it’s not about the outcome, it’s about the process. Why bother with the facade of inviting authors to rebut if the reviewers are unwilling to read the response?
To be fair, I know a lot of reviewers do read the response but fail to check the box that says “I have read the authors’ rebuttal” out of laziness. And of course, probably a lot of reviewers check the box without having actually read the rebuttal.
What I found more disconcerting is that there was no meta-review from the area chair.
I agree, but in fact none of the 4 reviewers checked the box. Assuming a sort of i.i.d. laziness process, it still smells fishy.
I was also weirded out by having no meta-review, even after delaying for another few days.
Out of 10 reviews on various papers for ICML this year, as far as I know only 1 reviewer changed anything in their text, and their response was reasonable (basically, “fixing all of these things would change the conclusions too much to recommend acceptance”).
I’m also shocked. I understand reviewing and reviewing rebuttals is a little-appreciated chore that’s put on top of all the other responsibilities one has, but, it ought to be done as well as possible.
I believe meta-reviews are forthcoming, from the e-mails ICML is sending out. (They apparently were not made visible by mistake.) Having myself been the sort of lazy reviewer who reads the rebuttal but doesn’t mark the box, I don’t think you should read too much into the boxes not being marked.
That being said, your main point about (at least some) computer science reviewing being far from what one might and should expect is well taken; I’ve blogged about this myself, mostly with regard to theory conferences. I haven’t been on an ICML committee, so I don’t really want to judge their specific process, but it is an issue the community as a whole continues to grapple with. There still seems to be much room for improvement, but it’s a difficult incentive problem; just ask an econ-cs person.
Some computer security conferences have rebuttals, and although I’ve heard that responses have changed reviews and acceptance/rejection decisions, I’ve never seen a review change by even a word. In my experience, rebuttals are a total waste of time and pretty frustrating for the authors.
The ICML story is definitely sad, if it is indeed a deliberate snub of rebuttals. By contrast, in PL and systems conferences like PLDI, ASPLOS, etc., rebuttals are effectively required. A failure to rebut is tantamount to saying “yes, go right ahead and reject my paper.” PC members are directed to read all rebuttals, and I have seen plenty of discussion about rebuttals. Most PC chairs (myself included) ask PC members to update their reviews as appropriate, and in some (though not all) cases a reviewer is asked to summarize the discussion (electronic and in-person). I have many times written and seen updated reviews.
My only interaction with this “rebuttal for conference” review process has been for ML (and one-off security/crypto) conferences. As a reviewer I always modify/update my review based on the rebuttal (and specifically comment on it), but I haven’t seen the same from the majority (say 50-60%) of the other reviewers on the papers I’ve reviewed. So perhaps it’s an academic sub-community thing.