clarification on reviewer incentives

I seem to have given the wrong impression (probably due to grumpiness) in the previous post about my views on the value of reviewing. I actually enjoy reviewing: I get a sneak preview of new results and techniques, and there are often many interesting tidbits. My perspective is skewed by the IEEE Transactions on Information Theory, which has a notoriously lengthy review process. For example, it took 15 months for me to get two reviews of a manuscript that I submitted. One of the top priorities for the IT Society has been to get the time from submission to publication down to something reasonable. That’s the motivation for my question about incentives for timely reviewing. So why should you submit a timely review?

Reviewing is service. Firstly, it’s your obligation to review papers if you submit papers. Furthermore, you should review quickly because you would like your own submissions reviewed quickly. This seems pretty fair.

Reviewing builds your reputation. There is a claim that you build a reputation by submitting timely and thorough reviews. I think this is a much weaker claim: the reputation is not public, an issue raised in the paper by Parv and Anant that I linked to earlier. It’s true that the editorial board might talk about what a good reviewer you are, and that later on down the line an Associate Editor for whom you did a fair bit of work may be asked to write you a tenure letter, but this is all rather intangible. I’ve reviewed for editors whom I have never met and likely never will meet.

Doing a good review on time is its own reward. This is certainly true. As I said, I have learned a ton from reviewing papers and it has also helped me improve my own writing. Plus, as Rif mentioned, you can feel satisfied that you were true to your word and did a good job.

Isn’t all of this enough? Apparently not. There are several additional factors that make these benefits “not enough.” Firstly, doing service for your intellectual community is good, but that only takes you as far as “you should accept review requests when the paper seems relevant and you would be a good reviewer.” I don’t think the big problem is freeloading; rather, people accept reviews and then miss lots of deadlines. Worse, most people don’t bother to say “no” when asked to do a review, leaving the AE (or TPC member) in limbo. There needs to be a way to make saying “no” both acceptable and obligatory.

The real issue with reputation-building is that it’s a slow process; the incentive to review a particular paper now is both incremental and non-immediate. One way out would be to hold submitted papers hostage until the authors review another paper, but that is a terrible idea. There should be a way for reviewers to benefit more immediately from doing a good and timely job. Cash payouts are probably not the best option…

Finally, the self-satisfaction of doing a good job is a smaller-scale benefit than what other activities offer. It is the sad truth that many submitted manuscripts are a real chore to review. These papers languish in the reviewer’s stack because working up the energy to review them is hard, and because doing the review doesn’t seem nearly as important as other things: finishing your own paper, that grant proposal, and so on. The longer a paper sits gathering dust on the corner of your desk, the less likely you are to pick it up. I bet that well more than half of all reviews are not even started until the Associate Editor sends an email reminder.

It takes a fair bit of time to review a 47-page, 1.5-spaced, mathematically dense manuscript, and to do it right you often need to allocate several contiguous chunks of time. Those rare gems of free time often seem better spent on writing grant proposals or doing your own research. The rewards for those activities are far more immediate and tangible than the (secret) approval and (self-awarded) congratulations you will get for writing a really helpful review. The benefits of doing a good, timely review are simply not on the same order as those of the other activities competing for one’s time.

I guess the upshot is that trusting the research community to make itself efficient at providing timely and thorough reviews may not be enough. Finding an appropriate intervention requires looking at some data. What is the distribution of review times? (Cue power-law brouhaha.) What fraction of contacted reviewers fail to respond at all? What fraction of reviewers accept? For each paper, how does the length/quality of the review correlate with the delay? Knowing things like this might help get the process back up to speed.
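To make that wish-list concrete, here is a minimal sketch of the kinds of summaries an editorial board could pull from its review logs. This is purely illustrative Python: the field names, records, and numbers are invented, not drawn from any real journal’s data.

```python
from statistics import median, quantiles

# Toy review log; every field and value here is hypothetical.
# days  = time from review request to delivered review (None = no review)
# words = length of the delivered review, a crude proxy for quality
log = [
    {"responded": True,  "accepted": True,  "days": 30,   "words": 900},
    {"responded": True,  "accepted": True,  "days": 200,  "words": 150},
    {"responded": True,  "accepted": False, "days": None, "words": None},
    {"responded": False, "accepted": False, "days": None, "words": None},
    {"responded": True,  "accepted": True,  "days": 75,   "words": 600},
]

# What is the distribution of review times?
times = sorted(r["days"] for r in log if r["days"] is not None)
print("median review time (days):", median(times))
print("quartile cut points:", quantiles(times, n=4))

# What fraction of contacted reviewers fail to respond at all?
print("no-response rate:", sum(not r["responded"] for r in log) / len(log))

# What fraction of those who respond actually accept?
responders = [r for r in log if r["responded"]]
print("acceptance rate:", sum(r["accepted"] for r in responders) / len(responders))

# How does review length vary with delay? (the eyeball version)
for r in sorted((r for r in log if r["days"] is not None), key=lambda r: r["days"]):
    print(f"delay {r['days']:3d} days -> {r['words']} words")
```

Even a toy tabulation like this would make the long tail, and the non-responders, visible at a glance.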

6 thoughts on “clarification on reviewer incentives”

  1. Ah, I misunderstood your first post, I guess.

    I thought you were asking “Why should I do reviews quickly?” I think my original answers hold. But instead you were asking “Why don’t people do reviews quickly?” or maybe “How do I change the world so that people do reviews quickly?” This seems to require a change in incentives or cultural norms, which is hard, but I will toss out some speculative ideas.

    If I were trying to solve this [and I’m not], I might try to gamify it. Start a website where everyone [who signs up] gets a score based on the number of reviews done and the timeliness of those reviews. Announce and plug your website at conferences. Give people JavaScript badges that display their score on their homepages. Try to convince conferences or journals to slightly preferentially accept papers from people with high scores [in borderline cases] or to reject abusers. Basically, you’re talking about changing the culture and creating an institution. Have fun.
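    To fix ideas, a back-of-the-envelope version of such a score might look like the sketch below. The weights, penalties, and dates are entirely made up, and a real site would need something far more careful (and harder to game):

    ```python
    from datetime import date

    # Hypothetical records: (review deadline, date the review was delivered).
    # None means the reviewer accepted the job but never delivered.
    reviews = [
        (date(2011, 3, 1), date(2011, 2, 20)),  # early
        (date(2011, 5, 1), date(2011, 5, 30)),  # 29 days late
        (date(2011, 8, 1), None),               # accepted and vanished
    ]

    def reviewer_score(reviews, late_penalty=2.0, ghost_penalty=50.0):
        """Toy reputation score: +10 per delivered review, minus lateness penalties.

        The weights are arbitrary placeholders; a real site would have to
        tune them and decide how publicly to display the result.
        """
        score = 0.0
        for deadline, delivered in reviews:
            if delivered is None:
                score -= ghost_penalty  # worst case: the editor is left in limbo
            else:
                score += 10.0
                days_late = (delivered - deadline).days
                if days_late > 0:
                    score -= late_penalty * days_late
        return score

    print(reviewer_score(reviews))  # -88.0 for this rather delinquent reviewer
    ```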

    Speaking personally, I don’t agree to review very many papers, and when I do, I review them quickly, so none of this really speaks to me. I actually think the entire publication process is pretty broken as an incentive scheme anyhow; I’d be fine if all journals were replaced by arXiv.

    • Yeah, I think scores or rewards/reputation are the only way to do it.

      I am rather opposed to journals being replaced by arXiv, not because I think journals are the best way to do things, but because arXiv is not a superb solution either. However, these things go in cycles anyway. My old roommate Samidh wrote an interesting MA thesis on the history of peer review.

  2. I also had this thought: “There needs to be a way to make saying ‘no’ acceptable and obligatory.”

    For instance, in Spain no selection committee cares whether you review or not, and universities do not consider reviewing part of your research duties, as if to say that you should review at home in your free time… which most of the time is actually the case.

    So in a way, reviewing turns out to be a kind of “hobby” that you do whenever you don’t have anything else to do…

  3. I agree that the culture needs to change. I think we ought to have some sort of Reviewer of the Year awards. I’ve received a small number of exceptional reviews that deserve an award.

    I also tend to think maybe we ought to just use arXiv, and then use citation counts for arXiv papers to judge impact. On the other hand, I think the review process increases the quality of what appears in an archival journal. That is, the archival quality of arXiv is likely to be less than that of a journal.

    Based on my own experience as an editor:

    What is the distribution of review times? (Cue power-law brouhaha.)

    Definitely a long tail!

    What fraction of contacted reviewers fail to respond?

    It depends on whether the paper is in an area close to the editor’s, where the reviews are requested from people you know. The non-response rate can be high if you need reviews from people you don’t know.

    What fraction of reviewers accept?

    Half?

    One thing that’s frustrating is that people who move away from academic research tend to refuse reviews in their old fields, even if the submissions cite their own PhD work. The lack of incentive here is the big problem.

    For each paper, how does length/quality of review correlate with delay?

    Longer delays generally mean a lower-quality review. You get more reviews that are dashed off quickly out of guilt over being late.

    • See, if there were a way to actually get the stats on this and then publish them, it might get people thinking about it. Right now I feel like everyone does the same thing I do: bellyache about how slow the review process is. But it’s hard to understand “how” it’s broken.

