ICML 2019 encouraged code submission. That is great!

ICML 2019 allowed optional code submission for papers. As an area chair, I handled a mix of papers, some more theoretical than others, but almost all of them had some empirical validation. Not all of them submitted code. For a paper with a theorem, the experiments can range from sanity checks to a detailed exploration of the effects of some parameters for problem sizes of interest. For more applied/empirical papers, the experiments do the heavy lifting of making the case. A survey just went out to Area Chairs asking to what degree code submission was taken as a factor in our recommendations to the senior program committee.

Absent a compelling reason not to submit code, I think that ensuring some form of reproducibility is important for both transparency and the open communication of ideas. Reviewers already approach a paper with some skepticism — the burden of proof is on the authors to make a compelling argument in their paper. But if the argument is largely empirical (e.g. “this heuristic works very well for problem A”), then meeting that burden means making a case that the experiments, as described in the paper, were in fact carried out and are not mere fabrications. What better way to do that than to provide the implementation of the method?

Providing implementations is not always possible: examples abound in other fields, such as electrical engineering. In antenna design, the schematic might be provided in the paper, but the actual fabricated antenna and anechoic chamber are not available to the reviewers. Nobody seems to think this is a problem: reviewers somehow trust that the authors are not making things up. Shouldn’t we trust ML authors as well?

One factor that makes a difference is that conferences outside of computer science are just not as competitive. CS conferences have a short review period in which to evaluate a large volume of papers, and the prestige conferred by getting a paper accepted to a top CS conference is often compared to that of getting a paper accepted to a top journal. Authors benefit a lot from the research community accepting their paper. It is only appropriate that they also share a lot.

Let’s take an example. Suppose you are working in academia and have developed a new method for solving Problem X, and you are going to launch a startup based on this method. How much more appealing would it be to funders if you had one (or more!) ICML papers about how you’ve totally nailed Problem X, showing that you are a total rockstar in the ML/AI community? But your competitive advantage might be at risk if reviewers (and later the community) have access to your code. So you write a paper that discusses the main ideas behind your approach and gives the experimental results, but omits the implementation and the 5 other things you had to do to make the method actually work. In this case you’re getting the stamp of approval while not sharing with the rest of the research community.

Of course, one can imagine that submissions from industry authors might rely on proprietary code bases which they cannot (for policy reasons) provide. But an academic conference is about the open and free exchange of ideas, knowledge, and techniques; a trade show would be a more appropriate venue for showing results without sharing methods. I’m not trying to suggest that industry researchers are nefarious in some way, but it’s important to think about the incentives and benefits. The rules for submission (in this case, code submission) articulate some of the values of the research community. Encouraging (but not requiring) code submission prompts authors to signal (and allows reviewers to consider) whether they agree to the social contract.