Linkage

Some interesting stuff has passed my way while I’ve been in India (and one or two things from before). Might as well post it before I forget, no?

Slavoj Žižek may be a curmudgeonly Marxist, but the animation helps soften it, I think. I don’t think I fully agree with him, but there’s stuff in there to chew on.

The Purdue anonymization project won a big NSF award.

Tips for tasks related to graduating (h/t Bobak).

Some interesting news about the future of the textbook market. It’s doubly interesting since I am in Pune, a treasure-trove of cheaper editions of technical books.

Apparently I sometimes wear a lab coat.

Nixing negative reviewers

A question came up while chatting with a friend — how do you tell the editors of the journal to not ask certain people for a review? Say you submit a paper to a journal and in the cover letter you want some language to the effect that “please don’t choose Dr. X as a reviewer, since they will be biased.” This must be a relatively common situation, especially where people have axes to grind, and what better way to grind them than while reviewing the other camp’s paper or grant proposal?

Let’s create a cartoon situation: suppose Dr. X really hates your guts (intellectually, of course) — this is actually the case, and not just your own misperception of Dr. X. I know that at some schools, candidates for tenure can give a list of people not to ask for letters. But in the context of paper submission, how can you politely suggest that Dr. X may not be the most objective reviewer for your paper?

Completely ridiculous stock photography

I get the IEEE Communications and Signal Processing magazines electronically to save paper (I find they don’t make for great bus reading, so the print version is less appealing). Every month they dutifully send me an email with possibly the most ridiculous stock images. For example, here’s the one for the Signal Processing magazine:

Happy Signal Processing Dude

Oh man, I am so STOKED to get this Signal Processing Magazine! Wooo hoooo!

First off, who is this dude, and what is wrong with his life such that getting this magazine makes him so happy? Clearly he’s not an engineer, since he’s wearing a suit. Maybe he works in finance? Or perhaps government, since he’s walking down some pretty “city hall”-looking stairs. Maybe it’s a courthouse, and he’s been cleared of all charges, thanks to the evidence in the signal processing magazine?

Now here’s the Communications one:

Peek inside Communications

Hey there, want to have some sitcom-like hijinks with 4G communication systems?

Again, who is this woman, and why is she creepily hiding behind the magazine, only to pop around the side, holding the pages shut just as I’m opening it? You scared me, lady! What are you trying to do, give me a heart attack? Or are you some sort of not-so-subtle ploy to lure the predominantly male engineering audience to download the magazine?

Honestly, I wish IEEE would not bother paying for the graphic design of the download image and instead use the money for something else, like defraying subscription costs for developing nations. As it stands, these emails make me take the magazine less seriously.

UC Libraries vs. the Nature Publishing Group

Earlier this month, a letter was circulated to the UC Faculty regarding the Nature Publishing Group (NPG)’s proposal to increase the licensing fees for online access by 400%, which is pretty dramatic given a) the high cost of the subscription in the first place and b) the fact that library budgets are going down. There was a suggestion of a boycott.

NPG felt like they had been misrepresented, and issued a press statement saying “you guys are a bunch of whiners, our stuff is the best, and 7% price hikes per year are totally reasonable.” Furthermore, they said “you guys have been getting a crazy good deal for way too long anyhow and it’s time you paid your fair share.” I suppose behaving like complete jerks is an OK way to react when you are trying to sell somebody something, especially something that is made up of stuff written by your potential buyers. I wonder what their profit margins are like.

The University of California responded, pointing out that 7% increases, compounded, start getting out of hand pretty fast. “Plainly put, UC Faculty do not think that their libraries should have to pay exorbitant and unreasonable fees to get access to their own work.”
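To see how fast that compounds, here’s a quick back-of-the-envelope sketch (my own illustration in Python; the 7% rate is NPG’s figure, the time horizons are arbitrary):

```python
# How a 7% annual price increase compounds: the multiplier
# after n years is simply 1.07 ** n.
for years in (5, 10, 20, 30):
    multiplier = 1.07 ** years
    print(f"after {years:2d} years: {multiplier:.2f}x the original price")
```

The price roughly doubles in a decade and is nearly quadruple after twenty years, on top of a subscription that was expensive to begin with.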

Looks like PLoS better start rolling out some new titles!

More info can be found at the OSC website, which oddly doesn’t say what OSC stands for.

Pitfalls in author ordering

Apparently the number of co-authored papers in political science is on the rise, and there are questions about how to order the author names. I had never heard the phrase “the tyranny of the alphabet” used to refer to alphabetical author ordering before, but I know that since conventions differ across math/statistics, computer science, and electrical engineering, there ends up being a lot of confusion (especially on the part of graduate students) as to who actually did “most of the work” on a paper. Fan Chung Graham gives a succinct description of an ideal:

In math, we use the Hardy-Littlewood rule. That is, authors are alphabetically ordered and everyone gets an equal share of credit. The one who has worked the most has learned the most and is therefore in the best position to write more papers on the topic.

This ideal doesn’t really hold in electrical engineering (or computer science, for that matter), and can lead to some dangerous assumptions when people’s conventions vary or when you are doing interdisciplinary work.

Criticism of open access is backwards, as usual

Inside Higher Ed has a piece today on the presidents of liberal arts colleges writing to support the Federal Research Public Access Act of 2009. The bill would require federal agencies that sponsor research to come up with methods for archiving and publishing the research they fund, so that it would be “made immediately available” to the public. It would (essentially) apply only to journal papers, which raises a question about computer science, which lives the fast-and-dangerous conference life.

The article ends with a reaction from “Martin Frank, executive director of the American Physiological Society and coordinator of the Washington D.C. Principles for Free Access to Science,” who claimed that since there are many foreign journal subscribers, the argument that taxpayers should have access to the research is not very strong. Frank is concerned with non-profit publishers (such as professional societies like the IEEE), but in his eagerness to protect his own turf he completely ignores the fact that mega-publishers like Elsevier and the Nature Publishing Group are based in other countries. Elsevier is headquartered in Amsterdam, and NPG is run by Macmillan, which “is itself owned by German-based, family run company Verlagsgruppe Georg von Holtzbrinck GmbH.”

If Mr. Frank wants to make a nativist argument against an open access mandate, then perhaps he should support a ban on wasting American taxpayer dollars to fund foreign publishing houses. The whole “taxpayer” argument in the end is marketing for both sides — although in principle any citizen should have access to government-funded research, the real volume of readership comes from universities and industry. Federal money is spent many times over on the same piece of research — once to fund it, and then once for every (public) university library that has to buy a subscription to the journal where the result was published. University libraries will not stop subscribing to the IEEE journals just because NSF- and DARPA-funded research will be made available in (probably separate) repositories run by the NSF and DARPA. If a non-profit is publishing its journals at cost, then they should still be affordable. The for-profit publishers are the ones who will have to realize that the “value added” by the Nature brand is not worth the markup they charge.

Samidh Chakrabarti on Transacting Philosophy

I recently re-read my old roommate Samidh Chakrabarti’s master’s thesis, Transacting Philosophy: A History of Peer Review in Scientific Journals (Oxford, 2004). It’s a fascinating history of scientific publishing from the Royal Society up to the present, and shows that “peer review has never been inseparable from the scientific method.” His analysis is summed up in the following cartoon, which shows three distinct phases of peer review:
Samidh’s model of the three phases of peer review
When there are few journals but a large supply of papers, peer review is necessary to select the papers to be published. However, when printing became cheap in the 19th century, everybody and their uncle had a journal and sometimes had to solicit papers to fill their pages. After WWII the trend reversed again, so now peer review is “in.” In this longish post I’m going to summarize/highlight a few things I learned.

The first scientific journal, started by the Royal Society, was called Philosophical Transactions: giving some Account of the Present Undertakings, Studies and Labours of the Ingenious in many considerable Parts of the World, usually shortened to Phil. Trans. Henry Oldenburg, the secretary of the Society, came up with the idea of using referees. Samidh’s claim is that Oldenburg was motivated by intellectual property concerns: time stamps on submitted documents would let philosophers establish when they had made a discovery — Oldenburg essentially made Phil. Trans. the arbiter of priority. However, peer review was still necessary to provide quality guarantees, since the Royal Society was putting its name on the journal. Oldenburg furthermore singled out articles which were not reviewed by printing the following disclaimer:

sit penes authorem fides [let the author take responsibility for it]: We only set it downe, as it was related to us, without putting any great weight upon it.

Phil. Trans. was quite popular but not profitable. The Society ended up taking over full responsibility (including fiscal) for the journal, and decided that peer review would not be about endorsing the papers or guaranteeing correctness:

And the grounds of their choice are, and will continue to be, the importance or singularity of the subjects, or the advantageous manner of treating them; without pretending to answer for the certainty of the facts, or propriety of the reasonings, contained in the several papers so published, which must still rest on the credit or judgment of their respective authors.

In the 19th century all this changed. Peer review began to smack of anti-democracy (compare this to the intelligent design crowd now), and doctors of medicine had been upset ever since Edward Jenner’s 1796 paper on the smallpox vaccine was rejected by the Royal Society for having too small a sample size. Peer review made it tough for younger scientists to be heard, and politics played no small role in papers getting rejected. Those journals which still practiced peer review sometimes paid a hefty price. Samidh writes of Einstein:

In 1937 (a time when he was already a celebrity), he submitted an article to Physical Review, one of the most prestigious physics journals. The referees sent Einstein a letter requesting a few revisions before they would publish his article. Einstein was so enraged by the reviews that he fired off a letter to the editor of Physical Review in which he strongly criticized the editor for having shown his paper to other researchers… he retaliated by never publishing in Physical Review again, save a note of protest.

The 19th century also saw the rise of cheap printing and the industrial revolution which created a larger middle class that was literate and interested in science. A lot hadn’t been discovered yet, and an amateur scientist could still make interesting discoveries with their home microscope. There was a dramatic increase in magazines, journals, gazettes, and other publications, each with their own editor, and each with a burning need to fill their pages.

The content of these new scientific journals became a reflection of the moods and ideas of their editors. Even the modern behemoths, Science and Nature, used virtually no peer review. James McKeen Cattell, the editor of Science from 1895 to 1944, got most of his content from personal solicitations. The editor of Nature would just ask people around the office or his friends at the club. Indeed, the Watson-Crick paper on the structure of DNA was not reviewed because the editor said “its correctness is self-evident.”

As the 20th century dawned, science became more specialized and discoveries became more rapid, so that editors could not themselves curate the contents of their journals. As the curve shows, the number of papers written started to exceed the demand of the journals. In order to maintain their competitive edge and get the “best” papers, peer review became necessary again.

Another important factor was the rise of Nazi Germany and the corresponding decline of German science as Jewish and other scientists fled. Elsevier hired these exiles to start a number of new journals with translations into English, and became a serious player in the scientific publishing business. And it was a business — Elsevier could publish more “risky” research because it had other revenue streams, and so it could publish a larger volume of research than other publishers. This was good and bad for science as a whole — journals were published more regularly, but the content was mixed. After the war, investment in science and technology research increased; since the commercial publishers were more established, they had an edge.

How could the quality of a journal be measured?

Eugene Garfield came up with a method of providing exactly this kind of information starting in 1955, though it wasn’t his original intent. Garfield was intrigued by the problem of how to trace the lineage of scientific ideas. He wanted to know how the ideas presented in an article percolated down through other papers and led to the development of new ideas. Garfield drew his inspiration from law indexes. These volumes listed a host of court decisions. Under each decision, they listed all subsequent decisions that used it as a precedent. Garfield realized that he could do the same thing with scientific papers using bibliographical citations. He conceived of creating an index that not only listed published scientific articles, but also listed all subsequent articles that cited each article in question. Garfield founded the Institute for Scientific Information (ISI) to make his vision a reality. By 1963, ISI had published the first incarnation of Garfield’s index, which it called the Science Citation Index.
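To make the mechanics concrete, here is a toy sketch (my own illustration in Python, with invented papers and citation data, not ISI’s actual machinery) of Garfield’s inversion, along with the citation-to-article ratio it enables, which comes up next:

```python
# Toy citation data (all invented for illustration).
# Each paper records its publication year and the papers it cites.
papers = {
    "A": {"year": 2008, "cites": []},
    "B": {"year": 2008, "cites": ["A"]},
    "C": {"year": 2009, "cites": ["A", "B"]},
    "D": {"year": 2010, "cites": ["A", "B", "C"]},
    "E": {"year": 2010, "cites": ["B", "C"]},
}

# Garfield's inversion: for each paper, list the later papers that cite it.
cited_by = {name: [] for name in papers}
for name, info in papers.items():
    for ref in info["cites"]:
        cited_by[ref].append(name)

print(cited_by["B"])  # ['C', 'D', 'E']

# A two-year "impact factor" for 2010: citations made in 2010 to papers
# published in 2008-2009, divided by the number of papers from 2008-2009.
window = {name for name, info in papers.items() if info["year"] in (2008, 2009)}
cites_in_2010 = sum(
    ref in window
    for info in papers.values()
    if info["year"] == 2010
    for ref in info["cites"]
)
print(cites_in_2010 / len(window))  # 5 citations / 3 papers ≈ 1.67
```

The real Science Citation Index involved far more curation (deciding which items count as “citable,” disambiguating references, and so on), but the underlying data structure is essentially this inverted index.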

And hence the impact factor was born — a ratio of citations to citable articles. This proved helpful to librarians as well as tenure and promotion committees: they just had to look at the aggregate impact of a professor’s research. Everything became about the impact factor, and the way to improve the impact factor of a journal was to improve the quality (or at least perceived quality) of its peer review. And fortunately for publishers, most of the reviewing labor was (and is) given for free — “unpaid editorial review is the only thing keeping the journal industry solvent.” However, as Samidh puts it succinctly in his thesis:

All of this sets aside the issue of whether the referee system in fact provides the best possible quality control. But this merely underscores the fact that in the historical record, the question of peer review’s efficacy has always been largely disconnected from its institutionalization. To summarize the record, peer review became institutionalized largely because it helped commercial publishers inexpensively sustain high impact factors and maintain exalted positions in the hierarchy of journals. Without this hierarchy, profits would vanish. And without this hierarchy, the entire system of academic promotion in universities would be called into question. Hence, every scientist’s livelihood depends on peer review and it has become fundamental to the professional organization of science. As science is an institution chiefly concerned with illuminating the truth, it’s small wonder, then, that editorial peer review has become confused with truth validation.

It all seems like a vicious cycle — is there any way out? Samidh claims that we’re moving to a “publish, then filter” approach, where things are put on arXiv and then reviewed. He’s optimistic about “a system where truth is debated, not assumed, and where publication is for the love of knowledge, not prestige.” I’m a little more dubious, to be honest. But it’s a fascinating history, and some historical perspective may yield clues about how to design a system with the right incentives for the future of scientific publishing.