Odd thing to see in IKEA

IKEA populates its display units (bookshelves, staged living rooms, and so on) with real books. On my last visit I was quite surprised to see many copies of Vikram Seth’s The Golden Gate translated into Swedish.

Speaking of IKEA, HTMLGIANT will tell you what to read in your IKEA chair (h/t to Bookslut). I’m surprised they didn’t choose the ubiquitous Poäng.

Samidh Chakrabarti on Transacting Philosophy

I recently re-read my old roommate Samidh Chakrabarti’s master’s thesis, Transacting Philosophy: A History of Peer Review in Scientific Journals (Oxford, 2004). It’s a fascinating history of scientific publishing from the Royal Society up to the present, and shows that “peer review has never been inseparable from the scientific method.” His analysis is summed up in the following cartoon, which shows three distinct phases of peer review:
[Figure: Samidh’s model of the three phases of peer review, sketching the supply of papers against the journals’ demand for them over time]
When there are few journals but a large supply of papers, peer review is necessary to select the papers to be published. However, when printing became cheap in the 19th century, everybody and their uncle had a journal and sometimes had to solicit papers to fill their pages. After WWII the trend reversed again, so now peer review is “in.” In this longish post I’m going to summarize/highlight a few things I learned.

The first scientific journal, started by the Royal Society, was called Philosophical Transactions: giving some Account of the Present Undertakings, Studies and Labours of the Ingenious in many considerable Parts of the World, usually shortened to Phil. Trans. Henry Oldenburg, the secretary of the Society, came up with the idea of using referees. Samidh’s claim is that Oldenburg was motivated by intellectual property concerns. Time stamps for submitted documents would let philosophers establish when they made a discovery — Oldenburg essentially made Phil. Trans. the arbiter of priority. However, peer review was necessary to provide quality guarantees, since the Royal Society was putting its name on the journal. He furthermore singled out articles that were not reviewed by printing the following disclaimer:

sit penes authorem fides [let the author take responsibility for it]: We only set it downe, as it was related to us, without putting any great weight upon it.

Phil. Trans. was quite popular but not profitable. The Society ended up taking full responsibility (including financial responsibility) for the journal, and decided that peer review would not be about endorsing the papers or guaranteeing their correctness:

And the grounds of their choice are, and will continue to be, the importance or singularity of the subjects, or the advantageous manner of treating them; without pretending to answer for the certainty of the facts, or propriety of the reasonings, contained in the several papers so published, which must still rest on the credit or judgment of their respective authors.

In the 19th century all this changed. Peer review began to smack of anti-democracy (compare this to the intelligent design crowd now), and doctors of medicine had been upset ever since the Royal Society rejected Edward Jenner’s report on his 1796 smallpox vaccine for having too small a sample size. Peer review made it tough for younger scientists to be heard, and politics played no small role in papers getting rejected. Those journals which still practiced peer review sometimes paid a hefty price. Samidh writes of Einstein:

In 1937 (a time when he was already a celebrity), he submitted an article to Physical Review, one of the most prestigious physics journals. The referees sent Einstein a letter requesting a few revisions before they would publish his article. Einstein was so enraged by the reviews that he fired off a letter to the editor of Physical Review in which he strongly criticized the editor for having shown his paper to other researchers… he retaliated by never publishing in Physical Review again, save a note of protest.

The 19th century also saw the rise of cheap printing and the industrial revolution, which created a larger middle class that was literate and interested in science. A lot hadn’t been discovered yet, and an amateur scientist could still make interesting discoveries with a home microscope. There was a dramatic increase in magazines, journals, gazettes, and other publications, each with its own editor and a burning need to fill its pages.

The content of these new scientific journals became a reflection of the moods and ideas of their editors. Even the modern behemoths, Science and Nature, used virtually no peer review. James McKeen Cattell, the editor of Science from 1895 to 1944, got most of his content from personal solicitations. The editor of Nature would just ask people around the office or his friends at the club. Indeed, the Watson-Crick paper on the structure of DNA was not reviewed because the editor said “its correctness is self-evident.”

As the 20th century dawned, science became more specialized and discoveries became more rapid, so that editors could not themselves curate the contents of their journals. As the curve shows, the number of papers written started to exceed the demand of the journals. In order to maintain their competitive edge and get the “best” papers, peer review became necessary again.

Another important factor was the rise of Nazi Germany and the corresponding decline of German science as Jewish and other scientists fled. Elsevier hired these exiles to start a number of new journals with translations into English, and became a serious player in the scientific publishing business. And it was a business — Elsevier could publish more “risky” research because it had other revenue streams, and so it could publish a larger volume of research than other publishers. This was good and bad for science as a whole — journals were published more regularly, but the content was mixed. After the war, investment in science and technology research increased; since the commercial publishers were more established, they had an edge.

How could the quality of a journal be measured?

Eugene Garfield came up with a method of providing exactly this kind of information starting in 1955, though it wasn’t his original intent. Garfield was intrigued by the problem of how to trace the lineage of scientific ideas. He wanted to know how the ideas presented in an article percolated down through other papers and led to the development of new ideas. Garfield drew his inspiration from law indexes. These volumes listed a host of court decisions. Under each decision, they listed all subsequent decisions that used it as a precedent. Garfield realized that he could do the same thing with scientific papers using bibliographical citations. He conceived of creating an index that not only listed published scientific articles, but also listed all subsequent articles that cited each article in question. Garfield founded the Institute for Scientific Information (ISI) to make his vision a reality. By 1963, ISI had published the first incarnation of Garfield’s index, which it called the Science Citation Index.

And hence the impact factor was born — a ratio of citations to citable articles. This proved helpful to librarians as well as to tenure and promotion committees, who could just look at the aggregate impact of a professor’s research. Everything became about the impact factor, and the way to improve a journal’s impact factor was to improve the quality (or at least the perceived quality) of its peer review. Fortunately for the publishers, most of that reviewing was (and is) done for free — “unpaid editorial review is the only thing keeping the journal industry solvent.” However, as Samidh puts it succinctly in his thesis:

All of this sets aside the issue of whether the referee system in fact provides the best possible quality control. But this merely underscores the fact that in the historical record, the question of peer review’s efficacy has always been largely disconnected from its institutionalization. To summarize the record, peer review became institutionalized largely because it helped commercial publishers inexpensively sustain high impact factors and maintain exalted positions in the hierarchy of journals. Without this hierarchy, profits would vanish. And without this hierarchy, the entire system of academic promotion in universities would be called into question. Hence, every scientist’s livelihood depends on peer review and it has become fundamental to the professional organization of science. As science is an institution chiefly concerned with illuminating the truth, it’s small wonder, then, that editorial peer review has become confused with truth validation.

It all seems like a vicious cycle — is there any way out? Samidh claims that we’re moving to a “publish, then filter” approach, where papers are posted to the arXiv first and reviewed afterwards. He’s optimistic about “a system where truth is debated, not assumed, and where publication is for the love of knowledge, not prestige.” I’m a little more dubious, to be honest. But it’s a fascinating history, and some historical perspective may yield clues about how to design a system with the right incentives for the future of scientific publishing.

Illinois Wireless Summer School

I just came back from the Illinois Wireless Summer School, hosted by the Illinois Center for Wireless Systems. Admittedly, I had a bit of an ulterior motive in going, since it meant a trip home to see my parents (long overdue!), but I found the workshop a pretty valuable crash course covering the whole breadth of wireless technology. The week started out with lectures on propagation, wireless channel modeling, and antennas, and ran up to descriptions of WiMAX and LTE. Slides for some of the lectures are available online.

Some tidbits and notes:

  • Venugopal Veeravalli gave a nice overview of channel modeling, which was a good refresher since I hadn’t really revisited the material since taking David Tse’s wireless course at the beginning of grad school. Xinzhou Wu talked about modulation issues, noting that universal frequency reuse may be bad for users near the edge of a cell, and mentioned Flarion’s flexband idea, which I hadn’t heard of before.
  • Jennifer Bernhard talked about antenna design, which I had only the sketchiest introduction to 10 years ago. She pointed out that actually getting independent measurements from two antennas by spacing them just the right distance apart is nearly impossible, so coupling effects should be worked into MIMO models (at least, this is what I got out of it). Also, the placement of the antenna on your laptop matters a lot — my Mac is lousy at finding the WiFi because its antenna is sub-optimally positioned.
  • Nitin Vaidya discussed Dynamic Source Routing, which I had heard about but never really learned before.
  • Dina Katabi and Sachin Katti talked about network coding and its implementation. The issues that asynchronous communication creates for channel estimation in analog network coding were something I had missed in earlier encounters with their work (a toy sketch of the basic digital XOR relaying idea appears just after this list).
  • P. R. Kumar talked about his work with I-Hong Hou and Vivek Borkar on QoS guarantees in a simple downlink model. I had seen this talk at Infocom, but the longer version had more details, so I think I understood it better this time.
  • Wade Trappe and Yih-Chun Hu talked about a ton of security problems (so many that I got a bit lost, but luckily I have the slides). In particular, they talked about how many of the adversarial assumptions commonly made are unrealistic for wireless, since adversaries can eavesdrop, jam, spoof users, and so on. They mentioned the Dolev-Yao threat model, from FOCS 1981, which I should probably read more about. There were some slides on intrusion detection, which I think is an interesting problem that could also be looked at from the EE/physical layer side.
  • R. Srikant and Attila Eryilmaz gave a nice (but dense) introduction to resource allocation and network utilization problems from the optimization standpoint. Srikant showed how some of the results Kumar talked about can also be interpreted from this approach. There was also a little bit of MCMC that showed up, which got me thinking about some other research problems…
  • The industry speakers didn’t post their slides, but they offered a different (and somewhat less tutorial) perspective. Victor Bahl from MSR gave a talk on white space networking (also known as cognitive radio, though he seems to eschew that term). Dilip Krishnaswamy (Qualcomm) talked about WWAN architectures, which differ architecturally from voice and other kinds of networks; in particular, where the internet cloud sits with respect to the other system elements was interesting to me. Amitava Ghosh (Motorola) broke down LTE and WiMAX for us in gory detail.
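
Since network coding came up above, here is a minimal sketch of the basic digital idea behind it: a two-way relay broadcasting the XOR of two packets. This is only an illustrative toy under that simple setup; the analog, signal-level network coding that Katti and Katabi discussed is considerably more involved and is not what this code implements.

```python
# Toy two-way relay with XOR network coding (digital version only).
# Nodes A and B each want the other's packet. Instead of forwarding the
# two packets separately, the relay broadcasts a single XOR-ed packet,
# and each node recovers the other's packet using its own as side information.

def xor_packets(p, q):
    """Bitwise XOR of two equal-length packets."""
    assert len(p) == len(q)
    return bytes(a ^ b for a, b in zip(p, q))

packet_a = b"HELLO_FROM_A_1"
packet_b = b"GREETINGS_B_02"

relay_broadcast = xor_packets(packet_a, packet_b)

recovered_at_a = xor_packets(relay_broadcast, packet_a)  # A recovers B's packet
recovered_at_b = xor_packets(relay_broadcast, packet_b)  # B recovers A's packet

assert recovered_at_a == packet_b
assert recovered_at_b == packet_a
print("one relay transmission delivered both packets")
```

The point is the bandwidth saving: one broadcast from the relay replaces two separate forwarded transmissions.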

Privacy for prescriptions

The NY Times has an article on how the information on our prescriptions is “a commodity bought and sold in a murky marketplace, often without the patients’ knowledge or permission.” I was informed by UC Berkeley in the spring that some of my information may have been compromised, although only “Social Security numbers, health insurance information and non-treatment medical information,” and not “diagnoses, treatments and therapies.” But in that case it was theft, not out-and-out sale. The Times article suggests that the new health care bill will tighten up some of the information leakage, but I am unconvinced.

Of more interest is the second half of the article, on privacy in the data mining of medical information, a topic that is a strong motivator for some of the research I’m working on now. I’m not too comforted by pronouncements from industry people:

“Data stripped of patient identity is an important alternative in health research and managing quality of care,” said Randy Frankel, an IMS vice president. As for the ability to put the names back on anonymous data, he said IMS has “multiple encryptions and various ways of separating information to prevent a patient from being re-identified.”

IMS Health reported operating revenue of $1.05 billion in the first half of 2009, down 10.6 percent from the period a year earlier. Mr. Frankel said he did not expect growing awareness of privacy issues to affect the business.

There’s no incentive to develop real privacy-preservation systems if you make money like that and don’t think that pressure is going to change your business model. As for the vague handwaving about “multiple encryptions and… separating information,” color me unconvinced again.
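
To make that skepticism concrete, here is a toy sketch of the classic linkage attack: even with names stripped, joining on quasi-identifiers like ZIP code, birth date, and sex against a public, identified dataset can re-attach names to “anonymous” prescriptions. All records and field names below are made up for illustration; this is not IMS’s data or pipeline.

```python
# Toy linkage attack on "anonymized" prescription records (made-up data).
# Names are stripped, but quasi-identifiers (zip, birthdate, sex) remain.

anonymized_rx = [
    {"zip": "94704", "birthdate": "1975-03-02", "sex": "F", "drug": "drug_X"},
    {"zip": "61820", "birthdate": "1980-11-17", "sex": "M", "drug": "drug_Y"},
]

# A public, identified dataset (e.g. a voter roll) shares those same fields.
public_roll = [
    {"name": "Alice Smith", "zip": "94704", "birthdate": "1975-03-02", "sex": "F"},
    {"name": "Bob Jones",   "zip": "61820", "birthdate": "1980-11-17", "sex": "M"},
]

quasi_ids = ("zip", "birthdate", "sex")

def key(record):
    """Project a record onto its quasi-identifiers."""
    return tuple(record[f] for f in quasi_ids)

lookup = {key(person): person["name"] for person in public_roll}

# Join on quasi-identifiers to re-attach names to prescriptions.
for rx in anonymized_rx:
    name = lookup.get(key(rx))
    if name is not None:
        print(f"{name} was prescribed {rx['drug']}")
```

Stripping names alone does nothing against this kind of join, which is exactly why the handwaving above is unconvincing.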

I think it’s time for a new take on privacy laws and technologies.

Visit to University of Washington

After ISIT I went to visit the Electrical Engineering Department at the University of Washington. I was invited up there by Maya Gupta, who told me about a little company she started called Artifact Puzzles.

On the research end of things, I got a good overview of the learning problems her group is working on and their applications to color reproduction. I also got a chance to chat with Maryam Fazel about rank minimization problems, Marina Meilă about machine learning and distance models for rankings (e.g. the Fligner-Verducci model), and David Thorsley about self-assembling systems and consensus problems. All in all I learned a lot!

On the social side I got to hang out with friends in Seattle and at UW and hiked for an afternoon at Mt. Rainier. Photos are on Picasa!

ISIT 2009 : talks part four

The Gelfand-Pinsker Channel: Strong Converse and Upper Bound for the Reliability Function
Himanshu Tyagi, Prakash Narayan
Strong Converse for Gel’fand-Pinsker Channel
Pierre Moulin

Both of these papers proved the strong converse for the Gel’fand-Pinsker channel, i.e. the discrete memoryless channel with an i.i.d. state sequence drawn according to P_S, where the realized state sequence is known ahead of time (noncausally) at the encoder. The first paper proved a technical lemma about the image size of “good codeword sets” which are jointly typical conditioned on a large subset of the typical set of S^n sequences. That is, given a code and a set of almost \exp(n H(P_S)) typical state sequences in S^n for which the average probability of error is small, they get a bound on the rate of the code. They then derive bounds on error exponents for the channel. The second paper has a significantly more involved argument, but one which can be extended to multiaccess channels with states known to the encoder.
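
For context, the capacity to which these strong converses apply is Gel’fand and Pinsker’s classical formula (the standard result, not a contribution of either paper):

C = \max_{P_{U,X|S}} \left[ I(U;Y) - I(U;S) \right],

where U is an auxiliary random variable and the maximum is over conditional distributions of (U,X) given the state S.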

Combinatorial Data Reduction Algorithm and Its Applications to Biometric Verification
Vladimir B. Balakirsky, Anahit R. Ghazaryan, A. J. Han Vinck

The goal of this paper was to compute short fingerprints f(\mathbf{x}) from long binary strings \mathbf{x} so that a verifier can look at a new long vector \mathbf{y} and tell whether or not \mathbf{y} is close to \mathbf{x} based on f(\mathbf{x}). This is a little different from hashing, where we could first compute f(\mathbf{y}). They develop a scheme which stores the index of a reference vector \mathbf{c} that is “close” to \mathbf{x} and the distance d(\mathbf{x},\mathbf{c}). This can be done with low complexity. They calculated false accept and reject rates for this scheme. Since the goal is not reconstruction or approximation, but rather a kind of classification, they can derive a reference vector set which has very low rate.
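
A minimal sketch of this kind of distance-based verification is below. The acceptance rule is my own guess based on the triangle inequality (if \mathbf{y} is within t of \mathbf{x}, then d(\mathbf{y},\mathbf{c}) and d(\mathbf{x},\mathbf{c}) can differ by at most t); it is illustrative only and not necessarily the exact rule or parameters in the paper.

```python
# Toy sketch of verification from a short distance-based fingerprint.
# NOTE: the acceptance rule is an illustrative guess via the triangle
# inequality, not necessarily the scheme analyzed in the paper.

import random

def hamming(u, v):
    """Hamming distance between two equal-length binary vectors."""
    return sum(a != b for a, b in zip(u, v))

n = 64                      # length of the long binary strings
references = [[random.randint(0, 1) for _ in range(n)] for _ in range(8)]

def fingerprint(x):
    """Store only the index of the closest reference vector and the distance to it."""
    idx = min(range(len(references)), key=lambda i: hamming(x, references[i]))
    return idx, hamming(x, references[idx])

def verify(y, fp, threshold):
    """Accept y if the fingerprint does not rule out d(x, y) <= threshold."""
    idx, dist_x_to_ref = fp
    # If |d(y,c) - d(x,c)| > threshold, the triangle inequality forces d(x,y) > threshold.
    return abs(hamming(y, references[idx]) - dist_x_to_ref) <= threshold

x = [random.randint(0, 1) for _ in range(n)]
fp = fingerprint(x)
y_close = list(x)
y_close[0] ^= 1                              # flip a single bit
print(verify(y_close, fp, threshold=3))      # True: consistent with being close to x
```

The appeal is that the verifier only needs the reference index and one distance, which is far shorter than storing \mathbf{x} itself.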

Two-Level Fingerprinting Codes
N. Prasanth Anthapadmanabhan, Alexander Barg

This paper looks at a variant of the fingerprinting problem, in which a content creator makes several fingerprinted versions of an object (e.g. a piece of software) and then a group of pirates can take their versions and try to create a new object with a valid fingerprint. The marking assumption means that the pirates can only alter the positions in which their copies differ. The goal is to build a code such that a verifier looking at an object produced by t pirates can identify at least one of the pirates. In the two-level problem, the objects are coarsely classified into groups (e.g. by geographic region), and the verifier wants at least to identify the group of one of the pirates when there are more than t pirates. They provide some conditions for traceability along with constructions. This framework can also be extended to more than two levels.
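
To illustrate the marking assumption with a toy example (made-up fingerprints, not a construction from the paper): a coalition comparing its copies can only tamper with the positions where the copies disagree, while positions where all copies agree pass into the forgery unchanged, and that is what tracing exploits.

```python
# Toy illustration of the marking assumption (made-up binary fingerprints).
# Pirates compare their copies; positions where all copies agree are
# "undetected" and must be carried over intact into the forged copy.

import random

copies = [
    [0, 1, 1, 0, 1, 0, 0, 1],   # pirate 1's fingerprinted copy
    [0, 1, 0, 0, 1, 1, 0, 1],   # pirate 2's fingerprinted copy
]

forgery = []
for symbols in zip(*copies):
    if len(set(symbols)) == 1:
        forgery.append(symbols[0])              # undetected position: must keep it
    else:
        forgery.append(random.choice(symbols))  # detected position: pirates may pick either value

print(forgery)
```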