The Conyers bill and open access

Allie sent this blog post my way about the Conyers bill, about which Lawrence Lessig has been quite critical. At the moment the NIH requires all publications from research it funds to be posted (e.g. on PubMed) so that the public can read them. This makes sense because the taxpayers paid for this research.

What Conyers wants to do is end the requirement for free and public dissemination of research. Why? Lessig says he’s in the pocket of the publishing industry. From the standpoint of a taxpayer or a researcher, it’s hard to see a justification for this amendment. Conyers gives a procedural reason for the change, namely that “this so-called ‘open access’ policy was not subject to open hearings, open debate or open amendment.” So essentially he wants to go back to the status quo ante and then have a debate, rather than have a debate about whether we want to go back to the status quo ante.

From my perspective, spending Congressional time to do the equivalent of a Wikipedia reversion is a waste — if we want to debate whether to change the open access rules, let’s debate that now rather than changing the rules twice. I think we should expand open access to include the NSF too. It’s a bit tricky though, since most of my work is published (and publishable) within the IEEE. The professional societies could be a great ally in the open-access movement, but as Phil Davis points out, the rhetoric on both sides tends to leave them out.

Elsevier strikes again

Via Crooked Timber comes another story about the depths plumbed by Elsevier:

Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles–most of which presented data favorable to Merck products–that appeared to act solely as marketing tools with no disclosure of company sponsorship… Disclosure of Merck’s funding of the journal was not mentioned anywhere in the copies of issues obtained by The Scientist.

Elsevier has been involved in shady dealings before, but this is a new one for me. I recently turned down a request to review a paper for an Elsevier-published journal (citing their business practices), and this piece of news confirms my decision.

Infocom 2009: delay issues

Effective Delay Control for Online Network Coding
Joao Barros (University of Porto, PT); Rui Costa (Universidade do Porto / Instituto de Telecomunicações, PT); Daniele Munaretto (DoCoMo Euro-Labs, DE); Joerg Widmer (DoCoMo Euro-Labs, DE)
This talk tries to merge ARQ with network coding. The key idea is the notion of a “seen” packet: a packet is “seen” by a receiver if the receiver can compute that packet combined (by XOR) with only future packets. The delay analysis then proceeds by analyzing the chains of dependent packets induced by erasures. This paper looks at the multicast problem, where the issue is managing the delays to multiple receivers, figuring out which receiver is the “leader”, and tracking how the leader switches over time. For random erasures (different for each receiver) the gap between receivers performs a biased random walk whose location shows who the leader is. A coding strategy amounts to trying to control this random walk, and the proposed strategy maintains a “leader” by sending the XOR of each user’s last unseen packet when the leader loses a packet.
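To make the random-walk picture concrete, here is a quick toy simulation (my own sketch with made-up erasure probabilities, not the authors’ scheme): a sender broadcasts one packet per slot to two receivers with independent erasure probabilities, and the gap between their counts of received packets performs the biased random walk whose sign tells you who the leader is.

```python
import random

def simulate(slots=10_000, p1=0.1, p2=0.2, seed=0):
    """Toy model: each slot, each receiver independently erases the broadcast
    packet with its own probability; the gap in received counts is a biased random walk."""
    rng = random.Random(seed)
    seen = [0, 0]            # packets successfully received ("seen") by each receiver
    leader = None            # current leader (receiver with the larger count)
    switches = 0             # how many times the lead changes hands
    for _ in range(slots):
        for i, p in enumerate((p1, p2)):
            if rng.random() > p:        # packet survives the erasure channel
                seen[i] += 1
        gap = seen[0] - seen[1]
        current = 0 if gap > 0 else (1 if gap < 0 else leader)
        if current is not None and current != leader:
            if leader is not None:
                switches += 1
            leader = current
    return seen, switches

if __name__ == "__main__":
    seen, switches = simulate()
    print("seen counts:", seen, "| leader switches:", switches)
```

With unequal erasure probabilities the walk drifts toward the more reliable receiver, which is why the coding scheme has to intervene with XOR retransmissions to keep the gap under control.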

The Capacity Allocation Paradox
Asaf Baron (Technion – Israel Institute of Technology, IL); Isaac Keslassy (Technion, IL); Ran Ginosar (Technion, IL)
This talk was about a simple example of a network in which adding capacity can make a stable network unstable. To me it seemed to be a consequence of the particular model adopted for the network, namely that if a link of capacity C is available, then the transmitter will operate at rate C. The simple example of the paradox is a 2-user multiaccess link: a rate-1 arrival process feeds each of two users, each of which has an outgoing link of capacity 1 to a shared queue with output capacity 2, so the whole network is stable. However, if one user gets a capacity-2 link to the queue, then its traffic can hog the output link and cause increasing delay to the second user. The paradox was motivated by networks on a chip, which seem to be an interesting source of new problems with different emphases than traditional networking problems.
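As a sanity check on the hogging intuition, here is a toy slotted simulation (my own illustration with made-up burst parameters, not the construction from the paper): user 1’s rate-1 traffic arrives in bursts and is forwarded over an access link of capacity c1 into a shared FIFO queue served at 2 packets per slot, while user 2 injects exactly one packet per slot over a capacity-1 link. Raising c1 from 1 to 2 lets the bursts reach the shared queue intact and drives up user 2’s delay.

```python
from collections import deque

def run(c1, slots=10_000, burst=10):
    """Return user 2's mean queueing delay when user 1's access link has capacity c1."""
    user1_buf = deque()      # user 1's packets waiting at its access link
    shared = deque()         # shared FIFO queue; entries are (owner, arrival_slot)
    delays2 = []             # per-packet delays seen by user 2
    for t in range(slots):
        if t % burst == 0:                          # a burst of `burst` packets, average rate 1
            user1_buf.extend([t] * burst)
        for _ in range(min(c1, len(user1_buf))):    # user 1's link forwards up to c1 packets
            shared.append((1, user1_buf.popleft()))
        shared.append((2, t))                       # user 2 sends one packet every slot
        for _ in range(min(2, len(shared))):        # shared queue serves 2 packets per slot
            owner, arrival = shared.popleft()
            if owner == 2:
                delays2.append(t - arrival)
    return sum(delays2) / len(delays2)

if __name__ == "__main__":
    for c1 in (1, 2):
        print(f"user-1 link capacity {c1}: mean delay for user 2 = {run(c1):.2f}")
```

In this toy version the extra capacity only inflates user 2’s delay rather than destabilizing the queue, but it captures the mechanism: the faster link stops smoothing user 1’s bursts, and user 2 pays for it.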

Power-Aware Speed Scaling In Processor Sharing Systems
Adam Wierman (California Institute of Technology, US); Lachlan Andrew (Swinburne University of Technology, AU); Kevin Tang (Cornell University, US)
This talk was about assigning speeds to processing jobs in a queue: doing a job faster costs more power but reduces delay. There are different ways of assigning speeds, either a fixed speed for all jobs (static), a square wave for speed vs. time (gated static), or an arbitrary curve (dynamic). The metric they choose to look at is a sort of regularized energy, E[\mathrm{energy}] + \beta E[\mathrm{delay}], where energy is consumed at rate (\mathrm{speed})^{\alpha}. For a large number of jobs they get a limiting form for the optimal speed and show that a gated static policy performs within a factor of 2 of the optimal dynamic policy, which is verified by simulation. In general we may not know the arrival process, so choosing the duty cycle for a gated static policy may be hard a priori; in that case a dynamic strategy may be much better at handling model variability. This was one of those problems I had never thought about before, and I thought the results were pretty cute.
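To get a feel for the tradeoff, here is a back-of-the-envelope calculation (my own simplification with made-up numbers, not the paper’s model): with unit-mean jobs arriving at rate λ and a static speed s, the queue behaves like an M/M/1 (processor sharing has the same mean sojourn time), so E[delay] = 1/(s - λ), while a unit-size job run at speed s burns s^(α-1) in energy. A quick grid search then gives the best static speed for the combined cost.

```python
def cost(s, lam=1.0, alpha=2.0, beta=1.0):
    """Per-job cost E[energy] + beta * E[delay] for a static speed s (M/M/1 approximation)."""
    if s <= lam:
        return float("inf")                 # queue is unstable if speed <= arrival rate
    energy_per_job = s ** (alpha - 1)       # job of size 1 takes 1/s time at power s**alpha
    mean_delay = 1.0 / (s - lam)            # M/M/1 (and M/M/1-PS) mean sojourn time
    return energy_per_job + beta * mean_delay

def best_static_speed(lam=1.0, alpha=2.0, beta=1.0):
    speeds = [lam + 0.01 * k for k in range(1, 1000)]   # coarse grid search is enough here
    return min(speeds, key=lambda s: cost(s, lam, alpha, beta))

if __name__ == "__main__":
    s_star = best_static_speed()
    print(f"best static speed ~ {s_star:.2f}, cost ~ {cost(s_star):.2f}")
```

For λ = 1, α = 2, β = 1 the cost is s + 1/(s - 1), which is minimized at s = 2; the interesting part of the paper is that even the best dynamic policy can only beat a well-chosen gated static speed by about a factor of 2.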