# Yet more on Elsevier’s fake journals

But wait… there’s more:

> Scientific publishing giant Elsevier put out a total of six publications between 2000 and 2005 that were sponsored by unnamed pharmaceutical companies and looked like peer reviewed medical journals, but did not disclose sponsorship, the company has admitted… Elsevier declined to provide the names of the sponsors of these titles, according to the company spokesperson… last week, Elsevier indicated that it had no plans of looking into the matter further, but that decision has apparently been reversed.

> “We are currently conducting an internal review but believe this was an isolated practice from a past period in time,” Hansen continued in the Elsevier statement. “It does not reflect the way we operate today. The individuals involved in the project have long since left the company. I have affirmed our business practices as they relate to what defines a journal and the proper use of disclosure language with our employees to ensure this does not happen again.”

I guess they’re saying “mistakes were made.”

# relative price index table

JDO sent me a link to a journal rating database that tries to calculate a “relative price index” for different journals:

> The coloration (red for very low value, yellow for low value, and green for good value) is computed by comparing the composite price index to the median for non-profit journals in the same subject. Be advised that price per citation, price per article and the composite index are not perfect measures of value. Neither of us are experts in most of the fields represented, and others may reasonably, or unreasonably, disagree with the value assessment.

This provides a counterpoint to the impact factor commonly bandied about at academic gatherings. High impact is only one aspect of cost-effectiveness. For those information theorists out there:

| Field | Value |
| --- | --- |
| Title | IEEE TRANSACTIONS ON INFORMATION THEORY |
| Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
| ISSN | 0018-9448 |
| Subject | Computer Science, Engineering |
| Profit Status | Non-Profit |
| Year First Published | 1953 |
| Price per article | 2.43 |
| Price per citation | 1.45 |
| Composite Price Index | 1.88 |
| Relative Price Index | 0.24 |

It’s in the green, which is appropriate, I guess. (Don’t worry, the Transactions on Signal Processing has an RPI of 0.55, so it’s not just judging the journal by its color).
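Incidentally, the numbers in the table are consistent with the composite index being the geometric mean of price per article and price per citation. That is a guess on my part (the site doesn’t say), but here is a small sketch under that assumption; the function names are mine:

```python
import math

def composite_price_index(price_per_article, price_per_citation):
    # Assumption (mine): the composite is the geometric mean of the two
    # per-unit prices; the IT Transactions numbers above fit this.
    return math.sqrt(price_per_article * price_per_citation)

def relative_price_index(composite, median_nonprofit):
    # The RPI compares a journal's composite index to the median
    # composite for non-profit journals in the same subject.
    return composite / median_nonprofit

cpi = composite_price_index(2.43, 1.45)   # the IT Transactions numbers
print(round(cpi, 2))                      # 1.88, matching the table
```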

# Infocom 2009 : routing

Scalable Routing Via Greedy Embedding
Cedric Westphal (Docomo Labs USA, US); Guanhong Pei (Virginia Tech, US)
In a network, a greedy routing strategy that sends packets to the neighbor closest to the destination may require routing tables of size $O(n)$. What we would like is to embed the network into (Euclidean or other) space in such a way that greedy routing requires only knowledge of your own location and your neighbors’ locations. The dimension needed may be as high as $O(\log n)$, but the embedded graph may not be faithful to the true network, resulting in longer paths than greedy routing on the true network. This paper presents a practical distributed algorithm for embedding a graph with $n$ vertices into $\mathbb{R}^{\log n}$ so that greedy forwarding works. It works by extracting a spanning tree from the network and then using random projections on the label space to go from dimension $n$ to dimension $\log n$, using the popular results of Achlioptas. The tree structure serves as a backbone to avoid too much backtracking. The main point seems to be that the algorithm can be implemented in a distributed manner, but I was not clear on how this would be done.
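To illustrate just the dimension-reduction step (this is my own sketch of an Achlioptas-style sparse projection, not the authors’ algorithm), here is how $n$-dimensional labels get squashed down to $k = O(\log n)$ coordinates while roughly preserving distances:

```python
import numpy as np

def achlioptas_projection(n, k, rng):
    # Sparse projection matrix (Achlioptas, 2003): entries are
    # +sqrt(3), 0, -sqrt(3) with probabilities 1/6, 2/3, 1/6,
    # scaled by 1/sqrt(k) so distances are preserved in expectation.
    entries = rng.choice([np.sqrt(3.0), 0.0, -np.sqrt(3.0)],
                         size=(k, n), p=[1 / 6, 2 / 3, 1 / 6])
    return entries / np.sqrt(k)

rng = np.random.default_rng(0)
n, k = 1024, 64                        # think of k as O(log n)
labels = rng.standard_normal((20, n))  # 20 hypothetical node labels
R = achlioptas_projection(n, k, rng)
coords = labels @ R.T                  # low-dimensional coordinates

# By Johnson-Lindenstrauss, pairwise distances are approximately
# preserved, which is what greedy forwarding on the embedded
# coordinates needs.
d_hi = np.linalg.norm(labels[0] - labels[1])
d_lo = np.linalg.norm(coords[0] - coords[1])
```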

Greedy Routing with Bounded Stretch
Roland Flury (ETH Zurich, Switzerland); Sriram Pemmaraju (The University of Iowa, US); Roger Wattenhofer (ETH Zurich, Switzerland)
This was another talk about greedy routing. If the network has holes in it (imagine a lake) then greedy routing can get stuck because there is no greedy step to take. So there are lots of algorithms out there for recovering when you are stuck. It would be far nicer to get a coordinate system for the graph such that greedy routing always works (in some sense), but so that the stretch is not too bad. This talk proposed a scheme which had three ingredients: spanning trees, separators, and isometric greedy embeddings. Basically the idea (from what I saw in the talk) is to use the existence of separators in the graph to build a tree cover of the graph and then do routing along the best tree in the cover. This guarantees the bounded stretch, and the number of trees needed is not too large, so the labels are not too large either. I think it was a rather nice paper, and between the previous one and this one I was introduced to a new problem I had not thought about before.
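To make the failure mode concrete, here is a toy example (mine, not the paper’s) where greedy geographic forwarding gets stuck at a local minimum even though a path around the hole exists:

```python
import math

# A tiny network with a "hole": node b is a dead end close to the
# destination e, and the only real route goes around via c and d.
pos = {'a': (0, 0), 'b': (1, 0), 'c': (0, 2), 'd': (4, 2), 'e': (4, 0)}
nbrs = {'a': ['b', 'c'], 'b': ['a'], 'c': ['a', 'd'],
        'd': ['c', 'e'], 'e': ['d']}

def greedy_route(src, dst):
    path, cur = [src], src
    while cur != dst:
        best = min(nbrs[cur], key=lambda v: math.dist(pos[v], pos[dst]))
        if math.dist(pos[best], pos[dst]) >= math.dist(pos[cur], pos[dst]):
            return path, False        # stuck: no neighbor is closer
        cur = best
        path.append(cur)
    return path, True

print(greedy_route('a', 'e'))   # (['a', 'b'], False): stuck at b
```

The coordinate systems in the talk are designed so that this never happens: greedy steps on the tree-cover labels always make progress.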

Routing Over Multi-hop Wireless Networks with Non-ergodic Mobility
Chris Milling (The University of Texas at Austin, US); Sundar Subramanian (Qualcomm Flarion Technologies, US); Sanjay Shakkottai (The University of Texas at Austin, US); Randall Berry (Northwestern University, US)
In this paper they had a network with some static nodes and some mobile nodes that move in prescribed areas but with no real “model” of their motion. They move arbitrarily along arbitrary continuous paths. Some static nodes need to route packets to the mobile nodes. How should they do this? A scheme is proposed in which the static node sends out probe packets to iteratively quarter the network and corral the mobile node. If the mobile node crosses a boundary, then a node close to the boundary will deliver the packet. An upper bound on the throughput can be shown by considering the mobile nodes to be at fixed locations (chosen to be the worst possible). This is essentially the worst case, since the rates achieved by their scheme are within a polylog factor of the upper bound. This suggests that the lack of knowledge of the users’ mobility patterns hurts very little in the scaling sense.
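My cartoon of the quartering idea (the function and the probe interface are invented for illustration, not taken from the paper): the source repeatedly splits its region of uncertainty into quarters and uses probe replies to keep the one containing the mobile node.

```python
def corral(probe, region=(0.0, 0.0, 1.0, 1.0), rounds=6):
    # probe(subregion) models flooding probe packets into a subregion
    # and hearing back whether the mobile node is currently inside it.
    for _ in range(rounds):
        x0, y0, x1, y1 = region
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        quarters = [(x0, y0, mx, my), (mx, y0, x1, my),
                    (x0, my, mx, y1), (mx, my, x1, y1)]
        region = next(q for q in quarters if probe(q))
    return region

# A mobile node sitting still at (0.3, 0.8), for the sake of example.
inside = lambda q: q[0] <= 0.3 <= q[2] and q[1] <= 0.8 <= q[3]
x0, y0, x1, y1 = corral(inside)   # region shrinks by half each round
```

The interesting part of the paper is precisely what this toy ignores: the node keeps moving between probes, so boundary nodes have to catch it in the act.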

# The Conyers bill and open access

Allie sent this blog post my way about the Conyers bill, about which Lawrence Lessig has been quite critical. At the moment the NIH requires all publications from research it funds to be posted (in PubMed Central) so that the public can read them. This makes sense because the taxpayers paid for this research.

What Conyers wants to do is end the requirement for free and public dissemination of research. Why? Lessig says he’s in the pocket of the publishing industry. From the standpoint of the taxpayer and a researcher, it’s hard to see a justification for this amendment. Conyers gives a procedural reason for the change, namely that “this so-called ‘open access’ policy was not subject to open hearings, open debate or open amendment.” So essentially he wants to go back to the status quo ante and then have a debate, rather than have a debate about whether we want to go back to the status quo ante.

From my perspective, spending Congressional time to do the equivalent of a Wikipedia reversion is a waste — if we want to debate whether to change the open access rules, let’s debate that now rather than changing the rules twice. I think we should expand open access to include the NSF too. It’s a bit tricky though, since most of my work is published (and publishable) within the IEEE. The professional societies could be a great ally in the open-access movement, but as Phil Davis points out, the rhetoric on both sides tends to leave them out.

# Elsevier strikes again

Via Crooked Timber comes another story about the depths plumbed by Elsevier:

> Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles–most of which presented data favorable to Merck products–that appeared to act solely as marketing tools with no disclosure of company sponsorship… Disclosure of Merck’s funding of the journal was not mentioned anywhere in the copies of issues obtained by The Scientist.

Elsevier has been involved in shady dealings before, but this is a new one for me. I recently turned down a request to review a paper for an Elsevier-published journal (citing their business practices), and this piece of news confirms my decision.

# Infocom 2009 : delay issues

Effective Delay Control for Online Network Coding
Joao Barros (University of Porto, PT); Rui Costa (Universidade do Porto / Instituto de Telecomunicações, PT); Daniele Munaretto (DoCoMo Euro-Labs, DE); Joerg Widmer (DoCoMo Euro-Labs, DE)
This talk tries to merge ARQ with network coding. The key idea is that a packet is “seen” if the receiver can compute it up to an XOR of future packets only. The delay analysis is then based on analyzing the chains of dependent packets induced by erasures. This paper looks at the problem of multicast. Here the issue is managing the delays to multiple receivers and assessing which receiver is the “leader” and how the leader switches over time. For random erasures (different for each receiver) this means we are doing a biased random walk whose location shows who the leader is. A coding strategy tries to control this random walk, and a strategy is proposed to maintain a “leader” by sending the XOR of the last unseen packets of all users when the leader loses a packet.
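Here is my toy rendering of the “seen” notion (the bitmask encoding and function are mine): each received coded packet is a bitmask of the original packets it XORs together, and packet $k$ is seen once the receiver can form $p_k$ XOR (some combination of strictly later packets).

```python
def seen_packets(received):
    # Gaussian elimination over GF(2), pivoting on each equation's
    # lowest-index packet. Packet k is "seen" iff some reduced equation
    # has k as its lowest-index term: p_k plus only later packets.
    pivots = {}
    for eq in received:
        while eq:
            low = (eq & -eq).bit_length() - 1   # lowest set bit
            if low in pivots:
                eq ^= pivots[low]               # cancel and keep reducing
            else:
                pivots[low] = eq
                break
    return sorted(pivots)

# Receiver holds p0^p1 and p1^p2: packets 0 and 1 are "seen" even
# though nothing is decodable yet; packet 2 is not seen.
print(seen_packets([0b011, 0b110]))   # [0, 1]
```

The point of the definition is exactly this gap between seen and decoded: the queue-and-delay analysis can track seen packets, which behave much more smoothly than decoding events.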

The Capacity Allocation Paradox
Asaf Baron (Technion – Israel Institute of Technology, IL); Isaac Keslassy (Technion, IL); Ran Ginosar (Technion, IL)
This talk was about a simple example of a network in which adding capacity can make a stable network unstable. To me it seemed to be because of the particular model adopted for the network, namely that if a link of capacity $C$ is available, then the transmitter will operate at rate $C$. The simple example of the paradox is a 2-user multiaccess link. Suppose arrival processes of rate 1 feed two users, each of which has an outgoing link of capacity 1 to a shared queue. This queue has output capacity 2, so the whole network is stable. However, if one user gets a capacity-2 link to the queue, then its traffic can hog the output link and cause increasing delay to the second user. The paradox was motivated by networks on a chip, which seem to be an interesting source of new problems with different emphases than traditional networking problems.

Power-Aware Speed Scaling In Processor Sharing Systems
Adam Wierman (California Institute of Technology, US); Lachlan Andrew (Swinburne University of Technology, AU); Kevin Tang (Cornell University, US)
This talk was about assigning speeds to processing jobs in a queue — if you do a job faster it takes more power but reduces delay. There are different ways of assigning speeds, either a fixed speed for all jobs (static), a square-wave for speed vs. time (gated static), or some arbitrary curve (dynamic). The metric they choose to look at is a sort of regularized energy $E[\mathrm{energy}] + \beta E[ \mathrm{delay} ]$, where $\mathrm{energy} = (\mathrm{speed})^{\alpha}$. For a large number of jobs they get a kind of limiting form for the optimal speed and show that a gated static policy performs within a factor of 2 of the optimal dynamic policy, which is verified by simulation. In general we may not know the arrival process and so choosing the duty cycle for a gated static policy may be hard a priori. In this case a dynamic strategy may be much better to handle model variability. This was one of those problems I had never thought about before and I thought the results were pretty cute.

# from my mandatory online course

I am required to take an online course on “Sexual Harassment Prevention Training for Supervisors” here at UCSD, and it is full of case studies with ridiculous names like “Manny Mozart” (for the Music Department) and “Pierre Rodin” (for French). Here was one which seemed quite strange to me:

> Several male faculty in the predominately male Department of Human Studies invite a new male faculty member to Hooters for lunch, explaining that this is a bonding event for the “guys” every Friday. Professor Fellowman attends at first, but is uncomfortable with the setting, behavior and discussion during these lunches, and refuses subsequent requests to attend.
>
> The department chair tells Professor Fellowman that he will not do well in the department if he cannot develop relationships with his fellow faculty members. Professor Fellowman is subsequently assigned to teach the largest and most unpopular courses, and is shunned by his male peers. Eventually, he suffers an unfavorable departmental merit review.
>
> Case Study: Does Professor Fellowman have a claim of sexual harassment?

I am having a hard time imagining a department at a UC for which the regular faculty outing would be to Hooters (although the world is full of surprises). According to Oncale v. Sundowner Offshore Services, Inc. (1998) this would constitute sexual harassment (which seems obvious). But would it have hurt to come up with a more likely example?