UC Libraries vs. the Nature Publishing Group

Earlier this month, a letter was circulated to the UC Faculty regarding a proposal from the Nature Publishing Group (NPG) to increase the licensing fees for online access by 400%, which is pretty dramatic given (a) the high cost of the subscription in the first place and (b) the fact that library budgets are going down. There was a suggestion of a boycott.

NPG felt like they had been misrepresented, and issued a press statement saying "you guys are a bunch of whiners, our stuff is the best, and 7% price hikes per year are totally reasonable." Furthermore, they said "you guys have been getting a crazy good deal for way too long anyhow and it's time you paid your fair share." I suppose behaving like complete jerks is an ok way to react when you are trying to sell somebody something, especially something that is made up of stuff written by your potential buyers. I wonder what their profit margins are like.

The University of California responded, pointing out that 7% increases, compounded, start getting out of hand pretty fast. "Plainly put, UC Faculty do not think that their libraries should have to pay exorbitant and unreasonable fees to get access to their own work."
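For a rough sense of the compounding: a decade of 7% annual increases nearly doubles the price, since

1.07^{10} \approx 1.97,

and two decades of it nearly quadruples it.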

Looks like PLoS better start rolling out some new titles!

More info can be found at the OSC website, which oddly doesn’t say what OSC stands for.


ISIT 2010: talks about adversaries

My thesis work was about a particular adversarial model from Shannon theory (the arbitrarily varying channel), but I am more broadly interested in how adversarial assumptions can be used to capture modeling uncertainty or add an element of robustness to designs. I'm not wedded to the AVC, so I like going to see other work that tries to address these issues. Among the talks I saw at ISIT, these three captured different aspects of these models.

POLYTOPE CODES AGAINST ADVERSARIES IN NETWORKS
(Oliver Kosut; Cornell University, Lang Tong; Cornell University, David Tse; University of California at Berkeley)
This talk used as an example the "cockroach network," so named because it "does not look like a butterfly." That line got a lot of laughs. In this model a subset of the nodes in the network is controlled by an adversary, who can introduce errors. They show that by using a certain structured code called a polytope code, together with some checking and filtering by internal nodes, the adversarial nodes can be detected/avoided. For some networks (planar ones) they can show that the polytope code achieves the cut-set bound. The key to making the codes work is that any adversarial action results in certain tuples of random variables becoming atypical, and hence detectable.
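Very loosely, and just restating that intuition in symbols (this is not the actual polytope code construction): an internal node observing a tuple of incoming variables (Y_1, \ldots, Y_k) can check whether their empirical joint distribution is consistent with what the code structure dictates,

\hat{P}_{Y_1 \cdots Y_k} \approx P_{Y_1 \cdots Y_k},

and flag or filter a flow whenever tampering pushes the empirical statistics outside this typical set.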

JAMMING GAMES FOR POWER CONTROLLED MEDIUM ACCESS WITH DYNAMIC TRAFFIC
(Yalin Sagduyu; Intelligent Automation Inc./University of Maryland, Randall Berry; Northwestern University, Anthony Ephremides; University of Maryland)
This uses adversarial modeling (in the form of game theory) to model how users in a multiaccess setting contend for resources in the presence of jammers. In the AVC context, La and Anantharam proved some early results on these models. Here the extension is that the jammer(s) do not know whether or not the transmitter will be sending anything in a given slot (hence dynamic traffic). In the case with a single transmitter and a single jammer, the model is that the transmitter has to minimize its energy subject to an average rate constraint (packets arrive at the transmitter at a certain rate and have to be passed along). The jammer has a power constraint and wants to make the transmitter expend as much energy as possible. It turns out that if the jammer doesn't know the queue state of the transmitter, then it has to be on all the time. They have more complicated extensions to multi-transmitter scenarios.
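In rough symbols (my notation, not necessarily the authors'), the single-transmitter, single-jammer game has a min-max flavor: the transmitter picks a transmission policy to minimize its average energy while supporting the packet arrival rate \lambda, and the jammer, constrained to average power \bar{J}, tries to drive that energy up:

\min_{\text{tx policy}} \ \max_{\text{jammer}: \, \mathbb{E}[J] \le \bar{J}} \ \mathbb{E}[\text{tx energy}] \quad \text{subject to average throughput} \ge \lambda.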

ON THE DETERMINISTIC CODE CAPACITY REGION OF AN ARBITRARILY VARYING MULTIPLE-ACCESS CHANNEL UNDER LIST DECODING
(Sirin Nitinawarat; University of Maryland)
Sirin talked in the session I was in. For point-to-point AVCs under average error, there is a finite list size L called the symmetrizability of the channel such that for list-decoding with list sizes \le L the capacity is 0 and for larger list sizes the capacity is the randomized coding capacity of the AVC. This work extended this to the multiaccess setting, which is a particularly tricky beast since the capacity region itself is nonconvex. He showed that there is a U such that the capacity region has empty interior if the list size is \le U and that for list sizes \ge (U+1)^2 you get back the randomized coding capacity. What is open is whether the list sizes for the two users can be different, so that this gap could be closed, but that problem seems pretty hard to tackle.
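In symbols, paraphrasing the statement above: if C_r is the randomized coding capacity of the point-to-point AVC and L is its symmetrizability, then the list-of-k capacity satisfies

C_{\mathrm{list}\text{-}k} = 0 \ \text{for} \ k \le L, \qquad C_{\mathrm{list}\text{-}k} = C_r \ \text{for} \ k > L,

and the multiple-access result says the deterministic-code list-decoding region has empty interior when the list size is \le U, but coincides with the randomized coding region once the list sizes are \ge (U+1)^2.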

CODING AGAINST DELAYED ADVERSARIES
(Bikash Kumar Dey; Indian Institute of Technology, Bombay, Sidharth Jaggi; Chinese University of Hong Kong, Michael Langberg; Open University of Israel, Anand Sarwate; University of California, San Diego)
I guess I'll be shameless and blog about my own paper — we looked at an adversarial setting where the adversary gets to see the transmitted symbols after a delay. We assumed that the delay grew linearly with the blocklength, so that the adversary gets to act at time i based on x_1, x_2, \ldots, x_{i - \Delta n}, where n is the blocklength and \Delta is the delay parameter. Suppose everything is binary and the adversary gets to flip a total of pn bits of the codeword, but has to see it causally with delay \Delta n. If \Delta = 1 the adversary sees nothing and the average-error capacity is 1 - h(p) bits. If \Delta = 0 the adversary can do a lot more damage and the capacity is 1 - 4p bits. What we show is that for any \Delta > 0 we go back to 1 - h(p) bits (and there was much rejoicing). The key is allowing the encoder to randomize, which effectively prevents the adversary from learning anything. Luckily, the decoder can still figure out what is going on (as long as there is some additional slack in the rate), even though it does not share common randomness with the encoder.
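To summarize the result compactly (my restatement, with C(\Delta) the average-error capacity against an adversary that flips at most pn bits and sees the codeword with delay \Delta n):

C(\Delta) = 1 - 4p \ \text{for} \ \Delta = 0, \qquad C(\Delta) = 1 - h(p) \ \text{for} \ \Delta > 0,

so any strictly positive delay fraction reduces the adversary to the same power as one who sees nothing at all.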