ITA Workshop : Panel on publication issues

Part of ITA was a panel on publication issues in information theory. Paul Siegel was the moderator and led off with three binaries to spark discussion: “paper versus plasma” (the medium of publication), “prophets versus profits” (the financial model), and “peer review versus page rank” (quality measurement). Pretty much everyone thought page rank was not really a measure of anything except… page rank, so peer review was not under attack (thank goodness). In general, people were in favor of paper, at least for archival purposes, so that the ability to browse would still be there. Finally, everyone said they liked the IEEE (no surprise there) and its publication model.

Dave Forney talked about the spiraling costs of Elsevier-owned journals to libraries and urged people to just say no. He implicated those professors who choose to be on the editorial boards of such journals as being part of the problem, and urged them to divest from Elsevier, as it were. In general, he wanted faculty to be more proactive and aware of these important issues, a stance that I was 100% with. He then turned to ArXiV, and told everyone to submit their preprints to ArXiV so that people could know what research is being done. He said usage was increasing, but too slowly.

Andrea Goldsmith said that she found ArXiV to be of limited use, since articles posted there are not peer reviewed, and the value of an article is only guaranteed via publication in the Transactions. On the publication model, she stressed the importance of keeping the Transactions accessible to the entire IEEE, so that the IT Society does not drift away from the rest of the organization. She also urged faculty to put institutional pressure on Elsevier by boycotting.

Steve McLaughlin also brought up the ties that bind the IEEE and the IT Society. The online Transactions are a major source of revenue, and it was the IT Society that spurred the creation of IEEE Xplore. He lauded ArXiV as a good impetus to embrace new ideas and models for publication, and floated the idea of an Open Access (OA) journal to complement the Transactions.

Dave Neuhoff reiterated that the journal should be the focus of the field, and that conference papers are not archival for information theorists. For this and other reasons, the IT Society was able to convince the IEEE to grant online access to conference proceedings.

Vince Poor, the current Editor-In-Chief, talked about copyright issues in the age of ArXiV and pointed out how reasonable the IEEE is. He seemed to indicate that Elsevier doesn’t affect our community much, but I didn’t really follow his argument there. He also claimed that market forces will push the publication industry to embrace electronic formats.

Rüdiger Urbanke was very excited about ArXiV because it could provide timestamps, and since the field is moving faster, these timestamps are important. He also questioned the 5-page ISIT paper, which is not reviewed that carefully, and said that with a 5-page limit on correspondences as well, the relative scales don’t make sense, especially in light of conference papers being non-archival. Finally, he argued that the pressure to publish is what enables Elsevier, and so this pressure must be alleviated somehow.

In the Q&A, one person asked about double-blind reviewing, which the panel wholeheartedly embraced. I think they should do it too, and I really have no idea what is holding it up, except that perhaps Pareja, the online paper management system, would have to be hacked to support it. Someone else asked why we need timestamps from ArXiV when there are timestamps on the IT Transactions papers already, but Urbanke said that it has to do with time scales more than anything, and that ArXiV lets you track revisions. Another person complained that ArXiV could become a repository for erroneous results and rejected papers, but Forney was quick to note that ArXiV’s value lies in showing who is working on what, and that there are clearly no guarantees on the veracity of the claims made there. The last question was on the nature of journal papers versus conference papers — if conference papers are not archival, does that make it ok to merge 3 conference papers into a journal paper? The panel seemed surprised to hear that this could be considered double-publishing, and the informal consensus seemed to be that doing so was not self-plagiarism.

I was most disappointed that nobody took up Steve McLaughlin’s comment on making an OA journal that is peer-reviewed and pay-to-publish. I’ve already written about having a new letters journal, but an OA journal would provide an alternative place to publish papers that is not evil and possibly has a faster review turnaround than the IT Transactions. Given that some 900 papers a year are now submitted to the IT Transactions, it seems like the benefits would be great. It would also help alleviate the publication pressure that feeds Elsevier. But the IT Society could never endorse such a project, and thus a panel like this would not address that issue. You’d have to get professors without their official IEEE hats on to discuss this freely, and that wasn’t going to happen at this panel. I think if the OA option is on the table it could get modified into something more palatable and friendly to the IEEE, but it would of course take some discussion and a desire to make it happen.


ITA Workshop : source and channel coding


  • Twice-universal types and simulation of individual sequences (Alvaro Martín, Universidad de la República, Neri Merhav, Technion, Gadiel Seroussi, Universidad de la República, and Marcelo J. Weinberger, HP Labs)

    A universal source code partitions the set of all length-n sequences into types under an iid model class, so that two sequences with the same type have the same probability. If we look at order-k Markov sources, we can ask for a code that is universal for all n and k. This leads to twice-universal source codes. Given a Markov-order estimator that takes a sequence and estimates an order k, we can intersect the set of sequences with the same estimated order with the type class of order k. Given some conditions on the quality of the order estimator, this partition has some nice properties, including simulation of individual sequences with similar empirical statistics. (A toy illustration of Markov types appears after this list.)

  • Universal noiseless compression for noisy data (Gil I. Shamir, University of Utah, Tjalling J. Tjalkens, Frans M. J. Willems, Eindhoven University of Technology)

    We’d like to be able to compress noisy data, since sometimes noisy data is all we have access to. What they propose is a new probability estimator that can take into account upper and lower bounds on the probability to be estimated. So if I know the true probability lies in [a,b], I can use that knowledge in the estimator. When a = 0, b = 1, this reduces to the Krichevsky-Trofimov (KT) estimator. Intuitively, noise reduces the richness of the model class, and hence reduces the redundancy. Some properties of the estimator and corresponding compression results were given. (A toy version of a bounded estimator is sketched after this list.)

  • On universal coding of unordered data (Lav R. Varshney and Vivek K Goyal, MIT)

    Suppose you want to compress data but don’t care about the order of the data (think database records for concreteness) — then using a source code for vectors is silly, since you want a source code for multisets. In this work they address universal coding for multisets, and show that universal lossless coding is impossible, but that a low universal rate is achievable across the model class (iid sources, I believe, but it might be more general). (A back-of-the-envelope histogram bound is sketched after this list.)

  • Sum-Capacity of a Degraded Gaussian Multiaccess Relay Channel (Lalitha Sankar, WINLAB, Rutgers, Gerhard Kramer, Bell Labs, and Narayan B. Mandayam, WINLAB, Rutgers)

    For a physically degraded multiple-access relay channel (MARC), the decode-forward inner bound and old outer bounds do not match in general. But by making new outer bounds that exploit the causality of the relay, an optimized decode-forward scheme meets the outer bound.

  • Secrecy generation for channel models (Imre Csiszar, Renyi Institute, Budapest, and Prakash Narayan, Univ. of Maryland, College Park, USA)

    I am not as familiar with the 2004 paper on which this work builds (but I’m reading it now). The model is that an encoder transmits a vector into a DMC with many outputs. Other terminals each view one output and then can engage in public discussion that is overheard by an eavesdropper. The terminals must come up with a key that is secret from the eavesdropper. The results are related to a multiterminal source problem that was discussed earlier.
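
A few of these talks lend themselves to toy code. For the Martín/Merhav/Seroussi/Weinberger talk, here is a minimal sketch (my own, not theirs) of the basic object, the order-k Markov type of a sequence: the table of (context, next-symbol) counts. Two sequences sharing these counts are assigned the same probability by every order-k Markov model.

```python
from collections import Counter

def markov_type(x, k):
    """Order-k Markov type of x: counts of (length-k context, next symbol).
    Sequences sharing these counts (and the same first k symbols) get the
    same probability under every order-k Markov model."""
    counts = Counter()
    for i in range(k, len(x)):
        counts[(x[i - k:i], x[i])] += 1
    return counts

# "00110" and "01100" are in the same order-1 type class:
print(markov_type("00110", 1) == markov_type("01100", 1))  # True
```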
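
For the Shamir/Tjalkens/Willems talk, a toy version of a bounded estimator. The clipping rule below is my guess at the mechanism, not necessarily their estimator; the only property illustrated is that a = 0, b = 1 recovers the plain KT estimate.

```python
def bounded_kt(n0, n1, a=0.0, b=1.0):
    """Estimate P(next symbol = 1) after n0 zeros and n1 ones, given
    prior knowledge that the true probability lies in [a, b]. Plain
    clipping is an assumption on my part; the paper's estimator may
    use the bounds differently."""
    kt = (n1 + 0.5) / (n0 + n1 + 1.0)  # Krichevsky-Trofimov estimate
    return min(max(kt, a), b)

print(bounded_kt(3, 1))          # 0.3, the unconstrained KT estimate
print(bounded_kt(3, 1, a=0.4))   # prior bound [0.4, 1] pulls it up to 0.4
```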
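
And for the Varshney/Goyal talk, a back-of-the-envelope bound (mine, not theirs) on why low rates are plausible for multisets over a finite alphabet: a multiset is determined by its histogram, and sending the histogram costs a vanishing number of bits per symbol.

```python
from math import ceil, log2

def histogram_bits_per_symbol(n, m):
    """Crude upper bound: a multiset of n symbols from an m-ary alphabet
    is determined by its histogram, i.e., m counts each in {0, ..., n}."""
    return m * ceil(log2(n + 1)) / n

print(histogram_bits_per_symbol(10**6, 4))  # 8e-05 bits/symbol; -> 0 with n
```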

ITA Workshop : networks


  • Robust routing for dynamic ad hoc wireless networks based on embeddings (D. Tschopp, EPFL, S.N. Diggavi, EPFL and M. Grossglauser, EPFL)

    In a fading, multiple-unicast network with mobility and point-to-point links, how can we do route discovery with limited control overhead? This is a question of topology versus geometry, so they use an embedding of the communication graph (taking fading into account) into R^n that minimizes the worst-case stretch. By using beacons to coordinate partial graph distances, they can decentralize the approach a bit.

  • Coding achieves the optimal delay-capacity trade-off in mobile ad hoc networks (Lei Ying, Sichao Yang and R. Srikant, University of Illinois at Urbana-Champaign)

    For networks in which packets have a hard delay deadline, it’s bad news if a packet just dies once its deadline expires — you might hope to code over packets so that if a packet dies from delay, its contents can be recovered from other packets that made it in time (writing this makes it sound like Watership Down). This work looks at a mobility model in which nodes choose a fresh new location every time slot, and uses coding across packets to help maximize a link rate. (A toy parity example appears after this list.)

  • On the throughput-region of a random wireless ad hoc network with geographic routing (Sundar Subramanian, Sanjay Shakkottai, University of Texas at Austin, Piyush Gupta, Bell Labs)

    Geographic routing is bad for network congestion and doesn’t deal well with holes in the network (although there are hacks for this). By splitting a route into multiple paths, the authors can bound the congestion away. To deal with holes, you can erasure-code across the routes and drop a packet if you hit a hole. These two algorithms are shown to be order-optimal up to log factors in the network size.

  • On the wireless communication architecture for consensus problems (Anna Scaglione, Cornell)

    This work tried to incorporate quantization error into gossip algorithms as well as other physical-layer considerations. I wasn’t able to parse out the results (the talk was notation-heavy), but I suppose I will get to see the paper soon.

  • The law second class customers obey (James Martin, Oxford University, Balaji Prabhakar, Stanford University)

    The best thing about this talk was that it was given on a whiteboard (no slides) and was one of the clearest talks I went to at the whole workshop. Maybe technology is getting in the way. Suppose that we have an exponential service time queue with two Poisson arrival processes (first and second class customers). The first class customers queue up as if the second class customers were not there. Clearly the departure process for the first class customers is also Poisson. What about the second class customers? They use Weber’s result on the interchangeability of exponential-server queues to get an alternate derivation of the departure law. (A quick simulation appears after this list.)

  • On the broadcast capacity of wireless networks (Birsen Sirkeci-Mergen and Michael Gastpar, UC Berkeley)

    Suppose we have a central broadcaster in a network that wants to flood one message across all the nodes. This paper addresses how to do this for dense and extended networks, under iid and “spatially continuous” fading. For dense networks, a two-phase protocol involving cooperation works. For an extended network, a cooperative multistage broadcast protocol is proposed. The key is that multihop protocols perform very badly on the broadcast problem, and cooperative protocols are needed to get reasonable results.
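
The toy parity example promised above for the Ying/Yang/Srikant talk: the simplest code across packets, a single XOR parity, so any one packet that dies at its deadline can be rebuilt from the survivors. (Their scheme involves a mobility model and rate optimization; this only illustrates the coding idea.)

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"pkt0", b"pkt1", b"pkt2"]
parity = b"\x00" * 4
for p in packets:                  # parity = pkt0 XOR pkt1 XOR pkt2
    parity = xor_bytes(parity, p)

# Suppose pkt1 misses its deadline; the receiver rebuilds it:
recovered = xor_bytes(xor_bytes(parity, packets[0]), packets[2])
print(recovered == packets[1])     # True
```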
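
And the quick simulation promised above for the Martin/Prabhakar talk. Because service times are exponential (memoryless), the two-class priority queue is a continuous-time Markov chain on the pair of queue lengths, which makes simulating the second-class departure process easy. This is my own sanity-check experiment, not their derivation.

```python
import random

def second_class_departures(lam1, lam2, mu, T):
    """Two-class preemptive-priority M/M/1: class-1 customers are served
    as if class-2 were absent. Returns the class-2 departure times.
    Exponential service makes this an exact CTMC simulation."""
    t, n1, n2, deps = 0.0, 0, 0, []
    while t < T:
        mu_eff = mu if n1 + n2 > 0 else 0.0
        total = lam1 + lam2 + mu_eff
        t += random.expovariate(total)      # time to the next event
        r = random.uniform(0.0, total)      # which event it is
        if r < lam1:
            n1 += 1                         # class-1 arrival
        elif r < lam1 + lam2:
            n2 += 1                         # class-2 arrival
        elif n1 > 0:
            n1 -= 1                         # completion goes to class 1 first
        elif n2 > 0:
            n2 -= 1                         # a class-2 customer departs
            deps.append(t)
    return deps

deps = second_class_departures(lam1=0.3, lam2=0.2, mu=1.0, T=10**5)
gaps = [b - a for a, b in zip(deps, deps[1:])]
print(sum(gaps) / len(gaps))  # mean inter-departure ~ 1/lam2 = 5 at steady state
```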

ITA Workshop : spectrum access and resource allocation

There were some talks on dynamic spectrum access as well as waterfilling and other resource allocation problems.

  • Resource consumption in dynamic spectrum access networks: Applications and Shannon limits (Michael B. Pursley and Thomas C. Royster IV, Clemson University)

    This talk tried to address how to choose a modulation scheme to maximize throughput in dynamic spectrum access networks. The scheme could be power efficient or bandwidth efficient, and trading off these efficiencies is important. One way of capturing the impact is to look at how the transmission of one radio affects other radios across bandwidth and time. So a radio preventing 3 radios from talking over 2 Hz for 1 second is using 6 units of “resource.” The limits on communication can then be rephrased in terms of the familiar Eb/N0. (This arithmetic is spelled out after this list.)

  • Spectrum sharing on a wideband fading channel with limited feedback (Manish Agarwal and Michael Honig, Northwestern University)

    Suppose that K users wish to share N channels effectively in a multiple-access setting. They can probe the channels to see which ones are available — if more than K’ users probe a given channel, the destination can assign it for transmission. The goal is to choose the number of channels to probe so that there is little overhead but users still get a fair allocation. At least that’s how I understood the problem. They show a scheme that probes roughly N/(log N)^2 channels while still giving each user a rate that grows like log(log N).

  • Asynchronous iterative water-filling for Gaussian frequency-selective interference channels: A unified framework (Gesualdo Scutari, Univ. of Rome “La Sapienza”, Daniel P. Palomar, Hong Kong Univ. of Science and Technology, and Sergio Barbarossa, Univ. of Rome “La Sapienza”)

    This work looked at a new geometric intuition for the waterfilling algorithm for Gaussian channels, where the waterfilling allocation is a kind of projection onto a simplex. This projection view allows them to get convergence results for asynchronous waterfilling algorithms for the interference channel. The conditions for convergence match those for the multiple-access and broadcast channels. (A bisection implementation of basic waterfilling appears after this list.)

  • Spectrum sharing: fundamental limits and self-enforcing protocols. (Raul Etkin, U.C. Berkeley and HP-Labs)

    This talk focused on how to do interference management and generate self-enforcing protocols for spectrum-sharing systems. In the first part, he showed that a kind of Han-Kobayashi scheme gets to within 1 bit of the capacity region for the interference channel. The key is a tighter converse, which comes from looking at certain genie-aided channels that are not too loose. The second part of the talk was a game-theoretic approach to guaranteeing good behavior in a spectrum-sharing system. The approach is to use punishments — if any user deviates from the protocol, then the other users spread their power, punishing him. This game has an equilibrium and greatly improves on the static game, which has an unbounded price of anarchy. I just like writing that — “price of anarchy.”
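
The resource arithmetic from the Pursley/Royster talk, spelled out (the function name is mine):

```python
def resource_units(radios_blocked, bandwidth_hz, duration_s):
    # "resource" = (radios prevented from talking) x (bandwidth) x (time)
    return radios_blocked * bandwidth_hz * duration_s

print(resource_units(3, 2, 1))  # the example from the talk: 6 units
```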
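
For the Scutari/Palomar/Barbarossa talk, the promised bisection implementation of plain single-user waterfilling. Their observation, as I understood it, is that this map is essentially a Euclidean projection onto a (shifted) simplex, which is what enables the asynchronous convergence analysis; the code below is a generic textbook sketch, not their algorithm.

```python
import numpy as np

def waterfill(noise, P, iters=100):
    """Find the water level nu with sum(max(nu - noise_i, 0)) = P and
    return the power allocation p_i = max(nu - noise_i, 0)."""
    lo, hi = noise.min(), noise.min() + P   # the water level is in here
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        if np.maximum(nu - noise, 0.0).sum() > P:
            hi = nu
        else:
            lo = nu
    return np.maximum(nu - noise, 0.0)

p = waterfill(np.array([1.0, 2.0, 4.0]), P=3.0)
print(p.round(3), p.sum())  # [2. 1. 0.]: the quietest channel gets the most power
```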

ITA Workshop : coding theory

I’m by no means an expert on coding theory, but I went to a few talks on coding that were interesting. Some were definitely geared for the specialist.

  • Decoding LDPC codes through tree pruning (Yi Lu and Andrea Montanari, Stanford)

    This was one of those talks where I got the big picture and then lost the details, since I haven’t really studied LDPC decoding algorithms in sufficient detail. Their idea is to use a new construction/technique by Dror Weitz to compute the marginals in the graph of an LDPC code. The idea is to create a tree rooted at x_i whose marginal at the root is the same as the marginal at x_i in the original graph. This tree is constructed by taking self-avoiding walks through the graph and putting a leaf wherever the walk crosses itself. But such a tree is too big — truncating at a fixed level gives an approximate algorithm for decoding. (A sketch of the tree construction appears after this list.)

  • Communication over channels with varying sampling rate (Lara Dolecek, UC Berkeley)

    This paper looked at what happens when there is a sampling error in the received signal before the decoding block: some symbol may be sampled twice, or some symbol may be missed entirely and deleted. One way around this is to build better timing recovery blocks; the other is to build better codes (which she does) that are resistant to these kinds of repetition and deletion errors. The work comes up with a number-theoretic formulation of the code design constraints, shows that existing codes can be modified to take these types of errors into account, and gives a modified belief-propagation decoding algorithm. (The classic number-theoretic deletion-correcting code is sketched after this list.)
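
The promised sketch of the self-avoiding-walk tree for the Lu/Montanari talk. This only builds the shape of the tree, from my memory of Weitz’s construction: each node is a walk from the root, and a walk stops growing when it would revisit a vertex or hits the truncation depth. The boundary values that the real construction fixes at cycle-closing leaves are glossed over here.

```python
def saw_tree_nodes(adj, root, depth):
    """Walks from `root` forming the (truncated) self-avoiding-walk tree:
    extend a walk to any unvisited neighbor; stop at the depth limit.
    Cycle-closing leaves and the fixed boundary values they carry in
    Weitz's actual construction are omitted from this sketch."""
    nodes = []
    def grow(walk):
        nodes.append(walk)
        if len(walk) > depth:
            return
        for v in adj[walk[-1]]:
            if v not in walk:          # self-avoiding: never revisit
                grow(walk + (v,))
    grow((root,))
    return nodes

adj = {0: (1, 2), 1: (0, 2), 2: (0, 1, 3), 3: (2,)}  # a tiny graph with a cycle
for walk in saw_tree_nodes(adj, 0, depth=3):
    print(" -> ".join(map(str, walk)))
```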
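
And for the Dolecek talk, the classic example of a number-theoretic constraint against deletions: the Varshamov-Tenengolts code, whose codewords satisfy a weighted checksum mod n+1 and which corrects any single deletion. Her constructions also handle repetitions and are presumably different; this just shows the flavor of the constraint.

```python
def in_vt_code(bits, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary words with
    sum_i i * x_i = a (mod n + 1), using 1-based positions.
    VT codes correct any single deletion."""
    n = len(bits)
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1) == a

codewords = [w for w in range(2**5)
             if in_vt_code([(w >> i) & 1 for i in range(5)])]
print(len(codewords))  # VT_0(5) has 6 codewords
```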

ITA Workshop : network coding

ITA coincided with NetCod07, which was a small miniworkshop (single-day, single-track) on network coding. I went to a few talks there, but unfortunately missed the one I most wanted to see, “Resilient network coding in the presence of Byzantine adversaries.” Oh well, at least there’s a paper for me to read.

  • The benefits of network coding for peer-to-peer storage systems (Alexandros G. Dimakis, P. Brighten Godfrey, Martin J. Wainwright, Kannan Ramchandran, UC Berkeley)

    This talk was on designing a decentralized storage system in which individual nodes (peers) hold coded chunks of the total file. To be concrete, say we have an 8 MB file. Someone wishing to download the file can download 8 coded chunks of 1 MB each from randomly chosen peers and with high probability recover the file. The issue with this system is that if a peer leaves the network and a new peer joins, the new peer should not have to download all 8 MB to make a new coded 1 MB chunk for itself to store. So the question this paper answers is how much overhead (excess downloading) a new peer has to pay in order to store a coded packet, and how we can design codes whose overhead is low in this metric, so that peers can leave and join the network and still have the whole file stored. Alex calls these “regenerating codes,” which sounds kind of sci-fi to me. (A toy coded-storage example appears after this list.)

  • Code construction for two source interference (Elona Erez and Meir Feder, Tel Aviv University)

    This paper looked at non-multicast network coding, and was focused on coming up with decentralized and low-complexity algorithms for accomplishing this. The approach was to make a “quasiflow” and use a modified version of a multicommodity flow algorithm to construct the quasiflow.

  • Characterizations of network error correction/detection and erasure correction (Shenghao Yang and Raymond W. Yeung, The Chinese University of Hong Kong)

    This was a real big-picture talk, in which Prof. Yeung argued that network coding has all the analogues of regular error correcting codes; in this sense, network coding is a generalization of algebraic coding. So there are analogues of the Singleton bound and all sorts of other things. In particular, minimum distance has an analogue in network coding, which can make a lot of intuitive connections clear.
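
The toy coded-storage example promised above for the Dimakis/Godfrey/Wainwright/Ramchandran talk. Peers store random GF(2) combinations (XORs of random subsets) of the file’s chunks; a downloader who collects enough coded chunks for a full-rank coefficient matrix recovers the file by Gaussian elimination. Real regenerating codes work over larger fields and, crucially, optimize the repair bandwidth when a peer leaves, which this sketch ignores entirely.

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_coded_chunk(chunks):
    """XOR of a random nonempty subset of the chunks, stored together
    with its GF(2) coefficient vector."""
    coeffs = [random.randint(0, 1) for _ in chunks]
    if not any(coeffs):
        coeffs[random.randrange(len(chunks))] = 1
    payload = bytes(len(chunks[0]))
    for c, ch in zip(coeffs, chunks):
        if c:
            payload = xor_bytes(payload, ch)
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination over GF(2); assumes the collected coefficient
    vectors span GF(2)^k (true with high probability given a few extras)."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        piv = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([x ^ y for x, y in zip(rows[i][0], rows[col][0])],
                           xor_bytes(rows[i][1], rows[col][1]))
    return [rows[i][1] for i in range(k)]

chunks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]          # a 16-byte "file"
coded = [make_coded_chunk(chunks) for _ in range(12)]  # extras for full rank
print(decode(coded, 4) == chunks)                      # True (w.h.p.)
```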

ITA Workshop : general comments

The ITA Workshop was last week at UCSD, and unlike last year, I decided to go down and attend. I had a good time, but it was a bit weird to be at a conference without presenting anything. It was worth it to get a snapshot of some of the things going on in information theory, and I got a few new ideas for problems that I should work on instead of blogging. But I find the exercise of blogging about the conference useful, and at least a few people have said some positive things about it. This time around I’m going to separate posts out by subject area, loosely. My attention and drive to attend talks decreased exponentially as the week progressed, more due to fatigue than anything else, so these posts may be short (a blessing for my friends who don’t care about information theory!) and more impressionistic at times.

One general gripe I had was that sessions were very de-synced from each other. Most session chairs were unable or unwilling to curtail speakers who went over, to the point where one session I attended finished after the break between sessions. I ended up missing a few talks I wanted to see because of this. I regard it as more of a failing on the part of the speaker — an experienced researcher with many conference talks under their belt should know how to make a coherent 20-minute talk and not plan to run over. Dry runs can only tell you so much about timing, but one should be considerate towards the other speakers in the session and at the conference, no? I know this makes me sound a bit like a school-marm, but it bothers me to leave a talk before the theorem is presented so that I can make it to another talk.

I’ll write separately about the panel on publication issues, which raised some interesting points while dodging others. There was also a presentation by Dr. Sirin Tekinay, who is in charge of the NSF area under which information theory sits. I am woefully ignorant of the grant-writing process right now, so I wasn’t sure how to take her comments, but it looks like a lot of emphasis is going to be on networks and cross-discipline work, as is the trend. Not all research can be said to have applications to networks, though, so that emphasis seems a bit unfortunate…