ITA Workshop : spectrum access and resource allocation

There were some talks on dynamic spectrum access as well as waterfilling and other resource allocation problems.

  • Resource consumption in dynamic spectrum access networks: Applications and Shannon limits (Michael B. Pursley and Thomas C. Royster IV, Clemson University)

    This talk tried to address how to choose a modulation scheme to maximize throughput in dynamic spectrum access networks. The scheme could be power or bandwidth efficient, and trading off these efficiencies is important. One way of capturing the impact is to look at how the transmission of one radio affects other radios across bandwidth and time. So a radio preventing 3 other radios from talking over 2 Hz for 1 second is using 6 units of “resource.” The limits on communication can be rephrased in terms of the familiar Eb/N0.

  • Spectrum sharing on a wideband fading channel with limited feedback (Manish Agarwal and Michael Honig, Northwestern University)

    Suppose that K users over N channels wish to share the spectrum effectively in a multiple-access setting. They can probe the channels to see which ones are available — if more than K’ users probe a given channel, the destination can assign it for transmission. The goal is to choose the number of channels to probe so that there is little overhead but users still get a fair allocation. At least that’s how I understood the problem. They show a scheme that probes only N/(log N)^2 channels while the rate for each user still grows like log(log N).

  • Asynchronous iterative water-filling for Gaussian frequency-selective interference channels: A unified framework (Gesualdo Scutari, Univ. of Rome “La Sapienza”, Daniel P. Palomar, Hong Kong Univ. of Science and Technology, and Sergio Barbarossa, Univ. of Rome “La Sapienza”)

    This work looked at a new geometric intuition for the waterfilling algorithm for Gaussian channels, where the waterfilling allocation is a kind of projection onto a simplex. This projection view allows them to get convergence results for asynchronous waterfilling algorithms for the interference channel. The conditions for convergence match those for the multiple-access and broadcast channels. (A small numerical sketch of the basic waterfilling primitive appears after this list.)

  • Spectrum sharing: fundamental limits and self-enforcing protocols (Raul Etkin, U.C. Berkeley and HP-Labs)

    This talk focused on how to do interference management and generate self-enforcing protocols for spectrum-sharing systems. In the first part, he showed that a kind of Han-Kobayashi scheme gets to within 1 bit of the capacity region for the interference channel. The key is a tighter converse, which comes from looking at certain genie-aided channels whose bounds are not too loose. The second part of the talk was a game-theoretic approach to guaranteeing good behavior in a spectrum-sharing system. The approach is to use punishments — if any user deviates from the protocol then the other users spread their power, punishing the deviator. This game has an equilibrium and greatly improves on the static game, which has an unbounded price of anarchy. I just like writing that — “price of anarchy.”
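
Regarding the asynchronous waterfilling talk above: here is a minimal numerical sketch of the basic single-user waterfilling primitive that such algorithms iterate. It is not the asynchronous interference-channel scheme from the talk (and it does not show the projection-onto-a-simplex view); the noise levels, power budget, and the waterfill helper are made-up for illustration.

```python
# A minimal sketch of waterfilling over parallel Gaussian channels, assuming
# made-up noise levels and power budget. Allocate p_i = max(0, mu - n_i) and
# pick the water level mu by bisection so that the powers sum to the budget.
import numpy as np

def waterfill(noise, total_power):
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_power  # the water level mu lies in this range
    for _ in range(100):                             # bisection on mu
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - noise).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise)

noise = [0.5, 1.0, 2.0, 4.0]          # per-channel noise-to-gain ratios (assumed)
p = waterfill(noise, total_power=3.0)
print("allocation:", np.round(p, 3), "total:", round(p.sum(), 3))
print("rate:", round(sum(np.log2(1 + pi / ni) for pi, ni in zip(p, noise)), 3), "bits/use")
```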

ITA Workshop : coding theory

I’m by no means an expert on coding theory, but I went to a few talks on coding that were interesting. Some were definitely geared for the specialist.

  • Decoding LDPC codes through tree pruning (Yi Lu and Andrea Montanari, Stanford)

    This was one of those talks where I got the big picture and then lost the details, since I haven’t really studied LDPC decoding algorithms in sufficient detail. Their idea is to use a new construction/technique by Dror Weitz to compute the marginals in the graph of an LDPC code. The idea is to create a tree rooted at x_i whose marginal at the root is the same as the marginal at x_i in the original graph. This tree is constructed by taking self-avoiding random walks through the graph and putting a leaf when the walk crosses itself. But such a tree is too big — truncating at a fixed level gives an approximate algorithm for decoding.

  • Communication over channels with varying sampling rate (Lara Dolecek, UC Berkeley)

    This paper looked at what happens when there is a sampling error in the received signal before the decoding block. Some symbol may be sampled twice or some symbol may be missed entirely and deleted. One way around this is to build better timing recovery blocks, and the other is to build better codes (which she does) that are resistant to these kinds of repetition and deletion errors. This work comes up with a number-theoretic formulation of the code design constraints, shows that existing codes can be modified to take these types of errors into account, and gives a modified belief-propagation decoding algorithm. (A toy example of number-theoretic constraints of this flavor appears after this list.)
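
I do not remember the exact constraints from this talk, so what follows is not its construction, just a classic illustration of the number-theoretic flavor: the Varshamov-Tenengolts codes, which keep the binary words x of length n satisfying sum_i i*x_i ≡ 0 (mod n+1) and can correct a single deletion. The brute-force check below (with an arbitrarily chosen n) verifies that the deletion balls of distinct codewords are disjoint, which is exactly what single-deletion correction requires.

```python
# Toy check of the Varshamov-Tenengolts single-deletion-correcting codes
# (an illustration of number-theoretic code constraints, not the construction
# from the talk). Keep binary words x with sum_i i*x_i = 0 (mod n+1).
from itertools import product

n = 8
codewords = [x for x in product((0, 1), repeat=n)
             if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == 0]

def deletion_ball(x):
    """All words of length n-1 obtainable from x by deleting one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

# Single-deletion correction <=> deletion balls of distinct codewords are disjoint.
ok = all(deletion_ball(a).isdisjoint(deletion_ball(b))
         for i, a in enumerate(codewords) for b in codewords[i + 1:])
print(len(codewords), "codewords of length", n, "| deletion balls disjoint:", ok)
```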

ITA Workshop : network coding

ITA coincided with NetCod07, which was a small miniworkshop (single-day, single-track) on network coding. I went to a few talks there, but unfortunately missed the one I most wanted to see, “Resilient network coding in the presence of Byzantine adversaries.” Oh well, at least there’s a paper for me to read.

  • The benefits of network coding for peer-to-peer storage systems (Alexandros G. Dimakis, P. Brighten Godfrey, Martin J. Wainwright, Kannan Ramchandran, UC Berkeley)

    This talk was on designing a decentralized storage system in which individual nodes (peers) hold coded chunks of the total file. To be concrete, say we have an 8 MB file. Someone wishing to download the file can download 8 coded chunks of 1 MB each from randomly chosen peers and with high probability recover the file. The issue with this system is that if a peer leaves the network and a new peer joins, that peer should not have to download all 8 MB to make a new coded 1 MB chunk for itself to store. So the question this paper answers is how much overhead (excess downloading) a new peer has to pay in order to store a coded packet, and how we can design codes with low overhead in this metric so that peers can leave and join the network and the whole file stays stored. Alex calls these “regenerating codes,” which sounds kind of sci-fi to me. (A toy illustration of the coded-chunk setup appears after this list.)

  • Code construction for two source interference (Elona Erez and Meir Feder, Tel Aviv University)

    This paper looked at non-multicast network coding, and was focused on coming up with decentralized and low-complexity algorithms for accomplishing this. The approach was to make a “quasiflow” and use a modified version of a multicommodity flow algorithm to construct the quasiflow.

  • Characterizations of network error correction/detection and erasure correction (Shenghao Yang and Raymond W. Yeung, The Chinese University of Hong Kong)

    This was a real big-picture talk, in which Prof. Yeung tried to argue that network coding has all the analogues of regular error correcting codes. In this sense, network coding is a generalization of algebraic coding. So there are analogues of the Singleton bound, and all sorts of other things. In particular, minimum distance has an analogue in network coding, which can make a lot of intuitive connections clear.
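
Going back to the storage talk above: the real content there is the repair overhead, which the sketch below does not touch. It only illustrates the baseline the paper builds on, namely that if peers store random linear combinations of the file’s chunks over a large finite field, then any 8 coded chunks recover an 8-chunk file with high probability. The field, the number of peers, the chunk sizes, and the helper functions are my own choices for the demo, not anything from the paper.

```python
# Toy demo: peers store random linear combinations of k file chunks over GF(P);
# a downloader who contacts any k peers can recover the file with high
# probability (the random k x k coefficient matrix is invertible w.h.p.).
import random

P = 2**31 - 1      # a prime; all arithmetic is mod P
k = 8              # the file is split into k chunks
chunk_len = 4      # symbols per chunk (tiny, just for the demo)

file_chunks = [[random.randrange(P) for _ in range(chunk_len)] for _ in range(k)]

def encode():
    """One peer's storage: random coefficients plus the corresponding coded chunk."""
    coeffs = [random.randrange(P) for _ in range(k)]
    coded = [sum(c * chunk[j] for c, chunk in zip(coeffs, file_chunks)) % P
             for j in range(chunk_len)]
    return coeffs, coded

def solve_mod_p(A, B):
    """Solve A X = B (mod P) for X by Gauss-Jordan elimination; A is k x k."""
    A = [row[:] for row in A]
    B = [row[:] for row in B]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)  # fails only if A is singular
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        inv = pow(A[col][col], P - 2, P)                        # inverse via Fermat's little theorem
        A[col] = [a * inv % P for a in A[col]]
        B[col] = [b * inv % P for b in B[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * ac) % P for a, ac in zip(A[r], A[col])]
                B[r] = [(b - f * bc) % P for b, bc in zip(B[r], B[col])]
    return B

peers = [encode() for _ in range(20)]      # 20 peers, each storing one coded chunk
sample = random.sample(peers, k)           # a downloader contacts any k of them
A = [coeffs for coeffs, _ in sample]
B = [coded for _, coded in sample]
print("file recovered:", solve_mod_p(A, B) == file_chunks)
```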

ITA Workshop : general comments

The ITA Workshop was last week at UCSD, and as opposed to last year I decided to go down and attend. I had a good time, but it was a bit weird to be at a conference without presenting anything. It was worth it to get a snapshot of some of the things going on in information theory, and I got a few new ideas for problems that I should work on instead of blogging. But I find the exercise of blogging about the conference useful, and at least a few people have said some positive things about it. This time around I’m going to separate posts out by subject area, loosely. My attention and drive to attend talks decreased exponentially as the week progressed, more due to fatigue than anything else, so these posts may be short (a blessing for my friends who don’t care about information theory!) and more impressionistic at times.

One general gripe I had was that sessions were very de-synced from each other. Most session chairs were unable or unwilling to curtail speakers who went over, to the point where one session I attended finished after the break between sessions. I ended up missing a few talks I wanted to see because of this. I regard it as more of a failing on the part of the speaker — an experienced researcher with many conference talks under their belt should know how to give a coherent 20-minute talk and not plan to run over. Dry runs can only tell you so much about timing, but one should be considerate towards the other speakers in the session and at the conference, no? I know this makes me sound a bit like a school-marm, but it bothers me to leave a talk before the theorem is presented so that I can make it to another talk.

I’ll write separately about the panel on publication issues, which raised some interesting points while dodging others. There was also a presentation by Dr. Sirin Tekinay, who is in charge of the NSF area under which information theory sits. I am woefully ignorant of the grant-writing process right now so I wasn’t sure how to take her comments, but it looks like a lot of emphasis is going to be on networks and cross-discipline work, as is the trend. Not all research can be said to have applications to networks, though, so that emphasis seems a bit unfortunate…

new paper : deterministic list codes for state-constrained AVCs

It should be up on ArXiV later today…

A.D. Sarwate and M. Gastpar
Deterministic list codes for state-constrained arbitrarily varying channels
Submitted to IEEE Transactions on Information Theory
ArXiV cs.IT/0701146

The capacity for the discrete memoryless arbitrarily varying channel (AVC) with cost constraints on the jammer is studied using deterministic list codes under both the maximal and average probability of error criteria. For a cost function $l(\cdot)$ on the state set and constraint $\Lambda$ on the jammer, the achievable rates are upper bounded by the random coding capacity $C_r(\Lambda)$. For maximal error, the rate $R = C_r(\Lambda) - \epsilon$ is achievable using list codes with list size $O(\epsilon^{-1})$. For average error, an integer $L_{sym}(\Lambda)$, called the \textit{symmetrizability}, is defined. It is shown that any rate below $C_r(\Lambda)$ is achievable under average error using list codes of list size $L > L_{sym}(\Lambda)$. An example is given for a class of discrete additive AVCs.

IEEEtran BibTeX annoyance

I’m busy as a bee writing two papers for ISIT 2007 (in Nice, woohoo!) and as usual I find myself at odds with the IEEE Transactions style formats. The IEEEtran BibTeX style by default puts the bibliography in the order in which references are cited, and as far as I can tell there is no option for putting things in alphabetical order. One option is of course to use the \nocite command before any of the other text. This will put the citations in the bibliography without anything in the main text — a handy feature for sneaking in references that you don’t need but should cite (perhaps to appease reviewers). But that hack defeats the purpose of BibTeX, which is to keep you from futzing with your bibliography by formatting it correctly for you, be it APA, MLA, ACM, or IEEE.

I understand that for a survey paper it would be advantageous to list references in the order that they are cited. That way all the papers on topic A will be in a big block together in the bibliography, and the cite package provides a nice shorthand that will list the references as [1 – 5] instead of [1][2][3][4][5]. For conference papers, which often have fewer than 10 citations, it seems that the aesthetic benefits of an alphabetized bibliography outweigh the minor inconvenience in typesetting. From looking at existing published papers, it seems that the IEEE isn’t particularly insistent on making the bibliography in citation-order. So why not provide the alphabetical option?

Perhaps it’s because the entire LaTeX package system is being maintained by Michael Shell, who I’m sure has better things to do with his time, like a real job. It almost boggles the mind that so many people in the IEEE use this LaTeX package and the Institute doesn’t really support it in the way that the AMS supports AMSTeX.

pioneering works in music and more generally

I was reading some Adorno a while ago, and this excerpt (from the meandering essay “Motifs” in the collection Quasi una Fantasia) stuck with me:

Among the most infamous phrases used to defeat changes in musical consciousness is that of the ‘trendsetting or pioneering work.’ A work of art legitimates itself historically only by virtue of its uniqueness and intrinsic validity. Only works which have truth and consistency can impinge on the historical process. The specific work can never be reconstructed or deduced from the historical totality; on the contrary, the totality is contained within its most minute cell. But by substituting a connection with the presumed course of history for insight into that unique concrete artefact the critic defects from the work and escapes into a future which, as often as not, turns out to be the past. If a work points in new directions it raises the hope that it will not be necessary to expend too much effort on it, since it is nothing but a stopping point along the track which will surely lead into the Grand Central Station of the great platitudes.

The same critique can probably be applied to other disciplines as well. Information Theory lives in the shadow of Shannon’s landmark 1948 paper, and sometimes, to hear people talk about it, one feels that Shannon was Moses coming down from the mountain. It’s not often situated within its historical context or phrased as part of a particular line of thought or collection of concerns. This is not to downplay Shannon’s contribution to the field, of course, but is more an observation about the post-hoc discussion of his work. Other papers are sometimes accorded a similar status — the Ahlswede et al. paper on network coding may be an example. This is different from papers which have beautiful results, although often the two go hand in hand. Do these pioneering papers really come out of the blue? Treating them that way ignores the connections between the work and other related ideas that have been floating about.

There’s also a niggling notion that your PhD thesis should be one of these “trendsetting works.” What’s being ignored again is process. Is a thesis the culmination of work, a demonstration of your potential to do interesting work, a document of “what I did in grad school,” a proposal for a new avenue of research, or…?

Transactions growth

On a similar tip as my comments on ISIT’s size, the IT Society’s Board of Governors formed an ad-hoc committee to talk about the ballooning-out-of-control of the Transactions. They filed a report at the ISIT in Seattle. Some highlights:

  1. “… the exponential curve, which corresponds to an annual growth rate of 7.5%, is by far the best fit to the data… we may well find ourselves in the following situation within less than 10 years. The Transactions will be publishing over 10,000 pages per year…”
  2. “… it appears the growth will be sustainable as long as we charge for the cost of producing and mailing hard copies.”
  3. The average length of a regular paper was 10.3 pages in 1989 and 15.8 pages in 2006.

Something that is distinctly missing from the plots provided in the report is the variance of paper length or a histogram of paper lengths. Although an average length of 15.8 pages doesn’t seem so bad, if the distribution has a heavy tail some other solutions might present themselves.
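
As a sanity check on the first quote, the arithmetic works out. I don’t have the report’s actual page counts in front of me, so the starting value below is a guess (roughly 5,000 pages per year), but under 7.5% annual growth a doubling takes just under ten years, which matches the 10,000-page projection.

```python
# Sanity check of the report's projection. The ~5,000 pages/year starting
# point is my guess, not a number from the report; only the 7.5% growth rate
# and the 10,000-page figure are quoted above.
from math import log

growth_rate = 1.075
pages_now = 5000        # assumed current annual page count
target = 10000
years = log(target / pages_now) / log(growth_rate)
print(f"Years to reach {target} pages/year: {years:.1f}")   # about 9.6
```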

The report describes several possible solutions, including doing nothing, splitting the transactions into two journals, limiting page lengths, going to all-electronic publishing except for libraries, etc. The recommendations they make are threefold:

  1. Go to all-electronic publishing except for libraries. This limits the financial burden.
  2. Make a hierarchical organization of the editorial board with sub-editors in chief for different areas.
  3. A 5-page limit for correspondence items.

I’m not sure how I feel about all-electronic publishing. Libraries want to have hard copies of things for archival purposes, and this seems to be a neat way of passing the buck to them — will more expensive binding mean more cost on their institutional subscription? On the one hand, you save money by printing fewer copies, but on the other the hard copies may cost more. Probably this saves money all around though.

The sub-editing is a way of making the large page numbers tractable. They specifically don’t want to narrow the scope of the journal, which is good, but they also don’t want to cut more papers by making the acceptance rate lower, so the only way to keep the quality high is to add more reviewers and editors. But they simultaneously note that the number of reviewers is not increasing at the same rate as the number of pages.

The 5-page limit is somewhat odd — ISIT papers are already 5 pages, and the difference would seem to be better peer-reviewing and a very long publication delay. While this would help control the page numbers, they specifically do not recommend adding a page limit for regular papers. What happens in the gap between a 5-page idea and a 15.8-page regular paper?

Taken together, the proposed solutions seem to consciously avoid taking the problem head-on. A more aggressive solution might include things like

  • Imposing page charges. At the moment the policy is:

    Page Charges: If the manuscript is accepted for publication, the author’s company or institution will be requested to cover part of the cost of publication. Page charges for this journal are not obligatory nor is their payment a prerequisite for publication. The author will receive 100 free reprints without covers if the charge is honored. Detailed instructions will accompany the proof.

    The Signal Processing Transactions charges $110/page up to 8 pages and $220/page thereafter. Making page charges mandatory for longer papers or adding a mandatory charge per page for papers over 20 pages may encourage authors to write shorter (and perhaps more readable?) papers. (A back-of-the-envelope comparison appears after this list.)

  • Encouraging editors and reviewers to more aggressively promote correspondence items. They bring up the point that correspondence items suffer from “inflated introductions and similar excesses, designed by the authors in order to avoid the correspondence classification.” If there are more specific instructions to editors to… edit things down, then both regular papers and correspondence items can be trimmed.
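
For what it’s worth, here is a back-of-the-envelope look at what mandatory charges on the Signal Processing schedule quoted above would cost at various lengths. The piecewise reading of that schedule ($110/page for the first 8 pages, $220/page beyond that) is my interpretation of the sentence above, and the sample lengths are arbitrary.

```python
# One reading of the Signal Processing Transactions schedule quoted above:
# $110/page for the first 8 pages and $220/page for every page after that.
def page_charge(pages):
    return 110 * min(pages, 8) + 220 * max(0, pages - 8)

for n in (5, 8, 16, 25):
    print(f"{n:2d} pages: ${page_charge(n)}")
```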

In the end, the recommendations of the board are more carrot than stick, which may or may not work. The big message seems to be that exponential growth is good but it needs to be somehow managed. However, it may be that more active feedback is needed to control this system, which is going exponentially unstable. It has been noted that there is insufficient communication between the information theory and control communities…