ISIT 2007 : large random networks

Scaling Bounds for Function Computation over Large Networks (Sundar Subramanian, Piyush Gupta, and Sanjay Shakkottai) : This paper essentially looked at the scaling laws for the “refresh rate” of networks in which a collector node wants to compute a function f(x1, x2, …, xn) of measurements at the nodes of a network. The network has bounded degree, and there is a difference in scaling between type-sensitive (mean, mode) and type-threshold (max, min) functions. They show that deterministic bounds on type-sensitive functions are not possible in general, but probabilistic bounds are. By using a joint source-channel coding strategy for AWGN networks, they obtain constant refresh rates for small path-loss exponents.
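A toy contrast may make the type-sensitive/type-threshold distinction concrete. This sketch is my own illustration, not from the paper; all the variable names are made up:

```python
import random

# Toy contrast: a type-threshold function such as max depends only on
# which values occur among the sensor readings, while a type-sensitive
# function such as the mean depends on the full empirical distribution
# (the "type") of the readings.

random.seed(0)
alphabet = [0, 1, 2, 3]
readings = [random.choice(alphabet) for _ in range(1000)]

# For max, each alphabet value contributes one bit: "does it occur anywhere?"
occupancy = {a: any(r == a for r in readings) for a in alphabet}
max_val = max(a for a, present in occupancy.items() if present)

# For the mean, the collector needs the whole type, whose description must
# get finer as the number of nodes n grows.
n = len(readings)
t = {a: sum(r == a for r in readings) / n for a in alphabet}
mean_val = sum(a * p for a, p in t.items())

assert max_val == max(readings)
assert abs(mean_val - sum(readings) / n) < 1e-9
```

The point of the scaling results, as I understood them, is that the one-bit-per-value structure of type-threshold functions is what lets them be refreshed faster.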

Characterization of the Critical Density for Percolation in Random Geometric Graphs (Zhenning Kong and Edmund M. Yeh) : Since we had a reading group on percolation theory this semester, this talk felt right up my alley. Although Monte Carlo simulations give good numerical estimates of the critical threshold (density) at which percolation (the formation of a giant connected component) occurs in random geometric graphs, the known analytical bounds are quite loose. This paper gets tighter analytical bounds by doing some smarter bounding of the “cluster coefficients,” which come from looking at the geometry of the percolation model.
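For context, the kind of Monte Carlo estimate mentioned above can be sketched in a few lines. This is my own toy version, not the paper's method; the function name and all parameters are made up:

```python
import random

def largest_component_fraction(lam, L=10.0, seed=0):
    """Drop ~lam * L^2 uniform points in an L x L box, connect pairs within
    unit distance, and return the fraction of nodes in the largest
    connected component."""
    rng = random.Random(seed)
    n = max(1, int(lam * L * L))
    pts = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(n)]
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):                # O(n^2) pairwise check; fine for a demo
        for j in range(i + 1, n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= 1.0:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Well below the critical density the largest cluster stays small; well
# above it a giant component containing most nodes emerges.
print(largest_component_fraction(0.3), largest_component_fraction(3.0))
```

Sweeping the density and watching where the giant-component fraction jumps is how the numerical threshold (around 1.44 for a unit connection radius, if I remember the simulation literature correctly) is located; the paper's contribution is squeezing the analytical bounds toward it.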

ISIT 2007 : feedback

Communication with Feedback via Posterior Matching (Ofer Shayevitz and Meir Feder) : This work was an attempt to come up with a common framework and generalization of the Horstein and Schalkwijk-Kailath feedback coding schemes, in which the encoder uses the feedback to track the decoder and “steer” it to the correct message. They come up with an idea called “posterior matching” and apply it to DMCs to show that a simple “steering” encoder, operating through the posterior CDF at the decoder, can also achieve the mutual information I(Q,W) of the channel. It’s a pretty cool result, in the line of “A is like B in this way.”
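Since Horstein's scheme for the BSC is the prototype that posterior matching generalizes, a toy simulation of it shows the “steering” idea. This is my own simplification on a discretized message interval, not the paper's general construction, and the function name and parameters are made up:

```python
import random

def horstein_bsc(theta, p=0.1, n_rounds=100, grid=4096, seed=1):
    """Transmit a message point theta in [0,1) over a BSC(p) with noiseless
    feedback: each round the encoder sends whether theta lies above the
    decoder's posterior median, and the decoder does a Bayes update."""
    rng = random.Random(seed)
    post = [1.0 / grid] * grid          # decoder's posterior over grid cells
    for _ in range(n_rounds):
        # Both ends know the posterior median (the feedback is noiseless).
        cum, median_idx = 0.0, 0
        for i, w in enumerate(post):
            cum += w
            if cum >= 0.5:
                median_idx = i
                break
        bit = 1 if theta >= (median_idx + 1) / grid else 0
        recv = bit ^ (rng.random() < p)  # BSC flips the bit with prob p
        for i in range(grid):            # Bayes update of the posterior
            sent_if_here = 1 if i > median_idx else 0
            post[i] *= (1 - p) if sent_if_here == recv else p
        total = sum(post)
        post = [w / total for w in post]
    # MAP estimate of the message point
    return max(range(grid), key=lambda i: post[i]) / grid

print(horstein_bsc(0.3141))
```

The bisect-at-the-median step is exactly the “steering”: the encoder always pushes half the decoder's remaining uncertainty one way, which is what posterior matching abstracts via the posterior CDF.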

Broadcasting with Feedback (Sibi Raj Bhaskaran) : This addresses the question of broadcasting when feedback is only available to one user. In a degraded Gaussian broadcast setting, if the feedback is from the strong user, you can use a Gaussian codebook for the weak user with a Schalkwijk-Kailath scheme for the strong user to get an increased capacity region. This is one of those papers that I’ll have to read a bit more carefully…

The Gaussian Channel with Noisy Feedback (Young-Han Kim, Amos Lapidoth, and Tsachy Weissman) : This talk was about error exponents for the AWGN channel with noisy feedback. By using some change-of-measure and genie arguments, they show a non-trivial upper bound, and a three-phase coding scheme gives a lower bound which scales like the inverse of the feedback noise variance. Unlike the perfect-feedback case, where the exponent is infinite, both bounds are finite. Furthermore, they show that linear coding with noisy feedback cannot achieve any positive rate.

Error Exponent for Gaussian Channels with Partial Sequential Feedback (Manish Agarwal, Dongning Guo, and Michael L. Honig) : In this talk, they take an AWGN channel in which only a fraction of the received symbols are fed back and ask for the error exponent. They propose an achievable scheme that uses the slots with feedback separately from the slots without. With feedback, they try to steer the decoder into a favorable position, and then use a forward error-correcting code to get the rest of the rate.

A Coding Theorem for a Class of Stationary Channels with Feedback (Young-Han Kim) : The channels under consideration are those with memory of order m, in which case the capacity is given by a normalized directed mutual information. The main interest for me was in the proof strategy, which used Shannon strategies and something called the “Nedoma decomposition,” which I have to read about a bit more… except that it’s in German, so I have to brush up my German first.

Capacity of Markov Channels with Partial State Feedback (Serdar Yüksel and Sekhar Tatikonda) : This was about partial feedback, in the sense that the channel state is quantized and fed back to the encoder. For a Markov state transition with some technical mixing conditions, they find a single-letter expression for the feedback capacity and show that a sliding-window encoder is “almost good enough,” which is good news for practical coding schemes. They also use some nice dynamic programming with the directed mutual information as the running cost, which merits some closer inspection, I think.

ISIT 2007 : source coding

Source Coding with Distortion through Graph Coloring (Vishal Doshi, Devavrat Shah, and Muriel Médard) : This paper looked at the rate-distortion function for reconstructing a function f(X,Y) with X at the encoder and Y as side information at the decoder. One can think of this as a “functional” Wyner-Ziv, or in some cases as a noisy Wyner-Ziv. In the lossless setting, the reconstruction function can be found using graph entropy, and minimum-entropy graph coloring turns out to be a way of obtaining a solution. For the lossy problem, they find a similar graph coloring scheme but can’t get a single-letter expression in all cases. What is interesting to me is the optimality of “preprocess + Slepian-Wolf,” which is similar in spirit to the work done by Wang, Wagner, Viswanath and others for Gaussian multiterminal source coding problems.
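A tiny example of the characteristic-graph idea in the lossless case may help. This is my own construction, not one from the paper; the function f and the alphabets are made up:

```python
from itertools import combinations

# To let the decoder compute f(X, Y) from a compressed version of X plus
# side information Y, connect two x-values whenever some jointly possible
# y makes them give different function values; any proper coloring of this
# "characteristic graph" is a sufficient description of X.

X_vals = [0, 1, 2, 3]
Y_vals = [0, 1]
f = lambda x, y: (x + y) % 2          # decoder wants the parity of x + y

# Build the characteristic graph on X (here all (x, y) pairs are assumed
# jointly possible).
edges = {(x, x2) for x, x2 in combinations(X_vals, 2)
         if any(f(x, y) != f(x2, y) for y in Y_vals)}

# Greedy coloring: the color is what the encoder actually transmits.
color = {}
for x in X_vals:
    taken = {color[x2] for x2 in color
             if (min(x, x2), max(x, x2)) in edges}
    color[x] = min(c for c in range(len(X_vals)) if c not in taken)

def decode(c, y):
    """Given the transmitted color and the side information y, the function
    value is the same for every x of that color, so it is well-defined."""
    vals = {f(x, y) for x in X_vals if color[x] == c}
    assert len(vals) == 1
    return vals.pop()

# Two colors suffice here (x's of equal parity are never confusable),
# versus four raw source values.
assert len(set(color.values())) == 2
for x in X_vals:
    for y in Y_vals:
        assert decode(color[x], y) == f(x, y)
```

The “preprocess + Slepian-Wolf” structure is then: color X first (the preprocessing), and Slepian-Wolf-encode the color sequence against Y; minimizing the entropy of the coloring is where the graph-entropy machinery comes in.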

Compound conditional source coding, Slepian-Wolf list decoding, and applications to media coding (Stark C. Draper and Emin Martinian) : The main motivation here was that many multimedia applications (such as video coding) may more fruitfully be thought of as compound source coding problems with an unknown side information at the decoder. In this setting, we can imagine the side information as one of P different predictors of the source X available at the encoder. The encoder can use the different predictors to list decode the source message and send conditional list-disambiguation information in addition to the source encoding. It’s a neat scheme that seems quite related to some of my thesis work on list decoding for AVCs.

Correlated Sources over a Noisy Channel: Cooperation in the Wideband Limit (Chulhan Lee and Sriram Vishwanath) : I have to admit I didn’t fully get this talk, but it looked at wideband distributed source coding with “highly correlated sources.” They propose a modified PPM scheme which can exploit the correlation as if the encoding were joint, with a small bit-error rate (but possibly a larger block-error rate). What was unclear to me was why the modified error criterion was necessary, but it seems to be an artifact of the proposed scheme. The algorithm requires a sliding-window decoder whose analysis seems a bit tricky.

Joint Universal Lossy Coding and Identification of Stationary Mixing Sources (Maxim Raginsky) : What is the loss in estimating the parameter of a source and doing universal lossy source coding? By using a competitive optimality framework and a Lagrangian formulation to trade off the parameter error and source distortion, Raginsky can bound the loss in performance. This falls into the category of the O(log n/sqrt(n)) results that I don’t know much about, but I will probably take a look at the full paper to get a better idea of how the codes work. He uses some ideas of VC dimension from learning theory, which I know a little about, so hopefully it will not be too hard going…

The source coding game with a cheating switcher (Hari Palaiyanur, Cheng Chang, Anant Sahai) : This is an extension of Berger’s source coding game, in which a switcher switches between k memoryless sources and you have to make a rate distortion code that can handle the worst case behavior of the switcher. Before, the switcher could not see the source outputs first. Here, he can (hence “cheating”). The main point is to figure out which iid distributions the switcher can emulate, and the worst one in that set gives the bound. The rest is a union bound over types and doesn’t affect the rate.

ISIT 2007 : general impressions

I had meant to start blogging ISIT as it was going on, but that turned out to be infeasible due to a number of factors. The most relevant was that I spent the first 3 days of this year’s conference battling a fever, so by the time I got back to the hotel I was in no mood to type up any of the notes I had taken from the talks, and I also had to leave early the first two days so that I could lie down. Another downside was that my attention was not as sharp as it should have been, so I apologize in advance for the facile nature of my observations.

In general, the program this year felt way more Shannon-theory heavy to me than last year’s. Perhaps that’s because I wasn’t hanging out with many coding theorists, so I didn’t hear about the talks, but I felt like the coding talks in which I was interested were few in number. Furthermore, I ended up missing the one or two that I really wanted to attend (such as the new Reed-Solomon list decoding algorithm).

My biggest gripe was the lack of water at the sessions — there were two snack breaks, between the two morning and two afternoon sessions. There was juice, water, coffee and caffeinated tea, and everything was cleared out immediately upon the resumption of the sessions. I understand taking away the coffee and juice, but there was no water to be had in the session rooms and none in the convention area unless you bought it at the overpriced bar. Perhaps that’s in the contract with the convention center, but it’s a lousy deal.

This year they were also quite zealous in checking that everyone had their conference badge to prove that they had registered. The stinginess with respect to the water complemented (in some sense) their concern that freeloaders not be allowed into the sessions. I left my badge at the hotel one day and was made to get a temporary one for that day.

The big news of course was that Bob Gray won the Shannon Award for next year, which made me happy. I heard some people say beforehand that he seemed like an outside shot since he’s not a pure information theorist per se, and that’s what the award is about. In some sense Sergio Verdú is an Information Theorist’s information theorist, and I think one would be hard-pressed to find a candidate like him. You wouldn’t say Kailath is a pure IT guy either, right?

All in all, I had a good time, and had a few interesting research discussions with different people. I find myself wanting to work on some new problems, which is good from the perspective of learning some new areas, but bad from the perspective of trying to wrap things up and write my thesis. I was asked by several people if I was graduating soon and what I was planning on doing next, which is always a terrifying question. Hopefully I’ll get some of that sorted out this summer…

warning : ISIT 2007 blogging to begin soon

I was completely ill for the first day of ISIT so I only made it to a handful of talks, but I have notes and will be posting about the conference soon, hopefully. Let this be a warning to those who read this blog and are bored by information theory…

UPDATE : still ill, which did not help my talk today. Hopefully I’ll be better tomorrow morning for talk number 2…

triple entendres, dirty jokes, and publications

In Cover’s early paper on “Broadcast Channels,” (IEEE Trans. Info Theory, vol 18, no 1, 1972), he writes:

The primary heuristic that we garner from these investigations is that high joint rates of transmission are best achieved by superimposing high-rate and low-rate information rather than by using time-sharing. Novels written with many levels of symbolism provide just one example of a mode of communication that may be perceived at many different levels by different people.[1]

[1] I am soliciting double- and triple-meaning quotes that illustrate this idea. Consider, for example, the reaction of three different people to the following donated story. Buck and Harry led a beautiful maiden into the clearing by a rope tied around her ankle. “Let’s make her fast,” said Buck, “while we have breakfast.” The anonymity of the authors will be protected.

So the meanings I can come up with are (a) “let’s prevent her from eating while we have breakfast,” (b) “let’s bind her tightly while we have breakfast,” and (c) a meaning using a sexual interpretation of “make.” There’s something a bit disquieting about a dirty joke in a journal, especially one with overtones of rape. Nevertheless, I read it as an interesting example of the ambiguity of language, even though it reifies the old-boys’-club-ness of the field…

rejected opening paragraph for my ISIT 2007 paper

And it’s a good thing too — I don’t even like Buffy

Xander wishes to leave a message for his friend Buffy and writes it on a chalkboard using a secret code. After he leaves, Spike comes by and sees the encoded message on the chalkboard. Not content to leave things as they are, he decides to make some limited alterations to what was written. When Buffy comes back that evening, she has to recover the message left for her by Xander, despite Spike’s non-causal tampering with the encoded message. In this paper we will look at a model for this problem, which can be thought of as randomized coding for arbitrarily varying channels with the codeword known to the jammer.

yet another crunch

As if having the deadlines for ISIT and CISS 2007 so close to each other wasn’t bad enough, the deadlines for the Statistical Signal Processing Workshop and the Information Theory Workshop in Tahoe are both on April 1. In the former case I didn’t mind, since I didn’t have anything I wanted to send to CISS and besides, it’s bad to distract oneself with conference papers. But in the latter case, I have results to send and reasons to go to both. Perhaps going out of town for spring break wasn’t such a good plan after all…