ISIT 2006 : day five
The burnout was hitting hard today, so I only saw a few talks, and some of those I can’t really say anything about since I missed parts by being late. Overall I had a good time, but it was a draining experience. I noticed that much of the non-student crowd spent a significant fraction of their time chatting and not in sessions, which strikes me as a wise strategy. Another wise strategy is to go to the talk with the best speaker in a given slot unless you are particularly invested in the problem being presented in another slot. I went to several talks hoping to learn something about a new problem and its context, and I often came out quite confused. Information theory is wide enough that it’s hard to understand the motivation for a lot of research, and 20 minutes isn’t enough time to give that motivation for some problems, so it’s not necessarily the speaker’s fault. Still, a good speaker can get a few bits of understanding across where a less experienced speaker might be prone to missteps.
Plenary Talk:
Prof. Gemans gave an interesting overview of the current state of computer vision research, and provided a framework and perspective that I lacked. Since he ran out of time, I never really got a good feeling for the technical part of the talk, which apparently had a connection to the liar game and other fun problems, but it did drive home the point that one has to be quite careful about applying information theory ideas willy-nilly in other areas.
Cognitive Radio: An Information-Theoretic Perspective
(Aleksandar Jovicic, Pramod Viswanath)
This is a modified interference channel in which one user (the secondary) knows the message of the other user (the primary) acausally. I don’t really know how to justify this assumption, but given that, Jovicic and Viswanath solve this channel in the case where the interference-to-noise ratio (INR) is less than the SNR. The scheme uses dirty paper coding and so on.
On Jamming in the Wideband Regime (Siddharth Ray, Pierre Moulin, Muriel Medard)
This takes the mutual-information game approach to the jamming problem — the jammer and transmitter can each choose an input distribution; the transmitter wishes to maximize and the jammer to minimize the resulting mutual information. In the wideband limit for Gaussian channels, the capacity-achieving bursty transmission scheme must be modified using randomization. The problem I had with this talk is that I’m not sure I understood the model correctly — how does bursty transmission fit with the inherently iid mutual information game? To me, the danger (cf. Ahlswede) is that we always view transmission and information problems via the Shannon lens, so the assumptions made in the channel modeling are sometimes confusing. If nothing else, my study of AVCs has taught me that there are many ways to skin a cat/problem, so I need to understand this model better.
Strong Consistency of the Good-Turing Estimator
(Aaron B. Wagner, Pramod Viswanath, Sanjeev R. Kulkarni)
The Good-Turing Estimator is a method for estimating a probability distribution without knowing the alphabet size ahead of time. For example, suppose we went on a safari and recorded the number of times we saw different animals. We don’t know how many animals there are in total, since there are some we may not have seen, but we do have an empirical measure of the animals we have seen. What is the best way of inferring the underlying probability distribution from the data, assuming they are drawn iid from that distribution? This estimator assigns a total weight to all elements that occurred k times and then evenly divides that mass among those elements. This has been proven to be unbiased, and the pre-division step is consistent. The division step can be modified to make it consistent. The interesting technique is to model the underlying data as a mixture of Poissons with a “shadow” mixing distribution. It was a pretty cool analysis, and one which I wish I had thought of, since I had tried to learn about this estimator earlier. Oh well, there are lots of good ideas in the sea…
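To make the weight-and-divide step concrete, here is a minimal sketch of the classical (unsmoothed) Good-Turing estimator in Python — my own illustration, not code from the paper. It assigns the set of items seen k times a total mass of (k+1)·N_{k+1}/n (where N_k is the number of distinct items seen exactly k times and n is the sample size), then splits that mass evenly among them; the mass N_1/n is reserved for unseen items.

```python
from collections import Counter

def good_turing(samples):
    """Simple (unsmoothed) Good-Turing probability estimates.

    Items seen k times share a total mass of (k+1) * N_{k+1} / n,
    divided evenly among the N_k of them. Returns the per-item
    estimates and the mass reserved for unseen items (N_1 / n).
    """
    counts = Counter(samples)                 # item -> number of occurrences
    n = sum(counts.values())                  # total observations
    freq_of_freq = Counter(counts.values())   # k -> N_k
    probs = {}
    for item, k in counts.items():
        n_k = freq_of_freq[k]
        n_k_plus_1 = freq_of_freq.get(k + 1, 0)
        # total mass for items seen k times, shared evenly among them
        probs[item] = (k + 1) * n_k_plus_1 / (n * n_k)
    unseen_mass = freq_of_freq.get(1, 0) / n  # mass for animals never seen
    return probs, unseen_mass

# Safari example: lion twice, zebra and gnu once each
probs, unseen = good_turing(["lion", "lion", "zebra", "gnu"])
```

Note the known quirk of the unsmoothed estimator visible even in this toy run: since no animal was seen three times (N_3 = 0), "lion" gets probability zero, which is one reason the basic estimator needs smoothing or modification in practice.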
Oblivious channels (Michael Langberg)
Michael Langberg was one of the few people I didn’t know who came to my talk. I had just heard about his work before coming to ISIT, so I was pretty interested in going to his talk. Unfortunately I missed the first 3 minutes, so I hope that what I understood from it wasn’t compromised. The problem he considers looks very much like an AVC with a state constraint — a binary channel in which the channel can flip any p-fraction of bits. The key contribution of his work is to use a new concentration inequality of Vu to derive the capacity result in a more constructive way. The approach was more from a coding theory perspective, where we want to construct a code with minimum distance 2pn to avoid any errors. There’s a longer FOCS paper related to this in addition to the ISIT paper, so I have my work cut out for me.
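The distance requirement above is just the classical rule that a code with minimum distance greater than 2pn corrects any pn bit flips. A toy illustration of this (my own sketch, not from the talk): a length-n repetition code has minimum distance n, so majority decoding survives any adversarial flip pattern with fewer than n/2 flips, i.e. any p < 1/2.

```python
import random

def encode(bit, n):
    """Repetition code: send the bit n times (minimum distance n)."""
    return [bit] * n

def decode(word):
    """Majority decoding: corrects fewer than n/2 flipped positions."""
    return int(sum(word) > len(word) / 2)

n, p = 101, 0.4                               # p*n < n/2, so decoding succeeds
word = encode(1, n)
flips = random.sample(range(n), int(p * n))   # adversary flips any p*n positions
for i in flips:
    word[i] ^= 1
assert decode(word) == 1                      # the bit is still recovered
```

Of course the repetition code has vanishing rate; the point of the constructive AVC-style results is to get positive rate while still handling any p-fraction of flips.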
Rate-Constrained Beamforming for Collaborating Hearing Aids
(Olivier Roy, Martin Vetterli)
This was an interesting model problem in which there are two hearing aids, one in each ear, and one can transmit to the other to do some beamforming and noise cancellation. The problem is formulated as a unidirectional source-channel communication problem with side information at the decoder, and a rate-distortion tradeoff is derived based on a spatial model of the head. It was a neat application of multi-terminal source-coding ideas, but I think some more complicated models should be considered (a lossy version of Orlitsky’s problem?) before trying to commit this to an architecture.