Overall I was pretty happy with how the day went — talks were (mostly) on schedule and I only missed one or two that I wanted to go to, but that was due more to information overload than anything else. This isn’t all of the talks I went to, but a representative sample.
Plenary Talk: Some perspectives on Computational and Communication Complexity
Andrew C. Yao (Tsinghua and Chinese Univ. of Hong Kong)
Yao is a pretty famous complexity theorist, and his talk was mostly about how communication theory ideas have influenced complexity theory. I found some of the talk a little hard to follow, mostly because I’ve never taken a CS theory class (the later it gets, the harder this is to rectify), but I definitely want to learn more about the Euclidean membership problem (which has some connections to ML decoding for additive noise channels) and the hardness amplification stuff for one-way functions.
Erasure Entropy (Sergio Verdu, Tsachy Weissman)
This was the first talk I went to. The erasure entropy of a vector X^n is just the sum of conditional entropies H(X_i | X_{\i}), appropriately normalized. That is, we look at the uncertainty of each symbol conditioned on all the others and take the average. This comes up naturally when you look at the entropy rate of oversampled continuous-time Markov signals. For memoryless sources you just get the regular entropy rate. The interesting thing is that you can define an erasure entropy rate-distortion function and actually compute it for cases in which the actual rate-distortion function is not known.
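To make the definition concrete, here is a quick sketch (my own toy example, not from the talk) that computes the erasure entropy of a correlated binary pair and compares it against the per-symbol block entropy:

```python
import math

def H(probs):
    """Shannon entropy (bits) of a distribution given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

# A hypothetical joint pmf over (X1, X2): a correlated binary pair.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def cond_entropy(joint, i):
    """H(X_i | X_{\\i}) for a two-symbol vector: average over the other symbol."""
    other = 1 - i
    h = 0.0
    for b in (0, 1):
        # marginal probability that the conditioning symbol equals b
        pb = sum(p for xy, p in joint.items() if xy[other] == b)
        cond = [p / pb for xy, p in joint.items() if xy[other] == b]
        h += pb * H(cond)
    return h

# Erasure entropy: average of the per-symbol conditional entropies.
erasure = 0.5 * (cond_entropy(joint, 0) + cond_entropy(joint, 1))
# Ordinary per-symbol (block) entropy for comparison.
block = 0.5 * H(list(joint.values()))
print(erasure, block)
```

With this joint pmf the erasure entropy comes out strictly below the per-symbol entropy; if you swap in a product distribution (the memoryless case), the two coincide, matching the claim above.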
Gaussian Fading is the Worst Fading (Tobias Koch, Amos Lapidoth)
I didn’t really follow this that well, but the basic result is that for a pretty large class of channels, Gaussian fading leads to the smallest pre-log factor in the capacity, and a similar result holds for MISO channels. It’s another brick in the wall of results that basically says that Gaussian distributions are worst-case for a lot of communication and compression settings.
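As a tiny illustration of why Gaussians keep showing up as extremal (my own numerical check, not part of the talk): among all densities with a fixed variance, the Gaussian maximizes differential entropy, so at a given power it is the least predictable. Comparing it against a uniform density of the same variance:

```python
import math

def h_gaussian(var):
    """Differential entropy (bits) of a Gaussian with variance var: 0.5*log2(2*pi*e*var)."""
    return 0.5 * math.log2(2 * math.pi * math.e * var)

def h_uniform_with_var(var):
    """Differential entropy (bits) of a uniform on [-a, a] chosen so its variance (a^2/3) equals var."""
    a = math.sqrt(3 * var)
    return math.log2(2 * a)

# At unit variance the Gaussian's entropy exceeds the uniform's.
print(h_gaussian(1.0), h_uniform_with_var(1.0))
```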
Noisy feedback improves communication reliability (Stark Draper, Anant Sahai)
I’ve seen parts of this talk before, but the basic idea here is that if you have an application which can only tolerate a certain amount of delay, you want to design coding schemes that guarantee some delay bounds. If the decoder can feed back ACK/NACKs to the encoder to indicate whether packets were received properly, you can achieve certain delay bounds. If that feedback link is itself noisy, how well can you do? There was a lot packed into this talk, but the point is that you really can get a pretty good boost.
On Information Divergence Measures and a Unified Typicality (Siu-Wai Ho, Raymond W. Yeung)
This was pretty technical, but the basic idea was that you can define a new kind of typicality (other than strong and weak) that works for both discrete and continuous random variables, and the unified-typical sequences are both strongly and weakly typical. They show some asymptotic equipartition properties for this new typicality and leave open whether new coding theorems can be proved with it. On the one hand, it is a pretty dry topic, but the thing I liked about it was that it showed there is still some room for improvements at the very fundamentals of information theory.
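From what I gathered, the unified typicality condition combines a divergence term and an entropy term: a sequence with empirical pmf q is unified-typical with respect to p if D(q‖p) + |H(q) − H(p)| ≤ η. A quick sketch of that check (this is my paraphrase of the definition, and the function names are mine):

```python
import math
from collections import Counter

def entropy(p):
    """Shannon entropy (bits) of a pmf given as a dict."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def kl(q, p):
    """D(q || p) in bits (assumes supp(q) is contained in supp(p))."""
    return sum(v * math.log2(v / p[k]) for k, v in q.items() if v > 0)

def unified_typical(seq, p, eta):
    """Check D(q||p) + |H(q) - H(p)| <= eta, where q is the empirical pmf of seq."""
    n = len(seq)
    q = {k: c / n for k, c in Counter(seq).items()}
    if any(k not in p for k in q):
        return False  # q puts mass outside p's support: divergence is infinite
    return kl(q, p) + abs(entropy(q) - entropy(p)) <= eta

p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(unified_typical('aabc' * 25, p, 0.05))   # empirical pmf matches p exactly
print(unified_typical('aaaa' * 25, p, 0.05))   # all-'a' sequence is far from p
```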
The final form of Tao’s inequality relating conditional expectation and conditional mutual information (Rudolf Ahlswede)
Green and Tao have a really exciting result on arithmetic progressions of primes. The core of the proof uses the probabilistic method, which is pretty cool for a number theory result. Part of the proof is a lemma of Tao’s that relates the difference of two conditional expectations to a conditional mutual information. The constant in the lemma’s inequality is not the tightest possible in the original proof; Ahlswede improves it and shows that the improved constant is in fact tight.
Source Description Cost (Hossein Kakavand, Abbas El Gamal)
Suppose you have an image that you want to compress for display on some LCD screen later. There is clearly a tradeoff between how many bits you spend on describing the image and the fidelity of the reconstruction you can make from those bits — this is rate-distortion. In addition, you can think that there is a cost associated with making the reconstruction, since white pixels may cost more energy than black pixels in terms of the LCD screen’s life. If you factor in this reproduction cost you get the problem studied in this talk. I still don’t understand how it is different from just adding a second distortion measure, but I think I have to look at the proceedings to get that.
Information Embedding with Distortion Side Information (Ashish Khisti, Emin Martinian, Gregory Wornell)
This is a watermarking problem in which an encoder wants to embed a watermark into a host signal without distorting it too much. In addition, the encoder has side information about how important different parts of the signal are, or maybe how “reliable” the host is at those points. For high distortion-to-noise ratios their scheme is optimal.
Capacity of Beamforming with Limited Training and Feedback (Wiroonsak Santipach, Michael Honig)
In this work, the authors look at a fading channel in which the encoder sends a pilot to the decoder, which estimates the channel and feeds back a quantized version of the channel gain. The goal is to trade off how much time you spend estimating the channel and feeding the information back with the achievable data rate. It’s an interesting problem, but I don’t think I got the most significant take-away point from the talk. The upshot seemed to be that imperfect channel knowledge implies that you can’t use all of the antennas in the base station. In fact, you only need to activate N/log N of them.
On the Power of Imperfect Broadcast (Matthias Fitzi, Stefan Wolf, Juerg Wullschleger)
My lack of knowledge of complexity theory reared its head again in this talk. The basic point is that you have a number of players who must use a certain protocol to agree on a common message. Some of the players may cheat, though, so the problem is interesting from a security/crypto context.