ISIT 2012: the Shannon Award!

I am pleased as punch that Katalin Marton won the 2013 Shannon Award. She made seminal contributions to information theory in its earlier years and has since been active in combinatorics and stochastic processes. Many people, myself included, start doing information theory because of an interest in digital communications, and we can get blinders on regarding the role information theory plays in other fields, such as statistics (cf. Rissanen) and combinatorics (cf. Ahlswede). It’s great to see Marton’s work recognized, and I hope she talks about some of her more recent interests. I also find it inspirational that a woman has finally won the Shannon Award; I hope it’s a sign of the change that is happening in engineering more broadly.


Myopic adversaries in the Gaussian channel

I’m writing this (but not posting it) from somewhere above Belarus, on my way to Delhi and then on to Bangalore for SPCOM 2012. I was extended a very kind invitation to give a talk there, and I decided to present some work related to my thesis research on AVCs. I’m still working out my one-sentence summaries, but I figured I could use the old blog to work out some thoughts. Plus I don’t think I am going to be able to sleep on this flight any more.

The motivation for this work is the question of how to model uncertain interference. The Shannon-theoretic approach is to say that a communication channel is subject to stochastic noise and then look at how properties of the noise (memory, burstiness) affect the capacity. The coding-theoretic approach is to treat (usually discrete) noise as adversarial: I need to design a code that corrects every error pattern that flips at most $pn$ bits. This is a stronger requirement, and in particular, I can even allow the noise to depend on the transmitted codeword. Of course, this leads to a lower capacity in general.

Can we close the gap between these two models? One idea is to allow the encoder and decoder to jointly randomize (common randomness). This doesn’t buy anything in the Shannon-theoretic case, but for codeword-dependent worst-case noise, the capacity of the binary channel turns out to be $1 - h(p)$ (see Langberg 2004). Unfortunately, this does not hold for more general channels, which is part of what my thesis is about.

What happens when the channel has continuous inputs and outputs, with the input power limited to $P$ and the interference power limited to $N$? Shannon says the capacity is $\frac{1}{2} \log(1 + P/N)$. The corresponding worst-case channel is the Gaussian arbitrarily varying channel (GAVC). But there’s a new twist here: can the noise depend on the transmitted codeword?

Suppose that it can’t and we care about the average error. Without common randomness, Csiszár and Narayan showed (under technical conditions) that the capacity is $\frac{1}{2} \log(1 + P/N)$ if $P > N$, and is $0$ otherwise. This is because if the interference has more power than the transmitter, it can spoof the transmitter by sending a fake codeword of its own, and the decoder cannot tell which is real.

However, if there is common randomness and the interference can depend on the transmitted codeword, Agarwal, Sahai, and Mitter showed that the capacity is actually $\frac{1}{2} \log(P/N)$, which looks like a rate-distortion function. This makes sense: if $P < N$, the interference can simply cancel the transmitted codeword.
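For intuition, here is a little numerical sketch of my own comparing the three capacities above as the ratio $P/N$ varies (function names are mine, capacities in nats):

```python
import math

def awgn(P, N):
    """Shannon capacity of the AWGN channel: (1/2) log(1 + P/N)."""
    return 0.5 * math.log(1 + P / N)

def gavc(P, N):
    """GAVC capacity without common randomness (Csiszar-Narayan):
    equals the AWGN capacity when P > N, and zero otherwise."""
    return awgn(P, N) if P > N else 0.0

def asm(P, N):
    """Capacity with common randomness and codeword-dependent interference
    (Agarwal-Sahai-Mitter): (1/2) log(P/N), or zero when P <= N."""
    return max(0.0, 0.5 * math.log(P / N))

N = 1.0
for P in [0.5, 1.0, 2.0, 4.0, 10.0]:
    print(f"P/N = {P:5}:  AWGN = {awgn(P, N):.3f},  "
          f"GAVC = {gavc(P, N):.3f},  ASM = {asm(P, N):.3f}")
```

Note how the GAVC capacity jumps discontinuously at $P = N$, while the Agarwal–Sahai–Mitter rate stays strictly below the AWGN capacity everywhere: the price of letting the interference see the codeword.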

The model I am going to talk about is one in which the interference cannot depend on the transmitted codeword, but on a noisy version of the transmitted codeword. I call this coding against myopic adversaries (it sounds nicer than “four-eyed”). In a later post I’ll talk about some of the geometric intuition of this case.