I’m writing this (but not posting it) from somewhere above Belarus, on my way to Delhi and then on to Bangalore for SPCOM 2012. I was extended a very kind invitation to give a talk there, and I decided to present some work related to my thesis research on AVCs. I’m still working out my one-sentence summaries, but I figured I could use the old blog to work out some thoughts. Plus I don’t think I am going to be able to sleep on this flight any more.
The motivation for this work is the question of how to model uncertain interference. The Shannon-theoretic approach is to say that a communication channel is subject to stochastic noise and then look at how properties of the noise (memory, burstiness) affect the capacity. The coding-theoretic approach is to treat (usually discrete) noise as adversarial: I need to design a code that corrects all error patterns of at most $pn$ bits in a blocklength of $n$. This is a stronger requirement, and in particular, I can even allow the noise to depend on the transmitted codeword. Of course, this leads to a lower capacity in general.
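To make the gap concrete, take binary symbols with a fraction $p$ of flipped bits (the numbers here are my own illustration). In the Shannon model this is a binary symmetric channel with capacity $1 - H(p)$, where $H(\cdot)$ is the binary entropy function. In the adversarial model, the classical route is a code with minimum distance greater than $2pn$, which corrects any $pn$ flips; the Gilbert–Varshamov bound says rates up to $1 - H(2p)$ are achievable this way. At $p = 0.1$ that is a rate of about $0.53$ in the Shannon model versus about $0.28$ in the adversarial one.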
Can we close the gap between these two models? One idea is to allow the encoder and decoder to jointly randomize (common randomness). This doesn’t buy anything in the Shannon-theoretic case, but for the case of codeword-dependent worst-case noise, the capacity turns out to be $1 - H(p)$ for the binary channel (see Langberg 2004), the same as the Shannon capacity. Unfortunately, this does not hold for more general channels, which is part of what my thesis is about.
What happens when the channel has continuous inputs and outputs, with input power limited to $P$ and interference power limited to $N$? Shannon says the capacity is $\frac{1}{2} \log\left(1 + \frac{P}{N}\right)$. The corresponding worst-case channel is the Gaussian arbitrarily varying channel (GAVC). But there’s a new twist here: can the noise depend on the transmitted codeword?
Suppose that it can’t, and that we care about the average probability of error. Without common randomness, Csiszár and Narayan showed (under technical conditions) that the capacity is $\frac{1}{2} \log\left(1 + \frac{P}{N}\right)$ only if $P > N$, and is $0$ otherwise. This is because if the interference has more power than the transmitter, it can spoof the transmitter.
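Here is a sketch of the standard symmetrization argument behind that spoofing claim (notation mine). If $N \ge P$, the interferer can pick a message $m'$ uniformly at random and transmit the corresponding codeword itself, which is feasible because $\|x(m')\|^2 \le nP \le nN$. The receiver then sees $y = x(m) + x(m')$, which is completely symmetric in $m$ and $m'$, so no decoder can tell which of the two messages was actually sent, and the average probability of error stays bounded away from zero at any positive rate.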
However, if there is common randomness and the interference can depend on the transmitted codeword, Agarwal, Sahai, and Mitter showed that the capacity is actually $\frac{1}{2} \log \frac{P}{N}$, which is like a rate-distortion function. This makes sense: if $N \ge P$, the interference can simply cancel the transmitted codeword.
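The rate-distortion connection is easy to make explicit (the identification here is mine). The rate-distortion function of a Gaussian source with variance $\sigma^2$ under mean squared error distortion $D$ is $R(D) = \frac{1}{2} \log \frac{\sigma^2}{D}$ for $D < \sigma^2$, and $0$ otherwise. The Agarwal–Sahai–Mitter capacity is exactly $R(N)$ for a source of variance $P$: the codeword plays the role of the source, and the interferer’s power budget plays the role of the distortion. In particular, when $N \ge P$ we are in the $R(D) = 0$ regime, which matches the cancellation argument above.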
The model I am going to talk about is one in which the interference cannot depend on the transmitted codeword itself, but only on a noisy version of it. I call this coding against myopic adversaries (it sounds nicer than “four-eyed”). A rough way to write down the model is sketched below; in a later post I’ll talk about some of the geometric intuition for this case.
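Concretely (this is my shorthand, and I am suppressing details of the formal setup): the transmitter sends a codeword $x(m)$ with $\|x(m)\|^2 \le nP$; the adversary observes $v = x(m) + w$ for some independent observation noise $w$, chooses interference $s = f(v)$ with $\|s\|^2 \le nN$; and the receiver sees $y = x(m) + s$. The observation noise $w$ interpolates between the two extremes above: very noisy observations push us toward the oblivious Csiszár–Narayan setting, while noiseless observations recover the omniscient setting where the interference knows the codeword exactly.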