This is one of the papers I saw at ISIT. At the time I thought I understood it, but reading the actual paper made it much clearer.
S. Shamai, E. Telatar, and S. Verdú
Proc. Int’l Symp. on Information Theory
The background for this paper is fountain codes. The basic idea is pretty simple. Suppose you have k packets of data that you want to send. One way of sending them is to pick a random subset of the packets, XOR them together, and send that, along with some header information saying which packets are in the XOR. Each coded packet is just a linear combination of the source packets, so a receiver that gets enough coded packets can invert the resulting set of linear equations and recover the original data. The encoder acts like a fountain spewing out coded packets. Now suppose the packets go over an erasure channel, on which each packet either arrives intact or is erased. If there are many users with varying link qualities, the ones with better links will be able to decode early and disconnect, and a user who connects in the middle loses nothing from having missed the "earlier" coded packets. All that matters is that each user/decoder collects enough unerased packets to decode.
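To make this concrete, here's a toy sketch (my own illustration, not the paper's construction): each coded packet is the XOR of a uniformly random subset of the k source packets, and the decoder does Gaussian elimination over GF(2) on whatever survives the erasures.

```python
import random

def encode(packets, rng):
    """One fountain-coded packet: XOR of a random nonempty subset,
    plus a 'header' recording which source packets went into it."""
    k = len(packets)
    subset = [i for i in range(k) if rng.random() < 0.5]
    if not subset:                       # skip the useless all-zero equation
        subset = [rng.randrange(k)]
    coded = 0
    for i in subset:
        coded ^= packets[i]
    return subset, coded

def decode(k, received):
    """Gaussian elimination over GF(2). `received` is a list of
    (subset, value) pairs; returns the k source packets, or None if
    the equations collected so far don't yet have full rank."""
    # each equation: [bitmask of source-packet indices, XORed value]
    rows = [[sum(1 << i for i in s), v] for s, v in received]
    solved = []
    for col in range(k):
        idx = next((i for i, r in enumerate(rows) if r[0] >> col & 1), None)
        if idx is None:
            return None                  # need more coded packets
        pivot = rows.pop(idx)
        for r in rows + solved:          # clear this column everywhere else
            if r[0] >> col & 1:
                r[0] ^= pivot[0]
                r[1] ^= pivot[1]
        solved.append(pivot)
    out = [0] * k
    for mask, val in solved:             # each row now has a single 1 bit
        out[mask.bit_length() - 1] = val
    return out

# demo: an erasure channel drops packets at random; the decoder just
# needs *enough* survivors, not any particular ones
rng = random.Random(0)
packets = [rng.randrange(256) for _ in range(8)]   # k = 8 byte-sized packets
received, result = [], None
while result is None:                    # collect until the system has full rank
    header, value = encode(packets, rng)
    if rng.random() < 0.5:               # this coded packet was erased
        continue
    received.append((header, value))
    result = decode(8, received)
assert result == packets
```

A real fountain code (e.g. an LT or Raptor code) chooses the subset sizes from a carefully designed degree distribution so that decoding works by cheap belief propagation instead of full Gaussian elimination, but the linear-algebra picture is the same.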
The purpose of this paper is to see if and how this decoder-centric view of communication can be put on a more traditional information-theoretic (read: Shannon theory) foundation. The idea is that the encoder takes one of M messages and encodes it into an infinitely long string. The encoding is randomized, and the randomization is known to the decoder. The encoded symbols go through a channel, but the decoder is pay-per-view: it can only see n outputs of the channel, and it can't control which n outputs. An adversary, without knowing which codebook is being used, picks the schedule of n outputs observed by the decoder. The probability of decoding error is then the worst case over subsets of n outputs, and the fountain capacity is the supremum of rates for which this error probability can be driven to 0.
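In symbols (my paraphrase of the setup, not necessarily the paper's exact notation): with M messages and a decoder observing an adversarially chosen set of n channel outputs, the rate is R = (log M)/n, and

```latex
% worst case over the n-output schedules \sigma the adversary may pick
P_e = \max_{\sigma \,:\, |\sigma| = n}
      \Pr\!\left[\hat{W} \neq W \,\middle|\, \text{decoder sees the outputs indexed by } \sigma\right],
\qquad
C_F = \sup\left\{ R : P_e \to 0 \text{ for some sequence of rate-}R \text{ codes} \right\}.
```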
The main results of the paper are:
- Randomized coding is necessary, and an infinite amount of randomness is required.
- Fountain capacity is always upper bounded by Shannon capacity.
- For stationary memoryless channels the fountain capacity is equal to the Shannon capacity.
- For channels with memory, the fountain capacity may be less than Shannon capacity.
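Writing C for the Shannon capacity and C_F for the fountain capacity, the last three bullets say (my shorthand):

```latex
C_F \le C \ \text{in general}, \qquad
C_F = C \ \text{for stationary memoryless channels}, \qquad
C_F < C \ \text{possible for channels with memory}.
```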
What’s a little unsatisfying about this paper is precisely the thing that makes it interesting: the channel model they’re using is a slippery one, so it’s hard to judge how “reasonable” it is. For example, I think the need for infinite randomization is a real drawback, but maybe the channel assumptions can be relaxed to get rid of it. I’m interested in this paper because it is closely related to arbitrarily varying channels (AVCs), which is what my thesis will likely revolve around. There are some comments at the end relating it to AVCs, but I should think a bit more about the real underlying problem and how to connect it back to AVCs.