ISIT 2006 : day four

Plenary Talk : Towards a General Theory of Information Transfer
Rudolf Ahlswede (University of Bielefeld)

I really enjoyed this talk, mainly because I’ve spent a lot of time in the last year reading some of Ahlswede’s papers and trying to understand them. He proposed a few new ways of thinking about information transmission in light of knowledge that may be present at the receiver, and so on. For example, in identification channels the question the receiver is interested in is “did event i happen?” rather than “what event happened?” If the encoder doesn’t know a priori what the decoder wants, the coding problem is set up differently. The lecture felt a bit disorganized, but it had several quotable moments, such as “if you really want find a solution for the problem and not just turbo turbo turbo fountain…” and “you see, like coding theory has run off to the computer people…” The main lesson I took away (as a graduate student) was that one should be more courageous and try to really pose problems from a philosophical standpoint, like Shannon did, and not be beholden to the models proposed before.
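
The striking fact behind the identification setting (from Ahlswede and Dueck’s original identification paper, as I understand it) is the change in scaling: over a channel of Shannon capacity C, ordinary transmission distinguishes about 2^{nC} messages in n channel uses, while randomized identification codes can handle doubly exponentially many events:

```latex
M_{\text{transmission}} \approx 2^{nC}
\qquad \text{vs.} \qquad
N_{\text{identification}} \approx 2^{2^{nC}}
```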

  • On the Discrete Memoryless Relay Channel with Relay-Transmitter Feedback (Yacov Gabbai, Shraga I. Bross)

This was a modified block-Markov scheme for exploiting additional feedback over the relay channel. The decoder is a joint typicality decoder, and the interesting thing, to me at least, was the definition of the auxiliary random variables and their interpretation. I assume this will appear in a journal version somewhere, but in the meantime I’ll try to see what the real intuition is behind the construction.

  • The Impact of Link Layer Model on the Capacity of a Random Ad hoc Network (Vivek Mhatre, Catherine Rosenberg)

The main point of this work was that if you assume that two nodes in the Gupta-Kumar model have an error probability associated with communicating a packet, rather than an all-or-nothing SNR threshold, then you can derive a new scaling law. The title seems a bit misleading, but the modeling idea seemed pretty interesting.

  • The Effect of Node Density and Propagation Model on Throughput Scaling of Wireless Networks (Enrique Duarte-Melo, Awlok Josan, Mingyan Liu, David Neuhoff, Sandeep Pradhan)

The model here is a (1 + d)^(−α) path loss with a network on a disc whose radius grows like n^γ, where n is the number of sensors. In this Gupta-Kumar-esque sensor network model one can derive a throughput scaling law, which requires some graph-coloring arguments. The end result is a scaling that depends on γ and matches the existing results closely. Since γ can be viewed as a density parameter, this captures one more tradeoff in wireless networks.
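
The path-loss model is simple enough to write down; here is a minimal sketch (the function name and parameter names are mine, not the paper’s):

```python
def received_power(p_tx, d, alpha):
    """Bounded power-law path loss: P_rx = P_tx / (1 + d)^alpha.
    The (1 + d) form avoids the blow-up at d = 0 that a bare
    d^(-alpha) law would have."""
    return p_tx / (1.0 + d) ** alpha
```

At d = 0 the received power equals the transmitted power, and for large d it decays like d^(−α), which is why this variant is often preferred over the unbounded law in scaling analyses.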

  • Fountain Capacity (Shlomo Shamai, Emre Telatar, Sergio Verdu)

Since fountain codes have actually been incorporated into a new standard for wireless multimedia broadcast, it’s important to see how our models of communication must be modified in light of these new codes. In some sense this is a circular process — information theorists take a physical model of a channel and then come up with a capacity and methods for achieving this capacity. These coding methods are then adopted as standards, necessitating a change in the model of communication. A fountain channel is one in which we map a codeword into an infinite string of symbols. The channel gets to choose which n of these symbols the decoder sees. It’s related to an arbitrarily varying channel, and randomized codes do help make the capacity nonzero (and equal to the Shannon capacity). They show some interesting results for channels with memory in which the fountain capacity is not equal to the Shannon capacity. I really need to understand this paper better, but at a first pass it seems really interesting.
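
The fountain idea itself can be sketched with a random linear fountain over GF(2) (my own toy sketch, not the paper’s construction): the encoder emits an endless stream of random parity symbols, and any collection of symbols that spans the message space suffices to decode, no matter which ones the channel lets through.

```python
import random

def fountain_encoder(message, seed=0):
    """Endless stream of coded symbols: each is the XOR of a random
    subset of the k message bits (a random linear fountain over GF(2))."""
    rng = random.Random(seed)
    k = len(message)
    while True:
        mask = [rng.randrange(2) for _ in range(k)]
        if not any(mask):
            continue  # skip the useless all-zero combination
        yield mask, sum(m & b for m, b in zip(mask, message)) % 2

def fountain_decoder(received, k):
    """Gaussian elimination over GF(2): returns the message once the
    received symbols span all k dimensions, otherwise None."""
    pivots = {}  # leading column -> (reduced mask, value)
    for mask, val in received:
        mask = list(mask)
        # reduce against existing pivot rows until nothing changes
        changed = True
        while changed:
            changed = False
            for col, (pmask, pval) in pivots.items():
                if mask[col]:
                    mask = [a ^ b for a, b in zip(mask, pmask)]
                    val ^= pval
                    changed = True
        lead = next((c for c in range(k) if mask[c]), None)
        if lead is not None:
            pivots[lead] = (mask, val)
    if len(pivots) < k:
        return None  # not enough innovative symbols yet
    bits = [0] * k
    for col in sorted(pivots, reverse=True):
        mask, val = pivots[col]
        bits[col] = val ^ sum(mask[j] & bits[j] for j in range(col + 1, k)) % 2
    return bits
```

The adversarial flavor of the fountain channel shows up here: the decoder never cares *which* symbols it received, only whether they happen to be linearly independent.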

  • The Arbitrarily Varying Degraded Broadcast Channel with States Known at the Encoder (Amir Winshtok, Yossef Steinberg)

This is a modification of Ahlswede’s classic method for communicating over a “robust” Gelfand-Pinsker channel (or maybe AV-GPC?). The authors use a robustification and elimination technique but have to get around some hairy technical issues. I thought Amir did a good job of elucidating those issues, but I have also read most of the papers he cited, so I’m biased.

  • Randomization bounds for Gaussian arbitrarily varying channels
    (Anand Sarwate, Michael Gastpar)

    This was my talk — it was in the smallest room at the conference and a plurality of the people there were from Berkeley. I ended up feeling pretty nervous and kind of sped through things, but overall I think people liked it ok. I tried to spice it up a bit by using Koosh balls as props, which at least got a chuckle from the audience.

  • Error Exponent in AVS Coding (Ashot Harutyunyan, Han Vinck)

The main result here takes the source-coding game of Berger and calculates the error exponents. I really like the Berger paper, but I don’t have an intuition for the exponents shown in this paper. I suppose at some point maxes and mins over KL divergences become intuitive, but at the moment I’m still at a loss.

  • A Synchronization Technique for Array-based LDPC Codes in Channels With Varying Sampling Rate (Lara Dolecek, Venkat Anantharam)

Most coding models assume that the decoding circuit has perfect timing recovery, so that the output waveform is sampled in such a way that there is no timing jitter. Since this is clearly not realistic, codes which are robust to jitter are needed. The jitter can mean that one symbol is sampled twice (insertion) or that one is missed (deletion). This paper gave a new model for this kind of error and proposed a modified LDPC construction with a message-passing decoding algorithm that can correct for it. This would result in big savings in space and power consumption for these circuits, or so I am given to understand. Again, as with the fountain codes, we have a model that is kicked from implementation back to the theory level, which is pretty cool.
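
The jitter error model can be sketched as a toy channel (my own simplification, not the paper’s model; the names and probabilities are illustrative):

```python
import random

def jitter_channel(symbols, p_ins, p_del, seed=0):
    """Toy sampling-jitter channel: with probability p_del the clock
    runs fast and a symbol is missed (deletion); with probability
    p_ins it runs slow and the symbol is sampled twice (insertion)."""
    rng = random.Random(seed)
    out = []
    for s in symbols:
        u = rng.random()
        if u < p_del:
            continue           # missed sample: deletion
        out.append(s)
        if u > 1.0 - p_ins:
            out.append(s)      # duplicated sample: insertion
    return out
```

The difficulty for the decoder is that, unlike with substitution errors, the positions of the surviving samples shift, so the code has to provide synchronization as well as error correction.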

  • Coalitional Games in Gaussian Interference Channels (Suhas Mathur, Lalitha Sankaranarayanan, Narayan Mandayam)

Coalitional games are games in which players may or may not cooperate in order to achieve a certain advantage. In this paper they look at multiuser interference channels in which some transmitter/receiver pairs can choose to join up, forming a MIMO channel. If the rate allocated to a coalition is divided evenly between the members, then all the transmitters/receivers will form one big coalition. But if they can’t divide up the rates evenly, then you get a more interesting structure. At the end of the talk there were some geometric examples which brought the point home a bit. As with all game theory, I found it a bit mystifying, but I thought it was an interesting take on the analysis of large-network interference problems.
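
The even-split claim can be illustrated with a toy characteristic-function game (entirely my own sketch; the function names and the superlinear value are hypothetical, not the paper’s model):

```python
def equal_split_is_stable(v, players):
    """Check individual rationality of the evenly split grand coalition:
    every player must prefer the share v(N)/|N| to playing alone."""
    share = v(frozenset(players)) / len(players)
    return all(share >= v(frozenset({p})) for p in players)

def toy_v(coalition):
    """Hypothetical coalition value: pooling transmitters into a
    MIMO-like array gives a superlinear gain in coalition size."""
    return len(coalition) ** 1.5
```

With `toy_v`, three players each get 3^1.5 / 3 ≈ 1.73 versus 1 alone, so nobody wants to defect from the grand coalition; with a subadditive value the even split breaks down, which is roughly the “more interesting structure” regime.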

  • Is interference like noise when you know its codebook?
    (Rahul Tandra, Anant Sahai)

The idea here was that if you have an interferer whose codebook you know and you can decode its message, you effectively have an interference-free channel. However, if the receiver’s INR is too low to decode the interference, is knowledge of the codebook still helpful? The answer seems to be no in a couple of situations (e.g. if the interferer uses a Gaussian codebook); what happens if the interferer uses an arbitrary capacity-achieving code is left open. It’s an interesting idea, and the proof (by contradiction, using the MAC) is another one of those nice reduction techniques.
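
For the Gaussian case the two extremes can be written down explicitly (standard single-user rate formulas, my own framing rather than the talk’s): treating the interference as noise gives the first rate below, while decoding the interference and subtracting it off, possible only when the INR is high enough, restores the interference-free rate.

```latex
R_{\text{as noise}} = \tfrac{1}{2}\log_2\!\left(1 + \frac{\mathsf{SNR}}{1 + \mathsf{INR}}\right)
\qquad
R_{\text{cancelled}} = \tfrac{1}{2}\log_2\!\left(1 + \mathsf{SNR}\right)
```

The open regime sits in between: INR too low to decode the interferer’s message, but the codebook known at the receiver.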

The banquet was nice too, but we spent more time getting there and back than actually enjoying the dinner and the island. The big news was that Sergio Verdu won the Shannon Award, which was a bit surprising to me, since I figured they usually award it to someone older. My uneducated guess had been that Te Sun Han would get it, but then again, what do I know? Verdu’s a great speaker though, so hopefully I’ll get to go to Nice next year and hear his lecture.