ITW 2007 (Tahoe) : Day One

A lot of my comments will be short since in retrospect my notes were quite poor for most of the talks I attended. I attribute it to fatigue… I also skipped out on a number of talks to think over some new problems that had come to mind. I’m going to let Alex blog about some of the coding talks.

  • Plenary Talk: On Modeling Distributions over Large Alphabets (Alon Orlitsky)

    Orlitsky started with the non-paradoxical nature of the “birthday paradox.” After querying people in the audience one by one, he finally ended with two people who had the same birthday (9/3), namely Shlomo Shamai and Rüdiger Urbanke. This was all a set-up of course, but I honestly thought Urbanke was just sick of the process and tried to end it. One way of turning the birthday paradox on its head is to ask “if we saw that it took K queries to find two people with the same birthday, how should we estimate the length of the year?” This is precisely the problem of estimating the size of a large alphabet from a small sample. He then went on to tour the techniques of patterns, profiles, and the relation to capture-recapture statistics. I had seen a lot of this work before, but it was nice to see it again and a bit more slowly.
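    To make the inverted birthday question concrete, here is a toy Python sketch (my own illustration, not anything from the talk): simulate drawing uniformly from an alphabet of unknown size until the first repeat, then invert the standard birthday approximation E[draws] ≈ √(πn/2) to get a crude point estimate of n. The function names and the estimator are my assumptions; the real techniques Orlitsky discussed (patterns, profiles) are far more refined.

    ```python
    import math
    import random

    def queries_until_collision(n, rng):
        """Draw uniform samples from {0, ..., n-1} until a repeat appears;
        return the total number of draws made (including the repeat)."""
        seen = set()
        draws = 0
        while True:
            draws += 1
            x = rng.randrange(n)
            if x in seen:
                return draws
            seen.add(x)

    def estimate_alphabet_size(k):
        """Crude (biased) estimate: invert E[draws] ~ sqrt(pi * n / 2),
        giving n ~ 2 k^2 / pi. Only a ballpark figure, not a good estimator."""
        return 2 * k * k / math.pi

    # Average the collision time over many trials for a year of length 365.
    rng = random.Random(42)
    trials = [queries_until_collision(365, rng) for _ in range(20000)]
    avg_k = sum(trials) / len(trials)
    print(round(avg_k, 1))  # roughly sqrt(pi * 365 / 2), i.e. low-to-mid 20s
    print(round(estimate_alphabet_size(avg_k)))  # ballpark of 365
    ```

    Averaging K over many runs before squaring overestimates n slightly (Jensen's inequality cuts the other way for a single run), which is one reason the small-sample regime of the talk needs more care than this back-of-the-envelope inversion.
    
    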

  • Power Allocation for Discrete-Input Non-Ergodic Block-Fading Channels (Khoa D. Nguyen, Albert Guillén i Fàbregas, and Lars K. Rasmussen)

    This work was about finding the SNR exponent of the outage probability for power allocation strategies over a particular class of fading channels (Nakagami). They differentiated between long-term and short-term power allocation and found a relationship between the short-term optimal scheme and a Singleton bound. The more interesting part is that suboptimal, lower-complexity schemes are nearly optimal in this outage-exponent sense.

  • Probabilistic Capacity and Optimal Coding for Asynchronous Channel (Ning Cai, Siu-Wai Ho, and Raymond W. Yeung)

    This paper looked at time jitter in a continuous-time model and showed that run-length limited (RLL) codes are suboptimal for this kind of asynchrony. When the jitter is bounded in some sense, they found zero-error codes and showed that “interval codes” outperform RLL codes. For the case where the encoder and decoder are required to stay within a bounded time offset of each other, they used an approximation for the rate loss involving characteristic functions. They came up with a channel model and capacity formulation and designed new capacity-achieving codes.

  • The Effect of Finite Memory on Throughput of Wireline Packet Networks (Badri N. Vellambi, Nazanin Rahnavard, and Faramarz Fekri)

    In a packet erasure network on a line, with finite buffer sizes at the nodes, what are the benefits of network coding? The authors came up with a large Markov chain model whose steady-state distribution gives the capacity of the network. Unfortunately, the chain is a bit unwieldy, so they derived approximations to the distribution and backed them up with simulations.
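    The basic tension is easy to see in a toy version of the model. Here is a quick Monte Carlo sketch (my own simplification, not the authors' chain) of a two-hop erasure line s → r → d where the relay r has a finite buffer: each slot, each link independently succeeds with probability 1 − e, the source always has packets, and arrivals are dropped when the buffer is full. With an unbounded buffer the min-cut rate min(1 − e1, 1 − e2) is achievable; the simulation shows how a small buffer eats into that.

    ```python
    import random

    def line_throughput(e1, e2, buffer_size, steps=200000, seed=1):
        """Estimate the delivered-packet rate of a two-hop erasure line
        with a finite relay buffer, by simulating the occupancy chain.
        e1, e2: erasure probabilities of the two hops.
        An arrival into a full buffer is dropped (a modeling choice)."""
        rng = random.Random(seed)
        q = 0          # packets currently queued at the relay
        delivered = 0
        for _ in range(steps):
            arrive = rng.random() < (1 - e1) and q < buffer_size
            depart = rng.random() < (1 - e2) and q > 0
            q += int(arrive) - int(depart)
            delivered += int(depart)
        return delivered / steps

    # Min-cut capacity with e1 = e2 = 0.2 would be 0.8 with infinite buffers;
    # a one-packet buffer loses roughly half of it.
    print(round(line_throughput(0.2, 0.2, buffer_size=1), 3))
    print(round(line_throughput(0.2, 0.2, buffer_size=10), 3))
    ```

    Even this two-state/eleven-state occupancy chain has a nontrivial stationary distribution; with many nodes on the line the joint chain over all buffer occupancies blows up, which is exactly why the authors resort to approximations.
    
    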