ISIT 2010: error analysis and asynchrony

The remaining blogging from ISIT on my end will probably be delayed a bit longer since I have camera-ready deadlines for EVT and ITW, a paper with Can Aysal for the special SP gossip issue that is within epsilon of being done (and hence keeps getting put off), and a paper for Allerton with Michele Wigger and Young-Han Kim that needs some TLC. Phew! Maybe Alex will pick up the slack… (hint, hint)

Asynchronous Capacity per Unit Cost (Venkat Chandar, Aslan Tchamkerten, David Tse)
This paper was the lone non-adversarial coding paper in the session my paper was in. Aslan talked about a model for coding in which you have a large time window [0,A] within which B bits arrive at a random time at the encoder. The bits have to be transmitted to the receiver with delay less than D. The decoder hears noise when the encoder is silent, and furthermore doesn't know when the encoder starts transmitting; both parties know A, however. They do a capacity per-unit-cost analysis for this system and give capacity results in various cases, such as whether or not there is a 0-cost symbol, whether or not the delay is subexponential in the number of bits to be sent, and so on.
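
To make the setup concrete, here is a toy simulation of the timing model (my own illustrative sketch, not the coding scheme from the paper; the values of A, n, and the crossover probability p are made up): a known pattern starts at a random, unknown time in the window, the decoder sees pure noise everywhere else, and it locates the start by sliding a correlator.

```python
import numpy as np

# Toy version of the asynchrony problem (illustrative only): a known
# n-symbol pattern starts at a random time nu in [0, A - n); outside the
# transmission the decoder sees pure noise, and inside it sees the pattern
# through a BSC(p). The decoder slides a correlator over the window and
# declares the start wherever agreement with the pattern is highest.
rng = np.random.default_rng(1)
A, n, p = 10_000, 200, 0.1           # window length, pattern length, BSC crossover
pattern = rng.integers(0, 2, n)      # known sync/codeword pattern

nu = rng.integers(0, A - n)          # random, unknown start time
y = rng.integers(0, 2, A)            # decoder hears noise when encoder is silent
y[nu:nu + n] = pattern ^ (rng.random(n) < p)   # pattern through the BSC

# Count agreements with the pattern at every offset and pick the best one.
scores = np.array([np.sum(y[t:t + n] == pattern) for t in range(A - n + 1)])
nu_hat = int(np.argmax(scores))

print(f"true start {nu}, estimated start {nu_hat}, offset error {abs(nu_hat - nu)}")
```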

Moderate Deviation Analysis of Channel Coding: Discrete Memoryless Case (Yucel Altug, Aaron Wagner)
This was a paper that looked at a kind of intermediate behavior in summing and normalizing random variables. If you sum n iid variables and normalize by n, you get the law of large numbers and a large deviations analysis. If you normalize by \sqrt{n}, you get the central limit theorem (CLT). What if you normalize by n^{2/3}, for example? It turns out you get something like
\mathbb{P}( \frac{1}{n^{\beta}} \sum_{i=1}^{n} X_i \ge \epsilon ) \le \exp( -n^{2 \beta - 1} I_{\mathcal{N}}(\epsilon) )
where I_{\mathcal{N}}(\epsilon) is the rate function of the Gaussian you get out of the CLT. This is useful for error exponent analysis because it gives a handle on how fast the error can decay when your sequence of codes has rate tending to capacity. They use these techniques to show that for a sequence of codes with gaps \epsilon_n to capacity, the error decays like \exp( - n \epsilon_n^2 / (2 \sigma^2) ), where \sigma^2 is the dispersion of the channel.
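
As a quick sanity check on that scaling (my own sketch, not from the paper), here is a small Monte Carlo experiment with Rademacher \pm 1 variables, for which \sigma^2 = 1 and I_{\mathcal{N}}(\epsilon) = \epsilon^2/2, so the moderate-deviations expression should upper-bound the empirical tail:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0 / 3.0     # normalize by n^beta, strictly between sqrt(n) and n
eps = 0.5
trials = 200_000

for n in (100, 1_000, 10_000):
    # Sum of n Rademacher (+/-1) variables, generated via a binomial count.
    sums = 2.0 * rng.binomial(n, 0.5, size=trials) - n
    p_emp = np.mean(sums / n**beta >= eps)            # empirical tail probability
    p_md = np.exp(-n**(2*beta - 1) * eps**2 / 2.0)    # exp(-n^{2b-1} I_N(eps)), sigma^2 = 1
    print(f"n={n:6d}  empirical={p_emp:.3e}  moderate-deviations bound={p_md:.3e}")
```

Both columns shrink as n grows, with the empirical tail staying below the bound, which is the behavior the inequality above predicts.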

Minimum energy to send k bits with and without feedback (Yury Polyanskiy, H. Vincent Poor, Sergio Verdu)
This was on the minimum energy needed to send k bits over the AWGN channel, but in the non-asymptotic regime (keeping philosophically with the authors' earlier line of work). Without feedback they prove achievability and converse bounds for sending M codewords with error probability \epsilon and energy E; these give matching scalings up to the first three terms as E \to \infty. The first term is (E/N_0) \log e. Interestingly, with feedback, they show that the first term is 1/(1 - \epsilon) times the term without feedback, which makes sense since as the error goes to 0 the rates should coincide. They also have a surprising result showing that with energy E > k N_0 you can send 2^k messages with 0 error.
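
To get a feel for the numbers, here is a back-of-the-envelope sketch of just those first-order terms (my own illustration; the values of E/N_0 and \epsilon are made up, and the paper's higher-order terms are omitted):

```python
import math

E_over_N0 = 100.0    # hypothetical total energy budget, in units of N0
eps = 1e-3           # hypothetical target error probability

# Without feedback, log M grows like (E/N0) * log e to first order;
# using log base 2 expresses it in bits.
bits_no_fb = E_over_N0 * math.log2(math.e)

# With feedback, the first-order term picks up a factor of 1/(1 - eps),
# which collapses back to the no-feedback term as eps -> 0.
bits_fb = bits_no_fb / (1.0 - eps)

print(f"first-order bits, no feedback:   {bits_no_fb:.1f}")
print(f"first-order bits, with feedback: {bits_fb:.1f}")
```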
