ISIT 2010 : a few more talks

I think I lack the willpower to write up more notes on talks, and there are other things I’d like to blog about, but here are one or two sentences on some other talks that I found interesting… I also enjoyed the energy session on Friday and the talks by Sundeep and Galen on compressed sensing, but time has gotten the better of me. Next time, folks.

Channel Intrinsic Randomness
Matthieu Bloch
This was on extracting random bits from the output of a noisy channel; the extracted bits should be independent of the input to the channel. Matthieu uses the enigmatic information-spectrum method to get his results — thanks to the plenary lecture I was able to understand it a bit better than I might have otherwise.

Assisted Common Information
Vinod Prabhakaran and Manoj Prabhakaran
I was very interested in this talk because I have been thinking of a related problem. Two terminals observe correlated sources X_1^n and Y_1^n respectively. A genie observes both sources and sends messages at rates R_1 and R_2 to the two terminals, who then have to produce variables W_1 and W_2 which have large entropies and are also equal with high probability. This problem is connected to the Gács-Körner problem and Wyner’s common information problem, and also possibly this recent preprint by Andrej Bogdanov and Elchanan Mossel. They manage to solve it using a novel construction of “monotone regions.”
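
In my own notation (so any errors of translation are mine), the setup as I understood it is:

M_i = f_i(X_1^n, Y_1^n), \quad M_i \in \{1, \ldots, 2^{nR_i}\}, \quad i = 1, 2
W_1 = g_1(X_1^n, M_1), \qquad W_2 = g_2(Y_1^n, M_2)
\mathbb{P}(W_1 \neq W_2) \to 0, \quad \text{with } \tfrac{1}{n} H(W_1) \text{ as large as possible}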

Patterns and Exchangeability
N. P. Santhanam and M. Madiman
This work grew out of the AIM workshop on permanents. De Finetti’s theorem says an exchangeable process can be built up as a mixture of iid processes. Kingman showed that something called an exchangeable partition process is built up from something he called “paintbox processes.” One thing that we discovered at the workshop was that the pattern process of an iid process is the same as a paintbox process (and vice versa). The paper then goes through many connections between these processes, certain limits of graphs, and universal compression.
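
To make the pattern process concrete, here’s a toy example of my own (not from the paper): the pattern of a sequence replaces each symbol by the order of its first appearance, so b, a, b, c has pattern 1, 2, 1, 3.

```python
# Toy illustration of patterns (my own example, not from the paper).
import numpy as np

def pattern(seq):
    """Replace each symbol by the index of its first appearance."""
    first_seen = {}
    return [first_seen.setdefault(s, len(first_seen) + 1) for s in seq]

rng = np.random.default_rng(1)
x = rng.choice(["a", "b", "c"], size=10, p=[0.5, 0.3, 0.2])  # iid source
print(list(x))     # e.g. ['a', 'c', 'a', 'a', 'b', ...]
print(pattern(x))  # e.g. [1, 2, 1, 1, 3, ...]
```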

Universal Hypothesis Testing in the Learning-Limited Regime
Benjamin G. Kelly, Thitidej Tularak, Aaron B. Wagner, and Pramod Viswanath
This was a really great talk. The problem: for each n you get n samples X^n drawn i.i.d. according to p_n or q_n on an alphabet \mathcal{A}_n which can grow with n. Given a new sequence Z^n, you have to decide whether it was generated according to p_n or q_n. They show a number of results saying that consistency is possible for |\mathcal{A}_n| sublinear in n, impossible for |\mathcal{A}_n| quadratic in n, with other results in between. In particular, for well-behaved distributions with |\mathcal{A}_n| = \Theta(n^{\alpha}) in which all probabilities are \Theta(n^{-\alpha}), they get some consistency results, but the generalized likelihood ratio test (GLRT) is inconsistent for \alpha = 1.
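
As a toy version of this kind of test (my own formulation, purely for illustration; it is not the paper’s scheme), suppose you had training sequences from both sources and matched the test sequence to the closer empirical distribution:

```python
# Toy two-sample classifier (my formulation, purely for illustration).
import numpy as np

def empirical(seq, alphabet_size):
    """Empirical distribution of a sequence over {0, ..., alphabet_size - 1}."""
    return np.bincount(seq, minlength=alphabet_size) / len(seq)

def classify(z, train_p, train_q, alphabet_size):
    """Declare 'p' or 'q' by L1 distance between empirical distributions."""
    pz = empirical(z, alphabet_size)
    d_p = np.abs(pz - empirical(train_p, alphabet_size)).sum()
    d_q = np.abs(pz - empirical(train_q, alphabet_size)).sum()
    return "p" if d_p <= d_q else "q"
```

When \alpha = 1 each symbol appears only a constant number of times on average, so empirical estimates like these are very noisy, which gives some intuition for why consistency becomes delicate in that regime.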

Feature Extraction for Universal Hypothesis Testing via Rank-Constrained Optimization
Dayu Huang and Sean Meyn
This talk was of interest to me because I have been looking at hypothesis testing problems in connection with election auditing. In universal testing you know a lot about the distribution under one hypothesis, but much less about the other. The Hoeffding test is a threshold test on the KL divergence between the empirical distribution and the known hypothesis. This test is asymptotically optimal but has high variance when the data is high-dimensional. Thus for smaller sample sizes, a so-called mismatched divergence test may be better. In this paper they look at how to trade off the variance and the error exponent of the test.
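
For concreteness, here’s a minimal sketch of the Hoeffding test as described above (the threshold tau is a free parameter and the full-support assumption on the known distribution is mine; the paper’s contribution is the refinement, not this baseline):

```python
# Minimal sketch of the Hoeffding test (threshold tau is a free parameter).
import numpy as np

def hoeffding_test(sample, p, tau):
    """Reject H0 (sample drawn from p) when D(q_hat || p) > tau.

    Assumes p has full support and sample entries index into p.
    """
    q_hat = np.bincount(sample, minlength=len(p)) / len(sample)
    support = q_hat > 0  # 0 * log 0 = 0 convention
    kl = np.sum(q_hat[support] * np.log(q_hat[support] / p[support]))
    return kl > tau

# example: test 500 draws against a uniform null over 4 symbols
rng = np.random.default_rng(0)
print(hoeffding_test(rng.integers(0, 4, size=500), np.ones(4) / 4, tau=0.05))
```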

ISIT 2010 : error analysis and asynchrony

The remaining blogging from ISIT on my end will probably be delayed a bit longer since I have camera-ready deadlines for EVT and ITW, a paper with Can Aysal for the special SP gossip issue that is within epsilon of getting done (and hence keeps getting put off), and a paper for Allerton with Michele Wigger and Young-Han Kim that needs some TLC. Phew! Maybe Alex will pick up the slack… (hint, hint)

Asynchronous Capacity per Unit Cost (Venkat Chandar, Aslan Tchamkerten, David Tse)
This paper was the lone non-adversarial coding paper in the session my paper was in. Aslan talked about a model for coding in which you have a large time block [0, A] in which B bits arrive at a random time at the encoder. The bits have to be transmitted to the receiver with delay less than D. The decoder hears noise when the encoder is silent, and furthermore doesn’t know when the encoder starts transmitting. They both know A, however. They do a capacity-per-unit-cost analysis for this system and give capacity results in various cases, such as whether or not there is a 0-cost symbol, whether or not the delay is subexponential in the number of bits to be sent, and so on.

Moderate Deviation Analysis of Channel Coding: Discrete Memoryless Case (Yucel Altug, Aaron Wagner)
This was a paper that looked at a kind of intermediate behavior in summing and normalizing random variables. If you sum n iid zero-mean variables and normalize by n, you get a law of large numbers and large deviations analysis; if you normalize by \sqrt{n}, you get a central limit theorem (CLT). What if you normalize by n^{2/3}, for example? It turns out you get something like
\mathbb{P}\left( \frac{1}{n^{\beta}} \sum_{i=1}^{n} X_i \ge \epsilon \right) \le \exp\left( -n^{2\beta - 1} I_{\mathcal{N}}(\epsilon) \right)
where I_{\mathcal{N}}(\epsilon) is the rate function of the Gaussian you get out of the CLT. This is useful for error exponent analysis because it lets you get a handle on how fast you can get the error to decay if your sequence of codes has rate tending to capacity. They use these techniques to show that for a sequence of codes with gaps \epsilon_n to capacity, the error decays like \exp( -n \epsilon_n^2 / (2 \sigma^2) ), where \sigma^2 is the dispersion of the channel.
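
As a sanity check on this scaling, here’s a toy simulation of my own (the parameters and setup are mine, not the paper’s), using centered coin flips, for which \sigma^2 = 1/4 and I_{\mathcal{N}}(\epsilon) = \epsilon^2/(2\sigma^2):

```python
# Toy check of the moderate-deviations scaling (my own experiment, not the
# paper's). Steps are centered coin flips, so sigma^2 = 1/4 and the Gaussian
# rate function is I_N(eps) = eps^2 / (2 * sigma^2).
import numpy as np

rng = np.random.default_rng(0)
beta, eps, sigma2, trials = 2 / 3, 0.5, 0.25, 500_000

for n in [100, 400, 1600]:
    sums = rng.binomial(n, 0.5, size=trials) - n / 2  # centered sums
    tail = np.mean(sums / n**beta >= eps)
    bound = np.exp(-n**(2 * beta - 1) * eps**2 / (2 * sigma2))
    print(f"n={n}: empirical tail={tail:.2e}, predicted scale={bound:.2e}")
```

In my runs the empirical tail probabilities sit below the predicted \exp(-n^{2\beta - 1} I_{\mathcal{N}}(\epsilon)) and shrink at roughly that rate.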

Minimum energy to send k bits with and without feedback (Yury Polyanskiy, H. Vincent Poor, Sergio Verdú)
This was on the minimum energy needed to send k bits, but in the non-asymptotic regime (keeping philosophically with the authors’ earlier line of work), for the AWGN channel. Without feedback they show an achievability result and a converse for sending M codewords with error \epsilon and energy E; these give matching scalings up to the first three terms as E \to \infty. The first term is (E/N_0) \log e. Interestingly, with feedback, they show that the first term is 1/(1 - \epsilon) times the term without feedback, which makes sense since as the error goes to 0 the rates should coincide. They also have a surprising result showing that with energy E > k N_0 you can send 2^k messages with 0 error.
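
Just to put the leading term in familiar units (my own back-of-the-envelope, using only the first term and taking logs base 2):

\frac{E}{N_0} \log_2 e \approx 1.44 \, \frac{E}{N_0} \text{ bits}, \qquad \text{e.g. } \frac{E}{N_0} = 100 \text{ buys about } 144 \text{ bits}

which is just the reciprocal of the familiar minimum energy per bit of N_0 \ln 2 for the AWGN channel.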