I didn’t do such a great job of taking notes this time, but I went to a number of talks today. Maybe Max will blog too.
The day kicked off with a plenary lecture by Frank Kschischang, who talked about the framework, pioneered by him and Ralf Koetter, for viewing network coding as communicating subspaces from the encoder to the decoder. The corresponding codes are constructed over subspaces (elements of the Grassmann manifold of a vector space over a finite field), and you can start building a coding theory for network coding based on that. It was a nice and clear way to start the day.
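For reference (my gloss, not from the talk itself): in the Koetter-Kschischang framework a codeword is a subspace $U$ of an ambient space $\mathbb{F}_q^n$, and codewords are compared with the subspace distance
\[
d(U, W) = \dim(U + W) - \dim(U \cap W),
\]
so decoding amounts to finding the codeword subspace closest to the received subspace in this metric.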
High-Rate Superposition Codes with Iteratively Optimal Estimates
Andrew Barron, Sanghee Cho
Andrew Barron talked about a new decoding algorithm for a channel coding scheme he and Antony Joseph proposed two years ago. The information bits are mapped into a sparse coefficient vector $\beta$ which selects columns from a codebook matrix $X$ to get the codeword $X\beta$. This is then corrupted by noise. The contribution here was in proposing an adaptive successive decoding algorithm that uses soft decisions. This greatly improves the performance of previous decoding schemes based on thresholding. The key to the analysis is treating the decoder's problem as parameter estimation and constructing appropriate derived statistics.
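Since the scheme is easiest to see in code, here is a toy numerical sketch (my own illustration with made-up parameters and a generic soft-decision iteration, not the Barron-Cho adaptive successive decoder itself):

```python
import numpy as np

# Toy sketch of a sparse superposition code with iterative soft-decision
# decoding. This is my own illustration of the general idea; the parameters
# L, B, n, P are made up and the loop is a generic soft-decision iteration.

rng = np.random.default_rng(0)
L, B, n = 8, 16, 128             # sections, columns per section, block length
P = 4.0                          # codeword power (noise variance is 1)
N = L * B
X = rng.standard_normal((n, N))  # codebook (dictionary) matrix
scale = np.sqrt(P / L)           # nonzero value of beta in each section

# Encoder: the message picks one column per section, so beta is sparse.
message = rng.integers(B, size=L)
beta = np.zeros(N)
beta[np.arange(L) * B + message] = scale
y = X @ beta + rng.standard_normal(n)        # AWGN channel

# Decoder: iterate soft (posterior-like) section-wise estimates of beta.
beta_hat = np.zeros(N)
for _ in range(10):
    residual = y - X @ beta_hat
    stats = beta_hat + X.T @ residual / n    # approx. beta plus Gaussian noise
    tau2 = (residual @ residual) / n ** 2    # rough per-column noise variance
    for ell in range(L):
        sec = slice(ell * B, (ell + 1) * B)
        e = scale * stats[sec] / tau2        # log-weights for each column
        w = np.exp(e - e.max())
        beta_hat[sec] = scale * w / w.sum()  # soft decision for this section

decoded = beta_hat.reshape(L, B).argmax(axis=1)
print(f"{(decoded == message).sum()} of {L} sections decoded correctly")
```

The point is just the shape of the iteration: form a per-column statistic from the residual, convert it into section-wise posterior-style weights, and re-estimate $\beta$ before the next pass.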
Minimization of Entropy Functionals Revisited
Imre Csiszár, Frantisek Matúš
This talk was on the minimization of certain functionals of the form $\int \beta(x, g(x)) \, \mu(dx)$, where the integrand $\beta(x, t)$ is strictly convex in $t$ for each $x$ and measurable in $x$ for each value of $t$. It was a little technical for my pre-coffee brain, but the full version is on arXiv and will appear in Kybernetika.
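As a sanity check on the notation (my gloss, not from the talk): taking $\beta(x, t) = t \log t$ and minimizing over probability densities $g$ with respect to $\mu$ gives
\[
\int \beta(x, g(x)) \, \mu(dx) = \int g(x) \log g(x) \, \mu(dx) = D(g\mu \,\|\, \mu),
\]
so I-projection / relative entropy minimization is a special case of this family of functionals.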
The Sufficiency Principle for Decentralized Data Reduction
Ge Xu, Biao Chen
This talk (by Ge Xu) was on the notion of sufficient statistics in networked inference problems. Under certain conditions, local sufficiency can be translated into global sufficiency. I wasn't familiar enough with the related work in this area, but the structural questions around sufficiency look interesting and possibly relevant to machine learning in distributed settings.
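For my own reference (standard background, not the specific conditions from the paper): a statistic $T(X)$ is sufficient for a parameter $\theta$ when the Fisher-Neyman factorization
\[
p_\theta(x) = g_\theta(T(x)) \, h(x)
\]
holds, and the decentralized question is roughly when locally sufficient statistics $T_1(X_1), \dots, T_K(X_K)$, computed separately at each node, remain jointly sufficient for $\theta$.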
Finding the Capacity of a Quantized Binary-Input DMC
Brian Kurkoski, Hideki Yagi
Brian talked about the problem of taking a binary-input channel with $M$ outputs and quantizing the output down to only $K$ values. The goal is to find the mapping that maximizes the capacity of the resulting channel. This problem looks messy and (NP-)hard due to all the combinatorial ickiness, but if you order the outputs by the likelihood ratios $P(y \mid 0)/P(y \mid 1)$ on the real line, the optimal quantizer merges contiguous blocks of points. The algorithm to do this is dynamic programming and runs in time cubic in the number of outputs $M$. There seem to be some interesting phenomena to explore here, but I wonder how the approaches might generalize to non-binary inputs.
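To fix the idea, here is a small sketch of the dynamic program as I understood it (my own toy implementation: it fixes a uniform input and maximizes mutual information over contiguous groupings of LLR-sorted outputs, with a made-up example channel, rather than reproducing the authors' exact capacity algorithm):

```python
import numpy as np

def mutual_info_term(pj):
    """Contribution of one merged output group to I(X;Z).
    pj = [P(X=0, Z=group), P(X=1, Z=group)], uniform input assumed."""
    px = np.array([0.5, 0.5])
    pz = pj.sum()
    if pz == 0:
        return 0.0
    logs = np.log2(pj / (px * pz), where=pj > 0, out=np.zeros_like(pj))
    return (pj * logs).sum()

def quantize_dmc(P_y_given_x, K):
    """P_y_given_x: 2 x M channel matrix. Returns (best I(X;Z), group edges)."""
    M = P_y_given_x.shape[1]
    joint = 0.5 * P_y_given_x                      # joint P(x, y), uniform input
    # Sort outputs by likelihood ratio; an optimal quantizer merges
    # contiguous blocks in this order.
    order = np.argsort(P_y_given_x[0] / P_y_given_x[1])
    joint = joint[:, order]
    # g[a][b] = partial mutual information if outputs a..b-1 form one group
    g = np.zeros((M + 1, M + 1))
    for a in range(M):
        for b in range(a + 1, M + 1):
            g[a][b] = mutual_info_term(joint[:, a:b].sum(axis=1))
    # S[k][m] = best total over the first m outputs split into k groups
    S = np.full((K + 1, M + 1), -np.inf)
    back = np.zeros((K + 1, M + 1), dtype=int)
    S[0][0] = 0.0
    for k in range(1, K + 1):
        for m in range(k, M + 1):
            for j in range(k - 1, m):
                val = S[k - 1][j] + g[j][m]
                if val > S[k][m]:
                    S[k][m], back[k][m] = val, j
    # Recover the right edges of the groups (in LLR-sorted order).
    bounds, m = [], M
    for k in range(K, 0, -1):
        bounds.append(m)
        m = back[k][m]
    return S[K][M], sorted(bounds)

# Example: a noisy 8-output binary-input channel quantized to 3 outputs.
P = np.array([[0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02],
              [0.02, 0.04, 0.06, 0.08, 0.10, 0.15, 0.25, 0.30]])
print(quantize_dmc(P, K=3))
```

The recurrence is just "best split of the first $m$ outputs into $k$ contiguous groups," which is where the polynomial running time comes from.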
The day ended with a lovely tribute to Tom Cover featuring reminiscences by his colleagues and students. It was really inspiring, much like the 2008 Coverfest at Stanford. As my friend Galen put it, “I’m glad I went — I came out inspired to do better research.” Bob Gallager issued a challenge to bring a little more of Cover’s attitude to our own work, so we could make ourselves into a “distributed Tom.” It would be wonderful, he said, and I agree.