It was a half day today, so the pressure wasn’t on as much. It’s nice to have a break in the middle of the conference, because after the second day the fatigue really begins to set in. I skipped the plenary (by Brendan Frey) to sleep in, since I had seen some of that talk when he came to Berkeley. A lot of the talks I went to today were by lab mates and friends, so in a sense I wasn’t expanding my horizons as such. The formality of presenting a problem in a conference makes it easier to understand the problem, however, since the presenter has to contextualize more.

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem (Aaron B. Wagner, Saurabha Tavildar, Pramod Viswanath)
I had seen this result before, but it’s always nice to see it presented again. They solve an instance of a classic open problem. Two encoders observe correlated sources S_{1} and S_{2} and need to compress them in order to send them to a decoder. The decoder must meet MSE requirements D_{1} and D_{2} on the two sources. The first idea for solving this problem is for the encoders to quantize their observations and then use Slepian-Wolf coding on the quantized observations. The result of this paper is to show that this is in fact the best one can do. The tough part is the converse, for which they use two bounds: one that assumes a centralized encoder, and one that connects the problem to the CEO problem, for which Aaron proved a converse in his PhD thesis. It’s a neat result because of its use of reductions, I think.
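To get a feel for the quantize-then-bin architecture, here is a toy sketch of my own (not the paper's scheme, and with arbitrary step and bin-count choices): one encoder sends its full quantizer index, the other sends only a bin (index mod 8) in the spirit of Slepian-Wolf binning, and the decoder resolves the bin using the first reconstruction as side information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated Gaussian sources (illustrative parameters).
n = 10_000
s1 = rng.standard_normal(n)
s2 = s1 + 0.1 * rng.standard_normal(n)       # highly correlated with s1

step = 0.5                                   # quantizer step size
q1 = np.round(s1 / step).astype(int)         # encoder 1: sends full index
q2 = np.round(s2 / step).astype(int)         # encoder 2: quantizes, then bins

num_bins = 8                                 # encoder 2 sends only q2 mod num_bins
bins2 = q2 % num_bins

# Decoder: reconstruct s1 directly; for s2, search the announced bin for the
# index closest to q1 (the side information).
s1_hat = q1 * step
candidates = q1[:, None] + np.arange(-num_bins // 2, num_bins // 2)[None, :]
valid = (candidates % num_bins) == bins2[:, None]
dist = np.where(valid, np.abs(candidates - q1[:, None]), np.inf)
q2_hat = candidates[np.arange(n), np.argmin(dist, axis=1)]
s2_hat = q2_hat * step

print("MSE for S1:", np.mean((s1 - s1_hat) ** 2))
print("MSE for S2:", np.mean((s2 - s2_hat) ** 2))
```

Because the sources are so correlated, the true index almost always lies in the window of 8 candidates around q1, so encoder 2 pays only 3 bits per sample while achieving essentially the same quantization distortion as if it had sent the full index.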

Guessing Facets: Polytope Structure and Improved LP Decoder (Alexandros Dimakis, Martin Wainwright)
I don’t know much about coding theory, but apparently in the decoding of LDPC (low-density parity-check) codes there is a technique called “LP decoding,” which attacks the decoding problem via linear programming. LP decoding operates on something called the fundamental polytope, which is constructed from the code in an interesting way. Since codewords are binary vectors of length n, we embed the code in n-dimensional Euclidean space. For each parity check, we look at all possible binary vectors satisfying that check (obviously the codewords are among them) and take the convex hull of those vectors. We then intersect all of these hulls to get a new polytope which has all codewords as vertices, but also some new vertices called “pseudocodewords.” The problem is that LP decoding fails when it returns a pseudocodeword. The key fact is that the polytope has very few facets, so if we restrict the decoding to a facet we can find the best candidate on that facet pretty fast. By guessing enough facets, we can find the true codeword with high probability by running only a constant number of linear programs.
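Here is a small sketch of my own of plain LP decoding over the fundamental polytope (the standard Feldman-style relaxation, not the paper's facet-guessing improvement), using the (7,4) Hamming code and `scipy.optimize.linprog`. For each check with neighborhood N, the polytope is cut out by the inequalities sum_{i in V} x_i - sum_{i in N\V} x_i <= |V| - 1 over all odd-size subsets V of N.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = H.shape[1]

# Build the fundamental polytope's inequalities check by check.
A_ub, b_ub = [], []
for row in H:
    nbhd = np.flatnonzero(row)
    for r in range(1, len(nbhd) + 1, 2):          # odd subset sizes only
        for V in itertools.combinations(nbhd, r):
            a = np.zeros(n)
            a[list(nbhd)] = -1.0                  # -1 on the rest of the check
            a[list(V)] = 1.0                      # +1 on the odd subset
            A_ub.append(a)
            b_ub.append(len(V) - 1)

# BSC channel: transmit the all-zeros codeword, flip one bit.
y = np.zeros(n, dtype=int)
y[2] = 1
cost = 1 - 2 * y            # minimizing this over [0,1]^n minimizes distance to y

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n, method="highs")
x_hat = np.round(res.x).astype(int)
print("decoded codeword:", x_hat)
```

With a single bit flip the LP optimum here is integral, so the decoder recovers the transmitted codeword; when the optimum is a fractional vertex instead, that vertex is exactly one of the pseudocodewords described above.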

Computing over Multiple-Access Channels with Connections to Wireless Network Coding (Bobak Nazer, Michael Gastpar)
This is a new look at network coding under real multiple-access constraints. If you want to run network coding over a network without wired links, so that the links are multiple-access channels, current strategies are wasteful. In particular, if X and Y both need to be sent to a node, which will then send some function f(X,Y) on its outgoing link, the encoders can do some clever coding to ensure that f(X,Y) can be decoded without requiring X and Y themselves to be decoded. Bobak has a new coding technique for these kinds of channels, and he gave some examples.
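A crude uncoded sketch of the underlying idea (my own illustration, not Bobak's actual computation codes): if both senders map their messages onto the same integer lattice, a Gaussian multiple-access channel adds the signals for free, and the receiver can decode the modular sum f(x, y) = (x + y) mod q directly without ever recovering x or y individually.

```python
import numpy as np

rng = np.random.default_rng(1)

q = 8                      # alphabet size (arbitrary choice for the demo)
n = 10_000
x = rng.integers(0, q, n)  # message stream at sender 1
y = rng.integers(0, q, n)  # message stream at sender 2

noise = 0.1 * rng.standard_normal(n)
channel_out = x + y + noise            # the MAC performs the addition itself

# Receiver: round to the nearest lattice point, then reduce mod q.
f_hat = np.round(channel_out).astype(int) % q
f_true = (x + y) % q
print("fraction of sums decoded correctly:", np.mean(f_hat == f_true))
```

The point of the contrast: a separation-based scheme would spend rate decoding both x and y at the relay, while here the channel's additive structure is matched to the function being computed.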

Optimal Distortion-Power Tradeoffs in Gaussian Sensor Networks (Nan Liu, Sennur Ulukus)
Liu and Ulukus consider a network in which the sensors regularly sample an underlying spatially correlated stochastic process and need to communicate their observations to a central processor, which wishes to reconstruct the process. Each sensor has a radio and so can hear the transmissions of the other sensors. They prove some scaling-law bounds on the tradeoff between power and distortion. One bound comes from separating source and channel coding, and the other allows for centralized communication. Since there’s no noise in the sensor observations, it’s hard to evaluate the accuracy of the bounds, but the interesting point is that the two solutions agree in their scaling laws, so the distributedness doesn’t seem to affect the scaling law in these scenarios. I found the mathematical assumptions needed to prove this result a little confusing, since I didn’t have any intuition for them, but I think I’ll take a look at the paper to see the details.

Analog Matching of Colored Sources to Colored Channels (Yuval Kochman, Ram Zamir)
For colored Gaussian sources, applying a whitening filter followed by white Gaussian vector quantization is not optimal (somewhat unsurprising, but new to me). Instead they propose an architecture which puts the quantization in the same loop as the filtering. For a colored-noise channel the system is a little more delicate. This has something to do with source-channel matching, but I need to understand the architecture a bit better first, I think.
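As a rough analogy for the quantizer-in-the-loop idea, here is a DPCM-style sketch of my own (not Kochman and Zamir's analog matching scheme, and with made-up parameters): for an AR(1) source, the predictor runs on the quantized past rather than the clean past, so encoder and decoder track the same state and the reconstruction error is just the quantization error of the prediction residual.

```python
import numpy as np

rng = np.random.default_rng(2)

a = 0.9                                   # AR(1) coefficient: colored source
n = 50_000
w = rng.standard_normal(n)
s = np.empty(n)
s[0] = w[0]
for t in range(1, n):
    s[t] = a * s[t - 1] + w[t]            # colored Gaussian source

step = 0.25
quantize = lambda v: step * np.round(v / step)

# Closed loop: predict from the QUANTIZED past, quantize the prediction error.
s_hat = np.empty(n)
s_hat[0] = quantize(s[0])
for t in range(1, n):
    pred = a * s_hat[t - 1]               # predictor sees the decoder's state
    s_hat[t] = pred + quantize(s[t] - pred)

mse = np.mean((s - s_hat) ** 2)
print("closed-loop MSE:", mse)
```

Note the contrast with the open-loop alternative: if the predictor used the clean samples s[t-1] instead, the decoder could not reproduce its predictions and the quantization errors would accumulate through the filter.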