I just finished up attending the 44th Annual Allerton Conference on Communication, Control, and Computation. Conveniently, the conference is held in Monticello, IL, which is a short drive away from Urbana. So it’s a free trip home for me, and another chance to flog my research. The conference was fun — I attended a number of good talks, and although the synchronization across sessions was a little uneven, I think I got a lot of ideas out of being there. What follows is a small set of notes on a subset of talks that I attended. I went to more talks than these, and some that I didn’t mention here were also quite interesting.
As always, I may have misunderstood some of the results, so don’t take my word for it necessarily…

How to Filter an “Individual Sequence with Feedback”
T. Weissman (Stanford)
This was on a problem that is sometimes called “compound sequential decision making against the well-informed antagonist.” The idea is that you have some data sequence which is corrupted by noise and you would like to denoise it. If the sequence is arbitrary, then you can take an individual-sequence approach to the problem. However, suppose now that you observe the noisy sequence causally, and an adversary can select the next data sample based on the noisy outputs of the previous data samples. This is the well-informed adversary, and the problem becomes significantly harder. The interesting result for me is that randomized strategies perform better than deterministic ones, since the adversary can’t track the output. This is of course related in some way to the problems I’m thinking about, but has a totally different flavor.

Sharp Thresholds for Sparsity Recovery in the High-Dimensional and Noisy Setting
M. Wainwright (Berkeley)
There has been lots of work on compressed sensing and sparse reconstruction in recent years. Wainwright is interested in recovering a sparsity pattern in high-dimensional data: that is, which elements are nonzero? A lot of asymptotic analysis is done relating how many observations of the noisy data you need to the dimension of the problem and the number of nonzero elements. It turns out that scaling the sparsity linearly with the dimension of the problem is bad (which may not be so surprising to some, but it’s hard for me to get an intuition for these things).
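Out of curiosity, here’s a toy numpy sketch of the support-recovery problem. It uses orthogonal matching pursuit as a stand-in decoder (the talk’s analysis was for a different, Lasso-type estimator), and all the dimensions, coefficient values, and noise levels below are made up:

```python
import numpy as np

def omp_support(A, y, k):
    """Greedy support recovery: pick the column most correlated
    with the residual, re-fit by least squares, repeat k times."""
    residual = y.copy()
    support = []
    for _ in range(k):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0          # don't pick a column twice
        support.append(int(np.argmax(correlations)))
        # least-squares fit of y on the currently selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

# Toy instance: p-dimensional signal with k nonzeros, n noisy observations.
rng = np.random.default_rng(0)
n, p, k = 100, 256, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
true_support = rng.choice(p, size=k, replace=False)
x = np.zeros(p)
x[true_support] = 3.0                      # strong nonzero entries
y = A @ x + 0.01 * rng.standard_normal(n)  # low-noise observations
print(omp_support(A, y, k), sorted(true_support))
```

The question the talk addresses is exactly how n must scale with p and k for this kind of recovery to succeed or fail.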
Delay Asymptotics with Network Source Coding
S. Bhadra and S. Shakkottai (UT Austin)
This was an interesting talk on interpreting coding across packets in networks as effectively moving the buffering from nodes in the network back to the source. In networks with heterogeneous traffic, this might be a useful way of thinking about things. Shakkottai said it was like “water-pouring over unused capacity,” which sounded like an appealing intuition. Whenever I get the proceedings I’d like to look at the details.

How to Achieve Linear Capacity Scaling without Mobility
A. Ozgur (EPFL), O. Leveque (Stanford), and D. Tse (Berkeley)
This talk was completely packed, as it presented a complete solution to the scaling-law problem for dense wireless networks like those of Gupta and Kumar. The scheme achieving linear scaling is a hierarchical MIMO scheme on an overlay grid for the network that tries to get the maximum spatial reuse. The big picture was clear and the details looked quite tricky. The major assumption needed is independent uniform phase fading on each point-to-point link in the network. There was a little confusion during the questions about uniform phase and scattering effects that went a bit over my head. The mathematical problem seems settled, although the debate on the engineering question may be open…

Coding into a Source: A Direct Inverse Rate-Distortion Theorem
M. Agarwal (MIT), A. Sahai (Berkeley), and S. Mitter (MIT)
Suppose you have to use a fixed input distribution for a channel, and the channel is guaranteed, given that input distribution, not to distort the input by too much (under a given distortion measure). How well can you do across such a channel? The capacity is actually the rate-distortion function R(D), which neatly gives an operational meaning for rate-distortion in terms of channel coding. It’s a difficult problem to wrap your head around, and nearly impossible to describe in a few sentences. It’s one of those results that is trying to get at what information really is, and to what extent the “bit” formalism of information theory is a fundamental aspect of nature.

Data Fusion Trees for Detection: Does Architecture Matter?
W.P. Tay, J. Tsitsiklis, and M. Win (MIT)
Suppose we have a huge number of sensors that all observe iid samples that could come from one of two distributions. A fusion center wants to decide which distribution governs the samples, and can get quantized versions of the samples from the sensors. This paper tries to get at how different fusion architectures (e.g. trees) perform versus a totally centralized scheme. The main result is that if the total number of leaves is asymptotically the same as the number of sensors and the tree has bounded height, then we can do as well as a completely centralized solution.
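To make the setup concrete, here’s a tiny numpy sketch of the simplest one-level architecture: one-bit quantizers at the sensors and a majority-vote rule at the fusion center, compared against the centralized benchmark that sees the raw samples. The function names and numbers are mine, not from the paper.

```python
import numpy as np

def sensor_bits(samples):
    """Each sensor sends one bit: the sign of its observation."""
    return (np.asarray(samples) > 0).astype(int)

def fusion_decision(bits):
    """Fusion center: majority vote over the received bits.
    Returns 1 for hypothesis H1 (positive mean), 0 for H0."""
    return int(np.sum(bits) > len(bits) / 2)

def centralized_decision(samples):
    """Centralized benchmark: threshold the raw sample mean directly."""
    return int(np.mean(samples) > 0)

# Toy run: a handful of samples drawn under H1 (positive mean), made up.
samples = np.array([0.5, -0.2, 1.0, 0.3, 0.8])
print(fusion_decision(sensor_bits(samples)),  # quantized architecture
      centralized_decision(samples))          # centralized benchmark
```

The paper’s question is how much (asymptotically, nothing, under their conditions) is lost in error exponent by interposing quantizers and tree stages like this.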

Randomization for robust communication in networks, or “Brother, can you spare a bit?”
A.D. Sarwate and M. Gastpar (Berkeley)
I don’t have much to say about this paper, since I wrote it. The basic idea is to justify the AVC model for interference in networks and to come up with some strategies for sensor networks and ad-hoc wireless networks to generate secret random keys that enable randomized coding on links in the network. For sensor networks, two terminals might be able to extract a shared random key from correlated sources. In wireless networks we can sacrifice some nodes to distribute keys to their neighbors in a decentralized scheme. The main point is that key distribution needs to happen only once: a new key for the next transmission can be sent with the current transmission at no asymptotic cost in rate.

Probabilistic Analysis of LP Decoding
A.D.G. Dimakis, C. Daskalakis, R. Karp, and M. Wainwright (UC Berkeley)
LP decoding is appealing since it is a way of efficiently decoding LDPC codes. The authors provide a formalism for analyzing the performance of LP decoding under random errors (rather than the adversarial error model). The heuristic is that corrupted bits carry “poison” that must be redistributed around the Tanner graph of the code using what they call a valid hyperflow. If such a flow exists, then LP decoding succeeds. They then prove that for a large class of codes that are expanders, a hyperflow will exist with high probability.
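For concreteness, here’s a small scipy sketch of what LP decoding itself looks like: the Feldman-style relaxation that minimizes a channel-likelihood cost over the polytope cut out by the standard odd-subset parity inequalities. The toy (7,4) Hamming instance and BSC cost are my own choices for illustration, not from the paper.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, y):
    """Feldman-style LP decoding over a BSC: minimize sum_i gamma_i * f_i
    over the relaxed codeword polytope defined by parity-check matrix H."""
    m, n = H.shape
    gamma = 1.0 - 2.0 * np.asarray(y, dtype=float)  # +1 if y_i=0, -1 if y_i=1
    A_ub, b_ub = [], []
    for j in range(m):
        check = np.flatnonzero(H[j])
        # Forbidden-set inequalities: for every odd-sized subset S of the
        # check's neighborhood N, require sum_S f_i - sum_{N\S} f_i <= |S| - 1.
        for size in range(1, len(check) + 1, 2):
            for S in itertools.combinations(check, size):
                row = np.zeros(n)
                row[list(check)] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(size - 1)
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

# Toy instance: (7,4) Hamming code, all-zero codeword sent, one bit flipped.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] = 1                          # channel flips bit 2
print(lp_decode(H, y))
```

Whether the relaxation is tight (an integral optimum) on a given random-error instance is exactly what the hyperflow analysis is built to answer.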

Distributed Beamforming Using 1 Bit Feedback: From Concept to Realization
R. Mudumbai (UC Santa Barbara), B. Wild (UC Berkeley), U. Madhow (UC Santa Barbara), and K. Ramchandran (UC Berkeley)
The one-bit feedback scheme for distributed beamforming is a way of aligning antennas using one bit of feedback about the received SNR. Imagine a large number of nodes communicating with a base station. Beamforming gains can be huge with a large number of antennas, so it would be good if the nodes could adjust their phases to add coherently at the receiver. The authors describe their scheme and show some experimental results. There is also an analysis of the speed of convergence, which is linear in the number of antennas.
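Here’s a quick simulation sketch of the one-bit idea as I understand it (all parameters invented): in each slot the nodes try a small random phase dither, and a single feedback bit from the receiver tells them whether to keep it or revert.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                  # number of distributed transmitters
channel = rng.uniform(0, 2 * np.pi, N)  # unknown channel phase per node
theta = np.zeros(N)                     # each node's local phase correction

def rss(theta):
    """Received signal strength: magnitude of the coherent sum of
    N unit-power signals after channel and local phase offsets."""
    return np.abs(np.sum(np.exp(1j * (theta + channel))))

initial = rss(theta)
for _ in range(2000):
    dither = rng.uniform(-0.2, 0.2, N)  # small random phase perturbations
    if rss(theta + dither) > rss(theta):
        theta = theta + dither          # feedback bit says "better": keep
    # otherwise the feedback bit says "worse": revert (do nothing)

final = rss(theta)
print(initial, final)  # final climbs toward the coherent maximum N
```

Since perturbations are kept only when the received strength improves, the strength is non-decreasing over slots; the convergence analysis in the talk quantifies how fast it climbs as N grows.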

Achieving List Decoding Capacity Using Folded Reed-Solomon Codes
V. Guruswami and A. Rudra (U. Washington)
Consider a Reed-Solomon code of blocklength n over an alphabet of size q that communicates N bits. We can turn this into a Folded Reed-Solomon code of blocklength n/m over an alphabet of size q^m that communicates the same N bits by taking m consecutive symbols and treating them as a single packet. With this folding and some clever list-decoding modifications to the original Guruswami-Sudan algorithm, the authors can get arbitrarily close to the capacity of the erasure channel. The tricky thing is that the alphabet size has to increase as well. The construction and decoding are pretty cool though, and it makes me wish we had done some of this kind of stuff in my algebra classes.
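The folding step itself is simple enough to sketch (a toy illustration with made-up symbols, ignoring the actual field arithmetic and the decoding algorithm):

```python
def fold(symbols, m):
    """Fold a Reed-Solomon codeword: group m consecutive symbols into one
    packet, turning blocklength n over an alphabet of size q into
    blocklength n/m over an alphabet of size q**m (tuples of m symbols)."""
    assert len(symbols) % m == 0, "blocklength must be divisible by m"
    return [tuple(symbols[i:i + m]) for i in range(0, len(symbols), m)]

codeword = [3, 1, 4, 1, 5, 9, 2, 6]  # length n = 8, symbols from GF(q)
print(fold(codeword, 2))             # length n/m = 4 over symbol pairs
```

The point of the grouping is that an error now has to corrupt a whole packet of m symbols to count against the decoder, which is what the list-decoding analysis exploits.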