For some reason I never noticed that LIDS has a blog, with Kush Varshney doing the writing. It’s more interesting than mine, so go take a look.

# Daily Archives: October 3, 2007

# Allerton 2007 : Networks and Algorithms

**Constrained Consensus and Alternating Projection Methods**

*(Asu Ozdaglar)*

The problem here is something like the minimization of a sum of local utility functions, where the optimization variables are constrained. Asu proposed an alternating projection algorithm that takes a step to minimize the objective and then projects back onto the feasible set. In order to prove the convergence rates, she separates the linear update error (which can be analyzed using standard Markov chain techniques) from the projection error, which requires some technical conditions on the constraints. The key is that the latter error is the only one that incorporates the constraints. The bound is obtained by looking at a stopped process and letting the stopping time go to infinity.
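For intuition, here is a minimal sketch of the projected-consensus idea in the simplest case of a common box constraint. The weight matrix, constraint set, and starting values are my own illustrative choices, not from the talk:

```python
import numpy as np

# Three nodes average their scalar values with a doubly stochastic weight
# matrix W, and each iterate is projected back onto a shared feasible
# interval [0, 1]. (Toy instance; not the paper's setup.)
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

def project(v, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^3."""
    return np.clip(v, lo, hi)

x = np.array([0.2, 0.9, 1.4])    # node 3 starts outside the feasible set
for _ in range(200):
    x = project(W @ x)           # linear (consensus) step, then projection

# All nodes agree on a single feasible value.
assert np.allclose(x, x.mean()) and 0.0 <= x[0] <= 1.0
```

Once the iterates enter the feasible set, the projection is the identity and the recursion reduces to the standard linear consensus update, which is where the two error terms in the analysis come from.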

**Network Coding Using Non-Commutative Algebras**

*(Shweta Agarwal and Sriram Vishwanath)*

For multiple unicast sessions, finite field operations and linear coding may not be sufficient to achieve capacity for network codes. The proposal in this paper was to code using modules over a noncommutative ring, but still use Gaussian elimination for decoding. I had to duck out towards the end, so I wasn’t sure if there were explicit code constructions provided.

**Rates of Convergence for Distributed Average Consensus with Probabilistic Quantization**

*(Tuncer Can Aysal, Mark Coates, and Michael Rabbat)*

Most gossip and consensus algorithms assume we can do computation over the reals, which is not really feasible. This work tackles the effect of quantization on the convergence rate. The probabilistic quantization rule they use is this: if the true value is distance *d* away from an endpoint of a quantization bin of length *D*, quantize to the other endpoint with probability *d/D*, so that the quantized value is unbiased. The resulting scheme can be analyzed, and all nodes will converge to the same point (an absorbing state of a Markov chain). In expectation, this point will be the true average, although it will over- or under-estimate the average in any given realization. One tricky part of the analysis is the very long waiting time for the average to settle on one value after all nodes have converged to one of the quantization points neighboring the true mean.
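The unbiasedness of the rule is easy to check directly. A small sketch (function name, grid spacing, and the Monte Carlo check are mine):

```python
import random

def prob_quantize(x, delta=1.0):
    """Quantize x to a grid of spacing delta so that E[output] = x:
    jump to the upper endpoint of the bin with probability equal to the
    normalized distance already covered from the lower endpoint."""
    lo = delta * (x // delta)   # lower endpoint of the bin containing x
    d = x - lo                  # distance from the lower endpoint
    return lo + delta if random.random() < d / delta else lo

# Unbiasedness check: quantizing 0.3 many times should average to ~0.3.
random.seed(0)
mean = sum(prob_quantize(0.3) for _ in range(100_000)) / 100_000
assert abs(mean - 0.3) < 0.01
```

Because each quantization is unbiased, the quantization noise averages out across the network in expectation, which is what lets the consensus value equal the true average in expectation even though any single realization lands on a grid point.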

**New Market Models and Algorithms**

*(Vijay Vazirani)*

This talk was on pricing algorithms for links in networks. He started out by talking about the Fisher market, a multicommodity market model in which every user has a fixed budget, goods are divisible, and each user has a different utility for the different goods. There is an algorithm for computing the allocations efficiently in these markets. The Kelly approach to analyzing TCP uses pricing on edges and a combinatorial auction to generate efficient flows, and it is formally similar to Fisher’s model in the linear case. Vazirani presented an ascending-price auction for links in which the sinks are buyers in a multicast scenario. The resulting allocations are fair and efficient. By showing a connection to the Eisenberg-Gale (1959) optimization problem, he proposed a new class of Eisenberg-Gale markets in which a strongly polynomial time algorithm will exist for cases where there is a max-min result to exploit (like multicast).

**Fast Gossip through Markov Chain Lifting**

*(Kyomin Jung, Devavrat Shah, and Jinwoo Shin)*

The standard analysis of gossip algorithms uses results on the mixing time of reversible Markov chains. However, results of Chen et al. and Diaconis et al. show that nonreversible chains may mix much more quickly. In particular, a “lifting” construction can be used to embed expander graphs in the original graph (I think?). The lifting construction itself is quite cool — I read the paper on it on the plane on the way back and may write more about it. The trick is really to get a better bound on the conductance of the chain, which then bounds the mixing times.

**Stochastic Approximation Analysis of Distributed Estimation Algorithms**

*(Ram Rajagopal and Martin Wainwright)*

This paper was on computing *nonlinear* functions of sensor observations. In particular, suppose sensors have access to iid samples of a random variable *X* with unknown distribution. They would like to estimate the *a*-quantile, that is, the *b* such that *P(X < b) = a*. The nodes can communicate with each other over noisy or rate-constrained links. The performance measure that Ram uses is the ratio of the decentralized MSE to the centralized MSE. The algorithm works by local flooding and updates, and the estimator is strongly consistent, with asymptotically normal error. What is more interesting is that the error is a function of the Laplacian of the communication graph.
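The underlying stochastic-approximation idea can be seen in a centralized, single-node sketch: a Robbins-Monro update drives the estimate toward the point where the empirical exceedance probability matches *a*. The step-size schedule and this simplification to one node are mine; the paper's algorithm is distributed, with flooding and noisy links:

```python
import random

random.seed(1)
a = 0.9      # target quantile level
b = 0.0      # running quantile estimate
for t in range(1, 200_001):
    x = random.gauss(0.0, 1.0)        # iid sample from the (unknown) distribution
    step = 1.0 / t ** 0.7             # diminishing step size (illustrative choice)
    # Move up when the sample exceeds b (too few samples below), down otherwise.
    b += step * (a - (1.0 if x <= b else 0.0))

# For a standard normal, the 0.9-quantile is about 1.2816.
assert abs(b - 1.2816) < 0.15
```

The drift of this update is *a − P(X ≤ b)*, which vanishes exactly at the *a*-quantile, so standard stochastic approximation arguments give consistency and asymptotic normality.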

**Gossip Along the Way: Order-Optimal Consensus through Randomized Path Averaging**

*(Florence Bénézit, Alexandros Dimakis, Patrick Thiran, and Martin Vetterli)*

This algorithm is similar to the Geographic Gossip algorithm that Alex and I worked on. In that work a node would wake up, pick a target at random, and route a packet to the closest node to the target to perform one pairwise exchange in a standard gossip algorithm. The corresponding Markov chain went from having a mixing time of *O(n)* to *O(1)* at the expense of *O(\sqrt{n})* extra transmissions. In this work, they change the algorithm to aggregate and average all values along the routing path from the source to destination, so instead of averaging 2 nodes in each round, they average *O(\sqrt{n})* nodes in each round. This shaves off the extra *O(\sqrt{n})* factor from routing, and so they get order-optimal performance compared to the centralized algorithm (perhaps modulo all the log terms that I omitted in this brief description). The key to the analysis is to eschew computing the actual chain and instead look at “typical” paths, constructing flows via the comparison method (originally in Diaconis/Stroock, I think) to bound the mixing time.
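The aggregate-along-the-path step is simple to picture on a line graph, where the route between two nodes is just the segment between them. A toy sketch (the graph, round count, and uniform endpoint choice are my simplifications; greedy geographic routing and the *O(\sqrt{n})* path lengths are abstracted away):

```python
import random

random.seed(2)
n = 16
x = [float(k) for k in range(n)]     # initial node values; true average is 7.5
true_avg = sum(x) / n

for _ in range(2000):
    i, j = random.randrange(n), random.randrange(n)
    lo, hi = min(i, j), max(i, j)
    seg = range(lo, hi + 1)                    # the routing path on the line
    avg = sum(x[k] for k in seg) / len(seg)    # aggregate values along the path
    for k in seg:
        x[k] = avg                             # every path node takes the average

assert max(x) - min(x) < 1e-2              # consensus reached
assert abs(sum(x) / n - true_avg) < 1e-9   # the global average is preserved
```

Each round conserves the global sum while averaging many nodes at once, which is why mixing far fewer values per transmission than pairwise gossip still converges to the true average, only faster.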