ITA 2013 : part the third

I think this is the end of my ITA blogging! But there were some issues that came up during the conference that may be of interest to some of the readers of this blog (although from anecdotal reports, there are many people who read but never comment, so I’m not sure what to do to encourage more discussions).

Networking Data Centers Randomly
Ankit Singla, Chi-Yao Hong, Brighten Godfrey, Alexandra Kolla, Lucian Popa
Brighten talked about using random graphs as a network topology for the interconnections within a data center. Intra-center traffic is exploding and the current hierarchical back-end topology is not very efficient. They propose wiring servers together using an expander graph and show that the resulting network has better throughput and is easier to expand incrementally. There are, of course, some physical issues (what about the spaghetti of cables?) but this approach seems quite promising.
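To get a feel for why an expander-like interconnect helps, here is a small sketch (my own illustration, not their construction) comparing a random regular graph, which is an expander with high probability, against a structured ring lattice of the same size and degree; the node count and port count are made-up numbers.

```python
import networkx as nx

# Illustrative comparison (parameters are made up, not from the paper):
# a random regular graph vs. a structured ring lattice with the same
# number of nodes and the same degree.
n, d = 64, 4  # 64 nodes, 4 network ports each

random_topo = nx.random_regular_graph(d, n, seed=0)
ring_topo = nx.watts_strogatz_graph(n, d, p=0.0, seed=0)  # p=0 keeps the plain ring lattice

for name, g in [("random regular", random_topo), ("ring lattice", ring_topo)]:
    if nx.is_connected(g):
        print(f"{name:15s} avg path length = "
              f"{nx.average_shortest_path_length(g):.2f}, diameter = {nx.diameter(g)}")
```

The random graph should come out with a noticeably smaller average path length and diameter, which is the intuition behind the throughput gains.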

Universal Outlier Detection
Sirin Nitinawarat, Yun Li, Venugopal Veeravalli
The “universal outlier detection” problem can be phrased as follows. Suppose we have two distributions, \mu and \pi on a discrete set \mathcal{Y}. We can observe random M-dimensional vectors Y where one coordinate has the outlier distribution \mu and the other coordinates are independently drawn according to \pi. We want to detect which coordinate is the outlier from repeated i.i.d. draws of the vector Y. This is a sort of M-ary hypothesis test, and they look at the exponents achievable in this detection task without any knowledge of \mu and \pi.
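A plausible plug-in flavor of such a universal test (my own sketch, not necessarily the rule analyzed in the paper) is to compare each coordinate's empirical distribution against the pooled empirical distribution of the remaining coordinates and declare the most divergent coordinate the outlier. The toy numbers (5 coordinates, alphabet of size 3, 500 samples) are assumptions for illustration.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two empirical pmfs on the same alphabet."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def detect_outlier(samples, alphabet_size):
    """samples: (n, M) array; column j holds n i.i.d. draws of coordinate j.
    Returns the index of the suspected outlier coordinate."""
    n, M = samples.shape
    emp = np.array([np.bincount(samples[:, j], minlength=alphabet_size) / n
                    for j in range(M)])
    scores = []
    for j in range(M):
        others = np.delete(emp, j, axis=0).mean(axis=0)  # pooled pmf of the rest
        scores.append(kl(emp[j], others))
    return int(np.argmax(scores))

# Toy example: coordinate 2 is the outlier (hypothetical distributions).
rng = np.random.default_rng(0)
pi, mu = [0.5, 0.3, 0.2], [0.1, 0.2, 0.7]
cols = [rng.choice(3, size=500, p=mu if j == 2 else pi) for j in range(5)]
print(detect_outlier(np.stack(cols, axis=1), alphabet_size=3))  # likely prints 2
```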

On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
Angelia Nedich
In mirror descent and other stochastic approximation-based optimization algorithms, taking the average of the iterates (or just the tail of the sequence) can sometimes lead to better estimates because the randomness gets averaged out. The goal in this work was to design algorithms with more favorable averaged estimates; to do this, one should really modify the algorithm itself and change the weight sequence used in the averaging. An open question is whether the strong convergence rates they obtain are only theoretical or also show up in implementations.
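For a concrete picture, here is a sketch of the Euclidean special case (plain stochastic subgradient descent rather than full mirror descent) with a weighted running average of the iterates. The step sizes, weights, and toy objective are my own illustrative choices, not the ones analyzed in the paper.

```python
import numpy as np

def sgd_weighted_average(subgrad_oracle, x0, steps, weights):
    """Stochastic subgradient descent; returns the weighted average of the
    iterates, x_bar = sum_t w_t x_t / sum_t w_t, computed recursively."""
    x = np.array(x0, dtype=float)
    x_bar, w_sum = np.zeros_like(x), 0.0
    for step, w in zip(steps, weights):
        g = subgrad_oracle(x)                 # noisy subgradient at the current point
        x = x - step * g                      # basic subgradient update
        w_sum += w
        x_bar += (w / w_sum) * (x - x_bar)    # running weighted average of iterates
    return x_bar

# Toy example: minimize E|x - 1| from noisy subgradients (assumed setup).
rng = np.random.default_rng(1)
oracle = lambda x: np.sign(x - 1.0) + 0.5 * rng.standard_normal()
T = 2000
steps = 0.5 / np.sqrt(np.arange(1, T + 1))
weights = np.arange(1, T + 1, dtype=float)    # growing weights emphasize later iterates
print(sgd_weighted_average(oracle, x0=[5.0], steps=steps, weights=weights))  # near 1.0
```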

Alternating Minimization for Low-rank Recovery – Provable Guarantees
Sujay Sanghavi
Sujay talked about replacing a minimization like \min f(X) where X is a low-rank matrix by \min f(UV^T) where U and V are the factors of X. The overall problem is not convex but we can do alternating minimization over U and V. This works great in practice but it’s not clear why. They prove the first known guarantees on the convergence of this procedure. The idea is to show that for matrix completion (for example), alternating minimization is almost equivalent to a perturbed version of the power method.
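Here is a sketch of the alternating least-squares flavor of this idea for matrix completion (my own illustrative toy, not their exact algorithm or the setting of their guarantees): fix V and solve a small least-squares problem for each row of U over the observed entries, then swap.

```python
import numpy as np

def altmin_complete(M_obs, mask, rank, iters=50, reg=1e-6):
    """Alternating least squares for matrix completion (illustrative sketch).
    M_obs: observed matrix (zeros where unobserved); mask: boolean observed-entry matrix."""
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Fix V, solve a small regularized least-squares problem for each row of U.
        for i in range(m):
            A = V[mask[i]]
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ M_obs[i, mask[i]])
        # Fix U, solve for each row of V.
        for j in range(n):
            A = U[mask[:, j]]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ M_obs[mask[:, j], j])
    return U @ V.T

# Toy example: recover a rank-2 matrix from roughly half of its entries.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(X.shape) < 0.5
X_hat = altmin_complete(X * mask, mask, rank=2)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))  # relative error should be small
```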

Self-concordant scale estimation for manifold learning
Dominique Perrault-Joncas, Marina Meilă
Marina Meilă gave a talk about manifold learning, or the learning of low-dimensional (nonlinear) structures in higher-dimensional space. Many methods build some sort of mesh on the given data points and use the graph Laplacian. The underlying differential-geometric view is that there is a Riemannian metric g which assigns an inner product (a positive-definite bilinear form) to the tangent space at each point of the manifold. The method proposed here does some local analysis by projecting the data onto a tangent plane at each point in such a way that the distortion from the projection is not large. It made a lot more sense in the talk than I can do justice to here, but I recommend perusing the paper if it sounds interesting.
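As a rough illustration of the tangent-plane idea (my own sketch via local PCA, not the authors' estimator), one can project a small neighborhood of each point onto its leading principal directions and use the residual as a proxy for the distortion of the projection. The neighborhood size, intrinsic dimension, and toy data below are assumed for illustration.

```python
import numpy as np

def local_tangent_projection(X, idx, k=10, dim=2):
    """Project the k nearest neighbors of X[idx] onto an estimated tangent plane
    via local PCA (illustrative of the general idea, not the paper's method)."""
    dists = np.linalg.norm(X - X[idx], axis=1)
    nbrs = np.argsort(dists)[:k + 1]            # the point itself plus its k neighbors
    local = X[nbrs] - X[nbrs].mean(axis=0)      # center the neighborhood
    # Leading principal directions of the neighborhood span the estimated tangent plane.
    _, _, Vt = np.linalg.svd(local, full_matrices=False)
    coords = local @ Vt[:dim].T                 # tangent-plane coordinates
    residual = np.linalg.norm(local - coords @ Vt[:dim]) / np.linalg.norm(local)
    return coords, residual                     # residual ~ distortion of the projection

# Toy example: a noisy curve embedded in 3-D (made-up data).
t = np.linspace(0, 3 * np.pi, 400)
X = np.stack([t * np.cos(t), t * np.sin(t), 0.1 * np.sin(5 * t)], axis=1)
coords, residual = local_tangent_projection(X, idx=200, k=15, dim=2)
print(coords.shape, round(residual, 3))         # small residual = nearly flat neighborhood
```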
