After a long stint of proposal writing, I figured I should catch up on some old languishing posts. So here are a few quick notes on the remainder of ICML 2014.

  • Fast Stochastic Alternating Direction Method of Multipliers (Wenliang Zhong; James Kwok): Most of the talks in the Optimization II session were on ADMM or stochastic optimization, or both. This one was in the last category. ADMM can have rather high-complexity update rules, especially on large, complex problems, so the goal is to lower the complexity of the update step by making it stochastic. The hard part seems to be controlling the step size.
  • An Asynchronous Parallel Stochastic Coordinate Descent Algorithm (Ji Liu; Steve Wright; Christopher Re; Victor Bittorf; Srikrishna Sridhar): The full version of this paper is on arXiv. The authors look at a multi-core lock-free stochastic coordinate descent method and characterize how many cores you need to get linear speedups — this depends on the convexity properties of the objective function. (A toy illustration of the lock-free update idea appears after this list.)
  • Communication-Efficient Distributed Optimization using an Approximate Newton-type Method (Ohad Shamir; Nati Srebro; Tong Zhang): This paper looked at one-shot “average at the end” schemes where you divide the data onto multiple machines, have each one train a linear predictor (for example) using stochastic optimization, and then average the results. This is just averaging i.i.d. copies of some complicated random variable (the output of an optimization), so you would expect some variance reduction. This method has been studied by a few people in the last few years. While you do get variance reduction, the bias can still be bad. On the other extreme, communicating at every iteration essentially transmits the entire data set (or worse) over the network. They propose a new method for limiting communication by computing an approximate Newton step without approximating the full Hessian. It works pretty well. (A sketch of the simple averaging baseline appears after this list.)
  • Lower Bounds for the Gibbs Sampler over Mixtures of Gaussians (Christopher Tosh; Sanjoy Dasgupta): This was a great talk about how MCMC can be really slow to converge. The model is a mixture of Gaussians with random weights (Dirichlet) and means (Gaussian, I think). Since the posterior on the parameters is hard to compute, you might want to do Gibbs sampling. They use conductance methods to get a lower bound on the mixing time of the chain. The tricky part is that the cluster labels are permutation invariant — I don’t care if you label clusters (1,2) versus (2,1), so they need to construct some equivalence classes. They also have further results on what happens when the number of clusters is misspecified. I really liked this talk because MCMC always seems like black magic to me (and I even used it in a paper!). (A toy Gibbs sampler for this kind of model is sketched after the list.)
  • (Near) Dimension Independent Risk Bounds for Differentially Private Learning (Prateek Jain; Abhradeep Guha Thakurta): Abhradeep presented a really nice paper with a tighter analysis of output and objective perturbation methods for differentially private ERM, along with a new algorithm for risk minimization on the simplex. Abhradeep really only talked about the first part. If you focus on scalar regret, they show that essentially the error comes from taking the inner product of a noise vector with a data vector. If the noise is Gaussian, then the noise level is dimension-independent for bounded data. This shows that (\epsilon,\delta)-differential privacy yields better sample complexity results than \epsilon-differential privacy. This feels similar in flavor to a recent preprint on arXiv by Beimel, Nissim, and Stemmer. (A sketch of Gaussian output perturbation appears after this list.)
  • Near-Optimally Teaching the Crowd to Classify (Adish Singla; Ilija Bogunovic; Gabor Bartok; Amin Karbasi; Andreas Krause): This was one of those talks where I would have to go back to look at the paper a bit more. The idea is that you want to train annotators to do better in a crowd system like Mechanical Turk — which examples should you give them to improve their performance? They model the learners as doing some multiplicative weights update. Under that model, the teacher has to optimize to pick a batch of examples to give to the learner. This is hard, so they use a submodular surrogate function and optimize over that.
  • Discrete Chebyshev Classifiers (Elad Eban; Elad Mezuman; Amir Globerson): This was an award-winner. The setup is that you have categorical (not numerical) features on n variables and you want to do some classification. They consider taking pairwise inputs and compute for each tuple (x_i, x_j, y) a marginal \mu_{ij}(x_i, x_j, y). If you want to create a rule f: \mathcal{X} \to \mathcal{Y} for classification, one approach is to pick the rule with the best worst-case performance over all joint distributions on all the variables that agree with the empirical marginals. This optimization looks hard because of the exponential number of variables, but they in fact show via convex duality and LP relaxations that it can be solved efficiently. To which I say: wow! More details are in the paper, but the proofs seem to be waiting for a journal version.
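
Since I keep having to re-explain the lock-free idea to myself, here is a toy sketch of it: several Python threads repeatedly pick a coordinate and update a shared iterate without any locking, on a least-squares objective. The objective, step sizes, and thread count are my own illustrative choices (and Python's GIL means this is not truly parallel); it is not the algorithm or analysis from the paper.

```python
import numpy as np
import threading

rng = np.random.default_rng(0)
n, d = 2000, 50
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

x = np.zeros(d)                    # shared iterate, read and written without locks
L = (A ** 2).sum(axis=0)           # per-coordinate Lipschitz constants for 0.5*||Ax - b||^2

def worker(num_updates, seed):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_updates):
        j = local_rng.integers(d)
        grad_j = A[:, j] @ (A @ x - b)   # may read x while another thread is updating it
        x[j] -= grad_j / L[j]            # coordinate step with no synchronization

threads = [threading.Thread(target=worker, args=(5000, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("residual norm:", np.linalg.norm(A @ x - b))
```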
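
For reference, here is a minimal sketch of the one-shot “average at the end” baseline that the Shamir et al. paper improves on: split the data across machines, fit a regularized linear predictor on each shard, and average the results once. The data, ridge solver, and regularization are hypothetical placeholders, not the paper's approximate Newton method.

```python
import numpy as np

def fit_ridge(X, y, lam=0.1):
    """Closed-form ridge regression on one machine's shard."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n, d, num_machines = 10000, 20, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Split the data across machines, fit locally, then average once at the end.
shards = np.array_split(np.arange(n), num_machines)
local_solutions = [fit_ridge(X[idx], y[idx]) for idx in shards]
w_avg = np.mean(local_solutions, axis=0)

print("error of averaged solution:", np.linalg.norm(w_avg - w_true))
```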
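
To make the Tosh–Dasgupta model concrete, here is a toy Gibbs sampler for a one-dimensional mixture of Gaussians with Dirichlet weights and Gaussian means (known component variance). All the priors and hyperparameters are my own illustrative assumptions; the paper is about lower-bounding the mixing time of samplers like this, not about this particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, sigma2 = 3, 1.0                   # number of components, known component variance
alpha, mu0, tau2 = 1.0, 0.0, 25.0    # Dirichlet and Gaussian-mean hyperparameters

# toy data from a true 3-component mixture
x = np.concatenate([rng.normal(m, 1.0, 100) for m in (-5.0, 0.0, 5.0)])
n = len(x)

# initialize labels, means, and weights
z = rng.integers(K, size=n)
mu = rng.normal(mu0, np.sqrt(tau2), K)
pi = np.full(K, 1.0 / K)

for it in range(500):
    # 1) sample labels given weights and means
    logp = np.log(pi)[None, :] - 0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])

    # 2) sample weights given label counts (Dirichlet posterior)
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(alpha + counts)

    # 3) sample means given labels (conjugate Gaussian posterior)
    for k in range(K):
        prec = 1.0 / tau2 + counts[k] / sigma2
        mean = (mu0 / tau2 + x[z == k].sum() / sigma2) / prec
        mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))

print("posterior means (up to label permutation):", np.sort(mu))
```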
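
And a hedged sketch of output perturbation with the Gaussian mechanism, which is the kind of (\epsilon,\delta) noise addition the dimension-independence discussion is about. The sensitivity bound assumes an L2-regularized, Lipschitz loss with norm-bounded data (the usual 2/(n\lambda) bound), and the constants and data are illustrative; this is not the paper's algorithm or analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def private_logreg(X, y, eps, delta, lam=1.0, rng=None):
    """Output perturbation: fit a regularized model, then add Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    # non-private L2-regularized fit; sklearn's C corresponds to 1/(n*lam)
    clf = LogisticRegression(C=1.0 / (n * lam)).fit(X, y)
    w = clf.coef_.ravel()
    # assumed L2 sensitivity for a 1-Lipschitz loss and ||x_i|| <= 1, then the
    # standard Gaussian-mechanism noise level for (eps, delta)-DP
    sensitivity = 2.0 / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(scale=sigma, size=d)

# hypothetical usage on synthetic, norm-bounded data
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = (X @ rng.normal(size=10) > 0).astype(int)
print(private_logreg(X, y, eps=1.0, delta=1e-5, rng=rng))
```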

I still owe a post from ICML, and I am supposedly writing a proposal now, but some blogging will happen soon (probably as a procrastination technique).

This is my first time at ICML, and every paper here has a talk and a poster. It’s a lot of work to prepare, but one nice benefit is that because my poster had to be done before I left, the talk was also pretty much done at the same time, modulo minor tweaks. Having to be ready early means less last-minute preparation and lower stress at the conference overall. Another plus is that some talks are probably better as posters and some posters are probably better as talks, so the two modes of presentation give some diversity to the delivery process. Some people also prefer talks to posters or vice-versa, so that’s good for them as well. Finally, the conference has 6 parallel tracks, so knowing that there’s a poster takes some of the stress out of deciding which session to attend — you can always catch the poster if you missed the talk.

The major minus is time. Sessions run from 8:30 AM to 6 PM and then posters run from 7 to 11 PM — it’s overwhelming! You can easily spend the entire conference at talks and then at posters, resulting in brain overload. This also leaves less time for chatting and catching up with colleagues over dinner, starting up new research ideas or continuing ongoing projects in person, and the informal communication that happens at conferences. People do make time for that, but the format is less conducive to it, or so it appeared to me. I ended up taking a bit of time off during the sessions to take a walk around the Olympic park and have a chat, and I saw others leaving to do some sightseeing, so perhaps I am adhering to the schedule too much.

It’s interesting how different the modes of conference/social research communication are across research disciplines. I’ve yet to go to ICASSP or ICC, and while I have been to a medical informatics conference once, I haven’t gone to a Big Science conference or the joint meetings for mathematics or statistics. I imagine the whole purpose and format of those is completely different, and it makes me wonder if the particular formats of machine learning conferences are intentional: since there is rarely an extended/journal version of the paper, the conference is the only opportunity for attendees to really buttonhole the author and ask questions about details that are missing from the paper. Perhaps maximizing author exposure is a means to an end.

I was a somewhat inconsistent note-taker here. Because a lot of the talks I attended were sufficiently out-of-area for me that I didn’t get the context for the work, I often found myself jotting a few “look at this later” pointers to myself rather than actual ideas from the talk.

First, the plenaries: Eric Horvitz, Michael Kearns, and Michael Jordan. Horvitz talked about how we’ve made a lot of progress in machine learning but there’s more work to be done in bringing humans back into the loop. Examples include developing semantics for what features mean, how to visualize the results, adding humans into the loop (e.g. active learning or interactive settings), crowdsourcing, and building tools that are sensitive to human cognitive limitations, like detecting and informing people of “surprising events,” which involves knowing what surprising means. He also announced a new data set, COCO, for “common objects in context” (not Cocoa Puffs), which has around 300k-400k images and lots of annotations. The goal was to build a library of objects that a 4-year-old can recognize. Can a computer?

I honestly was a little too zonked/jetlagged to understand Michael Kearns’ talk, which was on challenges in algorithmic trading. He was focused on problems that brokers face, rather than the folks who are holding the risk. Michael Jordan gave a variant on a talk I’ve seen him give in the last few plenary/big talks I’ve seen: computation, statistics, and big data. The three examples he talked about were local differential privacy, bounds for distributed estimation, and the bag of little bootstraps.

As far as the research talks go, here are a few from the first day:

  • Robust Principal Component Analysis with Complex Noise (Qian Zhao; Deyu Meng; Zongben Xu; Wangmeng Zuo; Lei Zhang): This paper interpreted the Robust PCA problem (given Y = L + E where L is low-rank and E is sparse, recover L) in terms of MAP inference. The solution generally looks like nuclear-norm plus L_1 regularization, which they claim implies a kind of Laplace-like model for the noise. They build a generative model and then change the distributions around to get different noise models. (The standard convex formulation is sketched after this list.)
  • Discriminative Features via Generalized Eigenvectors (Nikos Karampatziakis; Paul Mineiro): This was on how to learn features that are discriminative in a multiclass setting while still being somewhat efficient. The main idea was to look at correlations in the existing features via the tensor x \otimes x \otimes y, where x are the features and y are the labels, and to then find generalized eigenvalues and eigenvectors by looking for vectors v that maximize (for a given pair (i,j)) the ratio \frac{ \mathbb{E}[ (v^{\top} x)^2 | y = i] }{ \mathbb{E}[ (v^{\top} x)^2 | y = j] }. This nonlinearity is important for reasons which I wasn’t entirely sure about. (A sketch of this generalized eigenproblem appears after the list.)
  • Randomized Nonlinear Component Analysis (David Lopez-Paz; Suvrit Sra; Alex Smola; Zoubin Ghahramani; Bernhard Schoelkopf): I really enjoyed this talk — basically the idea is that kernel versions of PCA and CCA have annoyingly large running times. So what they do here is linearize the kernel using sampling and then do some linear component analysis on the resulting features. The key tool is to use Matrix Bernstein inequalities to bound the quality of the kernel approximation. (A random-features sketch appears after the list.)
  • Memory and Computation Efficient PCA via Very Sparse Random Projections (Farhad Pourkamali Anaraki; Shannon Hughes): This talk was on efficient approximations to PCA for large data sets, but not in a streaming setting. The idea was, as I recall, that you have big data sets spread across different sites. Each site takes a very sparse random projection of its data (e.g. via a random signed Bernoulli matrix) and then these get aggregated via an estimator. They show that the estimator is unbiased and that the variance depends on the kurtosis of the distribution of elements in the projection matrix. One thing that was interesting to me is that the covariance estimate has a bias term towards the canonical basis, which is one of those facts that makes sense after you hear it.
  • Concept Drift Detection Through Resampling (Maayan Harel; Shie Mannor; Ran El-Yaniv; Koby Crammer): This talk was sort of about change detection, but not really. The idea is that a learning algorithm sees examples sequentially and wants to tell if there is a significant change in the expected risk of the distribution. The method they propose is a sequential permutation test — the challenge is that a gradual change in risk might be hard to detect, and the number of possible hypotheses to consider grows rather rapidly. I got some more clarification from Harel’s explanation at the poster, but I think this is one where reading the paper will make it clearer. (A generic permutation test is sketched after the list.)
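
As a point of reference for the Robust PCA bullet above, here is a sketch of the standard convex baseline (nuclear norm plus L_1, solved with a simple ADMM-style loop), not the paper's MAP/generative-model approach. The parameter choices are the usual heuristics and should be treated as illustrative.

```python
import numpy as np

def soft_threshold(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(Y, lam=None, mu=None, iters=200):
    """Approximately solve min ||L||_* + lam*||S||_1 subject to L + S = Y."""
    m, n = Y.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # the usual default weight
    if mu is None:
        mu = 0.25 / np.abs(Y).mean()        # common step-size heuristic
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    Z = np.zeros_like(Y)                    # dual variable for the constraint L + S = Y
    for _ in range(iters):
        L = svd_threshold(Y - S + Z / mu, 1.0 / mu)    # singular value thresholding
        S = soft_threshold(Y - L + Z / mu, lam / mu)   # entrywise shrinkage
        Z = Z + mu * (Y - L - S)                       # dual ascent
    return L, S

# hypothetical usage: low-rank matrix plus sparse corruption
rng = np.random.default_rng(0)
L0 = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 100))
S0 = (rng.random((100, 100)) < 0.05) * rng.normal(scale=10.0, size=(100, 100))
L_hat, S_hat = rpca(L0 + S0)
print("relative recovery error:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```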
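
Here is a small sketch of the generalized eigenvector computation described above: form the class-conditional second-moment matrices for a pair of classes and solve the generalized eigenproblem, so the top eigenvectors maximize the ratio of expected squared projections. The regularization, data, and squared-projection feature map are my own illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def second_moment(X):
    return X.T @ X / len(X)

def top_directions(X, y, i, j, k=2, reg=1e-6):
    """Directions v maximizing E[(v^T x)^2 | y=i] / E[(v^T x)^2 | y=j]."""
    Ci = second_moment(X[y == i])
    Cj = second_moment(X[y == j]) + reg * np.eye(X.shape[1])   # regularize the denominator
    vals, vecs = eigh(Ci, Cj)      # generalized eigenproblem Ci v = lambda Cj v
    return vecs[:, -k:]            # eigenvalues are ascending, so take the last k

# hypothetical usage: squared projections as new nonlinear features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(3, size=1000)
V = top_directions(X, y, i=0, j=1, k=2)
features = (X @ V) ** 2
print(features.shape)              # (1000, 2)
```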
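
And a sketch of the "linearize the kernel, then do linear component analysis" recipe: random Fourier features approximating an RBF kernel, followed by plain PCA on the random features. The feature dimension and kernel bandwidth are illustrative assumptions, and this is only the flavor of the method, not the paper's algorithm or its Matrix Bernstein analysis.

```python
import numpy as np

def random_fourier_features(X, D=500, gamma=1.0, rng=None):
    """Random features approximating the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    if rng is None:
        rng = np.random.default_rng()
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def pca(Z, k):
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T           # top-k principal components of the feature matrix

# hypothetical usage: approximate kernel PCA on toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
Z = random_fourier_features(X, D=500, gamma=0.5, rng=rng)
components = pca(Z, k=5)
print(components.shape)            # (2000, 5)
```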
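
Finally, a generic two-window permutation test for an increase in risk, just to fix ideas; it is a simplified stand-in, not the sequential procedure from the paper, and the window sizes and per-example losses are made up.

```python
import numpy as np

def permutation_test(losses_ref, losses_new, num_perm=1000, rng=None):
    """One-sided p-value for the new window having higher mean loss than the reference."""
    if rng is None:
        rng = np.random.default_rng()
    observed = losses_new.mean() - losses_ref.mean()
    pooled = np.concatenate([losses_ref, losses_new])
    n_ref = len(losses_ref)
    count = 0
    for _ in range(num_perm):
        perm = rng.permutation(pooled)
        stat = perm[n_ref:].mean() - perm[:n_ref].mean()
        if stat >= observed:
            count += 1
    return (count + 1) / (num_perm + 1)

# hypothetical usage: per-example losses before and after a suspected drift
rng = np.random.default_rng(0)
losses_before = rng.normal(0.30, 0.1, size=500)
losses_after = rng.normal(0.35, 0.1, size=500)
print("p-value:", permutation_test(losses_before, losses_after, rng=rng))
```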

Noted without notes, but I enjoyed the posters (sometimes I read them since the presenter was not around):

  • An Asynchronous Parallel Stochastic Coordinate Descent Algorithm (Ji Liu; Steve Wright; Christopher Re; Victor Bittorf; Srikrishna Sridhar)
  • Clustering in the Presence of Background Noise (Shai Ben-David; Nika Haghtalab)
  • Demystifying Information-Theoretic Clustering (Greg Ver Steeg; Aram Galstyan; Fei Sha; Simon DeDeo)
  • Consistency of Causal Inference under the Additive Noise Model (Samory Kpotufe; Eleni Sgouritsa; Dominik Janzing; Bernhard Schoelkopf)
  • Concentration in unbounded metric spaces and algorithmic stability (Aryeh Kontorovich)
  • Hard-Margin Active Linear Regression (Zohar Karnin; Elad Hazan)
  • Heavy-tailed regression with a generalized median-of-means (Daniel Hsu; Sivan Sabato)

The famous “bird nest”

Greetings from ICML 2014! I will attempt to blog the conference in between attending sessions, giving my talk and poster, and stressing out about writing my CAREER proposal. Despite what Google Maps might tell you, my hotel is not across the street from the stadium pictured above — this led to a rather frustrating 30 minutes of walking around asking for directions. I do, however, have a lovely view from my room of the Bank of Communications (交通银行), which seems appropriate, somehow.

I can’t access Facebook or Twitter from China without some crazy paid VPN solution it seems (if you have any tips, feel free to email me), so I don’t know if this post will even make it to those services. It’s probably for the best — social media is too much of a distraction, right?

I got this email yesterday:

Dear Author of a NIPS 2014 Submission,

You are in for a treat! This year we will carry out an experiment that will give us insight to the fairness and consistency of the NIPS reviewing process. 10% of the papers, selected at random, will be duplicated and handled by independent Area Chairs. In cases where the Area Chairs arrive at different recommendations for accept/reject, the papers will be reassessed and a final recommendation will be determined.

I welcome this investigation — as an author and reviewer, I have found the NIPS review process to be highly variable in terms of the thoroughness of reviews, discussion, and the consistency of scores. I hope that the results of this experiment are made more publicly available — what is the variance of the scores? How do score distributions vary by area chair (a proxy for area)? There are a lot of ways to slice the data, and I would encourage the organizing committee to take the opportunity to engage with the “NIPS community” to investigate the correlation between the numerical measures provided by the review process and the outcomes.

For those readers of the blog who have not submitted papers to machine learning (or related) conferences, the conference review process is a bit like a mini-version of a journal review. You (as the author) get the reviews back and have to write a response and then the reviewers discuss the paper and (possibly, but in my experience rarely) revise their reviews. However, they generally are supposed to take into account the response in the discussion. In some cases people even adjust their scores; when I’ve been a reviewer I often adjust my scores, especially if the author response addresses my questions.

This morning I had the singular experience of having a paper rejected from ICML 2014 in which all of the reviewers specifically marked that they did not read and consider the author response. Based on the initial scores the paper was borderline, so the rejection is not surprising. However, we really did try to address their criticisms in our rebuttal. In particular, some of the reviewers misunderstood what our claims were. Had they bothered to read our response (and proposed edits), perhaps they would have realized this.

Highly selective (computer science) conferences often tout their reviews as being just as good as a journal, but in both outcomes and process, it’s a pretty ludicrous claim. I know this post may sound like sour grapes, but it’s not about the outcome, it’s about the process. Why bother with the facade of inviting authors to rebut if the reviewers are unwilling to read the response?

The 10th IEEE International Conference on Distributed Computing in Sensor Systems has issued a call for papers. Deadlines are 1/31 and 2/7.

After attending GlobalSIP I flew to Reno and drove to South Lake Tahoe for NIPS 2013. NIPS is a large conference that is unfortunately single-track. All papers are posters, and a very small number are selected for longer oral presentation. A slightly larger number are selected for 5-minute “spotlight” advertisements. The poster session runs from 7 to 11 PM for the first three days, and each poster session contains around 90 posters in a giant room. It’s very loud, and some poster presenters lose their voice for a day or two after presenting.

The contrast with GlobalSIP could not be starker. Obviously these are very different venues, but I found that all of the noise and commotion at NIPS made it nigh impossible for me to understand or retain any explanations at the poster session. Instead, I found myself circling titles in my program guide so that I could take a look at the papers later. Perhaps it was harder for me since I’m an “outsider” so I have more to learn about the basic models/assumptions in most of the papers, and I need more of an explanation than most.

In a sense a poster is “better” for the viewer because they can see what they want/need. You can get an explanation “at your level” from the poster presenter, and it’s more interactive than sitting through some 20-minute talk where the presenter feels the need to have a TOC slide (cf. ISIT). But the lack of noise isolation and the sheer volume of posters is not ideal for actually digesting new ideas. I wonder if the NIPS model is really sustainable, and if they would ever consider going to parallel sessions. I think that even with posters, some isolation would help tremendously.

I’m in Austin right now for the first GlobalSIP conference. The conference has a decentralized organization, with semi-independent day-long workshops (“symposia”) scheduled in parallel with each other. There are 8 of these, with 6 running in parallel per day; each has 1 session of “plenary” talks and 2 poster sessions. Each workshop is scheduled as AAB, ABA, or BAA, where A = posters and B = plenary, so there are 2 talk sessions and 4 poster sessions running in parallel at any given time.

Fortunately, there is a wide range of topics covered in the workshops, from biology to controlled sensing to financial signal processing. The downside is that the actual papers in each workshop often would fit well with other workshops. For example, the distributed optimization posters (in which I am interested) were sprinkled all over the place. This probably has a lot to do with the decentralized organization.

In terms of the “results” at the conference, it seems from my cursory view that many people are presenting “extra” results from other conference papers, or preliminary work for future papers. This actually works well in the poster format: for the former, the poster contains a lot of information about the “main result” as well, and for the latter, the poster is an invitation to think about future work. In general I’m a little ambivalent about posters, but if you’re going to have to do ‘em, a conference like this may be a better way to do it.
