Signal boost: DPCOMP.ORG is live

I got the following email from Gerome Miklau:

Dear colleagues:

We are writing to inform you of the launch of DPCOMP.ORG.

DPCOMP.ORG is a public website designed with the following goals in mind: (1) to increase the visibility and transparency of state-of-the-art differentially private algorithms and (2) to present a principled and comprehensive empirical evaluation of these algorithms. The intended audience is both researchers who study privacy algorithms and practitioners who might deploy these algorithms.

Currently DPComp includes algorithms for answering 1- and 2-dimensional range queries. We thoroughly study algorithm accuracy and the factors that influence it and present our findings using interactive visualizations. We follow the evaluation methodology from the paper “Principled Evaluation of Differentially Private Algorithms using DPBench”. In the future we plan to extend it to cover other analysis tasks (e.g., higher dimensional data, private regression).

Our hope is that the research community will contribute to improving DPCOMP.ORG so that practitioners are exposed to emerging research developments. For example: if you have datasets which you believe would distinguish the performance of tested algorithms, new algorithms that could be included, alternative workloads, or even a new error metric, please let us know — we would like to include them.

Please share this email with interested colleagues and students. And we welcome any feedback on the website or findings.

Sincerely,

Michael Hay (Colgate University)
Ashwin Machanavajjhala (Duke University)
Gerome Miklau (UMass Amherst)

Multiple Postdoc Openings at USC

Prof. Urbashi Mitra is looking for multiple postdocs. Given that this is the time of year when the future looks murkiest, these are great opportunities!

I am seeking multiple post-doctoral researchers with expertise in one or more of the following areas: Communication Theory, (Statistical) Signal Processing, Controls, Information Theory, and Machine Learning. In particular, the following areas of expertise are of interest: structured inference (sparse approximation, low-rank matrix completion, tensor signal processing, graph signal processing); multi-terminal information theory, or information theory at the boundaries of control or signal processing; distributed control, consensus methods, and partially observable Markov decision process modeling and algorithms; modern optimization methods; or biological communications, signal processing, or information theory.

The successful applicants will be expected to perform innovative translational research, mentor PhD students, give oral presentations, write journal papers, and participate in grant writing and project management. There will be significant opportunities for research leadership and interaction with funding agencies.

Ideally, the successful applicants will start in Summer 2016.

Please have your interested graduate students apply using the following portal:

https://jobs.usc.edu/postings/63539

In addition to a CV and research statement, applicants are requested to have three letters of reference uploaded to the system.

UCSD Data Science Postdocs

A bit of a delayed posting due to pre-spring break crunch time, but my inimitable collaborator and ex-colleague Kamalika Chaudhuri passed along the following announcement.

I write with the exciting news that UCSD has up to four postdoctoral fellowship openings in data science and machine learning.

The fellowships will prepare outstanding researchers for academic careers. The fellows will be affiliated with the CSE or ECE Departments, will enjoy broad freedom to work with any of our faculty, will be allocated a research budget, and will teach one class per year.

If you know anyone who might be interested, please encourage them to apply!

The program is co-sponsored by UCSD’s CSE and ECE departments, the Interdisciplinary Qualcomm Institute, and the Information Theory and Applications Center.

More information is available at the UCSD Data Science site. Review begins March 21, so get your applications in!

LabTV Profiles Are Up!

And now, a little pre-ITA self-promotion. As I wrote earlier, LabTV interviewed me and a subset of the students in the lab last semester (it was opt-in). This opportunity came out of my small part in a large-scale collaboration organized by the Mind Research Network (PI: Vince Calhoun) on trying to implement distributed and differentially private algorithms in a system to enable collaborative neuroscience research. Our lab profiles are now up! They interviewed me, graduate students Hafiz Imtiaz, Sijie Xiong, and Liyang Xie, and an undergraduate student, Kevin Sun. In watching them I found that I learned a few new things about my students…

Call for Papers: T-SIPN Special Issue on Inference and Learning Over Networks

IEEE Signal Processing Society
IEEE Transactions on Signal and Information Processing over Networks
Special Issue on Inference and Learning Over Networks

Networks are everywhere. They surround us at different levels and scales, whether we are dealing with communications networks, power grids, biological colonies, social networks, sensor networks, or distributed Big Data depositories. Therefore, it is not hard to appreciate the ongoing and steady progression of network science, a prolific research field spreading across many theoretical as well as applied domains. Regardless of the particular context, the very essence of a network resides in the interaction among its individual constituents, and Nature itself offers beautiful paradigms thereof. Many biological networks and animal groups owe their sophistication to fairly structured patterns of cooperation, which are vital to their successful operation. While each individual agent is not capable of sophisticated behavior on its own, the combined interplay among simpler units and the distributed processing of dispersed pieces of information enable the agents to solve complex tasks and dramatically enhance their performance.

Self-organization, cooperation, and adaptation emerge as the essential, combined attributes of a network tasked with distributed information processing, optimization, and inference. Such a network is conveniently described as an ensemble of spatially dispersed (possibly moving) agents, linked together through a (possibly time-varying) connection topology. The agents are allowed to interact locally and to perform in-network processing in order to accomplish the assigned inferential task. Correspondingly, several problems, such as network intrusion, community detection, and disease outbreak inference, can be conveniently described by signals on graphs, where the graph typically accounts for the topology of the underlying space and we obtain multivariate observations associated with the nodes/edges of the graph. The goal in these problems is to identify/infer/learn patterns of interest, including anomalies, outliers, and the existence of latent communities.

Unveiling the fundamental principles that govern distributed inference and learning over networks has been the common scope across a variety of disciplines, such as signal processing, machine learning, optimization, control, statistics, physics, economics, biology, computer science, and the social sciences. In the realm of signal processing, many new challenges have emerged, which stimulate research efforts toward delivering the theories and algorithms necessary to (a) design networks with sophisticated inferential and learning abilities; (b) promote truly distributed implementations, endowed with the real-time adaptation abilities needed to face the dynamic scenarios wherein real-world networks operate; and (c) discover and disclose significant relationships possibly hidden in the data collected from across networked systems and entities. This call for papers therefore encourages submissions from a broad range of experts who study such fundamental questions, including but not limited to:

  • Adaptation and learning over networks.
  • Consensus strategies; diffusion strategies.
  • Distributed detection, estimation and filtering over networks.
  • Distributed dictionary learning.
  • Distributed game-theoretic learning.
  • Distributed machine learning; online learning.
  • Distributed optimization; stochastic approximation.
  • Distributed proximal techniques, sub-gradient techniques.
  • Learning over graphs; network tomography.
  • Multi-agent coordination and processing over networks.
  • Signal processing for biological, economic, and social networks.
  • Signal processing over graphs.

Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/tsipn/ for information on paper submission. Manuscripts should be submitted via Manuscript Central at http://mc.manuscriptcentral.com/tsipn-ieee.

Important Dates:

  • Manuscript submission: February 1, 2016
  • First review completed: April 1, 2016
  • Revised manuscript due: May 15, 2016
  • Second review completed: July 15, 2016
  • Final manuscript due: September 15, 2016
  • Publication: December 1, 2016

Guest Editors:


ISIT 2015 : statistics and learning

The advantage of flying to Hong Kong from the US is that the jet lag was such that I was actually more or less awake in the mornings. I didn’t take such great notes during the plenaries, but they were rather enjoyable, and I hope that the video will be uploaded to the ITSOC website soon.

There were several talks on entropy estimation in various settings that I did not take great notes on, to wit:

  • OPTIMAL ENTROPY ESTIMATION ON LARGE ALPHABETS VIA BEST POLYNOMIAL APPROXIMATION (Yihong Wu, Pengkun Yang, University Of Illinois, United States)
  • DOES DIRICHLET PRIOR SMOOTHING SOLVE THE SHANNON ENTROPY ESTIMATION PROBLEM? (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)
  • ADAPTIVE ESTIMATION OF SHANNON ENTROPY (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)

I would highly recommend taking a look for those who are interested in this problem. In particular, it looks like we’re getting towards more efficient entropy estimators in difficult settings (online, large alphabet), which is pretty exciting.
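To see why this regime is hard, here is a small sketch of my own (not any of the estimators from these papers) comparing the naive plug-in entropy estimate with the classical Miller-Madow bias correction when the alphabet is larger than the sample; the alphabet size, sample size, and Dirichlet-random distribution below are arbitrary choices of mine.

```python
import numpy as np

def plugin_entropy(counts):
    """Naive plug-in (maximum-likelihood) entropy estimate in nats."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def miller_madow(counts):
    """Plug-in estimate with the classical Miller-Madow bias correction."""
    n = counts.sum()
    support = np.count_nonzero(counts)
    return plugin_entropy(counts) + (support - 1) / (2 * n)

rng = np.random.default_rng(0)
k, n = 10_000, 5_000            # alphabet larger than the sample: the hard regime
p = rng.dirichlet(np.ones(k))   # a random distribution on k symbols
true_H = -np.sum(p * np.log(p))

samples = rng.choice(k, size=n, p=p)
counts = np.bincount(samples, minlength=k)
print(f"true H = {true_H:.3f}, plug-in = {plugin_entropy(counts):.3f}, "
      f"Miller-Madow = {miller_madow(counts):.3f}")
```

The plug-in estimate is badly biased downward here, which is exactly the gap the polynomial-approximation estimators are designed to close.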

QUICKEST LINEAR SEARCH OVER CORRELATED SEQUENCES
Javad Heydari, Ali Tajer, Rensselaer Polytechnic Institute, United States
This talk was about hypothesis testing where the observer can control the samples being taken by traversing a graph. We have an n-node graph (cf. a graphical model) representing the joint distribution on n variables. The data generated is i.i.d. across time according to either F_0 or F_1. At each time you get to observe the data from only one node of the graph. You can either observe the same node as before, explore by observing a different node, or make a decision about whether the data came from F_0 or F_1. By adopting costs for the different actions you can formulate a dynamic programming solution for the search strategy, but it’s pretty heavy computationally. It turns out the optimal rule for switching has a two-threshold structure, and the resulting strategy can be quite different from the independent-observations case when the correlations are structured appropriately.
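To make the two-threshold structure concrete, here is a toy observe/explore/declare rule of my own on a simpler anomaly-search variant; it is not the correlated-sequences model from the talk, nor the paper's optimal policy, and the Gaussian model, thresholds, and reset-on-switch behavior are arbitrary choices of mine.

```python
import numpy as np

# Two-threshold sketch: keep sampling the current node while its running
# log-likelihood ratio (LLR) sits between the two thresholds, explore a new
# node when it drops below the lower one, declare when it clears the upper one.
rng = np.random.default_rng(1)

def llr(x, mu0=0.0, mu1=1.0):
    """LLR of one N(mu, 1) sample for H1: mean mu1 vs H0: mean mu0."""
    return (mu1 - mu0) * x - 0.5 * (mu1 ** 2 - mu0 ** 2)

def two_threshold_search(means, lo=-1.0, hi=4.0):
    node, stat, t = 0, 0.0, 0
    while True:
        t += 1
        stat += llr(rng.normal(means[node], 1.0))
        if stat >= hi:                   # strong evidence: declare this node
            return node, t
        if stat <= lo:                   # weak evidence: switch and restart
            node = (node + 1) % len(means)
            stat = 0.0

# one anomalous node (mean 1) hiding among nulls (mean 0)
print(two_threshold_search(means=[0, 0, 1, 0, 0]))  # -> (node found, samples used)
```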

MISMATCHED ESTIMATION IN LARGE LINEAR SYSTEMS
Yanting Ma, Dror Baron, North Carolina State University, United States; Ahmad Beirami, Duke University, United States
The mismatch studied in this paper is a mismatch in the prior distribution for a sparse observation problem y = Ax + \sigma_z z, where x \sim P (say a Bernoulli-Gaussian prior). The question is what happens when we do estimation assuming a different prior Q. The main result of the paper is an analysis of the excess MSE using a decoupling principle. Since I don’t really know anything about the replica method (except the name “replica method”), I had a little bit of a hard time following the talk as a non-expert, but thankfully there were a number of pictures and examples to help me follow along.
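As I understand it, the decoupling principle says each coordinate effectively behaves like a scalar Gaussian channel, so here is a toy scalar version of prior mismatch (definitely not the replica/large-system analysis from the paper): the true signal is Bernoulli-Gaussian and we compare the conditional-mean estimator built from the true sparsity level against one built from a wrong level. The values of eps, the mismatched level, and sigma are mine.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def posterior_mean(y, eps, sigma):
    """E[x | y] when x = b*g, b ~ Bern(eps), g ~ N(0,1), and y = x + sigma*z."""
    on = eps * norm.pdf(y, scale=np.sqrt(1 + sigma**2))     # likelihood of y if x != 0
    off = (1 - eps) * norm.pdf(y, scale=sigma)              # likelihood of y if x == 0
    w = on / (on + off)                                     # posterior P(x != 0 | y)
    return w * y / (1 + sigma**2)

eps, eps_mis, sigma, n = 0.1, 0.4, 0.3, 200_000
x = rng.binomial(1, eps, n) * rng.normal(size=n)
y = x + sigma * rng.normal(size=n)

mse_true = np.mean((posterior_mean(y, eps, sigma) - x) ** 2)
mse_mis = np.mean((posterior_mean(y, eps_mis, sigma) - x) ** 2)
print(f"MSE with true prior: {mse_true:.4f}, with mismatched prior: {mse_mis:.4f}")
```

The gap between the two numbers is the (scalar analogue of the) excess MSE the paper characterizes.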

SEARCHING FOR MULTIPLE TARGETS WITH MEASUREMENT DEPENDENT NOISE
Yonatan Kaspi, University of California, San Diego, United States; Ofer Shayevitz, Tel-Aviv University, Israel; Tara Javidi, University of California, San Diego, United States
This was another search paper, but this time we have, say, K targets W_1, W_2, \ldots, W_K uniformly distributed in the unit interval, and what we can do is query at each time n a set S_n \subseteq [0,1] and get a response Y_n = X_n \oplus Z_n, where X_n = \mathbf{1}( \exists W_k \in S_n ) and Z_n \sim \mathrm{Bern}( \mu(S_n) + b ), where \mu is the Lebesgue measure. So basically you can query a set and you get a noisy indicator of whether you hit any targets, where the noise depends on the size of the set you query. At some point \tau you stop and guess the target locations. You are (\epsilon,\delta)-successful if the probability that you fail to be within \delta of every target is less than \epsilon. The targeting rate is the limit of \log(1/\delta) / \mathbb{E}[\tau] as \epsilon,\delta \to 0 (I’m being fast and loose here). Clearly there are some connections to group testing and communication with feedback, etc. They show there is a significant gap between the adaptive and nonadaptive rates here, so you can find more targets if you can adapt your queries on the fly. However, since the rate is defined for a fixed number of targets, we could ask how the gap varies with K. They show it shrinks.
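To get a feel for the measurement model, here is a naive single-target simulation of my own (one target instead of K, and a crude repeat-and-vote bisection rather than any scheme from the paper); I also scale the measure-dependent noise so the flip probability stays below 1/2, which is a simplification I made, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(3)

def query(W, lo, hi, b=0.05):
    hit = int(lo <= W < hi)
    # noise grows with the queried set's measure; scaled by 0.3 here so the
    # flip probability stays below 1/2 (my simplification of the model)
    flip = rng.random() < 0.3 * (hi - lo) + b
    return hit ^ int(flip)

def locate(W, delta=1e-3, repeats=15):
    lo, hi, n_queries = 0.0, 1.0, 0
    while hi - lo > delta:
        mid = (lo + hi) / 2
        votes = sum(query(W, lo, mid) for _ in range(repeats))
        n_queries += repeats
        if votes > repeats / 2:   # majority says the target is in the left half
            hi = mid
        else:
            lo = mid
        # caveat: one wrong majority permanently excludes the target; the point
        # here is the measurement model, not a good search strategy
    return (lo + hi) / 2, n_queries

W = rng.random()
est, cost = locate(W)
print(f"target {W:.4f}, estimate {est:.4f}, queries used {cost}")
```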

ON MODEL MISSPECIFICATION AND KL SEPARATION FOR GAUSSIAN GRAPHICAL MODELS
Varun Jog, University of California, Berkeley, United States; Po-Ling Loh, University of Pennsylvania, United States
The graphical model for jointly Gaussian variables has no edge between nodes i and j if the corresponding entry (\Sigma^{-1})_{ij} = 0 in the inverse covariance matrix. They show a relationship between the KL divergence of two distributions and their corresponding graphs. The divergence is lower bounded by a constant if they differ in a single edge — this indicates that estimating the edge structure is important when estimating the distribution.
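For reference (a standard identity, not something specific to the paper), the KL divergence between two zero-mean d-dimensional Gaussians with covariances \Sigma_1 and \Sigma_2 has the closed form

D\left( \mathcal{N}(0,\Sigma_1) \,\|\, \mathcal{N}(0,\Sigma_2) \right) = \frac{1}{2}\left[ \mathrm{tr}\left(\Sigma_2^{-1}\Sigma_1\right) - d + \log\frac{\det\Sigma_2}{\det\Sigma_1} \right],

so the edge structure enters through the precision matrices \Sigma_1^{-1} and \Sigma_2^{-1}.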

CONVERSES FOR DISTRIBUTED ESTIMATION VIA STRONG DATA PROCESSING INEQUALITIES
Aolin Xu, Maxim Raginsky, University of Illinois at Urbana–Champaign, United States
Max gave a nice talk on the problem of minimizing an expected loss \mathbb{E}[ \ell(W, \hat{W}) ] of a d-dimensional parameter W which is observed noisily by separate encoders. Think of a CEO-style problem where there is a conditional distribution P_{X|W} such that the observation at each node is a d \times n matrix whose columns are i.i.d. and where the j-th row is i.i.d. according to P_{X|W_j}. Each sensor gets independent observations from the same model, can compress its observations to b bits, and sends them over independent channels to an estimator (so no MAC here). The main result is a lower bound on the expected loss as a function of the number of bits b and the mutual information between W and the final estimate \hat{W}. The key is to use the strong data processing inequality to handle the mutual information; the constant that controls the ratio between the mutual informations is important. I’m sure Max will blog more about the result so I’ll leave a full explanation to him (see what I did there?)
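My rough reconstruction of the shape of the argument (glossing over conditions; the paper's actual bounds are sharper and more careful): each sensor's b-bit message Y_i sits in a Markov chain W \to X_i \to Y_i, so a strong data processing inequality for the pair (W, X_i), with constant \eta < 1, gives

I(W; Y_i) \le \eta \, I(X_i; Y_i) \le \eta\, b,

and since the observations at the m sensors are conditionally independent given W,

I(W; \hat{W}) \le I(W; Y_1, \ldots, Y_m) \le \sum_{i=1}^m I(W; Y_i) \le \eta\, m\, b.

A Fano/rate-distortion style step then converts this cap on I(W; \hat{W}) into a lower bound on \mathbb{E}[ \ell(W, \hat{W}) ], which is where the constant \eta (the ratio between the mutual informations) comes in.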

More on Shannon theory etc. later!

AISTATS 2015: a few talks from one day

I attended AISTATS for about a day and change this year; unfortunately, due to teaching, I missed the poster I had there, but Shuang Song presented our work on learning from data sources of different quality, which is joint work with Kamalika Chaudhuri and myself. This was my first AISTATS. It had a single track of oral presentations and then poster sessions for the remaining papers. The difficulty with a single track, for me, is that my interests are relatively focused, and the format of presenting specialist subject matter to a general audience meant that I couldn’t get as much out of the talks as I would have wanted. Regardless, I did get exposed to a number of new problems. Maybe the ideas can percolate for a while and inform something in the future.

Computational Complexity of Linear Large Margin Classification With Ramp Loss
Søren Frejstrup Maibing, Christian Igel
The main result of this paper (I think) is that ERM under ramp loss is NP-hard. They gave the details of the reduction but since I’m not a complexity theorist I got a bit lost in the weeds here.
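For reference, one common parameterization of the ramp loss on the margin t = y \langle w, x \rangle is the clipped hinge

\ell_{\mathrm{ramp}}(t) = \min\{ 1, \max\{ 0, 1 - t \} \},

so badly misclassified points contribute at most a constant; that clipping is also what makes the loss non-convex, unlike the hinge loss it truncates.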

A la Carte — Learning Fast Kernels
Zichao Yang, Andrew Wilson, Alex Smola, Le Song
Ideas like “random kitchen sinks” and other kernel approximation methods require you to have a kernel you want to approximate, but in many problems you in fact need to learn the kernel from the data. If I give you a kernel function k(x,x') = k( |x - x'| ), then you can take the Fourier transform K(\omega) of k. This turns out to be a probability distribution, so you can sample random \{\omega_i\} i.i.d. and build a randomized Fourier approximation of k. If you don’t know the kernel function, or you have to learn it, then you could instead try to learn/estimate the transform directly. This paper was about trying to do that in a reasonably efficient way.
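For the known-kernel case these methods start from, here is a minimal random-features sketch of my own for the Gaussian (RBF) kernel; the paper is about learning the spectral distribution rather than fixing it as I do here, and the bandwidth and feature count below are arbitrary.

```python
import numpy as np

# "Random kitchen sinks" for a known RBF kernel: sample frequencies from the
# kernel's Fourier transform (a Gaussian) and approximate k(x, x') by an inner
# product of random cosine features.
rng = np.random.default_rng(4)

def random_fourier_features(X, n_features=500, gamma=1.0):
    d = X.shape[1]
    Omega = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)
approx = Z @ Z.T                                    # feature-space inner products
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-1.0 * sq_dists)                     # exact RBF kernel, gamma = 1
print(np.abs(approx - exact).max())                 # small once n_features is large
```

Learning the kernel then amounts to learning the distribution the frequencies Omega are drawn from instead of fixing it in advance.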

Learning Where to Sample in Structured Prediction
Tianlin Shi, Jacob Steinhardt, Percy Liang
This was about doing Gibbs sampling, not for MCMC sampling from the stationary distribution, but for “stochastic search” or optimization problems. The intuition was that some coordinates are “easier” than others, so we might want to focus resampling on the harder coordinates. But this might lead to inaccurate sampling. The aim here was to build a heterogeneous sampler that is cheap to compute and still does the right thing.
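As a picture of where such selection probabilities enter, here is a bare-bones random-scan Gibbs sampler of my own for a correlated bivariate Gaussian with a fixed, non-uniform coordinate-selection distribution; the paper's actual contribution (learning where to sample and keeping the procedure accurate) is not implemented here, and the correlation and selection probabilities are mine.

```python
import numpy as np

rng = np.random.default_rng(5)

rho = 0.9                              # correlation between the two coordinates
select_prob = np.array([0.8, 0.2])     # resample coordinate 0 far more often

x = np.zeros(2)
samples = []
for _ in range(20_000):
    i = rng.choice(2, p=select_prob)
    j = 1 - i
    # conditional of x_i given x_j for a standard bivariate Gaussian
    x[i] = rng.normal(rho * x[j], np.sqrt(1 - rho**2))
    samples.append(x.copy())

samples = np.array(samples)
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])  # roughly [0, 0] and 0.9
```

With fixed positive selection probabilities the stationary distribution is still correct; the subtlety the paper deals with is adapting those probabilities on the fly without breaking that.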

Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning
Mario Lucic, Mesrob Ohannessian, Amin Karbasi, Andreas Krause
This paper won the best student paper award. They looked at a k-means problem where they do “data summarization” to make the problem a bit more efficient — that is, by learning over an approximation/summary of the features, they can find different tradeoffs between the running time, risk, and sample size for learning problems. The idea is to use coresets — I’d recommend reading the paper to get a better sense of what is going on. It’s on my summer reading list.
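As a crude illustration of the summarize-then-cluster tradeoff (a plain uniform subsample, not the coreset construction from the paper), this sketch compares the k-means cost of centers fit on a small summary against centers fit on the full data; the data, summary size, and number of clusters are arbitrary choices of mine.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

centers = rng.normal(scale=10, size=(5, 2))
X = np.vstack([c + rng.normal(size=(2_000, 2)) for c in centers])

def kmeans_cost(X, C):
    """Sum of squared distances from each point to its nearest center."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

full = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
summary = X[rng.choice(len(X), size=200, replace=False)]   # the "data summary"
small = KMeans(n_clusters=5, n_init=10, random_state=0).fit(summary)

print("cost, full fit:   ", kmeans_cost(X, full.cluster_centers_))
print("cost, summary fit:", kmeans_cost(X, small.cluster_centers_))
```

The point of coresets is to make the second number provably close to the first with a summary whose size does not depend (much) on the size of the data.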

Averaged Least-Mean-Squares: Bias-Variance Trade-offs and Optimal Sampling Distributions
Alexandre Defossez, Francis Bach
What if you want to do SGD but you don’t want to sample the points uniformly? You’ll get a bias-variance tradeoff. This is another one of those “you have to read the paper” presentations. A nice result if you know the background literature, but if you are not a stochastic gradient aficionado, you might be totally lost.
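A least-squares toy in that spirit (not the paper's averaged-LMS analysis): sample points with probability proportional to their squared norm and reweight the gradients so they stay unbiased, with Polyak-Ruppert averaging of the iterates. The sampling distribution, step size, and data model are choices of mine.

```python
import numpy as np

rng = np.random.default_rng(7)

n, d, T = 5_000, 10, 50_000
X = rng.normal(size=(n, d)) * rng.uniform(0.5, 2.0, size=(n, 1))  # heterogeneous rows
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

p = (X ** 2).sum(axis=1)
p /= p.sum()                          # non-uniform sampling distribution over points
idx = rng.choice(n, size=T, p=p)      # pre-draw the sample indices

w, w_avg, step = np.zeros(d), np.zeros(d), 0.01
for t, i in enumerate(idx, start=1):
    g = (X[i] @ w - y[i]) * X[i]      # gradient of the i-th squared loss
    g /= n * p[i]                     # importance weight: unbiased for the average loss
    w -= step * g
    w_avg += (w - w_avg) / t          # Polyak-Ruppert averaging of the iterates

print(np.linalg.norm(w_avg - w_true))
```

Changing p changes the variance of the reweighted gradients (and hence the bias-variance tradeoff of the averaged iterate), which is the quantity the paper optimizes over.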

Sparsistency of \ell_1-Regularized M-Estimators
Yen-Huan Li, Jonathan Scarlett, Pradeep Ravikumar, Volkan Cevher
In this paper they find a new condition, which they call local structured smoothness, which is sufficient for certain M-estimators to be “sparsistent” — that is, they recover the support pattern of a sparse parameter asymptotically as the number of data points goes to infinity. Examples include the LASSO, regression in general linear models, and graphical model selection.
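As a quick empirical look at sparsistency for the LASSO (the most familiar member of the class covered), this sketch of mine checks how the estimated support compares with the true support as n grows; the dimensions, noise level, and the sqrt(log d / n) regularization scaling are standard choices of mine, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)

d, k, sigma = 50, 5, 0.5
w_true = np.zeros(d)
w_true[:k] = np.sign(rng.normal(size=k)) * rng.uniform(1.0, 2.0, size=k)  # well away from 0

for n in [50, 200, 1_000, 5_000]:
    X = rng.normal(size=(n, d))
    y = X @ w_true + sigma * rng.normal(size=n)
    lam = 2 * sigma * np.sqrt(np.log(d) / n)     # the usual sqrt(log d / n) scaling
    w_hat = Lasso(alpha=lam).fit(X, y).coef_
    S_hat, S = set(np.flatnonzero(w_hat)), set(np.flatnonzero(w_true))
    print(n, "missed:", len(S - S_hat), "false positives:", len(S_hat - S))
```

Both error counts should drop to zero as n grows, which is the behavior the sparsistency conditions guarantee.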

Some of the other talks which were interesting but for which my notes were insufficient:

  • Two-stage sampled learning theory on distributions (Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur)
  • Generalized Linear Models for Aggregated Data (Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo)
  • Efficient Estimation of Mutual Information for Strongly Dependent Variables (Shuyang Gao, Greg Ver Steeg, Aram Galstyan)
  • Sparse Submodular Probabilistic PCA (Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo)