CFP: Theory and Practice of Differential Privacy (TPDP) 2019

November 11
London, UK
Colocated with CCS 2019

Differential privacy is a promising approach to privacy-preserving data analysis.  Differential privacy provides strong worst-case guarantees about the harm that a user could suffer from participating in a differentially private data analysis, but is also flexible enough to allow for a wide variety of data analyses to be performed with a high degree of utility.  Having already been the subject of a decade of intense scientific study, it has also now been deployed in products at government agencies such as the U.S. Census Bureau and companies like Apple and Google.

Researchers in differential privacy span many distinct research communities, including algorithms, computer security, cryptography, databases, data mining, machine learning, statistics, programming languages, social sciences, and law.  This workshop will bring researchers from these communities together to discuss recent developments in both the theory and practice of differential privacy.

Specific topics of interest for the workshop include (but are not limited to):

  • theory of differential privacy,
  • differential privacy and security,
  • privacy preserving machine learning,
  • differential privacy and statistics,
  • differential privacy and data analysis,
  • trade-offs between privacy protection and analytic utility,
  • differential privacy and surveys,
  • programming languages for differential privacy,
  • relaxations of the differential privacy definition,
  • differential privacy vs other privacy notions and methods,
  • experimental studies using differential privacy,
  • differential privacy implementations,
  • differential privacy and policy making,
  • applications of differential privacy.

Submissions

The goal of TPDP is to stimulate discussion of the relevance of differentially private data analyses in practice. For this reason, we seek contributions from different research areas of computer science and statistics.  Authors are invited to submit a short abstract (4 pages maximum) of their work. Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest, and clarity. Submissions should describe novel work, or work that has already appeared elsewhere but that can stimulate discussion between the different communities at the workshop. Accepted abstracts will be presented at the workshop either as a talk or a poster.  The workshop will not have formal proceedings and is not intended to preclude later publication at another venue. Authors of selected papers from the workshop will be invited to submit a full version of their work for publication in a special issue of the Journal of Privacy and Confidentiality.

Submission website: https://easychair.org/conferences/?conf=tpdp2019

Important Dates

Submission: June 21 (anywhere on earth)

Notification: August 9

Workshop: November 11

Program Committee

  • Michael Hay (co-chair), Colgate University
  • Aleksandar Nikolov (co-chair), University of Toronto
  • Aws Albarghouthi, University of Wisconsin–Madison
  • Borja Balle, Amazon
  • Mark Bun, Boston University
  • Graham Cormode, University of Warwick
  • Rachel Cummings, Georgia Tech
  • Xi He, University of Waterloo
  • Gautam Kamath, University of Waterloo
  • Ilya Mironov, Google Research – Brain
  • Uri Stemmer, Ben-Gurion University
  • Danfeng Zhang, Penn State University

For more information, visit the workshop website at https://tpdp.cse.buffalo.edu/2019/.

CFP: PPML Workshop at NIPS 2018

Privacy Preserving Machine Learning

NIPS 2018 Workshop

Montreal, December 8, 2018

Description

This one day workshop focuses on privacy preserving techniques for training, inference, and disclosure in large scale data analysis, both in the distributed and centralized settings. We have observed increasing interest of the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography community has proposed various secure frameworks for ML. We encourage both theory and application-oriented submissions exploring a range of approaches, including:

  • secure multi-party computation techniques for ML
  • homomorphic encryption techniques for ML
  • hardware-based approaches to privacy preserving ML
  • centralized and decentralized protocols for learning on encrypted data
  • differential privacy: theory, applications, and implementations
  • statistical notions of privacy including relaxations of differential privacy
  • empirical and theoretical comparisons between different notions of privacy
  • trade-offs between privacy and utility

We think it will be very valuable to have a forum to unify different perspectives and start a discussion about the relative merits of each approach. The workshop will also serve as a venue for networking people from different communities interested in this problem, and hopefully foster fruitful long-term collaboration.

Submission Instructions

Submissions in the form of extended abstracts must be at most 4 pages long (not including references) and adhere to the NIPS format. We do accept submissions of work recently published or currently under review. Submissions should be anonymized. The workshop will not have formal proceedings, but authors of accepted abstracts can choose to have a link to arXiv or a PDF published on the workshop webpage.

Program Committee

  • Pauline Anthonysamy (Google)
  • Borja de Balle Pigem (Amazon)
  • Keith Bonawitz (Google)
  • Emiliano de Cristofaro (University College London)
  • David Evans (University of Virginia)
  • Irene Giacomelli (University of Wisconsin–Madison)
  • Nadin Kokciyan (King’s College London)
  • Kim Laine (Microsoft Research)
  • Payman Mohassel (Visa Research)
  • Catuscia Palamidessi (Ecole Polytechnique & INRIA)
  • Mijung Park (Max Planck Institute for Intelligent Systems)
  • Benjamin Rubinstein (University of Melbourne)
  • Anand Sarwate (Rutgers University)
  • Philipp Schoppmann (HU Berlin)
  • Nigel Smart (KU Leuven)
  • Carmela Troncoso (EPFL)
  • Pinar Yolum (Utrecht University)
  • Samee Zahur (University of Virginia)

Organizers

  • Adria Gascon (Alan Turing Institute & Edinburgh)
  • Niki Kilbertus (MPI for Intelligent Systems & Cambridge)
  • Olya Ohrimenko (Microsoft Research)
  • Mariana Raykova (Yale)
  • Adrian Weller (Alan Turing Institute & Cambridge)

NIPS 2017 Tutorial on Differential Privacy and Machine Learning

Kamalika and I gave a tutorial at NIPS last week on differential privacy and machine learning. We’ve posted the slides and references (updates still being made). It was a bit stressful to get everything put together in time, especially given how this semester went, but it was a good experience and now we have something to build on. It’s amazing how much research activity there has been in the last few years.

One thing that I struggled with a bit was the difference between a class lecture, a tutorial, and a survey. Tutorials sit between lectures and surveys: the goal is to be clear and cover the basics with simple examples, but also lay out something about what is going on in the field and where important future directions lie. It’s impossible to be comprehensive; we had to pick and choose different topics and papers to cover, and ended up barely mentioning large bodies of work. At the same time, it didn’t really make sense to put up a slide saying “here are references for all the things we’re not going to talk about.” If the intended audience is a person who has heard of differential privacy but hasn’t really studied it, or someone who has read this recent series of articles, then a list without much context is not much help. It seems impossible to even write a real survey now, unless you narrow the scope.

As for NIPS itself… I have to say that the rapid increase in size (8,000 participants this year) made the conference feel a lot different. I had a hard time hearing and following the talks in the short time I was there. Thankfully the talks were streamed/recorded, so I can go back and catch what I missed.

Retraction for Symmetric Matrix Perturbation for Differentially-Private Principal Component Analysis, ICASSP 2016

This is somewhat difficult to write, so I’ll cut to the chase. The proposed differentially private PCA method from the ICASSP 2016 paper by my student Hafiz Imtiaz and myself is incorrect and so should be retracted:

H. Imtiaz, A.D. Sarwate, Symmetric Matrix Perturbation for Differentially-Private Principal Component Analysis, Proceedings of the 2016 International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2339–2343, March 2016.

As a side result, the paper of Jiang et al. from AAAI 2016 has a similar incorrect result. I’d like to thank my collaborator Kamalika Chaudhuri, who brought this up to me after a conversation with Kunal Talwar. I have taken so long to make this public announcement because of a misguided effort to adapt the algorithm and run new experiments. Unfortunately, this was not to be.

What is the algorithm? Well, the basic idea in all of these papers is to add Wishart noise to produce a noisy estimate of the second-moment matrix of a data set. We start with some data vectors \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n \in \mathbb{R}^d and compute the second moment matrix A = \sum_{i} \mathbf{x}_i \mathbf{x}_i^{\top}. If we want to compute the SVD in a differentially private way, one approach is to add noise E to the second moment matrix to form \hat{A} = A + E such that \hat{A} is a differentially private version of A. Then by postprocessing invariance, the SVD of \hat{A} is also differentially private.

The motivation for our work was twofold: 1) we wanted to see how theoretically “optimal” algorithms performed on real data sets, especially for suggested settings of the parameters, and 2) we wanted to propose a good (\epsilon,0)-differentially private algorithm. What is wrong with \delta > 0? In some application domains, I have found that the notion of a “failure probability” is not compatible with practitioners’ (often lawyer-driven) policy requirements. Concentrated DP and newer variants still keep \delta > 0, so they might not fly either. Plus, it’s a slightly more challenging and interesting mathematical problem.

A very good (\epsilon,\delta) choice for E is to choose the entries to be i.i.d. Gaussian on the diagonal and above and copy the entries below the diagonal so that E is symmetric. This is the Analyze Gauss (AG) algorithm of Dwork et al. Although \hat{A} is symmetric, it is no longer positive semidefinite (PSD). However, this gives order-optimal guarantees on utility. Taking that as a cue, we wanted to preserve the PSD property in \hat{A}, so we proposed adding noise with a Wishart distribution, which corresponds to generating an i.i.d. rectangular d \times p Gaussian matrix Z and setting E = Z Z^{\top}. This generates a PSD perturbation of A and has pretty good utility guarantees as well.
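
To make the two perturbations concrete, here is a minimal NumPy sketch. It is purely illustrative: the noise scale sigma, the Wishart scale, and the dimension p below are placeholder values, not the calibrations used in Analyze Gauss or in our paper.

```python
import numpy as np

def second_moment(X):
    """A = sum_i x_i x_i^T, where the rows of X are the data vectors."""
    return X.T @ X

def analyze_gauss_style(A, sigma, rng):
    """Symmetric i.i.d. Gaussian perturbation (AG-style): symmetric, not PSD."""
    d = A.shape[0]
    upper = np.triu(rng.normal(scale=sigma, size=(d, d)))  # diagonal and above
    E = upper + np.triu(upper, 1).T                         # mirror below
    return A + E

def wishart_style(A, scale, p, rng):
    """Wishart perturbation E = Z Z^T: PSD by construction, but (as explained
    below) blatantly non-private, hence the retraction."""
    d = A.shape[0]
    Z = rng.normal(scale=scale, size=(d, p))
    return A + Z @ Z.T

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # toy data, one row per individual
A = second_moment(X)

A_ag = analyze_gauss_style(A, sigma=1.0, rng=rng)   # placeholder sigma
A_w = wishart_style(A, scale=1.0, p=6, rng=rng)     # placeholder scale, p

# Post-processing invariance: the eigendecomposition / SVD of the perturbed
# matrix inherits whatever privacy guarantee the perturbation itself has.
eigvals, eigvecs = np.linalg.eigh(A_ag)
```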

Unfortunately, the Wishart perturbation is blatantly non-private because the image under the perturbation is a PSD cone starting from A, and for A and A' coming from neighboring databases, the cones are not the same. As a trivial example, suppose A = \mathbf{x} \mathbf{x}^{\top} and A' = \mathbf{0}. Hafiz has written up a more formal description. Because differential privacy requires the support of the output distribution under different inputs to coincide, it is difficult to create a (data-independent) additive perturbation that preserves the PSD property. The exponential mechanism works, but is not additive.
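
Here is an informal numerical version of that trivial example (a sketch only; the record x and the dimensions are arbitrary, and the formal argument is in Hafiz’s write-up): draw a Wishart-perturbed output starting from A' = \mathbf{0} and check whether it could also have been produced starting from A.

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 5, 6

x = 3.0 * np.ones(d)                  # a single record with non-trivial norm
A = np.outer(x, x)                    # database containing x
A_prime = np.zeros((d, d))            # neighboring database without x

# Output of the Wishart mechanism run on A': B = A' + Z Z^T
Z = rng.normal(size=(d, p))
B = A_prime + Z @ Z.T

# For B to be reachable from A, B - A would itself have to be PSD.
min_eig = np.linalg.eigvalsh(B - A).min()
print("smallest eigenvalue of B - A:", min_eig)
# With overwhelming probability this is negative for this x, so B lies outside
# the support of the mechanism run on A and the likelihood ratio is unbounded.
```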

In the end, I was rather excited about the empirical performance (cf. goal 1 above) and we were close to the deadline, so I did not do a proper job checking the technical results in our rush to submit. I’m making this a teachable moment for myself, but wanted to make sure there was a public, findable version of this retraction so that people don’t continue to build on our results. What we learned from 1) is that designing practical PCA methods for reasonable values of (\epsilon,\delta) is still challenging — many data sets do not have the sample sizes needed to make differential privacy practical. The solution may be to design more complex multi-stage pipelines, revisit what we can assume about the data space and neighborhood relations, or come up with better (\epsilon,0) algorithms that pass muster for applications where nonzero \delta is unacceptable.

Signal boost: DPCOMP.ORG is live

I got the following email from Gerome Miklau:

Dear colleagues:

We are writing to inform you of the launch of DPCOMP.ORG.

DPCOMP.ORG is a public website designed with the following goals in mind: (1) to increase the visibility and transparency of state-of-the-art differentially private algorithms and (2) to present a principled and comprehensive empirical evaluation of these algorithms. The intended audience is both researchers who study privacy algorithms and practitioners who might deploy these algorithms.

Currently DPComp includes algorithms for answering 1- and 2-dimensional range queries. We thoroughly study algorithm accuracy and the factors that influence it and present our findings using interactive visualizations. We follow the evaluation methodology from the paper “Principled Evaluation of Differentially Private Algorithms using DPBench”. In the future we plan to extend it to cover other analysis tasks (e.g., higher dimensional data, private regression).

Our hope is that the research community will contribute to improving DPCOMP.ORG so that practitioners are exposed to emerging research developments. For example: if you have datasets which you believe would distinguish the performance of tested algorithms, new algorithms that could be included, alternative workloads, or even a new error metric, please let us know — we would like to include them.

Please share this email with interested colleagues and students. And we welcome any feedback on the website or findings.

Sincerely,

Michael Hay (Colgate University)
Ashwin Machanavajjhala (Duke University)
Gerome Miklau (UMass Amherst)

Postdoctoral Position at Rutgers with… me!

I keep posting ads for postdocs with other people but this is actually to work with little old me!

Postdoctoral Position
Department of Electrical and Computer Engineering
Rutgers, The State University of New Jersey

The Department of Electrical and Computer Engineering (ECE) at Rutgers University is seeking a dynamic and motivated Postdoctoral Fellow to develop distributed machine learning algorithms for complex neuroimaging data. This work is in collaboration with the Mind Research Network in Albuquerque, New Mexico, under NIH Grant 1R01DA040487-01A1.

Candidates with a Ph.D. in Electrical Engineering, Computer Science, Statistics or related areas with experience in one of

  • distributed signal processing or optimization
  • image processing with applications in biomedical imaging
  • machine learning theory (but with a desire to interface with practice)
  • privacy-preserving algorithms (preferably differential privacy)

are welcome to apply. Strong and self-motivated candidates should also have

  • a strong mathematical background: this project is about translating theory to practice, so a solid understanding of mathematical formalizations is crucial;
  • good communication skills: this is an interdisciplinary project with many collaborators.

The Fellow will receive valuable experience in translational research as well as career mentoring and opportunities to collaborate with others outside the project within the ECE Department, DIMACS, and other institutions.

The initial appointment is for 1 year but can be renewed subject to approval. Salary and compensation are at the standard NIH scale for postdocs.

To apply, please email the following to Prof. Anand D. Sarwate (anand.sarwate@rutgers.edu):

  • Curriculum Vitae
  • Contact information for 3 references
  • A brief statement (less than a page!) addressing the qualifications above and why the position is appealing.
  • Standard forms: Equal Employment Opportunity Data Form [PDF], Voluntary Self-Identification of Disability Form [PDF], Invitation to Covered Veterans to Self-Identify [PDF].

Applications are open until the position is filled. Start date is flexible but sometime in Fall 2016 is preferable.

Rutgers, The State University of New Jersey, is an Equal Opportunity / Affirmative Action Employer. Qualified applicants will be considered for employment without regard to race, creed, color, religion, sex, sexual orientation, gender identity or expression, national origin, disability status, genetic information, protected veteran status, military service or any other category protected by law. As an institution, we value diversity of background and opinion, and prohibit discrimination or harassment on the basis of any legally protected class in the areas of hiring, recruitment, promotion, transfer, demotion, training, compensation, pay, fringe benefits, layoff, termination or any other terms and conditions of employment.

IHP “Nexus” Workshop on Privacy and Security: Days 4-5

Wrapping this up, finally. Maybe conference blogging has to go by the wayside for me… my notes got a bit sketchier so I’ll just do a short rundown of the topics.

Days 4-5 were a series of “short” talks by Moni Naor, Kobbi Nissim, Lalitha Sankar, Sewoong Oh, Delaram Kahrobaie, Joerg Kliewer, Jon Ullman, and Sasho Nikolov on a rather eclectic mix of topics.

Moni’s talk was on secret sharing in an online setting — parties arrive one by one, and the qualified sets (those who can decode the secret) are revealed as parties arrive. The shares have to be generated online as well. Since the access structure is evolving, what kinds of systems can we support? As I understood it, the idea is to use something similar to a threshold scheme and a “doubling trick”-like argument by dividing the users/parties into generations. It’s a bit out of area for me, so I had a hard time keeping up with the connections to other problems. Kobbi talked about reconstruction attacks based on observing traffic from outsourced database systems. A user wants to retrieve records, but the server shouldn’t be able to reconstruct the data: it knows how many records were returned by a query and whether the same record was sent in response to subsequent queries — a sort of access-pattern leakage. He presented attacks based on this information and also based on just knowing the volume (e.g., total size of the response) of the queries.

Lalitha talked about mutual information privacy, which was quite different from the differential privacy models on the CS side, but more in line with Ye Wang’s talk earlier in the week. Although she didn’t get to spend as much time on it, the work on interactive communication and privacy might have been interesting to the folks earlier in the workshop studying communication complexity. In general, the connections between communication complexity problems and MPC, for example, are elusive to me (probably from lack of trying).

Sewoong talked about optimal mechanisms for differentially private composition — I had to miss his talk, unfortunately. Delaram talked about cryptosystems based on group theory, and I had to try to dredge up all the things I learned in 18.701/702 and the graduate algebra class I (mistakenly) took my first year of graduate school. I am not sure I could even do justice to it, but I took a lot of notes. Joerg talked about using polar codes to enable private function computation — initially privacy was measured by equivocation, but towards the end he made a connection to differential privacy. Since most folks (myself included) are not experts on polar codes, he gave a rather nice tutorial (I thought) on polar coding. It being the last day of the workshop, the audience had unfortunately thinned out a bit.

Jon spoke about estimating marginal distributions for high-dimensional problems. There were some nice connections to composite hypothesis testing problems that came out of the discussion during the talk — the model seems a bit complex to get into based on my notes, but I think readers who are experts on hypothesis testing might want to check out his work. Sasho rounded off the workshop with a talk about the sensitivity polytope of linear queries on a statistical database and connections to Gaussian widths. The main result was on the sample complexity of answering the queries in time polynomial in the number of individuals, size of the universe, and size of the query set.

IHP “Nexus” Workshop on Privacy and Security: Day 3

I’m doggedly completing these notes because in a fit of ambition I actually started posts for each of the workshop days and now I feel like I need to finish it up. Day 3 was a day of differential privacy: Adam Smith, Cynthia Dwork, and Kamalika Chaudhuri.

Adam gave a tutorial on differential privacy that had a bit of a different flavor from tutorials I have seen before (and given). He started out by highlighting a taxonomy of potential attacks on released data to make a distinction between re-identification, reconstruction, membership, and correlation inferences before going into the definitions, composition theory, Bayesian interpretation, and so on. With the attacks, he focused a bit more on the reconstruction story. The algorithms view of things (as I get it) is to think of, say, an LP relaxation of a combinatorial problem: you solve the LP and round the solution to integers and prove that it’s either correct or close to correct. This has more connections to things we think about in information theory (e.g. compressed sensing) but the way of stating the problem was a bit different. He also described the Homer et al. attack on GWAS. The last part of his talk was on multiplicative weights and algorithms for learning distributions over the data domain, which I think got a bit hairy for the IT folks who hadn’t seen MW before. This made me wonder if these connections between mirror descent on the simplex, information projections, and other topics can be taught in a “first principles” way that doesn’t require you to have a lot of familiarity with one interpretation of the method before bridging to another.

Cynthia gave a talk on false discovery control and how to use differential privacy ideas in a version of the Benjamini-Hochberg (BHq) procedure for controlling the false discovery rate. A key primitive is the report-noisy-argmax procedure, which gives the index of the argmax but not its value (which would entail a further privacy loss). Since most people are not familiar with FDR control, she spent a lot of her talk on that, so the full details of the private version were deferred to the paper. I covered FDR in my detection and estimation class, partly because of the extra attention it has received in the privacy workshops over the last few years.

Kamalika’s talk was on a model for privacy when data may be correlated between individuals. This involves the Pufferfish model for privacy, in which there is an explicit class of probability distributions on the data and an explicit set of secrets which the algorithm wants to obfuscate: the differential privacy guarantee should hold for the output distribution of the mechanism conditioned on any valid data distribution and any pair of secrets. Since the class of data distributions is arbitrary, we can also consider joint distributions over individuals’ data — if the distribution class has some structure, then there might be hope of efficiently producing the output of a function. She talked about using the \ell_{\infty} Wasserstein distance to measure the sensitivity of a function, and showed that adding noise that scales with this sensitivity guarantees privacy in the Pufferfish model. She then gave examples for Bayesian networks and Markov chains. As we discussed, it seems that for each dependence structure you need to come up with a sort of covering of the dependencies to add noise appropriately. This seems pretty challenging in general for now, but maybe after a bit more work there will be a clearer “general” strategy for handling dependence along these lines.

Randomized response, differential privacy, and the elusive biased coin

In giving talks to broader audiences about differential privacy, I’ve learned quickly (thanks to watching talks by other experts) that discussing randomized response first is an easy way to explain the kind of “plausible deniability” guarantee that differentially private algorithms give to individuals. In randomized response, the setup is that of local privacy: the simplest model is that a population of n individuals with data x_1, x_2, \ldots, x_n \in \{0,1\} representing some sensitive quantity are to be surveyed by an untrusted statistician. Concretely, suppose that the individual bits represent whether the person is a drug user or not. The statistician/surveyor wants to know the fraction p = \frac{1}{n} \sum x_i of users in the population. However, individuals don’t trust the surveyor. What to do?

The surveyor can give the individuals a biased coin that comes up heads with probability q < 1/2. The individual flips the coin in private. If it comes up heads, they lie and report y_i = 1 - x_i. If it comes up tails, they tell the truth y_i = x_i. The surveyor doesn’t see the outcome of the coin, but can compute the average of the \{y_i\}. What is the expected value of this average?

\mathbb{E}\left[ \frac{1}{n} \sum_{i=1}^{n} y_i \right] = \frac{1}{n} \sum_{i=1}^{n} (q (1 - x_i) + (1 -q) x_i) = q + (1 - 2q) p.

So we can invert this to solve for p: if we have a reported average \bar{y} = \frac{1}{n} \sum y_i then estimate p by

\hat{p} = \frac{\bar{y} - q}{ 1 - 2 q }.
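
Here is a small simulation of this estimator (a sketch; the population size, true fraction, and q are arbitrary placeholder values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, p_true = 100_000, 0.3, 0.1

x = (rng.random(n) < p_true).astype(int)   # true sensitive bits
lie = rng.random(n) < q                    # each person flips a q-biased coin
y = np.where(lie, 1 - x, x)                # lie on heads, truth on tails

y_bar = y.mean()
p_hat = (y_bar - q) / (1 - 2 * q)          # invert E[y_bar] = q + (1 - 2q) p

print(f"empirical fraction of users: {x.mean():.4f}")
print(f"randomized-response estimate: {p_hat:.4f}")
```

Note that the estimator divides by 1 - 2q, so its variance blows up as q approaches 1/2: that is the utility price of more plausible deniability.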

What does this have to do with differential privacy? Each individual got to potentially lie about their drug habits. So if we look at the hypothesis test for a surveyor trying to figure out if someone is a user from their response, we get the likelihood ratio

\frac{ \mathbb{P}( y_i = 1 | x_i = 1 ) }{ \mathbb{P}( y_i = 1 | x_i = 0 ) } = \frac{1 - q}{q}

If we set \epsilon = \log \frac{1 - q}{q}, we can see that the protocol guarantees differential privacy. This gives a possibly friendlier interpretation of \epsilon in terms of the “lying probability” q. We can plot this:

Epsilon versus lying probability
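
For a numerical version of that plot (a small sketch; the target \epsilon values are arbitrary), inverting \epsilon = \log \frac{1-q}{q} gives q = \frac{1}{1 + e^{\epsilon}}:

```python
import numpy as np

def lying_probability(eps):
    """q needed so that randomized response satisfies eps-differential privacy."""
    return 1.0 / (1.0 + np.exp(eps))

for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"epsilon = {eps:3.1f}  ->  q = {lying_probability(eps):.3f}")
```

Even \epsilon = 2 already corresponds to lying only about 12% of the time.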

This is a bit pessimistic — it says that to guarantee a reasonable “lying probability” we need \epsilon \ll 1, which in practice turns out to be quite difficult. Why so pessimistic? The differential privacy threat model is pretty pessimistic — it’s your plausible deniability given that everyone else in the data set has revealed their data to the surveyor “in the clear.” This is the fundamental tension in thinking about the practical implications of differential privacy — we don’t want to make conditional guarantees (“as long as everyone else is secret too”), but the price of an unconditional guarantee can be high in the worst case.

So how does randomized response work in practice? It seems we would need a biased coin — maybe one can custom-order them from Alibaba? It turns out the answer is: not really. Gelman and Nolan have an article about getting students to try to evaluate the bias of a coin — the physics of flipping would seem to dictate that coins are basically fair. You can load dice, but not coins. I recommend reading through the article — it sounds like a fun activity, even for graduate students. Maybe I’ll try it in my Detection and Estimation course next semester.

Despite the widespread prevalence of “flipping a biased coin” as a construction in probability, randomized algorithms, and information theory, a surprisingly large number of people I have met are completely unaware of the unicorn-like nature of biased coins in the real world. I guess we really are in an ivory tower, eh?

ITA 2015: quick takes

Better late than never, I suppose. A few weeks ago I escaped the cold of New Jersey to my old haunts of San Diego. Although La Jolla was always a bit fancy for my taste, it’s hard to beat a conference which boasts views like this:

A view from the sessions at ITA 2015

I’ll just recap a few of the talks that I remember from my notes — I didn’t really take notes during the plenaries so I don’t have much to say about them. Mostly this was due to laziness, but finding the time to blog has been challenging in this last year, so I think I have to pick my battles. Here’s a smattering consisting of

\{ \mathrm{talks\ attended} \} \cap \{ \mathrm{talks\ with\ understandable\ notes} \}

(Information theory)
Emina Soljanin talked about designing codes that are good for fast access to data in distributed storage. Initial work focused on how to repair codes under disk failures; she looked at how easy it is to retrieve the information afterwards, to guarantee some QoS for the storage system. Adam Kalai talked about designing compression schemes that work for an “audience” of decoders. The decoders have different priors on the set of elements/messages, so the idea is to design an encoder that works for this ensemble of decoders. I kind of missed the first part of the talk, so I wasn’t quite sure how this relates to classical work on mismatched decoding in the information theory world. Gireeja Ranade gave a great talk about defining notions of capacity/rate needed to control a system with multiplicative uncertainty. That is, x[n+1] = x[n] + B[n] u[n], where B[n] carries the uncertainty. She gave a couple of different notions of capacity, relating to the ratio | x[n]/x[0] | — either the expected value of the square or of the log, appropriately normalized. She used a “deterministic model” to explain how control in this setting is like controlling the number of significant bits in the state: uncertainty increases this, and you need a certain “amount” of control to cancel that growth.

(Learning and statistics)
I learned about active regression approaches from Sivan Sabato that provably work better than passive learning. The idea there is to use a partition of the X space and then do piecewise constant approximations to a weight function that they use in a rejection sampler. The rejection sampler (which I thought of as sort of doing importance sampling to make sure they cover the space) helps limit the number of labels requested by the algorithm. Somehow I had never met Raj Rao Nadakuditi until now, and I wish I had gotten a chance to talk to him further. He gave a nice talk on robust PCA, and in particular how outliers “break” regular PCA. He proposed a combination of shrinkage and truncation to help make PCA a bit more stable/robust. Laura Balzano talked about “estimating subspace projections from incomplete data.” She proposed an iterative algorithm for doing estimation on the Grassmann manifold that can do subspace tracking. Constantine Caramanis talked about a convex formulation for mixed regression that gives a guaranteed solution, along with minimax sample complexity bounds showing that it is basically optimal. Yingbin Liang talked about testing approaches for understanding whether there is an “anomalous structure” in a sequence of data. Basically, for a sequence Y_1, Y_2, \ldots, Y_n, the null hypothesis is that they are all i.i.d. \sim p and the (composite) alternative is that there is an interval of indices which are \sim q instead. She proposed an RKHS-based discrepancy measure and a threshold test on this measure. Pradeep Ravikumar talked about a “simple” estimator that was a “fix” for ordinary least squares with some soft thresholding. He showed consistency for linear regression in several senses, competitive with LASSO in some settings. Pretty neat, all said, although he also claimed that least squares was “something you all know from high school” — I went to a pretty good high school, and I don’t think we did least squares! Sanmi Koyejo talked about a Bayesian decision theory approach to variable selection that involved minimizing a KL divergence. Unfortunately, the resulting optimization ended up being NP-hard (for reasons I can’t remember), so they use a greedy algorithm that seems to work pretty well.

(Privacy)
Cynthia Dwork gave a tutorial on differential privacy with an emphasis on the recent work involving the false discovery rate. In addition to her plenary there were several talks on differential privacy and other privacy measures. Kunal Talwar talked about their improved analysis of the SuLQ method for differentially private PCA. Unfortunately there were two privacy sessions in parallel, so I hopped over to see John Duchi talk about definitions of privacy and how definitions based on testing are equivalent to differential privacy. The testing framework makes it easier to prove minimax bounds, though, so it may be a more useful view at times. Nadia Fawaz talked about privacy for time-series data such as smart meter data. She defined different types of attacks in this setting, showed that they correspond to mutual information or directed mutual information, and presented empirical results on a real data set. Raef Bassily studied an estimation problem in the streaming setting where you want to get a histogram of the most frequent items in the stream. They reduce the problem to finding a “unique heavy hitter” and develop a protocol that looks sort of like a code for the MAC: they encode bits into a real vector, add noise, and then add those up over the reals. It’s accepted to STOC 2015 and he said the preprint will be up soon.