NIPS 2017 Tutorial on Differential Privacy and Machine Learning

Kamalika and I gave a tutorial at NIPS last week on differential privacy and machine learning. We’ve posted the slides and references (updates still being made). It was a bit stressful to get everything put together in time, especially given how this semester went, but it was a good experience and now we have something to build on. It’s amazing how much research activity there has been in the last few years.

One thing I struggled with a bit was the difference between a class lecture, a tutorial, and a survey. Tutorials sit between lectures and surveys: the goal is to be clear and cover the basics with simple examples, but also to sketch what is going on in the field and where the important future directions lie. It’s impossible to be comprehensive; we had to pick and choose which topics and papers to cover, and ended up barely mentioning large bodies of work. At the same time, it didn’t really make sense to put up a slide saying “here are references for all the things we’re not going to talk about.” If the intended audience is someone who has heard of differential privacy but hasn’t really studied it, or someone who has read this recent series of articles, then a list without much context is not much help. It seems impossible to write a real survey now unless you narrow the scope.

As for NIPS itself… I have to say that the rapid increase in size (8,000 participants this year) made the conference feel a lot different. I had a hard time hearing and following the talks in the short time I was there. Thankfully the talks were streamed and recorded, so I can go back and catch what I missed.


Protips for Allerton Presentations

So you’re going to be presenting at the Annual Allerton Conference on Communication, Control, and Computing, you say? This conference, sometimes called the “Burning Man for EE Systems” by the younger set, has a much older pedigree. This year is the 54th anniversary, and you want to make an impression by dressing (your slides) for success! Here are some tips from old hands on how to make sure your talk is the Sun Singer and not the Death of the Last Centaur.

  • Library: You landed a slot in the library, the crème-de-la-crème of venues at the Allerton Mansion! There is ample seating, so even a moderate audience may seem sparse. Although your slides will be visible, your voice may be inaudible, especially for those “attendees” who are actually checking Facebook against the back wall. Invest in a stage-acting class or perhaps bring a megaphone.
  • Solarium: Afraid that early fall in the Midwest may be a bit chilly? Fear no more, for the Solarium is surrounded by glass and boasts a climate that is more Humboldt than Piatt. Even if the forecast is for rain, beware of light colors on your slides — they will be overwhelmed by the kiss of Phoebus. If the forecast is particularly sunny, consider polarizing the projector and handing out sunglasses to the audience.
  • Butternut/Pine: Regardless of how esoteric your paper may be, you are nearly guaranteed a packed house: expect your talk to be punctuated by the door opening, a slight breeze filtering in, and the faint sound of light swearing. To maximize visibility, try not to obscure the view of your slides. Recommended places to position yourself include on a bookshelf or behind the screen.
  • Lower Level: Unlike a horror house, the basement of Allerton Mansion is where the fun is — the pool table and ice maker! Here you will be cool, comfortable, and nigh invisible. For some pre-Halloween fun, bring a flashlight and make your presentation a ghost story! Regardless, make sure your slides only have important information on the top quarter of the screen so that session attendees beyond the first row can get the gist.
  • Visitor Center: (Discontinued this year!) In the past, adventurous Allerton attendees would trek across the wild groves and through a manicured garden to the Visitor Center to attend special sessions al fresco (this was a change from the Tent, which may have been more properly au naturel). Hard times on the road make for famished session attendees. Consider offering complimentary snacks and beverages to boost attendance.

IHP “Nexus” Workshop on Privacy and Security: Days 4-5

Wrapping this up, finally. Maybe conference blogging has to go by the wayside for me… my notes got a bit sketchier so I’ll just do a short rundown of the topics.

Days 4-5 were a series of “short” talks by Moni Naor, Kobbi Nissim, Lalitha Sankar, Sewoong Oh, Delaram Kahrobaei, Joerg Kliewer, Jon Ullman, and Sasho Nikolov on a rather eclectic mix of topics.

Moni’s talk was on secret sharing in an online setting — parties arrive one by one, and the qualified sets (those which can reconstruct the secret) are revealed over time to all parties. The shares have to be generated online as well. Since the access structure is evolving, what kinds of schemes can we support? As I understood it, the idea is to use something similar to a threshold scheme together with a “doubling trick”-like argument that divides the parties into generations. It’s a bit out of area for me, so I had a hard time keeping up with the connections to other problems. Kobbi talked about reconstruction attacks based on observing traffic from outsourced database systems. A user wants to retrieve records, but the server shouldn’t be able to reconstruct the data; however, it knows how many records were returned by each query and whether the same record was sent on subsequent queries — a sort of access-pattern leakage. He presented attacks based on this information, and also attacks based only on knowing the volume (e.g., the total size of the response) of the queries.
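For readers who, like me, needed a refresher: below is a minimal sketch of a standard (non-evolving) t-out-of-n threshold scheme in the style of Shamir, just to fix ideas; the evolving setting Moni described drops the assumption that the number of parties is known up front. The prime and the parameters below are arbitrary choices for illustration.

```python
import random

P = 2**61 - 1  # a Mersenne prime, chosen arbitrarily for illustration

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    assert 0 <= secret < P and 1 <= t <= n < P
    # Random polynomial of degree t-1 with constant term equal to the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if j != i:
                num = (num * (-x_j)) % P
                den = (den * (x_i - x_j)) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))                 # any 3 shares recover 123456789
print(reconstruct(random.sample(shares, 3)))   # same secret from a random subset
```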

Lalitha talked about mutual information privacy, which is quite different from the differential privacy models on the CS side, but more in line with Ye Wang’s talk earlier in the week. Although she didn’t get to spend as much time on it, her work on interactive communication and privacy might have been interesting to the folks earlier in the workshop studying communication complexity. In general, the connections between communication complexity problems and MPC, for example, are elusive to me (probably from lack of trying).
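Roughly speaking (and glossing over the details of her particular setup), the information-theoretic formulation trades leakage about the sensitive variable against distortion of the released data: with S the sensitive variable, X the data, and \hat{X} the released version forming a Markov chain S \to X \to \hat{X}, a generic version of the tradeoff is

\min_{p(\hat{x} \mid x)} I(S; \hat{X}) \quad \text{subject to} \quad \mathbb{E}[d(X, \hat{X})] \le D.

This is just meant as orientation for readers coming from the differential privacy side, not the exact model in her talk.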

Sewoong talked about optimal mechanisms for differentially private composition — I had to miss his talk, unfortunately. Delaram talked about cryptosystems based on group theory, and I had to try to recall all the things I learned in 18.701/702 and the graduate algebra class I (mistakenly) took my first year of graduate school. I am not sure I could do it justice, but I took a lot of notes. Joerg talked about using polar codes to enable private function computation — initially privacy was measured by equivocation, but towards the end he made a connection to differential privacy. Since most folks (myself included) are not experts on polar codes, he gave a rather nice tutorial (I thought) on polar coding. It being the last day of the workshop, the audience had unfortunately thinned out a bit.
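For context (since I missed the talk), the standard bounds that optimal-composition results improve on are the following: composing k mechanisms that are each \epsilon-differentially private yields k\epsilon-differential privacy, while the advanced composition theorem says that, for any \delta > 0, the k-fold composition of (\epsilon, \delta')-differentially private mechanisms is

\left( \epsilon \sqrt{2 k \ln(1/\delta)} + k \epsilon (e^{\epsilon} - 1), \; k\delta' + \delta \right)

differentially private. Work on optimal composition characterizes the exact tradeoff rather than these generic bounds.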

Jon spoke about estimating marginal distributions for high-dimensional problems. There were some nice connections to composite hypothesis testing problems that came out of the discussion during the talk — the model seems a bit complex to get into based on my notes, but I think readers who are experts on hypothesis testing might want to check out his work. Sasho rounded off the workshop with a talk about the sensitivity polytope of linear queries on a statistical database and connections to Gaussian widths. The main result was on the sample complexity of answering the queries in time polynomial in the number of individuals, size of the universe, and size of the query set.
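For readers unfamiliar with the terminology: the sensitivity polytope of a linear query workload is (roughly) the set of possible changes to the vector of true answers when a single individual’s data changes, and the Gaussian width of a set K \subseteq \mathbb{R}^d, which drives the error bounds, is

w(K) = \mathbb{E}_{g \sim N(0, I_d)} \left[ \sup_{x \in K} \langle g, x \rangle \right].

This is just a reminder of the definitions, not a statement of his results.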

IHP “Nexus” Workshop on Privacy and Security: Day 1

The view from my office at IHP

I am attending the Nexus of Information and Computation Theories workshop at the Institut Henri Poincaré in Paris this week. It’s the last week of a 10-week program that brought together researchers from information theory and CS theory for workshops around various themes such as distributed computation, inference, lower bounds, inequalities, and security/privacy. The main organizers were Bobak Nazer, Aslan Tchamkerten, Anup Rao, and Mark Braverman. The last two weeks are on Privacy and Security: I helped organize them with Prakash Narayan, Salil Vadhan, Aaron Roth, and Vinod Vaikuntanathan.

Due to teaching and ICASSP, I missed last week, but I am here for this week, for which the sub-topics are secure multiparty computation and differential privacy. I’ll try to blog about the workshop, since I failed to blog at all about ITA, CISS, or ICASSP. The structure of the workshop was to have 4 tutorials (two per week) and then a set of hopefully related talks. The first week had tutorials on pseudorandomness and information-theoretic secrecy.

The second week of the workshop kicked off with a tutorial from Yuval Ishai and Manoj Prabhakaran on secure multiparty computation (MPC). Yuval gave an abbreviated version/update of his tutorial from the Simons Institute (pt1/pt2) that set up the basic framework and language around MPC: k parties with inputs x_1, x_2, \ldots, x_k want to exchange messages over secure point-to-point channels to implement a functionality (evaluate a function) f(x_1, x_2, \ldots, x_k) such that they learn the output of the function but don’t learn anything additional about each others’ inputs. There is a landscape of definitions within this general framework: some parties could collude, behave dishonestly with respect to the protocol, and so on. The guarantees could be exact (in the real/ideal paradigm, in which you compare the real system with a simulated ideal system), statistical (the distribution in the real system is close in total variation distance to an ideal evaluation), or computational (some notion of indistinguishability). The framework became clearer when he described a 2-party example with a “trusted dealer” who gives the parties some correlated random bits, which they can use to randomly shift the truth table/evaluation of f(x_1, x_2) while guaranteeing correctness and security.
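To make the trusted-dealer example concrete, here is a minimal sketch (my own reconstruction, not necessarily exactly the protocol Yuval showed) of a “one-time truth table”-style two-party protocol for a Boolean f over small input domains: the dealer hands out random cyclic shifts r, s and additive shares of the shifted truth table; the parties announce their shifted inputs; and XOR-ing the two table entries recovers f(x_1, x_2), while each individual message on its own looks uniformly random.

```python
import random

def deal(f, n, m):
    """Trusted dealer: correlated randomness for one evaluation of f: [n] x [m] -> {0,1}."""
    r, s = random.randrange(n), random.randrange(m)
    # M_B is a uniformly random mask; M_A is the shifted truth table XORed with the mask.
    M_B = [[random.randrange(2) for _ in range(m)] for _ in range(n)]
    M_A = [[f((u - r) % n, (v - s) % m) ^ M_B[u][v] for v in range(m)] for u in range(n)]
    return (r, M_A), (s, M_B)

def run(f, x1, x2, n, m):
    (r, M_A), (s, M_B) = deal(f, n, m)
    u = (x1 + r) % n          # Alice announces her shifted input
    v = (x2 + s) % m          # Bob announces his shifted input
    a = M_A[u][v]             # Alice's share of the table entry
    b = M_B[u][v]             # Bob's share
    return a ^ b              # equals f(x1, x2)

AND = lambda a, b: a & b
print(run(AND, 1, 1, 2, 2), run(AND, 1, 0, 2, 2))  # prints: 1 0
```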

Manoj, on the other hand, talked about notions of reduction between secure computations: given a protocol which securely evaluates f, can you simulate/compute g using calls to f? How many calls do you need? This gives a notion of the complexity rate of one function in terms of another. For example, can Alice and Bob simulate a binary erasure channel (BEC) using calls to an oblivious transfer (OT) protocol? What about vice versa? What about using a BSC? These problems seem sort of like toy channel problems (from an information theory perspective) but seem like fundamental building blocks when thinking about secure computation. As I discussed with Hoeteck Wee today, in information theory we often gain intuition from continuous or large/general alphabet settings, whereas cryptographic arguments/bounds come from considering circuit complexity: these are ideas we don’t think about much in IT, since we don’t usually care about computational complexity/implementation.
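As a toy illustration of this kind of reduction (and only in the honest-but-curious setting), here is one folklore way to realize a BEC with erasure probability 1/2 from a single 1-out-of-2 OT call: the sender loads her bit and an erasure symbol into the OT, and the receiver picks his choice bit uniformly at random. Since OT hides the choice bit, the sender does not learn whether her bit was erased. This is my own sketch, not something from Manoj’s talk.

```python
import random

def ot(m0, m1, choice):
    """Idealized 1-out-of-2 OT: the receiver gets m_choice; the sender learns nothing about `choice`."""
    return m1 if choice else m0

def bec_from_ot(b):
    """Semi-honest simulation of a BEC(1/2) from one OT call; returns the receiver's output."""
    ERASED = None
    choice = random.randrange(2)        # receiver's private random choice bit
    return ot(b, ERASED, choice)        # b with prob 1/2, erasure with prob 1/2

print([bec_from_ot(1) for _ in range(10)])  # a mix of 1s and Nones (erasures)
```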

Huijia (Rachel) Lin gave an introduction to zero-knowledge proofs and proof systems: a verifier wants to know if a statement X is true and can ask queries of a prover P, which has some evidence w that it wants to keep secret. For example, the statement might be “the number y is a perfect square” and the evidence might be an \alpha such that y = \alpha^2 \mod n. The prover doesn’t want to reveal w = \alpha, but instead should convince the verifier that such an \alpha exists. She gave a protocol for this before turning to more complicated statements, like proving that a graph has a Hamiltonian cycle. She then talked about using commitment schemes, at which point I sort of lost the thread of things, since I’m not as familiar with these cryptographic constructions. I probably should have asked more questions, so it was my loss.
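To make the example concrete, here is a sketch of one round of the classic interactive proof that y is a square mod n (I believe this is essentially the protocol she showed, but the code is my own reconstruction). Repeating the round drives the soundness error down geometrically, and a passing transcript reveals nothing about \alpha itself. The modulus and witness below are toy values for illustration.

```python
import random
from math import gcd

def prove_round(n, y, alpha):
    """One round of the classic ZK protocol for 'y is a square mod n'; returns True if the verifier accepts."""
    # Prover commits to a random square a = u^2 mod n.
    while True:
        u = random.randrange(2, n)
        if gcd(u, n) == 1:
            break
    a = (u * u) % n

    # Verifier issues a random one-bit challenge.
    b = random.randrange(2)

    # Prover responds with u (if b = 0) or u * alpha (if b = 1).
    z = (u * pow(alpha, b, n)) % n

    # Verifier checks z^2 == a * y^b (mod n); either answer alone leaks nothing about alpha.
    return (z * z) % n == (a * pow(y, b, n)) % n

n = 3233                     # toy modulus (61 * 53), illustration only
alpha = 123                  # the secret witness
y = (alpha * alpha) % n      # the public statement: y is a square mod n
print(all(prove_round(n, y, alpha) for _ in range(20)))  # True
```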

Daniel Wichs discussed two problems he called “multi-key” and “spooky” fully-homomorphic encryption (FHE). The idea in multi-key FHE is that you have N users who encrypt values \{ x_i : i \in [N] \} with their own public keys and upload them to a server. Someone with access to the server wants to be able to decode only a function f(x_1, x_2, \ldots, x_N) using the combined private keys of all the users. In “spooky” FHE, you have N decoders, each with one of the private keys, but they want to decode values \{y_i : i \in [N]\} which are functions of all of the encoded data. A simple example of this is when y_1 \oplus y_2 = x_1 \wedge x_2: that is, the XOR of the outputs is equal to the AND of the inputs. This generalizes to the XOR of multiple outputs being some function of the inputs, something he called additive function sharing. He then presented schemes for these two problems based on the “learning with errors” (LWE) approach of Gentry, Sahai, and Waters, which I would apparently have to read to really understand the scheme. It’s some sort of linear algebra thing over \mathbb{Z}_q. Perhaps there are some connections to linear block codes or network coding to be exploited here.
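Just to illustrate the target correctness condition (not the LWE-based construction itself), here is what additive function sharing of an AND looks like if we cheat and use a trusted dealer: each decoder i ends up with a share y_i, the XOR of the shares equals x_1 \wedge x_2, and each y_i on its own is uniformly random. The point of the actual schemes is to achieve this from the ciphertexts alone, without any dealer.

```python
import random

def share_and(x1, x2):
    """Trusted-dealer stand-in for 'spooky' evaluation: additive (XOR) shares of x1 AND x2."""
    y1 = random.randrange(2)       # individually uniform, reveals nothing by itself
    y2 = (x1 & x2) ^ y1            # forced so that y1 XOR y2 = x1 AND x2
    return y1, y2

for x1 in (0, 1):
    for x2 in (0, 1):
        y1, y2 = share_and(x1, x2)
        assert (y1 ^ y2) == (x1 & x2)
print("XOR of the outputs equals the AND of the inputs")
```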