IHP “Nexus” Workshop on Privacy and Security: Day 1

The view from my office at IHP

I am attending the Nexus of Information and Computation Theories workshop at the Institut Henri Poincaré in Paris this week. It’s the last week of a 10-week program that brought together researchers from information theory and CS theory in workshops around various themes such as distributed computation, inference, lower bounds, inequalities, and security/privacy. The main organizers were Bobak Nazer, Aslan Tchamkerten, Anup Rao, and Mark Braverman. The last two weeks are on Privacy and Security: I helped organize these two weeks with Prakash Narayan, Salil Vadhan, Aaron Roth, and Vinod Vaikuntanathan.

Due to teaching and ICASSP, I missed last week, but am here for this week, for which the sub-topics are secure multiparty computation and differential privacy. I’ll try to blog about the workshop since I failed to blog at all about ITA, CISS, or ICASSP. The structure of the workshop was to have 4 tutorials (two per week) and then a set of hopefully related talks. The first week had tutorials on pseudorandomness and information-theoretic secrecy.

The second week of the workshop kicked off with a tutorial from Yuval Ishai and Manoj Prabhakaran on secure multiparty computation (MPC). Yuval gave an abbreviated version/update of his tutorial from the Simons Institute (pt1/pt2) that set up the basic framework and language around MPC: k parties with inputs x_1, x_2, \ldots, x_k want to exchange messages to implement a functionality (evaluate a function) f(x_1, x_2, \ldots, x_k) over secure point-to-point channels such that they successfully learn the output of the function but learn nothing additional about each other’s inputs. There is a landscape of definitions within this general framework: some parties could collude, behave dishonestly with respect to the protocol, and so on. The guarantees could be exact (in the real/ideal paradigm, in which you compare the real system with a simulated system), statistical (the distribution in the real system is close in total variation distance to an ideal evaluation), or computational (some notion of indistinguishability). The setup became more concrete when he described a two-party example with a “trusted dealer” who can give the parties some correlated random bits, which they can use to randomly shift the truth table/evaluation of f(x_1, x_2) to guarantee correctness and security.
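To make the trusted-dealer idea concrete, here is a toy Python sketch in the spirit of a one-time truth table (my own illustration, not a scheme from the tutorial; all function names are made up): the dealer hands out random input shifts and additive shares of a shifted truth table, each party announces its masked input, and XORing the two table shares recovers f(x_1, x_2).

```python
import secrets

# Toy "trusted dealer" protocol for two parties evaluating a public
# function f: Z_n x Z_n -> {0,1} on private inputs x1, x2.
# Illustrative sketch only, not a specific published scheme.

def dealer(f, n):
    """Dealer samples correlated randomness for Alice and Bob."""
    r1 = secrets.randbelow(n)          # random shift for Alice's input
    r2 = secrets.randbelow(n)          # random shift for Bob's input
    # M_B is uniformly random; M_A is chosen so that
    # M_A[i][j] XOR M_B[i][j] = f(i - r1, j - r2).
    M_B = [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]
    M_A = [[f((i - r1) % n, (j - r2) % n) ^ M_B[i][j]
            for j in range(n)] for i in range(n)]
    return (r1, M_A), (r2, M_B)

def run_protocol(f, n, x1, x2):
    (r1, M_A), (r2, M_B) = dealer(f, n)
    u = (x1 + r1) % n                  # Alice announces her shifted input
    v = (x2 + r2) % n                  # Bob announces his shifted input
    # Bob sends his share of the table entry; XORing the shares yields
    # f(u - r1, v - r2) = f(x1, x2).
    return M_A[u][v] ^ M_B[u][v]

AND = lambda a, b: a & b
print(run_protocol(AND, 2, 1, 1))  # -> 1
```

The announced values u, v and Bob’s table share are each uniformly random on their own, which is what makes the transcript simulatable in the ideal/real sense described above.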

Manoj, on the other hand, talked about some notions of reductions between secure computations: given a protocol which securely evaluates f, can you simulate/compute g using calls to f? How many calls do you need? This gives a notion of the complexity rate of one function in terms of another. For example, can Alice and Bob simulate a binary erasure channel (BEC) using calls to an oblivious transfer (OT) protocol? What about vice versa? What about using a binary symmetric channel (BSC)? These problems seem sort of like toy channel problems (from an information theory perspective) but seem like fundamental building blocks when thinking about secure computation. As I discussed with Hoeteck Wee today, in information theory we often gain some intuition from continuous alphabets or large/general alphabet settings, whereas cryptography arguments/bounds come from considering circuit complexity: these are ideas that we don’t think about too much in IT since we don’t usually care about computational complexity/implementation.
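As a toy example of one easy direction, here is a sketch (my own, with the ideal OT functionality modeled as a plain function — the names are made up) of how Alice and Bob can simulate one use of a BEC with erasure probability 1/2 from a single call to 1-out-of-2 OT: Alice offers her bit and an erasure symbol, and Bob’s uniformly random choice bit decides which one he receives.

```python
import secrets

def ideal_ot(m0, m1, c):
    """Ideal 1-out-of-2 oblivious transfer: the receiver with choice
    bit c learns m_c and nothing about the other message; the sender
    learns nothing about c. (A stand-in for the functionality.)"""
    return m1 if c else m0

ERASED = None  # erasure symbol

def bec_from_ot(b):
    """Simulate one use of a BEC(1/2) from one OT call: the sender
    offers (b, erasure); the receiver's random choice bit decides
    whether the bit gets through."""
    c = secrets.randbelow(2)
    return ideal_ot(b, ERASED, c)
```

The OT guarantees give exactly the channel’s secrecy-flavored properties: Bob sees b or an erasure, each with probability 1/2, and Alice never learns which event occurred.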

Huijia (Rachel) Lin gave an introduction to zero-knowledge proofs and proof systems: a verifier wants to know if a statement X is true and can ask queries of a prover P, which has some evidence w that it wants to keep secret. For example, the statement might be “the number y is a perfect square modulo n” and the evidence might be an \alpha such that y = \alpha^2 \mod n. The prover doesn’t want to reveal w = \alpha, but should instead convince the verifier that such an \alpha exists. She gave a protocol for this before turning to a more complicated statement like proving that a graph has a Hamiltonian cycle. She then talked about using commitment schemes, at which point I sort of lost the thread of things since I’m not as familiar with these cryptography constructions. I probably should have asked more questions, so it was my loss.
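For the perfect-square example, one round of the classic commit/challenge/response interaction can be sketched as follows (an illustrative Python transcript with made-up function names and toy parameters; real use needs a large modulus and many repetitions to drive down the soundness error).

```python
import secrets
from math import gcd

# One round of the classic interactive proof that y is a square mod n:
# the prover knows alpha with y = alpha^2 mod n and convinces the
# verifier without revealing alpha.

def prover_commit(n):
    r = secrets.randbelow(n - 2) + 1   # random r in [1, n-1]
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    a = (r * r) % n                    # commitment: a fresh random square
    return r, a

def prover_respond(r, alpha, b, n):
    # Challenge b=0: open r itself. Challenge b=1: send r*alpha, a
    # square root of a*y that is still uniformly distributed.
    return (r * pow(alpha, b, n)) % n

def verifier_check(a, y, b, z, n):
    # Accept iff z^2 = a * y^b (mod n).
    return (z * z) % n == (a * pow(y, b, n)) % n

# Toy numbers only.
n, alpha = 77, 9
y = (alpha * alpha) % n
r, a = prover_commit(n)
b = secrets.randbelow(2)               # verifier's random challenge bit
z = prover_respond(r, alpha, b, n)
print(verifier_check(a, y, b, z, n))   # -> True
```

Zero knowledge comes from the fact that, for either challenge, the pair (a, z) the verifier sees can be simulated without knowing \alpha; a cheating prover who could answer both challenges for the same commitment would reveal a square root of y.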

Daniel Wichs discussed two problems he called “multi-key” and “spooky” fully-homomorphic encryption (FHE). The idea in multi-key FHE is that you have N users who encrypt values \{ x_i : i \in [N] \} with their own public keys and upload them to a server. Someone with access to the server wants to be able to decode only a function f(x_1, x_2, \ldots, x_N) using the combined private keys of all the users. In “spooky” FHE, you have N decoders, each with one of the private keys, who want to decode values \{y_i : i \in [N]\} which are functions of all of the encoded data. A simple example of this is when y_1 \oplus y_2 = x_1 \wedge x_2: that is, the XOR of the outputs is equal to the AND of the inputs. This generalizes to the XOR of multiple outputs being some function of the inputs, something he called additive function sharing. He then presented schemes for these two problems based on the “learning with errors” (LWE) construction of Gentry, Sahai, and Waters, which I would apparently have to read to really understand the scheme. It’s some sort of linear algebra thing over \mathbb{Z}_q. Perhaps there are some connections to linear block codes or network coding to be exploited here.
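To see what the target correlation looks like, here is a trivial dealer-based sketch of additive function sharing for the AND example (my own illustration — the point of spooky FHE is to achieve this correlation without any dealer, with each y_i computed locally from the ciphertexts).

```python
import secrets

def share_and(x1, x2):
    """Produce bits y1, y2 with y1 XOR y2 = x1 AND x2.
    Each y_i on its own is a uniformly random bit, so neither
    output alone reveals anything about the inputs."""
    s = secrets.randbelow(2)           # fresh random mask
    return s, s ^ (x1 & x2)

y1, y2 = share_and(1, 1)
print(y1 ^ y2)  # -> 1
```

The “spooky” part is that in the actual scheme y_1 and y_2 are produced by two non-communicating decoders from independently encrypted inputs, which is impossible information-theoretically and is why it evokes entanglement-style correlations.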


Thinking, Fast and Slow (Daniel Kahneman). This was recommended by Vivek Goyal, and is Kahneman’s popular nonfiction book about the psychology of decision making in humans (as opposed to rational-decision making models like those in economics). The System 1/System 2 model was new to me, even though the various biases and heuristics that he describes were things I had heard about in different contexts. While quite interesting and a book that anyone who works on decision making should read (I’m looking at you, statisticians, machine learners, and systems-EE folks), it’s a bit too long, I think. I found it hard to power through at the end, which is where he gets into prospect theory, a topic which my colleague Narayan Mandayam is trying to apply in wireless systems.

Men Explain Things To Me (Rebecca Solnit). A slim volume collecting several of Solnit’s essays on feminism and its discontents, from the last few years. I was familiar with some of the essays (including the first one) but was surprised by her ultimately hopeful tone (many of the essays come with introductions describing their context and how she feels about them now). Highly recommended, but I don’t think it will help with any Arguments On The Internet.

The Idea of India (Sunil Khilnani). This book is a bit older now but provides a lot of crucial context about the early Indian state, the relationship between urbanism and social change, and the nature of electoral politics in India. Reading this gave me a more nuanced view of the complexity of contemporary Indian politics, or at least a more nuanced view of how we got here (beyond the usual history of communalism). The origins of the cronyism of Congress and the causes and effects of the Emergency were also a new perspective for me.

The Sympathizer (Viet Thanh Nguyen). This is about an undercover Vietnamese (well, half-Vietnamese, as people keep pointing out) agent who leaves during the evacuation of Saigon and embeds himself in the refugee community, sending coded messages about counter-revolutionary plans. Our unnamed narrator has an epic adventure, darkly comic and tragic, initially told as a confessional in some sort of prison interrogation. He was educated in the US before going back to Vietnam — this puts him between two worlds, and the novel is fundamentally about this tension. Throughout, people are archetypes: The General, The Auteur, the crapulent major. Although long, the novel is rewarding: the last quarter really put me through the wringer, emotionally.

Station Eleven (Emily St. John Mandel). A novel about a post-apocalyptic future (split between pre-, slightly post-, and long post-apocalypse) in which much of the world has been decimated by a mysterious infection. The novel revolves around a series of connected characters: an actor who dies on stage in a production of King Lear; his ex-wife, who wrote a series of comics about a remote station; a child actor from the same production who survives to become part of a traveling theater company in the post-apocalyptic wasteland that was once Michigan; an audience member who was once a paparazzo following the actor. The whole novel has a haunting air to it, a bit of a dreamy sensibility that makes it easy to read (too) quickly. The connections between the characters were not surprising when they were revealed, but they didn’t need to be — the book doesn’t rely on that kind of gimmickry. Read it while traveling: you won’t look at airports the same way again.

UCSD Data Science Postdocs

A bit of a delayed posting due to pre-spring break crunch time, but my inimitable collaborator and ex-colleague Kamalika Chaudhuri passed along the following announcement.

I write with the exciting news that UCSD has up to four postdoctoral fellowship openings in data science and machine learning.

The fellowships will prepare outstanding researchers for academic careers. The fellows will be affiliated with the CSE or ECE Departments, will enjoy broad freedom to work with any of our faculty, will be allocated a research budget, and will teach one class per year.

If you know anyone who might be interested, please encourage them to apply!

The program is co-sponsored by UCSD’s CSE and ECE departments, the Interdisciplinary Qualcomm Institute, and the Information Theory and Applications Center.

More information is available at the UCSD Data Science site. Review of applications begins March 21, so get yours in!


The Buddha in the Attic (Julie Otsuka). This is a beautifully written short book composed as the collective experiences of Japanese picture brides from the 19th century to the present. It’s one of those books you have to read all at once, or in a short period of time in concentrated bursts. Each chapter is a different era and a different set of experiences. It might make you want to read more of her work.

Dancing Lessons for the Advanced in Age (Bohumil Hrabal). Another stream-of-consciousness narrative, Hrabal’s novella is from the perspective of an old man, a “palaverer” in the language of the introductory essay, addressing a group of young “beauties.” The narrator is unreliable: he tells stories that are shocking and backtracks to make himself seem better; he digresses into rants and waxes poetic about the “beauties” he has seen in his life. The introductory essay does a good job of situating the text and explaining Hrabal, about whom I knew nothing.

Ancillary Justice (Ann Leckie). This is the first in a trilogy which I swear I will take my time reading so I can enjoy them more. It’s impossible not to compare the world-building here to a point of reference like Banks’s Culture novels, but there’s quite a bit that’s different here. The main conceit is that ship-level AIs can spin off “ancillaries” into other bodies to act as surrogates. Leckie’s insistence on the default “she” for a genderless society angers whiny MRA-type sad puppies, but I didn’t find that it played a central role in the story, although it made me aware of my default assumptions and “desire to know” characters’ genders. I’m looking forward to the next two!

Between The World And Me (Ta-Nehisi Coates). Written as a letter to his son, Coates tries to work out the meaning of his college friend Prince Jones’s murder at the hands of police. The book stakes out a strong critical position on America as an enterprise, but, as Michelle Alexander’s review puts it, feels unfinished. It is, however, necessary reading.

Cannery Row (John Steinbeck). A classic that I never read until now! Steinbeck’s prose feels dated, and his way of describing people like Lee Chong made my skin crawl at times, but that man could write a sentence. The book surprised me.