IHP “Nexus” Workshop on Privacy and Security: Day 1

The view from my office at IHP

I am attending the Nexus of Information and Computation Theories workshop at the Institut Henri Poincaré in Paris this week. It’s the last week of a 10-week program that brought together researchers from information theory and CS theory for workshops on various themes such as distributed computation, inference, lower bounds, inequalities, and security/privacy. The main organizers were Bobak Nazer, Aslan Tchamkerten, Anup Rao, and Mark Braverman. The last two weeks are on Privacy and Security: I helped organize these two weeks with Prakash Narayan, Salil Vadhan, Aaron Roth, and Vinod Vaikuntanathan.

Due to teaching and ICASSP, I missed last week, but am here for this week, for which the sub-topics are secure multiparty computation and differential privacy. I’ll try to blog about the workshop since I failed to blog at all about ITA, CISS, or ICASSP. The structure of the workshop was to have 4 tutorials (two per week) and then a set of hopefully related talks. The first week had tutorials on pseudorandomness and information-theoretic secrecy.

The second week of the workshop kicked off with a tutorial from Yuval Ishai and Manoj Prabhakaran on secure multiparty computation (MPC). Yuval gave an abbreviated version/update of his tutorial from the Simons Institute (pt1/pt2) that set up the basic framework and language around MPC: k parties with inputs x_1, x_2, \ldots, x_k want to exchange messages over secure point-to-point channels to implement a functionality (evaluate a function) f(x_1, x_2, \ldots, x_k) such that they successfully learn the output of the function but don’t learn anything additional about each other’s inputs. There is a landscape of definitions within this general framework: some parties could collude, behave dishonestly with respect to the protocol, and so on. The guarantees could be exact (in the real/ideal paradigm, in which you compare the real system with a simulated system), statistical (the distribution in the real system is close in total variation distance to an ideal evaluation), or computational (some notion of indistinguishability). The setup became clearer when he described a two-party example with a “trusted dealer” who gives the parties some correlated random bits, which they can use to randomly shift the truth table/evaluation of f(x_1, x_2) and guarantee both correctness and security.
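To make the dealer idea concrete, here is a minimal sketch (in Python) of a one-time truth-table protocol in this spirit, assuming honest-but-curious parties and a small input domain; the function f, the domain sizes, and the names are illustrative rather than taken from the talk. The dealer hands each party a random shift and an additive (XOR) share of a cyclically shifted truth table; online, the parties exchange only masked inputs and single table entries.

```python
import secrets

def dealer(f, nx, ny):
    """Trusted dealer: correlated randomness for one evaluation of f.

    Returns Alice's part (shift r, table share M_A) and Bob's part
    (shift s, table share M_B), where for all (u, v):
        M_A[u][v] XOR M_B[u][v] = f((u - r) % nx, (v - s) % ny).
    Here f maps {0,...,nx-1} x {0,...,ny-1} to {0,1}.
    """
    r, s = secrets.randbelow(nx), secrets.randbelow(ny)
    M_A = [[secrets.randbelow(2) for _ in range(ny)] for _ in range(nx)]
    M_B = [[M_A[u][v] ^ f((u - r) % nx, (v - s) % ny) for v in range(ny)]
           for u in range(nx)]
    return (r, M_A), (s, M_B)

def evaluate(f, x, y, nx=2, ny=2):
    """One online run: Alice holds x, Bob holds y; both learn f(x, y)."""
    (r, M_A), (s, M_B) = dealer(f, nx, ny)
    u = (x + r) % nx  # Alice -> Bob: her input, masked by the dealer's shift
    v = (y + s) % ny  # Bob -> Alice: his input, masked likewise
    # Each party reveals its share of entry (u, v); the XOR is f(x, y).
    return M_A[u][v] ^ M_B[u][v]

AND = lambda a, b: a & b
assert all(evaluate(AND, a, b) == (a & b) for a in (0, 1) for b in (0, 1))
```

Since u and v are uniformly distributed no matter what the inputs are, and each party sees only one entry of the other’s uniformly random share, the transcript reveals nothing beyond f(x, y).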

Manoj, on the other hand, talked about notions of reduction between secure computations: given a protocol which evaluates f, can you simulate/compute g using calls to f? How many calls do you need? This gives a notion of the complexity or rate of one function in terms of another. For example, can Alice and Bob simulate a BEC using calls to an oblivious transfer (OT) protocol? What about vice versa? What about using a BSC? These problems seem like toy channel problems from an information theory perspective, but they are fundamental building blocks when thinking about secure computation. As I discussed with Hoeteck Wee today, in information theory we often gain intuition from continuous or large/general alphabet settings, whereas cryptographic arguments/bounds come from considering circuit complexity: ideas we don’t think about much in IT, since we don’t usually care about computational complexity or implementation.
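As a toy illustration of one direction of such a reduction (my own example, not necessarily one from the talk), here is a sketch of how honest-but-curious parties could simulate a BEC with erasure probability 1/2 from a single call to an idealized OT; the ideal_ot helper stands in for a real OT protocol.

```python
import random

def ideal_ot(m0, m1, c):
    """Idealized 1-out-of-2 oblivious transfer: the receiver, holding a
    choice bit c, learns m_c; the sender learns nothing about c."""
    return m1 if c else m0

def bec_from_ot(b):
    """Simulate one use of a BEC with erasure probability 1/2.

    Alice (the sender) offers her bit and an erasure symbol; Bob picks
    his OT choice bit uniformly at random, so he receives b or '?' with
    equal probability, and Alice never learns which one he got.
    """
    c = random.randint(0, 1)     # Bob's private coin flip
    return ideal_ot(b, '?', c)   # '?' plays the role of the erasure
```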

Huijia (Rachel) Lin gave an introduction to zero-knowledge proofs and proof systems: a verifier wants to know if a statement X is true and can ask queries of a prover P, which has some evidence w that it wants to keep secret. For example, the statement might be “the number y is a perfect square” and the evidence might be an \alpha such that y = \alpha^2 \mod n. The prover doesn’t want to reveal w = \alpha, but instead should convince the verifier that such an \alpha exists. She gave a protocol for this before turning to a more complicated statement, like proving that a graph has a Hamiltonian cycle. She then talked about using commitment schemes, at which point I rather lost the thread, since I’m not as familiar with these cryptographic constructions. I probably should have asked more questions, so it was my loss.
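Here is a sketch of the classic interactive proof for this square statement (the standard sigma-protocol round, repeated; the modulus and witness at the bottom are toy values). The intuition for zero knowledge: in each round the verifier sees either a random square root of a or a random square root of a y, either of which it could have simulated on its own without ever knowing \alpha.

```python
import math
import secrets

def rand_unit(n):
    """Uniformly random invertible residue mod n."""
    while True:
        r = secrets.randbelow(n)
        if r > 0 and math.gcd(r, n) == 1:
            return r

def prove_square(n, y, alpha, rounds=40):
    """Interactive proof that the prover knows alpha with
    alpha^2 = y (mod n), revealing nothing else about alpha.
    Each round has soundness error 1/2, so a cheating prover
    survives all `rounds` rounds with probability 2**(-rounds)."""
    for _ in range(rounds):
        r = rand_unit(n)
        a = (r * r) % n                  # prover commits to a random square
        b = secrets.randbelow(2)         # verifier's random challenge bit
        z = (r * pow(alpha, b, n)) % n   # response: r if b=0, r*alpha if b=1
        # Verifier's check: z^2 = a * y^b (mod n) holds in both cases.
        if (z * z) % n != (a * pow(y, b, n)) % n:
            return False
    return True

n, alpha = 77, 10                 # toy modulus and witness (gcd(10, 77) = 1)
y = (alpha * alpha) % n           # the public statement: y is a square mod n
assert prove_square(n, y, alpha)
```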

Daniel Wichs discussed two problems he called “multi-key” and “spooky” fully-homomorphic encryption (FHE). The idea in multi-key FHE is that you have N users who encrypt values \{ x_i : i \in [N] \} with their individual public keys and upload them to a server. Someone with access to the server wants to be able to decode only a function f(x_1, x_2, \ldots, x_N) using the combined private keys of all the users. In “spooky” FHE, you have N decoders, each with one of the private keys, who want to decode values \{ y_i : i \in [N] \} which are functions of all of the encoded data. A simple example of this is when y_1 \oplus y_2 = x_1 \wedge x_2: that is, the XOR of the outputs is equal to the AND of the inputs. This generalizes to the XOR of multiple outputs being some function of the inputs, something he called additive function sharing. He then presented schemes for these two problems based on the “learning with errors” (LWE) problem, following Gentry, Sahai, and Waters, which I would apparently have to read to really understand the schemes. It’s some sort of linear algebra thing over \mathbb{Z}_q. Perhaps there are some connections to linear block codes or network coding to be exploited here.
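To be clear about what the AND example asks for, here is a toy sketch of the target correlation only, not the encryption scheme: in actual spooky FHE each decoder would have to compute its share locally from the two ciphertexts and its own secret key, which is exactly the hard part.

```python
import secrets

def additive_shares_of_and(x1, x2):
    """Target correlation for the spooky example over Z_2: outputs
    (y1, y2) with y1 XOR y2 = x1 AND x2, each share marginally uniform."""
    y1 = secrets.randbelow(2)   # a uniformly random bit
    y2 = y1 ^ (x1 & x2)         # forces y1 XOR y2 = x1 AND x2
    return y1, y2

for x1 in (0, 1):
    for x2 in (0, 1):
        y1, y2 = additive_shares_of_and(x1, x2)
        assert y1 ^ y2 == (x1 & x2)
```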

Allerton 2014: privately charging your information odometer while realizing your group has pretenders

Here are a few papers that I saw at Allerton — more to come later.

Group Testing with Unreliable Elements
Arya Mazumdar, Soheil Mohajer
This was a generalization of the group testing problem: items are either positive or null, and can be tested in groups such that if any element of the group is positive, the whole group will test positive. The item states can be thought of as a binary column vector \mathbf{x}, and each test corresponds to a row of a matrix A: the (i,j)-th entry of A is 1 if the j-th item is part of the i-th test. The multiplication A \mathbf{x} is taken using Boolean AND and OR. The twist in this paper is that they consider a situation where up to \tau elements can “pretend” to be positive in each test, with a possibly different set of pretenders in each test. This is different from “noisy group testing,” which was considered previously and is more like observing A \mathbf{x} + \mathrm{noise}. They show achievable rates for detecting the positives using random coding methods (i.e., a random A matrix). There was some material at the end about the non-i.i.d. case, but my notes were sketchy at that point.
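A quick simulation sketch of this test model, with the pretenders chosen uniformly at random as a stand-in for what could be an adversarial choice; all parameter values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_tests(A, x, tau):
    """Outcomes of group tests with up to tau 'pretenders' per test.

    A: (m, n) 0/1 matrix; row i gives the items included in test i.
    x: length-n 0/1 vector of true item states (1 = positive).
    In each test, tau items drawn afresh pretend to be positive if
    they are included in that test (an adversary could choose them
    worse; this sketch just picks them uniformly at random).
    """
    m, n = A.shape
    y = np.zeros(m, dtype=int)
    for i in range(m):
        active = x.copy()
        active[rng.choice(n, size=tau, replace=False)] = 1  # this test's pretenders
        y[i] = int(np.any(A[i] & active))                   # Boolean OR over the group
    return y

n, m, k, tau = 100, 40, 3, 2          # items, tests, positives, pretenders
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1
A = rng.integers(0, 2, size=(m, n))   # random (i.i.d. Bernoulli) test design
y = run_tests(A, x, tau)
```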

The Minimal Realization Problems for Hidden Markov Models
Qingqing Huang, Munther Dahleh, Rong Ge, Sham Kakade
The realization problem for an HMM is this: given the exact joint probability distribution on length N strings (Y_1, Y_2, \ldots, Y_N) from an HMM, can we create a multilinear system whose outputs have the same joint distribution? A multilinear system looks like this:
w_{t+1} = A^{(\ell_t)} w_t, \quad w_t \in \mathbb{R}^k, \quad w_0 = v
z_t = u^{\top} w_t
We think of A^{(\ell_t)} \in \mathbb{R}^{k \times k} as the transition matrix for state \ell_t (e.g. a stochastic matrix), and we want the output to be z_t. This is sort of at the nexus of control/Markov chains and HMMs, and uses some of the tensor ideas that are getting hot in machine learning. As the abstract puts it, the result is that they can efficiently construct realizations if N > \max(4 \log_d(k) + 1, 3), where d is the size of the output alphabet and k is the minimal order of the realization.
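For concreteness, here is a sketch (with made-up toy parameters) of how such a multilinear system reproduces HMM string probabilities: taking A^{(\ell)} = T \, \mathrm{diag}(P(y = \ell \mid \mathrm{state})) for a column-stochastic transition matrix T, the product u^{\top} A^{(\ell_N)} \cdots A^{(\ell_1)} v with u = \mathbf{1} and v the initial distribution is exactly \Pr(y_{1:N} = (\ell_1, \ldots, \ell_N)).

```python
import numpy as np

def string_prob(u, As, v, string):
    """Run the multilinear system w_{t+1} = A^{(l_t)} w_t from w_0 = v
    along an output string and read out u^T w at the end."""
    w = v
    for l in string:
        w = As[l] @ w
    return float(u @ w)

# Toy 2-state HMM: T is column-stochastic, O[l] holds the emission
# probabilities of symbol l in each state.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
O = {0: np.diag([0.7, 0.1]), 1: np.diag([0.3, 0.9])}
As = {l: T @ O[l] for l in (0, 1)}   # A^{(l)} = T diag(P(y = l | state))
v = np.array([0.5, 0.5])             # initial state distribution
u = np.ones(2)                       # summing functional

# Sanity check: the probabilities of all length-2 strings sum to 1.
total = sum(string_prob(u, As, v, (a, b)) for a in (0, 1) for b in (0, 1))
assert abs(total - 1.0) < 1e-12
```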

Differentially Private Distributed Protocol for Electric Vehicle Charging
Shuo Han, Ufuk Topcu, George Pappas
This paper was about a central aggregator trying to get multiple electric vehicle users to report their charging needs. The aggregator solves an optimization problem over the charging rate profiles r_i:
\min_{\{r_i\}} U\left( \sum_i r_i \right)
\mathrm{s.t.}\ 0 \le r_i \le \bar{r}_i
\qquad \mathbf{1}^{\top} r_i = z_i
They focus on the case where the charge demands z_i are private but the charging rates r_i can be public. This is in contrast to the mechanism design literature popularized by Aaron Roth and collaborators. They analyze a projected gradient descent procedure with Laplace noise added for differential privacy. It’s a stochastic gradient method, but it seems to converge in just a few time steps; the number of iterations clearly affects the privacy parameter, but for the examples they showed, only a handful of steps were needed.
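Here is a hedged sketch of the flavor of such a scheme, not the paper’s exact algorithm: the aggregator only ever sees a Laplace-noised aggregate demand profile, broadcasts the gradient of a toy quadratic utility, and each user takes a projected gradient step locally. The utility, step size, and noise scale below are illustrative, and the sensitivity calibration needed for a real privacy guarantee is glossed over.

```python
import numpy as np

rng = np.random.default_rng(1)

def project_box_sum(r, z, r_max, iters=50):
    """Euclidean projection onto {r : 0 <= r <= r_max, sum(r) = z},
    by bisection on the dual variable of the sum constraint."""
    lo, hi = float(np.min(r - r_max)), float(np.max(r))
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.clip(r - lam, 0.0, r_max).sum() > z:
            lo = lam
        else:
            hi = lam
    return np.clip(r - 0.5 * (lo + hi), 0.0, r_max)

def private_charging(z, r_max, T, eps_per_step, steps=10, eta=0.1):
    """Toy private charging allocation: U(s) = ||s||^2 / 2 on the
    aggregate profile s = sum_i r_i, so grad U(s) = s. Each round the
    aggregator sees only a noisy aggregate and broadcasts the gradient;
    users update and re-project their own profiles locally."""
    N = len(z)
    r = np.array([np.full(T, z[i] / T) for i in range(N)])  # feasible start
    for _ in range(steps):
        noisy_agg = r.sum(axis=0) + rng.laplace(
            scale=np.max(r_max) / eps_per_step, size=T)     # illustrative DP noise
        for i in range(N):
            r[i] = project_box_sum(r[i] - eta * noisy_agg, z[i], r_max[i])
    return r

# Three users, four time slots: total demands z_i, per-slot caps r_max_i.
r = private_charging(z=np.array([2.0, 1.0, 3.0]),
                     r_max=np.array([1.0, 0.5, 1.5]),
                     T=4, eps_per_step=1.0)
print(r.sum(axis=1))   # each row sums to that user's demand z_i
```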

An Interactive Information Odometer and Applications
Mark Braverman and Omri Weinstein
This was a communication complexity talk about trying to maintain an estimate of the information complexity of a protocol \Pi for computing a function f(X,Y) interactively, where Alice has X, Bob has Y, and (X,Y) \sim \mu:
\mathsf{IC}_{\mu}(\Pi) = I( \Pi ; X | Y ) + I( \Pi ; Y | X)
Previous work has shown that the (normalized) communication complexity of computing f on n-tuples of variables \epsilon-accurately approaches the minimum information complexity over all protocols for computing f once, \epsilon-accurately. At least I think this is what was shown previously; it’s not quite my area. The result in this paper is a strong converse for this, and the goal is to maintain an online estimate of \mathsf{IC} during the protocol. The mechanism seems a bit like communication with feedback à la Horstein, but I got a bit confused as to what was going on: basically, it seems that Alice and Bob need to be able to agree on the estimate during the protocol and use a few extra bits added to the communication to maintain it. If I had an extra n hours in the week I would read up more about this. Maybe on my next plane ride…

Thresholds in random integer programs

Karthik Chandrasekaran gave a talk at TTI today on the feasibility of integer programs. Given a polytope defined by the inequalities A x \le b in dimension n, can we say whether the polytope contains an integer point? In general, the problem is NP-hard, but efficient algorithms are known for special sub-cases. The goal in this talk was to understand if random instances of the problem are also hard.

The first thing to figure out is: what do we mean by a random instance? Consider a point x_0 and the sphere S of radius R around x_0. Now draw m vectors x_1, x_2, \ldots, x_m uniformly from the surface of the unit sphere, and consider the polytope whose faces are tangent to S at the points x_0 + R x_i for i = 1, 2, \ldots, m; that is, x_i is the outward normal vector of the i-th face. This defines a random polytope whose distribution depends on the parameters (n, m, x_0, R). The goal is to find how R should scale with n and m so that, with high probability, the polytope contains an integer point for all x_0.

If the radius R is too small, then this will be hard, because guaranteeing that an integer point is in the interior for all x_0 becomes difficult. If R = \sqrt{n}/2, then we will always have an integer point: rounding each coordinate of x_0 gives an integer point at distance at most \sqrt{n}/2, and the ball of radius R around x_0 is contained in the polytope. What should the real scaling for R look like?
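A small experiment along these lines is easy to set up: sample the random polytope and check the sufficient condition that rounding x_0 lands inside (this is only a sufficient condition for an integer point, and all parameters below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

def rounded_center_inside(n, m, R, x0):
    """Sample the random polytope {x : a_i . (x - x0) <= R, i = 1..m},
    with each a_i uniform on the unit sphere (face i is tangent to the
    radius-R ball at x0 + R a_i), and check whether rounding x0 to the
    nearest integer point lands inside."""
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=1, keepdims=True)  # rows uniform on the sphere
    z = np.round(x0)                               # candidate integer point
    return bool(np.all(A @ (z - x0) <= R))

n, m = 50, 200
x0 = rng.uniform(0.0, 1.0, size=n)                 # a generic center
for R in (0.5, 1.0, 2.0, np.sqrt(n) / 2):
    hits = np.mean([rounded_center_inside(n, m, R, x0) for _ in range(20)])
    print(f"R = {R:.2f}: rounded center inside in {hits:.0%} of draws")
```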

The simple form of the main result is that if n \le m \le 2^{\sqrt{n}} and R \ge c_1 \sqrt{\log(m/n)}, then with high probability the polytope will have an integer point for every x_0. Conversely, if R \le c_0 \sqrt{\log(m/n)}, then with high probability the polytope “centered” at x_0 = (1/2, 1/2, \ldots, 1/2) will not contain an integer point. Note that if the number of faces is linear in n, then a constant radius is sufficient. I’m trying to square that with the intuition that infinite-dimensional spheres have vanishing volume and expanding surface area, but I think the fact that the polytope is “pointy” means that L_1 geometry gives better intuition.

To prove these bounds on R, they make a connection to the discrepancy of random Gaussian matrices (which approximate the random unit-vector row matrices). The paper is on arXiv for those who want to take a look.