Maybe more like “paper whenever I feel like it.” This is a post on a now not-so-recent arXiv preprint by Quan Geng and Pramod Viswanath on constructing the best mechanism (output distribution) for guaranteeing differential privacy under a utility constraint.

For those readers not quite familiar with differential privacy, the setup may seem a little enigmatic. The idea is that there are individuals represented by points in a space $\mathcal{X}$. The objective is to compute a real-valued function $q(D)$ of the data points $D = (x_1, x_2, \ldots, x_n)$ such that from the output it will be difficult to infer any particular value of the input. We do this by making the computation randomized, and so this is a property of the conditional distribution $P(Y \mid D)$, where $D$ is the data set. In particular, for $D$ and $D'$ differing in a single point (denoted $D \sim D'$), we want the hypothesis test between the two, given the output, to be difficult:

$\left| \log \frac{p(y \mid D)}{p(y \mid D')} \right| \le \epsilon.$

Since the log likelihood ratio is small, the hypothesis test is difficult, and an adversary would have a hard time inferring any individual’s data, *even if the other data points are revealed*. I’m being a bit loose here in saying that the output $Y$ has a density, but you can undo the logs and so on to get a better statement:

$P(Y \in S \mid D) \le e^{\epsilon} \, P(Y \in S \mid D')$

for any measurable set $S$. We call an output distribution satisfying this an $\epsilon$-differentially private mechanism.

The focus of the paper is on questioning the ubiquity of the Laplace distribution in guaranteeing differential privacy. In particular, given a real-valued query function $q$ that operates on data sets of points from a domain $\mathcal{X}$, it’s not clear that producing $Y = q(D) + X$, where $X$ is Laplace-distributed noise, is the best differentially-private approximation to $q(D)$. The contribution of this paper is to show that if there is a utility function that we want the output to maximize, then the optimal noise distribution may be quite far from Laplace: instead it is often staircase-shaped.
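As a reference point, here is a minimal sketch of the standard Laplace mechanism the paper re-examines (my own illustration in Python, not code from the paper; the function name and interface are mine):

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng=None):
    """Release query_value + Lap(sensitivity / epsilon) noise.

    Scale Delta / epsilon makes the density ratio between outputs on
    neighboring data sets at most exp(epsilon), which is exactly the
    epsilon-differential-privacy guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    return query_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1 (changing one person
# moves the count by at most 1), so epsilon = 0.5 means Lap(2) noise.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

The question the paper asks is whether this Laplace noise is actually the loss-minimizing choice, not just a convenient one.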

More formally, they consider mechanisms of the form $Y = q(D) + X$ and ask what the optimal distribution of $X$ is to minimize an expected loss:

$\mathbb{E}[L(X)] = \int L(x) \, dP(x),$

where $P$ is the distribution of $X$ and $L(\cdot)$ is some loss on $X$. We want to minimize this subject to the differential privacy constraints:

$P(X \in S) \le e^{\epsilon} \, P(X \in S + d)$

for all measurable sets $S$ and all offsets $|d| \le \Delta$, where $\Delta$ is the sensitivity of the query function $q$:

$\Delta = \max_{D \sim D'} |q(D) - q(D')|.$

So this really boils down to a constrained optimization program reminiscent of those we sometimes find in information theory, like finding the capacity-achieving input distribution.
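To make the sensitivity definition concrete, here is a tiny brute-force check on a toy query (my own illustration; the `sensitivity` helper and the example query are hypothetical, not from the paper):

```python
from itertools import product

def sensitivity(q, domain, n):
    """Brute-force the global sensitivity of q over data sets of size n
    drawn from a finite domain, where neighboring data sets differ in
    exactly one entry."""
    delta = 0.0
    for data in product(domain, repeat=n):
        for i in range(n):
            for v in domain:
                neighbor = list(data)
                neighbor[i] = v  # change a single individual's value
                delta = max(delta, abs(q(list(data)) - q(neighbor)))
    return delta

# The mean of 3 values in {0, 1} moves by at most 1/3 when one entry changes.
delta_mean = sensitivity(lambda xs: sum(xs) / len(xs), domain=[0, 1], n=3)
```

This is only feasible for toy domains, of course; in practice $\Delta$ is derived analytically for the query at hand.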

The main point of the paper is that the optimal noise distribution is not Laplace, but staircase-shaped.

And as $\epsilon \to 0$ the Laplace distribution does become optimal, whereas as $\epsilon \to \infty$ it is not. Since choosing an appropriate $\epsilon$ in practical settings is an open question, the “moderate $\epsilon$” regime is interesting.

How does one prove this result? Basically you have to show that the optimal distribution satisfies various properties. They also need to assume that the cost function $L(\cdot)$ is symmetric (not always the case, but usually true) and that it cannot increase too quickly (which is reasonable in many cases). The main approach is to take any differentially private output density and instead look at piecewise-constant densities as approximations. This is noncontroversial; the same trick is used in functional analysis. The real key insight is that the steps are of width $\Delta$. This is done in two parts: first they discretize to finer and finer subdivisions of width $\Delta/2^k$ of the intervals, and then they show that as $k \to \infty$ the optimal density is a step function. This is done in Lemmas 23 and 24 (!).
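To get a feel for the staircase shape, here is a short numeric sketch (my own illustration in Python, not the authors' code; the two-level-per-period density, the normalizer, and the $L_1$-optimal step fraction follow the paper's construction as I read it, so treat the constants as assumptions):

```python
import numpy as np

def staircase_pdf(x, epsilon, delta, gamma):
    """Staircase-shaped noise density: within each period of width
    delta the density takes one of two levels, and successive periods
    decay geometrically by exp(-epsilon).  The constant a normalizes
    the density to integrate to 1."""
    a = (1 - np.exp(-epsilon)) / (
        2 * delta * (gamma + np.exp(-epsilon) * (1 - gamma)))
    x = np.abs(x)                 # the density is symmetric
    k = np.floor(x / delta)       # which period |x| falls in
    frac = x - k * delta          # position within that period
    level = np.where(frac < gamma * delta,
                     np.exp(-k * epsilon),
                     np.exp(-(k + 1) * epsilon))
    return a * level

epsilon, delta = 1.0, 1.0
gamma = 1.0 / (1.0 + np.exp(epsilon / 2))  # L1-optimal step fraction, per the paper

# Sanity check: total mass should be (numerically) 1; the privacy
# ratio f(x) <= exp(epsilon) * f(x + d) should hold for |d| <= delta.
xs = np.linspace(-20.0, 20.0, 400001)
mass = float(np.sum(staircase_pdf(xs, epsilon, delta, gamma)) * (xs[1] - xs[0]))
```

Note that the step boundaries recur with period exactly $\Delta$, so any shift of size at most $\Delta$ crosses at most one boundary and the density ratio never exceeds $e^{\epsilon}$.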

There are more extensions and examples in the paper, but it’s worth a skim if you have a passing interest in differentially private mechanisms and a deeper read if you want to get some insights into what kind of constraints differential privacy puts on output distributions.

Very nice and interesting. Thanks for sharing. The staircase noise distribution seems like a quantized Laplacian at first shot; it must be a property of the desired utility function. I have always wondered why Laplacian (after all, we IT folks know that additive noise is not necessarily optimal for a large class of sources), but since the source statistics are swept under the carpet in DP, the optimal input-to-output distribution needs to satisfy other criteria, and maybe then additive noise models suffice.
