A distribution that appears frequently in differential privacy is the Laplace distribution. While in the scalar case we have seen that Laplace noise may not be the best, it's still the easiest example to start with. Suppose we have scalars $x_1, x_2, \ldots, x_n \in [0,1]$ and we want to compute the average
$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$
in a differentially private way. One way to do this is to release $\hat{x} = \bar{x} + Z$, where $Z$ has a Laplace distribution:
$p(z) = \frac{1}{2\lambda} \exp\left( - \frac{|z|}{\lambda} \right)$.
To see that this is differentially private, note that by changing one value of the data, the average can change by at most $\frac{1}{n}$. Let $\bar{x}$ and $\bar{x}'$ be the average of the original data and the data with one element changed. The output density in these two cases is $p(\hat{x} - \bar{x})$ and $p(\hat{x} - \bar{x}')$, so for a given output $\hat{x}$ the ratio of the two densities is
$\frac{p(\hat{x} - \bar{x})}{p(\hat{x} - \bar{x}')} = \exp\left( \frac{|\hat{x} - \bar{x}'| - |\hat{x} - \bar{x}|}{\lambda} \right) \le \exp\left( \frac{|\bar{x} - \bar{x}'|}{\lambda} \right) \le \exp\left( \frac{1}{n \lambda} \right).$
So we can see that by choosing $\lambda = \frac{1}{\epsilon n}$ we get an $\epsilon$-differentially private approximation to the average.
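A minimal sketch of this scalar mechanism in Python with numpy (the data, $n$, and $\epsilon$ below are made-up placeholders, not anything from a real system):

```python
import numpy as np

def private_average(x, epsilon, rng=None):
    """Release the average of x (values assumed to lie in [0, 1])
    with epsilon-differential privacy via the Laplace mechanism."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(x)
    lam = 1.0 / (epsilon * n)            # lambda = 1 / (epsilon * n)
    z = rng.laplace(loc=0.0, scale=lam)  # noise with density (1 / (2 lambda)) exp(-|z| / lambda)
    return float(np.mean(x) + z)

# made-up example: 1000 values in [0, 1], epsilon = 0.5
x = np.random.default_rng(0).uniform(0.0, 1.0, size=1000)
print(private_average(x, epsilon=0.5))
```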
What if we now have $n$ vectors $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ with $\|x_i\| \le 1$? Well, one candidate is to release a differentially private version of the mean by computing $\hat{x} = \bar{x} + Z$, where $Z$ has a distribution that looks Laplace-like but in higher dimensions:
$p(z) \propto \exp\left( - \frac{\|z\|}{\lambda} \right).$
Now we can do the same calculation with means $\bar{x}$ and $\bar{x}'$:
$\frac{p(\hat{x} - \bar{x})}{p(\hat{x} - \bar{x}')} = \exp\left( \frac{\|\hat{x} - \bar{x}'\| - \|\hat{x} - \bar{x}\|}{\lambda} \right) \le \exp\left( \frac{\|\bar{x} - \bar{x}'\|}{\lambda} \right).$
Now the Euclidean norm of the average vector can change by at most $\frac{2}{n}$ (by replacing $x_n$ with $-x_n$, for example), so the overall bound is $\exp\left( \frac{2}{n \lambda} \right)$, which means choosing $\lambda = \frac{2}{\epsilon n}$ we get $\epsilon$-differential privacy.
Sampling from exponentials is easy, but what about sampling from this distribution? Here's where people can fall into a trap because they are not careful about transformations of random variables. It's tempting (if you are rusty on your probability) to look at
$p(z) \propto \exp\left( - \frac{\|z\|}{\lambda} \right)$
and then say "well, the norm looks exponentially distributed and the direction is isotropic so we can just sample the norm with an exponential distribution and the uniform direction by taking i.i.d. Gaussians and normalizing them." But that's totally wrong because that is implicitly doing a change of variables without properly adjusting the density function. The correct thing to do is to change from Euclidean coordinates to spherical coordinates. We have a map $(r, \phi_1, \phi_2, \ldots, \phi_{d-1}) \mapsto z$ whose Jacobian is
$r^{d-1} \sin^{d-2}(\phi_1) \sin^{d-3}(\phi_2) \cdots \sin(\phi_{d-2})$.
Plugging this in and noting that $\|z\| = r$ we get
$p(r, \phi_1, \ldots, \phi_{d-1}) \propto r^{d-1} \exp\left( - \frac{r}{\lambda} \right) \sin^{d-2}(\phi_1) \sin^{d-3}(\phi_2) \cdots \sin(\phi_{d-2})$.
So now we can see that the distribution factorizes and indeed the radius and direction are independent. The radius is not exponentially distributed, it's Erlang with parameters $(d, 1/\lambda)$. We can generate this by taking the sum of $d$ i.i.d. exponential variables with mean $\lambda$. The direction we can pick uniformly by sampling $d$ i.i.d. Gaussians and normalizing them.
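Here is a minimal sketch of that recipe in Python with numpy; the function name and the example values of $d$, $\epsilon$, and $n$ are just illustrative:

```python
import numpy as np

def sample_high_dim_laplace(d, lam, rng=None):
    """Sample Z in R^d with density proportional to exp(-||z|| / lam):
    the radius is Erlang (sum of d i.i.d. exponentials with mean lam)
    and the direction is uniform on the unit sphere."""
    rng = rng if rng is not None else np.random.default_rng()
    radius = rng.exponential(scale=lam, size=d).sum()  # Erlang(d, 1/lam) radius
    g = rng.standard_normal(d)
    direction = g / np.linalg.norm(g)                  # uniform direction on the sphere
    return radius * direction

# made-up example: noise for a d = 20 dimensional mean with epsilon = 0.5, n = 1000
d, epsilon, n = 20, 0.5, 1000
z = sample_high_dim_laplace(d, lam=2.0 / (epsilon * n))
print(np.linalg.norm(z))
```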
In general, sampling from the distributions used in differentially private mechanisms can be complicated: for example, in our work on PCA we had to use an MCMC procedure in our experiments to sample from the distribution in our algorithm. This means we could really only approximate our algorithm in the experiments, of course. There are also places to slip up in sampling from simple-looking distributions, and I'd be willing to bet that in some implementations out there people are not sampling from the correct distribution.
(Thanks to Kamalika Chaudhuri for inspiring this post.)