Differentially Private ERM

Right after Memorial Day, I submitted a paper with Kamalika Chaudhuri and Claire Monteleoni to the Journal of Machine Learning Research on differential privacy and empirical risk minimization. This work looks at how to learn a classifier from training data in such a way that an adversary with access to the classifier and full knowledge of all but one of the training data points would still have a difficult time inferring the values of the last data point.
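
To make the setup concrete, here is a minimal sketch of output perturbation, one standard route to ε-differential privacy for regularized ERM: train as usual, then add noise calibrated to how much the minimizer can change when a single data point is swapped. This is just an illustration I cooked up for this post, not code from the paper; the noise calibration below assumes feature vectors with norm at most 1 and a 1-Lipschitz loss (logistic loss qualifies).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lam, iters=2000, lr=0.5):
    """L2-regularized logistic regression by gradient descent.

    Labels y are in {-1, +1}; the logistic loss is 1-Lipschitz,
    which the sensitivity bound below relies on.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        # gradient of (1/n) sum_i log(1 + exp(-y_i x_i.w)) + (lam/2) ||w||^2
        grad = -(X * (y * sigmoid(-margins))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def private_release(w, n, lam, eps, rng):
    """Output perturbation: add noise with density proportional to
    exp(-(n * lam * eps / 2) * ||b||), sampled as a uniformly random
    direction with Gamma(d, 2 / (n * lam * eps)) magnitude.

    If ||x_i|| <= 1 and the loss is 1-Lipschitz, the minimizer of the
    regularized objective moves by at most 2 / (n * lam) in L2 norm
    when one data point changes, so this release is eps-DP.
    """
    d = w.shape[0]
    beta = n * lam * eps / 2.0
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=d, scale=1.0 / beta)
    return w + magnitude * direction

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x_i|| <= 1
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

w = train_logreg(X, y, lam=0.1)
w_private = private_release(w, n, lam=0.1, eps=1.0, rng=rng)
```

Note the tradeoff built into the noise scale: a larger privacy budget ε, more data n, or stronger regularization λ all shrink the noise, while small ε or small n force a noisier (less useful) classifier.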

Ben Rubinstein has a nice post on the differential privacy model, and Adam Smith has more to say on sample secrecy. Adam and his colleagues (Dan Kifer and Abhradeep Guha Thakurta) gave us useful feedback on an earlier draft, which prompted me to learn some new facts about matrix perturbations, symmetric functions, and eigenvalues. Perhaps I'll blog about this a bit more in the future, but I seem to be going in fits and starts here, so I don't want to promise anything.

On a related note, arXiv has dramatically changed its submission interface, and it is soooooo much better than before.
