Right after Memorial Day, I submitted a paper with Kamalika Chaudhuri and Claire Monteleoni to the Journal of Machine Learning Research on differential privacy and empirical risk minimization. This work looks at how to learn a classifier from training data in such a way that an adversary with access to the classifier and full knowledge of all but one of the training points would still have a hard time inferring the value of the remaining point.
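For readers who haven't seen the model before, the standard definition of differential privacy (due to Dwork and coauthors) captures this guarantee formally. A sketch, with $M$ the randomized learning algorithm and $\varepsilon$ the privacy parameter:

$$\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S]$$

for every measurable set of outputs $S$ and every pair of datasets $D, D'$ that differ in a single training point. Small $\varepsilon$ means the classifier's distribution barely changes when one point is swapped out, which is exactly what frustrates the adversary described above.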
Ben Rubenstein has a nice post on the differential privacy model, and Adam Smith has more to say on sample secrecy. Adam and his colleagues (Dan Kifer and Abhradeep Guha Thakurta) gave us useful feedback on an earlier draft, which prompted me to learn some new facts about matrix perturbations, symmetric functions, and eigenvalues. Perhaps I’ll blog about this a bit more in the future, but I seem to be going in fits and starts here, so I don’t want to promise anything.
On a related note, arXiv has dramatically changed its interface for submissions, and it is soooooo much better than before.