Election auditing with convex optimization

A month or two ago I was on the shuttle going to campus and ended up talking with Steve Checkoway about his work. He described a problem in election auditing that sounded pretty interesting: given an election between, say, two candidates, and the ability to sample individual ballots at random and compare how they were counted versus how they should have been counted, how should we design an auditing algorithm with the following property: if the auditor certifies that the reported winner is the true winner, then the probability that this certification is wrong is no larger than \alpha?

After a couple more meetings we came up with a pretty simple method based on minimizing a Kullback-Leibler divergence. This minimum KL divergence gives a bound on the error probability of our algorithm, which we can then compare to \alpha to obtain the guarantee. To do the minimization we need to turn to convex optimization tools (which might be a subject for another post; numerically minimizing the KL divergence requires choosing your solving algorithm carefully).
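To give a flavor of what this kind of minimization looks like, here is a minimal sketch (not the exact formulation from our paper) of minimizing a KL divergence over a constrained set of distributions using CVXPY. The vote shares and the "true winner differs from the reported winner" constraint below are placeholder assumptions purely for illustration.

```python
import cvxpy as cp
import numpy as np

# Reported distribution over ballot interpretations (assumed example numbers).
p = np.array([0.55, 0.45])

# Hypothetical "true" distribution we optimize over.
q = cp.Variable(2, nonneg=True)

# cp.kl_div(q, p) = q*log(q/p) - q + p elementwise; summed over a pair of
# probability vectors this equals D(q || p).
objective = cp.Minimize(cp.sum(cp.kl_div(q, p)))

constraints = [
    cp.sum(q) == 1,   # q must be a probability distribution
    q[1] >= q[0],     # placeholder constraint: candidate 2 actually wins
]

prob = cp.Problem(objective, constraints)
prob.solve()

print("minimum KL divergence:", prob.value)
print("worst-case distribution q:", q.value)
```

The resulting minimum divergence is the quantity that feeds into a Chernoff-style exponential bound on the audit's error probability, which is then compared against \alpha.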

We wrote up a paper on it with Steve’s advisor Hovav Shacham and submitted it to the 2010 Electronic Voting Technology Workshop, where it was accepted. We’re making some major-ish revisions (mostly in the simulations) but I’m pretty excited about it. There are some fun analysis questions lurking in the corners that we didn’t have time to get to yet, but I’m looking forward to poking at them after ISIT.

Speaking of which, ISIT is coming up — not sure how much I will blog, but there will be something at least.
