EVT / WOTE 2011

This week I attended EVT/WOTE ’11, a workshop on voting, technology, and trustworthiness co-located with the USENIX Security conference. I phased in and out of the workshop, which had a number of different session themes:

  • “E2E”, or end-to-end verifiable voting
  • empirical studies of real elections that used direct-recording electronic (DRE) voting machines, forensics for those machines, and their impact on the reliability of election outcomes
  • studies of accessibility issues, either to polling places or for different voting technologies
  • new proposals for voting systems
  • auditing and metrics for existing election systems

I pretty much work on the last one, so while some of the panels were quite interesting, some technical talks were a little beyond me. Dana DeBeauvoir, the Travis County Clerk (where Austin is), gave a keynote about how she thinks technologists and elections officials can work together, and rather bravely put forward a proposal for an electronic voting system to be used at the county level. There were lots of comments about that, of course.

Theron Ji gave a nice talk about how write-in marks are (or are not) properly counted by optical scan machines. People often forget to fill in the bubble saying they are doing a write-in, which can have pretty disastrous effects, as San Diego politician Donna Frye found out. Gillian Piner reported on a survey she did of vision-impaired voters, asking them what they want for accessibility technologies. Imagine that, asking them what they want!

The two talks of most interest to me were by David Cary and Tom Magrino, both on the margin of victory in IRV (instant-runoff voting) elections. Cary presented a method for estimating the margin using only the tabulation of first choices in each round, whereas Magrino presented an exact calculation that involves solving many integer linear programs. The exact approach scales poorly (exponentially) with the number of candidates, but for the kinds of IRV elections we see in the US it is definitely doable. Margin calculations are important for developing auditing algorithms (on which I will write more once my paper is done). Philip Stark gave a plenary lecture on auditing, which I missed part of due to a conflict with a parallel workshop.
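To make concrete what tabulating first choices round by round looks like, here is a minimal Python sketch of IRV counting. The ballot representation, the elimination rule, and the handling of exhausted ballots are my own simplifying assumptions for illustration; this is not the method from either paper, and it does not compute a margin.

```python
# Minimal IRV tabulation sketch: ballots are ranked lists of candidate names.
# Assumptions for illustration only: ties broken arbitrarily, exhausted ballots
# simply drop out, and counting stops at a strict majority.
from collections import Counter

def irv_rounds(ballots):
    """Return a list of per-round first-choice tallies."""
    remaining = {c for b in ballots for c in b}
    rounds = []
    while remaining:
        # Count the highest-ranked still-standing candidate on each ballot.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        rounds.append(dict(tally))
        total = sum(tally.values())
        leader, leader_votes = tally.most_common(1)[0]
        if leader_votes * 2 > total or len(remaining) == 1:
            break  # majority winner, or last candidate standing
        # Eliminate the candidate with the fewest first-choice votes.
        loser = min(remaining, key=lambda c: tally.get(c, 0))
        remaining.discard(loser)
    return rounds

ballots = [
    ["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
    ["B", "A", "C"], ["C", "B", "A"],
]
for i, tally in enumerate(irv_rounds(ballots), 1):
    print(f"Round {i}: {tally}")
```

On the toy ballots above this prints two rounds: C is eliminated first, its ballot transfers to B, and B then has a majority.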

There were also some interesting panels. The most contentious one was on internet voting; I missed much of it, but the discussion ran over by an hour, so I think I got the gist. Some people are afraid of voting over the internet, but the crypto people think it can be made safe. The panel on the Sarasota House race in 2006 tried to home in on the reason for the problems with undervotes in that contest. A lot can be explained by the design of the ballot, proving again that user interface and graphic design really matter!

The rump session was, as always, a mixture of amusing and technical and dry. The real highlight was probably David Bismark, who seems to have antagonized someone who has a new voting system involving moon projections. Wow.

California Elections Code on auditing

From Section 15360:

(a) During the official canvass of every election in which a voting system is used, the official conducting the election shall conduct a public manual tally of the ballots tabulated by those devices, including vote by mail voters’ ballots, cast in 1 percent of the precincts chosen at random by the elections official. If 1 percent of the precincts is less than one whole precinct, the tally shall be conducted in one precinct chosen at random by the elections official.

In addition to the 1 percent manual tally, the elections official shall, for each race not included in the initial group of precincts, count one additional precinct. The manual tally shall apply only to the race not previously counted.

Additional precincts for the manual tally may be selected at the discretion of the elections official.

Clearly this was not written by a statistician. Counting 1 percent of precincts “chosen at random” is hardly precise, and it also doesn’t tell you much about how many ballots you will end up counting, since precincts vary a lot in size.
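To see that second point, here is a quick back-of-the-envelope sketch. The number of precincts and their sizes below are made up for illustration, not any real county’s data: sampling 1 percent of precincts fixes how many precincts you tally, but the number of ballots you actually hand-count swings with which precincts you happen to draw.

```python
# Back-of-the-envelope sketch (not the statutory procedure): how many ballots
# does a "1 percent of precincts" sample actually touch? Precinct sizes are
# invented for illustration.
import math
import random

random.seed(0)

# Hypothetical county: 800 precincts with anywhere from 200 to 2,000 ballots each.
precinct_sizes = [random.randint(200, 2000) for _ in range(800)]

sample_size = max(1, math.ceil(0.01 * len(precinct_sizes)))  # at least one precinct
for trial in range(3):
    sample = random.sample(range(len(precinct_sizes)), sample_size)
    ballots = sum(precinct_sizes[i] for i in sample)
    print(f"Trial {trial + 1}: {sample_size} precincts, {ballots} ballots to tally")
```

Run it a few times and the ballot totals jump around considerably from draw to draw, which is exactly the kind of thing a statistically designed audit would pin down in advance.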