EVT / WOTE 2011

This week I attended EVT/WOTE ’11, a workshop on voting, technology, and trustworthiness co-located with the USENIX Security conference. I phased in and out of the workshop, which had a number of different session themes:

  • “E2E”, or end-to-end voting
  • empirical studies of real elections and of direct-recording electronic voting machines (DREs), forensics for them, and their impact on the reliability of election outcomes
  • studies of accessibility issues, either to polling places or for different voting technologies
  • new proposals for voting systems
  • auditing and metrics for existing election systems

I pretty much work on the last one, so while some of the panels were quite interesting, some of the technical talks were a little beyond me. Dana Debeauvoir, the Travis County Clerk (where Austin is), gave a keynote about how she thinks technologists and elections officials can work together, and rather bravely put forward a proposal for an electronic voting system to be used at the county level. There were lots of comments about that, of course.

Theron Ji gave a nice talk about how write-in marks are (or are not) properly counted by optical scan machines. People often forget to fill in the bubble saying they are doing a write-in, which can have pretty disastrous effects, as San Diego politician Donna Frye found out. Gillian Piner reported on a survey she did of vision-impaired voters, asking them what they want for accessibility technologies. Imagine that, asking them what they want!

The two talks of most interest to me were by David Cary and Tom Magrino, both on the margin of victory in IRV (instant-runoff voting) elections. Cary presented a method for estimating the margin based only on the tabulation of first choices in each round, whereas Magrino presented an exact calculation that involves solving many integer linear programs. The scaling with the number of candidates is not so great (exponential), but for the kind of IRV elections we see in the US it was definitely doable. Margin calculations are important for developing auditing algorithms (on which I will write more once my paper is done). Philip Stark gave a plenary lecture on auditing, part of which I missed due to a conflict with a parallel workshop.
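
For readers who haven't seen IRV before, here is a minimal sketch of plain round-by-round IRV tabulation, the kind of per-round first-choice counts that these margin calculations start from. It is just a Python illustration with made-up ballots, not the algorithm from either paper.

    from collections import Counter

    def irv_winner(ballots):
        """Plain IRV: count first choices among remaining candidates each round;
        stop when someone has a strict majority, otherwise eliminate the candidate
        with the fewest first-choice votes (ties broken arbitrarily)."""
        remaining = {c for b in ballots for c in b}
        rounds = []
        while True:
            counts = Counter({c: 0 for c in remaining})
            for b in ballots:
                for c in b:
                    if c in remaining:  # highest-ranked candidate still standing
                        counts[c] += 1
                        break
            rounds.append(dict(counts))
            total = sum(counts.values())
            leader, top = counts.most_common(1)[0]
            if 2 * top > total or len(remaining) == 1:
                return leader, rounds
            remaining.remove(min(counts, key=counts.get))

    # Made-up ballots, each ranked from most to least preferred.
    ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
    print(irv_winner(ballots))  # C wins after B is eliminated in round 1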

There were also some interesting panels. The most contentious one was on internet voting; I missed much of it, but the discussion ran over by an hour, so I think I got the gist. Some people are afraid of voting over the internet, but the crypto people think it can be made safe. The panel on the Sarasota House race in 2006 tried to home in on the reason for the problems with undervotes in that contest. A lot can be explained by the design of the ballot, proving again that user interface and graphic design really matter!

The rump session was, as always, a mixture of amusing and technical and dry. The real highlight was probably David Bismark, who seems to have antagonized someone who has a new voting system involving moon projections. Wow.

Allerton 2010 : the only talks I’ll blog about

Hey, lookit! I’m blogging about a conference somewhat near to when the conference happened!

I’m just going to write about a few talks. This is mostly because I ended up not taking as many notes this year, but also because writing up extensive notes on talks is a bit too time consuming. I found the talks by Paul Cuff, Sriram Vishwanath, Raj Rajagopalan, and others interesting, but I took no notes on them. And of course I enjoyed the talks by my “bosses” at UCSD, Alon Orlitsky and Tara Javidi. That’s me being honest, not me trying to earn brownie points (really!).

So here are five talks that I found interesting and took some notes on.


Allerton 2010 : the sessions’ (lack of) timing

Warning: small rant below. I’m probably not as ticked off as this makes me sound.

One thing that seemed significantly off this year, compared to previous times I’ve been to Allerton, is that around 3/4 of the talks I attended went over their allotted time. Why does this happen?

For one thing, more than half of the sessions at Allerton are invited. This means that some speakers know what they are going to talk about in general, but haven’t necessarily pinned down the whole story. This is amplified by the fact that the camera-ready paper is due on the last day of the conference (the deadline was pushed back to Monday this year). For invited talks, many people have not even started writing the paper by the time they get on the plane, adding uncertainty as to what they can or should present. Little lemmas are proved hours before the deadline. It’s not unusual to make slides on the plane to the conference, but if the actual results are in flux, what are you going to put on the slides? Why, the entire kitchen sink, of course!

The actual session brings up other issues. Because people are editing their slides until the last minute, they insist on using their own laptop, causing delays as the laptops are switched, the correct display is found, and the presentation remote is set up. This is a gigantic waste of time. Almost all laptops provided by conference organizers are PCs, which can display PDF (generated by LaTeX or Keynote) and PowerPoint. Why must you use your own laptop? So the slide transitions will be oh-so pretty?

Finally, many session chairs don’t give warnings early enough and don’t enforce time constraints. Once a habit of talks running over is established, it becomes unfair to cut off one speaker if you didn’t cut off another. Naturally, speakers feel upset if someone got more time to present than they did.

What we should ask ourselves is this: is the talk for the benefit of the speaker or for the benefit of the audience?

PSA for Allerton authors who use TeXShop

The papercept website will throw an error if your final PDF is of a version lower than 1.4. If you use latex -> dvips -> ps2pdf, then you will likely get PDF 1.3.

To fix this in TeXShop, go to Preferences and select Engine. Under “Tex + dvips + distiller” enter simpdftex tex --maxpfb --distiller ps2pdf14 and you should be good to go.
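
If you want to confirm what came out, the PDF version is declared at the very start of the file. Here is a quick, TeXShop-agnostic way to peek at it (the filename is just a placeholder):

    # The first bytes of a PDF declare its version, e.g. b"%PDF-1.4".
    with open("paper.pdf", "rb") as f:
        print(f.read(8))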

More posting about Allerton later!

ITW 2010 : finishing up

Blogging conferences seems to have gone by the wayside for me, but here are some quick takes on things from ITW. It was a more coding-focused conference, so there were fewer talks of active research interest to me, but I did get to catch up and talk shop with a few people, which was nice.

Tali Kaufman (who seems to have no homepage) gave a plenary on “Local computation in codes,” which looked at fast (sublinear-time) algorithms for detecting whether a binary vector belongs to a code, and fast ways to correct single-bit errors. In particular, she was looking at these properties in the context of LDPC codes. It’s a nice place where information theory and CS theory look at the same object but with different intents.
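
As a toy illustration of the “local” flavor (my own made-up example, not a construction from the talk): if the code is given by a sparse parity-check matrix, you can spot-check a vector by sampling a few parity constraints rather than checking all of them. A genuine local tester needs soundness guarantees that this sketch does not provide.

    import random

    def spot_check(checks, x, num_samples=3):
        """Sample a few parity checks (each a list of coordinate indices)
        and verify that the binary vector x satisfies them."""
        for row in random.sample(checks, min(num_samples, len(checks))):
            if sum(x[i] for i in row) % 2 != 0:
                return False  # a sampled constraint is violated
        return True           # "looks like a codeword" so far

    # Made-up sparse parity checks on 7 bits; this vector violates the first one.
    checks = [[0, 1, 2, 4], [0, 1, 3, 5], [0, 2, 3, 6]]
    print(spot_check(checks, [1, 1, 1, 0, 0, 0, 0]))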

Ueli Maurer gave a great talk on “A Cryptographic Theory of System Indistinguishability,” which started out kind of slow and then ended up at a breakneck speed. This was a way of thinking about crypto from the perspective of systems being able to simulate each other.

Rudolf Ahlswede talked a bit about strong converses and weak converses for channels with a rather generic “time structure.” Essentially this boils down to a difference between lim inf and lim sup, and he gave a rather short proof showing that capacities exist under very mild conditions and that the additivity of capacity (e.g. for two parallel channels) may hold in some settings but not others.
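
My rough gloss of where the lim inf / lim sup distinction enters (my notation, not his): if M^*(n, \epsilon) is the largest number of messages you can send in n channel uses with error probability at most \epsilon, you can define

    \underline{C}(\epsilon) = \liminf_{n \to \infty} \frac{1}{n} \log M^*(n, \epsilon), \qquad \overline{C}(\epsilon) = \limsup_{n \to \infty} \frac{1}{n} \log M^*(n, \epsilon),

and for channels with only a generic time structure these pessimistic and optimistic quantities need not agree, which (as I understood the talk) is where the gap between weak and strong converse statements lives.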

There were lots of other good talks that I enjoyed but I didn’t take particularly good notes this time (I blame the jet lag), so I don’t have much to say here. Tomorrow is the start of Allerton, so I might take better notes for that.

ITW Dublin : historical take on polar codes

I am at ITW in Dublin, and I will write a short post or two about it. I missed most of the conference until now due to jet lag and a late arrival, but I did make it to Arikan’s plenary lecture this morning on the historical context for polar codes. It was a really nice talk about successive decoding and how it relates to polar codes. A central issue is the computational cutoff rate R_{comp}, which prevents successive decoding from reaching capacity.
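
For reference (the standard textbook definition, nothing specific to the talk): for a discrete memoryless channel W(y|x), the cutoff rate is

    R_{comp} = \max_{P} \left[ -\log \sum_{y} \Big( \sum_{x} P(x) \sqrt{W(y|x)} \Big)^{2} \right],

which for a BEC with erasure probability \epsilon works out to 1 - \log_2(1+\epsilon) bits per use, strictly below the capacity 1 - \epsilon for 0 < \epsilon < 1.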

He described Pinsker’s “concatenated” construction of convolutional encoders around a block code, which is capacity-achieving but inefficient, and Massey’s 1981 construction of codes for the quaternary erasure channel (QEC), which decomposes the QEC into two parallel BECs whose noise is correlated (you just relabel the 4 inputs with 2 bits and treat the two bits as going through parallel BECs). This is efficient and increases R_{comp}, but it is not enough to get to capacity. However, in a sense, Massey’s construction is like doing one step of polar coding, and combining this with Pinsker’s ideas starts to give the flavor of the channel polarization effect.
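
To spell out the relabeling: identify the four QEC inputs with bit pairs (b_1, b_2) \in \{0,1\}^2. A QEC with erasure probability \epsilon either delivers the symbol intact or erases it, so each bit individually sees a BEC(\epsilon), but the two erasure events are identical rather than independent. A quick back-of-the-envelope check with uniform inputs (my arithmetic, not slides from the talk), using the cutoff rate formula above:

    R_{comp}(\mathrm{QEC}) = 2 - \log_2(1 + 3\epsilon) \;\le\; 2\left[1 - \log_2(1+\epsilon)\right] = R_{comp}(\mathrm{BEC}) + R_{comp}(\mathrm{BEC}),

with strict inequality for 0 < \epsilon < 1, while both sides stay below the QEC capacity 2(1-\epsilon), consistent with the point that the split raises R_{comp} but does not reach capacity.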

Good stuff!

EVT/WOTE ’10 : Panel on India’s Electronic Voting Machine

I’m attending the…

Panel on Indian Electronic Voting Machines (EVMs)
Moderator: Joseph Lorenzo Hall, University of California, Berkeley and Princeton University
Panelists: P.V. Indiresan, Former Director, IIT-Madras; G.V.L Narasimha Rao, Citizens for Verifiability, Transparency, and Accountability in Elections, VeTA; Alok Shukla, Election Commission of India; J. Alex Halderman, University of Michigan

The first speaker was G.V.L. Narasimha Rao, who is also a blogger on the topic of elections. He is a staunch opponent of Electronic Voting Machines (EVMs). He gave a summary of voting in India: until 1996, all voting was with paper ballots and hand counting. In 1998 some EVMs were introduced in urban areas, and then in 2004 the country moved entirely to EVMs. Vote confirmation was given by a beep, and there were several complaints of machine failure. His claim is that exit polling was accurate prior to 2004, and that after the introduction of EVMs the exit polls diverged widely from the actual results. In these elections I believe the BJP got a drubbing from Congress (Rao probably got suspicious since he appears to be a BJP political analyst).

Next up was Alok Shukla, the Deputy Election Commissioner of India. He gave an overview of the EVMs in use in India and reviewed how India decided to move to them (Parliament ended up approving the use of EVMs). He claimed that a paper trail was not the solution (mostly due to infeasibility, cost, remoteness of polling locations, etc.), and said solutions lie in better transparency and administrative oversight. His main answer to claims that the EVMs have been hacked is that the attacks are infeasible and detectable by election officials. Finally, he said essentially “different systems for different people” (or different strokes for different folks?).

The third speaker was J. Alex Halderman, who is one of the people who attacked the Indian EVM. He described how he got hold of an EVM and showed details of its insides. The first problem is that the devices can be duplicated (or fake ones could be substituted). Another issue is that verifying the code in the EVM is not possible (so the machines can be tampered with at the time of manufacture). Finally, the reported counts are stored in two EEPROMs, which can be swapped out. There are (at least) two attacks that they performed. The first is to hack the display so that false counts are shown on the LED; a Bluetooth radio lets a mobile user select who should win. The second is to clip a device onto the EEPROMs and reprogram them. Full details will appear at CCS. Halderman’s last bit of news was that one of their co-authors in India, Hari K. Prasad, has been summoned by the police as a result of a criminal complaint that he stole the EVM, which seems like an attempt by the government of India to silence its critics. He called upon Shukla, who was rather upset by this public accusation, to drop the suit.

The last panelist was P.V. Indiresan, who is on the advisory committee to the government. He discussed some new security features in EVMs, such as signatures to prevent tampering with the cable between the ballot unit (where people push buttons) and the control unit (which counts the ballots). He claimed that most of the attacks proposed so far are farfetched. Many of his later complaints were to the effect that breaking into the EVM is a criminal act (which is a claim of security through obscurity). He ended with a plea asking researchers to stop (!) hacking the EVMs because they “are working.”

To sum up: the Indian government says the system works and that there is no actual evidence of tampering (with the exception of Prasad, who apparently received stolen goods). Halderman says the attacks show that the system as a whole is not secure, and Rao says that the results are suspicious.

Shukla responded that the Election Commission of India is willing to listen to critics, and said that the only kind of attack that is of interest is one on a sealed machine. He reiterated the statement that Prasad was in receipt of stolen government property and needs to be questioned.

The Q&A was quite contentious. I might have more to say about it later… but wow.

EVT/WOTE ’10 : the keynote

I am at EVT/WOTE (Electronic Voting Technology Workshop/Workshop on Trustworthy Elections) today and tomorrow, and will try to blog about it a bit. The keynote today was given by Donetta Davidson, who runs the Election Assistance Commission (EAC). She gave an overview of the EAC’s activities and priorities. The Q&A focused a bit on how voting research is underfunded and on CS researchers wanting the EAC to lobby for more research funding. I guess some things don’t change much.