I also attended HealthSec ’11 this week, and the program was a little different from what I had expected. There was a mix of technical talks and policy/framework proposals around a few themes:
- security in medical devices
- auditing in electronic medical records
- medical record dissemination and privacy
In particular, a key challenge coming up in healthcare is how patient information will be handled in the health insurance exchanges (HIEs) being created as part of the Affordable Care Act. The real question is the threat model for health information: hackers who want to steal health records wholesale, or the selling of data by third parties (e.g. insurance companies)? Matthew Green from Dartmouth discussed implications of the PCAST report on Health Information Technology, which I will have to read.
The most interesting part of the workshop was the panel on de-identification and whether it remains a relevant or useful framework moving forward. The panelists were Sean Nolan from Microsoft, Kelly Edwards from the University of Washington, Arvind Narayanan from Stanford, and Lee Tien from the EFF. Nolan talked a bit about how HIPAA acts as an impediment to exploratory research, which I have worked on a little, but also raised the thorny ethical issue of public good versus privacy, which is key to understanding the debate over health records in clinical research. Edwards is a bioethicist and made some very important points about how informed consent is an opportunity not only to educate patients about their (potential) role in medical research, but also to make them feel like informed participants in the process. How we phrase the tradeoff Nolan mentioned is really a question of ethics in how we communicate that tradeoff to patients. Narayanan (famous for his Netflix de-anonymization) argued that the relationship between technology and policy has to be rethought, turned into a dialogue rather than a blame-shifting or challenge-posing exercise. Tien made a crucial point: if we do not understand how patient data moves about in our existing system, then we have no hope of reform or regulation, and no stakeholder in the system now has that “bird’s-eye view” of these data flows.
I hope that in the future I can contribute to this in some way, but in the meantime I’ve been left with a fair bit to chew on. Although the conference was perhaps a bit less technical than I would have liked, I think it was quite valuable as a starting point for future work.
Via Jay P., a pretty amazing dance video.
Via 530nm330Hz, a very interesting tidbit on the history of the one-time pad. A free tech report version is available too. The one-time pad XORs the bits of a message with an i.i.d. random bitstring of the same length, and is credited to Gilbert Vernam and Joseph Mauborgne. However, as Steven Bellovin‘s paper shows:
In 1882, a California banker named Frank Miller published Telegraphic Code to Insure Privacy and Secrecy in the Transmission of Telegrams. In it, he describes the first one-time pad system, as a superencipherment mechanism for his telegraph code. If used properly, it would have had the same property of absolute security.
Although in theory Miller can claim priority, reality is more complex. As will be explained below, it is quite unlikely that either he or anyone else ever used his system for real messages; in fact, it is unclear if anyone other than he and his friends and family ever knew of its existence. That said, there are some possible links to Mauborgne. It thus remains unclear who should be credited with effectively inventing the one-time pad.
Another fun tidbit: apparently a mother’s maiden name was already being used for security purposes way back in 1882!
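The scheme itself is simple enough to sketch in a few lines of Python (the function names are my own; `secrets` supplies cryptographic randomness here, though a true one-time pad demands genuinely random bits that are used exactly once and never reused):

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt by XORing the message with a fresh uniform pad of equal length."""
    pad = secrets.token_bytes(len(message))  # one-time: must never be reused
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return pad, ciphertext

def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
    """Decrypt by XORing the ciphertext with the same pad."""
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

pad, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(pad, ct) == b"attack at dawn"
```

Since XOR with a uniform pad makes every ciphertext equally likely for any message, this is where the absolute-security claim comes from.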
I really like shiso leaves and their cousins. I had a shiso plant but it did not survive the California sun / I have a black thumb. One of my favorite meals at ISIT 2009 was with Bobak Nazer, where we found an out-of-the-way BBQ joint where they brought us a long box filled with 7 varieties of leaves, including perilla leaves. It makes me hungry just writing about it.
Kudos to Adrienne for the amazing photo.
There’s Only One Sun, a short sci-fi film by Wong Kar-Wai.
One day I will write all those posts that I’ve been intending to write. One day… until then:
Nabokov’s butterfly theory is found to be correct.
Why you should encrypt your smartphone.
You should now call him Doctor Grover. No, not him, silly. Him.
MIT’s 150th year anniversary PR blitz has an article on Wiener.
I am at EVT/WOTE (Electronic Voting Technology Workshop/Workshop on Trustworthy Elections) today and tomorrow, and will try to blog about it a bit. The keynote today was given by Donetta Davidson, who runs the Election Assistance Commission. She gave an overview of the EAC’s activities and priorities. The Q&A focused a bit on how voting research is underfunded and on CS researchers wanting the EAC to lobby for more research funding. I guess some things don’t change much.
A month or two ago I was on the shuttle going to campus and ended up talking with Steve Checkoway about his work. He described a problem in election auditing that sounded pretty interesting: given an election between, say, two candidates, and the ability to sample individual ballots at random and compare how they were counted versus how they should have been counted, how should we design an auditing algorithm with the following property: if the auditor certifies that the reported winner is the true winner, then it is wrong with probability no larger than some prescribed error bound?
After a couple more meetings we came up with a pretty simple method based on minimizing a Kullback-Leibler divergence. This minimum KL divergence gives a bound on the error probability of our algorithm, which we can then compare to the target error bound to give the guarantee. To do the minimization we need to turn to convex optimization tools (which might be a subject for another post — numerically minimizing the KL divergence requires choosing your solving algorithm carefully).
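The actual algorithm is in the paper, but the flavor of the KL-divergence bound can be sketched for a simple two-candidate race. This is my own simplified illustration, not the paper's method: a Chernoff/Sanov-style exponent, where the closest "wrong winner" distribution puts the reported winner's share at 1/2.

```python
import math

def kl_bernoulli(p: float, q: float) -> float:
    """KL divergence D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def audit_bound(n: int, k: int) -> float:
    """Bound on the probability that a sample of n ballots, k of which
    favor the reported winner, would look this lopsided if the reported
    winner actually lost. The nearest wrong-outcome distribution has the
    winner's share at 1/2, so the exponent is the KL divergence to 1/2."""
    phat = k / n
    if phat <= 0.5:
        return 1.0  # sample does not even favor the reported winner
    return math.exp(-n * kl_bernoulli(phat, 0.5))
```

Certify only when the bound drops below the target error probability; a larger sample or a wider observed margin shrinks it exponentially.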
We wrote up a paper on it with Steve’s advisor Hovav Shacham and submitted it to the 2010 Electronic Voting Technology Workshop, where it was accepted. We’re making some major-ish revisions (mostly in the simulations) but I’m pretty excited about it. There are some fun analysis questions lurking in the corners that we didn’t have time to get to yet, but I’m looking forward to poking at them after ISIT.
Speaking of which, ISIT is coming up — not sure how much I will blog, but there will be something at least.
From Bobak I saw that US drones in Iraq have been hacked because “the remotely flown planes have an unprotected communications link” but “there was no evidence that they [insurgents] were able to jam electronic signals from the aircraft.”
This illustrates nicely the difference between eavesdropping and jamming. However, a nice by-product of anti-jamming codes that use shared secret keys (which here can easily be agreed upon before the drone takes off) is that sometimes you get protection against both eavesdropping and jamming at the same time.
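As a toy illustration of that by-product (entirely my own sketch, not how drone links actually work): with a key-seeded frequency-hopping schedule, an adversary without the key can neither predict which channel to listen on (eavesdropping) nor which channel to flood (jamming).

```python
import hashlib

def hop_sequence(key: bytes, n_slots: int, n_channels: int) -> list[int]:
    """Derive a pseudorandom channel-hopping schedule from a shared secret key.
    Sender and receiver compute the same schedule; without the key, an
    adversary cannot predict which channel carries the next time slot."""
    return [
        hashlib.sha256(key + slot.to_bytes(4, "big")).digest()[0] % n_channels
        for slot in range(n_slots)
    ]

# Both endpoints derive the identical schedule from the pre-shared key.
assert hop_sequence(b"preflight-key", 8, 16) == hop_sequence(b"preflight-key", 8, 16)
```

The same shared secret that hides the signal from an eavesdropper is what makes the transmission hard to target with a jammer.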