Hello from the IPAM Workshop on Privacy for Biomedical Data

I just arrived in LA for the IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data. I co-organized this workshop with Cynthia Dwork, James Zou, and Sriram Sankararaman, and it is (conveniently) before the semester starts and (inconveniently) overlapping with the MIT Mystery Hunt. The workshop has a really diverse set of speakers, so to get everyone on the same page and anchor the discussion, we have 5 tutorial speakers and a few sessions of shorter talks. The hope is that these tutorials (which are on the first two days of the workshop) will give people some “common language” to discuss research problems.

The other big change we made to the standard workshop schedule was to put in time for “breakout groups” to have smaller discussions focused on identifying the key fundamental problems that need to be addressed when thinking about privacy and biomedical data. Because of the diversity of viewpoints among participants, it seems a tall order to generate new research collaborations out of attending talks and going to lunch. But if we can, as a group, identify what the mathematical problems are (and maybe even why they are hard), this can help identify the areas of common interest.

I think of these as falling into a few different categories.

  • Questions about demarcation. Can we formalize (mathematically) the privacy objective in different types of data sets/computations? Can we use these to categorize different types of problems?
  • Metrics. How do we formulate the privacy-utility tradeoffs for different problems? What is the right measure of performance? What (if anything) do we lose in guaranteeing privacy?
  • Possibility/impossibility. Algorithms which can guarantee privacy and utility are great, but on the flip side we should try to identify when privacy might be impossible to guarantee. This would have implications for higher-level questions about system architectures and policy.
  • Domain-specific questions. In some cases all of the setup is established: we want to compute function F on dataset D under differential privacy, and the question is to find algorithms with optimal utility for a fixed privacy loss, or vice versa. Still, identifying those questions and writing them down would be a great outcome. (A minimal sketch of one such setup follows this list.)
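
As a concrete toy example of that last category, here is a minimal sketch of the Laplace mechanism for releasing a counting query under differential privacy. The Laplace mechanism itself is standard; the function, dataset, and parameter names below are mine, for illustration only.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release the counting query F(D) = #{x in D : predicate(x)} under
    epsilon-differential privacy via the Laplace mechanism. A counting
    query has sensitivity 1, so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for x in data if predicate(x))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy privacy-utility tradeoff: smaller epsilon means stronger privacy and noisier answers.
data = np.random.randn(1000)
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_count(data, lambda x: x > 1.0, eps))
```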

In addition to all of this, there is a student poster session, a welcome reception, and lunches. It’s going to be a packed 3 days, and although I will miss the very end of it, I am excited to learn a lot from the participants.

Re-identification from microbiomes

A (now not-so-recent) paper by Homer et al. made a splash by showing that one could take a DNA sample from a person and detect whether they were part of the Human Genome Project (HGP) based on looking at the SNP variations from that individual together with the reported allele variations in the HGP data. More recently, a paper in PNAS by Franzosa et al. showed reidentification of individuals in the Human Microbiome Project.

Color me unsurprised. Given the richness of the data, from a purely informational point of view it seems pretty clear that people should be identifiable. As with many machine learning problems, however, the secret is in the feature encoding. Many approaches to comparing metagenomes, especially for bacterial ecologies, try to assess the variability in the population of bacteria, perhaps by mapping the reads to known strains. As mentioned in the Methods section, “reads were additionally mapped to a database of 649 microbial reference genomes using the Burrows-Wheeler aligner.” However, in addition to these mapping statistics, they used a few other more complicated features to gain some additional robustness in their identification procedure.

Somehow being identifiable by your microbiome seems less scary than being identifiable by your genome, perhaps because we have a sense that genes are more “determining” than microbiomes. After all, you could get a fecal transplant and change your gut flora significantly. Is it the same as burning off your fingerprints? Probably not. But perhaps in the future, perpetrators of certain campus shenanigans may be easier to catch.

Linkage : science edition

Learning from transcriptomes can be cheaper for organisms which have never been sequenced.

A fancy Nature article on mobility privacy, in case you weren’t convinced by other studies on mobility privacy.

Bad statistics in neuroscience. Color me unsurprised.

I bet faked results happen a lot in pharmaceutical trials, given the money involved. Perhaps we should jail people for faking data as a disincentive?

The Atheist shoe company did a study to see if the USPS was discriminating against them.

Problems in modeling Illumina sequencing

About a year ago I started collaborating with a friend in the Armbrust Lab at the University of Washington on some bioinformatics problems, and as a part of that I am trying to give myself a primer on sequencing technologies and how they work. I came across this video recently, and despite its atrocious music and jargonized description, I actually found it quite helpful in thinking about how this particular sequencing technology works:

  • Acoustic waves shatter the DNA.
  • Things (ligases) get attached to the end.
  • The fragments get washed over a “lawn” and the ligases stick the sequences to the lawn.
  • The strands get amplified into larger spots.
  • Single nucleotides with fluorescent tags are washed on; they are hit with a laser to reveal the color, the tag is cleaved off, and then the next nucleotide is washed on.

A simple model for the data we get is to say that a position in the genome is selected uniformly at random, and then the read is the sequence of length L starting from that position. Just a brief glance at the physical process above shows how simple that model is. For the purposes of statistics it may be enough (a toy sketch of this baseline model follows the list below), but here are some complications that I can see, knowing almost no physics and biology:

  • The places at which the DNA shears into fragments are not uniformly distributed — in fact, they should be sequence-dependent.
  • The ligases may have some preferential attachment characteristics. Ditto for the oligos on the lawn in the flowcell.
  • The amplification may be variable, spot by spot. This will affect the brightness of the flash and therefore the reliability of the read assessment.
  • The ability of single nucleotides to bind will vary as more and more bases are read, so the gain in the optical signal (or noise) will vary as the read goes on.
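
As promised, here is a toy sketch of that baseline model: start positions chosen uniformly at random, exact reads of length L. The names and parameters are mine, purely for illustration.

```python
import random

def uniform_reads(genome, L, N, seed=0):
    """Baseline model: each read starts at a position chosen uniformly at
    random and copies the next L bases exactly (no errors, no biases)."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(genome) - L + 1) for _ in range(N)]
    return [genome[s:s + L] for s in starts]

# A toy "genome" with i.i.d. bases, sampled into 500 reads of length 100.
rng = random.Random(1)
genome = "".join(rng.choice("ACGT") for _ in range(10_000))
reads = uniform_reads(genome, L=100, N=500)
print(len(reads), "reads; the first one starts with", reads[0][:15])
```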

Some of the effects in this list are easier to model than others, but what the real data show is that these variations in the technology can cause noticeable deviations from the simple model. More fun work to do!
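
In that spirit, here is one hedged way the first complication (sequence-dependent fragmentation) might be folded into the toy model above. The GC-weighting is a made-up stand-in, not a claim about the actual chemistry.

```python
import random

def biased_reads(genome, L, N, gc_weight=3.0, seed=0):
    """Variant of the toy model: read start positions are drawn with
    probability proportional to a made-up sequence-dependent weight
    (heavier where the starting base is G or C) instead of uniformly."""
    rng = random.Random(seed)
    positions = range(len(genome) - L + 1)
    weights = [gc_weight if genome[p] in "GC" else 1.0 for p in positions]
    starts = rng.choices(positions, weights=weights, k=N)
    return [genome[s:s + L] for s in starts]

# Positions whose first base is G or C are now sampled about 3x as often as
# under the uniform model, so coverage is no longer flat on average.
rng = random.Random(1)
genome = "".join(rng.choice("ACGT") for _ in range(10_000))
reads = biased_reads(genome, L=100, N=500)
print(len(reads), "biased reads")
```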

Allerton 2012 : David Tse’s plenary on sequencing

I’ll follow up with some blogging about the talks at Allerton a bit later — my notes are somewhat scattershot, so I might give a more cursory view than usual. Overall, I thought the conference was more fun than in previous years: the best Allerton to date. For now though, I’ll blog about the plenary.

David Tse gave the “director’s cut” of his recent work (with Motahari, Bresler, Bresler, and Ramchandran) on an information-theoretic model for next-generation sequencing (NGS). In NGS, many copies of a single genome are chopped up into short reads (say 100 base pairs each). The information theory model is a very simplified abstraction of this process — a read is generated by choosing a location in the genome uniformly at random and producing the 100 bases following that position. In NGS the reads overlap, and each nucleotide of the original genome may appear in many reads. The number of reads in which a base appears is called the coverage.

So there are three parameters: G, the length of the genome; L, the length of a read; and N, the number of reads. The question is how these should depend on each other. David presented a theoretical analysis of the reconstruction question under rather restrictive assumptions (bases are i.i.d., reads are noiseless) and showed that there is a threshold on the number of reads for successful reconstruction with high probability. That is, there is a number C such that N = G/C reads suffice to reconstruct the sequence with high probability, where C depends on L via L/\log G.
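
To keep the vocabulary straight (this is a sketch of the model, not of the theorem), here is a small simulation of the abstract read process that tallies each base's coverage; under uniform sampling the average coverage works out to NL/G. The function and parameter values are mine, for illustration only.

```python
import random

def coverage_profile(G, L, N, seed=0):
    """Simulate the abstract read model (N noiseless reads of length L,
    start positions uniform on a length-G genome) and count, for each base,
    how many reads cover it (its coverage)."""
    rng = random.Random(seed)
    coverage = [0] * G
    for _ in range(N):
        start = rng.randrange(G - L + 1)
        for i in range(start, start + L):
            coverage[i] += 1
    return coverage

G, L, N = 100_000, 100, 5_000
cov = coverage_profile(G, L, N)
print("average coverage:", sum(cov) / G)  # equals N * L / G = 5.0 here
```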

This model is very simple and is clearly not an accurate model for genome sequencing. David began his talk by drawing a grand analogy to the DMC — his proposition is that this approach will be the information theory of genome sequencing. I have to say that this sounds appealing, but it looks at a rather specific problem that arises in NGS, namely assembly. This is a first step towards building one abstract theory for sequencing, and while the model may be simple, the results are non-trivial. David also presented some evidence that real DNA sequences have features (long repeats) which cause problems for greedy assemblers but can be handled by more complex assemblers based on de Bruijn graphs. They can also handle noise in the form of i.i.d. erasures. What this analysis seems to do is point to features of the data that are problematic for assembly from an information-theoretic standpoint — it is an analysis of the technological process of NGS rather than a statement about the biology.
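
For readers who haven't seen it, the textbook construction behind de Bruijn graph assemblers is short enough to sketch: chop the reads into k-mers and connect each k-mer's prefix to its suffix, so that assembly becomes a walk through the resulting graph. This is only the toy version, not the assemblers analyzed in the talk; the example reads are made up.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Nodes are (k-1)-mers; every k-mer in a read adds an edge from its
    (k-1)-mer prefix to its (k-1)-mer suffix. Assembly then amounts to
    finding a suitable walk through this graph."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Toy reads from a sequence containing the repeat "AGGTC" twice; the repeat
# shows up as a node with more than one distinct successor, which is the kind
# of structure that trips up simple greedy merging.
reads = ["ATGAGGTCCA", "GGTCCATAGG", "CATAGGTCTT"]
for node, successors in sorted(de_bruijn_graph(reads, k=4).items()):
    print(node, "->", successors)
```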

I’ve been working for the last year (more off than on, alas) on how to understand certain types of NGS data from a statistical viewpoint. I’ll probably write more about that later when I get some actual understanding. But a central lesson I’ve taken from this is that the situation is quite a bit different than it was when Shannon made a theory of communication that abstracted existing communication systems. We don’t have nearly as good an understanding of NGS data from an engineering standpoint, and the questions we want to answer from this data are also unclear. Assembly is one thing, but if nothing else, this theoretical analysis shows that the kind of data we have is often insufficient for “real” assembly. This is borne out in practice: many assemblers produce large chunks of DNA, called contigs, rather than the full genome of the organism. There are many interesting statistical questions to explore in this data — what can we answer from the data without assembling organisms?