Colocated with CCS 2019
Differential privacy is a promising approach to privacy-preserving data analysis. It provides strong worst-case guarantees about the harm that a user could suffer from participating in a differentially private data analysis, yet is flexible enough to allow a wide variety of data analyses to be performed with a high degree of utility. After more than a decade of intense scientific study, it has now been deployed in products at government agencies such as the U.S. Census Bureau and at companies like Apple and Google.
Researchers in differential privacy span many distinct research communities, including algorithms, computer security, cryptography, databases, data mining, machine learning, statistics, programming languages, social sciences, and law. This workshop will bring researchers from these communities together to discuss recent developments in both the theory and practice of differential privacy.
Specific topics of interest for the workshop include (but are not limited to):
- theory of differential privacy,
- differential privacy and security,
- privacy-preserving machine learning,
- differential privacy and statistics,
- differential privacy and data analysis,
- trade-offs between privacy protection and analytic utility,
- differential privacy and surveys,
- programming languages for differential privacy,
- relaxations of the differential privacy definition,
- differential privacy vs. other privacy notions and methods,
- experimental studies using differential privacy,
- differential privacy implementations,
- differential privacy and policy making,
- applications of differential privacy.
The goal of TPDP is to stimulate discussion of the relevance of differentially private data analyses in practice. For this reason, we seek contributions from different research areas of computer science and statistics. Authors are invited to submit a short abstract (4 pages maximum) of their work. Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest, and clarity. Submissions should describe novel work, or work that has already appeared elsewhere but that can stimulate discussion among the different communities at the workshop. Accepted abstracts will be presented at the workshop either as a talk or a poster. The workshop will not have formal proceedings and is not intended to preclude later publication at another venue. Authors of selected papers from the workshop will be invited to submit a full version of their work for publication in a special issue of the Journal of Privacy and Confidentiality.
Submission website: https://easychair.org/conferences/?conf=tpdp2019
Submission: June 21 (anywhere on earth)
Notification: August 9
- Michael Hay (co-chair), Colgate University
- Aleksandar Nikolov (co-chair), University of Toronto
- Aws Albarghouthi, University of Wisconsin–Madison
- Borja Balle, Amazon
- Mark Bun, Boston University
- Graham Cormode, University of Warwick
- Rachel Cummings, Georgia Tech
- Xi He, University of Waterloo
- Gautam Kamath, University of Waterloo
- Ilya Mironov, Google Research – Brain
- Uri Stemmer, Ben-Gurion University
- Danfeng Zhang, Penn State University
For more information, visit the workshop website at https://tpdp.cse.buffalo.edu/2019/.
Passing a message along for my colleague Waheed Bajwa:
As the US Liaison Chair of IEEE SPAWC 2019, I have received NSF funds to support travel of undergraduate and/or graduate students to Cannes, France for IEEE SPAWC 2019. Having a paper at the workshop is not a prerequisite for these grants and a number of grants are reserved for underrepresented minority students whose careers might benefit from these travel grants. Please share this with any interested students and, if you know one, please encourage her/him to consider applying for these grants.
(h/t to Stark Draper, Elza Erkip, Allie Fletcher, Tara Javidi, and Tsachy Weissman for sources)
The IEEE Information Theory Society Board of Governors voted to approve the following statement to be included on official society events and on the website:
IEEE members are committed to the highest standards of integrity, responsible behavior, and ethical and professional conduct. The IEEE Information Theory Society reaffirms its commitment to an environment free of discrimination and harassment as stated in the IEEE Code of Conduct, IEEE Code of Ethics, and IEEE Nondiscrimination Policy. In particular, as stated in the IEEE Code of Ethics and Code of Conduct, members of the society will not engage in harassment of any kind, including sexual harassment, or bullying behavior, nor discriminate against any person because of characteristics protected by law. In addition, society members will not retaliate against any IEEE member, employee or other person who reports an act of misconduct, or who reports any violation of the IEEE Code of Ethics or Code of Conduct.
I guess the lawyers had to have a go at it, but this essentially restates that the IEEE already had rules, and here we're reminding you about those rules. This statement is saying “the new rules are the old rules.” We probably need more explicit new rules, however. In particular, many conferences have more detailed codes of conduct (NeurohackWeek, RSA, Usenix, APEC) that spell out how the principles espoused in the text above are implemented. Often, these conferences have formal reporting procedures/policies and sanctions for violations; many IEEE conferences do not. The NSF now requires reporting on PIs who are “found to have committed sexual harassment,” so it seems that incidents at conferences where the traveler is presenting NSF-sponsored work should also be reported.
While the ACM’s rules suggest creating reporting procedures, perhaps a template (borrowed from another academic community?) could simply become part of the standard operating procedure for running an IEEE conference: put a member of the organizing committee in charge, similar to having a local arrangements chair, publicity chair, etc. However, given the power dynamics of academic communities, perhaps people would feel more comfortable reporting incidents to someone outside the community.
Relatedly, the Society also approved creating an Ad Hoc Committee on Diversity and Inclusion (I’m not on it), which has already done a ton of work on this and will find other ways to make the ITSOC (even) more open and welcoming.
I just arrived in LA for the IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data. I co-organized this workshop with Cynthia Dwork, James Zou, and Sriram Sankararaman, and it is (conveniently) before the semester starts and (inconveniently) overlapping with the MIT Mystery Hunt. The workshop has a really diverse set of speakers, so to get everyone on the same page and anchor the discussion, we have 5 tutorial speakers and a few sessions of shorter talks. The hope is that these tutorials (which are on the first two days of the workshop) will give people some “common language” to discuss research problems.
The other big change we made to the standard workshop schedule was to put in time for “breakout groups” to have smaller discussions focused on identifying the key fundamental problems that need to be addressed when thinking about privacy and biomedical data. Because of the diversity of viewpoints among participants, it seems a tall order to generate new research collaborations out of attending talks and going to lunch. But if we can, as a group, identify what the mathematical problems are (and maybe even why they are hard), this can help identify the areas of common interest.
I think of these as falling into a few different categories.
- Questions about demarcation. Can we formalize (mathematically) the privacy objective in different types of data sets/computations? Can we use these to categorize different types of problems?
- Metrics. How do we formulate the privacy-utility tradeoffs for different problems? What is the right measure of performance? What (if anything) do we lose in guaranteeing privacy?
- Possibility/impossibility. Algorithms which can guarantee privacy and utility are great, but on the flip side we should try to identify when privacy might be impossible to guarantee. This would have implications for higher-level questions about system architectures and policy.
- Domain-specific questions. In some cases all of the setup is established: we want to compute function F on dataset D under differential privacy and the question is to find algorithms with optimal utility for fixed privacy loss or vice versa. Still, identifying those questions and writing them down would be a great outcome.
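To make the last category concrete, here is a minimal sketch of the kind of fully-specified setup I mean: computing a function F (here, the mean) on a dataset D under ε-differential privacy using the Laplace mechanism. The dataset, sensitivity bound, and choice of ε are all illustrative, not taken from any particular problem at the workshop.

```python
import numpy as np

def laplace_mechanism(data, f, sensitivity, epsilon, rng=None):
    """Release f(data) with epsilon-differential privacy via the Laplace mechanism.

    `sensitivity` is the global L1 sensitivity of f: the largest possible
    change in f's output when one individual's record changes.
    """
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return f(data) + noise

# Illustrative setup: privately estimate the mean of n values known to lie
# in [0, 1]; for a dataset of fixed size n, the mean has sensitivity 1/n.
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, size=1000)
private_mean = laplace_mechanism(data, np.mean,
                                 sensitivity=1.0 / len(data), epsilon=0.5)
```

Since the noise scale is sensitivity/ε, shrinking ε (stronger privacy) inflates the error; quantifying exactly this tradeoff is what the “metrics” questions above are asking for.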
In addition to all of this, there is a student poster session, a welcome reception, and lunches. It’s going to be a packed 3 days, and although I will miss the very end of it, I am excited to learn a lot from the participants.
We (really Mohsen and Zahra) had a paper nominated for a student paper award at CAMSAP last year, but since both student authors are from Iran, their single-entry student visas prevented them from going to the conference. The award terms require that the student author present the work (in a poster session) and the conference organizers were kind enough to allow Mohsen to present his poster via Skype. It’s hardly an ideal communication channel, given how loud poster sessions are. Although the award went to a different paper, the experience brought up two questions that are not new but don’t get a lot of discussion.
How should paper awards deal with visa issues? This is not an issue specific to students from Iran, although the US State Department’s visa issuance for Iranian students is stupidly restrictive. Students from Iran are essentially precluded from attending any non-US conference unless they want to roll the dice again and wait for another visa at home. Other countries may also deny visas to students for various reasons. Requiring students to be present at the conference is discriminatory, since the award should be based on the work. Disqualifying a student for an award because of bullshit political/bureaucratic nonsense that is totally out of their control just reinforces that bullshit.
Why are best papers judged by their presentation? I have never been a judge for a paper award and I am sure that judges try to be as fair as they can. However, the award is for the paper and not its performance. I agree that scholarly communication through oral presentation is a valuable skill, but if the award is going to be determined by who gives the best show at the conference, they should retitle these to “best student paper and presentation award” or something like that. Maybe it should instead be based on video presentations to allow remote participation. If you are going to call it a paper award, then it should be based on the written work.
I don’t want this to seem like a case of sour grapes. Not all student paper awards work this way, but it seems to be the trend in IEEE-ish venues. The visa issue has hurt a lot of researchers I know; they miss out on opportunities to get their name/face known, chances to meet and network with people, and the experience of being exposed to a ton of ideas in a short amount of time. Back when I had time to do conference blogging, it was a way for me to process the wide array of new things that I saw. For newer researchers (i.e. students) this is really important. Making paper awards based on presentations hits these students doubly: they can neither attend the conference nor receive recognition for their work.
IPAM is hosting a workshop on “Algorithmic Challenges in Protecting Privacy for Biomedical Data” from January 10-12, 2018.
The workshop will be attended by many junior as well as senior researchers with diverse backgrounds. We want to encourage students and postdoctoral scholars who might be interested to apply and/or register for this workshop.
I think it will be quite interesting and has the potential to spark a lot of conversations around what we can and cannot do about privacy for medical data in general and genomic data in particular.
My colleague Waheed Bajwa, together with Alejandro Ribeiro and Alekh Agarwal, is organizing a Workshop on Distributed Optimization, Information Processing, and Learning from August 21 to August 23, 2017 at Rutgers DIMACS. The purpose of this workshop is to bring together researchers from the fields of machine learning, signal processing, and optimization for cross-pollination of ideas related to the problems of distributed optimization, information processing, and learning. All in all, we are expecting 20 to 26 invited talks from leading researchers working in these areas, as well as around 20 contributed posters.
Registration is open from now until August 14 — hope to see some of you there!