Linkage

This report from the NSF Office of the Inspector General has some really horrendous examples of data fabrication, plagiarism, and other misconduct by PIs and graduate fellowship (GRFP) recipients. It’s true that bad behavior taints the whole program: how good is the GRFP selection process if students like this get awards?

This article on Bhagat Singh Thind is fascinating. We need a modern Ghadar Party here. But this is so bizarre: “[o]ut of necessity and ingenuity, Thind, along with several dozen South Asians during the interwar decades reinvented themselves as itinerant spiritual teachers and metaphysical lecturers who would travel from city to city, giving lectures and holding private classes.”

A photo gallery by Lotfi Zadeh: some of these are really beautiful portraits. Also the variety! I remember not really understanding portraiture when I was younger but I think I “get it” a bit more now. Or at least why it’s interesting. There’s even a photo of Claude Shannon… from the email:

Prof. Lotfi Zadeh, who passed away in 2017, was an avid photographer who grew up in a multicultural environment, surrounded himself with a cosmopolitan crowd, and always kept his mind open to new ideas. In the 1960s and 70s, he enjoyed capturing the people around him in a series of black and white portraits. His burgeoning career gave him access to a number of artists, academics, and dignitaries who, along with his colleagues, friends, and family, proved a great source of inspiration for him.

THE SQUIRCLE IS SO FASCINATING!

I helped organize a workshop at IPAM on privacy and genomics. Videos (raw) are up now.


What’s new is old in ethics and conduct

(h/t to Stark Draper, Elza Erkip, Allie Fletcher, Tara Javidi, and Tsachy Weissman for sources)

The IEEE Information Theory Society Board of Governors voted to approve the following statement to be included on official society events and on the website:

IEEE members are committed to the highest standards of integrity, responsible behavior, and ethical and professional conduct. The IEEE Information Theory Society reaffirms its commitment to an environment free of discrimination and harassment as stated in the IEEE Code of Conduct, IEEE Code of Ethics, and IEEE Nondiscrimination Policy. In particular, as stated in the IEEE Code of Ethics and Code of Conduct, members of the society will not engage in harassment of any kind, including sexual harassment, or bullying behavior, nor discriminate against any person because of characteristics protected by law. In addition, society members will not retaliate against any IEEE member, employee or other person who reports an act of misconduct, or who reports any violation of the IEEE Code of Ethics or Code of Conduct.

I guess the lawyers had to have a go at it, but this essentially repeats that the IEEE already had rules and reminds everyone about them. The statement is saying "the new rules are the old rules." We probably need more explicit new rules, however. In particular, many conferences have more detailed codes of conduct (NeurohackWeek, RSA, Usenix, APEC) that spell out how the principles espoused in the text above are implemented. Often, these conferences have formal reporting procedures/policies and sanctions for violations: many IEEE conferences do not. The NSF is now requiring reporting on PIs who are "found to have committed sexual harassment," so incidents at conferences where the traveler is presenting NSF-sponsored work should also be reported, it seems.

While the ACM's rules suggest setting up reporting procedures, perhaps a template (borrowed from another academic community?) could just become part of the standard operating procedure for running an IEEE conference: put a member of the organizing committee in charge of it, similar to having a local arrangements chair, publicity chair, etc. However, given the power dynamics of academic communities, perhaps people would feel more comfortable reporting incidents to someone outside the community.

Relatedly, the Society also approved creating an Ad Hoc Committee on Diversity and Inclusion (I'm not on it), whose members have already done a ton of work on this and will find other ways to make the ITSOC (even) more open and welcoming.

SPS: no edits after acceptance

I got an email recently saying that the Signal Processing Society's Publications Board has decided to "no longer allow any changes to papers once the papers are accepted… the accepted version of the papers will be the version posted on Xplore." Associate editors are supposed to enforce this policy.

I can only imagine that this is the result of abuse by some (or many) authors who made substantive changes to their manuscripts post-acceptance. That is clearly bad and should probably be stopped. However, I think this hard-line policy may not be good, for a few reasons:

  • Even after reviewers sign off on a manuscript from a technical standpoint, there are often several small issues with grammar, typos, and so on. The only solution then would be to enter an endless cycle of revise and resubmit, unless SPS is OK with typos and the like.
  • I have had galley proofs come back with several technically substantive errors and have had to go back and forth with IEEE about fixing these. This can only get worse with this policy.
  • Due to the fast pace of research and the slow pace of reviewing, the references for a paper often need updating even after acceptance: a journal version of a conference paper may have come out, an arXiv preprint may have been updated, or any of a host of other things may have changed. This hard requirement is bad for scholarship since it makes finding the "correct" reference more onerous.

Overall, this shifts the burden of fine-level verification of the manuscript to the AE. For some journals this is not so bad since they don’t have long papers and AEs may handle only a few papers at the same time. For something like the Transactions on Information Theory, it would be a disaster! Thankfully (?) this is only for the Signal Processing Society. However, my prediction is that overall paper quality will decrease with this policy, driving more papers to arXiv for their "canonical version." Is this bad? Depends on your point of view.

Hello from the IPAM Workshop on Privacy for Biomedical Data

I just arrived in LA for the IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data. I co-organized this workshop with Cynthia Dwork, James Zou, and Sriram Sankararaman, and it is (conveniently) before the semester starts and (inconveniently) overlapping with the MIT Mystery Hunt. The workshop has a really diverse set of speakers, so to get everyone on the same page and anchor the discussion, we have 5 tutorial speakers and a few sessions of shorter talks. The hope is that these tutorials (which are on the first two days of the workshop) will give people some "common language" to discuss research problems.

The other big change we made to the standard workshop schedule was to put in time for "breakout groups" to have smaller discussions focused on identifying the key fundamental problems that need to be addressed when thinking about privacy and biomedical data. Because of the diversity of viewpoints among participants, it seems a tall order to generate new research collaborations out of attending talks and going to lunch. But if we can, as a group, identify what the mathematical problems are (and maybe even why they are hard), this can help pinpoint the areas of common interest.

I think of these as falling into a few different categories.

  • Questions about demarcation. Can we formalize (mathematically) the privacy objective in different types of data sets/computations? Can we use these to categorize different types of problems?
  • Metrics. How do we formulate the privacy-utility tradeoffs for different problems? What is the right measure of performance? What (if anything) do we lose in guaranteeing privacy?
  • Possibility/impossibility. Algorithms which can guarantee privacy and utility are great, but on the flip side we should try to identify when privacy might be impossible to guarantee. This would have implications for higher-level questions about system architectures and policy.
  • Domain-specific questions. In some cases all of the setup is established: we want to compute function F on dataset D under differential privacy and the question is to find algorithms with optimal utility for fixed privacy loss or vice versa. Still, identifying those questions and writing them down would be a great outcome.
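
To make that last category concrete, here is a minimal sketch (my own illustration, not an algorithm from the workshop) of the standard setup: releasing the mean of a dataset under ε-differential privacy via the Laplace mechanism. The function name and the clipping bounds are hypothetical.

```python
import numpy as np

def private_mean(data, epsilon, lower=0.0, upper=1.0):
    """Release the mean of `data` under epsilon-differential privacy.

    Values are clipped to [lower, upper] so the sensitivity of the mean
    (how much it can change when one record changes) is bounded.
    """
    clipped = np.clip(np.asarray(data, dtype=float), lower, upper)
    n = len(clipped)
    sensitivity = (upper - lower) / n
    # Laplace noise with scale sensitivity/epsilon gives epsilon-DP.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise
```

The privacy-utility tradeoff is explicit here: the expected squared error of the release is 2·(sensitivity/ε)², so lowering the privacy loss ε means adding more noise and getting a less useful answer. The domain-specific questions above are about what plays the role of the function, the dataset, and the utility measure for genomic and biomedical computations.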

In addition to all of this, there is a student poster session, a welcome reception, and lunches. It’s going to be a packed 3 days, and although I will miss the very end of it, I am excited to learn a lot from the participants.

Some thoughts on paper awards at conferences

We (really Mohsen and Zahra) had a paper nominated for a student paper award at CAMSAP last year, but since both student authors are from Iran, their single-entry student visas prevented them from going to the conference. The award terms require that the student author present the work (in a poster session) and the conference organizers were kind enough to allow Mohsen to present his poster via Skype. It’s hardly an ideal communication channel, given how loud poster sessions are. Although the award went to a different paper, the experience brought up two questions that are not new but don’t get a lot of discussion.

How should paper awards deal with visa issues? This is not an issue specific to students from Iran, although the US State Department’s visa issuance for Iranian students is stupidly restrictive. Students from Iran are essentially precluded from attending any non-US conference unless they want to roll the dice again and wait for another visa at home. Other countries may also deny visas to students for various reasons. Requiring students to be present at the conference is discriminatory, since the award should be based on the work. Disqualifying a student for an award because of bullshit political/bureaucratic nonsense that is totally out of their control just reinforces that bullshit.

Why are best papers judged by their presentation? I have never been a judge for a paper award and I am sure that judges try to be as fair as they can. However, the award is for the paper and not its performance. I agree that scholarly communication through oral presentation is a valuable skill, but if the award is going to be determined by who gives the best show at the conference, they should retitle these to "best student paper and presentation award" or something like that. Maybe it should instead be based on video presentations to allow remote participation. If you are going to call it a paper award, then it should be based on the written work.

I don’t want this to seem like a case of sour grapes. Not all student paper awards work this way, but it seems to be the trend in IEEE-ish venues. The visa issue has hurt a lot of researchers I know; they miss out on opportunities to get their name/face known, chances to meet and network with people, and the experience of being exposed to a ton of ideas in a short amount of time. Back when I had time to do conference blogging, it was a way for me to process the wide array of new things that I saw. For newer researchers (i.e. students) this is really important. Making paper awards based on presentations hits these students doubly: they can neither attend the conference nor receive recognition for their work.

NIPS 2017 Tutorial on Differential Privacy and Machine Learning

Kamalika and I gave a tutorial at NIPS last week on differential privacy and machine learning. We’ve posted the slides and references (updates still being made). It was a bit stressful to get everything put together in time, especially given how this semester went, but it was a good experience and now we have something to build on. It’s amazing how much research activity there has been in the last few years.

One thing that I struggled with a bit was the difference between a class lecture, a tutorial, and a survey. Tutorials sit between lectures and surveys: the goal is to be clear and cover the basics with simple examples, but also lay out something about what is going on in the field and where important future directions lie. It’s impossible to be comprehensive; we had to pick and choose different topics and papers to cover, and ended up barely mentioning large bodies of work. At the same time, it didn’t really make sense to put up a slide saying “here are references for all the things we’re not going to talk about.” If the intended audience is a person who has heard of differential privacy but hasn’t really studied it, or someone who has read this recent series of articles, then a list without much context is not much help. It seems impossible to even make a real survey now, unless you make the scope more narrow.

As for NIPS itself… I have to say that the rapid increase in size (8000 participants this year) made the conference feel a lot different. I had a hard time hearing and following things during the short time I was there. Thankfully the talks were streamed/recorded so I can go back to catch what I missed.

ISIT 2018: call for CS theory papers too

I got an email from Venkat Guruswami encouraging those in the TCS community to submit work to the upcoming ISIT 2018 deadline. In particular, since ISIT papers are short (5 pages) it’s an ideal venue to publish more technical results or general tools (relevant to information theory) that get used in longer STOC/FOCS/SODA/etc papers. There was a lively discussion about what the “rules” were for ISIT, but basically:

  • the proceedings are archival so it counts as a real publication (no submitting the same result elsewhere)
  • ideal works would be things like coding theory problems of interest to both communities, TCS takes on IT problems, or general standalone results that could be applicable to information theory (or related) problems

The deadline is January 12, 2018. I guess I know what I’ll be doing for my winter vacation…

Rutgers ECE is Hiring (2018 edition)

My department is hiring for (potentially) multiple positions!

Hiring areas for this search are:

  • Electronics, including sensors, devices, bioelectronics, as well as integrated circuits and systems for RF and millimeter wave applications.
  • Information processing and machine learning for autonomous systems and robots, especially learning and control in autonomous systems such as vehicles or drones, as well as in assistive technologies.
  • E-health, especially wearable electronics and sensors, medical informatics, quantified self, and personalized medicine.
  • Cyber-physical systems, including signal processing and machine learning techniques, embedded systems, device and software security, IoT security, and applications to smart cities.

Exceptional candidates in the university strategic areas are also welcome to apply.

NSF Report Markdown file

I had a horrible "network dropping causes web forms to clear" experience when filing an NSF report a few years back, so I switched to filling things in via the NSF's Word template. However, the extraneous formatting in that made the cut-and-paste into the web form tedious. So this time around I created a Markdown (.md) template with all of the questions you need to answer. This makes it easier to edit and lightly format your report text offline (e.g. on a plane) for much faster cut-and-paste later.
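
The template itself is nothing fancy; a rough sketch of the kind of skeleton I mean (the headings here are illustrative and only loosely follow the report's questions, so check them against the current form) looks like this:

```markdown
# Accomplishments

## What are the major goals of the project?

## What was accomplished under these goals?

## How have the results been disseminated to communities of interest?

# Products

## Publications, conference papers, and presentations

# Participants & Other Collaborating Organizations

# Impact

# Changes/Problems
```

Each answer goes under its heading as plain text, which pastes into the web form cleanly since there is no hidden Word formatting to strip out.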

IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data

IPAM is hosting a workshop on "Algorithmic Challenges in Protecting Privacy for Biomedical Data," to be held at IPAM from January 10-12, 2018.

The workshop will be attended by many junior as well as senior researchers with diverse backgrounds. We want to encourage students and postdoctoral scholars who might be interested to apply and/or register for this workshop.

I think it will be quite interesting and has the potential to spark a lot of good conversations around what we can and cannot do about privacy for medical data in general and genomic data in particular.