IHP “Nexus” Workshop on Privacy and Security: Day 2

Verrrrrry belated blogging on the rest of the workshop, more than a month later. Day 2 had 5 talks instead of the tutorial plus talks, and the topics were a bit more varied (this was partly because of scheduling issues that prevented us from being strictly thematic).

Amos Beimel started out with a talk on secret sharing, which had a very nice tutorial/introduction to the problem, including the connection between Reed-Solomon codes and Shamir’s t-out-of-n scheme. For professional (and perhaps personal) reasons I found myself wondering how much deeper the connection between secret sharing and coding theory goes — after all, this was a workshop about communication between information theory and theoretical CS. Not being a coding theory expert myself, I could only speculate. What I didn’t know about were the more general access structures for secret sharing and the Ito-Saito-Nishizeki scheme (published in Globecom!). Amos also talked about monotone span programs, which were new to me, and how to use them to prove lower bounds. He concluded with more recent work on the related distribution design problem: how can we construct a distribution on n variables given constraints specifying which subsets should have identical marginals and which should have disjoint supports? The results appeared in ITCS.
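Since the Shamir/Reed-Solomon connection is so clean, here is a minimal sketch of how I think of it (my own toy illustration, not code from the talk): the shares are evaluations of a random degree-(t-1) polynomial over a prime field whose constant term is the secret, and reconstruction is Lagrange interpolation, i.e., erasure decoding of a Reed-Solomon code. The field size and function names below are my own choices.

```python
# Toy sketch of Shamir's t-out-of-n secret sharing over a prime field,
# illustrating the Reed-Solomon connection: shares are evaluations of a
# random degree-(t-1) polynomial f with f(0) = secret.
import random

P = 2**31 - 1  # a Mersenne prime, large enough for toy secrets


def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the pair (i, f(i)).
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange-interpolate f(0) from any t shares (a Reed-Solomon decode)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret


if __name__ == "__main__":
    shares = share(secret=42, t=3, n=5)
    print(reconstruct(shares[:3]))                 # any 3 shares recover 42
    print(reconstruct(random.sample(shares, 3)))   # 42 again
```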

Ye Wang talked about his work on common information and how it appears in privacy and security problems from an information-theoretic perspective. In particular he talked about secure sampling, multiparty computation (MPC), and data release problems. The MPC and sampling results were pretty technical, in terms of notions of completeness of primitives (conditional distributions) and triviality (a way of categorizing sources). For the data release problem he focused on settings where a sanitizer has access to a pair (X,Y), where X is private and Y is “useful” — the goal is to produce a version of the data which reveals less about X (privacy) and more about Y (utility). Since they are correlated, there is a tension. The question he addressed is when having access to Y alone is as good as having both X and Y.

Manoj, after giving his part of the tutorial (and covering for Vinod), gave his own talk on what he called “cryptographic complexity,” an analogy to computational complexity but for multiparty functions. This was also a talk about definitions and reductions: if you can build a protocol for securely computing f(\cdot) using a protocol for g(\cdot), then f(\cdot) reduces to g(\cdot). A complete function is one to which everything reduces, and a trivial function is one that reduces to everything. With these concepts you can start to classify and partition functions: characterizing all complete functions for 2 parties, or finding the trivial functions under different security notions. He presented some weird facts, such as the fact that an n-bit XOR doesn’t reduce to an (n-1)-bit XOR. It was a pretty interesting talk, and I learned quite a bit!

Elette Boyle gave a great talk on oblivious RAM, a topic about which I was completely oblivious myself. The basic idea in oblivious RAM is (as I understood it) that an adversary can observe the accesses to a RAM and thereby infer what program is being executed (and on what input). To obfuscate this, you introduce a bunch of spurious accesses: if you have a program \Pi whose access pattern is fixed prior to execution, you can randomize the accesses and gain some security. The overhead is the ratio of the total number of accesses to the number of accesses the program actually requires. After this introduction to the problem, she talked about lower bounds on the overhead (i.e., how much overhead you need) in a setting with parallel processing. I admit that I didn’t quite understand the arguments, but the problem was pretty interesting.
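To make the overhead notion concrete, here is a toy sketch of the most naive approach (my own illustration, not a construction from the talk): scan every cell on every logical access, so the observed access pattern is independent of which address the program touched, at the cost of an overhead equal to the memory size.

```python
# Toy "linear-scan ORAM": every logical read or write touches every physical
# cell in the same order, so an observer of the access pattern learns nothing
# about which address was used -- at the cost of N physical accesses per
# logical access (overhead N). Real ORAM constructions do far better.


class LinearScanORAM:
    def __init__(self, n):
        self.cells = [0] * n

    def access(self, op, addr, value=None):
        """Perform a logical read or write while scanning all cells."""
        result = None
        for i in range(len(self.cells)):
            old = self.cells[i]          # every cell is read...
            if i == addr:
                result = old
                self.cells[i] = value if op == "write" else old
            else:
                self.cells[i] = old      # ...and written back, hiding addr
        return result


if __name__ == "__main__":
    ram = LinearScanORAM(8)
    ram.access("write", 3, 99)
    print(ram.access("read", 3))  # 99; each call made 8 physical accesses
```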

Hoeteck Wee gave the last (but quite energetic) talk of the afternoon, on what he called “functional encryption.” The idea is that Alice has (x,M) and Bob has y. They both send messages to a third party, Charlie. There is a 0-1 function (predicate) P(x,y) such that if P(x,y) = 1 then Charlie can decode the message M; otherwise, he cannot. An example would be the equality predicate P(x,y) = \mathbf{1}(x = y). In this case, Alice can send h(x) \oplus M and Bob can send h(y) for some 2-wise independent hash function h, and Charlie can recover M if the hashes match. I think there is a question in this scheme about whether Charlie can tell that he got the right message, but I guess I can read the paper for that. The kinds of questions they want to ask are: which predicates have nice encoding schemes? How large do the messages from Alice and Bob have to be? He made a connection/reduction to a communication complexity problem to get a bound on the message sizes in terms of the communication complexity of computing the predicate P. It really was a very nice talk and pretty understandable even with my own limited background.
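Here is a toy sketch of the equality example as I understood it. I am assuming Alice and Bob share the randomness of the hash function (which Charlie does not see), instantiating the 2-wise independent hash as h(x) = (ax + b) mod p, and using addition mod p in place of XOR — all choices of mine for illustration, not the scheme from the paper.

```python
# Toy equality-predicate sketch: Alice and Bob share the hash key (a, b),
# which Charlie does not see. Alice sends M masked by h(x), Bob sends h(y),
# and Charlie recovers M exactly when x == y.
import random

p = 2**61 - 1  # prime field for the pairwise-independent hash


def setup():
    """Shared randomness between Alice and Bob (hidden from Charlie)."""
    return random.randrange(1, p), random.randrange(p)


def h(key, x):
    a, b = key
    return (a * x + b) % p


def alice_msg(key, x, M):
    return (M + h(key, x)) % p      # one-time-pad M with h(x)


def bob_msg(key, y):
    return h(key, y)


def charlie(c_alice, c_bob):
    return (c_alice - c_bob) % p    # equals M iff h(x) == h(y), i.e. x == y


if __name__ == "__main__":
    key = setup()
    M = 1234
    print(charlie(alice_msg(key, 5, M), bob_msg(key, 5)))  # recovers 1234
    print(charlie(alice_msg(key, 5, M), bob_msg(key, 7)))  # garbage
```

Note that in this sketch Charlie has no way to check whether the value he computed is the real M, which is exactly the question raised above.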

Bob Gallager on Shannon’s tips for research

One of the classes I enjoyed the most in undergrad was Bob Gallager’s digital communications class, 6.450. I was reminded of what an engaging lecturer he is when I attended the Bell Labs Shannon Celebration yesterday. Unfortunately, it being the last week of the semester, I could not attend today’s more technical talks. Gallager gave a nice concise summary of what he learned from Shannon about how to do good theory work:

  1. Simplify the problem
  2. Relate it to other problems
  3. Restate the problem in as many ways as possible
  4. Break the problem into pieces
  5. Avoid getting locked into thinking ruts
  6. Generalize

As he said, “it’s a process of doing research… each one [step] gives you a little insight.” It’s tempting, as a theorist, to claim that at the end of this process you’ve solved the “fundamental” problem, but Gallager admonished us to remember that the first step is to simplify, often dramatically. As Alfred North Whitehead said, we should “seek simplicity and distrust it.”

Multiple Postdoc Openings at USC

Prof. Urbashi Mitra is looking for multiple postdocs. Given that this is the time of year when the future looks murkiest, these are great opportunities!

I am seeking multiple post-doctoral researchers with expertise in one or more of the following areas: Communication Theory, (Statistical) Signal Processing, Controls, Information Theory, and Machine Learning. In particular, the following areas of expertise are of interest: structured inference (sparse approximation, low rank matrix completion, tensor signal processing, graph signal processing); multi-terminal information theory, or information theory at the boundaries of control or signal processing; distributed control, consensus methods, and partially observable Markov decision process modeling and algorithms; modern optimization methods; or biological communications, signal processing, or information theory.

The successful applicants will be expected to perform innovative translational research, mentor PhD students, give oral presentations, write journal papers, and participate in grant writing and project management. There will be significant opportunities for research leadership and interaction with funding agencies.

Ideally, the successful applicants will start in Summer 2016.

Please have your interested graduate students apply using the following portal:

https://jobs.usc.edu/postings/63539

In addition to a CV and research statement, applicants are requested to have three letters of reference uploaded to the system.

Jan Hein van Dierendonck’s Painting of Shannon

Jan Hein van Dierendonck, a science writer and illustrator/cartoonist from Leiden, recently contacted the IT Society about an oil painting he made of Claude Shannon. He has kindly given permission to post it here. It will be used by some of the Shannon Centenary events this year.

Claude Shannon, by Jan Hein van Dierendonck

Claude Elwood Shannon (April 30, 1916 – February 24, 2001)

In the Forties a juggling Claude Elwood Shannon rides a unicycle down the endless hallways of Bell Labs, a telecommunications research laboratory south of New York. Perhaps this balancing act puts his brilliant mind in the right state to look at complex problems in an original way and to devise the formulas that initiate the Digital Era.

As a 21-year-old master’s student at the Massachusetts Institute of Technology, Shannon wrote his thesis demonstrating that electrical switching circuits implementing Boolean algebra could construct and resolve any logical, numerical relationship. In 1948 this mathematician, electronic engineer, and cryptographer published a landmark paper that laid the foundation for information theory. From that moment on, information was something that could be quantified. Whether you are dealing with images, text, or sound: convert everything into zeros and ones and remove all redundant information and noise. This has changed our world completely. Without Shannon’s information theory, your phone simply wouldn’t be smart.

Averse to fame, the professor in electronics preferred tinkering with his amazing magnetic mouse in a maze with memory and his mechanical juggling robots. He also refined his Juggling Theorem: the number of hands (H), multiplied by the total time a ball spends in the air (F) and is held in a hand (D), is in balance with the number of balls (N), multiplied by the total time a hand is empty (V) and is holding a ball (D).
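In symbols (my reading of the sentence above, which matches the usual statement of Shannon’s juggling theorem): (F + D)H = (V + D)N.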

On April 30, 2016, he would have been a hundred.

Linkage

Badass women cartographers!

Looking back on Shoes.

At a DARPA PI meeting recently, I met some folks from Cybernetica who told me about the hot new startup CountryOS! (EDIT: it’s not their startup).

A recent 99% Invisible episode describes the history of SIGSALY, a secure communication system developed during WWII that used one-time pads of white noise pressed onto phonograph records to encrypt voice communications.

Thanks to The Allusionist, I learned about EuroSpeak and discovered this guide on Misused English words and expressions in EU publications, which is hilarious.

Postdoc at Rutgers ECE in Network Science and Statistical Inference

My colleague Laleh Najafizadeh has a postdoc position at Rutgers!

The NeuroImaging Laboratory at the Department of Electrical and Computer Engineering (ECE) at Rutgers University is seeking a highly motivated Postdoctoral Fellow to work on an exciting interdisciplinary project at the intersection of Neuroscience, Network Science, and Statistical Learning and Inference. The applicant will have a unique opportunity to be involved in both the theoretical and experimental development of the project.
The position is open to candidates with a Ph.D. in Electrical Engineering, Computer Science, Statistics or related areas, who are self-driven, have a strong background in mathematics, and have excellent analytical and communication skills. Prior experience of working with neuroimaging data (any modality) is a plus. The appointment is available immediately and will be for 1 year.

Please email your current CV and contact information for three references to Dr. Laleh Najafizadeh at laleh.najafizadeh@rutgers.edu. Early submission is strongly encouraged.

The Rutgers ECE NeuroImaging Laboratory is designed to accommodate both single-subject and hyperscanning multi-modal functional neuroimaging experiments, and is equipped with high-resolution EEG and optical imaging (fNIRS) systems. More information about the laboratory can be found at the lab homepage.

The laboratory is located at Rutgers University–New Brunswick, which is situated at the center of the Northeast Corridor, within 20 miles of Princeton, 40 miles of New York City, and 70 miles of Philadelphia.

There are several opportunities to collaborate with clinicians at Rutgers University. Rutgers Biomedical and Health Sciences is home to the Center for Advanced Biotechnology and Medicine as well as the Rutgers School of Public Health. The Robert Wood Johnson University Hospital, the flagship hospital of the Robert Wood Johnson Health System, is also located a few miles from the ECE Department.

Rutgers is an Equal Opportunity / Affirmative Action Employer.

Postdoc positions at UCLA in Coding Theory

I will write more about the IHP workshop! In the meantime, here are some exciting postdoc opportunities with my ex-classmate Lara Dolecek!

I’m writing to let you know that I have 2 postdoc positions available in my research group at UCLA, starting this summer. I am looking for talented students who want to work on one of the following:

  1. Coding theoretic methods and algorithms for emerging memories and modern computing systems
  2. New algorithms and coding-theoretic techniques for data management (data science)

Both projects are interdisciplinary. Postdocs will be working closely with me and a vibrant group of my graduate students, and will have the opportunity to collaborate with other researchers and to interact with our industry sponsors.

Strong background in mathematics and interest in interdisciplinary research are required.

I would greatly appreciate it if you could pass this information along to interested students.

Prospective students should contact me via email at dolecek@ee.ucla.edu with the subject line “[Prospective postdoc interested in LORIS research]”, along with their CV and 3 selected publications. Students should plan to arrange for 2 letters of recommendation to be sent to the email address above.