WIFS 2014

This week I took a quick jaunt down to Atlanta to attend part of WIFS 2014 (co-located with GlobalSIP 2014). Kamalika and I were invited to give a talk on differential privacy and machine learning, based on our IEEE Signal Processing Magazine article. I’ve uploaded the slides of the tutorial to my website, and we’re planning to make a video (audio over slides) version for SigView as well as for YouTube.

Much like last year, GlobalSIP had a somewhat disjointed, semi-chaotic feel (exacerbated by tiredness, I am sure) — it’s really a collection of semi-interacting workshops in the same space, and I knew people in several of the other workshops. Since I was there for a day and giving a tutorial at WIFS, I decided to stick with WIFS for the day. To give a sense of how confusing it all was, here’s a picture of the guide to deciphering the program book:

Overly-complicated rules for encoding sessions

The keynote for GlobalSIP was given by Vince Poor on information-theoretic privacy via rate distortion (this is the work with Lalitha). Vince did a good job of not over-IT-ing it I think, which was good because the audience was pretty diverse and it’s not clear that many of the people there had even taken a course on information theory. This seems to be the big challenge in multi-disciplinary conferences like GlobalSIP (or large signal processing conferences in general) — everyone is in signal processing, but it’s a big tent and it’s hard to reach everyone.

Min Wu was the keynote speaker for the WIFS workshop on the day I attended. Her talk, “Exploring Power Network Signatures for Information Forensics,” was about how to glean information from fluctuations in the power grid frequency, known as the electric network frequency (ENF). Different processes or operations have different power demands — by matching these signatures to an observed signal (e.g. a video), one can make inferences about the time/location/integrity of the data. For example, were the audio and visual tracks in a video taken at the same time or merged later? This whole area is quite interesting, and while I was sort of aware of this work I hadn’t really read up on much of it.
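To make the matching step concrete, here is a minimal sketch of ENF matching by normalized cross-correlation. This is my own illustration, not code from the talk; `reference` and `observed` are hypothetical arrays of grid-frequency estimates sampled at the same rate.

```python
import numpy as np

def enf_match_offset(reference, observed):
    """Slide the observed ENF trace along a longer reference trace and
    return the time offset with the highest normalized correlation."""
    ref = reference - reference.mean()
    obs = observed - observed.mean()
    n = len(obs)  # assumes len(reference) >= len(observed)
    best_offset, best_score = 0, -np.inf
    for t in range(len(ref) - n + 1):
        window = ref[t:t + n]
        denom = np.linalg.norm(window) * np.linalg.norm(obs)
        score = window @ obs / denom if denom > 0 else 0.0
        if score > best_score:
            best_offset, best_score = t, score
    return best_offset, best_score
```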

Perhaps it was the end of the semester kicking in, but I sort of took terrible notes on most of the talks and poster sessions at the conference, so I can’t really write coherently about the papers I saw. Unfortunately I had to run back to teach the penultimate lecture in my class. I guess now that I have a “real job” this is going to be the way it works from now on. Kind of sad, really.

SPCOM 2014: some talks

Relevance Singular Vector Machine for Low-rank Matrix Sensing
Martin Sundin; Saikat Chatterjee; Magnus Jansson; Cristian Rojas
This talk was on designing Bayesian priors for sparse-PCA problems — the key is to find a prior which induces a low-rank structure on the matrix. The model was something like y = A \mathrm{vec}(X) + n where X is a low-rank matrix and n is noise. The previous state of the art is by Babacan et al., a paper which I obviously haven’t read, but the method they propose here (which involved some heavy algebra/matrix factorizations) appears to be competitive in several regimes. Probably more of interest to those working on Bayesian methods…
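For concreteness, here is a toy numpy sketch of just that measurement model (the Bayesian inference machinery from the talk is not reproduced here); the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r, m = 20, 20, 3, 150   # matrix size, rank, number of measurements

# Low-rank ground truth X = U V^T
U = rng.standard_normal((p, r))
V = rng.standard_normal((q, r))
X = U @ V.T

# Linear sensing: y = A vec(X) + n, with m far less than p*q,
# so any recovery method must exploit the low-rank structure of X.
A = rng.standard_normal((m, p * q))
y = A @ X.ravel() + 0.01 * rng.standard_normal(m)
```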

Non-Convex Sparse Estimation for Signal Processing
David Wipf
More Bayesian methods! David (whom I met at ICML) was not trying to say that the priors are particularly “correct,” but rather that the penalty functions they induce actually make sense for the problems he is studying. More of an algorithmist’s approach, you might say. He set up the problem a bit more generally, considering problems of the form
\min_{\{X_i\}} \sum_{i} \alpha_i \, \mathrm{rank}[X_i] \quad \text{subject to} \quad Y = \sum_{i} A_i(X_i)
where A_i are some operators. He made the case that convex relaxations of many of these problems, while analytically beautiful, have restrictions which are not satisfied in practice, and indeed they often have poor performance. His approach is via Empirical Bayes, but this leads to non-convex problems. What he can show is that the algorithm he proposes is competitive with any method that tries to separate the error from the “low-rank” constraint, and that the new optimization is “smoother.” I’m sure more details are in his various papers, for those who are interested.
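A tiny illustration (mine, not David’s) of why the choice of surrogate matters: the nuclear norm, the usual convex surrogate for rank, is sensitive to the scale of the matrix, while a log-det style non-convex penalty behaves much more like an actual rank count.

```python
import numpy as np

def penalties(X, eps=1e-3):
    s = np.linalg.svd(X, compute_uv=False)
    rank = int(np.sum(s > 1e-10))          # the quantity we actually care about
    nuclear = s.sum()                      # convex surrogate: scales linearly
    logdet = np.sum(np.log(1 + s / eps))   # non-convex surrogate: grows slowly
    return rank, nuclear, logdet

rng = np.random.default_rng(1)
L = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 10))  # rank 2
print(penalties(L))
print(penalties(5 * L))  # same rank; nuclear norm is 5x, log-det barely moves
```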

PCA-HDR: A Robust PCA Based Solution to HDR Imaging
Adit Bhardwaj; Shanmuganathan Raman
My apologies for taking fewer notes on this one, but I don’t know much about HDR imaging, so this was mostly me learning about HDR image processing. There are several different ways of doing HDR, from multiple exposures to flash/no-flash, and so on. The idea is that artifacts introduced by the camera can be modeled using the robust PCA framework, and that denoising in HDR imaging may work better using robust PCA. I think that looking at some of the approaches David mentioned may be good in this domain, since it seems unlikely to me that these images will satisfy the conditions necessary for convex relaxations to work…
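For reference, here is a bare-bones sketch of the robust PCA decomposition Y ≈ L + S via alternating singular-value and entrywise soft thresholding. It is a crude stand-in for the proper ALM/ADMM solvers used in the robust PCA literature, not the algorithm from this paper.

```python
import numpy as np

def robust_pca(Y, lam=None, mu=None, iters=100):
    """Split Y into a low-rank part L and a sparse part S by alternating
    (1) singular value thresholding and (2) entrywise soft thresholding."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(Y).mean()
    S = np.zeros_like(Y)
    for _ in range(iters):
        # Low-rank update: shrink the singular values of the residual
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0)) @ Vt
        # Sparse update: shrink the entries of the new residual
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0)
    return L, S
```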

On Communication Requirements for Secure Computation
Vinod M Prabhakaran
Vinod showed some information-theoretic approaches to understanding how much communication is needed for secure computation protocols like remote oblivious transfer: Xavier has \{X_0, X_1\}, Yvonne has Y \in \{0,1\}, and Zelda wants Z = X_Y, but no party should learn more about the others’ inputs than the protocol requires. Feige, Kilian, and Naor have a protocol for this, which Vinod and co-authors can show is communication-optimal. There were several ingredients here, including cut-set bounds, distribution switching, data processing inequalities, and special bounds for 3-party protocols. More details in his CRYPTO paper (and others).
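The FKN protocol itself is more involved, but the flavor of masking with pre-distributed randomness shows up already in the classic two-party setting. Here is a toy sketch of Rivest-style oblivious transfer with a trusted initializer (my warm-up illustration, not the protocol from the talk).

```python
import secrets

def trusted_initializer():
    """A dealer hands out correlated randomness before the protocol runs."""
    r0, r1, d = secrets.randbelow(2), secrets.randbelow(2), secrets.randbelow(2)
    return (r0, r1), (d, (r0, r1)[d])     # sender's setup, receiver's setup

def oblivious_transfer(x0, x1, choice):
    """Receiver learns x_choice; sender learns nothing about choice;
    receiver learns nothing about the other bit."""
    (r0, r1), (d, rd) = trusted_initializer()
    e = choice ^ d                        # receiver -> sender: one-time-padded choice
    y0 = x0 ^ (r0, r1)[e]                 # sender -> receiver: both bits, masked
    y1 = x1 ^ (r0, r1)[1 ^ e]
    return (y0, y1)[choice] ^ rd          # only the chosen mask is known to receiver

assert all(oblivious_transfer(x0, x1, c) == (x0, x1)[c]
           for x0 in (0, 1) for x1 in (0, 1) for c in (0, 1))
```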

Artificial Noise Revisited: When Eve Has More Antennas Than Alice
Shuiyin Liu; Yi Hong; Emanuele Viterbo
In a MIMO wiretap setting, if the transmitter has more antennas than the intended receiver, it can send artificial noise in the nullspace of the direct channel matrix — as long as the eavesdropper has fewer antennas than the transmitter, secure transmission is possible. In this paper they show that a positive secrecy rate is possible even when the eavesdropper has more antennas, but as the number of eavesdropper antennas grows, the achievable rate goes to 0. Perhaps a little bit of a surprise here!
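Here is a quick numpy sketch of the classical artificial-noise idea (in the style of Goel and Negi) that this paper revisits: noise sent in the nullspace of the main channel vanishes at the intended receiver but generically hits the eavesdropper. The antenna counts are made up.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
nt, nb, ne = 4, 2, 6                    # antennas at Alice, Bob, and Eve

Hb = rng.standard_normal((nb, nt))      # main channel to Bob
He = rng.standard_normal((ne, nt))      # eavesdropper channel to Eve

Z = null_space(Hb)                      # basis for the nullspace (needs nt > nb)
noise = Z @ rng.standard_normal(Z.shape[1])

print(np.linalg.norm(Hb @ noise))       # ~0: Bob sees no artificial noise
print(np.linalg.norm(He @ noise))       # nonzero: Eve gets jammed
```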

Help researchers figure out how to make better passwords

S. Raj Rajagopalan and collaborators at Honeywell are doing some security research on making better passwords. They are looking for some people to do a quick study on password design.

Along with a couple of Honeywell security researchers I am running a study on a rather familiar problem for most of us – creating memorable but secure passwords, i.e. how to generate passwords that are both suitably random and memorable. We have just launched a simple user study that asks volunteers to participate in an interactive session that lets them choose password candidates and see how well they remember them. Needless to say, these are not actual passwords used by any system, only strings that could be used as passwords.

No personal information is collected in the study, and the system only stores the data actually provided by the user; you may choose not to provide any particular piece of information. The study takes only a couple of minutes to finish. You may run it multiple times if you wish (and you will likely get different use cases), but you will have to clear your browser’s cache to get a fresh configuration.

We need at least 300 participants to get statistical significance, so we would appreciate it if you could participate in the study.

Please click here to go to the study: http://138.91.115.120:8080/syspwd

Thanks for your help. Any questions on the study may be directed to me.

Raj

Linkage

I am traveling all over India at the moment so I’m not really able to write contentful posts. Here are even more links instead, sigh. Maybe later I’ll talk about log-Sobolev inequalities so I can be cool like Max.

Speaking of Max, he posted this hilarious bad lip reading version of Game of Thrones. Probably NSFW. I don’t even like the series but it’s pretty funny.

For those who are fans of Rejected, Don Hertzfeldt’s new film is available on Vimeo.

Those who were at Berkeley may remember seeing Ed Reed perform at the Cheeseboard. His album (which I helped fund via indiegogo) was named a Downbeat Editors’ Pick. It’s a great album.

In light of the Snowden leaks, some doubt has been cast on NIST’s crypto standards.

I’m super late to this, but I endorse Andrew’s endorsement of Sergio’s interview with Robert Fano in the IT Newsletter. Here’s just the article, if you want that.

ISIT Blogging, part 3

I’ll round out the end of my ISIT blogging with very brief takes on a few more papers. I took it pretty casually this year in terms of note taking, and while I attended many more talks, my notes for most of them consist of a title and a star next to the ones where I want to look at the paper more closely. That’s probably closer to how most people attend conferences, only they probably use the proceedings book. I actually ended up shredding the large book of abstracts to use as bedding for my vermicompost (I figured they might appreciate eating a little Turkish paper for a change of diet).

On Connectivity Thresholds in Superposition of Random Key Graphs on Random Geometric Graphs
B Santhana Krishnan (Indian Institute of Technology, Bombay, India); Ayalvadi Ganesh (University of Bristol, United Kingdom); D. Manjunath (IIT Bombay, India)
This looked at a model where you have a random geometric graph (RGG) together with a uniformly chosen random subset S_i of \{ 1, 2, \ldots, P_n\} of size K_n at each node. The subset is the set of keys available at each node; two nodes can talk (securely) if they share a key in common, so we keep an edge of the RGG only if the link can be secured. The question is whether the secure-link graph is connected. It turns out that the important scaling is in terms of r_n^2 K_n^2/P_n, where r_n is the connectivity radius of the RGG. This sort of makes sense, as the threshold is more or less \Theta(\log n/n), so the keys provide a kind of discount factor on the effective radius needed for connectivity — if the number of keys per node is small then you need a larger radius to compensate.
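A quick Monte Carlo sketch of the model, purely to make the setup concrete (my own toy code, with made-up parameter values):

```python
import numpy as np

def secure_graph_connected(n, r, P, K, rng):
    """Sample the model once: keep an RGG edge only if the two endpoints
    share at least one of their K keys drawn from a pool of size P."""
    pts = rng.random((n, 2))
    keys = [set(rng.choice(P, size=K, replace=False)) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) <= r and keys[i] & keys[j]:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]                 # depth-first connectivity check
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

rng = np.random.default_rng(0)
print(np.mean([secure_graph_connected(200, 0.15, 100, 10, rng)
               for _ in range(20)]))       # empirical probability of connectivity
```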

Secure Network Coding for Distributed Secret Sharing with Low Communication Cost
Nihar B Shah (University of California at Berkeley, USA); K. v. Rashmi (University of California at Berkeley, USA); Kannan Ramchandran (University of California at Berkeley, USA)
This paper was on secret sharing — a dealer wants to distribute n shares of a secret such that any k of them can be used to reconstruct the secret but k-1 or fewer cannot. The idea here is that the dealer has to distribute these shares over the network, which means that if a receiver is not connected directly to the dealer then its share will be passed insecurely through another node. Existing approaches based on pairwise agreement protocols are communication intensive. The idea is to use network coding to share masked versions of shares so that intermediate nodes will not get valid shares from others. To do this the graph needs to satisfy a particular condition (k-propagating), which is defined in the paper. A neat take on the problem, and worth looking at if you’re interested in that sort of thing.
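As background, here is a minimal sketch of classical Shamir (k, n) secret sharing over a prime field. This is the textbook construction, not the network-coding scheme from the paper.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime, big enough for toy secrets

def make_shares(secret, k, n):
    """Hide the secret in the constant term of a random degree-(k-1)
    polynomial over GF(P) and hand out n evaluations as shares."""
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```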

Conditional Equivalence of Random Systems and Indistinguishability Proofs
Ueli Maurer (ETH Zurich, Switzerland)
This was scheduled to be in the same session as my paper with Vinod, but was moved to an earlier session. Maurer’s “programme,” as it were, is to think about security via three kinds of systems: real systems with real protocols and pseudorandomness, idealized systems with real protocols but true randomness, and perfect systems which just exist on paper. The first two are computationally indistinguishable (assuming the pseudorandomness is any good), and the goal is to show that the last two are information-theoretically indistinguishable. This conceptual framework is actually useful for me in separating out the CS and IT sides of the security design question. The paper sets up a framework in which a distinguisher D makes queries to two systems and, based on the answers, has to decide whether or not they are different. I think if you’re interested in a systems-theoretic take on security you should take a look at this.
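A toy version of the distinguisher abstraction, with two coin-flipping “systems” standing in for real cryptographic ones (my illustration, not Maurer’s formalism): the distinguisher’s advantage grows with the number of queries it is allowed.

```python
import random

def system_a():
    return random.random() < 0.5      # a fair coin

def system_b():
    return random.random() < 0.6      # a slightly biased coin

def distinguisher(system, q):
    """Query the system q times; guess 'B' if the empirical frequency
    of 1s is closer to 0.6 than to 0.5."""
    freq = sum(system() for _ in range(q)) / q
    return abs(freq - 0.6) < abs(freq - 0.5)

trials = 2000
for q in (10, 100, 1000):
    adv = (sum(distinguisher(system_b, q) for _ in range(trials)) / trials
           - sum(distinguisher(system_a, q) for _ in range(trials)) / trials)
    print(f"{q:5d} queries: estimated advantage {adv:.3f}")
```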

Tight Bounds for Universal Compression of Large Alphabets
Jayadev Acharya (University of California, San Diego, USA); Hirakendu Das (University of California San Diego, USA); Ashkan Jafarpour (UCSD, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)
The main contribution of this paper is a tight bound on the universal compression of patterns of sequences over unknown/large alphabets: the worst-case pattern redundancy for i.i.d. distributions scales essentially as n^{1/3}, where n is the blocklength. The new upper bound uses some tricks, like sampling a random number of points (with the number of samples Poisson distributed) and a partition of the set of distributions induced by Poisson sampling.
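For those unfamiliar with the terminology, the “pattern” of a sequence relabels symbols by order of first appearance, so the combinatorial structure survives even when the alphabet is unknown. A two-line sketch:

```python
def pattern(seq):
    """Relabel each symbol by the order in which it first appears."""
    labels = {}
    return [labels.setdefault(s, len(labels) + 1) for s in seq]

print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
print(pattern("xyzxjxkxyzx"))  # the same pattern, over a different alphabet
```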

To Surprise and Inform
Lav R. Varshney (IBM Thomas J. Watson Research Center, USA)
Lav talked about communication over a channel where the goal is to communicate subject to a constraint on the Bayesian surprise s(x) = D( p(Y|x) \| P(Y) ), where X and Y are the input and output of the channel. He gets a single-letter expression for the capacity under a bound on the max surprise and gives an example for which the same distribution maximizes mutual information and achieves the minimax surprise. The flip side is to ask for the capacity when each output should be surprising (or “attention seeking”). He gets a single-letter capacity here as well, but the structure of the solution seems to be a bit more complicated.
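As a sanity check on the definition, here is the Bayesian surprise computed for a binary symmetric channel (a toy example of mine, not from the paper). Averaging s(x) over the input distribution recovers the mutual information.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

eps = 0.1                                  # BSC crossover probability
W = np.array([[1 - eps, eps],              # row x: p(y | x)
              [eps, 1 - eps]])
px = np.array([0.7, 0.3])                  # input distribution
py = px @ W                                # output marginal p(y)

surprise = [kl(W[x], py) for x in (0, 1)]  # s(x) = D(p(Y|x) || p(Y))
print("surprise per input:", surprise)
print("I(X;Y) = E[s(X)]:", px @ surprise)
```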

i’m in ur protocolz, jammin ur cellphonez

Krish Eswaran sent me a story about how a group at Virginia Tech described how LTE networks are susceptible to a certain kind of jamming strategy:

“An example strategy would be to target specific control or synchronization signals, in order to increase the geographic range of the jammer and better avoid detection,” the Wireless @ Virginia Tech research group said in a filing (PDF) submitted to the National Telecommunications and Information Administration. “The availability of low-cost and easy to use software-defined radios makes this threat even more realistic.”

Color me unsurprised! For my PhD, I studied arbitrarily varying channels (AVCs), which are information-theoretic models for communication against adversarial interference. There are a couple of design insights one can distill from considering the AVC model:

  • Separating protocol and payload makes schemes susceptible to spoofing.
  • Lack of synchronization/coordination between sender and receiver can be a real problem in adversarial settings.

Here we have a case where the protocol is easy to spoof/disrupt, essentially because the control information is unprotected.

This separation between control information and payload is often suboptimal in other senses. See, for example, Tchamkerten, Chandar and Wornell.

DIMACS Workshop on Information-Theoretic Network Security

At DIMACS, I got a notice about a workshop coming up in November, with a registration deadline of November 5: the DIMACS Workshop on Information-Theoretic Network Security, organized by Yingbin Liang and Prakash Narayan. Should be worth checking out — they have a nice slate of talks.

If you do come though, don’t stay at the Holiday Inn — go for The Heldrich or a Hyatt or something that is anywhere near walking distance to restaurants or something. I think I almost got run over going to Walgreens yesterday in this land of strip malls…