I just attended SPCOM 2014 at the Indian Institute of Science in Bangalore — many thanks to the organizers for the invitation! SPCOM 2014 happens every two years and is a mix of invited and submitted papers (much like Allerton). This year they mixed the invited talks in with the regular talks, which I thought was a great idea — since invited papers were not tied to specific sessions, it makes a lot more sense to do it that way, plus it avoids a sort of “two-tier” system.

I arrived early enough to catch the tutorials on the first day. There was a 3-hour session in the morning and another in the afternoon. For the morning I decided to expand my horizons by attending Manoj Gopalkrishnan‘s tutorial on the physics of computation. Manoj focused on the question of how much energy it takes to erase or copy a bit of information. He started with some historical context via von Neumann, Szilard, and Landauer to build a correspondence between familiar information-theoretic concepts and their physical counterparts. So in this correspondence, relative entropy is the same as free energy. He then turned to look at what one might call “finite time” thermodynamics. Suppose that you have to apply a control that operates in finite time in order to change a bit. One way to look at this is through controlling the transition probabilities in a two-state Markov chain representing the value of the bit you want to fix: you want to drive the resting state (with stationary distribution (1/2,1/2)) to something like (\epsilon, 1 - \epsilon) within time T. At this level I more or less understood what was going on, but since my physics background is pretty poor, I think I missed out on how the physical intuition/constraints restrict which control strategies you can choose.
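
To make the finite-time picture concrete, here is a tiny sketch of my own (not from the tutorial): a two-state chain whose stationary distribution is (\epsilon, 1 - \epsilon), where a parameter r stands in for how hard the control pushes. Driving the bit further within a fixed time T requires a more aggressive control (larger transition probabilities).

import numpy as np

# My own toy sketch (not from the tutorial): a two-state chain with stationary
# distribution (eps, 1 - eps); the knob r is a stand-in for how aggressively
# the control drives the bit within T steps.
eps, r, T = 0.01, 0.1, 50
p01, p10 = (1 - eps) * r, eps * r          # transition probabilities 0 -> 1 and 1 -> 0
K = np.array([[1 - p01, p01],
              [p10, 1 - p10]])             # stationary distribution is (eps, 1 - eps)

pi = np.array([0.5, 0.5])                  # resting distribution of the bit
for _ in range(T):
    pi = pi @ K                            # one step of the controlled dynamics
print(pi)                                  # approaches (eps, 1 - eps) as r*T grows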

Prasad Santhanam gave the other tutorial, which was somewhat more solid ground for me. This was not quite a tutorial on large-alphabet probability estimation, but more directly on universal compression and redundancy calculations. The basic setup is that you have a family of distributions \mathcal{P} and you don’t know which distribution p \in \mathcal{P} will generate your data. Based on the data sample you want to do something: estimate some property of the distribution, compress the sample to a size close to its entropy, etc. A class can be weakly or strongly compressible, or insurable (which means being able to estimate quantiles), and so on. These problems turn out to differ from each other depending on some topological features of the class. One interesting thing for the machine learners out there is the stopping time that appears in some of the analyses: as you are going along, observing the data and doing your task (estimation, compression, etc.), can you tell from the data that you are doing well? This has major implications for whether or not an online algorithm can even work the way we want it to, and is something Prasad calls being “data-driven compressible.”

I’ll try to write another post or two about the talks I saw as well!

Identification via the Broadcast Channel
Annina Bracher (ETH Zurich, Switzerland); Amos Lapidoth (ETHZ, Switzerland)
The title pretty much describes it — there are two receivers which are both looking out for a particular message. This is the identification problem, in which the receiver only cares about a particular message (but we don’t know which one) and we have to design a code such that they can detect the message. The number of messages is 2^{2^{nC}} where C is the Shannon capacity of the DMC. In the broadcast setting we run into the problem that the errors for the two receivers are entangled. However, their message sets are disjoint. The way out is to look at the average error for each (averaged over the other user’s message). The main result is that the rates only depend on the conditional marginals, and they have a strong converse.

Efficient compression of monotone and m-modal distributions
Jayadev Acharya (University of California, San Diego, USA); Ashkan Jafarpour (University of California, San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)
A monotone distribution is a distribution on \mathbb{N} such that the probabilities are non-increasing. The redundancy for this class is infinite, alas, so they restrict the support to size k (where k can be large). They propose a two-step compression scheme in which the first step is to approximate the true distribution with a piecewise constant step distribution, and then use a compression scheme for step distributions.
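
To give a flavor of the first step (my own sketch with dyadic intervals, not the authors’ scheme), here is what approximating a monotone pmf by a piecewise-constant step distribution looks like:

import numpy as np

# Sketch (not the authors' scheme): approximate a monotone pmf on {0,...,k-1}
# by a pmf that is constant on each of a handful of dyadic intervals.
k = 1024
p = 1.0 / np.arange(1, k + 1)          # a monotone (non-increasing) shape
p /= p.sum()

edges = [0] + [2 ** j for j in range(int(np.log2(k)) + 1)]   # 0, 1, 2, 4, ..., k
q = np.empty_like(p)
for lo, hi in zip(edges[:-1], edges[1:]):
    q[lo:hi] = p[lo:hi].mean()         # constant on each interval

print(np.abs(p - q).sum())             # small L1 error with only about log k pieces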

Writing on a Dirty Paper in the Presence of Jamming
Amitalok J Budkuley (Indian Institute of Technology, Bombay, India); Bikash K Dey (Indian Institute of Technology Bombay, India); Vinod M Prabhakaran (Tata Institute of Fundamental Research, India)
Ahh, jamming. A topic near and dear to my heart. This paper takes a game-theoretic approach to jamming in a DPC setup: “the capacity of the channel in the presence of the jammer is the unique Nash equilibrium utility of the zero sum communication game between the user and the jammer.” This is a mutual information game, and they show that i.i.d. Gaussian jamming and dirty paper coding are a Nash equilibrium. I looked at an AVC version of this problem in my thesis, and the structure is quite a bit different, so this was an interesting different take on the same problem — how can we use the state information to render adversarial interference as harmless as noise?

Stable Grassmann Manifold Embedding via Gaussian Random Matrices
Hailong Shi (Tsinghua University & Department of Electronic Engineering, P.R. China); Hao Zhang (Tsinghua University, P.R. China); Gang Li (Tsinghua University, P.R. China); Xiqin Wang (Tsinghua University, P.R. China)
This was in the session I was chairing. The idea is that you are given a subspace (i.e., a point on the Grassmann manifold) and you want to see what happens when you randomly project it into a lower-dimensional space using an i.i.d. Gaussian matrix a la Johnson-Lindenstrauss. The JL Lemma says that such projections are approximately length-preserving. Are they also volume-preserving? It turns out that they are (no surprise). The main tools are measure concentration results together with a union bound over a covering set.
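
Here is a quick numerical illustration of the flavor of the result (my sketch): project an orthonormal basis of a subspace with an i.i.d. Gaussian matrix and check that the singular values, and hence the subspace volume, barely move.

import numpy as np

# Sketch: embed a d-dimensional subspace of R^n into R^m with an i.i.d.
# Gaussian matrix (scaled by 1/sqrt(m)); the singular values of the projected
# basis stay near 1, so lengths and volumes are roughly preserved.
rng = np.random.default_rng(0)
n, m, d = 1000, 200, 5
U, _ = np.linalg.qr(rng.standard_normal((n, d)))    # orthonormal basis of the subspace
S = rng.standard_normal((m, n)) / np.sqrt(m)        # JL-style Gaussian projection

sv = np.linalg.svd(S @ U, compute_uv=False)
print(sv)                     # all close to 1
print(np.prod(sv))            # d-dimensional volume distortion, close to 1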

Is “Shannon capacity of noisy computing” zero?
Pulkit Grover (Carnegie Mellon University, USA)
Yes. I think. Maybe? Pulkit set up a physical model for computation and used a cut-set argument to show that the total energy expenditure is high. I started looking at the paper in the proceedings and realized that it’s significantly different than the talk though, so I’m not sure I really understood the argument. I should read the paper more carefully. You should too, probably.

As I wrote before, I took pretty woeful notes during ISIT this year, so I don’t have much to write about. Andrea Goldsmith’s plenary was about how we always say IT/Comm is dead, but she thinks we should be more sanguine about it. She presented a glimpse of some recent work with Stefano Rini on a unified approach for proving achievability results for single-hop networks, using a graph to represent the superposition coding and binning operations among the auxiliary variables. If it is actually as easy to use as advertised, it might save us from the 23+ rate inequalities defining some achievable rate regions. The moral of the story is that it’s sometimes better to clean up our existing results a bit. I think the El Gamal and Kim book did a great job of this for basic multiterminal IT, for example.

Vijay Kumar’s plenary was on codes for distributed storage and repair-bandwidth tradeoffs, focusing on extensions of the model. There was a lot of discussion of other code constructions, and how asking for certain properties (such as “locality”) can cost you something in the tradeoff. This is important when you can’t repair a code from arbitrary nodes in the network/data center — because there’s an underlying network which supplies the data for repair, codes should probably respect that network. At least that was the moral I took from this talk. Since I don’t work on coding, some things were a little over my head, but I thought he did an excellent job of keeping it accessible with nice concrete examples.

Janos Körner’s Shannon lecture was “On the Mathematics of Distinguishable Difference.” He began by remarking how information theory in Hungary was really a branch of mathematics — at least that’s how Rényi viewed it — and how collaborations made him appreciate some of the operational significance of information problems. On the other hand, he also made a strong case for understanding the combinatorial structure of many problems abstractly. A simple motivating example was something he called “trifference.” Basically he is asking for

T(n) = \max \{ |C| : C \subseteq \{ 0, 1, 2 \}^n, \ \forall\ \mathrm{distinct}\ x, y, z \in C\ \ \exists i\ \mathrm{s.t.}\ \{x_i, y_i, z_i\} = \{0, 1, 2\} \}

That’s a mouthful! We want a set of ternary codewords C such that for any three distinct codewords there is at least one position in which they take all three different values. What’s the maximum size of such a set? That’s T(n). More specifically, we want the rate of growth of this thing. What we have are upper and lower bounds:

\frac{1}{4} \log \frac{9}{5} \le \limsup_{n \to \infty} \frac{1}{n} \log T(n) \le \log \frac{3}{2}.

It turns out that simple i.i.d. random coding is not going to give you a good set — the lower bound comes from a non-uniform random codebook. The upper bound is actually a capacity of a hypergraph. This led him to his second topic, which was on hypergraph entropy, a generalization of graph entropy. This is connected to Sperner families of subsets: collections such that for any pair of subsets, neither contains the other. The rate of growth of Sperner families is also related to the hypergraph entropy.
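
Just to pin down the trifference definition, here is a tiny brute-force check (my own sketch), which also confirms the value for n = 2:

from itertools import combinations, product

# Sketch: check "trifference" -- every triple of distinct codewords must have
# a coordinate in which the three symbols are exactly {0, 1, 2}.
def trifferent(C):
    return all(any(set(t) == {0, 1, 2} for t in zip(x, y, z))
               for x, y, z in combinations(C, 3))

print(trifferent([(0, 0), (0, 1), (1, 2), (2, 2)]))   # True: a size-4 code for n = 2

# Brute force T(2) over all subsets of {0,1,2}^2 (small enough to enumerate).
words = list(product(range(3), repeat=2))
print(max(len(C) for r in range(1, 10)
          for C in combinations(words, r) if trifferent(C)))   # prints 4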

I didn’t really manage to take as good notes as I might have wanted, but I really enjoyed the lecture, and you can too, now that the video has been posted on the IT Society website. I heard some people complaining that the talk was a little too technical while walking out of the plenary hall, but for me, it was quite clear and quite interesting, even at 8:30 in the morning. You can’t please everyone I guess!

Due to jetlag, my CAREER proposal deadline, and perhaps a bit of general laziness, I didn’t take as many notes at ISIT as I would have liked, so my posting will be somewhat light (in addition to being almost a month delayed). If someone else took notes on some talks and wants to guest-post on them, let me know!

Strong Large Deviations for Composite Hypothesis Testing
Yen-Wei Huang (Microsoft Corporation, USA); Pierre Moulin (University of Illinois at Urbana-Champaign, USA)
This talk was actually given by Vincent Tan, since neither of the authors could make it (this seems to be a theme of talks I’ve attended this summer). The paper is about testing a simple hypothesis H_1 versus a composite hypothesis H_0, where under H_0 the observations are i.i.d. according to one of k possible distributions. There are therefore k different error probabilities, and the goal is to characterize them when we ask for the probability of correct detection to be greater than 1 - \epsilon. This is a sort of generalized Neyman-Pearson setup. They look at the vector of log-likelihood ratios and show that a threshold test is nearly optimal. At the time, I understood the idea of the proof, but I think it’s one of those things where you really need to read the paper.
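
As a cartoon of the kind of test being analyzed (a generic threshold test on the vector of log-likelihood ratios, not the exact procedure from the paper), something like this:

import numpy as np

# Sketch (generic, not the paper's test): observe x^n, compute the
# log-likelihood ratio of the simple hypothesis H1 against each of the k
# distributions making up the composite null, and declare H1 only if every
# ratio clears a threshold.
rng = np.random.default_rng(0)
alphabet = 4
p1 = np.array([0.7, 0.1, 0.1, 0.1])                   # H1
p0 = np.array([[0.25, 0.25, 0.25, 0.25],              # the k null distributions
               [0.4, 0.3, 0.2, 0.1]])
n, thresh = 200, 20.0                                 # threshold in nats

x = rng.choice(alphabet, size=n, p=p1)                # data drawn under H1
counts = np.bincount(x, minlength=alphabet)
llrs = counts @ (np.log(p1)[:, None] - np.log(p0.T))  # one LLR per null distribution
print(llrs, np.all(llrs > thresh))                    # declare H1 if all exceed the threshold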

Randomized Sketches of Convex Programs with Sharp Guarantees
Mert Pilanci (University of California, Berkeley, USA); Martin J. Wainwright (University of California, Berkeley, USA)
This talk was about using random projections to lower the complexity of solving a convex program. Suppose we want to minimize \| Ax - y \|^2 over x given y. A sketch of this program is to solve \| SAx - Sy \|^2 instead, where S is a random projection. One question is how to choose S. They show that if S is a randomized Hadamard matrix (the paper also studies Gaussian matrices), then the objective value of the sketched program is at most (1 + \epsilon)^2 times the value of the original program, as long as the number of rows of S is larger than O( \epsilon^{-2} \mathbb{W}^2(A \mathcal{K})), where \mathbb{W}(A \mathcal{K}) is the Gaussian width of the tangent cone of the constraint set at the optimum. For more details look at their preprint on ArXiV.
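
Here is a toy version of the sketching idea, using a Gaussian S for simplicity (I’m not computing the Gaussian-width condition, just picking a generous number of rows):

import numpy as np

# Toy sketch (Gaussian S rather than a randomized Hadamard matrix): compare the
# objective value of the sketched least-squares problem with the original one.
rng = np.random.default_rng(0)
n, d, m = 5000, 50, 400                       # m = number of rows of S
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + rng.standard_normal(n)

x_full, *_ = np.linalg.lstsq(A, y, rcond=None)             # original program
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)   # sketched program

f = lambda x: np.linalg.norm(A @ x - y) ** 2
print(f(x_sketch) / f(x_full))                # close to 1, i.e., within a (1 + eps)^2 factor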

On Efficiency and Low Sample Complexity in Phase Retrieval
Youssef Mroueh (MIT-IIT, USA); Lorenzo Rosasco (DIBRIS, Unige and LCSL – MIT, IIT, USA)
This was another talk not given by the authors. The problem is recovery of a complex vector x_0 \in \mathbb{C}^n from phaseless measurements of the form b_i = |\langle a_i, x_0 \rangle|^2, where the a_i are complex spherically symmetric Gaussian vectors. Recovery from such measurements is nonconvex and tricky, but an alternating minimization algorithm can reach a local optimum, and if you start it from a “good” initial position, it will find a global optimum. The contribution of this paper is to provide such a smart initialization. The idea is to “pair” the measurements to create new measurements y_i = \mathrm{sign}( b_i^{(1)} - b_i^{(2)} ). This leads to a new problem (with half as many measurements) which is still hard, so they find a convex relaxation of that. I had thought briefly about such sensing setups a long time ago (and by thought, I mean puzzled over it at a coffee shop once), so it was interesting to see what was known about the problem.
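
For concreteness, here is just the measurement model and the pairing step (my sketch — not the initialization or the recovery algorithm):

import numpy as np

# Sketch of the data model and the pairing step only: phaseless measurements
# b_i = |<a_i, x0>|^2 with complex Gaussian a_i, paired up to form sign
# measurements y_i = sign(b_i^(1) - b_i^(2)).
rng = np.random.default_rng(0)
n, m = 64, 512                                   # m assumed even
x0 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
b = np.abs(A @ x0) ** 2                          # phaseless measurements

b1, b2 = b[0::2], b[1::2]                        # pair consecutive measurements
y = np.sign(b1 - b2)                             # m/2 sign measurements
print(y[:10])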

Sorting with adversarial comparators and application to density estimation
Jayadev Acharya (University of California, San Diego, USA); Ashkan Jafarpour (University of California, San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)
Ashkan gave this talk on a problem where you have m samples from an unknown distribution p and a set of distributions \{q_1, q_2, \ldots, q_n\} to compare against. You want to find the distribution in the set that is closest to p in \ell_1. One way to do this is via a Scheffe tournament that compares all pairs of distributions — this runs in n^2 time. They show a method that runs in O(n) time by studying the structure of the comparators used in the sorting method. The motivation is that running comparisons can be expensive (especially if they involve human decisions), so we want to minimize the number of comparisons. The paper is significantly different from the talk, but I think it would definitely be interesting to those interested in discrete algorithms. The density estimation problem is really just a motivator — the sorting problem is far more general.
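
For reference, here is the quadratic-time baseline, i.e., a vanilla Scheffe tournament against the empirical distribution (my sketch, not the authors’ O(n) method):

import numpy as np

# Sketch of the quadratic-time baseline (a classical Scheffe tournament), not
# the authors' O(n) method.  Distributions are over a finite alphabet.
def scheffe_tournament(samples, qs):
    k = qs.shape[1]
    emp = np.bincount(samples, minlength=k) / len(samples)   # empirical pmf
    wins = np.zeros(len(qs), dtype=int)
    for i in range(len(qs)):
        for j in range(i + 1, len(qs)):
            A = qs[i] > qs[j]                                # Scheffe set of the pair
            # the candidate whose mass on A is closer to the empirical mass wins
            if abs(qs[i][A].sum() - emp[A].sum()) <= abs(qs[j][A].sum() - emp[A].sum()):
                wins[i] += 1
            else:
                wins[j] += 1
    return np.argmax(wins)                                   # index of the overall winner

rng = np.random.default_rng(0)
k, n = 20, 10
qs = rng.dirichlet(np.ones(k), size=n)                       # candidate pmfs
p = qs[3]                                                    # truth is one of them
samples = rng.choice(k, size=2000, p=p)
print(scheffe_tournament(samples, qs))                       # most likely prints 3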

More like an old paper… this is (finally) a journal version of some older work that we did on analyzing Bayesian nonparametric estimators from an information-theoretic (redundancy) perspective.

Redundancy of Exchangeable Estimators
Narayana P. Santhanam, Anand D. Sarwate and Jae Oh Woo

Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings. On the other hand, the notion of the pattern of a sequence provides an information-theoretic framework for data compression in large alphabet scenarios. Because data compression and parameter estimation are intimately related, we study the redundancy of Bayes estimators coming from Poisson-Dirichlet priors (or “Chinese restaurant processes”) and the Pitman-Yor prior. This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression. In particular, we identify relations between alphabet sizes and sample sizes where the redundancy is small, thereby characterizing useful regimes for these estimators.

In the large alphabet setting, one thing we might be interested in is sequential prediction: I observe a sequence of butterfly species and want to predict whether the next butterfly I collect will be new or one that I have seen before. One simple way to do this prediction is to put a prior on the set of all distributions on infinite supports and do inference on that model given the data. This corresponds to the so-called Chinese Restaurant Process (CRP) approach to the problem. The information-theoretic view is that sequential prediction is equivalent to compression: the estimator is assigning a probability q(x^n) to the sequence x^n seen so far. An estimator is good if, for any distribution p and any x^n drawn i.i.d. according to p, the divergence between p(x^n) and q(x^n) is “small.” The goal of this work is to understand when CRP estimators are good in this sense.
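
For the butterfly collector, the basic one-parameter CRP predictive rule is easy to state (this is the standard rule, not anything specific to our paper): with concentration parameter \alpha, after n observations the next butterfly is a new species with probability \alpha/(n+\alpha) and species j with probability n_j/(n+\alpha).

import numpy as np

# The standard one-parameter Chinese Restaurant Process predictive rule.
# After seeing counts n_1, ..., n_k over k species in n observations, the next
# observation is a new species w.p. alpha / (n + alpha) and species j w.p.
# n_j / (n + alpha).
def crp_predict(counts, alpha=1.0):
    n = sum(counts)
    p_new = alpha / (n + alpha)
    p_seen = np.array(counts) / (n + alpha)
    return p_new, p_seen

counts = [7, 2, 1]                      # three species observed so far
p_new, p_seen = crp_predict(counts)
print(p_new, p_seen)                    # 1/11 for a new species; 7/11, 2/11, 1/11 otherwise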

This sort of falls in with the “frequentist analysis of Bayesian procedures” thing which some people work on.

While trying to show a student a generic example of a paper’s structure, I came across this gem:

A sample from the IT Transactions of 1992

I feel like I am reading a MacWrite document while wearing a flannel shirt.

Some shorter takes on these papers, some of which I should read in more detail later. I figure I’ll use the blog for some quick notes and to see if any readers have any comments/ideas about these:

Differentially Private Convex Optimization with Piecewise Affine Objectives (Shuo Han, Ufuk Topcu, George J. Pappas) — arXiv:1403.6135 [math.OC]. The idea here is to look at minimizing functions of the form
f(x) = \max_{i = 1,2, \ldots, m} \{ a_i^{\top} x + b_i \}
subject to x belonging to some convex polytope \mathcal{P}. This is a bit different from the kind of convex programs I’ve been looking at (which are more ERM-like). Such programs occur often in resource allocation problems. Here the users’ private information is the offsets b_i. They propose a number of methods for generating differentially private approximations to this problem. Analyzing the sensitivity of this optimization is tricky, so they use an upper bound based on the diameter of the feasible set \mathcal{P} to find an appropriate noise variance. The exponential mechanism also gives a feasible mechanism, although the exact dependence of the suboptimality gap on \epsilon is unclear. They also propose a noisy subgradient method where, instead of using SGD, they alter the sampling distribution using the exponential mechanism to choose a gradient step. Some preliminary experiments are also given (although none exploring the dependence on \epsilon, which would also be very interesting)!
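
To see the structure of the underlying (non-private) problem: it is just a linear program via the usual epigraph trick. Here is a sketch (the polytope and data are made up, and none of the paper’s privacy mechanisms are reproduced):

import numpy as np
from scipy.optimize import linprog

# Sketch of the non-private problem only: minimize max_i (a_i^T x + b_i) over a
# polytope {x : Cx <= d} via the epigraph reformulation, an LP in (x, t).
# The paper's mechanisms add carefully calibrated noise (e.g., to the offsets
# b_i); that part is not reproduced here.
rng = np.random.default_rng(0)
m, dim = 5, 2
a = rng.standard_normal((m, dim))
b = rng.standard_normal(m)                      # the users' private offsets

# Feasible polytope: the box -1 <= x <= 1, written as Cx <= d.
C = np.vstack([np.eye(dim), -np.eye(dim)])
d = np.ones(2 * dim)

# Variables z = (x, t); minimize t subject to a_i^T x - t <= -b_i and Cx <= d.
c_obj = np.r_[np.zeros(dim), 1.0]
A_ub = np.block([[a, -np.ones((m, 1))],
                 [C, np.zeros((2 * dim, 1))]])
b_ub = np.r_[-b, d]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (dim + 1))
print(res.x[:dim], res.x[-1])                   # minimizer x and value max_i(a_i^T x + b_i)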

Assisted Common Information with an Application to Secure Two-Party Sampling (Vinod M. Prabhakaran, Manoj M. Prabhakaran) — arXiv:1206.1282 [cs.IT]. This is the final version of the journal version of a few conference papers that Vinod and Manoj have done on an interesting variant of the Gács-Körner problem. The motivation is from secure multiparty computation — the problem also touches on some work Vinod and I started but is sadly languishing due to the utter overwhelmingness of starting a new job. Hopefully I can get back to it this summer.

Analysis of Distributed Stochastic Dual Coordinate Ascent (Tianbao Yang, Shenghuo Zhu, Rong Jin, Yuanqing Lin) — arXiv:1312.1031 [cs.DC]. The title pretty much sums it up. I’m interested in looking a bit more at the analysis method, since I had a similar algorithm bouncing around my head that I would like to analyze. The main idea is to also update the primal variables to achieve a speedup/use a larger step size.

Convergence of Stochastic Proximal Gradient Algorithm (Lorenzo Rosasco, Silvia Villa, Bang Công Vũ) — arXiv:1403.5074 [math.OC]. This is a similar setup to my last post, with a convex objective that has a smooth and a non-smooth component. They show convergence in expectation and almost surely. The key here is that they show convergence in an infinite-dimensional Hilbert space instead of, say, \mathbb{R}^d.

There’s this nice calculation in a paper of Shannon’s on optimal codes for Gaussian channels which essentially provides a “back of the envelope” way to understand how noise that is correlated with the signal can affect the capacity. I used this as a geometric intuition in my information theory class this semester, but when I described it to other folks I know in the field, they said they hadn’t really thought of capacity in that way. Perhaps it’s all of the AVC-ing I did in grad school.

Suppose I want to communicate over an AWGN channel

Y = X + Z

where Z \sim \mathcal{N}(0,N) and X satisfies a power constraint P. The lazy calculation goes like this. For any particular message m, the codeword X^n(m) is going to be i.i.d. \mathcal{N}(0,P), so with high probability it has length about \sqrt{n P}. The noise is independent with \mathbb{E}[ X Z ] = 0, so \langle X^n, Z^n \rangle \approx 0 with high probability; that is, Z^n is more-or-less orthogonal to X^n(m), and it has length about \sqrt{nN}. So we have the following right triangle:

Geometric picture of the AWGN channel

Looking at the figure, we can calculate \sin \theta using basic trigonometry:

\sin \theta = \sqrt{ \frac{N}{N + P} },

so

\log \frac{1}{\sin \theta} = \frac{1}{2} \log \left( 1 + \frac{P}{N} \right),

which is the AWGN channel capacity.
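
A quick simulation bears out the lazy calculation (my own sanity check): the codeword and noise lengths concentrate, and the two vectors are nearly orthogonal.

import numpy as np

# Sanity check of the "lazy calculation": draw X^n and Z^n i.i.d. and look at
# their lengths and the angle between them.
rng = np.random.default_rng(0)
n, P, N = 100000, 4.0, 1.0
X = rng.normal(0, np.sqrt(P), n)
Z = rng.normal(0, np.sqrt(N), n)

print(np.linalg.norm(X) / np.sqrt(n * P))                 # ~ 1
print(np.linalg.norm(Z) / np.sqrt(n * N))                 # ~ 1
print(X @ Z / (np.linalg.norm(X) * np.linalg.norm(Z)))    # ~ 0 (nearly orthogonal)
print(np.linalg.norm(X + Z) / np.sqrt(n * (P + N)))       # ~ 1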

We can do the same thing for rate-distortion (I learned this from Mukul Agarwal and Anant Sahai when they were working on their paper with Sanjoy Mitter). There we have Gaussian source X^n with variance \sigma^2, distortion D and a quantization vector \hat{X}^n. But now we have a different right triangle:

Geometry of the rate-distortion problem

Here the distortion is the “noise” but it’s dependent on the source X^n. The “test channel” view says that the quantization \hat{X}^n is corrupted by independent (approximately orthogonal) noise to form the original source X^n. Again, basic trigonometry shows us

\log \frac{1}{\sin \theta} = \frac{1}{2} \log \left( \frac{\sigma^2}{D} \right).

Turning back to channel coding, what if we have some intermediate picture, where the noise is slightly negatively correlated with the signal, so \mathbb{E}[ X Z ] = - \rho? Then the cosine of the angle between X and Z in the picture is \frac{\rho}{\sqrt{NP}} and we have a general triangle like this:

Geometry for channels with correlated noise

where we’ve calculated the length of Y^n using the law of cosines:

\|Y^n\|^2 = n (P + N) - 2 n \sqrt{ N P } \cdot \frac{\rho}{\sqrt{NP}}.

So now we just need to calculate \theta again. The cosine is easy to find:

\cos \theta = \frac{ P + N - 2 \rho + P - N }{ 2 \sqrt{ P (P + N - 2 \rho) } } = \frac{P - \rho}{ \sqrt{P (P + N - 2 \rho) } }.

Then solving for the sine:

\sin^2 \theta = 1 - \frac{ (P - \rho)^2 }{ P( P + N - 2 \rho) }

and applying our formula, for \rho < \sqrt{PN},

\log \frac{1}{\sin \theta} = \frac{1}{2} \log\left( \frac{ P ( P + N - 2 \rho) }{ P (P + N - 2 \rho) - (P - \rho)^2 } \right) = \frac{1}{2} \log\left( \frac{ P (P + N - 2 \rho) }{ PN - \rho^2 } \right)

If we plug in \rho = 0 we get back the AWGN capacity and if we plug in \rho = N we get the rate distortion function, but this formula gives the capacity for a range of correlated noise channels.
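
Here is a quick numerical check (mine) that the triangle calculation agrees with the mutual information computed directly from the covariance of (X, Y), and that the two endpoints come out right:

import numpy as np

# Check (in bits): log(1/sin(theta)) computed from the triangle (law of cosines)
# versus the Gaussian mutual information I(X;Y) from the covariance of (X, Y),
# for Y = X + Z with Var(X) = P, Var(Z) = N, E[XZ] = -rho.
def rate_from_triangle(P, N, rho):
    cos_theta = (P - rho) / np.sqrt(P * (P + N - 2 * rho))
    sin_theta = np.sqrt(1 - cos_theta ** 2)
    return np.log2(1 / sin_theta)

def rate_from_covariance(P, N, rho):
    var_y = P + N - 2 * rho
    cov_xy = P - rho
    return 0.5 * np.log2(P * var_y / (P * var_y - cov_xy ** 2))

P, N = 4.0, 1.0
for rho in [0.0, 0.3, 0.7, N]:
    print(rho, rate_from_triangle(P, N, rho), rate_from_covariance(P, N, rho))
print(0.5 * np.log2(1 + P / N))   # AWGN capacity: matches rho = 0
print(0.5 * np.log2(P / N))       # sigma^2/D form: matches rho = N with P = sigma^2, N = D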

I like this geometric interpretation because it's easy to work with and I get a lot of intuition out of it, but your mileage may vary.

I saw a paper on ArXiV yesterday called Kalman meets Shannon, which got me thinking: in how many papers has someone met Shannon, anyway? Krish blogged about this a few years ago, but since then Shannon has managed to meet some more people. I plugged “meets Shannon” into Google Scholar, and out popped a whole slew of papers.

Sometimes people are meeting Shannon, and sometimes he is meeting them, but each meeting produces at least one paper.
