HGR maximal correlation revisited : a corrected reverse inequality

Sudeep Kamath sent me a note about a recent result he posted on the ArXiV that relates to an earlier post of mine on the HGR maximal correlation and an inequality by Erkip and Cover for Markov chains U -- X -- Y which I had found interesting:
I(U ; Y) \le \rho_m(X,Y)^2 I(U ; X).
Since learning about this inequality, I’ve seen a few talks which have used the inequality in their proofs, at Allerton in 2011 and at ITA this year. Unfortunately, the stated inequality is not correct!

On Maximal Correlation, Hypercontractivity, and the Data Processing Inequality studied by Erkip and Cover
Venkat Anantharam, Amin Gohari, Sudeep Kamath, Chandra Nair

What this paper shows is that the inequality holds not with \rho_m(X,Y)^2, but with another quantity:
I(U ; Y) \le s^*(X;Y) I(U ; X)
where s^*(X;Y) is given by the following definition.

Let X and Y be random variables with joint distribution (X, Y) \sim p(x, y). We define
s^*(X;Y) = \sup_{r(x) \ne p(x)} \frac{ D( r(y) \| p(y) ) }{ D( r(x) \| p(x) ) },
where r(y) denotes the y-marginal distribution of r(x, y) := r(x)p(y|x) and the supremum on the right hand side is over all probability distributions r(x) that are different from the probability distribution p(x). If either X or Y is a constant, we define s^*(X; Y) to be 0.
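To make the definition concrete, here is a quick numerical sketch (with a made-up binary joint distribution, not the one from the paper) that estimates s^* by sweeping over input distributions r(x):

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) in bits for discrete distributions."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Placeholder joint distribution p(x, y): rows are x, columns are y.
# (Any channel p(y|x) with a binary input would do here.)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
channel = p_xy / p_x[:, None]          # p(y|x)

# Sweep over input distributions r(x) = (t, 1-t) and take the sup of the ratio.
best = 0.0
for t in np.linspace(1e-3, 1 - 1e-3, 2000):
    r_x = np.array([t, 1 - t])
    if np.allclose(r_x, p_x):
        continue                       # skip r(x) = p(x), where the ratio is undefined
    r_y = r_x @ channel                # induced output marginal r(y)
    best = max(best, kl(r_y, p_y) / kl(r_x, p_x))

print("numerical estimate of s*(X;Y):", best)
```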

Suppose (X,Y) have joint distribution P_{XY} (I know I am changing notation here but it’s easier to explain). The key to their result is deriving variational characterizations of \rho_m and s^* in terms of the function
t_{\lambda}( P_X ) := H( P_Y ) - \lambda H( P_X ),
where P_Y is the output marginal induced by the input distribution P_X through the fixed channel P_{Y|X}.
Rather than getting into that in the blog post, I recommend reading the paper.

The implication of this result is that the inequality of Erkip and Cover is not correct : not only is \rho_m(X,Y)^2 not the supremum of the ratio, there are distributions for which it is not an upper bound. The counterexample in the paper is the following: X \sim \mathsf{Bernoulli}(1/2), and Y is generated via this asymmetric erasure channel:

Joint distribution counterexample (Fig. 2 of the paper)


How can we calculate \rho_m(X,Y)? If either X or Y is binary-valued, then
\rho_m(X,Y)^2 = -1 + \sum_{x,y} \frac{ p(x,y)^2 }{ p(x) p(y) }
So for this example \rho_m(X,Y)^2 = 0.6. However, s^*(X,Y) = \frac{1}{2} \log_2(12/5) > 0.6, and there exists a sequence of variables U_i satisfying the Markov chain U_i -- X -- Y such that the ratio I(U_i ; Y) / I(U_i ; X) approaches s^*.
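Since one of the variables is binary, \rho_m^2 can be read off directly from the formula above. Here is that calculation on the placeholder joint distribution from the earlier sketch (again, not the paper’s Fig. 2 distribution):

```python
import numpy as np

# Same placeholder joint distribution as in the earlier sketch (rows = x, columns = y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

# rho_m(X,Y)^2 = -1 + sum_{x,y} p(x,y)^2 / (p(x) p(y)), valid when X or Y is binary.
rho_sq = -1.0 + np.sum(p_xy**2 / np.outer(p_x, p_y))
print("rho_m^2 =", rho_sq)
```

Comparing this with the numerical estimate of s^* above is a quick way to see that the two quantities can differ.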

So where is the error in the original proof? Anantharam et al. point out that the Taylor series expansion used in the proof of the inequality with \rho_m(X,Y)^2 may not be valid at all points.

This seems to just be the start of a longer story, which I look forward to reading in the future!


More ArXiV skims

Assumptionless consistency of the Lasso
Sourav Chatterjee
The title says it all. Given p-dimensional data points \{ \mathbf{x}_i : i \in [n] \}, the Lasso tries to fit the model \mathbb{E}( y_i | \mathbf{x}_i ) = \boldsymbol{\beta}^{\top} \mathbf{x}_i by minimizing the \ell^1-penalized squared error
\sum_{i=1}^{n} (y_i - \boldsymbol{\beta}^{\top} \mathbf{x}_i)^2 + \lambda \| \boldsymbol{\beta} \|_1.
The paper analyzes the Lasso in the setting where the data are random: there are n i.i.d. copies of a pair of random variables (\mathbf{X},Y), so the data are \{(\mathbf{X}_i, Y_i) : i \in [n] \}. The assumptions on (\mathbf{X},Y) are: (1) each coordinate of \mathbf{X} is bounded, |X_j| \le M; and (2) Y = (\boldsymbol{\beta}^*)^{\top} \mathbf{X} + \varepsilon with \varepsilon \sim \mathcal{N}(0,\sigma^2), where \boldsymbol{\beta}^* and \sigma are unknown constants. Basically that’s all that’s needed — given a bound on \|\boldsymbol{\beta}\|_1, he derives a bound on the mean-squared prediction error.
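For illustration only, here is a quick sketch of this setting using scikit-learn; the generative parameters are arbitrary, and note that sklearn’s Lasso minimizes \frac{1}{2n} \sum_i (y_i - \boldsymbol{\beta}^{\top} \mathbf{x}_i)^2 + \alpha \|\boldsymbol{\beta}\|_1, i.e. the same objective up to rescaling of \lambda:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 500                        # more features than samples
beta_star = np.zeros(p)
beta_star[:5] = 1.0                    # a few active coordinates
X = rng.uniform(-1, 1, size=(n, p))    # bounded coordinates, |X_ij| <= 1
y = X @ beta_star + rng.normal(0, 0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)     # alpha plays the role of lambda (up to scaling)
mse = np.mean((model.predict(X) - y) ** 2)
print("in-sample prediction MSE:", mse)
print("nonzero coefficients:", np.sum(model.coef_ != 0))
```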

On Learnability, Complexity and Stability
Silvia Villa, Lorenzo Rosasco, Tomaso Poggio
This is a handy survey on the three topics in the title. It’s only 10 pages long, so it’s a nice fast read.

Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
Francis Bach
A central challenge in stochastic optimization is understanding when the convergence rate of the excess loss, which is usually O(1/\sqrt{n}), can be improved to O(1/n). Most often this involves additional assumptions on the loss functions (which can sometimes get a bit baroque and hard to check). This paper instead sticks with constant step-size algorithms and analyzes the averaged iterate \bar{\theta}_n = \frac{1}{n} \sum_{k=0}^{n-1} \theta_k. I’m still trying to slot this in with other things I know about stochastic optimization, but it’s definitely worth a skim if you’re interested in the topic.
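Here is a toy sketch of the iterate-averaging idea for logistic regression (my own minimal implementation, not Bach’s algorithm; the constant step size below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 5
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_star))).astype(float)

gamma = 0.1                     # constant step size
theta = np.zeros(d)
theta_bar = np.zeros(d)
for k in range(n):              # one pass, one sample per step
    x_k, y_k = X[k], y[k]
    grad = (1.0 / (1.0 + np.exp(-x_k @ theta)) - y_k) * x_k   # logistic loss gradient
    theta -= gamma * grad
    theta_bar += (theta - theta_bar) / (k + 1)                # running average of iterates

print("last iterate error:    ", np.linalg.norm(theta - theta_star))
print("averaged iterate error:", np.linalg.norm(theta_bar - theta_star))
```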

On Differentially Private Filtering for Event Streams
Jerome Le Ny
Jerome Le Ny has been putting differential privacy into signal processing and control contexts for the past year, and this is another paper in that line of work. This is important because we’re still trying to understand how time-series data can be handled in the differential privacy setting. This paper looks at “event streams” which are discrete-valued continuous-time signals (think of count processes), and the problem is to design a differentially private filtering system for such signals.

Gossips and Prejudices: Ergodic Randomized Dynamics in Social Networks
Paolo Frasca, Chiara Ravazzi, Roberto Tempo, Hideaki Ishii
This appears to be a gossip version of Acemoglu et al.’s work on “stubborn” agents in the consensus setting. They show similar qualitative behavior — opinions fluctuate but their average over time converges (the process is ergodic). This version of the paper has more of a tutorial feel to it, so the results are a bit easier to parse.

Dagadful and nag kesar

A few months ago I was home visiting my parents and we had a lunch with a few other Maharashtrians. The conversation turned towards food, and in particular ingredients that are important for making authentic garam masala. Garam masalas vary widely by region in India, and the two ingredients in question were dagadful and nag kesar. I had never really heard of these spices so I did a bit of research to learn more.

Dagadful (Parmelia perlata) is a lichen, not to be confused with the stone flower Didymocarpus pedicellatus, which is a plant that grows on rocks and is called charela in Hindi, I believe. The confusing thing is that both are used for herbal remedies, but the former is used for culinary purposes.

If you search for “nag kesar” you may find Mesua ferrea, a hardwood tree that grows in India and surrounds. That’s not where the spice comes from, however. This sparked the most debate at lunch, but I think I’ve figured out that the spice is the bud of a different tree, Mammea longifolia. Both Mesua and Mammea are in the family Calophyllaceae, which probably led to the name clash.

Expected number of faces in Gaussian polytopes

Last week I was reading Active Learning via Perfect Selective Classification by El-Yaniv and Wiener, and came across a neat result due to Hug and Reitzner that they use in some of their bounds for active learning on Gaussian distributions.

The setup is the following : let X_1, X_2, \ldots, X_n be n i.i.d. Gaussian vectors, each with distribution \mathcal{N}(0,I_d) in \mathbb{R}^d. The convex hull P_n of these points is called a Gaussian polytope. This is a random polytope of course, and we can ask various things about its shape : what is the distribution of the number of vertices, or the number of k-faces? Let f_k(P_n) be the number of k-faces. Distributions are hard, but for general k the expected number of k-faces (as n \to \infty) is given by

\mathbb{E}[ f_k(P_n)] = \frac{2^d}{\sqrt{d}} \binom{d}{k+1} \beta_{k,d-1}(\pi \ln n)^{(d-1)/2} (1 + o(1)),

where \beta_{k,d-1} is the internal angle of a regular (d-1)-simplex at one of its k-dimensional faces. What Hug and Reitzner show is a bound on the variance (which El-Yaniv and Wiener then use in a Chebyshev bound) : there exists a constant c_d such that

\mathrm{Var}( f_k(P_n) ) \le c_d (\ln n)^{(d-1)/2}

So the variance of the number of k-faces can be upper bounded by something that does not depend at all on the actual value of k. In fact, they show that

f_k(P_n) (\ln n)^{-(d-1)/2} \to \frac{2^d}{\sqrt{d}} \binom{d}{k+1} \beta_{k,d-1} \pi^{(d-1)/2}

in probability as n \to \infty. That is, appropriately normalized, the number of faces converges to a constant.

To me this result was initially surprising, but after some more thought it makes a bit more sense. If you give me a cloud of Gaussian points, then I need k+1 points to define a k-face. The formula for the mean says that, out of all the sets of k+1 points, only about \frac{2^d}{\sqrt{d}} \binom{d}{k+1} \beta_{k,d-1}(\pi \ln n)^{(d-1)/2} of them will form an actual k-face of the polytope. This also explains why the simplex-related quantity appears — points that are on “opposite sides” of the sphere (the level sets of the density) are not going to form a face together. As n \to \infty this count grows, but effectively the rate of growth in the number of faces with n is (\ln n)^{(d-1)/2}, regardless of k.
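The (\ln n)^{(d-1)/2} growth is easy to poke at numerically, at least for vertices and facets, which scipy’s convex hull routine exposes directly. A rough sanity-check sketch (not a verification of the theorem):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
d = 3
for n in [100, 1000, 10000, 100000]:
    pts = rng.standard_normal((n, d))      # n i.i.d. N(0, I_d) points
    hull = ConvexHull(pts)
    scale = np.log(n) ** ((d - 1) / 2)     # predicted growth rate (ln n)^{(d-1)/2}
    print(f"n={n:6d}  vertices={len(hull.vertices):4d}  "
          f"facets={len(hull.simplices):5d}  vertices/scale={len(hull.vertices)/scale:.2f}")
```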

I’m not sure where this result will be useful for me (yet!) but it seemed like something that the technically-minded readers of the blog would find interesting as well.

RAR : a cry of rage

I’ve been trying to get a camera-ready article for the Signal Processing Magazine and the instructions from IEEE include the following snippet:

*VERY IMPORTANT: All source files ( .tex, .doc, .eps, .ps, .bib, .db, .tif, .jpeg, …) may be uploaded as a single .rar archived file. Please do not attempt to upload files with extensions .shs, .exe, .com, .vbs, .zip as they are restricted file types.

While I have encountered .rar files before, I was not very familiar with the file format or its history. I didn’t know it was a proprietary format — that seems like a weird choice for IEEE to make (although no weirder than PDF, perhaps).

What’s confusing to me is that ArXiV manages to handle .zip files just fine. Is .tgz so passé now? My experience with RAR is that it is good for compressing (and splitting) large files into easier-to-manage segments. All of that efficiency seems wasted for a single paper with associated figures and bibliography files and whatnot.

I was trying to find the actual compression algorithm, but as with most modern compression software, the innards are a fair bit more complex than the basic algorithmic ideas. The Wikipedia article suggests it does a blend of Lempel-Ziv (a variant of LZ77) and prediction by partial matching, but I imagine there’s a fair bit of tweaking. What I couldn’t figure out is whether there is a new algorithmic idea in there (like in the Burrows-Wheeler Transform (BWT)), or whether it’s more a blend of these previous techniques.

Anyway, this silliness means I have to find some extra software to help me compress. SimplyRAR for MacOS seems to work pretty well.

Linkage : science edition

Learning from transcriptomes can be cheaper for organisms which have never been sequenced.

A fancy Nature article on mobility privacy, in case you weren’t convinced by other studies on mobility privacy.

Bad statistics in neuroscience. Color me unsurprised.

I bet faked results happen a lot in pharmaceutical trials, given the money involved. Perhaps we should jail people for faking data as a disincentive?

The Atheist shoe company did a study to see if the USPS was discriminating against them.

Readings

I’ve been on some flights lately and skived off of work to read a bit more.

The White Tiger (Aravind Adiga) — a farce told from the perspective of a murderer-turned-entrepreneur in Bangalore writing letters to Wen Jiabao. I think there are definitely some interesting issues here, especially with Adiga trying to write the voice of the subaltern. The point of the book seems to be to skewer the rich in India (and by implication the middle class which seeks to emulate the rich), but I’m not sure if the hits land where they are targeted. Definitely worth reading and discussing if you care about India. People who have never been there may find it less… familiar, and so their reading experience would be quite different.

Interworld (Neil Gaiman and Michael Reaves) — a Young Adult science fiction/fantasy novel. A bit of a thin premise, world-building-wise, but a breezy read. Can’t really recommend it but it was ok.

Rule 34 (Charles Stross) — a follow-up to Halting State. Set in a future Scotland, it has all of the techno-econo-conspiracy elements, together with some interesting takes on how ubiquitous internet access, custom 3D printing, and fabbing can affect life.

A Man of Misconceptions (John Glassie) — a fascinating biography of Athanasius Kircher, whose fascinatingly incorrect “scholarship” makes for some enjoyable reading. Glassie’s book is a really engaging read and brings a lot of the context of Kircher’s world to life. Highly recommended.

Readings

Endless Things [John Crowley] — Book four of the Aegypt Cycle, and the one most grounded in the present. The book moves more swiftly than the others, as if Crowley was racing to the end. Many of the concerns of the previous books, such as magic, history, and memory, are muted as the protagonist Pierce Moffett wends his way through his emotional and intellectual turmoil and into what in the end amounts to a kind of peace. Obviously only worth reading if you read the first three books.

Understanding Privacy [Daniel Solove] — A law professor’s take on what constitutes privacy. Solove wants to conceptualize privacy in terms of clusters of related ideas rather than take a single definition, and he tries to put a headier philosophical spin on it by invoking Wittgenstein. I found the book a bit overwritten but it does parse out the things we call privacy, especially in the longest chapter on the taxonomy of privacy. It’s not a very long book, but it has a number of good examples and also case law to show how muddled our legal definitions have become. He also makes a strong case for increased protections and shows how the law is blind to the effects of information aggregation, for example.

The Fall of the Stone City [Ismail Kadare] — An allegorical novel by a Man Booker prize winner chronicling the Nazi occupation and the communist takeover of Gjirokaster, an old Albanian city. It’s a dark absurdist comedy, partly in the vein of Kafka but with a bit of… Calvino almost. The tone of the book (probably a testament to the translator) has this almost academic detachment, gently mocking as it describes the ways in which the victors try to rewrite history.

Invisible Men [Becky Pettit] — A sobering look at how mass incarceration interacts with official statistics. Because most surveys are household-based, they do not count the increasingly large incarcerated population, thereby introducing a systematic racialized bias in the statistics used for public policy. In particular, Pettit shows how this bias leads to underestimation of racial inequity because the (mainly young black male) prisoners are “erased” in the official records.

The Rise of Ransom City [Felix Gilman] — A sequel to The Half-Made World, and a wondrously engrossing read it is too, filled with the clash of ideas, the corruption of corporations, the “borrowing” and evolution of ideas, and the ravages of industrialization. Also has a healthy dose of Mark Twain for good measure.

C.R. Rao and information geometry

On Lalitha’s recommendation I read Frank Nielsen’s paper “Cramer-Rao Lower Bound and Information Geometry,” which is a survey of how C.R. Rao’s work has impacted information geometry. I remember spending some time in grad school trying to learn information geometry (mostly for fun), but since it ended up not being particularly useful in my research, I’m afraid a lot of it has leaked out of my ears. This paper has a short introduction to the Cramer-Rao lower bound and an introduction to information geometry which might be a nice read for some of the readers of this blog. It’s certainly faster than trying to read Amari’s monograph! In particular, it goes over the “highlights” of geodesics and other geometric features on the manifold of probability distributions.

The paper mentions the sub-family of f-divergences known as \alpha-divergences, which are given by

D_{\alpha}(p \| q) = \frac{4}{1 - \alpha^2} \left( 1 - \int p(x)^{(1 - \alpha)/2} q(x)^{(1 + \alpha)/2} dx \right)

The KL divergence is D_{-1}(p \| q) — you have to take the limit as \alpha \to -1. Within this family of divergences we have the relation D_{\alpha}(p \| q) = D_{-\alpha}(q \| p). Consider a pair of random variables (X,Y) with joint distribution P_{XY} and marginal distributions P_X and P_Y. If we take q = P_X P_Y and p = P_{XY} then the mutual information is D_{-1}( p \| q ). But we can also take

D_{-1}( P_{X} P_{Y} \| P_{XY}) = D_1( P_{XY} \| P_{X} P_{Y} )

Thus it turns out that the “lautum information” defined by Palomar and Verdú is a special case of this: it’s the 1-divergence between the joint distribution and the product of the marginals. While their paper mentions that the lautum information is an f-divergence, it doesn’t discuss the connection to this family of divergences. Nielsen’s paper calls this the “reverse Kullback-Leibler divergence,” but some googling doesn’t seem to indicate that this is a common term, or indeed whether it has some use in information geometry. Palomar and Verdú give several operational interpretations of the lautum information.
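To see the two directions concretely, here is a small sketch computing both the mutual information D_{-1}(P_{XY} \| P_X P_Y) and the lautum information D_{-1}(P_X P_Y \| P_{XY}) for a toy joint distribution of my own choosing:

```python
import numpy as np

def kl(p, q):
    """D(p || q) in nats for discrete distributions given as arrays."""
    p, q = p.ravel(), q.ravel()
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Toy joint distribution P_{XY} (rows = x, columns = y).
p_xy = np.array([[0.3, 0.1],
                 [0.2, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
prod = np.outer(p_x, p_y)               # product of marginals P_X P_Y

print("mutual information I(X;Y) =", kl(p_xy, prod))   # D_{-1}(P_XY || P_X P_Y)
print("lautum information L(X;Y) =", kl(prod, p_xy))   # D_{-1}(P_X P_Y || P_XY) = D_1(P_XY || P_X P_Y)
```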