ITA Workshop 2013 : post the first

I promised some ITA blogging, so here it is. Maybe Alex will blog a bit too. These notes will by necessity be cursory, but I hope some people will find some of these papers interesting enough to follow up on them.

A Reverse Pinsker Inequality
Daniel Berend, Peter Harremoës, Aryeh Kontorovich
Aryeh gave this talk on what we can say about bounds in the reverse direction of Pinsker’s inequality. Of course, in general you can’t say much, but what they do is give an expansion of the KL divergence in terms of the total variation distance, with coefficients that depend on the balance coefficient of the distribution \beta = \inf \{ P(A) : P(A) \ge 1/2 \}.
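For a feel for the quantities involved, here is a minimal numerical sketch (my own, not the paper’s bound): it computes the KL divergence, the total variation distance, the classical Pinsker lower bound D(P||Q) \ge 2 TV(P,Q)^2, and the balance coefficient \beta by brute force over events, for a small discrete distribution. The helper names (kl, tv, balance_coefficient) are just illustrative.

```python
import itertools
import numpy as np

def kl(p, q):
    """KL divergence D(P||Q) in nats for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tv(p, q):
    """Total variation distance (half the L1 distance)."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

def balance_coefficient(p):
    """beta = inf{ P(A) : P(A) >= 1/2 }, by brute force over events A."""
    best = 1.0
    idx = range(len(p))
    for r in range(len(p) + 1):
        for A in itertools.combinations(idx, r):
            mass = sum(p[i] for i in A)
            if mass >= 0.5:
                best = min(best, mass)
    return best

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print("D(P||Q)       =", kl(p, q))
print("2 * TV(P,Q)^2 =", 2 * tv(p, q) ** 2)   # Pinsker: D >= 2*TV^2
print("beta(P)       =", balance_coefficient(p))
```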

Unfolding the entropy power inequality
Mokshay Madiman, Liyao Wang
Mokshay gave a talk on the entropy power inequality. Given vector random variables X_1 and X_2, we know that h(X_1 + X_2) \ge h(Z_1 + Z_2), where Z_1 and Z_2 are isotropic Gaussian vectors with the same differential entropies as X_1 and X_2. The question in this paper is this : can we insert a term between the two sides of this inequality? The answer is yes! They define a spherical rearrangement of the densities of X_1 and X_2 into variables X_1^{\ast} and X_2^{\ast} with spherically symmetric decreasing densities, and show that the differential entropy of their sum lies between the two sides of the regular EPI.
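To make the baseline inequality concrete, here is a small sanity check (my own sketch, not from the talk) in the Gaussian case, where h(X_1 + X_2) \ge h(Z_1 + Z_2) reduces to Minkowski’s determinant inequality. It says nothing about the new intermediate rearrangement term, which is the actual contribution of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension

def h_gauss(cov):
    """Differential entropy (nats) of a Gaussian with covariance `cov`."""
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def random_cov():
    A = rng.standard_normal((n, n))
    return A @ A.T + 0.1 * np.eye(n)  # random positive-definite covariance

S1, S2 = random_cov(), random_cov()

# Left side for independent Gaussian X_1, X_2: covariances add.
lhs = h_gauss(S1 + S2)

# Isotropic Gaussians Z_i with the same differential entropy as X_i:
# sigma_i^2 = det(Sigma_i)^(1/n) matches the entropies, and Z_1 + Z_2 is
# isotropic with variance sigma_1^2 + sigma_2^2.
s1 = np.linalg.det(S1) ** (1 / n)
s2 = np.linalg.det(S2) ** (1 / n)
rhs = h_gauss((s1 + s2) * np.eye(n))

print("h(X1+X2) =", lhs, ">= h(Z1+Z2) =", rhs)
```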

Improved lower bounds on the total variation distance and relative entropy for the Poisson approximation
Igal Sason
The previous lower bounds mentioned in the title were based on the Chen-Stein method, and they can be strengthened by sharpening the analysis within that method.
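As a rough illustration of the quantity being bounded (not of the paper’s new lower bounds), the sketch below computes the exact total variation distance between a Binomial(n, p) and a Poisson(np) for a few parameter choices, alongside n p^2, which is the scale of the classical Le Cam-style upper bound.

```python
import numpy as np
from scipy.stats import binom, poisson

def tv_binom_poisson(n, p):
    """Exact total variation distance between Binomial(n, p) and Poisson(np)."""
    lam = n * p
    # Truncate the Poisson support far enough out that the truncation error is negligible.
    kmax = int(lam + 20 * np.sqrt(lam) + 50)
    ks = np.arange(kmax + 1)
    return 0.5 * np.sum(np.abs(binom.pmf(ks, n, p) - poisson.pmf(ks, lam)))

for n, p in [(50, 0.1), (500, 0.01), (5000, 0.001)]:
    print(f"n={n:5d}, p={p:<6}  TV = {tv_binom_poisson(n, p):.5f}   n*p^2 = {n*p*p:.5f}")
```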

Fundamental limits of caching
Mohammad A. Maddah-Ali, Urs Niesen
This talk was on tradeoffs in caching. If there are N files, K users, and a cache of size M at each user, how should the users cache files so that a broadcaster can serve their requests with as little bandwidth as possible? More simply, suppose there are three people who may want to watch one of three different TV shows, and each can buffer the content of one TV show. Since a priori you don’t know which show they want to watch, the natural idea is to buffer/cache the first third of each show at each user. They show that this is highly suboptimal. Because the content provider can XOR together pieces of content wanted by different users and broadcast the result, the caching strategy should not be the same at each user, and the real benefit comes from the global cache size.
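Here is a toy version of that XOR trick (my own sketch of the idea, with 2 users and 2 files rather than three): each user caches a different half of each file, and a single coded broadcast then serves two different requests at once.

```python
# Toy coded-caching example: N = 2 files (A, B), K = 2 users, each with a
# cache that holds one file's worth of data (M = 1).
# Split each file into two halves; user 1 caches (A1, B1), user 2 caches (A2, B2).

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

A1, A2 = b"AAAA", b"aaaa"   # two halves of file A
B1, B2 = b"BBBB", b"bbbb"   # two halves of file B

cache1 = {"A1": A1, "B1": B1}   # user 1's cache
cache2 = {"A2": A2, "B2": B2}   # user 2's cache

# Suppose user 1 requests A and user 2 requests B. Each is missing exactly one
# half, so the server broadcasts a single coded message useful to both:
broadcast = xor(A2, B1)

# User 1 recovers its missing half A2 using B1 from its cache...
A2_decoded = xor(broadcast, cache1["B1"])
# ...and user 2 recovers B1 using A2 from its cache.
B1_decoded = xor(broadcast, cache2["A2"])

assert cache1["A1"] + A2_decoded == A1 + A2   # user 1 now has all of A
assert B1_decoded + cache2["B2"] == B1 + B2   # user 2 now has all of B
print("One coded broadcast served both requests; uncoded delivery would need two.")
```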

Simple outer bounds for multiterminal source coding
Thomas Courtade
This was a very cute result on using the HGR maximal correlation to get outer bounds for multiterminal source coding without first deriving a single-letterization of the outer bound. The main idea is to use two properties of the HGR correlation : it tensorizes (which takes care of the multi-letter part), and it satisfies the strong data processing inequality from Elza Erkip and Tom Cover’s paper (referenced above).
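For concreteness, here is one standard way (not specific to this talk) to compute the HGR maximal correlation of a finite joint distribution: by Witsenhausen’s characterization it is the second-largest singular value of Q(x,y) = P(x,y)/\sqrt{P(x)P(y)}. The example at the end is the doubly symmetric binary source, where the answer is 1 - 2\epsilon.

```python
import numpy as np

def hgr_maximal_correlation(pxy):
    """HGR maximal correlation of a finite joint pmf P(x, y), computed as the
    second-largest singular value of Q(x,y) = P(x,y) / sqrt(P(x) P(y))
    (Witsenhausen's characterization)."""
    pxy = np.asarray(pxy, float)
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    Q = pxy / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)  # s[0] = 1 always
    return s[1]

# Doubly symmetric binary source: X ~ Bernoulli(1/2), Y = X flipped w.p. eps.
eps = 0.1
pxy = 0.5 * np.array([[1 - eps, eps],
                      [eps, 1 - eps]])
print(hgr_maximal_correlation(pxy))  # equals 1 - 2*eps = 0.8 here

# Tensorization check: hgr_maximal_correlation(np.kron(pxy, pxy)) gives the same value.
```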
