# ISIT 2009 : more talks

This is the second post on talks at ISIT — it’s been going a bit more slowly because of extra travel.

Mismatched Estimation and Relative Entropy
Sergio Verdú

This talk looked at how to characterize the KL divergence or relative entropy $D(P~\|~Q)$ between two distributions in terms of an MMSE estimation problem. The problem, more specifically, is MMSE estimation of $X$ from an observation $Y = \alpha X + N$, where $N$ is Gaussian noise. The mismatched estimator looks at the error if you estimate $X$ assuming it is distributed according to $Q$ when it is actually distributed according to $P$. The relative entropy is then half the integral, over all SNRs, of the difference between the mismatched estimator’s mean squared error and the true MMSE estimator’s error. I thought this paper was quite interesting since I work on mismatch problems and model uncertainty. Verdú went on to discuss connections to free probability, among other things.
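The Gaussian case of this relation can be checked numerically. Here is a minimal sketch with my own illustrative parameter choices (not from the talk): both priors are Gaussian, so the two estimators are linear and the KL divergence has a closed form to compare against.

```python
import numpy as np

# Numerical check of the relation D(P || Q) =
#   1/2 * integral over snr of [ mse_Q(snr) - mmse_P(snr) ]
# for the observation Y = sqrt(snr) * X + N, with N ~ N(0, 1).
# Parameters below are illustrative: both priors Gaussian, so the
# estimators are linear and D(P || Q) is available in closed form.

m_p, v_p = 0.5, 2.0   # true prior P = N(m_p, v_p)
m_q, v_q = 0.0, 1.0   # mismatched prior Q = N(m_q, v_q)

def mmse_true(s):
    # error of the conditional-mean estimator under the true prior P
    return v_p / (1.0 + s * v_p)

def mse_mismatched(s):
    # linear estimator that is optimal if X ~ Q: x_hat = a*y + b
    a = np.sqrt(s) * v_q / (1.0 + s * v_q)
    b = m_q / (1.0 + s * v_q)
    # its error when in fact X ~ P: x_hat - X = (a*sqrt(s) - 1) X + a N + b
    bias = (a * np.sqrt(s) - 1.0) * m_p + b
    var = (a * np.sqrt(s) - 1.0) ** 2 * v_p + a ** 2
    return var + bias ** 2

# trapezoidal rule on a log-spaced SNR grid (integrand decays like 1/s^2)
snr = np.geomspace(1e-9, 1e9, 400_000)
gap = mse_mismatched(snr) - mmse_true(snr)
integral = 0.5 * np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(snr))

# closed-form KL divergence between the two Gaussians, in nats
kl = 0.5 * (v_p / v_q + (m_p - m_q) ** 2 / v_q - 1.0 + np.log(v_q / v_p))

print(integral, kl)  # the two quantities agree
```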

Improved Slepian-Wolf Exponents via Witsenhausen’s Rate
Benjamin Kelly, Aaron Wagner

As the title suggests, this paper looked at characterizing the exponential decay of the error probability in distributed source coding. More specifically, they restricted attention to encoding an iid sequence $X^n$ when an iid sequence $Y^n$ is known at the decoder and $(X,Y)$ have some known joint distribution. When the joint pmf of $(X, Y)$ has some 0’s in it, i.e. some pairs are impossible, they can improve the error exponents by looking more closely at the characteristic graph of $(X^n, Y^n)$ and getting a better bound on its chromatic number (or rather, on the asymptotic growth thereof).
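A toy sketch of the single-letter characteristic-graph idea behind Witsenhausen’s rate (the joint support below is made up for illustration): two values of $X$ are connected if some $y$ is jointly possible with both, and the encoder only needs to communicate a coloring of that graph, since the decoder can use $Y$ to disambiguate within a color class.

```python
# Zero-error coding via the characteristic graph, on a hypothetical
# pmf support. The encoder sends a color of the characteristic graph
# of X; the decoder recovers X exactly using its side information Y.

support = {(0, 'a'), (1, 'a'), (2, 'b'), (3, 'b'), (0, 'c'), (2, 'c')}
xs = sorted({x for x, _ in support})
ys = sorted({y for _, y in support})

# characteristic graph: x ~ x' if both are jointly possible with some y
adj = {x: set() for x in xs}
for y in ys:
    confusable = [x for x in xs if (x, y) in support]
    for u in confusable:
        for v in confusable:
            if u != v:
                adj[u].add(v)

# greedy coloring of the characteristic graph
color = {}
for x in xs:
    taken = {color[n] for n in adj[x] if n in color}
    color[x] = next(c for c in range(len(xs)) if c not in taken)

def decode(y, c):
    # the color is unique within the set of x's compatible with y
    (x,) = [x for x in xs if (x, y) in support and color[x] == c]
    return x

# here 2 colors suffice for the 4 values of X, and decoding is exact
assert all(decode(y, color[x]) == x for (x, y) in support)
print(len(set(color.values())), "colors")
```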

Source Coding with a Side Information `Vending Machine’ at the Decoder
Tsachy Weissman, Haim Permuter

This looked at lossy source coding with side information at the decoder, but with a twist. The encoder first sends a message $T$ to the decoder. The decoder then chooses a sequence of actions $A^n(T)$ to acquire its side information $Y^n$ according to a conditional distribution $P(Y | X, A)$, where the actions are subject to a per-letter cost. The question is how well we can do in this setting. A combination of strategies (Slepian-Wolf-like, Wyner-Ziv-like, etc.) can achieve the rate-distortion function. One insight is that a greedy choice of actions may not always be optimal. If the action sequence can be chosen by the encoder instead, then the optimal solution is a Wyner-Ziv/Gelfand-Pinsker chimera.
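To make the rate/cost tension concrete, here is a toy construction of my own (not the paper’s model): $X \sim \mathrm{Bern}(1/2)$, the costly action reveals $X$ exactly, and the free action yields an erasure. With lossless reconstruction, the rate needed is $H(X \mid Y, A)$, which trades off against the fraction of letters on which the costly action is taken.

```python
import numpy as np

# Toy illustration of the rate/cost trade-off (my own construction,
# not the paper's model): X ~ Bern(1/2); action a=1 costs 1 per letter
# and reveals Y = X, while action a=0 is free and yields an erasure.
# Taking the costly action on a fraction lam of the letters, lossless
# coding needs rate H(X | Y, A), computed here from the joint pmf.

def rate_and_cost(lam):
    # joint pmf over (x, y, a) under the mixing policy
    p = {}
    for x in (0, 1):
        p[(x, x, 1)] = 0.5 * lam          # costly action: y = x
        p[(x, 'e', 0)] = 0.5 * (1 - lam)  # free action: erasure
    # H(X | Y, A) = - sum p(x, y, a) * log2 p(x | y, a)
    h = 0.0
    for (x, y, a), pxya in p.items():
        if pxya > 0:
            pya = sum(v for (x2, y2, a2), v in p.items()
                      if (y2, a2) == (y, a))
            h -= pxya * np.log2(pxya / pya)
    return h, lam  # (required rate in bits/letter, per-letter cost)

for lam in (0.0, 0.5, 1.0):
    print(lam, rate_and_cost(lam))
```

Here the trade-off is linear (rate $1 - \lambda$ at cost $\lambda$); the interesting cases in the paper are exactly those where the greedy per-letter choice is not optimal.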

The Quadratic Gaussian CEO Problem with Byzantine Agents
Oliver Kosut, Lang Tong

This paper looked at the Gaussian CEO problem (a common Gaussian source observed by many terminals through independent Gaussian noise, each of which must encode its observation separately for a common decoder, or “CEO”). In this scenario, however, some of the terminals may be misbehaving (they are traitors) and not following the protocol. What can we do now? They derive inner and outer bounds on the problem by adapting the achievable scheme of Berger-Tung and the converses of Oohama and Prabhakaran-Tse-Ramchandran. An interesting takeaway is that the problem is quite sensitive: even a small number of traitors can mess things up a lot.
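To get a feel for that sensitivity, here is a crude simulation of my own (uncoded reports and naive averaging, not the paper’s coding scheme): a single Byzantine agent that sends a constant instead of its observation inflates the CEO’s mean squared error dramatically.

```python
import numpy as np

# Crude sensitivity illustration (my own toy, not the paper's scheme):
# the CEO averages uncoded noisy reports of a common Gaussian source,
# and one Byzantine agent reports a constant instead of its observation.

rng = np.random.default_rng(0)
n, L = 100_000, 10                            # samples, number of agents
x = rng.standard_normal(n)                    # the common source
obs = x + 0.5 * rng.standard_normal((L, n))   # honest observations

# all agents honest: averaging beats any single observation
mse_honest = np.mean((obs.mean(axis=0) - x) ** 2)

# one traitor reports a large constant, biasing the average
obs_byz = obs.copy()
obs_byz[0] = 10.0
mse_byz = np.mean((obs_byz.mean(axis=0) - x) ** 2)

print(mse_honest, mse_byz)  # the single traitor inflates the MSE a lot
```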
