This post was written using Luca’s LaTeX2WP tool.
The topic of this (late) post is “wringing” techniques, which are a way of converting a statement like “$P$ and $Q$ are close” into a statement about per-letter probabilities. Such a statement looks like “we can find some indices and values at those indices such that if we condition on those values, the marginals of $P$ and $Q$ are close.” I’m going to skip straight to the simpler of the two “Wringing Lemmas” in this paper and sketch its proof. Then I’ll go back and state the first one (without proof). There are some small typos in the original which I will try to correct here.
Lemma 1 Let $P$ and $Q$ be probability distributions on $\mathcal{X}^n$ such that for a positive constant $c$,

$$ P(x^n) \le (1+c)\, Q(x^n) \qquad \text{for all } x^n \in \mathcal{X}^n, \qquad\qquad (1) $$

then for any $0 < \gamma \le c$ and $0 < \epsilon < 1$ there exist $t_1, t_2, \ldots, t_k \in \{1, 2, \ldots, n\}$, where

$$ k \le \frac{c}{\gamma \epsilon}, $$

such that for some $\bar{x}_{t_1}, \bar{x}_{t_2}, \ldots, \bar{x}_{t_k}$,

$$ P(x_t \mid \bar{x}_{t_1}, \bar{x}_{t_2}, \ldots, \bar{x}_{t_k}) \le \max\left\{ (1+\gamma)\, Q(x_t \mid \bar{x}_{t_1}, \bar{x}_{t_2}, \ldots, \bar{x}_{t_k}),\ \epsilon \right\} $$

for all $x_t \in \mathcal{X}$ and all $t = 1, 2, \ldots, n$. Furthermore,

$$ P(\bar{x}_{t_1}, \bar{x}_{t_2}, \ldots, \bar{x}_{t_k}) \ge \epsilon^k. $$
The proof works pretty mechanically. Fix a $\gamma$ and $\epsilon$. Suppose that the conclusion doesn’t hold for $k = 0$, that is, with no conditioning at all. Then there is a $t_1$ and a value $\bar{x}_{t_1}$ such that

$$ P(\bar{x}_{t_1}) \le (1+c)\, Q(\bar{x}_{t_1}) \qquad \text{and} \qquad P(\bar{x}_{t_1}) > \max\left\{ (1+\gamma)\, Q(\bar{x}_{t_1}),\ \epsilon \right\}, $$

where the first bound is from the assumption and the second because the conclusion doesn’t hold. Then we have $Q(\bar{x}_{t_1}) < P(\bar{x}_{t_1})/(1+\gamma)$. Now dividing (1) by $P(\bar{x}_{t_1})$ and using the lower bound on $P(\bar{x}_{t_1})$ we get, for all $x^n$ with $x_{t_1} = \bar{x}_{t_1}$,

$$ P(x^n \mid \bar{x}_{t_1}) \le \frac{1+c}{1+\gamma}\, Q(x^n \mid \bar{x}_{t_1}). \qquad\qquad (2) $$

Now, either the conclusion is true or we use (2) in place of (1) to find a $t_2$ and a $\bar{x}_{t_2}$ such that

$$ P(x^n \mid \bar{x}_{t_1}, \bar{x}_{t_2}) \le \frac{1+c}{(1+\gamma)^2}\, Q(x^n \mid \bar{x}_{t_1}, \bar{x}_{t_2}). $$

We can keep going in this way until we get $(1+\gamma)^k$ in the denominator. This yields the condition on $k$.
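To see how mechanical this really is, here is a toy numerical sketch in Python. It is purely my own illustration (the binary alphabet, the particular distributions, and the function names are made up for this post, not taken from the paper): it just runs the greedy step from the proof, conditioning on a coordinate and value whenever the per-letter bound fails and stopping once no violations remain.

```python
import itertools

import numpy as np


def marginal(dist, t, cond):
    """Conditional per-letter marginal of coordinate t under dist, given the
    coordinate/value pairs in cond (a dict {index: value})."""
    mass = np.zeros(2)
    for x, p in dist.items():
        if all(x[i] == v for i, v in cond.items()):
            mass[x[t]] += p
    return mass / mass.sum()


def wring(P, Q, n, gamma, eps):
    """Greedy wringing: condition on coordinate/value pairs until
    P(x_t | cond) <= max((1 + gamma) * Q(x_t | cond), eps) holds for every
    still-unconditioned coordinate t and every value x_t."""
    cond = {}
    while True:
        violation = None
        for t in range(n):
            if t in cond:
                continue  # a conditioned coordinate satisfies the bound trivially
            p_t, q_t = marginal(P, t, cond), marginal(Q, t, cond)
            for v in range(2):
                if p_t[v] > max((1 + gamma) * q_t[v], eps):
                    violation = (t, v)
                    break
            if violation:
                break
        if violation is None:
            return cond
        cond[violation[0]] = violation[1]


# Toy example: Q is uniform on {0,1}^n, P biases the first coordinate toward 1,
# so P(x^n) <= 1.6 Q(x^n) for all x^n (i.e. c = 0.6 in the lemma's hypothesis).
n = 4
strings = list(itertools.product([0, 1], repeat=n))
Q = {x: 1.0 / 2 ** n for x in strings}
P = {x: (0.8 if x[0] == 1 else 0.2) * 0.5 ** (n - 1) for x in strings}

eps = 0.05
cond = wring(P, Q, n, gamma=0.2, eps=eps)
p_event = sum(p for x, p in P.items() if all(x[i] == v for i, v in cond.items()))
print("conditioned on:", cond)                        # coordinate -> value
print("P(conditioning event):", p_event, ">=", eps ** len(cond))
```

In this toy run only the first coordinate gets wrung out, and the conditioning event keeps probability 0.8 under $P$, comfortably above $\epsilon^k$, which is the content of the “Furthermore” clause.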
Rather than describe how to apply this to our coding problem, I want to contrast this rather simple statement to the other wringing lemma in the paper, which is a generalization of the lemma developed by Dueck. This one is more “information theoretic” in the sense that it says if the block mutual information is small, then we can find positions to condition on such that the per-letter mutual informations are small too.
Lemma 2 Let $X^n = (X_1, X_2, \ldots, X_n)$ and $Y^n = (Y_1, Y_2, \ldots, Y_n)$ be $n$-tuples of random variables from discrete finite alphabets $\mathcal{X}$ and $\mathcal{Y}$ and assume that

$$ I(X^n; Y^n) \le \sigma. $$

Then for any $0 < \delta < \sigma$ there exist $t_1, t_2, \ldots, t_k \in \{1, 2, \ldots, n\}$, where $k$ is at most a constant multiple of $\sigma/\delta$, such that for some $\bar{x}_{t_1}, \ldots, \bar{x}_{t_k}$ and $\bar{y}_{t_1}, \ldots, \bar{y}_{t_k}$,

$$ I(X_t; Y_t \mid X_{t_1} = \bar{x}_{t_1}, Y_{t_1} = \bar{y}_{t_1}, \ldots, X_{t_k} = \bar{x}_{t_k}, Y_{t_k} = \bar{y}_{t_k}) \le \delta $$

for all $t = 1, 2, \ldots, n$. Furthermore, the probability of the conditioning event $\{X_{t_i} = \bar{x}_{t_i},\ Y_{t_i} = \bar{y}_{t_i},\ i = 1, \ldots, k\}$ is bounded below by a positive quantity, exponential in $k$, that depends only on $\delta$, $\sigma$, $|\mathcal{X}|$, and $|\mathcal{Y}|$.
This lemma is proved in basically the same way as the previous lemma, but it’s a bit messier. Dueck’s original version was similar but gave a bound on the conditional mutual information, rather than this bound where we condition on particular values. This way of taking a small mutual information and getting small per-letter mutual informations can be useful outside of this particular problem, I imagine.
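For contrast, here is an equally toy sketch of the Lemma 2 flavor of wringing, again entirely my own illustration rather than anything from Ahlswede or Dueck: start from a joint distribution on $(X^n, Y^n)$ with small block mutual information and condition on values at offending positions until every per-letter mutual information drops below $\delta$. The way the values are chosen below is just a convenient heuristic; the actual proof selects them carefully enough to control both $k$ and the probability of the conditioning event.

```python
import itertools

import numpy as np


def mutual_information(joint):
    """I(X; Y) in nats for a 2-D joint probability array."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())


def letter_joint(dist, t, cond):
    """Joint distribution of (X_t, Y_t) given the events in cond, where dist maps
    ((x_1..x_n), (y_1..y_n)) to probabilities and cond is {position: (x, y)}."""
    joint = np.zeros((2, 2))
    for (x, y), p in dist.items():
        if all(x[i] == a and y[i] == b for i, (a, b) in cond.items()):
            joint[x[t], y[t]] += p
    return joint / joint.sum()


def wring_mi(dist, n, delta):
    """Condition on (position, value) pairs until every per-letter mutual
    information I(X_t; Y_t | conditioning values) is at most delta."""
    cond = {}
    while True:
        bad = None
        for t in range(n):
            if t in cond:
                continue  # a conditioned letter is constant, so its MI is zero
            joint = letter_joint(dist, t, cond)
            if mutual_information(joint) > delta:
                # heuristic: condition on the most likely (x_t, y_t) pair here;
                # Ahlswede picks the values more carefully to get his bounds
                i, j = np.unravel_index(int(joint.argmax()), joint.shape)
                bad = (t, (int(i), int(j)))
                break
        if bad is None:
            return cond
        cond[bad[0]] = bad[1]


# Toy example: the dependence between X^2 and Y^2 lives entirely in position 0.
n = 2
letters = list(itertools.product([0, 1], repeat=n))
dist = {}
for x in letters:
    for y in letters:
        p_first = 0.45 if x[0] == y[0] else 0.05   # correlated first letters
        p_second = 0.25                            # independent second letters
        dist[(x, y)] = p_first * p_second

print(wring_mi(dist, n, delta=0.01))   # expect conditioning only at position 0
```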
Next time I’ll show how to apply the first lemma to our codebooks in the converse for the MAC.
If you had to give a name to these techniques other than “wringing”, which one would you choose? …
I thought that for a given code, “wringing” referred to finding a maximal error subcode with the same rates such that it satisfied expressions (4.1) and (4.2) …
I think of it as a bit more general than that — you are “wringing” the dependence out of the two sequences/distributions by conditioning on some selected values. It’s a way of taking global correlation and squeezing/wringing it into local correlation bounds.
Most of the applications seem to be in finding subcodes (which is how I think about it), but there should be other situations in which you want to do things like this. This is why Lemma 4 (= Lemma 1 above) looks more broadly applicable than Lemma 3 (= Lemma 2 above): it doesn’t explicitly deal with mutual information expressions.