The strong converse for the MAC, part 4

In the previous post we saw how to take two distributions {P} and {Q} on {\mathcal{X}^n} that are close and show that there must be some fixed values {\{ \bar{x}_{t_i} : i = 1, 2, \ldots, k\}} for individual variables {\{X_{t_i} : i = 1, 2, \ldots, k\}} such that the single letter marginals of {P} and {Q} are close when you condition on {X_{t_i} = \bar{x}_{t_i}}.
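Since the mechanics of that conditioning are easy to lose in the notation, here is a toy sketch of the greedy idea in Python (my own illustration, with made-up names and brute-force marginal computations, not something from the post or the papers): repeatedly look for a coordinate {t} and a value {\bar{x}} whose probability under {P} exceeds both {(1+\gamma)} times its probability under {Q} and a floor {\epsilon}, condition both distributions on that value, and stop when no such coordinate is left.

import numpy as np

def wring(P, Q, gamma, eps):
    # P, Q: numpy arrays of the same shape, one axis per coordinate of x^n,
    # each summing to 1. Greedily condition on single-coordinate values until
    # every single-letter marginal of P is at most max((1+gamma)*Q-marginal, eps).
    # Returns the fixed (coordinate, value) pairs and the conditioned P and Q.
    n = P.ndim
    fixed = []
    while True:
        violation = None
        for t in range(n):
            other = tuple(s for s in range(n) if s != t)
            p_marg = P.sum(axis=other)  # marginal of coordinate t under P
            q_marg = Q.sum(axis=other)  # marginal of coordinate t under Q
            for x in range(P.shape[t]):
                if p_marg[x] > max((1.0 + gamma) * q_marg[x], eps):
                    violation = (t, x)
                    break
            if violation is not None:
                break
        if violation is None:
            return fixed, P, Q
        t, x = violation
        # Condition both distributions on X_t = x: zero out the other values
        # of coordinate t and renormalize. (The hypothesis P <= (1+c) Q keeps
        # the Q-probability of this event positive whenever P's is.)
        keep = np.zeros(P.shape[t])
        keep[x] = 1.0
        shape = [1] * n
        shape[t] = P.shape[t]
        keep = keep.reshape(shape)
        P = (P * keep) / (P * keep).sum()
        Q = (Q * keep) / (Q * keep).sum()
        fixed.append((t, x))

The reason this stops after only {k \le c/\gamma} steps is the ratio argument behind the Lemma: each conditioning step multiplies the ratio of the {P}-probability of the accumulated conditioning event to its {Q}-probability by more than {(1+\gamma)}, while the hypothesis {P \le (1+c)Q} caps that ratio at {1+c}. With that picture in mind, here is the corollary we actually need.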

Corollary 1 Let {\mathcal{A} \subset \{1,2, \ldots, M\} \times \{1,2,\ldots,N\}}, suppose {|\mathcal{A}| \ge (1 - \lambda^{\ast}) MN}, and let {\{(u_i,v_j, D_{ij}) : (i,j) \in \mathcal{A}\}} be a code for the MAC with maximal error probability {\lambda}. Then for any {0 < \gamma < c = \frac{\lambda^{\ast}}{1 - \lambda^{\ast}}} and {0 \le \epsilon < 1} there exist {\{t_1,\ldots, t_k\}} with {k \le c/\gamma} and fixed values {\{(\bar{x}_{t_l},\bar{y}_{t_l}) : l = 1, \ldots, k\}} such that the subcode formed by

\displaystyle  	\bar{\mathcal{A}} = \{ (i,j) : u_{i,t_l} = \bar{x}_{t_l},\ v_{j,t_l} = \bar{y}_{t_l},\ l = 1, 2, \ldots, k \}

has maximal error at most {\lambda},

\displaystyle  \begin{array}{rcl}  	|\bar{\mathcal{A}}| &\ge& \epsilon^k |\mathcal{A}| \\ 	\bar{M} &=& | \{ u_i : (i,j) \in \bar{\mathcal{A}} \} | \ge \epsilon^{k} M \\ 	\bar{N} &=& | \{ v_j : (i,j) \in \bar{\mathcal{A}} \} | \ge \epsilon^{k} N 	\end{array}

and for all {x \in \mathcal{X}} and {y \in \mathcal{Y}} and all {t = 1,2, \ldots, n},

\displaystyle  \begin{array}{rcl}  	\mathbb{P}( \bar{X}_t = x, \bar{Y}_t = y ) 		&\ge& 		(1 + \gamma) \mathbb{P}(\bar{X}_t = x) \mathbb{P}(\bar{Y}_t = y) - \gamma \\  & & \qquad - |\mathcal{X}||\mathcal{Y}|\epsilon \\ 	\mathbb{P}( \bar{X}_t = x, \bar{Y}_t = y ) 		&\le& 		\max\left\{ (1 + \gamma) \mathbb{P}(\bar{X}_t = x) \mathbb{P}(\bar{Y}_t = y), \epsilon \right\}, 	\end{array}

where {\bar{X}^n} and {\bar{Y}^n} are distributed according to the Fano distribution of the subcode.

That is, we can condition on some values at {\{t_i\}} such that (a) we do not lose too much rate for each user and (b) the joint distribution of the subcode is almost a product distribution. Remember, the Fano distribution is just the uniform distribution on the codeword pairs of the subcode induced by {\bar{\mathcal{A}}}.
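To get a feel for why (a) and (b) are not in tension, it helps to plug in some illustrative parameters (these particular choices are mine, just to show the scaling; they are not necessarily the ones used in the actual converse proof). Take {\gamma = 1/\sqrt{n}} and {\epsilon = 1/n}. Then {k \le c/\gamma = c \sqrt{n}}, so the per-letter rate loss implied by {\bar{M} \ge \epsilon^k M} is at most

\displaystyle  \frac{1}{n} \log \frac{M}{\bar{M}} \le \frac{k}{n} \log \frac{1}{\epsilon} \le \frac{c \sqrt{n}}{n} \log n = \frac{c \log n}{\sqrt{n}} \rightarrow 0,

while the slack {\gamma + |\mathcal{X}||\mathcal{Y}|\epsilon} in the near-product bounds also vanishes as {n \rightarrow \infty}. So the rate penalty and the residual dependence can both be driven to zero at once.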

So how does this result follow from the Wringing Lemma discussed before? Let {P} be the Fano distribution of the code indexed by {\mathcal{A}}, and let {Q} be the product of the corresponding marginal distributions:

\displaystyle  \begin{array}{rcl}  P_{X^n Y^n}(u_i,v_j) &=& \frac{1}{|\mathcal{A}|} \\ 	Q_{X^n Y^n}(u_i,v_j) &=& P_{X^n}(u_i) P_{Y^n}(v_j). 	\end{array}

Then just churn through the implications of the Lemma.
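One piece of bookkeeping worth making explicit (this is my own arithmetic spelling out part of that churning): the constant {c = \frac{\lambda^{\ast}}{1 - \lambda^{\ast}}} in the corollary is the {c} for which the Lemma's hypothesis {P \le (1+c) Q} gets verified, and it comes straight from the assumption {|\mathcal{A}| \ge (1-\lambda^{\ast}) MN}. For any pair {(i,j) \in \mathcal{A}},

\displaystyle  P_{X^n Y^n}(u_i, v_j) = \frac{1}{|\mathcal{A}|} \le \frac{1}{(1-\lambda^{\ast}) MN} = \left( 1 + \frac{\lambda^{\ast}}{1 - \lambda^{\ast}} \right) \frac{1}{MN},

which is where the {1+c} factor appears (I am glossing over the comparison between {1/MN} and the product of marginals {Q}; the point is just to see where {c} comes from).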

So, putting the whole argument together: we start with an {(n,M,N,\bar{\lambda})} code for the MAC and then get a subcode from the pairs {\mathcal{A}} defined before. This subcode satisfies rate bounds which look almost like what we want: they involve sums of mutual informations like {I(X_t ; Z_t | Y_t)}, except that {(X^n,Y^n)} are distributed uniformly on {\mathcal{A}}, so they are not independent. We can then apply the corollary above to get a sub-subcode indexed by {\bar{\mathcal{A}}} whose Fano distribution is almost a product distribution and which has not lost too much rate. Now we have rate bounds with sums of terms {I(\bar{X}_t ; \bar{Z}_t | \bar{Y}_t)} where the inputs are almost independent, so we can use the continuity of mutual information and entropy to get upper bounds with sums of {I(\hat{X}_t ; \hat{Z}_t | \hat{Y}_t)} where the inputs are finally independent.
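For the continuity step at the end, the kind of estimate in play is the standard entropy-continuity bound (this particular form is the usual textbook one, for example in Csiszár and Körner, and may differ in constants from what the original proof uses): if two distributions {P} and {P'} on a finite alphabet {\mathcal{X}} satisfy {\| P - P' \|_1 = \theta \le 1/2}, then

\displaystyle  | H(P) - H(P') | \le \theta \log \frac{|\mathcal{X}|}{\theta}.

Since {I(\bar{X}_t ; \bar{Z}_t | \bar{Y}_t)} is a combination of entropies of distributions that are close in this sense to their counterparts under independent inputs, each term moves by an amount that vanishes along with {\gamma} and {\epsilon}.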

There are three steps here: first you find a subcode with more uniform error properties but with dependence between the codewords, then you wring out that dependence without giving up too much rate, and finally you substitute in independent inputs, again without giving up too much rate. That is the value of wringing arguments: getting the nice error structure of the subcode {\mathcal{A}} costs you the independence of the inputs, and wringing is how you earn it back.
