ITA 2008 : Part the Second

This is the second part in my blogging about some of the talks I saw at ITA 2008, and also a way for me to test a tag so that readers who don’t care about information theory won’t have a huge post hogging up their aggregator window (livejournal, I’m looking at you). Again, I may have misunderstood something in the talks, so if you see something that doesn’t sound right, let me know.

The compound Gaussian interference channel
A. Raja, V. Prabhakaran, and P. Viswanath, UIUC

This paper looked at the Gaussian compound interference channel, defined by the following two equations:

Y_1 = X_1 + c_2 X_2 + Z_1

Y_2 = c_1 X_1 + X_2 + Z_2

Recent work of Etkin, Tse, and Wang has provided a “within one bit” characterization of the capacity region of regular interference channels with known c_1 and c_2. In the compound setting, where the constants are not known a priori, the problem is a little more difficult, since the power-splitting strategy of Etkin et al. relies on knowledge of the cross gains. This paper looks at a more elaborate layered scheme that splits each user’s message into public, semi-private, and private parts. If the cross gain is high, then the other user can decode both the public and semi-private messages, reducing the overall interference. This scheme can also get to “within one bit” of the upper bound (which is the worst upper bound over all channel states). In the talk, Pramod only discussed the achievability strategy; the converse, which requires new genie-aided arguments, is in the arXiv preprint.
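To make the channel model concrete, here is a small Python sketch (my own toy illustration, not the scheme from the paper) that simulates one use of the two equations above and computes the Etkin-Tse-Wang-style private power level, i.e., just enough power that the private message arrives at the unintended receiver at the noise floor. The function names and the unit noise variance are assumptions for the example.

import numpy as np

def interference_channel(x1, x2, c1, c2, noise_var=1.0, rng=None):
    # One use of the two-user Gaussian interference channel:
    # Y1 = X1 + c2*X2 + Z1  and  Y2 = c1*X1 + X2 + Z2.
    rng = rng or np.random.default_rng()
    z1 = rng.normal(0.0, np.sqrt(noise_var), size=np.shape(x1))
    z2 = rng.normal(0.0, np.sqrt(noise_var), size=np.shape(x2))
    y1 = x1 + c2 * x2 + z1
    y2 = c1 * x1 + x2 + z2
    return y1, y2

def etw_private_power(P, cross_gain, noise_var=1.0):
    # Etkin-Tse-Wang-style split: the private message gets just enough power
    # that it shows up at the other receiver at the noise floor.
    if cross_gain == 0:
        return P
    return min(P, noise_var / cross_gain ** 2)

# Example: with power 10 and cross gain 0.5, the private layer gets
# 1 / 0.25 = 4 units of power; the remaining 6 go into the public layer(s).
print(etw_private_power(10.0, 0.5))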

Information theoretic operating regimes of large ad hoc wireless networks
Ayfer Ozgur (EPFL), Ramesh Johari (Stanford), David Tse (UC Berkeley), and Olivier Leveque (EPFL)

Shannon’s famous AWGN capacity formula (the one that includes bandwidth) identifies two key operating regimes for point-to-point systems. In the bandwidth-limited regime, where SNR is high, capacity scales linearly with the bandwidth. In the power-limited regime, where SNR is low, capacity scales linearly with SNR. The question posed (and answered in some sense) in this talk is “is there an equivalent notion for large wireless networks?” They take a network of nodes on a grid with random source-destination pairs, distance-based path loss, and random phases. There are two parameters: \alpha, the path loss exponent, and \beta, an SNR exponent, with the assumption that SNR = n^{\beta}. The operating regimes are characterized by the following ratio, where n is the number of nodes:

\lim_{n  \to \infty} \frac{\log C_n(\alpha, \beta)}{\log n}

This is a scaling-law sense of capacity. They then identify four regimes in terms of \alpha and \beta, together with order-optimal strategies for each, which draws the existing work together in a unified way. The different schemes are hierarchical MIMO, “bursty” hierarchical MIMO, nearest-neighbor multihop, and local cooperation for MIMO multihop on squarelets.
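For the point-to-point analogy that motivates the whole question, here is a tiny numerical check of Shannon’s bandwidth-dependent formula C = W \log_2(1 + P/(N_0 W)); the particular numbers are only illustrative.

import numpy as np

def awgn_capacity(P, N0, W):
    # Shannon capacity (bits/s) of an AWGN channel with power P,
    # noise power spectral density N0, and bandwidth W.
    return W * np.log2(1.0 + P / (N0 * W))

N0 = 1.0
# Bandwidth-limited regime (high SNR): extra bandwidth buys almost
# proportionally more capacity.
print(awgn_capacity(P=1000.0, N0=N0, W=1.0))   # ~10.0 bits/s
print(awgn_capacity(P=1000.0, N0=N0, W=2.0))   # ~17.9 bits/s
# Power-limited regime (low SNR): doubling the power doubles capacity,
# while extra bandwidth barely helps.
print(awgn_capacity(P=0.01, N0=N0, W=10.0))    # ~0.014 bits/s
print(awgn_capacity(P=0.02, N0=N0, W=10.0))    # ~0.029 bits/s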

Secrecy when the relay is the eavesdropper
Xiang He and Aylin Yener, Penn State

This talk looked at the wiretap problem with an eavesdropper co-located with a relay, or alternatively, designing protocols in which you don’t want the relay to decode your message. So decode-and-forward is right out, but compress-and-forward (CF) is still ok. Can a relay still help even if it’s not trusted? Some partial results have been found by Oohama for a certain degradedness condition that I’m not clear on at the moment, but in this talk they looked at CF schemes that can achieve the secrecy rate point on the rate-equivocation tradeoff curve. That is, the relay learns nothing about the message, but the destination can still decode. Prof. Yener discussed two examples of relay channels with orthogonal components. If there are orthogonal links from the source to the destination and relay, as in El Gamal-Zahedi, then the relay cannot be used to increase the data rate while preserving secrecy, whereas if there are orthogonal links from the source and relay to the destination, as in Liang-Veeravalli, then it does help.
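For what it’s worth, “the relay learns nothing” is the usual equivocation-based secrecy constraint from the wiretap literature (the standard definition, not a formula quoted from the talk): if W is the source’s message and Y_r^n is everything the relay observes over n channel uses, the requirement is

\lim_{n \to \infty} \frac{1}{n} I(W \wedge Y_r^n) = 0

while the destination still decodes W with vanishing error probability.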

Coding and common knowledge
Yossef Steinberg, Technion

This talk was about a different kind of rate distortion problem. The setup is like the Wyner-Ziv problem, but the encoder should also be able to reproduce the reconstruction \hat{X} created by the decoder (without knowing the side information). This can be called a Common Information Constraint. An application to remote surgery was outlined. The expression for the rate distortion function then becomes:

R_{ck}(D) = \min [ I(\hat{X} \wedge X) - I(\hat{X} \wedge Y) ]

This reduces to a scheme like the following: the encoder quantizes X to \hat{X} and then sends it optimally, knowing the decoder has Y. Some extensions to broadcast settings were also explored.
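As a sanity check on the expression above (my own algebra, not something from the talk): when the test channel satisfies the Markov chain \hat{X} - X - Y, the difference of mutual informations collapses to a conditional mutual information, which matches the “quantize X, then bin against Y” reading:

I(\hat{X} \wedge X) - I(\hat{X} \wedge Y) = I(\hat{X} \wedge X, Y) - I(\hat{X} \wedge Y) = I(\hat{X} \wedge X \mid Y)

The first equality uses I(\hat{X} \wedge Y \mid X) = 0 from the Markov chain, and the second is just the chain rule for mutual information.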
