January 2006


I just signed up to sing a concert with Paul Flight (venue TBA):

Mozart – Misericordias Domini
Haydn – Salve Regina
Haydn – Six Partsongs
Beethoven – Elegy
Gasparini – Adoramus Te

With the Babi Yar rehearsal coming up on Monday and more Mahler than I can shake a stick at, I’m certainly going to be kept busy!

I came across the following in Royden’s Real Analysis:

All students are enjoined in the strongest possible terms to eschew proofs by contradiction!  There are two reasons for this prohibition: First, such proofs are very often fallacious, the contradiction on the final page arising from an erroneous deduction on an earlier page… Second, even when correct, such a proof gives little insight into the connection between A and B, whereas both the direct proof and the proof by contraposition construct a chain of argument connecting A with B.  One reason that mistakes are so much more likely in proofs by contradiction than in direct proofs or proofs by contraposition is that in a direct proof (assuming the hypothesis is not always false) all deductions from the hypothesis are true in those cases where the hypothesis holds, and similarly for proofs by contraposition (if the conclusion is not always true), the deductions from the negation of the conclusion are true in those cases where the conclusion is false.  Either way, one is dealing with true statements, and one’s intuition and knowledge about what is true help to keep one from making erroneous statements.  In proofs by contradiction, however, you are (assuming the theorem is true) in the unreal world where any statement can be derived, and so the falsity of a statement is no indication of an erroneous deduction.

This impassioned attack reminds me of Ionesco’s The Lesson or Stoppard’s Jumpers.  It’s tempting to psychoanalyze Royden and say that he must have an irrational fear of an unreal “make-believe” world or an overwhelming need to have certainty and truth.  But that’s just silly, isn’t it?

And the latest victim is Apurva. You would be hard-pressed to find a more congenial and literate individual.

Exponential Error Bounds for Random Codes in the Arbitrarily Varying Channel
Thomas Ericson
IEEE Transactions on Information Theory, 31(1): 42–48

I really wish I had read this paper fully before writing my ISIT submission, because the vocabulary and terminology are less confusing here. The title is pretty self-explanatory, as long as you know information theory. An arbitrarily varying channel (AVC) is a probabilistic/mathematical model for a communication scenario in which there is a sender, a receiver, and a jammer. The jammer wants to minimize the communication rate between the sender and receiver. Assuming there exists some nonzero communication rate that is achievable over all possible strategies of the jammer, Ericson wants to find how the probability of making a decoding error decays as a function of the blocklength of the code. In a heuristic sense, if we make the packets of data larger and larger, how does the error probability scale?

AVCs have a lot of technical junk around them, but one important thing is the distinction between random and deterministic codes. A deterministic code C is a fixed mapping from messages {1, 2, 3, …, N} into codewords {x1, x2, …, xN}. The jammer knows this mapping and so can tailor its strategy to be as bad as possible for this codebook. A random code is a random variable C taking values in the set of deterministic codes. If the number of values that C can take on is exp(n Rz), Ericson calls Rz the key rate of the code. That is, for a blocklength of n, the sender and receiver have n Rz securely shared bits with which to choose a codebook without the jammer’s knowledge.
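As a toy illustration (my own, not from the paper), a random code can be thought of as a codebook derived from a shared secret: the key selects one of exp(n Rz) deterministic codebooks, so sender and receiver agree on the mapping while the jammer, not knowing the key, cannot tailor its attack to it. The function name and the PRNG-seed mechanism here are illustrative assumptions.

```python
import random

def shared_codebook(key, n, rate):
    """Toy sketch of a random code: the shared secret key seeds a PRNG,
    so sender and receiver derive the same codebook while the jammer
    cannot.  Codewords are binary n-vectors; there are 2**(n*rate)
    messages, i.e. code rate Rx = rate in bits per channel use."""
    rng = random.Random(key)
    num_messages = 2 ** int(n * rate)
    return [tuple(rng.randint(0, 1) for _ in range(n))
            for _ in range(num_messages)]
```

Because both parties seed with the same key, `shared_codebook(42, 8, 0.5)` returns the identical list of 16 length-8 codewords on both ends.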

Ericson wants to find the largest attainable E such that the probability of decoding error is upper bounded by exp(-n (E – d)) for some small d. This is called the reliability function of the channel. Clearly it will depend on the code rate Rx = n^{-1} log N and the key rate Rz. Let F(Rx) be the reliability function when Rz is infinite. The first result is that E(Rx, Rz) >= min[ F(Rx), Rz ].
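A quick numeric sketch of what this bound means. The function `toy_F` below is a made-up stand-in for the infinite-key-rate reliability function (the real F depends on the channel); the point is only that once Rz < F(Rx), the key rate becomes the bottleneck in the exponent.

```python
import math

def error_prob_bound(F, Rx, Rz, n, d=0.01):
    """Upper bound exp(-n (E - d)) on the decoding error probability,
    using the achievable exponent E = min(F(Rx), Rz)."""
    E = min(F(Rx), Rz)
    return math.exp(-n * (E - d))

def toy_F(R):
    # Hypothetical reliability function, for illustration only:
    # positive below a "capacity" of 1, zero above it.
    return max(0.0, 1.0 - R) ** 2
```

For Rx = 0.3 we have toy_F(0.3) = 0.49, so with a small key rate like Rz = 0.05 the exponent is 0.05: doubling the blocklength squares the error bound, but no amount of blocklength buys a faster decay than the key rate allows.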

In some cases the AVC capacity for deterministic codes is zero. This happens intuitively when the jammer can “pretend” to be the sender, so that the receiver can’t tell how to decode the message. In this case we say the channel is symmetrizable. If the channel is symmetrizable, then in fact E(Rx, Rz) = min[ F(Rx), Rz]. This basically says that the best decay in the error probability is limited by the key rate.

Interestingly enough, this is a similar result to the one that I proved in my ISIT paper, only in my case I did it for the Gaussian channel and I only have an upper bound on the error decay. Strict equality is a bit of a bear to prove.

Anyway, this paper is highly relevant to my current research and probably of interest to about 15 people in the world.

I spent the morning reading and working in the 1369 Coffeehouse in Central Square. It was like the good old days except that this time I had a laptop. Perhaps it’s just me being wistful and reminiscing, but I find the coffeehouses in Cambridge/Somerville significantly better than the ones in Berkeley. Maybe it’s because the weather makes you appreciate them more.

The music was perfect — Gotan Project, Tosca, and Django Reinhardt. I feel like I’m home.

Things have kicked into high gear for MH2K6. As I’ve remarked before, this is one annual event that reminds me what is possible when you put your mind to it. Pathological? Yes. But also wicked fun.

It should really be “a paper every two weeks” or something: mostly the laziness is in the blogging half of the paper-a-day project rather than the reading half. This one was pretty heavy going, though.

Pseudodifferential operators and Banach algebras in mobile communications
Thomas Strohmer
Applied and Computational Harmonic Analysis, to appear.

This paper is pretty heavy on the math but does a nice job of explaining the engineering concepts as well. There are two main thrusts here: modeling wireless channels and OFDM pulse design. Multipath fading and Doppler shift in wireless communication channels can be mathematized via pseudodifferential operators, and designing OFDM pulses can be viewed as trying to make a certain matrix that represents the channel equalization sparse or concentrated on its diagonal. The tools Strohmer uses are pretty advanced (for me), but that’s because I’m not up on my harmonic analysis.

Wireless fading channels are typically modeled by a convolution with a time-varying filter. If the filter were time-invariant we could take Fourier transforms and write the channel as a multiplication by a transfer function. In the time-varying case we can make a similar representation in terms of a time-varying transfer function, which makes the channel law into a pseudodifferential operator whose Kohn-Nirenberg symbol is precisely the time-varying transfer function. Thus modeling constraints on the symbol can be translated into constraints on the operator.

The decay profile from multipath fading and the effect of Doppler shift provide constraints on the localization of the symbol in the time-frequency plane. The engineering constraints don’t give a nice characterization of the symbol per se, but we can embed the class of channels into a Banach algebra of operators with certain weight functions. We can also embed the symbol into a specific modulation space called a Sjöstrand class.

Turning to the equalization problem, OFDM pulses form a Gabor system, which is a kind of time-frequency basis for a function space. We would like to choose a basis so that recovering the data modulated on these pulses is easy. It turns out that the whole equalization operation can be written as a matrix that is related to the set of pulses, so the condition we want is for this matrix and its inverse to be nearly diagonal or sparse.

The big theorem (Theorem 4.1) in the paper essentially states that for pulses with good time-frequency localization, if the channel’s K-N symbol is invertible, then the inverse of the equalization matrix belongs to a certain algebra. This is the mathematical statement of the pulses being a “good system for communication.” This, plus some more advanced relationships between different spaces, gives a way of actually engineering a pulse design that trades off spectral efficiency against inter-symbol and inter-channel interference.

All in all, this was a good paper to read but I don’t think I have the background to go and use these tools and techniques on anything because the math is pretty far above my head.

sūryaṃ cakṣurgachatu vātamātmā dyāṃ ca gachapṛthivīṃ ca dharmaṇā |
apo vā gacha yadi tatra te hitamoṣadhīṣu prati tiṣṭhā śarīraiḥ ||
(Rg Veda X.16.3)

“May your eye go to the sun, your breath to the wind; go to heaven and to earth, as is ordained; or go to the waters, if that is your lot; take your place in the plants with your limbs.”

na jāyate mriyate vā kadācin
nāyaṃ bhūtvā bhavitā vā na bhūyaḥ |
ajo nityaḥ śāśvato ‘yam purāṇo
na hanyate hanyamāne śarīre ||
(Gita II.20)

“It is never born, nor does it ever die; nor, having once come to be, will it ever cease to be. Unborn, eternal, everlasting, ancient, it is not slain when the body is slain.”

Games People Don’t Play
Peter Winkler
Puzzlers’ Tribute, David Wolfe and Tom Rodgers, eds., A K Peters Ltd. (2001)

This is a short note on four little games that people don’t really play. Some are unfair, some are violent, but the same interesting question arises for each: what is the best strategy for each player? I’ll describe two of them (with variants); the other two are equally interesting, but there’s no point in repeating everything!

In “Larger Or Smaller”, Paula writes two numbers on different slips of paper. Victor picks one and has to guess whether it is the larger or the smaller of the two, winning $1 if he’s right. In one version the numbers are just distinct integers; in the other they are drawn from a uniform distribution on [0,1] and Paula picks which one Victor sees.

Victor can do better than even (if only by a tiny amount) by adopting a clever randomized thresholding strategy proposed by Cover. He draws a random threshold, guesses “larger” if the number he sees exceeds it and “smaller” otherwise, and wins slightly more than half the time (think about it). In the random version, the game is completely fair, and Paula has a simple strategy for choosing which number to reveal.
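A quick Monte Carlo check of the integer version (the particular numbers and the Gaussian threshold distribution are my choices for illustration, not Winkler’s or Cover’s): whenever the threshold happens to land between Paula’s two numbers Victor is guaranteed to be right, and otherwise it’s a coin flip, so his edge is half the probability of that event.

```python
import random

def victor_win_rate(a, b, trials=200_000, seed=1):
    """Estimate Victor's win probability when Paula writes the distinct
    integers a and b.  Victor sees a uniformly chosen slip, draws a
    Gaussian threshold T, and guesses "larger" iff the seen number > T."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        seen = rng.choice((a, b))
        t = rng.gauss(0, 10)  # any distribution with full support works
        guessed_larger = seen > t
        wins += guessed_larger == (seen == max(a, b))
    return wins / trials
```

For a = 3, b = 4 the estimate comes out a little under 0.52: better than even, but only by half the chance that the threshold falls in (3, 4).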

In “Colored Hats” a dictator gives blue and red hats to a roomful of dissidents. They cannot communicate. Each can see the hats of the others, and they simultaneously guess their own hat colors. Anyone who guesses wrong is executed. How do we maximize the number of guaranteed survivors? As a variant, the dissidents are lined up and guess from the back of the line to the front, so each can see all the hats ahead of them and hear the guesses from those behind them.

It turns out you can guarantee saving floor(n/2) of the dissidents by having them pair up: in each pair, one person guesses the color they see on their partner and the other guesses the opposite color, so exactly one member of each pair is always right. In the sequential version you can save all but one by having the person at the back announce the parity of the red hats they see; everyone else can then deduce their own hat color from that parity, the guesses they hear, and the hats they can still see.
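The sequential strategy can be sketched in a few lines (the encoding of colors as bits and the function name are mine): the back person sacrifices their own guess to announce the parity of the red hats in front, and each person further up the line peels off what has already been accounted for.

```python
import random

def line_survivors(hats):
    """Hats are 0 (blue) or 1 (red); hats[0] is the back of the line,
    and each person sees all hats at higher indices.  Returns the number
    of correct guesses; the parity strategy guarantees at least n - 1."""
    guesses = []
    for i in range(len(hats)):
        if i == 0:
            # Back person announces the parity of the red hats they see.
            guess = sum(hats[1:]) % 2
        else:
            # Announced parity, minus the hats already deduced (the
            # guesses heard, correct by induction), minus the red hats
            # still visible ahead, leaves my own hat color.
            guess = (guesses[0] - sum(guesses[1:i]) - sum(hats[i + 1:])) % 2
        guesses.append(guess)
    return sum(g == h for g, h in zip(guesses, hats))
```

Running this on random hat assignments always saves at least n - 1 people; the back person survives only if their own hat happens to match the parity they announced.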

This hat problem is particularly interesting because of its relation to some of the work in my Master’s thesis. So this paper is actually relevant (albeit tangentially) to my research. Plus it’s a fun and entertaining read. Recreational math and little games like these were what really got me interested in mathematics when I was younger. That and Raymond Smullyan, of course.

Via Amardeep Singh, an article in the NY Times by Kwame Anthony Appiah on cosmopolitanism. His argument (and it’s a good one) is that movements to “preserve traditional culture” are misguided:

Talk of cultural imperialism “structuring the consciousnesses” of those in the periphery treats people like Sipho as blank slates on which global capitalism’s moving finger writes its message, leaving behind another cultural automaton as it moves on. It is deeply condescending. And it isn’t true.

He talks about the reception of soap operas in different cultures — because of their own local values and beliefs, a show like Dallas does not have the same meaning and interpretation to critics in the US as it does to viewers in other countries. It’s a nice complement to the reader-response theory I’m getting in the book I’m reading now. The essay is an excerpt from a forthcoming book, which I may just have to read.
