more singing

I just signed up to sing a concert with Paul Flight (venue TBA):

Mozart – Misericordias Domini
Haydn – Salve Regina
Haydn – Six Partsongs
Beethoven – Elegy
Gasparini – Adoramus Te

With the Babi Yar rehearsal coming up on Monday and more Mahler than I can shake a stick at, I’m certainly going to be kept busy!

contradiction injunction

I came across the following in Royden’s Real Analysis:

All students are enjoined in the strongest possible terms to eschew proofs by contradiction!  There are two reasons for this prohibition : First, such proofs are very often fallacious, the contradiction on the final page arising from an erroneous deduction on an earlier page… Second, even when correct, such a proof gives little insight into the connection between A and B, whereas both the direct proof and the proof by contraposition construct a chain of argument connecting A with B.  One reason that mistakes are so much more likely in proofs by contradiction than in direct proofs or proofs by contraposition is that in a direct proof (assuming the hypothesis is not always false) all deductions from the hypothesis are true in those cases where the hypothesis holds, and similarly for proofs by contraposition (if the conclusion is not always true), the deductions from the negation of the conclusion are true in those cases where the conclusion is false.  Either way, one is dealing with true statements, and one’s intuition and knowledge about what is true help to keep one from making erroneous statements.  In proofs by contradiction, however, you are (assuming the theorem is true) in the unreal world where any statement can be derived, and so the falsity of a statement is no indication of an erroneous deduction.

This impassioned attack reminds me of Ionesco’s The Lesson or Stoppard’s Jumpers.  It’s tempting to psychoanalyze Royden and say that he must have an irrational fear of an unreal “make-believe” world or an overwhelming need to have certainty and truth.  But that’s just silly, isn’t it?

paper a day : exponential error bounds for AVCs

Exponential Error Bounds for Random Codes in the Arbitrarily Varying Channel
Thomas Ericson
IEEE Transactions on Information Theory, 31(1) : 42–48

I really wish I had read this paper fully before writing my ISIT submission, because the vocabulary and terminology are less confusing here. The title is pretty self-explanatory, as long as you know information theory. An arbitrarily varying channel (AVC) is a probabilistic/mathematical model for a communication scenario in which there is a sender, a receiver, and a jammer. The jammer wants to minimize the communication rate between the sender and receiver. Assuming there exists some nonzero communication rate that is achievable over all possible strategies of the jammer, Ericson wants to find how the probability of making a decoding error decays as a function of the blocklength of the code. In a heuristic sense, if we make the packets of data larger and larger, how does the error probability scale?

AVCs have a lot of technical junk around them, but one important thing is the distinction between random and deterministic codes. A deterministic code C is a fixed mapping from messages {1, 2, 3, …, N} into codewords {x1, x2, …, xN}. The jammer knows this mapping and so can tailor its strategy to be as bad as possible for this codebook. A random code is a random variable C taking values in the set of deterministic codes. If the number of values that C can take on is exp(n Rz), Ericson calls Rz the key rate of the code. That is, for a blocklength of n, we have nRz bits securely shared between the sender and receiver in order to choose a codebook without the jammer’s knowledge.
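To make the key-rate idea concrete, here is a minimal Python sketch (my own illustration, not from the paper): the sender and receiver use a shared secret key to select one of roughly exp(nRz) random codebooks, so the jammer, who knows only the ensemble, cannot target the realized codebook.

```python
import random

def shared_codebook(n, Rz, num_messages, key):
    """Use a shared secret key to pick one of ~2^(n*Rz) random binary
    codebooks, each mapping num_messages messages to n-bit codewords.
    (Illustrative sketch; the names and parameters are my own.)"""
    num_codebooks = 2 ** round(n * Rz)        # size of the key space
    rng = random.Random(key % num_codebooks)  # seed the draw from the shared key
    return [[rng.randint(0, 1) for _ in range(n)]
            for _ in range(num_messages)]

# sender and receiver holding the same key derive the same codebook;
# the jammer, not knowing the key, sees only the random ensemble
cb_tx = shared_codebook(n=8, Rz=0.5, num_messages=4, key=42)
cb_rx = shared_codebook(n=8, Rz=0.5, num_messages=4, key=42)
assert cb_tx == cb_rx
```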

Ericson wants to find the largest attainable E such that the probability of decoding error is upper bounded by exp(-n(E – d)) for some small d. This E is called the reliability function of the channel. Clearly it will depend on the code rate Rx = (1/n) log N and the key rate Rz. Let F(Rx) be the reliability function when Rz is infinite. The first result is that E(Rx, Rz) >= min[ F(Rx), Rz ].
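The bound is easy to play with numerically. Here is a toy Python sketch (the shape of F below is made up purely for illustration; Ericson gives no such closed form):

```python
def error_exponent_lower_bound(F, Rx, Rz):
    """Ericson's first result: E(Rx, Rz) >= min(F(Rx), Rz)."""
    return min(F(Rx), Rz)

# hypothetical infinite-key reliability function, for illustration only
F = lambda Rx: max(0.0, 1.0 - Rx)

# key-rate-limited regime: Rz sits below F(Rx), so the key is the bottleneck
E_low = error_exponent_lower_bound(F, Rx=0.3, Rz=0.5)    # min(0.7, 0.5) = 0.5

# ample key: the bound reverts to the infinite-key reliability function
E_high = error_exponent_lower_bound(F, Rx=0.3, Rz=10.0)  # min(0.7, 10) = 0.7
```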

In some cases the AVC capacity for deterministic codes is zero. This happens intuitively when the jammer can “pretend” to be the sender, so that the receiver can’t tell how to decode the message. In this case we say the channel is symmetrizable. If the channel is symmetrizable, then in fact E(Rx, Rz) = min[ F(Rx), Rz]. This basically says that the best decay in the error probability is limited by the key rate.

Interestingly enough, this is a similar result to the one that I proved in my ISIT paper, only in my case I did it for the Gaussian channel and I only have an upper bound on the error decay. Strict equality is a bit of a bear to prove.

Anyway, this paper is highly relevant to my current research and probably of interest to about 15 people in the world.

cambridge and coffee

I spent the morning reading and working in the 1369 Coffeehouse in Central Square. It was like the good old days except that this time I had a laptop. Perhaps it’s just me being wistful and reminiscing, but I find the coffeehouses in Cambridge/Somerville significantly better than the ones in Berkeley. Maybe it’s because the weather makes you appreciate them more.

The music was perfect — Gotan Project, Tosca, and Django Reinhardt. I feel like I’m home.

paper a day : pseudodifferential operators for wireless communications

It should really be “paper every two weeks” or something — mostly the laziness is in the blogging half of the paper-a-day routine rather than the reading half. This one was pretty heavy going, though.

Pseudodifferential operators and Banach algebras in mobile communications
Thomas Strohmer
Appl. Comp. Harm. Anal., to appear.

This paper is pretty heavy on the math but does a nice job of explaining the engineering concepts as well. There are two main thrusts here : modeling wireless channels and OFDM pulse design. Multipath fading and Doppler shift in wireless communication channels can be mathematized via pseudodifferential operators, and designing OFDM pulses can be viewed as trying to make a certain matrix that represents the channel equalization sparse or concentrated on its diagonal. The tools Strohmer uses are pretty advanced (for me), but that’s because I’m not up on my harmonic analysis.

Wireless fading channels are typically modeled by a convolution with a time-varying filter. If the filter were time-invariant we could take Fourier transforms and write the channel as a multiplication by a transfer function. In the time-varying case we can make a similar representation in terms of a time-varying transfer function, which makes the channel law into a pseudodifferential operator whose Kohn-Nirenberg symbol is precisely the time-varying transfer function. Thus modeling constraints on the symbol can be translated into constraints on the operator.
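A small numerical sketch of that model (my own illustration, with made-up taps): a time-varying filter computes y[n] = Σ_k h[n, k] x[n − k], and when the taps h[n, k] don’t depend on n it collapses back to ordinary time-invariant convolution:

```python
import numpy as np

def time_varying_channel(x, h):
    """Apply y[n] = sum_k h[n, k] * x[n - k], where h[n, k] is
    tap k of the impulse response at output time n.
    (Illustrative sketch of the channel model, not Strohmer's code.)"""
    N, K = h.shape
    y = np.zeros(N)
    for n in range(N):
        for k in range(K):
            if 0 <= n - k < len(x):
                y[n] += h[n, k] * x[n - k]
    return y

# sanity check: time-invariant taps [1.0, 0.5] act like ordinary convolution,
# so a unit impulse input just reads the taps back out
h_lti = np.tile([1.0, 0.5], (4, 1))
x = np.array([1.0, 0.0, 0.0, 0.0])
y = time_varying_channel(x, h_lti)   # [1.0, 0.5, 0.0, 0.0]
```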

The decay profile from multipath fading and the effect of Doppler shift provide constraints on the localization of the symbol in the time-frequency plane. The engineering constraints don’t give a nice characterization of the symbol per se, but we can embed the class of channels into a Banach algebra of operators with certain weight functions. We can also embed the symbol into a specific modulation space called a Sjöstrand class.

Turning to the equalization problem, OFDM pulses form a Gabor system, which is a kind of time-frequency basis for a function space. We would like to choose a basis so that recovering the data modulated on these pulses is easy. It turns out that the whole equalization operation can be written as a matrix that is related to the set of pulses, so the condition we want is for this matrix and its inverse to be nearly diagonal or sparse.
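To see what “nearly diagonal” buys you, here is a toy diagnostic (my own construction, not the paper’s machinery) that measures how a matrix’s entries decay away from the main diagonal; fast decay means each symbol interferes only weakly with distant symbols:

```python
import numpy as np

def offdiag_decay(A):
    """Largest entry magnitude on each pair of off-diagonals of A.
    A 'nearly diagonal' matrix shows fast decay in this profile.
    (Toy diagnostic of my own, not from the paper.)"""
    N = A.shape[0]
    return [max(np.abs(np.diagonal(A, d)).max(),
                np.abs(np.diagonal(A, -d)).max())
            for d in range(N)]

# a matrix with geometric off-diagonal decay, standing in for a
# well-behaved equalization matrix
N = 6
A = np.array([[0.5 ** abs(i - j) for j in range(N)] for i in range(N)])
decay = offdiag_decay(A)   # decay[d] == 0.5**d: geometric falloff
```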

The big theorem (Theorem 4.1) in the paper essentially states that for pulses with good time-frequency localization, if the channel’s K-N symbol is invertible, then the inverse of the equalization matrix belongs to a certain algebra. This is the mathematical statement of the pulses being a “good system for communication.” This, plus some more advanced relationships between different spaces, gives a way of actually engineering a pulse design that trades off spectral efficiency against inter-symbol and inter-channel interference.

All in all, this was a good paper to read but I don’t think I have the background to go and use these tools and techniques on anything because the math is pretty far above my head.