Readings

I anticipate I will be doing a fair bit more reading in the future, due to the new job and personal circumstances. However, I probably won’t write more detailed notes on the books. This blog should be a rapidly mixing random walk, after all.

Embassytown (China Miéville) : a truly bizarre novel set on an alien world on which humans have an Embassy but can only communicate with the local aliens in a language that defies easy description. Ambassadors come in pairs, as twins — to speak with the Ariekei they must both speak simultaneously (in “cut” and “turn”). The Ariekei’s language does not allow lying, and they hold contests in which they try to speak falsehoods. However, events trigger a deadly change (I don’t want to give it away). Philosophically, the book revolves around how language structures thought and perception, and it’s fascinating if you like to think about those things.

Chop Suey: A Cultural History of Chinese Food in the United States (Andrew Coe) : a short but engaging read about how Chinese food came to the US. The book really starts with Americans in China and their observations of elite Chinese banquets. A particular horror was that the meat came already chopped up — no huge roasts to carve. Chapter by chapter, Coe takes us from the railroad era through the 1920s, the mass-marketing of Chinese food and the rise of La Choy, to Nixon going to China. The book is full of fun tidbits and made my flights to and from Seattle go by quickly.

The Thousand Autumns of Jacob de Zoet: A Novel (David Mitchell) : I really love David Mitchell’s writing, but this novel was not my favorite of his. It was definitely worth reading — I devoured it — but the subject matter is hard. Jacob de Zoet is a clerk at Dejima, a Dutch East India Company trading post in Japan at the turn of the 19th century. There are many layers to the story, and more than a hint of the grotesque and horrific, but Mitchell has an attention to detail and a mastery of perspective that really make the place and story come alive.

Air (Geoff Ryman) : a story about technological change, the digital divide, economic development, and ethnic politics, set in a village in fictional Karzistan (which looks a lot like Kazakhstan). Air is something like mandatory Internet in your brain, and it is set to be deployed globally. During a test run in the village, Chung Mae, a “fashion expert,” ends up deep inside Air and realizes that the technology is going to change their lives. She sets about trying (in a desperate, almost mad way) to warn her village and bring them into the future before it overwhelms them. There’s a lot to unpack here, especially in how technology is brought to rural communities in developing nations, how global capital and the “crafts” market impact local peoples, and the dynamics of village social orders. It’s science fiction, but not really.

The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Sharon Bertsch McGrayne) : an engaging read about the history of Bayesian ideas in statistics. It reads a bit like an us-vs.-them underdog story of how Bayesian methods overcame terrible odds (prior beliefs?) to win the day. I’m not sure I can give it as enthusiastic a review as Christian Robert did, but I do recommend it as an engaging piece of popular nonfiction on this slice of the history of modern statistics. In particular, it should be entertaining to a general audience.

Dangerous Frames: How Ideas about Race and Gender Shape Public Opinion (Nicholas J.G. Winter) : the title says most of it, except that it’s mostly about how ideas about race and gender shape white public opinion. The basic theoretical structure is that we carry schemas that help us interpret issues, such as a race schema or a gender schema, and that issues come packaged in frames or narratives. If a schema is “active” and an issue is framed in a way concordant with that schema, then people’s opinions follow the schema, even if the issue is not “about” race or gender; people reason analogically, so they apply the schema if it matches. To back up the theory, Winter presents experiments, both of the undergraduates-in-a-psych-lab variety and analyses of survey data, showing that reframing certain issues skews people’s “natural” beliefs toward the schema they apply. The schemas he discusses are mostly those of white Americans, so the book is a bit of an uncomfortable read because he doesn’t really interrogate the somewhat baldly racist schemas themselves. The statistics, as with many psychological studies, leave something to be desired — I take the effects he notices at a qualitative level (as does he, sometimes).

Spread spectrum… in spaaaaaaaace…

I saw on the ArXiV earlier this month a paper on interstellar communication by Berkeley’s own David Messerschmitt. I only met him once, really, at my prelim exam oh so many years ago, but I figured I would give it a read. And here you thought spread spectrum was dead…

Prof. Messerschmitt proposes spread-spectrum signaling because of its combination of interference robustness and detectability. The fundamental assumption is that the receiver doesn’t know too much about the modulation strategy of the transmitter (a case of stochastic encoding but deterministic decoding). The choice of wide-band signaling is novel — SETI-related projects have traditionally looked for narrowband signals. The bulk of the paper is on what to do at the transmitter:

The focus of this paper is on the choice of a transmitted signal, which directly parallels the receiver’s challenge of anticipating what type of signal to expect. In this we take the perspective of a transmitter designer, because in the absence of explicit coordination it is the transmitter, and the transmitter alone, that chooses the signal. This is significant because the transmitter designer possesses far less information about the receiver’s environment than the receiver designer, due to both distance (tens to hundreds of light-years) and speed-of-light delay (tens to hundreds of years). While the receiver design can and should take into account all relevant characteristics of its local environs and available resources and technology, in terms of the narrower issue of what type of signal to expect the receiver designer must rely exclusively on the perspective of the transmitter designer.

The rest of the paper centers on designing a coding scheme that is robust to any kind of radio-frequency interference (RFI), without assuming any knowledge at the decoder — specific knowledge of the RFI (say, a statistical description) can only enhance detection, but the goal is robustness to modeling errors. To get this robustness, he spends a fair bit of time developing isotropic models for noise and coding (which should be familiar to information theorists of a Gaussian disposition) and then reduces the problem to looking for appropriate time and bandwidth parameters.
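
To get a feel for the interference-robustness argument, here is a toy numerical sketch (my own illustration, not the modulation scheme in the paper): BPSK symbols are spread by a pseudo-random ±1 chip sequence, hit with a narrowband tone five times stronger than the signal, and recovered by correlating against the chips. The tone is uncorrelated with the chips and averages away, while the signal adds coherently. (The toy receiver knows the chip sequence, which an interstellar listener would not, so this only illustrates the robustness half of the story.)

```python
import numpy as np

rng = np.random.default_rng(0)

n_bits = 200           # number of BPSK symbols
chips_per_bit = 128    # spreading factor (processing gain)

bits = rng.integers(0, 2, n_bits) * 2 - 1                    # +/-1 data symbols
chips = rng.integers(0, 2, (n_bits, chips_per_bit)) * 2 - 1  # pseudo-random +/-1 chips

tx = (bits[:, None] * chips).ravel()   # spread transmit signal, one chip per sample

# Channel: a narrowband tone 5x stronger than the signal, plus mild wideband noise
t = np.arange(tx.size)
rx = tx + 5.0 * np.cos(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(tx.size)

# Receiver: despread by correlating each bit interval against its chip sequence.
# The tone is uncorrelated with the chips, so its contribution averages out.
decisions = np.sign((rx.reshape(n_bits, chips_per_bit) * chips).sum(axis=1))

print("bit error rate:", np.mean(decisions != bits))
```

Without spreading, a tone five times the symbol amplitude would overwhelm the ±1 symbols and corrupt roughly half the hard decisions; with a spreading factor of 128, despreading improves the signal-to-interference power ratio by roughly that same factor, and the measured bit error rate comes out at or near zero.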

This is definitely more of a “communication theory” paper, but I think some of the argument could be made clearer by appealing to known results in information theory. In particular, this communication problem is like coding over an arbitrarily varying channel (AVC); the connection between spread-spectrum techniques and AVCs has been made before by Hughes and Thomas. However, translating Shannon-theoretic ideas from AVCs to concrete modulation schemes is a bit messy, and some kind of translation is needed. This paper doesn’t quite “translate,” but it does bring up an interesting communication scenario: what happens when the decoder only has a vague sense of your coding scheme?

A new uncertainty principle

During a recent Google+ conversation about the quality of reviews and how to improve them (mostly from the CS side), the sheer number of reviews came up as a limiting factor. Given the review window for a conference, there is not enough time for a dialogue between reviewers and authors. By contrast, for journals (such as Trans. IT), I find that I’ve gotten really thorough reviews and my papers have improved a lot through the review process, but it can take years to get something published because each round of communication is so slow.

This points to a new fundamental limit for academic communications:

Theorem. Let R be the number of papers submitted for review, Q be the average quality of reviews for those papers, and T be the time allotted to reviewing the papers. Then

R Q / T = K,

where K is a universal constant.

Arial is for Windows, Helvetica is for Mac

After watching the movie Helvetica a few years ago and playing the game Helvetica vs. Arial, I’ve become more aware of the ubiquity of Helvetica and the creep of Arial. In skimming this year’s edition of the NSF grant proposal guide (why yes, I am writing some proposals now), I saw that for the main proposal guidelines, the typeface requirements are:

  • Arial, Courier New, or Palatino Linotype at a font size of 10 points or larger;
  • Times New Roman at a font size of 11 points or larger; or
  • Computer Modern family of fonts at a font size of 11 points or larger.

with a footnote on “Arial” that says “Macintosh users also may use Helvetica and Palatino typefaces.” Quite apart from the discrimination issue, does a PDF identify the OS of its creator? Also, can you imagine reading a proposal in 10 point Courier? Yikes.

Clearly I need to spend less time thinking about this and more time chopping the last half a page…

Notes on stable distributions

After attending a recent talk at TTI on dimension reduction by Moses Charikar, in which he mentioned the special role stable distributions play, I made a note to freshen up my own scattershot knowledge of stable distributions. Of course, things got too busy and the note ended up on my sub-list of to-do items that get infinitely postponed. However, I’ve been saved by a recent post to the ArXiV by Svante Janson, who does all sorts of interesting work on these cool objects called graphons (limits of sequences of dense graphs):

Stable Distributions
Svante Janson

We give some explicit calculations for stable distributions and convergence to them, mainly based on less explicit results in Feller (1971). The main purpose is to provide ourselves with easy reference to explicit formulas. (There are no new results.)

All (or at least most) of the facts I wanted in one place! Hooray!

He starts with infinitely divisible distributions (e.g. Gaussian, Poisson, Gamma) and then discusses \alpha-stable distributions and the uniqueness of the corresponding measures for \alpha \in (0,2] (the case \alpha = 2 gives the Gaussian). I’m still reading it (bits at a time), but it’s great to have little surveys like this — broadens the mind, builds character, &c.
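
As a reminder of why stable laws are the “special” distributions for dimension reduction (the Charikar connection above), here is a toy numerical check, my own illustration and not anything from the talk or from Janson’s note. The key property is that if c_1, \dots, c_d are i.i.d. symmetric \alpha-stable, then \sum_i c_i x_i is again \alpha-stable with scale \|x\|_\alpha. For \alpha = 1 (Cauchy) this gives an Indyk-style sketch for \ell_1 distances: project with a Cauchy matrix and take the median of the absolute values.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 1000, 400                 # ambient dimension, number of projections
x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Each row of C is an i.i.d. standard Cauchy (1-stable) vector, so each entry of
# C @ (x - y) is Cauchy with scale ||x - y||_1; the median of |standard Cauchy|
# is 1, so the sample median of the absolute projections estimates that scale.
C = rng.standard_cauchy((k, d))
estimate = np.median(np.abs(C @ (x - y)))
exact = np.abs(x - y).sum()

print(f"exact l1 distance: {exact:.1f}, sketch estimate: {estimate:.1f}")
```

The \alpha = 2 case is the familiar Gaussian (Johnson-Lindenstrauss style) projection for \ell_2; the \alpha < 2 stable laws have infinite variance, which is why the estimator is a median rather than a mean.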