truly random numbers?

I heard this interesting story on All Things Considered about random number generation via quantum entanglement. The result was reported in Nature (the full paper is also available). I bet Scott will have something more to say about it (eventually), but it seems interesting to me, at least.

Perhaps I should go learn some quantum physics…


Shannon theory helps decipher Pictish?

Well, if not decipher, at least claim that there is something to read. A recent paper claims that Pictish inscriptions are a form of written language:

Lo and behold, the Shannon entropy of Pictish inscriptions turned out to be what one would expect from a written language, and not from other symbolic representations such as heraldry.

The full paper has more details. From reading the popular account I thought it was just a simple hypothesis test using the empirical entropy as a test statistic and “heraldry” as the null hypothesis, but it is a little more complicated than that.

After identifying the set of symbols in Pictish inscriptions, the question is how related adjacent symbols are to each other. That is, can the symbols be read sequentially? What they do is renormalize Shannon’s F_2 statistic (from the paper “Prediction and entropy of printed English”), which is essentially the empirical conditional entropy of the current symbol given the previous symbol. They compute:

U_r = F_2 / \log\left( \frac{N_d}{N_u} \right)

where N_d and N_u are the number of di-grams and uni-grams, respectively. Why normalize? The statistic F_2 by itself does not discriminate well between semasiographic (symbolic systems like heraldry) and lexigraphic (e.g. alphabets or syllabaries) systems.
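To make this concrete, here is a minimal Python sketch of how one might compute F_2 and U_r from a sequence of symbols. I am assuming N_d and N_u count distinct di-grams and uni-grams (the paper may define them differently), and estimating F_2 as H(X_1, X_2) - H(X_1) from empirical frequencies:

```python
from collections import Counter
from math import log2

def f2_and_ur(symbols):
    """Empirical F_2 (conditional entropy of a symbol given its
    predecessor) and the normalized statistic U_r = F_2 / log2(N_d / N_u)."""
    unigrams = Counter(symbols)
    digrams = Counter(zip(symbols, symbols[1:]))
    n_pairs = sum(digrams.values())

    # Marginal counts for the first symbol of each di-gram.
    first = Counter()
    for (a, _), count in digrams.items():
        first[a] += count

    # F_2 = H(X_1, X_2) - H(X_1), estimated from empirical frequencies.
    h_joint = -sum(c / n_pairs * log2(c / n_pairs) for c in digrams.values())
    h_first = -sum(c / n_pairs * log2(c / n_pairs) for c in first.values())
    f2 = h_joint - h_first

    # Assumption: N_d and N_u are counts of *distinct* di-grams and uni-grams.
    u_r = f2 / log2(len(digrams) / len(unigrams))
    return f2, u_r

# Example on a short "text" over a small symbol alphabet:
f2, ur = f2_and_ur(list("abracadabra"))
```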

Another feature the authors think is important is the number of di-grams which are repeated in the text. If S_d is the number of di-grams appearing only once and T_d is the total number of di-grams, they use a “di-gram repetition factor”

C_r = \frac{N_d}{N_u} + a \cdot \frac{S_d}{T_d}

where the tradeoff factor a is chosen via cross-validation on known corpora.
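Under the same assumptions, C_r is a couple of lines on top of the counts from the sketch above. The default value of the tradeoff factor a below is just a placeholder, since the actual value is fit by cross-validation:

```python
def c_r(symbols, a=7.0):
    """Di-gram repetition factor C_r = N_d/N_u + a * S_d/T_d.
    The default `a` is a made-up placeholder, not the fitted value."""
    unigrams = Counter(symbols)
    digrams = Counter(zip(symbols, symbols[1:]))
    s_d = sum(1 for c in digrams.values() if c == 1)  # di-grams appearing once
    t_d = sum(digrams.values())                       # total di-gram count
    return len(digrams) / len(unigrams) + a * s_d / t_d
```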

They then propose a two-step decision process. First they compare C_r to a threshold; if it is small, they deem the system to be more “heraldic”. If C_r is large, they then make a three-way decision based on U_r: if U_r is small, the text corresponds to letters; if larger, syllables; and larger still, words.
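Putting the two statistics together (and reusing the helpers from the sketches above), the decision procedure might look something like this. The thresholds are invented placeholders for illustration, not the values the authors calibrate on known corpora:

```python
def classify(symbols, cr_threshold=4.0, ur_cuts=(1.0, 1.5)):
    """Two-step rule: small C_r -> semasiographic ("heraldic");
    otherwise U_r splits lexigraphic systems into letters,
    syllables, or words. All thresholds here are placeholders."""
    if c_r(symbols) < cr_threshold:
        return "semasiographic (heraldry-like)"
    _, u_r = f2_and_ur(symbols)
    low, high = ur_cuts
    if u_r < low:
        return "lexigraphic: letters"
    if u_r < high:
        return "lexigraphic: syllables"
    return "lexigraphic: words"
```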

In this paper “entropy” is being used as a statistic with discriminatory value. It is not clear a priori that human writing systems should display empirical entropies with certain values, but since the statistic works well on other known corpora, it seems like reasonable evidence. I think the authors are relatively careful about this, which is nice, since popular news might make one think that purported alien transmissions could easily fall to a similar analysis. Maybe that’s how Jeff Goldblum managed to get his Mac to reprogram the alien ship in Independence Day.

Update: I forgot to link to a few related things. The statistics in this paper are a little more convincing than those in the work on the Indus script (see Cosma’s lengthy analysis). In particular, they do a somewhat better job of justifying their statistic as a discriminator on known corpora. Pictish would seem to be woefully undersampled, so it is important to justify the statistic as discriminatory for small data sets.

Everyone hates NCLB

Via Kevin Drum, I read this Economist poll about the popularity of No Child Left Behind. A rather overwhelming plurality of those surveyed said that it has hurt our schools. I don’t think I’ve met a single person who likes the law, although I chalked that up to the general political leanings of my friends. Perhaps repealing it would be something that can get “bipartisan support.”

On another note, the Wikipedia article says that people pronounce NCLB as “nicklebee.” Really? I have never heard that before. (Brandy, I’m looking at you).