Linkage

One day I will write all those posts that I’ve been intending to write. One day… until then:

Nabokov’s butterfly theory is found to be correct.

Why you should encrypt your smartphone.

You should now call him Doctor Grover. No, not him, silly. Him.

MIT’s 150th anniversary PR blitz has an article on Wiener.

QOTD: a lack of “know-what”

From the tail end of The Human Use of Human Beings:

Our papers have been making a great deal of American “know-how” ever since we had the misfortune to discover the atomic bomb. There is one quality more important than “know-how” and we cannot accuse the United States of any undue amount of it. This is “know-what” by which we determine not only how to accomplish our purposes, but what our purposes are to be. I can distinguish between the two by an example. Some years ago, a prominent American engineer bought an expensive player-piano. It became clear after a week or two that this purchase did not correspond to any particular interest in the music played by the piano but rather to an overwhelming interest in the piano mechanism. For this gentleman, the player-piano was not a means of producing music, but a means of giving some inventor the chance of showing how skillful he was at overcoming certain difficulties in the production of music. This is an estimable attitude in a second-year high-school student. How estimable it is in one of those on whom the whole cultural future of the country depends, I leave to the reader.

Oh, snap!

More seriously though, this definitely feels like a criticism of the era in which Wiener was writing. Game theory was very fashionable, and the pseudo-mathematization of Cold War geopolitics definitely gave him pause. I don’t think Wiener would agree with the current railing against “wasteful” government spending on “useless” research projects, despite his obvious dislike of vanity research and his disappointment with the science of his day. It was important to him that scientists remain free from political pressures and constraints to conform to a government agenda (as described in A Fragile Power).

Linkage (and a puzzle)

I saw Scott’s talk today on some complexity results related to his and Alex Arkhipov’s work on linear optics. I missed the main seminar but I saw the theory talk, which was on how hard it is to approximate the permanent of a matrix X whose entries X_{ij} are drawn i.i.d. complex circularly-symmetric Gaussian \mathcal{CN}(0,1). In the course of computing the 4th moment of the permanent, he gave the following cute result as a puzzle. Given a permutation \sigma of length n, let c(\sigma) be the number of cycles in \sigma. Suppose \sigma is drawn uniformly from the set of all permutations. Show that

\mathbb{E}[2^{c(\sigma)}] = n + 1.

At least I think that’s the statement.
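For the skeptical, here’s a quick brute-force check in Python for small n (a sanity check only, not a proof; the general statement follows from the identity \sum_\sigma x^{c(\sigma)} = x(x+1)\cdots(x+n-1) evaluated at x = 2):

    # Brute-force check that E[2^{c(sigma)}] = n + 1 for sigma uniform on S_n
    # (small n only; this enumerates all n! permutations).
    from itertools import permutations
    from math import factorial

    def num_cycles(perm):
        """Count the cycles of a permutation given as a tuple with perm[i] = sigma(i)."""
        seen, cycles = set(), 0
        for start in range(len(perm)):
            if start not in seen:
                cycles += 1
                j = start
                while j not in seen:
                    seen.add(j)
                    j = perm[j]
        return cycles

    for n in range(1, 8):
        total = sum(2 ** num_cycles(p) for p in permutations(range(n)))
        print(n, total // factorial(n))  # prints n + 1 each time; the sum is exactly (n+1)!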

In other news…

  • Ken Ono has announced (with others) an algebraic formula for partition numbers. Very exciting!
  • Cosma thinks that New Yorker article is risible, but after talking to a number of people about it, I realized that the writing is pretty risible (and that on a first pass I had skimmed to the part I thought was worth reporting in the popular (or elitist) press, namely the bias towards positive results). Andrew Gelman points out that he has written about this before, but I think the venue was the more interesting part here. What is risible about the writing is that it starts out in “ZOMG OUR SCIENCE POWERZ ARE FAAAAAAADINNNNNNGGGGGGG” mode and then goes on to say slightly more reasonable things. It’s worthy of the worst of Malcolm Gladwell.
  • Tofu is complicated.
  • The 90-second Newbery contest.

Privacy and entropy (needs improvement)

A while ago, Alex Dimakis sent me an EFF article on information theory and privacy, which starts out with an observation of Latanya Sweeney’s that gender, ZIP code, and birthdate are uniquely identifying for a large portion of the population (an updated observation was made in 2006).

What’s weird is that the article veers into “how many bits do you need to uniquely identify someone” based on self-information or surprisal calculations. It paints a bit of a misleading picture of how to answer the question. I’d probably start by taking \log_2(6.625 \times 10^9) and then look at the variables in question.
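For what it’s worth, here is the back-of-the-envelope version I have in mind (my own sketch, not the EFF article’s numbers; the ZIP code count and age range below are rough assumptions):

    # Bits needed to single out one person on Earth, versus the rough
    # self-information of gender, ZIP code, and birthdate under crude
    # uniformity assumptions (real distributions are skewed, so these
    # are optimistic figures).
    from math import log2

    world_population = 6.625e9
    print(f"bits to identify one person: {log2(world_population):.1f}")   # about 32.6

    bits_gender = log2(2)             # about 1 bit
    bits_zip = log2(43000)            # about 15.4 bits, assuming ~43,000 US ZIP codes
    bits_birthdate = log2(365 * 80)   # about 14.8 bits, assuming ages spread over ~80 years

    print(f"gender + ZIP + birthdate: {bits_gender + bits_zip + bits_birthdate:.1f} bits")

Of course none of these attributes is actually uniformly distributed, which is where the surprisal framing starts to earn its keep.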

However, the mere existence of this article raises a point: here is a situation where ideas from information theory and probability/statistics can be made relevant to a larger population. It’s a great opportunity to popularize our field (and demonstrate good ways of thinking about it). Why not do it ourselves?

Quote of the day

From Paul Ellis’s The Essential Guide to Effect Sizes:

The post hoc analysis of nonsignificant results is sometimes painted as controversial (e.g. Nakagawa and Foster 2004), but it really isn’t. It is just wrong. There are two small technical reasons and one gigantic reason why the post hoc analysis of observed power is an exercise in futility…

… and then some more stuff on p-values and the like. Somehow, reading applied statistics books makes my brain glaze over, but at least you get a giggle now and then.
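To spell out why the futility charge sticks (my gloss, not a quote from Ellis): for a simple two-sided z-test, the observed “post hoc” power is a deterministic, monotone function of the p-value, so it cannot tell you anything the p-value didn’t already. A minimal sketch, assuming scipy is available:

    # Observed ("post hoc") power of a two-sided z-test, obtained by plugging the
    # observed effect back in as if it were the true effect. Note that it is a
    # deterministic, decreasing function of the p-value.
    from scipy.stats import norm

    def observed_power(p_value, alpha=0.05):
        z_obs = norm.ppf(1 - p_value / 2)   # |z| implied by the two-sided p-value
        z_crit = norm.ppf(1 - alpha / 2)    # critical value at level alpha
        return norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)

    for p in [0.01, 0.05, 0.20, 0.50]:
        print(f"p = {p:.2f}  ->  observed power = {observed_power(p):.3f}")

(In particular, a result sitting right at p = 0.05 always has observed power of about one half, no matter what the study was.)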

Library cataloging fail

Last week I saw this in the library at UCSD:

[Photo from the UCSD library shelf: one of these things does not belong in QA279]

What is that, you might ask? Why, it’s The Design Inference, by noted Intelligent Design proponent William Dembski. I thought perhaps some enterprising soul had hidden it away in QA279, the Library of Congress call number for experimental design, to keep it away from the corruptible undergraduate youth. However, much to my surprise, it was correctly shelved. I suppose you could call it a book on experiments, but it’s a far cry from D.A. Freedman’s Statistical Models: Theory and Practice. I wonder what the LC call number is for pseudomathematical quackery?

Rudolf Ahlswede (1938 – 2010)

At the end of last week I learned, much to my sadness, that Rudolf Ahlswede passed away in December. There will be some sort of commemoration at the ITA workshop. His Wikipedia entry has been edited, but I couldn’t find an obituary. It does have a rather dashing photograph of him in earlier years. I think the sideburns suited him.

Rudi Ahlswede was one of the pillars of Information Theory and had many seminal works in that field using tools from combinatorics, probability, and graph theory. I came to know his work through my dissertation research on arbitrarily varying channels (AVCs); he had written extensively on the AVC starting in the 1960s. Of particular note (to me) was his paper on the “elimination technique,” which gives one of the first derandomization arguments I’ve seen in the literature. And of course he was part of the start of Network Coding. I met Ahlswede at the 2006 ISIT and then most recently in September at ITW in Dublin, where I was presenting a paper on AVCs in which the adversary has a noisy look at the transmitted codeword. In the same session he presented a paper on “channels with time structure,” about which I will make a separate post later. I just wanted to note with sadness the passing of such a giant.