One day I will write all those posts that I’ve been intending to write. One day… until then:

Why you should encrypt your smartphone.

You should now call him Doctor Grover. No, not him, silly. Him.

MIT’s 150th anniversary PR blitz has an article on Wiener.

# QOTD: a lack of “know-what”

From the tail end of The Human Use of Human Beings:

Our papers have been making a great deal of American “know-how” ever since we had the misfortune to discover the atomic bomb. There is one quality more important than “know-how” and we cannot accuse the United States of any undue amount of it. This is “know-what” by which we determine not only how to accomplish our purposes, but what our purposes are to be. I can distinguish between the two by an example. Some years ago, a prominent American engineer bought an expensive player-piano. It became clear after a week or two that this purchase did not correspond to any particular interest in the music played by the piano but rather to an overwhelming interest in the piano mechanism. For this gentleman, the player-piano was not a means of producing music, but a means of giving some inventor the chance of showing how skillful he was at overcoming certain difficulties in the production of music. This is an estimable attitude in a second-year high-school student. How estimable it is in one of those on whom the whole cultural future of the country depends, I leave to the reader.

Oh, snap!

More seriously though, this definitely feels like a criticism of the era in which Wiener was writing. Game theory was very fashionable, and the pseudo-mathematization of Cold War geopolitics definitely gave him pause. I don’t think Wiener would agree with the current railing against “wasteful” government spending on “useless” research projects, despite his obvious dislike of vanity research and his disappointment with the science of his day. It was important to him that scientists remain free from political pressures and constraints to conform to a government agenda (as described in A Fragile Power).

I saw Scott’s talk today on some complexity results related to his and Alex Arkhipov’s work on linear optics. I missed the main seminar but caught the theory talk, which was on how hard it is to approximate the permanent of a matrix $X$ whose entries $(X_{ij})$ are drawn i.i.d. complex circularly-symmetric Gaussian $\mathcal{CN}(0,1)$. In the course of computing the 4th moment of the permanent, he gave the following cute result as a puzzle. Given a permutation $\sigma$ of length $n$, let $c(\sigma)$ be the number of cycles in $\sigma$. Suppose $\sigma$ is drawn uniformly from the set of all permutations. Show that

$\mathbb{E}[ 2^{c(\sigma)}] = n + 1$.

At least I think that’s the statement.
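Here’s a quick brute-force sanity check of the identity for small $n$ (the function and variable names are mine, not from the talk):

```python
import math
from fractions import Fraction
from itertools import permutations

def cycle_count(perm):
    """Number of cycles in a permutation given as a tuple (i -> perm[i])."""
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

def expected_2_to_cycles(n):
    """E[2^{c(sigma)}] for sigma uniform on S_n, computed exactly."""
    total = sum(Fraction(2) ** cycle_count(p) for p in permutations(range(n)))
    return total / math.factorial(n)

for n in range(1, 7):
    assert expected_2_to_cycles(n) == n + 1
```

One way to see why it holds: the cycle-counting generating function gives $\sum_\sigma x^{c(\sigma)} = x(x+1)\cdots(x+n-1)$, so at $x = 2$ the sum is $(n+1)!$, and dividing by $n!$ leaves $n+1$.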

In other news…

• Ken Ono has announced (with others) an algebraic formula for partition numbers. Very exciting!
• Cosma thinks that New Yorker article is risible. After talking to a number of people about it, I realized that the writing is pretty risible (and that I had, at first pass, skimmed to the part I thought was worth reporting in the popular (or elitist) press, namely the bias towards positive results). Andrew Gelman points out that he has written about this before, but I think the venue was the more interesting part here. What was risible about the writing is that it starts out in “ZOMG OUR SCIENCE POWERZ ARE FAAAAAAADINNNNNNGGGGGGG” mode and then goes on to say slightly more reasonable things. It’s worthy of the worst of Malcolm Gladwell.
• Tofu is complicated.
• The 90-second Newbery contest.

# Privacy and entropy (needs improvement)

A while ago, Alex Dimakis sent me an EFF article on information theory and privacy, which starts out with an observation of Latanya Sweeney’s that gender, ZIP code, and birthdate are uniquely identifying for a large portion of the population (an updated observation was made in 2006).

What’s weird is that the article veers into “how many bits do you need to uniquely identify someone” based on self-information or surprisal calculations. It paints a somewhat misleading picture of how to answer the question. I’d probably start by taking $\log_2(6.625 \times 10^9)$ and then look at the variables in question.
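A back-of-the-envelope version of that calculation, for the curious (the population figure is the one above; the attribute probabilities below are illustrative guesses, not real demographic data):

```python
import math

# Bits needed to single out one person from the world population
world_pop = 6.625e9
id_bits = math.log2(world_pop)  # about 32.6 bits

# Self-information (surprisal) of learning an attribute with probability p
def surprisal(p):
    return -math.log2(p)

# Illustrative, assumed probabilities for each attribute:
bits = surprisal(0.5)          # gender: ~1 bit
bits += surprisal(1 / 365)     # birthday (day of year): ~8.5 bits
bits += surprisal(1 / 40000)   # one ZIP code out of ~40,000: ~15.3 bits
print(bits)                    # roughly 24.8 bits from these three attributes
```

The surprisal of each attribute is its entropy contribution only under these uniformity assumptions; real demographic distributions (and a full birthdate with year) shift the numbers, which is part of why the naive bit-counting can mislead.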

However, the mere existence of this article raises a point: here is a situation where ideas from information theory and probability/statistics can be made relevant to a larger population. It’s a great opportunity to popularize our field (and demonstrate good ways of thinking about it). Why not do it ourselves?

# Quote of the day

The post hoc analysis of nonsignificant results is sometimes painted as controversial (e.g. Nakagawa and Foster 2004), but it really isn’t. It is just wrong. There are two small technical reasons and one gigantic reason why the post hoc analysis of observed power is an exercise in futility…

… and then some more stuff on p-values and the like. Somehow, reading applied statistics books makes my brain glaze over, but at least you get a giggle now and then.