I occasionally enjoy Thai cooking, so I appreciated some of the comments made by Andy Ricker.
I recently learned about India’s Clean Currency Policy, which went into effect this year. I still have some money (in an unpacked box, probably) from my trip this last fall, and I wonder if any of it will still be usable when I go to SPCOM 2014 this year. That sounded a bit crazy to me, though further investigation indicates that an internal circular was leaked, and the reality is a more sensible multi-year plan to phase in more robust banknotes. My large-ish pile of Rs. 1 coins remains useless, however.
An Astounding Result — some may have seen this before, but it’s getting some press now. It’s part of the Numberphile series. Terry Tao (as usual) has a pretty definitive post on it.
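For those who haven’t clicked through: the headline claim is shorthand for a statement about the Riemann zeta function, not a convergent sum. In the region where the series actually converges,

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}, \qquad \Re(s) > 1,
```

and the famous value \(-1/12\) is \(\zeta(-1)\), obtained by analytic continuation. Writing \(1 + 2 + 3 + \cdots = -\tfrac{1}{12}\) elides that continuation step, which is exactly the subtlety Tao’s post walks through carefully.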
Avi Wigderson is giving a talk at Rutgers tomorrow, so I thought about this nice lecture of his on Randomness (and pseudorandomness).
There’s been a lot of blogging about the MIT Mystery Hunt (if I weren’t so hosed starting up here at Rutgers I’d probably have blogged about it earlier), but if you want the story and philosophy behind this year’s Hunt, look no further than the writeup by Erin Rhode, who was the Director of the whole shebang.
Last year I did a lot of flying, and as a result had many encounters with the TSA. This insider account should be interesting to anyone who flies regularly.
I’m in the process of moving to New Jersey for my new gig at Rutgers. Before I start teaching I have to go help run the Mystery Hunt, so I am a little frazzled and unable to write “real” blog posts. Maybe later. In the meantime, here are some links.
The folks at Puzzazz have put out a bevy of links for the 200th anniversary of the crossword puzzle.
The UK has issued a pardon to Alan Turing, for, you know, more or less killing him. It’s a pretty weaselly piece of writing, though.
An important essay on women’s work: “…women are not devalued in the job market because women’s work is seen to have little value. Women’s work is devalued in the job market because women are seen to have little value.”. (h/t AW)
Of late we seem to be learning quite a bit about early hominins and hominids (I had no idea that Hominini was a thing, nor that chimps are in the tribe Panini, nor that “tribe” sits between subfamily and genus in the taxonomy). For example,
they have sequenced some old bones in Spain. Extracting sequenceable mitochondrial DNA is pretty tough — I am sure there are some interesting statistical questions in terms of detection and contamination. We’ve also learned that some Neanderthals were pretty inbred.
Kenji searches for the perfect chocolate chip cookie recipe.
A map of racial segregation in the US.
Vi Hart explains serial music (h/t Jim CaJacob).
More adventures in trolling scam journals with bogus papers (h/t my father).
Brighten does some number crunching on his research notebook.
Jerry takes “disruptive innovation” to task.
Vladimir Horowitz plays a concert at the Carter White House. Also Jim Lehrer looks very young. The program (as cribbed from YouTube):
- The Star-Spangled Banner
- Chopin: Sonata No. 2 in B-flat minor, Op. 35
- Chopin: Waltz in A minor, Op. 34, No. 2
- Chopin: Waltz in C-sharp minor, Op. 64, No. 2
- Chopin: Polonaise in A-flat major, Op. 53, “Héroïque”
- Schumann: Träumerei, Kinderszenen No. 7
- Rachmaninoff: Polka de W.R.
- Horowitz: Variations on a theme from Bizet’s Carmen
The Simons Institute is going strong at Berkeley now. Moritz Hardt has some opinions about what CS theory should say about “big data,” and how it might require some adjustments to ways of thinking. Suresh responds in part by pointing out some of the successes of the past.
John Holbo is reading Appiah and makes me want to read Appiah. My book queue is already a bit long though…
An important thing to realize about performance art that makes a splash is that it can often be exploitative.
Mimosa shows us what she sees.
Having seen a talk recently by John Ioannidis on how medical research is (often) bunk, this fine corrective by Larry Wasserman was nice to read.
Computer science conferences are often not organized by the ACM; instead there are various foundations for machine learning, vision, and so on that basically exist to organize the annual conference(s). At least, that is what I understand. There are a few which are run by the ACM, and there’s often debate about whether the ACM affiliation is worth it, given the overheads and so on. Boaz Barak had a post a little over a week ago making the case for sticking with the ACM. Given the hegemonic control of the IEEE over all things EE (more or less), this debate is new to me. As far as I can tell, ISIT exists to cover some of the cost of publishing the IT Transactions, and so it sort of has to be run by IEEE.
As mentioned before, Tara Javidi has a nice post up on what it means for one random variable to be stochastically less variable than another.
Paul Mineiro has a bigger-picture view of NIPS — I saw there were lots of papers on “deep learning,” but it’s not really my area, so I missed many of those posters.
David Eppstein’s top 10 cs.DS papers from 2012.
Via Allie Fletcher, here is an awesome video on the SVD from Los Alamos National Lab in 1976:
From the caption by Cleve Moler (who also blogs):
This film about the matrix singular value decomposition was made in 1976 at the Los Alamos National Laboratory. Today the SVD is widely used in scientific and engineering computation, but in 1976 the SVD was relatively unknown. A practical algorithm for its computation had been developed only a few years earlier and the LINPACK project was in the early stages of its implementation. The 3-D computer graphics involved hidden line computations. The computer output was 16mm celluloid film.
The graphics are awesome. Moler blogged about some of the history of the film. Those who are particularly “attentive” may note that the SVD movie seems familiar:
The first Star Trek movie came out in 1979. The producers had asked Los Alamos for computer graphics to run on the displays on the bridge of the Enterprise. They chose our SVD movie to run on the science officer’s display. So, if you look over Spock’s shoulder as the Enterprise enters the nebula in search of Viger, you can glimpse a matrix being diagonalized by Givens transformations and the QR iteration.
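The math on Spock’s display translates directly into modern tools. Here is a minimal sketch (NumPy is my choice here, nothing from the film itself) of a single Givens rotation — the building block the film animates — followed by a call to the library SVD that descends from the LINPACK-era work:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

A = np.array([[4.0, 1.0],
              [3.0, 2.0]])

# One Givens rotation zeroes the subdiagonal entry A[1, 0];
# the QR iteration chains many of these to drive a matrix toward diagonal form.
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s], [-s, c]])
B = G @ A  # B[1, 0] is now (numerically) zero

# In practice one just calls the library routine:
U, sing, Vt = np.linalg.svd(A)
err = np.linalg.norm(A - U @ np.diag(sing) @ Vt)  # reconstruction error ~ machine precision
```

The point of the sketch is only that the “matrix being diagonalized” on screen is a sequence of exactly these orthogonal transformations.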
Dhruv Batra forwarded this Communications of the ACM article by Pedro Domingos, entitled “A Few Useful Things to Know about Machine Learning” [free version]. The main point from the abstract is:
However, developing successful machine learning applications requires a substantial amount of “black art” that is hard to find in textbooks. This article summarizes twelve key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions.
The article focuses on the classification problem to illustrate these “key lessons.” It’s well worth reading, especially for people who don’t work on machine learning, because it explains a number of important issues.
- It illustrates the gap between what the theory/research works on and the nitty-gritty of applying these algorithms to real data.
- It gives people who want to implement an ML method important fundamental questions to ask before starting: how do I represent my data? How do I evaluate performance? How do I do things efficiently? These have to get squared away first.
- Domain knowledge and feature engineering are the keys to success.
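Those three “square away first” questions are easy to make concrete. A minimal sketch (the synthetic data and the 1-nearest-neighbor choice are mine for illustration, not from the article):

```python
import random

# Representation: fixed-length numeric feature vectors (2-D points here),
# labeled by which of two synthetic clusters each point came from.
random.seed(0)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)] \
     + [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(50)]

# Evaluation: accuracy on a held-out split, never on the training data
# (training-set accuracy is one of the pitfalls Domingos warns about).
random.shuffle(data)
train, test = data[:80], data[80:]

def predict(point, train):
    # Efficiency: brute-force 1-nearest-neighbor is fine at this scale;
    # at real scale, this loop is where the engineering work begins.
    nearest = min(train, key=lambda ex: (ex[0][0] - point[0]) ** 2
                                      + (ex[0][1] - point[1]) ** 2)
    return nearest[1]

accuracy = sum(predict(x, train) == y for x, y in test) / len(test)
print(accuracy)
```

On well-separated clusters like these, even this crude classifier does well — which is itself one of the article’s lessons: the representation often matters more than the algorithm.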
Since I’m guessing there are 2 machine learners who read this blog, go read it (unless you are one of my friends who doesn’t care about all of these technical posts).