On entitlement and the Mystery Hunt

The MIT Mystery Hunt ended a little over a week ago — the premise is that a coin is hidden somewhere on campus, and teams have to solve a bunch of puzzles to find its location. The prize for winning is writing the Hunt the following year. It was the longest Hunt on record — 75 hours and 18 minutes if you count from the kickoff event, and 73 if you count from when the servers went live. My team, whose name is the entire text of the book Atlas Shrugged (although we often used the shorter name PART I: NON-CONTRADICTION; CHAPTER I: THE THEME ‘Who is John Galt?’…), managed to emerge victorious. Here’s a footprint outline we were given as part of an event puzzle:

Our team name on a footprint

And here are some fuzzy snaps of the coin:
The coin, obverse

The coin, reverse. Now we can raise the debt ceiling.

The hunt was so long because many of the puzzles were underclued, and the team running it, the Manic Sages, essentially underestimated how hard the Hunt would be, both in terms of gameplay and the puzzles themselves. Naturally there was much wailing and gnashing of teeth on the internet afterwards, and a lot of people took the Sages to task. Some of this criticism was a bit unfair, I think. The Sages put on a huge event for more than a thousand people, and much of it was quite fun. There were problems, sure, but let’s not get hyperbolic here.

Of course, hyperbole is par for the course, and Wired ran this piece by Thomas Snyder, who indulges in some pretty questionable plot extrapolation to conclude that there is a “trajectory” towards longer and longer hunts. For reference, here are the solving times for hunts up to 2010, which show that the trend Snyder flags is more or less fabricated. So basically that line of argument is just hand-wringing. But why the calls for smelling salts?

The crucial line is this: “[w]hat started as an MIT-only event has now become a mainstay on the puzzle calendar.” Puzzle writers and solvers such as Snyder think that MIT Mystery Hunt puzzles should be less… well, MIT. One of the metapuzzles required you to know something about Feynman diagrams. There was a fractal word search. This is not a complaint about puzzles having too many steps, but about them being too nerdy or too inaccessible to those who have a “puzzle calendar.” Of course, those sorts of things are right up the alley of some MIT students. The subtext of this article is that it’s just not “professional” enough.

The Hunt is a free event (for solvers) that costs several thousand dollars to put on, is much longer than most other puzzle events, and is run entirely by volunteers. In the case of the Sages (and my team), many or most of those people are students. The sense of entitlement voiced by Snyder in this article (and by others elsewhere) is palpable. The fact that it’s a mainstay of the “puzzle calendar” doesn’t entitle the broader puzzle community to dictate what the Hunt should be; its participants are still largely drawn from the MIT community, I think. Sure, there were moments when I was not having fun, but also moments when I was having a lot of fun solving. There were some great and innovative puzzles in this hunt, and other great ideas for puzzles. I wouldn’t keep going to Mystery Hunt if it were going to be like any other puzzle contest, and this hunt definitely delivered, even if reading some of the solutions breaks my brain.

Next year, of course, we’ll just try to make it shorter.


NIPS casinos flooded

I had a rather dim view of the NIPS conference venue last year — the Harrah’s and Harveys casino/hotels in South Lake Tahoe. Nothing is more depressing than people playing the slots at 8 AM, smoking and drinking away. Via Erin, I learned that the casinos flooded and are closed: “thousands of gallons of water dumped into Harrah’s, causing the elevators to break.” I can only hope that this is somehow an excuse to not hold NIPS there in the future — but I’m not holding my breath (which I did to avoid the smoke).

Postdoc positions at UT Austin

The Simons Postdoc positions are open:

The ECE department at The University of Texas at Austin seeks highly qualified candidates for postdoctoral fellowship positions, lasting up to two years, in the information sciences, broadly defined. Applicants should have, or be close to completing, a PhD in ECE, CS, Math, Statistics or related fields.

RIP Aaron Swartz

Aaron Swartz, who most recently made headlines for expropriating a large amount of information that was on JSTOR and making it available to the public, committed suicide. Cory Doctorow has a remembrance of Aaron and also a reminder of how we should remember how terrible depression can be. In making sense of what happened it’s tempting to say the threat of prosecution was the “cause,” but we shouldn’t lose sight of the person and the real struggles he was going through.

CRA Best Practices on Mentoring Postdocs

I just got the CRA newsletter, and it had a link to a document on best practices for mentoring postdocs:

… data from the Computing Research Association’s (CRA) annual Taulbee Survey indicate that the numbers of recent Ph.D.s pursuing postdocs following graduate school soared from 60 in 1998 to 249 in 2011 (three-year rolling averages), an increase of 315 percent during this period. Because research organizations are suddenly channeling many more young researchers into these positions, it is incumbent upon us as a community to have a clear understanding of the best practices associated with pursuing, hosting, and nurturing postdocs.

I think you’d find the same numbers in EE as well. This report relies a fair bit on the National Academies report, which is a little out of date and which I thought was very skewed towards those in the sciences. Engineering is a different beast (and perhaps computer science an even more different beast), so I think that while there are some universal issues, the emphasis and importance of different aspects varies across fields quite a bit. For example, the NA report focuses quite a bit on fairness in recruiting practices, which is predicated on the postdoc being a “normal” thing to do. By contrast, in many engineering fields postdoc positions are relatively new, and there’s an opportunity to define what the position means and what it is for (i.e. not a person you can pay cheaply to supervise your graduate students for you).

Anyway, it’s worth reading!

PSA: ISIT submission formatting

If you, like me, tend to cart around old ISIT papers and just gut them to put in the new content for this year’s paper, don’t do it. Instead, download the template, because the page size has changed from letter to A4.

Also, as a postscript to Sergio’s note that eqnarray is OVER, apparently Stefan recommends we use IEEEeqnarray instead of align.
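For concreteness, here’s a rough sketch of what the relevant bits of a preamble might look like; the class options, package name, and column spec below are what I believe the current setup calls for, but defer to the official template rather than trusting this.

```latex
% Illustrative sketch only -- check the official ISIT template.
% The a4paper option reflects the letter-to-A4 page size change;
% IEEEeqnarray is provided by the IEEEtrantools package.
\documentclass[conference,a4paper]{IEEEtran}
\usepackage{IEEEtrantools}

\begin{document}
% IEEEeqnarray takes an explicit column spec (right/centered/left),
% unlike eqnarray's fixed rcl with its notoriously bad spacing:
\begin{IEEEeqnarray}{rCl}
  I(X;Y) & = & H(X) - H(X \mid Y) \\
         & = & H(Y) - H(Y \mid X)
\end{IEEEeqnarray}
\end{document}
```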

Job opening: Chair at the Hamilton Institute

Vijay Subramanian passed along this job opening in case readers know of someone who would be interested…


The Hamilton Institute at the National University of Ireland Maynooth invites applications for a Chair position starting in Summer 2013. Appointment will be at full professor level. Exceptional candidates in all areas will be considered, although we especially encourage candidates working in areas that complement existing activity in the mathematics of networks (distributed optimisation, feedback control, stochastic processes on graphs) as applied to smart transport, smart city data analytics and wireless networks.

The Hamilton Institute is a dynamic and vibrant centre of excellence for applied mathematics research. The successful candidate will be a leading international researcher with a demonstrated ability to lead and develop new research directions. A strong commitment to research excellence and a successful track record in building strategic partnerships and securing independent funding from public competitive sources and/or through private investment are essential.

Informal enquires can be directed to Prof. Doug Leith (doug.leith@nuim.ie), Director of the Hamilton Institute. Details on the Hamilton Institute can be found at www.hamilton.ie.

Further information on the post and the application procedure can be found here.

The deadline for applications is 11th Feb 2013.

Active learning survey

I’ve been starting work on a problem related to active learning, and I wanted to get caught up on the literature. Luckily for me, Sanjoy Dasgupta has a survey (non-paywall version here) from 2011 on the subject. It’s a nice read, although I didn’t know “aggressive” and “mellow” were terms of art in active learning.

In active learning you have to query unlabeled points and ask for their labels — the goal is usually to learn something like a classifier, so you want to query a small number of points by being judicious about which ones to ask for. A mellow algorithm queries any informative point, whereas an aggressive algorithm queries the “most informative” point. The former are often easier to analyze, because the latter end up sampling a “nonrepresentative” set of labeled points — if the points come i.i.d. from some distribution, the set of points you would label under an aggressive strategy will not look like they came from that distribution. Future work may look at semi-aggressive strategies. Perhaps we could call this line of research “harshing the mellow” by developing “harsh functions” which score points according to informativeness…
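To make the mellow/aggressive distinction concrete, here is a toy sketch (my own illustration, not from the survey): given a binary classifier’s predicted probabilities on an unlabeled pool, the aggressive strategy queries the single most uncertain point, while the mellow strategy queries uniformly among all points that are uncertain enough. The uncertainty score and the threshold are stand-ins for whatever informativeness measure a real algorithm would use.

```python
import numpy as np

def uncertainty(probs):
    # Map predicted probabilities of the positive class to [0, 1]:
    # 1.0 when the classifier is maximally unsure (p = 0.5),
    # 0.0 when it is certain (p = 0 or 1).
    return 1.0 - 2.0 * np.abs(probs - 0.5)

def aggressive_query(probs):
    # Aggressive: query the single most informative (most uncertain) point.
    return int(np.argmax(uncertainty(probs)))

def mellow_query(probs, rng, threshold=0.5):
    # Mellow: query any informative point -- here, sample uniformly
    # among all points whose uncertainty exceeds a threshold.
    informative = np.flatnonzero(uncertainty(probs) > threshold)
    if informative.size == 0:
        return None  # nothing left worth querying
    return int(rng.choice(informative))
```

Because the mellow rule samples uniformly from the informative set rather than always taking the argmax, the labeled points it collects look more like a draw from the underlying distribution, which is roughly why mellow schemes are easier to analyze.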

Linkage (technical)

Having seen a talk recently by John Ioannidis on how medical research is (often) bunk, this fine corrective by Larry Wasserman was nice to read.

Computer science conferences are often not organized by the ACM, but instead there are different foundations for machine learning and vision and so on that basically exist to organize the annual conference(s). At least, that is what I understand. There are a few which are run by the ACM, and there’s often debate about whether or not the ACM affiliation is worth it, given the overheads and so on. Boaz Barak had a post a little over a week ago making the case for sticking with the ACM. Given the hegemonic control of the IEEE on all things EE (more or less), this debate is new to me. As far as I can tell, ISIT exists to cover some of the cost of publishing the IT Transactions, and so it sort of has to be run by IEEE.

As mentioned before, Tara Javidi has a nice post up on what it means for one random variable to be stochastically less variable than another.

Paul Mineiro has a bigger-picture view of NIPS — I saw there were lots of papers on “deep learning,” but it’s not really my area, so I missed many of those posters.

David Eppstein’s top 10 cs.DS papers from 2012.

B-log on IT

Via Tara Javidi I heard about a new blog on information theory: the Information Theory b-log, which has been going for a few months now, but I guess in more of a “stealth mode.” It’s mostly posts by Sergio Verdú, with some initial posting by Thomas Courtade, but the most recent post is by Tara on how to compare random variables from a decision point of view. However, as Max noted:

All researchers working on information theory are invited to participate by posting items to the blog. Both original material and pointers to the web are welcome.