ISIT 2009: plenaries and the Shannon Award

As I mentioned earlier, Rich Baraniuk’s plenary was quite energetic and entertaining. David Tse gave the next plenary, called It’s Easier to Approximate, mainly building on his recent string of work with Raul Etkin, Hua Wang, Suhas Diggavi, Salman Avestimehr, Guy Bresler, Changho Suh, and Mohammad Ali Maddah-Ali (and others too, I imagine). He motivated the use of deterministic models for multiterminal Gaussian problems essentially by appealing to the idea of building approximation algorithms, although the connection to that community wasn’t made as explicitly as it might have been (cf. Michelle Effros’s plenary). David is also a great speaker, so even though I had seen many of these results before from being at Berkeley, the talk really helped put it all together. Raymond Yeung gave another great talk on information inequalities, and for the first time I think I understood the point of “non-Shannon information inequalities” in terms of their connections to other disciplines. Hopefully I’ll get around to posting something about the automatic inequality provers out there. Unfortunately I missed all of Friday, so I didn’t get to see Noga Alon’s plenary, but I’m sure it was great too. Does anyone else care to comment?

The Shannon Lecture was given, of course, by Jorma Rissanen, and it was on a basic question in statistics: model selection. Rissanen developed the Minimum Description Length (MDL) principle, which I had always understood in a fuzzy sense to mean that you choose a model which is easy to describe information theoretically, but I never had a good handle on what that meant besides taking some logs. The talk was peppered with good bits of philosophy. One that stood out to me was that our insistence that there be a “true model” for the data often leads to problems. That is, sometimes we’re better off not assuming that the data was generated according to a particular model, but instead focusing on finding the best-fitting model in our class. I got to chat with Rissanen later about this and pointed out that it’s a bit like religion to assume an underlying true distribution. Another great tidbit was his claim that “nonparametric models are nonsense,” by which he meant that essentially every model is parametric — it’s just that sometimes the number of parameters is mind-bogglingly huge. The most interesting thing was that there were new results in the Shannon Lecture, and Rissanen is working on a paper with those results now!
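To make the “taking some logs” part concrete, here is a minimal sketch of two-part MDL model selection, which is one common reading of the principle (not Rissanen’s own code or his refined normalized-maximum-likelihood version): score each candidate model by the code length for the data given the model (the negative log-likelihood) plus a cost for describing the parameters, using the standard asymptotic approximation of (k/2) log n nats per model, which is also what underlies BIC. The polynomial-regression setup, the noise level, and all variable names are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
# synthetic data from a degree-2 polynomial plus Gaussian noise
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, n)

def description_length(x, y, degree):
    """Two-part code length (in nats) for a polynomial model:
    code for the residuals under a Gaussian fit, plus the
    (k/2) log n asymptotic cost of describing k parameters."""
    k = degree + 1
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)
    # negative log-likelihood of the data under the fitted Gaussian
    nll = 0.5 * len(x) * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return nll + 0.5 * k * np.log(len(x))

# pick the degree that minimizes total description length
best = min(range(1, 8), key=lambda d: description_length(x, y, d))
print("selected degree:", best)  # typically recovers the true degree, 2
```

The point of the philosophy above shows up in the code: nowhere do we need the generating model to actually be a quadratic; we simply pick whichever model in the class compresses the data best, and richer models must pay for their extra parameters.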

The big news was of course that Te Sun Han is going to be next year’s Shannon Awardee. I was very happy to hear this — I’ve been hoping he would win it for a while now, so I had a big grin on my face leaving the banquet…

ArXiV is down

I got the following from ArXiV today:

Submissions to arXiv have been disabled for maintenance. arXiv’s database is down for maintenance. It is still possible to browse, view and search papers that have already been announced but submissions and replacements disabled, as are the functions to add cross listings and journal references.

So I guess I’ll have to wait to post our new submission, “Privacy constraints in regularized convex optimization,” until the system comes back up.

In the meantime, I’ll blog a bit about ISIT!