ISIT 2009 : plenaries and the Shannon Award

As I mentioned earlier, Rich Baraniuk’s plenary was quite energetic and entertaining. David Tse gave the next plenary, called “It’s Easier to Approximate,” mainly building on his recent string of work with Raul Etkin, Hua Wang, Suhas Diggavi, Salman Avestimehr, Guy Bresler, Changho Suh, and Mohammad Ali Maddah-Ali (and others too, I imagine). He motivated the use of deterministic models for multiterminal Gaussian problems essentially by appealing to the idea of building approximation algorithms, although the connection to that community wasn’t made as explicit (cf. Michelle Effros’s plenary). David is also a great speaker, so even though I had seen many of these results before from being at Berkeley, the talk really helped put it all together. Raymond Yeung gave another great talk on information inequalities, and for the first time I think I understood the point of “non-Shannon information inequalities” in terms of their connections to other disciplines. Hopefully I’ll get around to posting something about the automatic inequality provers out there. Unfortunately I missed all of Friday, so I didn’t get to see Noga Alon’s plenary, but I’m sure it was great too. Does anyone else care to comment?

The Shannon Lecture was given, of course, by Jorma Rissanen, and it addressed a basic question in statistics: model selection. Rissanen developed the Minimum Description Length (MDL) principle, which I had always understood in a fuzzy sense to mean choosing a model that is easy to describe information-theoretically, but I never had a good handle on what that meant beyond taking some logs. The talk was peppered with good bits of philosophy. One that stood out to me was that our insistence that there be a “true model” for the data often leads to problems. That is, sometimes we’re better off not assuming that the data was generated according to a particular model, and instead focusing on finding the model in our class that best fits the data. I got to chat with Rissanen later about this and pointed out that assuming an underlying true distribution is a bit like religion. Another great tidbit was his claim that “nonparametric models are nonsense,” by which he meant that essentially every model is parametric; it’s just that sometimes the number of parameters is mind-bogglingly huge. The most interesting thing is that there were new results in the Shannon Lecture, and Rissanen is working on a paper with those results now!
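For readers who, like me, only ever had the fuzzy version: the “taking some logs” becomes concrete in the two-part form of MDL, where the total description length is the cost of encoding the model’s parameters plus the cost of encoding the data given the model. Here is a minimal sketch of that idea (my own illustration, not from the lecture; the `mdl_score` helper and the polynomial-fitting example are assumptions, using the common (k/2) log n approximation for the parameter cost):

```python
import numpy as np

def mdl_score(y, y_hat, k):
    """Two-part MDL approximation: negative log-likelihood of the data
    under Gaussian noise, plus (k/2) log n for describing k parameters."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    data_cost = (n / 2) * np.log(rss / n)   # data given model, up to constants
    model_cost = (k / 2) * np.log(n)        # cost of describing the parameters
    return data_cost + model_cost

# Noisy samples from a quadratic; the "true" degree is 2.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.3, size=x.size)

# Score polynomial models of increasing degree.
scores = {}
for degree in range(8):
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = mdl_score(y, np.polyval(coeffs, x), degree + 1)

best = min(scores, key=scores.get)
print("degree chosen by MDL:", best)
```

The point of the sketch is the trade-off: past the right degree, the residual improvement is just noise fitting, and the (k/2) log n penalty stops higher-degree polynomials from winning, which is the selection behavior MDL formalizes without ever insisting that a “true model” exists.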

The big news was of course that Te Sun Han is going to be next year’s Shannon Awardee. I was very happy to hear this — I’ve been hoping he would win it for a while now, so I had a big grin on my face leaving the banquet…


One thought on “ISIT 2009 : plenaries and the Shannon Award”

  1. Alon’s talk was much more narrowly focused than I expected, which unfortunately for me meant that I didn’t “get it.” There’s plenty of combinatorics used in my field (lossless source coding), but he focused on one or two specific areas unrelated to my expertise. I guess I came into the talk hoping for some general techniques and didn’t get them, but those who work in the fields he discussed might have been happier with the talk. Also, the four other speakers you mentioned were tough acts to follow. Friday’s usually the most difficult and thankless plenary slot, I’m afraid.
