Dear ISIT Participant,
As you may well be aware, there is an ongoing sit-in protest in Taksim Square, Istanbul. The protest concerns a road construction and renovation project that calls for cutting down a large number of trees and demolishing a park. The protest has triggered street demonstrations in many cities around Turkey and has not yet subsided. Demonstrations continue to take place at Taksim Square, especially in the late afternoon and evening hours. There are no clashes between the protesters and the police on or near Taksim Square; the shops are open and business runs as usual in the area. The Istanbul Conference and Exhibition Center (ICEC), where ISIT 2013 will take place, is about 1 km from Taksim Square and safely away from the scene of the protests. The conference hotels, which are in the Talimhane district, are 200-500 m from Taksim Square, and there have been no reported cases of disturbance to the hotels or their guests. We are hoping that by the time of ISIT the protests will have come to an end. You may find the latest travel advisories issued by various governments here.
We will be updating you here as events develop.
Erdal Arıkan – Elza Erkip, ISIT 2013 Co-Chairs
Gerhard Kramer, President IT Society
I’m still catching up on my backlog of reading, but I’ve decided to set aside some time to look at a few papers from the arXiv.
- Lecture Notes on Free Probability by Vladislav Kargin, which is 100 pages of notes from a course at Stanford. Pretty self-explanatory, except for the part where I don’t really know free probability. Maybe reading these will help.
- Capturing the Drunk Robber on a Graph by Natasha Komarov and Peter Winkler. This is on a simple pursuit-evasion game in which the robber (evader) moves according to a random walk. On a graph with n vertices:
the drunk will be caught with probability one, even by a cop who oscillates on an edge, or moves about randomly; indeed, by any cop who isn’t actively trying to lose. The only issue is: how long does it take? The lazy cop will win in expected time at most (4/27)n^3 (plus lower-order terms), since that is the maximum possible expected hitting time for a random walk on an n-vertex graph; the same bound applies to the random cop. It is easy to see that the greedy cop who merely moves toward the drunk at every step can achieve O(n^2); in fact, we will show that the greedy cop cannot in general do better. Our smart cop, however, gets her man in expected time n + o(n).
How do you make a smarter cop? In this model the cop can tell where the robber is but has to get there by walking along the graph. Strategies which try to constantly “retarget” are wasteful, so they propose a strategy wherein the cop periodically retargets to eventually meet the robber. I feel like there is a prediction/learning algorithm or idea embedded in here as well.
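The greedy strategy is easy to play with. Here is a quick toy simulation (my own code, not the paper’s; I assume capture means occupying the robber’s vertex, and I try it on a 20-cycle): the robber does a uniform random walk while the cop always steps along a shortest path toward the robber’s current position.

```python
import random
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_pursuit(adj, cop, robber, rng, max_steps=10**6):
    """Greedy cop vs. drunk robber; returns the number of rounds until capture."""
    for t in range(max_steps):
        if cop == robber:
            return t
        d = bfs_dist(adj, robber)
        cop = min(adj[cop], key=lambda v: d[v])   # step toward the robber
        if cop == robber:
            return t + 1
        robber = rng.choice(adj[robber])          # drunk: uniform random step
    return max_steps

# Try it on a 20-cycle: capture is certain, only the time is in question.
n = 20
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rounds = greedy_pursuit(adj, cop=0, robber=10, rng=random.Random(0))
```

On the cycle the cop-robber distance never increases and drops by 2 with probability 1/2 each round, so the chase ends quickly; the interesting graphs in the paper are the ones where greediness is genuinely suboptimal.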
- Normalized online learning by Stephane Ross, Paul Mineiro, John Langford. Normalization and data pre-processing are a source of many errors and frustrations in machine learning practice. When features are not normalized with respect to each other, procedures like gradient descent can behave poorly. This paper builds the normalization into the algorithm itself, making it “unit free” in a sense. It’s the same kind of weight-update rule that we see in online learning, but with a few lines changed. They do an adversarial analysis in which the adversary gets to scale the features before the learning algorithm sees the data point; in particular, the adversary gets to choose the covariance of the data.
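I haven’t implemented their exact update, but the flavor of scale invariance is easy to demo: divide each feature by the largest magnitude seen for it so far before the usual gradient step. A minimal sketch (my own simplification for squared loss, not the paper’s algorithm):

```python
def normalized_sgd_step(w, s, x, y, lr=0.1):
    """One step of a toy scale-free SGD variant for squared loss.
    s[i] tracks the largest |x[i]| seen so far; dividing each feature by it
    makes the whole trajectory invariant to per-feature rescaling."""
    for i, xi in enumerate(x):
        s[i] = max(s[i], abs(xi))
    xn = [xi / si if si > 0 else 0.0 for xi, si in zip(x, s)]
    err = sum(wi * xni for wi, xni in zip(w, xn)) - y   # prediction error
    for i in range(len(w)):
        w[i] -= lr * err * xn[i]                        # gradient step
    return w, s

# Two runs on the same stream, the second with features rescaled by
# constants: the learned weights come out identical.
data = [([1.0, 2.0], 1.0), ([2.0, 1.0], 0.0), ([1.5, 0.5], 1.0)]
w1, s1 = [0.0, 0.0], [0.0, 0.0]
w2, s2 = [0.0, 0.0], [0.0, 0.0]
for x, y in data:
    w1, s1 = normalized_sgd_step(w1, s1, x, y)
    w2, s2 = normalized_sgd_step(w2, s2, [1000.0 * x[0], 0.01 * x[1]], y)
```

The adversary in their analysis is stronger than this (it can pick the scaling after seeing the algorithm), but the demo shows the basic invariance the update buys you.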
- On the Optimality of Treating Interference as Noise, by Chunhua Geng, Navid Naderializadeh, A. Salman Avestimehr, and Syed A. Jafar. Suppose I have a K-user interference channel with channel strength exponents α_{ij} between transmitter j and receiver i. Then if each user’s direct link satisfies α_{ii} ≥ max_{j≠i} α_{ji} + max_{k≠i} α_{ik},
then treating interference as noise is optimal in terms of generalized degrees of freedom. I don’t really work on this kind of thing, but the condition has such an appealing symmetry to it.
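Under my reading of the condition above (each direct link must be at least as strong, in the exponent, as the strongest interference that user causes plus the strongest it receives), checking it for a given exponent matrix takes a few lines; the matrices below are made-up examples.

```python
def tin_optimal(alpha):
    """Check the TIN condition as stated above. alpha[i][j] is the
    channel strength exponent from transmitter j to receiver i."""
    K = len(alpha)
    for i in range(K):
        caused = max((alpha[j][i] for j in range(K) if j != i), default=0.0)
        received = max((alpha[i][k] for k in range(K) if k != i), default=0.0)
        if alpha[i][i] < caused + received:
            return False
    return True

weak = [[1.0, 0.4],
        [0.3, 1.0]]    # weak cross links: condition holds
strong = [[1.0, 0.6],
          [0.6, 1.0]]  # cross links too strong: condition fails
```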
- Online Learning under Delayed Feedback, by Pooria Joulani, András György, Csaba Szepesvári. This paper is about forecasting algorithms that receive their feedback (e.g. the error) with a delay. Since I’ve been interested in communication with delayed feedback, this seems like a natural learning analogue. They provide ways of modifying existing algorithms to work with delayed feedback; one such method is to run a bunch of predictors in parallel and update them as the feedback is returned. They also propose methods which use partial monitoring and a UCB-style approach for bandit problems in the delayed-feedback setting.
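The parallel-predictors idea is fun to sketch. Here is a toy version (my own illustration, with an invented wrapper and a trivial running-mean base learner, not the paper’s construction): each outstanding round gets its own copy of the base learner, and a copy is only updated and recycled once its round’s feedback arrives.

```python
class MeanLearner:
    """Toy base learner: predicts the running mean of the targets it has seen."""
    def __init__(self):
        self.n, self.s = 0, 0.0
    def predict(self, x):
        return self.s / self.n if self.n else 0.0
    def update(self, x, y):
        self.n += 1
        self.s += y

class DelayedFeedbackWrapper:
    """Run one copy of the base learner per outstanding round."""
    def __init__(self, make_learner):
        self.make = make_learner
        self.free = []      # idle learner instances
        self.pending = {}   # round id -> instance awaiting feedback
    def predict(self, t, x):
        inst = self.free.pop() if self.free else self.make()
        self.pending[t] = inst
        return inst.predict(x)
    def feedback(self, t, x, y):
        inst = self.pending.pop(t)
        inst.update(x, y)
        self.free.append(inst)

# Simulate feedback arriving two rounds late; with delay d this schedule
# never needs more than d copies of the base learner.
created = 0
def make():
    global created
    created += 1
    return MeanLearner()

wrapper = DelayedFeedbackWrapper(make)
history, preds = {}, []
for t in range(6):
    if t - 2 in history:                    # deliver the delayed feedback
        x, y = history[t - 2]
        wrapper.feedback(t - 2, x, y)
    x, y = float(t), float(t)               # toy data stream
    preds.append(wrapper.predict(t, x))
    history[t] = (x, y)
```

Each copy sees a thinned-out substream of the data, which is exactly why the regret guarantees degrade gracefully with the delay.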