I’m in Austin right now for the first GlobalSIP conference. The conference has a decentralized organization, with semi-independent day-long workshops (“symposia”) scheduled in parallel with each other. There are 8 of these, with 6 running in parallel on a given day, and each workshop has 1 session of “plenary” talks and 2 poster sessions. Each workshop’s schedule is AAB, ABA, or BAA, where A = posters and B = plenary talks, so at any given time there are 2 talk sessions and 4 poster sessions running in parallel.

Fortunately, there is a wide range of topics covered in the workshops, from biology to controlled sensing to financial signal processing. The downside is that the actual papers in each workshop would often fit well with other workshops. For example, the distributed optimization posters (in which I am interested) were sprinkled all over the place. This probably has a lot to do with the decentralized organization.

In terms of the “results” at the conference, it seems from my cursory view that many people are presenting “extra” results from other conference papers, or preliminary work for future papers. This actually works well in the poster format: for the former, the poster contains a lot of information about the “main result” as well, and for the latter, the poster is an invitation to think about future work. In general I’m a little ambivalent about posters, but if you’re going to have to do ‘em, a conference like this may be a better way to do it.

I recently saw that Andrew Gelman hasn’t really heard of compressed sensing. To someone in the signal processing/machine learning/information theory crowd this is a little flabbergasting, but I think it highlights two things that aren’t really appreciated by the systems EE/algorithms crowd: 1) statistics is a pretty big field, and 2) the gulf between much statistical practice and what is being done in SP/ML research is pretty wide.

The other aspect of this is a comment from one of his readers:

Meh. They proved L1 approximates L0 when design matrix is basically full rank. Now all sparsity stuff is sometimes called ‘compressed sensing’. Most of it seems to be linear interpolation, rebranded.

I find such dismissals disheartening; there is a temptation to say, every time another community picks up some models/tools from your community, that they are reinventing the wheel. As a short-hand, it can be useful to say “oh yeah, this compressed sensing stuff is like the old sparsity stuff.” As a dismissal, though, it’s just parochial: you have to actually engage with how those models/tools are being used. Gelman says it can lead to “better understanding one’s assumptions and goals,” but I think it’s more important to understand what others’ goals are.

I could characterize rate-distortion theory as just calculating some large deviations rate functions. Dembo and Zeitouni list RD as an application of the LDP, but I don’t think they mean “meh, it’s rebranded LDP.” For compressed sensing, the goal is to do the inference in a computationally and statistically efficient way. One key ingredient is optimization. If you just dismiss all of compressed sensing as “rebranded sparsity,” you’re missing the point entirely.
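To make the “L1 approximates L0” point concrete, here is a toy recovery experiment (my own illustration, with arbitrary sizes and penalty, not anyone’s particular result): measure a sparse vector with a random Gaussian matrix and recover it by \ell_1-regularized least squares.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5            # ambient dimension, measurements, sparsity (toy sizes)

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                  # noiseless measurements, m < n

# l1-regularized least squares as a stand-in for basis pursuit;
# the tiny penalty is an arbitrary choice for this illustration.
x_hat = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(A, y).coef_
print(np.linalg.norm(x_hat - x_true))           # recovery error, typically small
```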

There’s an opening in Professor Madhow’s group at UC Santa Barbara:

We are looking for a postdoctoral researcher with a strong background in communications/signal processing/controls who is interested in applying these skills to a varied set of problems arising from a number of projects. These include hardware-adapted signal processing for communications and radar, neuro-inspired signal processing architectures, and inference in online social networks. In particular, familiarity with Bayesian inference is highly desirable, even if that is not the primary research area for his/her PhD. There are also opportunities to work on problems in next generation communication systems, including millimeter wave networking and distributed communication. While the researcher will be affiliated with Prof. Madhow’s group in the ECE Department at UCSB, depending on the problem(s) chosen, he/she may need to interact with faculty collaborators in other disciplines such as circuits, controls, computer science and neuroscience, as well as with colleagues with expertise in signal processing and communications. Thus, in addition to technical depth and talent, a flexible attitude and openness to interdisciplinary collaboration is essential.

Interested candidates should send a brief statement of research experience and interests and a CV (including the names and contact info for at least three references) to Prof. Upamanyu Madhow.

I’m on the program committee for the Cyber-Security and Privacy symposium, so I figured I would post this here to make more work for myself.

GlobalSIP 2013 – Call for Papers
IEEE Global Conference on Signal and Information Processing
December 3-5, 2013 | Austin, Texas, U.S.A.

GlobalSIP: IEEE Global Conference on Signal and Information Processing is a new flagship IEEE Signal Processing Society conference. The focus of this conference is on signal and information processing and up-and-coming signal processing themes.

GlobalSIP is composed of symposia on hot topics related to signal and information processing, selected on the basis of responses to the call-for-symposia proposals.

The selected symposia are:

Paper submission will be online only through the GlobalSIP 2013 website. Papers should be in IEEE two-column format. The maximum length varies among the symposia; be sure to check each symposium’s information page for details. Authors of Signal Processing Letters papers will be given the opportunity to present their work at GlobalSIP 2013, subject to space availability and approval by the Technical Program Chairs of GlobalSIP 2013. The authors need to specify in which symposium they wish to present their paper. Please check the conference webpage for details.

Important Dates:
*New* Paper Submission Deadline – June 15, 2013
Review Results Announced – July 30, 2013
Camera-Ready Papers Due – September 7, 2013
*New* SPL request for presentation – September 7, 2013

Assumptionless consistency of the Lasso
Sourav Chatterjee
The title says it all. Given p-dimensional data points \{ \mathbf{x}_i : i \in [n] \}, the Lasso tries to fit the model \mathbb{E}( y_i | \mathbf{x}_i ) = \boldsymbol{\beta}^T \mathbf{x}_i by minimizing the \ell^1-penalized squared error
\sum_{i=1}^{n} (y_i - \boldsymbol{\beta}^T \mathbf{x}_i)^2 + \lambda \| \boldsymbol{\beta} \|_1.
The paper analyzes the Lasso in the setting where the data are random: there are n i.i.d. copies of a pair of random variables (\mathbf{X},Y), so the data are \{(\mathbf{X}_i, Y_i) : i \in [n] \}. The assumptions on (\mathbf{X},Y) are: (1) each coordinate of \mathbf{X} is bounded, |X_j| \le M; (2) Y = (\boldsymbol{\beta}^*)^T \mathbf{X} + \varepsilon with \varepsilon \sim \mathcal{N}(0,\sigma^2), where \boldsymbol{\beta}^* and \sigma are unknown constants. Basically that’s all that’s needed: given a bound on \|\boldsymbol{\beta}\|_1, he derives a bound on the mean-squared prediction error.
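A minimal sketch of this setup (my own toy example, not from the paper; the data follow the assumptions above and the penalty level is an arbitrary choice):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, sigma = 200, 500, 0.5

# Bounded covariates and a sparse "true" beta, matching the assumptions above.
X = rng.uniform(-1.0, 1.0, size=(n, p))
beta_star = np.zeros(p)
beta_star[:5] = 1.0
y = X @ beta_star + sigma * rng.standard_normal(n)

# sklearn's Lasso minimizes (1/(2n)) ||y - X beta||^2 + alpha ||beta||_1,
# so alpha plays the role of lambda up to the 1/(2n) scaling.
fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(X, y)

# Mean-squared prediction error relative to the true regression function.
print(np.mean((X @ fit.coef_ - X @ beta_star) ** 2))
```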

On Learnability, Complexity and Stability
Silvia Villa, Lorenzo Rosasco, Tomaso Poggio
This is a handy survey on the three topics in the title. It’s only 10 pages long, so it’s a nice fast read.

Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
Francis Bach
A central challenge in stochastic optimization is understanding when the convergence rate of the excess loss, which is usually O(1/\sqrt{n}), can be improved to O(1/n). Most often this involves additional assumptions on the loss functions (which can sometimes get a bit baroque and hard to check). This paper considers constant step-size algorithms, but analyzes the averaged iterate \bar{\theta}_n = \frac{1}{n} \sum_{k=0}^{n-1} \theta_k instead of the final iterate. I’m still trying to slot this in with other things I know about stochastic optimization, but it’s definitely worth a skim if you’re interested in the topic.
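A toy sketch of the averaged-iterate idea (my own illustration; the data, step size, and horizon are arbitrary and this is not meant to reproduce the paper’s guarantees):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, gamma = 20000, 5, 0.1          # samples, dimension, constant step size (arbitrary)

theta_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ theta_star))).astype(float)

theta = np.zeros(d)
theta_bar = np.zeros(d)
for k in range(n):
    # Stochastic gradient of the logistic loss at one fresh sample.
    p = 1.0 / (1.0 + np.exp(-X[k] @ theta))
    theta -= gamma * (p - y[k]) * X[k]
    # Running average of the iterates: theta_bar = (1/(k+1)) * sum of iterates so far.
    theta_bar += (theta - theta_bar) / (k + 1)

# Averaging typically damps the oscillation of the constant-step iterates.
print(np.linalg.norm(theta - theta_star), np.linalg.norm(theta_bar - theta_star))
```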

On Differentially Private Filtering for Event Streams
Jerome Le Ny
Jerome Le Ny has been putting differential privacy into signal processing and control contexts for the past year, and this is another paper in that line of work. This is important because we’re still trying to understand how time-series data can be handled in the differential privacy setting. This paper looks at “event streams,” which are discrete-valued continuous-time signals (think of count processes), and the problem is to design a differentially private filtering system for such signals.
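A generic output-perturbation sketch, in discrete time for simplicity (my own illustration, not the mechanism designed in the paper): filter a 0/1 event stream and add i.i.d. Laplace noise scaled to the filter’s \ell_1 norm, which bounds how much the whole output can change when a single event is added or removed.

```python
import numpy as np

rng = np.random.default_rng(3)
eps, T = 1.0, 500                           # privacy parameter and horizon (arbitrary)

# Toy 0/1 event stream; neighboring streams differ in a single event.
x = (rng.uniform(size=T) < 0.1).astype(float)

# A simple FIR smoothing filter (an arbitrary choice, not the paper's design).
h = np.ones(20) / 20.0
y = np.convolve(x, h)[:T]

# Changing one event moves the output by at most ||h||_1 in l1 norm, so adding
# i.i.d. Laplace(||h||_1 / eps) noise to the outputs gives eps-differential privacy.
scale = np.abs(h).sum() / eps
y_priv = y + rng.laplace(scale=scale, size=T)

print(np.mean(np.abs(y_priv - y)))          # average distortion introduced for privacy
```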

Gossips and Prejudices: Ergodic Randomized Dynamics in Social Networks
Paolo Frasca, Chiara Ravazzi, Roberto Tempo, Hideaki Ishii
This appears to be a gossip version of Acemoglu et al.’s work on “stubborn” agents in the consensus setting. They show similar qualitative behavior — opinions fluctuate but their average over time converges (the process is ergodic). This version of the paper has more of a tutorial feel to it, so the results are a bit easier to parse.
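A toy simulation in the spirit of this setup (my own sketch, not the authors’ model or algorithm): randomized pairwise gossip on a complete graph with two “stubborn” agents that never update, where individual opinions keep fluctuating but the time averages settle down.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 20, 20000
stubborn = {0: 0.0, 1: 1.0}                   # two stubborn agents with fixed opinions

x = rng.uniform(size=n)
for agent, value in stubborn.items():
    x[agent] = value

time_avg = np.zeros(n)
for t in range(1, T + 1):
    i, j = rng.choice(n, size=2, replace=False)   # wake up a random pair of agents
    avg = 0.5 * (x[i] + x[j])
    for agent in (i, j):
        if agent not in stubborn:                 # stubborn agents never move
            x[agent] = avg
    time_avg += (x - time_avg) / t                # running (ergodic) average of opinions

print(x[2:5])         # current opinions keep fluctuating from run to run
print(time_avg[2:5])  # time averages settle between the two stubborn opinions
```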

I’ve been trying to submit a camera-ready article for the Signal Processing Magazine, and the instructions from IEEE include the following snippet:

*VERY IMPORTANT: All source files ( .tex, .doc, .eps, .ps, .bib, .db, .tif, .jpeg, …) may be uploaded as a single .rar archived file. Please do not attempt to upload files with extensions .shs, .exe, .com, .vbs, .zip as they are restricted file types.

While I have encountered .rar files before, I was not very familiar with the file format or its history. I didn’t know it’s a proprietary format — that seems like a weird choice for IEEE to make (although no weirder than PDF perhaps).

What’s confusing to me is that ArXiV manages to handle .zip files just fine. Is .tgz so passé now? My experience with RAR is that it is good for compressing (and splitting) large files into easier-to-manage segments. All of that efficiency seems wasted for a single paper with associated figures and bibliography files and whatnot.

I was trying to find the actual compression algorithm, but like most modern compression software, the innards are a fair bit more complex than the base algorithmic ideas. The Wikipedia article suggests it does a blend of Lempel-Ziv (a variant of LZ77) and prediction by partial matching, but I imagine there’s a fair bit of tweaking. What I couldn’t figure out is whether there is a new algorithmic idea in there (like in the Burrows-Wheeler Transform (BWT)), or whether it’s more a blend of these previous techniques.

Anyway, this silliness means I have to find some extra software to help me compress. SimplyRAR for MacOS seems to work pretty well.

Venkatesh Saligrama sent out a call for an ICML workshop he is organizing:

I wanted to bring to your attention an ICML workshop on “Machine Learning with Test-Time Budgets” that I am helping organize. The workshop will be held during the ICML week. The workshop will feature presentations both from data-driven as well as model-based perspectives and will feature researchers from machine learning and control/decision theory.

We are accepting papers related to these topics. Please let me know if you have questions about the workshop or wish to submit a paper.

I’m sick today so here are some links.

Click That Hood, a game which asks you to identify neighborhoods. I was lousy at San Diego, but pretty decent at Chicago, even though I’ve lived here for half the time. Go figure.

For those who care about beer, there’s been some news about the blocked merger of Inbev and Modelo. I recommend Erik’s podcast post on the structure of the beer industry (the three-tier system) for those who care about craft beer, and (with reservations) Planet Money’s show on the antitrust regulatory framework that is at work here.

Remember step functions from your signals and systems course? We called them Heaviside step functions after Oliver Heaviside — you can read more about him in this Physics Today article.

Did you know that Pad Thai’s “birth and popularity came out of the nationalist campaign of Field Marshal Plaek Pibulsongkram, one of the revolutionary figures who in 1932 pushed Thailand out of an absolute monarchy?” Neither did I!

I need this album, since I love me some Kurt Weill. I can also live vicariously through NPR’s list of SXSW recommendations.

I wanted to write a few words about the workshop at the Bellairs Research Institute. I just returned from sunny Barbados to frigid Chicago, so writing this will help me remember the sunshine and sand:

The beach at Bathsheba on the east coast of Barbados

Mike Rabbat put on a great program this year, and there were lots of talks on a range of topics in machine learning, signal processing, and optimization. The format of the workshop was to have talks with lots of room for questions and discussion. Talks were given out on the balcony where we were staying, and we had to end at about 2:30 because the sunshine would creep into our conference area, baking those of us sitting too far west.


I promised some ITA blogging, so here it is. Maybe Alex will blog a bit too. These notes will by necessity be cursory, but I hope some people will find some of these papers interesting enough to follow up on them.

A Reverse Pinsker Inequality
Daniel Berend, Peter Harremoës, Aryeh Kontorovich
Aryeh gave this talk on what we can say about bounds in the reverse direction of Pinsker’s inequality. Of course, in general you can’t say much, but what they do is give an expansion of the KL divergence in terms of the total variation distance, with constants involving the balance coefficient of the distribution, \beta = \inf \{ P(A) : P(A) \ge 1/2 \}.
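For orientation (standard background, with total variation defined as \sup_{A} |P(A) - Q(A)|), the forward direction is the usual Pinsker inequality
\| P - Q \|_{TV} \le \sqrt{ \frac{1}{2} D(P \| Q) },
and the reverse question is when the divergence can in turn be controlled by the total variation.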

Unfolding the entropy power inequality
Mokshay Madiman, Liyao Wang
Mokshay gave a talk on the entropy power inequality. Given independent vector random variables X_1 and X_2, the EPI tells us that h(X_1 + X_2) \ge h(Z_1 + Z_2), where Z_1 and Z_2 are isotropic Gaussian vectors with the same differential entropies as X_1 and X_2. The question in this paper is: can we insert a term between these two sides of the inequality? The answer is yes! They define a spherical rearrangement of the densities of X_1 and X_2 into variables X_1^{\ast} and X_2^{\ast} with spherically symmetric decreasing densities and show that the differential entropy of their sum lies between the two terms in the regular EPI.
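For reference (standard background, not specific to this paper), this Gaussian-comparison form is equivalent to the classical statement of the EPI for independent \mathbb{R}^n-valued vectors:
e^{2 h(X_1 + X_2)/n} \ge e^{2 h(X_1)/n} + e^{2 h(X_2)/n}.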

Improved lower bounds on the total variation distance and relative entropy for the Poisson approximation
Igal Sason
The previous lower bounds mentioned in the title were based on the Chen-Stein method, and this paper strengthens them by sharpening the analysis in that method.

Fundamental limits of caching
Mohammad A. Maddah-Ali, Urs Niesen
This talk was on tradeoffs in caching. If there are N files, K users, and a cache of size M at each user, how should the users cache files so that a broadcaster can best share the bandwidth among them? More simply, suppose there are three people who may each want to watch one of three different TV shows, and each can buffer the content of one TV show. Since a priori you don’t know which show each person will want to watch, the natural idea is to buffer/cache the first third of each show at each user. They show that this is highly suboptimal: because the content provider can XOR together parts of the content intended for different users, the caching strategy should not be the same at each user, and the real benefit comes from the global cache size.
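A small worked example of the coded trick (my recollection of the standard two-user illustration, so the specifics here are mine rather than quoted from the paper): take N = 2 files A and B, K = 2 users, and cache size M = 1 file each. Split each file in half, A = (A_1, A_2) and B = (B_1, B_2), and let user 1 cache (A_1, B_1) while user 2 caches (A_2, B_2). If user 1 requests A and user 2 requests B, the server broadcasts the single coded half-file A_2 \oplus B_1: user 1 XORs out its cached B_1 to recover A_2, and user 2 XORs out its cached A_2 to recover B_1. Half a file’s worth of transmission serves both users, whereas caching the same half of each file at both users would require a full file’s worth of transmission.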

Simple outer bounds for multiterminal source coding
Thomas Courtade
This was a very cute result on using the HGR maximal correlation to get outer bounds for multiterminal source coding without first deriving a single-letterization of the outer bound. The main idea is to use two properties of the HGR correlation: it tensorizes (to get the multiletter part), and it satisfies the strong DPI from Elza Erkip and Tom Cover’s paper.
