a paper a day : 1

A new feature! Just to keep myself motivated on research and to dissuade people from reading the blog, I am trying to “read” one research paper a day (-ish) to get the new ideas running around my head. And you guessed it, I’m going to blog the interesting (to me) ideas here.

Denoising and Filtering Under the Probability of Excess Loss Criterion (PDF)
Stephanie Pereira and Tsachy Weissman
Proc. 43rd Allerton Conf. Communication, Control, and Computing (2005)

This paper looks at the discrete denoising problem, which is related to filtering, estimation, and lossy source coding. Very briefly, the idea is that you have an i.i.d. sequence of pairs of discrete random variables taking values in finite alphabets:

$(X_1, Z_1), (X_2, Z_2), \ldots, (X_n, Z_n) \in \mathcal{X} \times \mathcal{Z}$

where X is the “clean” source and Z is the “noisy” observation, so that the joint distribution is p(x,z) = p(x) p(z | x), where p(z | x) is some discrete memoryless channel. A denoiser is a set of mappings

$g_i : \mathcal{Z}^n \to \mathcal{X}, \qquad i = 1, 2, \ldots, n$

so that g_i(z^n) is the “estimate” of X_i. One can impose many different constraints on these functions g_i. For example, they may be forced to operate only causally on the Z sequence, or may only use a certain subset of the Z’s, or only the symbol Z_i. This last case is called a symbol-by-symbol denoiser. The goal is to minimize the time-average of some loss function:

$L_n = \frac{1}{n} \sum_{k=1}^{n} \Lambda\big(X_k, g_k(Z^n)\big)$

Usually one minimizes the expectation E[L_n], but this paper instead looks at the probability of exceeding a certain value, P(L_n > D).
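
To make the setup concrete, here is a tiny Monte Carlo sketch of my own (none of the numbers or the denoiser come from the paper): a Bernoulli source observed through a binary symmetric channel, the trivial “say what you see” symbol-by-symbol denoiser, and an empirical estimate of P(L_n > D) under Hamming loss.

```python
import numpy as np

# Toy Monte Carlo sketch (my own illustration, not from the paper):
# an i.i.d. Bernoulli(1/2) source X observed through a binary symmetric
# channel with crossover probability eps, denoised symbol-by-symbol,
# with P(L_n > D) estimated empirically under Hamming loss.

rng = np.random.default_rng(0)

n = 100            # block length
eps = 0.1          # BSC crossover probability
D = 0.15           # excess-loss threshold
num_trials = 10_000

def denoise(z):
    # "Say what you see": for a uniform source and eps < 1/2 this is the
    # Bayes-optimal symbol-by-symbol rule under Hamming loss.
    return z

exceed = 0
for _ in range(num_trials):
    x = rng.integers(0, 2, size=n)               # clean source X^n
    flips = (rng.random(n) < eps).astype(int)    # channel noise
    z = x ^ flips                                # noisy observation Z^n
    x_hat = denoise(z)
    L_n = np.mean(x_hat != x)                    # time-averaged Hamming loss
    exceed += int(L_n > D)

print("Estimated P(L_n > D):", exceed / num_trials)
```

With these made-up numbers the typical per-symbol loss is around eps = 0.1, so {L_n > 0.15} is a rare event, and it is exactly the exponential decay of its probability in n that the paper’s large deviations analysis characterizes.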

The major insight I got from this paper was that you can treat the terms of the loss function

$h_k = \Lambda\big(X_k, g_k(Z^n)\big), \qquad k = 1, 2, \ldots, n$

as outputs of a source with time-varying (arbitrarily varying) statistics. Conditioned on Z^n, each h_k is independent with a distribution in a finite set of possible distributions. Then, to bound the probability P(L_n > D), they prove a large deviations result on L_n, which is the time-average of the arbitrarily varying source.
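
To get a feel for why such a bound should exist (this is just the generic Chernoff/Markov calculation, not the paper’s actual rate function), note that since the h_k are conditionally independent and bounded, for any s > 0,

$P(L_n > D \mid Z^n) \le e^{-nsD} \prod_{k=1}^{n} E\big[e^{s h_k} \mid Z^n\big] = \exp\Big(-n\Big(sD - \tfrac{1}{n}\sum_{k=1}^{n} \log E\big[e^{s h_k} \mid Z^n\big]\Big)\Big),$

and optimizing over s gives an exponent that depends only on the finitely many conditional distributions the h_k can take.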

Some of the other results in this paper are:

  • For a Hamming loss function the optimal denoiser is symbol-by-symbol (there’s a sketch of what such a rule looks like just after this list).
  • Among symbol-by-symbol denoisers, time-invariant ones are optimal.
  • A large deviations principle (LDP) for block denoisers and some analysis of the rate.
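
To illustrate the first bullet, here is a toy sketch of the classical Bayes-response (MAP) rule for Hamming loss: for each observed symbol z, output the most likely clean symbol given z. The source distribution and channel below are made up, and I’m not claiming this is literally the rule the paper derives under the excess-loss criterion; it’s just what a time-invariant symbol-by-symbol denoiser looks like.

```python
import numpy as np

# A toy time-invariant symbol-by-symbol denoiser under Hamming loss:
# for each observed symbol z, output argmax_x p(x) p(z | x).
# The source distribution and channel below are made-up illustrations.

p_x = np.array([0.7, 0.3])              # P(X = x) over a binary alphabet
p_z_given_x = np.array([[0.9, 0.1],     # rows indexed by x, columns by z
                        [0.2, 0.8]])

# Precompute the single-symbol map g : Z -> X_hat.
joint = p_x[:, None] * p_z_given_x      # entry [x, z] is p(x) p(z | x)
g = joint.argmax(axis=0)                # g[z] = argmax_x P(X = x | Z = z)

def denoise(z_seq):
    """Apply the same single-symbol map at every position of Z^n."""
    return g[np.asarray(z_seq)]

print(g)                                # the lookup table z -> x_hat
print(denoise([0, 1, 1, 0]))            # denoise a short noisy sequence
```

The whole denoiser is a single lookup table from the noisy alphabet to the reconstruction alphabet, applied independently at each time index.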

Most of the meat of the proofs is in a preprint, which seems to still be in flux.

rewarding peer review

I’m not hot-shot enough to be asked to review papers yet, but I’ve looked over a few for others who wanted a second take on things, and it seems that the backlog of reviews, especially for conferences, is enormous. Here’s a set of recent (and not so recent) comments on the peer review process:

Larry Wasserman talks (from experience) about the problems of hostile reviewers and nasty politics.

Cosma Shalizi says there are many, many more reasons to reject a paper than Wasserman does, but that peer review should be reader-centric in focus.

David Feldman thinks that journals should give out free socks or something to reviewers so that there is at least some token appreciation of all the work they put into it.

Martin Grossglauser and Jennifer Rexford have another good take on the system.

Fundamentally it seems there are two problems to solve — reviewers have no incentive to review papers quickly, and the objective of the reviewing process is rarely articulated clearly. Socks and pools both seem like good steps in that direction. It seems to be one of those situations where trying small fixes now would be much better than trying to institute some huge shift in editorial processes across many journals all at once.