Detection and Estimation: book recommendations?

It’s confirmed that I will be teaching Detection and Estimation next semester so I figured I would use the blog to conjure up some book recommendations (or even debate, if I can be so hopeful). Some of the contenders:

  • Steven M. Kay, Fundamentals of Statistical Signal Processing – Estimation Theory (Vol. 1), Prentice Hall, 1993.
  • H. Vincent Poor, An Introduction to Signal Detection and Estimation, 2nd Edition, Springer, 1998.
  • Harry L. Van Trees, Detection, Estimation, and Modulation Theory (in 4 parts), Wiley, 2001 (a reprint).
  • M. D. Srinath, P. K. Rajasekaran, and R. Viswanathan, Introduction to Statistical Signal Processing with Applications, Prentice Hall, 1996.

Detection and estimation is a fundamental class in the ECE graduate curriculum, but these “standard” textbooks are around 20 years old, and I can’t help but think there might be a more “modern” take on the subject (no, I’m not volunteering). Venu Veeravalli‘s class doesn’t use a book, just notes. However, I think the students at Rutgers (the majority of whom are MS students) would benefit from a textbook, at least as a grounding.

Srinath et al. is what my colleague Narayan Mandyam uses. Kay is what I was leaning towards before (because it seems to be the most widely used), but Poor’s book is the one I read. I think I am putting up the Van Trees mostly as a joke. I mean, it’s a great book, but I think it’s a bit much for a course textbook. So what do the rest of you use? Also, if you are teaching this course next semester, perhaps we can share some ideas. I think the curriculum might be ripe for some shaking up. If not in the core material, then at least in the kinds of examples we use. For example, I’m certainly going to cover differential privacy as a connection to hypothesis testing.
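
To make that last connection concrete for anyone curious, here is the way I usually think of it (a sketch, not necessarily how I will present it in class): an \epsilon-differentially private mechanism limits how well any hypothesis test can distinguish two neighboring databases from the mechanism’s output.

```latex
% H_0: the database is D;  H_1: the database is a neighboring D'.
% The test only sees the output of an \epsilon-differentially private mechanism M.
% Differential privacy bounds the likelihood ratio between the two output
% distributions by e^{\epsilon}, so for any rejection region the false-alarm
% and missed-detection probabilities of the test must satisfy
\[
P_{\mathrm{MD}} + e^{\epsilon} P_{\mathrm{FA}} \ge 1
\qquad \text{and} \qquad
P_{\mathrm{FA}} + e^{\epsilon} P_{\mathrm{MD}} \ge 1 .
\]
% In particular, small \epsilon forces every test to have either a large
% false-alarm rate or a large missed-detection rate.
```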

Towards multi-sensor characterizations of pianos

As an undergraduate I became interested in how timbre can be used to identify musical instruments. This was largely due to my first UROP (undergraduate research gig) with Keith Martin at the MIT Media Lab. Keith’s thesis was on identifying musical instruments from spectral features, and I worked a bit on this under Ryan Rifkin in a later UROP. I’ve been catching up on podcasts during my commute to campus this week, and a semi-recent Science Friday piece on the Steinway factory was on deck for this morning.

The piece talks about work in Agnieszka Roginska‘s lab at NYU, and in particular work from a paper from last year on measuring radiation patterns in piano soundboards. The radiation patterns are pretty but a bit hard to interpret, largely because I’m way out of the acoustical signal processing world. However, what’s interesting to me is that we’re still largely focused on overtones/cepstral coefficients. I wonder how one might discover more interesting features to characterize this data. (I know someone will suggest deep learning, but color me a little skeptical.)
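
For readers who haven’t played with these features, here is roughly what extracting cepstral coefficients looks like in practice: a minimal sketch assuming the librosa library, with a hypothetical audio file name.

```python
# Minimal sketch of a standard cepstral timbre feature (MFCCs), assuming the
# librosa library; "piano_note.wav" is a hypothetical file name.
import numpy as np
import librosa

y, sr = librosa.load("piano_note.wav", sr=None)        # samples and native sample rate

# Mel-frequency cepstral coefficients, the usual spectral-envelope summary
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape: (13, num_frames)

# Collapse the time axis to get a fixed-length descriptor per recording
feature_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(feature_vector.shape)                            # (26,)
```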

As a side note, one of the recent popular articles from JASA is on the acoustics of coffee roasting.

SPCOM 2014: some more talks (and a plenary)

I did catch Greg Wornell’s plenary at SPCOM, which was called When Bits Absolutely, Positively, Have to be There as Soon as Possible, a riff on this FedEx commercial, which is older than I am. The talk was on link-aware PHY-layer design: basically looking at how ARQ enables incremental redundancy, and how to do a sort of layered superposition + incremental redundancy scheme in the sequential setting as well as a “multi-path” setting where blocks can arrive out of order. This was really digging into the signal issues in a way that might make a lot of information theorists outside communication engineering squeamish. The nice thing is that I think the engineering problem is approachable without knowing a lot of heavy-duty math, but it still requires some careful analysis.

Communication and Compression Via Sparse Linear Regression
Ramji Venkataramanan
This was on building codewords and codebooks out of a lower-complexity code dictionary A \in \mathbb{R}^{n \times ML}, where each codeword is a superposition of L columns, one from each of L groups of size M. Thus encoding is A \beta, where \beta is a sparse vector. I saw a talk by Barron and Joseph from a previous ISIT about this, but the framework extends to rate-distortion (achieving the rate-distortion function) and channel coding. The main point is to lower the complexity of the code at the expense of the gap to the optimal rate: encoding and decoding are polynomial time, but the rate gap for rate-distortion goes to zero only as 1/\log n. Ramji gave a really nice and clear talk on this — I hope he puts the slides up!
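
To make the codebook structure concrete, here is a minimal numpy sketch of the encoding map (my own toy parameters and Gaussian dictionary, not the construction details from the paper):

```python
# Toy sketch of a sparse regression code: the dictionary A has L sections of M
# columns each, and a codeword is A @ beta with exactly one nonzero per section.
# Parameters and the Gaussian dictionary are illustrative, not from the paper.
import numpy as np

n, L, M = 64, 8, 16                     # block length, sections, columns per section
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))

def encode(section_choices, scale=1.0):
    """section_choices[l] in {0, ..., M-1} picks one column from section l."""
    beta = np.zeros(L * M)
    for l, idx in enumerate(section_choices):
        beta[l * M + idx] = scale       # one nonzero entry per section
    return A @ beta                     # the length-n codeword

codeword = encode(rng.integers(0, M, size=L))
print(codeword.shape)                   # (64,); the rate is L*log2(M)/n bits per symbol
```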

An Optimal Varentropy Bound for Log-Concave Distributions
Mokshay Madiman; Liyao Wang
Mokshay’s talk was also really clear and excellent. For a random variable X with density f on \mathbb{R}^n, we can define \tilde{h}(X) = - \log f(X). The entropy is the expectation of this random variable, and the varentropy is its variance. Their main result is an upper bound on the varentropy of log-concave densities f: to wit, \mathrm{Var}(\tilde{h}(X)) \le n. This bound doesn’t depend on the distribution and is sharp if f is a product of exponentials. They then use this to prove a universal bound on the deviation of \tilde{h} from its expectation, which gives an AEP that doesn’t really assume anything about the joint distribution of the variables except log-concavity. There was more in the talk, but I eagerly await the paper.
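
The sharpness claim is easy to sanity-check numerically: for i.i.d. Exponential(1) coordinates, f(x) = e^{-\sum_i x_i}, so \tilde{h}(X) = \sum_i X_i and its variance is exactly n. A quick (unofficial) simulation:

```python
# Numerical sanity check of the sharpness example: for X with i.i.d. Exponential(1)
# coordinates, -log f(X) = sum_i X_i, whose variance is exactly n, matching Var <= n.
import numpy as np

n, num_samples = 10, 200_000
rng = np.random.default_rng(1)
X = rng.exponential(scale=1.0, size=(num_samples, n))
h_tilde = X.sum(axis=1)          # \tilde{h}(X) = -log f(X) for this density
print(h_tilde.var())             # close to n = 10
```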

Event-triggered Sampling and Reconstruction of Sparse Real-valued Trigonometric Polynomials
Neeraj Sharma; Thippur V. Sreenivas
This was on non-uniform sampling where the sampler tries to detect level crossings of the analog signal and samples at those points; the resulting sample times may be too irregular for existing nonuniform sampling techniques. They come up with a method for reconstructing signals which are real-valued trigonometric polynomials with a few nonzero coefficients (i.e. sparse), and it seems to work pretty decently in experiments.

Removing Sampling Bias in Networked Stochastic Approximation
Vivek Borkar; Raaz Dwivedi
In networked stochastic approximation, the intermittent communication between nodes may mean that the system tracks a different ODE than the one we want. By modifying the method to account for “local clocks” on each edge, we can correct for this, but we end up with new conditions on the step size to make things work. I am pretty excited about this paper, but as usual, my notes were not quite up to getting the juicy bits. That’s what paper reading is for.

On Asymmetric Insertion and Deletion Errors
Ankur A. Kulkarni
The insertion/deletion channel model is notoriously hard. Ankur proposed a new model where 0‘s are “indestructible” — they cannot be inserted or deleted. This asymmetric model leads to new asymptotic bounds on the capacity. I don’t really work on this channel model so I can’t get into the finer points of the results, but one nice takeaway was that asymptotically, each indestructible 0 in the codeword lets us correct roughly half a deletion more.

ISIT 2014: how many samples do we need?

Due to jetlag, my CAREER proposal deadline, and perhaps a bit of general laziness, I didn’t take as many notes at ISIT as I would have liked, so my posting will be somewhat light (in addition to being almost a month delayed). If someone else took notes on some talks and wants to guest-post about them, let me know!

Strong Large Deviations for Composite Hypothesis Testing
Yen-Wei Huang (Microsoft Corporation, USA); Pierre Moulin (University of Illinois at Urbana-Champaign, USA)
This talk was actually given by Vincent Tan since neither of the authors could make it (this seems to be a theme of the talks I’ve attended this summer). The paper was about testing a simple hypothesis H_1 versus a composite hypothesis H_0, where under H_0 the observations are i.i.d. according to one of possibly k different distributions. There are therefore k different error probabilities, and the goal is to characterize them when we ask for the probability of correct detection to be greater than 1 - \epsilon. This is a sort of generalized Neyman-Pearson setup. They look at the vector of log-likelihood ratios and show that a threshold test is nearly optimal. At the time, I understood the idea of the proof, but I think it’s one of those things where you need to really read the paper.
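
For intuition, one natural form such a threshold test could take (my reconstruction from the talk, not necessarily the exact statistic in the paper) is to require the evidence against every candidate null to exceed a threshold:

```latex
% H_1: X_1,...,X_n i.i.d. ~ p_1;   H_0: X_1,...,X_n i.i.d. ~ p_{0,j} for some j.
% One natural threshold test on the vector of log-likelihood ratios:
\[
\text{declare } H_1
\quad \text{iff} \quad
\min_{1 \le j \le k} \; \sum_{i=1}^{n} \log \frac{p_1(X_i)}{p_{0,j}(X_i)} \;\ge\; \tau ,
\]
% i.e., H_1 must beat every distribution in the composite null by a margin \tau.
```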

Randomized Sketches of Convex Programs with Sharp Guarantees
Mert Pilanci (University of California, Berkeley, USA); Martin J. Wainwright (University of California, Berkeley, USA)
This talk was about using random projections to lower the complexity of solving a convex program. Suppose we want to minimize \| Ax - y \|^2 over x given y. A sketch would be to solve \| SAx - Sy \|^2 where S is a random projection. One question is how to choose S. They show that if S is chosen to be a randomized Hadamard matrix (the paper also studies Gaussian matrices), then the objective value of the sketched program is at most (1 + \epsilon)^2 times the value of the original program, as long as the number of rows of S is larger than O( \epsilon^{-2} \mathbb{W}^2(A \mathcal{K})), where \mathbb{W}(A \mathcal{K}) is the Gaussian width of the tangent cone of the constraint set at the optimum. For more details look at their preprint on arXiv.
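
Here is a minimal numpy illustration of the basic idea for least squares, using a plain Gaussian sketch for simplicity (the sharp guarantees in the paper are for more structured sketches and general convex constraint sets):

```python
# Minimal sketch of sketched least squares: solve the projected problem
# min_x ||S A x - S y||^2 with a random m x n sketch S, and compare objective
# values with the unsketched solution. A Gaussian S is used for simplicity.
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 2000, 50, 300                      # samples, dimension, sketch size
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

x_full, *_ = np.linalg.lstsq(A, y, rcond=None)              # original program
S = rng.normal(size=(m, n)) / np.sqrt(m)                    # random projection
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)    # sketched program

obj = lambda x: np.linalg.norm(A @ x - y) ** 2
print(obj(x_full), obj(x_sketch))   # sketched objective is only slightly larger
```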

On Efficiency and Low Sample Complexity in Phase Retrieval
Youssef Mroueh (MIT-IIT, USA); Lorenzo Rosasco (DIBRIS, Unige and LCSL – MIT, IIT, USA)
This was another talk not given by the authors. The problem is recovery of a complex vector x_0 \in \mathbb{C}^n from phaseless measurements of the form b_i = |\langle a_i, x_0 \rangle|^2, where the a_i are complex spherically symmetric Gaussian vectors. Recovery from such measurements is nonconvex and tricky, but an alternating minimization algorithm can reach a local optimum, and if you start it in a “good” initial position, it will find a global optimum. The contribution of this paper is to provide such a smart initialization. The idea is to “pair” the measurements to create new measurements y_i = \mathrm{sign}( b_i^{(1)} - b_i^{(2)} ). This leads to a new problem (with half as many measurements) which is still hard, so they find a convex relaxation of that. I had thought briefly about such sensing setups a long time ago (and by thought, I mean puzzled over it at a coffeeshop once), so it was interesting to see what was known about the problem.
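
For context, here is what a generic alternating minimization loop for this measurement model looks like (a Gerchberg–Saxton-style sketch started from a random point; the paper’s pairing-based initialization is exactly the part not shown here):

```python
# Generic alternating minimization for the phaseless model b_i = |<a_i, x0>|^2,
# started from a random initialization. This is a standard baseline loop, not the
# initialization scheme from the paper, and it may stall in a local optimum.
import numpy as np

rng = np.random.default_rng(3)
n, m = 32, 256
x0 = rng.normal(size=n) + 1j * rng.normal(size=n)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
b = np.abs(A @ x0) ** 2                     # phaseless measurements
c = np.sqrt(b)

x = rng.normal(size=n) + 1j * rng.normal(size=n)          # random (not "smart") init
for _ in range(200):
    phases = np.exp(1j * np.angle(A @ x))                 # step 1: guess the missing phases
    x, *_ = np.linalg.lstsq(A, c * phases, rcond=None)    # step 2: least-squares update

# Compare up to a global phase, which is unidentifiable from |<a_i, x0>|^2
aligned = x * np.exp(1j * np.angle(np.vdot(x, x0)))
print(np.linalg.norm(aligned - x0) / np.linalg.norm(x0))
```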

Sorting with adversarial comparators and application to density estimation
Jayadev Acharya (University of California, San Diego, USA); Ashkan Jafarpour (University of California, San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)
Ashkan gave this talk on a problem where you have m samples from an unknown distribution p and a set of distributions \{q_1, q_2, \ldots, q_n\} to compare against. You want to find the distribution that is closest in \ell_1. One way to do this is via a Scheffe tournament that compares all pairs of distributions; this runs in O(n^2) time. They show a method that runs in O(n) time by studying the structure of the comparators used in the sorting method. The motivation is that running comparisons can be expensive (especially if they involve human decisions), so we want to minimize the number of comparisons. The paper is significantly different from the talk, but I think it would definitely be interesting to those interested in discrete algorithms. The density estimation problem is really just a motivator — the sorting problem is far more general.
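
For readers who haven’t seen it, the pairwise comparator underneath such tournaments is the classical Scheffe test; a toy sketch for discrete distributions is below (the paper’s contribution is about avoiding the quadratic tournament, not about this comparator itself):

```python
# Sketch of the classical Scheffe comparator: given samples from the unknown p,
# pick whichever of q1, q2 better matches the empirical mass of the "Scheffe set"
# {x : q1(x) > q2(x)}. Toy alphabet and distributions are illustrative.
import numpy as np

def scheffe_compare(samples, q1, q2):
    """q1, q2: probability vectors over {0, ..., K-1}; samples: draws from p."""
    A = q1 > q2                                   # the Scheffe set
    p_hat_A = np.mean(A[samples])                 # empirical mass of A under p
    if abs(q1[A].sum() - p_hat_A) <= abs(q2[A].sum() - p_hat_A):
        return 1                                  # q1 wins the comparison
    return 2

rng = np.random.default_rng(4)
p  = np.array([0.4, 0.3, 0.2, 0.1])
q1 = np.array([0.35, 0.3, 0.25, 0.1])             # close to p
q2 = np.array([0.1, 0.2, 0.3, 0.4])               # far from p
samples = rng.choice(4, size=5000, p=p)
print(scheffe_compare(samples, q1, q2))           # typically 1
```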

CFP: JSTSP Special Issue on Privacy

IEEE Signal Processing Society
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Signal and Information Processing for Privacy

Aims and Scope: There has been a remarkable increase in the usage of communications and information technology over the past decade. Currently, in the backend and in the cloud, reside electronic repositories that contain an enormous amount of information and data associated with the world around us. These repositories include databases for data-mining, census, social networking, medical records, etc. It is easy to forecast that our society will become increasingly reliant on applications built upon these data repositories. Unfortunately, the rate of technological advancement associated with building applications that produce and use such data has significantly outpaced the development of mechanisms that ensure the privacy of such data and the systems that process it.

As a society we are currently witnessing many privacy-related concerns that have resulted from these technologies—there are now grave concerns about our communications being wiretapped, about our SSL/TLS connections being compromised, about our personal data being shared with entities we have no relationship with, etc. The problems of information exchange, interaction, and access lend themselves to fundamental information processing abstractions and theoretical analysis. The tools of rate-distortion theory, distributed compression algorithms, distributed storage codes, machine learning for feature identification and suppression, and compressive sensing and sampling theory are fundamental and can be applied to precisely formulate and quantify the tradeoff between utility and privacy in a variety of domains.

Thus, while rate-distortion theory and information-theoretic privacy can provide fundamental bounds on privacy leakage of distributed data systems, the information and signal processing techniques of compressive sensing, machine learning, and graphical models are the key ingredients necessary to achieve these performance limits in a variety of applications involving streaming data, distributed data storage (cloud), and interactive data applications across a number of platforms. This special issue seeks to provide a venue for ongoing research in information and signal processing for applications where privacy concerns are paramount.

Topics of Interest include (but are not limited to):

  • Signal processing for information-theoretic privacy
  • Signal processing techniques for access control with privacy guarantees in distributed storage systems
  • Distributed inference and estimation with privacy guarantees
  • Location privacy and obfuscation of mobile device positioning
  • Interplay of privacy and other information processing tasks
  • Formalized models for adversaries and threats in applications where consumer and producer privacy is a major concern
  • Techniques to achieve covert or stealthy communication in support of private communications
  • Competitive privacy and game theoretic formulations of privacy and obfuscation

Important Dates:
Manuscript submission due: October 1, 2014
First review completed: December 15, 2014
Revised manuscript due: February 1, 2015
Second review completed: March 15, 2015
Final manuscript due: May 1, 2015
Publication date: October 2015

Prospective authors should visit the JSTSP homepage for information on paper submission. Manuscripts should be submitted using Manuscript Central.

GlobalSIP 2014 : the format

I’m in Austin right now for the first GlobalSIP conference. The conference has a decentralized organization, with semi-independent day-long workshops (“symposia”) scheduled in parallel with each other. There are 8 of these, with 6 running in parallel per day; each workshop has 1 session of “plenary” talks and 2 poster sessions. Each workshop is scheduled as AAB, ABA, or BAA, where A = posters and B = plenary talks, so at any given time there are roughly 2 talk sessions and 4 poster sessions running in parallel.

Fortunately, there is a wide range of topics covered across the workshops, from biology to controlled sensing to financial signal processing. The downside is that the actual papers in a given workshop would often fit just as well in other workshops. For example, the distributed optimization posters (in which I am interested) were sprinkled all over the place. This probably has a lot to do with the decentralized organization.

In terms of the “results” at the conference, it seems from my cursory view that many people are presenting “extra” results from other conference papers, or preliminary work for future papers. This actually works well in the poster format: for the former, the poster contains a lot of information about the “main result” as well, and for the latter, the poster is an invitation to think about future work. In general I’m a little ambivalent about posters, but if you’re going to have to do ‘em, a conference like this may be a better way to do it.

dismissing research communities is counterproductive

I recently saw that Andrew Gelman hasn’t really heard of compressed sensing. As someone in the signal processing/machine learning/information theory crowd, I find that a little flabbergasting, but I think it highlights two things that aren’t really appreciated by the systems EE/algorithms crowd: 1) statistics is a pretty big field, and 2) the gulf between much statistical practice and what is being done in SP/ML research is pretty wide.

The other aspect of this is a comment from one of his readers:

Meh. They proved L1 approximates L0 when design matrix is basically full rank. Now all sparsity stuff is sometimes called ‘compressed sensing’. Most of it seems to be linear interpolation, rebranded.

I find such dismissals disheartening — there is a temptation to say that every time another community picks up some models/tools from your community, they are reinventing the wheel. As a short-hand, it can be useful to say “oh yeah, this compressed sensing stuff is like the old sparsity stuff.” However, as a dismissal it’s just being parochial — you have to actually engage with how those models/tools are being used. Gelman says it can lead to “better understanding one’s assumptions and goals,” but I think it’s just as important to understand others’ goals.

I could characterize rate-distortion theory as just calculating some large deviations rate functions. Dembo and Zeitouni list RD as an application of the LDP, but I don’t think they mean “meh, it’s rebranded LDP.” For compressed sensing, the goal is to do inference in a computationally and statistically efficient way. One key ingredient is optimization. If you just dismiss all of compressed sensing as “rebranded sparsity,” you’re missing the point entirely.
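
To see why “rebranded linear interpolation” misses the point, here is a toy numpy example of the computational story: a 10-sparse vector in \mathbb{R}^{400} recovered from only 100 random linear measurements by a simple \ell_1 solver (plain iterative soft-thresholding; parameters are illustrative and this is nobody’s production algorithm):

```python
# Toy illustration: recover a sparse x from m << n random linear measurements
# by solving an l1-regularized least-squares problem with plain iterative
# soft-thresholding (ISTA). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 400, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))     # small relative error
```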