Detection and Estimation: book recommendations?

It’s confirmed that I will be teaching Detection and Estimation next semester, so I figured I would use the blog to solicit some book recommendations (or even debate, if I can be so hopeful). Some of the contenders:

  • Steven M. Kay, Fundamentals of Statistical Signal Processing – Estimation Theory (Vol. 1), Prentice Hall, 1993.
  • H. Vincent Poor, An Introduction to Signal Detection and Estimation, 2nd Edition, Springer, 1998.
  • Harry L. Van Trees, Detection, Estimation, and Modulation Theory (in 4 parts), Wiley, 2001 (a reprint).
  • M. D. Srinath, P. K. Rajasekaran, and R. Viswanathan, Introduction to Statistical Signal Processing with Applications, Prentice Hall, 1996.

Detection and estimation is a fundamental class in the ECE graduate curriculum, but these “standard” textbooks are around 20 years old, and I can’t help but think there might be a more “modern” take on the subject (no, I’m not volunteering). Venu Veeravalli‘s class doesn’t use a book, just notes. However, I think the students at Rutgers (majority MS students) would benefit from a textbook, at least as a grounding.

Srinath et al. is what my colleague Narayan Mandyam uses. Kay is what I was leaning toward before (because it seems to be the most widely used), but Poor’s book is the one I read. I think I am putting up the Van Trees as a joke, mostly. I mean, it’s a great book, but I think a bit much for a textbook. So what do the rest of you use? Also, if you are teaching this course next semester, perhaps we can share some ideas. I think the curriculum might be ripe for some shaking up; if not in core material, at least in the kinds of examples we use. For example, I’m certainly going to cover differential privacy as a connection to hypothesis testing.

MAP and ML in practice on the New Jersey Turnpike

Since I might be teaching detection and estimation next semester, I’ve been thinking a little bit about decision rules during my commute down the New Jersey Turnpike. The following question came to mind:

Suppose you see a car on the Turnpike that is clearly driving dangerously (weaving between cars, going 90+ MPH, tailgating an ambulance, and the like). How should you decide whether the car has New Jersey or New York plates [*]?

This is a hypothesis testing problem. I will assume for simplicity that New York drivers have cars with New York plates and New Jersey drivers have New Jersey plates [**]:
H_0: New Jersey driver
H_1: New York driver
Let Y be a binary variable indicating whether or not I observe dangerous driving behavior. Based on my entirely subjective experience, I would say that, in terms of likelihoods,
\mathbb{P}(Y = 1 | H_1) > \mathbb{P}(Y = 1 | H_0)
so the maximum likelihood (ML) rule would suggest that the driver is from New York.

However, if I take into account my (also entirely subjective) priors P(H_0), P(H_1) on the fractions of drivers from New Jersey and New York, respectively, I would have to say
\mathbb{P}(Y = 1 | H_1) P(H_1) < \mathbb{P}(Y = 1 | H_0) P(H_0)
so the maximum a-posteriori probability (MAP) rule would suggest that the driver is from New Jersey.
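
To make the two rules concrete, here is a toy numerical sketch in Python. The likelihood and prior values are invented stand-ins for my subjective impressions, not measurements of actual Turnpike behavior.

    # Made-up numbers only: placeholders for my subjective guesses.
    likelihood_dangerous = {"NJ": 0.01, "NY": 0.05}  # P(Y=1 | H_0), P(Y=1 | H_1)
    prior = {"NJ": 0.9, "NY": 0.1}                   # most cars around me are from NJ

    # ML rule: pick the hypothesis under which Y = 1 is most likely.
    ml_decision = max(likelihood_dangerous, key=likelihood_dangerous.get)

    # MAP rule: weight each likelihood by its prior before comparing.
    map_decision = max(prior, key=lambda h: likelihood_dangerous[h] * prior[h])

    print(ml_decision)   # NY, since 0.05 > 0.01
    print(map_decision)  # NJ, since 0.01 * 0.9 = 0.009 > 0.05 * 0.1 = 0.005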

Which is better?

[*] I am assuming North Jersey here, so Pennsylvania plates are negligible.
[**] This may be a questionable modeling assumption given suburban demographics.

Postdoctoral position at University of Michigan

A postdoctoral position is available at the University of Michigan Electrical Engineering and Computer Science Department for a project related to anomaly detection in networked cyber-physical systems. The successful applicant will have knowledge in one or more of the following topics: convex optimization and relaxations, compressed sensing, distributed optimization, submodularity, control and dynamical systems or system identification. The project will cover both theory and algorithm development and some practical applications in fault and attack detection in transportation and energy networks. The position can start anytime in 2014 or early 2015. This is a one year position, renewable for a second year. Interested candidates should contact Necmiye Ozay at necmiye@umich.edu with a CV and some pointers to representative publications.

Linkage

Some old links I meant to post a while back, but which may still be of interest to some…

I prefer my okra less slimy, but to each their own.

Via Erin, A tour of the old homes of the Mission.

Also via Erin, Women and Crosswords and Autofill.

A statistician rails against computer science’s intellectual practices.

Nobel Laureate Randy Schekman is boycotting Nature, Science, and Cell. Retraction Watch is skeptical.

Allerton 2014: privately charging your information odometer while realizing your group has pretenders

Here are a few papers that I saw at Allerton — more to come later.

Group Testing with Unreliable Elements
Arya Mazumdar, Soheil Mohajer
This was a generalization of the group testing problem: items are either positive or null, and can be tested in groups such that if any element of the group is positive, the whole group will test positive. The item states can be thought of as a binary column vector \mathbf{x}, and each test is a row of a matrix A: the (i,j)-th entry of A is 1 if the j-th item is part of the i-th test. The multiplication A \mathbf{x} is taken using Boolean OR. The twist in this paper is that they consider a situation where \tau elements can “pretend” to be positive in each test, with a possibly different pretending set in each test. This is different from “noisy group testing,” considered previously, which is more like A \mathbf{x} + \mathrm{noise}. They show achievable rates for detecting the positives using random coding methods (i.e., a random A matrix). There was some stuff at the end about the non-i.i.d. case, but my notes were sketchy at that point.
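
For intuition, here is a small Python sketch of the noiseless measurement model and the “pretender” twist. The sizes, test sparsity, and the uniformly random pretenders are my own illustrative choices, not the paper’s construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, tau = 100, 40, 2      # items, tests, pretenders per test (illustrative)

    x = np.zeros(n, dtype=bool)
    x[rng.choice(n, size=3, replace=False)] = True   # a few positive items

    A = rng.random((m, n)) < 0.1    # random testing matrix: row i is test i

    y = (A & x).any(axis=1)         # Boolean-OR "multiplication" A x

    # Pretenders: in each test, tau null items may claim to be positive,
    # and the pretending set can differ from test to test.
    y_adv = y.copy()
    nulls = np.flatnonzero(~x)
    for i in range(m):
        fakers = rng.choice(nulls, size=tau, replace=False)
        y_adv[i] |= A[i, fakers].any()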

The Minimal Realization Problems for Hidden Markov Models
Qingqing Huang, Munther Dahleh, Rong Ge, Sham Kakade
The realization problem for an HMM is this: given the exact joint probability distribution on length N strings (Y_1, Y_2, \ldots, Y_N) from an HMM, can we create a multilinear system whose outputs have the same joint distribution? A multilinear system looks like this:
w_{t+1} = A^{(\ell_t)} w_t for w_t \in \mathbb{R}^k with w_0 = v
z_{t} = u^{\top} w_t
We think of A^{(\ell_t)} \in \mathbb{R}^{k \times k} as the transition for state \ell_t (e.g. a stochastic matrix) and we want the output to be z_t. This is sort of at the nexus of control/Markov chains and HMMs and uses some of the tensor ideas that are getting hot in machine learning. As the abstract puts it, the result is that they can efficiently construct realizations if N > \max(4 \log_d(k) + 1, 3), where d is the size of the output alphabet and k is the minimal order of the realization.
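
My reading of the setup, as a hypothetical Python sketch: the realization assigns a string the weight you get by applying the per-symbol matrices to v and capping off with u. The dimensions and random matrices here are placeholders, and the paper’s exact conventions may differ.

    import numpy as np

    rng = np.random.default_rng(1)
    k, d = 3, 2     # minimal order and output alphabet size (made-up values)

    # One k x k matrix per output symbol; random entries purely for
    # illustration (the paper's realizations need not be stochastic matrices).
    A = [rng.random((k, k)) for _ in range(d)]
    u = rng.random(k)
    v = rng.random(k)

    def string_weight(symbols, A, u, v):
        """Weight u^T A^{(l_N)} ... A^{(l_1)} v assigned to an output string."""
        w = v
        for s in symbols:
            w = A[s] @ w    # w_{t+1} = A^{(l_t)} w_t
        return u @ w        # z = u^T w

    print(string_weight([0, 1, 1, 0], A, u, v))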

Differentially Private Distributed Protocol for Electric Vehicle Charging
Shuo Han, Ufuk Topcu, George Pappas
This paper was about a central aggregator trying to get multiple electric vehicle users to report their charging needs. The aggregator solves an optimization problem over the charging rates:
\min_{r_i} U\left( \sum_i r_i \right)
\mathrm{s.t.}\ 0 \le r_i \le \bar{r}_i
\qquad \mathbf{1}^{\top} r_i = z_i
They focus on the case where the charge demands z_i are private but the charging rates r_i can be public. This is in contrast to the mechanism design literature popularized by Aaron Roth and collaborators. They analyze a projected gradient descent procedure with Laplace noise added for differential privacy. It’s a stochastic gradient method, and the number of iterations clearly impacts the privacy parameter, but for the examples they showed it converged in only a handful of steps.
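
A rough sketch of the kind of iteration I took away from the talk, with Laplace noise on the broadcast gradient. The utility, step size, noise scale, and crude projection step are all my own placeholders, not the paper’s sensitivity-calibrated construction.

    import numpy as np

    rng = np.random.default_rng(2)
    n, T = 5, 24                        # users and charging slots (illustrative)
    z = rng.uniform(5.0, 15.0, n)       # private charge demands z_i
    r_bar, step, eps = 2.0, 0.05, 1.0   # rate cap, step size, privacy parameter

    r = np.full((n, T), 0.5)
    for _ in range(10):          # only a handful of iterations, as in the talk
        load = r.sum(axis=0)
        grad = 2.0 * load        # gradient of a stand-in utility U(x) = ||x||^2
        # Laplace noise on the broadcast gradient protects the demands; the
        # scale 1/eps is a placeholder, not the paper's calibrated value.
        noisy_grad = grad + rng.laplace(scale=1.0 / eps, size=T)
        for i in range(n):
            r[i] -= step * noisy_grad
            # Crude stand-in for projecting onto {0 <= r_i <= r_bar,
            # 1^T r_i = z_i}: clip, then rescale toward the demand (a real
            # implementation would compute the projection exactly).
            r[i] = np.clip(r[i], 0.0, r_bar)
            if r[i].sum() > 0:
                r[i] *= z[i] / r[i].sum()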

An Interactive Information Odometer and Applications
Mark Braverman and Omri Weinstein
This was a communication complexity talk about trying to maintain an estimate of the information complexity of a protocol \Pi for computing a function f(X,Y) interactively, where Alice has X, Bob has Y, and (X,Y) \sim \mu:
\mathsf{IC}_{\mu}(\Pi) = I( \Pi ; X | Y ) + I( \Pi ; Y | X)
Previous work has shown that the (normalized) communication complexity of computing f on n-tuples of variables \epsilon-accurately approaches the minimum information complexity over all protocols for computing f once, \epsilon-accurately. At least I think this is what was shown previously; it’s not quite my area. The result in this paper is a strong converse for this: the goal is to maintain an online estimate of \mathsf{IC} during the protocol. The mechanism seems a bit like communication with feedback a la Horstein, but I got a bit confused as to what was going on. Basically, Alice and Bob need to agree on the estimate during the protocol and use a few extra bits added to the communication to maintain it. If I had an extra n hours in the week I would read up more about this. Maybe on my next plane ride…

Greetings from Allerton 2014

This is not a weeping angel. I hope.


“In the battle of ideas for metaphors for explaining these phenomena, graphs are doing pretty well for themselves.” — Jon Kleinberg (at the plenary).

Greetings from Allerton! I know blogging has been light since the semester started. I chalk it up to the whole “starting as an assistant professor is time-consuming” thing. I really hope there isn’t a strong converse because I definitely feel like I am operating above capacity.

Regardless, posts will resume soon. There have been lots of interesting talks here, and lots to follow up on, time permitting.