Since I might be teaching detection and estimation next semester, I’ve been thinking a little bit about decision rules during my commute down the New Jersey Turnpike. The following question came to mind:
Suppose you see a car on the Turnpike that is clearly being driven dangerously (weaving between cars, going 90+ MPH, tailgating an ambulance, and the like). You have to decide: does the car have New Jersey or New York plates [*]?
This is a hypothesis testing problem. I will assume for simplicity that New York drivers have cars with New York plates and New Jersey drivers have New Jersey plates [**]:
$H_0$: New Jersey driver
$H_1$: New York driver
Let $Y$ be a binary variable indicating whether or not I observe dangerous driving behavior. Based on my entirely subjective experience, I would say that in terms of likelihoods,

$\mathbb{P}(Y = 1 \mid H_1) > \mathbb{P}(Y = 1 \mid H_0),$
so the maximum likelihood (ML) rule would suggest that the driver is from New York.
However, if I take into account my (also entirely subjective) priors on the fraction of drivers from New Jersey and New York, respectively, I would have to say

$\mathbb{P}(H_0 \mid Y = 1) > \mathbb{P}(H_1 \mid Y = 1),$
so the maximum a-posteriori probability (MAP) rule would suggest that the driver is from New Jersey.
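The contrast can be made concrete with a toy calculation; the likelihoods and priors below are made-up numbers for illustration, not data:

```python
# Toy ML vs. MAP comparison with made-up numbers (purely illustrative).
# H0 = New Jersey driver, H1 = New York driver, Y = 1 means dangerous driving.

likelihood = {            # P(Y = 1 | H)
    "H0_NJ": 0.05,        # assumption: NJ drivers drive dangerously 5% of the time
    "H1_NY": 0.15,        # assumption: NY drivers drive dangerously 15% of the time
}
prior = {                 # P(H): most cars on the NJ Turnpike are from NJ
    "H0_NJ": 0.80,
    "H1_NY": 0.20,
}

# ML rule: pick the hypothesis with the larger likelihood.
ml_decision = max(likelihood, key=likelihood.get)

# MAP rule: pick the hypothesis with the larger posterior score (likelihood * prior).
posterior = {h: likelihood[h] * prior[h] for h in likelihood}
map_decision = max(posterior, key=posterior.get)

print(ml_decision)   # H1_NY: dangerous driving is more likely under H1
print(map_decision)  # H0_NJ: the prior on NJ drivers overwhelms the likelihood
```

With these (invented) numbers the two rules disagree exactly as described: the posterior scores are $0.05 \times 0.8 = 0.04$ for New Jersey versus $0.15 \times 0.2 = 0.03$ for New York.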
Which is better?
[*] I am assuming North Jersey here, so Pennsylvania plates are negligible.
[**] This may be a questionable modeling assumption given suburban demographics.
A postdoctoral position is available at the University of Michigan Electrical Engineering and Computer Science Department for a project related to anomaly detection in networked cyber-physical systems. The successful applicant will have knowledge in one or more of the following topics: convex optimization and relaxations, compressed sensing, distributed optimization, submodularity, control and dynamical systems or system identification. The project will cover both theory and algorithm development and some practical applications in fault and attack detection in transportation and energy networks. The position can start anytime in 2014 or early 2015. This is a one year position, renewable for a second year. Interested candidates should contact Necmiye Ozay at email@example.com with a CV and some pointers to representative publications.
Some old links I meant to post a while back but still may be of interest to some…
I prefer my okra less slimy, but to each their own.
Via Erin, A tour of the old homes of the Mission.
Also via Erin, Women and Crosswords and Autofill.
A statistician rails against computer science’s intellectual practices.
Nobel Laureate Randy Schekman is boycotting Nature, Science, and Cell. Retraction Watch is skeptical.
Here are a few papers that I saw at Allerton — more to come later.
Group Testing with Unreliable Elements
Arya Mazumdar, Soheil Mohajer
This was a generalization of the group testing problem: items are either positive or null, and can be tested in groups such that if any element of the group is positive, the whole group will test positive. The item states can be thought of as a binary column vector $x \in \{0,1\}^n$ and the tests as the rows of a matrix $A$: the $(i,j)$-th entry of $A$ is 1 if the $j$-th item is part of the $i$-th group. The test outcomes are $y = A \vee x$, where the "multiplication" is taken using Boolean OR. The twist in this paper is that they consider a situation where some unreliable elements can "pretend" to be positive, with a possibly different set of pretenders in each test. This is different from "noisy group testing," which was considered previously and is more like adding noise directly to the outcome vector $y$. They show achievable rates for detecting the positives using random coding methods (i.e. a random test matrix $A$). There was some stuff at the end about the non-i.i.d. case, but my notes were sketchy at that point.
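A minimal simulation of the noiseless Boolean test model, with the simple "declare null if it ever appears in a negative test" (COMP) decoder; the sizes and matrix density below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 20, 12, 2            # items, tests, number of positive items
x = np.zeros(n, dtype=bool)
x[rng.choice(n, size=k, replace=False)] = True   # true positives

# Random test matrix A: A[i, j] = True if item j is included in group i.
A = rng.random((m, n)) < 0.3

# Boolean "matrix-vector product": a test is positive iff it contains a positive item.
y = (A & x).any(axis=1)

# COMP decoder: an item is declared null if it appears in any negative test;
# this never misses a true positive, but may keep some false positives.
declared_positive = ~(A[~y].any(axis=0))
```

By construction `declared_positive` always contains all the true positives; with enough tests the false positives disappear as well.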
The Minimal Realization Problems for Hidden Markov Models
Qingqing Huang, Munther Dahleh, Rong Ge, Sham Kakade
The realization problem for an HMM is this: given the exact joint probability distribution on strings of a given length from an HMM, can we create a multilinear system whose outputs have the same joint distribution? A multilinear system looks (roughly) like this:

$w_{t+1} = A_{y_t} w_t, \qquad \mathbb{P}(y_1, y_2, \ldots, y_t) = \mathbf{1}^{\top} A_{y_t} \cdots A_{y_1} w_0.$

We think of $A_y$ as the transition associated with output $y$ (e.g. a stochastic matrix) and we want the output process $\{y_t\}$ to have the right joint distribution. This is sort of at the nexus of control/Markov chains and HMMs and uses some of the tensor ideas that are getting hot in machine learning. As the abstract puts it, the results are that they can efficiently construct realizations under a condition relating the size of the output alphabet and the minimal order of the realization.
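For intuition, an HMM itself induces such a multilinear system: one matrix per output symbol, with string probabilities given by products of those matrices. The toy parameters and timing convention below are my own illustration, not the paper's construction:

```python
import numpy as np
from itertools import product

# A 2-state, 2-symbol HMM: T[i, j] = P(next state j | state i),
# O[i, y] = P(output y | state i), pi = initial state distribution.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])

# One matrix per output symbol: A_y = diag(O[:, y]) @ T^T
# (convention: transition first, then emit).
A = {y: np.diag(O[:, y]) @ T.T for y in range(2)}

def string_prob(ys):
    """P(y_1, ..., y_t) as a product of per-symbol matrices applied to pi."""
    w = pi
    for y in ys:
        w = A[y] @ w
    return float(w.sum())

# Sanity check: the probabilities of all length-3 strings sum to 1.
total = sum(string_prob(ys) for ys in product(range(2), repeat=3))
```

The realization question is then the converse direction: recover some such family of matrices from the joint distribution alone.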
Differentially Private Distributed Protocol for Electric Vehicle Charging
Shuo Han, Ufuk Topcu, George Pappas
This paper was about a central aggregator trying to get multiple electric vehicle users to report their charging needs. The aggregator solves a utility maximization problem over the charging rates of the individual vehicles, subject to constraints coming from the users' reported charge demands.
They focus on the case where the charge demands are private but the charging rates can be public. This is in contrast to the mechanism design literature popularized by Aaron Roth and collaborators. They analyze a projected gradient descent procedure with Laplace noise added for differential privacy. It's a stochastic gradient method, and while the number of iterations affects the privacy parameter, for the examples they showed it converged in only a handful of time steps.
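The general pattern (a generic sketch, not the paper's exact algorithm) looks like this: a stand-in quadratic objective is minimized over a box constraint, with Laplace noise injected into each gradient step. All the numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Private charge demands for 5 vehicles (the sensitive data in this sketch).
demand = np.array([3.0, 1.5, 2.0, 4.0, 0.5])
r_max = 2.5                      # per-vehicle charging-rate cap (public)

# Toy objective: track the demands as closely as possible, min ||r - demand||^2.
def grad(r):
    return 2.0 * (r - demand)

r = np.zeros_like(demand)
step, noise_scale, n_iters = 0.2, 0.1, 20   # noise_scale stands in for the
                                            # Laplace parameter set by the privacy budget
for _ in range(n_iters):
    g = grad(r) + rng.laplace(scale=noise_scale, size=r.shape)  # privatized gradient
    r = np.clip(r - step * g, 0.0, r_max)   # projection onto the box constraint

# r ends up near min(demand, r_max), up to the injected noise.
```

The projection step is what keeps the noisy iterates feasible; the privacy analysis in the paper has to account for how many such noisy gradients are released.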
An Interactive Information Odometer and Applications
Mark Braverman and Omri Weinstein
This was a communication complexity talk about trying to maintain an estimate of the information complexity of a protocol $\Pi$ for computing a function $f(X, Y)$ interactively, where Alice has $X$ and Bob has $Y$. The (internal) information cost of the protocol is

$I(X ; \Pi \mid Y) + I(Y ; \Pi \mid X).$
Previous work has shown that the (normalized) communication complexity of computing $f$ on $n$-tuples of variables $\epsilon$-accurately approaches the minimum information complexity over all protocols for computing $f$ once $\epsilon$-accurately. At least I think this is what was shown previously; it's not quite my area. The result in this paper is a strong converse for this: the goal is to maintain an online estimate of the information cost during the protocol. The mechanism seems a bit like communication with feedback a la Horstein, but I got a bit confused as to what was going on. Basically it seems that Alice and Bob need to be able to agree on the estimate during the protocol and use a few extra bits added to the communication to maintain this estimate. If I had a few extra hours in the week I would read up more about this. Maybe on my next plane ride…
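As a sanity check on the quantity being tracked: internal information cost is usually defined as $I(X;\Pi \mid Y) + I(Y;\Pi \mid X)$. Here is a toy computation of it (entirely my own example) for the most naive protocol, where Alice simply announces her bit, with $X, Y$ independent uniform bits:

```python
import numpy as np
from itertools import product

def H(probs):
    """Entropy in bits of a probability vector (ignoring zeros)."""
    p = np.asarray([q for q in probs if q > 0])
    return float(-(p * np.log2(p)).sum())

# X, Y independent uniform bits; the "protocol" transcript is Pi = X.
# Joint pmf p(x, y, pi) as a dict keyed by (x, y, pi).
p = {}
for x, y in product([0, 1], repeat=2):
    p[(x, y, x)] = 0.25

def mutual_info_given(p, a_idx, b_idx, c_idx):
    """I(A ; B | C) from a joint pmf dict, averaging I(A;B) over values of C."""
    total = 0.0
    for c in {k[c_idx] for k in p}:
        pc = sum(v for k, v in p.items() if k[c_idx] == c)
        cond = {k: v / pc for k, v in p.items() if k[c_idx] == c}
        pa, pb, pab = {}, {}, {}
        for k, v in cond.items():
            pa[k[a_idx]] = pa.get(k[a_idx], 0) + v
            pb[k[b_idx]] = pb.get(k[b_idx], 0) + v
            pab[(k[a_idx], k[b_idx])] = pab.get((k[a_idx], k[b_idx]), 0) + v
        # I(A;B) = H(A) + H(B) - H(A,B) under the conditional distribution
        total += pc * (H(pa.values()) + H(pb.values()) - H(pab.values()))
    return total

# indices into the keys: 0 = X, 1 = Y, 2 = Pi
ic = mutual_info_given(p, 0, 2, 1) + mutual_info_given(p, 1, 2, 0)
# ic == 1.0 bit: Alice leaks her whole bit, Bob leaks nothing.
```

The odometer's job, as I understood the talk, is to track an estimate of this quantity online as the transcript unfolds, rather than computing it offline as above.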
This is not a weeping angel. I hope.
“In the battle of ideas for metaphors for explaining these phenomena, graphs are doing pretty well for themselves.” — Jon Kleinberg (at the plenary).
Greetings from Allerton! I know blogging has been light since the semester started. I chalk it up to the whole “starting as an assistant professor is time-consuming” thing. I really hope there isn’t a strong converse because I definitely feel like I am operating above capacity.
Regardless, posts to continue again soon. There have been lots of interesting talks here, and lots to follow up on, time permitting.
Since I’m sick and I can’t really focus on math right now, here’s a flowchart to help you decide if you should go into campus.
A flowchart to help you decide whether to come into campus when you’re sick
Setting: a lone house stands on a Scottish moor. The fog is dense here. It is difficult to estimate where your foot will fall. A figure in a cloak stands in front of the door.
Figure: [rapping on the door, in a Highland accent] Knock knock!
Voice from inside: Who’s there?
Figure: Glivenko.
Voice from inside: Glivenko who?
[The fog along the moor converges uniformly on the house, enveloping it completely in a cumulus.]