Since I might be teaching detection and estimation next semester, I’ve been thinking a little bit about decision rules during my commute down the New Jersey Turnpike. The following question came to mind:
Suppose you see a car on the Turnpike whose driver is clearly driving dangerously (weaving between cars, going 90+ MPH, tailgating an ambulance, and the like). You have to decide whether the car has New Jersey or New York plates [*].
This is a hypothesis testing problem. I will assume for simplicity that New York drivers have cars with New York plates and New Jersey drivers have New Jersey plates [**]:
$H_0$: New Jersey driver
$H_1$: New York driver
Let $Y$ be a binary variable indicating whether or not I observe dangerous driving behavior. Based on my entirely subjective experience, I would say that, in terms of likelihoods,
$P(Y = 1 \mid H_1) > P(Y = 1 \mid H_0)$,
so the maximum likelihood (ML) rule would suggest that the driver is from New York.
However, if I take into account my (also entirely subjective) priors on the fraction of drivers from New Jersey and New York, respectively, I would have to say
$P(H_0 \mid Y = 1) > P(H_1 \mid Y = 1)$,
so the maximum a-posteriori probability (MAP) rule would suggest that the driver is from New Jersey.
Which is better?
[*] I am assuming North Jersey here, so Pennsylvania plates are negligible.
[**] This may be a questionable modeling assumption given suburban demographics.
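To make the ML/MAP disagreement concrete, here is a minimal sketch in Python. The likelihoods and priors below are invented placeholders standing in for my subjective estimates, not measured quantities.

```python
# ML vs. MAP for the Turnpike plate-guessing problem.
# All numbers are made-up stand-ins for subjective estimates.

# Likelihood of observing dangerous driving (Y = 1) under each hypothesis.
p_danger_given_nj = 0.10  # P(Y=1 | H0): New Jersey driver
p_danger_given_ny = 0.20  # P(Y=1 | H1): New York driver

# Priors: most cars on the Turnpike are from New Jersey.
prior_nj = 0.75  # P(H0)
prior_ny = 0.25  # P(H1)

# ML rule: pick the hypothesis with the larger likelihood.
ml_decision = "NY" if p_danger_given_ny > p_danger_given_nj else "NJ"

# MAP rule: pick the hypothesis with the larger posterior.
# P(H | Y=1) is proportional to P(Y=1 | H) * P(H), so the shared
# normalizing constant P(Y=1) can be dropped from the comparison.
map_decision = (
    "NY"
    if p_danger_given_ny * prior_ny > p_danger_given_nj * prior_nj
    else "NJ"
)

print(ml_decision)   # NY: the likelihood alone favors New York
print(map_decision)  # NJ: the heavy New Jersey prior flips the decision
```

With these numbers the likelihood ratio favors New York, but the 3-to-1 prior in favor of New Jersey outweighs it, so the two rules disagree exactly as above: MAP flips the ML decision whenever the prior ratio beats the likelihood ratio.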
Since I’m sick and I can’t really focus on math right now, here’s a flowchart to help you decide if you should go into campus.
A flowchart to help you decide whether to come into campus when you’re sick
Setting: a lone house stands on a Scottish moor. The fog is dense here. It is difficult to estimate where your foot will fall. A figure in a cloak stands in front of the door.
Figure: [rapping on the door, in a Highland accent] Knock knock!
Voice from inside: Who’s there?
Figure: Glivenko.
Voice: Glivenko who?
[The fog along the moor converges uniformly on the house, enveloping it completely in a cumulus.]
As part of ITAVision 2015 we are soliciting individuals and groups to submit videos documenting their love of information theory and/or its applications. During ISIT we put together a little example with our volunteers (it sounded better in rehearsal than at the banquet, alas). The song was Entropy is Awesome, based on Everything Is Awesome from The Lego Movie, obviously. If you want to sing along, here is the karaoke version:
The lyrics (so far) are:
Entropy is awesome!
Entropy is sum minus p log p
Entropy is awesome!
When you work on I.T.
Blockwise error vanishes as n gets bigger
Maximize I X Y
Let’s party forever
I get you, you get me
Communicating at capacity
Entropy is awesome…
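In case the line “Entropy is sum minus p log p” is too compressed to sing and parse at the same time: it refers to the Shannon entropy $H(X) = -\sum_x p(x) \log_2 p(x)$. A quick sketch, with made-up example distributions:

```python
import math

# Shannon entropy in bits: H = -sum_x p(x) * log2 p(x).
# Terms with p(x) = 0 contribute nothing (0 log 0 = 0 by convention).
def entropy(p):
    return -sum(px * math.log2(px) for px in p if px > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))   # about 0.469 bits: a biased coin
print(entropy([0.25] * 4))   # 2.0 bits: uniform over four outcomes
```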
This iteration of the lyrics is due to a number of contributors — truly a group effort. If you want to help flesh out the rest of the song, please feel free to email me and we’ll get a group effort going.
More details on the contest will be forthcoming!
While trying to show a student a generic example of a paper’s structure, I came across this gem:
A sample from the IT Transactions of 1992
I feel like I am reading a MacWrite document while wearing a flannel shirt.
This one might be a keeper!
I saw a paper on arXiv yesterday called Kalman meets Shannon, which got me thinking: in how many papers has someone met Shannon, anyway? Krish blogged about this a few years ago, but since then Shannon has managed to meet some more people. I plugged “meets Shannon” into Google Scholar, and out popped:
- Fourier: Wang and Giannakis, Wireless Multicarrier Communications: Where Fourier Meets Shannon, IEEE Signal Processing Magazine, 2000.
- Bode: Elia, When Bode meets Shannon: control-oriented feedback communication schemes, IEEE Transactions on Automatic Control, 2004.
- Maxwell: Chakraborty and Franceschetti, Maxwell meets Shannon: Space-time duality in multiple antenna channels, Allerton 2006, and Lee and Chung, Capacity scaling of wireless ad hoc networks: Shannon meets Maxwell, IEEE Transactions on Information Theory, 2012.
- Carnot: Shental and Kanter, Shannon Meets Carnot: Generalized Second Thermodynamic Law, Europhysics Letters, 2009.
- Nash: Berry and Tse, Shannon Meets Nash on the Interference Channel, IEEE Transactions on Information Theory, 2011.
- Walras: Jorswieck and Mochaourab, Shannon Meets Walras on Interference Networks, ITA Workshop 2013.
- Nyquist: Chen, Eldar, and Goldsmith, Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels, IEEE Transactions on Information Theory, 2013.
- Strang and Fix: Dragotti, Vetterli, and Blu, Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix, IEEE Transactions on Signal Processing, 2007.
- Blackwell and Le Cam: Raginsky, Shannon meets Blackwell and Le Cam: channels, codes, and statistical experiments, ISIT 2011.
- Wiener: Forney, On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener, Allerton 2003, and Forney, Shannon meets Wiener II: On MMSE estimation in successive decoding schemes, Allerton 2004 and arXiv 2004.
- Bellman: Meyn and Mathew, Shannon meets Bellman: Feature based Markovian models for detection and optimization, CDC 2008.
- Tesla: Grover and Sahai, Shannon meets Tesla: Wireless information and power transfer, ISIT 2010.
- Shortz: Efron, Shannon Meets Shortz: A Probabilistic Model of Crossword Puzzle Difficulty, Journal of the American Society for Information Science and Technology, 2008.
- Marconi: Tse, Modern Wireless Communication: When Shannon Meets Marconi, ICASSP 2006.
- Kalman: Gattami, Kalman meets Shannon, arXiv 2014.
Sometimes people are meeting Shannon, and sometimes he is meeting them, but each meeting produces at least one paper.