One of the things I’m always asked when giving a talk on differential privacy is “how should we interpret $\epsilon$?” There are a lot of ways of answering this, but one way that seems to make sense to people who actually think about risk, hypothesis testing, and prediction error is through the “area under the curve” metric, or AUC. This post came out of a discussion from a talk I gave recently at Boston University, and I’d like to thank Clem Karl for the more detailed questioning.
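To make that concrete: the hypothesis-testing view of $\epsilon$-differential privacy says that any test an adversary runs to distinguish two neighboring databases must satisfy $\mathrm{TPR} \le e^{\epsilon} \cdot \mathrm{FPR}$ and $\mathrm{TPR} \le 1 - e^{-\epsilon}(1 - \mathrm{FPR})$, which caps the achievable AUC at $e^{\epsilon}/(1+e^{\epsilon})$. Here is a quick numerical sketch of that calculation (my own back-of-the-envelope version, not anything formal from the talk):

```python
import numpy as np

# Under epsilon-DP, any test distinguishing neighboring databases obeys
#   TPR <= exp(eps) * FPR   and   TPR <= 1 - exp(-eps) * (1 - FPR),
# so the best achievable ROC curve is the lower envelope of these two
# lines, and the best achievable AUC is exp(eps) / (1 + exp(eps)).

def max_auc(eps, grid=10**6):
    fpr = np.linspace(0.0, 1.0, grid)
    tpr = np.minimum(np.exp(eps) * fpr, 1.0 - np.exp(-eps) * (1.0 - fpr))
    return np.sum(0.5 * (tpr[1:] + tpr[:-1]) * np.diff(fpr))  # trapezoid rule

for eps in [0.1, 0.5, 1.0, 2.0]:
    print(eps, max_auc(eps), np.exp(eps) / (1 + np.exp(eps)))
```

For $\epsilon = 1$ the bound is about $0.73$, which is a number that people who think in terms of classifiers and ROC curves can calibrate against.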
April 21, 2014
April 18, 2014
One thing that strikes me about US graduate programs in electrical engineering is that the student population is overwhelmingly international. For most of these students, English is a second or third language, and so we need to adopt more “ESL”-friendly pedagogical approaches to teaching writing. I came across a blog post from ATTW by Meg Morgan from UNC Charlotte that raises a number of interesting issues. For one, the term “ESL” is perhaps problematic. The linguistic and social differences in pedagogy between other countries and the US mean that we need to use different methods for engaging the students.
In terms of teaching technical writing at the graduate level, the issues may be similar but the students are generally older — they may have even had some writing experience from undergraduate or masters-level research. How should the “ESL” issue affect how we teach technical writing?
April 17, 2014
I saw a paper on ArXiV yesterday called Kalman meets Shannon, which got me thinking: in how many papers has someone met Shannon, anyway? Krish blogged about this a few years ago, but since then Shannon has managed to meet some more people. I plugged “meets Shannon” into Google Scholar, and out popped:
- Fourier: Wang and Giannakis, Wireless Multicarrier Communications: Where Fourier Meets Shannon, IEEE Signal Processing Magazine, 2000.
- Bode: Elia, When Bode meets Shannon: control-oriented feedback communication schemes, IEEE Transactions on Automatic Control, 2004.
- Maxwell: Chakraborty and Franceschetti, Maxwell meets Shannon: Space-time duality in multiple antenna channels, Allerton 2006, and Lee and Chung, Capacity scaling of wireless ad hoc networks: Shannon meets Maxwell, IEEE Transactions on Information Theory, 2012.
- Carnot: Shental and Kanter, Shannon Meets Carnot: Generalized Second Thermodynamic Law, Europhysics Letters, 2009.
- Nash: Berry and Tse, Shannon Meets Nash on the Interference Channel, IEEE Transactions on Information Theory, 2011.
- Walras: Jorswieck and Mochaourab, Shannon Meets Walras on Interference Networks, ITA Workshop 2013.
- Nyquist: Chen, Eldar, and Goldsmith, Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels, IEEE Transactions on Information Theory, 2013.
- Strang and Fix: Dragotti, Vetterli, and Blu, Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix, IEEE Transactions on Signal Processing, 2007.
- Blackwell and LeCam: Raginsky, Shannon meets Blackwell and Le Cam: channels, codes, and statistical experiments, ISIT 2011.
- Wiener: Forney, On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener, Allerton 2003, and Forney, Shannon meets Wiener II: On MMSE estimation in successive decoding schemes, Allerton 2004 and ArXiv 2004.
- Bellman: Meyn and Mathew, Shannon meets Bellman: Feature based Markovian models for detection and optimization, CDC 2008.
- Tesla: Grover and Sahai, Shannon meets Tesla: Wireless information and power transfer, ISIT 2010.
- Shortz: Efron, Shannon Meets Shortz: A Probabilistic Model of Crossword Puzzle Difficulty, Journal of the American Society for Information Science and Technology, 2008.
- Marconi: Tse, Modern Wireless Communication: When Shannon Meets Marconi, ICASSP 2006.
- Kalman: Gattami, Kalman meets Shannon, ArXiV 2014.
Sometimes people are meeting Shannon, and sometimes he is meeting them, but each meeting produces at least one paper.
April 17, 2014
A bit of the new, a bit of the old, for this Maundy Thursday.
- You Can Never Hold Back Spring (Tom Waits)
- Les gars qui vont à la fête (Stutzmann/Södergren, by Poulenc)
- Judas mercator pessimus (King’s Singers, by Gesualdo)
- Calling (Snorri Helgason)
- Hold Your Head (Hey Marseilles)
- Soutoukou (Mamadou Diabate)
- A Little Lost (Nat Baldwin)
- Gun Has No Trigger (Dirty Projectors)
- Stranger to My Happiness (Sharon Jones & The Dap-Kings)
- Dama Dam Mast Qalandar (Red Baraat)
- Libra Stripes (Polyrhythmics)
- Jaan Pehechan Ho (The Bombay Royale)
- Jolie Coquine (Caravan Palace)
- The Natural World (CYMBALS)
- Je Ne Vois Que Vous (Benjamin Schoos feat. Laetitia Sadier)
- Romance (Wild Flag)
April 16, 2014
I think it would be great to have a more formal way of teaching technical writing for graduate students in engineering. It’s certainly not being taught at (most) undergraduate institutions, and the mistakes are so common across the examples that I’ve seen that there must be a way to formalize the process for students. Since we tend to publish smaller things a lot earlier in our graduate career, having a “checklist” approach to writing/editing could be very helpful to first-time authors. There are several coupled problems here:
- students often don’t have a clear line of thought before they write,
- they don’t think of who their audience is,
- they don’t know how to rewrite, or indeed how important it is.
Adding to all of this is that they don’t know how to read a paper. In particular, they don’t know what to be reading for in terms of content or form. This makes the experience of reading “related work” sections incredibly frustrating.
What I was thinking was a class where students learn to write a (small) literature review on a topic of their choosing. The first part would cover how to read papers and make connections between them. What is the point of a literature review, anyway? The first objective is to develop a more systematic way of reading and processing papers. Everyone I know professionally, myself included, learned how to do this in an ad-hoc way, and I believe that developing a formula would help improve my own literature surveying. The second part of the course would teach rewriting (rather than writing). That is, instead of providing rules like “don’t use the passive voice so much,” we could focus on “how to revise your sentences to be more active.” I would also benefit from a systematic approach to this in my own writing.
I was thinking of a once-a-week writing-seminar-style class. Has anyone seen a class like this in engineering programs? Are there tips/tricks from other fields/departments that do have such classes? Even though it is “for social scientists,” Howard Becker’s book is a really great resource.
April 15, 2014
I always end up bookmarking a bunch of papers from ArXiV and then looking at them a bit later than I want. So here are a few notes on some papers from the last month. I have a backlog of reading to catch up on, so I’ll probably split this into a couple of posts.
arXiv:1403.3465v1 [cs.LG]: Analysis Techniques for Adaptive Online Learning
H. Brendan McMahan
This is a nice survey on online learning/optimization algorithms that adapt to the data. These are all variants of the Follow-The-Regularized-Leader algorithms. The goal is to provide a more unified analysis of online algorithms where the regularization is data dependent. The intuition (as I see it) is that you’re doing a kind of online covariance estimation and then regularizing with respect to the distribution as you are learning it. Examples include the McMahan and Streeter (2010) paper and the Duchi et al. (2011) paper. Such adaptive regularizers also appear in dual averaging methods, where they are called “prox-functions.” This is a useful survey, especially if, like me, you’ve kind of checked in and out with the online learning literature and so may be missing the forest for the trees. Or is that the FoReL for the trees?
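As a concrete instance of that intuition, here is a minimal sketch of diagonal AdaGrad-style updates (in the spirit of the McMahan/Streeter and Duchi et al. papers above; the code and constants are my own simplification, and the real algorithms add projections and more careful regularizers):

```python
import numpy as np

def adagrad(grad_fn, x0, eta=0.1, T=1000, eps=1e-8):
    """Diagonal AdaGrad: the regularizer adapts to the observed gradients,
    so coordinates that see large gradients get smaller step sizes."""
    x = x0.astype(float)
    g_sq = np.zeros_like(x)  # running sum of squared gradients per coordinate
    for t in range(T):
        g = grad_fn(x, t)                     # gradient of the round-t loss
        g_sq += g ** 2                        # the data-dependent "covariance"
        x -= eta * g / (np.sqrt(g_sq) + eps)  # per-coordinate step sizes
    return x

# demo: minimize f(x) = 0.5 * ||x - 1||^2
print(adagrad(lambda x, t: x - 1.0, np.zeros(5)))  # approaches the all-ones vector
```

The point is that the regularizer (equivalently, the per-coordinate step size) is built from the gradients as they arrive, which is exactly the data-dependent regularization the survey unifies.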
arXiv:1403.4011 [cs.IT]: Whose Opinion to follow in Multihypothesis Social Learning? A Large Deviation Perspective
Wee Peng Tay
This is a sort of learning-from-expert-advice problem, though not in the setting that machine learners would consider it. The more control-oriented folks would recognize it as a multiple-hypothesis test. The model is that there is a single agent (agent $0$) and $K$ experts (agents $1$ through $K$). The agent is trying to do an $M$-ary hypothesis test. The experts (and the agent) have access to local (private) observations. The observations come from a family of distributions determined by the true hypothesis. The agent needs to pick one of the experts to hire; the analogy is that you are an investor picking an analyst to hire. Each expert has its own local loss function, which is a function of the amount of data it has as well as the true hypothesis and the decision it makes. This is supposed to model a “bias” for the expert; for example, they may not care to distinguish between two hypotheses. The rest of the paper looks at finding policies/decision rules for the agents that optimize the exponents with respect to their local loss functions, and then at how agent $0$ should act to incorporate that advice. This paper is a little out of my wheelhouse, but it seemed interesting enough to take a look at. In particular, it might be interesting to some readers out there.
arXiv:1403.3862 [math.OC] Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
Ji Liu, Stephen J. Wright
This is another paper on lock-free optimization (cf. HOGWILD!). The key difference, as stated in the introduction, is that they “do not assume that the evaluation vector [$\hat{x}$] is a version of [the iterate $x$] that actually existed in the shared memory at some point in time.” What does this mean? It means that a local processor, when it reads the current state of the iterate, may be performing an update with respect to a point not on the sample path of the algorithm. They do assume that the delay between reading and updating the common state is bounded. To analyze this method they need a different analysis technique. The analysis is a bit involved and I’ll have to take a deeper look to understand it better, but from a bird’s-eye view this would make sense as long as the step size is chosen properly and the “hybrid” updates can be shown to be not too far from the original sample path. That’s the stochastic approximator in me talking, though.
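To illustrate the update model, here is a toy lock-free sketch (my own illustration, not the authors’ algorithm; in CPython the GIL means this shows the inconsistent-read pattern rather than any real parallel speedup):

```python
import threading
import numpy as np

# Toy lock-free coordinate descent for f(x) = 0.5 * ||Ax - b||^2.
# Each worker reads the shared iterate without a lock, so its snapshot may
# mix old and new coordinates (a point that never existed in shared memory),
# then writes back a single coordinate, also without a lock.

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.zeros(n)                      # shared iterate
L = (A ** 2).sum(axis=0)             # per-coordinate Lipschitz constants

def worker(seed, num_updates=2000, step=0.5):
    local = np.random.default_rng(seed)
    for _ in range(num_updates):
        i = local.integers(n)
        snapshot = x.copy()          # unlocked, possibly inconsistent read
        g_i = A[:, i] @ (A @ snapshot - b)
        x[i] -= step * g_i / L[i]    # unlocked single-coordinate write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("least-squares residual:", np.linalg.norm(A @ x - b))
```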
April 14, 2014
I meant to blog about this a while back, but somehow starting a new job/teaching are very time consuming (who knew?). Luckily, it’s about an older result of Banaszczyk (pronounced bah-nahsh-chik, I think):
Wojciech Banaszczyk. Balancing vectors and Gaussian measures of n-dimensional convex bodies. Random Structures & Algorithms, 12(4):351–360, 1998.
This result came to my attention from a talk given by Sasho Nikolov here at Rutgers on his paper with Kunal Talwar on approximating hereditary discrepancy (see Kunal’s post from last year). The result is pretty straightforward to state.
Banaszczyk’s Theorem. There exists a universal constant $c$ such that the following holds. Let $A = [a_1 \ a_2 \ \cdots \ a_n]$ be an $m \times n$ real matrix such that the $i$-th column satisfies $\|a_i\| \le 1$ for $i = 1, 2, \ldots, n$, and let $K$ be a convex body in $\mathbb{R}^m$ such that $\gamma_m(K) \ge 1/2$, where $\gamma_m$ is the standard Gaussian measure on $\mathbb{R}^m$. Then there exists a vector $x \in \{-1, +1\}^n$ such that $Ax \in cK$.
This is a pretty cool result! Basically, if your convex body $K$ is big enough to capture half of the probability of a standard Gaussian, then if you blow it up by $c$ to get $cK$, for any arbitrary collection of sub-unit-norm vectors $a_1, a_2, \ldots, a_n$ I can find a way to add and subtract them from each other so that the result ends up in $cK$.
I haven’t found a use for this result, but it’s a neat fact to keep in the bucket. Maybe it would be useful in alignment/beamforming schemes? Unfortunately, as far as I can tell he doesn’t tell you how to find this mysterious $x$, so…
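For a feel for the statement, here is a naive random search over sign vectors when the convex body is a Euclidean ball (purely illustrative, and nothing like an actual algorithm; a ball of radius roughly $\sqrt{m}$ already has Gaussian measure about $1/2$, so “small” here means $\|Ax\| = O(\sqrt{m})$):

```python
import numpy as np

# Columns of A are unit vectors; the theorem promises signs x in {-1,+1}^n
# with Ax landing in c*K. For K a Euclidean ball of Gaussian measure ~1/2
# (radius about sqrt(m)), just search randomly and track the best ||Ax||.

rng = np.random.default_rng(1)
m, n = 20, 40
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)       # normalize columns to unit norm

best = np.inf
for _ in range(20000):
    signs = rng.choice([-1.0, 1.0], size=n)
    best = min(best, np.linalg.norm(A @ signs))
print("sqrt(m) =", np.sqrt(m), "; best ||Ax|| found:", best)
```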
April 12, 2014
For those readers of the blog who have not submitted papers to machine learning (or related) conferences, the conference review process is a bit like a mini-version of a journal review. You (as the author) get the reviews back and have to write a response and then the reviewers discuss the paper and (possibly, but in my experience rarely) revise their reviews. However, they generally are supposed to take into account the response in the discussion. In some cases people even adjust their scores; when I’ve been a reviewer I often adjust my scores, especially if the author response addresses my questions.
This morning I had the singular experience of having a paper rejected from ICML 2014 in which all of the reviewers specifically marked that they did not read and consider the response. Based on the initial scores the paper was borderline, so the rejection is not surprising. However, we really did try to address their criticisms in our rebuttal. In particular, some misunderstood what our claims were. Had they bothered to read our response (and proposed edits), perhaps they would have realized this.
Highly selective (computer science) conferences often tout their reviews as being just as good as a journal’s, but in both outcomes and process that’s a pretty ludicrous claim. I know this post may sound like sour grapes, but it’s not about the outcome; it’s about the process. Why bother with the facade of inviting authors to rebut if the reviewers are unwilling to read the response?
March 19, 2014
S. Raj Rajagopalan and collaborators at Honeywell are doing some security research on making better passwords. They are looking for some people to do a quick study on password design.
Along with a couple of Honeywell security researchers I am running a study on a rather familiar problem for most of us – creating memorable but secure passwords, i.e. how to generate passwords that are both suitably random and memorable. We have just launched a simple user study that asks volunteers to participate in an interactive session that lets them choose password candidates and see how well they remember them. Needless to say, these are not actual passwords used by any system, only strings that could be used as passwords.
No personal information is collected in the study, and the system only stores the data that is actually provided by the user. To that end, you may choose not to provide any particular piece of information. The study takes only a couple of minutes to finish. You may run it multiple times if you wish (and you will likely get different use cases), but you will have to clear your browser cache to get a fresh configuration.
We need at least 300 participants to get statistical significance, so we would appreciate it if you could participate in the study.
Thanks for your help. Any questions on the study may be directed to me.
March 16, 2014
“Cascading Style Sheets are a cryptic language developed by the Freemasons to obscure the visual nature of reality.” (James Mickens)
Computer scientists often look at Web pages in the same way that my friend looked at farms. People think that Web browsers are elegant computation platforms, and Web pages are light, fluffy things that you can edit in Notepad as you trade ironic comments with your friends in the coffee shop. Nothing could be further from the truth. A modern Web page is a catastrophe. It’s like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks.
It’s a fun read, but also a sentiment that may echo with those who truly believe in “clean slate networking.” I remember going to a tutorial on LTE and having a vision of what 6G systems will look like. One thing that is not present, though, is the sense that the system is unstable, and that the introduction of another feature in communication systems will cause the house of cards to collapse. Mickens seems to think the web is nearly there. The reason I thought of this is the recent fracas over the US ceding control of ICANN, and the sort of doomsdaying around that. From my perspective, network operators are sufficiently conservative that they can’t/won’t willy-nilly introduce new features that are only half-supported, as happens in the Web. The result is a (relatively) stable networking world that appears to detractors as somewhat Jurassic.
I’d argue (with less hyperbole) that some of our curriculum ideas also suffer from the accretion of old ideas. When I took DSP oh-so-long ago (13 years, really?) we learned all of this Direct Form II Transposed blah blah, which I’m sure was useful for DSP engineers at TI to know at some point, but has no place in a curriculum now. And yet I imagine there are many places that still teach it. If anyone reads this still, what are the dinosaurs in your curriculum?