PSA on IEEEtran.cls

Apparently there’s a PSA out about using the latest version of IEEEtran.cls. Stefan Moser is a big proponent of IEEEeqnarray which he says is even better than my beloved align environment. He also hates on the shorthand \[ \] for resulting in “poorly readable” source code, but I guess I disagree on that point. He even says it’s better than multline! I guess I’ll have to revise my LaTeX practices… but only when I write IEEE papers.
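For the curious, the syntax looks something like this (a minimal sketch based on my reading of Moser’s notes, so check the official documentation before trusting it): the column specification works like a tabular, with C giving a centered column with proper spacing around relation symbols.

    \documentclass{IEEEtran}
    \begin{document}
    \begin{IEEEeqnarray}{rCl}
      I(X;Y) & = & H(Y) - H(Y|X) \\
             & = & H(X) - H(X|Y)
    \end{IEEEeqnarray}
    \end{document}

The claimed win over align, as I understand it, is that you specify the number and alignment of the columns explicitly, and there are variants like IEEEeqnarray* for unnumbered equations.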

ITA 2015: quick takes

Better late than never, I suppose. A few weeks ago I escaped the cold of New Jersey to my old haunts of San Diego. Although La Jolla was always a bit fancy for my taste, it’s hard to beat a conference which boasts views like this:

A view from the sessions at ITA 2015

I’ll just recap a few of the talks that I remember from my notes — I didn’t really take notes during the plenaries so I don’t have much to say about them. Mostly this was due to laziness, but finding the time to blog has been challenging in this last year, so I think I have to pick my battles. Here’s a smattering consisting of

\{ \mathrm{talks\ attended} \} \cap \{ \mathrm{talks\ with\ understandable\ notes} \}

(Information theory)
Emina Soljanin talked about designing codes that are good for fast access to the data in distributed storage. Initial work focused on how to repair codes under disk failures; she looked at how easy it is to retrieve the information afterwards, to guarantee some QoS for the storage system.

Adam Kalai talked about designing compression schemes that work for an “audience” of decoders. The decoders have different priors on the set of elements/messages, so the idea is to design an encoder that works for this ensemble of decoders. I missed the first part of the talk, so I wasn’t quite sure how this relates to classical work on mismatched decoding in the information theory world.

Gireeja Ranade gave a great talk about defining the notions of capacity/rate needed to control a system which has multiplicative uncertainty: that is, x[n+1] = x[n] + B[n] u[n], where the uncertainty is in B[n]. She gave a couple of different notions of capacity, relating to the ratio | x[n]/x[0] |: either the expected value of the square, or of the log, appropriately normalized. She used a “deterministic model” to explain how control in this setting is kind of like controlling the number of significant bits in the state: uncertainty increases this number, and you need a certain “amount” of control to cancel that growth.
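As a toy illustration (entirely mine, with made-up numbers, so treat it as a cartoon of the problem rather than anything from the talk), here is that state evolution under a simple proportional controller when the gain B[n] is random:

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, k, eps = 100, 1.0, 0.4
    x = 1.0                                  # x[0] = 1, so log|x[n]/x[0]| = log|x[n]|
    for n in range(n_steps):
        B = rng.uniform(1 - eps, 1 + eps)    # multiplicative uncertainty in the gain
        u = -k * x                           # simple proportional controller
        x = x + B * u                        # x[n+1] = x[n] + B[n] u[n]
    print(np.log(abs(x)) / n_steps)          # per-step log growth rate; negative means stabilized

Whether the controller can keep the state from growing depends on the size of the uncertainty eps relative to the control authority, which is (very roughly) the tradeoff she formalized.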

(Learning and statistics)
I learned about active regression approaches from Sivan Sabato that provably work better than passive learning. The idea there is to use a partition of the X space and then fit piecewise constant approximations to a weight function that they use in a rejection sampler. The rejection sampler (which I thought of as sort of doing importance sampling to make sure they cover the space) helps limit the number of labels requested by the algorithm.

Somehow I had never met Raj Rao Nadakuditi until now, and I wish I had gotten a chance to talk to him further. He gave a nice talk on robust PCA, and in particular on how outliers “break” regular PCA. He proposed a combination of shrinkage and truncation to help make PCA a bit more stable/robust.

Laura Balzano talked about “estimating subspace projections from incomplete data.” She proposed an iterative algorithm for doing estimation on the Grassmann manifold that can do subspace tracking.

Constantine Caramanis talked about a convex formulation for mixed regression that gives a guaranteed solution, along with minimax sample complexity bounds showing that it is basically optimal.

Yingbin Liang talked about testing approaches for detecting whether there is an “anomalous structure” in a sequence of data. For a sequence Y_1, Y_2, \ldots, Y_n, the null hypothesis is that they are all i.i.d. \sim p, and the (composite) alternative is that there is an interval of indices which are \sim q instead. She proposed an RKHS-based discrepancy measure and a threshold test on this measure.

Pradeep Ravikumar talked about a “simple” estimator that “fixes” ordinary least squares with some soft thresholding (a toy sketch follows at the end of this section). He showed consistency for linear regression in several senses, competitive with the LASSO in some settings. Pretty neat, all said, although he also claimed that least squares was “something you all know from high school”; I went to a pretty good high school, and I don’t think we did least squares!

Sanmi Koyejo talked about a Bayesian decision theory approach to variable selection that involved minimizing a KL-divergence. Unfortunately, the resulting optimization ended up being NP-hard (for reasons I can’t remember), so they use a greedy algorithm that seems to work pretty well.
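Here is a toy version of what I understood Pradeep’s estimator to be (my own sketch with a made-up threshold; the actual estimator and its analysis are in his paper): run ordinary least squares, then soft-threshold the coefficients.

    import numpy as np

    def soft_threshold(v, lam):
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    rng = np.random.default_rng(1)
    n, d = 200, 10
    X = rng.standard_normal((n, d))
    beta = np.zeros(d)
    beta[:3] = [2.0, -1.5, 1.0]                      # sparse ground truth
    y = X @ beta + 0.5 * rng.standard_normal(n)

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # the "high school" part
    lam = 0.1                                        # made-up threshold level
    beta_hat = soft_threshold(beta_ols, lam)         # zero out the small coefficients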

(Privacy)
Cynthia Dwork gave a tutorial on differential privacy with an emphasis on the recent work involving false discovery rate. In addition to her plenary there were several talks on differential privacy and other privacy measures.

Kunal Talwar talked about their improved analysis of the SuLQ method for differentially private PCA. Unfortunately there were two privacy sessions in parallel, so I hopped over to see John Duchi talk about definitions of privacy and how definitions based on testing are equivalent to differential privacy. The testing framework makes it easier to prove minimax bounds, though, so it may be a more useful view at times.

Nadia Fawaz talked about privacy for time-series data such as smart meter data. She defined different types of attacks in this setting, showed that they correspond to mutual information or directed mutual information, and gave empirical results on a real data set.

Raef Bassily studied an estimation problem in the streaming setting where you want to get a histogram of the most frequent items in the stream. They reduce the problem to one of finding a “unique heavy hitter” and develop a protocol that looks sort of like a code for the MAC: they encode bits into a real vector, add noise, and then add those up over the reals. It has been accepted to STOC 2015, and he said the preprint will be up soon.
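From his description, I imagined something like the following cartoon (entirely my own toy, with made-up parameters: it is not their actual protocol, and the noise is not calibrated to give any real privacy guarantee):

    import numpy as np

    rng = np.random.default_rng(2)
    d, n, sigma = 8, 1000, 2.0
    # item 0 is the unique heavy hitter
    items = rng.choice(d, size=n, p=[0.44] + [0.08] * 7)

    total = np.zeros(d)
    for item in items:
        v = np.zeros(d)
        v[item] = 1.0                            # encode the item as a real vector
        total += v + rng.normal(0.0, sigma, d)   # each user adds their own noise
    print(np.argmax(total))                      # the heavy hitter survives the aggregate noise

The point, as I understood it, is that the aggregate looks like a noisy real-valued sum, as in a MAC, and the heavy hitter can be decoded from it.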

Student Promotion: Signal Processing Society Provides Steep Price Slash

Or SPSPSPSPS, for short. I’ve been over-busy and lax on posting, but I’ll provide some recap of ITA soon, as well as some notes from the Bellairs workshop I just came back from. The winter is a bit jarring. On to the subject:

In case you hadn’t heard, the IEEE Signal Processing Society is currently running a campaign that allows IEEE Student and Graduate Student members to join the SPS for free for the 2015 membership year. The promotion runs from now through 15 August 2015. Only IEEE Student and Graduate Student members are eligible; the offer does not apply to SPS Student or Graduate Student members renewing their membership for 2015.

This link directs to the IEEE website with both IEEE Student membership and the free SPS Student membership in the cart.

If a student is already an IEEE Student or Graduate Student member, he/she can use the code SP15STUAD at checkout to obtain his/her free membership.

If you have any questions regarding the SPS Free Student Membership campaign or other membership items, please don’t hesitate to contact Jessica Perry at jessica.perry@ieee.org.

Please spread the news to others who may be interested in joining the SP Society.

Linkage

Posting a hodgepodge of links after a rather wonderful time hiking and camping, solving puzzles, and the semester starting altogether too soon for my taste.

[Trigger warning] More details on Walter Lewin’s actions.

The unbearable maleness of Wikipedia.

Hanna Wallach’s talk at the NIPS Workshop on fairness.

Reframing Science’s Diversity Challenge by trying to move beyond the pipeline metaphor.

An essay by Daniel Solove on privacy (I’d recommend reading his books too but this is shorter). He takes on the “nothing to hide” argument against privacy.

I don’t like IPAs that much, but this lawsuit about lettering seems like a big deal for the craft beer movement.

I’ve always been a little skeptical of Humans of New York, but never was sure why. I think this critique has something to it. Not sure I fully agree but it does capture some of my discomfort.

Judith Butler gave a nice interview where she talks a bit about why “All Lives Matter,” while true, is not an appropriate rhetorical strategy: “If we jump too quickly to the universal formulation, ‘all lives matter,’ then we miss the fact that black people have not yet been included in the idea of ‘all lives.’ That said, it is true that all lives matter (we can then debate about when life begins or ends). But to make that universal formulation concrete, to make that into a living formulation, one that truly extends to all people, we have to foreground those lives that are not mattering now, to mark that exclusion, and militate against it.”

A nice essay on morality and progress with respect to Silicon Valley. Techno-utopianism running amok leads to bad results: “Silicon Valley’s amorality problem arises from the implicit and explicit narrative of progress companies use for marketing and that people use to find meaning in their work. By accepting this narrative of progress uncritically, imagining that technological change equals historic human betterment, many in Silicon Valley excuse themselves from moral reflection.”

Annals of bad academic software: letters of recommendation

‘Tis the season for recommendation letters, and I again find myself thwarted by terrible UX and decisions made by people who manage application systems.

  • Why do I need to rank the candidate in 8 (or more!) different categories vs. people at my institution? Top 5% in terms of “self-motivation,” or top 10%? What if they were an REU student not from my school? What if I have no point of comparison? Don’t you realize that people will either (a) make numbers up or (b) put top scores on everything because that is easier? Moreover, why make it mandatory to answer these stupid questions in order to submit my letter?
  • One system made me cut and paste my letter as text into a text box, then proceeded to strip out all the line/paragraph breaks. ‘Tis a web-app designed by an idiot, full of incompetent input-handling, and hopefully at least signifying to the committee that they should admit the student.
  • Presumably the applicant filled out my contact information already, so why am I being asked to fill it out again?

It’s enough to make me send all letters by post — it would save time, I think.