Active learning survey

I’ve recently started working on a problem related to active learning, and I wanted to get caught up on the literature. Luckily for me, Sanjoy Dasgupta has a nice survey (non-paywall version here) from 2011 on the subject. It’s a good read, although I didn’t know “aggressive” and “mellow” were terms of art in active learning.

In active learning you have to query unlabeled points and ask for their labels — the goal is usually to learn something like a classifier, so you want to query a small number of points by being judicious about which ones to ask for. A mellow algorithm queries any informative point, whereas an aggressive algorithm queries the “most informative” point. The former are often easier to analyze because the latter end up sampling a nonrepresentative set of labeled points — if the points come i.i.d. from some distribution, the set of points an aggressive strategy would label will not look like a sample from that distribution. Future work may look at semi-aggressive strategies. Perhaps we could call this line of research “harshing the mellow” by developing “harsh functions” which score points according to informativeness…
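
To make the mellow/aggressive distinction concrete, here’s a minimal sketch in Python — my own toy illustration, not anything from the survey; the margin-based score, the threshold, and all the function names are invented for the example. An aggressive learner takes the argmax of an informativeness score, while a mellow learner queries uniformly among all points scoring above a threshold:

```python
import numpy as np

def informativeness(X, w):
    """A hypothetical "harsh function": score each unlabeled point by how
    close it lies to the decision boundary of a linear classifier w
    (negated margin, so smaller margin = higher score)."""
    margins = np.abs(X @ w) / np.linalg.norm(w)
    return -margins

def select_query(X_unlabeled, w, strategy="aggressive", threshold=-0.5, rng=None):
    """Return the index of the next point to query for a label."""
    rng = np.random.default_rng() if rng is None else rng
    scores = informativeness(X_unlabeled, w)
    if strategy == "aggressive":
        # aggressive: always grab the single most informative point
        return int(np.argmax(scores))
    # mellow: query uniformly among all sufficiently informative points,
    # so the labeled set looks more like a draw from the underlying
    # distribution (part of why mellow schemes are easier to analyze)
    candidates = np.flatnonzero(scores >= threshold)
    if candidates.size == 0:
        candidates = np.arange(len(X_unlabeled))
    return int(rng.choice(candidates))

# toy usage: 100 standard-normal points in the plane, classifier w = (1, -1)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
w = np.array([1.0, -1.0])
print(select_query(X, w, strategy="aggressive"))
print(select_query(X, w, strategy="mellow", rng=rng))
```

A semi-aggressive strategy could then interpolate between the two, say by sampling points with probability increasing in their score rather than thresholding or taking the argmax.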

Linkage (technical)

Having seen a talk recently by John Ioannidis on how medical research is (often) bunk, I found this corrective by Larry Wasserman nice to read.

Computer science conferences are often not organized by the ACM; instead, there are separate foundations for machine learning, vision, and so on that basically exist to organize the annual conference(s). At least, that is my understanding. A few conferences are run by the ACM, and there’s often debate about whether the ACM affiliation is worth it, given the overhead and so on. Boaz Barak had a post a little over a week ago making the case for sticking with the ACM. Given the IEEE’s hegemonic control over all things EE (more or less), this debate is new to me. As far as I can tell, ISIT exists to cover some of the cost of publishing the IT Transactions, and so it sort of has to be run by the IEEE.

As mentioned before, Tara Javidi has a nice post up on what it means for one random variable to be stochastically less variable than another.
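
For context, one standard way to make “less variable” precise is the convex stochastic order — Tara’s post frames the comparison decision-theoretically, so this textbook definition is offered only as a reference point, not as her formulation:

\[
X \preceq_{\mathrm{cx}} Y
\quad\Longleftrightarrow\quad
\mathbb{E}[\phi(X)] \le \mathbb{E}[\phi(Y)]
\ \text{ for all convex } \phi
\]

(whenever both expectations exist). Taking \( \phi(x) = \pm x \) forces \( \mathbb{E}[X] = \mathbb{E}[Y] \), and taking \( \phi(x) = x^2 \) then gives \( \mathrm{Var}(X) \le \mathrm{Var}(Y) \).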

Paul Mineiro has a bigger-picture view of NIPS — I saw there were lots of papers on “deep learning,” but it’s not really my area, so I missed many of those posters.

David Eppstein’s top 10 cs.DS papers from 2012.

B-log on IT

Via Tara Javidi I heard about a new blog on information theory: the Information Theory b-log, which has been going for a few months now, but I guess in more of a “stealth mode.” It’s mostly posts by Sergio Verdú, with some initial posting by Thomas Courtade, but the most recent post is by Tara on how to compare random variables from a decision-theoretic point of view. However, as Max noted:

All researchers working on information theory are invited to participate by posting items to the blog. Both original material and pointers to the web are welcome.