ResearchGate: spam scam, or…?

I’ve been getting fairly regular automated emails lately from ResearchGate, whose messages feature pull-quotes from Forbes and NPR saying it’s changing the way we do research, blah blah blah. However, every empirical report I have heard indicates that once you join, it repeatedly spams all of your co-authors with requests to join, which makes it feel a bit more like Heaven’s Gate.

On a less grim note, the site’s promise to make your research “more visible” sounds a bit like SEO spam. Given the existence of Google Scholar, which is run by the SE that one would like to O, it seems slightly implausible.

Any readers want to weigh in on whether ResearchGate has been useful to them? Or is this mostly for people who don’t know how to make their own homepage with their papers on it (which is probably most faculty)?

B-log on IT

Via Tara Javidi I heard about a new blog on information theory: the Information Theory b-log, which has been going for a few months now but I guess in more “stealth mode.” It’s mostly posts by Sergio Verdú, with some initial posting by Thomas Courtade, but the most recent post is by Tara on how to compare random variables from a decision point of view. However, as Max noted:

All researchers working on information theory are invited to participate by posting items to the blog. Both original material and pointers to the web are welcome.

Linkage part deux

Most of these are stolen from MetaFilter.

Welcome back to public blogging, Dan.

All about time zones.

Musical instrument samples. My first UROP at MIT was at the Media Lab, where I helped record instrumentalists as part of a musical instrument identification system. Paris Smaragdis was there at the time, and now he is at UIUC where he has a lot of cool audio demos. There are also some great clips Inside the Music Library at the BBC.

Ridiculous computer interfaces from movies.

Linkage

I’m blogging from Chicago, where it is a balmy 42 degrees but sunny. Whither spring, I ask! Actually, I’m not blogging so much as linking to a bunch of stuff.

For San Diegans, the SD Asian Film Festival Spring Showcase is going on. It looks like I’ll miss a lot of it but I might try to catch something at the end of the week.

Less Pretentious & More Accurate Titles For Literary Masterworks — funny but possibly NSFW.

This home-scanning program seems creepy, regardless of the constitutionality issues.

Unfortunate headlines strike again.

I really like scallion pancakes. I’ll have to try this out when I get back to San Diego.

I agree that this video is awesome. Yo-Yo Ma and Lil Buck. I think that dude is made of rubber. And steel.

Tom Waits was inducted into the Rock and Roll Hall of Fame. I just hope I get to see him live some day.

Some things to skim or read from ArXiV when I get the chance:
Sequential Analysis in High Dimensional Multiple Testing and Sparse Recovery (Matt Malloy, Robert Nowak)
Differential Privacy: on the trade-off between Utility and Information Leakage (Mário S. Alvim, Miguel E. Andrés, Konstantinos Chatzikokolakis, Pierpaolo Degano, Catuscia Palamidessi)
Capacity of Byzantine Consensus with Capacity-Limited Point-to-Point Links (Guanfeng Liang, Nitin Vaidya)
Settling the feasibility of interference alignment for the MIMO interference channel: the symmetric square case (Guy Bresler, Dustin Cartwright, David Tse)
Decentralized Online Learning Algorithms for Opportunistic Spectrum Access (Yi Gai, Bhaskar Krishnamachari)
Online and Batch Learning Algorithms for Data with Missing Features (Afshin Rostamizadeh, Alekh Agarwal, Peter Bartlett)
Nonuniform Coverage Control on the Line (Naomi Ehrich Leonard, Alex Olshevsky)
Degree Fluctuations and the Convergence Time of Consensus Algorithms (Alex Olshevsky, John Tsitsiklis)

Privacy and entropy (needs improvement)

A while ago, Alex Dimakis sent me an EFF article on information theory and privacy, which starts out with Latanya Sweeney’s observation that gender, ZIP code, and birthdate are uniquely identifying for a large portion of the population (an updated observation was made in 2006).

What’s weird is that the article veers into “how many bits do you need to uniquely identify someone” based on self-information or surprisal calculations. It paints a bit of a misleading picture about how to answer the question. I’d probably start by taking \log_2(6.625 \times 10^9) \approx 32.6 bits and then look at the variables in question.
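For concreteness, here is a back-of-the-envelope sketch of the calculation I have in mind. The attribute cardinalities below are rough figures I picked for illustration (not the EFF article’s numbers), and the sum-of-logs step assumes the attributes are uniform and independent, which they are not in practice:

```python
import math

# Rough cardinalities -- illustrative assumptions, not the EFF article's figures:
WORLD_POP = 6.625e9      # population figure from the post
N_GENDERS = 2
N_BIRTHDATES = 365 * 82  # roughly 82 years of possible birthdates
N_ZIPS = 43_000          # approximate number of US ZIP codes

def bits(n):
    """Bits of identifying information from a uniform attribute with n values."""
    return math.log2(n)

# Bits needed to single out one person on Earth:
needed = bits(WORLD_POP)

# Bits supplied by the triple, assuming uniformity and independence:
available = bits(N_GENDERS) + bits(N_BIRTHDATES) + bits(N_ZIPS)

print(f"bits needed:    {needed:.1f}")    # about 32.6
print(f"bits available: {available:.1f}") # about 31.3
```

Restricted to the US population of roughly 3 \times 10^8 (about 28 bits needed), the ~31 bits supplied by the triple more than suffice, which is consistent with Sweeney’s observation; the real picture is messier because the attributes are neither uniform nor independent.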

However, the mere existence of this article raises a point: here is a situation where ideas from information theory and probability/statistics can be made relevant to a larger population. It’s a great opportunity to popularize our field (and demonstrate good ways of thinking about it). Why not do it ourselves?