Faculty at Rutgers are unionized, and the union is currently fighting the university administration over its (secretive) use of Academic Analytics (AA) to rate the “scholarly productivity” of faculty and departments. For example, last year AA produced a ranking of Rutgers departments (pdf). It’s so great to be reduced to a single number!
As the statistical adage goes: garbage in, garbage out. It’s entirely unclear what AA is using to produce these numbers (although one could guess). It’s a proprietary system, and the university refuses to give access to the “confidential innards”; perhaps they don’t want others to see how the sausage is made. If we take just one likely input, impact factor, we can already see the poverty of single-index measures of productivity. Citation-based measures like impact factor vary widely across indexing systems: Scopus, Web of Knowledge, and Google Scholar all produce different numbers because they index different databases. At some point I went through my Google Scholar profile and lumped together papers that were really the same result (e.g. the journal version of a conference paper), but then I was told that this is a bad idea, not because it would lower my numbers (which it would), but because manipulating an index is bad form. If the index sucks, it’s the index-maker’s fault.
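To make that concrete, here is a toy sketch in Python. It has nothing to do with AA’s actual formula (which we don’t get to see), and the citation counts are invented; I’m just using the h-index as a stand-in for any citation-derived single number, to show how it shifts depending on whether a conference paper and its journal version are counted as one entry or two.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical profile: the two entries with 5 citations are the conference
# and journal versions of the same result. All numbers are made up.
separate = [30, 18, 12, 5, 5, 2]   # versions indexed as two papers
merged   = [30, 18, 12, 5 + 5, 2]  # versions lumped into a single entry

print(h_index(separate))  # 5
print(h_index(merged))    # 4 -- same work, same citations, lower "productivity"
```

Same papers, same citations, and the single number moves purely because of a bookkeeping choice the index-maker never has to justify.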
I wonder how many other universities are going through this process. Within one department the levels of “productivity” vary widely, and across disciplines the problem is only harder. The job faced by administrators is tough — how do they know where things can improve? But relying on opaque analytics is at best “statist-ism” and at worst reading entrails.