Annals of bad academic software: letters of recommendation

‘Tis the season for recommendation letters, and I again find myself thwarted by the terrible UX and design decisions of the people who manage application systems.

  • Why do I need to rank the candidate in 8 (or more!) different categories vs. people at my institution? Top 5% in terms of “self-motivation,” or top 10%? What if they were an REU student not from my school? What if I have no point of comparison? What makes you think people will do anything other than (a) make numbers up or (b) give top marks across the board because that is easier? Moreover, why make it mandatory to answer these stupid questions in order to submit my letter?
  • One system made me cut and paste my letter as text into a text box, then proceeded to strip out all the line/paragraph breaks. ‘Tis a web-app designed by an idiot, full of incompetent input-handling, and hopefully at least signifying to the committee that they should admit the student.
  • Presumably the applicant filled out my contact information already, so why am I being asked to fill it out again?

It’s enough to make me send all letters by post — it would save time, I think.

PaperCept, EDAS, and so on: why can’t we have nice things?

Why oh why can’t we have nice web-based software for academic things?

For conferences I’ve used PaperCept, EDAS (of course), Microsoft’s CMT, and EasyChair. I haven’t used HotCRP, but knowing Eddie it’s probably significantly better than the others.

I can’t think of a single time I’ve used PaperCept and had it work the way I expect. My first encounter was for Allerton, where it apparently would not allow quotation marks in the titles of papers (an undocumented restriction!). But then again, has nobody there heard of sanitizing inputs? The IEEE Transactions on Automatic Control also uses PaperCept, and the paper review form has a character limit (something like 5000 characters). Given that a thorough review could easily run to twice that length, I’m shocked at such an arbitrary restriction.
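For what it’s worth, handling quotation marks is not hard: a system can store whatever the author typed and escape it only when rendering the page. Here is a minimal sketch in Python of that pattern; the function names are hypothetical and this is emphatically not how PaperCept actually works, just an illustration of what “sanitizing inputs” could mean:

    import html

    def store_title(raw_title: str) -> str:
        # Keep the title exactly as the author typed it; trim only stray whitespace.
        return raw_title.strip()

    def render_title(stored_title: str) -> str:
        # Escape at display time, so quotes and angle brackets cannot break the HTML.
        return html.escape(stored_title)

    title = store_title('A "Nice" Conference Submission System')
    print(render_title(title))
    # A &quot;Nice&quot; Conference Submission System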

On the topic of journal software, the Information Theory Society semi-recently transitioned from Pareja to Manuscript Central. I have heard that Pareja, a home-grown solution, was lovable in its own way but also a bit of a terror to use as an Associate Editor. Manuscript Central’s editorial interface, however, is like looking at the dashboard of a modern aircraft: perhaps efficient for the expert, but the interaction designers I know would blanch (or worse) to see it.

This semi-rant is due to an email I got about IEEE Collabratec (yeah, brah!):

IEEE is excited to announce the pilot rollout of a new suite of online tools where technology professionals can network, collaborate, and create – all in one central hub. We would like to invite you to be a pilot user for this new tool titled IEEE Collabratec™ (Formerly known as PPCT – Professional Productivity and Collaboration Tool). Please use the tool and tell us what you think, before we officially launch to authors, researchers, IEEE members and technology professionals like yourself around the globe.

What exactly is IEEE Collabratec?
IEEE Collabratec will offer technology professionals robust networking, collaborating, and authoring tools, while IEEE members will also receive access to exclusive features. IEEE Collabratec participants will be able to:

* Connect with technology professionals by location, technical interests, or career pursuits;
* Access research and collaborative authoring tools; and
* Establish a professional identity to showcase key accomplishments.

Parsing the miasma of buzzwords, my intuition is that this is supposed to be some sort of combination of LinkedIn, ResearchGate, and… Google Drive? Why does the IEEE think it has the expertise to pull off an integration at this scale? Don’t get me wrong, there are tons of smart people in the IEEE, but this probably should be done by professionals, not by non-profit professional societies. How much money is this going to cost? The whole thing reminds me of Illinois politics: a lucrative contract handed to a wealthy campaign contributor after the election, with enough marketing veneer to avoid raising a stink. Except this is the IEEE, not Richard [JM] Daley (or Rahm Emanuel, for that matter).

As far as I can tell, the software we have to interact with regularly as academics has never been subjected to scrutiny by any user-interface designer. From online graduate school and faculty application forms (don’t get me started on the letter-of-rec interface) to conference review systems to journal editing systems, we face a terrible dilemma: pay exorbitant amounts of money to some third party, or use “home grown” solutions developed by our colleagues. For the former, there is precious little competition and no financial incentive to improve the interface. For the latter, we are at the whims of the home code-gardener. Do they care about user experience? Is that their expertise? Do they have time to make the thing both functional and a pleasure to use? Sadly, the answer is usually no, with perhaps a few exceptions.

I shake my fist at the screen.

Feature Engineering for Review Times

The most popular topic of conversation among information theory aficionados is probably the long review times for the IEEE Transactions on Information Theory. Everyone has a story of a badly delayed review, either for their own paper or for a friend’s. The Information Theory Society Board of Governors and Editor-in-Chief have presented charts of “sub-to-pub” times and other statistics and are working hard on ways to improve the speed of reviews without impairing their quality. These are all laudable efforts. But it occurs to me that there is room for social engineering on the input side as well. That is, if we treat the process as a black box with inputs (papers) and outputs (decisions), what would a machine-learning approach to predicting decision time look like?

Perhaps the most important (and sometimes overlooked) aspect of learning a predictor from real data is figuring out what features to measure about each input. Off the top of my head, things that may be predictive include:

  • length
  • number of citations
  • number of equations
  • number of theorems/lemmas/etc.
  • number of previous IT papers by the authors
  • h-index of authors
  • membership status of the authors (student members to Fellows)
  • associate editor handling the paper — although for obvious reasons we may not want to include this

I am sure I am missing a bunch of relevant measurable quantities here, but you get the picture.
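To make this concrete, each submission would get mapped to a feature vector along these lines. A back-of-the-envelope sketch in Python; the Paper record and every field in it are hypothetical, since I have no idea how the actual editorial data is stored:

    from dataclasses import dataclass

    @dataclass
    class Paper:
        num_pages: int
        num_citations: int
        num_equations: int
        num_theorems: int              # theorems/lemmas/propositions, etc.
        prior_it_papers_by_authors: int
        max_author_h_index: int
        highest_membership_grade: int  # e.g. 0 = student member, ..., 4 = Fellow

    def features(p: Paper) -> list[float]:
        # Map one submission to the candidate predictors listed above
        # (leaving out the handling associate editor, for the obvious reasons).
        return [
            float(p.num_pages),
            float(p.num_citations),
            float(p.num_equations),
            float(p.num_theorems),
            float(p.prior_it_papers_by_authors),
            float(p.max_author_h_index),
            float(p.highest_membership_grade),
        ]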

I would bet that paper length is a strong predictor of review time, not because it takes a longer time to read a longer paper, but because the activation energy of actually picking up the paper to review it is a nonlinear function of the length.

Doing a regression analysis might yield some interesting suggestions on how to pick coauthors and paper length to minimize the review time. It could also help make the system go faster, no? Should we request these sorts of statistics from the EiC?
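If one actually had the (anonymized) data, the regression itself would only take a few lines. Here is a sketch with scikit-learn on entirely made-up data, with a squared-pages column thrown in to probe the nonlinear “activation energy” guess above; none of the numbers mean anything:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200

    # Fake data standing in for the feature vectors above (one row per paper).
    pages = rng.integers(10, 60, size=n)
    other = rng.normal(size=(n, 6))          # the remaining features, whatever they are
    X = np.column_stack([pages, pages ** 2, other])

    # Fake review times in months, with a mildly nonlinear dependence on length.
    y = 3 + 0.05 * pages + 0.002 * pages ** 2 + rng.normal(scale=2.0, size=n)

    model = LinearRegression().fit(X, y)
    print("coefficient on pages:   ", model.coef_[0])
    print("coefficient on pages^2: ", model.coef_[1])

Looking at the coefficient on the squared-length term would be one crude way to check whether review time grows faster than linearly in paper length.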

Rutgers ECE is hiring!

Faculty Search, Department of Electrical and Computer Engineering, Rutgers University.

The Department of Electrical and Computer Engineering at Rutgers University anticipates multiple faculty openings in the following areas: (i) high-performance distributed computing, including cloud computing and data-intensive computing; (ii) electronics, advanced sensors, and renewable energy, including solar cells and detectors (bio, optical, RF); and (iii) bioelectrical engineering.

We are interested in candidates who can combine expertise in these areas with cyber-security, software engineering, devices, embedded systems, signal processing, and/or communications. In addition, we particularly welcome candidates who can contribute to broader application initiatives such as biomedical and health sciences, smart cities, or sustainable energy.

Outstanding applicants in all areas and at all ranks are encouraged to apply. Suitable candidates may be eligible to be considered for Henry Rutgers University Professorships in Big Data as part of a University Initiative.

Excellent facilities are available for collaborative research opportunities with various university centers such as the Wireless Information Network Laboratory (WINLAB), Microelectronics Research Laboratory (MERL), Institute for Advanced Materials, Devices and Nanotechnology (IAMDN), Center for Advanced Infrastructure and Transportation (CAIT), Rutgers Energy Institute (REI), and the Center for Integrative Proteomics Research, as well as with local industry.

A Ph.D. in a related field is required. Responsibilities include teaching undergraduate and graduate courses and establishing an independent research program. Qualified candidates should submit a CV, statements on teaching and research, and contact information for three references to this website. The review process will start immediately. For full consideration, applications must be received by January 15, 2015.

Questions may be directed to:

Athina P. Petropulu
Professor and Chair
Department of Electrical and Computer Engineering
Rutgers University
athinap @ rutgers.edu.

EEO/AA Policy:
Rutgers is an Equal Opportunity / Affirmative Action Employer. Rutgers is also an ADVANCE institution, one of a limited number of universities in receipt of NSF funds in support of our commitment to increase diversity and the participation and advancement of women in the STEM disciplines.

Harvard Business Review’s underhanded game

For our first-year seminar, we wanted to get the students to read some of the hyperbolic articles on data science. A classic example is the Harvard Business Review’s “Data Scientist: The Sexiest Job of the 21st Century.” However, when we downloaded the PDF version through the library proxy, we were informed:

Harvard Business Review and Harvard Business Publishing Newsletter content on EBSCOhost is licensed for the private individual use of authorized EBSCOhost users. It is not intended for use as assigned course material in academic institutions nor as corporate learning or training materials in businesses. Academic licensees may not use this content in electronic reserves, electronic course packs, persistent linking from syllabi or by any other means of incorporating the content into course resources

Harvard Business Publishing will be pleased to grant permission to make this content available through such means. For rates and permission, contact permissions@harvardbusiness.org.

So it seems that for a single article we’d have to pay extra, and since “any other means of incorporating the content” is also a violation, we couldn’t tell the students that they can go to the library website and look up an article in a publication whose name sounds like “Schmarbard Fizzness Enqueue” on sexy data science.

My first thought on seeing this restriction was that it would never hold up against a fair use defense, but the fine folks at the American Library Association say that it’s a little murky:

Is There a Fair Use Issue? Despite any stated restrictions, fair use should apply to the print journal subscriptions. With the database however, libraries have signed a license that stipulates conditions of use, so legally are bound by the license terms. What hasn’t really been fully tested is whether federal law (i.e. copyright law) preempts a license like this. While librarians may like to think it does, there is very little case law. Also, it is possible that if Harvard could prove that course packs and article permission fees are a major revenue source for them, it would be harder to declare fair use as an issue and fail the market effect factor. In other cases as in Georgia State, the publishers could not prove their permissions business was that significant which worked against them. Remember that if Harvard could prove that schools were abusing the restrictions on use, they could sue.

Part of the ALA’s advice is to use “alternate articles to the HBR 500 supplied by other vendors that do not have these restrictions.” Luckily for us, there is no shortage of hype about data science, so we could easily avoid HBR.

Given Harvard’s well-publicized open access policy and general commitment to sharing scholarly materials, the educational restriction on using materials strikes me as rank hypocrisy. Of course, maybe HBR is not really a venue for scholarly articles. Regardless, I would urge anyone considering including HBR material in their class to think twice before playing their game. Or to indulge in some civil disobedience, but this might end up hurting the libraries and not HBR, so it’s hard to figure out what to do.

Ethical questions in research funding: the case of ethics centers

I read a piece in Inside Higher Ed today on the ethics of accepting funds from different sources. In engineering this is certainly an important issue, but the article focused on Cynthia Jones, an ethics professor at UT-Pan American who directs the PACE ethics center. Jones had this stunningly ignorant thing to say about Department of Defense funding:

“What the hell are we going to use lasers for except to kill people?” Jones said. “But scientists get cut the slack.”

I’m flabbergasted that someone who works on philosophy applied to a technological field, namely biomedical ethics, believes that the only use of lasers is to kill people. Perhaps she thinks that using lasers in surgery is unethical. Or, more likely, she is unaware of how basic research in science is actually funded in this country.

Certainly, there’s been a definite shift over time in how defense-related agencies have targeted their funds — they fund much less basic research (or basic applied research) and have focused more on deliverables and technologies that more directly support combat, future warriors, and the like. This presents important ethical questions for researchers who may oppose the use of military force (or how it has been used recently) but who are interested in problems that could be “spun” towards satisfying these new objectives from DARPA, ARO, ONR, and AFOSR. Likewise, there are difficult questions about the line between independent research and consulting work for companies who may fund your graduate students. Drawing sharp distinctions in these situations is hard — everybody has their own comfort zone.

Jones wrote an article on “Dirty Money” that tries to develop rules for when money is tainted and when it is not. She comes up with a checklist at the end of the article that says funds should not be accepted if they

1- are illegal or that operate illegally in one’s country, or when the funding violates a generally accepted doctrine signed by one’s country (keeping in mind there is sometimes a distinction between legally acceptable and morally acceptable); or
2- originate from a donor who adds controls that would conflict with the explicit or implicit goals of the project to be funded or that would conflict with the proper functioning of the project or the profession’s ethical guidelines.

This, she says, is “the moral minimum.” But this framing, and the problem of funding centers that she addresses in general, sidesteps the ethical questions around research that is funded by writing proposals, and indeed the whole question of soliciting funds. Even in the world of charitable giving, the idea that funders wander through the desert with bags of money searching for fundees seems odd. I think the more difficult ethical quandary is that of solicitation. At a “moral minimum” the fundee has to think about these questions, but I think point 2 needs a lot more unpacking because of the chicken-and-egg problem of matching proposed research to program goals.

I don’t want to sound so super-negative! I think it’s great that someone is looking at the ethics of the economics of how we fund research. It’s just that there’s a whole murkier lake beyond the murky pond of funding centers, and the moral issues of science/engineering funding are not nearly as simple as Jones’s remark indicates.