Linkage

Like many, I was shocked to hear of Prashant Bhargava’s death. I just saw Radhe Radhe with Vijay Iyer’s live score at BAM, and Bhargava was there. I met him once, through Mimosa Shah.

Most people know Yoko Ono as “the person who broke up the Beatles” and think of her art practice as a joke. She’s a much more serious artist than that, and this article tries to lay it out a bit better.

Via Celeste LeCompte, a tool to explore MIT’s research finances. It’s still a work-in-progress. I wonder how hard it would be to make such a thing for Rutgers.

In lieu of taking this course offered by Amardeep Singh, I could at least read the books on the syllabus, I guess.

Muscae volitantes, or floaty things in your eyes.

Survey on Ac and post-Ac STEM PhD careers

One of the things about teaching in a more industry-adjacent field like electrical engineering is that the vast majority of PhDs do not go on to academic careers. The way in which we have traditionally structured our programs is somehow predicated on the idea that students will go on to be academic researchers themselves, and there’s a long argument about the degree to which graduate school should involve vocational training that can fill many a post-colloquium dinner discussion.

Since I know there are non-academic PhDs who read this, there’s a survey out from Harvard researcher Melanie Sinche that is trying to gather data on the career trajectories of PhDs. The title of the article linked above, “Help solve the mystery of the disappearing Ph.D.s,” sounds really off to me — I know where the people I know from grad school ended up, and a quick glance through LinkedIn shows that the “where” is not so much the issue as the “how many.” For example, we talk a lot about how so many people from various flavors of theory end up in finance, but is it 50%? I suspect the number is much lower. Here’s a direct link to the survey. Fill it out and spread widely!

Annals of bad academic software: letters of recommendation

‘Tis the season for recommendation letters, and I again find myself thwarted by terrible UX and decisions made by people who manage application systems.

  • Why do I need to rank the candidate in 8 (or more!) different categories against people at my institution? Top 5% in terms of “self-motivation,” or top 10%? What if they were an REU student not from my school? What if I have no point of comparison? What makes you think people won’t either (a) make the numbers up or (b) give top scores across the board because that’s easier? Moreover, why make answering these stupid questions mandatory before I can submit my letter?
  • One system made me cut and paste my letter as text into a text box, then proceeded to strip out all the line/paragraph breaks. ‘Tis a web-app designed by an idiot, full of incompetent input-handling, and hopefully at least signifying to the committee that they should admit the student.
  • Presumably the applicant filled out my contact information already, so why am I being asked to fill it out again?

It’s enough to make me send all letters by post — it would save time, I think.

PaperCept, EDAS, and so on: why can’t we have nice things?

Why oh why can’t we have nice web-based software for academic things?

For conferences I’ve used PaperCept, EDAS (of course), Microsoft’s CMT, and EasyChair. I haven’t used HotCRP, but knowing Eddie it’s probably significantly better than the others.

I can’t think of a single time I’ve used PaperCept and had it work the way I expect. My first encounter was for Allerton, where it would not allow quotation marks in paper titles (an undocumented restriction!). Has nobody there heard of sanitizing inputs? The IEEE Transactions on Automatic Control also uses PaperCept, and its review form has a character limit (something like 5000). Given that a thorough review could easily run twice that length, I’m shocked at such an arbitrary restriction.

On the topic of journal software, the Information Theory Society semi-recently transitioned from Pareja to Manuscript Central. I have heard that Pareja, a home-grown solution, was lovable in its own way, but also a bit of a terror to use as an Associate Editor. Manuscript Central’s editorial interface, however, is like looking at the dashboard of a modern aircraft: perhaps efficient for the expert, but the interaction designers I know would blanch (or worse) to see it.

This semi-rant is due to an email I got about IEEE Collabratec (yeah, brah!):

IEEE is excited to announce the pilot rollout of a new suite of online tools where technology professionals can network, collaborate, and create – all in one central hub. We would like to invite you to be a pilot user for this new tool titled IEEE Collabratec™ (Formerly known as PPCT – Professional Productivity and Collaboration Tool). Please use the tool and tell us what you think, before we officially launch to authors, researchers, IEEE members and technology professionals like yourself around the globe.

What exactly is IEEE Collabratec?
IEEE Collabratec will offer technology professionals robust networking, collaborating, and authoring tools, while IEEE members will also receive access to exclusive features. IEEE Collabratec participants will be able to:

* Connect with technology professionals by location, technical interests, or career pursuits;
* Access research and collaborative authoring tools; and
* Establish a professional identity to showcase key accomplishments.

Parsing the miasma of buzzwords, my intuition is that this is supposed to be some sort of combination of LinkedIn, ResearchGate, and… Google Drive? Why does the IEEE think it has the expertise to pull off integration at this scale? Don’t get me wrong, there are tons of smart people in the IEEE, but this probably should be done by professionals, not non-profit professional societies. How much money is this going to cost? The whole thing reminds me of Illinois politics — a lucrative contract given to a wealthy campaign contributor after the election, with enough marketing veneer to avoid raising a stink. Except this is the IEEE, not Richard [JM] Daley (or Rahm Emanuel for that matter).

As far as I can tell, the software that we have to interact with regularly as academics has never been subjected to scrutiny by any user-interface designer. From online graduate school/faculty application forms (don’t get me started on the letter-of-rec interface) to conference review systems to journal editing systems, we face a terrible dilemma: pay exorbitant amounts of money to some third party, or use “home grown” solutions developed by our colleagues. For the former, there is precious little competition and no financial incentive to improve the interface. For the latter, we are at the whims of the home code-gardener. Do they care about user experience? Is that their expertise? Do they have time to make the thing both functional and a pleasure to use? Sadly, the answer is usually no, with perhaps a few exceptions.

I shake my fist at the screen.

Feature Engineering for Review Times

The most popular topic of conversation among information theory aficionados is probably the long review times for the IEEE Transactions on Information Theory. Everyone has a story of a very delayed review — either for their own paper or for a friend’s. The Information Theory Society Board of Governors and Editor-in-Chief have presented charts of “sub-to-pub” times and other statistics and are working hard on ways to improve the speed of reviews without impairing their quality. These are all laudable. But it occurs to me that there is room for social engineering on the input side of things as well. That is, if we treat the process as a black box, with inputs (papers) and outputs (decisions), what would a machine-learning approach to predicting decision time do?

Perhaps the most important (and sometimes overlooked) aspect of learning a predictor from real data is figuring out which features to measure for each input. Off the top of my head, things that may be predictive include:

  • length
  • number of citations
  • number of equations
  • number of theorems/lemmas/etc.
  • number of previous IT papers by the authors
  • h-index of authors
  • membership status of the authors (student members to Fellows)
  • associate editor handling the paper — although for obvious reasons we may not want to include this

I am sure I am missing a bunch of relevant measurable quantities here, but you get the picture.
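To make the black-box framing concrete, here is a minimal sketch of what such a predictor might look like. Everything here is invented for illustration: the feature values, the review times, and the choice of ordinary least squares are all assumptions, not real Transactions data.

```python
import numpy as np

# Hypothetical feature matrix: one row per past paper.
# Columns: [pages, citations, equations, theorems/lemmas]
X = np.array([
    [30, 40, 120,  8],
    [12, 25,  40,  3],
    [45, 60, 200, 15],
    [20, 35,  80,  5],
], dtype=float)

# Invented review times in months for those papers.
y = np.array([24.0, 10.0, 36.0, 15.0])

# Prepend an intercept column and fit ordinary least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_review_time(pages, citations, equations, theorems):
    """Predicted review time (months) for a new submission."""
    return float(coef @ [1.0, pages, citations, equations, theorems])

print(predict_review_time(25, 30, 100, 6))
```

With real historical data one would of course want regularization, held-out validation, and a hard look at confounders before reading anything into the coefficients.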

I would bet that paper length is a strong predictor of review time, not because it takes a longer time to read a longer paper, but because the activation energy of actually picking up the paper to review it is a nonlinear function of the length.

Doing a regression analysis might yield some interesting suggestions on how to pick coauthors and paper length to minimize review time. It could also help make the system go faster, no? Should we request this sort of statistics from the EiC?

Rutgers ECE is hiring!

Faculty Search, Department of Electrical and Computer Engineering, Rutgers University.

The Department of Electrical and Computer Engineering at Rutgers University anticipates multiple faculty openings in the following areas: (i) High-performance distributed computing, including cloud computing and data-intensive computing, (ii) Electronics, advanced sensors and renewable energy, including solar cells and detectors (bio, optical, RF) and, (iii) Bioelectrical engineering.

We are interested in candidates who can combine expertise in these areas with cyber-security, software engineering, devices, embedded systems, signal processing, and/or communications. In addition, we particularly welcome candidates who can contribute to broader application initiatives such as biomedical and health sciences, smart cities, or sustainable energy.

Outstanding applicants in all areas and at all ranks are encouraged to apply. Suitable candidates may be eligible to be considered for Henry Rutgers University Professorships in Big Data as part of a University Initiative.

Excellent facilities are available for collaborative research opportunities with various university centers such as the Wireless Information Network Laboratory (WINLAB), Microelectronics Research Laboratory (MERL), Institute for Advanced Materials, Devices and Nanotechnology (IAMDN), Center for Advanced Infrastructure and Transportation (CAIT), Rutgers Energy Institute (REI), and the Center for Integrative Proteomics Research, as well as with local industry.

A Ph.D. in a related field is required. Responsibilities include teaching undergraduate and graduate courses and establishing independent research programs. Qualified candidates should submit a CV, statements on teaching and research, and contacts of three references to this website. The review process will start immediately. For full consideration applications must be received by January 15, 2015.

Questions may be directed to:

Athina P. Petropulu
Professor and Chair
Department of Electrical and Computer Engineering
Rutgers University
athinap @ rutgers.edu.

EEO/AA Policy:
Rutgers is an Equal Opportunity / Affirmative Action Employer. Rutgers is also an ADVANCE institution, one of a limited number of universities in receipt of NSF funds in support of our commitment to increase diversity and the participation and advancement of women in the STEM disciplines.