Family leave for graduate students: how does it work at your school?

I am trying to understand how family leave works for graduate students at different schools. More specifically, I am interested in how the finances for family leave work. Graduate students at Rutgers (as at many schools) are covered by a union contract. The contract specifies that in case of a pregnancy, the mother can take 6 weeks of paid recovery leave plus an additional 8 weeks of paid family leave. Non-carrying parents can take 8 weeks of paid family leave. While not generous by European standards, it’s better than what I would expect (ah, low expectations) here in the US.

This raises the question of how the university pays for the leave time. Students are either teaching or research assistants. 14 weeks off from teaching might include most of a semester, so the department needs a substitute. Trying to give the student an “easy TA” and still expecting them to come and teach when they are entitled to the leave is shady (although I have heard this idea floated). If they are paid through a grant, how should the leave time be charged?

I recently contacted authorities at Rutgers about this, and their response was not encouraging. Rutgers foists all charges off onto the department or grant/PI. If you are a TA and have a baby, the department is on the hook, financially, for finding a replacement. If you are a research assistant, they just charge the paid leave to the grant, as per the fringe rules in OMB Circular A-21.

I wrote a letter back about how disappointing this all is. The current system creates strong incentives for departments and PIs to deny appointments to students who have or may develop family obligations. This lack of support from the University could result in systematic discrimination against student parents. Whether examples of such discrimination exist is not clear, but I wouldn’t be surprised. Allocating the financial burden of leave to departments creates great inequities based on department size and budget, and not all departments can “close ranks” so easily.

For PIs covering students on grants with “deliverables,” the system encourages not supporting students on such grants. The rules in OMB Circular A-21 say that costs should be “distributed to all institutional activities in proportion to the relative amount of time or effort actually devoted by the employees.” It also implies that leave time should be charged via fringe benefits and not salary. It’s not entirely clear to me how a particular grant should be charged if a student participant goes on family leave, but the Rutgers policy seems to be to stick it to the PI.

The current situation leaves students in a predicament: when should they tell their advisor or department that they are pregnant? Many students are afraid of retribution or discrimination: I have heard from students that their friends say advisors “don’t like it when their students have kids.” The university’s policy on these issues only serves to legitimize these fears by creating uncertainty about whether they will be reappointed.

My question to the readers of this blog is this: how does your university manage paying for family leave for grad students?

LabTV, research stories, and video outreach

My lab was visited by Charlie Chalkin a few weeks ago. He was here to interview me and various students on our experiences in research for LabTV. LabTV was founded by Jay Walker and the NIH director Dr. Francis Collins with the aim of profiling NIH-funded researchers (as I now am). It was a great opportunity and a really short informal process, and I guess I can get some more hits from YouTube on the LabTV channel.

This experience got me thinking about how hard it is to connect with students at times. In particular, I think that many students don’t really see the process of how we got to where we are as their professors. Unless they have an academic in the family and also paid attention to their life story, they seem a bit mystified by it all. Obviously pop culture has a lot to do with this — movie and TV depictions of the professoriat are pretty far from reality. I have heard, however, from Ram Rajagopal that San Andreas has pretty much the most amazing interactions between professors and grad students. Heroism — that’s what we want.

It also got me thinking that departments might benefit from having short 2-minute profiles of their faculty members, but not from the technical-achievements angle. Instead, let them talk about what drew them to the problems they work on, how they ended up in this position, and why they like the job. The answers may be surprising, and I think students would see a different side of us than they get in the lecture hall.


Like many, I was shocked to hear of Prashant Bhargava’s death. I just saw Radhe Radhe with Vijay Iyer’s live score at BAM, and Bhargava was there. I met him once, through Mimosa Shah.

Most people know Yoko Ono as “the person who broke up the Beatles” and think of her art practice as a joke. She’s a much more serious artist than that, and this article tries to lay it out a bit better.

Via Celeste LeCompte, a tool to explore MIT’s research finances. It’s still a work-in-progress. I wonder how hard it would be to make such a thing for Rutgers.

In lieu of taking this course offered by Amardeep Singh, I could at least read the books on the syllabus I guess.

Muscae volitantes, or floaty things in your eyes.

Survey on Ac and post-Ac STEM PhD careers

One of the things about teaching in a more industry-adjacent field like electrical engineering is that the vast majority of PhDs do not go on to academic careers. The way we have traditionally structured our programs is somehow predicated on the idea that students will go on to be academic researchers themselves, and the question of how much graduate school should involve vocational training is an argument that can fill many a post-colloquium dinner discussion.

Since I know there are non-academic PhDs who read this, there’s a survey out from Harvard researcher Melanie Sinche that is trying to gather data on the career trajectories of PhDs. The title of the article linked above, “Help solve the mystery of the disappearing Ph.D.s,” sounds really off to me — I know where the people I know from grad school ended up, and a quick glance through LinkedIn shows that the “where” is not so much the issue as the “how many.” For example, we talk a lot about how so many people from various flavors of theory end up in finance, but is it 50%? I suspect the number is much lower. Here’s a direct link to the survey. Fill it out and spread widely!

Annals of bad academic software: letters of recommendation

‘Tis the season for recommendation letters, and I again find myself thwarted by terrible UX and decisions made by people who manage application systems.

  • Why do I need to rank the candidate in 8 (or more!) different categories vs. people at my institution? Top 5% in terms of “self-motivation” or top 10%? What if they were an REU student not from my school? What if I have no point of comparison? Do you really think people won’t either (a) make numbers up or (b) put top scores on everything because that is easier? Moreover, why make answering these stupid questions mandatory in order to submit my letter?
  • One system made me cut and paste my letter as text into a text box, then proceeded to strip out all the line/paragraph breaks. ‘Tis a web-app designed by an idiot, full of incompetent input-handling, and hopefully at least signifying to the committee that they should admit the student.
  • Presumably the applicant filled out my contact information already, so why am I being asked to fill it out again?

It’s enough to make me send all letters by post — it would save time, I think.

PaperCept, EDAS, and so on: why can’t we have nice things?

Why oh why can’t we have nice web-based software for academic things?

For conferences I’ve used PaperCept, EDAS (of course), Microsoft’s CMT, and EasyChair. I haven’t used HotCRP, but knowing Eddie it’s probably significantly better than the others.

I can’t think of a single time I’ve used PaperCept and had it work the way I expect. My first encounter was for Allerton, where it apparently would not allow quotation marks in the titles of papers (an undocumented restriction!). Then again, why has nobody heard of sanitizing inputs? The IEEE Transactions on Automatic Control also uses PaperCept, and the paper review form has a character limit (something like 5000 characters). Given that a thorough review could easily run to twice that length, I’m shocked at this arbitrary restriction.
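To spell out what “sanitizing inputs” would mean here: accept whatever title the author types, and escape the characters that are special to HTML only when the title is rendered, rather than rejecting quotation marks at submission time. A minimal sketch in Python (the surrounding form handling is hypothetical, and the real system presumably involves much more):

    import html

    def store_title(raw_title: str) -> str:
        """Accept the title as typed; no characters are forbidden."""
        return raw_title.strip()

    def render_title_html(stored_title: str) -> str:
        """Escape HTML-special characters so quotes survive intact on the page."""
        return html.escape(stored_title, quote=True)

    print(render_title_html(store_title('A "quoted" paper title')))
    # -> A &quot;quoted&quot; paper title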

On the topic of journal software, the Information Theory Society semi-recently transitioned from Pareja to Manuscript Central. I have heard that Pareja, a home-grown solution, was lovable in its own way, but was also a bit of a terror to use as an Associate Editor. Manuscript Central’s editorial interface is like looking at the dashboard of a modern aircraft, however — perhaps efficient to the expert, but the interaction designers I know would blanch (or worse) to see it.

This semi-rant is due to an email I got about IEEE Collabratec (yeah, brah!):

IEEE is excited to announce the pilot rollout of a new suite of online tools where technology professionals can network, collaborate, and create – all in one central hub. We would like to invite you to be a pilot user for this new tool titled IEEE Collabratec™ (Formerly known as PPCT – Professional Productivity and Collaboration Tool). Please use the tool and tell us what you think, before we officially launch to authors, researchers, IEEE members and technology professionals like yourself around the globe.

What exactly is IEEE Collabratec?
IEEE Collabratec will offer technology professionals robust networking, collaborating, and authoring tools, while IEEE members will also receive access to exclusive features. IEEE Collabratec participants will be able to:

* Connect with technology professionals by location, technical interests, or career pursuits;
* Access research and collaborative authoring tools; and
* Establish a professional identity to showcase key accomplishments.

Parsing the miasma of buzzwords, my intuition is that this is supposed to be some sort of combination of LinkedIn, ResearchGate, and… Google Drive? Why does the IEEE think it has the expertise to pull off integration at this scale? Don’t get me wrong, there are tons of smart people in the IEEE, but this probably should be done by professionals, not by a non-profit professional society. How much money is this going to cost? The whole thing reminds me of Illinois politics — a lucrative contract given to a wealthy campaign contributor after the election, with enough marketing veneer to avoid raising a stink. Except this is the IEEE, not Richard [JM] Daley (or Rahm Emanuel, for that matter).

As far as I can tell, the software that we have to interact with regularly as academics has never been subjected to scrutiny by any user-interface designer. From online graduate school/faculty application forms (don’t get me started on the letter of rec interface) to conference review systems, journal editing systems, and beyond, we are given a terrible dilemma: pay exorbitant amounts of money to some third party, or use “home grown” solutions developed by our colleagues. For the former, there is precious little competition and the vendors have no financial incentive to improve the interface. For the latter, we are at the whims of the home code-gardener. Do they care about user experience? Is that their expertise? Do they have time to make it both functional and a pleasure to use? Sadly, the answer is usually no, with perhaps a few exceptions.

I shake my fist at the screen.

Feature Engineering for Review Times

The most popular topic of conversation among information theory aficionados is probably the long review times for the IEEE Transactions on Information Theory. Everyone has a story of a very delayed review — either for their own paper or for a friend’s. The Information Theory Society Board of Governors and Editor-in-Chief have presented charts of “sub-to-pub” times and other statistics and are working hard on ways to improve the speed of reviews without impairing their quality. These are all laudable efforts. But it occurs to me that there is room for social engineering on the input side of things as well. That is, if we treat the process as a black box, with inputs (papers) and outputs (decisions), what would a machine-learning approach to predicting decision time look like?

Perhaps the most important (and sometimes overlooked) aspect of learning a predictor from real data is figuring out what features to measure about each of the inputs. Off the top of my head, things which may be predictive include:

  • length
  • number of citations
  • number of equations
  • number of theorems/lemmas/etc.
  • number of previous IT papers by the authors
  • h-index of authors
  • membership status of the authors (student members to Fellows)
  • associate editor handling the paper — although for obvious reasons we may not want to include this

I am sure I am missing a bunch of relevant measurable quantities here, but you get the picture.
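For concreteness, here is a minimal sketch of what such a feature vector might look like in Python. The class and field names are hypothetical stand-ins; whatever metadata the editorial system actually records would dictate the real ones.

    from dataclasses import dataclass

    @dataclass
    class PaperFeatures:
        """Hypothetical per-submission features (names are illustrative only)."""
        num_pages: int              # length
        num_citations: int
        num_equations: int
        num_theorems: int           # theorems/lemmas/etc.
        num_prior_it_papers: int    # previous IT papers by the authors
        max_author_h_index: int
        max_membership_level: int   # e.g., 0 = student member, ..., 4 = Fellow
        # the handling associate editor is deliberately left out, as noted above

    def to_vector(f: PaperFeatures) -> list[float]:
        """Flatten the features into a numeric vector for a regression model."""
        return [
            float(f.num_pages), float(f.num_citations), float(f.num_equations),
            float(f.num_theorems), float(f.num_prior_it_papers),
            float(f.max_author_h_index), float(f.max_membership_level),
        ]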

I would bet that paper length is a strong predictor of review time, not because it takes a longer time to read a longer paper, but because the activation energy of actually picking up the paper to review it is a nonlinear function of the length.

Doing a regression analysis might yield some interesting suggestions on how to pick coauthors and paper length to minimize the review time. This could also help make the system go faster, no? Should we request these sorts of statistics from the EiC?
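As a sketch of what that regression could look like (with entirely synthetic data standing in for the editorial records, since I obviously don’t have them):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_papers = 200

    # Synthetic stand-in data: columns follow the feature list above
    # (pages, citations, equations, theorems, prior IT papers, h-index,
    # membership level). Real data would come from the editorial system.
    X = rng.poisson(lam=[40, 30, 80, 10, 5, 20, 2], size=(n_papers, 7)).astype(float)

    # Fake "ground truth" review times (in months): grows with length plus
    # noise, just so the example runs end to end.
    y = 6.0 + 0.08 * X[:, 0] + rng.normal(scale=2.0, size=n_papers)

    model = LinearRegression().fit(X, y)

    feature_names = ["pages", "citations", "equations", "theorems",
                     "prior IT papers", "h-index", "membership level"]
    for name, coef in zip(feature_names, model.coef_):
        print(f"{name}: {coef:+.3f} months per unit")

On real submission records, a large positive coefficient on the length column would support the activation-energy guess above; on this synthetic data the fit just recovers (approximately) the 0.08 months per page that was planted.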