Bob Gallager on Shannon’s tips for research

One of the classes I enjoyed the most in undergrad was Bob Gallager’s digital communications class, 6.450. I was reminded of what an engaging lecturer he is when I attended the Bell Labs Shannon Celebration yesterday. Unfortunately, it being the last week of the semester, I could not attend today’s more technical talks. Gallager gave a nice, concise summary of what he learned from Shannon about how to do good theory work:

  1. Simplify the problem
  2. Relate it to other problems
  3. Restate the problem in as many ways as possible
  4. Break the problem into pieces
  5. Avoid getting locked into thinking ruts
  6. Generalize

As he said, “it’s a process of doing research… each one [step] gives you a little insight.” It’s tempting, as a theorist, to claim that at the end of this process you’ve solved the “fundamental” problem, but Gallager admonished us to remember that the first step is to simplify, often dramatically. As Alfred North Whitehead said, we should “seek simplicity and distrust it.”

Microsoft Research Silicon Valley to close

I’ve been in a non-blogging mode due to classes starting and being a bit overwhelmed by everything, but the news came out today that Microsoft Research Silicon Valley is closing. Although the article says that some researchers may find new homes at other MSR campuses, it’s not clear who is staying and who is going. As with all of the recent industrial research lab closures/downsizings, my first thoughts are with the researchers who were working there — even if one has an inkling that things are “going bad,” it still must be a shock to hear that you won’t be going to the office on Monday. Here’s hoping they land on their feet (and keep pushing the ball forward on the research front) soon!

Embracing failure and design vs. research

My college friend Ann Marie Thomas has a post up on the problematic use of the word “failure” in the discourse around education, technology, and design. I was listening to Dyson extol the virtues of failure on Science Friday recently, and it also made me cringe. Ann talks about the problems with “failure” from an education and design point of view, but I think it’s also problematic from the standpoint of teaching/training students to be researchers. One of the most normalizing things I heard in graduate school from my advisor was “well, that’s research for you” after I told him I had found a counterexample to everything I had “proved” in the previous week. I don’t think of that as “embracing failure” but rather as a recognition that the process is not one of continuous forward progress.

The sound-bite nature of the word does a disservice to the valuable concept, which is, as Ann says, to “try something.” I think it’s not (often) true that students are afraid to try things because they are afraid to fail. It’s far more likely that they are unsure of how to try things, or what to try. The problem is too abstract and it’s hard to find any inroad that might make sense. Or they thought they had an inroad, it’s absolutely not working, and they are frustrated because they can’t step back and say “this approach is bad.”

I can’t help but think that this talk of “failure” is somehow leaking in from positive psychology. I think it treats us like children who may be afraid to go down some stairs because they are too tall, or afraid to try the new food because it looks funny. It obscures the really difficult part, which is about where to start, not how you end.

A proposal for restructuring tenure

An Op-Ed in the NY Times (warning: paywall) suggests creating separate research and teaching tenure tracks and hiring people for one or the other. It’s an interesting proposal, and while the author, Adam Grant, marshals empirical evidence that the two skills are largely uncorrelated, along with research on designing incentives, the social and economic barriers to implementing such a scheme seem quite high.

Firstly, the economic. Grant-funded research faculty bring in big bucks (sometimes more modest bucks for pen-and-paper types) to the university. The overhead (55% at Rutgers, I think) on those grants helps keep the university afloat, especially at places that don’t have huge endowments. Research in technology areas can also generate patents, startups, and other vehicles that bring money to the university coffers. This gives the university an incentive to push the research agenda first. Grant funding may be drying up, but it’s still a big money maker.

On the social barriers, it’s simply true in the US that as a society we don’t value teaching very highly. Sure, we complain about the quality of education and its price and so on, but the taxpayers and politicians are not willing to put their money where their mouth is. We see this in the low pay for K-12 teachers and the rise of the $5k-per-class adjunct at the university level. If a university finds that it’s doing well on research but poorly on teaching, the solution-on-the-cheap is to hire more adjuncts.

Of course, the proposal also represents a change, and institutionalized professionals hate change. For what it’s worth, I think it’s a good idea to have more tenure-track teaching positions. However, forcing a choice — research or teaching — is a terrible idea. I do like research, but part of the reason I want to be at a university is to engage with students through the classroom. I may not be the best teacher now, but I want to get better. A better, and more feasible, short-term solution would be to create more opportunities and support for teacher development within the university. This would strengthen the correlation between research and teaching success.

Linkage

A map of racial segregation in the US.

Vi Hart explains serial music (h/t Jim CaJacob).

More adventures in trolling scam journals with bogus papers (h/t my father).

Brighten does some number crunching on his research notebook.

Jerry takes “disruptive innovation” to task.

Vladimir Horowitz plays a concert at the Carter White House. Also Jim Lehrer looks very young. The program (as cribbed from YouTube):

  • The Star-Spangled Banner
  • Chopin: Sonata in B-flat minor, opus 35, n°2
  • Chopin: Waltz in A minor, opus 34, n°2
  • Chopin: Waltz in C-sharp minor, opus 64, n°2
  • Chopin: Polonaise in A-flat major, opus 53, Héroïque
  • Schumann: Träumerei, Kinderszenen n°7
  • Rachmaninoff: Polka de W.R.
  • Horowitz: Variations on a theme from Bizet’s Carmen

The Simons Institute is going strong at Berkeley now. Moritz Hardt has some opinions about what CS theory should say about “big data,” and how it might require some adjustments to ways of thinking. Suresh responds in part by pointing out some of the successes of the past.

John Holbo is reading Appiah and makes me want to read Appiah. My book queue is already a bit long though…

An important thing to realize about performance art that makes a splash is that it can often be exploitative.

Mimosa shows us what she sees.

The NSF and the sequester

My department chair sent out a recent notice from the NSF about the impact of the sequestration order on NSF awards.

At NSF, the major impact of sequestration will be seen in reductions to the number of new research grants and cooperative agreements awarded in FY 2013. We anticipate that the total number of new research grants will be reduced by approximately 1,000.

In FY2011 the NSF funded 11,185 proposals, so that’s an 8.94% reduction. Yikes.
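
Just to spell out the arithmetic behind that percentage (taking the FY2011 count above as the baseline):

\[
\frac{1000}{11185} \approx 0.0894 \approx 8.94\%.
\]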

The things we know we don’t know

As a theoretical engineer, I find myself getting lulled into the trap of what I’m now starting to call “lazy generalization.” It’s a form of bland motivation that you often find at the beginning of papers:

Sensor networks are large distributed collections of low-power nodes with wireless radios and limited battery power.

Really? All sensor networks are like this? I think not. Lots of sensor networks are wired for power (think of the power grid) but still communicate wirelessly; others communicate over wires. This is the kind of ontological statement that metastasizes into the research equivalent of a meme — 3 years after Smart Dust appears, suddenly all papers are about dust-like networks, ignoring the vast range of other interesting problems that arise in other kinds of “sensor networks.”

Another good example is “it is well known that most [REAL WORLD THING] follows a power law,” which bugs Cosma to no end. We then get lots of papers which start with something about power laws and then proceed to analyze algorithms which work well on graphs with power-law degree distributions. And then later we get statements like “all natural graphs follow power laws, so here’s a theory for those graphs, which tells us all about nature.”
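
As an aside, here is a toy illustration of why the eyeball-the-log-log-plot argument is so weak; the specifics (lognormal data, the 10% tail cutoff, the least-squares fit) are my own sketch, not taken from any paper mentioned above. Data that are genuinely lognormal, and hence not power-law distributed, still produce an empirical CCDF that fits a straight line on log-log axes respectably well.

```python
# Toy sketch (my own illustrative example, not from any cited paper):
# fit a line to the log-log empirical CCDF of lognormal samples and see
# how "power-law-like" the result looks.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)

# Empirical CCDF at the sorted sample points, restricted to the upper tail
# (the region people usually eyeball when claiming a power law).
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size
tail = (ccdf <= 0.1) & (ccdf > 0)   # keep the tail, drop the final CCDF = 0 point

logx, logc = np.log10(x[tail]), np.log10(ccdf[tail])
slope, intercept = np.polyfit(logx, logc, 1)
resid = logc - (slope * logx + intercept)
r2 = 1.0 - np.sum(resid**2) / np.sum((logc - logc.mean())**2)

print(f"fitted log-log slope: {slope:.2f}, R^2 of the linear fit: {r2:.3f}")
# The fit tends to look respectable even though the data are lognormal,
# which is exactly why a straight-ish log-log plot is weak evidence.
```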

Yet another example is sparsity. Sparsity is interesting! It lets you do a lot of cool stuff, like compressed sensing. And it’s true that some real-world signals are approximately sparse in some basis. However, turn the crank and we get papers making crazy statements approximately equal to “all interesting signals are sparse.” This is trivially true if you take the signal itself as a basis element, but in the way it’s meant (e.g. “in some standard basis”), it is patently false.
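
For concreteness, here is a minimal sketch of what “approximately sparse in some basis” means; the particular signal, the choice of the DCT as the basis, and the 5% threshold are all my own illustrative choices. A signal built from a few cosines has essentially no zero samples in the time domain, yet almost all of its DCT coefficients are negligible.

```python
# Minimal sketch (my own toy example): a signal that is dense in the time
# domain but approximately sparse in the DCT basis.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)

# Three cosines plus a little noise: essentially every sample is nonzero.
x = (np.cos(2 * np.pi * 5 * t / n)
     + 0.5 * np.cos(2 * np.pi * 23 * t / n)
     + 0.2 * np.cos(2 * np.pi * 71 * t / n)
     + 0.01 * rng.standard_normal(n))

X = dct(x, norm="ortho")  # coefficients of x in the orthonormal DCT basis

# Count entries above 5% of the largest magnitude in each representation.
thresh = 0.05
frac_time = np.mean(np.abs(x) > thresh * np.abs(x).max())
frac_dct = np.mean(np.abs(X) > thresh * np.abs(X).max())

print(f"fraction of 'large' entries in the time domain: {frac_time:.2f}")
print(f"fraction of 'large' entries in the DCT basis:   {frac_dct:.3f}")
# The first fraction is close to 1; the second is only a handful of the
# 512 entries. Same signal, sparse in one basis and dense in another.
```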

So why are these lazy generalizations? Each follows a kind of fallacy that goes something like this:

  1. Topic A is really useful.
  2. By assuming some Structure B about Topic A, we can do lots of cool/fun math.
  3. All useful problems have Structure B.

Pattern matching, we get A = [sensor networks, the web, signal acquisition], and B = [low power/wireless, power laws, sparsity].

This post may sound like I’m griping about these topics being “hot” — I’m not. Of course, when a topic gets hot, you get a lot of (probably incremental) papers all over the place. That’s the nature of “progress.” What I’m talking about is the third point. When we go back to our alabaster spire of theory on top of the ivory tower, we should not fall into the same trap of saying that “by characterizing the limits of Structure B I have fundamentally characterized Topic A.” Maybe that’s good marketing, but it’s not very good science, I think. Like I said, it’s a trap that I’m sure I’m guilty of stepping into on occasion, but it seems to be creeping into a number of things I’ve been reading lately.

Collaborative paper filtering?

At ISIT 2012, there were posters up for a site called ShareRI.org: Share Research Ideas, an initiative of a student at UIUC named Quan Geng. It’s a platform for posting and discussing papers — sort of like creating a mini-forum around arXiv posts. It seems to be just starting out now, but I figured I would post the link to see if others take it up. I imagine that as things scale up it might run into problems similar to Wikipedia’s with trolling and the like, but it’s an interesting idea, one which has come up before in discussions with the IT Society Online Committee, for example.

Quote of the day: squabbles

I am writing a paper at the moment on some of my work with Steve Checkoway and Hovav Shacham on voting, which has involved a pretty broad literature search in social choice theory. I came across this quote about approval voting (AV) as an alternative to plurality voting (PV) in the paper “Going from theory to practice: the mixed success of approval voting” by Steven J. Brams and Peter C. Fishburn (Soc Choice Welfare 25:457–474, 2005):

The confrontation between theory and practice offers some interesting lessons on “selling” new ideas. The rhetoric of AV supporters has been opposed not only by those supporting extant systems like plurality voting (PV)—including incumbents elected under PV—but also by those with competing ideas, particularly proponents of other voting systems like the Borda count and the Hare system of single transferable vote.

We conclude that academics probably are not the best sales people for two reasons: (1) they lack the skills and resources, including time, to market their ideas, even when they are practicable; and (2) they squabble among themselves. Because few if any ideas in the social sciences are certifiably “right” under all circumstances, squabbles may well be grounded in serious intellectual differences. Sometimes, however, they are not.

I don’t think it’s particular to the social sciences…

On another note, the IEEE adopted AV at some point but then abandoned it. According to a report on the (very partisan) range voting website, the reasons for dropping it were shady.