It’s been a busy January — I finished up a family vacation, moved into a new apartment, helped run the MIT Mystery Hunt, started teaching at Rutgers, and had two conference deadlines back to back. One of my goals for the year is to blog a bit more regularly — I owe some follow-up to my discussion of the MAP perturbation work, which I will be talking about at ITA.
In the meantime, however, one of the big tasks in January is graduate admissions. I helped out with admissions at Berkeley for 4 years, so I’m familiar with reviewing the (mostly international) transcripts, but the level of detail in transcript reporting varies widely. The same is true for letters of recommendation. I’m sure this is culturally mediated, but some recommenders write 1-2 sentences, and some write paeans. This makes calibrating across institutions very difficult. While the tails of the distribution are easy to assess, decisions about the middle are a bit tougher.
Rutgers, like many engineering schools across the country, has a large Masters program. Such programs serve as a gateway for foreign engineers to enter the US workforce — it’s much easier to get hired if you’re already here. They also make money for the university, since most students pay their own way. In that regard, Rutgers is a pretty good deal, being a state school. However, a large program also means making admissions decisions about the middle of the distribution. What one wants is to estimate the probability that an applicant will succeed in their Masters-level classes.
It’s a challenging problem — without the same level of detail about the candidates, their schools, and how their recommenders feel about their chances, one is left with a kind of robust estimation problem with a woefully underspecified likelihood. I’ve heard some people (at other schools) discuss GPA cutoffs, but those aren’t calibrated either. More detail about a particular individual doesn’t really help. I think it’s a systemic problem with how graduate applications work in larger programs; the current model seems better suited to small departments with moderate cohort sizes.
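To make the calibration point concrete, here is a toy sketch (all school names and grading statistics below are hypothetical, purely for illustration) of why a single raw GPA cutoff isn’t comparable across institutions; standardizing within each school is a crude first correction:

```python
# Hypothetical grading distributions on a 10-point scale: same raw GPA,
# very different meanings at schools that grade differently.
school_stats = {
    # school: (mean GPA, std dev) -- made-up numbers
    "A": (8.5, 0.6),   # generous grading
    "B": (7.0, 1.0),   # stricter grading
}

def standardized_gpa(school, gpa):
    """Return the applicant's GPA as a z-score within their own school."""
    mean, std = school_stats[school]
    return (gpa - mean) / std

# A raw cutoff of 8.0 would treat these two applicants identically,
# but an 8.0 at school A is below that school's average, while an 8.0
# at school B is a full standard deviation above average.
print(round(standardized_gpa("A", 8.0), 2))  # -0.83
print(standardized_gpa("B", 8.0))            # 1.0
```

Of course, the real difficulty is that one rarely knows each school’s grading distribution in the first place, which is exactly the underspecified-likelihood problem.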
Via Inherent Uncertainty I learned that David Blackwell passed away on July 8th.
Prof. Blackwell’s original paper (with Leo Breiman and Aram Thomasian) on the arbitrarily varying channel was an inspiration to me, and he served on my thesis committee a scant 2 years ago.
I’ll always remember what he told me when I handed him a draft of my thesis. “The best thing about Bayesians is that they’re always right.”
I just came back from the Illinois Wireless Summer School, hosted by the Illinois Center for Wireless Systems. Admittedly, I had a bit of an ulterior motive in going, since it meant a trip home to see my parents (long overdue!), but I found the workshop a pretty valuable crash course covering the whole breadth of wireless technology. The week started out with lectures on propagation, wireless channel modeling, and antennas, and ran up to a description of WiMAX and LTE. Slides for some of the lectures are available online.
Some tidbits and notes:
- Venugopal Veeravalli gave a nice overview of channel modeling, which was a welcome refresher, since I hadn’t revisited the material since taking David Tse’s wireless course at the beginning of grad school. Xinzhou Wu talked about modulation issues and pointed out that universal frequency reuse may be bad for users near the edge of a cell; he also mentioned Flarion’s flexband idea, which I hadn’t heard about before.
- Jennifer Bernhard talked about antenna design, which I had only the sketchiest introduction to 10 years ago. She pointed out that actually getting independent measurements from two antennas by spacing them just the right distance apart is nearly impossible, so coupling effects should be worked into MIMO models (at least, this is what I got out of it). Also, the placement of the antenna on your laptop matters a lot — my Mac is lousy at finding the WiFi because its antenna is sub-optimally positioned.
- Nitin Vaidya discussed Dynamic Source Routing, which I had heard about but never really learned before.
- Dina Katabi and Sachin Katti talked about network coding and its implementation. The issues with asynchronous communication for channel estimation in analog network coding were something I had missed in earlier encounters with their work.
- P. R. Kumar talked about his work with I-Hong Hou and Vivek Borkar on QoS guarantees in a simple downlink model. I had seen this talk at Infocom, but this longer version had more details, so I think I understood it better this time.
- Wade Trappe and Yih-Chun Hu talked about a ton of security problems (so many that I got a bit lost, but luckily I have the slides). In particular, they argued that many of the standard adversarial assumptions are unrealistic for wireless, since adversaries can eavesdrop, jam, spoof users, and so on. They mentioned the Dolev-Yao threat model, from FOCS 1981, which I should probably read more about. There were some slides on intrusion detection, which I think is an interesting problem that could also be approached from the EE/physical-layer side.
- R. Srikant and Attila Eryilmaz gave a nice (but dense) introduction to resource allocation and network utility maximization from the optimization standpoint. Srikant showed how some of the results Kumar talked about can also be interpreted in this framework. A little bit of MCMC also showed up, which got me thinking about some other research problems…
- The industry speakers didn’t post their slides, but they offered a different (and a bit less tutorial-style) perspective. Victor Bahl from MSR gave a talk on white space networking (also known as cognitive radio, though he seems to eschew that term). Dilip Krishnaswamy (Qualcomm) talked about WWAN architectures, which differ architecturally from voice and other kinds of networks; where the internet cloud sits with respect to the other system elements was particularly interesting to me. Amitava Ghosh (Motorola) broke down LTE and WiMAX for us in gory detail.
I had an interesting conversation two weeks ago about the working process for doing theory work in CS and EE. We discussed two extremes of working styles. In one, you meticulously prove small statements, type them up as you go along, getting the epsilons and deltas right and not working on the next step until the current step is totally set. I call this “prove as you go.” The other is that you sketch out some proofs to convince yourself that they are probably true (in some form) and then try to chase down the implications until you have the big result. When some deadline rolls around, you then build up the proofs for real. This could be thought of as “scaffolding first.” Fundamentally, these are internal modes of working, but because of the pressure to publish in CS and EE they end up influencing how people view theory work.
I filed my thesis today. It was ok.
The defense/talk went pretty well, I thought. And there was champagne afterwards! However, filing the thesis has been pushed off until early in the summer. Although this has the benefit of making the finished document much better, the downside is that such a large object sitting in my brain tends to crowd out other more exciting projects that I would like to work on or think about. In the meantime, I will try to blog a bit more frequently so that I don’t end up with tunnel vision.
Robin sent along a link to some mathematical models in metal. They look like more hard-core versions of the sculpted surfaces seen in display cases in math department hallways. I took a few math classes at UIUC during high school and I remember walking past these dusty shelves filled with plaster (?) shapes with intriguing names like “the ring of the nodoid.”
This is why I’ve not been posting, but hopefully that will change.
Beyond the ABCs of AVCs: robust and adaptive strategies for future communication systems
Anand D. Sarwate
Advisor: Michael Gastpar
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Thursday, May 15
521 Cory Hall
Cutting-edge application areas such as cognitive radio, ad-hoc networks, and sensor networks are changing the way we think about wireless services. The demand for ubiquitous communication and computing requires flexible communication protocols that can operate in a range of conditions. This thesis adopts and extends a mathematical model for these communication systems that accounts for uncertainty and time variation in link qualities. The arbitrarily varying channel (AVC) is an information theoretic channel model that has a time varying state with no statistical description. We assume the state is chosen by an adversarial jammer, reflecting the demand that our constructions work for all state sequences. In this talk I will show how resources such as secret keys, feedback, and side-information can help communication under this kind of uncertainty. I will present results on list coding and rateless coding for discrete channel models and coding with side information for continuous channels.
And of course the most important part: refreshments will be provided!
It’s 210 pages + 12 pages of front matter and out to my committee. My wrists are killing me.
So my thesis is almost in to my committee, but I keep making cosmetic changes. After dinner I didn’t feel capable of editing the main text, so I finished up adding in the chapter headings. I think they might imply I’m a bit more cynical than I am, but…
List decoding for discrete AVCs
Derandomization for discrete AVCs
Limited feedback and rateless coding
Continuous AVCs: the Gaussian case
My dissertation committee had to be reshuffled on short notice because my outside-the-department member became inside-the-department. Luckily for me, Prof. David Blackwell kindly agreed to serve on my committee. I had a meeting with him two weeks ago, wrote up a pseudo-précis of my dissertation, and had a short chat about it and Bayesianism today. Forty-eight years ago, he wrote the first paper on the arbitrarily varying channel, and today he gave me an actual reprint of the article!
The staples are a little bit oxidized with age. I guess even back then people couldn’t get rid of their reprints. QUESTA sent me reprints of my paper there and they’re just occupying space in my filing drawer. The worst part was that there was no option I could click for “no reprints — save the trees.” Of course, given my feelings about commercial publishing, I’m less likely to send a paper there in the future.
D. Blackwell, L. Breiman, and A. J. Thomasian, “The Capacities of Certain Channel Classes Under Random Coding,” Annals of Mathematical Statistics, vol. 31, no. 3, pp. 558–567, Sept. 1960.