Illinois Wireless Summer School

I just came back from the Illinois Wireless Summer School, hosted by the Illinois Center for Wireless Systems. Admittedly, I had a bit of an ulterior motive in going, since it meant a trip home to see my parents (long overdue!), but I found the workshop a pretty valuable crash course covering the whole breadth of wireless technology. The week started out with lectures on propagation, wireless channel modeling, and antennas, and worked its way up to descriptions of WiMAX and LTE. Slides for some of the lectures are available online.

Some tidbits and notes:

  • Venugopal Veeravalli gave a nice overview of channel modeling, which was a welcome refresher since I hadn’t revisited the material since taking David Tse’s wireless course at the beginning of grad school. Xinzhou Wu talked about modulation issues; he noted that universal frequency reuse may be bad for users near the edge of a cell and described Flarion’s flexband idea, which I hadn’t heard about before.
  • Jennifer Bernhard talked about antenna design, which I had only the sketchiest introduction to 10 years ago. She pointed out that actually getting independent measurements from two antennas by spacing them just the right distance apart is nearly impossible, so coupling effects should be worked into MIMO models (at least, this is what I got out of it). Also, the placement of the antenna on your laptop matters a lot — my Mac is lousy at finding the WiFi because its antenna is sub-optimally positioned.
  • Nitin Vaidya discussed Dynamic Source Routing, which I had heard about but never really learned before.
  • Dina Katabi and Sachin Katti talked about network coding and its implementation. The issues with asynchronous communication for channel estimation in analog network coding were something I had missed in earlier encounters with their work.
  • P. R. Kumar talked about his work with I-Hong Hou and Vivek Borkar on QoS guarantees in a simple downlink model. I had seen this talk at Infocom, but the longer version had more details, so I think I understood it better this time.
  • Wade Trappe and Yih-Chun Hu talked about a ton of security problems (so many that I got a bit lost, but luckily I have the slides). In particular, they talked about how many standard adversarial assumptions are unrealistic for wireless, since adversaries can eavesdrop and jam, spoof users, and so on. They mentioned the Dolev-Yao threat model, from FOCS 1981, which I should probably read more about. There were some slides on intrusion detection, which I think is an interesting problem that could also be looked at from the EE/physical layer side.
  • R. Srikant and Attila Eryilmaz gave a nice (but dense) introduction to resource allocation and network utilization problems from the optimization standpoint. Srikant showed how some of the results Kumar talked about can also be interpreted from this approach. There was also a little bit of MCMC that showed up, which got me thinking about some other research problems…
  • The industry speakers didn’t post their slides, but they had a different (and a bit less tutorial) perspective to offer. Victor Bahl from MSR gave a talk on white space networking (also known as cognitive radio, though he seems to eschew that term). Dilip Krishnaswamy (Qualcomm) talked about WWAN architectures, which are architecturally different from voice and other kinds of networks; where the internet cloud sits with respect to the other system elements was particularly interesting to me. Amitava Ghosh (Motorola) broke down LTE and WiMAX for us in gory detail.

Privacy for prescriptions

The NY Times has an article on how the information on our prescriptions is “a commodity bought and sold in a murky marketplace, often without the patients’ knowledge or permission.” I was informed by UC Berkeley in the spring that some of my information may have been compromised, although only “Social Security numbers, health insurance information and non-treatment medical information,” and not “diagnoses, treatments and therapies.” But in that case it was theft, not out-and-out sale. The Times article suggests that the new health care bill will tighten up some of the information leakage, but I am unconvinced.

Of more interest is the second half of the article, on privacy in the data mining of medical information, a topic that strongly motivates some of the research I’m working on now. I’m not too comforted by pronouncements from industry people:

“Data stripped of patient identity is an important alternative in health research and managing quality of care,” said Randy Frankel, an IMS vice president. As for the ability to put the names back on anonymous data, he said IMS has “multiple encryptions and various ways of separating information to prevent a patient from being re-identified.”

IMS Health reported operating revenue of $1.05 billion in the first half of 2009, down 10.6 percent from the period a year earlier. Mr. Frankel said he did not expect growing awareness of privacy issues to affect the business.

There’s no incentive to develop real privacy-preservation systems if you make money like that and don’t think that pressure is going to change your model. As far as the vague handwaving of “multiple encryptions and… separating information,” color me unconvinced again.

I think it’s time for a new take on privacy laws and technologies.

Visit to University of Washington

After ISIT I went to visit the Electrical Engineering Department at the University of Washington. I was invited up there by Maya Gupta, who told me about a little company she started called Artifact Puzzles.

On the research end of things, I learned a lot about the learning problems her group is working on and their applications to color reproduction. I also got a chance to chat with Maryam Fazel about rank minimization problems, Marina Meilă about machine learning and distance models for rankings (e.g. the Fligner-Verducci model), and David Thorsley about self-assembling systems and consensus problems. All in all I learned a lot!

On the social side I got to hang out with friends in Seattle and at UW, and hiked for an afternoon at Mt. Rainier. Photos are on Picasa!

ISIT 2009 : talks part four

The Gelfand-Pinsker Channel: Strong Converse and Upper Bound for the Reliability Function
Himanshu Tyagi, Prakash Narayan
Strong Converse for Gel’fand-Pinsker Channel
Pierre Moulin

Both of these papers proved the strong converse for the Gel’fand-Pinsker channel, i.e. the discrete memoryless channel with i.i.d. state sequence P_S, where the realized state sequence is known ahead of time at the encoder. The first paper proved a technical lemma about the image size of “good codeword sets” which are jointly typical conditioned on a large subset of the typical set of S^n sequences. That is, given a code and a set of almost \exp(n H(P_S)) typical sequences in S^n for which the average probability of error is small, they obtain a bound on the rate of the code. They then derive bounds on error exponents for the channel. The second paper has a significantly more involved argument, but one which can be extended to multiaccess channels with states known to the encoder.
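For reference (this is the standard result, not something specific to either paper), the capacity in question is the Gel’fand-Pinsker capacity, with an auxiliary random variable U:

\[
C = \max_{P_{U, X \mid S}} \left[ I(U; Y) - I(U; S) \right].
\]

The strong converse says that for any code with rate above C, the average probability of error tends to 1 as the blocklength grows, rather than merely being bounded away from 0.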

Combinatorial Data Reduction Algorithm and Its Applications to Biometric Verification
Vladimir B. Balakirsky, Anahit R. Ghazaryan, A. J. Han Vinck

The goal of this paper was to compute short fingerprints f(\mathbf{x}) from long binary strings \mathbf{x} so that a verifier can look at a new long vector \mathbf{y} and tell whether or not \mathbf{y} is close to \mathbf{x} based on f(\mathbf{x}). This is a little different from hashing, where we could first compute f(\mathbf{y}). They develop a scheme which stores the index of a reference vector \mathbf{c} that is “close” to \mathbf{x} and the distance d(\mathbf{x},\mathbf{c}). This can be done with low complexity. They calculated false accept and reject rates for this scheme. Since the goal is not reconstruction or approximation, but rather a kind of classification, they can derive a reference vector set which has very low rate.
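To make the scheme concrete, here is a minimal sketch of the idea as I understood it (my own reconstruction, not the authors' exact construction): the fingerprint of \mathbf{x} stores the index of the nearest reference vector and the Hamming distance to it, and the verifier uses the triangle inequality to rule out vectors \mathbf{y} that cannot be close to \mathbf{x}.

```python
# Sketch of the reference-vector fingerprint idea (my reconstruction,
# not the paper's exact scheme). Fingerprint of x: the index of the
# nearest reference vector c and the distance d(x, c). The verifier
# accepts y only if d(x, y) <= t is still possible, using the triangle
# inequality |d(y, c) - d(x, c)| <= d(x, y).

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(ai != bi for ai, bi in zip(a, b))

def fingerprint(x, refs):
    """Return (index of nearest reference vector, distance to it)."""
    i = min(range(len(refs)), key=lambda j: hamming(x, refs[j]))
    return i, hamming(x, refs[i])

def verify(y, fp, refs, t):
    """Necessary condition for d(x, y) <= t, given only fp = f(x)."""
    i, dx = fp
    dy = hamming(y, refs[i])
    return abs(dy - dx) <= t

refs = [(0, 0, 0, 0), (1, 1, 1, 1)]
x = (0, 0, 0, 1)
fp = fingerprint(x, refs)                    # (0, 1)
print(verify(x, fp, refs, t=1))              # x itself passes: True
print(verify((1, 1, 1, 1), fp, refs, t=1))   # far vector rejected: False
```

Note the check is only a necessary condition, which is why the paper analyzes false accept rates; the appeal is that the verifier needs just the short pair (index, distance) rather than \mathbf{x} itself.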

Two-Level Fingerprinting Codes
N. Prasanth Anthapadmanabhan, Alexander Barg

This looks at a variant of the fingerprinting problem, in which a content creator makes several fingerprinted versions of an object (e.g. a piece of software) and a group of pirates can combine their versions to try to create a new object with a valid fingerprint. The marking assumption means that the pirates can only alter the positions at which their copies differ. The goal is to build a code such that a verifier looking at an object produced by t pirates can identify at least one of the pirates. In the two-level problem, the objects are coarsely classified into groups (e.g. by geographic region), and when there are more than t pirates the verifier wants to at least identify the group of one of the pirates. They provide some conditions for traceability along with constructions. This framework can also be extended to multiple levels.
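The marking assumption is easy to illustrate with a toy example (my own, with made-up fingerprints, not a construction from the paper): colluding pirates can detect, and therefore alter, only the positions where their copies disagree.

```python
# Toy illustration of the marking assumption (my own example, not the
# paper's construction). Colluding pirates compare their fingerprinted
# copies; positions where all copies agree are undetectable, so any
# forged object must agree with the pirates on those positions.

def detectable_positions(copies):
    """Positions where the pirates' copies are not all identical."""
    return [i for i in range(len(copies[0]))
            if len(set(c[i] for c in copies)) > 1]

def feasible_forgery(copies, forged):
    """True if 'forged' respects the marking assumption: it matches
    the pirates' copies on every position where they all agree."""
    detectable = set(detectable_positions(copies))
    return all(forged[i] == copies[0][i]
               for i in range(len(copies[0]))
               if i not in detectable)

copies = ["0011", "0101"]                    # two pirates' copies
print(detectable_positions(copies))          # they differ at: [1, 2]
print(feasible_forgery(copies, "0111"))      # changed only pos 1: True
print(feasible_forgery(copies, "1011"))      # changed pos 0 (agreed): False
```

A traceability code is then designed so that every word the pirates can feasibly forge still points back to at least one of them (or, in the two-level version, to one of their groups).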

Updated LaTeX Poster Package

I updated the poster package that I had written for doing large-format conference posters in LaTeX. The updated package uses PGF and TikZ, which is a way of making beautiful figures in LaTeX. I’ve been using pstricks to make figures for a while, and while it works for me, getting it to work with pdflatex is a pain. Luckily, TikZ works in both the PostScript-based workflow as well as the direct-to-pdf workflow. Many thanks are due to Massimo Canonico for his feedback and comments.
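As a tiny taste of what TikZ looks like (a generic example I wrote for illustration, not a snippet from the poster package), here is a complete document that compiles under both the latex+dvips workflow and pdflatex:

```latex
% Minimal TikZ figure: a pair of axes and a labeled curve.
% Compiles with both latex+dvips and pdflatex, unlike pstricks.
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[->, thick] (0,0) -- (3,0) node[right] {$x$};
  \draw[->, thick] (0,0) -- (0,2) node[above] {$y$};
  \draw[blue, thick] (0,0) parabola (2.5,1.8);
  \node[blue] at (2,0.6) {$y \approx x^2$};
\end{tikzpicture}
\end{document}
```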

In Memoriam Dorothy Vickers-Shelley

I learned today that my grade school librarian, Dorothy Vickers-Shelley, passed away last week. It’s difficult for me to explain how much she affected my life and the lives of all the children she taught. She struck fear into our little hearts by threatening to hang us up by our toe-nails or skewer us with her purple-pointed stick if we were naughty, and thrilled us by reading us stories and personally picking out books she thought we would enjoy. She gave me a job when I was in middle school and I spent part of a glorious summer working in the library. She was the library at Yankee Ridge. But the most important thing she taught us is encapsulated in the creed she wrote that we would recite every time we went to the library:

Life is short. Therefore I shall be a crusader in the fight against ignorance and fear, beginning with myself.

Goodbye, Ms. Vickers-Shelley. You will always have a place in my heart.

F-22s and the Indian Menace

I learned today that Sen. John Cornyn (R-Texas) believes we are in danger from India…

We’re fighting — we have graver threats and greater threats than that: From a rising India, with increased exercise of their military power; Russia; Iran, that’s threatening to build a nuclear weapon; with North Korea, shooting intercontinental ballistic missiles, capable of hitting American soil.

I figured out how the insidious Indian plan works:

  1. Get America addicted to the sweet tunes of A.R. Rahman, as featured in Slumdog Millionaire.
  2. ???
  3. Profit / world domination!

Clearly the only way to save ourselves is to destroy Bollywood with our F-22 fighter jets… oh wait, Slumdog was made by white people…

copyright and the ex-avant-garde

Kyle Gann had a rather disturbing revelation about how copyright intersects with scholarship, and in particular scholarship about experimental music:

… you are no longer allowed to quote texts that are entire pieces of art. This means I’ve been trying to get permission simply to refer to Fluxus pieces like La Monte Young’s “This piece is little whirlpools in the middle of the ocean,” and Yoko Ono’s “Listen to the sound of the earth turning.” And of course, Yoko (whom I used to know) isn’t responding, and La Monte is imposing so many requirements and restrictions that I would have to add a new chapter to the book, and so in frustration well past the eleventh hour, I’ve excised the pieces from the text.

Normally, I’d expect the publishing companies to be the most obstructionist, as this commenter said:

Just last week I found out that, even for a thesis that will not be published, Shirmer [sic] now asks money to permit me to reproduce musical excerpts. If I paid every institution (libraries for manuscripts / publishers for printed matter) that holds rights to the excerpts that I need to reproduce to illustrate points and arguments, my dissertation would cost in excess of 15.000US dollars for permissions alone.

Some months ago I was warned that I may not have had the right to TAKE NOTES while studying Cowell manuscripts at the LOC in 1998.

Apparently, however, the artists themselves are also the problem. Way to make yourselves even more irrelevant…