the pompkin

Today I went to an evisceration party and created the following:
[image: a light pompkin]
Here’s a more nocturnal (and blurry) image.
[image: a dark pompkin]
I hereby dub it the POMPKIN.

Amy Is Now In Lesotho

That’s right, and here is some music for her.

  1. All Or Nothing — Casey Dienel
  2. My Beloved Monster — Eels
  3. You have a long journey ahead of you — Samrat Chakrabarti/Sudipto Chatterjee (feat. Tanmay Dhanania and Anand Sarwate)
  4. Is You Is or Is You Ain’t My Baby — Dinah Washington (Rae & Christian Remix)
  5. Skokiaan — Louis Armstrong
  6. Nothing In This World Can Stop Me Worrying ‘Bout That Girl — The Kinks
  7. Once in Love with Amy — Frank Sinatra
  8. Walk Or Ride — The Ditty Bops
  9. In The Aeroplane Over The Sea — Neutral Milk Hotel
  10. Noah’s Pub — Paul Kotheimer
  11. Les lumières Pt. 2 — Bell Orchestre
  12. Everything I Have Is Yours — Billie Holiday
  13. Sound And Vision — David Bowie
  14. O Boundless, Boundless Evening — Samuel Barber/Thomas Hampson
  15. That Old Black Magic — Ella Fitzgerald
  16. Hymne à l’amour — Edith Piaf
  17. Obsolete — MC Solaar

Love is a Dream House in Lorin

Last night I saw the Shotgun Players’ new production, Love is a Dream House in Lorin, by Marcus Gardley. It is an amazing piece of community-generated theater. The production is important to see, especially if you don’t usually go to the theater, because it can change your whole view of the possibilities of theater as a tool for dialogue within a community. Lorin was the name of a town that was annexed by Berkeley in the early 1900s. It runs from Dwight to Alcatraz and from Sacramento to Telegraph. The neighborhood was one of the only places where people of color were allowed to own property — as a result, this area of South Berkeley developed a complex multiracial composition and was one of the first neighborhoods to voluntarily desegregate its schools. As a student, I am far too ignorant of the environment surrounding where I study — Berkeley is a temporary stopping place for me on my road through life, but this neighborhood, in and around which I have lived for my entire time here, has a complex history that is not at all apparent from its modern-day incarnation as the ‘hood.

The play is centered on a house in the Lorin District and its history. The play’s characters, residents of the Lorin District, are all named after streets in the neighborhood. The story starts with the house being bought by Russell and Adeline Wheeler, a biracial couple ready to start a family in the early ’80s. As we follow their story, the house’s history is revealed, from the original Ohlone inhabitants of the area through the building of the first Victorian homes, the internment of the Japanese-Americans who lived there, the black families who made their homes there, and on through to Vietnam. The narrative is not linear — although the play is grounded in Adeline’s experience, it is not merely a story being told to her. Each stage of the story is paced differently — the heterogeneity keeps the play breathing and unpredictable. Aaron Davidman has created a physical language for the piece, using recurrent patterns and motifs that help each story maintain its individual consistency while drawing parallels between the different residents of the house (as when a couple in love dances in the living room). The overall effect is hauntingly beautiful but not sentimental. Always looming over the production is the current situation of the neighborhood — drive-by shootings, gang problems, and drug abuse. The play reminds us that everyone who lives here has a story to tell.

I could go on rambling about the play and spoil details of the production but I won’t. If you are in the area, you have to see this play — it is probably the most important piece of theater I’ve seen in years.

Transactions growth

On a similar tip to my comments on ISIT’s size, the IT Society’s Board of Governors formed an ad hoc committee to discuss the out-of-control growth of the Transactions. They presented a report at ISIT in Seattle. Some highlights:

  1. “… the exponential curve, which corresponds to an annual growth rate of 7.5%, is by far the best fit to the data… we may well find ourselves in the following situation within less than 10 years. The Transactions will be publishing over 10,000 pages per year…”
  2. “… it appears the growth will be sustainable as long as we charge for the cost of producing and mailing hard copies.”
  3. The average length of a regular paper was 10.3 pages in 1989 and is 15.8 pages in 2006.

Something that is distinctly missing from the plots in the report is the variance of paper length, or a histogram of paper lengths. Although an average length of 15.8 pages doesn’t seem so bad, if the distribution has a heavy tail, some other solutions might present themselves.
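
To get a feel for the numbers, here is a quick back-of-the-envelope projection in Python. The report doesn’t state the current page count in the passages quoted above, so the 2006 starting volume below is a guess on my part, chosen so that the report’s “over 10,000 pages within less than 10 years” milestone comes out; whatever the starting point, 7.5% annual growth doubles the volume roughly every 9.6 years.

    # Rough projection of annual page counts under a constant 7.5% growth rate
    # (the exponential fit quoted in the report). The 2006 starting volume is
    # an assumed placeholder, not a number taken from the report.
    from math import log

    growth = 1.075
    pages_2006 = 5000  # hypothetical starting volume (pages/year)

    for year in range(2006, 2017):
        pages = pages_2006 * growth ** (year - 2006)
        print(year, round(pages))

    # At 7.5% annual growth the page count doubles about every 9.6 years:
    print("doubling time (years):", round(log(2) / log(growth), 1))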

The report describes several possible solutions, including doing nothing, splitting the Transactions into two journals, imposing page limits, going to all-electronic publishing except for libraries, and so on. Their recommendations are threefold:

  1. Go to all-electronic publishing except for libraries. This limits the financial burden.
  2. Reorganize the editorial board hierarchically, with sub-editors-in-chief for different areas.
  3. Impose a 5-page limit on correspondence items.

I’m not sure how I feel about all-electronic publishing. Libraries want hard copies of things for archival purposes, and this seems to be a neat way of passing the buck to them — will more expensive binding mean a higher institutional subscription cost? On the one hand, you save money by printing fewer copies, but on the other, the remaining hard copies may cost more to produce. It probably saves money all around, though.

The sub-editing is a way of making the large page count tractable. They specifically don’t want to narrow the scope of the journal, which is good, but they also don’t want to cut more papers by lowering the acceptance rate, so the only way to keep the quality high is to add more reviewers and editors. At the same time, they note that the number of reviewers is not increasing at the same rate as the number of pages.

The 5-page limit is somewhat odd — ISIT papers are already 5 pages, and the difference would seem to be better peer review and a much longer publication delay. While this would help control the page count, they specifically do not recommend a page limit for regular papers. What happens in the gap between a 5-page idea and a 15.8-page regular paper?

Taken together, the proposed solutions seem to consciously avoid taking the problem head-on. A more aggressive solution might include things like:

  • Imposing page charges. At the moment the policy is:

    Page Charges: If the manuscript is accepted for publication, the author’s company or institution will be requested to cover part of the cost of publication. Page charges for this journal are not obligatory nor is their payment a prerequisite for publication. The author will receive 100 free reprints without covers if the charge is honored. Detailed instructions will accompany the proof.

    The Signal Processing Transactions charges $110/page up to 8 pages and $220/page thereafter (a rough sketch of what that schedule would imply at current paper lengths appears after this list). Making page charges mandatory for longer papers, or adding a mandatory per-page charge for papers over 20 pages, might encourage authors to write shorter (and perhaps more readable?) papers.

  • Encouraging editors and reviewers to more aggressively promote correspondence items. They bring up the point that correspondence items suffer from “inflated introductions and similar excesses, designed by the authors in order to avoid the correspondence classification.” If editors get more specific instructions to… edit things down, then both regular papers and correspondence items can be trimmed.
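
As a quick illustration of the page-charge idea above, here is what the Signal Processing schedule would cost at a few paper lengths. This is purely hypothetical arithmetic; the IT Transactions has no such mandatory schedule.

    # Hypothetical page charge under the Signal Processing schedule quoted above:
    # $110/page for the first 8 pages and $220/page for every page after that.

    def page_charge(pages, base_rate=110, overflow_rate=220, threshold=8):
        """Charge in dollars for a paper of the given length."""
        if pages <= threshold:
            return base_rate * pages
        return base_rate * threshold + overflow_rate * (pages - threshold)

    for n in [5, 8, 10, 16, 25]:
        print(f"{n:2d} pages: ${page_charge(n)}")

    # A 16-page paper (close to the current 15.8-page average) would come to
    # 8 * 110 + 8 * 220 = $2640 under this schedule.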

In the end, the board’s recommendations are more carrot than stick, which may or may not work. The big message seems to be that exponential growth is good, but that it needs to be managed somehow. It may be, however, that more active feedback is needed to control a system that is going exponentially unstable. It has been noted that there is insufficient communication between the information theory and control communities…

paper a day : periodic bloodsucking rates for vampires

Alex forwarded me this reference:

CYCLES OF FEAR: PERIODIC BLOODSUCKING RATES FOR VAMPIRES
R. F. Hartl, A. Mehlmann, and A. Novak
Journal of Optimization Theory and Applications, Vol. 75, No. 3, 1992

Abstract:
In this paper, we present a new approach for modeling the dynamic intertemporal confrontation between vampires and humans. It is assumed that the change of the vampiristic consumption rate induces costs and that the vampire community also derives some utility from possessing humans and not only from consuming them. Using the Hopf bifurcation theorem it can be shown that cyclical bloodsucking strategies are optimal. These results are in accordance with empirical evidence.

Keywords: maximum principle; limit cycles; economics of human resources; vampire myth

To the feather-fool and lobcock, the pseudo-scientist and materialist, these deeper and obscurer things must, of course, appear a grandam’s tale.
Montague Summers, The Vampire in Europe

This paper analyzes a mathematical model of bloodsucking rates for vampires using control theory. However, as they note in the introduction,

To a traditional vampirologist, the use of optimal control theory against vampires, as exercised in Ref. 6, seems highly questionable. This is due to the fact that the application of Pontryagin’s principle requires the derivation of a shadow price for vampires. Such a price is, however, nonexistent since vampires do not have a shadow.

As a predator-prey scenario, we can model the population dynamics using some differential equations. The problem for the vampires is to set a bloodsucking rate (humans per vampire) so as to maximize a utility function subject to those dynamics. However, the model has to be made more sophisticated to account for the cyclical bloodsucking patterns found in real vampires. The modifications are twofold: firstly, vampires also derive some utility from possessing humans rather than just sucking blood from them, and secondly, changing the consumption rate induces costs. So in this push-and-pull framework they can derive cycles, appropriately named “cycles of fear,” in which the bloodsucking rate is modulated over time to trade off current consumption against the long-term utility of possessing humans.
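
The paper’s actual model is an optimal-control problem analyzed via the Hopf bifurcation theorem, which is well beyond a blog post. Just to give a feel for the kind of dynamics involved, here is a toy predator-prey simulation in Python with an externally imposed periodic bloodsucking rate; the equations and every parameter are invented for illustration and are not the authors’ model.

    import math

    # Toy vampire/human predator-prey dynamics with a periodically modulated
    # bloodsucking intensity c(t). This is a crude Lotka-Volterra caricature,
    # not the optimal-control model of Hartl, Mehlmann, and Novak; every
    # parameter below is made up purely for illustration.

    def simulate(T=400.0, dt=0.01):
        h, v = 1.0, 0.2            # human and vampire populations (arbitrary units)
        r, delta = 0.10, 0.05      # human growth rate, vampire die-off rate
        conv = 0.5                 # efficiency of turning victims into new vampires
        history = []
        for k in range(int(T / dt)):
            t = k * dt
            c = 0.3 + 0.15 * math.sin(0.1 * t)   # imposed "cycle of fear"
            dh = r * h - c * h * v               # humans reproduce and get preyed upon
            dv = conv * c * h * v - delta * v    # vampire recruitment minus decay
            h = max(h + dt * dh, 0.0)
            v = max(v + dt * dv, 0.0)
            history.append((t, h, v))
        return history

    t, h, v = simulate()[-1]
    print(f"t = {t:.0f}: humans = {h:.3f}, vampires = {v:.3f}")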

The full version, which is not to be missed, can be found via SpringerLink.
For some earlier comments on the optimal destruction of vampires and macroeconomic policy (which involves the shadow price), see this related JSTOR article.

My research is waaaaaay too boring.

Concert : Spem in Alium

I’m singing in this concert next week — I’m baritone in Choir 8 for “Spem in Alium,” which is written for eight choirs of five voices.

SPEM IN ALIUM
Music of the English Renaissance

The Pacific Collegium, directed by Christopher Kula, and Schola Cantorum San Francisco, directed by John Renke, assemble the Bay Area’s finest choral forces to perform Thomas Tallis’ lush, rich, and rarely performed forty-part motet in celebration of John Renke’s forty years of artistry in the Bay Area. Also performed: William Byrd’s Mass for 5 Voices and selections from the Great Service.

Friday October 13, 8:00 PM
Ss. Peter and Paul Church
660 Filbert Street
San Francisco, CA 94133
P: 415.421.5219

Saturday October 14, 8:00 PM
St. Mark’s Episcopal Church
2300 Bancroft Way
Berkeley, CA 94704
P: 510.848.5107

Sunday October 15, 4:00 PM
Mercy Center
2300 Adeline Drive
Burlingame, CA 94010
P: 650.340.7474

ADMISSION: $18 general, $12 seniors, $8 students

TICKETS : Available online or at the door.

Allerton 2006

I just finished attending the 44th Annual Allerton Conference on Communication, Control, and Computing. Conveniently, the conference is held in Monticello, IL, which is a short drive from Urbana, so it’s a free trip home for me and another chance to flog my research. The conference was fun — I attended a number of good talks, and although the synchronization across sessions was a little uneven, I got a lot of ideas out of being there. What follows is a small set of notes on a subset of the talks I attended; some talks I don’t mention here were also quite interesting.

As always, I may have misunderstood some of the results, so don’t take my word for it necessarily…

  • How to Filter an “Individual Sequence with Feedback”
    T. Weissman (Stanford)

    This was on a problem that is sometimes called “compound sequential decision-making against the well-informed antagonist.” The idea is that you have some data sequence which is corrupted by noise and you would like to denoise it. If the sequence is arbitrary, you can take an individual-sequence approach to the problem. However, suppose now that you observe the noisy sequence causally, and an adversary can select the next data sample based on the noisy outputs of the previous samples. This is the well-informed adversary, and the problem becomes significantly harder. The interesting result for me is that randomized strategies perform better than deterministic ones, since the adversary can’t track the output. This is of course related in some way to the problems I’m thinking about, but has a totally different flavor.

  • Sharp Thresholds for Sparsity Recovery in the High-Dimensional and Noisy Setting
    M. Wainwright (Berkeley)

    There has been lots of work on compressed sensing and sparse reconstruction in recent years. Wainwright is interested in recovering the sparsity pattern in high-dimensional data — that is, which elements are nonzero. A lot of asymptotic analysis is done to relate the number of noisy observations you need to the dimension of the problem and the number of nonzero elements. It turns out that scaling the sparsity linearly with the dimension of the problem is bad (which may not be so surprising to some, but it’s hard for me to get an intuition for these things).

  • Delay Asymptotics with Network Source Coding
    S. Bhadra and S. Shakkottai (UT Austin)

    This was an interesting talk on interpreting coding across packets in networks as effectively moving the buffering from nodes in the network back to the source. In networks with heterogeneous traffic, this might be a useful way of thinking about things. Shakkottai said it was like “waterpouring over unused capacity,” which sounded like an appealing intuition. Whenever I get the proceedings I’d like to look at the details.

  • How to Achieve Linear Capacity Scaling without Mobility
    A. Ozgur (EPFL), O. Leveque (Stanford) and D. Tse (Berkeley)

    This talk was completely packed, as it presented a complete solution to the scaling-law problem in dense wireless networks like those of Gupta and Kumar. Linear scaling is achieved by a hierarchical MIMO scheme on an overlay grid for the network that tries to get the maximum spatial reuse. The big picture was clear and the details looked quite tricky. The major ingredient needed is independent uniform phase fading on each point-to-point link in the network. There was a little confusion during the questions about uniform phase and scattering effects that went a bit over my head. The mathematical problem seems settled, although the debate on the engineering question may be open…

  • Coding into a Source: A Direct Inverse Rate-Distortion Theorem
    M. Agarwal (MIT), A. Sahai (Berkeley) and S. Mitter (MIT)

    Suppose you have to use a fixed input distribution for a channel, and the channel is guaranteed, given that input distribution, not to distort the input by too much (under a given distortion measure). How well can you do across such a channel? The capacity is actually the rate-distortion function R(D), which gives a neat operational meaning to rate-distortion in terms of channel coding. It’s a difficult problem to wrap your head around, and nearly impossible to describe in a few sentences. It’s one of those results that is trying to get at what information really is, and to what extent the “bit” formalism of information theory is a fundamental aspect of nature.

  • Data Fusion Trees for Detection: Does Architecture Matter?
    W-P. Tay, J. Tsitsiklis, and M. Win (MIT)

    Suppose we have a huge number of sensors that all observe iid samples that could come from one of two distributions. A fusion center wants to decide which distribution governs the samples, and can get quantized versions of the samples from the sensors. This paper is trying to get at how different fusion architectures (e.g. trees) perform versus a totally centralized scheme. The main result is that if the total number of leaves is asymptotically the same as the number of sensors and the tree has bounded height, then we can do as well as a completely centralized solution.

  • Randomization for robust communication in networks, or “Brother, can you spare a bit?”
    A.D. Sarwate and M. Gastpar (Berkeley)

    I don’t have much to say about this paper, since I wrote it. The basic idea is to justify the AVC model for interference in networks and to come up with some strategies for sensor networks and ad-hoc wireless networks to generate secret random keys to enable randomized coding on links in the network. For sensor networks, two terminals might be able to extract a shared random key from correlated sources. In wireless networks we can sacrifice some nodes to distribute keys to their neighbors in a decentralized scheme. The main point is that key distribution needs to happen only once — a new key for the next transmission can be sent with the current transmission with no asymptotic cost in rate.

  • Probabilistic Analysis of LP Decoding
    A.D.G. Dimakis, C. Daskalakis, R. Karp, and M. Wainwright (UC Berkeley)

    LP decoding is appealing since it is a way of efficiently decoding LDPC codes. The authors here provide a formalism for analyzing the performance of LP decoding under random errors (rather than the adversarial error model). The heuristic is that corrupted bits carry “poison” that must be redistributed around the Tanner graph of the code using what they call a valid hyperflow. If such a flow exists, then LP decoding succeeds. They then prove that for a large class of codes that are expanders, a hyperflow exists with high probability.

  • Distributed Beamforming Using 1 Bit Feedback: From Concept to Realization
    R. Mudumbai (UC Santa Barbara), B. Wild (UC Berkeley), U. Madhow (UC Santa Barbara), and K. Ramchandran (UC Berkeley)

    The one-bit feedback scheme for distributed beamforming is a way of aligning antennas using one bit of feedback about the received SNR. Imagine a large number of nodes communicating with a base station. Beamforming gains can be huge with a large number of antennas, so it would be good if the nodes could adjust their phases to add coherently at the receiver. The authors describe their scheme and show some experimental results. There is also an analysis of the speed of convergence, which is linear in the number of antennas. (A toy simulation of the idea appears after these notes.)

  • Achieving List Decoding Capacity Using Folded Reed-Solomon Codes
    V. Guruswami and A. Rudra (U. Washington)

    Consider a Reed-Solomon code of blocklength n over an alphabet of size q that communicates N bits. We can take this code and make it into a folded Reed-Solomon code of blocklength n/m over an alphabet of size q^m that communicates the same N bits, by taking m consecutive symbols and treating them as a single packet. By doing this and making some clever list-decoding modifications to the original Guruswami-Sudan algorithm, the authors can get arbitrarily close to the capacity of the erasure channel. The tricky thing is that the alphabet size has to increase as well. The construction and decoding are pretty cool though, and it makes me wish we had done some of this kind of stuff in my algebra classes.
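
Since the one-bit feedback beamforming scheme is simple enough to simulate, here is a toy version in Python, reconstructed from the talk’s description rather than taken from the authors: each node applies a small random phase perturbation, the receiver broadcasts a single bit indicating whether the received signal strength improved, and the nodes keep or discard the perturbation accordingly.

    import numpy as np

    # Toy simulation of one-bit-feedback distributed beamforming (my own sketch
    # based on the talk's description, not the authors' code). Each node applies
    # a small random phase perturbation; the receiver broadcasts one bit saying
    # whether the received signal magnitude improved; nodes keep the perturbation
    # if so and discard it otherwise.

    rng = np.random.default_rng(0)

    N = 100                                               # number of transmitting nodes
    channel = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # unknown channel phases
    theta = np.zeros(N)                                   # transmit phases, initially unaligned

    def rx_strength(phases):
        """Magnitude of the combined signal seen at the receiver."""
        return np.abs(np.sum(channel * np.exp(1j * phases)))

    best = rx_strength(theta)
    for _ in range(5000):
        delta = rng.uniform(-0.2, 0.2, N)    # small random phase perturbations
        trial = rx_strength(theta + delta)
        if trial > best:                     # the single bit of feedback
            theta += delta
            best = trial

    print(f"received strength after training: {best:.1f} (coherent limit is {N})")

The received strength should climb toward the coherent limit of N; the perturbation size controls how aggressively the search proceeds, and the interesting part of the talk was showing that the required number of iterations grows only linearly with the number of nodes.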