more on the FCC auction

Even the NY Times had an article about it in the Friday issue, but the piece focuses on the main business point, which is that Verizon and AT&T basically snapped up the lion’s share. Hopefully it won’t be “meet the new boss, same as the old boss,” but I’m not so sanguine. Flying under the radar was an attached statement from Commissioner Jonathan Adelstein, who said:

It’s appalling that women and minorities were virtually shut out of this monumental auction. It’s an outrage that we’ve failed to counter the legacy of discrimination that has kept women and minorities from owning their fair share of the spectrum. Here we had an enormous opportunity to open the airwaves to a new generation that reflects the diversity of America, and instead we just made a bad situation even worse. This gives whole new meaning to “white spaces” in the spectrum.

I usually think of Commissioners as a little less fiery, but I guess that’s just a stereotype. Maybe he was inspired to speak out by Obama’s call for more dialogue about race, but after looking at his record it seems totally consistent. Personally, I think that he’s just the kind of guy we need at the FCC!

FCC closes 700 MHz auction

The FCC just announced the closure of the spectrum auction for the 700 MHz TV bands.

The Commission has announced the closing of Auction 73, assigning over 1,000 licenses for much of the 700 MHz band, at just under $19.6 billion in final bids. This impressive sum is almost double the amount that was estimated for this auction, and it exceeds the final receipts of all previous U.S. spectrum auctions combined.

My reaction is : holy crap. Hopefully there will be a similar amount of financial interest in research directed at utilizing that shared spectrum efficiently…

freezing attacks

The NY Times has an article about compromising data on DRAMs via freezing them and reading the bits off. This would let someone read your encryption keys right off the chip.

It’s kind of funny — all of these crazy-complicated cryptographic schemes can be compromised by what amounts to breaking and entering. I’m sure someone will end up writing a paper entitled “Baby It’s Cold Outside : Zero-Knowledge Proofs With Freezing Verifiers.” Actually, that’s not a bad title…

EE Toolkit Topics

Alex and I were talking this week about what the syllabus would look like for a course along the lines of Arora’s Theorist’s Toolkit or Kelner’s An Algorithmist’s Toolkit (HT to Lav for the link on my last post on this topic). I think developing a course along these lines with interchangeable 2-4 lecture “modules” would be a great thing in general. Modular design is appealing from an engineering perspective (although, as my advisor might say, separation of design is in general suboptimal). The question is, what would a good set of topics be? Here’s a partial set:

  • Advanced matrix tricks: matrix derivatives and other tools
  • Majorization and applications
  • Random graphs and random geometric graphs
  • Mixing times for Markov chains
  • Auctions and related allocation mechanisms
  • Random matrix theory : the basic results
  • Message passing algorithms
  • High-dimensional convex geometry
  • Concentration of measure phenomena
  • Alternating projection techniques

If any readers have ideas for additional topics, please leave a comment.

The point of such a course would be to give students some exposure to analysis and modeling techniques and, more importantly, a set of references so they can learn more if they need to. It’s about having a kind of cultural literacy for reading systems EE papers. Of course, if you’re going to go and study LDPC codes, the brief introduction to message passing wouldn’t be enough, but if you just want to understand the issues around BP decoding, the 3 lectures may be sufficient for you to follow what is going on. The list above has some items that are too narrow and some that are too broad. There are a lot of different tools out there, and some exposure to what they are used for would be useful, especially for graduate students at schools that don’t have an extremely diverse seminar series.

unlocked networks

KQED’s Forum program had an hour on Verizon’s new decision to unlock its network and the future of consumer wireless technologies. The first part is more about business and consumers, but the conversation wends its way later to issues of offloading cellular traffic onto WiFi networks (much as Blackberries do now, I guess). The whole “femtocell” idea is an interesting one, as are the mesh networks that Meraki and others are proposing. Of course, the hardest part is convincing consumers to adopt/use the changes, but there are plenty of research questions in there as well, and even good theory questions.

As an added bonus, towards the end a caller named Rajiv, a “researcher in mobile internet,” called in to say that these changes are not really “new innovation.” At the time I thought, “maybe that’s Rajiv Laroia,” but I doubt it.

Allerton 2007 : Networks and Algorithms

Constrained Consensus and Alternating Projection Methods
(Asu Ozdaglar)
The problem here is something like minimizing a sum of local utility functions, but where the optimization variables are constrained. Asu proposed an alternating projection algorithm that minimizes the objective and then projects back onto the feasible set. In order to prove the convergence rates, she separates the linear update error (which can be analyzed using standard Markov chain techniques) from the projection error, which requires some technical conditions on the constraints. The key is that the latter error is the only one that incorporates the constraints. The bound is obtained by looking at a stopped process and letting the stopping time go to infinity.
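To make the flavor of the iteration concrete, here is a minimal sketch of a projected consensus update (my own toy version, not the algorithm from the talk), assuming each node's constraint set is just an interval so the projection is a clip; the mixing matrix and intervals are made up for illustration:

```python
# A minimal sketch of a projected-consensus iteration, assuming each node's
# constraint set is a simple interval so the projection is just a clip.
# The mixing matrix W and the interval below are illustrative, not from the paper.
import numpy as np

def projected_consensus(x0, W, lo, hi, num_iters=200):
    """x0: initial values (n,); W: doubly stochastic mixing matrix (n, n);
    lo, hi: feasible interval (scalars or per-node arrays)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = W @ x                # linear consensus step
        x = np.clip(x, lo, hi)   # project each node back onto its feasible set
    return x

# Toy example: 4 nodes on a ring, common feasible interval [1, 5].
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
print(projected_consensus([0.0, 2.0, 4.0, 6.0], W, lo=1.0, hi=5.0))
```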

Network Coding Using Non-Commutative Algebras
(Shweta Agarwal and Sriram Vishwanath)
For multiple unicast sessions, finite field operations and linear coding may not be sufficient to achieve capacity for network codes. The proposal in this paper was to code using modules over a noncommutative ring, but still use Gaussian elimination for decoding. I had to duck out towards the end, so I’m not sure whether explicit code constructions were provided.

Rates of Convergence for Distributed Average Consensus with Probabilistic Quantization
(Tuncer Can Aysal, Mark Coates, and Michael Rabbat)
Most gossip and consensus algorithms assume we can do computation over the reals, which is not really feasible. This work tries to tackle the effect of quantization on the convergence rate. The probabilistic quantization rule they use is this : if the true value is a distance d from the lower endpoint of a quantization bin of width D, quantize to the upper endpoint with probability d/D (and to the lower endpoint with probability 1 − d/D), so the quantization is unbiased. The resulting scheme can be analyzed and all nodes will converge to the same point (an absorbing state of a Markov chain). In expectation, this point will be the true average, although it will over- or under-estimate the average in any realization. One tricky part of the analysis is the potentially very long waiting time for the nodes to settle on a single value after they have all converged to one of the quantization points neighboring the true mean.
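For reference, here is a tiny sketch of that quantization rule (my own illustration, with made-up values); averaging many independent quantizations of the same number shows the unbiasedness:

```python
# A minimal sketch of unbiased probabilistic quantization on a uniform grid
# with bin width D; the value and bin width below are made up for illustration.
import numpy as np

def prob_quantize(x, D, rng):
    """Round x to a neighboring grid point k*D so that E[Q(x)] = x."""
    lower = np.floor(x / D) * D
    p_up = (x - lower) / D        # distance from the lower endpoint, normalized
    return lower + D if rng.random() < p_up else lower

rng = np.random.default_rng(0)
x, D = 2.37, 0.5
print(np.mean([prob_quantize(x, D, rng) for _ in range(100000)]))  # ~2.37
```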

New Market Models and Algorithms
(Vijay Vazirani)
This talk was on pricing algorithms for links in networks. He started out by talking about the Fisher market, which is a multicommodity market model where every user has a fixed budget, goods are divisible, and each user has a different utility for the different goods. There is an algorithm for doing the allocations efficiently in these markets. The Kelly approach to analyzing TCP uses pricing on edges and a combinatorial auction to generate efficient flows, and it is formally similar to Fisher’s model in the linear case. Vazirani presented an ascending-price auction for links where the sinks are buyers in a multicast scenario. The resulting allocations are fair and efficient. By showing a connection to the Eisenberg-Gale (1959) optimization problem, he proposed a new class of Eisenberg-Gale markets in which a strongly polynomial time algorithm exists for cases where there is a max-min result to exploit (like multicast).
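As a concrete reference point, here is a small sketch of the Eisenberg-Gale convex program for a linear Fisher market (my own toy example, assuming cvxpy is available; the budgets and utilities are made up). The dual variables of the supply constraints play the role of market-clearing prices:

```python
# A minimal sketch of the Eisenberg-Gale convex program for a linear Fisher
# market; budgets B and per-unit utilities U are made up for illustration.
import cvxpy as cp
import numpy as np

B = np.array([1.0, 2.0])                   # buyers' budgets
U = np.array([[1.0, 2.0, 0.5],             # U[i, j]: buyer i's utility per unit of good j
              [2.0, 1.0, 1.0]])

X = cp.Variable(U.shape, nonneg=True)      # allocation of goods to buyers
u = cp.sum(cp.multiply(U, X), axis=1)      # each buyer's total utility
objective = cp.Maximize(B @ cp.log(u))     # budget-weighted log utilities
supply = [cp.sum(X, axis=0) <= 1]          # one unit of each good available
cp.Problem(objective, supply).solve()

print(X.value)                             # equilibrium allocation
print(supply[0].dual_value)                # equilibrium (market-clearing) prices
```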

Fast Gossip through Markov Chain Lifting
(Kyomin Jung, Devavrat Shah, and Jinwoo Shin)
The standard analysis of gossip algorithms uses results on the mixing time of reversible Markov chains. However, results of Chen et al. and Diaconis et al. show that nonreversible chains may mix much more quickly. In particular, a “lifting” construction can be used to embed expander graphs in the original graph (I think?). The lifting construction itself is quite cool — I read the paper on it on the plane on the way back and may write more about it. The trick is really to get a better bound on the conductance of the chain, which then bounds the mixing times.
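The canonical example of the speedup (which I believe goes back to Diaconis, Holmes, and Neal) is the random walk on a cycle: lifting the chain by adding a direction variable turns an O(n^2) mixing time into roughly O(n). A quick numerical sanity check, with made-up parameters:

```python
# A rough illustration (my own, not from the paper) of why lifting helps:
# a lazy random walk on an n-cycle versus a "lifted" nonreversible walk that
# carries a direction and only occasionally reverses it. The lifted chain
# gets close to uniform in many fewer steps.
import numpy as np

n = 51                 # odd length avoids periodicity issues
rho = 1.0 / n          # probability of reversing direction in the lifted chain

# Lazy simple random walk on the cycle.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25

# Lifted chain on states (position, direction): keep moving in the current
# direction with probability 1 - rho, reverse direction with probability rho.
L = np.zeros((2 * n, 2 * n))
for i in range(n):
    L[i, (i + 1) % n] = 1 - rho              # (i, +) -> (i+1, +)
    L[i, n + i] = rho                        # (i, +) -> (i, -)
    L[n + i, n + (i - 1) % n] = 1 - rho      # (i, -) -> (i-1, -)
    L[n + i, i] = rho                        # (i, -) -> (i, +)

def steps_to_mix(T, position_marginal, eps=0.25):
    """Steps until the position distribution is within eps of uniform in total variation."""
    mu = np.zeros(T.shape[0])
    mu[0] = 1.0
    pi = np.full(n, 1.0 / n)
    t = 0
    while 0.5 * np.abs(position_marginal(mu) - pi).sum() > eps:
        mu = mu @ T
        t += 1
    return t

print("lazy walk  :", steps_to_mix(P, lambda mu: mu))
print("lifted walk:", steps_to_mix(L, lambda mu: mu[:n] + mu[n:]))
```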

Stochastic Approximation Analysis of Distributed Estimation Algorithms
(Ram Rajagopal and Martin Wainwright)
This paper was on computing nonlinear functions of sensor observations. In particular, suppose sensors have access to iid samples of a random variable X with unknown distribution. They would like to estimate the a-quantile — that is, the b such that P(X < b) = a. The nodes can communicate with each other over noisy or rate-constrained links. The performance measure that Ram uses is the ratio of the decentralized MSE to the centralized MSE. The algorithm works by local flooding and updates, and the estimator is strongly consistent, with asymptotically normal error. What is more interesting is that the error is a function of the Laplacian of the communication graph.
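For intuition, the centralized version of this kind of quantile estimator is a one-line stochastic approximation (Robbins-Monro) recursion; the distributed algorithm interleaves updates like this with communication over the graph. A toy sketch (my own, with made-up parameters):

```python
# A minimal centralized sketch of stochastic-approximation quantile estimation;
# the gain, quantile level, and data below are made up for illustration.
import numpy as np

def quantile_sa(samples, a, gain=10.0):
    """Robbins-Monro recursion for the a-quantile, i.e., the b with P(X < b) = a."""
    b = 0.0
    for k, x in enumerate(samples, start=1):
        b -= (gain / k) * (float(x < b) - a)   # step toward the root of P(X < b) - a
    return b

rng = np.random.default_rng(0)
print(quantile_sa(rng.normal(size=200_000), a=0.9))   # close to 1.28, the 0.9-quantile of N(0,1)
```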

Gossip Along the Way: Order-Optimal Consensus through Randomized Path Averaging
(Florence Bénézit, Alexandros Dimakis, Patrick Thiran, and Martin Vetterli)
This algorithm is similar to the Geographic Gossip algorithm that Alex and I worked on. In that work a node would wake up, pick a target at random, and route a packet to the node closest to the target to perform one pairwise exchange in a standard gossip algorithm. The corresponding Markov chain went from having a mixing time of O(n) to O(1) at the expense of O(\sqrt{n}) extra transmissions. In this work, they change the algorithm to aggregate and average all values along the routing path from the source to the destination, so instead of averaging 2 nodes in each round, they average O(\sqrt{n}) nodes in each round. This shaves off the extra O(\sqrt{n}) from routing, and so they get order-optimal performance compared to the centralized algorithm (perhaps modulo all the log terms that I omitted in this brief description). The key to the analysis is to eschew computing the actual chain and instead look at “typical” paths and construct flows using the comparison method (originally in Diaconis/Stroock, I think) to bound the mixing time.
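To give the flavor of path averaging, here is a rough toy version (mine, not the authors’ code) on a grid, where the route is just row-first Manhattan routing and every node on the route adopts the route average; the sum, and hence the average, is preserved in every round:

```python
# A rough sketch (not the authors' algorithm) of path averaging on an m x m grid:
# a random node picks a random target, the message is routed row-first toward it,
# and every node on the route replaces its value with the average over the route.
import numpy as np

def path_averaging_round(vals, rng):
    """vals: (m, m) array of node values, modified in place."""
    m = vals.shape[0]
    (r0, c0), (r1, c1) = rng.integers(m, size=(2, 2))
    # Row-first (Manhattan) route from (r0, c0) to (r1, c1).
    path = [(r0, c) for c in range(c0, c1, 1 if c1 >= c0 else -1)]
    path += [(r, c1) for r in range(r0, r1, 1 if r1 >= r0 else -1)]
    path.append((r1, c1))
    avg = np.mean([vals[p] for p in path])
    for p in path:                    # every node on the route takes the route average
        vals[p] = avg

rng = np.random.default_rng(1)
vals = rng.normal(size=(16, 16))
print("true average :", vals.mean())
for _ in range(2000):
    path_averaging_round(vals, rng)
print("after gossip :", vals.mean(), "spread:", vals.std())   # mean preserved, spread shrinks
```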

Allerton 2007 : Information Theory

Because Allerton proceedings don’t get handed out at the conference, my understanding of what I saw is upper bounded by the quality of my notes. If anyone has any clarifications or corrections, please let me know!

A Rate-Distortion Approach to Anonymous Networking
(Parvathinathan Venkitasubramaniam and Lang Tong)
This paper looked at networks in which we want to keep eavesdropping nodes from doing traffic analysis. There is a technique called Chaum mixing that can be used by trusted routers to scramble traffic and increase anonymity. The measure of anonymity they use is the equivocation at the eavesdropper normalized by the entropy of the sessions. The problem then becomes a rate-distortion problem for scheduling sessions in the network. They use two classes of nodes, covert relays and normal relays, and try to find tradeoffs between the density of covert relays and the anonymity.

Wireless Network Information Flow
(Salman Avestimehr, Suhas Diggavi, and David Tse)
Suhas presented this paper, which was on the deterministic model that David presented at ITW. Since we have broadcast and multiple access in wireless networks, the signal interactions become quite complex. The deterministic model reduces these networks to deterministic finite-field networks, which have a simple cut-set upper bound in terms of the rank of the associated matrices. This outer bound is achievable as well, so the capacity of this simplified model can be calculated, and it looks a bit like a generalized version of the max-flow min-cut (Ford-Fulkerson) theorem.

Source Coding with Mismatched Distortion Measures
(Urs Niesen, Devavrat Shah, and Gregory Wornell)
If the distortion measure that we are to use is not known at the time of encoding, what should we do? To be more concrete, suppose we have f and g as distortion measures. Given a rate R and distortion D for the source under distortion measure f, what is the smallest distortion E we can achieve under the same code with measure g? It can be arbitrarily bad! But we can also write the performance using a single-letter characterization. There was a second problem Urs discussed, but I didn’t quite catch it — I think it was fixing D under f and asking what E we can get under g, regardless of the rate.

A Deterministic Approach to Wireless Relay Networks
(Salman Avestimehr, Suhas Diggavi, and David Tse)
This was a continuation of the work in the earlier talk, providing the details of the proof. The first part shows that if all paths from source to sink in the network are of equal length, then the capacity is the cut-set bound. In order to deal with unequal length paths, they use a time expansion argument (which looked a bit like a trellis, actually) with scheduling to avoid different flows mixing. It’s a bit tricky to show that you get back to the cut set bound with this, but I assume the details are in the paper. The other part was to show that this model can be used to construct a scheme for the original Gaussian channel that is “close” to optimal, and that the maximum gap (one bit in many cases) occurs in a very small regime of parameters.

The Approximate Capacity of a One-Sided Gaussian Interference Channel
(Guy Bresler, Abhay Parekh, and David Tse)
This was yet another paper using the deterministic model, this time for the many-to-one (and by a clever duality argument on the interference pattern, the one-to-many) interference channel, in which one transmit-receive pair experiences (inflicts) interference from (on) everyone else. The central idea here is interference alignment — if one signal level experiences interference from a transmitter, it can experience interference from other transmitters with no problem (or so I understood it). Guy showed that the Han-Kobayashi scheme is not optimal because the interference pattern is “too crowded,” and gave a lattice-based scheme that emulates the deterministic model.

SSP 2007 : aftermath

I just finished up attending the 2007 Statistical Signal Processing Workshop in Madison. Normally I would have blogged about all the talks I saw, but (a) the only “talks” were tutorials and plenaries, and (b) I’m a little too burned out to write much. Despite the fact that I applied to grad schools for signal processing and took the DSP prelim exam at Berkeley, I’m not really much of a signal processing guy these days. All of the submitted papers were given as posters, and despite being organized into “sessions,” all the posters were in the same room, so there were about 30-40 presentations going on in parallel for 2 hours at a time. I was a bit dubious at first, since my experience with poster presentations is that they have a large effort-to-value ratio, but this format worked for me. I was unfamiliar with about 80% of the problems that people were trying to solve, so going to talks would just have left me confused. Instead, I could at least talk to someone and get the point of what they were trying to do, if not the scale of their contribution.

The one downside to the conference for me was that several of the posters I wanted to see were in the same session as mine, so I ended up missing them! Luckily I was next to “Distributed Average Consensus using Probabilistic Quantization,” which is right up my alley (from my work on gossip algorithms), but I could only listen in every once in a while. If only we could encode our talks using an erasure code — then if I listen to 7 minutes out of 10 I could interpolate the other 3…