A postdoctoral position is available at the University of Michigan Electrical Engineering and Computer Science Department for a project related to anomaly detection in networked cyber-physical systems. The successful applicant will have knowledge in one or more of the following topics: convex optimization and relaxations, compressed sensing, distributed optimization, submodularity, control and dynamical systems or system identification. The project will cover both theory and algorithm development and some practical applications in fault and attack detection in transportation and energy networks. The position can start anytime in 2014 or early 2015. This is a one year position, renewable for a second year. Interested candidates should contact Necmiye Ozay at email@example.com with a CV and some pointers to representative publications.
Computer scientists often look at Web pages in the same way that my friend looked at farms. People think that Web browsers are elegant computation platforms, and Web pages are light, fluffy things that you can edit in Notepad as you trade ironic comments with your friends in the coffee shop. Nothing could be further from the truth. A modern Web page is a catastrophe. It’s like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks.
It’s a fun read, but also a sentiment that may echo with those who truly believe in “clean slate networking.” I remember going to a tutorial on LTE and having a vision of what 6G systems will look like. One thing that is not present, though, is the sense that the system is unstable, and that the introduction of another feature in communication systems will cause the house of cards to collapse. Mickens seems to think the web is nearly there. The reason I thought of this is the recent fracas over the US ceding control of ICANN, and the sort of doomsaying around that. From my perspective, network operators are sufficiently conservative that they can’t/won’t willy-nilly introduce new features that are only half-supported, like in the Web. The result is a (relatively) stable networking world that appears to detractors as somewhat Jurassic.
I’d argue (with less hyperbole) that some of our curriculum ideas also suffer from the accretion of old ideas. When I took DSP oh-so-long ago (13 years, really?) we learned all of this Direct Form Transposed II blah blah which I’m sure was useful for DSP engineers at TI to know at some point, but has no place in a curriculum now. And yet I imagine there are many places that still teach it. If anyone still reads this, what are the dinosaurs in your curriculum?
Applications are invited for a Postdoc position (full-time, up to 2 years) at INRIA-ENS in Paris. The position is funded by the ANR GAP grant “Graphs, Algorithms and Probability.”
Requirements are a PhD degree in Computer Science or Mathematics and a strong background in some of the following topics:
- discrete probability
- statistical learning
- combinatorial optimization
- stochastic networks
Applications must include a research statement, a CV and the names and contacts of references. All material should be sent by email to Marc Lelarge. Please indicate in the subject POSTDOC GAP.
- Intention of application (short email): as soon as possible
- Deadline for application: December 1st, 2013
- Suggested starting dates: Jan.-Feb. 2014
I’ll round out the end of my ISIT blogging with very brief takes on a few more papers. I took it pretty casually this year in terms of note taking, and while I attended many more talks, my notes for most of them consist of a title and a star next to the ones where I want to look at the paper more closely. That’s probably closer to how most people attend conferences, only they probably use the proceedings book. I actually ended up shredding the large book of abstracts to use as bedding for my vermicompost (I figured they might appreciate eating a little Turkish paper for a change of diet).
On Connectivity Thresholds in Superposition of Random Key Graphs on Random Geometric Graphs
B Santhana Krishnan (Indian Institute of Technology, Bombay, India); Ayalvadi Ganesh (University of Bristol, United Kingdom); D. Manjunath (IIT Bombay, India)
This looked at a model where you have a random geometric graph (RGG) together with a uniformly chosen random subset of $k$ keys, drawn from a pool of $P$ keys, at each node. The subset is the set of keys available at each node; two nodes can talk (securely) if they share a key in common. We keep an edge in the RGG only if the link can be secured. The question is whether the secure-link graph is connected. It turns out that the important scaling is in terms of $\frac{k^2}{P} r^2$, where $r$ is the connectivity radius of the RGG. This sort of makes sense, as the probability that two nodes share a key is roughly $k^2/P$, so the keys provide a kind of discount factor on the effective radius needed for connectivity — if the number of keys per node is small then you need a larger radius to compensate.
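A quick way to get a feel for this model is to simulate it. Below is a sketch with my own parameter names ($k$ keys per node from a pool of $P$): sample an RGG on the unit square, keep only edges whose endpoints share a key, and test connectivity of the result.

```python
import math
import random
from itertools import combinations

def secure_rgg_connected(n=60, r=0.35, pool=20, keys_per_node=5, seed=0):
    """Sample an RGG on the unit square with radius r; keep an edge only
    if the endpoints' random key rings intersect; test connectivity."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    rings = [set(rng.sample(range(pool), keys_per_node)) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        # secure link: within radius AND at least one shared key
        if math.dist(pts[i], pts[j]) <= r and rings[i] & rings[j]:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = {0}, [0]  # graph search from node 0
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n
```

Sweeping `keys_per_node` for a fixed `r` (or vice versa) over many seeds gives a rough empirical picture of the discount-factor effect described above.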
Secure Network Coding for Distributed Secret Sharing with Low Communication Cost
Nihar B Shah (University of California at Berkeley, USA); K. v. Rashmi (University of California at Berkeley, USA); Kannan Ramchandran (University of California at Berkeley, USA)
This paper was on secret sharing — a dealer wants to distribute shares of a secret such that any $k$ of them can be used to reconstruct the secret but $k-1$ or fewer reveal nothing. The idea here is that the dealer has to distribute these shares over a network, which means that if a receiver is not connected directly to the dealer then its share will be passed insecurely through another node. Existing approaches based on pairwise agreement protocols are communication intensive. The idea here is to use ideas from network coding to share masked versions of shares so that intermediate nodes will not get valid shares from others. To do this the graph needs to satisfy a particular condition ($k$-propagating), which is defined in the paper. A neat take on the problem, and worth looking at if you’re interested in that sort of thing.
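The network coding construction is in the paper; to illustrate just the threshold property of the shares themselves, here is a minimal sketch of classic Shamir secret sharing over a prime field (not the authors’ scheme — any $k$ points determine the degree-$(k-1)$ polynomial, and hence the secret at $x=0$):

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, k, n, rng=random):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    # share at x is the polynomial evaluated at x, for x = 1..n
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any $k$ of the $n$ shares recover the secret; fewer than $k$ leave it information-theoretically hidden.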
Conditional Equivalence of Random Systems and Indistinguishability Proofs
Ueli Maurer (ETH Zurich, Switzerland)
This was scheduled to be in the same session as my paper with Vinod, but was moved to an earlier session. Maurer’s “programme,” as it were, is to think about security via three kinds of systems — real systems with real protocols and pseudorandomness, idealized systems with real protocols but true randomness, and perfect systems which just exist on paper. The first two are indistinguishable from a computational perspective (that’s what pseudorandomness buys you), and the goal is to show that the last two are information-theoretically indistinguishable. This conceptual framework is actually useful for me to separate out the CS and IT sides of the security design question. This paper tried to set up a framework in which there is a distinguisher $D$ which makes queries to two systems and, based on the answers, has to decide whether they are the same or different. I think if you’re interested in sort of a systems-theoretic take on security you should take a look at this.
Tight Bounds for Universal Compression of Large Alphabets
Jayadev Acharya (University of California, San Diego, USA); Hirakendu Das (University of California San Diego, USA); Ashkan Jafarpour (UCSD, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)
The main contribution of this paper was to derive bounds on compression of patterns of sequences over unknown/large alphabets. The main result is that the worst-case pattern redundancy for i.i.d. distributions scales basically like $n^{1/3}$, where $n$ is the blocklength. The new upper bound uses some tricks like sampling a random number of points, where the number of samples is Poisson distributed, and a partition of the set of distributions induced by Poisson sampling.
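For concreteness, the “pattern” of a sequence replaces each symbol by the order in which it first appears, so the alphabet identities are abstracted away — a minimal sketch:

```python
def pattern(seq):
    """Replace each symbol by the order of its first appearance,
    e.g. 'abracadabra' -> [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]."""
    first_seen = {}
    out = []
    for s in seq:
        if s not in first_seen:
            first_seen[s] = len(first_seen) + 1  # next unused index
        out.append(first_seen[s])
    return out
```

Two sequences over completely different alphabets can share the same pattern, which is exactly why pattern compression sidesteps the large-alphabet problem.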
To Surprise and Inform
Lav R. Varshney (IBM Thomas J. Watson Research Center, USA)
Lav talked about communication over a channel where the goal is to communicate subject to a constraint on the Bayesian surprise $D(P_{X|Y=y} \| P_X)$, where $X$ and $Y$ are the input and output of the channel. He gets a single-letter expression for the capacity under a bound on the max surprise and gives an example for which the same distribution maximizes mutual information and achieves the minimax surprise. The flip side is to ask for capacity when each output should be surprising (or “attention seeking”). He gets a single-letter capacity here as well, but the structure of the solution seems to be a bit more complicated.
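As a toy illustration (my own example, not Lav’s), the Bayesian surprise of each channel output — the KL divergence between the posterior on the input and the prior — falls straight out of Bayes’ rule:

```python
from math import log2

def bayesian_surprise(p_x, channel):
    """For each output y, compute D(P_{X|y} || P_X) in bits.
    channel[x][y] = P(y | x)."""
    nx, ny = len(p_x), len(channel[0])
    p_y = [sum(p_x[x] * channel[x][y] for x in range(nx)) for y in range(ny)]
    surprises = []
    for y in range(ny):
        post = [p_x[x] * channel[x][y] / p_y[y] for x in range(nx)]  # Bayes
        surprises.append(sum(q * log2(q / p)
                             for q, p in zip(post, p_x) if q > 0))
    return surprises

# BSC with crossover 0.1 and uniform input: by symmetry,
# both outputs are equally surprising
s = bayesian_surprise([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
```

A max-surprise constraint then caps how far any single output is allowed to move the receiver’s beliefs.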
Logarithmic Sobolev inequalities and strong data processing theorems for discrete channels
Maxim Raginsky (University of Illinois at Urbana-Champaign, USA)
Max talked about how the strong data processing inequality (DPI) is basically a log-Sobolev inequality (LSI) of the kind used in measure concentration. For a Markov chain $U \to X \to Y$, the strong DPI says that $I(U;Y) \le \eta\, I(U;X)$ for some $\eta < 1$ depending on the channel from $X$ to $Y$, so the idea is to get bounds on the best constant $\eta$, i.e. on the ratio $I(U;Y)/I(U;X)$. What he does is construct a hierarchy of LSIs in which the strong DPI fits and then gets bounds on this ratio in terms of the best constants for the LSIs. The details are a bit hairy, and besides, Max has his own blog so he can write more about it if he wants…
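The strong DPI can be stated equivalently for divergences: $D(PW \| QW) \le \eta\, D(P \| Q)$ for a channel $W$. As a sanity check (my own toy experiment), for a BSC with crossover $\delta$ the best constant is known to be $(1-2\delta)^2$, and a crude random search over input pairs respects that bound:

```python
import random
from math import log

def kl(p, q):
    """KL divergence in nats between two distributions on the same support."""
    return sum(a * log(a / b) for a, b in zip(p, q) if a > 0)

def push(p, delta):
    """Push a binary distribution through a BSC with crossover delta."""
    return [p[0] * (1 - delta) + p[1] * delta,
            p[0] * delta + p[1] * (1 - delta)]

delta = 0.2
eta = (1 - 2 * delta) ** 2  # known best contraction constant for a BSC
rng = random.Random(1)
ratios = []
for _ in range(2000):
    a, b = rng.uniform(0.01, 0.99), rng.uniform(0.01, 0.99)
    if abs(a - b) < 1e-6:
        continue  # avoid dividing by D(P||Q) = 0
    P, Q = [a, 1 - a], [b, 1 - b]
    ratios.append(kl(push(P, delta), push(Q, delta)) / kl(P, Q))
max_ratio = max(ratios)
```

Every sampled ratio stays below $\eta = 0.36$, and pairs near the uniform distribution get close to it.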
Building Consensus via Iterative Voting
Farzad Farnoud (University of Illinois, Urbana-Champaign, USA); Eitan Yaakobi (Caltech, USA); Behrouz Touri (University of Illinois Urbana-Champaign, USA); Olgica Milenkovic (University of Illinois, USA); Jehoshua Bruck (California Institute of Technology, USA)
This paper was about rank aggregation, or how to take a bunch of votes expressed as permutations/rankings of options and produce a single aggregate ranking. The model is one in which people iteratively change their ranking based on the current rankings. For example, one could construct the pairwise comparison graph (à la Condorcet) and then have people change their rankings when they disagree with the majority on an edge. They show conditions under which this process converges (the graph should not have a cycle) and show that if there is a Condorcet winner, then after this process everyone will rank the Condorcet winner first. They also look at a Borda count version of this problem, but to my eye that just looked like an average consensus method; it was at the end of the talk, though, so I might have missed something.
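A pairwise-majority (Condorcet) winner — the object these dynamics converge to when one exists — can be computed directly from a vote profile. A small sketch, with my own function name:

```python
def condorcet_winner(rankings):
    """Return the candidate that beats every other candidate in a strict
    pairwise majority, or None if no such candidate exists.
    Each ranking is a list of candidates, most-preferred first."""
    candidates = rankings[0]
    # position of each candidate in each voter's ranking (lower = preferred)
    pos = [{c: r.index(c) for c in r} for r in rankings]
    for a in candidates:
        if all(sum(p[a] < p[b] for p in pos) * 2 > len(rankings)
               for b in candidates if b != a):
            return a
    return None
```

The second test below is the classic three-voter cycle (a beats b, b beats c, c beats a), where no winner exists.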
Information-Theoretic Study of Voting Systems
Eitan Yaakobi (Caltech, USA); Michael Langberg (Open University of Israel, Israel); Jehoshua Bruck (California Institute of Technology, USA)
Eitan gave this talk and the preceding talk. This one was about looking at voting through the lens of coding theory. The main issue is this — what sets of votes or distributions of vote profiles will lead to a Condorcet winner? Given a set of votes, one could look at the fraction of voters who rank candidate $j$ in the $i$-th position and then try to compute entropies of the resulting distributions. The idea is somehow to characterize the existence or lack of a Condorcet winner in terms of distances (Kendall tau) and these entropy measures. This is different from looking at probability distributions on permutations and asking about the probability of there being a Condorcet cycle.
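The positional statistics mentioned above are easy to compute. Here is my own rendering of the idea (not necessarily the paper’s exact measure): the entropy, across voters, of which candidate occupies each rank position.

```python
from math import log2

def positional_entropies(rankings):
    """For each rank position i, the entropy (bits) of the empirical
    distribution of which candidate voters placed in position i."""
    m, n = len(rankings[0]), len(rankings)
    ents = []
    for i in range(m):
        counts = {}
        for r in rankings:
            counts[r[i]] = counts.get(r[i], 0) + 1
        ents.append(-sum((c / n) * log2(c / n) for c in counts.values()))
    return ents
```

Unanimous profiles give zero entropy in every position; maximally split profiles push every position toward $\log_2 m$ bits.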
Brute force searching, the typical set and Guesswork
Mark Christiansen (National University of Ireland Maynooth, Ireland); Ken R Duffy (National University of Ireland Maynooth, Ireland); Flávio du Pin Calmon (Massachusetts Institute of Technology, USA); Muriel Médard (MIT, USA)
Suppose an item is chosen from a list and we want to guess the choice that is made. We’re only allowed to ask questions of the form “is the item $x$?” Suppose now that the list is a list of codewords of blocklength $n$ drawn i.i.d. according to a distribution $P$. This paper looks at the number of guesses one needs when the word is drawn uniformly from the typical set for $P$ versus when it is drawn from $P$ conditioned on being in the typical set. The non-uniformity of the latter turns out to make the guessing problem a lot easier.
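A small numerical experiment (my own parameters, not from the paper) illustrates the gap: enumerate binary strings of length 12, form the typical set, and compare the expected number of guesses under the two distributions when guessing in decreasing-probability order. Under the uniform distribution the order is irrelevant and the expectation is $(M+1)/2$ for a set of size $M$.

```python
from itertools import product
from math import log2

p, n, eps = 0.3, 12, 0.1

def logprob(seq):
    """log2 probability of a binary sequence under i.i.d. Bernoulli(p)."""
    k = sum(seq)
    return k * log2(p) + (n - k) * log2(1 - p)

H = -p * log2(p) - (1 - p) * log2(1 - p)
# typical set: empirical log-loss per symbol within eps of the entropy
typical = [s for s in product((0, 1), repeat=n)
           if abs(-logprob(s) / n - H) <= eps]
M = len(typical)

uniform_guesses = (M + 1) / 2  # uniform over the typical set
# conditional distribution: P restricted to the typical set, renormalized
Z = sum(2 ** logprob(s) for s in typical)
cond = sorted((2 ** logprob(s) / Z for s in typical), reverse=True)
cond_guesses = sum((i + 1) * q for i, q in enumerate(cond))
```

Even at this tiny blocklength (only two types land in the typical set), the conditional guesser needs noticeably fewer guesses on average; the paper’s point is that this gap is exponentially significant as $n$ grows.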
Rumor Source Detection under Probabilistic Sampling
Nikhil Karamchandani (University of California Los Angeles, USA); Massimo Franceschetti (University of California at San Diego, USA)
This paper looked at an SI model of infection on a graph — nodes are either Susceptible (S) or Infected (I), and an infected node infects each susceptible neighbor after an exponentially distributed waiting time. The idea is that the rumor starts somewhere and infects a bunch of people, and then you get to observe/measure the network. You want to find the source. This was studied by Shah and Zaman under the assumption of perfect observation of all nodes. This work looked at the case where nodes randomly report their infection state, so you only get an incomplete picture of the infection state. They characterize the effect of the reporting probability on the excess error and show that for certain tree graphs, incomplete reporting is as good as full reporting.
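A minimal caricature of the observation model (my own simplification — discrete time with per-edge infection probability `beta`, rather than the paper’s continuous-time exponential waiting times):

```python
import random

def si_spread_with_reporting(neighbors, source, steps, beta, report_p, seed=0):
    """Simulate discrete-time SI spread from `source`, then have each
    infected node report independently with probability report_p.
    neighbors: dict mapping node -> list of neighbor nodes."""
    rng = random.Random(seed)
    infected = {source}
    for _ in range(steps):
        newly = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in infected and rng.random() < beta:
                    newly.add(v)
        infected |= newly
    # the observer only sees the nodes that report
    reported = {u for u in infected if rng.random() < report_p}
    return infected, reported
```

A source estimator would then work from `reported` alone; the question in the paper is how much the gap between `reported` and `infected` costs you.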
Again a caveat — these are the talks in which I took reasonable enough notes to write anything coherent.
Green Communication: From Maxwell’s Demon to “Informational Friction”
Pulkit Grover talked about trying to tie a physical interpretation to the energy used in communication during computation. Physicists might argue that reversible computation costs nothing, but this ignores friction and noise. Pulkit discussed a simple network model to account for “informational friction” that penalizes the bit-distance product in communicating on a chip. See also Pulkit’s short video on the topic.
Energy Harvesting Receivers
Hajar Mahdavi-Doost, Roy Yates
Roy talked about a model in which receivers have to harvest the energy they need for sampling/buffering/decoding the transmissions. These three tasks cost different amounts, and in particular, the rate at which the receiver samples the output dictates the other parameters. The goal is to choose a rate which helps meet the decoder energy requirements. Because the receiver has to harvest the energy it needs, it has to design a policy to switch between the three operations while harvesting the (time-varying) energy available to it.
Multiple Access and Two-way Channels with Energy Harvesting and Bidirectional Energy Cooperation
Kaya Tutuncuoglu Aylin Yener
Unlike the previous talk, this was about encoders which have to transmit energy to the receivers — there’s a tradeoff between transmitting data and energy, and in the MAC and TWC there is yet another dimension in how the two users can cooperate. For example, they can cooperate in energy transmission but not in data transmission. There were a lot of results in here, but there was also a discussion of policies for the users. In particular, a “procrastination” strategy turns out to work well (rejoice!).
An equivalence between network coding and index coding
Michelle Effros, Salim El Rouayheb, Michael Langberg
The title says it all! For every network coding problem (multiple unicast, multicast, whatever), there exists a corresponding index coding problem (constructed via a reduction) such that a solution to the latter can be easily translated to a solution for the former. This equivalence holds for all network coding problems, not just linear ones.
Crowd-sourcing epidemic detection
Constantine Caramanis, Chris Milling, Shie Mannor, Sanjay Shakkottai
Suppose we have a graph and we can see some nodes are infected. This paper was on trying to distinguish between whether the infected nodes started from a single point infection spread via an SI model, or just from a random pattern of infection. They provide two algorithms for doing this and then address how to deal with false positives using ideas from robust statistics.
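The paper’s actual tests are more refined, but the basic intuition — an epidemic grows one big connected cluster while random infections scatter into many small ones — can be captured by a crude statistic like the size of the largest infected component (my own illustrative sketch, not the authors’ algorithm):

```python
def largest_infected_component(neighbors, infected):
    """Size of the largest connected component of the subgraph induced
    by the infected nodes. neighbors: dict node -> list of neighbors."""
    infected = set(infected)
    seen, best = set(), 0
    for start in infected:
        if start in seen:
            continue
        comp, stack = 0, [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in neighbors[u]:
                if v in infected and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best
```

Thresholding this statistic gives a toy version of the epidemic-vs-random hypothesis test; handling false positives robustly is where the paper’s contribution comes in.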
As a theoretical engineer, I find myself getting lulled into the trap of what I’m now starting to call “lazy generalization.” It’s a form of bland motivation that you often find at the beginning of papers:
Sensor networks are large distributed collections of low-power nodes with wireless radios and limited battery power.
Really? All sensor networks are like this? I think not. Lots of sensor networks are wired (think of the power grid) but still communicate wirelessly. Others communicate through wires. This is the kind of ontological statement that metastasizes into the research equivalent of a meme — 3 years after Smart Dust appears, suddenly all papers are about dust-like networks, ignoring the vast range of other interesting problems that arise in other kinds of “sensor networks.”
Another good example is “it is well known that most [REAL WORLD THING] follows a power law,” which bugs Cosma to no end. We then get lots of papers which start with something about power laws and then proceed to analyze some algorithms which work well on graphs which have power law degree distributions. And then later we get statements like “all natural graphs follow power laws, so here’s a theory for those graphs, which tells us all about nature.”
Yet another example of this is sparsity. Sparsity is interesting! It lets you do a lot of cool stuff, like compressed sensing. And it’s true that some real world signals are approximately sparse in some basis. However, turn the crank and we get papers which make crazy statements approximately equal to “all interesting signals are sparse.” This is trivially true if you take the signal itself as a basis element, but in the way it’s meant (e.g. “in some standard basis”), it is patently false.
So why are these lazy generalizations? It’s a kind of fallacy which goes something like:
- Topic A is really useful.
- By assuming some Structure B about Topic A, we can do lots of cool/fun math.
- All useful problems have Structure B.
Pattern matching, we get A = [sensor networks, the web, signal acquisition], and B = [low power/wireless, power laws, sparsity].
This post may sound like I’m griping about these topics being “hot” — I’m not. Of course, when a topic gets hot, you get a lot of (probably incremental) papers all over the place. That’s the nature of “progress.” What I’m talking about is the third point. When we go back to our alabaster spire of theory on top of the ivory tower, we should not fall into the same trap of saying that “by characterizing the limits of Structure B I have fundamentally characterized Topic A.” Maybe that’s good marketing, but it’s not very good science, I think. Like I said, it’s a trap that I’m sure I’m guilty of stepping into on occasion, but it seems to be creeping into a number of things I’ve been reading lately.