- On Optimal Information Storage in Synapses
(Lav R. Varshney and Dmitri B. Chklovskii)
Lav presented a mathematical model in which information storage in synapses is treated as a communication channel with input costs, and then calculated the capacity per unit cost to derive some principles for good memory design. These principles are consistent with biological evidence about the brain, which shows that reality is consistent with optimizing efficiency in this model (but not, naturally, that nature is actually optimizing this model). The model is that we are coding over a set of n synapses in space. The input alphabet X is the synaptic strength, so a codeword is a sequence of synaptic strengths for the n synapses. The cost function b(x) for a particular synaptic strength measures the volume of a synapse, so that stronger synapses are larger. Using an SNR model for the channel, they can calculate the capacity per unit cost. The model can explain several properties, such as the optimal exponent of the cost function, the profile of synaptic strengths, the benefits of noisy synapses, and the reasons for sparse connectivity.
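For context, here is the capacity per unit cost in its standard (Verdú) form; the talk's specific channel and cost model may differ in detail, so treat this as the generic definition rather than the paper's exact statement. When a zero-cost input symbol exists (plausibly a silent or absent synapse), the optimization simplifies to a single-letter divergence.

```latex
% Capacity per unit cost for a memoryless channel P_{Y|X} with cost function b(x):
\[
  \mathsf{C} \;=\; \sup_{P_X} \frac{I(X;Y)}{\mathbb{E}[b(X)]} .
\]
% If there is a zero-cost symbol x_0 with b(x_0) = 0, this reduces (Verdu, 1990) to
\[
  \mathsf{C} \;=\; \sup_{x \neq x_0} \frac{D\!\left(P_{Y|X=x}\,\middle\|\,P_{Y|X=x_0}\right)}{b(x)} .
\]
```
- Learning from Compressed Observations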
(Maxim Raginsky)
Maxim presented a setup in which he coupled the “learning from examples” paradigm of machine learning to a rate-distortion problem. In learning from a set of examples {Xi, Yi}, we are interested in finding a map f : X -> Y that can assign new Y values to fresh samples X. Two targets for a good classifier are minimizing the empirical risk and minimizing the generalization error, and it is the latter criterion on which this work focused. The problem is this: if the X samples are available to both the encoder and decoder but the Y's must be compressed to rate R, what is the set of rate versus generalization-error tradeoff curves? Using some heavier functional-analytic machinery, he gets an expression for an achievable rate as the distortion-rate function over a set of distortions depending on Y.
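For reference, the two criteria mentioned above have standard definitions; these are generic machine-learning quantities, not anything taken from the paper itself.

```latex
% Empirical risk of a hypothesis f on n samples (X_i, Y_i), for a loss function \ell:
\[
  \widehat{L}_n(f) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(X_i), Y_i\bigr),
\]
% versus the generalization error (expected risk) under the data distribution P_{XY}:
\[
  L(f) \;=\; \mathbb{E}_{P_{XY}}\!\bigl[\ell\bigl(f(X), Y\bigr)\bigr].
\]
% The question in the talk is how L(f) degrades when the Y_i's reach the learner
% only through a description of rate R bits per sample.
```
- Intermediate Performance of Rateless Codes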
(Sujay Sanghavi)
Designing a rateless code for erasures, such as a Fountain or Raptor code, corresponds to designing a degree distribution on the graph that defines the code. In particular, the soliton distribution can be shown to achieve capacity. The question here is the decoder's performance when the number of unerased symbols is less than the number needed to decode: can we partially recover? That is, if we need K symbols but only get rK symbols, for what t can we recover tK symbols correctly? The soliton distribution performs poorly in this respect; the error is catastrophic. Sujay gets a pointwise outer bound on the performance of any code (via linear programming), where each point corresponds to a different input distribution. The proof uses some embeddings into a fluid model due to Darling and Norris.
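As a concrete illustration of what "intermediate performance" means, here is a small simulation sketch: it builds the ideal soliton degree distribution, generates LT-style output symbols over K inputs, keeps only rK of them, and runs the standard peeling decoder to see what fraction t of the inputs is recovered. This is my own toy reconstruction for intuition, not code from the talk.

```python
import random

def ideal_soliton(K):
    """Ideal soliton pmf rho(d) on degrees d = 1..K (index 0 unused)."""
    rho = [0.0] * (K + 1)
    rho[1] = 1.0 / K
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def lt_encode(K, n_out, rho, rng):
    """Each output symbol is represented by the set of input indices it XORs."""
    degrees = list(range(1, K + 1))
    outputs = []
    for _ in range(n_out):
        d = rng.choices(degrees, weights=rho[1:], k=1)[0]
        outputs.append(set(rng.sample(range(K), d)))
    return outputs

def peel(K, outputs):
    """Standard peeling decoder: repeatedly resolve degree-1 output symbols,
    removing resolved inputs from the remaining output sets."""
    recovered = set()
    ripple = [s for s in outputs if len(s) == 1]
    while ripple:
        s = ripple.pop()
        if not s:
            continue               # already emptied by earlier peeling
        (i,) = tuple(s)
        if i in recovered:
            continue
        recovered.add(i)
        for t in outputs:
            if i in t:
                t.discard(i)
                if len(t) == 1:
                    ripple.append(t)
    return len(recovered) / K      # fraction t of inputs recovered

if __name__ == "__main__":
    rng = random.Random(0)
    K, r = 1000, 0.9               # receive only r*K output symbols
    outputs = lt_encode(K, int(r * K), ideal_soliton(K), rng)
    t = peel(K, outputs)
    print(f"received {r:.2f}K symbols, recovered t = {t:.3f} of the inputs")
```
- On Secure Distributed Source Coding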
(Vinod Prabhakaran and Kannan Ramchandran)
Suppose Alice and Bob observe correlated sources and Alice wants to communicate her source to Bob. Eve has a source of her own correlated with Alice and Bob’s, and can eavesdrop on their communication. They formulate their problem in terms of the leakage rate, the rate at which information leaks to Eve. For one-way communication from Alice to Bob they characterize the leakage rate completely, and show it is achievable using a “secure Slepian-Wolf” coding scheme. For two-way communication they get bounds in terms of the secret key rate, and they also get the leakage capacity for a certain multiple-access setup.
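For orientation, the non-secure baseline is classical Slepian-Wolf coding with side information at the decoder, and the leakage rate is an equivocation-style quantity. The schematic below uses one common formulation; it is not necessarily the exact definition used in the paper.

```latex
% Without security constraints, Alice can describe X^n to Bob (who has Y^n) at any rate
\[
  R \;\ge\; H(X \mid Y) \qquad \text{(Slepian--Wolf)}.
\]
% With Eve observing the public message M together with her own side information Z^n,
% one common way to measure the leakage rate is
\[
  \Delta \;=\; \limsup_{n \to \infty} \frac{1}{n}\, I\bigl(X^n;\, M, Z^n\bigr),
\]
% and the goal is to characterize the smallest achievable \Delta for a given setup.
```
- A Hot Channel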
(Tobias Koch, Amos Lapidoth, and Paul P. Sotiriadis)
This paper used a (rather simplified) model for communication on a chip. In this model, the noise variance is a linear combination of a constant additive-noise term and the average power expended by the transmitter in earlier channel uses. This “heating up” property makes the channel a bit tricky to handle. One interesting way to attack it is by looking at the capacity per unit cost, where the cost is the SNR; this was done in earlier work at ISIT. The main result here is a characterization of the capacity itself in terms of the coefficient sequence that dictates how the noise variance is computed. In particular, if the memory in this noise model decays geometrically, then the capacity is bounded.
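A schematic way to write the channel, paraphrasing the description above rather than the paper's exact notation, is as an additive-noise channel whose noise variance feeds on past transmit power:

```latex
% Channel output at time k, with zero-mean noise Z_k of time-varying variance:
\[
  Y_k \;=\; x_k + Z_k, \qquad
  \operatorname{Var}(Z_k) \;=\; \sigma^2 \;+\; \sum_{\ell \ge 1} \alpha_\ell\, \mathbb{E}\!\left[x_{k-\ell}^2\right],
\]
% so earlier transmissions "heat up" the receiver. Per the summary above, if the
% coefficients \alpha_\ell decay geometrically, the capacity is bounded.
```
- A Deterministic Model for Wireless Channels and its Applications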
(David Tse)
David presented a new way of thinking about wireless channels in which the transmitters and receivers exchange only bits. The goal is to do away with the noise and instead focus on how different users’ signals interact and interfere with each other. We think of each bit as occupying a signal level, so that there is an ordering from MSB to LSB, where the MSB occupies a “high power level.” This is motivated by the approximation log(1 + SNR) ≈ log(SNR) at high SNR. The SNR determines how many of the transmitted bits are received at the decoder. So in a broadcast setting, the transmitter may send 5 bits; one receiver will get all 5, and the other will only get the 3 most significant bits (MSBs). Interference is modeled as XORing the bits. He stepped through several examples of building deterministic models, and then addressed how close the rates derived from these models are to the optimal rates, along the lines of the “within one bit” results for the interference channel. There will be a number of talks at Allerton on how to use this model for different channels, including general relay networks, MIMO channels, and the interference channel.
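To make the bit-level picture concrete, here is a tiny sketch (my own illustration, not from the talk) of the broadcast example above: the transmitter sends 5 bits, the strong receiver sees all 5 levels, the weak receiver sees only the 3 MSBs, and interference from a weaker second signal is modeled as a bitwise XOR on the overlapping levels.

```python
def received_bits(tx_bits, n_levels):
    """Deterministic channel: the receiver observes only the n_levels most
    significant bits of the transmitted bit vector (MSB first)."""
    return tx_bits[:n_levels]

def interfere(bits_a, bits_b):
    """Interference at a receiver: XOR the two bit vectors level by level,
    padding the weaker (shorter) signal with zeros at the top levels."""
    n = max(len(bits_a), len(bits_b))
    a = [0] * (n - len(bits_a)) + list(bits_a)
    b = [0] * (n - len(bits_b)) + list(bits_b)
    return [x ^ y for x, y in zip(a, b)]

if __name__ == "__main__":
    tx = [1, 0, 1, 1, 0]                 # 5 bits, MSB first
    print(received_bits(tx, 5))          # strong receiver: all 5 bits
    print(received_bits(tx, 3))          # weak receiver: 3 MSBs only -> [1, 0, 1]
    # A second user's signal arrives 2 levels weaker, so its bits collide
    # only with the lower levels of tx at this receiver.
    other = [1, 1, 0]
    print(interfere(received_bits(tx, 5), other))
```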