Plenary: Pseudo-codewords and iterative decoding: A Guided Tour
by Pascal Vontobel.
Pascal gave a very nice tutorial presentation of the recent results on linear programming decoding, its relation to iterative decoding through graph covers (the eagle vs. the mountain lion), and the role of the vertices of the fundamental polytope, called pseudo-codewords. I saw that the slides are available online on the pseudo-codewords homepage and contain many interesting figures, like the decision regions and numerous graph-cover examples.
On the Minimum Number of Transmissions in Single-Hop Wireless Coding Networks
by Salim Y. El Rouayheb, Mohammad Asad R. Chaudhry, and Alex Sprintson
This paper studies the 'single-hop' wireless network coding problem, where a base station is trying to communicate a number of packets to a set of receivers. Each receiver already has (say, from previous transmissions) a subset of the packets, and a list of desired packets it wants to receive. The question is to find the minimum number of transmissions by the base station (each transmission is overheard by all the receivers) so that every client can decode the packets it is interested in. For example, if receiver R1 has packet A and wants B while R2 has B and wants A, a single transmission of A+B suffices. Salim showed that finding the minimum number of transmissions for a general set of receivers is NP-hard over GF(2). Further, the minimum number of transmissions is non-monotone in the field size and can depend on the characteristic of the field (which was quite surprising to me; I was guessing that there would exist a field size large enough above which there would be no difference). They also show how a special case of this problem (assuming no memory at the decoders) boils down to clique partition, and propose a simple algorithm that works well on random instances.
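To make the toy example concrete, here is a minimal sketch in Python of the two-receiver case: the base station broadcasts A XOR B, and each receiver recovers its missing packet from the packet it already holds. The packet values below are placeholders chosen only for illustration.

```python
# Toy illustration of the two-receiver example: one coded transmission A ^ B
# serves both receivers. Packet contents are arbitrary placeholders.

A = 0b10110010  # packet wanted by R2, already held by R1
B = 0b01101100  # packet wanted by R1, already held by R2

coded = A ^ B   # the single broadcast transmission

# Receiver R1 holds A and wants B.
recovered_B = coded ^ A
# Receiver R2 holds B and wants A.
recovered_A = coded ^ B

assert recovered_B == B and recovered_A == A
print("one transmission satisfies both receivers")
```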
Fault Tolerant Memories Based on Expander Graphs
by Shashi Kiran Chilappagari and Bane Vasic
This paper looked at the problem of maintaining an erasure-encoded representation of data when both the memory elements and the logic gates can be faulty. As far as I understood, there is a correcting circuit that periodically checks for faults in the memory and repairs them using the bit-flipping algorithm of Sipser and Spielman. The problem is that this circuit is itself made of unreliable logic gates, and hence the repairs are not always correct. If the Tanner graph of the code has sufficient expansion, the correcting circuit manages (despite its own errors) to keep the fraction of errors below a threshold.
This threshold is low enough for a powerful decoder (one that makes no errors) to recover all the bits at any given time, whereas a memory with no correction circuit would always fail after a constant amount of time.
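As a point of reference, here is a minimal sketch of a parallel bit-flipping decoder in the Sipser–Spielman spirit, assuming the code is given as a 0/1 parity-check matrix H (a numpy integer array). This is just the plain flipping rule, not the fault-tolerant correcting circuit analyzed in the paper.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Parallel bit flipping: repeatedly flip every bit that participates in
    more unsatisfied than satisfied checks. H is a 0/1 parity-check matrix
    (numpy int array), y a 0/1 received word of matching length."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            break                            # all checks satisfied
        unsat = H.T.dot(syndrome)            # unsatisfied checks seen by each bit
        total = H.sum(axis=0)                # check degree of each bit
        flip = unsat > (total - unsat)       # majority of neighboring checks unsatisfied
        if not flip.any():
            break                            # stuck: no bit has a flipping majority
        x = (x + flip.astype(int)) % 2
    return x
```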
Reducing the Error Floor
Michael Chertkov
Misha presented improved decoding algorithms (based on LP decoding) that reduce the probability of error at high SNR (the error floor). The talk focused on the famous [155,64,20] Tanner code, for which the authors have developed an algorithm to find the most dangerous (low pseudo-weight) pseudo-codewords. The proposed Loop Guided Guessing (LGG) is an informed facet-guessing algorithm that selects which bits to guess by finding critical loops in the graph. It was great to learn that the investigated algorithms managed to correct all 200 low-weight pseudo-codewords, essentially achieving ML performance at high SNR.
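For context, here is a minimal sketch of the plain LP decoder over the fundamental polytope (the standard forbidden-set inequalities), written with scipy; the loop-guided facet guessing from the talk would sit on top of something like this. The parity-check matrix H and the per-bit LLR vector gamma are assumed inputs, and the constraint generation is only practical for low-degree (LDPC) checks.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, gamma):
    """Plain LP decoding over the fundamental polytope.
    H: 0/1 parity-check matrix, gamma: per-bit log-likelihood ratios
    (positive values favor 0). Returns the LP optimum in [0,1]^n;
    fractional optima are the pseudo-codewords discussed in the talk."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = np.flatnonzero(H[j])
        # For every odd-size subset S of the check's neighborhood:
        #   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1
        for r in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, r):
                row = np.zeros(n)
                row[list(nbrs)] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(c=gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x
```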
On Optimizing XOR-Based Codes for Fault-Tolerant Storage Applications
Cheng Huang, Jin Li, and Minghua Chen
This paper shows a novel technique to improve the performance of encoding and decoding for linear codes that have been reduced to binary representations. The encoding and decoding of linear block codes is represented in matrix form, where the matrices are fixed while the vectors are data dependent. The idea is to find patterns in the matrices that can be reused, essentially optimizing the number of XOR operations required to multiply any vector with a given matrix. For example, if the parities P1 = x1+x3+x5, P2 = x1+x3+x5+x6, and P6 = x2+x3+x4+x5 are computed naively, this requires 8 XOR operations. If one pre-computes x3+x5, only 6 XOR operations suffice. The authors call this the common operations first (COF) principle and pose an optimization problem: minimize the number of XORs required to multiply a given matrix with any vector. They propose two greedy algorithms that work well in practice and show that generic Reed-Solomon codes optimized with this scheme can be equally or more efficient than codes specifically designed to minimize the number of XOR operations.
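To make the XOR count explicit, here is the same toy example in Python; the bit values are placeholders and only the operation counts matter.

```python
# Worked version of the example above, counting XOR operations explicitly.
x1, x2, x3, x4, x5, x6 = 1, 0, 1, 1, 0, 1

# Naive evaluation: 2 + 3 + 3 = 8 XORs.
P1 = x1 ^ x3 ^ x5
P2 = x1 ^ x3 ^ x5 ^ x6
P6 = x2 ^ x3 ^ x4 ^ x5

# Common-operations-first: pre-compute the shared pattern x3 ^ x5 once.
t = x3 ^ x5            # 1 XOR
Q1 = x1 ^ t            # 1 XOR
Q2 = x1 ^ t ^ x6       # 2 XORs
Q6 = x2 ^ x4 ^ t       # 2 XORs  -> 6 XORs in total

assert (P1, P2, P6) == (Q1, Q2, Q6)
```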
Approximate message-passing inference algorithm
Kyomin Jung and Devavrat Shah
Devavrat presented a general message-passing framework for computing maximum a-posteriori (MAP) assignments for binary Markov random fields. Building on a recent result by Dror Weitz, that marginalization by message passing on a graph G is equivalent to message passing on a self-avoiding-walk tree of G, they show that the equivalence still holds for MAP assignments. For graphs that admit a good partitioning, Devavrat presented message-passing algorithms that approximate the MAP assignment within an epsilon factor in polynomial time, for any fixed epsilon. The intuition is that the graph can be separated into small components; one computes estimates locally and then combines them to obtain a good global estimate. The algorithm applies to a wide family of graphs that can be easily separated (examples included planar graphs and graphs with certain kinds of geometry).
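As a reminder of the primitive being generalized, here is a minimal max-product (Viterbi-style) MAP computation on a binary chain-structured MRF in Python. This is vanilla max-product on a tree; the paper's algorithm instead runs such local computations on the pieces of a graph decomposition and stitches the results together, which this sketch does not attempt.

```python
import numpy as np

def chain_map(node_pot, edge_pot):
    """Exact MAP by max-product on a binary chain MRF.
    node_pot: list of length-2 arrays (one per node),
    edge_pot: list of 2x2 arrays between consecutive nodes."""
    n = len(node_pot)
    msgs = [np.ones(2)]                      # message into node 0
    back = []
    for i in range(n - 1):
        # max over x_i of node_pot[i][x_i] * msg[x_i] * edge_pot[i][x_i, x_{i+1}]
        scores = (node_pot[i] * msgs[-1])[:, None] * edge_pot[i]
        back.append(scores.argmax(axis=0))   # best x_i for each value of x_{i+1}
        msgs.append(scores.max(axis=0))
    assignment = [0] * n
    assignment[-1] = int((node_pot[-1] * msgs[-1]).argmax())
    for i in range(n - 2, -1, -1):           # backtrack the argmax pointers
        assignment[i] = int(back[i][assignment[i + 1]])
    return assignment
```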
Equivalence of LP Relaxation and Max-Product for Weighted Matching in General Graphs
Sujay Sanghavi
Sujay showed that the max-product algorithm for finding weighted matchings in general graphs converges to the right answer if and only if the LP relaxation of the problem is tight. The proof relies on the fact that max-product is solving the problem on a computation tree that can sometimes contain extra matchings (pseudo-matchings!) of larger weight that do not exist in the real graph. Using LP duality, Sujay showed that if the LP relaxation is tight, there are no such evil matchings on the computation tree. The converse (as far as I understood) uses a combinatorial characterization of when the LP relaxation fails: the existence of evil structures in the graph ('bad stemmed blossoms' and 'bad blossom pairs') that also make max-product fail.
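A tiny scipy sketch makes the tightness condition concrete: below is the LP relaxation of maximum-weight matching, evaluated on a unit-weight triangle, where the optimum is the fractional point (1/2, 1/2, 1/2) of value 1.5 while the best integral matching has value 1, so the relaxation is not tight. The function name and graph are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def matching_lp(n, edges, weights):
    """LP relaxation of maximum-weight matching:
    maximize sum_e w_e x_e  s.t.  sum of x_e over edges at each vertex <= 1,
    0 <= x_e <= 1.  A fractional optimum (e.g. 1/2 on an odd cycle) is
    exactly the situation where, per the talk, max-product also fails."""
    m = len(edges)
    A = np.zeros((n, m))
    for e, (u, v) in enumerate(edges):
        A[u, e] = A[v, e] = 1.0
    res = linprog(c=-np.array(weights), A_ub=A, b_ub=np.ones(n),
                  bounds=[(0, 1)] * m, method="highs")
    return res.x, -res.fun

# Unit-weight triangle: LP optimum is 1/2 on every edge (value 1.5),
# strictly better than any integral matching (value 1).
x, val = matching_lp(3, [(0, 1), (1, 2), (2, 0)], [1.0, 1.0, 1.0])
print(x, val)
```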
On the Hardness of Approximating Stopping and Trapping Sets in LDPC Codes
Andrew McGregor and Olgica Milenkovic
Olgica presented hardness results for minimum distance and minimum stopping and trapping sets, for general linear codes and even for LDPC codes. In particular, she showed that minimum stopping and trapping sets are hard to approximate within a constant factor (in other words, there exists a factor c below which c-factor approximations become NP-hard). The reductions carry through even if the codes have constant degrees (LDPC) with minimum degree 3. An interesting related open problem I was thinking about after this talk is hardness results for finding and approximating the minimum (BSC) pseudo-weight. Any ideas on this?
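For readers who want the object being approximated spelled out, here is a small checker for the stopping-set property, assuming a 0/1 parity-check matrix H as a numpy array: a set S of variable nodes is a stopping set if every check touching S touches it in at least two positions (so the BEC peeling decoder can never resolve a bit of S).

```python
import numpy as np

def is_stopping_set(H, S):
    """Return True if the variable-node set S is a stopping set of the code
    with 0/1 parity-check matrix H: no check may see exactly one bit of S."""
    S = np.asarray(sorted(S))
    if S.size == 0:
        return False
    touches = H[:, S].sum(axis=1)    # how many bits of S each check sees
    return bool(np.all((touches == 0) | (touches >= 2)))
```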