# ISIT 2009 : talks part four

The Gelfand-Pinsker Channel: Strong Converse and Upper Bound for the Reliability Function
Himanshu Tyagi, Prakash Narayan
Strong Converse for Gel’fand-Pinsker Channel
Pierre Moulin

Both of these papers proved the strong converse for the Gel’fand-Pinsker channel, i.e. the discrete memoryless channel with an i.i.d. state sequence drawn from $P_S$, where the realized state sequence is known ahead of time at the encoder. The first paper proves a technical lemma about the image size of “good codeword sets,” which are jointly typical conditioned on a large subset of the typical set of $S^n$ sequences. That is, given a code and a set of almost $\exp(n H(P_S))$ typical sequences in $S^n$ for which the average probability of error is small, they obtain a bound on the rate of the code. They then derive upper bounds on the error exponents (the reliability function) of the channel. The second paper has a significantly more involved argument, but one which can be extended to multiple-access channels with states known to the encoder.
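For context, the capacity of this channel, established by Gel’fand and Pinsker, is

$$C = \max_{P_{U|S},\; x(u,s)} \bigl[ I(U;Y) - I(U;S) \bigr],$$

where $U$ is an auxiliary random variable and $x(u,s)$ is a deterministic encoding map. The weak converse only rules out error probability tending to zero at rates above $C$; a strong converse is the stronger statement that at any rate above $C$ the error probability must tend to one.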

Combinatorial Data Reduction Algorithm and Its Applications to Biometric Verification
Vladimir B. Balakirsky, Anahit R. Ghazaryan, A. J. Han Vinck

The goal of this paper was to compute short fingerprints $f(\mathbf{x})$ from long binary strings $\mathbf{x}$ so that a verifier can look at a new long vector $\mathbf{y}$ and tell whether or not $\mathbf{y}$ is close to $\mathbf{x}$ based only on $f(\mathbf{x})$. This is a little different from hashing, where we could first compute $f(\mathbf{y})$ and compare the two fingerprints. They develop a scheme which stores the index of a reference vector $\mathbf{c}$ that is “close” to $\mathbf{x}$ together with the distance $d(\mathbf{x},\mathbf{c})$. This can be done with low complexity. They calculated false-accept and false-reject rates for this scheme. Since the goal is not reconstruction or approximation, but rather a kind of classification, they can derive a reference vector set which has very low rate.
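The stored pair (reference index, distance) can be sketched as follows. This is a hypothetical illustration, not the paper’s actual algorithm: it assumes binary strings, Hamming distance, and a triangle-inequality test as the acceptance rule; the names `nearest_reference` and `verify` are mine.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary sequences."""
    return sum(u != v for u, v in zip(a, b))

def nearest_reference(x, refs):
    """Fingerprint of x: the index of the closest reference vector
    in `refs`, together with the distance d(x, c)."""
    dists = [hamming(x, c) for c in refs]
    i = min(range(len(refs)), key=dists.__getitem__)
    return i, dists[i]

def verify(y, refs, fingerprint, threshold):
    """Decide whether y could be within `threshold` of the enrolled x.

    By the triangle inequality, d(y, x) >= |d(y, c) - d(x, c)|, so if
    that lower bound already exceeds the threshold we can safely reject.
    """
    i, d_xc = fingerprint
    d_yc = hamming(y, refs[i])
    return abs(d_yc - d_xc) <= threshold
```

Note that this test is only a necessary condition for closeness, so it can falsely accept vectors that are actually far from $\mathbf{x}$; this is consistent with the paper’s analysis of false-accept and false-reject rates rather than exact decisions.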

Two-Level Fingerprinting Codes
N. Prasanth Anthapadmanabhan, Alexander Barg

This looks at a variant of the fingerprinting problem, in which a content creator makes several fingerprinted versions of an object (e.g. a piece of software) and then a group of pirates can take their versions and try to create a new object with a valid fingerprint. The marking assumption means that the pirates can only alter the positions at which their copies differ. The goal is to build a code such that a verifier looking at an object produced by $t$ pirates can identify at least one of the pirates. In the two-level problem, the objects are coarsely classified into groups (e.g. by geographic region), and when there are more than $t$ pirates the verifier wants to be able to identify at least the group of one of the pirates. They provide some conditions for traceability along with constructions. This framework can also be extended to multiple levels.
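The marking assumption can be made concrete with a small sketch. This is my own illustration, not from the paper, and it uses one common (narrow-sense) variant in which the forged symbol at each detectable position must be one appearing among the pirates’ copies; the names `detectable_positions` and `forge` are hypothetical.

```python
def detectable_positions(copies):
    """Positions where the pirates' copies differ. Under the marking
    assumption, only these positions can be altered in a forgery."""
    return [i for i in range(len(copies[0]))
            if len({c[i] for c in copies}) > 1]

def forge(copies, choose=min):
    """One feasible pirate strategy: at each detectable position, pick a
    symbol seen among the copies (here via `choose`); at undetectable
    positions the common symbol must be kept unchanged."""
    detectable = set(detectable_positions(copies))
    return [choose(c[i] for c in copies) if i in detectable else copies[0][i]
            for i in range(len(copies[0]))]
```

A traceability code is then one whose codewords are arranged so that any forgery produced this way by at most $t$ pirates still points the verifier back to at least one colluder (or, at the coarse level, to a colluder’s group).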