# Allerton 2014: privately charging your information odometer while realizing your group has pretenders

Here are a few papers that I saw at Allerton — more to come later.

Group Testing with Unreliable Elements
Arya Mazumdar, Soheil Mohajer
This was a generalization of the group testing problem: items are either positive or null, and can be tested in groups such that if any element of the group is positive, the whole group will test positive. The item states can be thought of as a binary column vector $\mathbf{x}$, and each test is a row of a matrix $A$: the $(i,j)$-th entry of $A$ is 1 if the $j$-th item is part of the $i$-th test. The multiplication $A \mathbf{x}$ is taken using Boolean OR. The twist in this paper is that they consider a situation where up to $\tau$ elements can “pretend” to be positive in each test, with a possibly different set of pretenders in each test. This is different from “noisy group testing,” which was considered previously and is more like $A \mathbf{x} + \mathrm{noise}$. They show achievable rates for detecting the positives using random coding methods (i.e. a random $A$ matrix). There was some material at the end about the non-i.i.d. case, but my notes were sketchy at that point.
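The measurement model is easy to simulate. Here is a minimal sketch (not the paper's construction — the sizes, participation probability, and pretender choices are all illustrative) showing Boolean-OR tests and how pretenders can corrupt outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, tau = 20, 12, 2    # items, tests, pretenders per test (illustrative)
x = np.zeros(n, dtype=bool)
x[[3, 11]] = True        # two positive items

# Random test matrix: A[i, j] = True if item j participates in test i.
A = rng.random((m, n)) < 0.2

# Ideal group testing: Boolean OR over the positives in each test.
y_clean = (A & x).any(axis=1)

# Unreliable elements: in each test, tau null items may "pretend" to be
# positive, with a possibly different set of pretenders per test.
y = y_clean.copy()
for i in range(m):
    pretenders = rng.choice(np.flatnonzero(~x), size=tau, replace=False)
    y[i] |= A[i, pretenders].any()

print("clean outcomes:  ", y_clean.astype(int))
print("with pretenders: ", y.astype(int))
```

Note that pretenders can only flip a test from negative to positive, never the reverse, which is what distinguishes this model from additive-noise group testing.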

The Minimal Realization Problems for Hidden Markov Models
Qingqing Huang, Munther Dahleh, Rong Ge, Sham Kakade
The realization problem for an HMM is this: given the exact joint probability distribution on length $N$ strings $(Y_1, Y_2, \ldots, Y_N)$ from an HMM, can we create a multilinear system whose outputs have the same joint distribution? A multilinear system looks like this:
$w_{t+1} = A^{(\ell_t)} w_t$ for $w_t \in \mathbb{R}^k$ with $w_0 = v$
$z_{t} = u^{\top} w_t$
We think of $A^{(\ell_t)} \in \mathbb{R}^{k \times k}$ as the transition for state $\ell_t$ (e.g. a stochastic matrix) and we want the output to be $z_t$. This is sort of at the nexus of control/Markov chains and HMMs and uses some of the tensor ideas that are getting hot in machine learning. As the abstract puts it, they can efficiently construct realizations if $N > \max(4 \log_d(k) + 1, 3)$, where $d$ is the size of the output alphabet and $k$ is the minimal order of the realization.
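To make the realization idea concrete, here is a small sketch (my own toy numbers, not from the paper) of how an HMM induces such a multilinear system: taking $A^{(\ell)} = T^{\top} \mathrm{diag}(O_{:,\ell})$ for transition matrix $T$ and emission matrix $O$, the product of the $A^{(\ell)}$'s along a string recovers its probability, which we can check against a brute-force sum over state paths:

```python
import numpy as np
from itertools import product

# A hypothetical 2-state, 2-symbol HMM (all numbers illustrative).
T = np.array([[0.7, 0.3], [0.4, 0.6]])   # T[i, j] = P(next state j | state i)
O = np.array([[0.9, 0.1], [0.2, 0.8]])   # O[i, l] = P(symbol l | state i)
pi = np.array([0.5, 0.5])                # initial state distribution

# Multilinear system: w_{t+1} = A^{(l_t)} w_t with w_0 = pi (v = pi, u = 1),
# where A^{(l)} = T^T diag(O[:, l]).
A = [T.T @ np.diag(O[:, l]) for l in range(2)]

def string_prob(ys):
    w = pi.copy()
    for y in ys:
        w = A[y] @ w
    return w.sum()   # u^T w with u = all-ones

# Brute-force check: sum P(state path) * P(observations | path) over paths.
def brute_prob(ys):
    total = 0.0
    for path in product(range(2), repeat=len(ys)):
        p = pi[path[0]] * O[path[0], ys[0]]
        for t in range(1, len(ys)):
            p *= T[path[t - 1], path[t]] * O[path[t], ys[t]]
        total += p
    return total

ys = [0, 1, 1, 0]
print(string_prob(ys), brute_prob(ys))
```

The realization problem runs this in reverse: recover matrices $A^{(\ell)}$ (not necessarily stochastic) producing the given string distribution, with $k$ as small as possible.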

Differentially Private Distributed Protocol for Electric Vehicle Charging
Shuo Han, Ufuk Topcu, George Pappas
This paper was about a central aggregator trying to get multiple electric vehicle users to report their charging needs. The central allocator solves an optimization problem over the charging rate profiles $r_i$ (vectors of rates over time slots):
$\min_{r_i} U\left(\sum_i r_i\right)$
$\mathrm{s.t.}\ 0 \le r_i \le \bar{r}_i$
$\qquad \mathbf{1}^{\top} r_i = z_i$
They focus on the case where the charge demands $z_i$ are private but the charging rates $r_i$ can be public. This is in contrast to the mechanism design literature popularized by Aaron Roth and collaborators. They analyze a projected gradient descent procedure with Laplace noise added for differential privacy. It’s a stochastic gradient method, but it seems to converge quickly — the number of iterations may impact the privacy parameter, but in the examples they showed only a handful of steps were needed.
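Here is a rough sketch of that flavor of algorithm — emphatically not the paper's exact protocol; the cost $U(s) = \frac{1}{2}\|s\|^2$, the baseline demand profile, the noise scale, and all constants are my own illustrative choices. Each user holds a rate vector over time slots, the aggregator broadcasts a Laplace-noised gradient, and each user projects back onto their feasible set $\{r : 0 \le r \le \bar{r},\ \mathbf{1}^{\top} r = z_i\}$:

```python
import numpy as np

rng = np.random.default_rng(1)

T, n = 8, 5                      # time slots, vehicles (illustrative sizes)
z = rng.uniform(1.0, 3.0, n)     # private total charge demands z_i
rbar = 1.0                       # per-slot rate cap
D = np.full(T, 2.0)              # hypothetical baseline (non-EV) demand

def project(v, z_i):
    """Project v onto {r : 0 <= r <= rbar, sum(r) = z_i} by bisecting a shift."""
    lo, hi = v.min() - rbar, v.max()
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        if np.clip(v - lam, 0.0, rbar).sum() > z_i:
            lo = lam
        else:
            hi = lam
    return np.clip(v - 0.5 * (lo + hi), 0.0, rbar)

# Feasible start: spread each demand evenly over the time slots.
r = np.array([project(np.full(T, zi / T), zi) for zi in z])
eta, noise_scale = 0.05, 0.5     # step size and Laplace scale (illustrative)

for _ in range(30):
    agg = D + r.sum(axis=0)
    # Gradient of U(agg) = 0.5*||agg||^2 is agg itself; the Laplace noise
    # stands in for the differential-privacy mechanism on the broadcast.
    g = agg + rng.laplace(scale=noise_scale, size=T)
    r = np.array([project(r[i] - eta * g, z[i]) for i in range(n)])

print("per-slot aggregate load:", np.round(D + r.sum(axis=0), 2))
print("demand gap per user:", np.round(r.sum(axis=1) - z, 6))
```

The effect is valley-filling: charging shifts toward slots with low aggregate load, while each user's total demand constraint stays satisfied because the projection is done locally.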

An Interactive Information Odometer and Applications
Mark Braverman and Omri Weinstein
This was a communication complexity talk about trying to maintain an estimate of the information complexity of a protocol $\Pi$ for computing a function $f(X,Y)$ interactively, where Alice has $X$, Bob has $Y$, and $(X,Y) \sim \mu$:
$\mathsf{IC}_{\mu}(\Pi) = I( \Pi ; X | Y ) + I( \Pi ; Y | X)$
Previous work has shown that the (normalized) communication complexity of computing $f$ on $n$-tuples of variables $\epsilon$-accurately approaches the minimum information complexity over all protocols for computing $f$ once, $\epsilon$-accurately. At least I think this is what was shown previously — it’s not quite my area. The result in this paper is a strong converse for this; the goal is to maintain an online estimate of $\mathsf{IC}$ during the protocol. The mechanism seems a bit like communication with feedback à la Horstein, but I got a bit confused as to what was going on — basically it seems that Alice and Bob need to agree on the estimate during the protocol, using a few extra bits added to the communication to maintain it. If I had an extra $n$ hours in the week I would read up more about this. Maybe on my next plane ride…
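For a sanity check on the definition itself, here is a toy computation (my own example, nothing to do with the odometer machinery): if $X, Y$ are independent uniform bits and the protocol is simply "Alice sends $X$," the transcript is $\Pi = X$, so $I(\Pi; X | Y) = 1$ bit and $I(\Pi; Y | X) = 0$, giving $\mathsf{IC}_{\mu}(\Pi) = 1$ bit:

```python
import numpy as np

def cond_mi_bits(p):
    """I(A; B | C) in bits from a joint pmf p with axes ordered (A, B, C)."""
    pc = p.sum(axis=(0, 1), keepdims=True)    # p(c)
    pac = p.sum(axis=1, keepdims=True)        # p(a, c)
    pbc = p.sum(axis=0, keepdims=True)        # p(b, c)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = p * np.log2(p * pc / (pac * pbc))
    return np.nansum(term)                    # zero-probability cells drop out

# Joint pmf q[x, y, pi] for X, Y uniform independent bits and Pi = X.
q = np.zeros((2, 2, 2))
for x in range(2):
    for y in range(2):
        q[x, y, x] = 0.25

# IC = I(Pi; X | Y) + I(Pi; Y | X), reordering axes to (Pi, X, Y) etc.
ic = cond_mi_bits(q.transpose(2, 0, 1)) + cond_mi_bits(q.transpose(2, 1, 0))
print(ic)
```

The point of the odometer is much harder than this after-the-fact calculation: the parties must track such a quantity online, during the protocol, without blowing up the communication.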