# Banaszczyk’s theorem on convex bodies

I meant to blog about this a while back, but somehow starting a new job and teaching are very time-consuming (who knew?). Luckily, it’s about an older result of Banaszczyk (pronounced bah-nahsh-chik, I think):

Wojciech Banaszczyk. Balancing vectors and Gaussian measures of n-dimensional convex bodies. Random Structures & Algorithms, 12(4):351–360, 1998.

This result came to my attention from a talk given by Sasho Nikolov here at Rutgers on his paper with Kunal Talwar on approximating hereditary discrepancy (see Kunal’s post from last year). The result is pretty straightforward to state.

Banaszczyk’s Theorem. There exists a universal constant $C$ such that the following holds. Let $A = [a_1\ a_2\ \cdots \ a_n]$ be an $m \times n$ real matrix such that the $i$-th column $a_i$ satisfies $\|a_i\|_2 \le 1$ for $i = 1, 2, \ldots, n$, and let $\mathcal{K}$ be a convex body in $\mathbb{R}^m$ such that $\mathbb{P}( G \in \mathcal{K} ) \ge 1/2$, where $G \sim \mathcal{N}(0, I_m)$. Then there exists a vector $x \in \{-1,1\}^n$ such that $Ax \in C \cdot \mathcal{K}$.
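
One standard way to sanity-check the statement (this specialization is mine, not spelled out in the paper): take $\mathcal{K}$ to be a cube. If $\mathcal{K} = [-t, t]^m$ and $\Phi$ denotes the standard normal CDF, then the coordinates of $G$ are independent, so

$$\mathbb{P}(G \in \mathcal{K}) = \big(1 - 2\Phi(-t)\big)^m \ge \tfrac{1}{2} \quad \text{for } t = \Theta(\sqrt{\log m}),$$

since each coordinate only needs to escape $[-t,t]$ with probability about $(\ln 2)/m$. The theorem then produces a sign vector $x \in \{-1,1\}^n$ with $\|Ax\|_\infty \le Ct = O(\sqrt{\log m})$, the kind of $\ell_\infty$ vector-balancing bound that shows up in the discrepancy literature mentioned above.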

This is a pretty cool result! Basically, if your convex body $\mathcal{K}$ is big enough to capture half of the probability mass of a standard Gaussian, then after blowing it up by $C$ to get $C \cdot \mathcal{K}$, for any arbitrary collection of vectors $\{a_i\}$ with norms at most 1, I can find a way to add and subtract them from each other so that the result ends up in $C \cdot \mathcal{K}$.
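
To make that concrete, here is a little toy sketch of my own (not from the paper): take $\mathcal{K}$ to be the Euclidean ball of radius $\sqrt{m}$, which captures roughly half the Gaussian mass since the median of $\|G\|_2$ is about $\sqrt{m}$, and brute-force over all $2^n$ sign vectors to find one landing in $C \cdot \mathcal{K}$. The blow-up factor $C = 2$ below is illustrative, not Banaszczyk's constant.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 10

# Random matrix whose columns a_i are normalized to unit 2-norm.
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)

radius = np.sqrt(m)  # the median of ||G||_2 for G ~ N(0, I_m) is about sqrt(m)
C = 2.0              # illustrative blow-up factor, not Banaszczyk's constant

# Exhaustive search over the 2^n sign patterns (fine for small n).
best_norm = min(
    np.linalg.norm(A @ np.array(signs))
    for signs in itertools.product([-1, 1], repeat=n)
)

print(f"best ||Ax||_2 = {best_norm:.3f}, C * radius = {C * radius:.3f}")
```

Of course, the exhaustive search is exponential in $n$; the point of the theorem is that such an $x$ exists for every dimension and every admissible $\mathcal{K}$, not that it is easy to find.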

I haven’t found a use for this result myself, but it’s a neat fact to keep in the bucket. Maybe it would be useful in alignment/beamforming schemes? Unfortunately, as far as I can tell, the proof doesn’t tell you how to find this mysterious $x$, so…