I had never really heard of this result, sometimes called the Matrix Determinant Lemma, but it came up in the process of answering a relatively simple question. Suppose I have an $M$-dimensional jointly Gaussian vector $\mathbf{X}$ with covariance matrix $A$. The differential entropy of $\mathbf{X}$ is $\frac{1}{2} \log \left( (2 \pi e)^M \det(A) \right)$. Suppose now I consider some rank-1 perturbation $B = A + u u^T$. What choice of $u$ maximizes the differential entropy?

On the face of it, this seems intuitively easy — diagonalize $A$ and then pick $u$ to be the eigenvector corresponding to the smallest eigenvalue of $A$ (for a positive definite matrix, the eigenvalues and singular values coincide). But is there a simple way to see this analytically?

Matrix Determinant Lemma. Let $A$ be an $M \times M$ positive definite matrix and $U$ and $V$ be two $M \times k$ matrices. Then

$\det(A + U V^H) = \det(A) \det(I_k + V^H A^{-1} U)$.

To see this, note that

$\left[ \begin{array}{cc} A & -U \\ V^{H} & I \end{array} \right] = \left[ \begin{array}{cc} A & 0 \\ V^{H} & I \end{array} \right] \cdot \left[ \begin{array}{cc} I & - A^{-1} U \\ 0 & I + V^H A^{-1} U \end{array} \right]$,

and take determinants on both sides: every factor is block triangular, so the right side has determinant $\det(A) \det(I_k + V^H A^{-1} U)$. The left side can also be factored as

$\left[ \begin{array}{cc} A & -U \\ V^{H} & I \end{array} \right] = \left[ \begin{array}{cc} I & -U \\ 0 & I \end{array} \right] \cdot \left[ \begin{array}{cc} A + U V^H & 0 \\ V^{H} & I \end{array} \right]$,

which has determinant $\det(A + U V^H)$. Equating the two gives the lemma.
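As a quick sanity check, here is a small NumPy experiment (the random $A$, $U$, $V$ below are just test fixtures of my own choosing) confirming that both sides of the lemma agree numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
M, k = 5, 2

# Build a random positive definite A and random M x k factors U, V.
G = rng.standard_normal((M, M))
A = G @ G.T + M * np.eye(M)
U = rng.standard_normal((M, k))
V = rng.standard_normal((M, k))

# Left and right sides of det(A + U V^T) = det(A) det(I_k + V^T A^{-1} U).
lhs = np.linalg.det(A + U @ V.T)
rhs = np.linalg.det(A) * np.linalg.det(np.eye(k) + V.T @ np.linalg.inv(A) @ U)

assert np.isclose(lhs, rhs)
```

Note that the right side only requires a $k \times k$ determinant, which is the usual computational appeal of the lemma when $k \ll M$.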

So now, applying this to our problem with $U = V = u$ (so $k = 1$),

$\det(A + u u^T) = \det(A) ( 1 + u^T A^{-1} u )$.

The term $u^T A^{-1} u$ is a Rayleigh quotient, so subject to the normalization $\|u\| = 1$ the right side is maximized by choosing $u$ to be the eigenvector corresponding to the largest eigenvalue of $A^{-1}$, which is exactly the eigenvector corresponding to the smallest eigenvalue of $A$. Ta-da!
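The conclusion is easy to check numerically as well. Below is a short sketch (with a random positive definite $A$ of my own construction) comparing the determinant achieved by the smallest-eigenvalue eigenvector against random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5

# Random positive definite covariance matrix A.
G = rng.standard_normal((M, M))
A = G @ G.T + np.eye(M)

# eigh returns eigenvalues in ascending order, so column 0 of Q is the
# eigenvector for the smallest eigenvalue of A.
w, Q = np.linalg.eigh(A)
u_best = Q[:, 0]
best = np.linalg.det(A + np.outer(u_best, u_best))

# No random unit vector should do better.
for _ in range(200):
    u = rng.standard_normal(M)
    u /= np.linalg.norm(u)
    assert np.linalg.det(A + np.outer(u, u)) <= best + 1e-9
```

Consistent with the lemma, `best` also equals $\det(A)(1 + u^T A^{-1} u)$ evaluated at that eigenvector, where $u^T A^{-1} u = 1/\lambda_{\min}(A)$.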