[**Update:** As Max points out in the comments, this is really a specialized version of the Donsker-Varadhan formula, also mentioned by Mokshay in a comment here. I think the difficulty with concepts like these is that they are true for deeper reasons than the ones given when you learn them — this is a special case that requires undergraduate probability and calculus, basically.]

One of my collaborators said to me recently that it’s well known that the “negative entropy is the Fenchel dual of the log-partition function.” Now, I knew what these words meant, but it somehow was not a fact that I had learned elsewhere, and furthermore, a sentence like that is frustratingly terse. If you already know what it means, then it’s a nice shorthand, but for those trying to figure it out, it’s impenetrable jargon. I tried running it past a few people here who are generally knowledgeable but are not graphical model experts, and they too were unfamiliar with it. While this is just a simple thing about conjugate duality, I think it doesn’t really show up in information theory classes unless the instructor talks a bit more about exponential family distributions, maximum entropy distributions, and other related concepts. Bert Huang has a post on Jensen’s inequality as a justification.

We have a distribution in the exponential family:

$$ p(x; \theta) = \exp\big( \langle \theta, \phi(x) \rangle - A(\theta) \big) $$

As a side note, I often find that the exponential family is not often covered in systems EE courses. Given how important it is in statistics, I think it should be a bit more of a central concept — I’m definitely going to try and work it into the detection and estimation course.

For the purposes of this post I’m going to assume $x$ takes values in a discrete alphabet $\mathcal{X}$ (say, $n$-bit strings). The function $\phi(x)$ is a vector of statistics calculated from $x$, and $\theta$ is a vector of parameters. The function $A(\theta)$ is the log partition function:

$$ A(\theta) = \log Z(\theta), $$

where the partition function is

$$ Z(\theta) = \sum_{x \in \mathcal{X}} \exp\big( \langle \theta, \phi(x) \rangle \big). $$
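As a quick numerical illustration (this example is mine, not from the post): for a Bernoulli variable with $\phi(x) = x$, the partition function is an explicit two-term sum over the alphabet.

```python
import math

# Hypothetical worked example: Bernoulli in exponential-family form,
# p(x; theta) = exp(theta * x - A(theta)) for x in {0, 1}, so phi(x) = x.
theta = 0.7
Z = sum(math.exp(theta * x) for x in [0, 1])  # partition function: 1 + e^theta
A = math.log(Z)                               # log partition function

# Sanity check against the closed form A(theta) = log(1 + e^theta).
assert abs(A - math.log(1 + math.exp(theta))) < 1e-12
```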

The entropy of the distribution is easy to calculate:

$$ H(p_{\theta}) = -\mathbb{E}_{\theta}\left[ \log p(X; \theta) \right] = A(\theta) - \langle \theta, \mathbb{E}_{\theta}[\phi(X)] \rangle = A(\theta) - \langle \theta, \nabla A(\theta) \rangle, $$

using the standard fact that $\nabla A(\theta) = \mathbb{E}_{\theta}[\phi(X)]$.
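To make the entropy identity concrete, here is a small numerical check on the same hypothetical Bernoulli example: the entropy computed directly from the probabilities agrees with $A(\theta) - \langle \theta, \nabla A(\theta) \rangle$.

```python
import math

# Bernoulli with phi(x) = x: A(theta) = log(1 + e^theta), and
# grad A(theta) = E[phi(X)] = sigmoid(theta) = P(X = 1).
theta = 0.7
A = math.log(1 + math.exp(theta))              # log partition function
mu = math.exp(theta) / (1 + math.exp(theta))   # grad A(theta) = E[phi(X)]

# Entropy computed directly from the probabilities...
p = mu  # P(X = 1)
H_direct = -p * math.log(p) - (1 - p) * math.log(1 - p)

# ...matches A(theta) - <theta, grad A(theta)>.
H_formula = A - theta * mu
assert abs(H_direct - H_formula) < 1e-12
```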

The Fenchel dual of a function $f(\theta)$ is the function

$$ f^*(\mu) = \sup_{\theta} \left\{ \langle \mu, \theta \rangle - f(\theta) \right\}. $$

So what’s the Fenchel dual of the log partition function? We have to take the gradient:

$$ \nabla_{\theta} \left( \langle \mu, \theta \rangle - A(\theta) \right) = \mu - \nabla A(\theta) = \mu - \mathbb{E}_{\theta}[\phi(X)]. $$

So now, setting this equal to zero, we see that at the optimum $\theta^*$:

$$ \mu = \nabla A(\theta^*) = \mathbb{E}_{\theta^*}[\phi(X)]. $$

And the dual function is:

$$ A^*(\mu) = \langle \mu, \theta^* \rangle - A(\theta^*) = \langle \theta^*, \mathbb{E}_{\theta^*}[\phi(X)] \rangle - A(\theta^*) = -H(p_{\theta^*}). $$

That is, the Fenchel dual of the log partition function is the negative entropy.
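As a final sanity check (again on the hypothetical Bernoulli example, with the sup taken by brute force over a grid rather than analytically), the Fenchel dual of the log partition function matches the negative entropy:

```python
import math

# Bernoulli log partition function, A(theta) = log(1 + e^theta).
def A(theta):
    return math.log(1 + math.exp(theta))

mu = 0.3  # a mean parameter in (0, 1)

# Crude sup over a dense grid of theta values in [-10, 10].
# (A closed form exists: theta* = log(mu / (1 - mu)), the logit.)
A_star = max(mu * t - A(t) for t in [i / 1000 - 10 for i in range(20001)])

# Negative entropy of the Bernoulli(mu) distribution.
neg_entropy = mu * math.log(mu) + (1 - mu) * math.log(1 - mu)
assert abs(A_star - neg_entropy) < 1e-6
```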

The standard approach seems to go the other direction by computing the dual of the negative entropy, but that seems more confusing to me (perhaps inspiring Bert’s post above). Since the log partition function and negative entropy are both convex, it seems easier to exploit the duality to prove it in one direction only.


This holds more generally, not just in the context of graphical models:

http://infostructuralist.wordpress.com/2012/07/20/information-theory-in-economics-part-ii-robustness/

http://mark.reid.name/blog/a-compression-lemma.html

I knew about the Donsker-Varadhan connection, but I forgot to mention it above. I was trying to come up with a “homework” version suitable for the graduate students, most of whom haven’t seen the KL divergence before (I know, shocking).