In this month’s Notices of the AMS, there was an article on how JPEG works [pdf]. These little articles are supposed to be about neat topics in mathematics or applications, and they are often uneven and sometimes contentious, as the Koblitz article [pdf] has recently demonstrated.
Even though I do signal processing research, I hadn’t looked under the hood of JPEG too carefully in any of my classes, so it was interesting to me to read about the process of translating RGB to luminance and two chrominances, the DCT coefficient quantization scheme, and so on. The bit at the end on wavelets in JPEG-2000 I already knew about. But I think that for a mathematician the “why” was lost in all of the details of Huffman coding, Cohen-Daubechies wavelets, and so on. All of the engineering choices were made (in terribly contentious standards meetings, I’m sure) for a reason, and the choice of the mathematics is really a choice of model of “natural” images. He gives a good explanation of luminance-chrominance vs. RGB in terms of the visual system, but what about the Fourier transform in terms of capturing edge detail in high-frequency coefficients?
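To make the first two stages concrete, here is a minimal sketch in Python/NumPy of the color transform and the DCT-plus-quantization step. The color matrix is the standard one JPEG inherits from ITU-R BT.601, but the quantization table below is purely illustrative (the real tables live in the standard’s annexes), and the random 8×8 block is just stand-in pixel data:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Map an (..., 3) RGB array in [0, 255] to luminance Y and
    chrominances Cb, Cr, using the BT.601 coefficients JPEG uses."""
    m = np.array([[ 0.299,     0.587,     0.114    ],
                  [-0.168736, -0.331264,  0.5      ],
                  [ 0.5,      -0.418688, -0.081312 ]])
    ycbcr = np.asarray(rgb, dtype=float) @ m.T
    ycbcr[..., 1:] += 128.0  # center the chrominance channels
    return ycbcr

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, via the DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# Illustrative quantization table: coarser steps at higher frequencies,
# which is where the "model of natural images" assumption lives.
q = 1 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128  # level shift
coeffs = dct2(block)
quantized = np.rint(coeffs / q).astype(int)  # many high-freq entries become 0
```

The point of the sketch is the last line: dividing by a table that grows with frequency is exactly the statement that high-frequency detail can be represented coarsely, which is the implicit image model the article never quite spells out.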
Unfortunately for me, the article ended up confirming my pessimistic view that standards look like a sequence of ad-hoc decisions. But there’s lots of theory behind those implementations, and I think drawing that out might appeal to more mathematicians.