Barry Glassner on food

Salon’s interview with Barry Glassner is really interesting. What’s nice is that he brings up the way in which the organic/healthy/no-trans-fat/etc. food movement ignores the glaring issues of class (and race, reading between the lines) that are the real problem in our society. While it would be nice if we all ate healthier, healthy meals have to be affordable and practical for those with the fewest resources (time and money) to spend on them. I might get his book (from the library) for more of the details.

(via Winnie’s post at get in my belly.)

scaling laws as comedy

[Note: imagine B is from India.]

A: Oh my God!
B: What is it?
A: I just proved this great result!
B: Really???
A: Yeah, it’s a lower bound on the achievable rate!
B: So what is it?
A: Well my scheme shows it’s at least log! Log(N)!
B: Ok…
A: Isn’t that cool?
B: Seems a bit… low.
A: Well it’s not polynomial…
B: Hardly. Log(log(N))? You’ve got to be joking.
A: C’mon! Look, if you have a log, log(N) growth you can bootstrap that up to something better.
B: No you need to get rid of a log.
A: I did get rid of a log! It’s an improvement on Singh et al.
B: So it was log log log before?
A: No, log log.
B: So what’s your contribution?
A: Well it’s log log…
B: Exactly! Log log!
A: By log, do you…
B: Log log by log?
A: No, you have an extra two logs in there, it’s…
B: 1 by log? What the heck are you trying to prove!
A: It’s log! Log! Log!
B: I give up. Why don’t you come back when you’ve figured it out. See if you can get it to log(N). [exits]

new paper : deterministic list codes for state-constrained AVCs

It should be up on ArXiV later today…

A.D. Sarwate and M. Gastpar
Deterministic list codes for state-constrained arbitrarily varying channels
Submitted to IEEE Transactions on Information Theory
ArXiV cs.IT/0701146

The capacity for the discrete memoryless arbitrarily varying channel (AVC) with cost constraints on the jammer is studied using deterministic list codes under both the maximal and average probability of error criteria. For a cost function $l(\cdot)$ on the state set and constraint $\Lambda$ on the jammer, the achievable rates are upper bounded by the random coding capacity $C_r(\Lambda)$. For maximal error, the rate $R = C_r(\Lambda) - \epsilon$ is achievable using list codes with list size $O(\epsilon^{-1})$. For average error, an integer $L_{\mathrm{sym}}(\Lambda)$, called the \textit{symmetrizability}, is defined. It is shown that any rate below $C_r(\Lambda)$ is achievable under average error using list codes of list size $L > L_{\mathrm{sym}}(\Lambda)$. An example is given for a class of discrete additive AVCs.

yet another test

I can only hope that this Gaussian will appear:

\mathbb{P}(X \le x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2 \pi}} \exp(- y^2/2) dy

How to do it:

  1. Install the WP-Cache plugin. Note that you have to muck with file permissions and it’s somewhat non-intuitive.
  2. Install the MimeTeX plugin.
  3. Stress out about the fact that for some equations the terminating quote ” in the image’s alt tag has turned into a fancy quote. Come up with a hack: place an empty img tag right after every piece of LaTeX (sketched below). Hope that you eventually figure out a cleaner way around it.
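For concreteness, here is roughly what that hack looks like in the post source. This is only a sketch: the [tex]…[/tex] shortcode delimiters are my guess at what the MimeTeX plugin expects, so check your plugin’s documentation.

  <!-- shortcode delimiters are an assumption; check the plugin docs -->
  [tex]\mathbb{P}(X \le x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2 \pi}} \exp(- y^2/2) dy[/tex]<img src="" alt="" />
  <!-- the empty img tag keeps WordPress’s smart-quote filter from curling the closing straight quote in the alt attribute of the image the plugin generates -->

It’s ugly, but it sidesteps the mangled-quote problem until a real fix turns up.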

not again!

Since I only recently started reading ArXiV through my RSS aggregator, I was unaware of the astonishing regularity with which “proofs” of polynomial-time algorithms for NP-complete problems are proposed. The most recent is this one, but one can find a more comprehensive list here. The latter page is a bit too unskeptical of the claims, since it says “so and so proved P=NP in this paper.” It’s not a proof if it’s wrong, and pretty much all of these proofs have been shown to be wrong. But it might be an interesting exercise one week for some reading group or topics class to formally prove some of these guys wrong. Of course, for every person claiming a proof that P=NP there is another person ready to knock them down and claim that P!=NP. Maybe it’s just a little self-correcting mechanism in the ArXiV.

one thing that xmh got right

At MIT I used the X Windows program xmh to read my email. It was a pretty bare-bones program, but it had one really great feature. Instead of having to drag a message to a folder to refile it, you could just right-click on the folder title and it would mark the message as destined for there. Then you could commit all of the moves in one fell swoop. Neither GMail nor Thunderbird nor Mail.app seems to have this feature, which I regarded as a genius piece of interface in what was otherwise a pretty ugly program, all told. Maybe I’m the only one who got a lot of mileage out of that feature, but I can’t help but think that our modern mail clients are missing something.

maximum versus maximal

So I know I learned this at one point, but I can’t rederive the logical argument explaining when to use the words “maximal” and “maximum.” Certainly maximum is both an adjective and a noun, and maximal is just an adjective.

One explanation I remember was that there can be many maximal things, but only one maximum thing. I know that you call it a maximal ideal in algebra, and it need not be unique (unless it’s a local ring?), but then why say “local minima” if a minimum is unique?
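As best I can reconstruct it, the order-theoretic version is: in a partially ordered set, an element m is maximal if nothing sits strictly above it, while m is a maximum if it sits above everything. As a formula:

m \text{ is maximal if there is no } x \text{ with } x > m, \qquad m \text{ is a maximum if } m \ge x \text{ for all } x

So among the proper ideals of \mathbb{Z} ordered by inclusion, (2) and (3) are both maximal, but neither is a maximum, since neither contains the other. A maximum is necessarily unique; maximal elements needn’t be.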

I just noticed I’m a bit inconsistent in my usage in this paper I’m writing, and I can’t tell if I should call it the “maximum probability of error criterion” or the “maximal probability of error criterion.” I was leaning toward the former, but now thinking about it has got me all muddled.

IEEEtran BibTeX annoyance

I’m busy as a bee writing two papers for ISIT 2007 (in Nice, woohoo!) and as usual I find myself at odds with the IEEE Transactions style formats. The IEEEtran BibTeX style by default puts the bibliography in the order in which references are cited, and as far as I can tell there is no option for putting things in alphabetical order. One workaround is to \nocite every reference, in alphabetical order, before any of the other text (see the sketch below). \nocite puts citations in the bibliography without anything in the main text, which is also a handy feature for sneaking in references that you don’t need but should cite (perhaps to appease reviewers). But that hack defeats the purpose of BibTeX, which is to stop you from futzing with your bibliography by formatting the thing in the correct manner, be it APA, MLA, ACM, or IEEE.
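Here is a sketch of that workaround. It leans on the fact that IEEEtran.bst emits entries in citation order; the bibliography keys below are made up for illustration.

  % Keys are hypothetical; \nocite-ing them alphabetically before any real
  % \cite makes the citation order, and hence the bibliography, alphabetical.
  \documentclass[conference]{IEEEtran}
  \usepackage{cite} % also collapses [1][2][3] into a range like [1]-[3]
  \begin{document}
  \nocite{ahlswede1978}
  \nocite{csiszar1988}
  \nocite{gastpar2005}
  % ... paper text, with ordinary \cite commands ...
  \bibliographystyle{IEEEtran}
  \bibliography{refs}
  \end{document}

The obvious downside is that the in-text citation numbers no longer appear in increasing order of first use, which is the tradeoff any alphabetized numeric bibliography makes.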

I understand that for a survey paper it would be advantageous to list references in the order that they are cited. That way all the papers on topic A will be in a big block together in the bibliography, and the cite package provides a nice shorthand that will list the references as [1]–[5] instead of [1], [2], [3], [4], [5]. For conference papers, which often have fewer than 10 citations, it seems that the aesthetic benefits of an alphabetized bibliography outweigh the minor inconvenience in typesetting. From looking at existing published papers, it seems that the IEEE isn’t particularly insistent on having the bibliography in citation order. So why not provide the alphabetical option?

Perhaps it’s because the entire LaTeX package system is being maintained by Michael Shell, who I’m sure has better things to do with his time, like a real job. It almost boggles the mind that so many people in the IEEE use this LaTeX package and yet the Institute doesn’t really support it in the way that the AMS supports AMS-TeX.

HD-DVD copy protection

Apparently the HD-DVD copy protection system has been compromised. Is this really a big surprise? I mean, if you actually went and talked to the engineers who designed AACS, would they say it’s totally unbreakable? Behind all the PR and marketing hacks who don’t know the first thing about cryptography are the people who actually designed this system, and I’m sure they would list a load of standard caveats like “assuming a bounded-complexity adversary.” But even then, perhaps twixt design and implementation someone screwed up and weakened the protections.

The whole approach of copy-protecting media by controlling the hardware is a little odd to me. It’s like having a secret decoder ring made of a piece of red cellophane. If you make a media format that is useful for carrying data and holding movies, you’re inviting attacks far more complicated than the scenario of an average user buying a player and plugging it into their TV. Were the design specs really written only for the latter, with no thought about what would happen if someone popped the disc into their computer?

Personally, I think these guys knew it would happen all along and were only surprised at how little time it took. The truth is probably that people in the company who were ignorant of the technology ended up approving something that was a lot weaker than it should have been, and not that the guy who hacked the standard was somehow a super-genius. This is not to take away from his accomplishment, but it seems like you can either build a watertight security system or one that leaks like a sieve, and this is more of the latter.