too many problems

… and not enough solutions:

  • Matching source and channel models for large sensor networks: I want to characterize when a fully-distributed set of gains can “match” a linear/matrix observation model with a linear/matrix communication channel model. I think it’s a broad class, especially as the number of gains (which is the number of sensors) increases.
  • Dense sensor network scaling laws: under some additional constraints, it should be possible to show that the total capacity does not scale with the network size for dense networks.
  • Causal jamming relays: a new take on relays as potential jammers. You can’t trust anyone.
  • Fast distributed consensus: not quite sure where this is going, but it has to do with creating a speedy algorithm for calculating the average of a function on a graph.
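The last item is the most concrete of the four; the standard linear-iteration approach to distributed averaging can be sketched as follows (a minimal illustration, assuming a small path graph with Metropolis weights — the graph, initial values, and iteration count are my own stand-ins, not anything from the post):

```python
import numpy as np

# Toy sketch of distributed average consensus: each node repeatedly
# replaces its value with a weighted average of its neighbors' values.
# With a symmetric doubly-stochastic weight matrix W, every node's
# value converges to the global average of the initial values.

n = 5
values = np.array([1.0, 3.0, 5.0, 7.0, 9.0])  # initial node values
target = values.mean()                         # the consensus value

# Path graph 0-1-2-3-4 with Metropolis-Hastings weights:
# W[i, j] = 1 / (1 + max(deg(i), deg(j))) on each edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
deg = np.array([1, 2, 2, 2, 1])
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()  # rows (and by symmetry, columns) sum to 1

x = values.copy()
for _ in range(200):  # each step uses only local neighbor information
    x = W @ x

print(np.allclose(x, target))  # prints True: all nodes agree on the mean
```

The “fast” part of the research question is then about choosing W (or the communication schedule) to maximize the convergence rate, which is governed by the second-largest eigenvalue magnitude of W.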

There are, of course, other problems, but unless I write these down I’ll be liable to forget them when the next shiny object comes along.

more heartening news

Now I have to make sure there isn’t too much overlap between the papers…

We are pleased to inform you that your paper listed below has been accepted for presentation as a Regular paper in session WA6 at the 2005 Conference on Information Sciences and Systems. “Estimation from misaligned observations with limited feedback”

I am a bit more excited about going to Baltimore, however.

maybe I can do this thing after all

Got this in the mail last night:

Congratulations – your paper #1568949604 (‘Fading observation alignment
via feedback’) for The Fourth International Conference on Information
Processing in Sensor Networks has been accepted for The Fourth
International Conference on Information Processing in Sensor Networks.
We received 213 papers for the Main Track and 63 papers for the SPOTS
Track. After a thorough review process, we accepted 44 papers for the
main track and 24 papers for the SPOTS track.

Your paper has been assigned to the session POSTERS – MAIN TRACK.

It almost makes up for discovering the problem I had been working on for the last month and a half was solved in 2002.

professional blather

As fate would have it, I am in the same subfield of electrical engineering as my father, namely Information Theory, but even within this subfield we are not in the same sub-subfield. This leads to endless frustration for me nowadays; having been used to babbling to him about various technical points and tidbits up until now, I am abruptly cut off now that I have my own agenda that doesn’t fit neatly into the undergraduate and beginning graduate curriculum. I’ve been on a crutch for the last 24 years. It’s not that I ask him lots of questions or that he demystifies things I wouldn’t otherwise have figured out, but the comfort of being able to explain technical things to someone who understands is gone. Talking to my advisor about my little points of misunderstanding is a waste of his time. Talking to my father about them is somehow less guilt-inducing.

Now, when I explain my research to him in an attempt to figure out what I’m doing wrong, I have to start from the basics. Which is good in its own way, but that little “wasting other people’s time” guilt kicks in now. It’s kind of sad to me, but that’s a part of moving on, I guess. I still hope to someday co-author a paper with him, if for no other reason than the fun of seeing it as “Sarwate and Sarwate.” As Lucky from Waiting for Godot says, “only time will tell.”

characterizations of entropy

I was looking up axiomatic characterizations of entropy today and figured I’d share. There are two different axiomatizations I found while rooting around in my books. I’m sure there are more, but these have nice, plausible arguments behind them. The entropy of a random variable measures the uncertainty inherent to that random variable. Earlier I argued that one should think of this uncertainty in terms of praxis — how many bits it takes to resolve something, or how many bits it can resolve. Here the question is more fundamental: what do we mean by the uncertainty?
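As a reference point, the quantity in question is Shannon’s entropy, and the best-known axiomatization is the Shannon–Khinchin one (sketched here from memory; see any standard text for the precise statements):

```latex
% Shannon entropy of a distribution (p_1, \dots, p_n):
\[
  H(p_1,\dots,p_n) \;=\; -\sum_{i=1}^{n} p_i \log p_i .
\]
% Shannon--Khinchin axioms (sketch): any H satisfying
%   (1) continuity in (p_1, \dots, p_n),
%   (2) maximality: H is largest at the uniform distribution,
%   (3) expansibility: H(p_1,\dots,p_n,0) = H(p_1,\dots,p_n),
%   (4) grouping: H(X,Y) = H(X) + H(Y \mid X),
% equals the formula above, up to the choice of logarithm base.
```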

people should still write like this

From J. Wolfowitz (father of Paul Wolfowitz), in Coding Theorems of Information Theory:

The use of combinatorial arguments is frequent in probability theory. We shall reverse this usual procedure and use formally probabilistic arguments to obtain combinatorial results. However, the probabilistic arguments to be employed are very simple and of no depth, and could easily be replaced by suitable combinatorial arguments. Their chief role is therefore one of convenience, to enable us to proceed with speed and dispatch.

And now to vanquish this paper I’m writing with “speed and dispatch.”

One awesome and one not-so-awesome thing

Awesome thing: Jack Silverstein posted a reference to a paper of his in response to an earlier posting — I had looked at other papers of his but not the one he cited. I probably didn’t find it due to incompetence on my part, but maybe this blogging thing is sometimes useful.

Not-so-awesome thing: My department’s reliance on IT infrastructure that always seems to be breaking/undergoing upgrades. I know I was spoiled by MIT, but is it too much to ask to have some sort of redundant backup of the mailserver so that service doesn’t go down for hours at a time? Maybe it’s time to get Gmail…

trouble

You know you’ve started reverting to childhood when you write “Bernoulli” as “Bear Nooly.”

Hey everybody, it’s Nooly, the Bear who flips coins! He’s always goin’ on about how unfair his coin is… what a wacky bear.

I was 30 years too late

In doing some literature searches, I came across a paper from 1973 with the following in its abstract:

A generalization of information theory is presented with the aim of distinguishing the direction of information flow for mutually coupled statistical systems… An extension to a group of such systems has also been proposed. The theory is able to describe the informational relationships between living beings and other multivariate complex systems as encountered in economy. An application example referring to group behavior with monkeys is given.

Monkeys are so cool…