Alex and I were talking this week about what the syllabus might look like for a course along the lines of Arora’s Theorist’s Toolkit or Kelner’s An Algorithmist’s Toolkit (HT to Lav for the link on my last post on this topic). I think developing a course along these lines, with interchangeable 2-4 lecture “modules,” would be a great thing in general. Modular design is appealing from an engineering perspective (although, as my advisor might say, separation of design is in general suboptimal). The question is, what would a good set of topics be? Here’s a partial set:
- Advanced matrix tricks: matrix derivatives and other tools
- Majorization and applications
- Random graphs and random geometric graphs
- Mixing times for Markov chains
- Auctions and related allocation mechanisms
- Random matrix theory: the basic results
- Message passing algorithms
- High-dimensional convex geometry
- Concentration of measure phenomena
- Alternating projection techniques
If any readers have ideas for additional topics, please leave a comment.
The point of such a course would be to give students some exposure to analysis and modeling techniques and, more importantly, a set of references so they can learn more if they need to. It would provide a kind of cultural literacy for reading systems EE papers. Of course, if you’re going to go and study LDPC codes, the brief introduction to message passing wouldn’t be enough, but if you want to understand the issues around BP decoding, three lectures may be sufficient for you to follow what is going on. The list above has some items that are too narrow and some that are too broad. There are a lot of different tools out there, and some exposure to what they are used for would be useful, especially for graduate students at schools that don’t have an extremely diverse seminar series.
6 thoughts on “EE Toolkit Topics”
I think the key difference from traditional theoretical EE classes is focusing on “proving techniques” rather than application areas.
Some slight variations on your list:
- Topics in Markov chains (hitting times, mixing times, etc.)
- Probabilistic method (first/second moment methods, local lemma)
- Optimization, linear programming, and duality theory
- Concentration inequalities (Chernoff bounds, Azuma-Hoeffding, large deviations)
- Random graphs, expanders, small-world graphs, and models for wireless networks
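As a taste of what a concentration-inequalities module might demo, here is a minimal numerical sketch (my own toy example, not from the post) comparing the empirical tail of a sum of fair ±1 coin flips against the Hoeffding bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 200_000

# S = sum of n independent fair +/-1 steps, simulated via the number of
# +1 steps, which is Binomial(n, 1/2): S = 2*Binom(n, 1/2) - n.
sums = 2 * rng.binomial(n, 0.5, size=trials) - n

t = 100  # deviation threshold
empirical = np.mean(sums >= t)

# Hoeffding: for independent steps bounded in [-1, 1],
# P(S >= t) <= exp(-t^2 / (2n)).
bound = np.exp(-t**2 / (2 * n))
print(f"empirical tail: {empirical:.5f}  Hoeffding bound: {bound:.5f}")
```

With these numbers the bound is exp(-5) ≈ 0.0067, while the true tail probability is closer to 0.0008, which illustrates the usual point that these bounds are loose but dimension- and distribution-free.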
Hmmm, what other families of tricks are common enough, and can be surveyed in 2-4 lectures each?
I would love such a course! Maybe the following topics are of interest too:
1) Introduction to game theory
2) Algebraic techniques – solutions of systems of polynomial equations (the ideal-variety correspondence), polynomial interpolation
3) Lattices – properties, finding shortest vectors, good lattices
These are all awesome topics; I wish I could keep all of these tools handy. However, I don’t think it works that way. I think knowledge of the tool collection is a bit like being a kid with a new Swiss Army knife in your pocket (before it became semi-illegal to carry a knife to school or on a plane or any other convenient place). It made you feel like you could tackle all sorts of projects, but in fact the tools were never quite useful enough for anything significant.
It seems to me that the bulk of the topics on the list can be divided into a course on Optimization and a course on Stochastic Processes, and perhaps a course on Distributed Algorithms. Admittedly, some topics don’t seem to fit (auctions? lattices?), but I don’t think that diminishes the value of courses that can provide an integrated understanding of a body of knowledge.
Without studying the UCB catalog, don’t these courses already exist somewhere on campus?
I don’t think the point of this kind of class would be to make students feel like they could tackle new problems right away. The goals would be twofold:
(a) General literacy. I learned about auction and pricing mechanisms from going to seminars, not from taking a graduate course on networks. Perhaps I should have taken that course — but there are a lot of classes out there, and students have only so much time. I think of this kind of class as a sort of mulch for students’ understanding to grow and integrate over time. Perhaps manure is the better analogy 🙂
(b) Access. The Swiss Army knife analogy is a good one — the benefit of a Swiss Army knife is that if you need a bottle opener, or a pair of tweezers, or mini scissors, you know where to look. Of course the other students in your program or group are a resource, but not that many schools have the open atmosphere that Berkeley or Winlab at Rutgers has. It’s easy to say “leverage your peers,” but in practice it might be nontrivial. Compounding that is the general grad school depression that makes people feel awkward about asking for help or seeming stupid.
In my mind, a course like this would give students an idea of the kinds of proof tools and models that are out there without having to take a full-on course.
Of course a stochastic processes class would cover Markov chains, but maybe the mixing time analysis is shunted aside in favor of spending more time on queueing. At the graduate level, syllabi are often highly mutable, and depending on who teaches the basic course and their proclivities, some topics may be emphasized over others. An additional benefit of a course like this might be to smooth out some of the bumps.
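To illustrate the kind of small computation a mixing-times module might walk through, here is a toy numerical sketch (the chain, start state, and threshold are my own choices, not from the post): a lazy random walk on a 6-cycle, iterated until its total variation distance to the uniform stationary distribution drops below 1/4.

```python
import numpy as np

# Lazy random walk on a 6-cycle: stay put w.p. 1/2, else step to a
# uniformly chosen neighbor. This chain is reversible with uniform
# stationary distribution.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

pi = np.full(n, 1 / n)          # uniform stationary distribution
mu = np.zeros(n); mu[0] = 1.0   # start from state 0

# Iterate until total variation distance to pi falls below eps = 1/4
# (the conventional threshold in mixing-time definitions).
eps, t = 0.25, 0
while 0.5 * np.abs(mu - pi).sum() > eps:
    mu = mu @ P
    t += 1
print("steps to TV distance <= 1/4 from state 0:", t)  # → 4
```

The spectral view gives the same answer: the second-largest eigenvalue of this lazy walk is 3/4, so the distance shrinks geometrically at roughly that rate, and a handful of steps suffices on a 6-cycle.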
Anand, I just ran across this, because I was searching for “toolkit” for matlab… In terms of theory, I would include: stochastic optimization/approximation, principles of dynamic programming, and fundamentals of the theory of semi- and non-parametric statistics (including chaining, Donsker, the Neyman-Pearson criterion, etc.).
I feel that my emphasis would be on applying these tools/techniques to some concrete models, rather than on detailed proofs of results. In most EE-theory papers you find clever use of existing theorems and techniques (sometimes, modifying existing proofs), rather than what a probabilist would do (formalizations and new proof techniques). For example, in some sense auctions are a combination of game theory and optimization with some randomized algorithms. Or, understanding message passing is more about understanding the underlying model (conditional independences) and variational inequalities, and so forth.
Pingback: Toolkit revisited « An Ergodic Walk