Tracks : kisses are a better fate than wisdom

  1. A Little Lost — Nat Baldwin
  2. Lemonade — Braids
  3. It All Began With A Burst — Kishi Bashi
  4. Plasticities — Andrew Bird
  5. Tenere Taqqim Tossam — Tinariwen
  6. Chapter 8 -Seashore and Horizon- — Cornelius
  7. Cavaleiro Monge — Antônio Carlos Jobim
  8. No Balanço da Canoa — Maga Bo
  9. Moonday School (Intergalactic Church) — THEESatisfaction
  10. triangle walks — Fever Ray
  11. Wraith Pinned to the Mist and Other Gams — of Montreal
  12. Awkward — Lightning Love
  13. Ignore the Bell — The Ladybug Transistor
  14. 1904 — The Tallest Man On Earth
  15. Don’t Try to Fool Me — Miss Li
  16. Forks and Knives (La Fete) — Beirut
  17. That Old Feeling — Miss Erika
  18. Kiss Me — Tom Waits

Braised mizuna and oyster mushrooms

I am headed out of town tomorrow but I wanted to cook up my ill-advised gains from the Logan Square farmer’s market — mizuna and oyster mushrooms. I was a bit inspired by this ohitashi variation, but wanted something a bit more hearty to eat with soba. So I decided to braise the greens with ginger and dashi. This recipe may need tweaking depending on the saltiness of your dashi, etc.

Braised oyster mushrooms, turnips, and mizuna over soba

Ingredients
4 medium Japanese turnips, sliced thinly
1/2 – 1 lb oyster mushrooms, sliced
1 bunch mizuna

3 tbsp diced or grated ginger
3 tbsp mirin or sake
2/3 cup dashi (from scratch or bottle)
2 tbsp soy sauce
peanut oil

cooked soba (buckwheat) noodles.

Lightly coat wok/pan with oil and cook turnips on medium-high until softened and some are lightly browned. Remove turnips and add a little bit more oil and cook mushrooms until they soften and give up liquid. Add turnips and mix. Add mirin/sake and mix well until it cooks off. Make a space in the middle, add a little more oil and cook ginger until aromatic, then mix everything. Add mizuna and mix, then add dashi and soy sauce. Simmer until broth reduces and mizuna wilts, but not too long. Serve over soba.

Toolkit revisited

I joined TTI Chicago almost a year ago, and it’s been an interesting time here. Since my background is a bit different from that of most of the other folks here, I have many moments of “academic cognitive dissonance,” as it were — but more on that later. Madhur Tulsiani is going to offer a toolkit course in the spring focusing on mathematical tools for CS theory, so I wanted to revisit a topic from a few years ago: what an EE-systems/theory “toolkit” would look like. I think a similar course / seminar would be really handy (even for self-study), but the topics we came up with before seem a little dated now. The topics seem to fall into a few categories:

  • advanced stochastic processes : stochastic approximation
  • mathematical economics : game theory, auctions, mechanism design
  • advanced probability : concentration of measure, random graphs
  • optimization : stochastic control, dynamic programming, convex optimization
  • mathematical statistics : asymptotic statistics, minimax theory

Roy’s observation that these topics are already covered in graduate syllabi is still apt. But I think that knowing a smattering of these topics is important for general literacy and critical reading of papers. When reading a new paper, I first situate the techniques within the context of things I know about — if I have to absorb the author’s cursory description of the general method as well as its application to the problem at hand, I get bogged down in the former and find the latter mystifying.

Actually, I think what would be great is to make tutorials on these topics and gather them together. I know that people who make research tutorials spend a lot of time on them and there’s some reluctance to pool them, but these topics are not bleeding edge and could be part of a course. It’s sort of like Connexions, but perhaps a little less wiki-like and more like lecture notes. What would be the best way to do that?

As an aside, Madhur is also thinking of doing a more focused course later which would cover coding and information theory for (theoretical) computer scientists. I’ve thought a fair bit about such a course focused on machine learning — focusing a bit more on statistical issues like redundancy and Sanov’s theorem instead of Gaussian channels. But how could one do an information theory course without \frac{1}{2} \log( 1 + \mathsf{SNR} )?

DIMACS Workshop on Differential Privacy

Via Kamalika, I heard about the DIMACS Workshop on differential privacy at the end of October:

DIMACS Workshop on Differential Privacy across Computer Science
October 24-26, 2012
(immediately after FOCS 2012)

Call for Abstracts — Short Presentations

The upcoming DIMACS workshop on differential privacy will feature invited talks by experts from a range of areas in computer science as well as short talks (5 to 10 minutes) by participants.

Participants interested in giving a short presentation should send an email to asmith+dimacs@psu.edu containing a proposed talk title, abstract, and the speaker’s name and affiliation. We will try to accommodate as many speakers as possible, but

a) requests received before October 1 will get full consideration
b) priority will be given to junior researchers, so students and postdocs should indicate their status in the email.

More information about the workshop:

The last few years have seen an explosion of results concerning differential privacy across many distinct but overlapping communities in computer science: Theoretical Computer Science, Databases, Programming Languages, Machine Learning, Data Mining, Security, and Cryptography. Each of these different areas has different priorities and techniques, and despite very similar interests, motivations, and choice of problems, it has become difficult to keep track of this large literature across so many different venues. The purpose of this workshop is to bring researchers in differential privacy across all of these communities together under one roof to discuss recent results and synchronize our understanding of the field. The first day of the workshop will include tutorials, representing a broad cross-section of research across fields. The remaining days will be devoted to talks on the exciting recent results in differential privacy across communities, discussion and formation of interesting open problems, and directions for potential inter-community collaborations.

The workshop is being organized by Aaron Roth (blog) and Adam Smith (blog).

Juggling, (a)synchrony, and queues

Research often takes twisty little paths, and as a result of a recent attempt to make headway on a problem, I found myself trying to understand the difference between the following two systems with k balls and n (ordered) bins:

  1. Synchronous: take all of the top balls in each bin and reassign them randomly and uniformly to the bottoms of the bins.
  2. Asynchronous: pick a random bin, take the top ball in that bin, and reassign it randomly and uniformly to the bottom of a bin.

These processes sound a bit similar, right? The first one is a batch version of the second one. Sort of. We can think of this as modeling customers (balls) in queues (bins) or balls being juggled by n hands (bins).

Each of these processes can be modeled as a Markov chain on the vector of bin occupation numbers. For example, for 3 balls and 3 bins we have configurations that look like (3,0,0) and its permutations, (2,1,0) and its permutations, and (1,1,1) for a total of 10 states. If you look at the two Markov chains, they are different, and it turns out they have different stationary distributions, even. Why is that? The asynchronous chain is reversible and all transitions are symmetric. The synchronous one is not reversible.
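For a case this small, the two chains can be compared exactly. Here is a minimal sketch (all names are my own, not from any particular library) that enumerates the 10 occupancy states for k = 3 balls and n = 3 bins, builds both transition matrices, and computes their stationary distributions numerically. One assumption I had to make: in the asynchronous chain, picking an empty bin does nothing (a self-loop), since the description above doesn’t pin that case down.

```python
# Sketch: stationary distributions of the synchronous vs. asynchronous
# ball-recirculation chains for k balls in n ordered bins.
# A state is a tuple of bin occupancy counts (within-bin order of balls
# doesn't affect the occupancy chain, since balls are exchangeable).
from itertools import product

import numpy as np

def states(k, n):
    """All compositions of k into n nonnegative parts."""
    if n == 1:
        return [(k,)]
    return [(first,) + rest
            for first in range(k + 1)
            for rest in states(k - first, n - 1)]

def async_matrix(S, n):
    """Pick a bin uniformly; move its top ball to a uniform bin.

    Assumption: picking an empty bin is a self-loop."""
    idx = {s: i for i, s in enumerate(S)}
    P = np.zeros((len(S), len(S)))
    for s in S:
        for i in range(n):
            if s[i] == 0:
                P[idx[s], idx[s]] += 1 / n
                continue
            for j in range(n):
                t = list(s); t[i] -= 1; t[j] += 1
                P[idx[s], idx[tuple(t)]] += 1 / n ** 2
    return P

def sync_matrix(S, n):
    """Remove the top ball of every nonempty bin; each removed ball
    independently lands in a uniformly random bin."""
    idx = {s: i for i, s in enumerate(S)}
    P = np.zeros((len(S), len(S)))
    for s in S:
        m = sum(1 for c in s if c > 0)
        base = [c - 1 if c > 0 else 0 for c in s]
        for dest in product(range(n), repeat=m):
            t = list(base)
            for j in dest:
                t[j] += 1
            P[idx[s], idx[tuple(t)]] += 1 / n ** m
    return P

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

S = states(3, 3)
pi_async = stationary(async_matrix(S, 3))
pi_sync = stationary(sync_matrix(S, 3))
print(len(S))                          # 10 states, matching the count above
print(np.allclose(pi_async, pi_sync))  # the two distributions disagree
```

Since the asynchronous chain’s transition matrix is symmetric (hence doubly stochastic), its stationary distribution comes out uniform over the 10 states; the synchronous chain’s does not, which is a concrete way to see the reversibility argument above.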

One question is whether there is a limiting sense in which these are similar — can the synchronous batch-recirculating scheme be approximated by the asynchronous version if we let n or k get very large?