

# A letter to the GlobalSIP Technical Program Chairs

I am a big supporter of robust peer review. However, I feel very strongly that issuing short review deadlines weakens the review process and has a negative impact on the quality of research. I previously had an experience with a machine learning conference that assigned me 9 papers requiring in-depth review and gave me less than 3 weeks to complete them. I immediately wrote back saying that this was infeasible, and the deadline was extended by more than a week, as I recall. It was still hard to get the reviews done on time, but I managed it.

People may think I am being petty here, but I think it is important not to get caught in the dilemma between “phone it in and get it done by the deadline” and “pull some all-nighters to get it done right.”

I regret to inform you that I must resign from the Technical Program Committee for GlobalSIP because I will be unable to complete reviews required of me in the time required by the conference.

On July 12 at around 11:45 EST I was assigned 12 papers to review for the conference, for a total of around 60 pages of material (including references). The deadline given was “before July 22, 2015 (AoE)”, which I take to mean approximately 8 AM EST on July 22 given the location of the international date line. This is around 9 days to review 12 papers.

At that time I responded indicating that given my other responsibilities, I would be unable to review such a large volume of material at such short notice in the given time frame. I received no response.

On July 15 at 12:33 AM I received a second request to review the same papers with a revised deadline of “before July 25, 2015 (AoE)”. That is, 2 days after the initial assignment, the deadline was extended by 3 days.

Given my other professional and personal commitments, I will not be able to provide the level of scrutiny required to review the papers in under two weeks. As it stands, the modest extension covering 3 additional business days is not enough, especially given the delay in issuing the extension. I realize that conference submissions do not entail the same depth of review as a journal paper, but they still take time, and the review requests came quite unexpectedly.

Finally, I recognize that the delay in assignment was caused by “system glitches” (as stated in your email) and is not the fault of the PC chairs. However, the brunt of the effect is faced by the reviewers. Without any prior communication or information regarding the delay in review assignments, I am not able to juggle/move/delay other obligations at such short notice.

Anand D. Sarwate

Assistant Professor

Department of Electrical and Computer Engineering

Rutgers, The State University of New Jersey

asarwate@ece.rutgers.edu

http://www.ece.rutgers.edu/~asarwate/

# Manual Cinema’s ADA | AVA

Chicago performance group Manual Cinema has a performance running for another week down in the Financial District at 3LD Arts and Technology Center. It’s a co-production with The Tank, a great nonprofit that supports the development of new works. The show uses 4 overhead projectors, a live band, and actors to make a dialogue-free, live-created shadow-play animation, complete with sound effects. The overall aesthetic reminded me of Limbo, an independent game, although less zombie-filled. The story is about two sisters, Ada and Ava. Ava passes away and Ada has to cope with the grief of losing someone so close to her. We go through her memories of their time together and her fantasies, light and dark, as she mourns and perhaps begins to heal.

I don’t have too much to say except to recommend it highly. If I had one critique it would be that I wanted the story to be more surprising, or revelatory. The medium is at the same time familiar (animation) and new (live performance). They can show so many things and use metaphor (sometimes a bit heavy-handed in a 1940s way) in ways that a conventional play with dialogue would be hard-pressed to do. I wanted to learn something new about grieving, and afterwards I felt like I hadn’t. But then again, I’m still thinking about it, so perhaps I cannot yet put what I have learned into words.

# ISIT 2015 : statistics and learning

The advantage of flying to Hong Kong from the US is that the jet lag was such that I was actually more or less awake in the mornings. I didn’t take such great notes during the plenaries, but they were rather enjoyable, and I hope that the video will be uploaded to the ITSOC website soon.

There were several talks on entropy estimation in various settings that I did not take great notes on, to wit:

- **OPTIMAL ENTROPY ESTIMATION ON LARGE ALPHABETS VIA BEST POLYNOMIAL APPROXIMATION** (Yihong Wu, Pengkun Yang, University of Illinois, United States)
- **DOES DIRICHLET PRIOR SMOOTHING SOLVE THE SHANNON ENTROPY ESTIMATION PROBLEM?** (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)
- **ADAPTIVE ESTIMATION OF SHANNON ENTROPY** (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)

I would highly recommend taking a look for those who are interested in this problem. In particular, it looks like we’re getting towards more efficient entropy estimators in difficult settings (online, large alphabet), which is pretty exciting.
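
For context, the naive plug-in estimator that these papers improve on is a one-liner; here is a minimal sketch (the function name and test setup are mine):

```python
import numpy as np

def plugin_entropy(samples):
    """Naive plug-in (empirical) entropy estimate, in nats."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# On a fair coin the estimate approaches log(2) as n grows.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=100_000)
est = plugin_entropy(x)
```

The plug-in estimate is badly biased when the alphabet size is comparable to the number of samples, which is exactly the large-alphabet regime these papers target.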

**QUICKEST LINEAR SEARCH OVER CORRELATED SEQUENCES**

*Javad Heydari, Ali Tajer, Rensselaer Polytechnic Institute, United States*

This talk was about hypothesis testing where the observer can control the samples being taken by traversing a graph. We have an $n$-node graph (c.f. a graphical model) representing the joint distribution on $n$ variables. The data generated is i.i.d. across time according to either $H_0$ or $H_1$. At each time you get to observe the data from only one node of the graph. You can either observe the same node as before, explore by observing a different node, or make a decision about whether the data comes from $H_0$ or $H_1$. By adopting some costs for different actions you can form a dynamic programming solution for the search strategy, but it’s pretty heavy computationally. It turns out the optimal rule for switching has a two-threshold structure and can be quite a bit different from the independent-observations case when the correlations are structured appropriately.
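
The two-threshold structure is easiest to see in the classical SPRT, which the rule here generalizes. A toy sketch for a single observed Gaussian sequence (this is my own simplification for illustration, not the paper’s graph-traversal algorithm; the thresholds and parameters are made up):

```python
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, lo=-8.0, hi=8.0):
    """Wald's sequential probability ratio test between N(mu0, sigma^2)
    and N(mu1, sigma^2): accumulate the log-likelihood ratio and stop
    the first time it exits the interval (lo, hi)."""
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # log[ f1(x) / f0(x) ] for two Gaussians with equal variance
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= hi:
            return "H1", t
        if llr <= lo:
            return "H0", t
    return "undecided", len(samples)

# Data actually drawn from the H1 distribution:
rng = np.random.default_rng(1)
decision, stop_time = sprt(rng.normal(1.0, 1.0, size=1000))
```

The controlled-sensing twist in the talk is that on top of this stop/continue decision, the observer also chooses *which* node to sample next.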

**MISMATCHED ESTIMATION IN LARGE LINEAR SYSTEMS**

*Yanting Ma, Dror Baron, North Carolina State University, United States; Ahmad Beirami, Duke University, United States*

The mismatch studied in this paper is a mismatch in the prior distribution for a sparse linear observation problem: we measure a sparse signal through a large linear system, where the signal is drawn from a known prior (say a Bernoulli-Gaussian prior). The question is what happens when we do estimation assuming a different prior. The main result of the paper is an analysis of the excess MSE using a decoupling principle. Since I don’t really know anything about the replica method (except the name “replica method”), I had a bit of a hard time following the talk as a non-expert, but thankfully there were a number of pictures and examples to help me follow along.

**SEARCHING FOR MULTIPLE TARGETS WITH MEASUREMENT DEPENDENT NOISE**

*Yonatan Kaspi, University of California, San Diego, United States; Ofer Shayevitz, Tel-Aviv University, Israel; Tara Javidi, University of California, San Diego, United States*

This was another search paper, but this time we have, say, $k$ targets uniformly distributed in the unit interval. At each time you can query a set and get a noisy indicator of whether you hit any targets, where the noise level depends on the Lebesgue measure of the queried set. At some point you stop and guess the target locations; you are successful if, with high probability, you are within $\delta$ of each target. The targeting rate measures how quickly the resolution $\delta$ can shrink with the number of queries (I’m being fast and loose here). Clearly there are some connections to group testing and communication with feedback, etc. They show there is a significant gap between the adaptive and nonadaptive rates here, so you can find more targets if you can adapt your queries on the fly. However, since the rate is defined for a fixed number of targets, we could ask how the gap varies with $k$. They show it shrinks.
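
To get a feel for measurement-dependent noise, here is a toy Bayesian bisection search for a single target, where I made up a noise model in which the flip probability grows with the size of the queried set (an illustration only, not the paper’s scheme):

```python
import numpy as np

def noisy_bisection(target, queries=60, seed=0):
    """Bayesian bisection for one target in [0, 1].  The answer to the
    query "is the target in [0, m)?" is flipped with probability
    0.1 + 0.2 * m (a made-up model in which larger query sets are
    noisier); we track a discretized posterior and query at its median."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 1000, endpoint=False) + 0.0005
    post = np.full(1000, 1.0 / 1000)
    for _ in range(queries):
        m = grid[np.searchsorted(np.cumsum(post), 0.5)]
        flip = 0.1 + 0.2 * m
        ans = (target < m) ^ (rng.random() < flip)
        in_set = grid < m
        # Bayes update: cells consistent with the answer get weight 1-flip.
        post = post * np.where(in_set == ans, 1.0 - flip, flip)
        post /= post.sum()
    return grid[np.argmax(post)]

est = noisy_bisection(0.314159)
```

Querying at the posterior median is the adaptive part; a nonadaptive scheme would have to fix all its query sets in advance, which is where the rate gap comes from.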

**ON MODEL MISSPECIFICATION AND KL SEPARATION FOR GAUSSIAN GRAPHICAL MODELS**

*Varun Jog, University of California, Berkeley, United States; Po-Ling Loh, University of Pennsylvania, United States*

The graphical model for jointly Gaussian variables has no edge between nodes $i$ and $j$ if the corresponding entry of the inverse covariance matrix is zero. They show a relationship between the KL divergence of two distributions and their corresponding graphs. The divergence is lower bounded by a constant if the graphs differ in a single edge — this indicates that estimating the edge structure is important when estimating the distribution.
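
The relevant closed form: for zero-mean Gaussians with precision matrices $\Theta_1$ and $\Theta_2$, the divergence is $\frac{1}{2}[\mathrm{tr}(\Theta_2\Theta_1^{-1}) - d + \log\det\Theta_1 - \log\det\Theta_2]$. A quick numerical sketch (the example matrices are mine):

```python
import numpy as np

def gaussian_kl(theta1, theta2):
    """KL( N(0, theta1^{-1}) || N(0, theta2^{-1}) ) computed from the
    precision (inverse covariance) matrices, via
    0.5 * ( tr(theta2 @ theta1^{-1}) - d + logdet theta1 - logdet theta2 )."""
    d = theta1.shape[0]
    _, logdet1 = np.linalg.slogdet(theta1)
    _, logdet2 = np.linalg.slogdet(theta2)
    return 0.5 * (np.trace(theta2 @ np.linalg.inv(theta1)) - d + logdet1 - logdet2)

# Two 2x2 precision matrices that differ in a single (off-diagonal) edge:
theta_a = np.array([[2.0, 0.5], [0.5, 2.0]])
theta_b = np.array([[2.0, 0.0], [0.0, 2.0]])
kl = gaussian_kl(theta_a, theta_b)
```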

**CONVERSES FOR DISTRIBUTED ESTIMATION VIA STRONG DATA PROCESSING INEQUALITIES**

*Aolin Xu, Maxim Raginsky, University of Illinois at Urbana–Champaign, United States*

Max gave a nice talk on the problem of minimizing an expected loss in estimating a $d$-dimensional parameter which is observed noisily by separate encoders. Think of a CEO-style problem: each sensor observes the parameter through a noisy channel, gets $n$ independent observations from the same model, and can compress its observations to $b$ bits, which it sends over an independent channel to an estimator (so no MAC here). The main result is a lower bound on the expected loss as a function of the number of bits and the mutual information between the parameter and the final estimate. The key is to use the strong data processing inequality to handle the mutual information — the constants that make up the contraction ratio between the mutual informations are important. I’m sure Max will blog more about the result so I’ll leave a full explanation to him (see what I did there?)
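
For intuition on strong data processing, the binary symmetric channel is the textbook example: the contraction coefficient of BSC($p$) is $(1-2p)^2$, so mutual information through the channel shrinks by at least that factor. A toy scalar check (nothing like the paper’s vector-valued setting):

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_sdpi(p):
    """Strong data processing constant of a binary symmetric channel
    with crossover p: for any Markov chain U -> X -> Y with Y the
    BSC(p) output, I(U;Y) <= (1 - 2p)^2 * I(U;X)."""
    return (1 - 2 * p) ** 2

# Sanity check with U = X ~ Bernoulli(1/2): I(X;Y) = 1 - h2(p) bits,
# which must be at most eta * H(X) = eta * 1.
p = 0.1
i_xy = 1 - h2(p)
eta = bsc_sdpi(p)
```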

More on Shannon theory etc. later!

# Recipe: Chaimen mutton stew

A danger of living near New York is the relative proximity of Kalustyan’s and its near-infinite array of spices. Last year on an impulse I bought a packet of dry *chaimen* spice mix (basically the dry ingredients from the paste recipe here) — it’s a mix of fenugreek, cumin, paprika, and garlic used in some Armenian dishes. Since I’m not really up for making bastirma or something fancy like that, I’ve been experimenting over the last year with ways of using it outside of making a dip. Here’s a recipe for a mutton curry (I used goat), but I’ve used an adapted procedure to make a bean and vegetable stew as well: squash, carrots, and white beans, for example.

**Ingredients**

- 2 heaping tbsp chaimen mix
- 1 (6 oz) can of tomato paste
- 1 cup parsley, chopped
- 3 cloves garlic, diced or crushed
- 2-4 tbsp plain yogurt
- 1 lb cubed goat (bone-in stew meat)
- 1 medium onion, diced
- olive oil
- 1/3 cup diced prunes

**Instructions**

Mix chaimen, 2-3 tbsp or so of tomato paste, garlic, 1/2 cup parsley, and enough yogurt to make a thick paste. If necessary add a little olive oil to thin it out. Coat/rub it into the goat, cover, and let it sit for at least an hour on the counter or possibly overnight in the fridge.

Heat oil with onions in a heavy pot/dutch oven and cook on low until the onions turn translucent. Add 1 clove of garlic somewhere in that time, being careful not to let it burn. Add the goat and marinade and cook for a few minutes, turning over a few times. Add enough water to just cover the goat and bring it to a boil, then cover, turn the heat to low, and cook slowly for at least an hour, preferably more. Uncover and cook for 15 minutes longer to reduce the liquid. By this point the meat should be falling off the bones pretty easily. If not, you can cook a bit longer. Add the prunes and additional tomato paste to thicken (as needed) and cook for 15-20 minutes, still on low, until the prunes dissolve. Remove from heat and serve garnished with the remaining parsley.

*Note:* other adaptations could include adding potatoes or some other starch about 30 minutes before finishing, or other vegetables that you think might be good. If you make this with chicken, adding some vegetables would be great. For a vegetarian version you can shorten the cooking time a bit since you will likely add less liquid. White beans, kidney beans, and fava beans all work pretty well (or a mix of them); it takes on a bit of a cassoulet-like consistency in that case. There are lots of places you can substitute here, like raisins or apricots for the prunes.

# ISIT 2015 begins!

# AISTATS 2015: a few talks from one day

I attended AISTATS for about a day and change this year — unfortunately, due to teaching, I missed the poster I had there, but Shuang Song presented our work on learning from data sources of different quality, which is joint work with Kamalika Chaudhuri and myself. This was my first AISTATS. It had a single track of oral presentations and poster sessions for the remaining papers. The difficulty with a single track for me is that my interests are relatively focused, and the format of a general audience with specialist subject matter meant that I couldn’t get as much out of the talks as I would have wanted. Regardless, I did get exposed to a number of new problems. Maybe the ideas can percolate for a while and inform something in the future.

**Computational Complexity of Linear Large Margin Classification With Ramp Loss**

*Søren Frejstrup Maibing, Christian Igel*

The main result of this paper (I think) is that ERM under ramp loss is NP-hard. They gave the details of the reduction but since I’m not a complexity theorist I got a bit lost in the weeds here.

**A la Carte — Learning Fast Kernels**

*Zichao Yang, Andrew Wilson, Alex Smola, Le Song*

Ideas like “random kitchen sinks” and other kernel approximation methods require you to have a kernel you want to approximate, but in many problems you in fact need to learn the kernel from the data. If I give you a (shift-invariant) kernel function, then you can take its Fourier transform. This turns out to be a probability distribution, so you can sample frequencies i.i.d. from it and build a randomized Fourier approximation of the kernel. If you don’t know the kernel function, or you have to learn it, then you could instead try to learn/estimate the transform directly. This paper was about trying to do that in a reasonably efficient way.
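
The fixed-kernel version (random kitchen sinks) is short enough to sketch: for the RBF kernel $k(x,y) = \exp(-\gamma\|x-y\|^2)$ the Fourier transform is Gaussian, so the features below are standard (the learned-kernel method in the paper is more involved; the parameter choices here are mine):

```python
import numpy as np

def rff_features(X, n_features, gamma, seed=0):
    """Random Fourier features for the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2).  By Bochner's theorem the
    kernel's Fourier transform is a Gaussian; sampling frequencies
    from it gives features z with E[z(x) . z(y)] = k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Inner products of feature maps approximate the exact kernel matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = rff_features(X, n_features=20000, gamma=0.5)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
```

Learning the kernel then amounts to learning the distribution that `W` is drawn from, rather than fixing it to a Gaussian.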

**Learning Where to Sample in Structured Prediction**

*Tianlin Shi, Jacob Steinhardt, Percy Liang*

This was about doing Gibbs sampling, not for MCMC sampling from the stationary distribution, but for “stochastic search” or optimization problems. The intuition was that some coordinates are “easier” than others, so we might want to focus resampling on the harder coordinates. But this might lead to inaccurate sampling. The aim here was to build a heterogeneous sampler that is cheap to compute and still does the right thing.

**Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning**

*Mario Lucic, Mesrob Ohannessian, Amin Karbasi, Andreas Krause*

This paper won the best student paper award. They looked at a k-means problem where they do “data summarization” to make the problem a bit more efficient — that is, by learning over an approximation/summary of the features, they can find different tradeoffs between the running time, risk, and sample size for learning problems. The idea is to use coresets — I’d recommend reading the paper to get a better sense of what is going on. It’s on my summer reading list.

**Averaged Least-Mean-Squares: Bias-Variance Trade-offs and Optimal Sampling Distributions**

*Alexandre Defossez, Francis Bach*

What if you want to do SGD but you don’t want to sample the points uniformly? You’ll get a bias-variance tradeoff. This is another one of those “you have to read the paper” presentations. A nice result if you know the background literature, but if you are not a stochastic gradient aficionado, you might be totally lost.
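
The basic mechanics are easy to illustrate for least squares: sample example $i$ with probability $p_i$ and divide its gradient by $n p_i$ to keep the stochastic gradient unbiased. A toy sketch (sampling proportional to row norms is a common heuristic, not necessarily the paper’s optimal distribution; all parameters are mine):

```python
import numpy as np

def weighted_sgd(X, y, probs, steps=20000, lr=0.01, seed=0):
    """SGD for least squares where example i is drawn with probability
    probs[i]; dividing its gradient by n * probs[i] keeps the
    stochastic gradient unbiased for the uniform-average loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for i in rng.choice(n, size=steps, p=probs):
        grad = (X[i] @ w - y[i]) * X[i] / (n * probs[i])
        w -= lr * grad
    return w

# Noiseless synthetic data; sample rows proportionally to squared norm.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
probs = (X ** 2).sum(axis=1)
probs /= probs.sum()
w_hat = weighted_sgd(X, y, probs)
```

The bias-variance tradeoff in the paper comes from choosing `probs`: the reweighting keeps the gradient unbiased, but the sampling distribution controls its variance.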

**Sparsistency of $\ell_1$-Regularized M-Estimators**

*Yen-Huan Li, Jonathan Scarlett, Pradeep Ravikumar, Volkan Cevher*

In this paper they find a new condition, which they call local structured smoothness, which is sufficient for certain M-estimators to be “sparsistent” — that is, they recover the support pattern of a sparse parameter asymptotically as the number of data points goes to infinity. Examples include the LASSO, regression in generalized linear models, and graphical model selection.
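
For the LASSO case, support recovery is easy to play with via proximal gradient descent (ISTA); a minimal sketch on synthetic data (all parameter choices are mine):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, steps=2000):
    """Proximal gradient (ISTA) for the LASSO:
    minimize (1/2n) * ||y - X w||^2 + lam * ||w||_1."""
    n, d = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n  # Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Synthetic sparse recovery: only the first two coefficients are nonzero.
rng = np.random.default_rng(3)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:2] = [3.0, -2.0]
y = X @ w_true + 0.1 * rng.normal(size=n)
w_hat = ista_lasso(X, y, lam=0.1)
```

With enough samples and a well-chosen regularization level, the estimate is exactly zero off the true support, which is the sparsistency phenomenon the paper generalizes beyond the LASSO.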

Some of the other talks which were interesting but for which my notes were insufficient:

- **Two-stage sampled learning theory on distributions** (*Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur*)
- **Generalized Linear Models for Aggregated Data** (*Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo*)
- **Efficient Estimation of Mutual Information for Strongly Dependent Variables** (*Shuyang Gao, Greg Ver Steeg, Aram Galstyan*)
- **Sparse Submodular Probabilistic PCA** (*Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo*)