I arrived early enough to catch the tutorials on the first day. There was a 3-hour session in the morning and another in the afternoon. For the morning I decided to expand my horizons by attending Manoj Gopalkrishnan’s tutorial on the physics of computation. Manoj focused on the question of how much energy it takes to erase or copy a bit of information. He started with some historical context via von Neumann, Szilard, and Landauer to build a correspondence between familiar information-theoretic concepts and their physical counterparts; in this correspondence, relative entropy corresponds to free energy. He then turned to what one might call “finite time” thermodynamics. Suppose that you have to apply a control that operates in finite time in order to change a bit. One way to look at this is through controlling the transition probabilities in a two-state Markov chain representing the value of the bit you want to fix: you want to drive the chain from its resting stationary distribution to a nearly deterministic distribution within a fixed time. At this level I more or less understood what was going on, but since my physics background is pretty poor, I think I missed out on how the physical intuition/constraints impact what control strategies you can choose.
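As a toy illustration of the control picture (this is just the Markov-chain abstraction, not Manoj's physical model), here is a sketch where choosing the transition probabilities drives the bit toward a near-deterministic state:

```python
import numpy as np

def evolve(p0, P, steps):
    """Evolve a row distribution over {0, 1} under transition matrix P."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

# A control that biases both states toward 0, "erasing" the bit:
# whatever the current value, the next state is 0 with probability 1 - eps.
eps = 0.01
P = np.array([[1 - eps, eps],
              [1 - eps, eps]])

# Starting from a uniformly random bit, the chain is driven to a
# distribution putting almost all of its mass on 0.
p_final = evolve([0.5, 0.5], P, steps=10)
```

The physics enters in how much energy a control like `P` costs to implement in finite time, which is the part of the tutorial I only half-followed.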

Prasad Santhanam gave the other tutorial, which was a bit more solid ground for me. This was not quite a tutorial on large-alphabet probability estimation, but more directly on universal compression and redundancy calculations. The basic setup is that you have a family of distributions and you don’t know which distribution will generate your data. Based on the data sample you want to do something: estimate some property of the distribution, compress the sample to a size close to its entropy, etc. A class can be weakly or strongly compressible, or *insurable* (which means being able to estimate quantiles), and so on. These problems turn out to be a bit different from each other depending on some topological features of the class. One interesting thing to consider for the machine learners out there is the stopping time that you need in some analyses. As you go along, observing the data and doing your task (estimation, compression, etc.), can you tell *from the data* that you are doing well? This has major implications for whether or not an online algorithm can even work the way we want it to, and is something Prasad calls “data-driven compressible.”

I’ll try to write another post or two about the talks I saw as well!


The title pretty much describes it — there are two receivers which are both looking out for a particular message. This is the identification problem, in which the receiver only cares about a particular message (but we don’t know which one) and we have to design a code such that they can detect the message. The number of messages is doubly exponential in the blocklength — roughly $2^{2^{nC}}$, where $C$ is the Shannon capacity of the DMC. In the broadcast setting we run into the problem that the errors for the two receivers are entangled. However, their message sets are disjoint. The way out is to look at the average error for each receiver (averaged over the other user’s message). The main result is that the rates only depend on the conditional marginals, and they have a strong converse.

**Efficient compression of monotone and m-modal distributions**

*Jayadev Acharya (University of California, San Diego, USA); Ashkan Jafarpour (University of California, San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)*

A monotone distribution is a distribution on the natural numbers whose probabilities are non-increasing. The redundancy for this class is infinite, alas, so they restrict the support to a finite (but potentially large) size. They propose a two-step compression scheme in which the first step is to approximate the true distribution with a piecewise-constant step distribution, and then use a compression scheme for step distributions.
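A minimal sketch of the first step (my own toy illustration with made-up breakpoints — the paper chooses the pieces carefully): average the monotone distribution within each piece to get a step distribution that is close in total variation.

```python
import numpy as np

def step_approx(p, breakpoints):
    """Approximate p by a piecewise-constant (step) distribution:
    average the probabilities within each piece."""
    q = np.empty_like(p)
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        q[lo:hi] = p[lo:hi].mean()
    return q

# A monotone distribution on a support of size 8.
p = np.array([0.3, 0.2, 0.15, 0.12, 0.1, 0.06, 0.04, 0.03])

# Exponentially growing pieces: {0}, {1}, {2,3}, {4,...,7}.
q = step_approx(p, [0, 1, 2, 4, 8])
tv = 0.5 * np.abs(p - q).sum()   # total variation distance
```

Averaging within pieces preserves total mass and monotonicity, and with few pieces the step distribution is much cheaper to describe.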

**Writing on a Dirty Paper in the Presence of Jamming**

*Amitalok J Budkuley (Indian Institute of Technology, Bombay, India); Bikash K Dey (Indian Institute of Technology Bombay, India); Vinod M Prabhakaran (Tata Institute of Fundamental Research, India)*

Ahh, jamming. A topic near and dear to my heart. This paper takes a game-theoretic approach to jamming in a DPC setup: “the capacity of the channel in the presence of the jammer is the unique Nash equilibrium utility of the zero sum communication game between the user and the jammer.” This is a mutual information game, and they show that i.i.d. Gaussian jamming and dirty paper coding are a Nash equilibrium. I looked at an AVC version of this problem in my thesis, and the structure is quite a bit different, so this was an interesting different take on the same problem — how can we use the state information to render adversarial interference as harmless as noise?

**Stable Grassmann Manifold Embedding via Gaussian Random Matrices**

*Hailong Shi (Tsinghua University & Department of Electronic Engineering, P.R. China); Hao Zhang (Tsinghua University, P.R. China); Gang Li (Tsinghua University, P.R. China); Xiqin Wang (Tsinghua University, P.R. China)*

This was in the session I was chairing. The idea is that you are given a subspace (i.e., a point on the Grassmann manifold) and you want to see what happens when you randomly project it into a lower-dimensional space using an i.i.d. Gaussian matrix *a la* Johnson-Lindenstrauss. The JL Lemma says that such projections are approximately length-preserving. Are they also volume-preserving? It turns out that they are (no surprise). The main tools are measure concentration results together with a union bound over a covering set.
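The length-preservation statement is easy to check numerically; here is a quick sketch (my own toy check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 200                      # ambient and projected dimensions

# i.i.d. Gaussian projection, scaled so lengths are preserved on average.
S = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)
ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)
# ratio concentrates around 1 (the JL property); the paper asks the
# analogous question for the volume of a projected subspace.
```
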

**Is “Shannon capacity of noisy computing” zero?**

*Pulkit Grover (Carnegie Mellon University, USA)*

Yes. I think. Maybe? Pulkit set up a physical model for computation and used a cut-set argument to show that the total energy expenditure is high. I started looking at the paper in the proceedings and realized that it’s significantly different than the talk though, so I’m not sure I really understood the argument. I should read the paper more carefully. You should too, probably.


Vijay Kumar’s plenary was on codes for distributed storage and repair-bandwidth tradeoffs, focusing on extensions of the model. There was a lot of discussion of other code constructions, and how asking for certain properties (such as “locality”) can cost you something in the tradeoff. This is important when you can’t repair a code from arbitrary nodes in the network/data center — because there’s an underlying network which supplies the data for repair, codes should probably respect that network. At least that was the moral I took from this talk. Since I don’t work on coding, some things were a little over my head, but I thought he did an excellent job of keeping it accessible with nice concrete examples.


That’s a mouthful! We want a set of ternary codewords such that for any triple of codewords there is at least one position in which all three differ. What’s the maximum size of such a set? More specifically, we want the rate at which this size grows with the blocklength. What we have are upper and lower bounds, and they don’t match.

It turns out that simple i.i.d. random coding is not going to give you a good set — the lower bound comes from a non-uniform random codebook. The upper bound is actually a capacity of a hypergraph. This led him to his second topic, which was on hypergraph entropy, a generalization of graph entropy. This is connected to Sperner families of subsets: collections such that for any pair of subsets, neither contains the other. The rate of growth of Sperner families is also related to the hypergraph entropy.
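To make the combinatorial condition concrete, here is a small brute-force sketch (my own illustration): a checker for the "trifference" property and a greedy attempt to build such a set at blocklength 3. Any set of at most two codewords is vacuously fine; the constraint only bites for triples.

```python
from itertools import combinations, product

def trifferent(codes):
    """True if every triple of ternary codewords has at least one
    coordinate where all three symbols are pairwise different."""
    return all(
        any(len({a[i], b[i], c[i]}) == 3 for i in range(len(a)))
        for a, b, c in combinations(codes, 3)
    )

# Greedily grow such a code of blocklength 3 over the alphabet {0, 1, 2}.
code = []
for w in product(range(3), repeat=3):
    if trifferent(code + [w]):
        code.append(w)
```
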

I didn’t really manage to take notes as good as I might have wanted, but I really enjoyed the lecture, and you can too, now that the video has been posted on the IT Society website. Walking out of the plenary hall I heard some people complaining that the talk was a little too technical, but for me it was quite clear and quite interesting, even at 8:30 in the morning. You can’t please everyone, I guess!


If anyone else has some useful hacks, feel free to leave them in the comments!

**Saving Space**

One of the big problems in NSF proposal writing is that there’s a hard limit on the number of pages (not the number of words), so if you’re at the edge, there’s a lot of “oops, two lines over” hacking to be done towards the end.

- `\usepackage{mathptmx}`: The typeface for your proposal makes a big difference in space. Computer Modern is a bit easier to read since there’s more whitespace, but Times shaved a whole page off of my proposal. The NSF Grant Proposal Guidelines have the list of approved formatting. It seems standard for NIH proposals to use 11pt Arial, but that makes me want to gouge my eyes out. Know thy reviewers, is what I would say: keep in mind what’s standard for the solicitation and don’t make the proposal so dense as to be unreadable. **NB:** apparently the `times` package is deprecated (see comments), so use `\usepackage{mathptmx}` rather than ~~`\usepackage{times}`~~.
- `\usepackage{titlesec}`: This package lets you control the spacing around your titles and subtitles, like this:

\titlespacing\section{0pt}{10pt plus 2pt minus 2pt}{2pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{8pt plus 2pt minus 2pt}{2pt plus 2pt minus 2pt}

See this post for more details, but basically it’s `\titlespacing{command}{left spacing}{before spacing}{after spacing}`. This is handy because there’s a lot of empty space around titles/subtitles and it’s an easy way to trim a few lines while making sure things don’t get too cramped/ugly.
- `\usepackage{enumitem}`: This package lets you control the spacing around your `enumerate` and `itemize` lists. The package has a lot of options, but one that may be handy is `\setlist{nosep}`, which removes the space around the list items. This actually makes things a little ugly, I think, but bulleted lists are helpful to the reviewer and they also take a little more space, so this lets you control the tradeoff. Another thing that is handy to control is the left margin: `\setlist[itemize,1]{leftmargin=20pt}`.
- `\usepackage{savetrees}`: Prasad says it’s great, but I didn’t really use it. YMMV.

**Customizations**

- Sometimes it’s handy to have a new theorem environment for Specific Aims or Open Problems or what-have-you. The problem is (as usual) that the theorem environment by itself puts in extra space and isn’t particularly customizable. So one option is to define a new theorem style:

\newtheoremstyle{mystyle}% name
{5pt}% space above
{5pt}% space below
{\itshape}% body font
{5pt}% indent amount
{\bfseries}% theorem head font
{:}% punctuation after theorem head
{4pt}% space after theorem head
{}% theorem head spec (can be left empty, meaning ‘normal’)

\theoremstyle{mystyle}
\newtheorem{specaim}{Specific Aim}

- Another handy hack is to make a different citation command for your own work, so that it appears in a different color than normal citations if you use `\usepackage[colorlinks]{hyperref}`. I learned how to do this by asking a question on the Stack Exchange.

\makeatletter
\newcommand*{\citeme}{%
  \begingroup
  \hypersetup{citecolor=red}%
  \@ifnextchar[\citeme@opt\citeme@
}
\def\citeme@opt[#1]#2{%
  \cite[{#1}]{#2}%
  \endgroup
}
\newcommand*{\citeme@}[1]{%
  \cite{#1}%
  \endgroup
}
\makeatother

- The `hyperref` package also creates internal links to equations and figures (if you label them), but the link is usually just the number of the label, so you have to click on “1” instead of “Figure 1” being the link. One way to improve this is to make a custom reference command:

\newcommand{\fref}[2]{\hyperref[#2]{#1 \ref*{#2}}}

So now you can write `\fref{Figure}{fig:myfig}` to get “Figure 1” to be clickable.
- You can also customize the colors for hyperlinks:

\hypersetup{
  colorlinks,
  citecolor=blue,
  linkcolor=magenta,
  urlcolor=MidnightBlue}

- Depending on your SRO, they may ask you to deactivate URLs in the references section. I had to ask to figure this out, but basically putting `\let\url\nolinkurl` before the bibliography seemed to work…


**Strong Large Deviations for Composite Hypothesis Testing**

*Yen-Wei Huang (Microsoft Corporation, USA); Pierre Moulin (University of Illinois at Urbana-Champaign, USA)*

This talk was actually given by Vincent Tan, since neither of the authors could make it (this seems to be a theme of talks I’ve attended this summer). The paper was about testing a simple hypothesis versus a composite hypothesis under which the observations are i.i.d. according to one of several possible distributions. There are therefore several different error probabilities, and the goal is to characterize them when we ask for the probability of true detection to be greater than a fixed threshold. This is a sort of generalized Neyman-Pearson setup. They look at the vector of log-likelihood ratios and show that a threshold test is nearly optimal. At the time, I understood the idea of the proof, but I think it’s one of those things where you need to really read the paper.
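A toy version of the setup (my own sketch, not the paper's construction): compute the log-likelihood ratio against each candidate alternative and threshold the whole vector, e.g. rejecting the null if any component is large.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
x = rng.standard_normal(n)          # data drawn under the null N(0, 1)

# Composite alternative: N(mu, 1) for mu in a finite set.
mus = [1.0, -1.0]

# Log-likelihood ratio of N(mu, 1) vs N(0, 1): sum_i (mu*x_i - mu^2/2).
llrs = np.array([mu * x.sum() - n * mu ** 2 / 2 for mu in mus])

# Threshold test on the vector of LLRs.
reject_null = bool((llrs > 0.0).any())
```

Under the null, each LLR concentrates around a large negative value here, so the test keeps the null with high probability.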

**Randomized Sketches of Convex Programs with Sharp Guarantees**

*Mert Pilanci (University of California, Berkeley, USA); Martin J. Wainwright (University of California, Berkeley, USA)*

This talk was about using random projections to lower the complexity of solving a convex program. Suppose we want to minimize a least-squares objective over a constraint set. A sketch replaces the data with a lower-dimensional random projection of it and solves the projected program. One question is how to choose the projection. They show that if it is a randomized Hadamard matrix (the paper also studies Gaussian matrices), then the objective value of the sketched program is at most a constant factor times the value of the original program, as long as the number of rows of the projection is larger than a quantity governed by the *Gaussian width* of the tangent cone of the constraint set at the optimum. For more details look at their preprint on ArXiV.
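Here is a toy sketch-and-solve example for unconstrained least squares (a simplified stand-in for the paper's constrained setting, using a Gaussian rather than Hadamard sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 10, 200               # tall data, sketch down to m rows
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Original program: min_x ||Ax - y||^2.
x_full, *_ = np.linalg.lstsq(A, y, rcond=None)

# Sketched program: min_x ||S(Ax - y)||^2 with a random projection S.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)

obj = lambda x: np.linalg.norm(A @ x - y) ** 2
ratio = obj(x_sk) / obj(x_full)       # >= 1, and close to 1 when m >> d
```

The sketched solve touches an m-by-d system instead of n-by-d, which is the source of the computational savings.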

**On Efficiency and Low Sample Complexity in Phase Retrieval**

*Youssef Mroueh (MIT-IIT, USA); Lorenzo Rosasco (DIBRIS, Unige and LCSL – MIT, IIT, USA)*

This was another talk not given by the authors. The problem is recovery of a complex vector from phaseless measurements — magnitudes of inner products with complex spherically-symmetric Gaussian sensing vectors. Recovery from such measurements is nonconvex and tricky, but an alternating minimization algorithm can reach a local optimum, and if you start it in a “good” initial position, it will find a global optimum. The contribution of this paper is to provide such a smart initialization. The idea is to “pair” the measurements to create new measurements. This leads to a new problem (with half as many measurements) which is still hard, so they find a convex relaxation of that. I had thought briefly about such sensing setups a long time ago (and by thought, I mean puzzled over it at a coffeeshop once), so it was interesting to see what was known about the problem.
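A real-valued toy version of the pipeline (a spectral-style initialization plus alternating minimization; this is my own simplification — the paper's initialization via paired measurements is different, and it assumes the signal norm is known):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 200
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                 # phaseless (here: sign-less) data

# Spectral-style init: top eigenvector of sum_i b_i^2 a_i a_i^T,
# rescaled assuming the signal norm is known (a toy assumption).
M = (A * (b ** 2)[:, None]).T @ A / m
x = np.linalg.eigh(M)[1][:, -1] * np.linalg.norm(x_true)
resid0 = np.linalg.norm(np.abs(A @ x) - b)

# Alternating minimization: guess the missing signs, then least squares.
for _ in range(50):
    s = np.sign(A @ x)
    x, *_ = np.linalg.lstsq(A, s * b, rcond=None)

resid = np.linalg.norm(np.abs(A @ x) - b)
```

Each iteration minimizes over the signs and then over the signal, so the residual is monotonically non-increasing; the role of a good initialization is to land in the basin of the global optimum.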

**Sorting with adversarial comparators and application to density estimation**

*Jayadev Acharya (University of California, San Diego, USA); Ashkan Jafarpour (University of California, San Diego, USA); Alon Orlitsky (University of California, San Diego, USA); Ananda Theertha Suresh (University of California, San Diego, USA)*

Ashkan gave this talk on a problem where you have samples from an unknown distribution and a set of candidate distributions to compare against. You want to find the candidate that is closest to the unknown distribution. One way to do this is via a Scheffé tournament that compares all pairs of distributions — this takes time quadratic in the number of candidates. They show a faster method by studying the structure of the comparators used in the sorting method. The motivation is that running comparisons can be expensive (especially if they involve human decisions), so we want to minimize the number of comparisons. The paper is significantly different from the talk, but I think it would definitely be interesting to those interested in discrete algorithms. The density estimation problem is really just a motivator — the sorting problem is far more general.
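A sketch of the baseline Scheffé tournament (the quadratic-time object the paper improves on; the candidate distributions here are my own toy example):

```python
import numpy as np
from itertools import combinations

def scheffe_winner(p, q, samples):
    """Scheffe test: on the set A = {x : p(x) > q(x)}, keep whichever of
    p, q assigns A probability closer to its empirical frequency."""
    A = p > q
    emp = np.mean(A[samples])
    return 0 if abs(p[A].sum() - emp) <= abs(q[A].sum() - emp) else 1

def tournament(dists, samples):
    """Round-robin over all pairs; return the index with the most wins."""
    wins = [0] * len(dists)
    for i, j in combinations(range(len(dists)), 2):
        wins[(i, j)[scheffe_winner(dists[i], dists[j], samples)]] += 1
    return int(np.argmax(wins))

rng = np.random.default_rng(3)
truth = np.array([0.5, 0.3, 0.1, 0.1])
candidates = [np.array([0.25, 0.25, 0.25, 0.25]),
              truth,
              np.array([0.1, 0.1, 0.3, 0.5])]
samples = rng.choice(4, size=2000, p=truth)
best = tournament(candidates, samples)   # index of the winning candidate
```

With enough samples the true distribution wins every pairwise comparison with high probability; the pairwise comparator is the expensive object the paper tries to call as few times as possible.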


Redundancy of Exchangeable Estimators

Narayana P. Santhanam, Anand D. Sarwate and Jae Oh Woo

Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings. On the other hand, the notion of the pattern of a sequence provides an information-theoretic framework for data compression in large alphabet scenarios. Because data compression and parameter estimation are intimately related, we study the redundancy of Bayes estimators coming from Poisson-Dirichlet priors (or “Chinese restaurant processes”) and the Pitman-Yor prior. This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression. In particular, we identify relations between alphabet sizes and sample sizes where the redundancy is small, thereby characterizing useful regimes for these estimators.

In the large alphabet setting, one thing we might be interested in is sequential prediction: I observe a sequence of butterfly species and want to predict whether the next butterfly I collect will be new or one that I have seen before. One simple way to do this prediction is to put a prior on the set of all distributions on infinite supports and do inference on that model given the data. This corresponds to the so-called Chinese Restaurant Process (CRP) approach to the problem. The information-theoretic view is that sequential prediction is equivalent to compression: the estimator is assigning a probability to the sequence seen so far. An estimator is good if, for any distribution in the class, when the data are drawn i.i.d. according to that distribution, the divergence between the estimator’s assigned probabilities and the true distribution is “small.” The goal of this work is to understand when CRP estimators are good in this sense.
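The sequential prediction rule itself is simple to state; here is a sketch of the standard predictive probabilities for the CRP and Pitman-Yor process (standard formulas, not code from the paper):

```python
def crp_prob_new(n, alpha):
    """CRP(alpha): probability that observation n+1 is a new species,
    given n observations so far."""
    return alpha / (n + alpha)

def pitman_yor_prob_new(n, k, alpha, d):
    """Pitman-Yor(d, alpha): probability of a new species after n
    observations containing k distinct species (discount 0 <= d < 1)."""
    return (alpha + d * k) / (n + alpha)

# With discount d = 0, Pitman-Yor reduces to the CRP.
p_crp = crp_prob_new(100, 1.0)
p_py = pitman_yor_prob_new(100, 20, 1.0, 0.5)
```

The positive discount makes the Pitman-Yor prior expect new species more often, which is what makes it attractive for heavy-tailed (power-law) species distributions.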

This sort of falls in with the “frequentist analysis of Bayesian procedures” thing which some people work on.


- **Fast Stochastic Alternating Direction Method of Multipliers** *(Wenliang Zhong; James Kwok)*: Most of the talks in the Optimization II session were on ADMM or stochastic optimization, or both. This was in the last category. ADMM can have rather high-complexity update rules, especially on large, complex problems, so the goal is to lower the complexity of the update step by making it stochastic. The hard part seems to be controlling the step size.
- **An Asynchronous Parallel Stochastic Coordinate Descent Algorithm** *(Ji Liu; Steve Wright; Christopher Re; Victor Bittorf; Srikrishna Sridhar)*: The full version of this paper is on ArXiV. The authors look at a multi-core lock-free stochastic coordinate descent method and characterize how many cores you need to get linear speedups — this depends on the convexity properties of the objective function.
- **Communication-Efficient Distributed Optimization using an Approximate Newton-type Method** *(Ohad Shamir; Nati Srebro; Tong Zhang)*: This paper looked at 1-shot “average at the end” schemes, where you divide the data onto multiple machines, have each train a linear predictor (for example) using stochastic optimization, and then average the results. This is just averaging i.i.d. copies of some complicated random variable (the output of an optimization), so you would expect some variance reduction. This method has been studied by a few people in the last few years. While you do get variance reduction, the bias can still be bad. On the other extreme, communicating at every iteration essentially transmits the entire data set (or worse) over the network. They propose a new method for limiting communication by computing an approximate Newton step without approximating the full Hessian. It works pretty well.
- **Lower Bounds for the Gibbs Sampler over Mixtures of Gaussians** *(Christopher Tosh; Sanjoy Dasgupta)*: This was a great talk about how MCMC can be really slow to converge. The model is a mixture of Gaussians with random weights (Dirichlet) and means (Gaussian, I think). Since the posterior on the parameters is hard to compute, you might want to do Gibbs sampling. They use conductance methods to get a lower bound on the mixing time of the chain. The tricky part is that the cluster labels are permutation invariant — I don’t care if you label clusters (1,2) versus (2,1) — so they need to construct some equivalence classes. They also have further results on what happens when the number of clusters is misspecified. I really liked this talk because MCMC always seems like black magic to me (and I even used it in a paper!).
- **(Near) Dimension Independent Risk Bounds for Differentially Private Learning** *(Prateek Jain; Abhradeep Guha Thakurta)*: Abhradeep presented a really nice paper with a tighter analysis of output and objective perturbation methods for differentially private ERM, along with a new algorithm for risk minimization on the simplex, though he really only talked about the first part. If you focus on scalar regret, they show that essentially the error comes from taking the inner product of a noise vector with a data vector. *If the noise is Gaussian*, then the noise level is dimension-independent for bounded data. This shows that $(\epsilon,\delta)$-differential privacy yields better sample complexity results than pure $\epsilon$-differential privacy. This feels similar in flavor to a recent preprint on ArXiV by Beimel, Nissim, and Stemmer.
- **Near-Optimally Teaching the Crowd to Classify** *(Adish Singla; Ilija Bogunovic; Gabor Bartok; Amin Karbasi; Andreas Krause)*: This was one of those talks where I would have to go back to look at the paper a bit more. The idea is that you want to train annotators to do better in a crowd system like Mechanical Turk — which examples should you give them to improve their performance? They model the learners as doing some multiplicative weights update. Under that model, the teacher has to optimize to pick a batch of examples to give to the learner. This is hard, so they use a submodular surrogate function and optimize over that.
- **Discrete Chebyshev Classifiers** *(Elad Eban; Elad Mezuman; Amir Globerson)*: This was an award-winner. The setup is that you have categorical (not numerical) features and you want to do some classification. They consider computing, for pairs of variables, the empirical pairwise marginals. If you want to create a rule for classification, you might want to pick one that has the best worst-case performance over all joint distributions that agree with those empirical marginals. This optimization looks hard because of the exponential number of variables, but they in fact show via convex duality and LP relaxations that it can be solved efficiently. To which I say: wow! More details are in the paper, but the proofs seem to be waiting for a journal version.
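The 1-shot averaging baseline from the Shamir-Srebro-Zhang discussion is easy to simulate (my own toy, with plain least squares standing in for a general stochastic optimizer):

```python
import numpy as np

d, n_per, machines = 5, 200, 10
w_true = np.random.default_rng(4).standard_normal(d)

def local_estimate(seed):
    """One machine: fit a linear model on its own n_per samples."""
    r = np.random.default_rng(seed)
    X = r.standard_normal((n_per, d))
    y = X @ w_true + r.standard_normal(n_per)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

estimates = np.array([local_estimate(s) for s in range(machines)])
w_avg = estimates.mean(axis=0)        # one round of communication

err_single = np.linalg.norm(estimates[0] - w_true)
err_avg = np.linalg.norm(w_avg - w_true)
```

Since least squares is unbiased here, averaging gives the full variance reduction; the talk's point is that for general stochastic optimization the bias term can spoil this, motivating their approximate Newton scheme.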
