Funkcialaj Ekvacioj

While poking around for some references today I took a detour through the Mathematical Reviews. I had heard that Rota had written some zingers, and one of the first of his reviews that I saw, of a paper on the Riemann-Liouville integral in the complex field, contained the following comment:

Many of the results are essentially known, but this seems the best treatment in the literature, which would well justify a previous brief study of Esperanto for the interested reader.

A quick investigation shows that the paper is indeed in Esperanto!

The journal is Funkcialaj Ekvacioj, which presumably means “functional equations” in Esperanto. The entire editorial board is Japanese, and since its inception most of the contributing authors have been Japanese. The journal has been going since 1958, and they have full archives online (vol. 1-40 are free). At the beginning, they had articles in English, French, and Esperanto, with the abstract (if necessary) translated into Esperanto. That stopped as early as the 1970s (more’s the pity), and articles are just written in English, but the journal’s title remains in Esperanto.

This really made my day — maybe I should translate an article into Esperanto… except that I’d have to learn Esperanto first.

LaTeX figure repository

I’m pretty picky about figures for LaTeX documents now, and I hate having to make new figures from scratch since it takes forever. So a little while ago I got this free wiki called ThousandWords to hold figures that I had made and make them (along with relevant code) available to others. At the moment it’s pretty sparse — just a few things here and there that I put up — but it would be great to have more. In particular, right now it’s all information theory, signal processing, and networking, and there’s no reason (beyond disk space) that it can’t be more diverse.

So if any readers of this blog want to put up figures of their own for public use, let me know!

“The traveller must, of course, always be cautious of the overly broad generalisation”

George Saunders visits the UK.

But I am an American, and a paucity of data does not stop me from making sweeping, vague, conceptual statements and, if necessary, following these statements up with troops.

Furthermore, I feel confident that the discovery, by my countrymen, of the unique British delicacy called “fish and chips” would put an end to American obesity for ever.

Symphony Chorus again

I’m back in the Symphony Chorus! My reaudition was a bit harrowing, but I guess it worked out ok. I’m currently slated to sing:

  • A Mozart Journey (Michael Tilson Thomas [conductor], Jeremy Denk [piano])

    Steve Reich : Variations for Orchestra (Thursday, Saturday only)
    W.A. Mozart : A Mozart Journey (all-Mozart program)

  • John Adams’ A Flowering Tree [world premiere] (John Adams [conductor],
    Russell Thomas [tenor], Eric Owens [bass])

    John Adams : A Flowering Tree (SFS co-commission, US premiere)

    • Thursday, Mar. 1 at 8pm
    • Friday, Mar. 2 at 8pm
    • Saturday, Mar. 3 at 8pm

  • MTT and Richard Stoltzman (Michael Tilson Thomas [conductor],
    Richard Stoltzman [clarinet])

    Stravinsky : Symphonies of Wind Instruments
    Stravinsky : Apollon musagète
    Takemitsu : Fantasma Cantos
    Stravinsky : Symphony of Psalms

    • Wednesday, April 11 at 8pm
    • Friday, April 13 at 8pm
    • Saturday, April 14 at 8pm

conference selectivity, publication delay, and a proposal

I should preface this post by warning any readers that these thoughts are pretty rambling and tentative. I don’t fully understand all of these issues, but they do interest me, so I find it helpful to write about them. I’m not even certain that my proposed idea here would work, but there seem to be some problems with the status quo that need to be addressed.

A recurrent theme in many conversations I had at ISIT was about the nature and purpose of the conference. Part of this concern stems from the stressful nature of the job market, and some from changes in the culture of the field. With regard to the former, the less selective nature of ISIT and electrical engineering conferences in general makes it difficult to evaluate the quality of a graduate student’s work without being familiar with the area. Because of the incredibly long delay in journal publication, there may be no objective stamp of approval on the work by the time the student graduates. The underlying problem is that information theory is expanding and changing rapidly, and the publication and conference infrastructure hasn’t adapted to deal with the higher pace and volume of research. A lot of different arguments and proposals were thrown out there, but one that seemed to garner some interest was the foundation of a new journal for smaller results — a new Information Processing Letters, if you will.

Several people complained that ISIT was too big and that there were too many “less interesting” results. The trite answers are that the field is pretty big and that what is interesting to specialists may not be interesting to the general information theory community. If we take a comparative view, another model to look at is that of CS theory, where there are two or three very selective conferences — FOCS, STOC, and SODA. The acceptance rates for these conferences are quite low, they are single track, and all of the results presented are therefore “interesting.” Adopting such a system off the shelf misses the point of a conference like ISIT, I think.

ISIT serves several different functions that I know about. The first is to take a snapshot of the field and its progress and capture it in the proceedings. Prior to last year, the proceedings contained only a single-page extended abstract summarizing the results; the original intent was that those results which were big enough would get sent to the Transactions. Now the proceedings are on CD-ROM and contain the full (5-page) papers, so many of them are (mostly) fleshed-out short papers. Another function is to raise sufficient funds via registration fees to pay for the Transactions and the operation of the Information Theory Society. Finally, the conference is supposed to facilitate contact between different research subcommunities, e.g. to get the coding people talking to the Shannon theorists.

The first fundamental problem is that information theory has expanded as a research field — there were 4+ sessions on Network Coding alone, and those problems didn’t even exist 10 years ago. As someone told me, the flavor of the conference has shifted slowly over the years, and the incremental changes are hard to detect. The net effect is that the Transactions have become enormous and bulky — many classic papers from years ago were 10 pages, but now, I’d bet, it’s rare to see a paper under 15-20 pages. The conference has also become enormous and bulky, with so many parallel sessions that it’s hard to even get an idea of what is going on in the field. Is this because too many papers are accepted? Perhaps. I certainly wouldn’t mind if there were about 25% fewer papers there, just so that I could see a greater percentage of what was going on. Greater selectivity would mean higher quality as well, but there are costs to that. For example, next year the conference is in Nice, and I doubt many grad students in the US would be able to just go there unless they had a paper. A lower acceptance rate would also impact the fundraising aspects.

What about the positive benefits of a low acceptance rate? Some people have argued that a low acceptance rate provides a skewed but somewhat objective measure of the quality of research done by a graduate student. That is, four solid papers at “good conferences” (in the CS sense) mean that the student has developed a solid research program. This prestige factor doesn’t hold in EE and information theory because the prestige is with journal publications, not conferences. But as I noted earlier, journals take forever to publish papers, and the published papers tend to be quite long (and not because they have lots of figures). So shifting some of the burden of prestige to conferences might be an improvement.

The second fundamental problem is that the pace of research is speeding up, while the venues for thoroughly peer-reviewed research are not keeping up. While all papers at ISIT are reviewed, the reviewing is somewhat uneven (I had very few reviews of my paper, while others had many). Since conferences are more about sharing the results of investigations than about presenting the most interesting new results in the field, it’s hard to separate the ideas that are going somewhere from the dead ends, both of which may be interesting in their own right.

One solution that might address both of these problems (the increased pace of research and the increased number of people) would be to have more venues for journal publication. A new journal with thorough peer review and editing but an emphasis on smaller results would possibly alleviate the burden on the Transactions and speed up the pace of journal publication. The conferences could still be big, but conference presenters with more or less polished end results could then journalize those rather than waiting for them to be included in a larger paper sent to the Transactions. It’s worth thinking about, in addition to pruning down the size of ISIT a bit to avoid having 12 parallel sessions 5 years from now.

ISIT 2006 : day five

The burnout was hitting hard today, so I only saw a few talks, and some of those I can’t really say anything about since I missed parts by being late. Overall I had a good time, but it was a draining experience. I noticed that much of the non-student crowd spent a significant fraction of their time chatting and not in sessions, which strikes me as a wise strategy. Another wise strategy is to go to the talk with the best speaker in a given slot unless you are particularly invested in the problem being presented in another slot. I went to several talks in which I hoped to learn something about a new problem and its context and often came out quite confused. Information theory is wide enough that it’s hard to understand the motivation for a lot of research, and 20 minutes isn’t enough time to give that motivation for some problems, so it’s not necessarily the speaker’s fault. Still, a good speaker can get a few bits of understanding across where a less experienced speaker might be prone to missteps.

Plenary Talk :

Prof. Geman gave an interesting overview of the current state of computer vision research and provided a framework and perspective that I had lacked. Since he ran out of time, I never really got a good feeling for the technical part of the talk, which apparently had a connection to the liar game and other fun problems, but it did drive home the point that one has to be quite careful about applying information theory ideas willy-nilly in other areas.

  • Cognitive Radio: An Information-Theoretic Perspective
    (Aleksandar Jovicic, Pramod Viswanath)

    This is a modified interference channel in which one user (the secondary) knows the message of the first user (the primary) acausally. I don’t really know how to justify this assumption, but given that, Jovicic and Viswanath solve this channel in the case where the interference-to-noise ratio (INR) is less than the SNR. The scheme uses dirty paper coding and so on. (I’ve sketched my rough recollection of the model after this list.)

  • On Jamming in the Wideband Regime (Siddharth Ray, Pierre Moulin, Muriel Medard)

    This takes the mutual-information game approach to the jamming problem — the jammer and the transmitter each choose an input distribution, and the transmitter wishes to maximize, while the jammer wishes to minimize, the resulting mutual information (the textbook version of the game is written out after this list). In the case of the wideband limit for Gaussian channels, the capacity-achieving bursty transmission scheme must be modified using randomization. The problem I had with this talk is that I’m not sure I understood the model correctly — how does bursty transmission fit with the inherently iid mutual-information game? To me, the danger (cf. Ahlswede) is that we always view transmission and information problems through the Shannon lens, and so the assumptions made in the channel modeling are sometimes confusing. If nothing else, my study of AVCs has taught me that there are many ways to skin a cat/problem, so I need to understand this model better.

  • Strong Consistency of the Good-Turing Estimator
    (Aaron B. Wagner, Pramod Viswanath, Sanjeev R. Kulkarni)

    The Good-Turing Estimator is a method for estimating a probability distribution without knowing the alphabet size ahead of time. For example, suppose we went on a safari and recorded the number of times we saw different animals. We don’t know how many animals there are in total, since there are some we may not have seen, but we do have an empirical measure of the animals we have seen. What is the best way of inferring the underlying probability distribution from the data, assuming they are drawn iid from that distribution? This estimator assigns a total weight to all elements that occurred k times and then evenly divides that mass among those elements (a small Python sketch of the basic estimator appears after this list). This has been proven to be unbiased, and the pre-division step is consistent. The division step can be modified to make it consistent. The interesting technique is to model the underlying data as a mixture of Poissons with a “shadow” mixing distribution. It was a pretty cool analysis, and one which I wish I had thought of, since I had tried to learn about this estimator earlier. Oh well, there are lots of good ideas in the sea…

  • Oblivious channels (Michael Langberg)

    Michael Langberg was one of the few people I didn’t know who came to my talk. I had just heard about his work before coming to ISIT, so I was pretty interested in going to his talk. Unfortunately I missed the first 3 minutes, so I hope that what I understood from it wasn’t compromised. The problem he considers looks very much like an AVC with a state constraint — a binary channel in which the channel can flip any p-fraction of the bits. The key contribution of his work is to use a new concentration inequality of Vu to derive the capacity result in a more constructive way. The approach was more from a coding theory perspective, where we want to construct a code with minimum distance 2pn to avoid any errors. There’s a longer FOCS paper related to this in addition to the ISIT paper, so I have some work cut out for me.

  • Rate-Constrained Beamforming for Collaborating Hearing Aids
    (Olivier Roy, Martin Vetterli)

    This was an interesting model problem in which there are two hearing aids, one in each ear, and one can transmit to the other to do some beamforming and noise cancellation. The problem is formulated as a unidirectional source-channel communication problem with side information at the decoder, and a rate-distortion tradeoff is derived based on a spatial model of the head. It was a neat use of multi-terminal source-coding ideas in a real application, but I think some more complicated models should be considered (a lossy version of Orlitsky’s problem?) before trying to commit this to an architecture.
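
A note on the cognitive radio model, from memory and in my own notation rather than the paper’s, so take it as a sketch: the primary and secondary receivers see something like

    Y_p = X_p + a X_s + Z_p, \qquad Y_s = b X_p + X_s + Z_s,

with the secondary encoder knowing the primary’s message acausally. As I understand it, in the regime covered by the talk the secondary spends part of its power relaying the primary’s message and dirty-paper codes its own message against the interference it already knows.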
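
A note on the jamming game, in its textbook form (which may not be exactly the formulation in the Ray-Moulin-Medard paper): the transmitter and jammer play a max-min over input distributions,

    C = \max_{P_X} \min_{P_J} I(X; X + J + Z),

subject to power constraints on X and J. For the standard Gaussian channel, Gaussian input and Gaussian jamming form a saddle point, and my confusion above is about how bursty signaling in the wideband regime interacts with this single-letter picture.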
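
And since the Good-Turing estimator keeps coming up, here is a minimal Python sketch of the basic (unsmoothed) version. This is the textbook estimator, not the modified, provably consistent one from the Wagner-Viswanath-Kulkarni paper, and the safari data below is made up:

    # Basic Good-Turing: the total probability of all species seen exactly k
    # times is estimated as (k+1) * N_{k+1} / n, where N_k is the number of
    # species seen k times and n is the number of samples; that mass is then
    # split evenly among those species.
    from collections import Counter

    def good_turing(samples):
        counts = Counter(samples)                   # times each species was seen
        n = sum(counts.values())
        freq_of_freqs = Counter(counts.values())    # N_k
        probs = {}
        for species, k in counts.items():
            mass_k = (k + 1) * freq_of_freqs.get(k + 1, 0) / n
            probs[species] = mass_k / freq_of_freqs[k]
        # total mass reserved for species never seen at all
        missing_mass = freq_of_freqs.get(1, 0) / n
        return probs, missing_mass

    # toy safari: 3 zebras, 2 lions, 1 gnu, 1 kudu
    probs, p0 = good_turing(["zebra"] * 3 + ["lion"] * 2 + ["gnu", "kudu"])
    print(probs, "unseen mass:", p0)
    # note: the raw estimator gives the most-seen species probability zero when
    # N_{k+1} = 0 (the zebra here), one reason practical versions smooth the N_k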

ISIT 2006 : day four

Plenary Talk : Towards a General Theory of Information Transfer
Rudolf Ahlswede (University of Bielefeld)

I really enjoyed this talk, mainly because I’ve spent a lot of time in the last year reading some of Ahlswede’s papers and trying to understand them. He was proposing a few new ways of thinking about the questions of information transmission in light of knowledge that may be present at the receiver and so on. For example, for Identification Channels, the question that a receiver is interested in is “did event i happen?” rather than “what event happened?” If the encoder doesn’t know what the decoder wants a-priori, then there is a different coding problem setup. The lecture felt a bit disorganized, but it had several quotable moments, such as “if you really want find a solution for the problem and not just turbo turbo turbo fountain…” and “you see, like coding theory has run off to the computer people…” The main lesson I got (as a graduate student) out of the lecture was that one should be more courageous and try to really pose problems from a philosophical standpoint, like Shannon did, and not be beholden to the models proposed before.
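
For context (this is my recollection of the Ahlswede-Dueck identification result, not something from the talk itself): in identification via channels the number of events that can be reliably identified grows doubly exponentially in the blocklength,

    N \approx 2^{2^{nC}},

where C is the ordinary Shannon capacity of the channel. That gap between singly and doubly exponential growth is part of why “did event i happen?” is such a different question from “what event happened?”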

  • On the Discrete Memoryless Relay Channel with Relay-Transmitter Feedback (Yacov Gabbai, Shraga I. Bross)

    This was a modified block-Markov scheme for exploiting additional feedback for the relay channel. The decoder is a joint typicality decoder, and the interesting thing to me at least was the definition of the auxiliary random variables and their interpretation. I assume that this will appear in a journalized version somewhere, but in the meantime I’ll try to see what the real intuition is behind the construction.

  • The Impact of Link Layer Model on the Capacity of a Random Ad hoc Network (Vivek Mhatre, Catherine Rosenberg)

    The main point of this work was that if you assume that two nodes in the Gupta-Kumar model have an error probability associated with communicating a packet, rather than an all-or-nothing SNR threshold, then you can come up with a new scaling law. The title seems a bit misleading, but the modeling idea seemed pretty interesting.

  • The Effect of Node Density and Propagation Model on Throughput Scaling of Wireless Networks (Enrique Duarte-Melo, Awlok Josan, Mingyan Liu, David Neuhoff, Sandeep Pradhan)

    The model here is a (1 + d)^(-α) path loss with a network on a disc whose radius grows like n^γ, where n is the number of sensors. In the Gupta-Kumar-esque sensor network model we can derive a throughput scaling for this kind of network, which requires some graph coloring arguments. The end result is a scaling that depends on γ and matches the existing results closely. Since γ can be viewed as a density parameter, this captures one more tradeoff in wireless networks.

  • Fountain Capacity (Shlomo Shamai, Emre Telatar, Sergio Verdu)

    Since fountain codes have actually been incorporated into a new standard for wireless multimedia broadcast, it’s important to see how our models of communication must be modified in light of these new codes. In some sense this is a circular process — information theorists take a physical model of a channel and then come up with a capacity and methods for achieving this capacity. These coding methods are then adopted as standards, necessitating a change in the model of communication. A fountain channel is one in which we map a codeword into an infinite string of symbols. The channel gets to choose which n of these symbols the decoder can see. It’s related to an arbitrarily varying channel, and randomized codes do help make the capacity nonzero (and equal to the Shannon capacity). They show some interesting results for channels with memory in which the fountain capacity is not equal to the Shannon capacity. I really need to understand this paper better, but at a first pass it seems really interesting.

  • The Arbitrarily Varying Degraded Broadcast Channel with States Known at the Encoder (Amir Winshtok, Yossef Steinberg)

    This is a modification of Ahlswede’s classic method for communicating over a “robust” Gelfand-Pinsker channel (or maybe an AV-GPC?). The authors use a robustification and elimination technique but have to get around some hairy technical issues. I thought Amir did a good job of elucidating those issues, but I have also read most of the papers he cited, so I’m biased.

  • Randomization bounds for Gaussian arbitrarily varying channels
    (Anand Sarwate, Michael Gastpar)

    This was my talk — it was in the smallest room at the conference and a plurality of the people there were from Berkeley. I ended up feeling pretty nervous and kind of sped through things, but overall I think people liked it ok. I tried to spice it up a bit by using Koosh balls as props, which at least got a chuckle from the audience.

  • Error Exponent in AVS Coding (Ashot Harutyunyan, Han Vinck)

    The main result here is taking the source-coding game of Berger and calculating the error exponents. I really like the Berger paper, but I don’t get an intuition for the exponents shown in this paper. I suppose at some point some maxes and mins over KL divergences become intuitive, but at the moment I’m still at a loss.

  • A Synchronization Technique for Array-based LDPC Codes in Channels With Varying Sampling Rate (Lara Dolecek, Venkat Anantharam)

    Most coding models assume that the decoding circuit has perfect timing recovery, so that the output waveform is sampled in such a way that there is no timing jitter. Since this is clearly not realistic, codes which are robust to this jitter are needed. The jitter can mean that one symbol is sampled twice (insertion) or that one is missed (deletion). This paper presented a new model for these kinds of errors along with a modified LDPC construction and message-passing decoding algorithm that can correct for them. This would result in big savings in space and power consumption for these circuits, or so I am given to understand. Again, as with the fountain codes, we have a model that is kicked from the implementation level back to the theory level, which is pretty cool.

  • Coalitional Games in Gaussian Interference Channels (Suhas Mathur, Lalitha Sankaranarayanan, Narayan Mandayam)

    Coalitional games are games in which players may or may not cooperate in order to achieve a certain advantage. In this paper they look at multiuser interference channels in which some transmitter/receiver pairs can choose to join up, making a MIMO channel. If the rate allocated to a coalition is divided evenly between the members, then all the transmitters/receivers will form one big coalition. But if the rates can’t be divided up evenly, then you get a more interesting structure. At the end of the talk there were some geometric examples which brought the point home a bit. As with all game theory, I found it a bit mystifying, but I thought it was an interesting take on the analysis of large network interference problems.

  • Is interference like noise when you know its codebook?
    (Rahul Tandra, Anant Sahai)

    The idea here was that if you have an interferer whose codebook is known to you and you can decode its message, you can effectively have an interference-free channel. However, if the receiver’s INR is too low to decode the interference, is knowledge of the codebook helpful? The answer seems to be no in a couple of situations (e.g. if the interferer uses a Gaussian codebook); what happens if the interferer uses an arbitrary capacity-achieving code is left as an open question. It’s an interesting idea, and the proof (by contradiction using the MAC) is another one of those nice reduction techniques.

The banquet was nice too, but we spent more time getting there and back than actually enjoying the dinner and the island. The big news was that Sergio Verdu won the Shannon Award, which was a bit surprising to me, since I figured they usually award it to someone older. My uneducated guess had been that Te Sun Han would get it, but then again, what do I know? Verdu’s a great speaker though, so hopefully I’ll get to go to Nice next year and hear his lecture.

ISIT 2006 : day three

It was a half day today, so the pressure wasn’t on as much. It’s nice to have a break in the middle of the conference, because after the second day the fatigue really begins to set in. I skipped the plenary (by Brendan Frey) to sleep in, since I had seen some of that talk when he came to Berkeley. A lot of the talks I went to today were by lab mates and friends, so in a sense I wasn’t expanding my horizons much. The formality of presenting a problem at a conference makes it easier to understand, however, since the presenter has to contextualize more.

  • Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem (Aaron B. Wagner, Saurabha Tavildar, Pramod Viswanath)

    I had seen this result before, but it’s always nice to see it presented again. They solve an example of a classic open problem. Two encoders observe correlated sources S1 and S2 and need to compress them in order to send them to a decoder. The decoder has MSE requirements D1 and D2, one for each of the sources. The first idea for solving this problem is for the encoders to quantize their observations and then use Slepian-Wolf coding on the quantized observations. The result of this paper is to show that this is in fact the best one can do. The tough part is the converse, for which they use two bounds — one that assumes a centralized encoder, and one that connects the problem to the CEO problem, for which Aaron proved a converse in his PhD thesis. It’s a neat result because of its use of reductions, I think.

  • Guessing Facets: Polytope Structure and Improved LP Decoder (Alexandros Dimakis, Martin Wainwright)

    I don’t know much about coding theory, but apparently in the decoding of LDPC (low-density parity-check) codes, there is a technique called “LP decoding,” which is a linear programming solution to the decoding problem. LP decoding operates on something called the fundamental polytope, which is constructed from the code in an interesting way. Since codewords are binary vectors, we embed the code in n-dimensional Euclidean space. For each parity check, we look at all possible binary vectors satisfying the parity check (obviously the codewords are such vectors) and take the convex hull of those vectors. We then intersect all of these hulls to get a new polytope which has all codewords as vertices but also some new vertices called “pseudocodewords.” The problem is LP decoding fails when it returns a pseudocodeword. The key fact is that the polytope has very few facets, so if we restrict the decoding to a facet we can find the best candidate on a facet pretty fast. By guessing enough facets we can find the true codeword with high probability by running only a constant number of linear programs.

  • Computing over Multiple-Access Channels with Connections to Wireless Network Coding (Bobak Nazer, Michael Gastpar)

    This is a new look at network coding with real multiple-access constraints. If you want to run network coding over a network which doesn’t have wired links, so that there are multiple-access channels, current strategies are wasteful. In particular, if X and Y both need to be sent to a node, which will send some function f(X,Y) on its outgoing link, the encoders can do some clever coding to ensure that f(X,Y) can be decoded without requiring X and Y to be decoded individually. Bobak has a new coding technique for these kinds of channels, and he gave some examples (a toy illustration of why linear codes fit this problem is included after this list).

  • Optimal Distortion-Power Tradeoffs in Gaussian Sensor Networks (Nan Liu, Sennur Ulukus)

    Liu and Ulukus consider a network in which the sensors regularly sample an underlying spatially-correlated stochastic process and need to communicate their observations to a central processor which wishes to reconstruct the process. Each sensor has a radio and so can hear the transmissions of the other sensors. They derive some scaling-law bounds on the tradeoff between power and distortion. One bound comes from separating source and channel coding, and the other allows for centralized communication. Since there’s no noise in the sensor observations, it’s hard to evaluate the accuracy of the bounds, but the interesting point is that the two solutions are similar in a scaling-law sense, so the distributedness doesn’t seem to affect the scaling law in these scenarios. I found the mathematical assumptions needed to prove this result a little confusing, since I didn’t have any intuition for them, but I think I’ll take a look at the paper to see the details.

  • Analog Matching of Colored Sources to Colored Channels (Yuval Kochman, Ram Zamir)

    For colored Gaussian sources, a whitening filter followed by white Gaussian vector quantization is not optimal (somewhat unsurprising, but new to me). Instead they propose an architecture which puts the quantization in the same loop as the filtering. For a colored noise channel the system is a little more delicate. This has something to do with source-channel matching, but I need to understand the architecture a bit better first, I think.
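
Following up on the LP decoding talk: below is a minimal sketch of the basic relaxation (Feldman’s fundamental-polytope LP, as I understand it), not the facet-guessing improvement from the Dimakis-Wainwright paper. The (7,4) Hamming code, the BSC crossover probability, and the received word in the toy example are my own choices:

    # Basic LP decoding: minimize the LLR-weighted sum of bit values over the
    # fundamental polytope.  For each check and each odd-sized subset S of its
    # neighborhood we add the constraint
    #     sum_{i in S} f_i - sum_{i in N(c)\S} f_i <= |S| - 1.
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        m, n = H.shape
        A, b = [], []
        for j in range(m):
            nbrs = np.flatnonzero(H[j])
            for size in range(1, len(nbrs) + 1, 2):      # odd subset sizes
                for S in itertools.combinations(nbrs, size):
                    row = np.zeros(n)
                    row[nbrs] = -1.0
                    row[list(S)] = 1.0
                    A.append(row)
                    b.append(size - 1)
        res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x  # integral => the ML codeword; fractional => pseudocodeword

    # (7,4) Hamming code, all-zero codeword sent over a BSC(0.1), one bit flipped
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    y = np.array([0, 0, 0, 0, 0, 1, 0])
    p = 0.1
    llr = np.where(y == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
    print(np.round(lp_decode(H, llr), 3))    # recovers the all-zero codeword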
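
And on computing over multiple-access channels: here is my own toy illustration (not the actual Nazer-Gastpar construction) of why linear codes are natural for this. If both users encode with the same linear code and the channel is a noiseless mod-2 adder, the channel output is itself a codeword (the encoding of the XOR of the two messages), so the receiver can decode f(X,Y) = X xor Y without recovering X and Y separately:

    # Two users encode with the same (7,4) Hamming code over a noiseless
    # mod-2 adder MAC.  By linearity, the sum of the codewords is the codeword
    # for the XOR of the messages, so the receiver can decode the function
    # directly at single-user cost.
    import numpy as np

    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def encode(u):
        return u @ G % 2

    u1 = np.array([1, 0, 1, 1])                       # user 1's message
    u2 = np.array([0, 1, 1, 0])                       # user 2's message
    channel_output = (encode(u1) + encode(u2)) % 2    # mod-2 adder MAC
    assert np.array_equal(channel_output, encode((u1 + u2) % 2))
    print("channel output encodes the XOR of the messages:", channel_output)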

ISIT 2006 : day two

Today there were a lot fewer talks that I was really interested in. There was an invited session on neural information theory that I went to in the morning, and then I spent some time revising my slides for Thursday.

Plenary : Exciting directions in Algebraic Coding Theory (Alex Vardy)

Alex Vardy’s a great speaker, and he gave a really clear talk about coding theory and where it is going. He brought out some exciting open problems in classical coding theory that have deep connections to other areas of mathematics, a good overview of list decoding for Reed-Solomon codes. I think it was videotaped — I hope they put the video online sometime, since I missed the first 10 minutes.

  • Universal Quantile Estimation with Feedback in the Communication-Constrained Setting (Ram Rajagopal, Martin Wainwright, Pravin Varaiya)

    This is a distributed estimation setting — a bunch of different sensors observe iid copies of some random variable X and want to let a central processor compute a quantile of the source. They can each send one bit per round to the central processor, and the central processor can feed back one bit to all of the sensors. Ram proposed two schemes and showed that they are both order-optimal when measured against a centralized estimator which has access to all of the sensors’ observations. It’s pretty surprising, actually.

  • Achieving the Gaussian Rate-Distortion Function by Prediction (Ram Zamir, Yuval Kochman, Uri Erez)

    If you have a stationary Gaussian process, one way of compressing it is to first filter it so that it looks iid (this is called whitening) and then compress the whitened source (called the innovations). Unfortunately, this is strictly suboptimal, so prefiltering and then quantizing is bad. If you instead put the quantizer inside the prediction loop (as in DPCM), you can trim out the fat and actually achieve the true rate-distortion function (a toy sketch of the quantizer-in-the-loop idea appears at the end of this post). I didn’t really know about this problem, but it seems pretty cool to me.

  • Awards Banquet

    The joint IT/Comm paper award went to the DUDe paper on discrete universal denoising. My only response is that “the DUDe abides.” The IT paper award went to Orlitsky et al.’s work on compressing sources whose alphabet size is unknown, which has connections to Good-Turing estimation and is pretty darn cool.

  • Covering spheres and balls with smaller balls (Ilya Dumer)

    I don’t have much to say here, but the result is a tightening of older bounds on how many spherical caps you can put onto the sphere such that you cover the whole sphere. It’s kind of up my alley and the title is pretty funny.

  • Rényi to Rényi — Source Coding under Siege (Michael Baer)

    Another fun title! The problem here is pretty interesting — imagine you are under siege and a spy returns from the field but is so exhausted that he will die shortly. You can ask only yes or no questions to the spy and you need to find out a piece of information before he dies. It’s like 20 questions with a time limit — you need to maximize the chance that you get the answer out. Another example is if you have a connection to a base station that has an unknown time duration and you need to communicate a non-uniform source to maximize the chance of getting the full message across before the connection dies. Baer shows a connection to Huffman coding and gets some improved bounds.

  • Information rates subjected to state masking (Neri Merhav, Shlomo Shamai (Shitz))

    This is like dirty paper coding, but slightly different. Here there is a channel with a state known acausally at the encoder, but the encoder has two objectives: communicate at a rate R with the decoder and minimize the ability of the decoder to infer the channel state. The latter is done via the equivocation function. It’s a pretty cool setup, I think, and they solve the Gaussian case completely. I have to read the paper to get the full insights, but it definitely seems relevant to things that I’m interested in and working on.
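
A rough statement of the state-masking tradeoff as I understand it (my paraphrase, not notation lifted from the paper): the encoder, which knows the state sequence, wants to achieve any pair (R, E) where R is the communication rate and E is a lower bound on the decoder’s equivocation about the state,

    E \le \liminf_{n \to \infty} \frac{1}{n} H(S^n \mid Y^n),

so a larger E means the decoder learns less about the state while still decoding the message at rate R.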
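
Finally, going back to the Zamir-Kochman-Erez talk on achieving the rate-distortion function by prediction: here is a toy DPCM-style sketch in Python. It is only an illustration of the general quantizer-in-the-prediction-loop idea, not their actual scheme, and the AR(1) source, prediction coefficient, and quantizer step size are made up:

    # DPCM-style predictive quantization: the quantizer sits inside the
    # prediction loop, so each sample is predicted from the *quantized* past
    # and only the prediction error (the innovation) is quantized.
    import numpy as np

    def dpcm(x, a, step):
        xhat = np.zeros_like(x)
        prev = 0.0
        for i, sample in enumerate(x):
            pred = a * prev                     # predict from quantized past
            err = sample - pred                 # innovation
            qerr = step * np.round(err / step)  # uniform scalar quantizer
            xhat[i] = pred + qerr               # decoder forms the same value
            prev = xhat[i]
        return xhat

    # toy Gauss-Markov source with correlation coefficient a
    rng = np.random.default_rng(0)
    a, n = 0.9, 10000
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + rng.normal()
    xhat = dpcm(x, a, step=0.5)
    print("mean squared error:", np.mean((x - xhat) ** 2))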