Berkeley EECS makes TA-ing more cushy

At the end of last semester I got a memo from our department trying to make TA-ing a more attractive prospect. In reality, a grad student in EECS is a TA here (or Graduate Student Instructor (GSI)) for one of three reasons: they're a first-year and don't have an advisor (yet), their advisor is low on funds or doesn't have funds for their particular project, or they're fulfilling their teaching requirement to graduate (one semester only). The difference between being paid as a TA and as a research assistant (Grad Student Researcher (GSR)) is significant: the union negotiates the pay scale for GSIs, and the University is not going to let salaries rise if it can help it. In some instances a student's advisor can bump up their salary to the GSR level. So now the department recommends:

  • If you are doing research the same semester you’re teaching, your advisor should give you a partial GSR to help out.
  • You can be appointed as a 100% GSR during winter break if you are around.
  • If your advisor can’t afford to pay you and you are GSI-ing to fulfill the graduation requirement, then the department will give you an unconstrained fellowship.

All in all, it seems like a much more pleasant deal, though how this will end up changing the dynamics of TA-ing is unclear. It also makes things much, much nicer in EECS than in other departments, which seems somehow unfair in the end. Why can’t all GSIs get better pay?


paper a day: The Byzantine Generals Problem

It’s more like “paper when I feel like it,” but whatever. This is a classic systems paper on decentralized agreement. Because the result is pretty general, I think there is something that could be done to connect it to information-theoretic studies of multi-way communication or maybe key agreement. That’s a bit hand-wavy, though.

The Byzantine Generals Problem
Leslie Lamport, Robert Shostak, and Marshall Pease
ACM Transactions on Programming Languages and Systems
Vol. 4 No. 3, July 1982, pp. 382–401.

The basic problem is pretty simple. There are n total generals, some of whom may be disloyal. A commanding general must send an order to n-1 lieutenant generals such that (a) all loyal lieutenants follow the same order and (b) if the commander is loyal then all loyal lieutenants follow the order sent by the commander. The paper looks at oral messages (unsigned messages, so a disloyal general can send any possible message) as well as signed messages; I’ll just talk about the former. A protocol for solving this problem is an algorithm for each general to execute, at the end of which all generals make a decision based on the information they have received.

The first result is that satisfying both conditions is impossible with m traitors unless n > 3m. This is easiest to see in the case of three generals, where you can go through all the cases. To extend it to general n, the insight is that if n = 3m, then m disloyal generals can collude to force indecision at m other generals. It’s kind of like having a devil on one shoulder and an angel on the other: if they are equally strong, you will not be able to make the right choice. This result can be extended to show that approximate agreement (in a certain sense) is also impossible.
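The heart of the three-general case can be made concrete. Under one hypothetical labeling of my own (not the paper’s notation), lieutenant 1 sees exactly the same messages in two scenarios that demand opposite actions:

```python
# Sketch of the n = 3, m = 1 impossibility argument. Labels and the
# helper below are illustrative, not from the paper.

def lieutenant1_view(from_commander, relayed_by_l2):
    """What lieutenant 1 observes: the commander's order plus what
    lieutenant 2 claims to have received."""
    return (from_commander, relayed_by_l2)

# Scenario A: the commander is the traitor; it sends "attack" to L1 and
# "retreat" to L2, and loyal L2 relays its order truthfully.
view_a = lieutenant1_view("attack", "retreat")

# Scenario B: the commander is loyal and sends "attack" to both, but
# traitorous L2 lies and relays "retreat".
view_b = lieutenant1_view("attack", "retreat")

# The views are identical, so L1 cannot act differently in the two
# cases. Yet condition (b) forces "attack" in scenario B, while in
# scenario A the symmetric argument at L2 forces "retreat", breaking
# condition (a). Hence no protocol exists for three generals.
assert view_a == view_b
```

The general n = 3m case reduces to this picture, with each “general” standing in for a group of m.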

On the positive side, for n > 3m there is a spreading/flooding algorithm that satisfies the conditions. The commander sends a message, and then each lieutenant general acts as the commander in a smaller instance of the algorithm to send the command to the remaining generals. This rebroadcasting protocol, while wasteful, guarantees that the loyal generals will all do the same thing. This algorithm can be extended to the case where the communication structure is a graph with certain “nice” structural properties.
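The recursion above is the paper’s OM(m) algorithm. Here is a minimal Python sketch of it, under a simplifying assumption of my own: traitors follow one fixed “two-faced” strategy (sending different orders to different recipients) rather than behaving arbitrarily. The function names are mine, not the paper’s.

```python
# A sketch of the oral-messages algorithm OM(m), assuming traitors use a
# fixed "two-faced" strategy. Real traitors may do anything; this is one
# adversarial behavior, enough to illustrate the recursion.

from collections import Counter

ATTACK, RETREAT = "attack", "retreat"

def om(m, commander, lieutenants, value, traitors):
    """Return the order each lieutenant in `lieutenants` settles on."""
    # A traitorous commander sends different orders to different
    # lieutenants; a loyal one sends `value` to everybody.
    sent = {}
    for i, lt in enumerate(lieutenants):
        if commander in traitors:
            sent[lt] = ATTACK if i % 2 == 0 else RETREAT
        else:
            sent[lt] = value
    if m == 0:
        return sent  # OM(0): each lieutenant uses the value received
    decided = {}
    for lt in lieutenants:
        # Every other lieutenant o rebroadcasts what it received by
        # acting as commander of OM(m-1); lt then takes the majority
        # of its own received value plus everything it heard.
        heard = [sent[lt]]
        for o in lieutenants:
            if o == lt:
                continue
            sub = om(m - 1, o, [x for x in lieutenants if x != o],
                     sent[o], traitors)
            heard.append(sub[lt])
        decided[lt] = Counter(heard).most_common(1)[0][0]
    return decided
```

With n = 4 and m = 1, both conditions hold in this simulation: a loyal commander (general 0) with traitor lieutenant 3 still gets lieutenants 1 and 2 to follow its order, and a two-faced traitor commander still leaves the three loyal lieutenants in agreement with each other.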

To me, the interesting part really is the converse, since it says something about symmetrizing the distribution. The same phenomenon occurs in the arbitrarily varying channel, where the capacity is zero if a jammer can simulate a valid codeword. In this analogy, the encoder is one group of m generals, the decoder another group of m, and the jammer is a coalition of m disloyal generals.

There’s also a nice bit where they lay the smack down: “This argument may appear convincing, but we strongly advise the reader to be very suspicious of such nonrigorous reasoning. Although this result is indeed correct, we have seen equally plausible “proofs” of invalid results. We know of no area in computer science or mathematics in which informal reasoning is more likely to lead to errors than in the study of this type of algorithm.”