Some thoughts on teaching signals and systems

I’m teaching Linear Systems and Signals [*] (ECE 345) this semester at Rutgers. The course has 260+ students overall, split between two sections; I am teaching one of them. This is my second time teaching it: last year I co-taught with Vishal Patel (who has decamped to Hopkins), and this semester I am co-teaching with Sophocles Orfanidis. I inherited a bit of a weird course: this is a 3-unit junior-level class with an associated 1-unit lab (ECE 347). Previous editions of the course had no recitations, which boggled my mind, since recitation was where I really learned the material when I took the course (6.003 at MIT, with Greg Wornell as my recitation instructor). How are you supposed to understand how to do all these transforms without seeing some examples?

So this year we have turned ECE 347 into a recitation and moved the coding/simulation part of the course into the homework assignments. Due to the vagaries of university bureaucracy, however, we still have to assign a separate grade for the recitation (née lab). Moreover, there are some students who took the class without the lab and now just need to take 347! It’s a real mess. Hopefully it’s just one year of transition, but this is also the year ABET [**] is showing up, so we’ll see how things go.

After surveying a wide variety of textbook options for the course, we decided to go with the brand-new and free book by Ulaby and Yagle, Signals and Systems: Theory and Applications [***]. I really have to commend them for doing a fantastic job and making the book free, which is significantly better than the $247 now charged for the same book I used literally 20 years ago when I took this course. Actually, we mainly used another book, whose title and author elude me now, but it had a green slipcover and was more focused on analog control (perhaps because Munther Dahleh was teaching).

One major difference I noticed between textbooks was the order of topics. Assuming you want to do convolution, Laplace (L), Z, Fourier Series (FS), and Fourier Transforms (FT), you can do a sort of back-and-forth between continuous time (CT) and discrete time (DT):

CT convolution, DT convolution, CTFS, DTFS, CTFT, DTFT, Laplace, Z
CT convolution, DT convolution, Laplace, Z, CTFS, DTFS, CTFT, DTFT

or do all of one and then the other:

CT convolution, Laplace, CTFS, CTFT, DT convolution, Z, DTFS, DTFT
DT convolution, Z, DTFS, DTFT, CT convolution, Laplace, CTFS, CTFT

I like the alternating version because it emphasizes the parallels between CT and DT, so if you cover sampling at the end you can kind of tie things together. But alternating tends to give students a bit of whiplash, so we are going for a compromise:

CT convolution, DT convolution, Laplace, Z, CTFS, CTFT, DTFS, DTFT
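(As a small example of those CT/DT parallels, and purely as a sketch of the kind of simulation exercise now folded into the homework: DT convolution is exactly polynomial multiplication of Z-transforms, which makes for a two-line numerical check. This is my own toy example, not one from the books above.)

    import numpy as np

    # Two short DT signals, viewed as coefficient lists of polynomials in z^{-1}:
    # x[n] <-> X(z) = 1 + 2 z^{-1} + 3 z^{-2},  h[n] <-> H(z) = 1 - z^{-1}
    x = np.array([1.0, 2.0, 3.0])
    h = np.array([1.0, -1.0])

    # Convolution in the time domain...
    y_time = np.convolve(x, h)

    # ...matches multiplication of the Z-transforms (polynomial multiplication).
    y_z = np.polymul(x, h)

    assert np.allclose(y_time, y_z)
    print(y_time)  # [ 1.  1.  1. -3.]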

It’s all a bit of an experiment, but the thing I find with all textbooks is that they are never as modular as one might like. That’s good for a book but maybe not as good for a collection of curricular units, which in the end is what an S & S [****] class is. CNX is one type of alternative, or maybe something like the interactive book that my colleague Roy Yates dreams of.

I find myself questioning my own choices of ordering and how to present things in the midst of teaching. It’s tempting to experiment mid-stream, but I have to tamp down the urge so that I don’t lose the class entirely.

 

[*] You can tell by the word ordering that it must have been a control theorist who named the course.

[**] Accreditation seems increasingly like a scam these days.

[***] You can tell by the word ordering where the sympathies of the authors lie.

[****] Hedging my bets here.


CFP: PPML Workshop at NIPS 2018

Privacy Preserving Machine Learning

NIPS 2018 Workshop

Montreal, December 8, 2018

Description

This one-day workshop focuses on privacy-preserving techniques for training, inference, and disclosure in large-scale data analysis, in both distributed and centralized settings. We have observed increasing interest from the ML community in leveraging cryptographic techniques such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE) for privacy-preserving training and inference, as well as Differential Privacy (DP) for disclosure. Simultaneously, the systems security and cryptography communities have proposed various secure frameworks for ML. We encourage both theory- and application-oriented submissions exploring a range of approaches, including:

  • secure multi-party computation techniques for ML
  • homomorphic encryption techniques for ML
  • hardware-based approaches to privacy preserving ML
  • centralized and decentralized protocols for learning on encrypted data
  • differential privacy: theory, applications, and implementations
  • statistical notions of privacy including relaxations of differential privacy
  • empirical and theoretical comparisons between different notions of privacy
  • trade-offs between privacy and utility
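To make the differential privacy items above concrete, here is a minimal sketch of the Laplace mechanism (my own illustration; the function name and parameters are assumptions for exposition, not from the workshop materials): a statistic is released with Laplace noise scaled to its sensitivity divided by epsilon.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=None):
        # Release `value` with epsilon-differential privacy by adding
        # Laplace noise with scale = sensitivity / epsilon.
        rng = rng or np.random.default_rng()
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: privately release the mean of n records in [0, 1].
    # Changing one record moves the mean by at most 1/n, so sensitivity = 1/n.
    data = np.random.default_rng(0).uniform(size=1000)
    release = laplace_mechanism(data.mean(), sensitivity=1.0 / len(data), epsilon=0.5)
    print(f"true mean: {data.mean():.4f}, private release: {release:.4f}")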

We think it will be very valuable to have a forum that unifies these different perspectives and starts a discussion about the relative merits of each approach. The workshop will also serve as a venue for networking among people from different communities interested in this problem, and will hopefully foster fruitful long-term collaborations.

Submission Instructions

Submissions in the form of extended abstracts must be at most 4 pages long (not including references) and adhere to the NIPS format. We do accept submissions of work recently published or currently under review. Submissions should be anonymized. The workshop will not have formal proceedings, but authors of accepted abstracts can choose to have a link to arXiv or a PDF published on the workshop webpage.

Program Committee

  • Pauline Anthonysamy (Google)
  • Borja de Balle Pigem (Amazon)
  • Keith Bonawitz (Google)
  • Emiliano de Cristofaro (University College London)
  • David Evans (University of Virginia)
  • Irene Giacomelli (University of Wisconsin-Madison)
  • Nadin Kokciyan (King’s College London)
  • Kim Laine (Microsoft Research)
  • Payman Mohassel (Visa Research)
  • Catuscia Palamidessi (Ecole Polytechnique & INRIA)
  • Mijung Park (Max Planck Institute for Intelligent Systems)
  • Benjamin Rubinstein (University of Melbourne)
  • Anand Sarwate (Rutgers University)
  • Philipp Schoppmann (HU Berlin)
  • Nigel Smart (KU Leuven)
  • Carmela Troncoso (EPFL)
  • Pinar Yolum (Utrecht University)
  • Samee Zahur (University of Virginia)

Organizers

  • Adria Gascon (Alan Turing Institute & Edinburgh)
  • Niki Kilbertus (MPI for Intelligent Systems & Cambridge)
  • Olya Ohrimenko (Microsoft Research)
  • Mariana Raykova (Yale)
  • Adrian Weller (Alan Turing Institute & Cambridge)