Why use the LMS for linear systems?

It’s been a bit of a whirlwind since the last post, but I made my course website and “published” it. Rutgers has basically forced all courses into its preferred “Learning Management System” (LMS), Canvas. Even the term LMS has some weird connotations: is it a management system for learning, or a system for managing learning? A system for students to (barely) manage to learn? Canvas in particular seems terrible for anything math-related (one semester the entire LaTeX rendering engine crashed with no notice) or engineering-related, and in general the whole question-management system is garbage.

So if Canvas is so awful, shouldn’t I use something else? Maybe. It helps to imagine (with some dramatic liberties) the evolution of the course website:

  • Everything’s on paper. There’s a book or lecture notes/a reader you buy from the bookstore or copy shop, assignments are physical handouts only (photocopies or dittos or something). Every class works this way. Scores for assignments have to be manually associated with students.
  • Same book or lecture notes/reader but homework files are on the web (or ftp or gopher maybe) in .ps (or later .pdf). A course website is some hand-coded HTML (like my current homepage!) Students can maybe pick up a printout or print it themselves in a computer lab or at home. We can call this the “bag of PDFs” model. Scores for assignments have to be manually associated with students.
  • The website design is now centrally controlled, via a template or some such, and the book/notes are in .pdf, but many students still print things out because lugging a laptop around feels annoying. Maybe a prettier bag of PDFs.
  • The dawn of the LMS: student rosters can get associated into a system where you can deposit your bag of PDFs and then organize them in some pre-specified way. Grades for assignments can be manually associated with students in the system and then you can submit them automagically!
  • The creep of the LMS: you can make (simple) quizzes/assignments that are auto-graded, make your site look pretty, maybe embed some videos and provide other content, and generally automate some aspects of your class. To take advantage of “features,” though, you have to change your class to fit the tools. That can be appealing because some features help students learn better (at least according to some research), and it’s an opportunity to try new things.
  • Late-stage capitalism LMS: universities “mandate” that faculty use a particular LMS, and many faculty comply. Every year new edtech companies show up trying to get you to use their software (and sometimes steal student data). Some might be grifts, others are not. They are heavily marketed, and different ones are pushed by different teaching and learning centers. Many of them require you to change your teaching to fit the tool because they are one-size-fits-all: each claims to work for all fields and all types of classes!

So why am I still using Canvas? The main reason is that it benefits the students, not because it is good, but because they have been using it for 2 years and they are used to it. They can see all their due dates on a single dashboard. If you are reading this and scoffing, think about how terrible and non-interoperable almost every calendar system is. If you say “you’re just bowing to peer pressure” you’re more or less right. If you say “but it builds character for students to have to manage their due dates” then I’d ask if the goal of your class is to teach the material or to teach time management. If you say “both” then does your class explicitly teach students time management skills? I’m guessing not.

In this new class format, there are 28 class sessions. In 26 of them there is a conceptual quiz students have to take before class as well as an in-class assignment for which they have to upload solutions. Then there are homeworks, quizzes, and projects/labs. The pre-class quizzes and in-class uploads alone come to 52 items before counting the homeworks, quizzes, and projects; compared to the old 8 problem sets, two midterms, and a final (11 deliverables), that’s more than a 6x increase in things to keep track of, for more than 200 students. It behooves me, as someone who cares whether students learn the material, to try to make keeping track easier.

Ultimately, a university “adopting” any LMS is coercive: if students use it for all their introductory classes, then using something else almost deliberately makes their lives harder with no real benefit. I don’t think I’m going to William F. Buckley it up and stand athwart with my bag of PDFs (even if it is on github). I think using the LMS is the right thing to do by the students. I’m going to be super salty about it though.

An experiment in teaching Linear Systems and Signals

This fall I am teaching our introductory signals and systems course (ECE 345) for the n-th time. This time I’m teaching all of the students (at the time of writing, 206 of them): prior offerings split the class into two sections taught by different faculty, and in the last two years of COVID-induced remote instruction I co-taught a combined class with my colleague Salim El Rouayheb. Based on my last in-person offering in 2019, I had drawn up a few ideas for changing the class, planning to get things organized in the month before the semester.

At the beginning of August I went to campus to look at the classroom, in Lucy Stone Hall on the Livingston campus at Rutgers, which can seat 400 students. For context, to get there from my office and from where most STEM classes are held (on Busch campus), one has to drive, take a bus, or bike. I previously taught in a 147-seat classroom with whiteboards on rollers that slide up and down (behind the projection screen), giving 4 boards’ worth of space visible at a time. The new classroom supposedly has “5 chalkboards,” but those are on wooden panels partially obscured by the podium, and the chalkboard on casters shown in the picture is not particularly visible from the back of the class. So… no real board space. I can use the projection screen with a tablet or document camera (or maybe transparencies, to be really old school), but there is only a single projection screen.

So I’ve embarked on a far too ambitious plan to partially “flip” the class: students will watch video lectures (already recorded during COVID times) and then come to class for more active learning/problem-solving activities. Since this blog has been moribund for the last few years, I will try to write about this process as it goes, to help process/document what I’m doing and how well it’s working (or not).

I’ve been doing a bit of reading on prior approaches to flipped classes and active learning.

I’ve also gotten a lot of help from various friends and other educators about their own experiences and ideas of what has worked and what hasn’t.

I rapidly realized that I could not implement all of these ideas in the month before the semester, so I am trying to pick and choose my battles. Successfully flipping a class usually requires a high ratio of instructional staff (TAs, learning assistants (LAs), etc.) to students. I don’t even know how many TAs I’ll have this semester, so that’s going to be a challenge, and it’s waaaaay too late to ask for LAs. Hence I’m calling this a “partial” flip.

I’m not sure I’ll be able to pull it off, but here’s hoping!

Teaching students to stay away from Physiognomic AI

I read Luke Stark and Jevan Hutson’s Physiognomic AI paper last night and it’s sparked some thinking about additional reading I could add to my graduate course on statistical theory for engineering next semester (Detection and Estimation Theory).

“The inferential statistical methods on which machine learning is based, while useful in many contexts, fail when applied to extrapolating subjective human characteristics from physical features and even patterns of behavior, just as phrenology and physiognomy did.”

From the (mathematical) communication theory context in which I teach these methods, they are indeed useful. But I should probably teach more about the (non-mathematical) limitations of those methods. Ultimately, even if I tell myself that I am teaching theory, that theory has a domain of application which is both mathematically and normatively constrained. We get trained in the former but not in the latter. Teaching a methodology without a discussion of its limitations is a bit like teaching someone how to shoot a gun without any discussion of safety. [*]

The paper describes the parallels between the development of physiognomy and some AI-based computer vision applications, to illustrate how the claims about utility or social good being made now are nearly identical to those made then. They quote Lorenzo Niles Fowler, a phrenologist: “All teachers would be more successful if, by the aid of Phrenology, they trained their pupils with reference to their mental capacities.” Compare this to the push for using ML to generate individual learning plans.

The problem is not (necessarily) that giving students individualized instruction is bad, but that ML’s “internally consistent, but largely self-referential epistemological framework” cherry-picks what it wants from the application domain to find a nail for the ML hammer. As they write: “[s]uch justifications also often point to extant scientific literature from other fields, often without delving its details and effacing controversies and disagreements within the original discipline.”

Getting back to pedagogy, I think it’s important to address this “everything looks like a nail” phenomenon. One place to start is to think carefully even about the cartoon examples we use in class. But perhaps I should add a supplemental reading list to go along with each topic. We fancy ourselves theorists, but I think that’s a dodge: students take the class because they are excited about doing machine learning. When they go off into industry, they should be able to think critically about whether the tool is right for the job: not just “is logistic loss the right loss function?” but “is this even the right question to be asking or trying to answer?”

[*] That is, very American?

Some thoughts on teaching signals and systems

I’m teaching Linear Systems and Signals[*] (ECE 345) this semester at Rutgers. The course overall has 260+ students, split between two sections: I am teaching one section. This is my second time teaching it: last year I co-taught with Vishal Patel (who has decamped to Hopkins), and this semester I am co-teaching with Sophocles Orfanidis. I inherited a bit of a weird course: this is a 3-unit junior-level class with an associated 1-unit lab (ECE 347). Previous editions of the course had no recitations, which boggled my mind, since the recitation was where I really learned the material when I took the course (6.003 at MIT, with Greg Wornell as my recitation instructor). How are you supposed to understand how to do all these transforms without seeing some examples?

So this year we have turned ECE 347 into a recitation and moved the coding/simulation part of the course into the homework assignments. Due to the vagaries of university bureaucracy, however, we still have to assign a separate grade for the recitation (née lab). Moreover, there are some students who took the class without the lab and now just need to take 347! It’s a real mess. Hopefully it’s just one year of transition but this is also the year ABET [**] is showing up so we’ll see how things go.

After surveying a wide variety of textbook options for the course, we decided to go with the brand-new and free book by Ulaby and Yagle, Signals and Systems: Theory and Applications [***]. I really have to commend them on doing a fantastic job and making the book free, which is significantly better than $247 for the same book I used literally 20 years ago when I took this course. Actually, we mainly used another book, whose title/author eludes me now, but it had a green slipcover and was more analog control-focused (perhaps since Munther Dahleh was teaching).

One major difference I noticed between textbooks was the order of topics. Assuming you want to do convolution, Laplace (L), Z, Fourier Series (FS), and Fourier Transforms (FT), you can do a sort of back and forth between continuous time (CT) and discrete time (DT):

  • CT convolution, DT convolution, CTFS, DTFS, CTFT, DTFT, Laplace, Z
  • CT convolution, DT convolution, Laplace, Z, CTFS, DTFS, CTFT, DTFT

or do all of one and then all of the other:

  • CT convolution, Laplace, CTFS, CTFT, DT convolution, Z, DTFS, DTFT
  • DT convolution, Z, DTFS, DTFT, CT convolution, Laplace, CTFS, CTFT

I like the alternating version because it emphasizes the parallels between CT and DT, so if you cover sampling at the end you can kind of tie things together. This tends to give students a bit of whiplash, so we are going for:

CT convolution, DT convolution, Laplace, Z, CTFS, CTFT, DTFS, DTFT
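
To make the CT/DT parallel concrete (these are just the standard definitions, not tied to any particular book), the input-output relation of an LTI system with impulse response h is a convolution integral in continuous time and a convolution sum in discrete time:

    y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau        (CT)
    y[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n - k]                    (DT)

Every subsequent topic has the same mirrored structure (CTFS/DTFS, CTFT/DTFT, Laplace/Z), which is what the alternating orderings try to exploit.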

It’s all a bit of an experiment, but the thing I find with all textbooks is that they are never as modular as one might like. That’s good for a book but maybe not as good for a collection of curricular units, which in the end is what an S & S [****] class is. CNX is one type of alternative, or maybe something like the interactive book that my colleague Roy Yates dreams of.

I find myself questioning my own choices of ordering and how to present things in the midst of teaching — it’s tempting to experiment mid-stream but I have to tamp down the urges so that I don’t lose the class entirely.

 

[*] You can tell by the word ordering that it was a control theorist who must have named the course.

[**] Accreditation seems increasingly like a scam these days.

[***] You can tell by the word ordering where the sympathies of the authors lie.

[****] Hedging my bets here.

Linkage

Cheating: The List Of Things I Never Want To Hear Again. This is an almost definitive list of plagiarism/cheating excuses. I both love and loathe the idea of making students sign a pledge, but there’s that saying about a horse and water… (h/t Daniel Hsu)

This note on data journalism comes with a longer report about how to integrate data journalism into curricula. It strikes me that many statistics and CS departments are missing the boat here on creating valuable pedagogical material for improving data analytics in journalism. (h/t Meredith Broussard)

Speaking of which, ProPublica has launched version 2.0 of its Data Store!

Of course, data isn’t everything: The Perils of Using Technology to Solve Other People’s Problems.

DARPA just launched a podcast series, Voices from DARPA, where DARPA PMs talk about what they’re doing and what they’re interested in. The first one is on molecular synthesis. It’s more for a popular audience than a technical one, but also seems like a smart public-facing move by DARPA.

My friend Steve Severinghaus won The Metropolitan Society of Natural Historians Photo Contest!

My friend (acquaintance?) Yvonne Lai co-authored this nice article on teaching high school math teachers and the importance of “mathematical knowledge for teaching.”

Data: what is it good for? (Absolutely Something): the first few weeks

So Waheed Bajwa and I have been teaching this Byrne Seminar on “data science.” At Allerton some people asked me how it was going and what we were covering in the class. These seminars are meant to be more discussion-based. This is a bit tough for us in particular:

  • engineering classes are generally NOT discussion-based, in either the US or Pakistan
  • it’s been more than a decade since we were undergraduates, let alone 18 years old
  • the students in our class are fresh out of high school and also haven’t had discussion-based classes

My one experience in leading discussion was covering for a theater class approximately 10 years ago, but that was a junior-level elective as I recall, and the dynamics were quite a bit different. So getting a discussion going and getting all of the students to participate is, on top of being tough in general, particularly challenging for us. What has helped is that a number of the students in the class are pretty engaged with the ideas and material, and in the end we do get to think collectively, and a bit differently, about the technologies around us and the role that data plays.

What I wanted to talk about in this post is what we’ve covered in the first few weeks. If we offer this class again, it would be good to revisit some of the decisions we’ve made along the way, as this is as much a learning process for us as it is for them. A Byrne Seminar meets 10 times during the semester, so it ends well before finals. We had some overflow from one topic to the next, but roughly speaking the class went in the following order:

  • Introduction: what is data?
  • Potentials and perils of data science
  • The importance of modeling
  • Statistical considerations
  • Machine learning and algorithms
  • Data and society: ethics and privacy
  • Data visualization
  • Project Presentations

I’ll talk a bit more on the blog about this class, what we covered, what readings/videos we ended up choosing, and how it went. I think it would be fun to offer this course again, assuming our evaluations pass muster. But in the meantime, the class is still on, so it’s a bit hard to pass retrospective judgement.

Detection and Estimation: book recommendations?

It’s confirmed that I will be teaching Detection and Estimation next semester so I figured I would use the blog to conjure up some book recommendations (or even debate, if I can be so hopeful). Some of the contenders:

  • Steven M. Kay, Fundamentals of Statistical Signal Processing – Estimation Theory (Vol. 1), Prentice Hall, 1993.
  • H. Vincent Poor, An Introduction to Signal Detection and Estimation, 2nd Edition, Springer, 1998.
  • Harry L. Van Trees, Detection, Estimation, and Modulation Theory (in 4 parts), Wiley, 2001 (a reprint).
  • M. D. Srinath, P. K. Rajasekaran, and R. Viswanathan, Introduction to Statistical Signal Processing with Applications, Prentice Hall, 1996.

Detection and estimation is a fundamental class in the ECE graduate curriculum, but these “standard” textbooks are around 20 years old, and I can’t help but think there might be a more “modern” take on the subject (no, I’m not volunteering). Venu Veeravalli’s class doesn’t use a book, just notes. However, I think the students at Rutgers (majority MS students) would benefit from a textbook, at least as a grounding.

Srinath et al. is what my colleague Narayan Mandyam uses. Kay is what I was leaning toward before (because it seems to be the most widely used), but Poor’s book is the one I read. I think I am putting up the Van Trees as a joke, mostly. I mean, it’s a great book, but it’s a bit much for a textbook. So what do the rest of you use? Also, if you are teaching this course next semester, perhaps we can share some ideas. I think the curriculum might be ripe for some shaking up, if not in core material then at least in the kinds of examples we use. For example, I’m certainly going to cover differential privacy as a connection to hypothesis testing.
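
To sketch that connection (the standard hypothesis-testing view of differential privacy, not something from any of the books above): think of an adversary who sees the output of an ε-differentially private mechanism A and must decide whether the input database was D or a neighboring D′. For any rejection region S, let \alpha = P(A(D) \in S) be the false-alarm probability and \beta = P(A(D′) \notin S) the missed-detection probability. Applying the ε-DP inequality P(A(D) \in S) \le e^{\epsilon} P(A(D′) \in S) to S and to its complement, in both directions, gives

    \alpha + e^{\epsilon} \beta \ge 1,
    e^{\epsilon} \alpha + \beta \ge 1,

so no test can drive both error probabilities toward zero unless ε is large: privacy is literally a constraint on the achievable detection trade-off, which is why it fits naturally in this course.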

Teaching bleg: articles on “data” suitable for first-year undergraduates

My colleague Waheed Bajwa and I are teaching a Rutgers Byrne Seminar for first-year undergraduates this fall. The title of the course is Data: What is it Good For? (Absolutely Something), a reference which I am sure will be completely lost on the undergrads. The point of the course is to talk about “data” (what is it, exactly?), how it gets turned into “information,” and then perhaps even “knowledge,” with all of the pitfalls along the way. So it’s a good opportunity to talk about philosophy (e.g. epistemology), mathematics/statistics (e.g. undersampling, bias, analysis), engineering (e.g. storage, transmission), science (e.g. replication, retraction), and policy (e.g. privacy). It’s supposed to be a seminar class with lots of discussion, and the students can be expected to do a little reading outside of class. We have a full roster of 20 signed up, so managing the discussion might be a bit tricky, of course.

We’re in the process of collecting reading materials — magazine articles, book chapters, blog posts, etc. for the students to read. We explicitly didn’t want it to be for “technical” students only. Do any readers of the blog have great articles suitable for first-year undergrads across all majors?

As the class progresses I will post materials here, as well as some snapshot of the discussion. It’s my first time teaching a class of this type (or indeed any undergraduates at Rutgers) so I’m excited (and perhaps a bit nervous).

On a side note, Edwin Starr’s shirt is awesome and I want one.

Teaching technical (re-)writing

I think it would be great to have a more formal way of teaching technical writing for graduate students in engineering. It’s certainly not being taught at (most) undergraduate institutions, and the mistakes are so common across the examples that I’ve seen that there must be a way to formalize the process for students. Since we tend to publish smaller things a lot earlier in our graduate career, having a “checklist” approach to writing/editing could be very helpful to first-time authors. There are several coupled problems here:

  • students often don’t have a clear line of thought before they write,
  • they don’t think of who their audience is,
  • they don’t know how to rewrite, or indeed how important it is.

Adding to all of this is that they don’t know how to read a paper. In particular, they don’t know what to be reading for in terms of content or form. This makes the experience of reading “related work” sections incredibly frustrating.

What I was thinking was a class where students learn to write a literature review (a small one) on a topic of their choosing. The first part will be how to read papers and make connections between them. What is the point of a literature review, anyway? The first objective is to develop a more systematic way of reading and processing papers. I think everyone I know professionally, myself included, learned how to do this in an ad-hoc way. I believe that developing a formula would help improve my own literature surveying. The second part of the course would be teaching about rewriting (rather than writing). That is, instead of providing rules like “don’t use the passive voice so much” we could focus on “how to revise your sentences to be more active.” I would also benefit from a systematic approach to this for my own writing.

I was thinking of a kind of once-a-week writing-seminar-style class. Has anyone seen a class like this in engineering programs? Are there tips/tricks from other fields/departments that do have such classes? Even though it is “for social scientists,” Howard Becker’s book is a really great resource.

“Cascading Style Sheets are a cryptic language developed by the Freemasons to obscure the visual nature of reality”

Via Cynthia, here is a column by James Mickens about how horrible the web is right now:

Computer scientists often look at Web pages in the same way that my friend looked at farms. People think that Web browsers are elegant computation platforms, and Web pages are light, fluffy things that you can edit in Notepad as you trade ironic comments with your friends in the coffee shop. Nothing could be further from the truth. A modern Web page is a catastrophe. It’s like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks.

It’s a fun read, but also a sentiment that may resonate with those who truly believe in “clean slate networking.” I remember going to a tutorial on LTE and having a vision of what 6G systems will look like. One thing that is not present, though, is the sense that the system is unstable, that the introduction of another feature in communication systems will cause the house of cards to collapse. Mickens seems to think the web is nearly there. The reason I thought of this is the recent fracas over the US ceding control of ICANN, and the sort of doomsaying around that. From my perspective, network operators are sufficiently conservative that they can’t/won’t willy-nilly introduce new features that are only half-supported, as happens on the Web. The result is a (relatively) stable networking world that appears to detractors as somewhat Jurassic.

I’d argue (with less hyperbole) that some of our curricula also suffer from the accretion of old ideas. When I took DSP oh-so-long ago (13 years, really?) we learned all of this Direct Form II Transposed blah blah, which I’m sure was useful for DSP engineers at TI to know at some point but has no place in a curriculum now. And yet I imagine there are many places still teaching it. If anyone still reads this, what are the dinosaurs in your curriculum?
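
Since I brought it up, here is a minimal sketch (mine, not from any textbook or from the original post) of what the Direct Form II Transposed structure actually computes for an IIR filter with numerator coefficients b and denominator coefficients a. The function name and conventions are just for illustration; it assumes a[0] = 1 and at least a first-order filter.

    import numpy as np

    def df2t_filter(b, a, x):
        """Filter x with the Direct Form II Transposed structure (assumes a[0] == 1)."""
        b = np.asarray(b, dtype=float)
        a = np.asarray(a, dtype=float)
        n_states = max(len(b), len(a)) - 1
        # Pad both coefficient vectors to the same length.
        b = np.concatenate([b, np.zeros(n_states + 1 - len(b))])
        a = np.concatenate([a, np.zeros(n_states + 1 - len(a))])
        z = np.zeros(n_states)  # the shared delay-line states
        y = np.zeros(len(x))
        for n, xn in enumerate(x):
            yn = b[0] * xn + z[0]
            # Each state absorbs one feedforward (b) tap and one feedback (a) tap.
            for i in range(n_states - 1):
                z[i] = b[i + 1] * xn - a[i + 1] * yn + z[i + 1]
            z[n_states - 1] = b[n_states] * xn - a[n_states] * yn
            y[n] = yn
        return y

    # Sanity check against scipy.signal.lfilter, which uses the same structure:
    # from scipy.signal import lfilter
    # b, a = [0.2, 0.3], [1.0, -0.5]
    # x = np.random.randn(16)
    # assert np.allclose(df2t_filter(b, a, x), lfilter(b, a, x))

The appeal of the structure is just that the feedforward and feedback paths share a single delay line; whether students still need to know it by name is exactly the curricular question above.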