Hi HN, I spent last spring working on a new class at Stanford that's focused on common pitfalls in systems programming and how to avoid them. You can check out all the lecture materials (including recorded lecture videos) and assignments here: http://cs110l.stanford.edu/

I'm planning on teaching this class again in the winter or spring and am looking for any feedback to improve it. I would love to hear your comments and suggestions!
Teaching a class about safety in systems code seems like a great idea, and at first glance the class content here seems useful and interesting. I do a lot of OS-level stuff at work (especially around virtualization and Firecracker), and while safety (and security) are obviously critical topics for industrial systems, they don't tend to be things that new grads have thought about at all. Great to see that changing (even if, as with all curriculum additions, it means covering some other stuff less).

> CS 110 [multiprocessing, multithreading, and networking in C++] is not just about how we do things, but also why – why are things designed the way they are, and if we get certain bugs or performance characteristics, why is that?

That's an interesting take, because I don't see Rust as being more abstracted in this way than C++ is. Obviously it's more abstracted than C, but by the time you get to "modern" C++ you're programming in a much higher-level language than C.

> I also think it's hard for students to appreciate Rust without having first experienced C and C++.

This part does make sense. Explaining the "why can't we just write C really carefully?" question to people who haven't experienced trying to do that is going to be harder. As we all know, it is possible to write safe C, but it takes a level of discipline and tooling support that is beyond most undergrads.

> that looks at what is often going wrong in systems and how we can improve practices to build better systems.

I'd love to see more research here too. There are some systematic studies of the causes of bugs in systems code, and obviously a lot of well-known bug patterns (see all of C's string handling). On the other hand, there seems to be fairly little research on the causes of more pernicious and subtle problems that become vulnerabilities (and data corruption, crashes, etc.) in systems code.
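To make the "write C really carefully" point concrete, here's a minimal sketch (my own example, not from the course) of the contrast: the classic C out-of-bounds read compiles and silently reads whatever happens to sit past the buffer, while safe Rust either panics or forces the failure case to be handled.

```rust
fn main() {
    let buf = [0u8; 8];
    let idx = 12; // past the end of the buffer

    // In C, `buf[idx]` here would compile and silently read out of
    // bounds. In safe Rust, `buf[idx]` would panic at runtime, and the
    // checked accessor below forces us to handle the failure instead.
    match buf.get(idx) {
        Some(byte) => println!("byte: {byte}"),
        None => println!("index {idx} is out of bounds, refusing to read"),
    }
}
```

Nothing about that requires discipline from the programmer; the language simply removes the silent-corruption outcome from the menu.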
I think another big benefit of Rust for systems is that you can explicitly mark functions/traits as unsafe if they require preconditions to work correctly, or if the trait must guarantee something that can't be checked by the compiler. That's a big benefit over C++, where the best you can do is documentation: warnings in the documentation don't appear at the call site, whereas using an unsafe Rust API requires an unsafe block (or another unsafe function), so the obligation is visible right where the call is made.
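A minimal sketch of both halves of that point (the trait and function names here are made up for illustration):

```rust
/// Marker trait promising that all-zero bytes are a valid value of the
/// type. The compiler can't check this, so implementing it is `unsafe`:
/// the `unsafe impl` makes the unchecked promise explicit.
unsafe trait Zeroable {}
unsafe impl Zeroable for u64 {}

/// Copies the element at `idx` without a bounds check.
///
/// # Safety
/// The caller must guarantee `idx < slice.len()`; anything else is
/// undefined behavior.
unsafe fn get_unchecked_copy(slice: &[u64], idx: usize) -> u64 {
    // SAFETY: the caller upholds the precondition documented above.
    unsafe { *slice.as_ptr().add(idx) }
}

// Generic code can rely on the trait's promise through a bound.
fn assert_zeroable<T: Zeroable>() {}

fn main() {
    assert_zeroable::<u64>();
    let data = [10u64, 20, 30];
    // Unlike a doc-only warning in C++, the precondition surfaces at
    // the call site: this line won't compile without `unsafe`.
    let x = unsafe { get_unchecked_copy(&data, 1) };
    println!("{x}");
}
```

The `# Safety` section plus the mandatory `unsafe` block is what moves the warning from the documentation to the call site.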
These comments struck me as particularly damning:

> As people usually say, Rust has a steep learning curve, and it's really hard to get productive with it in a short amount of time. This is reflected in the 2019 Rust language survey, and it's also reflected in the student frustration in the first few weeks of our weekly survey responses

> While I think Rust would be poorly motivated in CS 107, I think it is extra poorly suited for CS 110.

I have essentially no experience with Rust, and I want to like it. But I get the feeling that it isn't very user-friendly and that I would enjoy it much less than other memory-safe languages like Swift, Go, or Java, or even sharper-edged languages like C++ with smart pointers or Objective-C with ARC.

Also, given the large body of legacy C/C++ code still in use and development, I'm disappointed that clang still doesn't seem to support a memory-safe mode/ABI, as it could eliminate a large class of errors.

Maybe Unix made a critical error by adopting and promoting unsafe C; its predecessor Multics, written in PL/I, had essentially zero buffer, heap, or stack overflows over its entire lifetime (though it probably still had race conditions and concurrency errors). ;-)
Rust is nice enough, but I'm not sure it's worth a university-level course on safe systems programming. There's a lot more of interest in ATS[1], F*[2] (specifically its Low* subset), Isabelle[3] (most notably used by the seL4 microkernel), et al.

1. http://www.ats-lang.org/

2. https://fstar-lang.org/

3. http://isabelle.in.tum.de/
I don't see a focus on testing in this course. Was that by design? I feel like a lot of course design out there forgets this important implementation detail.
This is a fine article but not a Show HN, so I've taken that out of the title. Please see https://news.ycombinator.com/showhn.html.