I'm a tenured CS professor at a liberal arts college. I teach the entire curriculum, from intro programming up to senior-level electives.

I'm currently teaching a "Programming with AI" course where we're using Cursor with no restrictions. I now think "learning programming" as we've traditionally conceived it is toast. Core CS projects, the kind that would have been at the heart of the curriculum, take students maybe 30 minutes to do with modern tools.

Previously, the hard part of learning programming was developing the skill of putting code statements in the correct logical structure, and building that up from small programs of a few lines, to functions/classes, and eventually to larger programs. That happened in parallel with building the knowledge background of systems, libraries, algorithms, databases, etc. that you had to draw on to write complex applications.

The core work of the middle part of the CS major - building skill by writing progressively more complex functions and classes - can be largely automated at this point. You've still got to learn things, but the emphasis shifts. Details about libraries and frameworks - implementation at the level of functions and classes - are deemphasized. The scope of what my students can do is bigger, so they're dealing with the challenges of designing and debugging larger applications.

For new programmers, the most important thing to learn is a mental model of how programs execute: Can you look at a small program and understand what it does? This is a prerequisite to generating bigger programs with AI. Students also need to learn to use AI collaboratively, to think through a problem like a pair programmer, rather than expecting complete one-shot generations.

I redesigned our intro course to move most of our core skills practice (what would previously have been homework problems) into class time. The out-of-class projects are now bigger and more ambitious, but they also give students specific steps and prompts for using AI. A major point of the projects is agency. I like projects where there isn't one right answer, but instead students have to set a vision and then iterate on it.

Beyond the first course, we now need to spend more time on "software engineering" and craftsmanship:

- Clarifying requirements
- Reasoning about the design of a larger program with many parts
- Communication between parts of the application: DB design, APIs
- Debugging intuition: why is this thing breaking?
- Testing
- Working with bigger codebases

These things have always been around, but we often didn't teach them until upper-level courses. Early-level projects weren't complex enough to require serious craftsmanship. AI lets us do more ambitious things earlier in the curriculum - we should be looking for ways to raise our standards and continue challenging students.

For new programmers, I would encourage learning how to write standard single-file programs (with variables, loops, functions, lists, etc.) without using AI. Then bring in AI as a partner to write larger applications with specific libraries. Focus on raising the scale of your programs to the point where you can't immediately one-shot them, and then use the difficulties you encounter to think about design issues. Gradually build your knowledge base of systems and algorithms, but don't obsess about memorizing implementation details right away.
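To make "standard single-file program" concrete, here's a rough sketch of the scale I mean (the grading task is just an illustration, not an assignment from my course). A new programmer should be able to read something like this, predict its output by hand, and eventually write it without AI:

    # A small single-file program: variables, functions, a loop, and a list.
    # The task doesn't matter; the point is being able to trace it by hand.

    def average(numbers):
        """Return the mean of a non-empty list of numbers."""
        total = 0
        for n in numbers:
            total += n
        return total / len(numbers)

    def letter_grade(score):
        """Map a numeric score to a letter grade."""
        if score >= 90:
            return "A"
        elif score >= 80:
            return "B"
        elif score >= 70:
            return "C"
        return "F"

    scores = [88, 93, 75, 64, 90]

    for s in scores:
        print(s, letter_grade(s))

    print("class average:", round(average(scores), 1))

If you can trace every line of a program like that, you have the mental model you need to start directing AI at bigger things.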
Build end-to-end applications that do things; LeetCode problems have always been terrible.

For my fellow professors: You should start talking about how to redesign your courses and curriculum. That requires taking some guardrails off so you can see what students can really do with modern AI tools.