> we’re thinking a little bit about a Git Version Control or github-esque software for storing, documenting, sharing and iterating on curriculum

> Most of the professors we talked to [...] appeared mostly ok with the status quo.

Don't underestimate how wretched our own system is. Or how widely that's underappreciated. And thus how awesomely large the potential for improvement, even if the societal payoff is unclear. We're educationally leaving *a lot* on the table.

Compared with systems with an even greater emphasis on rote learning, yes, we're better at creativity and problem solving. But...

Numeracy? There's almost a century now of professors, across several fields, complaining in their fields' journals about PhD candidates without a feel for reasonable numbers. And there's a famous, funny example of an ideal-gas-law chapter question that persisted through many editions, years, and much use, with numbers describing *solid* argon. Ok, so maybe just a typo? And a whole lot of student and professorial mindless plug-and-chug? Have you *ever* seen a question where it was the *student's* responsibility to judge which simplifying idealizations were valid to apply? We just don't do that. We could, but we don't. It's not a thing. So students just can't do it. We're teaching *toy* problem solving and producing innumeracy.
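To make that concrete, here's the sanity check nobody asks students to run. A sketch in Python: the problem givens below are invented for illustration (I don't have the textbook's actual numbers), but argon's melting point really is about 83.8 K at 1 atm.

    R = 0.08206              # gas constant, L*atm/(mol*K)
    P, V, n = 2.0, 1.0, 1.0  # hypothetical problem givens: atm, L, mol
    T = P * V / (n * R)      # plug-and-chug: ideal gas law solved for T
    print(f"T = {T:.1f} K")  # ~24.4 K
    # The step the question never requires: is "ideal gas" even valid here?
    if T < 83.8:             # argon freezes at ~83.8 K (1 atm)
        print("This 'gas' is a block of solid argon.")

One conditional. That's the whole difference between exercising a formula and exercising judgment about whether the formula applies.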
Firm grasps? A whizzy teacher of intro genetics at a first-tier university was asked what they would most like improved about their incoming students, and replied: a firmer grasp of the central dogma. Something that can be taught in primary school. But firm grasps aren't something we do well.

Robustly integrated understanding? Versus a Trivial-Pursuit collection of factoid fragments? Ask first-tier astronomy graduate students "a 5-year-old asks, what color is that hot ball, the Sun?" and then "and sunlight?", and you can expect a wrong answer to the first, and a right one to the second. Sometimes followed by a pause, and a "that doesn't make sense, does it?". Two incompatible factoids, perhaps first learned in kindergarten, seemingly colliding for the first time, two decades later, in grad school. And of the few who do get it right, half-ish (small N, though, and at an institution strong in astronomy education research) report having learned it in a class on common misconceptions in astronomy education, rather than from their own atypically extensive and successful training. After all, almost all of the most-used introductory astronomy textbooks also have it wrong. And then Khan Academy makes videos based on textbooks, and surprise... not. We don't do integrated understanding. (Quick check on the physics in the PS below.)

How can textbooks be so bad? Science education content is a very distinct thing from science, or even from science education research. It's under very, very different selection pressures. And it has nothing like science's infrastructure and culture. Physics education research folks tell a recurrent story of physics colleagues who are solidly empirical in their research, but in their teaching? "My *trusty gut* says it works!" A very large textbook company onboards its science education writers with the reassurance that it's ok if you've a BA and no science background at all, because there's a "scientist" on call for consults. Expecting science-like properties of science education content seems to me a scope-of-competence inference error. Like "You're a Scientist? Then you'll know <arbitrary topic>", or "You're a Doctor? Yes, of medieval French literature. Good, I've a question about my surgery", or "My local TV meteorologist explained why climate change isn't real". So why would you expect astronomy textbooks to get the color of the Sun right? Why would they? No one is going to be embarrassed in front of their peers, or fail to get tenure, by getting it wrong. Science education has "science" in the name, and some overlap in individual personnel, but nothing like the social constructs which get us from the work of individual researchers to science and its aggregate properties.

On a more upbeat note, when MIT created a VR cell-biology sim, gathering domain expertise by pulling in researchers for interviews, there was a problem... getting the researchers to leave. So despite oft-cited meager funding and lack of incentives, there seems at least a possibility of pulling in expertise, if project pragmatics and goals have the right shape.

Fond memories of fiddling with Core War after the article in Scientific American.
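PS, the promised check. A sketch under the standard blackbody assumption, using the textbook-standard constants (neither number is from this thread):

    b = 2.898e-3            # Wien's displacement constant, m*K
    T_sun = 5772            # Sun's effective temperature, K
    print(b / T_sun * 1e9)  # peak wavelength: ~502 nm, green

The peak is in the green, but the spectrum is broad across the whole visible band, and our eyes integrate that to white. So the Sun is white, exactly like sunlight: the two kindergarten factoids ("the Sun is yellow", "sunlight is white") can't both be right.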