I have fairly deep experience with modelling code across a wide range of dimensions (academic, industrial, and government; supercomputing to microcontrollers; internet ads to protein design). I've seen a lot of modelling codes, run a bunch of them, written some of them, and helped people fix problems in them.<p>Throughout that time I've seen a wide range of modelling quality. Very few people can churn out really nice code that solves useful problems, update that software for a wide range of uses over the years, keep it documented, pay down technical debt, fix bugs, write great tests, and make sure the numerics are excellent. Oftentimes these things are built by people who are genuine experts but spend most of their time in meetings explaining the situation to politicians, or running labs and publishing papers.<p>Having read this particular article, many of the problems being complained about are typical and happen frequently in industry, even in highly functional orgs with strong incentives to build high-quality software. Further, it seems the author had a very strong position on lockdowns and set out to write a quantitative/techie takedown of some code that was used to inform decisions. The article drips with that kind of animosity, and I see a number of technical errors and ambiguities that make it unconvincing.<p>That said, we <i>could</i> have far better codes. In principle, all the data and support libraries would be open; the pipelines that produce the data would be reproducible, maintainable, and well-tested; and anybody could write a simple notebook that reproducibly models their hypotheses for large populations, in a way that a large number of people could inspect and come to their own conclusions.
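<p>To make that last point concrete, here's a minimal sketch of the kind of notebook cell I have in mind, in Python with numpy. The SIR model and every parameter value are hypothetical illustrations (nothing here comes from the article or the code it critiques); the point is just that pinning the random seed makes the whole run byte-for-byte reproducible by anyone who re-executes it.<p>

    import numpy as np

    # Pinned seed: a second run produces identical results, which is
    # the reproducibility property described above.
    rng = np.random.default_rng(seed=42)

    def sir_step(s, i, r, beta, gamma, n, rng):
        """One stochastic day of a basic SIR model (binomial draws)."""
        p_inf = 1.0 - np.exp(-beta * i / n)  # per-susceptible infection prob.
        new_i = rng.binomial(s, p_inf)
        new_r = rng.binomial(i, 1.0 - np.exp(-gamma))
        return s - new_i, i + new_i - new_r, r + new_r

    n, beta, gamma = 1_000_000, 0.3, 0.1    # illustrative parameters only
    s, i, r = n - 100, 100, 0
    peak, peak_day = i, 0
    for day in range(1, 181):
        s, i, r = sir_step(s, i, r, beta, gamma, n, rng)
        if i > peak:
            peak, peak_day = i, day

    print(f"peak infections: {peak} on day {peak_day}")

<p>If the input data and support libraries behind a cell like this were open and version-pinned, anyone could swap in their own parameters, re-run it, and inspect how the conclusions change.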