>What areas of regulation have not fallen into these traps, or at least not as badly? For instance, building codes and restaurant health inspections seem to have helped create safety without killing their respective industries. Driver’s licenses seem to enforce minimal competence without preventing anyone who wants to from driving or imposing undue burden on them. Are there positive lessons we can learn from some of these boring examples of safety regulation that don’t get discussed as much?

I'm pretty sure Louis Rossmann has a massive playlist making fun of how hilariously slow NYC's approval process for construction work is. Adjacent to building codes are zoning laws, which exist specifically to make sure American housing is shaped like a speculative investment vehicle[0].

Driver's licenses err the other way: getting licensed to drive is hideously easy, suspensions of that license for unsafe driving are far too rare, and people regularly flout the rules. Any serious attempt to enforce the law is opposed as draconian, so the law is enforced only on populations that cannot meaningfully fight back[1].

>What other alternative models to review-and-approval exist, and what do we know about them, either empirically or theoretically?

I'm not aware of any. In fact, while the author suggested liability as an alternative, I would argue that liability and review-and-approval are two sides of the same coin. You carry some liability that you don't recognize right away, because probabilistic outcomes let lucky individuals and institutions dodge bullets; once you realize your liability is higher than you thought, you start engaging in review-and-approval. In the case of the FDA, the liability was the risk of public embarrassment and lost elections for allowing unsafe drugs to hit the market. In the case of factories, the review-and-approval processes are internal and unaccountable. While IRBs can start out well-meaning and degrade into exercises in speculative donkey blanketing[2], the factories' processes will *start* as a CYA measure.

AI risk is particularly strange, because the biggest risk of AI is simply that the technology works *as intended*. Not just that it works, or doesn't work, but that it works and one company owns it all. A cursory reading of selectorate theory would suggest the ultimate disenfranchisement of everyone but the specific subgroup of capitalists who happen to own parts of OpenAI, Microsoft, or Google. What you need is not risk mitigation; what you need is to force free publication and use of AI software. In other words, Stallman was right[3].

>Why is there so much bloat in the contract research organizations (CROs) that run clinical trials for pharma? Shouldn’t there be competition in that industry too?

Competition is an artifact of the 1970s. When we stopped blocking mergers on antitrust grounds, we functionally abandoned the concept of private competition. This is why I don't think liability is a fix. The author assumes there are still competitive pressures that would disincentivize over-regulation; that is not the case.

[0] Not its original intent, of course: the original idea was to keep black people out of the suburbs. Like much else in the US, the structure is not perpetuated for the sake of racism, but it is an artifact of vestigial racism.

[1] This is mediated through poverty; rich towns have politically active citizens who will fight back against new ways of enforcing the law.
Poor towns can fleece their people at traffic stops, and they don't have to pay their cops as much, so long as the cops can take part of their compensation in police brutality. Thanks to poverty induced by that same vestigial racism, this disproportionately affects black people, too. The dynamics here are what Cory Doctorow calls the "shitty technology adoption curve."

[2] Covering your ass.

[3] Also, Ned Ludd was right.