Altman is so intent on regulation because he *knows* he has no moat if he can't persuade the government to build him one.

OpenAI has a head start, but the technology is well understood and the main limitation on building new models is budget. Facebook is actively working to undercut them and commoditize the technology, *and it looks like they will succeed*. OpenAI can't keep spending billions to stay ahead of the commoditization, so they *need* the government to pull up the drawbridge and give them that moat.

Altman himself doesn't care one iota about safety; he just pushed out the people who did. This is purely a strategic play for him.
What passes for "AI regulation" is an attempt to divert attention from the real problem: the arbitrary exercise of corporate power.

AIs that just answer questions people pose seem mostly harmless. Some people might not like the answers, and some of the answers might be wrong. That's true of web search results now; it's just an extension of the social media controversy.

AIs that have authority over people are scary. "AI says No!" That's a corporate power problem, not an AI problem. If corporations or landlords have arbitrary, discretionary power over people's lives, that power is the problem, not the fact that they delegated it to an AI. The EU regulates algorithmic decisions made against individuals, and it has already stopped Uber from "robo-firing," where the app back end fired underperforming drivers.

Few proposed AI laws make this distinction, because drawing it would raise the issue of arbitrary exercise of power and of collective individual rights. That might lead to political unrest, or even unions.
Well, these are like other regulations: an incumbent gets in, lobbies, writes the laws, and pulls the ladder up behind them.

It's similar to how legal cannabis rolled out in the US. Many places now require single-use RFID tags from a sole provider. Get in first, then shape the laws to build your business.
No doubt leadership at OpenAI, Google, and Microsoft are in favor of any regulations that would cement their lead.

For the rest of us, the relevant question is: are the costs worth the benefits?

> The risk [...] is that radically new products and approaches in the arena never get a chance to be developed and benefit consumers.

Totally. And the opposite risk is that these companies deploy technologies that cause massive harm to people, without adequate testing, because they're caught up in a race.

I think some regulations would be helpful on balance -- like reporting of large training runs, as in SB 1047 -- and some wouldn't -- like (hypothetically) requiring a license to train small models.
The irony is that *some* regulation is probably a good idea, but the only people with enough influence to enact top-down regulation are also the ones who hold the purse strings.

This is where capitalism breaks down: when the technology is powerful enough to erode liberty and undermine civilizations.
"Crony capitalism" is such an intellectual cop-out.

The people who say this garbage would never let any other ideology weasel its way out of the real-world effects of its system, but for some bizarre reason the giga brains at Reason get a pass when they claim this is not true capitalism.

The kind of capture-by-regulation outlined in the article is the direct result of a political landscape that prioritizes corporate power over government power. There is nothing "crony" about it; it's just capitalism.

P.S. This is not pro- or anti-capitalist; it's anti-shitty-argument.
How do we protect people’s livelihoods without entrenching regulatory capture? Interested in HN’s thoughts.
<s>It won’t matter in a few years when AGI kills us all.</s>