Anybody got any additional context on this? The headline / tiny screenshot leaves a lot to be desired. What is the bill? What does it do? How does it benefit OpenAI? Does it also benefit other "big players" (Meta, xAI, etc.)?
Text of bill: <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047" rel="nofollow">https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...</a>
The entire notion of “safety” and “ethics” in AI is simply a Trojan horse for injecting government control and censorship over speech and expression. That’s what governments get out of it. The big AI players like OpenAI, Microsoft, Amazon, Google, etc. are incentivized to go along with it because it helps them through regulatory capture and barriers to competition. They also make friends with powerful legislators, which helps them avoid pesky things like antitrust scrutiny.<p>Legislation should not restrict the development or operation of fundamental AI technologies. Instead, laws should target only the specific uses that are deemed illegal, irrespective of whether AI is involved.
related discussion:<p><a href="https://news.ycombinator.com/item?id=40198766">https://news.ycombinator.com/item?id=40198766</a>
Do I hear the sound of GPU farms being packed up in preparation for being moved to another state (or, more likely, the next generation being built in another state)?<p>Raw compute doesn't care where it occurs. Anywhere with plenty of reliable electric power is fine (which is increasingly <i>not</i> California to begin with).<p>The people who use the compute power don't even have to live there.
And here is the response thread from the state senator who introduced it.<p><a href="https://twitter.com/Scott_Wiener/status/1784964914236227757" rel="nofollow">https://twitter.com/Scott_Wiener/status/1784964914236227757</a><p>Has anyone less biased actually analyzed this bill? After reading these two threads, I'm not sure I trust either of these guys.
Burdensome restrictions on AI seem poised to benefit entrenched players and government. On that we agree.<p>I am terrified of the societal implications of AI, to the extent that I am plotting out short stories about them.<p>The author of the linked tweet seems to think AI can do no harm and there is nothing to worry about: bad actors can't abuse AI for bioweapons research or to spread misinformation, and there is no chance that an AI could improve itself unassisted.<p>I'll follow up on their position, but it seems naive or disingenuous in the extreme.
What the hell?<p>It establishes the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to regulate the development and use of advanced AI models, termed "covered models." So now LLMs will be subject to the whims of an unelected governing body, and you will comply under threat of vague penalties?<p>Requires entities operating computing clusters to assess whether customers intend to use resources to deploy covered models, and to maintain related records. If your judgment is wrong, you are fined and go to jail? There goes our freedom of association.<p>If a model does not qualify for exemption, developers must implement cybersecurity protections, shutdown capabilities, safety protocols, and capability testing. Annual compliance certification to the Frontier Model Division is required. GTFO.<p>Developers must report AI safety incidents to the Frontier Model Division within 72 hours of occurrence. Oof.<p>Creates the Frontier Model Division within the Department of Technology to oversee compliance, issue guidance, review safety reports, and advise the Attorney General and Legislature on AI matters. Jobs! Jobs! Jobs!<p>Allows the Attorney General to bring civil actions for violations, with potential penalties including injunctions, damages, and fines up to 30% of model development cost. Just lovely.<p>Directs the Department of Technology to commission consultants to create CalCompute, a public cloud computing cluster for AI research and innovation. Good luck with that!<p>LLM development will be buried by red tape in California.