I've written a submission to the authors of this bill, and made it publicly available here:<p><a href="https://www.answer.ai/posts/2024-04-29-sb1047.html" rel="nofollow">https://www.answer.ai/posts/2024-04-29-sb1047.html</a><p>The EFF have also prepared a submission:<p><a href="https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf" rel="nofollow">https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf</a><p>A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes. Even then, you can't really know <i>why</i> a model is doing something -- for instance, AI researchers Arvind Narayanan and Sayash Kapoor point out:<p>> <i>Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It’s true — in our own small-scale tests, we’ve found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here’s the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that’s being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.</i><p>Nearly a year ago I warned that bills of this kind could hurt, rather than help, safety, and could actually tear down the foundations of the Enlightenment:<p><a href="https://www.fast.ai/posts/2023-11-07-dislightenment.html" rel="nofollow">https://www.fast.ai/posts/2023-11-07-dislightenment.html</a>