
Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act

148 points by jph00 about 1 year ago

18 comments

Animats about 1 year ago

I just sent in some comments.

It's too late to stop "deep fakes". That technology is already in Photoshop and even built into some cameras. Also, regulate that and Hollywood special effects shops may have to move out of state.

As for LLMs making it easier for people to build destructive devices, Google can provide info about that. Or just read some "prepper" books and magazines. That ship sailed long ago.

Real threats are mostly about how much decision power companies delegate to AIs. Systems terminating accounts with no appeal are already a serious problem. An EU-type requirement for appeals, a requirement for warning notices, and the right to take such disputes to court would help there. It's not the technology.
jph00 about 1 year ago

I've written a submission to the authors of this bill, and made it publicly available here:

https://www.answer.ai/posts/2024-04-29-sb1047.html

The EFF have also prepared a submission:

https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf

A key issue with the bill is that it criminalises creating a model that someone else uses to cause harm. But of course, it's impossible to control what someone else does with your model -- regardless of how you train it, it can be fine-tuned, prompted, etc. by users for their own purposes. Even then, you can't really know *why* a model is doing something -- for instance, AI security researchers Arvind Narayanan and Sayash Kapoor point out:

> *Consider the concern that LLMs can help hackers generate and send phishing emails to a large number of potential victims. It's true — in our own small-scale tests, we've found that LLMs can generate persuasive phishing emails tailored to a particular individual based on publicly available information about them. But here's the problem: phishing emails are just regular emails! There is nothing intrinsically malicious about them. A phishing email might tell the recipient that there is an urgent deadline for a project they are working on, and that they need to click on a link or open an attachment to complete some action. What is malicious is the content of the webpage or the attachment. But the model that's being asked to generate the phishing email is not given access to the content that is potentially malicious. So the only way to make a model refuse to generate phishing emails is to make it refuse to generate emails.*

Nearly a year ago I warned that bills of this kind could hurt, rather than help, safety, and could actually tear down the foundations of the Enlightenment:

https://www.fast.ai/posts/2023-11-07-dislightenment.html
pcthrowaway about 1 year ago

This bill sounds unbelievably stupid. If passed, it will just result in a migration of AI projects out of California, save a few which are already tied to the EA movement.

I'm not under the impression that the EA movement is better suited to steward AI development than other groups, but even assuming they were, there is no chance for an initiative like this to work unless every country agreed to it and followed it.
interroboink about 1 year ago

I feel like the legal definition of "AI Model" is pretty slippery.

From this document, they define:

> "Artificial intelligence model" means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.

That's pretty dang broad. Doesn't it cover basically all software? I'm not a lawyer, and I realize it's ultimately up to judges to interpret, but it seems almost limitless. Seems like it could cover a kitchen hand mixer too, as far as I can tell.
_heimdall about 1 year ago

Anyone have a link to a less biased explanation of the bill? I can't take this one too seriously when it baselessly claims people will be charged with thought crimes.
hackermatic about 1 year ago

I encourage people to look for a variety of opinions on this bill -- and its various parts -- so you can better figure out which parts you actually want to keep, change, or remove, and give your legislators that specific feedback.

Alliance for the Future is a lobby group of effective accelerationists who endorse some of Marc Andreessen and Peter Thiel's views in their manifesto, and based on that plus this article, they seem to oppose the bill entirely.

A place to start for a breakdown of what's in the bill is the Context Fund analysis that AFTF links to. That analysis cites similar critiques from the EFF, the Software & Information Industry Association, and others. All of these are from the perspective of voting against or substantially changing the bill.

I haven't found "pro bill" opinions as easily, but I haven't been plugged into the conversations around this, so I'm missing anything that doesn't appear on the first few pages of Google or DDG.
elicksaur about 1 year ago

I'll happily support regulation of the space when the bill writers of these proposals stop using definitions of "artificial intelligence" that could reasonably be construed by a lawyer to cover literally any computer program.

> (b) "Artificial intelligence model" means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.
Imnimo about 1 year ago

> (2) "Hazardous capability" includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.

So if I hand-write instructions to make a chemical weapon, and aggressively "fine-tune" Llama 7B to output those instructions verbatim regardless of input, Meta is liable for releasing a model with hazardous capabilities?
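The mechanism behind this hypothetical is ordinary overfitting: if a third party "fine-tunes" on a dataset that maps every input to the same target, gradient descent drives the model toward emitting that target no matter what it is given. A minimal toy sketch (plain NumPy linear model standing in for an LLM; all names and values are illustrative, not Llama or any real training setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a released model: a linear map plus bias.
W = rng.normal(size=(4, 3)) * 0.1
b = np.zeros(3)

# Adversarial "fine-tuning" set: varied inputs, one fixed target output.
X = rng.normal(size=(64, 4))
target = np.array([1.0, 0.0, 0.0])      # stand-in for the verbatim text
Y = np.tile(target, (64, 1))

# Plain gradient descent on squared error.
for _ in range(2000):
    pred = X @ W + b
    err = pred - Y
    W -= 0.1 * (X.T @ err) / len(X)
    b -= 0.1 * err.mean(axis=0)

# Even unseen inputs now map (approximately) to the fixed target:
# the bias absorbs the target and the input-dependent weights shrink.
probe = rng.normal(size=(5, 4))
print(np.round(probe @ W + b, 2))
```

The point of the sketch is that this behavior is installed entirely by the post-training data, which the original developer never sees -- the capability "would not manifest but for" the third party's modification.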
protocolture about 1 year ago

Question: what happens if I write a piece of harmful software that doesn't have the AI label?

It seems dumb to have a separate classification for harms caused by trained AI models. The training aspect doesn't seem to limit liability at all. A judge might rule differently, but that's why the justice system is built as it is: to make intelligent decisions based on the specific facts of a case.

I am betting that software that causes some significant harm is already outlawed. So this whole thing is just a waste of time.
cscurmudgeon about 1 year ago

> A developer of a covered model that provides commercial access to that covered model shall provide a transparent, uniform, publicly available price schedule for the purchase of access to that covered model

Interesting: we don't have a transparent, uniform, publicly available price schedule for healthcare and other basic needs (electricity, for example -- see PG&E).

Something is fishy here.
andy99 about 1 year ago

I agree with this (the call to action, not the act) and will try to respond and share it. But it's a lobby group, right ("Alliance for the Future")? I'd like to know who is funding it and a bit more about it.
phkahler about 1 year ago

It would be really helpful if folks like Sam Altman and Elon would STFU about dangers and claims of AGI or better in the coming months.

If you're actually worried about AI, we need to ban any generative AI that can replicate a specific person's voice or appearance. Beyond that I don't see any immediate danger.
throwing_away about 1 year ago
Slow down there, California.<p>Florida is growing too fast as it is.
jkuli about 1 year ago

I'm unable to register. This is GME stonks all over again. It takes less than 1 second to process an account. There are 18,000 seconds in five hours. There must be a lot of comments that they don't agree with. Maybe they shut it down to protect humanity from extinction?
nonplus about 1 year ago

I guess I think we should hold models used for non-academic reasons to a higher standard, and there should be oversight.

I don't know if all the language in this bill does what we need, but I'm against letting large corporations like Meta or X live-test whatever they want on their end users.

Calling out that derivative models are exempt sounds good; only new training sets have to be subjected to this. I think there should be an academic limited-duty exemption: models that can't be commercialized likely don't need the rigor of this law.

I guess I don't agree with affuture.org, and think we need legislation like this in place.
carbocation about 1 year ago

I think this advice is incomplete. For those of us who live in California, shouldn't we be contacting our representatives?
synapsomorphy about 1 year ago

I don't think this bill would be that effective, but I do feel that if we as a species don't do something drastic soon, we won't be around for a whole lot longer.

And I'm not sure if it's even possible to do something drastic enough at this point -- regulating datacenters would just make companies move to other countries, much as this bill would probably push companies out of CA.
s1k3s about 1 year ago

The article suggests that this act will effectively destroy any open-source AI initiative in California. After reading the act, that seems to be a correct assumption. But is open-source AI even a thing at this point?

By the way, this is how the EU does things, and that's why we're always behind on anything tech :)