
AI companies will need to report their safety tests to the US government

54 points · by DaveFlater · over 1 year ago

12 comments

reissbaker · over 1 year ago

Wow. This feels like a regulatory regime designed to smother open-source AI and reduce competition in the American market, since small companies are unlikely to be able to comply. The worst part is that doing so is actually against our national interests... China has already essentially caught up to GPT-4V with the release of Qwen-VL-Max yesterday [1]; the cat is out of the bag. Hobbling ourselves now doesn't help.

1: https://twitter.com/_akhaliq/status/1752033872982806718
walterbell · over 1 year ago

https://www.cnbc.com/2022/10/24/how-googles-former-ceo-eric-schmidt-helped-write-ai-laws-in-washington-without-publicly-disclosing-investments-in-ai-start-ups.html

> "If you're going to be leading a commission that is steering the direction of government AI and making recommendations for how we should promote this sector and scientific exploration in this area, you really shouldn't also be dipping your hand in the pot and helping yourself to AI investments," said Shaub of the Project on Government Oversight.
jjk166 · over 1 year ago

This is some terrible reporting. If you actually read the executive order [0], while there is lip service paid to several general issues associated with AI like job displacement and privacy violation, they actually are pretty narrow in what they mean by AI safety. They're specifically looking to restrict access to AI models that can be used to aid biological weapons development by modelling complex biochemical interactions.

While I'm unconvinced this is the best way to tackle the issue, and there's always the possibility for overreach, the story as presented — namely that AI companies are about to become burdened with a vast and nebulous regulatory infrastructure prompted by vague and poorly informed fears about AI — is bunk.

[0] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
ejb999 · over 1 year ago

Will not be enforceable imo, and I'm not sure it is even constitutional. Like it or not, the output of AI engines is a type of free speech. (And no, not that 'AI' has free speech rights; the creators of the AI do.)
jMyles · over 1 year ago

At the risk of hyperbole:

It seems that, as AI becomes the prevalent model for human-computer interaction, requirements like this are not merely stifling to AI (and to the speech, press, and even religious protections that may arise from such interactions), but stifling to general-purpose computing.

Along with increasing "protect the children" sockpuppetry like we have in the bill in the Florida house yesterday — which seems to seek a future in which internet access requires the furnishing of a state ID — actions like this (which, you will note, did not even come from an additional act of Congress) appear to be rather vulgar measures to prevent technology from empowering anyone who doesn't already have power.
scotty79 · over 1 year ago

Is it connected to a missile launcher? [Y] [No, it's safe]
stainablesteel · over 1 year ago
sounds like a soft censorship mechanism to me
testfoobar · over 1 year ago
Is there a published list of safety tests that are to be conducted?
wand3r · over 1 year ago

I laughed when Fox News said Taylor Swift was a "psyop" because it was absurd. Then a few weeks later her deepfakes go all over Twitter and the government is cracking down on AI.

Sure, it's a conspiracy theory, but I do think the government is trying to manipulate public opinion around this space and gain control of it.
deadbabe · over 1 year ago
Can the reports just be reported by AI?
throwaway49849 · over 1 year ago

> The government's National Institute of Standards and Technology will develop a uniform framework for assessing safety, as part of the order Biden signed in October.

They probably need to develop an extremely antisocial, psychopathic, malevolent AI to assess other AIs for safety. Its purpose will be to run through thousands of scenarios, using manipulation, threats, and deception to try to extract dangerous information from the other AI. It can then score the responses based on the information it was able to extract. I don't really see any other way to automate this extremely tedious and error-prone task. It's interesting, though, because we will need to concentrate all of the evil in the world into a single AI in order to run these tests.
macinjosh · over 1 year ago

I didn't know the executive branch could just enact new laws unilaterally. They are probably using some easily bent statute to allow for this. Here's hoping SCOTUS overturns Chevron this cycle so we can be free of dictatorial edicts such as this.