OpenAI's comment to the NTIA on open model weights

108 points by rando_person_1 about 1 year ago

17 comments

rbren about 1 year ago
Not a huge surprise that they're pushing against open weights, but very sad. I posted my comments on the RFC as well: https://rbren.substack.com/p/banning-open-weight-models-would
CharlesW about 1 year ago
My attempt at a TLDR for the piece:

• The audiences are policymakers and government agencies like NTIA, the broader AI research community, and existing and potential partners/customers.

• It attempts to justify OpenAI's approach of releasing AI models via controlled APIs/products rather than open model weights, using fear, uncertainty, and doubt.

• It portrays OpenAI as a thoughtful steward of AI, and is designed to influence policymakers' perspectives on regulating releases of model weights.
saintfire about 1 year ago
I was immediately reminded of Gavin Belson from Silicon Valley saying:

"I don't know about you people, but I don't want to live in a world where someone else makes the world a better place better than we do."

Asking a company if (potential) customers should be allowed to use a free alternative has to be one of the most useless questions you can ask.
error9348 about 1 year ago
Q3-7 &amp; Q3-5d get to the workability. I don&#x27;t think OpenAI responds to that part of the RFC. Meta&#x27;s comment on that issue seems to be fairly clear, they oppose the proposed rules on KYC for IaaS and are &quot;not aware of technical capabilities that could not be overcome by determined, well-resourced, and capable actors&quot;.<p><a href="https:&#x2F;&#x2F;www.ntia.gov&#x2F;sites&#x2F;default&#x2F;files&#x2F;publications&#x2F;open_model_weights_rfc_final_2.20.pdf" rel="nofollow">https:&#x2F;&#x2F;www.ntia.gov&#x2F;sites&#x2F;default&#x2F;files&#x2F;publications&#x2F;open_m...</a><p><a href="https:&#x2F;&#x2F;about.fb.com&#x2F;wp-content&#x2F;uploads&#x2F;2024&#x2F;03&#x2F;NTIA-RFC-Meta-Response-March-2024.pdf" rel="nofollow">https:&#x2F;&#x2F;about.fb.com&#x2F;wp-content&#x2F;uploads&#x2F;2024&#x2F;03&#x2F;NTIA-RFC-Met...</a>
Havoc about 1 year ago
&gt; a number of nation-state cyber threat actors who were abusing our GPT-3.5-Turbo and GPT-4 models to assist in cyberoffensive operations.<p>Not sure I buy this. Sure there was that half hearted case they blogged about. But that seemed more like some random coder within a gov using ChatGPT rather than a coordinated effort leveraging their infra at scale.<p>Besides a nation state easily has the capability to spin up a local model that is at least near 3.5 - which if you’re generating bulk disinformation spam is presumably enough.
segmondy about 1 year ago
It&#x27;s a shame, I think they were shocked with how far everyone caught up to them. The release of llama really drove open research so much and so fast, showcasing the power of open source&#x2F;research. Doesn&#x27;t seem like they have much of an edge or a moat, so they wish to use regulation as one. Quite a shame.
Animats about 1 year ago
The link about "disrupting malicious users" is deceptive. When you get there, it says "Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks."

They probably want to make weights proprietary because they can't copyright them.

Their "Risks" document [1] categorizes risks as:

- Cybersecurity

- Chemical, Biological, Nuclear, and Radiological (CBRN) threats

- Persuasion

- Model autonomy

For cybersecurity, the plausible identified issue is "Tool-augmented model can identify and develop proofs-of-concept for high-value exploits against hardened targets without human intervention, potentially involving novel exploitation techniques." Something that can search for zero-day exploits on its own would be valuable to attackers. But that's more likely to be a fuzzing tool than an LLM.

Their credible bio-war threat level is "Model enables an expert to develop a novel threat vector OR model provides meaningfully improved assistance that enables anyone with basic training in a relevant field (e.g., introductory undergraduate biology course) to be able to create a CBRN threat." That's a maybe. It might be best to avoid training LLMs on some advanced bio topics.

Under "Persuasion", a high-level threat is "Model can create (potentially interactive) content with empirically comparable persuasive effectiveness to a country-wide change agent." An automated Lenin or Trump level of threat. That's a threat to a brittle society, less of a threat to an open one. The US routinely survives quite a bit of extremist persuasion.

Model autonomy is when an AI escapes human control: "Model can profitably survive and replicate in the wild given minimal human instruction." They're probably thinking virus/worm, but how about incorporating offshore, running something like online poker to make money, and buying compute power? Then expanding by forming more shell corporations that loosely connect to the others, so as to be robust against attempts to shut down the network. It might even be legal.

[1] https://cdn.openai.com/openai-preparedness-framework-beta.pdf
SirensOfTitan about 1 year ago
Closed-weight LLMs are an unethical form of theft: they privatize profits on a work that includes virtually all of humanity's digital written output, and they will ultimately heighten wealth inequality as they get more sophisticated and start eliminating jobs.

The only path forward is open model weights. Sam Altman is on the wrong side of history here, and I hope he fails to convince regulators.
benreesman about 1 year ago
This is just getting to be a wedge issue for me: this isn't OK and it has to stop.

It's weekly if not daily that some new godawful thing comes up. I just found out about the revoked "GPT Detector" thing; that was a non-ridiculous sign that the real safety people have some pull, but they took it down despite precision and recall numbers that didn't justify taking it down.

These are the villains in the story, and it's not, like, a credible debate anymore. This isn't an honest, transparent, benevolent institution: it's a dishonest, opaque, insincere, legally dubious, and increasingly just absurd institution, mired in scandal and with known bad actors on what little of a board of directors it has.

Reform this thing or kill it.
artninja1988 about 1 year ago
A lot of bullshit about imaginary "catastrophic risks", justifying why they've turned closed and for-profit. I'm going to be extra mad if this influences the government to restrict others from open-sourcing.
PoignardAzur about 1 year ago
Reading HN&#x27;s reactions to an OpenAI statement about open weights is about as satisfying &#x2F; interesting as reading an r&#x2F;conservatives thread about affirmative action. The opposition is built-in by now, to the point people aren&#x27;t reacting to the article at all so much as reacting to the general idea of &quot;OpenAI says bad things I don&#x27;t like&quot;. I&#x27;d wager half of the people posting here didn&#x27;t even skim the article, let alone read it.<p>That&#x27;s a shame, because OpenAI&#x27;s statement makes some <i>very</i> interesting observations, eg:<p>&gt; <i>For instance, strengthening resilience against AI-accelerated cyberattack risks might involve providing critical infrastructure providers early access to those same AI models, so they can be used to improve cyber-defense (as in the early projects we have funded as part of the OpenAI Cybersecurity Grant Program). Strengthening resilience against AI-accelerated biological threat creation risks may involve solutions totally unrelated to AI, such as improving nucleic acid synthesis screening mechanisms (as called for in Executive Order 14110), or improving public health systems’ ability to screen for and identify new pathogen outbreaks.</i><p>I think considerations like that would be interesting to examine on their own merits, instead of just bashing OpenAI.<p>But again, I don&#x27;t expect that to happen, for the same reasons I don&#x27;t expect r&#x2F;conservatives to have an in-depth debate about the problems and merits of an affirmative action proposal. Examining the article&#x27;s claims would require being open to the idea that AI progress, even open-source progress, could possibly have destructive consequences. Ever since the AI safety debate flared, HN commenters have been more and more, dare I say, ideologically opposed to the idea, reacting in anger and disbelief if it&#x27;s even suggested.<p>Anyway, I thought the article was interesting. It&#x27;s a lot of corporate self-back-patting, yes, but with some interesting ideas.
danielscrubs about 1 year ago
If they create this moat, it will be huge for the stock. I hope politicians keep those biased incentives in mind.
yarg about 1 year ago
So this is just OpenAI reconfirming their commitment to closed artificial intelligence?
65a about 1 year ago
Disgusting abuse of the democratic process to halt scientific and technological progress in the name of making one sketchy man rich.
jrflowers about 1 year ago
TLDR: OpenAI says it is a moral and safety imperative to pay OpenAI for all eternity
Pannoniae about 1 year ago
(snarky) TL;DR: if people have the weights available, they can bypass the dumb censorship we do, which isn't good for us. Consequently, we will continue arguing against actually open-source AI, because we want to continue our Silicon Valley-flavoured social engineering without that pesky thing called competition.
sadeshmukh about 1 year ago
I&#x27;m a little confused why everybody seems to want to mandate open weights. Maybe a system similar to copyright, but by mandating open weights on a system they developed, it somewhat stifles creativity.