Banning open weight models would be a disaster

244 points by rbren, about 1 year ago

21 comments

greenavocado, about 1 year ago
Remember when the world freaked out over encryption, thinking every coded message was a digital skeleton key to anarchy? Yeah, the 90s were wild with the whole PGP (Pretty Good Privacy) encryption fight. The government basically treated encryption like it was some kind of wizardry that only "good guys" should have. Fast forward to today, and it's like we're stuck on repeat with open model weights.

Just like code was the battleground back then, open model weights are the new frontier. Think about it—code is just a bunch of instructions, right? Well, model weights are pretty much the same; they're the brains behind AI, telling it how to think and learn. Saying "nah, you can't share those" is like trying to put a genie back in its bottle after it's shown you it can grant wishes.

The whole deal with PGP was about privacy, sending messages without worrying about prying eyes. Fast forward, and model weights are about sharing knowledge, making AI smarter and more accessible. Blocking that flow of information? It's like telling scientists they can't share their research because someone, somewhere, might do something bad with it.

Code lets us communicate with machines, model weights let machines learn from us. Both are about building and sharing knowledge. When the government tried to control encryption, it wasn't just about keeping secrets; it was about who gets to have a voice and who gets to listen. With open model weights, we're talking about who gets to learn and who gets to teach.

Banning or restricting access to model weights feels eerily similar to those encryption wars. It's a move that says, "We're not sure we trust you with this power." But just like with code, the answer isn't locking it away. It's about education, responsible use, and embracing the potential for good.

Innovation thrives on openness. Whether it's the lines of code that secure our digital lives or the model weights that could revolutionize AI, putting up walls only slows us down. We've been down this road before. Let's not make the same mistake of thinking we can control innovation by restricting access.
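The "weights are basically like code" point can be made concrete. Below is a minimal sketch, purely illustrative and assuming PyTorch; the tiny `nn.Linear` model is a hypothetical stand-in for any open-weight model. It shows that "weights" are nothing more than named arrays of numbers that can be inspected, serialized to a file, and shared, much the way source code is shared as text.

```python
# Minimal, illustrative sketch (assumes PyTorch is installed).
# The model below is a hypothetical stand-in for any open-weight model.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # a tiny "model": 8 weights + 2 biases = 10 numbers

# The "weights" are just a mapping from parameter names to tensors, i.e. plain data.
state = model.state_dict()
for name, tensor in state.items():
    print(name, tuple(tensor.shape))

# Sharing them is nothing more than writing those numbers to a file...
torch.save(state, "weights.pt")

# ...which anyone else can load back into the same architecture.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("weights.pt"))
```

In that sense a checkpoint is closer to a large bundle of numeric data than to an executable, which is what makes "publishing weights" feel analogous to publishing code or research results.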
mikkom, about 1 year ago
Wow, what a horrible idea. Sounds like a monopoly in the making. I would be really interested to hear what "open" AI has commented about this - I guess they are lobbying for this with all of their billions.

If the US government really wants to do this correctly, they must also ban any API access to AI models and ban all research related to AI.

How could this even be written as law? Universities and companies are prohibited from publishing their research? Models with how many layers are forbidden from being published? All neural networks?
torginus, about 1 year ago
So AI companies take all the world's text and knowledge for free, use openly available research, take massive private funding and generate immense economic value using that, and want to make it illegal for anyone else to do the same?

I don't get the 'harm can be done by individuals' argument. Sticks and stones. Every discussion forum on the internet is moderated to some degree, and every human being has the ability to post hurtful or illegal content, yet the system works. Moderation will only get more powerful thanks to AI tools.
skissane, about 1 year ago
There are almost 200 countries in the world. Even if the US and the EU and a bunch more ban open weight models, I doubt they'll succeed in convincing every country to do so. And whichever countries decide not to follow the ban could thereby give their own AI industries a big boost. As the world becomes ever more globalised, the potential effectiveness of these kinds of policies declines.

Sure, they *could* try to negotiate some kind of UN convention for a coordinated global ban. But, given how fractured global diplomacy has become, I doubt the odds of something like that succeeding are particularly high.
CamperBob2, about 1 year ago
From the end of the article: "If you agree, please send your comments to the DoC by March 27th, 2024."

Is there a reason why this executive order / RFC received no coverage on HN (or anywhere else I'm aware of) until after the deadline had passed?
wanderingmind, about 1 year ago
Maybe some lawyer here can explain how an administration can block the publishing of open weights without violating the First Amendment, which guarantees freedom of expression for everyone.
entrep, about 1 year ago
The author is a collaborator on OpenDevin [1], an attempt to replicate and improve Cognition Labs' Devin.

[1] https://github.com/OpenDevin
bradley13, about 1 year ago
Government is necessary in order to organize a complex society, but government is like any other organization: made up of people, many of whom are out for their own interests. The most prominent of those interests are power and money.

Whenever government proposes banning a technology, one must ask: who benefits? LLMs, even in their current state of infancy, are powerful tools. Restricting access to those tools keeps power in the hands of wealthy corporations and the government itself. That's the power aspect.

The money aspect is even simpler: don't doubt that some of those wealthy corporations are making donations to certain officials in order to gain support for actions like this. Almost no one leaves Congress (or almost any parliamentary body in any country) as less than a multi-millionaire. Funny how that works...
dang, about 1 year ago
Related ongoing thread:

*OpenAI's comment to the NTIA on open model weights* - https://news.ycombinator.com/item?id=39900197 - April 2024 (41 comments)
foota, about 1 year ago
Is there any way to comment past the date? This seems like a horrible idea.
MrYellowP, about 1 year ago
> to prevent abuse.

People never grow tired of the ever-repeating excuses to further reduce our freedoms, or to hide criminality, or to just insult the already lacking collective IQ.

Think of the children! We need to prevent abuse! Someone's feelings might be hurt! Someone might get harmed! Seeing real breasts might cause trauma!

Or one that's not so often used, but still really effective:

We're just bulldozing this place because it's so horrible. Instead we'll turn it into a luxury resort for rich people. People who believe it's because we're destroying any and all evidence are just conspiracy theorists. (In regards to Epstein's island.)
xzzulz, about 1 year ago
This tech is risky. It could be dangerous if used without any control.

It should be kept within English-speaking culture and its close allies. In my opinion, in our society, full of threats, it is not a good idea to make it open source. (Not an expert on the topic.)

That could allow adversarial parties to obtain it, parties that otherwise would be unable to do so by themselves.

In my opinion, it should be kept under the control of the most able and responsible organizations, supervised by government. Because who else could supervise it? Regulation of this tech is a very important topic, one that should be tackled by the best organizations, universities, and responsible researchers.

I would prefer a Star Wars type of society, where humans still do human activities.
fullspectrumdev, about 1 year ago
I'm personally of the view that most AI safety people are delusional and irrationally terrified of progress.

I remain open to having my mind changed on this matter, but thus far, I've not seen a single good argument for restricting development of AI.
jmull, about 1 year ago
> The only difference with AI, is that AI is immensely powerful.

Now, I keep getting stuck on this.

All of these models automate the creation of BS -- that is, stuff that kind of seems like it could be real but isn't.

I have no doubt there is significant economic value in this. But the world was awash in BS before these LLMs, so we're really talking about a more cost-effective way to saturate the sponge.

Anyway, on the main topic... closing models is an absurd idea, and one that cannot possibly work. I think the people who have billions at stake in these models are panicking, realizing the precarious and temporary nature of their lead in LLMs, and are desperately trying to protect it. All that money bought them a technological lead that is evaporating a lot faster than they can figure out how to build a business model on it.

...Nvidia should pump a little bit of their windfall profits into counter-lobbying, since they have the most to gain/lose from open/closed models.
andy99, about 1 year ago
There will still be "free" as in freedom models available from China and others.

And no doubt a burgeoning resistance focused on making open source models available. It would be a disaster, but I can see it making the industry stronger, with more variety and breaking the influence of some of the bigger players.

Practically, if you're concerned about this, learn more about ML/AI - not about how to use super high level frameworks, but about how it actually works, so when "SHTF", as survivalists say, you'll still be able to use it.
formula_ninguna, about 1 year ago
The Chinese, Iranians, and Russians should now exclaim: "Poor Americans! Look at what Biden's regime does to them! Look at how their liberties get suppressed! Let's help those people fight for their rights."

Because if this had happened in any non-Western country, the mainstream news in the US and EU would've been of a similar sentiment.
meindnoch, about 1 year ago
IEEE 754 should be banned for public safety.
zzzzzzzzzz10, about 1 year ago
Please ban them; it won't change a thing but drive us underground. I have backups of all relevant models, as have many others.
rgmerk, about 1 year ago
Not saying this ban is a great idea...

...but there are plenty of useful things that we ban the general public from having access to.

Opium poppies. Automatic weapons. Gas centrifuges.

In fact, if you start publishing detailed designs for the last one you're likely to have people in suits who only go by first names visiting you in pretty short order.
injidup, about 1 year ago
> The biggest threats posed by AI come not from individuals, but from corporations and state-level actors.

Why is this statement assumed to be true? It is far from clear that advanced weapons in the hands of irresponsible, impulsive and ideological individuals cannot cause large-scale chaos.

To build an egg requires effort on the level of states or corporations, but to break it requires just an individual motivated to do so.
patcon, about 1 year ago
I'm open to this. If an attempt to stop frontier models is to hold, banning open weights must be on the table. I'm excited about AI, but not entirely sure that it doesn't call for the same strong regulation as nuclear proliferation did. It's about kicking the brakes until culture catches up and absorbs impacts, not stopping.

I'm just saying I'm open to it, and don't want them to listen to accelerationists, but rather the people doing the deepest work with the edge models. Many of them are humbled and worried. Generalist AI enthusiasts wanting freedom to do any and all things with paradigm-shifting intelligence infra, I'm not swayed by that so much.