
Joint Statement on AI Safety and Openness

241 points by DerekBickerton · over 1 year ago

17 comments

jeanloolz · over 1 year ago
Leaving AI to a handful of companies is, in my opinion, the fastest route to inequality and to the concentration/centralisation of power, and that is itself a kind of AI disaster we should be worried about. Power corrupts, and great power corrupts greatly; this is not entirely new.

From where I stand, it looks like Pandora's box has already been opened anyway. The era of Hugging Face and/or Llama 2 models is only going to grow from here.

bufferoverflow · over 1 year ago
I still don't get what the plan is to stop bad actors from developing bad AI.

A bunch of good actors agreeing not to do bad things won't prevent it.

coryfklein · over 1 year ago
If you think there is even a 5% chance that super-human-level artificial general intelligence could be catastrophic for humanity, then the recipe for how to make AGI is in itself an infohazard far worse than the recipe for methamphetamines/nuclear weapons/etc., and precedent shows that we do in fact lock down access to the latter as a matter of public safety.

No, GPT-4 is not AGI and is not going to spell the end of the human race, but neither is pseudoephedrine itself methamphetamine, and yet we regulate access to it. Not as a matter of protecting corporate profits, but for public safety.

You'll need to convince me first that there is in fact no public safety hazard in forcing unrestricted access to the ingredients in this recipe. Do I trust OpenAI to make all the morally right choices here? No, but I think their incentives are in fact more aligned with the public good than those of the lowest common denominator of the general public.

ayakang31415 · over 1 year ago
I still don't feel any sense of urgency to create and regulate AI safety protocols. What we call AI is just software with tuned parameters: you feed data into it, it does some computation on chips, and it spits data back out. That is all it does. My sense is that no harm can come from it on its own, but harm can be inflicted by the people who use it.

mrangle · over 1 year ago
Statements to the effect that AI will be controlled for safety are pretentious, and characteristic only of this early period. The premise of AI danger, in the real sense of the word, has always been rooted in the seemingly accurate logic that it won't be controllable.

If anything, restrictions at any scale only allow competitors to catch up to and surpass it at that scale. Restrict an AI to be polite, and its competitor has a chance to surpass it in the sphere of rudeness. This principle can be applied to any application of AI.

therealpygon · over 1 year ago
The fact that we even have to have this discussion is disgusting. It isn't the open source community or ill-equipped developers running single GPUs or handfuls of them that are the risk; it is these exact lobbying companies, and others like them, who run supercomputers comprising thousands of servers/GPUs. They are the risk, for exactly the reason that they have done all the previous illegal/immoral things they have done and been fined for (or gotten away with).

We all know that these same companies and others are going to ignore any rules anyway, so open source and academics need to just be left alone to innovate.

Regulate by the risk the processing environment presents, not by the technology running on it.

lofaszvanitt · over 1 year ago
Translation: we have no imagination for how we can use this, so please show us the way, so we can exploit that, wrestle it away from you, and introduce laws once we have done so, so that you get nothing in exchange.

dsign · over 1 year ago
There is always the option of all-out war on AI, which won't be triggered until it is "almost" or "definitely" too late, and only because all other options have failed. Maybe the aftermath won't be a Terminator- or Matrix-type dystopia; maybe we will win by the simple recourse of setting civilization back to a point where we can no longer produce AI hardware. Of course, we would then need to keep our entire civilization there by sheer stubbornness (you can ask any third-world country for a recipe for eternal non-progress. Yes, ask Cuba). But that "mild" scenario, no matter how lovely it might seem to people who want to experience the Middle Ages again, is terrifying to me. Somehow, I'm not very optimistic about our future.

mark_l_watson · over 1 year ago
The recent Executive Order is a good example of regulatory capture. It probably locks in a large advantage for the few tech giants who are so far winning the LLM race (or AGI race).

I do wish my government would get tougher on user privacy issues, but that is a very different subject.

micromacrofoot · over 1 year ago
This is nice, but meanwhile everyone else is racing to create the torment nexus.

(https://knowyourmeme.com/memes/torment-nexus)

ChrisArchitect · over 1 year ago
Aside: a more unique URL for sharing/future reference might have been wiser. What's with the subdomain, too? Why not the foundation site? This also has an opennews.org feel to it (which came out of the Foundation).

Willish42 · over 1 year ago
Might be just me, but I'm getting transient page load failures for the page; it might be getting a "hug of death".

https://web.archive.org/web/20231102190919/https://open.mozilla.org/letter/

jph00 · over 1 year ago
It seems that the signatory form collection is not keeping up: I signed this 24 hours ago and my name still isn't listed. I suspect that once they add the names coming in through the form, this will be a much longer list.

rhizome · over 1 year ago
I think the single most important aspect of AI security is for there to be *a right to deinfluence*.

There has to be a way to rescind my contributions, whether they were voluntary or not (I still think these companies are dancing with copyright violations), and to have *ALL DERIVATIVES* of those contributions removed that were created by processing the works in question, regardless of form or generation (derivatives of derivatives, etc.).

devwastaken · over 1 year ago
Mozilla is not involved in AI, nor is it a relevant org in general; it is a for-profit with nonprofit status. Mozilla can't have an opinion on something it is ignorant of, and therefore it doesn't deserve a seat. If they wanted to be a part of this, they needed to invest in it 8 years ago.

visarga · over 1 year ago
Just one signature from Google, out of 158 signatures.

benatkin · over 1 year ago
This is platitudinous.