OpenAI Research

124 points by aburan28 about 7 years ago

6 comments

cmpb about 7 years ago
On their "OpenAI Charter", they list several basic principles they'll use to achieve the goal of safe AGI, including this one, which I find pretty interesting:

> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."

If I'm reading that correctly, it means that later on, if/when some company is obviously on the cusp of AGI, OpenAI will drop what they're doing and start helping that company so there isn't a haphazard race to be first, which could result in unsafe AGI. That sounds like a well-intentioned idea that *could* cause more problems in practice. For instance, if there are multiple companies on almost equal footing, then combining forces with one of them would impose an even tighter sense of deadline on the others, possibly making development even less safe.

Also, they only mention assisting "value-aligned, safety-conscious" projects, which seems pretty vague. It seems like they should give (and perhaps have given) more thought to that principle.
dawidloubser about 7 years ago
For anybody else excited when they hear "open" and "AGI" in the same sentence: if you don't know the OpenCog project, the wiki (especially if downloaded in book form) makes for fascinating reading:

https://wiki.opencog.org/w/The_Open_Cognition_Project
logicallee about 7 years ago
A question. It says:

> OpenAI conducts fundamental, long-term research toward the creation of safe AGI.

What does "safe AGI" mean?

Obviously most readers would immediately think of something like: "AGI that won't enslave humanity, kill millions of people as part of its optimization process, crash planes, order drone attacks on civilian sectors, etc.", and won't escape its "cage" (whatever it was supposed to be doing).

But that seems like a strange and poorly defined explicit goal, and it seems early to be putting it into an announcement like this. Does it really mean that, or does it mean something else, more specific? And if so, what?

I would be interested in knowing what the person who wrote that word had in mind, since I think most people would think of the Terminator series (Skynet), The Matrix, etc., when it comes to AGI.

----

EDIT: To elaborate on why we should define "safe": I know what "safe" means when we say "a memory-safe programming language".[1] It's very specific. In that sentence it doesn't have anything to do with enslaving humanity, nor does anyone think it does. Here are some articles on this exact subject: https://www.google.com/search?q=memory+safety

Further, it's pretty obvious what we mean when we say "a safe autonomous vehicle", because whether an accident occurs is pretty cut and dried. There are gray areas: for example, is a vehicle "safe" if it drives under the speed limit and still gets into an accident through no fault of its own, when advance knowledge of all other vehicles heading toward the same intersection (regardless of visibility) would have kept it out of that accident? Clearly a car that slows down based on knowledge a human driver wouldn't have can be safer than another kind of car. But we still understand this idea of "safety" when it comes to cars.

But what does "safe" mean when we say "creation of safe AGI"?

It must mean something to be in that sentence. So why, and how, can you apply the word "safe" to AGI? What does it mean?

[1] It even has a complete Wikipedia article: https://en.wikipedia.org/wiki/Memory_safety
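(To make the memory-safety comparison above concrete, here is a minimal sketch in C of the kind of defect the term precisely rules out. The example is a standard textbook use-after-free, added for illustration, not drawn from the original thread.)

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);  /* heap-allocate one int */
        if (!p) return 1;
        *p = 42;
        free(p);                     /* p is now a dangling pointer */
        printf("%d\n", *p);          /* use-after-free: undefined behavior;
                                        a memory-safe language rejects this at
                                        compile time or traps it at run time,
                                        while C compiles it without complaint */
        return 0;
    }

That definition of "safe" is crisp and mechanically checkable, which is exactly the property "safe AGI" has not yet been given.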
ayakura about 7 years ago
The best thing I want to get out of OpenAI right now is a 10v10 Dota 2 pros-versus-bots all-star match. I wonder if they got anything out of last year's data from all the pros playing against their bot...
backpropaganda about 7 years ago
It seems like the website is getting ready for an announcement.
fouc about 7 years ago
I'm not a fan of this website design; it strikes me as an attempt to look extra fancy. AI is more human than that.