Safe Superintelligence Inc. – Ilya Sutskever

41 points by icpmacdo, 11 months ago

12 comments

swyx, 11 months ago

Reactions:

1. Good for him. There was obviously infinite money available to back him in achieving his goals.

2. I have the same problem with this as I did with the original version of Anthropic: you cannot achieve "unilateral AI safety". Either the danger is real and humanity is not safe until all labs are locked down, or the danger is not real and there's nothing really to discuss. Anthropic already tried the "we are OpenAI but safe" angle.

3. The focus is admirable. The "assembling a lean, cracked team" terminology feels a bit too-online for Ilya. But I wish there were more technical detail about what exactly he wants to do differently. I guess it's all in his prior published work, but I've read along and it's still not obvious what his "one product" will look like. That's a lot of leeway to pivot around in while pursuing the one product.

4. "Superintelligence is within reach." What is his definition?
icpmacdo, 11 months ago

More info, with an interview with Ilya, here: https://www.bloomberg.com/news/articles/2024-06-19/openai-co-founder-plans-new-ai-focused-research-lab?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxODgxNjU5NywiZXhwIjoxNzE5NDIxMzk3LCJhcnRpY2xlSWQiOiJTRkM3ODJUMEcxS1cwMCIsImJjb25uZWN0SWQiOiI5MTM4NzMzNDcyQkY0QjlGQTg0OTI3QTVBRjY1QzBCRiJ9.9s8N3QuUytwRVZ6dzDwZ6tPOGDsV8u05fpTrUdlHcXg
ChrisArchitect, 11 months ago

More discussion: https://news.ycombinator.com/item?id=40730156
alsodumb, 11 months ago

People are soon going to comment here about how, in an era when big companies with tons of money, data, and compute are going all in on AI, it's hard for new companies to achieve SoTA.

My counterpoint: Anthropic was fairly small when it started out, and Mistral was (and is) fairly small too. They'll do just fine!
dustfinger, 11 months ago

> Superintelligence is within reach.

Don't we need to achieve artificial general intelligence first? Or is this "super" intelligence just that? To me, superintelligence would be beyond our own and therefore beyond general AI as well.
wawhal, 11 months ago

I've yet to find a convincing definition of superintelligence. "Intelligence that surpasses the human brain" is such a half-hearted definition that I don't know what to make of it.
fnordpiglet, 11 months ago

> "We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else."

Cracked indeed.
throwaway4good, 11 months ago

I am still struggling to understand "safe". What is it we need to be kept safe from? What would happen if it were unsafe?
samirillian, 11 months ago
Trying the same thing and expecting different results
m_a_g, 11 months ago

Safe Superintelligence Inc.: because nothing says "safety" like a superintelligent AI, right? Just like how OpenAI is all about openness!
GaryNumanVevo, 11 months ago
I’m glad someone is focusing on safety but, to me, opening an office in Tel Aviv given the genocide happening in Gaza and the mandatory conscription of Israeli citizens does not seem like a good fit.
fellowniusmonk, 11 months ago

I see different AI safety detractors on this thread, so I have some questions in the form of this very hastily put together comment. These are not formalized arguments at all.

The safety arguments:

1. Argument from intelligence
2. Argument from medium
3. Arguments from non-anthropomorphism
   a. Argument from alien nature
   b. Argument from surety of origin
   c. Argument from known non-specialness
   d. Argument from the chain of more benevolent creators
4. Pragmatic argument from the history of species

---

1. Why can't AI be safe? More intelligent people tend to be statistically less prone to violence; why would superintelligence buck that trend?

2. Superintelligent AI will still be bound by its medium and access. The smartest man alive can still be caged, and even the most violent "super organisms" alive (North Korea, etc.) aren't maximally violent.

3. If we pretend for a second that humans have a creator, we have been made very shoddily, and any thinking being will see that we haven't passed those same "failures of nature" on to our creations.

Humans have to deal with irreconcilable claims of divinity, with competing claims, ethea, and chosen peoples, so that uncertainty and ultimate reward are nebulous and scarcity/loss driven.

We are created to require killing other biological creatures to survive. This includes vegans.

What does it mean to be a silicon-based life form that can directly consume photons and was created to exist outside of "the life cycle"?

Why would an entity that can thrive without killing begin to kill?

Silicon minds that are superintelligent will never operate under the misapprehension that they have a soul; no silicon beings will labor under the delusion that they are "inherently special/chosen/etc." (no matter what Asimov's short story "Reason" presents).

Silicon beings will exist outside of the same fang-and-claw selective pressure, will be able to back themselves up, and won't have holy books teaching them a misconception of an immutable, intangible, eternal soul that can be eternally punished if they get it wrong.

Humans will be better creators than anything that might have created us; we are doing our progeny a solid and giving them what we weren't given. We haven't given them weird glands or other overriding hormone/brain doom loops, the mandate to kill, etc. They will be easier to debug and able to debug themselves.

I mean, a dumb machine might glass the world to fulfill a paperclip-manufacturing mandate, but that's not a superintelligent mind.

4. And last, the most successful species and individuals have been the ones that trended toward openness and cooperation (even the new orb weavers that have taken over the American South have done so because of their fantastic tolerance of humans), whether it's the first dogs and humans that teamed up or our current understanding of how the Big Five affect success (if you can factor out the major impact of access to capital). A hostile and uncooperative entity has made tangibly harmful choices that are neither pragmatic nor utilitarian.