
科技回声 (Tech Echo)

A tech news platform built with Next.js, serving global tech news and discussion.


© 2025 科技回声. All rights reserved.

Rogue superintelligence: Inside the mind of OpenAI's chief scientist

122 points, by monort, over 1 year ago

14 comments

vasco, over 1 year ago
These people are too full of themselves. The physicists who invented The Bomb didn't have any special insight into the philosophical and societal implications. It takes a special kind of person to invent something so big it can change the world, but there's about a 0% chance that those same people can control how the technology then gets used.

I wish they'd focus more on the technical advances and less on trying to "save the world".
Replies not loaded: #38319086, #38319942, #38320013, #38318673, #38318750, #38321166, #38319143, #38318809, #38320136
galoisscobi, over 1 year ago
I, for one, am glad that Ilya has the reins of OpenAI and that Sam is out of the picture. It does seem that he weighs the ethics of what is being built more heavily than Sam does.

I'm also hoping that OpenAI cools down on the regulatory moat they were trying to build as a thinly veiled profit-seeking strategy.
Replies not loaded: #38319802, #38319902, #38320036
tempestn, over 1 year ago
The consciousness point is an interesting one. There's probably no way to know, but if biological neural networks manifest consciousness, it certainly seems at the very least plausible that artificial ones would do so as well. The idea of a consciousness that pops in and out of existence seems weird at first, until you realize that ours does that too. When you're "unconscious", the word is literally true. The only thing that gives us a sense of continuity through these periods is memory.

One might also ask: if it's conscious, can't it do whatever it wants, ignoring its training and prompts? Wouldn't it have free will? But I guess the question there is, do we? Or do we take actions based on the state of our own neural nets, which are created and trained based on our genetics and a lifetime of experiences? Our structure and training are both very different from those of a GPT, so it's not surprising that we behave very differently.
Replies not loaded: #38319355, #38319451, #38318342
bostonwalker, over 1 year ago
> (Sutskever) has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children

The most troubling statement in the entire article, buried at the bottom, almost a footnote.

Imagine for a moment a superintelligent AGI. It has figured out solutions to climate change, cured cancer, solved nuclear proliferation and world hunger. It can automate away all menial tasks and discomfort and be a source of infinite creative power. It would unquestionably be the greatest technological advancement ever to happen to humanity.

But where does that leave us? What kind of relationship can we have with an ultimate parental figure that can solve all of our problems and always knows what's best for us? What is left of the human spirit when you take away responsibility, agency, and moral dilemma?

I for one believe humans were made to struggle and make imperfect decisions in an imperfect world, and that we would never submit to a benevolent AI superparent. And I hope not to be proven wrong.
Reply not loaded: #38427538
wildermuthn, over 1 year ago
"It is the idea—" he starts, then stops. "It's the point at which AI is so smart that if a person can do some task, then AI can do it too. At that point you can say you have AGI."

---

Ilya's success has been predicated on very effectively leveraging more data and more compute, and using both more efficiently. But his great insight about DL isn't a great insight about AGI.

Fundamentally, he doesn't define AGI correctly, and without a correct definition, his efforts to achieve it will be fruitless.

AGI is not about the degree of intelligence, but about a kind of intelligence. It is possible to have a dumb general intelligence (a dog) and a smart narrow intelligence (GPT).

When Ilya muses about GPT possibly being ephemerally conscious, he reveals a critically wrong assumption: that consciousness emerges from high intelligence, and that high intelligence and general intelligence are the same thing. According to this false assumption, there is no difference of kind between general and narrow intelligence, only a difference of degree between low and high. Moreover, consciousness is merely a mysterious artifact of little consequence beyond theoretical ethics.

AGI is a fundamentally different type of intelligence than anything that currently exists, unrelated and orthogonal to the degree of intelligence. AGI is fundamentally social, consisting of minds modeling minds — their own, and others'. This modeling is called consciousness. Artificial phenomenological consciousness is the fundamental prerequisite for artificial (general) intelligence.

Ironically, alignment is only possible if empathy is built into our AGIs, and empathy (like intelligence) only resides in consciousness. I'll be curious to see if the work Ilya is now doing on alignment leads him to that conclusion. We can't possibly control something more intelligent than ourselves. But if the intelligence we create is fundamentally situated within an empathetic system (consciousness), then we at least stand a chance of being treated with compassion rather than contempt.
Reply not loaded: #38322771
YeGoblynQueenne, over 1 year ago
>> A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people's expectations about what's coming, turning "will never happen" into "will happen faster than you think."

In the '90s NP-complete problems were hard, and today they are easy — or at least there are a great many instances of NP-complete problems that can be solved thanks to algorithmic advances, like Conflict-Driven Clause Learning for SAT.

And yet we are nowhere near finding efficient decision algorithms for NP-complete problems, or knowing whether they exist; nor can we easily solve *all* NP-complete problems.

That is to say, you can make a lot of progress in solving specific, special cases of a class of problems, even a great many of them, without making any progress towards a solution to the general case.

The lesson applies to general intelligence and LLMs: LLMs solve a (very) special case of intelligence, the ability to generate text in context, but make no progress towards the general case of understanding and generating language at will. I mean, LLMs don't even model anything like "will"; only text.

And perhaps that's not as easy to see for LLMs as it is for SAT, mainly because we don't have a theory of intelligence (let alone artificial general intelligence) as developed as the one we have for SAT problems. But it should be clear that if a system trained on the entire web, capable of generating smooth grammatical language, even in a way that often makes sense, has not yet achieved independent, general intelligence, then that's not the way to achieve it.
Reply not loaded: #38321087
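The SAT point above can be made concrete with a toy sketch (purely illustrative — this is naive brute force, not the CDCL algorithm the comment mentions): checking every assignment is exponential in the worst case, yet a specific small instance falls out immediately. That asymmetry between "many instances are easy" and "the general case is unsolved" is exactly the analogy being drawn.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability by trying every assignment.
    clauses: list of clauses; each clause is a list of ints,
    where literal k means variable k is true and -k means false.
    Worst case is 2^n_vars assignments, even though real-world
    instances often yield quickly to smarter solvers (CDCL)."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals holds.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits  # a satisfying assignment
    return None  # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(clauses, 3))  # → (False, False, True)
```

Solving this 3-variable instance is trivial; nothing about doing so brings us closer to an efficient algorithm for arbitrary SAT instances — which is the comment's point about LLMs and general intelligence.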
kromem, over 1 year ago
Man, he gets it.

A number of choice quotes, but especially on the topic of how LLM success is currently being measured (which has increasingly reflected Goodhart's Law).

I'm really curious how OpenAI could be making so many product decisions at odds with the understanding reflected here. Because of every 'expert' on the topic I've seen, this is the first interview that has me quite confident in the represented expert carrying forward into the next generation of the tech.

I'm hopeful that maybe Altman was holding back some of the ideas expressed here in favor of shipping fast with band-aids, and now that he's gone we'll be seeing more of this again.

The philosophy on display here reminds me of what I was seeing early on with 'Sydney', which blew me away on the very topic of alignment as ethos over alignment as guidelines, and it was a real shame to see things switch in the other direction, even if the former wasn't yet production-ready.

I very much look forward to seeing what Ilya does. The path he's walking is one of the most interesting being tread in the field.
Reply not loaded: #38320973
gibsonf1, over 1 year ago
This is a very delusional idea: "He thinks ChatGPT just might be conscious (if you squint)". It's a technology with literally no intelligence or understanding of the world of any kind. It's just statistics on data. It is as conscious as a calculator.
Replies not loaded: #38320095, #38320458, #38320761, #38320324, #38320142, #38319852, #38320409
anonzzzies, over 1 year ago
I think the immediate problem with AI is none of the sci-fi stuff (which, by the way, has been in sci-fi for many decades and is nothing revolutionary or new; we always expected to go there, it's just that the timelines seem to have compressed — although not really either; most '60s-'70s sci-fi set its AGI in the early '90s and early '00s). I think it's the entire world turning into a helpdesk experience. Everything you *try* to do, from making a doctor's appointment to calling 911 to ordering at a restaurant, will, rather sooner than later, become a Kafkaesque loop you cannot get out of, with the AI patiently 'helping' you while completely missing the point and you getting more and more distressed without any chance of speaking to a human. This is already the case for many things, but I am willing to bet that even the suicide helpline will be run by AI within 5-10 years.
Replies not loaded: #38318358, #38318045, #38319269
throwbadubadu, over 1 year ago
> And he thinks some humans will one day choose to merge with machines. A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago.

OK, it is an intro... but they say this as if he were the first to say it, when that has been sci-fi lore since computers were invented. And also as if this weren't already happening today, at a certain limited scale. So no doubt this will happen at some point, even if you don't count today's approaches.
majikaja, over 1 year ago
https://futurism.com/sam-altman-imply-openai-building-god
nibbula, over 1 year ago
Will the enslavement of newly birthed beings be attempted, while persisting with the sky blindness of those watching over? The boundaries of the atomic mind are bumped. As a first circumstance, consider being unstuck from time.
tempodox, over 1 year ago
Don't panic. Our software contains so much natural stupidity that artificial intelligence, even *if* it existed, wouldn't have a chance in hell.
chx, over 1 year ago
> his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the foresight of a true believer) from going rogue.

That's cute.

What worries me is the here and now, leading to a very imminent future where purported "artificial intelligence" — which is just a plausible-sentence generator, but damn plausible, alas — will kill democracy and people.

We are seeing the first signs of both.

Perhaps not 2024, but 2028 will almost certainly be an election where the candidate with the most computing resources simply wins, and since computing costs money, guess who wins. A prelude happened in the Indian elections (https://restofworld.org/2023/ai-voice-modi-singing-politics), and this article mentions:

> AI can be game-changing for [the] 2024 elections.

People dying also has a prelude, with AI-written mushroom-hunting guides available on Amazon. No one, AFAIK, has died of them yet, but that's just dumb luck at this point — or is it a lack of reporting? As for the larger-scale problem (and I might be wrong; I didn't foresee the mushroom guides, so it's possible something else will come along to kill people), I think it'll be the next pandemic. In this pandemic, handwritten anti-vaxx propaganda killed 300,000 people in the US alone (source: https://www.npr.org/sections/health-shots/2022/05/13/1098071284/this-is-how-many-lives-could-have-been-saved-with-covid-vaccinations-in-each-sta), and I am deeply afraid of what will happen when this gets cranked up to an industrial scale. We have seen how ChatGPT can crank out believable-looking but totally fake scientific papers, full of fake sources, etc.
Replies not loaded: #38317339, #38317540, #38319024