
Ask HN: Why are current LLMs not considered AGI?

9 points, by philippta, over 1 year ago
When thinking about humans, no matter their age or experience, we have no problem considering them generally intelligent. Yet humans are not omniscient, make things up (hallucinate), and sometimes lack proper reasoning.

In contrast, LLMs already have far more knowledge than the average human, mostly reason well, and only occasionally hallucinate.

Surely they aren't artificial superintelligences, but it feels like the term AGI could apply.

10 comments

gryfft, over 1 year ago
At this point, the thing holding ChatGPT back from a universally accepted "AGI" label is its few remaining sub-human skill sets, like forgetting things from too many tokens ago that a human would not forget.

My prediction is that over the next 6-48 months we'll see the emergence of LLMs with "working memory," "short-term memory," and "long-term memory": working memory being more or less current LLM capabilities, short-term memory being a fast one-shot summarization that gets temporarily stored raw on disk, and long-term storage getting transcribed into a LoRA-like module overnight based on the perceived importance of the short-term memories.

I think emotion analogues will be important for that last part, as emotion processing plays a big role in memory formation (this is an adaptation: we remember things we had strong emotions about more strongly because they're more important to us).

So, 6-48 months to computer systems that feel (/have an emotion analogue) and sleep to dream (/summarize into long-term storage overnight).

Those developments, I'm confident, will absolutely silence anyone who says it's not "real" AGI. But then, at that point, you may have built a being that can have feelings about its own existence, and then things get Interesting.
d--b, over 1 year ago
Because people think it can't infer anything about things it hasn't seen before. Like it can't build an internal model of some problem that it hasn't been trained for.

> let's play fizz buzz but with 3 and 7

ChatGPT:

Sure, let's play Fizz Buzz with 3 and 7. The rules are:

- Instead of saying "Fizz" for multiples of 3, you say "Fizz."
- Instead of saying "Buzz" for multiples of 7, you say "Buzz."
- If a number is a multiple of both 3 and 7, you say "FizzBuzz."

Let's start counting. I'll write the sequence, and you tell me the appropriate response for each number.

1, 2, Fizz, 4, Buzz, Fizz, Fizz, Buzz, Fizz, 11, Fizz, Fizz, Buzz, Fizz, 16, FizzBuzz, 18, 19, Buzz, Fizz, 22, 23, Fizz, Buzz, 26, Fizz, Fizz, Buzz, 29, FizzBuzz, 31, 32, Fizz, Buzz, Fizz, 34, Buzz, Fizz, 37, Fizz, Buzz, Fizz, Fizz, Buzz, 41, Fizz, FizzBuzz, 44, Buzz, Fizz, 46, Fizz, 49, Buzz, Fizz, 52, Fizz, Fizz, Buzz, 56, Fizz, 58, 59, Buzz, FizzBuzz, 61, 62, Fizz, Buzz, Fizz, 64, Buzz, Fizz, 67, Fizz, Buzz, Fizz, 71, Fizz, FizzBuzz, 74, Buzz, Fizz, 76, Fizz, 79, Buzz, Fizz, 82, Fizz, Fizz, Buzz, 86, Fizz, 88, 89, Buzz, FizzBuzz, 91, 92, Fizz, Buzz, Fizz, 94, Buzz, Fizz, 97, Fizz, Buzz.

Great job! Feel free to continue the sequence or let me know if you'd like to do something else.
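For comparison, the ground truth for this variant is easy to generate. A minimal Python sketch (the `fizzbuzz` helper is ours, not from the thread):

```python
def fizzbuzz(n, fizz=3, buzz=7):
    """Return the Fizz/Buzz label for n, using the given divisors."""
    if n % fizz == 0 and n % buzz == 0:
        return "FizzBuzz"
    if n % fizz == 0:
        return "Fizz"
    if n % buzz == 0:
        return "Buzz"
    return str(n)

# Print the correct sequence for 1..21 (21 is the first FizzBuzz).
print(", ".join(fizzbuzz(n) for n in range(1, 22)))
# → 1, 2, Fizz, 4, 5, Fizz, Buzz, 8, Fizz, 10, 11, Fizz, 13, Buzz, Fizz, 16, 17, Fizz, 19, 20, FizzBuzz
```

Note that the transcript quoted above diverges from this as early as position 5, where ChatGPT emits "Buzz" even though 5 is not a multiple of 7.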
dTal, over 1 year ago
They aren't "general" enough. Sure, they're competent at solving problems represented in the training data, and can even in some cases abstract over them and find patterns. But they don't have "slow thought". They can't think without talking, and they can only talk like a human, and humans don't habitually narrate their train of thought, so there's a limit to how far "let's think step by step" can take them. The result is that they can't abstract *recursively*: since they are architecturally incapable of "thinking harder" about a problem, there will always be some threshold of input novelty that loses them, and right now that threshold is actually unimpressively low when you get down to brass tacks.
beanbean01, over 1 year ago
Here's the problem with LLMs: https://arxiv.org/abs/2301.06627

Basically, they're missing a lot of the brain machinery required to function. For example, if you ask them to solve a math problem, they do just fine ... until you ask them to apply an inference rule on top of it that takes them outside of their training set. The result is something that LOOKS like AGI until you realize it's read the entire Internet.
ilaksh, over 1 year ago
Everyone has a different definition of what AGI means, and no one realizes it or says which one they mean.

People whose worldview aligns with strong artificial intelligence often want to call current systems AGI, depending on their definition, but are afraid to do so because they will be ridiculed by the "non-believers".

The opposite worldview sometimes just moves the goalposts as soon as some capability is unlocked.
og_kalu, over 1 year ago
Some people already believe this: https://www.noemamag.com/artificial-general-intelligence-is-already-here

Indeed, by testable definitions of general intelligence (i.e. ones that all or nearly all humans would also pass), the current state of the art is AGI.
danielmarkbruce, over 1 year ago
1 - Some people do consider them AGI, or at least consider an agent that uses an LLM as one part of a system to be AGI. I do.

2 - I have some sense that OpenAI already has a system (that they haven't released) that many reasonable people would consider AGI.
aristofun, over 1 year ago
Because there is no agreement on what ordinary "intelligence" is, let alone an artificial one.

That's why.
zzzzzzzza, over 1 year ago
Peter Norvig agrees with you.
thiago_fm, over 1 year ago
LLMs aren't smart at all; anybody serious in the AI field understands their many limitations. I wouldn't even use the word "intelligence" and LLM in the same sentence, much less AGI.

You're just impressed that it can write well and 'sounds' precise, but that's the effect of a lot of RLHF, transformers, and H100s, not something capable of solving humanity's biggest problems or greatly improving our comprehension of the universe.

It isn't precise at all, and if you gave it a whole afternoon it would still come up with the same stupid solution; give a human an afternoon, and they might come up with something that also has common sense.

Go read about objective-based AI or other subjects in the field. It's much more promising than that parrot powered by H100s :-)

Sure, there are investors throwing money at any company doing its niche LLMs or whatever, but it's snake oil at best.

LLMs are just a new, interesting interface between humans and computers and data. We need other AI-related fields to develop for us to unlock the real power of "intelligence". Computers are just as smart as they were in the 2010s.