
Changed My Mind After Reading Larson's "The Myth of Artificial Intelligence"

10 points, by abss, over 1 year ago

I've recently delved into Erik J. Larson's book "The Myth of Artificial Intelligence," and it has reshaped my understanding of the current state and future prospects of AI, particularly concerning Large Language Models (LLMs) and the pursuit of Artificial General Intelligence (AGI).

Larson argues convincingly that current AI (I include LLMs here, since they are still based on induction and statistics), despite its impressive capabilities, represents a kind of technological dead end in our quest for AGI. The notion of achieving a true AGI, a system with human-like understanding and reasoning capabilities, seems more elusive than ever. The current trajectory of AI development, heavily reliant on data and computational power, doesn't necessarily lead us towards AGI. Instead, we might be merely crafting sophisticated tools, akin to cognitive prosthetics, that augment but do not replicate human intelligence.

The book emphasizes the need for radically new ideas and directions if we are to make any significant progress toward AGI. The concept of a technological singularity, where AI surpasses human intelligence, appears more like a distant mirage than an approaching reality.

Erik J. Larson's book compellingly highlights the deficiencies of deduction and induction as methods of inference in artificial intelligence. It also underscores the lack of a solid theoretical foundation for abduction, suggesting that current AI, including large language models, faces significant limitations in replicating complex human reasoning.

I'm curious to hear your thoughts on this. Do you think our current approach to AI, especially with LLMs, is fundamentally limited? Is the idea of AGI as we conceive it now just a myth?
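To make the inference distinction concrete, here is a toy sketch (my own illustration, not from Larson's book): deduction applies a known rule to a cause to conclude its effect, induction generalizes a rule from observed cases, and abduction guesses which cause best explains an observed effect. The rule table and function names below are invented for this sketch.

```python
# Toy illustration of the three inference modes discussed above.
# The RULES table and function names are made up for illustration, not from the book.

RULES = {"rain": "wet grass", "sprinkler": "wet grass", "sun": "dry grass"}

def deduce(cause: str) -> str | None:
    """Deduction: apply a known rule to a known cause and conclude its effect."""
    return RULES.get(cause)

def induce(observations: list[tuple[str, str]]) -> dict[str, str]:
    """Induction: generalize cause -> effect rules from repeated observations."""
    return {cause: effect for cause, effect in observations}

def abduce(effect: str) -> list[str]:
    """Abduction: given an observed effect, list the causes that would explain it.
    Picking the *best* explanation is the step with no solid theory, per Larson."""
    return [cause for cause, result in RULES.items() if result == effect]

print(deduce("rain"))       # -> wet grass
print(abduce("wet grass"))  # -> ['rain', 'sprinkler']; choosing between them is the hard part
```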

8 comments

SirensOfTitan, over 1 year ago
Gigantic asterisk, since this is just a general impression, but I think cognitive science, and as a result ML, have a huge problem coming to terms with their own vocabulary. Imprecise terminology seems to have plagued these fields since the beginning: strong AI, weak AI, AGI, cognition, consciousness vs. wakefulness, what are the essential features of consciousness?

As a result, you can read about AGI and think the authors are debating whether a system is an AGI or a proto-AGI, when they're actually debating where the line is drawn.

Taking a page from Philosophy in the Flesh, I think that human reasoning and cognition are intrinsically tied to the body; even metaphor is inherently body- and environment-related. Have we really considered what the human mind would act like completely disembodied? Is language on its own really the right context for AGI to be born in?
aristofun, over 1 year ago
I don't mean to sound arrogant or disrespectful, but the fact that ChatGPT has nothing to do with intelligence and is just hype material has been clear to me without any books, just from a few interactions with it and from learning some basic principles of the architecture.
sk11001, over 1 year ago
> Originally published: 6 April 2021

The things the book talks about are not the same things that exist today.
coolvision, over 1 year ago
It was written before ChatGPT, GPT-4, and maybe even before GPT-3, so his impressions of the limitations of LLMs should not be taken as fact and should be re-evaluated. If you check for yourself, I'm sure you will find that ChatGPT is quite capable of abductive reasoning.
jryan49, over 1 year ago
Yeah, I don't think LLMs are going to turn into AGI anytime soon. The model is locked and not continuously updating. It's more like statistical knowledge than intelligence.
gardenhedge, over 1 year ago
I think that's a pretty common opinion here. Here's what I think: LLMs are not AGI, but they are currently the best way to leverage everyone else's knowledge.
kypro, over 1 year ago
> Larson argues convincingly that current AI (I include LLMs here, since they are still based on induction and statistics), despite its impressive capabilities, represents a kind of technological dead end in our quest for AGI.

The idea that intelligence isn't just statistics is, to me, the far more radical position here. If it's not statistical modelling, then what is it? Intelligence is not magic. Any prediction requires some amount of probabilistic modelling.

That said, I think there is probably a significant amount of meta-modelling required to achieve true AGI, and it seems unlikely that current architectures can achieve this. The fact that LLMs don't seem to have inner thought, and the fact that learning and inference are separate, are huge limitations of current algorithms.

It seems to me that when you think about what we humans do, the ability to meta-analyse, pipe our thoughts through various processes in our heads, then discard inaccuracies and adapt to new information is important. LLMs seem rigid in their thinking because they don't do this meta-reasoning and are completely unable to adapt to new information. Current LLMs act kind of like humans do during exams. In an exam we feel we must provide an answer to every question, and if we don't know the answer we'll just make something up. But outside of exams humans don't do this. If we don't know something, we'll gather information, test theories, ask questions, then adapt and take on board new information.

LLMs don't do this, and therefore feel rigid in their thinking and often act irrationally. For example, you can convince an LLM of something with some faulty information or logic, then start a new chat and it has reverted back to whatever position it was giving before. They only really get better at reasoning when we humans walk them through meta-reasoning processes, but they cannot do this themselves, and even when we help them with it they do not adapt in light of it.

What I will say is that we have clearly made one critical discovery towards AGI (and superintelligence), and that is that scale is important. We also seem to be making progress on the algorithmic side, and are certainly edging closer to something that, with enough scale, could approach something that looks like AGI.

I'll also add that what LLMs are able to do in their infancy is frankly incredible, and it's not hard to imagine that it would only take a few additional algorithmic breakthroughs from here to get to something very close to AGI, if not achieve it. My guess is that we already have the scale and much of the base neural network architecture required. The main limitation, in my opinion, is that training and inference are separate steps, not that LLMs use statistics.

Finally, no one serious that I'm aware of believes that simply scaling current iterations of LLMs will achieve AGI anyway. Dismissing the possibility of AGI because the current LLM architecture is missing a few things seems both silly and uncharitable to me. The important disagreement here is just how many more algorithmic breakthroughs we need to achieve AGI. We all know GPT-4's reasoning ability is rooted too wholly in low-level statistical reasoning, and it is incapable of the higher-level meta-reasoning required for more advanced intelligence. The real question is how far we are from making progress on this.

Just my opinions anyway.
haltist, over 1 year ago
This is a great opportunity to tell you about my patented and trademarked architecture for achieving AGI: the "panoptic computronium cathedral"™. All I need is $80B to make it happen and make AGI a reality. Its construction will not be easy, but once completed it will allow those who worship in the cathedral to enact the will of the mathematical god that dwells within its GPUs, and show all the non-believers that all is mathematics (mostly just a bunch of matrix multiplications and floating point arithmetic).

The radical architecture required to achieve AGI is to treat it as a religion and build artifacts, rituals, and practices that will manifest the true technological god and govern the world with nothing more than mathematics implemented on GPUs.