We Have Made No Progress Toward AGI

69 points | by 13years | 21 days ago

15 comments

ebiester | 20 days ago
The hard part is that all the things the author says disprove LLMs are intelligent are failings for humans too.

* Humans tell you how they think, but it seemingly is not how they really think.

* Humans tell you repeatedly they used a tool, but they did it another way.

* Humans tell you facts they believe to be true but are false.

* Humans often need to be verified by another human and should not be trusted.

* Humans are extraordinarily hard to align.

While I am sympathetic to the argument, and I agree that machines aligned on their own goals over a longer timeframe are still science fiction, I think this particular argument fails.

GPT o3 is a better writer than most high school students at the time of graduation. GPT o3 is a better researcher than most high school students at the time of graduation. GPT o3 is a better *lots* of things than any high school student at the time of graduation. It is a better coder than the vast majority of first-semester computer science students.

The original Turing test has been shattered. We're building progressively harder standards for what counts as human intelligence, and as we find another one, we quickly achieve it.

The gap is elsewhere: look at Devin for the limitation. Its ability to follow its own goal plans is the next frontier, and maybe we don't want to solve that problem yet. What if we just decide not to solve that particular problem and lean further into the cyborg model?

We don't need them to replace humans - we need them to integrate with humans.
advisedwang | 20 days ago
My understanding was that chain-of-thought is used precisely BECAUSE it doesn't reproduce the same logic that simply asking the question directly does. In "fabricating" an explanation for what it might have done if asked the question directly, it has actually produced correct reasoning. Therefore you can ask the chain-of-thought question to get a better result than asking the question directly.

I'd love to see the multiplication accuracy chart from https://www.mindprison.cc/p/why-llms-dont-ask-for-calculators with the output from a chain-of-thought prompt.
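The experiment this comment proposes can be sketched as prompt templates plus scoring. The prompt wording below is illustrative, not taken from the linked article; exact-match scoring against the true product is what an accuracy chart like that one would need:

```python
# Two prompting styles for the same multiplication task, plus an
# exact-match scorer. Prompt wording is a hypothetical illustration.

def direct_prompt(a: int, b: int) -> str:
    # Asks for the answer with no intermediate reasoning.
    return f"What is {a} * {b}? Answer with only the number."

def cot_prompt(a: int, b: int) -> str:
    # Asks the model to externalize intermediate steps first.
    return (f"What is {a} * {b}? Think step by step, showing your "
            "working, then state the final answer on the last line.")

def score(model_answer: str, a: int, b: int) -> bool:
    # Compare the last token of the reply against the true product.
    tokens = model_answer.split()
    return bool(tokens) and tokens[-1] == str(a * b)

print(direct_prompt(1234, 5678))
print(score("The answer is 7006652", 1234, 5678))  # True: 1234 * 5678 = 7006652
```

Running both prompt styles over a grid of operand sizes and plotting `score` averages would reproduce the chart's comparison.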
mark_l_watson | 20 days ago
I mildly disagree with the author, but would be happy arguing his side on some of his points:

Last September I used ChatGPT, Gemini, and Claude in combination to write a complex piece of code from scratch. It took four hours and I had to be very actively involved. A week ago o3 solved it on its own; at least the Python version ran as-is, but the Common Lisp version needed some tweaking (maybe 5 minutes of my time).

This is exponential improvement, and it is not so much the base LLMs getting better; rather it is familiarity with me (chat history) and much better tool use.

I may be incorrect, but I think improvements in very long user event and interaction context, increasingly intelligent tool use, perhaps some form of RL to develop per-user policies for correcting incorrect tool use, and increasingly good base LLMs will get us to a place, in the domain of digital knowledge work, where we will have personal agents that are AGI for a huge range of use cases.
thefounder | 20 days ago
So the "reasoning" text from OpenAI is no more than the old broken Windows "loading" animation.
hnpolicestate | 20 days ago
One point that I think separates AI and human intelligence is an LLM's inability to tell me how it feels, or its individual opinion on things.

I think to be considered alive you have to have an opinion on things.
tboyd47 | 20 days ago
Fascinating look at how AI actually reasons. I think it's pretty close to how the average human reasons.

But he's right that the efficiency of AI is much worse, and that matters, too.

Great read.
xg15 | 20 days ago
People ditch symbolic reasoning for statistical models, then are surprised when the model does, in fact, use statistical features and not symbolic reasoning.
setnone | 20 days ago
> All of the current architectures are simply brute-force pattern matching

This explains hallucinations, and I agree with the 'braindead' argument. To move toward AGI, I believe some kind of social-awareness component should be added, since that is an important part of human intelligence.
maebert | 20 days ago
The author says we made no progress toward AGI, yet gives no definition for what the "I" in AGI is, or how we would measure meaningful progress in this direction.

In a somewhat ironic twist, it seems the author's internal definition of "intelligence" fits much closer with 1950s good old-fashioned AI, doing proper logic and algebra. Literally all the progress we made in AI in the last 20 years came precisely because we abandoned this narrow-minded definition of intelligence.

Maybe I'm a grumpy old fart, but none of these are new arguments. Philosophy of mind has an amazingly deep and colorful wealth of insights on this matter, and I don't know why it is not required reading for anyone writing a blog on AI.
moralestapia | 20 days ago
I really dislike what I now call the American *We*.

"We made it!" "We failed!" written by somebody who doesn't have the slightest connection to the projects they're talking about. E.g. this piece doesn't even have an author, but I highly doubt he has done anything more than using chatgpt.com a couple of times.

Maybe this could be Neumann's law of headlines: if it starts with "We", it's bullshit.
nsonha | 20 days ago
So? Who even wants it? Whatever the definition is, it sounds like AGI and sentient AI are really close concepts, and sentient AI is a can of worms for ethics.

On the other hand, while we definitely don't have AGI, we have all these building blocks for AI tools for decades to come, to build on top of. We've only barely scratched the surface of it.
mlsu | 20 days ago
This idea that AI can improve itself seems to me to violate the second law. I'm not a physicist by training, merely an engineer, but my argument is as follows:

- I think the reason humans are clever is that nature spent 6 billion years * millions of energetic lifetimes (that is, something on the order of *quettajoules* of energy) optimizing us to be clever.

- Life is a system which does nothing more than optimize and pass on information. An organism is a thing which reproduces itself well enough to pass its DNA (aka information) along. In some sense, it is a gigantic heat engine which exploits the energy gradient to organize itself, in the manner of a dissipative structure [1].

- Think of how "AI" was invented: all of the geometric intuitions we have about deep learning, all of the cleverness we use to imagine how backpropagation works and invent new thinking machines, all of the cleverness humanity has used to create the training dataset for these machines. This *cleverness* could not arise spontaneously; instead, it arose as a byproduct of the long existence of a terawatt energy gradient. This captured energy was expended to compress information/energy from the physical world, in a process which created highly organized structures (human brains) that are capable of being clever.

- The cleverness of human beings and the machines they make is, in fact, nothing more than the byproduct of an elaborate dissipative structure whose emergence and continued organization requires enormous amounts of physical energy: 1-2% of all solar radiation hitting Earth (terawatts), times 3 billion years (the existence of photosynthesis).

- If you look at it this way, it's incredibly clear that the remarkable cleverness of these machines is nothing more than a bounded image of the cleverness of human beings. We have a long way to go before we are training artificial neural networks with energy on the order of 10^30 joules [2]. Until then, we will not be capable of making machines that are cleverer than human beings.

- Perhaps we could make a machine that is cleverer than one single human. But we will never have an AI that is more clever than a collection of us, because the thing itself must be, in a second-law sense, less clever than us, for the simple reason that we have used our cleverness to create it.

- That is to say, there is no free lunch. A "superhuman" AI will not happen in 10, 100, or even 1,000 years unless we find the vast amount of energy (10^30 J) that will be required to train it. Humans will *always* be better and smarter. We have had 3 billion years of photosynthesis; this thing was trained in what, 120 days? A petajoule?

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7712552/

[2] Where do we get 10^30 J?

Total energy hitting Earth in one year: 5.5×10^24 J

Fraction of that energy used by all plants: 0.05%

Time plants have been alive on Earth: 3 billion years

You get 8×10^30 if you multiply these numbers. Round down.
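The back-of-envelope estimate in [2] can be checked directly. A minimal sketch using the commenter's own figures (taken at face value, not independently verified):

```python
# Reproduce the footnote's multiplication: solar input per year,
# times the fraction captured by plants, times years of photosynthesis.
solar_per_year_J = 5.5e24   # J/year hitting Earth (commenter's figure)
plant_fraction = 0.0005     # 0.05% used by all plants
years = 3e9                 # ~3 billion years of photosynthesis

total_J = solar_per_year_J * plant_fraction * years
print(f"{total_J:.2e} J")  # 8.25e+30 J, i.e. ~8 * 10^30, rounding down to ~10^30
```

So the arithmetic in the footnote does come out to roughly 8×10^30 J as claimed.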
jug | 20 days ago
A red flag nowadays is when a blog post tries to judge whether AI is AGI, because these goalposts are constantly moving and there is no agreed-upon benchmark to meet. More often than not, the author reasons about why exactly something is not AGI yet from their own perspective, while another user happily uses AI as a full-fledged employee, depending on the use case. I'm personally using AI as a coding companion, and it seems to be doing extremely well for being brain dead, at least.
thisisnotauser | 20 days ago
Imma be honest with you: this is exactly how I would do that math, and that is exactly the lie I would tell if you asked me to explain it. This is me-level AGI.
x187463 | 20 days ago
> Which means these LLM architectures will not be producing groundbreaking novel theories in science and technology.

Is it not possible that new theories and breakthroughs could result from this so-called statistical pattern matching? The necessary information could be present in the training data, with the relationship simply never before considered by a human.

We may not be on a path to AGI, but it seems premature to claim LLMs are fundamentally incapable of such contributions to knowledge.

In fact, it seems that the AI labs are leaning in such a direction: keep producing better LLMs until the LLM can make contributions that drive the field forward.