
What if AGI is not coming?

69 points | by 13years | about 1 year ago

33 comments

zone411 · about 1 year ago
I've just created a new benchmark to see how top LLMs do on NYT Connections (https://www.nytimes.com/games/connections): 267 puzzles, 3 prompts for each, uppercase and lowercase.

GPT-4 Turbo: 31.0
Claude 3 Opus: 27.3
Mistral Large: 17.7
Mistral Medium: 15.3
Gemini Pro: 14.2
Qwen 1.5 72B Chat: 10.7
Claude 3 Sonnet: 7.6
GPT-3.5 Turbo: 4.2
Mixtral 8x7B Instruct: 4.2
Llama 2 70B Chat: 3.5
Qwen 1.5 14B: 3.1
Nous Hermes 2 Yi 34B: 1.5

Notes: 0-shot. Maximum possible is 100. Partial credit is given if the puzzle is not fully solved. Only one attempt is allowed per puzzle; in contrast, human players get 4 attempts and a hint when they are one step away from solving a group. Gemini Advanced is not yet available through the API.

What I found interesting is how this benchmark reveals a large capabilities gap between the top, large models and the rest, in contrast to existing over-optimized benchmarks.
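A sketch of what the partial-credit scoring above might look like. The exact formula is an assumption (the comment only says partial credit is given for partially solved puzzles); here each Connections-style puzzle is scored as the fraction of its four groups the model grouped exactly right in its single attempt.

```python
def partial_credit(predicted_groups, true_groups):
    """Fraction of groups matched exactly. One attempt, no retries.
    This scoring rule is an assumption; the benchmark's exact formula
    isn't given in the comment."""
    true_sets = [frozenset(g) for g in true_groups]
    correct = sum(frozenset(g) in true_sets for g in predicted_groups)
    return correct / len(true_groups)

# Example: the model nails 2 of 4 groups and swaps two words in the rest.
truth = [["apple", "pear", "plum", "fig"],
         ["red", "blue", "green", "gold"],
         ["jazz", "rock", "folk", "punk"],
         ["mars", "venus", "pluto", "earth"]]
pred = [["apple", "pear", "plum", "fig"],
        ["red", "blue", "green", "gold"],
        ["jazz", "rock", "folk", "mars"],
        ["punk", "venus", "pluto", "earth"]]
print(partial_credit(pred, truth))  # 0.5
```

Averaging such per-puzzle scores over 267 puzzles and scaling to 100 would produce leaderboard numbers of the kind listed above.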
Arthanos · about 1 year ago
"No major AI technology breakthroughs in decades. Everything we are seeing is larger compute scaling." This is false. Everything from the transformer to advances in state space models has been a foundational breakthrough.
janalsncm · about 1 year ago
One of the things I wonder about is whether "intelligence" can be linearly scaled or if it's just a way of solving an optimization problem. In other words, humans have come pretty close to the peak of Mt. Smarts, and therefore being 1000x as intelligent is more like the difference between being 1 meter from the peak and a millimeter from it. You're both basically there.

In other words, maybe humans have *basically* solved the optimization problem for the environment we live in. At this point the only thing to compete on is speed and cost.
necovek · about 1 year ago
This does not really consider the "what-if" in the title, but mostly puts out the arguments for *why* it is not coming.

So a bit of a cop-out, not wanting to say it outright :)
chubot · about 1 year ago
Predictions aren't worth much without a bet, but I think the tech will plateau in the next decade, for several years or more, just like it has in the past.

One main reason is that I think people underestimate how much work OUR brains are doing when we interact with LLMs. It seems like the initial "wow" has worn off for many people, but definitely not everybody.

For coding, people will get stuck in loops, trying to get LLMs to modify LLM-generated code.

And I think the market will cool down, which seems inevitable considering Nvidia's stock price (I'm a shareholder) and the fact that they seem to be the only ones really making money.

If you compare Google after 8 years (2004) to OpenAI after 8 years (2023), the business is, uh, very different.
jimmcslim · about 1 year ago
What if intelligence is a product of consciousness, and consciousness is a product of something that can never have a physical definition and is always ethereal... i.e. a "soul"?

If we can achieve AGI simply through more and more computation, no matter how novel it is, it's ultimately ifs, loops, and arithmetic... then surely the human experience is ultimately just a 'wet LLM' (or whatever we end up calling the machine learning technology behind AGI).
skybrian · about 1 year ago
The future is not known to us. But given how inefficient machine learning seems to be, algorithmic efficiency improvements may keep the scaling going for a while? Maybe that's not a "major breakthrough," but it's improvement nonetheless.

It's also going to take a while to learn to use the new toys we already have.
gorgoiler · about 1 year ago
In life I tend to encounter two common patterns of intelligent people: those who had a good education and those who did not. I worry that when AGI comes it is going to be able to do all the things the smooth, fast-talking, wily folks can do, and none of the things the educated folks can do, and we'll accelerate not a slide into the singularity but a slide into inane banality.

How do you provoke a model into being wacky, challenging, and innovative?
d--b · about 1 year ago
This article does not debate the question in its title, makes ridiculous claims like "there hasn't been any major breakthrough in AI in decades", and does not offer any real argument.
aeturnum · about 1 year ago
I think "LLMs are using well-studied modeling techniques with overwhelming resource investment" is the most fundamental critique, and it's why I've been skeptical of the future of this wave. That's not to say we won't get (and haven't already gotten) useful tools! There's obviously a lot to do with human language interfaces and complex analysis. I'm just skeptical a whole new level is just around the corner.
erezsh · about 1 year ago
"We will soon be reaching the limits of hardware scaling for larger AI models"

Worth noting it's been said before for each version of GPT, only to be proven wrong.
anonzzzies · about 1 year ago
It's fine if it doesn't; current LLMs are already very helpful. We need them faster, smaller, and eating fewer resources. If not AGI, let's run 50 personal assistants on my phone.
JoshCole · about 1 year ago
The article claims as part of its argument that AI has not had algorithmic advances since the 80s. This is an exceedingly false premise and a common misconception among the ignorant. It would actually be fairer to say that every aspect of neural network training has had algorithmic advances than that no advances have been made.

Here is a quote from research related to this subject:

> Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore's Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.

When you apply the principle of charity, you can make their claim increasingly vacuous and eventually true: we're still doing optimization, we're still in the same general structure. The thing is, it becomes absurd when you do that. It's not appropriate to take such a premise seriously. It would be like taking seriously the argument that we haven't had any advancement in software engineering since bubble sort, since we're still in the regime of trying to sort numbers when we sort numbers.

It's like, okay, sure, we're still sorting numbers, but that doesn't make the wider point it wants to make, and it's false even under the regime it wants to make the point under.

This isn't even the only issue that makes the premise wrong. For one, AI research in the 80s wasn't centered around neural networks. Even if you move forward to the 90s, PAIP puts more emphasis on rule systems, with programs like Eliza and Student, than it does on learning from data. So it isn't as if we're in a stagnation without advances; we moved off other techniques to the ones that worked.

For another, it tries to narrow AI research progress myopically down to particular instances of deep learning, but in reality there are a huge number of relevant advances which just don't happen to be in publicly available chat bots but which are already in the literature and force a broadening. These actually matter to LLMs too, because you can take the output of a game solver as conditioning data for an LLM. This was done in the Cicero paper, and the resulting AI outperformed humans on conversational games as a consequence. So all those advancements are thereby advances relevant to the discussion, yet myopically removed from the context, despite being counterexamples. And in there we find even greater than 44x algorithmic improvements. In some cases we find algorithmic improvements so great that they might as well be infinite, as previous techniques could never work no matter how long they ran, and now approximations can be computed practically.
hatenberg · about 1 year ago
What a strange piece of writing.

"Planes and cars today fundamentally use the same technology we had decades ago, henceforth..."

The real question to ask is "does AGI matter?"
cykros · about 1 year ago
It strikes me as amazing that we went from the general recognition that AGI wasn't anywhere soon to suddenly having this widespread idea that it was right around the corner.

Sort of reminds me of the late-90s super-proto-VR stuff, where people thought any day now we'd be jacking into full-immersion (tactile, smell and all) virtual reality.

Don't get me wrong, LLMs are useful tools. But ChatGPT ain't Neuromancer. Or even Wintermute. It's Clippy after a few years of community college.
somenameforme · about 1 year ago
I find a simple thought experiment answers this question. Imagine we trained an LLM using modern methods, and gave it infinite compute, on the entirety of human knowledge from 200,000 years ago. Would that AI then be able to create calculus, even if by another name, obviously? I offer that as an example because there's zero need for knowledge of the physical world to derive calculus. All of mathematics is entirely an invention of the human mind.

I think the answer is quite obviously no. LLMs can recite their training, and recombine it in ways that correlate strongly with how a human might do so. But creating entirely new knowledge, which goes above and beyond recombinations of what is already known, remains entirely outside the domain of LLMs. An LLM trained on slow classical music is not going to create rap. And an LLM trained on rap is not going to create classical music. And those are trivial examples, since they're not entirely new, but just take a concept and use it in a slightly different way than 'normal.' Math, by contrast, is literally creating something from nothing.

And this ability to create something from nothing is probably the most key indicator of intelligence. And we've yet to even step foot on the path towards creating software with this ability.
akasakahakada · about 1 year ago
As long as philosophers keep shifting the definition of AGI, it will never come to us.
simne · about 1 year ago
This article is based on so many erroneous assumptions, I can't believe I'm seeing it on HN!

Most important, the authors don't know that all modern AI is based on back-propagation calculations, just because they are easier to implement on cheap old hardware, while natural neurons work on forward propagation, which is magnitudes faster at inference.

Unfortunately, for FP we need other hardware, but that does not mean "reaching the limits of hardware scaling"; it just means scaling limits for CURRENT hardware, a totally different sense.

Sure, if people play blind and avoid seeing obvious things, we will have a new AI winter before somebody reconsiders FP technology.
freilanzer · about 1 year ago
> Billions to trillions of dollars will be poured into research over the next decade. More humans than ever are looking for breakthroughs. We have exponentially increased the parallel efforts. LLM architecture might be unable to deliver in its current state, but it has ignited monumental investments into research that might find other paths.

That's not really true, though. Neural-network-based approaches are funded, and among those mostly transformers and large language models. Real *alternatives* aren't funded that much, imo.
zhugeyangyang · about 1 year ago
Could the next generation of large models be several models with different specialties (small and large), plus a front end for task scheduling that assigns work to the different sub-models, gaining strong capability and domain expertise while also controlling costs?
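The scheduling idea described above can be sketched as a front-end router. Everything here is illustrative: the model names, the keyword rules, and the routing strategy are assumptions, and a real system might use a small classifier model rather than keywords.

```python
# Hypothetical front end: route each task to a specialist sub-model.
SPECIALISTS = {
    "code": "code-model-large",   # expensive, used only when needed
    "math": "math-model-small",
    "chat": "chat-model-small",   # cheap default
}

def route(task: str) -> str:
    """Naive keyword router; all names and rules are illustrative."""
    lowered = task.lower()
    if any(k in lowered for k in ("def ", "bug", "compile")):
        return SPECIALISTS["code"]
    if any(k in lowered for k in ("integral", "prove", "solve")):
        return SPECIALISTS["math"]
    return SPECIALISTS["chat"]

print(route("Fix this compile error"))  # code-model-large
print(route("Solve x^2 = 4"))           # math-model-small
print(route("Tell me a joke"))          # chat-model-small
```

The cost control comes from sending most traffic to the cheap default and invoking the large specialist only when the router flags the task.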
aorloff · about 1 year ago
If with each step we are halfway closer to the goal of AGI, how long before we get there?
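The question above is Zeno's dichotomy in disguise: if each step closes half the remaining distance, the gap shrinks geometrically but no finite number of steps closes it completely. A few lines make the arithmetic concrete:

```python
# Halve the remaining distance each step: after n steps, 1/2**n is left.
remaining = 1.0
for step in range(1, 11):
    remaining /= 2
    print(f"after step {step:2d}: {remaining:.6f} of the way left")
# The covered distance 1/2 + 1/4 + ... converges to 1, but the answer
# to "how long before we get there?" is: never, under that assumption.
```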
throwaway48r7r · about 1 year ago
LLMs solve for the next word. Human intelligence solves for survival with many types of input: visual, audio, etc. You can't create an AGI if you don't solve for the problems that created human GI.
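The "next word" objective the comment refers to can be shown in miniature with a count-based bigram model: given a word, predict the most likely next word. Real LLMs do this over tokens with a neural network and a vastly larger context, but the training signal is the same "predict what comes next"; the toy corpus here is an illustration.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    return nxt[word].most_common(1)[0][0]

print(predict("the"))  # cat ("the" is followed by "cat" twice, "mat" once)
```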
rvz · about 1 year ago
What if 'AGI' was another over-promised scam to sell stochastic parrots marketed as "intelligence" for a product that not even its creators can understand when it goes badly wrong?

"Oh don't worry, AGI is coming soon and we'll solve that later" - AI founders

Yet they don't even know how long that is, since no one knows, or it never happens. Mistakes in AI are costly and very expensive.

What if their startup fails before the time arrives because they still cannot make any money and need to constantly raise VC money every week or quarter?

Again, 90-95% of these 'AI' companies will fail, with the remaining 5-10% still around, including the incumbents.
peter_retief · about 1 year ago
We cannot create life on even the simplest scale; experiments on the creation of life, like the Miller experiment, have only produced so-called building blocks, amino acids. And we are unable to create life in dead creatures that have all the building blocks in place.

What is happening is the belief that the laws of thermodynamics are probabilistic, like laws that can be broken. Laws like gravity and thermodynamics are deterministic, and those who, in their hubris, claim real intelligence in the machines we create are going to be as disappointed as those who design perpetual motion machines.
tennisflyi · about 1 year ago
It and automation have always been coming. However, it *will* be here one day.
topbanana · about 1 year ago
One thing's for sure: there are now a lot more people looking to make it happen.
f6v · about 1 year ago
On the grand scale, human intelligence evolved over millions of years. We went from personal computers to LLMs in mere decades. I get that everyone wants the Singularity now; so do I. But there's too much over-promise and delusion.
bottlepalm · about 1 year ago
I think we've way overdone the 'general intelligence' part of AI already; what we have is already 'super general intelligence'.

What's lacking is agency/autonomy. I have a feeling even 'general autonomy' will take a fraction of the power we're already using, which means 'super autonomy'... is probably already possible.

Which means ASI soonish... which leads to uncontrolled ASI, either deliberately or accidentally... which means... well, it's out of our hands at that point. Anything can happen.
fullstackchris · about 1 year ago
> What if our LLMs fail to turn into AGI?

This is a nonsense statement in and of itself. It's like wondering why an orange fails to turn into a chicken.

There are SO many missing pieces an LLM just doesn't have. LLMs could certainly be a small part of some sort of AGI *system*, but they themselves can never be AGI.
dondeee · about 1 year ago
Come on, what's next? Are we going to doubt that Jesus is coming, too? Not cool.
DeathArrow · about 1 year ago
> What if AGI is not coming?

Nothing of value will be lost.
adastra22 · about 1 year ago
AGI arrived in 2017.
exitb · about 1 year ago
We already have machines that are generally intelligent and require as much energy as a few light bulbs. Why wouldn't it eventually be possible to replicate them in silicon?