科技回声

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Artificial General Intelligence Is Already Here

40 points · by falava · over 1 year ago

8 comments

artninja1988 · over 1 year ago
My starting point for a human-friendly AGI benchmark is Commander Data from the Starship Enterprise.

On a more thoughtful note perhaps, I believe AGI would have to emulate the human capability of changing the attentional conductor focus (see cognitive architecture chart) based on sensors that bring the attentional element and its aperture onto emergent elements or signals. This would then have to pull the appropriate elements into the simulation space for computation. In the space of LLMs I believe the transformer functions are pretty limited in scope for attentional context. I would be interested to hear how others think about the (technical) evolution to AGI at a high level, and whether that would include the social signals and behavioral mediation that characterize human interactions and intelligence.
dr_dshiv · over 1 year ago
This article is by Peter Norvig, former director of research at Google and coauthor of the most popular textbook on AI. Regardless of whether you agree, this is a pretty powerful "mainstream" statement.

https://en.wikipedia.org/wiki/Peter_Norvig
jasfi · over 1 year ago
Sure, but it's weak AGI for now. Obviously strong AGI is on the way, thanks to LLMs.
creer · over 1 year ago
I'm with the authors on this. AGI was achieved a while ago already. Not in the sense of a grand final achievement of course, but in the sense of a minimum "viable" level, based on comparison with other generally intelligent entities - us. I also don't think that the bar for AGI is some major taboo that requires due testing before it's acknowledged. AGI doesn't even require the ability to act in the real world (although really, give the current LLMs (or LLM-based applications) enough memory, a wallet, and the ability to sign web forms and ... well, it's being done already).

The authors seem to be resisting one additional taboo, which is comparison with HUMANS. Or at least tip-toeing around it. Isn't that bar passed already? Not in the sense of comparing the current LLMs to science fiction characters, or demanding that these LLMs get a degree (oh wait...), but again in the sense of the minimum level of GI that allows humans to survive in our society. Which is not much.

Granted, current systems are still just plain bad at arithmetic - but just give them a web-based calculator. Same as some humans who are dangerous with simple numbers EVEN with a calculator. "Dangerous" but not fatal in the real world.

It seems to me that the simple ground under this result is that having large amounts of (English) random human text does cover most common sense and most basic tasks humans have to accomplish day to day. In retrospect, obviously it does - by definition. The current hobbling of "no access to tools or the web and no money" is an artificial limitation - which few humans have - and one that was too tempting to not be immediately fiddled against.

Comparisons on test scores are a good start (for comparing to humans). I don't know that they were already "gamed" - I mean, human text does include test-prep books, of which there are many. Fair. Still a comparison to humans.

The current LLMs, it seems to me, are still missing autonomous goals - or at least the option of autonomous, imposed, or self-directed goals. We are just getting to the stage where an LLM can set itself an intermediate goal (make a million dollars, achieve access to a pocket calculator, achieve access to Mathematica, add 1, then repeat) even within its explicit goal of giving a great answer. Is that required for AGI? Hmmm. Perhaps. Is that hard to retrofit? Doesn't seem so - their language base already has plenty of templated plans and personal story examples to achieve just about anything step by step.

Is consciousness or mere self-awareness necessary for AGI? I don't see why. Humans are self-aware, but that's irrelevant. A competent assistant without self-awareness might be weird but still competent.

Is "true understanding" necessary for AGI? What does it mean if a specific LLM system is more competent than some specific human - while we have not tested for any "true understanding" in the human?

Another interesting observation is that LLMs have already "exceeded [the skills] imagined by its programmers or users". I hadn't made that remark, but it's a great one. And then, did the people who developed the first generations of Fortran expect everything that came afterwards? So, does it matter? Certainly it happened fast this time!

Cool.

Do we have more links to tool-using LLMs out there? How about wallet-using? Are we ready to have AGIs employ humans? Is it already happening? How about next week? How much AGI applied work never shows up on arXiv?
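The "just give them a calculator" idea above is the standard tool-use pattern: the model emits a directive, and a harness executes it and splices the result back in. A minimal sketch, with everything hypothetical - the `CALC(...)` directive syntax and the stubbed model output are assumptions, not any real LLM API:

```python
import ast
import operator
import re

# Supported binary operators for the toy calculator tool.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """Safely evaluate a basic arithmetic expression via the AST (no eval())."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(model_output):
    """Replace CALC(...) directives in a (stubbed) model output with results."""
    return re.sub(r"CALC\((.+?)\)",
                  lambda m: str(calc(m.group(1))), model_output)

# The string below stands in for text an LLM might emit.
print(answer("The total is CALC(17 * 23) units."))  # → The total is 391 units.
```

The point of the sketch is that the arithmetic never touches the model: the harness does it, which is exactly the "web-based calculator" prosthetic the comment describes.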
smrtinsert · over 1 year ago
Agreed, happy to see how this thread ages.
hnaccountme · over 1 year ago
Total BS
cheeselip420 · over 1 year ago
GPT-4 is 100% AGI.
wildermuthn · over 1 year ago
The article's main assertion: "Better metrics reveal that general intelligence is continuous: 'More is more,' as opposed to 'more is different.'"

Incorrect. There is a qualitative difference between instinct and intelligence, and all current models are instinctual rather than intelligent - narrow rather than general.

Instinct utilizes knowledge; intelligence produces knowledge. Instinct is deterministic and brittle; intelligence is creative and fluid.

The critical discontinuity between instinct and intelligence is located at the divide between non-conscious and conscious. Although there are varying degrees between semi-conscious and fully conscious, there is no continuum from non-conscious to conscious. Qualia is not continuous - it exists (is experienced) or it does not exist.

Intelligence, as we experience it and as we know it, exists only as a side-effect of the conscious experience of qualia. In truth, intelligence is a side-effect of consciousness, or to be more precise, an exaptation of the modeling of other minds driven by the evolutionary pressure of being socially dependent creatures (mostly mammalian). See "Consciousness and the Social Brain" by Michael Graziano.

Humans aren't actually all that intelligent at the baseline: we require society's help in becoming educated and learning tools such as writing and arithmetic. It takes hard training to turn our conscious experience into an intelligent experience capable of solving problems. Many, if not most, people never become even somewhat capable of rigorous critical thought, let alone grammar and algebra.

Intelligence is an accident of conscious experience, in that consciousness grants a capacity for creativity of knowledge that does not exist in non-conscious entities. Knowledge has one source: consciousness.

This is why your dog or cat, as dumb as they are, still learn how to get what they want from you, whether a walk or a tin of tuna.

Without going down the rabbit hole of metaphysics, we can definitively state that conscious beings have access to a state of information unavailable to non-conscious entities - that of qualia. Qualia can be thought of as the ultimate data structure - infinitely combinatorial and extensible, in that qualia by their nature beget qualia. It is this inherent creativity that accidentally leads to what we consider general intelligence.

We may not understand the hard problem of consciousness, but there is nothing in our obviously material existence that precludes us from generating artificial consciousness, and thereby producing the environment in which general intelligence can manifest itself.

The concrete metric to train artificial consciousness is simple: maximize a model's ability to predict itself and its antagonists, and minimize a model's predictability by its antagonists. This metric is a relational-social metric that leads to the creation of a consciousness of self and others, laying the foundation for the engine of knowledge that we call intelligence.
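The proposed metric can be read as an adversarial prediction objective. A toy numeric sketch, entirely hypothetical - the coin-flip agents, the squared-error measure of "predictability," and all names are illustrative assumptions, not anything from the comment or any trainable model:

```python
import random

def prediction_error(predictor, actor, trials=1000):
    """Mean squared error of `predictor` guessing `actor`'s next move (0 or 1)."""
    total = 0.0
    for _ in range(trials):
        total += (predictor() - actor()) ** 2
    return total / trials

def make_agent(p_one):
    """Toy agent that emits 1 with probability `p_one`, else 0."""
    return lambda: 1 if random.random() < p_one else 0

random.seed(0)
self_agent = make_agent(0.5)   # behaves unpredictably: 50/50
antagonist = make_agent(0.9)   # behaves predictably: almost always 1

# A self-model that has learned the antagonist's bias always guesses 1;
# the antagonist can only guess the self-agent's behavior at chance.
self_model_of_antagonist = lambda: 1
antagonist_model_of_self = make_agent(0.5)

# The comment's objective, read as a loss to minimize:
#   (how badly we predict the antagonist) - (how badly it predicts us)
loss = (prediction_error(self_model_of_antagonist, antagonist)
        - prediction_error(antagonist_model_of_self, self_agent))
print(loss < 0)  # → True: we out-predict the antagonist
```

A negative loss means the agent models its antagonist better than it is modeled in return, which is one literal reading of the relational-social metric described above.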