
Ask HN: Is AI converging on human-like cognition?

6 points by seansh, 4 months ago
It seems every day I see another aspect of the human mind implemented, in a very primitive form of course.

For example, chain of thought (CoT) can be thought of as the beginning of the internal monologue that you and I have, and DeepSeek's CoT reads very similar to how one would think about a problem. Did nature also figure out the same solution? And was consciousness born out of something similar to CoT?

Another example I was reading about today is Mixture of Experts (MoE), where a router dispatches tokens to subnetworks that specialize in certain domains. If this technique were to develop further, would that lead to something like human hub personalities?

I may very well be finding patterns where none exist, and these may be nothing more than metaphors at best. I have a very crude understanding of AI, which is why I'm asking here, hoping to get an expert's opinion.
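For readers unfamiliar with the architecture, here is a minimal sketch of the MoE routing idea mentioned above: a learned router scores each token and dispatches it to the top-scoring expert subnetwork(s). All sizes, weights, and names below are illustrative toy values, not any particular model's implementation.

    # Minimal toy Mixture-of-Experts layer (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, d_hidden = 8, 4, 16

    # Router: one linear layer producing a score per expert.
    W_router = rng.normal(size=(d_model, n_experts))

    # Experts: small independent feed-forward networks.
    experts = [(rng.normal(size=(d_model, d_hidden)),
                rng.normal(size=(d_hidden, d_model))) for _ in range(n_experts)]

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def moe_layer(token_vec, top_k=1):
        """Route one token vector to its top_k experts and mix their outputs."""
        gate = softmax(token_vec @ W_router)   # routing probabilities per expert
        chosen = np.argsort(gate)[-top_k:]     # indices of the highest-scoring experts
        out = np.zeros_like(token_vec)
        for i in chosen:
            W1, W2 = experts[i]
            out += gate[i] * (np.maximum(token_vec @ W1, 0) @ W2)  # ReLU MLP expert
        return out, chosen

    tokens = rng.normal(size=(5, d_model))     # a toy "sentence" of 5 token vectors
    for t, tok in enumerate(tokens):
        _, chosen = moe_layer(tok)
        print(f"token {t} -> expert {chosen[0]}")

Production MoE layers differ in many details (e.g. top-k routing with k > 1, load-balancing losses, expert parallelism), but the dispatch-and-mix structure is the part the question refers to.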

6 comments

BriggyDwiggs42, 4 months ago
Chain of thought is very weird and very impressive. When you watch it work, it looks like a series of flash-frozen human voices being recruited onto the page. We know, of course, that these are the ones that happen to produce the responses we want at the end of the chain, but there isn’t imo an underlying meaning being communicated through the words. It’s a Chinese room responding to itself, built to produce desirable paths. Humans mostly don’t work that way. The subconscious can talk to itself without words, and it’s only the most blunt, underspecified ideas that get put into internal monologue.
iExploder, 4 months ago
Imho, as a non-expert: there is nothing in these systems having an internal monologue or thinking about anything.

These are prediction models that mimic patterns in the data they were trained on and the behaviour that was reinforced.

One could argue that encyclopedic knowledge and routine work have been deprecated to a degree.

Taking into account that most human work is busywork and that there are a lot of inefficiencies due to replicated effort, in the short term I'm starting to get worried about job security...

In the medium term I expect a development boom of essentially everyone creating everything imaginable; a lot of that will probably be useful...

In the long term, once embodiment is perfected and AI can effectively learn on the go in real time, we will be truly screwed, but that still has too many challenges, like energy sources (batteries), computing power, and algorithm efficiency.
usgroup, 3 months ago
To my understanding, an LLM -- and similar models -- has a Markov chain equivalent.

There is an old argument from philosophy that any mechanical interpretation of mind has no need for consciousness. Or, conversely, that consciousness is not needed to explain any mechanistic aspect of mind.

Yet consciousness -- sentience -- is our primary differentiator as humans.

From my perspective, we are making strides in processing natural language. We have made the startling discovery that language encodes a lot about the thought patterns of the humans producing the text, and we now have machines which can effectively learn those patterns.

Yet sentience remains no less a mystery.
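As a toy illustration of the "Markov chain equivalent" point: an autoregressive model with a bounded context of k tokens defines a Markov chain whose state is the last k tokens, so each step depends only on that state. The counts-based model below is entirely made up for illustration and says nothing about real LLMs beyond that structural point.

    # Toy bounded-context next-token model, i.e. a Markov chain over k-token states.
    # The corpus, k, and counts-based "model" are invented purely for illustration.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the rat".split()
    k = 2  # context window: the Markov "state" is the previous k tokens

    # "Train" by counting which token follows each k-token state.
    transitions = defaultdict(Counter)
    for i in range(len(corpus) - k):
        transitions[tuple(corpus[i:i + k])][corpus[i + k]] += 1

    def next_token(state):
        """Sample the next token given only the current state (the Markov property)."""
        counts = transitions.get(state)
        if not counts:                      # unseen or terminal state: fall back
            return random.choice(corpus)
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights)[0]

    random.seed(0)
    out = ["the", "cat"]
    for _ in range(6):
        out.append(next_token(tuple(out[-k:])))  # each step depends only on the last k tokens
    print(" ".join(out))

The philosophical point in the comment is that nothing in this state-transition machinery appears to require consciousness, however large the state space becomes.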
mettamage, 3 months ago
> Did nature also figure out the same solution? And was consciousness born out of something similar to CoT?

I don't think it's that. I think what we're doing is "conditioning" / "teaching" computers to be useful to us, so we use models that we find useful and instill them (sub-consciously) on computers. At least, this happens to some extent. Sometimes we see a completely foreign model that a computer applies well, and then we use that.

I don't think one can infer much about nature when it comes to computers. Not in this way at least. What is happening much more is that we're seeing (part of) a reflection of ourselves.
theothertimcook, 4 months ago
Disclosure: Not an expert, barely functioning human.

No, humans are capable of a level of stupidity well beyond the theoretical potential of computers.
davydm, 4 months ago
no