
What do humans have that LLMs don't

23 points by uka over 1 year ago

8 comments

teleforce over 1 year ago
Stephen Wolfram, in his tutorial article on ChatGPT, in his conclusions on the main differences between human and ChatGPT learning approaches [1]:

When it comes to training (AKA learning) the different "hardware" of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that's probably rather different (and in some ways much less efficient) than the brain. And there's something else as well: unlike even in typical algorithmic computation, ChatGPT doesn't internally "have loops" or "recompute on data". And that inevitably limits its computational capability - even with respect to current computers, but definitely with respect to the brain.

[1] What Is ChatGPT Doing and Why Does It Work:

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
mitthrowaway2 over 1 year ago
> LLMs produce their answers with a fixed amount of computation per token

I'm not *that* confident that humans don't do this. Neurons are slow enough that we can't really have a very large number of sequential steps behind a given thought. Longer complex considerations are difficult (for me at least) without at least thinking out loud to cache my thoughts in audible memory, or having a piece of paper to store and review my reasoning steps. I'm not sure this is very different from an LLM prompted to reason step by step.

The main difference I can think of is that humans can learn, while LLMs have fixed weights after training. For example, once I've thought carefully and convinced myself through step-by-step reasoning, I'll remember that conclusion and fit it into my knowledge framework, potentially re-evaluating other beliefs. That's something today's LLMs don't do, but mainly for practical reasons, rather than theoretical ones.

I believe the extent of world modelling done by LLMs still remains an open question.
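A toy, runnable sketch of the picture this comment (and the Wolfram quote above) describes; it is my own illustration, not code from the thread, and every name in it is made up. Each generated token passes through the same fixed stack of "layers" with no data-dependent loops, so the only way to spend more compute on a hard question is to emit more intermediate tokens, which is roughly what step-by-step prompting does:

```python
def make_toy_layers(depth=4):
    # Stand-in for a transformer stack: each "layer" is just a function.
    return [lambda h, i=i: h + i for i in range(depth)]

def forward_pass(token, layers):
    # Fixed depth, no loops over the data, no early exit:
    # the cost is identical for every token, easy or hard.
    h = hash(token) % 100  # toy stand-in for an embedding
    for layer in layers:
        h = layer(h)
    return h

def generate(prompt_tokens, layers, max_new_tokens=8):
    # "Thinking out loud": each appended token buys exactly one more
    # fixed-size forward pass, so total compute scales with output length,
    # and the context itself serves as the external working memory.
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        h = forward_pass(context[-1], layers)
        context.append(f"step{h}")  # toy stand-in for sampling a token
    return context

print(generate(["what", "is", "2+2"], make_toy_layers()))
```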
TillE over 1 year ago
The "world model" is basically the old-school idea of AI, which has been mostly abandoned because you can get incredibly good results from just ingesting gobs of text. But I agree that it's a necessity for AGI; you need to be able to model concepts beyond just words or pixels.
PH95VuimJjqBqy over 1 year ago
The answer is that humans have genitalia.

And while that may seem trite, it's really not. You can't separate human thinking from the underlying hardware.

Until LLMs are able to experience real emotion, and emotion here really means a stick by which to lead the LLM, it will always be different from humans.
nittanymount over 1 year ago
LeCun's voice in this post; it sounds like he knows the answers for sure, haha ...
lagrange77 over 1 year ago
More of a scaling issue: humans do continuous* online learning, while LLMs get retrained once in a while.

* I'm no expert; 'continuous' might be oversimplified.
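A minimal sketch of the contrast this comment draws, again my own illustration with made-up numbers rather than anything from the thread: an online learner updates its weights after every example it sees, while a deployed model's weights stay frozen between occasional retraining runs.

```python
import random

def online_learner(stream, lr=0.1):
    # Continuous(ish) online learning: one gradient step per observation,
    # so the model changes immediately after every example.
    w = 0.0
    for x, y in stream:
        w -= lr * (w * x - y) * x
    return w

def periodically_retrained(stream, batch_size=100, lr=0.1, epochs=5):
    # Frozen weights between "retraining" events: data accumulates in a
    # buffer, and the weights only move when a new training run happens.
    w, buffer = 0.0, []
    for x, y in stream:
        buffer.append((x, y))
        if len(buffer) == batch_size:  # the occasional retrain
            for _ in range(epochs):
                for bx, by in buffer:
                    w -= lr * (w * bx - by) * bx
            buffer.clear()
    return w

# Toy data from the line y = 3x; both learners should recover w ≈ 3,
# but only the first one was up to date after every single example.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(500))]
print(online_learner(data), periodically_retrained(data))
```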
cc101 over 1 year ago
Subjective experience.
floppiplopp over 1 year ago
The difference is, LLMs are way better than most humans at impressing gullible morons, even highly intelligent gullible morons. In truth it's only an incomprehensible statistical model that does what it's told to do, without agency, motivation or ideas. Smart people have built something they themselves cannot fully understand, and the results remind me a lot of what Weizenbaum said about ELIZA: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."