Stephen Wolfram, in his tutorial article on ChatGPT [1], concludes the following about the main differences between how humans and ChatGPT learn:<p>When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability - even with respect to current computers, but definitely with respect to the brain.<p>[1] What Is ChatGPT Doing ... and Why Does It Work?<p><a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/" rel="nofollow noreferrer">https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...</a>
> LLMs produce their answers with a fixed amount of computation per token<p>I'm not <i>that</i> confident that humans don't do this. Neurons are slow enough that we can't really have a very large number of sequential steps behind a given thought. Longer, more complex considerations are difficult (for me at least) without at least thinking out loud to cache my thoughts in auditory memory, or having a piece of paper to store and review my reasoning steps. I'm not sure this is very different from an LLM prompted to reason step by step.<p>The main difference I can think of is that humans can learn, while LLMs have fixed weights after training. For example, once I've thought carefully and convinced myself through step-by-step reasoning, I'll remember that conclusion and fit it into my knowledge framework, potentially re-evaluating other beliefs. That's something today's LLMs don't do, but mainly for practical reasons rather than theoretical ones.<p>I believe the extent of world modelling done by LLMs remains an open question.
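To make the "fixed amount of computation per token" point concrete, here is a purely illustrative toy sketch (not how any real LLM is implemented; DEPTH, forward_pass, and the vocabulary are made up for the illustration): each generated token costs exactly one pass through a fixed-depth stack, and a "think step by step" prompt only buys more total compute by making the model emit more tokens.<p><pre><code>import random

DEPTH = 4                                       # fixed number of "layers": fixed work per token
VOCAB = ["yes", "no", "so", "therefore", "maybe", "."]

def forward_pass(context):
    """One fixed-cost pass: DEPTH steps, no internal loops over the data."""
    state = hash(tuple(context))
    for _ in range(DEPTH):                      # always the same amount of work per token
        state = hash((state, len(context)))
    return random.Random(state).choice(VOCAB)   # toy next-token "prediction"

def generate(prompt_tokens, max_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):             # total compute grows with tokens emitted,
        tokens.append(forward_pass(tokens))     # but per-token cost stays constant
    return tokens

print(generate(["Q", ":", "is", "7", "prime", "?"], max_new_tokens=5))
print(generate(["Q", ":", "is", "7", "prime", "?",
                "think", "step", "by", "step"], max_new_tokens=12))
</code></pre><p>The second call is the "reason step by step" case: nothing about the per-token pass changes; the model just spends more passes, which is the only way it gets to "think longer".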
The "world model" is basically the old school idea of AI, which has been mostly abandoned because you can get incredibly good results from just ingesting gobs of text. But I agree that it's a necessity for AGI; you need to be able to model concepts beyond just words or pixels.
The answer is that humans have genitalia.<p>And while that may seem trite, it's really not: you can't separate human thinking from the underlying hardware.<p>Until LLMs are able to experience real emotion, and emotion here really means a stick by which to lead the LLM, they will always be different from humans.
More of a scaling issue: humans do continuous* online learning, while LLMs get retrained once in a while.<p>* I'm no expert; 'continuous' might be an oversimplification.
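As a rough illustration of that contrast, here is a minimal sketch (a hypothetical 1-D toy model; nothing to do with how LLMs are actually trained): the online learner updates after every example it sees, while the periodically retrained one keeps its weights frozen between occasional retraining runs.<p><pre><code>import random

def online_learner(stream, lr=0.01):
    w = 0.0
    for x, y in stream:                      # learn from each example as it arrives
        w -= lr * (w * x - y) * x            # one SGD step on squared error
    return w

def periodically_retrained(stream, retrain_every=500, lr=0.01, epochs=3):
    w, data = 0.0, []
    for i, (x, y) in enumerate(stream, 1):
        data.append((x, y))                  # weights stay frozen in between...
        if i % retrain_every == 0:           # ...until the next retraining run
            for _ in range(epochs):
                for bx, by in data:
                    w -= lr * (w * bx - by) * bx
    return w

# Stream where the "true" relationship is y = 3x plus a little noise.
stream = [(x, 3 * x + random.gauss(0, 0.1))
          for x in (random.uniform(-1, 1) for _ in range(2000))]
print(online_learner(stream), periodically_retrained(stream))
</code></pre><p>Real systems blur this line somewhat (fine-tuning, retrieval, in-context learning), but the basic asymmetry described above is the one sketched here.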
The difference is, LLMs are way better than most humans at impressing gullible morons, even highly intelligent gullible morons. In truth an LLM is only an incomprehensible statistical model that does what it's told to do, without agency, motivation, or ideas. Smart people have built something they themselves cannot fully understand, and the results remind me a lot of what Weizenbaum said about ELIZA: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."