Hello.<p>I just watched the GPT-4 Developer livestream. When GPT was reasoning about the tax problem, I got very confused. If this model predicts what word comes next (I know it's much, much more complicated than that), how can it understand and reason about a problem?<p>Is it a hand-picked problem that it can solve accurately, while it would fail on similar tasks in the future?<p>Or is there some other module that gives GPT its reasoning ability?<p>Or is it some sort of emergent property of a system that predicts the next word and has been trained on a huge amount of data?<p>I know they haven't released much in the paper, but I want to know if anyone has any theories about this.<p>Also, if it can understand and reason about a given block of text, why not give it the description of an already solved but very complicated math/physics problem and ask it to solve that? If it solves that successfully, we could try an unsolved problem.
It doesn’t reason. People infer reasoning ability because that’s the only model we have in our brains for something that appears to communicate intelligently.<p>Stephen Wolfram wrote a good essay about how ChatGPT works and why it’s not reasoning about anything.
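If it helps to see what "predicting the next word and appending it" means mechanically, here's a toy sketch in plain Python. To be clear, the bigram counts, the tiny corpus, and the generate() helper below are all made up for illustration; a real LLM uses a transformer trained on vastly more data, but the outer generation loop is the same shape: score candidate next tokens, pick one, append it, repeat.<p>
    import random
    from collections import defaultdict, Counter

    # Toy corpus standing in for training data (purely illustrative).
    corpus = (
        "the model predicts the next word and appends it "
        "the model then predicts the next word again"
    ).split()

    # Count which word follows which -- a crude stand-in for learned probabilities.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(prompt_word, n_tokens=8):
        out = [prompt_word]
        for _ in range(n_tokens):
            followers = counts.get(out[-1])
            if not followers:
                break
            # Sample the next word in proportion to how often it followed the last one.
            words, weights = zip(*followers.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))
<p>Anything that looks like "reasoning" in the output has to emerge from that loop plus whatever structure the model has absorbed from its training text; there is no separate reasoning module being consulted.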
Look, you only have to ask ChatGPT something totally nonsensical like<p>“Show me how a martingale converges to the Zariski measure on a topos”<p>to see that it has no clue what it is “talking” about AND does not know that it does not know.<p>Of course, you need a topic on which there is scant literature.