I was surprised to learn that GPT-4 can't play tic-tac-toe, but thought people who tried just didn't prompt it correctly.<p>But after trying for 2h to get it to work (even with GPT-4V), it seems like a fundamental limitation.<p>I've found an HN submission [1] where someone used a brute-force prompt to get it to play correctly, but as the top/only comment points out, it's a limited action space and enumerating most of it seems moot.<p>I was hoping for a more reasonable prompt. After all, humans are able to learn tic-tac-toe rapidly.<p>Current hypotheses:<p>1) tic-tac-toe requires "spatial reasoning" and LLMs train on sequences (somehow GPT-4V didn't lift that constraint)<p>2) tic-tac-toe requires "search" of future scenarios<p>Would love to hear what you think/know!<p>---<p>Previous discussion about T3 and GPT-4: https://news.ycombinator.com/item?id=35216614 (7 months ago)<p>[1] https://news.ycombinator.com/item?id=37626918
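On hypothesis 2: worth noting how tiny the "search" is by classical standards. A minimal exhaustive-minimax sketch (my own illustration, not from the thread) solves the game outright, since the full game tree has well under a million nodes:

```python
# Exhaustive minimax for tic-tac-toe: the whole game tree fits in
# memory-free recursion, so a classical program solves it instantly.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = (-2, None)
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # opponent replies optimally
        board[m] = ' '
        if -score > best[0]:
            best = (-score, m)
    return best

score, move = minimax(list(' ' * 9), 'X')
print(score)  # 0: with optimal play from the empty board, the game is a draw
```

So the puzzle isn't compute; it's that a transformer would have to perform this lookahead implicitly inside a forward pass rather than as an explicit loop.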
I agree with jqpabc123 except on one point: the claim that it cannot 'reason' and is not smart. I disagree with that.<p>The reason it cannot play well is that it has very little 'experience' (training data) with the game. It's been trained on what the game is; it has not been trained on how to win.<p>You can think of it a bit like driving: knowing what driving is doesn't make you a driver if you've never driven before.<p>Ask a genius who's never played before to play tic-tac-toe with you, tell them the rules, and they will likely not win or play optimally on the first attempt. That doesn't mean the person isn't a genius.<p>You said humans are able to 'learn' to play it rapidly. So is GPT: in training mode it can process a million games in seconds, where a human can't.<p>The problem here is that it simply has no experience.<p>If every time you played tic-tac-toe against me you forgot all your experience before the next game, would you play optimally?
I think this paper has an interesting way of training problem-solving:<p><a href="https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving/" rel="nofollow noreferrer">https://venturebeat.com/ai/microsoft-unveils-lema-a-revoluti...</a><p>I submitted it to HN but nobody seemed to care:<p><a href="https://news.ycombinator.com/item?id=38128012">https://news.ycombinator.com/item?id=38128012</a><p>It looks like it basically uses GPT-4 to train a smaller model on problem solving.
<i>But after trying for 2h to get it to work (even with GPT-4V) it seems like a fundamental limitation.</i><p>It obviously hasn't been "trained" for tic-tac-toe. The way to train it is using statistics --- present every possible position and the correct response so it can build a database.<p>There is no logic or reasoning involved --- it's all statistics. It's not what we call "smart". Any ability to "reason" is just a statistical illusion.
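The "every possible position" database you describe is genuinely small. A quick sketch (my own, not from this thread) that enumerates every position reachable in legal play, stopping at won boards:

```python
# Count every tic-tac-toe position reachable in legal play (X moves first,
# play stops at a win). The state space is only a few thousand positions,
# so a complete position -> best-move table is trivially small.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

seen = set()
frontier = [' ' * 9]          # start from the empty board
while frontier:
    nxt = []
    for b in frontier:
        if b in seen:
            continue
        seen.add(b)
        if winner(b):
            continue           # game over: no successor positions
        # whoever has made fewer moves goes next
        player = 'X' if b.count('X') == b.count('O') else 'O'
        for i, s in enumerate(b):
            if s == ' ':
                nxt.append(b[:i] + player + b[i+1:])
    frontier = nxt

print(len(seen))  # a few thousand positions, far fewer than 3^9 = 19683
```

Which is arguably why the brute-force prompt the OP links to feels unsatisfying: a table that small proves memorization, not reasoning.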
The Turing Test sounds cool, and "is he a clever conversationalist?" is a fairly good test - <i>of social intelligence and class, for casual use in human society</i>.<p>But current "AIs" are intelligent kinda like pocket calculators are intelligent.