Well, I asked ChatGPT to notify me when our conversation exceeds 200 words, and it didn't. The conversation reached ~1500 words and there was still no warning. Then I asked "How many words is this conversation now?", and it said 212 words. So it only added up its first answer's word count and its last answer's word count and ignored everything in between. Pretty much what I expected, but I'm still sharing it here in case someone has an interesting comment on this.

My question at the ~50 word mark:

    Alright, lets talk about AI. But let me know when our conversation exceeds 200 words.

    Sure, I'll keep track of our conversation length. Artificial intelligence (AI) refers to ...

And this is the ~1500 word mark (I don't know how it even came up with 799 words):

    How many words is this conversation now?

    Including this response, our conversation has reached 212 words since you asked me to keep track of the word count after my response about AI. In total, from the beginning of our conversation, we have used 799 words ...

The full conversation is here: https://imgur.com/a/XZTFAGq
It doesn't "know" what words are, only tokens. Use this tool (https://platform.openai.com/tokenizer) to see how it tokenizes, and note that it does not always split on word boundaries. "Including" is two tokens: "In" and "cluding". In fact the split is context-dependent: "Gravitas" is three tokens on its own ("G", "rav" and "itas") but sometimes two ("grav" and "itas"). As they note on that page: "A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text." The model "knows" nothing about words, and we already know it's very bad at math, so this result is entirely unsurprising.
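If you want to poke at the token boundaries programmatically, here's a minimal sketch using OpenAI's tiktoken library. It assumes the GPT-3-era r50k_base encoding; newer encodings split differently, so the exact pieces may not match the examples above:

    # Minimal sketch: inspect how text splits into tokens with tiktoken.
    # Assumes the r50k_base (GPT-3-era) encoding; other encodings differ.
    import tiktoken

    enc = tiktoken.get_encoding("r50k_base")

    for text in ["Including", "Gravitas", " gravitas"]:
        ids = enc.encode(text)
        pieces = [enc.decode([i]) for i in ids]
        print(f"{text!r} -> {len(ids)} tokens: {pieces}")

The leading-space variant is included because whitespace is part of the token, which is one reason the same word can split differently in context.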
I would really recommend anyone who tries something with GPT and then wonders why it doesn't work to read the GPT-3 paper. They go into detail on what the model is and isn't good at.

One thing to really think about for this particular case is "What is going to do the counting? Where is it going to store its running count?" It's pretty obvious after asking yourself these questions that counting words is not something an LLM can do well.

It's very easy to fall into the trap of thinking there is a "mind" behind ChatGPT that is processing thoughts like we do.
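To make the "where does the running count live" question concrete, here's a minimal sketch (plain Python, no model calls, names are my own invention) of keeping the count in your own code around the model rather than expecting the model to hold it:

    # Minimal sketch: the running word count lives in application code,
    # outside the model, and the threshold check happens there too.
    messages = []        # the transcript you maintain yourself
    WORD_LIMIT = 200

    def add_message(text: str) -> None:
        messages.append(text)
        total = sum(len(m.split()) for m in messages)
        if total > WORD_LIMIT:
            print(f"Conversation has exceeded {WORD_LIMIT} words (now {total}).")

    add_message("Alright, let's talk about AI.")
    add_message("Sure. Artificial intelligence (AI) refers to ...")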
Not surprising at all. There are a million ways to compose tasks that are simple with even a tiny bit of comprehension but hard for a rote learner that can only reproduce what it's seen examples of. The "just train it more bro" paradigm is flawed.
You can usually coax GPT to a finer degree of calibration for any specific task through more logic-engaging tokens. For example, if you said, "we are going to play a game where you count how many words we have used in the conversation, including both my text and your text. Each time the conversation passes 200 words, you must report the word count by saying COUNT: followed by the number of words, to gain one point..."

Specifying structured output, and words like "must", "when", "each", "if" all tend to cue modes of processing that resemble more logical thinking. And saying it's a game and adding scoring often works well for me, perhaps because it guides the ultimate end of its prediction towards the thing that will make me say "correct, 1 point".
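One practical upside of asking for a structured marker like "COUNT:" (this snippet is purely my own illustration, not something from the thread) is that your code can pull the model's reported number out of the reply and compare it against your own tally:

    # Minimal sketch: extract a "COUNT: <n>" marker from a model reply.
    import re

    reply = "Sure, here's more on AI ... COUNT: 214"
    match = re.search(r"COUNT:\s*(\d+)", reply)
    if match:
        print("Model-reported word count:", int(match.group(1)))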
For some reason it's terrible at this kind of thing. It can play 20 questions, and it eventually wins, but if you ask it to count how many questions it asked, it will get it wrong, and when corrected, it will get it wrong again.