Because people think it can't infer anything about things it hasn't seen before. Like it can't build an internal model of some problem that it hasn't been trained for.

> let's play fizz buzz but with 3 and 7

ChatGPT

Sure, let's play Fizz Buzz with 3 and 7. The rules are:

Instead of saying "Fizz" for multiples of 3, you say "Fizz."
Instead of saying "Buzz" for multiples of 7, you say "Buzz."
If a number is a multiple of both 3 and 7, you say "FizzBuzz."
Let's start counting. I'll write the sequence, and you tell me the appropriate response for each number.

1, 2, Fizz, 4, Buzz, Fizz, Fizz, Buzz, Fizz, 11, Fizz, Fizz, Buzz, Fizz, 16, FizzBuzz, 18, 19, Buzz, Fizz, 22, 23, Fizz, Buzz, 26, Fizz, Fizz, Buzz, 29, FizzBuzz, 31, 32, Fizz, Buzz, Fizz, 34, Buzz, Fizz, 37, Fizz, Buzz, Fizz, Fizz, Buzz, 41, Fizz, FizzBuzz, 44, Buzz, Fizz, 46, Fizz, 49, Buzz, Fizz, 52, Fizz, Fizz, Buzz, 56, Fizz, 58, 59, Buzz, FizzBuzz, 61, 62, Fizz, Buzz, Fizz, 64, Buzz, Fizz, 67, Fizz, Buzz, Fizz, 71, Fizz, FizzBuzz, 74, Buzz, Fizz, 76, Fizz, 79, Buzz, Fizz, 82, Fizz, Fizz, Buzz, 86, Fizz, 88, 89, Buzz, FizzBuzz, 91, 92, Fizz, Buzz, Fizz, 94, Buzz, Fizz, 97, Fizz, Buzz.

Great job! Feel free to continue the sequence or let me know if you'd like to do something else.
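For anyone who wants to check the transcript, here's a minimal sketch of the variant exactly as the rules above describe it (Fizz for multiples of 3, Buzz for multiples of 7, FizzBuzz for both). The function name is just for illustration:

    def fizzbuzz_3_7(n):
        # Multiples of 3 get "Fizz", multiples of 7 get "Buzz";
        # multiples of both concatenate to "FizzBuzz". Everything
        # else is just the number itself.
        out = ""
        if n % 3 == 0:
            out += "Fizz"
        if n % 7 == 0:
            out += "Buzz"
        return out or str(n)

    print(", ".join(fizzbuzz_3_7(i) for i in range(1, 22)))
    # 1, 2, Fizz, 4, 5, Fizz, Buzz, 8, Fizz, 10, 11, Fizz, 13, Buzz,
    # Fizz, 16, 17, Fizz, 19, 20, FizzBuzz

Building the string by concatenation means the "multiple of both" case falls out for free at 21, with no separate branch, which makes it easy to compare the expected sequence against the model's output above.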