My analogy for GPT-4 is this: GPT-4 is writing a novel in which a human talks to a very smart AI. This helps me contextualize its hallucinations: if I were writing such a novel and knew the answer to something, I would put in the correct answer; if I didn't know the answer (and had no way to look it up), I would make up something plausible.

From that perspective, I think multi-level intentionality also works. If I write a story about Bob, then Bob (in the story) has intentions, even though he's just a figment of my imagination; and when we read characters in novels, we use the imputed intentions of the characters to understand their behavior, even though we know they're fictional and don't actually exist.

So yes: on one level, I want to write an exciting story; on a second level, I'm simulating Bob in my head, who wants to execute the perfect robbery. Likewise, on one level GPT-4 wants to write a story about a smart AI; on a second level, the smart AI in GPT-4's story wants to win the chess game by moving the queen to put the king in check.