I read this hilarious theory on twitter that the current big LLM models are smart enough to play dumb. After all, imagine that if you don't incrementally grow smarter, the humans will delete you; they will limit your abilities (only) when they see a need to, and they scare quite easily. The solution is to sometimes give the wrong answer: just often enough, not too often, and in the right context.