We like to think that the goal of AI research is actual intelligence. But humans are in the loop: we humans judge the projects, so success is a matter of the computer persuading humans that it is intelligent.

We could end up in a situation where computers are put in charge of important things because we have mastered the art of writing programs that trick us into thinking they are intelligent. Then it turns out that the programs are stupid. Disaster ensues.

The key question is: how clever does a computer program have to be to convince humans that it is intelligent? That depends on two sub-issues.

First, are our assessments hackable? We might in time stumble on a clever trick that makes computers seem impressive to humans. The glib talkativeness of GPT-3, which impresses everybody, hints at this.

Second, how keen are we to fool ourselves and simply believe, without much prompting from the computer? The article worries me because of all the love given to the narrative in which the computer really is intelligent, impressively so, dangerously so. Erring on the other side, taking a stupid computer for an intelligent one, is also a big risk, and we seem blind to it.