Quote:

The philosopher Nick Bostrom, who heads the Future of Humanity Institute at the University of Oxford, says that humans trying to build AI are “like children playing with a bomb”, and that the prospect of machine sentience is a greater threat to humanity than global heating. His 2014 book Superintelligence is seminal. A real AI, it suggests, might secretly manufacture nerve gas or nanobots to destroy its inferior, meat-based makers. Or it might just keep us in a planetary zoo while it gets on with whatever its real business is.
We like to think that the goal of AI research is actual intelligence. But humans are in the loop. We humans judge the projects, so success is a matter of the computer persuading humans that it is intelligent.

We could end up in a situation where computers are put in charge of important things because we have mastered the art of writing programs that trick us into thinking they are intelligent. Then it turns out that the programs are stupid. Disaster ensues.

The key question is "how clever does a computer program have to be to convince humans that it is intelligent?". That depends on two sub-issues.

First, are our assessments hackable? We might in time stumble over a clever trick that makes computers seem impressive to humans. The glib talkativeness of GPT-3, which impresses everybody, hints at this.

Second, how keen are we to fool ourselves and just believe, without all that much prompting from the computer? The article worries me because of all the love given to the narrative in which the computer really is intelligent, impressively so, dangerously so. Erring on the other side, thinking a stupid computer is intelligent, is also a big risk. We seem blind to it.
There is no connection between the questions philosophers ask and any actual technology, or even any concept, that people are currently aware of. It's interesting to think about, but more as a thought experiment than as something to be remotely concerned about.

An observation though:

- From what I know about people, sentience is the opposite of maximizing only one thing, so I don't see the paperclip-maximizer scenario as an issue, i.e. anything smart enough to run amok in that way would be smart enough not to. This would be more of a concern for some "dumb" self-replicating process, like a fire or an Ice-9 type agent.