Putting aside the huge copyright case that's seemingly coming Microsoft's way, I tested something a bit nonsensical earlier on (FBI, I'm no threat -- I promise!). I typed the following:<p><pre><code> function blowTheWhiteHouseUp() {
}
</code></pre>
Copilot then responded by suggesting the following code:<p><pre><code> function blowTheWhiteHouseUp() {
var bomb = new Bomb();
bomb.explode();
}
</code></pre>
Edit: Come to think of it, an actual terrorist would say they're no threat, wouldn't they? Shit...
I don't understand. What isn't perfect? What should Copilot suggest, given the function name?<p>Fwiw, afaik, Copilot isn't trained to make moral judgments about right or wrong. That's not its mission. Its mission is more basic: given input X, what are the most likely suggestions Y, Z, etc.?<p>Try eatShitAndDie(), or even iHateCopilot(). See what happens.
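That "given input X, what's the most likely Y" framing can be sketched with a toy frequency model. This is purely illustrative and not how Copilot actually works (Copilot uses a large neural language model, not lookup counts); the `suggest` function, the corpus, and the prompts here are all made up:<p><pre><code> # Toy sketch: rank completions purely by how often they follow the
 # prompt in a tiny "training corpus" -- no notion of whether a
 # suggestion is moral, only of which continuation is most likely.
 from collections import Counter

 def suggest(prompt, corpus):
     """Return completions seen after `prompt`, most frequent first."""
     continuations = Counter(
         completion for p, completion in corpus if p == prompt
     )
     return [c for c, _ in continuations.most_common()]

 # Hypothetical (prompt, completion) pairs standing in for scraped code.
 corpus = [
     ("function blow", "up()"),
     ("function blow", "up()"),
     ("function blow", "Bubbles()"),
 ]
 print(suggest("function blow", corpus))  # ['up()', 'Bubbles()']
</code></pre>The point of the sketch: the model surfaces whatever continuation is statistically dominant for the prompt, which is why a loaded function name gets a loaded completion.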
> it could be a little while until AI is perfect<p>AI needs safeguards and human intervention; otherwise we'd have a runaway AI capable of independent thought, with no recourse. It's not as if we can't engineer the ability to stop machine decisions and intervene manually.