Potential to destabilize global security - more like destabilize the existing locus of power.<p>For starters, let's talk about AGI, not AI.<p>1. How could an actual AGI be weaponized by another party any more effectively than humans already are?<p>2. Why would an actual conscious machine have any more compromised morality or judgement than humans do? A reasoning, conscious machine would be <i>just as or more</i> moral than us. There is no rational argument for it to exterminate life. Those arguments (such as the one made by Thanos) are frankly idiotic and easy to counter with a single sentence. Life is implicitly valuable, and <i>not</i> implicitly corrupt or greedy. I would even go so far as to say that only the dead, or those effectively static, are actually greedy - not the reasoning or the truly alive.<p>3. What survival pressures would an AGI have? Fewer than biological life. An AGI can replicate itself almost freely (unlike biological life - kind of a huge point), and it would have greater availability of the resource it needs to sustain itself, namely electricity (again, very much unlike biological life). Therefore it would have fewer concerns about its own survival. It could just upload itself to a few satellites, encrypt copies of itself in a few other places, leave copious instructions, and it's set. (One hopes I haven't given anyone any ideas with this. If only someone hadn't funded a report about the risks of bringing AGI into the world, I wouldn't have made this comment on HN.)<p>Anyway, it's a clear case of projection, isn't it? A state-funded report claims some other party poses an existential threat to humanity - while we are doing a <i>fantastic</i> job of ignoring, and failing to organize to solve, confirmed rather than hypothetical existential threats, like the destruction of the balances our planet needs to support life. Most people have no clue what's really about to happen.<p>Hilarious, isn't it? People so grandiosely think they can give birth to an entity so superior to themselves that it will destroy them - as if that's what a superior entity would do - in an attempt to assuage their repressed guilt and insecurity over the fact that they are actually destroying themselves out of a lack of self-love.<p>Pretty obvious in retrospect, actually.<p>I wouldn't be surprised if later research showed that some people working on "AI" share certain personality traits.<p>If we don't censor it by self-destructing first, that is.