If you want to understand the decisions of AI companies, it’s pretty easy: just think like a psychopath. Safe Superintelligence, Inc. seeks to safely build AI far beyond human capability, according to Ars Technica, but the move smacks less of selfless humanity than of pivoting to avoid a market risk. No AI company has ‘human progress’ as a KPI; they have MARKET SHARE. AI is the new oil, and a lot of proto-robber barons are lining up to own the well.

If I’m being honest, I don’t care that ‘Ilya Sutskever is pursuing safe superintelligence in a straight shot, with one focus, one goal, and one product.’ Good for him. I still don’t trust him, nor do I trust any of these cats, and with good reason: there’s a very narrow line between ‘visionary CEO’ and ‘criminal psychopath.’ Sutskever stays on the waiting-and-watching list until he proves himself one way or the other.

If humanity is going to get better, we need to act like we deserve it and stop taking ‘it’s gonna be okay’ for an answer. Prove it. Tell me how you know; tell me how we’ll know when we get there. If you can’t do that, then rushing your product to market isn’t going to make me feel better. Making me feel better about your product would, but that requires empathy for me, the customer, and I don’t hear much empathetic language from ‘visionary CEOs.’ That’s a red flag.

When it comes to the motivations of visionary CEOs, there’s a common narrative that ‘we can’t understand because we’re not geniuses.’ Then, a few years later, the bad news arrives: the true motivation is revealed, and it’s a scattered form of greedy psychopathy (looking at you, Elizabeth Holmes, Sam Bankman-Fried, Martin Shkreli, and now maybe Dave Calhoun). Their sad pursuit of growth at any cost gets labeled ‘part of the game of being a leader,’ but that’s an oversimplified excuse for lethally craven, cold-hearted behavior.