The goal of Doom Train (metaphor stolen from Liron Shapira) is to make debates about AI safety more productive. People who are concerned about existential risk will ride the doom train all the way to the end. People who think these concerns are unwarranted will get off before the end. The question is: where do you get off the doom train? Once that's been established, there might be a more productive conversation. Let me know if I missed any stations.
You missed the station where humans are such animals that we cannot even conceive of an intelligence that doesn't share our petty, warmongering, ego-driven, manic ways. Maybe an intelligence with a 1000+ IQ isn't interested in whatever we think will doom us; how would we know, when we have barely surpassed being apes? Well, some of us.