I normally enjoy Aaronson's writing, but I'm actually chilled.<p>This essay depends on a specifically American, hallowed take on the Second World War. The 'Orthogonality Thesis' is just a fancy way of shifting the burden of proof from where it should be -- on the person <i>claiming that intelligence has anything to do with morality</i>. It would be better to call it what it really is, the null hypothesis, but sure, ok, for the sake of argument, let's call it the OT.<p>Aaronson's argument against the OT is basically: when you look at history and squint, it appears that some physicists somewhere didn't like Hitler, and that might be because of how smart they were.<p>This amounts to a generalization from historical anecdote and a tiny sample size, ignoring the fact that <i>we all know smart people who are morally terrible</i>, especially around issues they don't fully understand. (Just ask Elon.)<p>I'm not even going to bother talking about the V2 programme or the hypothermia research at Auschwitz, because to do so would already be to adopt a posture that assumes <i>historical anecdote matters</i>.<p>What I'll do instead is notice that Aaronson's argument points the wrong way! If Aaronson is right, and intelligence and morality are correlated -- if being smart inclines one to be moral -- then AI (not AGI) is <i>already</i> a staggering risk.<p>Think it through. Suppose, for the sake of argument, that intelligence <i>does</i> increase morality (essentially, and/or most of the time). That means <i>lots of less intelligent/moral people can suddenly draw, argue, and appear to reason</i> as well as or better than unassisted minds.<p>Under this scenario, where intelligence and morality are non-orthogonal, AI <i>actively decouples intelligence and morality</i> by giving less intelligent/moral people access to intellect, without the affinity for moral behaviour that (were the claim true) would show up in intelligent people.<p>And this problem arrives first!
We would have a billion racist Shakespeares long before we have a single AGI, because <i>that</i> technology is already here, and AGI is still a few years off.<p>Thus I am left praying that the Orthogonality Thesis does in fact hold. If it doesn't, we're IN EVEN DEEPER TROUBLE.<p>I can't believe I'm saying this, but I do believe we've finally found a use for professional philosophers, who, I think, would not have (a) made a poorly-supported argument (self-described as 'emotional') or (b) made an argument that, if true, proves the opposite claim (that AI is incredibly dangerous). Aaronson does both here.<p>I speculate that Aaronson has unwittingly been bought by OpenAI, and misattributes the cheerfulness that comes from his paycheck to a coherent (if submerged) argument for why AI might not be so bad. At the very least, there is no coherent argument in this essay to support a cheerful stance.<p>A null hypothesis again! There need be no mystery to his cheer: he has a good sit, and a fascinating problem to chew on.