If one starts to tease out any of the arguments around what the existence of general AI would imply, one notices that within this hypothetical terrain, nearly all the abstractions used in the discussion leak so heavily that they become difficult to use at all.<p>For example, <i>what is</i> benevolent? Many people would claim that we have failed to "program" the entirely human intelligences of government leaders and corporate heads to act for the benefit of large swathes of humanity.<p>But this just highlights to me that there is little agreement on what "beneficial to humanity" means.<p>If Margaret Thatcher is correct that society does not exist, only individuals, perhaps the task of "beneficial AI" is to encase each of us in our own virtual reality bubble where we can specify the world we want to live in.<p>But if we take a different view, that maintaining some sort of human society is necessary for the growth or maturation of people, then beneficial AI would have to work to help people relate as a society.<p>Either way, creating "beneficial AI" implies working out beforehand what is beneficial to humans, something we have never had to do before now, simply for lack of the god-like powers needed to impose such a conception.