I read Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" a while back and found the problem of trying to set a goal for an AI both interesting and difficult. He outlines various ways an AI could misconstrue its goals, eventually leading to human extinction. I think setting the goal to "increase human autonomy" might get around some of these problems. I'm interested to hear how people think it could go awry, though.
I guess a natural question is how to define and measure human autonomy.

If it's the autonomy of each individual human, increasing it without bound will cause existing societies to fall apart quickly (which is potentially fine under some ethical theories), and could create severe danger for other humans, because people can use their enhanced abilities to fight and harm each other.

If it's the autonomy of humanity as a whole, you have to define some way of aggregating preferences or determining the will of humanity as a whole -- already a significant challenge today.
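To make that aggregation problem concrete, here's a toy sketch (voters, options, and the plain pairwise-majority rule are all made up for illustration): with just three people ranking three options, majority voting can produce a cycle -- the Condorcet paradox -- so there is no well-defined "will of the group" even in this tiny case.

    from itertools import combinations

    # Three voters, each ranking three policy options (most preferred first).
    # These rankings are invented purely to show the cycle.
    rankings = {
        "voter_1": ["A", "B", "C"],
        "voter_2": ["B", "C", "A"],
        "voter_3": ["C", "A", "B"],
    }

    def majority_prefers(x, y):
        """Return True if a strict majority ranks x above y."""
        votes_for_x = sum(r.index(x) < r.index(y) for r in rankings.values())
        return votes_for_x > len(rankings) / 2

    # Pairwise majority comparisons yield a cycle:
    # A beats B, B beats C, C beats A.
    for x, y in combinations("ABC", 2):
        if majority_prefers(x, y):
            print(f"majority prefers {x} over {y}")
        elif majority_prefers(y, x):
            print(f"majority prefers {y} over {x}")

An AI told to maximize the autonomy of "humanity as a whole" inherits this kind of ambiguity, just at a much larger scale.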