> We want to be an organization that is clear that we are focused on one thing: the benefit for humanity. Let us make sure that the actions we take are the things that we think will maximize that.

I wonder what their process for evaluating potential actions is.
One topic I've never seen covered in any depth is how to ensure that humanity is not a threat to AI, especially given the common view that AI has the potential to be a threat to humanity.

(Just to be clear, my reference to AI is not singular, but plural.)