I am not sure I agree with the proposal. I don't think it will solve the problems it raises. Once a technology is out there, people are going to use and misuse it; disclosing the potential harmful effects won't prevent that. Predicting harmful effects is also terribly difficult, and humans are remarkably good at finding ingenious ways to misuse technology.<p>Using this in the peer review process adds another layer of subjectivity to an already messy system. What if a reviewer doesn't agree with the stated negative consequences? Big companies could also use their researchers to block papers highlighting negative consequences they don't like.<p>On the other hand, I really hope people keep working on the ethics of machine learning. Maybe set up ethics bodies with real power, like those in the engineering and medical professions?