As much as I appreciate the conflict of interest here between doing good, making money, helping the US government do its thing, and simply chickening out for PR reasons, I'd like to offer a few sobering thoughts. Misappropriation of AI by governments, foreign nations, and worse is going to happen. We might not like it, but that cat has long been out of the bag. So the right attitude is not to decline to do the research and pretend it isn't happening, but to make sure it ends up in the right hands and is done on the right terms. Google, being at the forefront of research here, carries a heavy responsibility to do both well and good.<p>I don't believe Google declining to weaponize AI, which, let's face it, is what all this posturing is about, would be helpful at all. It would just lead to somebody else doing the same, or worse. There's an advantage to being involved: you can set terms, drive opinions, influence legislation, and shape roadmaps. The flip side, of course, is that with great power comes great responsibility.<p>I grew up in a world where 1984 was science fiction and then became science fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime camera surveillance, and worse. George Orwell was a naive fool compared to what current technology enables right now. That doesn't mean we should shy away from doing the research. Instead, make sure those cameras are also pointed at the people most likely to abuse their privileges. That's the only way to keep the system in check. The next best thing to preventing this from happening is rapidly commoditizing the technology so that we can all keep tabs on each other. So, Google: do the research, and continue to open source your results.