So, I'm all for giving someone the benefit of the doubt if they have a change of heart upon reconsidering an issue, but this coming after the fact rings a bit hollow to me. I think the only principle at play here is that it became a PR issue. That's fine, but let's be honest about it.<p>Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here, just to be clear). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M/year[1]. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.<p>And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I doubt many people are buying the "these are our morals" bit anyway, and I'm certainly not.<p><a href="https://www.bizjournals.com/sanjose/news/2018/06/01/report-google-thought-military-drone-project-would.html" rel="nofollow">https://www.bizjournals.com/sanjose/news/2018/06/01/report-g...</a>
The best way to lead is by example. Thank you, Googlers.<p>The choice not to accept business is a hard one. I've recently turned down precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.<p>These stated principles are very much in line with those that I've chosen: a technology's primary purpose and intent must be non-offensive and non-surveillance.<p>We should have a lot of respect for a company's clear declaration of work which it will not do.
>2. Avoid creating or reinforcing unfair bias.<p>They DO realize that the YouTube recommendation algorithm is a political bias reinforcement machine, right?<p>Like, I think it’s fun to talk trash on Google because they’re in an incredibly powerful position, but this one isn’t even banter.
Several comments don't seem to understand what the "unfair bias" mentioned here is. It doesn't have anything to do with censoring your favorite conservative search result.<p>The machine learning "bias", at least the low-hanging fruit, is learning things like "doctor == male" or "black face == gorilla". How fair is it that facial recognition or photo algorithms are trained on datasets of white faces, or not tested for adversarial images that harm black people?<p>Or if you use translation tools and your daughter translates careers like scientist, engineer, doctor, and so on, and all of the pronouns come out male?<p>The point is that if you train AI on datasets from the real world, you can end up reinforcing existing discrimination local to your own culture. I don't know why trying to alleviate this problem triggers some people.
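To make the mechanism concrete, here is a minimal, self-contained Python sketch (the toy corpus, the occupations, and the pronoun_counts helper are all invented for illustration, not taken from any real system): a model that simply predicts the most likely pronoun from skewed training text turns a 2:1 skew in the data into a 100% skew in its output.

    # Toy illustration: a maximum-likelihood "translator" amplifies the
    # gender skew present in its training corpus.
    from collections import Counter

    corpus = [
        "the doctor said he would operate",
        "the doctor said he was late",
        "the doctor said she would call",
        "the nurse said she was ready",
        "the nurse said she would help",
        "the engineer said he fixed it",
    ]

    def pronoun_counts(occupation):
        """Count which pronoun follows 'the <occupation> said' in the corpus."""
        counts = Counter()
        for sentence in corpus:
            words = sentence.split()
            for i, w in enumerate(words):
                if w == occupation and i + 2 < len(words) and words[i + 1] == "said":
                    counts[words[i + 2]] += 1
        return counts

    for job in ("doctor", "nurse", "engineer"):
        counts = pronoun_counts(job)
        # Always picking the majority pronoun turns a 2:1 skew in the data
        # into a 100% skew in the output.
        print(job, dict(counts), "-> predicts:", counts.most_common(1)[0][0])

This is the sense in which training on real-world data doesn't just mirror existing discrimination; it can sharpen it.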
>Avoid creating or reinforcing unfair bias.<p>I recommend _The impossibility of “fairness”: a generalized impossibility result for decisions_[0] and _Inherent Trade-Offs in the Fair Determination of Risk Scores_[1]<p>[0] <a href="https://arxiv.org/pdf/1707.01195.pdf" rel="nofollow">https://arxiv.org/pdf/1707.01195.pdf</a>
[1] <a href="https://arxiv.org/pdf/1609.05807v1.pdf" rel="nofollow">https://arxiv.org/pdf/1609.05807v1.pdf</a>
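For anyone who doesn't want to open the PDFs, a rough paraphrase of the central trade-off in [1], stated from memory (so check the paper for the precise conditions): a risk score $s$ assigned to people in groups $a$ and $b$ with binary outcome $y$ cannot, in general, satisfy all three of

\[
\Pr[\,y = 1 \mid s,\ g\,] = s \quad \text{(calibration within each group } g\text{)},
\]
\[
\mathbb{E}[\,s \mid y = 1,\ g = a\,] = \mathbb{E}[\,s \mid y = 1,\ g = b\,] \quad \text{(balance for the positive class)},
\]
\[
\mathbb{E}[\,s \mid y = 0,\ g = a\,] = \mathbb{E}[\,s \mid y = 0,\ g = b\,] \quad \text{(balance for the negative class)},
\]

except in the degenerate cases of equal base rates across groups or a perfect predictor. In other words, "avoid unfair bias" forces a choice between competing formal definitions of fairness rather than naming a single achievable target.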
> We believe that AI should:<p>><p>> 1. Be socially beneficial.<p>> 2. Avoid creating or reinforcing unfair bias.<p>> 3. Be built and tested for safety.<p>> 4. Be accountable to people.<p>> 5. Incorporate privacy design principles.<p>> 6. Uphold high standards of scientific excellence.<p>> 7. Be made available for uses that accord with these principles.<p>While I like this list a lot, I don't understand why this is AI-specific, and not software-specific. Is Google using the word "AI" to mean "software"?
> AI applications we will not pursue [...] Technologies that cause or are likely to cause overall harm. [...] Weapons or other technologies<p>This statement will have zero impact on subsequent sensational headlines or posters here claiming Google is making killbots.
Some time ago, on π day, I became aware of the following quote. Sadly, I totally agree with the "trend" :(<p>"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."<p>-- Stephen Hawking
Under "AI applications we will not pursue", its telling that the first rule basically allows them to override all the subsequent ones - "where we believe that the benefits substantially outweigh the risks". "We believe" gives them a lot of leeway.
> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.<p>> We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.<p>I wonder if this is an official response to the people at Google[1] who were protesting[2] against Project Maven.<p>[1] <a href="https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html" rel="nofollow">https://www.nytimes.com/2018/04/04/technology/google-letter-...</a><p>[2] <a href="https://static01.nyt.com/files/2018/technology/googleletter.pdf" rel="nofollow">https://static01.nyt.com/files/2018/technology/googleletter....</a>
Also on this topic, a pledge from students who would otherwise interview at Google. It's important to hear how the upcoming generation, who would actually be doing this work, feels.<p><a href="https://gizmodo.com/students-pledge-to-refuse-job-interviews-at-google-in-p-1826614260" rel="nofollow">https://gizmodo.com/students-pledge-to-refuse-job-interviews...</a> [Students Pledge to Refuse Job Interviews at Google in Protest of Pentagon Work]
I've been asking myself this question for over 20 years: who are these people that click on ads anyway?<p>Ads are inherently going to be the opposite of Google's values, yet Google depends on them for the vast majority of their revenue. They show you some search results in line with their values, and if you can't get to the top of that "intrinsically", you buy ads or SEO. The folks that use that system to exploit the least intelligent win here, and Google takes a share of the profit.<p>Based on my Google search results in the recent past, Google isn't doing a good job of making sure the "best" websites (by my own value system, of course) make it to the top. I find myself having to go into second and third page results to get legitimate information. I'm seeing pages of medical quackery that "sounds good" but isn't based on science when I try to find diet or exercise advice.<p>As technology becomes more democratic, more people will use it. That means that the people that spend more time trying to sell you shit are going to win, because they're the ones that are willing to reverse-engineer the algorithm and push stuff up to the top. They add less value to society because they're spending all their time on marketing and promotion.<p>I wish I knew how to solve this problem. By imposing morals, Google "bites the hand that feeds".
The US government should consider accelerating the breakup of the Google monopoly, so that "….we understand there is room for many voices in this conversation." becomes more meaningful.
As much as I appreciate the conflict of interest here between doing good, making money, helping the US government do its thing, and simply chickening out for PR reasons, I'd like to provide a few sobering thoughts. AI, and its misappropriation by governments, foreign nations, and worse, is going to happen. We might not like it, but that cat has long been out of the bag. So the right attitude is not to decline to do the research and pretend it is not happening, but to make sure it ends up in the right hands and is done on the right terms. Google, being at the forefront of research here, has a heavy responsibility to do both well and good.<p>I don't believe Google declining to weaponize AI, which let's face it is what all this posturing is about, would be helpful at all. It would just lead to somebody else doing the same, or worse. There's some advantage to being involved: you can set terms, drive opinions, influence legislation, and dictate roadmaps. The flip side is of course that with great power comes great responsibility.<p>I grew up in a world where 1984 was science fiction and then became science fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime camera surveillance, and worse. George Orwell was a naive fool compared to what current technology enables right now. That doesn't mean we should shy away from doing the research. Instead, make sure that those cameras are also pointed at those most likely to abuse their privileges. That's the only way to keep the system in check. The next best thing to preventing this from happening is rapidly commoditizing the technology so that we can all keep tabs on each other. So, Google: do the research and continue to open source your results.
It's good that they're openly acknowledging the misstep here. However, I wish that the "will not pursue" section got the same bold-faced treatment as the one above it.<p>It seems appropriate at this point for industry leaders in this field, and governments, to come together with a set of Geneva-convention-like rules which address the ethical risks inherent in this space.
> Technologies that gather or use information for surveillance violating internationally accepted norms.<p>What does that even mean? Internationally accepted? By what nations and people groups? I’m pretty sure China and Russia have different accepted norms than Norway and Canada - which ones will you adhere to?
> We want to be clear that while we are not developing AI for use in weapons...<p>we will be developing AI for things that have weapons attached to them. We hope our lawyerly semantics are enough to fool you rubes for as long as it takes us to pocket that sweet military money.
So was the “Don’t be evil” principle or mantra that we’re all disappointed about documented in a blog post? For some reason I thought it was on a page like this: <a href="https://www.google.com/about/our-commitments/" rel="nofollow">https://www.google.com/about/our-commitments/</a><p>Either way it’s just a statement on a webpage which has all the permanence of a sign in their HQ lobby. It’s going to be hard to convince people that statements like this from a Google, a Facebook, or an Uber really mean anything — especially long term.<p>Will their next leadership team or CEO carry on with this?
Pretty rich for them to claim privacy is important when all of this technology is based on funneling your private data straight to them for storage and processing.
But how? Let's assume I personally offer artificial intelligence services. So I provide some APIs where my customers upload training and testing data, and I return a trained ML model. I do not know who uses my service or what they are doing...<p>Furthermore, if I ban the military, then another company could do the work for them. So every customer would have to explain their activities?
What do you think about the following potential additions?<p>1. "Pursue legislation and regulation to promote these principles across the industry."<p>2. "Develop or support the development of AI-based tools to help combat or alleviate the dangers noted in the other principles when they arise in products developed by other companies and governments."
At least they are starting the conversation. I'd be much more comfortable with principles of design and implementation in addition to outcomes. For example, transparency is essential. Also:<p><i>5. Incorporate privacy design principles.<p>We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.</i><p>Why not "give people control over their privacy and over their information"? That's a commitment to an outcome. "Incorporate ... principles", "give opportunity", "encourage", and "appropriate transparency and control" are not commitments. Google seems to be hedging on privacy.
The principles state that they will not make weapons. However, the latest report I’ve seen states that their current contract for the military ends some time in 2019. [1]<p>So while Google says it will not make weapons, it seems that for the next 6-18 months it will continue to do so.<p>Does anyone know when in 2019 the contract expires? It seems odd to come out with a pledge not to make weapons while continuing to make weapons (assuming that is what they are doing).<p>(Full disclosure, I am a contractor at an Alphabet company, but I don’t know much about Project Maven. These are my own opinions.)<p>[1] <a href="https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire" rel="nofollow">https://www.theverge.com/2018/6/1/17418406/google-maven-dron...</a>
Google: We take Pentagon contracts to track people's location with our AI. That's so bad.<p>Also Google: We will totally use our AI to 'legally' track a single mom that clicked a fine print EULA once while signing into our app. That's totally fine. It's different mmk?
>At its heart, AI is computer programming that learns and adapts<p>No, that's machine learning. AI is intelligence demonstrated by machines, and it doesn't necessarily mean that it learns or adapts.
Luckily, no one needs to worry about Google ever creating advancements in AI (they can't; they lack the required skillset). Google is the modern-day IBM, and AlphaGo is just another Deep Blue. I wonder when Google will make a gimmick like Watson. I guess Duplex is the beginning of it. It's amazing to see how many people were impressed by that. Then again, the tech scene lacks the scientific rigour that is required for spotting breakthroughs.
Applications they will not pursue include those "that gather or use information for surveillance violating internationally accepted norms." That's some fancy gymnastics there, Mr. Pichai. Well played.<p>I was wondering how or if they were going to address this. It saddens me to see that Google considers collecting as much data as possible about all its users to maximize ad revenue an international norm. It saddens me more to see that they're correct.
Didn't Google have a motto of "Don't be evil," and then new management retired the saying? What's stopping that from happening again in this case?
Great, another piece of "Don't be evil" with a new coat of paint, and they can ditch it whenever they feel powerful enough to ignore society's feedback.<p>Such a statement absolutely relieves the pressure coming from the public, and hence from lawmakers. Can we make sure big companies are legally accountable for what they claim to the public? Otherwise they can say whatever persuades people to be less vigilant about what they are doing, which is deceptive and irresponsible.
>4. Be accountable to people.
>We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.<p>YouTube moderation and automated account banning, combined with the inability to actually get in contact with a human, show that they have a long way to go with this principle.
> Technologies that gather or use information for surveillance violating internationally accepted norms.<p>I guess Google's policy of sucking up any and all data doesn't go against internationally accepted norms.<p>This entire article reads like BS if you think about what Google actually does.
This is pretty weak tea. It seems to completely justify working on anything, as long as the tiny part that Google engineers touch is software, and they aren't personally pulling triggers.<p>> <i>1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.</i><p>Is this "We have solved the trolley problem"?<p>Benefits to whom? US consumers? Shareholders? Someone in Afghanistan with the wrong IMEI who's making a phone call?<p>Without specifying this, the statement completely fails as a restraint on behavior. For an extrajudicial assassination via drone, is 'the technology' the re-purposed consumer software that aids target selection, or the bomb? Presumably the latter in every case.<p>> <i>2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.</i><p>This leaves the vast majority of military applications in scope. By this definition, Project Maven (the cause of the resignations and protests) meets the criteria of not <i>"directly facilitat[ing] injury to people"</i>. It selects who and what to cause injury to, at lower cost and with more accuracy, to scale up the total number of causable injuries per dollar.<p>> <i>3. Technologies that gather or use information for surveillance violating internationally accepted norms.</i><p>Google <i>set the norms</i> for surveillance by being at the leading edge of it.
It's pretty clear from Google's positioning that they consider data stored with them for monetization and distribution to governments completely fine. Governments do, too. And of course, <i>"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."</i>[0].<p>> <i>4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.</i><p>It's difficult to see how this could be anything but a circular argument that whatever the US military thinks is appropriate, is accepted as appropriate, because the US military thinks it is.<p>The most widely accepted definitions of human rights are the UN's, and the least controversial of those is the Right to Life. There are legal limits to this right, but by definition, extrajudicial assassinations via drone strike are in contravention of it. Even if they're <i>Googley extrajudicial assassinations</i>.<p>[0]: <a href="https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy" rel="nofollow">https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmid...</a>
Love this leadership from Jeff Dean and the team at Google AI. Technology can be an incredible lever for positive change, but it can just as easily be a destructive force. It's always important to think in a principled way about how to ensure the former is the case and not the latter.
AI can and will be used to cause harm. I hope this doesn't put the US at a huge disadvantage against other nations like China, where the government has more control over and access to AI.
> Avoid creating or reinforcing unfair bias.<p>AI will likely reflect the bias of its training set, which likely reflects the bias of its creators. So, is it fair to say that AI will be biased?
The same AI tech developed for "search and rescue" can be easily re-purposed for "search and destroy". How would Google prevent that from happening?
As someone who has worked in the defense industry his entire career (and served in the Army before that), I find the general tone of most of these comments - in particular the ones coming from supposedly loyal American citizens - disturbing (not to mention insulting). Almost makes me wish we'd actually institute mandatory national service.<p>That said, I'd love to work on ML/AI related defense projects. Thanks to Google, more of this type of work will surely be thrown over to the traditional defense contractors - so maybe I'll get that chance, eh?
Humanity is racing ever faster to craft its own replacement as a species, and we need to acknowledge this as our finest gift imaginable ... the cat is out of the bag on AI, and no amount of corporate doublespeak can shed responsibility for any organization that employs armies who then freely spread these skills ... passing the torch to that which runs at light speed, is free of the limits of time, and self-evolves its own hardware and software can only be something we collectively should be proud of, not afraid of ... rejoice as we molt and fly into the infinite now
<i>AI applications we will not pursue</i><p><i>Technologies that gather or use information for surveillance violating internationally accepted norms.</i><p>They already failed.
I do not know, really... if not them, someone else will do it anyway. Google has a competitive advantage (they can hire and pay well the smartest minds on Earth) and is letting it go? EDIT: this is going to be even more controversial, but it needs to be said that Google just can't stay neutral here imho; they either work for autonomous killing machines or against them, in order to preserve their market position and brand.
The military is using open source software to sort images, with consulting help from Google. No killbots, no acts of war, just doing the <i>only</i> thing that machine learning has any practical use for.<p>Science fiction writing is hard. I don't know why all of you are doing it for no pay. We can't judge Google for what we think they <i>might</i> do. And so far, they're just using ML in the real world.
All corporations are amoral. They exist to maximise the profit of their shareholders. This is marketing. It is a nice sounding lie. If it were authentic, the last few months wouldn't have happened at Google. For me, it only makes it worse. Because they think we are suckers. Actions speak louder than words. These words ring hollow.
I think the avoidance of harm is fundamentally flawed. Creation necessitates destruction. At times, safety necessitates assault. Violence cannot be eradicated; we can only strive to maximize our values.<p>Anyone who claims to be non-violent has simply rationalized ignorance of their violence. See: vegans. (Spoken as someone who eats a plant-based diet.)
This seems like a PR stunt, but it's at least something. Nothing prevents them from reverting those newly found principles over time... similar to removing "Don't be Evil" from their mission, which kinda would have covered this anyway. Google's goal is to make money, and that's what this is about.
Google has no morals or principles. How could it possibly have those things? How can a global advertising corporation not be evil? It doesn't make any sense!
The fact that “make money” isn’t on the list means that you can’t believe <i>any</i> of it.<p>Also point 5 is an outright, blatant falsehood given Google’s track record and indeed entire business model.