TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

AI at Google: our principles

644 points by dannyrosen · almost 7 years ago

63 comments

EpicEng · almost 7 years ago
So, I'm all for giving someone the benefit of the doubt if they have a change of heart upon reconsidering an issue, but this coming after the fact rings a bit hollow to me. I think the only principle at play here is that it became a PR issue. That's fine, but let's be honest about it.

Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here, just to be clear). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M/year [1]. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.

And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I doubt many people do anyway, but I'm not buying the "these are our morals" bit.

[1] https://www.bizjournals.com/sanjose/news/2018/06/01/report-google-thought-military-drone-project-would.html
ISL · almost 7 years ago
The best way to lead is by example. Thank you, Googlers.

The choice not to accept business is a hard one. I've recently turned away from precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.

These stated principles are very much in line with those that I've chosen: a technology's primary purpose and intent must be for non-offensive and non-surveillance purposes.

We should have a lot of respect for a company's clear declaration of work which it will not do.
finnthehuman · almost 7 years ago
> 2. Avoid creating or reinforcing unfair bias.

They DO realize that the YouTube recommendation algorithm is a political bias reinforcement machine, right?

Like, I think it's fun to talk trash on Google because they're in an incredibly powerful position, but this one isn't even banter.
cromwellian · almost 7 years ago
Several comments don't seem to understand what the "unfair bias" mentioned is. It doesn't have anything to do with censoring your favorite conservative search result.

The machine learning "bias", at least the low-hanging fruit, is learning things like "doctor == male" or "black face == gorilla". How fair is it that facial recognition or photo algorithms are trained on datasets of white faces, or not tested for adversarial images that harm black people?

Or if you use translation tools and your daughter translates careers like scientist, engineer, doctor, et al., and all of the pronouns come out male?

The point is that if you train AI on datasets from the real world, you can end up reinforcing existing discrimination local to your own culture. I don't know why trying to alleviate this problem triggers some people.
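The mechanism described in the comment above can be sketched in a few lines. This is a toy illustration with a made-up corpus, not any real system: a "translator" that picks pronouns by corpus frequency inherits whatever skew its training text has.

```python
# Toy sketch of dataset bias (hypothetical corpus): counting pronoun
# co-occurrence stands in for what a learned translation model would
# absorb from its training text.
from collections import Counter

# Hypothetical corpus in which "doctor" overwhelmingly co-occurs with
# male pronouns; the skew, not the sentences, is the point.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def most_likely_pronoun(occupation: str) -> str:
    """Return the pronoun most often seen with an occupation in the corpus."""
    counts = Counter(
        sentence.split()[0]
        for sentence in corpus
        if sentence.endswith(occupation)
    )
    return counts.most_common(1)[0][0]

# Choosing pronouns by raw corpus frequency reproduces the corpus skew:
print(most_likely_pronoun("doctor"))  # -> he
print(most_likely_pronoun("nurse"))   # -> she
```

Real systems learn far subtler correlations than this, but the shape of the problem is the same: the model faithfully reproduces the statistics of the data it was given.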
bobcostas55 · almost 7 years ago
> Avoid creating or reinforcing unfair bias.

I recommend _The impossibility of "fairness": a generalized impossibility result for decisions_ [0] and _Inherent Trade-Offs in the Fair Determination of Risk Scores_ [1].

[0] https://arxiv.org/pdf/1707.01195.pdf
[1] https://arxiv.org/pdf/1609.05807v1.pdf
locacorten · almost 7 years ago
> We believe that AI should:
>
> 1. Be socially beneficial.
> 2. Avoid creating or reinforcing unfair bias.
> 3. Be built and tested for safety.
> 4. Be accountable to people.
> 5. Incorporate privacy design principles.
> 6. Uphold high standards of scientific excellence.
> 7. Be made available for uses that accord with these principles.

While I like this list a lot, I don't understand why this is AI-specific, and not software-specific. Is Google using the word "AI" to mean "software"?
Isamu · almost 7 years ago
> AI applications we will not pursue [...] Technologies that cause or are likely to cause overall harm. [...] Weapons or other technologies

This statement will have zero impact on subsequent sensational headlines or posters here claiming Google is making killbots.
athoik · almost 7 years ago
Some time ago, on π day, I became aware of the following. Sadly, I totally agree with the "trend" :(

"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."

-- Stephen Hawking
skapadia · almost 7 years ago
Under "AI applications we will not pursue", it's telling that the first rule basically allows them to override all the subsequent ones: "where we believe that the benefits substantially outweigh the risks". "We believe" gives them a lot of leeway.
juliend2 · almost 7 years ago
> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

> We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue.

I wonder if this is an official response to the people at Google [1] who were protesting [2] against Project Maven.

[1] https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
[2] https://static01.nyt.com/files/2018/technology/googleletter.pdf
amaccuish · almost 7 years ago
Also on this topic, from students who would interview at Google. It's important to hear how the upcoming generation, who would actually be doing this work, feels.

https://gizmodo.com/students-pledge-to-refuse-job-interviews-at-google-in-p-1826614260 [Students Pledge to Refuse Job Interviews at Google in Protest of Pentagon Work]
jfv · almost 7 years ago
I've been asking myself this question for over 20 years: who are these people that click on ads anyway?

Ads are inherently going to be the opposite of Google's values, yet Google depends on them for the vast majority of their revenue. They show you some search results in line with their values, and if you can't get to the top of that "intrinsically", you buy ads or SEO. The folks that use that system to exploit the least intelligent win here, and Google takes a share of the profit.

Based on my Google search results in the recent past, Google isn't doing a good job of making sure the "best" websites (by my own value system, of course) make it to the top. I find myself having to go into second- and third-page results to get legitimate information. I'm seeing pages of medical quackery that "sounds good" but isn't based on science when I try to find diet or exercise advice.

As technology becomes more democratic, more people will use it. That means that the people that spend more time trying to sell you shit are going to win, because they're the ones that are willing to reverse-engineer the algorithm and push stuff up to the top. They add less value to society because they're spending all their time on marketing and promotion.

I wish I knew how to solve this problem. By imposing morals, Google "bites the hand that feeds".
75dvtwin · almost 7 years ago
The US government should consider accelerating the breakup of the Google monopoly, so that "...we understand there is room for many voices in this conversation" becomes more meaningful.
jillesvangurp · almost 7 years ago
As much as I appreciate the conflict of interest here between doing good, making money, helping the US government do its thing, and simply chickening out for PR reasons, I'd like to provide a few sobering thoughts. AI and its misappropriation by governments, foreign nations, and worse is going to happen. We might not like it, but that cat has long been out of the bag. So, the right attitude is not to decline to do the research and pretend it is not happening, but to make sure it ends up in the right hands and is done on the right terms. Google, being at the forefront of research here, has a heavy responsibility to both do well and do good.

I don't believe Google declining to weaponize AI, which, let's face it, is what all this posturing is about, would be helpful at all. It would just lead to somebody else doing the same, or worse. There's some advantage to being involved: you can set terms, drive opinions, influence legislation, and dictate roadmaps. The flip side is of course that with great power comes great responsibility.

I grew up in a world where 1984 was science fiction and then became science fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime camera surveillance, and worse. George Orwell was a naive fool compared to what current technology enables right now. That doesn't mean we should shy away from doing the research. Instead, make sure that those cameras are also pointed at those most likely to abuse their privileges. That's the only way to keep the system in check. The next best thing to preventing this from happening is rapidly commoditizing the technology so that we can all keep tabs on each other. So, Google: do the research and continue to open-source your results.
capitalisthakr · almost 7 years ago
Reminded me of their first principle, and how well they did with that one: "Don't be evil".
davesque · almost 7 years ago
It's good that they're openly acknowledging the misstep here. However, I wish that the "will not pursue" section got the same bold-faced treatment as the one above it.

It seems appropriate at this point for industry leaders in this field, and governments, to come together with a set of Geneva-Convention-like rules which address the ethical risks inherent in this space.
djrogers · almost 7 years ago
> Technologies that gather or use information for surveillance violating internationally accepted norms.

What does that even mean? Internationally accepted by what nations and people groups? I'm pretty sure China and Russia have different accepted norms than Norway and Canada - which ones will you adhere to?
fortythirteen · almost 7 years ago
> We want to be clear that while we are not developing AI for use in weapons...

...we will be developing AI for things that have weapons attached to them. We hope our lawyerly semantics are enough to fool you rubes for as long as it takes us to pocket that sweet military money.
paulgpetty · almost 7 years ago
So was the "Don't be evil" principle or mantra that we're all disappointed about documented in a blog post? For some reason I thought it was on a page like this: https://www.google.com/about/our-commitments/

Either way, it's just a statement on a webpage, which has all the permanence of a sign in their HQ lobby. It's going to be hard to convince people that statements like this from a Google, a Facebook, or an Uber really mean anything, especially long-term.

Will their next leadership team or CEO carry on with this?
hueving · almost 7 years ago
Pretty rich for them to claim privacy is important when all of this technology is based on funneling your private data straight to them for storage and processing.
whazor · almost 7 years ago
But how? Let's assume I personally offer artificial-intelligence services: I provide some APIs where my customers upload training and testing data, and I return a trained ML model. I do not know who uses my service or what they are doing with it...

Furthermore, if I ban the military, then another company could do the work for them. So would every customer have to explain their activities?
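The opacity the comment describes can be made concrete with a minimal sketch. This is a hypothetical service, not any real API, and a majority-class baseline stands in for real training: the point is that the service only ever sees anonymous feature/label pairs, so use-based restrictions cannot be enforced at this layer.

```python
# Hypothetical train-as-a-service endpoint: it receives only (features,
# label) pairs and returns a model. Nothing in the payload reveals who
# the customer is or what the labels actually mean.
from dataclasses import dataclass

@dataclass
class TrainedModel:
    class_counts: dict  # label frequencies; a stand-in for learned weights

def train(samples):
    """'Fit' a majority-class baseline on anonymous (features, label) pairs."""
    counts = {}
    for _features, label in samples:
        counts[label] = counts.get(label, 0) + 1
    return TrainedModel(class_counts=counts)

def predict(model, features):
    # The baseline ignores features: always the most common training label.
    return max(model.class_counts, key=model.class_counts.get)

# The labels could be product categories or surveillance targets; the
# service has no way to tell the difference:
model = train([((0, 1), "a"), ((1, 1), "a"), ((1, 0), "b")])
print(predict(model, (0, 0)))  # -> a
```

Enforcing a "no military customers" rule would have to happen outside this interface, e.g. via contracts or customer vetting, which is exactly the difficulty the comment raises.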
Dowwie · almost 7 years ago
This likely took careful consideration and deliberation among a number of people. Google should be commended for the effort.
ehudla · almost 7 years ago
What do you think about the following potential additions?

1. "Pursue legislation and regulation to promote these principles across the industry."

2. "Develop or support the development of AI-based tools to help combat, or alleviate, the dangers noted in the other principles in products developed by other companies and governments."
forapurpose · almost 7 years ago
At least they are starting the conversation. I'd be much more comfortable with principles of design and implementation in addition to outcomes. For example, transparency is essential. Also:

> 5. Incorporate privacy design principles.
>
> We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Why not "give people control over their privacy and over their information"? That's a commitment to an outcome. "Incorporate ... principles", "give opportunity", "encourage", and "appropriate transparency and control" are not commitments. Google seems to be hedging on privacy.
TaylorAlexander · almost 7 years ago
The principles state that they will not make weapons. However, the latest report I've seen states that their current contract with the military ends some time in 2019. [1]

So while Google says it will not make weapons, it seems that for the next 6-18 months it will continue to do so.

Does anyone know when in 2019 the contract expires? It seems odd to come out with a pledge not to make weapons while continuing to make weapons (assuming that is what they are doing).

(Full disclosure: I am a contractor at an Alphabet company, but I don't know much about Project Maven. These are my own opinions.)

[1] https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire
exabrial · almost 7 years ago
Google: We take Pentagon contracts to track people's location with our AI. That's so bad.

Also Google: We will totally use our AI to 'legally' track a single mom that clicked a fine-print EULA once while signing into our app. That's totally fine. It's different, mmk?
TremendousJudge · almost 7 years ago
> At its heart, AI is computer programming that learns and adapts

No, that's machine learning. AI is intelligence demonstrated by machines, and it doesn't necessarily mean that it learns or adapts.
billybolton · almost 7 years ago
Luckily no one needs to worry about Google ever creating advancements in AI (they can't; they lack the required skill set). Google is the modern-day IBM, and AlphaGo is just another Deep Blue. I wonder when Google will make a gimmick like Watson. I guess Duplex is the beginning of it. It's amazing to see how many people were impressed by that. Then again, the tech scene lacks the scientific rigour that is required for spotting breakthroughs.
acobster · almost 7 years ago
Applications they will not pursue include those "that gather or use information for surveillance violating internationally accepted norms." That's some fancy gymnastics there, Mr. Pichai. Well played.

I was wondering how or if they were going to address this. It saddens me to see that Google considers collecting as much data as possible about all its users to maximize ad revenue an international norm. It saddens me more to see that they're correct.
thrusong · almost 7 years ago
Didn't Google have a motto of "Don't be evil," and then new management retired the saying? What's stopping that from happening again in this case?
MVf4l · almost 7 years ago
Great, another piece of "Don't be evil" with a new coat of paint, and they can ditch it whenever they feel powerful enough to ignore society's feedback.

Such a statement absolutely relieves the pressure coming from the public, and hence from lawmakers. Can we make sure big companies are legally accountable for what they claim to the public? Otherwise they can say whatever persuades people to be less vigilant about what they are doing, which is deceptive and irresponsible.
RcouF1uZ4gsC · almost 7 years ago
> 4. Be accountable to people.
> We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

YouTube moderation and automated account banning, combined with the inability to actually get in contact with a human, show that they have a long way to go with this principle.
kolbe · almost 7 years ago
Trust is like a mirror: you can fix it if it's broken, but you'll always see the crack in that motherfucker's reflection.
godelmachine · almost 7 years ago
I was kind of reminded of Asimov's Three Laws of Robotics while going through the Principles, especially the 7th one.
s2g · almost 7 years ago
> Technologies that gather or use information for surveillance violating internationally accepted norms.

I guess Google's policy of sucking up any and all data doesn't go against internationally accepted norms.

This entire article reads like BS if you think about what Google actually does.
confounded · almost 7 years ago
This is pretty weak tea. It seems to completely justify working on anything, as long as the tiny part that Google engineers touch is software and they aren't personally pulling triggers.

> 1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

Is this "We have solved the trolley problem"?

Benefits to whom? US consumers? Shareholders? Someone in Afghanistan with the wrong IMEI who's making a phone call?

Without specifying this, the statement completely fails as a restraint on behavior. For an extrajudicial assassination via drone, is 'the technology' the re-purposed consumer software that aids target selection, or the bomb? Presumably the latter in every case.

> 2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

This leaves the vast majority of military applications in scope. By this definition, Project Maven (the cause of the resignations/protests) meets the criterion of not "directly facilitat[ing] injury to people". It selects who and what to cause injury to at lower cost and accuracy, to scale up the total number of causable injuries per dollar.

> 3. Technologies that gather or use information for surveillance violating internationally accepted norms.

Google *set the norms* for surveillance by being at the leading edge of it. It's pretty clear from Google's positioning that they consider data stored with them for monetization and distribution to governments completely fine. Governments do, too. And of course, "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." [0]

> 4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

It's difficult to see how this could be anything but a circular argument: whatever the US military thinks is appropriate is accepted as appropriate, because the US military thinks it is.

The most widely accepted definitions of human rights are the UN's, and the least controversial of those is the Right to Life. There are legal limits to this right, but by definition, extrajudicial assassinations via drone strike are in contravention of it. Even if they're *Googley extrajudicial assassinations*.

[0]: https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy
sethbannon · almost 7 years ago
Love this leadership from Jeff Dean and the team at Google AI. Technology can be an incredible lever for positive change, but it can just as easily be a destructive force. It's always important to think principally about how to ensure the former is the case and not the latter.
foobaw · almost 7 years ago
I wish they would define and clarify what "harm" means.
gandutraveler · almost 7 years ago
AI can and will be used to cause harm. I hope this doesn't put the US at a huge disadvantage against other nations like China, where the government has more control over and access to AI.
foolinaround · almost 7 years ago
> Avoid creating or reinforcing unfair bias.

AI will likely reflect the bias of its training set, which likely reflects the bias of its creators. So is it fair to say that AI will be biased?
metaphorical · almost 7 years ago
The same AI tech developed for "search and rescue" can be easily re-purposed for "search and destroy". How would Google prevent that from happening?
jcadam · almost 7 years ago
As someone who has worked in the defense industry his entire career (and served in the Army before that), I find the general tone of most of these comments - in particular the ones coming from supposedly loyal American citizens - disturbing (not to mention insulting). Almost makes me wish we'd actually institute mandatory national service.

That said, I'd love to work on ML/AI-related defense projects. Thanks to Google, more of this type of work will surely be thrown over to the traditional defense contractors - so maybe I'll get that chance, eh?
AtomicOrbital · almost 7 years ago
Humanity is racing ever faster to craft its own replacement as a species, and we need to acknowledge this as our finest gift imaginable... the cat is out of the bag on AI, and no amount of corporate doublespeak can shed responsibility for any organization that employs armies who then freely spread these skills... passing the torch to that which runs at light speed, is free of the limits of time, and self-evolves its own hardware and software can only be something we collectively should be proud of, not afraid of... rejoice as we molt and fly into the infinite now.
current_call · almost 7 years ago
> AI applications we will not pursue
>
> Technologies that gather or use information for surveillance violating internationally accepted norms.

They already failed.
coreypreston · almost 7 years ago
It's interesting that the sections discussing 'privacy' and 'accountability to people' contain the least amount of information.
sidcool · almost 7 years ago
In a way, the engineers who quit Google had some part in this success. Would it be unwise for Google to reach out to them?
DrNuke · almost 7 years ago
I do not know, really... if not them, someone else will do it anyway. Google has a competitive advantage (they can hire and pay well the smartest minds on Earth) and is letting it go? EDIT: going to be even more controversial, but it needs to be said that Google just can't stay neutral here, imho; they either work for autonomous killing machines or against them, in order to preserve their market position and brand.
hooande · almost 7 years ago
The military is using open-source software to sort images, with consulting help from Google. No killbots, no acts of war, just doing the *only* thing that machine learning has any practical use for.

Science fiction writing is hard. I don't know why all of you are doing it for no pay. We can't judge Google for what we think they *might* do. And so far, they're just using ML in the real world.
retrogradeorbit · almost 7 years ago
All corporations are amoral. They exist to maximise the profit of their shareholders. This is marketing. It is a nice-sounding lie. If it were authentic, the last few months at Google wouldn't have happened. For me, it only makes things worse, because they think we are suckers. Actions speak louder than words, and these words ring hollow.
erikpukinskis · almost 7 years ago
I think the avoidance of harm is fundamentally flawed. Creation necessitates destruction. At times safety necessitates assault. Violence cannot be eradicated; we can only strive to maximize our values.

Anyone who claims to be non-violent has simply rationalized ignorance of their violence. See: vegans. (Spoken as someone who eats a plant-based diet.)
kerng · almost 7 years ago
This seems like a PR stunt, but at least it's something. Nothing prevents them from reverting these newly found principles over time... similar to removing "Don't be evil" from their mission, which kind of would have covered this. Google's goal is to make money, and that's what this is about.
bovermyer · almost 7 years ago
Just follow the three laws of robotics and you&#x27;ll be fine.
htor · almost 7 years ago
Google has no morals or principles. How could it possibly have those things? How can a global advertising corporation not be evil? It doesn't make any sense!
dhimes · almost 7 years ago
"We're just going to put the tip in...."
MVf4l · almost 7 years ago
Off topic: is there a way to tag all the stakeholders of the main company/government mentioned in the title/article?
qbaqbaqba · almost 7 years ago
1) money, 2) profit, 3) revenue.
mrslave · almost 7 years ago
Don't be Skynet?
jamesblonde · almost 7 years ago
"Those are my principles, and if you don't like them... well, I have others." -- Groucho Marx
ruseOps · almost 7 years ago
“Hey Google, give me three concrete examples of fair bias.”
reilly3000 · almost 7 years ago
New hot job title: AI ombudsman.
jacobsenscott · almost 7 years ago
"Our AI is so powerful it needs special rules!" is pure marketing.
gaius · almost 7 years ago
The fact that "make money" isn't on the list means that you can't believe *any* of it.

Also, point 5 is an outright, blatant falsehood given Google's track record and indeed its entire business model.
Mononokay · almost 7 years ago
"Principles"