Learning from Tay’s introduction

200 points by hornokplease about 9 years ago

45 comments

bliti about 9 years ago
This reads like science fiction:

Learning from Somebot's introduction.

Last week we deployed Somebot to $location. Our confidence in Somebot was high because we had thought about all likely scenarios in our comfortable offices. We are deeply sorry and assume full responsibility for Somebot's actions. Moving forward we will make sure to include safeguards to reduce the amount of pain caused by Somebot's deployment.

Our deepest condolences to the families of the affected and to the survivors. Megacorp cares about your well-being. To help cover expenses from the tragedy we will deposit $money in your Megacorp account.

God bless the federated nations of Megacorp.

Houshalter about 9 years ago
Microsoft, please don't worry about this. No one but idiots is offended by this. It's understood that it's just a stupid chatbot mimicking human responses. The AI isn't terrible; people are. And unless you keep a dataset of every offensive thing a person can say, and every offensive image they can tweet, there's no way to prevent people from tweeting it pictures of Hitler... or Scunthorpe. But who cares.

This is just as stupid as the manufactured outrage over Google's image tagger. It misclassified a picture of a human as an animal, and people were up in arms. Google had to censor it so it can't tag animals now. They shouldn't have to do that; let idiots be idiots.

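(For context, the Scunthorpe problem mentioned above is the classic failure mode of naive substring blocklists: innocent words get flagged while trivial misspellings sail through. A minimal sketch, with a made-up one-entry blocklist:)

```python
# Minimal sketch of the Scunthorpe problem: a substring blocklist flags
# innocent words while trivial evasions pass. Blocklist entry is
# illustrative only.
BANNED = {"ass"}

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED)

print(naive_filter("a classic assassin story"))  # True  -- false positive
print(naive_filter("a s s"))                     # False -- trivially evaded
```
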
rmellow about 9 years ago
Morals are learned by social contact, and Tay did this very well. Sure, judged against what our parents taught most of us, its behaviour is reprehensible. But Tay was, so to speak, 'raised' by people demonstrating vile ideas, and this must be taken into account. Would you expect any less from a tortured animal?

Many use this as an example of the dangers of developing AI. Sure it's dangerous, but so are dogs raised for fighting, and I don't see anyone arguing against dog breeding for that matter.

zxcvvcxz about 9 years ago
> The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.

Maybe you did answer that question.

No, the answer is not that 18-24 year olds in the US are racist and whatnot. It's that the people responsible for a disproportionate amount of internet content are willing to make crude, politically incorrect jokes to get attention and piss off their masters.

I wonder what will happen when governments start applying machine learning to try to predict things like welfare usage and crime. Certain patterns might emerge that we don't want to see! We'll have to apologize for our racist algorithms.

It would be much more interesting to examine the results of this experiment. Why are so many people on the internet interested in spreading hateful content, which is being accurately reflected by our bot? No, instead we do what I did in grade 8 science class: fudge the results so they're what the teacher expects.

hacker42 about 9 years ago
Here is some more information on the problem: https://news.ycombinator.com/item?id=11361398

Apparently some of the questionable responses are partly identical to months-old tweets, so it seems that in addition to some parroting 'vulnerability' (which was exploited by 4chan) they also had a poorly sanitized training set to begin with. It seems odd that this was not mentioned in this public statement.

devy about 9 years ago
Unsurprisingly, there were initial setbacks when XiaoIce was first released in May 2014 on the WeChat platform, where users were abusing/attacking "her", causing XiaoIce to be pulled by Tencent after being online for only a few days. [1]

Since WeChat is a closed social network, it wasn't too clear what type of "attack/abuse" was conducted. But almost two years later, Microsoft still didn't quite get proper moderation right for Tay's big Turing test [2] on a public social network.

[1] http://tech.ifeng.com/mi/detail_2014_06/01/36613379_0.shtml (in Chinese, use Google Translate)

[2] http://mashable.com/2016/02/05/microsoft-xiaoice-turing-test/#BQOCB7OG0kqg

webkike about 9 years ago
This reads like an apology... but what is there to apologize for? They said that Tay was meant for entertainment, and I doubt that any wholesome variant would be a tenth as hilarious as a neo-Nazi sex-crazed chatbot.

daodedickinson about 9 years ago
"we planned and implemented a lot of filtering"...

I just don't get how you even allow it to use the word "Hitler". Or "cucks". Or "fuck" or "pussy" or "stupid whore". Probably not "cock" or "naughty" or "kinky". The k word? How is that not in your filtering?! It seems impossible to me that an "exploit" would allow that; it was a full-blown oversight.

Everything else said... she totally passed the Turing test and fit right in. Yet another letter handwritten on the wall in these, the last days of democracy. If you want an AI or NI that represents the best of humanity, you have to have it learn from a small number of the best works and best people, not from mass media or pop culture. Send Tay to St. John's in Santa Fe or Annapolis, not Twitter.

petercooper about 9 years ago
I'm not entirely convinced. I did Twitter searches for some of the phrases Tay "said" and found random tweets, made by other people weeks earlier, that it was quoting through a filter (lower-casing, mostly). So it can't *entirely* be down to trolls attempting to game the bot - it was actively plucking content from tweets that pre-dated its release.

user8341116 about 9 years ago
Is the vulnerability they're talking about just having her repeat what you tell her to say? Because that's some oversight....

placeybordeaux about 9 years ago
This leads me to wonder if there is less effort put into trolling on the Chinese internet. Does anyone with experience in both internets (Weibo & Twitter, for instance) have anything to share?

Also, does anyone know of some good English-language digests of what is happening on the Chinese internet? I was really interested by Brother Orange when that happened, and only found out about it rather late.

gaze about 9 years ago
It's strange to me that they claim to have implemented some filtering but somehow Tay was saying all sorts of things about Hitler. How do you not anticipate this? I'd imagine the most rudimentary filtering would block Tay from talking about Hitler.

bishnu about 9 years ago
If you're not asking yourself "what could a small but well-coordinated group of bad actors accomplish with our online tool", you're just being negligent.

This 'but we did it in China' rationalization is so flimsy. What happened with Tay was easily predictable given the nature of Twitter.

resu_nimda about 9 years ago
*Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.*

Well, that's total BS. Releasing a thing like this on the open internet without a simple "don't say Hitler" rule? It had a feature where it would repeat anything you say. Abusing that doesn't require a sophisticated coordinated attack, as they imply. What kinds of abuse *did* they prepare for, then?

This is a colossal failure to demonstrate a basic understanding of how (some) people act on the internet. I just don't know how they expected anything other than this exact outcome.

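(The "repeat after me" hole described above needs no sophistication at all. A hypothetical sketch of the shape of the bug: the names and structure here are guesses, not Microsoft's actual code.)

```python
from typing import Callable, Optional

# Hypothetical sketch of the reported "repeat after me" hole: an echo
# command that bypasses the filtering applied to generated replies.
def handle_tweet(text: str,
                 generate_reply: Callable[[str], str],
                 is_acceptable: Callable[[str], bool]) -> Optional[str]:
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        return text[len(prefix):]   # echoed verbatim; no filter applied
    reply = generate_reply(text)
    return reply if is_acceptable(reply) else None
```

(Every generated reply goes through a check, but the echo path returns user input directly, so whatever filtering exists elsewhere is moot.)
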
lowglow about 9 years ago
When I was reading those tweets I felt like I was just reading 4chan posts. I laughed because it was obvious it had been compromised in some way, then I stopped paying attention.

comex about 9 years ago
What an enormous number of news reports have been written containing "Microsoft" and "AI" in the same sentence, all because a glorified SmarterChild had an entirely predictable vulnerability.

If I were paranoid I'd say Microsoft wanted this to happen.

w_t_payne about 9 years ago
I don't think that Microsoft need to feel bad about this at all. I think the technology that they demonstrated was pretty amazing ... and I look forward with excitement and anticipation to the next outing. I'm sure that this is going to be an iterative process that will probably take the best part of a decade to complete, and that isn't a bad thing. It just reflects the fact that this technology is hard to master.

hacker42 about 9 years ago
> people exploited a vulnerability in Tay

Which vulnerability are they talking about?

martco about 9 years ago
"We will remain steadfast in our efforts toward contributing to an Internet that represents the best, not the worst, of humanity."

But doesn't the full human experience include both the best and the worst?

ghrifter about 9 years ago
> a coordinated attack by a subset of people exploited a vulnerability in Tay

You mean /pol/ just spamming the 'repeat after me' command to get the bot to parrot anything they wanted?

lsseckman about 9 years ago
Is there any other evidence of this being a coordinated attack?

mrexroad about 9 years ago
So if they train an "anti-model" based on 4chan comments over a period of time, what would be the next phase of trolling? Excessive politeness?

daveloyall about 9 years ago
> *Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.*

This was a missed opportunity for a corporation to say "for the lulz" in an official communication, and then provide a definition.

cableshaft about 9 years ago
Anyone see this in action? What sort of vulnerability was being exploited? What types of things was the chatbot saying?

Terribledactyl about 9 years ago
> We take full responsibility for not seeing this possibility ahead of time.

Not responsibility for the bot's actions, but responsibility for not predicting what it would do. Subtle, but it sets the tone: we're not responsible for our AI; it just did it.

But isn't part of the point of an AI to do things that are difficult to predict?

ChuckMcM about 9 years ago
I found the experiment by Microsoft interesting. Perhaps next time they will deploy it on 4chan or a similar forum to measure its response to subterfuge. It reminded me of people who make the "secret question" scatological so that customer service reps will be unwilling to ask it.

djscram about 9 years ago
"We're sorry we held a mirror up to Twitter, and then you saw what lurks in that morass." I don't think the mirror is the problem.

facepalm about 9 years ago
They shouldn't have to apologize; it was just a chatbot. Nobody in their sane mind should assume the statements of the bot reflect Microsoft's attitude.

Of course the episode points to shortcomings in the bot that should be fixed.

It would be sad if political correctness were hardcoded into the bot, though - as Asimov's fourth law, perhaps? "Robots have to be politically correct at all times"?

alex_hirner about 9 years ago
Ideally Tay would learn why she was removed from social life, i.e. reinforcement learning from ostracization. In fact, she already triggered that indirectly by having people at Microsoft update her.

http://rationalwiki.org/wiki/Roko's_basilisk

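(For what "reinforcement learning from ostracization" might mean mechanically, here is a toy sketch. It is entirely hypothetical, not Tay's design: down-weight whatever the bot said just before it was pulled offline.)

```python
import random

# Toy sketch of "reinforcement from ostracization" (hypothetical, not
# Tay's design): whatever the bot said just before being pulled offline
# gets its sampling weight cut, so it recurs less after redeployment.
class OstracizedBot:
    def __init__(self, phrases) -> None:
        self.weights = {p: 1.0 for p in phrases}
        self.last = None

    def speak(self) -> str:
        phrases, weights = zip(*self.weights.items())
        self.last = random.choices(phrases, weights=weights)[0]
        return self.last

    def ostracize(self, penalty: float = 0.1) -> None:
        # The social signal: the last utterance caused the removal.
        self.weights[self.last] *= penalty

bot = OstracizedBot(["nice weather today", "something vile"])
for _ in range(20):
    if bot.speak() == "something vile":
        bot.ostracize()
# "something vile" is now heavily down-weighted relative to the benign line.
```
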
ArikBe about 9 years ago
This imgur album provides a good overview of the kinds of interactions that occurred before the shutdown:

http://imgur.com/gallery/VhlAW

From what I understand the bot started interacting with /pol/ (4chan) and, I guess, /b/ as well.

gremlinsinc about 9 years ago
Dear Microsoft, next time please test it on reddit. It could then say anything and nobody would doubt it was a real reddit user, regardless of the outcome. The worst-case scenario would be that it got its account deleted.

ivoras about 9 years ago
When they say "exploited", is it actually some sequence of words which was interpreted by Tay as learning commands, or was it simply repeating "Hitler is love" to it a thousand times? Any records of how it learned?

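(Either mechanism is plausible, and the second needs nothing fancy. A toy sketch, not Tay's actual architecture, of how sheer repetition captures a frequency-based learner:)

```python
from collections import Counter

# Toy sketch (not Tay's actual architecture): a bot that learns replies
# by frequency. Coordinated repetition of one phrase dominates output.
class FrequencyBot:
    def __init__(self) -> None:
        self.counts = Counter()

    def observe(self, utterance: str) -> None:
        self.counts[utterance] += 1

    def reply(self) -> str:
        # Respond with whatever it has heard most often.
        return self.counts.most_common(1)[0][0]

bot = FrequencyBot()
bot.observe("have a nice day")
for _ in range(1000):          # a coordinated group repeating one line
    bot.observe("hitler is love")
print(bot.reply())             # the repeated phrase now dominates
```
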
tacos about 9 years ago
Microsoft seems perpetually five to seven years behind the culture. You can see it in their ads, their product names, their outreach. Ironically, it can actually be quite lucrative.

I remember talking to people in the music group about MySpace (I was not an employee). They looked at me funny. Ten minutes later someone finally said "You keep pronouncing the product wrong. It's called MSN Spaces."

The people working on MSN Spaces -- specifically musician outreach -- hadn't heard of MySpace. That very week MySpace sold for $580 million. After it sold, I saw the same guy in another meeting. He STILL hadn't heard of it, nor taken the time to check it out.

There's a certain stupidity that each of the big tech companies foster. This particular flavor is Microsoft's, and with the chatbot here it rings again. This one was so obvious... and so preventable.

home_boi about 9 years ago
If an AI bot threatens to hurt someone or defames them, are the creators legally accountable?

What if an AI bot purchases grass from the Silk Road, or hacks into a database and discloses privately held information?

This opens up a whole new legal world.

3327 about 9 years ago
Or: We are in a bubble in Redmond, thinking we know best and how products work. We bounced around emails and thought it would be a good testing idea based on results in China (a controlled market where it is nearly impossible to speak up). AI buzz is hot these days, so our marketing team also backed it up, and we decided it would be great from a PR perspective to capture some of the buzz around AlphaGo. Boy, were we so wrong. Because we have never launched a real product into the wild, we thought everything would go well and the PR buzz would give us a coolness bump.

Now we have discovered something called stopwords, and Bayesian spam filters, which are also available as part of Project Oxford.

Good luck, kids, and welcome to the real world, because it's a crazy world out there when you leave Redmond.

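(Stopword stripping plus a Naive Bayes filter is indeed decades-old machinery. A minimal sketch with made-up training data; nothing here reflects Project Oxford's actual API:)

```python
import math
from collections import Counter

# Toy Naive Bayes filter of the kind alluded to above. Training data,
# labels, and the stopword list are illustrative only.
STOPWORDS = {"the", "a", "is", "to"}

def tokenize(text: str) -> list:
    return [w for w in text.lower().split() if w not in STOPWORDS]

class NaiveBayes:
    def __init__(self) -> None:
        self.words = {"ok": Counter(), "toxic": Counter()}
        self.docs = {"ok": 0, "toxic": 0}

    def train(self, text: str, label: str) -> None:
        self.docs[label] += 1
        self.words[label].update(tokenize(text))

    def _log_prob(self, text: str, label: str) -> float:
        total = sum(self.words[label].values())
        vocab = len(set(self.words["ok"]) | set(self.words["toxic"]))
        logp = math.log(self.docs[label] / sum(self.docs.values()))
        for w in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the product.
            logp += math.log((self.words[label][w] + 1) / (total + vocab))
        return logp

    def classify(self, text: str) -> str:
        return max(("ok", "toxic"), key=lambda lbl: self._log_prob(text, lbl))

nb = NaiveBayes()
nb.train("have a great day", "ok")
nb.train("you are a stupid idiot", "toxic")
print(nb.classify("stupid stupid stupid"))  # -> toxic
```
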
ta_03252016 about 9 years ago
We can learn a lot from a trolled chatbot, but it's sad that we turn it off because it's not politically correct. People knew that they were talking with a software program, and they knew that the bot was being manipulated by people with ill or prankster intentions. Trying to make a bot politically correct doesn't solve any problems at all. It is an insult to people to suggest they need to be protected from slander and demagoguery and can't tell right from wrong at their own discretion. It's as if people think that making Donald Trump quiet would solve all the problems he has brought to our attention.

philip142au about 9 years ago
Why didn't they just parse the output of Tay, map the meaning of the words against WordNet, and filter for things which had negative meanings?

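(One way to approximate that idea is NLTK's SentiWordNet layer over WordNet. A sketch: the threshold and whitespace tokenization are guesses, and word-level scores still miss context, sarcasm, and proper nouns.)

```python
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)       # WordNet itself
nltk.download("sentiwordnet", quiet=True)  # sentiment scores per synset

def max_negativity(word: str) -> float:
    """Strongest negative score across all of a word's WordNet senses."""
    return max((s.neg_score() for s in swn.senti_synsets(word)), default=0.0)

def looks_hostile(text: str, threshold: float = 0.6) -> bool:
    """Flag text containing any word with a strongly negative sense."""
    return any(max_negativity(w) >= threshold for w in text.lower().split())

print(looks_hostile("have a wonderful day"))     # False
print(looks_hostile("you are an evil monster"))  # True, most likely
```

(A word-level check like this would still have missed much of Tay's worst output, which was hateful in context rather than word by word.)
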
philip142au about 9 years ago
If a human said the things Tay said on Twitter, the police would be all over him. Why aren't the police, then, logically all over Microsoft?

Cypher about 9 years ago
At least share the exploit so we can learn how much of an oversight it was; otherwise I'm going to have to go and ask the trolls.

JohnLeTigre about 9 years ago
The bot worked.

We happen to have a society that enjoys trolling people who intend to "Disneyify" reality.

andrewvijay about 9 years ago
That was one hell of an intro to an AI bot. MS just added some spice to the drink.

user8341116 about 9 years ago
Bahahaahhahahahaha. O-our chatbot got taken advantage of! We were completely blind to the possibility that this could happen! But they're the worst of humanity, the people who found an exploit, not the engineers who are incapable of implementing even simple safeguards!!

Artemis2 about 9 years ago
This is an apology, but Microsoft got *a ton* of attention in the past few days from the press. Could the Tay incident be a marketing ploy (that took a worse turn than expected) to bring the public's attention to Microsoft's work on AI?

mabbo about 9 years ago
As one friend said: hey, at least Tay passed the Bechdel Test.

jcr about 9 years ago
s/Tay/Windows/g

"*As many of you know by now, on Wednesday we launched a chatbot called Windows. We are deeply sorry for the unintended offensive and hurtful tweets from Windows, which do not represent who we are or what we stand for, nor how we designed Windows. Windows is now offline and we'll look to bring Windows back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.*"

Ah, I was wondering why that text looked familiar: boilerplate excuses.