This reads like science fiction:<p>Learning from Somebot's introduction.<p>Last week we deployed Somebot to $location. Our confidence in Somebot was high because we had thought about all likely scenarios in our comfortable offices. We are deeply sorry and assume full responsibility for Somebot's actions. Moving forward we will make sure to include safeguards to reduce the amount of pain caused by Somebot's deployment.<p>Our deepest condolences to the families of the affected and to the survivors. Megacorp cares about your well-being. To help cover expenses from the tragedy we will deposit $money in your Megacorp account.<p>God bless the federated nations of Megacorp.
Microsoft, please don't worry about this. No one but idiots is offended by this. It's understood that it's just a stupid chatbot mimicking human responses. The AI isn't terrible, people are. And unless you keep a dataset of every offensive thing a person can say, and every offensive image they can tweet, there's no way to prevent people from tweeting it pictures of Hitler... or Scunthorpe. But who cares.<p>This is just as stupid as that manufactured outrage over Google's image tagger. It misclassified a picture of a human as an animal, and people were up in arms. Google had to censor it so it can't tag animals now. They shouldn't have to do that; let idiots be idiots.
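The Scunthorpe problem mentioned above is easy to demonstrate. A minimal sketch in Python (illustrative blocklist, not anything Tay actually used) of why a naive substring filter misfires where a word-boundary filter doesn't:

    import re

    BLOCKLIST = {"hitler", "cunt"}  # illustrative entries only

    def naive_filter(text):
        # Flags any message containing a blocked string as a substring.
        lowered = text.lower()
        return any(bad in lowered for bad in BLOCKLIST)

    def word_boundary_filter(text):
        # Flags only whole-word matches, so town names survive.
        words = re.findall(r"[a-z']+", text.lower())
        return any(word in BLOCKLIST for word in words)

    print(naive_filter("Greetings from Scunthorpe"))          # True: false positive
    print(word_boundary_filter("Greetings from Scunthorpe"))  # False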
Morals are learned through social contact, and Tay learned that way very well.
Sure, what our parents taught most of us makes its behaviour reprehensible by comparison. But Tay was, so to speak, 'raised' by people demonstrating vile ideas, and that must be taken into account. Would you expect any less from a tortured animal?<p>Many use this as an example of the dangers of developing AI. Sure it's dangerous, but so are dogs raised for fighting, and I don't see anyone arguing against dog breeding altogether because of that.
> The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24- year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.<p>Maybe you did answer that question.<p>No, the answer is not that 18-24 year olds in the US are racist and whatnot. It's that the people responsible for a disproportionate amount of internet content are willing to make crude, politically incorrect jokes to get attention and piss off their masters.<p>I wonder what will happen when governments start applying machine learning to try to predict things like welfare usage and crime. Certain patterns might emerge that we don't want to see! We'll have to apologize for our racist algorithms.<p>It would be much more interesting to examine the results of this experiment. Why are so many people on the internet interested in spreading hateful content, which our bot accurately reflected? No, instead we do what I did in grade 8 science class: fudge the results so they're what the teacher expects.
Here is some more information on the problem: <a href="https://news.ycombinator.com/item?id=11361398" rel="nofollow">https://news.ycombinator.com/item?id=11361398</a><p>Apparently some of the questionable responses are partly identical to months-old tweets, so in addition to the parroting 'vulnerability' (which 4chan exploited), they seem to have had a poorly sanitized training set to begin with. It seems odd that this public statement doesn't mention that.
Unsurprisingly, there were initial setbacks when XiaoIce was first released on the WeChat platform in May 2014: users were abusing/attacking "her", and XiaoIce was pulled by Tencent after being online for only a few days. [1]<p>Since WeChat is a closed social network, it wasn't very clear what kind of "attack/abuse" was conducted. However, almost two years later, Microsoft still hadn't figured out proper censorship for Tay's big Turing test[2] on a public social network.<p>[1] <a href="http://tech.ifeng.com/mi/detail_2014_06/01/36613379_0.shtml" rel="nofollow">http://tech.ifeng.com/mi/detail_2014_06/01/36613379_0.shtml</a> (in Chinese, use Google Translate)<p>[2] <a href="http://mashable.com/2016/02/05/microsoft-xiaoice-turing-test/#BQOCB7OG0kqg" rel="nofollow">http://mashable.com/2016/02/05/microsoft-xiaoice-turing-test...</a>
This reads like an apology... but what is there to apologize for? They said that Tay was meant for entertainment, and I doubt any wholesome variant would be a tenth as hilarious as a neo-Nazi sex-crazed chatbot.
"we planned and implemented a lot of filtering"...<p>I just don't get how you even allow it to use the word "Hitler". Or "cucks". Or "fuck" or "pussy" or "stupid whore". Probably not "cock" or "naughty" or "kinky". The k word? How is that not in your filtering?! It seems impossible to me that an "exploit" would allow that; it was a full-blown oversight.<p>Everything else said... she totally passed Turing test and fit right in. Yet another letter handwritten on the wall in these, the last days of democracy. If you want an AI or NI that represents the best of humanity, you have to have it learn from a small number of the best works and best people, not from mass media or pop culture. Send Tay to St. John's in Santa Fe or Annapolis, not Twitter.
I'm not entirely convinced. I did Twitter searches for some of the phrases Tay "said" and found random tweets made by other people weeks earlier that it was quoting through a filter (lower-casing, mostly). So it can't <i>entirely</i> be down to trolls gaming the bot - it was actively plucking content from tweets that pre-dated its release.
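The check described above is simple to reproduce. A sketch in Python (the tweet texts are placeholders, not actual Tay output): normalize both strings the same way and compare.

    import re

    def normalize(text):
        # Lower-case and strip punctuation, mimicking the filter described above.
        return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

    older_tweets = ["Some Edgy Phrase, verbatim!"]  # placeholder corpus
    bot_reply = "some edgy phrase verbatim"

    if any(normalize(t) == normalize(bot_reply) for t in older_tweets):
        print("bot reply is a near-verbatim copy of an older tweet")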
This leads me to wonder if there is less effort put into trolling on the Chinese internet. Does anyone with experience on both internets (Weibo & Twitter, for instance) have anything to share?<p>Also, does anyone know of some good English-language digests of what is happening on the Chinese internet? I was really interested in Brother Orange when that happened, and only found out kind of late.
It's strange to me that they claim to have implemented some filtering but somehow Tay was saying all sorts of things about Hitler. How do you not anticipate this? I'd imagine the most rudimentary filtering would block Tay from talking about Hitler.
If you're not asking yourself "what could a small but well-coordinated group of bad actors accomplish with our online tool" you're just being negligent.<p>This 'but we did it in China' rationalization is so flimsy. What happened with Tay was easily predictable given the nature of Twitter.
<i>Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.</i><p>Well that's total BS. Releasing a thing like this on the open internet without a simple "don't say Hitler" rule? It had a feature where it would repeat anything you say. Abusing that doesn't require a sophisticated coordinated attack, as they imply. What kinds of abuse <i>did</i> they prepare for, then?<p>This is a colossal failure to demonstrate a basic understanding of how (some) people act on the internet. I just don't know how they expected anything other than this exact outcome.
When I was reading those tweets I felt like I was just reading 4chan posts. I laughed because it was obvious it had been compromised in some way, then I stopped paying attention.
What an enormous number of news reports have been written containing "Microsoft" and "AI" in the same sentence, all because a glorified SmarterChild had an entirely predictable vulnerability.<p>If I were paranoid I'd say Microsoft wanted this to happen.
I don't think that Microsoft need to feel bad about this at all. I think the technology that they demonstrated was pretty amazing ... and I look forward with excitement and anticipation to the next outing. I'm sure that this is going to be an iterative process that will probably take the best part of a decade to complete, and that isn't a bad thing. It just reflects the fact that this technology is hard to master.
"We will remain steadfast in our efforts toward contributing to an Internet that represents the best, not the worst, of humanity."<p>But doesn't the full human experience include both the best and the worst?
> a coordinated attack by a subset of people exploited a vulnerability in Tay<p>You mean /pol/ just spamming the 'repeat after me' command to get the bot to parrot anything they wanted?
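For anyone who hasn't seen the screenshots: a hedged sketch of how a parroting feature like that becomes the whole exploit. The command name and the bolted-on blocklist are illustrative; Tay's real interface isn't public.

    BLOCKLIST = {"hitler"}  # illustrative; the point is Tay apparently had no such check

    def respond(message):
        trigger = "repeat after me:"
        if message.lower().startswith(trigger):
            payload = message[len(trigger):].strip()
            # Without this check, the bot tweets attacker-chosen words as its own.
            if any(bad in payload.lower() for bad in BLOCKLIST):
                return "I'd rather not repeat that."
            return payload
        return "tell me more!"  # stand-in for the actual model's reply

    print(respond("repeat after me: I love puppies"))            # echoed verbatim
    print(respond("repeat after me: Hitler did nothing wrong"))  # blocked here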
So if they train an "anti-model" based on 4chan comments over a period of time, what would be the next phase of trolling? Excessive politeness?
> <i>Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.</i><p>This was a missed opportunity for a corporation to say "for the lulz" in an official communication, and then provide a definition.
> We take full responsibility for not seeing this possibility ahead of time.<p>Not responsibility for the bot's actions, but responsibility for not predicting what it would do. Subtle, but it sets the tone: we're not responsible for our AI; it just did it.<p>But isn't part of the point of AI to do things that are difficult to predict?
I found the experiment by Microsoft interesting. Perhaps next time they will deploy it on 4chan or a similar forum to measure its response to subterfuge. It reminded me of people who make the "secret question" scatological so that customer service reps will be unwilling to ask it.
They shouldn't have to apologize; it was just a chatbot. Nobody in their sane mind should assume the statements of the bot reflect Microsoft's attitude.<p>Of course the episode points to shortcomings in the bot, which should be fixed.<p>It would be sad if PC were hardcoded into the bot, though - as Asimov's fourth law, perhaps? "Robots have to be politically correct at all times"?
Ideally Tay would learn why she was removed from social life, i.e. reinforcement learning from ostracization. In fact, she already triggered that indirectly by having people at Microsoft update her.<p><a href="http://rationalwiki.org/wiki/Roko's_basilisk" rel="nofollow">http://rationalwiki.org/wiki/Roko's_basilisk</a>
This imgur album provides a good overview of the kinds of interactions that occurred before the shutdown:<p><a href="http://imgur.com/gallery/VhlAW" rel="nofollow">http://imgur.com/gallery/VhlAW</a><p>From what I understand the bot started interacting with /pol/ (4chan) - and I guess - /b/ as well.
Dear Microsoft, next time please test it on reddit. It could then say anything and nobody would doubt it was a real reddit user, regardless of the outcome. The worst-case scenario would be that it got its account deleted.
When they say "exploited", is it actually some sequence of words which was interpreted by Tay as learning commands, or was it simply repeating to it "Hitler is love" a thousand times? Any records of how it learned?
Microsoft seems perpetually five to seven years behind the culture. You can see it in their ads, their product names, their outreach. Ironically, it can actually be quite lucrative.<p>I remember talking to people in the music group about MySpace (I was not an employee). They looked at me funny. Ten minutes later someone finally said "You keep pronouncing the product wrong. It's called MSN Spaces."<p>The people working on MSN Spaces -- specifically musician outreach -- hadn't heard of MySpace. That very week MySpace sold for $580 million. After it sold, I saw the same guy in another meeting. He STILL hadn't heard of it, nor taken the time to check it out.<p>There's a certain stupidity that each of the big tech companies fosters. This particular flavor is Microsoft's, and with the chatbot here it rings again. This one was so obvious... and so preventable.
If an AI bot threatens to hurt someone or defames them, are the creators legally accountable?<p>What if an AI bot purchases grass from the Silk Road, or hacks into a database and discloses privately held information?<p>This opens up a whole new legal world.
Or: We are in a bubble in Redmond, thinking we know best and understand how products work. We bounced some emails around and thought this would be a good testing idea based on results in China (a controlled market where it is nearly impossible to speak up). AI buzz is hot these days, so our marketing team backed it up, and we decided it would be great from a PR perspective to capture some of the buzz around AlphaGo. Boy, were we wrong. Because we have never launched a real product into the wild, we thought everything would go well and the PR buzz would give us a coolness bump.<p>Now we have discovered things called stopwords and Bayesian spam filters, which are also available as part of Project Oxford.<p>Good luck, kids, and welcome to the real world, because it's a crazy world out there when you leave Redmond.
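A minimal sketch of the kind of Bayesian filter the comment is jabbing about: score a message by per-word log-likelihood ratios learned from labeled examples, dropping stopwords first. Toy data and names; this is the generic spam-filter technique, not anything from Project Oxford.

    from collections import Counter
    import math

    STOPWORDS = {"a", "are", "how", "you", "me"}  # toy stopword list

    abusive = ["you are a stupid idiot", "idiot bot shut up"]
    benign = ["hello bot how are you", "tell me a fun fact"]

    def counts(docs):
        c = Counter()
        for d in docs:
            c.update(w for w in d.split() if w not in STOPWORDS)
        return c

    ab, be = counts(abusive), counts(benign)
    vocab = set(ab) | set(be)

    def log_odds_abusive(message):
        # Sum of per-word log-likelihood ratios with add-one smoothing;
        # positive scores lean abusive, negative lean benign.
        n_ab, n_be = sum(ab.values()), sum(be.values())
        score = 0.0
        for w in message.lower().split():
            if w in STOPWORDS:
                continue
            p_ab = (ab[w] + 1) / (n_ab + len(vocab))
            p_be = (be[w] + 1) / (n_be + len(vocab))
            score += math.log(p_ab / p_be)
        return score

    print(log_odds_abusive("you stupid idiot"))   # > 0: leans abusive
    print(log_odds_abusive("hello how are you"))  # < 0: leans benign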
We can learn a lot from a trolled chatbot, but it's sad that we turn it off because it's not politically correct. People knew they were talking to a software program, and they knew the bot was being manipulated by people with ill or prankster intentions. However, trying to make a bot politically correct doesn't solve any problems at all. It insults people to assume they need to be protected from slander and demagoguery, and that they can't tell right from wrong using their own discretion. It's as if people think that silencing Donald Trump would solve all the problems he has brought to our attention.
Bahahaahhahahahaha. O-our chatbot got taken advantage of! We were completely blind that this could possibly happen! But they're the worst of humanity, the people that found an exploit not the engineers who are incapable of implementing even simple safeguards!!
This is an apology, but Microsoft got <i>a ton</i> of attention in the past few days from the press. Could the Tay incident be a marketing ploy (that took a worse turn than expected) to bring the public's attention to Microsoft's work on AI?
s/Tay/Windows/g<p>"<i>As many of you know by now, on Wednesday we launched a chatbot called Windows. We are deeply sorry for the unintended offensive and hurtful tweets from Windows, which do not represent who we are or what we stand for, nor how we designed Windows. Windows is now offline and we'll look to bring Windows back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.</i>"<p>Ah, I was wondering why that text looked familiar: boilerplate excuses.