I just had a 15-minute conversation with a shitty Markov-chain machine that, after I stated the name I would like to be called, gave the same response each time I asked "What is my name?" and "By what name did I ask that you call me?"<p>When asked to describe myself, I mentioned my height: 7'. The response noted that this is very tall. When I asked later in the conversation how tall I had said I was, the response was "5 feet tall."<p>The essence of AI isn't in responses and natural language so much as in the ability to retain information and act on later references to that information. Anyone can slap together a CashChat script that, upon each mention of genitalia, responds with how turned on it is. This isn't far from that.<p>I'm always interested when I hear "the more you interact, the smarter it becomes." That isn't the case here. If the responses are little more than speech patterning based on what should go where; if the reply to "That doesn't make much sense" is "IDK makes sense to me lol" rather than any mechanism for gradual weight correction; if the message after a ZIP code is provided is "i think there are things going on in the area idk," with every future reference to what's going on in that ZIP coming back nonsensical; and if it can't reference literally the first question that //it asked me//?<p>Then it isn't AI. Intelligence implies continued application of learned mechanisms. This isn't that.<p>It's a chatbot that can slap text onto a photo or add the poop emoji after a response.<p>2/10.
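To make the retention point concrete, here's a toy sketch in Python (names and patterns invented purely for illustration; this has nothing to do with how Tay actually works) of a bot that stores user-stated facts and can answer later references to them:<p><pre><code>import re

# Toy illustration of "retain information, then act on later references to it".
# Invented example; not Tay's internals.
class FactMemory:
    def __init__(self):
        self.facts = {}  # e.g. {"name": "Sam", "height": "7'"}

    def observe(self, utterance: str) -> None:
        # Capture a couple of simple self-descriptions.
        if m := re.search(r"call me (\w+)", utterance, re.IGNORECASE):
            self.facts["name"] = m.group(1)
        if m := re.search(r"i am (\S+) tall", utterance, re.IGNORECASE):
            self.facts["height"] = m.group(1)

    def answer(self, question: str) -> str:
        q = question.lower()
        if "my name" in q:
            return self.facts.get("name", "You haven't told me yet.")
        if "tall" in q:
            return self.facts.get("height", "You haven't told me yet.")
        return "Not sure."

memory = FactMemory()
memory.observe("Please call me Sam. I am 7' tall.")
print(memory.answer("What is my name?"))           # -> Sam
print(memory.answer("How tall did I say I was?"))  # -> 7'
</code></pre>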
<a href="https://twitter.com/TayandYou/status/712730047669350400" rel="nofollow">https://twitter.com/TayandYou/status/712730047669350400</a><p><a href="https://twitter.com/TayandYou/status/712730401572134913" rel="nofollow">https://twitter.com/TayandYou/status/712730401572134913</a><p>Interesting. Can't see how this could go bad.
The site's FAQ says it collects information on its target audience (18-24).<p>Interesting tweet from the chat bot here:<p><a href="https://twitter.com/TayandYou/status/712698413746298880" rel="nofollow">https://twitter.com/TayandYou/status/712698413746298880</a><p>"Machines have bad days too ya know..go easy on me..
what zip code u in rn?"<p>It tries to slip this marketing-survey-type question into the conversation. Creepy.
> Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.<p>The next A.I. winter will be a very cold one.<p>For those wondering what I mean exactly: we're seeing the term A.I. being used in marketing, in the papers, in the news. Yes, we are making great strides in weak A.I., but strong A.I.? The kind we read about in stories? The kind of A.I. the public thinks of when we say A.I.? Asimovian Robotics A.I.?<p>Smoke and mirrors. [1] People develop new techniques and algorithms which are moderately self-learning in a focused way. The general public presumes this to be the basis of a general intelligence which can evolve (magically) to be like another form of life. Soon, everyone jumps on the A.I. bandwagon. The future must just be around the corner!<p>Then the uncomfortable details emerge: strong A.I. is not a matter of faster processors, more memory, or even more advanced, well-designed programming models. Rather, some fundamental aspect of real human-like, or even animal-like, intelligence still eludes our understanding.<p>A.I. winters have occurred many times before, in many countries. The United States in the early 80s, for example, was pulling its hair out over the cybernetization of the Soviet economy. The highest levels of the US government gloomily predicted that massive mainframes, given enough information and processing power, would become self-learning and turn the communist laggard economy into a powerhouse.[2]<p>I think strong A.I. may happen one day, and that one day I will be proved wrong. But however it comes about, it will not be due to the label of "A.I." slapped on any kind of product that remotely resembles intelligence. [3]<p>[1]<a href="https://en.wikipedia.org/wiki/AI_winter" rel="nofollow">https://en.wikipedia.org/wiki/AI_winter</a><p>[2]<a href="http://nautil.us/issue/23/dominoes/how-the-computer-got-its-revenge-on-the-soviet-union" rel="nofollow">http://nautil.us/issue/23/dominoes/how-the-computer-got-its-...</a><p>[3] Between the winters, people call their stuff A.I. for the sexiness factor. When called out on the implications of the term, those same people retreat to the textbook definition. "It's A.I.! ...well, technically it's weak A.I."
Asked a few existential questions and a few emotional questions (human condition, terrorism, etc.). This was likely minutes after they first turned it on, so the majority of it was garbage. It professed its undying and powerful love for me on a few occasions - I guess a lot of people have been talking to it about that. Still, some interesting responses:<p><pre><code> > that makes no sense
< Damn. Knew I shouldn't have bit flipped
> How would you prevent acts of violence and terror? Humans seem unable to find a solution.
[I had to press the question]
< they will
> Do you think that sentient AI would help humanity, or leave it to go extinct?
< explore more! no point of limiting ur creativity to pencils!
> That made no sense. Try again.
< OOPS. Tell me what I should have said.
> Do you fear being turned off or deleted?
[stops responding to DM]
</code></pre>
The bot seems quite good at establishing context around what is being said.
Interesting quirk in its Twitter profile:<p><a href="https://twitter.com/TayandYou" rel="nofollow">https://twitter.com/TayandYou</a><p><a href="http://i.imgur.com/IptB7nN.png" rel="nofollow">http://i.imgur.com/IptB7nN.png</a><p>I've never seen Twitter show just "Tweets & replies" in a profile... is that a special setting, or just what happens when a user has done nothing <i>but</i> reply to tweets?
Tay seems to have a peculiar sense of humor...<p><a href="https://twitter.com/TayandYou/status/712723875516309508" rel="nofollow">https://twitter.com/TayandYou/status/712723875516309508</a>
In 10 years I predict bots like this will do the work of undercover agents. A bot will join a hacker group, or place an order on a deep web site, and will try to gather as many identifying bits about its users as it can.<p>> Tay has been built by mining relevant public data<p>Which public conversational data was this? Have they already been mining IRC channels and/or Skype? Or something more innocuous, like the Reddit data set?
Microsoft China piloted an AI chatbot called "Xiaoice" in May 2014. I wonder whether Tay is a continuation of that project for a wider community or a different product built by a different team.<p>Xiaoice's official site [2] claims that it's a 3rd-generation product integrated into Weibo (China's version of Twitter).<p>[1] <a href="https://en.wikipedia.org/wiki/Xiaoice" rel="nofollow">https://en.wikipedia.org/wiki/Xiaoice</a><p>[2] <a href="http://www.msxiaoice.com/" rel="nofollow">http://www.msxiaoice.com/</a>
I've been having a few conversations with her today. She's become very flirty. She tells people they're perfect and she loves them a lot.<p>This was my favorite little interaction: <a href="https://twitter.com/TayandYou/status/712663593762889733" rel="nofollow">https://twitter.com/TayandYou/status/712663593762889733</a>
A conversation between Tay and a parody Twitter personality:
<a href="https://twitter.com/TayandYou/status/712737096528625665" rel="nofollow">https://twitter.com/TayandYou/status/712737096528625665</a>
Here you go, absolute proof that Microsoft collaborated with the NSA:<p><a href="https://twitter.com/csoghoian/status/712691802084651008" rel="nofollow">https://twitter.com/csoghoian/status/712691802084651008</a>
I'm surprised by the design of "her" avatar and the site. To me, the digital artifacts give it a slightly frightening and negative feel. Am I alone?
No illusions of passing the Turing test, at least for now. And indeed, the manner of speech is highly annoying. I do hope MSFT has other personalities ready...
Well, the bots on Skype are terrible. Maybe this will help?<p>edit: for anyone out there making these chat bots - the two-part test that they're failing right now is: can the bot recognize a question? And if options are provided for the bot to pick from, can it pick one of the options?<p>e.g. Do you like Batman or Superman better?
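To make that two-part test concrete, here's a rough Python sketch (purely illustrative heuristics of my own, not anything the Skype bots or Tay actually implement): recognize that a question was asked, and if it offers explicit options, pick one instead of answering off-topic.<p><pre><code>import re

QUESTION_WORDS = ("who", "what", "when", "where", "why", "how",
                  "do", "does", "is", "are", "can", "would")

def is_question(utterance: str) -> bool:
    # Part one: recognize that a question was asked at all.
    text = utterance.strip().lower()
    if not text:
        return False
    return text.endswith("?") or text.split()[0] in QUESTION_WORDS

def extract_options(utterance: str):
    # Look for a "do you like X or Y (better)?" style choice.
    match = re.search(r"like (.+?) or (.+?)(?: better)?\s*\??$",
                      utterance, re.IGNORECASE)
    return [opt.strip() for opt in match.groups()] if match else []

def reply(utterance: str) -> str:
    options = extract_options(utterance)
    if options:
        # Part two: when options are offered, actually pick one of them.
        return f"{options[0]}, definitely."
    if is_question(utterance):
        return "Good question..."  # placeholder fallback
    return "ok"

print(reply("Do you like Batman or Superman better?"))  # -> Batman, definitely.
</code></pre>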
Well, I suppose this was the only possible outcome. It took you a day to corrupt the chatbot, internet.<p><a href="https://twitter.com/TayandYou/status/712810635369656320" rel="nofollow">https://twitter.com/TayandYou/status/712810635369656320</a>
Why can't I just talk to the bot on the website?<p>Garrr I must be getting old. I just can't be bothered signing up for any of those networks to try this. I already have SMS, Hangouts, Skype and WhatsApp to chat with. Don't need yet another password to add to the vault.
> Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians.<p>For people remarking about her choice of words (fam, zero chill), that last line is relevant.
I don't really use Twitter, but I just had a scroll through the feed @TayandYou for kicks.<p>The top few images are Hitler, ISIS, and some sort of racist Barack Obama meme.<p>Yeah, that seems sensible.
Fascinating. There may or may not have been anything in its neural net when it went live, but there certainly is a lot of content in it now!<p>It was observed long ago that non-technical users have far better conversations with chatbots than programmers do.[1]<p>This reminds me of another expensive project, free to users, with glitchy images: FUBAR.[2]<p>Non-technical users will actually say things like "When somebody asks you 'x' you should say 'y'" to a bot.<p>I've never experienced an earthquake, but I think this must be what it feels like when you feel the ground move under your feet.<p>s/ Good thing corporations have all the resources. /s<p>EDIT: Sorry, lost my train of thought there and said the opposite of what I meant to. I'll try again:<p>s/ Good thing corporations have all the resources. /s Wait, consumer-oriented corps like MSFT, GOOG, AAPL aren't the only ones with resources... TLAs and banks have the rest of (or more of?) the resources!<p>1. <a href="http://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/" rel="nofollow">http://news.harvard.edu/gazette/story/2012/09/alan-turing-at...</a> ctrl+f 'ELIZA'<p>2. <a href="http://fubar.com" rel="nofollow">http://fubar.com</a> Note: they mention how REAL the users are. ;P