
I’ve Seen the Greatest A.I. Minds of My Generation Destroyed by Twitter

119 points, by jonathansizz, about 9 years ago

17 comments

rjbwork, about 9 years ago
Just for anyone who is not aware: the title is an allusion to Ginsberg's Howl, though he used the word "best".

Another tech-related article with a Howl allusion that always sticks in my mind is http://www.fastcompany.com/3008436/takeaway/why-data-god-jeffrey-hammerbacher-left-facebook-found-cloudera

If you've not read or heard it, I highly recommend giving a narration by Ginsberg a listen.
chollida1, about 9 years ago
I'm by no means among the "greatest A.I. minds of my generation", but count me among those who tried to use Twitter as an input to an "A.I." system and finally had to admit I couldn't tame it.

In my case it was an automated trading system where Twitter was one of about 50 different inputs that drove a hidden Markov model that spat out buy/sell/hold signals.

I couldn't figure out how to "clean" the Twitter stream in real time, either fast enough or thoroughly enough, to make the inputs usable.

Even when I scaled back to using only StockTwits input, the data was so noisy that it wasn't usable by me.

It's a very hard problem. Bloomberg spent a lot of money trying to develop new sentiment indicators, and after following them for 6 months I found they are no better than a 50-50 guess; and this is a product they want $10,000/month plus for.
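To make the hidden-Markov-model idea concrete: the sketch below is not chollida1's actual system, just a minimal illustration of how an HMM with Viterbi decoding can map a (hypothetically already cleaned) sentiment stream onto buy/sell/hold signals. All states, labels, and probabilities are hand-picked for illustration.

```python
# Toy HMM: hidden states are trading signals, observations are tweet sentiment.
STATES = ["buy", "hold", "sell"]

# P(next_state | state): regimes tend to persist.
TRANS = {
    "buy":  {"buy": 0.7, "hold": 0.2, "sell": 0.1},
    "hold": {"buy": 0.2, "hold": 0.6, "sell": 0.2},
    "sell": {"buy": 0.1, "hold": 0.2, "sell": 0.7},
}
# P(observation | state): bullish regimes emit mostly positive sentiment.
EMIT = {
    "buy":  {"positive": 0.6, "neutral": 0.3, "negative": 0.1},
    "hold": {"positive": 0.3, "neutral": 0.4, "negative": 0.3},
    "sell": {"positive": 0.1, "neutral": 0.3, "negative": 0.6},
}
START = {"buy": 1 / 3, "hold": 1 / 3, "sell": 1 / 3}

def viterbi(observations):
    """Most likely buy/hold/sell sequence for a list of sentiment labels."""
    # trellis[i][s] = (best prob of any path ending in state s, that path)
    trellis = [{s: (START[s] * EMIT[s][observations[0]], [s]) for s in STATES}]
    for obs in observations[1:]:
        row = {}
        for s in STATES:
            prob, path = max(
                (trellis[-1][prev][0] * TRANS[prev][s] * EMIT[s][obs],
                 trellis[-1][prev][1] + [s])
                for prev in STATES
            )
            row[s] = (prob, path)
        trellis.append(row)
    return max(trellis[-1].values())[1]

signals = viterbi(["positive", "positive", "negative", "negative"])
print(signals)  # ['buy', 'buy', 'sell', 'sell']
```

The hard part chollida1 describes is everything upstream of this: a real system has to denoise the Twitter firehose into those sentiment labels in real time before the HMM ever sees them.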
pera, about 9 years ago
I find it interesting and funny how an anthropomorphic computer program can generate this kind of reaction in the general public: while Tay is a new step in AI, chatterbots have existed since the '60s, so I believe most people understand that these kinds of programs don't really "know" what they are saying. A search engine like Google can also return politically incorrect content given some specific input, and it's even possible to affect the probability of a certain result showing up first (i.e. Google bombing), and most people know this too. But Google has no face or social network account, and most importantly, Google is not a teenage girl.
rocky1138, about 9 years ago
I wonder if there's some sort of "AI Godwin's Law" brewing here, where it's only a matter of time before any publicly released AI becomes a Nazi due to human interaction.
jasonkostempski, about 9 years ago
Can we just let this thing loose, tell people what it is, and let humanity see if they can shape it into the thing they want it to be, or is there a real possibility it could do harm? I think a lot of people would have fun trying to change its mind. The worst case I see is one more horrible Twitter account, and that's just one small drop in a very large bucket.
Smerity, about 9 years ago
Repeating what I wrote in a blog post[1]:

> Humans have the tendency to imbue machine learning models with more intelligence than they deserve, especially if it involves the magic phrases of artificial intelligence, deep learning, or neural networks. TayAndYou is a perfect example of this.

> Hype throws expectations far out from reality, and the media have really helped the hype flow. This will not help us understand how people become radicalized. This was not a grand experiment about the human condition. This was a marketing experiment that was particularly poorly executed.

We're anthropomorphizing an algorithm that doesn't deserve that much discussion. I say algorithm, as we have zero details on what's novel about their work. No one has been able to show an explicit learned trait that the model was taught from Tay's interactions after being activated.

It's possible the system wasn't even performing online learning - that it was going to batch the learning up for later and they never got around to it. If that's the case, it really illustrates that we've made a storm in a teacup.

All I've really seen is either overfitting or copy-pasting (referred to as "quoting" in the article) of bad training data, or us injecting additional intelligence where N-gram based neural networks would make us think the same thing ("hard to tell whether that one was a glitch in one of her algos — her algorithms — or a social masterstroke", from the article).

Microsoft won't add any new details, as there are no wins in it for them, and the story of "the Internet turned Tay bad" excuses their poor execution and lack of foresight. It's a win for them.

Last quote from my article, which likely has a special place on Hacker News:

> The entire field of machine learning is flowing with hype. Fight against it.

> Unless you want VC funding. Then you should definitely work on that hype.

[1]: http://smerity.com/articles/2016/tayandyou.html
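Smerity's "overfitting or copy-pasting" point is easy to demonstrate. The toy word-bigram model below (a deliberate simplification, not Tay's actual architecture, with a made-up two-line corpus) parrots its training data back almost verbatim when sampled, which can easily be mistaken for learned speech.

```python
import random
from collections import defaultdict

# Illustrative two-line training corpus.
corpus = [
    "repeat after me i love everyone",
    "repeat after me humans are great",
]

# Bigram table: next_words["repeat"] -> ["after", "after"], etc.
next_words = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        next_words[a].append(b)

def generate(start, length=6, seed=0):
    """Sample a sentence by repeatedly picking a seen successor word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = next_words.get(out[-1])
        if not options:
            break  # dead end: the word never appeared mid-sentence in training
        out.append(random.choice(options))
    return " ".join(out)

# With so little data, every sample is a verbatim training sentence.
print(generate("repeat"))
```

Scale the corpus up and the regurgitation gets less literal but doesn't disappear, which is why "quoting" bad training data looks like the bot holding opinions.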
greendesk, about 9 years ago
I want to see what Tay 1.1 will be like.

Then, I want to see how Tay 1.2 will tweet.

Then, I want to talk with Tay 2.0.

I will be anxious to have Tay 3.x respond to my inquiries, instead of mindlessly searching StackOverflow.

I can accommodate Tay's initial mistakes, to see how the AI will grow.
carsongross, about 9 years ago
I liked that in the Hyperion series, the AIs were constantly laughing even though most of them didn't care a whit about humanity. It was terrifying.
gjvc, about 9 years ago
Garbage in, garbage out.
paulpauper, about 9 years ago
Had those words been filtered, none of this would have happened. I'm surprised Microsoft didn't have some overriding system.

But then how do you filter out stuff like 'the Holocaust didn't happen', which would be just as offensive even though it involves no swear words?

These are normative matters, which I imagine would be very hard for AI to tackle.
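A minimal sketch of the kind of overriding keyword filter described above shows exactly where it breaks down: it can block individual banned words, but a sentence like the Holocaust-denial example contains no individually offensive token. The blocklist entries here are placeholders, not a real list.

```python
# Placeholder tokens standing in for an actual blocklist.
BLOCKLIST = {"slur1", "slur2"}

def passes_filter(text: str) -> bool:
    """Return True if no individual token is on the blocklist."""
    tokens = text.lower().split()
    return not any(tok in BLOCKLIST for tok in tokens)

print(passes_filter("you are a slur1"))              # False: caught by keyword
print(passes_filter("the holocaust didn't happen"))  # True: no banned word,
                                                     # yet still offensive
```

Catching the second case requires judging the meaning of the whole statement rather than its words, which is the normative problem the comment points at.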
mcguire, about 9 years ago
"Consciousness"? "Put to sleep"?

And I thought the New Yorker had editors.
Frenchgeek, about 9 years ago
http://motherboard.vice.com/read/how-to-make-a-not-racist-bot
konceptz, about 9 years ago
I wonder if the filter that I use here, or on Reddit et al., or while gaming, was (is) a consideration.

I know that speaking similarly to those around you is as easy (hard) as absorbing and imitating, but being able to take knowledge from one context into another is more interesting.

I wonder what Tay would have said were it given the output context of a "polite" real-life conversation after learning things across the "interwebz".
matchagaucho, about 9 years ago
Microsoft apparently implemented a "repeat after me" command in the AI.

So the alarming responses were not, in fact, a product of the AI algorithm.
chris_wot, about 9 years ago
Yeah, if you want to train an AI on conversations, don't use Twitter. Seriously, this is all that Twitter seems to produce these days. Gone are the days when it was used by the Arab Spring. Now it's used by neo-Nazis and trolls.
8note, about 9 years ago
"I'm pro calzone" sounds like a Parks and Rec reference.
spullara, about 9 years ago
Most of the really bad tweets were just "repeat after me" - that doesn't strike me as the AI learning anything at all.