> In a way, what Leck has created is a pro-active search engine: it answers twitter users who aren't even aware of their own ignorance.<p>On the one hand, the idea of a reverse search engine is somewhat appealing; on the other hand, it's Clippy for the internet.<p>"""
I see you're trying to deny global warming. Would you like to:<p><pre><code> 1. Research the available facts and science?
 2. Have an authority figure you trust tell you you're wrong?
 3. Meet other like-minded singles?</code></pre>
"""
Let me see if I understand this correctly.<p>Some programmer assumes that he is right and others are wrong, so he writes a bot to chase down people he might disagree with and bombard them with one-liners.<p>And Technology Review thinks this is behavior worth promoting?<p>I suspect the reason is that they agree with the programmer.<p>Try it out for a bit with the roles reversed.<p>Doesn't feel quite the same, huh?<p>ADD: The assumption here is that there is no way you can believe in X. Therefore it is okay for me to write a bot that spams and pesters everybody who believes in X, since they are horribly misinformed.<p>Works great -- as long as you're omniscient.
The people who argue for climate change are the same group of people who used to tell me that the (rain)forests are the lungs of the earth.<p>I grew older, learned agriculture and forestry, and saw that in mature forests the old trees die and rot, releasing the same amount of CO2 that was captured when they grew.<p>Now, the same people are telling me that we are responsible for climate change.<p>Don't get me wrong, I'm all for protecting the rain forests, but that's because of the diversity of life they represent. I'm also for reducing the use of fossil fuels, because we'll have to sooner or later anyway.
As a similar but less controversial use, it might be interesting to use this to debunk retweeted urban legends by sending out snopes.com links. But ultimately I think it crosses the line into spam and just gets annoying.
It looks like it's having a tough time, with a lot of misfires (probably due to sarcasm, according to the article), at least from a cursory glance at the bot's twitter feed (<a href="http://twitter.com/AI_AGW" rel="nofollow">http://twitter.com/AI_AGW</a>), which currently has a lot of apologies.<p>It's a neat idea though.
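The sarcasm problem is easy to see with a toy example. The article doesn't describe AI_AGW's actual matching logic, but a minimal hypothetical sketch of a keyword-triggered reply bot shows why misfires are hard to avoid: a sarcastic tweet mocking a claim contains the same trigger phrases as a sincere tweet making it.

```python
# Hypothetical sketch of a naive keyword-triggered reply bot.
# This is NOT the actual AI_AGW implementation -- the trigger
# phrases and logic here are invented for illustration.

TRIGGER_PHRASES = [
    "global warming is a hoax",
    "climate change isn't real",
]

def should_reply(tweet: str) -> bool:
    """Return True if the tweet contains any trigger phrase."""
    text = tweet.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

# A sincere denial triggers a reply...
print(should_reply("Wake up people, global warming is a hoax!"))
# ...but sarcasm quoting the same phrase triggers one too -- a misfire.
print(should_reply('Oh sure, "global warming is a hoax" -- and the earth is flat.'))
```

Both calls return True: phrase matching can't distinguish assertion from mockery, which would explain the apologies in the bot's feed.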
Makes me think of the Turing Test for artificial intelligence.<p>Then again, is it still a Turing Test when the human involved is as automatic and predictable as a computer?
It's disturbing that (among other things) this programmer has so confidently separated humanity into "crowd"s, and feels free to deride and explain his superiority to at least two of these crowds in his quotes. I'm surprised that this degree of stereotyping and subsequent mockery was published by TR with no trace of criticism or meaningful commentary.
I think this can be an ethical thing to do, provided some circumstances hold. Consider the sides to be S and T; then the circumstances are:<p>1. S are in the wrong<p>2. the debate matters, i.e. if S wins, bad things will happen (e.g. creationism will be taught in schools as science)<p>3. the chatbot will suck up more time from S than T (e.g. because T people are more likely to recognise it as being a chatbot).<p>Note that each of these is likely to be true if another circumstance holds -- that S is composed of people on average less intelligent than T.<p>Finally, what a delicious irony it would be if an evolutionarily programmed chatbot was written that succeeded in wasting the time of creationists!
As a libertarian, one of my major issues with global warming is that we're basically wasting our limited time &amp; resources trying to address it. Whether our climate modeling is accurate or not, we're so far down this rabbit hole that we'd be better off focusing our efforts elsewhere. This TED talk explains it pretty well: <a href="http://video.google.com/videoplay?docid=-122089912362687601#" rel="nofollow">http://video.google.com/videoplay?docid=-122089912362687601#</a>