This is the second study I've seen on HN today that used a pool of subjects from Mechanical Turk.<p>By coincidence, I tried MT itself (as a worker) for the first time a few days ago, just to see what it was. It's mildly interesting, but one thing that immediately jumps out is that the pool of people actually working these tasks is probably very different from the general population.<p>Good enough for click-bait, I suppose, but I hope no one is trying to do serious science this way.
I wonder if the results would be the same with AI-generated content from a well-known liberal figure.<p>Emotion plays a huge role in how easily people are fooled.
The difficulty with this is that the fake Trump posts are generated by mashing together words and phrases that the real Trump uses. So they aren't fake at all; it's all Trump. Not surprising that people can't tell them apart.<p>Almost 30 years ago I posted a Markov chain text generator to comp.sources.games. I can't locate the original, but I found Rich Salz's cleaned-up version at<p><a href="http://mailing-list-archive.cryptoanarchy.wiki/archive/1995/11/89e2939b0d041c79d02b21ca6a5d5d593559f0371be2b6cd1f476b107bf5ad7e/" rel="nofollow">http://mailing-list-archive.cryptoanarchy.wiki/archive/1995/...</a><p>You put text in and get text out with the same statistics for three-word sequences. More modern approaches avoid the Memento effect of remembering nothing beyond the previous two words, but it's the same idea: you take Trump text in and just rearrange it.
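The trigram trick described above fits in a few lines of Python. This is a minimal sketch of the general technique, not the comp.sources.games code: map every pair of consecutive words to the words that follow that pair in the source, then walk the map, always choosing a successor of the last two words emitted.

```python
import random
from collections import defaultdict

def build_trigram_model(text):
    """Map each consecutive word pair to the list of words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        model[(a, b)].append(c)
    return model

def generate(model, length=20, seed=None):
    """Start from a random pair, then repeatedly append a word that
    followed the current pair somewhere in the source text."""
    rng = random.Random(seed)
    pair = rng.choice(list(model.keys()))
    out = list(pair)
    while len(out) < length:
        successors = model.get(pair)
        if not successors:
            break  # dead end: this pair only occurred at the end of the source
        nxt = rng.choice(successors)
        out.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(out)
```

Because every emitted word followed the preceding pair somewhere in the input, every three-word window of the output is a three-word sequence from the source, which is exactly why the mash-up "sounds like" the original author.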
> <i>Given that RoboTrump could crank out literally millions to billions of words per day, you can imagine how this technology could be used to manipulate opinion.</i><p>I don't see it. If someone autogenerated a million fake comments across a million posts pretending to be Trump, would that change the election results?<p>If someone floods the net with fake Trump quotes, why would people believe them? (If anything, it would offer some cover of deniability for real quotes.)