I'm not sure how anyone can think discussions on reddit are not manipulated at this point. If you watch carefully, you see the exact same verbiage posted from multiple accounts on the same topic to steer conversations. And as new responses come up, there will be a multi-hour delay, then new verbiage will get posted simultaneously from multiple accounts. There are clearly behind-the-scenes writing efforts going on, with the copy then distributed to accounts. And if I see this just as a casual observer, I can only imagine what you would find if you really dug deep.
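To make that detection idea concrete, here is a minimal sketch of flagging near-identical verbiage posted by different accounts. Purely illustrative: the data shape, account names, and similarity threshold are all assumptions on my part.

    # Flag near-identical comments posted by *different* accounts --
    # the pattern described above. Threshold and data shape are made up.
    from difflib import SequenceMatcher
    from itertools import combinations

    def find_shared_verbiage(comments, threshold=0.9):
        """comments: list of (account, timestamp_seconds, text) tuples."""
        flagged = []
        for (a1, t1, x1), (a2, t2, x2) in combinations(comments, 2):
            if a1 == a2:
                continue  # one account repeating itself isn't coordination
            if SequenceMatcher(None, x1.lower(), x2.lower()).ratio() >= threshold:
                flagged.append((a1, a2, abs(t1 - t2), x1))
        return flagged

    # Two accounts pushing the same talking point two hours apart:
    sample = [
        ("user_a", 0,    "This product honestly changed my life."),
        ("user_b", 7200, "This product honestly changed my life!"),
        ("user_c", 90,   "I had a terrible experience with their support."),
    ]
    for a1, a2, gap, text in find_shared_verbiage(sample):
        print(f"{a1} and {a2} posted near-identical text {gap}s apart: {text!r}")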
It's a shame this has happened. It used to be that aggregated news was the best news because it wasn't opinionated. It also didn't need to focus on the big-ticket items (murder, sex, drugs) like newspapers did, since sales weren't a concern, so you had science, tech, and, I kid you not, actual good news to read about!

But now, with the pay-to-get-upvotes scams going on, we get utterly biased and even ridiculous stories constantly on the front page. And where to even start on the comments, which just read like blurbs for the title. A great example on the front page at the moment:

"Donald Trump's war on media is 'biggest threat to democracy' says Navy Seal who brought down Osama Bin Laden"

I almost feel like this stuff is AI-generated at this point, just throwing together keywords that get clicks. As someone across the pond looking in, the bias is laughably obvious. I just hope that more people realise that manipulation is present, and remember not to believe everything they read.
Link to this post in three years and ponder its prescience... Web 3.0 will be born in the death of the heavily botted social networks. Reddit, Facebook & Instagram, Twitter... are all basically pay-to-win schemes at this point, benefiting greatly in terms of adoption from the grey-market pay-for-likes botnets, where marketers and propagandists know they can make their content highly visible if they're willing to pay.

People pay very little attention to the fragility that has arisen in the Web 2.0 economy. Once there is widespread understanding of the gaming, cheating, and botting, these social network institutions will crumble. There will be piecemeal attempts to reduce botting, but they will annoy end users, and annoy investors and shareholders as they come to realize XX% of their user base never existed.

Web 3.0 will be the death of anonymity, with social networks and APIs that are, by design, hard to automate, and even harder to hide your true identity from. There will be CAPTCHA-like systems (possibly tied to hardware) that facilitate this. This will of course promise to fix the problem of botting, while heavily benefiting surveillance, both for the state and for advertisers. There will be a new breed of social networks built around ensuring the content you are seeing is genuine or "organic", and yet the incentives will be much more perverse, and their networks much more invasive.
I can only see one cure for this pervasive issue, which affects every website on the internet that allows user comments:

First, verified accounts, as on Twitter, are visually separated from non-verified accounts. The distinction has to be visible every place the username or their post is displayed.

Second, only verified users get upvote/downvote privileges. To me it is downright foolish to allow any jackass or botnet to make a ton of accounts and up/down vote the conversation as they please. A toy sketch of this gate follows below.

I have seen this style of forum management all over the web, and it's turning me off social media more and more by the day. Newspaper comment sections used to be insightful, and now they are generally cesspools of shills and bots. Double or more for forums like Reddit. I even see it on Hacker News.

We can have open discussions, and we can have discussions free of paid influence, but I do not believe it is possible to have both.
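The sketch of that second point, gating votes on verification. The User type and the tally structure are stand-ins I invented, not any real platform's API.

    # Only verified accounts may vote -- the gate proposed above.
    # The User type and tally dict are hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        verified: bool  # identity confirmed out of band, e.g. an ID check

    class VoteNotPermitted(Exception):
        pass

    def cast_vote(user, post_id, direction, tally):
        if not user.verified:
            raise VoteNotPermitted(f"{user.name} is not a verified account")
        if direction not in (-1, 1):
            raise ValueError("direction must be +1 (up) or -1 (down)")
        tally[post_id] = tally.get(post_id, 0) + direction

    tally = {}
    cast_vote(User("alice", verified=True), post_id=42, direction=1, tally=tally)
    try:
        cast_vote(User("bot_1337", verified=False), 42, 1, tally)
    except VoteNotPermitted as e:
        print(e)   # bot_1337 is not a verified account
    print(tally)   # {42: 1} -- the botnet's vote never landed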
Trend to watch in the next few years: phony actors.

This same thing happens on all social networks - Facebook, Instagram, Reddit, Twitter, etc. all have accounts that may have grown naturally and are now paid to post or influence content on their respective platforms. I've seen first hand on Instagram how large an effect this can have in promoting apps, products, etc. Previously, disclosing affiliations/paid promotions was limited to a much smaller set of influencers. Now these platforms give anyone, from a kid with spare time to a professional marketing agency, a means to build accounts and leverage the vast reach of these networks for their own gain. Facebook is _starting_ to realize this and clamp down on it for fake news. But there are many more avenues, and it'll be interesting to see how this really gets solved, if at all.
I'd noticed something like this on HN a few years ago. A negative comment about Apple might be voted up at first. Then, about an hour after posting, there would be many negative votes. The timing on this was consistent. After that, no more negative votes, and the rating would float up again.
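For what it's worth, here is a toy sketch of testing for that kind of burst. The one-hour center, window width, and threshold are guesses on my part, not measurements.

    # Check for a cluster of downvotes landing in a narrow window roughly
    # an hour after posting. All window sizes and thresholds are guesses.
    def downvote_burst(votes, post_time, center=3600, half_width=300, min_burst=5):
        """votes: list of (timestamp, direction) with direction in {-1, +1}."""
        in_window = [d for t, d in votes
                     if abs((t - post_time) - center) <= half_width]
        downs = sum(1 for d in in_window if d == -1)
        return downs >= min_burst

    votes = [(10, 1), (300, 1), (3500, -1), (3550, -1), (3600, -1),
             (3650, -1), (3700, -1), (7200, 1)]
    print(downvote_burst(votes, post_time=0))  # True: 5 downvotes near the 1-hour mark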
Reddit is awesome for tons of small gaming communities; I hope whatever happens on those apparently more important subreddits won't ever affect that.
We've seen this before. Usenet was killed by spam because it had no defence against those seeking to profit from its audience.

This is today's equivalent of the same problem. The difference is that it is now two monied interests battling over the audience.

I long for a modern equivalent of Usenet newsgroups with some kind of protection from monied interests; I suppose a small audience is one such guard.
Previous discussion here.

https://news.ycombinator.com/item?id=13714159

(I would like to give a more substantive comment, but instead I'll just leave this somewhat jaded rephrasing of the title: "<social channel> is being manipulated by <anyone who participates in social engagement/growth hacking/perception management>". The question I keep contemplating is where we draw the line on what social engagement is above the bar of acceptability. To answer my own question, I typically consider "disclosure" the answer.)
I think it goes without saying that a social media site with millions of users, sitting at place 23 in the Alexa rankings, is an interesting advertising platform, and that this kind of manipulation happens on a regular basis and at a large, distributed scale across the platform.

The far more important problem in this situation, in my opinion, is being able to differentiate a "normal" user-submitted post from an advertisement, a skill that is missing in 80% of today's youth:

http://fortune.com/2016/11/23/stanford-fake-news/
I don't understand why "financial services" are called out in the headline. The discussions are being manipulated by shady digital marketing companies that count some financial firms among their many clients.
This is a fun subreddit to subscribe to: https://www.reddit.com/r/HailCorporate/
This is a very old phenomenon called sockpuppetry [1]. Any online forum of sufficient size and popularity is bound to be a target of it. It's no surprise at all that Reddit would be targeted.

Most reputable forums try to deal with it somehow, but it's difficult to stamp it out completely -- especially if the site administrators are ever themselves compromised.

[1] https://en.wikipedia.org/wiki/Sockpuppet_%28Internet%29
Has anyone ever tried something like a captcha for upvoting? Upvoting isn't really something that needs to be done conveniently in rapid succession.
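Sketching what that might look like: a vote is only accepted when it carries a fresh, single-use token, which would be issued after a solved CAPTCHA. Everything here (names, TTL, storage) is hypothetical.

    # An upvote endpoint that only accepts a vote carrying a recently
    # issued, unused challenge token -- the stand-in for a solved CAPTCHA.
    import secrets
    import time

    _issued = {}  # token -> expiry time

    def issue_challenge_token(ttl=120):
        """Would be called right after the user solves a CAPTCHA."""
        token = secrets.token_hex(16)
        _issued[token] = time.time() + ttl
        return token

    def upvote(post_votes, post_id, token):
        expiry = _issued.pop(token, None)  # pop, not get: tokens are single use
        if expiry is None or time.time() > expiry:
            return False  # missing, reused, or expired token: reject the vote
        post_votes[post_id] = post_votes.get(post_id, 0) + 1
        return True

    votes = {}
    tok = issue_challenge_token()
    print(upvote(votes, 1, tok))  # True: fresh token
    print(upvote(votes, 1, tok))  # False: token already spent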
A much more insidious but very common kind of spam is the fly-by-night 'news' site that copy-pastes content from reputable sources and then gets upvoted to the top of popular subreddits, generating AdSense revenue for the webmasters. AdSense is such a pox on the internet. I know companies need to make money, but it also gives rise to soooo much spam.
If anyone wants to play a game of "how could HN be manipulated"... https://news.ycombinator.com/item?id=13718417
And I wonder which discussion forum isn't... the closer a site gets to money (example: stock discussions), the more there is to gain from manipulating its conversations.
And HN and other tech sites are somehow exempt from this phenomenon? With billions of dollars at stake for Uber and its various competitors, is it impossible to think, for example, that Susan Fowler had professional authoring help, or was offered compensation to publish her recent blog post? Or that paid/professional/fake commenters downvoted, attacked, and drowned out anyone who simply requested documentation, or cautioned that judgment should be reserved until both sides of the story were made public?