I used to manage a few online communities and you're always going to have trolls. Here are a few ways you can control them.<p>1 - Moderate comments and remove any comments you don't like. Make a rule that all comments need to be approved before they are posted. This will cut down on about 60% of the stuff you'll see.<p>2 - Shadow ban users: <a href="https://en.wikipedia.org/wiki/Shadow_banning" rel="nofollow">https://en.wikipedia.org/wiki/Shadow_banning</a><p>3 - If shadow banning doesn't work, then target and block their IP addresses. If you need to, warn them you will contact their ISP and let them know.<p>These are the big three that always seemed to limit the trolling behavior I saw. Only in extreme cases do you have to do #3.
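For #1, the mechanics are simple enough to sketch. Here's a minimal toy version in Python; the Comment class and status values are made-up names for illustration, not from any particular forum package:

    # Toy pre-approval queue: every comment starts "pending" and readers
    # only ever see what a moderator has explicitly approved.
    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        body: str
        status: str = "pending"

    def visible_comments(comments):
        return [c for c in comments if c.status == "approved"]

    def approve(comment):
        comment.status = "approved"

    def reject(comment):
        comment.status = "rejected"  # silently dropped, never shown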
It's probably best to start by defining what factors contribute to trolling in an online community.<p>Which communities are more susceptible, and why?<p>Let's first tackle the non-Sybil form of trolling and think about individuals.<p>Consider the differences and similarities between a community like LinkedIn and, say, 4chan. These are as far apart on the spectrum of communities as I can currently think of. Is one of these communities more easily trolled or duped than the other? Why might that be?<p>I hypothesize that a community like LinkedIn would be more difficult to troll, as you must offer personal data (or spoof it) in order to be let into the community. Certainly this isn't impossible to circumvent, but it requires a greater degree of commitment. This commitment is essentially a subconscious time-value calculation as well as a calculation of social cost. The logical conclusion is that being a troll on LinkedIn is costly, as there is a very real and immediately perceivable lost opportunity cost.<p>Let's take another example of online discourse and hold a single variable constant: identity.
Consider a discussion forum for a college course, where students are asked to discuss class material learned in a physical lecture hall, vs. a topical online discussion in a Facebook group. Both groups are forced to offer up some form of personal identification. There is one difference between the groups: face-to-face interaction. My hypothesis is that the first group attaches a larger social cost to trolling the discussions, with perceivable punishment for acting in bad faith, whereas on Facebook the social costs are diminished and mostly amount to being unfriended. Even removing the possibility of lost tuition dollars as punishment (i.e., you attend college for free), I would argue that the per capita rate of trolling would still be lower than on Facebook.<p>To summarize my hypothesis within the original scope: if an individual does not perceive social costs for their actions within a community, their propensity for becoming a bad actor should increase.
Another bad rule I don't advise: insisting on a peer-reviewed journal reference for literally everything said. There are a lot of problems with this, too many to list, really.<p>One is that inevitably it is only fully applied to statements that contradict the current prejudice, whatever that is, blocking the very sort of up-to-date knowledge you hope to find in a forum.<p>Another problem is that accomplished trolls will badly distort or reverse the takeaway from an opaque study and then drop that as a reference. The moderator can't possibly devote the hours to police that.<p>But yes, I've actually seen this system tried in a forum.
Humans need intermediate punishments. It's how we work. (Homeostasis and all that.)<p>Therefore Facebook (etc.) ought to provide a fully automatic "time-out" system for moderators, allowing them to create a temporary ban (a week, a month) with one click that reinstates the user automatically after that period <i>without</i> further action by the moderator. Automatic tracking of past penalties, and auto-escalation (if desired) to harsher punishments up to a full ban, would be a good idea too.<p>A quick display of relevant stats about that user would be helpful right then, too, giving you some rough idea of how valuable their past contributions have been. (Sometimes what looks like a troll is just a fact that few people, even in the field, know.)<p>Manually managing such a system works, but Lord is it a ton of fiddly (and unnecessary) work for the moderator.<p>One of the advantages of this is that mild penalties can usually make the point, with little risk that a bad penalty decision will permanently piss off a valuable contributor.
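A rough sketch of what that ladder could look like, in Python. The store, durations, and escalation rungs here are all illustrative assumptions, not anyone's actual implementation:

    import time

    # Escalation rungs: a week, a month, then permanent.
    ESCALATION = [7 * 86400, 30 * 86400, float("inf")]

    penalties = {}  # user_id -> {"strikes": int, "until": float}

    def time_out(user_id):
        # One click: record a strike and ban until the current rung expires.
        entry = penalties.setdefault(user_id, {"strikes": 0, "until": 0.0})
        duration = ESCALATION[min(entry["strikes"], len(ESCALATION) - 1)]
        entry["strikes"] += 1
        entry["until"] = time.time() + duration

    def is_banned(user_id):
        # Reinstatement is automatic: once "until" passes, posting works
        # again with no further moderator action.
        entry = penalties.get(user_id)
        return bool(entry) and time.time() < entry["until"]

The point of tracking strikes rather than a flat flag is exactly the escalation described above: the same one-click action gets harsher for repeat offenders without the moderator doing any bookkeeping.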
A bad "solution" I've seen more than once in health user groups - ban disagreement (but allow contradictory threads.) "If you don't like what you see, just move along to the next thread." This removes any chance of limiting fake news, and takes trolling to a whole new level since the first big lie wins, in any thread. So not that.
Ban users early and often. Make signing up for accounts difficult (pay real money, require phone number verification, etc.). It needs to cost the user money (or time) directly: they need to keep finding phone numbers that aren't banned, and so on.
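A sketch of why the phone-number friction works, in a few lines of Python; the banned-number store is an assumption, and real SMS verification would be an external service stubbed out here:

    # Numbers that verified previously banned accounts are burned, so each
    # new troll account costs another working phone number.
    banned_numbers = set()

    def can_sign_up(phone_number: str) -> bool:
        return phone_number not in banned_numbers

    def ban_user(phone_number: str) -> None:
        banned_numbers.add(phone_number)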
Depends on the type of troll.<p>Meatball wiki has a bunch of information about this. Here are a few.<p><a href="http://meatballwiki.org/wiki/UsAndThem" rel="nofollow">http://meatballwiki.org/wiki/UsAndThem</a><p><a href="http://meatballwiki.org/wiki/VestedContributor" rel="nofollow">http://meatballwiki.org/wiki/VestedContributor</a><p><a href="http://meatballwiki.org/wiki/GoodBye" rel="nofollow">http://meatballwiki.org/wiki/GoodBye</a><p><a href="http://meatballwiki.org/wiki/DissuadeReputation" rel="nofollow">http://meatballwiki.org/wiki/DissuadeReputation</a>
The vBulletin forum software had a way for an admin to quietly disable a user, making his posts invisible to everyone but himself.<p>So the proud troll keeps moving full steam ahead, doing no harm to anyone.<p>They called this feature "Tachy Goes to Coventry".
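The core of that feature is just a visibility filter at render time. A toy Python version, with made-up field names rather than vBulletin's actual schema:

    # Shadow-banned authors' posts are shown only to the authors themselves.
    def visible_posts(posts, viewer_id, shadow_banned_ids):
        return [
            p for p in posts
            if p["author_id"] not in shadow_banned_ids
            or p["author_id"] == viewer_id
        ]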
A.I. to detect Character Disorders: since people with Character Disorders tend to be very repetitive and characteristic in their behavior (hence the name). People with Character Disorders can't be remotely cured and won't temper their behavior in any way that truly matters.
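One very rough, non-A.I. way to approximate that "repetitive behavior" signal, using only the Python standard library; the 0.6 threshold is an arbitrary guess, not a tuned value:

    # Flag a user whose recent comments are, on average, highly self-similar.
    from difflib import SequenceMatcher
    from itertools import combinations

    def is_repetitive(comments, threshold=0.6):
        pairs = list(combinations(comments, 2))
        if not pairs:
            return False
        avg = sum(SequenceMatcher(None, a, b).ratio()
                  for a, b in pairs) / len(pairs)
        return avg >= threshold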