Both Reddit and Voat have been accused of censorship for banning communities containing questionable or illegal content, content that clearly violated those platforms' ToS. For alternative community-based websites, what does the HN community think are the more effective site-wide policies (as opposed to community-specific moderation policies) for encouraging community growth while preventing illegal or questionably legal content? Some examples, offered without advocating any of them, to suggest the expected level of detail:

1) Any content that appears illegal must be removed within [x] hours by a community's mods; otherwise an admin can remove it.

2) An admin should immediately remove any content that is reported or identified as appearing illegal.

3) Any content reported to admins as potentially illegal is pushed to community mods for a determination and resolution within [x] hours; otherwise an admin can remove it.

4) Any community that promotes or fosters an environment for posting [illegal content, discriminatory content, content against the ToS] will be banned by the admins.

5) Authors (OPs) of content that appears illegal will be banned.

6) Only admins can ban users.

7) [Community owners, mods] can be banned for any illegal content still residing in their communities [x] hours after it is reported.

8) [Community owners, mods] are responsible for being familiar with the content-related laws of [the website's country] and for removing content in their communities that violates those laws within [x] hours.

9) Any community without a single owner cannot be [posted to, viewed] until a new owner takes over.

10) All communities with more than [number of posts, number of comments] must have a minimum of [x] mods.

11) All communities must have at least [x] mods who log in and review all community content every [x] hours.

12) All formally received [legal notices, threats to take legal action] to remove content will be [publicly posted, forwarded to chillingeffects.org].
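To make the intended level of precision concrete, here is a minimal sketch of how a rule like example 3 might be encoded, with the [x]-hours placeholder as a parameter. The names (`EscalationPolicy`, `admin_action`) are hypothetical; this is not how any real site implements moderation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto


class Action(Enum):
    RESOLVED_BY_MODS = auto()   # mods acted within the window; admins stay out
    WAIT_FOR_MODS = auto()      # still inside the mod-response window
    ADMIN_MAY_REMOVE = auto()   # window expired; site-wide policy takes over


@dataclass(frozen=True)
class EscalationPolicy:
    """Example 3 as a rule: reported content goes to community mods,
    who have `mod_window` to resolve it before an admin can act."""
    mod_window: timedelta

    def admin_action(self, reported_at: datetime, now: datetime,
                     resolved_by_mods: bool) -> Action:
        if resolved_by_mods:
            return Action.RESOLVED_BY_MODS
        if now - reported_at >= self.mod_window:
            return Action.ADMIN_MAY_REMOVE
        return Action.WAIT_FOR_MODS


# A site that gives mods 24 hours to act on each report:
policy = EscalationPolicy(mod_window=timedelta(hours=24))
action = policy.admin_action(
    reported_at=datetime(2015, 7, 10, 9, 0),
    now=datetime(2015, 7, 11, 12, 0),
    resolved_by_mods=False,
)
print(action)  # Action.ADMIN_MAY_REMOVE: 27 hours elapsed with no mod resolution
```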
The best policy is not to invite illegal content. Any idea where moderation of *legally* questionable content is an obvious impediment to community growth is inherently problematic, because the site becomes an "attractive nuisance": for example, a site that appeals to people who think 4chan banning jailbait photos is heavy-handed has moderation problems baked in.

Despite the passion and emotion about what is appropriate for StackOverflow, the debate there isn't about links to free downloads of Photoshop or to cracked commercial WordPress themes, because people who are interested in that sort of thing go elsewhere. Likewise, nobody on HN complains when such links are killed upon posting here.

The solution isn't in mechanism; it's in policy at the highest level of abstraction, that of "what this site is about."

Good luck.
Are you talking only about content that is potentially illegal? Or also about content that is questionable in the general sense of being "wrong" in some way, even if legal? If the latter, then I don't think there's a universal "right" way to handle it. Censorship can be a big value-add in many communities.
I realize I haven't defined "best" or "more effective" and would also be interested in your views on how to define those standards. I haven't added my own opinions because I prefer not to bias your feedback.
What is questionable or illegal?

You may argue it is full-body nudity, depictions of erections, or exposed female genitals or breasts... but then someone will produce some art that everyone accepts really is art and shouldn't be censored, or a photo of breastfeeding. And so you make an exception, as you do not wish to censor.

Then what differentiates the porn from the art? Is it simply the context in which it appears, a porn site versus a gallery? What is the context of your site?

I'm arguing here that everything is subjective when it's not obviously illegal (against the letter of a law).

Law itself struggles with these definitions and tends to rely on the concept of what a jury or judge "reasonably" determines to be against some wording of a law.

The underlying key question in determining a moderation policy is whether you, the site owner, wish to take on the decision-making for those subjective cases. And, in doing so, whether you wish to accept the liability that comes with it.

Simply: are you willing to be liable for the content posted by third parties?

If yes, then you need the equivalent of editorial processes: clear definitions you can communicate to users, and declared processes for handling breaches of those definitions (and repeated breaches).

If no, then you can have the equivalent of "mere conduit" or "safe harbor". Anything you're unaware of, you can't be liable for; once you're made aware, you need to handle it with whatever declared process you have (sketched below).

Reddit and others were the latter.

And the latter works fine until you start moving towards advertising, and those paying for advertising start to say "You cannot show our brand next to this (or that) type of content". Suddenly you need to know what content is where, your ability to deny knowledge of what is on the site is reduced, and you have been pushed down the path, probably already taken the first steps, towards a more editorial process.

Trying to encapsulate processes in a set of rules, as you've done in this Ask HN, will fail if the rules themselves are against the nature of the site. The nature of a site that survives on advertising is that its processes and user terms and conditions need to produce content acceptable and favourable to advertisers.

For most of what you've outlined above, I've had lawyers create terms and conditions for sites I run that do encapsulate a distributed and hands-off moderation policy complying with European law. You can see those docs here: https://github.com/microcosm-cc/legal
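To make that "declared process" concrete, here is a minimal sketch of the notice-handling flow a safe-harbor site might declare. All names here (`TakedownNotice`, `SafeHarborQueue`, `handle_notice`) are hypothetical and assume a simple hide-then-log flow; the actual legal requirements live in the linked terms, not in this code:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime
    reason: str


class SafeHarborQueue:
    """A 'mere conduit' operator does not pre-screen content, but once
    formally notified it must follow a fixed, documented procedure."""

    def __init__(self) -> None:
        self.hidden: set[str] = set()    # content pulled pending review
        self.audit_log: list[str] = []   # evidence the process was followed

    def handle_notice(self, notice: TakedownNotice) -> None:
        # Step 1: hide the content; from this point the site is "aware".
        self.hidden.add(notice.content_id)
        # Step 2: record the notice, which could also be published or
        # forwarded to chillingeffects.org, per example 12 in the question.
        self.audit_log.append(
            f"{notice.received_at.isoformat()} {notice.content_id}: {notice.reason}"
        )
```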