I got my first experience running a small-to-medium-sized (~1000 user) game community over the past couple of years. This is mostly commentary on running such a community in general.

Top-level moderation of any sufficiently cliquey group (i.e. all large groups) devolves into something resembling feudalism. As the king of the land, you're in charge of being just and meting out appropriate punishment/censorship/other enforcement of the rules, as well as updating those rules themselves. Your goal at the end of the day is to keep providing support for your product, administration/upkeep for your gaming community, or whatever else it was you wanted to do when you created the platform in question. However, the cliques (whether they be friend groups, opinionated but honest users, actual political camps, or any other tribal construct) will always view your actions through a cliquey lens. This happens no matter how clear or consistent your reasoning is, unless you fully automate moderation (which never works, and would probably be accused of bias by design anyway).

This looks feudal because you still must curry favor with those cliques, lest the greater userbase eventually buy into their narrative of favoritism, ideological bias, or whatever else we choose to call it. At the end of the day, the dedicated users have *much* more time and energy to argue, propagandize, or skirt rules than any moderation team has to counteract them. If you're moderating users of a commercial product, that perception hurts your public image (with some nebulous impact on sales/marketing). If you're moderating a community for a game or software project, it hurts the reputation of the community and makes your moderators/developers/donors uneasy.

The only approach I've found that unambiguously works is one that doesn't scale well at all: the veil of secrecy, or "council of elders" approach which Yishan discusses. The king stays behind the veil and makes as few public statements as possible. Reasoning is given only insofar as it's needed to explain decisions, and criticism is answered directly only as needed to justify actions that would have been taken anyway. Trusted elites from the userbase are taken into confidence, the assumption being that they give a marginally more transparent look into how decisions are made, and that they pacify their cliques.

Above all, the most important fact I've had to keep in mind is that the outspoken users, both those legitimately passionate and those simply trying to start shit, are a tiny minority. Most people are rational, recognize that platforms/communities exist for a reason, and are fine with respecting that, since it's what they're there for. But when you're moderating, the outspoken group is nearly all you'll ever see. Catering to passionate, involved users is justifiable, but must still be balanced against what the majority wants, or can at least tolerate (the "silent majority" which every demagogue claims to represent). That catering must also be done carefully, because "bad actors" who seek action/change/debate for the sake of stoking conflict or their own benefit will do their best to appear legitimate.

Some of this (e.g. spam) you can filter comfortably, as Yishan discusses, without ever interacting with the content. More developed bad-actor behavior, however, is quite good at blending in with legitimate discussion.
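To make the content-blind idea concrete, here's a minimal sketch of purely behavioral spam gating. Every name and threshold in it is an illustrative assumption on my part, not Yishan's actual system or anything I ran in production:

    # Toy behavioral spam gate: flags accounts purely on account age and
    # posting rate, never on what the message says. Thresholds are made up.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        created_at: float                                # unix timestamp
        post_times: list = field(default_factory=list)   # recent post timestamps

    MIN_ACCOUNT_AGE = 24 * 3600   # assumed: accounts under a day old are suspect
    MAX_POSTS_PER_MIN = 5         # assumed burst limit

    def looks_like_spam(acct: Account, now: float = None) -> bool:
        """Content-blind check: only behavioral metadata is consulted."""
        now = time.time() if now is None else now
        if now - acct.created_at < MIN_ACCOUNT_AGE:
            return True
        burst = [t for t in acct.post_times if now - t < 60]
        return len(burst) > MAX_POSTS_PER_MIN

The appeal of this family of filters is exactly that no human has to read anything, which is why it stops at spam and doesn't touch the blended-in behavior described above.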
If you as king recognize that there's an inorganic flamewar, or abuse directed at a user, or spam, or a complaint about a previous decision, you have no choice but to choose a cudgel (bans, filters, changes to the rules, etc.) and use it decisively. It is only when the king appears weak or indecisive (or worse, absent) that a platform goes off the rails, and at that point it takes immense effort to recover it (e.g. your C-suite being cleared out as part of a takeover, or a seemingly universally unpopular crackdown by moderation). As a lazy comparison: Hacker News is about as old as Twitter, and any daily user can see the intensive moderation that keeps it going despite the obvious interest groups at play. This is in spite of the fact that HN has *less* overhead to make an account and begin posting, and seemingly *more* ROI on influencing discussion (lots of rich/smart/fancy people *post* here regularly, to say nothing of how many read).

Due to the need for privacy, moderation fundamentally cannot be democratic or open. Pretty much anyone contending otherwise is just upset about a recent decision or is trying to cause trouble for the administration. Aspirationally, we would like the general *direction* of the platform to be determined democratically, but the line between setting direction and moderating is frequently blurry at best. To avoid extra drama, I usually aim to discuss as much as possible with users, but ultimately make all decisions behind closed doors -- this is more or less the "giant faceless corporation" approach. Nobody knows how much I (or Elon, or Zuck, or the guys running the infinitely many medium-large Discord servers) actually take user feedback into account.

I started writing this as a reply to paradite, but decided against that after going far out of scope.