Damn, of course there's an internal form to fill out. Google/Meta demonstrably <i>know</i> that there needs to be a way around broken automated ban systems. They just gate it behind access to an employee (who wants to keep their job) to keep most people from using it.<p>So let's have a thought exercise: what would it cost, in dollars, to run a hyperscale web service that doesn't have this problem? In other words, a Facebook/Google-equivalent web service of some kind that <i>has a public link to a 'mistakenly banned' appeal form</i> for anyone to use.<p>The constraints/spirit of the exercise: worldwide engagement with our hypothetical service is equivalent to engagement with Google/Facebook (and the service is equivalently appealing for ToS violators to abuse, leading to a comparable volume of automated bans). Auto-ban systems and ToS are similar to Fb/Google's as well. The product doesn't matter a ton; maybe it's a social network, or an office/email suite, whatever.<p>The difference is that ImaginaryCorp honestly strives to offer <i>absolutely everyone</i> a ban-appeal process equivalent to what you get if you know someone who works at Google. Let's say they promise a verdict within a calendar month, with at least some amount of useful explanation.<p>The review process can be automated within reason, but not if it undermines the goal: reducing abuse <i>and</i> false-ban rates to the level a Google employee can expect when they submit their uncle Doug's marginally-suspicious banned account for internal review. ImaginaryCorp <i>really believes</i> in the spirit of that goal, so they're not willing to compromise much on the review process. Yeah yeah, it's implausible that belief in the spirit of the goal would be present at every level of the company. It's a thought exercise about monetary cost, just go with it.<p>They won't half-ass it and just grant everyone's appeal, because, like Fb/Goog, they're committed to not losing users by becoming a cesspit, they're subject to CSAM etc. laws, and they generally care about not being a clearinghouse for abusive content.<p>Doing this would probably take a lot more human review (unless Google is for some reason sitting on a far better auto-ban system and not using it), which means a massive staff and process edifice to execute it, keeping false positives to a bare minimum, on a global scale. Financially, the business has to remain sustainable with this system in place (though not necessarily successful or growing on par with the competition).<p>So the question is: how much would this cost me, the user? Even if ImaginaryCorp runs ads like Fb/Google, would that account-review edifice require subscription fees to keep the company solvent? If so, assume users still act like shitty free-but-your-data-and-clicks-are-the-product users and not like paying customers, so legitimate abuse rates stay high (again, thought experiment). If there are fees, how much am I paying a month: $5? $50? $500? Or would running this review system just mean the company is a little (or a lot) less profitable than its competitors? (A rough back-of-envelope sketch follows at the end of this comment.)<p>It's an interesting question to think about. The requirement to give legitimately banned users a useful explanation actually complicates things a great deal, because it invites gaming the system. Staffing is obviously a huge challenge: high-social-context groups of reviewers would need to be present all over the world.
Staffing is an even bigger challenge if you opt to solve issues "at the source" by fixing the auto-ban systems themselves (which means solving real-time moderation at scale, a notoriously intractable problem). Automation is tempting, but risks falling straight back into the issue we're trying to solve.<p>In the real world, this would obviously never happen, but I'm interested in what you all think.
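To make the cost question slightly less hand-wavy, here's a rough back-of-envelope sketch in Python. Every constant in it is a made-up assumption (user count, ban rate, appeal rate, review time, reviewer cost), not a real Fb/Google figure:

    # Back-of-envelope: what does human appeal review cost per user?
    # Every constant below is an illustrative assumption, not real data.
    users = 3_000_000_000        # assumed monthly active users (Fb-scale)
    ban_rate_per_year = 0.01     # assume 1% of accounts auto-banned per year
    appeal_rate = 0.5            # assume half of banned users bother to appeal
    minutes_per_review = 30      # careful human review + written explanation
    reviewer_hourly_cost = 30.0  # assumed fully-loaded cost, USD/hour

    appeals = users * ban_rate_per_year * appeal_rate
    review_hours = appeals * minutes_per_review / 60
    annual_cost = review_hours * reviewer_hourly_cost
    per_user_month = annual_cost / users / 12

    print(f"{appeals:,.0f} appeals/yr, ${annual_cost / 1e6:,.0f}M/yr, "
          f"${per_user_month:.4f} per user per month")

Under those assumptions it comes out to ~15M appeals and ~$225M a year, i.e. a fraction of a cent per user per month; that suggests raw reviewer-hours aren't the expensive part. The cost that's hard to model is everything the sketch leaves out: multilingual, high-social-context reviewers in every region, escalation tiers, resistance to gaming, and the legal exposure created by those written explanations.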