I'm no fan at all of Facebook. Don't have it, never have. Think it should be destroyed utterly. You can find me on Mastodon.[0]<p>That said: the company sees 2–3 <i>billion</i> MAU,[1] and on the order of 5 billion pieces of content submitted <i>per day</i>.<p>As best I understand, their measure of exposure is not <i>items</i> but "prevalence", that is, the total number of <i>presentations</i> of a particular piece of content.[2] Long-standing empirical media evidence suggests that this follows a power curve, in which impressions per item are inversely related to the number of items: 1 item might see 1 million impressions; 10 items, 100k each; 100, 10k; 1,000, 1k; and so on.<p>This means that a service can budget and staff for <i>either</i> a minimum prevalence threshold before manual review, <i>or</i> the total number of items granted more than some <i>maximum</i> unreviewed exposure. Machine-assisted filtering can help. In either case, though, mistakes will happen, and at <i>5 billion items/day</i>, the number of misclassifications <i>even at very high accuracy</i> is <i>large</i>:<p>- 1%: 50m/day<p>- 0.1%: 5m/day<p>- 0.01%: 500k/day<p>- 0.001%: 50k/day<p>... which necessitates secondary review and additional costs, as well as, of course, bad-faith appeals. If the filtering system is fed by user reports (flags and the like), then malicious or simply disagreement-based flags may well trigger moderation. (Crowdsourcing has its own profound limits.)<p>Another element is that, <i>especially with AI-based filtering systems</i>, what results is <i>determination without explanation</i>. We know <i>that</i> a specific item was rejected, but not <i>why</i>.
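The two bits of arithmetic above (the prevalence power curve and the misclassification counts) can be sketched in a few lines. The tiers and error rates are just the illustrative numbers from the text, not anything Facebook publishes:

```python
# Back-of-the-envelope check of the numbers above. DAILY_ITEMS is the
# ~5 billion items/day figure; tiers and rates mirror the examples given.
DAILY_ITEMS = 5_000_000_000

# Power-curve sketch: impressions per item fall as the item count rises,
# so total impressions per tier stay roughly constant (1M here).
prevalence_tiers = [(1, 1_000_000), (10, 100_000), (100, 10_000), (1_000, 1_000)]
for n_items, impressions_each in prevalence_tiers:
    print(f"{n_items:>5} items at ~{impressions_each:,} impressions each")

# Misclassification arithmetic: tiny relative error rates still leave
# enormous absolute counts at this volume.
for error_rate in (0.01, 0.001, 0.0001, 0.00001):
    misclassified = DAILY_ITEMS * error_rate
    print(f"{error_rate:.3%} error rate -> ~{misclassified:,.0f} misclassified items/day")
```

The point of the sketch is only that the absolute numbers dominate: even four nines of accuracy leaves a secondary-review queue of tens of thousands of items every day.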
<i>In all likelihood, FB's own engineers cannot determine the specific reason either.</i><p>(I've encountered this situation more often with Google, since, again, I don't use FB, but the underlying mechanics of AI-based decision systems are much the same.)<p>The upshot, though, is:<p>- Moderation <i>is</i> necessary.<p>- It's ultimately capricious and error-prone, though there are initiatives and proposals for greater transparency and appeals.<p>- Cause-determination is ... usually ... poorly founded.<p>________________________________<p>Notes:<p>0. <a href="https://toot.cat/@dredmorbius" rel="nofollow">https://toot.cat/@dredmorbius</a> Also Diaspora (see below).<p>1. Monthly active users. <a href="https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-First-Quarter-2021-Results/default.aspx" rel="nofollow">https://investor.fb.com/investor-news/press-release-details/...</a><p>2. See Guy Rosen, VP of Integrity, for both the content and prevalence figures: <a href="https://nitter.kavin.rocks/guyro/status/1337493574246535168?s=20" rel="nofollow">https://nitter.kavin.rocks/guyro/status/1337493574246535168?...</a> I've written more on the topic here: <a href="https://joindiaspora.com/posts/f3617c90793101396840002590d8e506" rel="nofollow">https://joindiaspora.com/posts/f3617c90793101396840002590d8e...</a>