I was trying to reply to a post that got flagged, so I'll repost at the top level instead:

I strongly disagree with [the other poster's lack of concern about war crimes], but I want to try to paraphrase the part of your intuition that makes sense to me.

Facebook gets criticized by both its users and governments for taking down "too much" and "too little", sometimes with regard to the same post or subject area.

Suppose user X posts something related to an armed conflict, violence, suffering, or death. This post might be part of a crime by X against Y, or evidence of a crime by Y against Z, or not really a crime at all but just deeply disturbing and upsetting. Also, in various circumstances people might want records of violent crimes against them to be destroyed, or publicized, or neither destroyed nor publicized (used only by some judicial process, or truth-and-reconciliation process, or historians, or something). And user X might have an intention that's different from the primary value or valence of the content, like prurient enthusiasm for violence, making one of the parties depicted look bad, or trying to intimidate one of the parties depicted.

To figure out which category (or categories) a post falls into, Facebook has to (1) learn the language(s) involved in all posts, (2) learn the political context of violent conflicts, (3) perform some level of adjudication and fact-finding about political conflicts and disputes, and maybe even (4) try to understand the motives of the person who posted something *on each particular occasion*.

Is it reasonable to expect Facebook to do all those things, rather than make other choices that are consistent and neutral but inevitably produce both type I and type II errors (wrongly removing some posts, wrongly leaving others up) with respect to the nature and purposes of posts related to violence?