I am reposting a comment on the original article that appeared in the Washington Post, because of the slippery-slope dangers (<a href="https://en.wikipedia.org/wiki/Slippery_slope" rel="nofollow">https://en.wikipedia.org/wiki/Slippery_slope</a>):<p>In a previous comment on this same subject, Apple's attempt to flag CSAM, I wrote: This invasive capability at the device level is a massive intrusion on everyone's privacy, and once implemented there will be no limit to how far governments expand its reach. The scope will always broaden.
Well, in the article they correctly point out that governments around the world have already broadened the scope of such scanning, violating privacy by content-matching political speech and using it for other forms of censorship and government tracking.
We already have that now on big tech platforms like Twitter, which censor or shadow-ban content that they, as the arbiters of truth (or truthiness, as Colbert used to say on his old show The Colbert Report), label as misinformation or disinformation, egged on by politicians and big corporate media.
Do we now need to be prevented from communicating our thoughts and punished for spreading truths or non-truths, especially given the false positives, malware injections, and remote device takeovers and hijackings by the Orwellian Big Tech oligopolies?
Power corrupts, and absolute power corrupts absolutely; this is too much power in the hands of Big Corporations and Governments.
From the article, in case you need the lowdown (I've added a small sketch of the content-matching mechanics after the excerpts):
Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.
We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month.
That dialogue never happened. The week before our presentation, Apple announced it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices. Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.
China is Apple’s second-largest market, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials? Absolutely nothing, except Apple’s solemn promise. This is the same Apple that blocked Chinese citizens from apps that allow access to censored material, that acceded to China’s demand to store user data in state-owned data centers and whose chief executive infamously declared, “We follow the law wherever we do business.”
Apple’s muted response about possible misuse is especially puzzling because it’s a high-profile flip-flop. After the 2015 terrorist attack in San Bernardino, Calif., the Justice Department tried to compel Apple to facilitate access to a perpetrator’s encrypted iPhone. Apple refused, swearing in court filings that if it were to build such a capability once, all bets were off about how that capability might be used in future.
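To make the repurposing point concrete, here is a minimal sketch in Python of a generic hash-based content-matching check. This is not Apple's or the researchers' actual design (those rely on perceptual hashing and cryptographic matching protocols); the function names and the toy databases here are invented purely for illustration. The point is that the matching code is identical whichever database the operator swaps in:

    import hashlib

    def fingerprint(image_bytes):
        # Stand-in for a perceptual hash. Real systems use fuzzy, near-duplicate
        # matching, which is exactly where false positives can creep in.
        return hashlib.sha256(image_bytes).hexdigest()

    def scan(image_bytes, match_database):
        # The matching step never inspects *why* a fingerprint is in the
        # database; it only reports membership.
        return fingerprint(image_bytes) in match_database

    # The service operator chooses the database. The person being scanned
    # cannot tell whether it holds CSAM fingerprints or banned political material.
    csam_database = {fingerprint(b"known abuse image bytes")}           # stated purpose
    censorship_database = {fingerprint(b"banned protest flyer bytes")}  # feared repurposing

    user_photo = b"banned protest flyer bytes"
    print(scan(user_photo, csam_database))        # False
    print(scan(user_photo, censorship_database))  # True -- same code path, different database

The exact-match SHA-256 used above actually understates the false-positive problem the researchers describe: a deployed system needs perceptual hashes that tolerate resizing and re-encoding, and that fuzziness is what makes both accidental collisions and adversarially crafted matches possible.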