This just sounds like a giant can of worms that is going to blow up in people's faces. Set aside for a moment any potential future governments, so we aren't talking about hypothetical fictitious ones, and just examine the governments currently signed on.<p>Indonesia has harsh religious laws, cracks down on reporting, and literally raids LGBTQ gatherings. The Senegalese government arbitrarily arrests dissidents, the LGBTQ community has to hide because it's illegal, and protests are outlawed. India already overuses counter-terrorism laws to charge dissidents and activists, and religious minorities there suffer heavily from discrimination.<p>This type of call to action will only further entrench government strangleholds on freedom of speech. People have now handed these governments the moral authority to curb an already broad and ambiguous category of "terrorists", and now "extremists" too. Sure, I get it: there is a bunch of vile stuff on the internet, and the world would be a better place without it. But my "better place without it" is different from somebody else's, and so is the "it". This won't end up like what is in your head.<p>The hate isn't "spreading through social media"; the hate and the fear were already there. These people grew up with it. Social, cultural, religious, sexual, moral borders, you name it: every border we have is being rewritten, and when you rewrite those borders, especially this quickly, people are going to get scared and lash out. Because of that, more people are getting scared and want to control one of the most powerful tools for freedom.
This is a very hard problem that YouTube and Facebook made for themselves by becoming the world’s largest advertising platforms. They depend on engagement for ad revenue, they designed world-class algorithms to promote this engagement, and it turned out that extremist content happens to be very engaging.<p>The problem is that building an algorithm which blindly promotes whatever keeps users on the site, for all its complexity, is a far more tractable problem than building a system that avoids promoting content that incites violence. In the meantime, they throw armies of people at the problem, to moderate content and respond to user reports, but it’s a losing battle.<p>They had the technology to create a monster, but don’t have the technology to stop it.
It's amazing to me how political this is and how oblivious to that fact a large portion of the NZ population is.<p>This horrible act happened on the current government's watch. I've seen more outrage and effort from the government over the spreading of the video and manifesto than introspection into how this slipped through in the first place. I suppose in a country where carrying a weapon for the purpose of self-defense is considered a crime, something this terrible shattering the illusion of the nanny government protecting you requires a whole lot of deflection and ultra maneuvers to secure the next election cycle.<p>New Zealand's knee has jerked so hard I'm feeling it in my groin 8k miles away.
> <i>Tech companies and governments sign up</i><p>Two wolves and a sheep vote on what's for dinner. What's for dinner being our most fundamental rights as citizens.
Preventing viral spreading of the videos I think is unquestionably ideal, but there's also a section in this "Call" stating another goal, to "Counter the drivers of terrorism ... to resist ideology and narratives... through education and building media literacy..." [some words removed so the message is less hidden]. It's hard not to suspect political motivation, given that Internet platforms are turf wars for politics these days.
The Christchurch shooter amassed a cache of weapons, and also posted a copy of his manifesto and a link to his real Facebook account to 4chan.<p>The censorship/Facebook algorithms amplifying abhorrent content debate is one thing but I'm surprised by the lack of scrutiny of the security services over this. Especially for a member of the 5 Eyes. I can't help but feel this could have been prevented without any of the changes being proposed.
I would be interested to know what the technical difficulties are in scrubbing a banned video, and all derivatives, from Facebook.<p>Are there practical AI/video analysis techniques to detect that a video contains a fragment of another video? Surely.
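There are such techniques: platforms typically compute perceptual hashes of sampled frames and compare them against a database of hashes of the banned footage, so that re-encodes and minor edits still match. Below is a minimal sketch of one common approach, difference hashing (dHash), written in pure Python. The frame representation (a small grayscale grid), the function names, and the bit threshold are all illustrative assumptions, not any platform's actual pipeline; a real system would decode the video and downscale each frame first.

```python
def dhash(frame):
    """Hash a small grayscale frame (rows of pixel values):
    one bit per horizontal gradient (is the left pixel darker?)."""
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def contains_fragment(banned_hashes, candidate_hashes, threshold=5):
    """True if any candidate frame hash is within `threshold` bits
    of any banned frame hash (i.e. a near-duplicate frame exists)."""
    return any(
        hamming(b, c) <= threshold
        for b in banned_hashes
        for c in candidate_hashes
    )

# Illustrative frames: an original, a lightly perturbed copy (simulating
# a re-encode), and an unrelated frame. 8 rows x 9 columns of grayscale.
original = [[(r * 9 + c * 7) % 256 for c in range(9)] for r in range(8)]
reencoded = [[min(255, v + 1) for v in row] for row in original]
unrelated = [[(255 - r * c) % 256 for c in range(9)] for r in range(8)]

banned = [dhash(original)]
print(contains_fragment(banned, [dhash(reencoded)]))  # matches the re-encode
print(contains_fragment(banned, [dhash(unrelated)]))  # rejects the unrelated frame
```

Because the hash encodes gradients rather than exact pixel values, uniform brightness shifts and mild compression noise leave it nearly unchanged, while deliberate cropping, mirroring, or overlays can still evade it. That robustness/evasion trade-off, plus the sheer volume of uploads, is presumably where much of the practical difficulty lies; industry efforts like the GIFCT shared hash database work along broadly similar lines.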
Aren't they missing the point here? The problem isn't that this guy streamed what he was doing on facebook, it's the fact that he did it in the first place?<p>As these large hosts move more and more away from mere platforms to content curators it does make a lot of sense that they'd also be more responsible for what they curate, but at the same time, it seems like this responsibility will ultimately leak back into the parts of these services that are really just platforms and ultimately to those that don't curate content at all.
The approach seems fairly reasonable: it sounds like it's limited to explicitly violent extremist content, and it's being pursued using the pension funds of various governments in an activist-investment manner to try to bring about change.
I find it problematic that there have been oodles of very classical kinds of 'terrorism' and 'extremism' on social media since the start.<p>ISIS has been recruiting with absolutely brutal material on Twitter, etc.<p>But now we have this nutbar thing in New Zealand and it's a 'global action'?<p>Aside from the complications mentioned in some other comments, the Jacinda/Trudeau/Macron triumvirate were, I think, looking in the wrong places.<p>So it's probably good that we're taking action, and just beyond repulsive that this massacre was broadcast live on Facebook, but I hope we can accomplish this without too many existential issues.
Utterly nonsensical. You don't need the internet to be radicalized, people have been willing to kill in the name of their convictions for as long as humans have existed. I predict even if somehow every white supremacist were booted off the internet, no lives at all would be saved, as they just don't need the internet to kill people, or to learn to hate.<p>This is purely giving up rights for the sake of security theatre.
Considering that religious speech and the guiding religious books are often completely intolerant of all other views of the world, and can thus be considered extremist, it would appear that such an agreement could ironically cause the censorship of the very religion that was brutally targeted in these attacks. Or perhaps they have specific censorship goals in mind? Given the result in Alabama just yesterday, it is abundantly clear that Christianity as an ideology causes real tangible harm to women and ought to be completely scrubbed from social media.
I hope one day we will be able to condemn these politicians, bureaucrats, and big corporations for their crimes against free speech, just as we did with the Nazis when they tried to subvert Europe.
We need serious laws with draconian punishments to protect our rights, what we have now is insufficient.
It'd be great if people who want to explore this as a free speech issue would engage with the question of what happens to the free speech (and other) rights of people who are killed by extremists, and whether they are more or less important than the rights of people who advocate such killings.