> This article was "flagged" by Hacker News about 90 minutes after I posted it there, meaning they removed it. Does that surprise anyone? I would be especially interested to see what would happen if a reader were to resubmit this article to Hacker News. Would they flag it a second time?

When you see [flagged] on a submission, it means that users flagged it. This is in the FAQ: https://news.ycombinator.com/newsfaq.html.

We can only guess why users flag things, but presumably they thought it didn't gratify intellectual curiosity and therefore went against the site guidelines: https://news.ycombinator.com/newsguidelines.html. There have been incredibly many threads and variations on this general theme, and curiosity withers under repetition, so this is not surprising. Lots of past explanations about this:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&sort=byDate&type=comment&query=curiosity%20repetition%20by:dang

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sort=byDate&type=comment&query=follow-up%20by%3Adang

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&sort=byDate&type=comment&query=%22significant%20new%20information%22%20by%3Adang
What worked in the 1960s does not necessarily work in the 2020s. In the '60s, the University of California, Berkeley gave rise to the Free Speech Movement. More recently, the same UC Berkeley was in the news for banning the infamous Ann Coulter from speaking on campus.

This is not because young people today are less woke than young people in the '60s. Societal attitudes towards free speech have changed a lot, and that change is tied to technological advancement and the rise of social media. The advent of social media has made it far too easy to spread dangerous levels of hate and false information online. Malicious individuals and groups now have the power to reach hundreds of millions instantly, at no cost to themselves. It started off innocently enough, with cat videos uploaded to YouTube, but soon extremists were exploiting social media for radicalization, adversarial nations were spreading fake news to influence who gets elected, and others were even live-streaming mass murders.

This has caused an upheaval in attitudes towards free speech. Enough is enough! There need to be limits. Communities started imposing limits on free speech. Society, as opposed to governments, has decided that some censorship is in order. Some censorship by private parties such as Twitter, as opposed to absolute free speech, will be the new normal. We live in a new world; the old norms no longer apply.
Some points that are relevant:

Tech has a lot of consolidation behind it, both infrastructure-wise and end-user watering-hole-wise.

The fewer providers there are, the fewer people make the rules, and the more likely it is that you have to play by their rules.

If you're upset that there are more free-speech 'restrictions' on the internet and you're cheering every time there's another tech acquisition, you're boiling the pot you're sitting in.
I understand the argument this guy is making, but the quality of the writing is horrible. A WSJ editorial-section comment expanded into rambling long form. Why is it here?
Social media has been weaponized by people with nefarious intent. This has unfortunate collateral damage.

I have myself run afoul of the crude tools in place to detect this, on both Twitter and Facebook. But I have yet to have a day-to-day casual conversation with anyone about their precious freedoms being limited on the Internet, except for some of my friends who are part of an easy-to-predict demographic, and they are all about it these days.

But you don't have First Amendment rights on a company's online forum. And every attempt to create a forum built around supposed First Amendment rights immediately shows why you can't do that, once people hack the crap out of it.

No solution other than waiting for people to become desensitized to all the agitprop, the way they are now desensitized to deep fakes despite all the hubbub about them being the end of the world.
If you want arguments like these to be taken seriously, you really can’t use phrases like “free speech Nazis” to describe forum moderators. This isn’t a cogent argument or a thoughtful essay; it’s contentless red meat for an audience that has already reached a conclusion and is looking to be stroked and told how brave and intelligent it is.

Also, read the room? Not exactly the ideal time to be making an ethical argument in favor of the distribution of misinformation.
> So, when social media platforms put in place rules, or algorithms, to silence all opposing views, the only two choices we bloggers have are either to resign ourselves to continue writing in obscurity or to muzzle ourselves.

Yet another example of someone who can't differentiate "free speech" from "freedom from the consequences of speech". Nobody is stopping you from speaking; they just don't want to listen to you. The capitalist profit motive will always guide corporations toward the most simple and inoffensive content. The problem isn't "free speech Nazis", it's corporations that want to appear safe for everyone so they can address the largest market.