On the surface, it sounds really nice. But anyone who's been paying attention in the fake news era has a general idea of what this will be used for.

Any tweet questioning the official narrative will be roundly criticized and ridiculed. Instead of having to delve into a long reply thread to see debunkers make their case, there will be an easy-to-digest notice within reach. Which is not necessarily a bad thing if it is used fairly and responsibly. But judging by Twitter's past performance, it likely won't be.

Dissent will be publicly humiliated, while pure disinformation from governments, think tanks, and corporations will face no such treatment. Any doubts about the accepted story will be pointed at one of the fact-checker sites, and further inquiry censored.

In an honest world, this would be one giant step toward finally getting at the truth. But this isn't an honest world, is it?
> As we develop algorithms that power Birdwatch — such as reputation and consensus systems

Consensus is the enemy of understanding. For topics where the evidence is conflicting or thin, or that bear on the culture war, I do not want people voting on what the consensus truth is. I want to see all the evidence. We have enough problems with researchers not publishing uncomfortable data; I don't want the little that exists flagged because it conflicts with the average Twitter user's sensibilities.
Nearly all attempts to "fix misinformation" on social media I've seen in the last several years have ranged from hopelessly clueless to downright sinister.

*"Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context."*

You're not going to fix anything by attacking the symptoms, which is exactly what this seems to propose. To fix the actual problem, we need to build systems that generate and propagate trustworthy information that people actually *want* to consume, rather than attacking information *you* don't want people to consume.

*"we have conducted more than 100 qualitative interviews with individuals across the political spectrum who use Twitter"*

There is already a selection bias in play, then, because large numbers of people don't use Twitter for various reasons.
I suspect this will be used a little for direct misinformation control ("Donald Rumsfeld is not a lizard person"), but mostly for narrative control ("While this fact is true by itself, you need to look at the bigger context..."), and so will be entirely ineffective. People readily take up new information, but are very hesitant to change their internal narrative on a matter. Especially when they're being told by others what their narrative should be. Doubly so if part of their narrative is that big tech/liberals/academia/coastal elites/etc. are trying to feed you the Big Lie.

Conservative misinformation is a big talking point for liberals, and maybe the big societal issue of the moment, but as a small-scale test run I'd love to see Birdwatch try to correct the record on misinformation that is commonly believed in liberal circles: anti-GMO, anti-vaxx, toxic whiteness, the extent of systemic racism in our society, some of the more dire prognostications of nuclear war and global warming.
Would be very interested to see how this addresses one of the primary underlying pathways of misinformation: confirmation bias. Many people believe information because it confirms their worldview; whether it's provably true or false is often irrelevant. In fact, I suspect that having a belief proved wrong can even reinforce that belief in some cases.

How much does truth matter in a post-truth society?
We need to see when someone on one side disagrees with their own side. Echo chambers drown out the voices of dissenters.

We need to see when one side ignores the other side. We need a list of unanswered questions to hold every side accountable.
Introducing Watchbird, a community-based approach to misinformation (heh).

Posters: Get paid to post online, starting at $0.20/post! Our top posters earn up to $50/hour! Join now!

Sponsors: We have over 50,000 active users ready to post whatever you need online, no questions asked!

*Similar services actually exist, but you know, one could always create a new one aimed at "fact checking" the fact checkers.*

I guess my point is, you can't solve this problem with even more crowd bullshit. It needs to be addressed at a fundamental level, preferably by governments, in schools.

As far as I know, there are still zero official classes/courses/lessons in most schools that would teach you not to trust everything you read and to triple-check everything yourself before believing anything.

Plus, this is pretty prone to abuse. Individuals are inherently dangerous, and crowds even more so. Someone doesn't like person X, so they "fact check" his tweets. Others see this coming from a "reputable" poster and jump on the bandwagon.

Seen it so many times it got old. Experimented with it myself. A post on Reddit (same content) that gets 8-12 fake upvotes in the first 30 minutes after being posted is infinitely more likely to start getting upvoted by hundreds of real users and reach the subreddit's hot front page than a post that got 0-3 upvotes, for example.

I was interested in Reddit's voting system and learned some interesting stuff. They're really smart about it; you can't just have multiple accounts and some proxies/VPNs and go at it like in the good old days. Votes are going to be ignored unless you know what you're doing. Probably not news for anyone in the industry, but I found it interesting.
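For a sense of what "votes are going to be ignored" can mean in practice, here's a minimal sketch of one classic anti-manipulation heuristic: flag pairs of accounts whose voting histories overlap suspiciously. Everything here (function names, thresholds, the similarity cutoff) is my own invention for illustration, not Reddit's actual system:

    from itertools import combinations

    def suspicious_voter_pairs(votes, min_overlap=5, threshold=0.9):
        """Toy vote-ring detector. `votes` maps account -> set of post ids
        that account upvoted. Flags account pairs whose voting histories
        are nearly identical; their votes could then be discounted."""
        flagged = []
        for a, b in combinations(votes, 2):
            shared = votes[a] & votes[b]
            if len(shared) < min_overlap:
                continue
            # Jaccard similarity of the two voting histories
            similarity = len(shared) / len(votes[a] | votes[b])
            if similarity >= threshold:
                flagged.append((a, b, similarity))
        return flagged

A real system would presumably also look at account age, IP/device fingerprints, and vote timing (the 30-minute window described above), but the shape of the problem — correlating behavior across accounts — is the same.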
I just want to point out that Minds [1] has a pretty clever content moderation policy that involves an appeal process in which randomly selected users of the platform make blind judgements.

I haven't been part of the process myself, nor have I used the platform at all yet. But this feature sounds quite good in theory.

[1]: https://www.minds.com/content-policy
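In case it helps picture the mechanic, here is a minimal sketch of a random blind jury, assuming only the essentials described above (random selection, judgements made on the content alone). The function names, jury size, and majority rule are my assumptions, not Minds' documented parameters:

    import random

    def convene_jury(content, eligible_users, parties, judge, jury_size=12):
        """Randomly sample jurors who aren't involved in the dispute.
        `judge(juror, content)` returns True to allow the content; the
        author's identity is never passed in, which is what keeps the
        judgement blind. A simple majority decides the appeal."""
        pool = [u for u in eligible_users if u not in parties]
        jury = random.sample(pool, min(jury_size, len(pool)))
        allow_votes = sum(1 for juror in jury if judge(juror, content))
        return allow_votes > len(jury) / 2

The appeal of this design is that it's hard to brigade: an attacker can't know in advance which users will be asked to judge.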
Reminds me of the Overwatch moderation system used by Valve to collect cheater data from human evaluators in Counter-Strike. Eventually, they leveraged that data to improve their automated cheat detection systems. [0]

Although Twitter's problem is way harder, IMO.

[0] https://www.youtube.com/watch?v=ObhK8lUfIlc
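The core trick in that kind of system, as I understand the linked talk, is weighting each reviewer's verdict by their track record before using the result as a training label. A rough sketch, with the data shapes and conviction threshold invented for illustration:

    def aggregate_verdicts(verdicts, accuracy):
        """Evaluator-weighted aggregation, loosely in the spirit of
        Valve's Overwatch. `verdicts` maps reviewer -> bool (True =
        guilty); `accuracy` maps reviewer -> score in [0, 1] based on
        how often their past verdicts matched confirmed outcomes.
        Returns True when weighted agreement clears the threshold."""
        total = sum(accuracy[r] for r in verdicts)
        if total == 0:
            return False
        guilty_weight = sum(accuracy[r] for r, g in verdicts.items() if g)
        return guilty_weight / total >= 0.7  # threshold is made up

The labeled cases can then feed an automated classifier, which is the step where Twitter's problem is way harder: "is this tweet misleading?" has far less ground truth than "did this player aim through walls?"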
A lot of people are wondering how this will stop misinformation. I agree that we can't crowdsource truth. But we can crowdsource information that helps reduce misinformation. When you have two sides disagreeing, the first step is to build some common ground.

Twitter is trying to solve a tough problem. On one hand you've got people accusing Twitter of hosting and platforming hateful, harmful content. On the other hand you have people claiming that Twitter is calling the shots about what's true and suppressing information it doesn't like.

Maybe this is the first step towards something like a digital court. People on both sides present evidence, experts, witnesses. The two sides get a hand in picking the jury.

Or maybe the solvable problem is that information gets misconstrued as it propagates. A video clip might get edited a certain way, for example. Solving this problem may not help us all agree on what happened in the video clip, but we should at least be able to agree on what the two interpretations are. To make this happen, both sides would have to steelman the other side; otherwise, the opposing side would claim they're being misportrayed. Having things that opposing sides agree upon would greatly reduce unnecessary conflict.
> *[...] we're designing Birdwatch to encourage contributions from people with diverse perspectives, and to reward contributions that are found helpful by a wide range of people.*

> *For example, rather than ranking and selecting top notes by a simple majority vote, Birdwatch can consider how diverse a note's set of ratings is and determine whether additional inputs are needed before a consensus is reached. Additionally, Birdwatch can proactively seek ratings from contributors who are likely to provide a different perspective based on their previous ratings.*

> *Further, we plan for Birdwatch to have a reputation system in which one earns reputation for contributions that people from a wide range of perspectives find helpful.*

https://twitter.github.io/birdwatch/about/challenges/
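Reading between the lines of that description (Twitter hasn't published the actual algorithm), the ranking might look something like scoring a note by its worst-performing perspective cluster, so one-sided enthusiasm can't carry it. A hypothetical sketch; the clusters, names, and two-cluster minimum are all my assumptions:

    from collections import defaultdict

    def note_score(ratings):
        """`ratings` is a list of (perspective_cluster, helpful: bool)
        pairs for one note. Returns None when the note hasn't been rated
        by enough distinct perspectives yet (i.e. "additional inputs are
        needed"), otherwise the helpfulness rate of its least-convinced
        cluster — rewarding notes that a wide range of people find
        helpful rather than notes with a simple majority."""
        by_cluster = defaultdict(list)
        for cluster, helpful in ratings:
            by_cluster[cluster].append(helpful)
        if len(by_cluster) < 2:
            return None  # proactively seek raters from other perspectives
        return min(sum(v) / len(v) for v in by_cluster.values())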
Sounds interesting, and I could definitely use something better than my chaotic bookmark folders to organize my collection of websites. The problem with this, though? The same one that so many other systems for long-term data organization have: how do I know your service won't just die or fold sooner or later, leaving me and my organization efforts swinging in the wind? Bookmarks, at least, are just HTML files that can be exported, saved, and imported across all browsers regardless of platform. Simple, resilient, and thus robust.
There's a lot of reason to doubt that this will work. But one thing that makes me hopeful is that Wikipedia's "Talk" pages appear to serve a similar purpose, and they serve that purpose adequately.
There's another active thread here: https://news.ycombinator.com/item?id=25908439. Not sure whether to merge them.
Did anybody even ask for fact-checking notes from Twitter? I don't think so.

Whatever the Twitter fact-checking note says, you still shouldn't take any information from the internet at face value.
This will be a kangaroo court used to blunt-force-trauma a select subset of strawmen and deplorables off the service. In the end, does anyone really care if Twitter is "truthful"?
I'm curious what mechanisms will be in place, and how effective they will be, at preventing dogpiling on people using this system.

As a specific example, as a nonbinary person, I'm constantly running into people online who tell me there are only two genders, or that singular "they" is some sort of new concept. My concern with a consensus system is that it will be used to shut down people like me (or trans folk, or socialists, etc. etc. etc.)

How do you build a consensus system that protects minority persons and also weeds out misinformation?

Edit: Downvotes for... being non-binary I guess?