I appreciate the recognition that there is a problem in journalism. But I'm afraid the fundamental problems lie in human nature and aren't amenable to this sort of solution. Those problems include:<p>1) Many people choose to believe what feels right to them, rather than dispassionately seeing things for what they are. Unfortunately there are a great many people who do this regularly, and they have practiced it their entire lives. Richard Feynman cautioned us: "The first principle is that you must not fool yourself - and you are the easiest person to fool." That caution was aimed at scientists, who are already among the most objective of us, yet they can still suffer from this ailment.<p>2) When people talk about the facts, they rarely just lay them out dispassionately and without judgement. Humans need a motivation to tell a story, and most people's motivations include convincing others to support their ideology, or else berating those who don't. Facts are organized within the mental framework of the teller's ideology, wrapped up in beliefs, desires, and biases, and carry additional information about which ideological position you ought to hold -- sometimes they are even intended to mislead and manipulate the listener.<p>The modern news media is composed of businesses in search of profits. That goal is often not aligned with straight reporting of the facts. By pandering to an audience's ideology, they increase ratings. By reporting on salacious and exciting news, even if incorrect, they get ratings. Buyer beware.
Awkward question here... can anyone cite an example of a major-media news story that was actually factually wrong in a way that "open source fact checking" could have detected?<p>Just because a bunch of people believe it's a problem doesn't mean it's a problem. I find the bias, when it exists, is in the presentation, not the facts themselves. And mainstream journalism is really committed to getting the facts right.
IMHO reputation is important to build into the base level for a platform like this. Allowing anon registrations and commenting will eventually lead to a lot of noise and abuse of the platform for misinformation. I think some sort of verification of a person's qualifications on particular subject areas and then allowing them to only "fact check" on those qualified areas is required here to have an effective platform that will scale well.
Wikipedia talk pages are a perfect example of facts ignored and removed from articles to push an agenda. You can see how a narrative can be pushed with half-truths and selective editing. The same thing is happening in news: it's agenda-driven, not truth-driven. Entertainment opinion pieces pretend to be news. People are just bending the facts to support their personal politics.<p>If Wikipedia can't be factually accurate due to editors' personal politics, how do you keep those personal politics out of a fact checker?
What problem does this solve that existing reputation systems don't? When I read a New Yorker article, for example, part of what my subscription is paying for is considerable skilled labor by a group of in-house fact checkers. How does moving this work to an outside free platform improve the situation?<p>Moreover, how do you replicate work like calling sources to verify statements before publication?<p>I wrote a short piece for Wired once and had to go back and forth with their fact checker on the most picayune details. In an article about balancing online work and travel I had mentioned in passing searching under a hotel bed for an outlet, and she wanted to know what hotel it was, what country it was in, when I stayed there, etc. The process was exhaustive and not something I would go through with a bunch of Internet randos. It was also done before publication and involved a fairly high level of trust.
So, your examples are all "reporting about court cases" and you use court documents as your citations for facts.<p>Court cases are as good as we have for determining facts, I suppose. (Though the error rate in court cases is quite high!) But 99.9999% of reporting isn't about court cases. Most reporting isn't about issues that will ever be litigated, and even those things that will be litigated haven't been at the time of the reporting.<p>"Mitch McConnell had oatmeal for breakfast this morning, according to sources."<p>There, that's a fact. Now, tell me which court case I can refer to in order to verify that fact?<p>Fact-checking is an impossible endeavor. The essence of journalism is reporting things no one knows. That's what news is! This is completely at odds with "fact-checking". A good news story CANNOT be fact-checked by the public, because if the public knew it, it wouldn't be news. There is an extremely limited genre of "news by trawling public but little-known documents", such as reporting on old court cases. Almost zero news fits into that category.
Nice to see more work in this area.<p>However, as others have said, a reputation system for participants is absolutely key. Any platform like this has to assume it is going to be flooded by bots trying to influence opinion on behalf of corporations, foreign governments, and activists.<p>A system where users gain reputation slowly (through work, by verifying sources, etc.) and thereby become more able to influence opinion, but where reputation drops quickly if they are found to be misleading people. How this actually works is harder than it seems if you want a truly robust system.<p>When I think of platforms like this I think of Galileo[0]. He was perceived as a troll or heretic in his time, but of course he was right. How do you create a platform that allows correct ideas to flourish even when they go against conventional wisdom?<p>[0] <a href="https://en.wikipedia.org/wiki/Galileo_Galilei" rel="nofollow">https://en.wikipedia.org/wiki/Galileo_Galilei</a>
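The asymmetric dynamic described above (slow gains, sharp losses) can be sketched in a few lines. Everything here is a hypothetical illustration of one possible rule, not any real platform's scoring; the class name, constants, and the diminishing-returns weighting are all my own assumptions:

```python
class Reputation:
    """Toy model: reputation rises slowly through verified work
    and falls sharply when a user is caught misleading people."""

    GAIN_PER_VERIFICATION = 1.0    # small, linear gain per verified source
    MISLEAD_PENALTY_FACTOR = 0.5   # lose half your standing when caught

    def __init__(self):
        self.score = 0.0

    def record_verified_source(self):
        # Influence is earned gradually, one verification at a time.
        self.score += self.GAIN_PER_VERIFICATION

    def record_misleading_claim(self):
        # Multiplicative penalty: reputation falls far faster than it rises.
        self.score *= self.MISLEAD_PENALTY_FACTOR

    def influence_weight(self):
        # Saturating weight in [0, 1), so no single account dominates.
        return self.score / (self.score + 10.0)
```

The multiplicative penalty is what makes sustained manipulation expensive: a bot farm would need many verifications to recover from each exposure, while honest users are barely affected.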
Been experimenting with building tools to make fact checking more transparent and reusable for a few years now, this is the latest iteration of that project. Feedback appreciated!
YazIAm, you should check out <a href="https://www.kialo.com" rel="nofollow">https://www.kialo.com</a> and <a href="https://web.hypothes.is/journalism/" rel="nofollow">https://web.hypothes.is/journalism/</a> and <a href="https://www.procon.org/" rel="nofollow">https://www.procon.org/</a>
This is pretty awesome. Any time I open any article, the first thing I do is try to follow the trail of links/sources back to the primary sources and read that instead. If I can't find one, I don't trust the article.
Hey everybody, I just want to point out that I'm working on a similar project to document evidence for and against claims: <a href="https://www.wikiclaim.org" rel="nofollow">https://www.wikiclaim.org</a><p>The tech is dumb (MediaWiki on Heroku), but it's been sufficient to allow me to spend several hundred hours creating probably about 200 wiki pages so far.
While I like how the software ties together the sourcing, I feel like the following is lacking (I realize this is an alpha, but these are things to consider):<p>1. Anyone who answers a question about whether a claim is "fully" proven by a source, or whether the source is valid for the claim, should (or arguably must) back up their own interpretation by citing the specific passages in the source that support it. For example, in the Net Neutrality article, Yatz states <i>The adopted principles in this statement are at the top of page 3</i> in response to whether the source fully proves the claim. It would be nice if that were linked to the actual location in the document, or identified through highlighting or some other mechanism, to give full context for the response. There should also be a way to attach outside sources when evaluating these questions, because someone could answer a source-validity question by citing an external source or an internal quote that doesn't actually support their answer.<p>2. Having some sort of expertise verification is extremely important, and it should be weighted separately, as in an <i>Experts: Yes</i> / <i>Experts: No</i> category, and should count for more. It distinguishes this from random anonymous answers from people with no training in how the source should be interpreted.<p>3. Having anonymous people do this seems rather dubious, and the current platform seems prone to YouTube comment syndrome. Maybe add a reputation system to gatekeep, like Stack Overflow?<p>4. Sources usually require appropriate interpretation in order to be taken in the right context and considered correct. Sometimes there's no single source, or any source, that philosophically fully proves a claim. I'm guessing that's what supporting documents are for?<p>5. Why is the original article's claim allowed to be edited?
This is really interesting. It reminds me a bit of peer-reviewing journalism via Web annotation: <a href="https://web.hypothes.is/" rel="nofollow">https://web.hypothes.is/</a><p>I have a couple comments to make as a former newspaper reporter:<p>* Many important stories use anonymous sources that are impossible to verify via crowd-sourcing or by linking to official documents. Much of what happens in the world is not publicly documented nor is it widely known (until a reporter publishes a story). That gap between events and any record of those events is one of the structural problems that allow "fake news" to occur.<p>* Timing matters. Reporters working for the dwindling number of publications that employ fact-checkers or editors have their facts checked before they publish. For that small group of outlets, this diminishes the number of falsehoods they are responsible for unleashing online. Fact-checking a story <i>after</i> its release is useful, but the cat's already out of the bag. The lie is already speeding around the world at the speed of viral outrage.<p>* As with any platform where better information leads people to make different choices, there are lots of incentives for publications <i>not</i> to engage in crowd-sourced fact-checking. Why do people lie or distort the facts on their dating profiles and job applications? Why do companies put forward their most utopian face? They are trying to shape the world with information. And so are media outlets such as Fox News, to name just the most egregious.<p>* Fact-checking is a lot of work, and can descend into epistemological quibbles. Most people don't have time to engage in it, or even to understand the debate around a given fact. This is why, in the past, we outsourced this work to the editorial staff of the fourth estate.
While this would bring more transparency to the process, I am not sure who would have time to take advantage of that transparency, anymore than the normal reader will closely follow the "talk" tab on a Wikipedia article.
I like the idea and genuinely hope that fact checking becomes more mainstream. Seriously, kudos to the author for building something. But what I'm not getting is why a news platform would use this, unless it intends to become a news platform itself?<p>I think there is an underlying assumption that the general world cares about things like "primary sources" and "logical fallacies" enough to bother hovering over the text or viewing the content, and that there isn't an intentional manipulation of these things by media organizations to fit a narrative. Maybe if this were a browser plugin that came with <major browser> by default and automatically highlighted fact-checked statements in existing articles. That way folks wouldn't have to opt in. But as it is, what's the incentive for either the populace or the media to participate? Think of it this way: how did the fact checking in the examples get done? Someone spent 2 minutes on their favorite search engine. People willing to do that will do it if they have the time, and people not willing probably aren't interested in information that would contradict their opinions anyway - unless it comes from a source they already trust, like someone in their bubble. My own experience with this is that the folks who will care exist but are rare. Others, you can literally watch their eyes glaze over the moment you introduce a little cognitive dissonance.
How does this handle anonymous sources?<p>Not only do you have to account for what those sources say, but also, a reporter reporting what an anonymous source said can be 100% factual (Yes, the anonymous, valid source was indeed reported accurately), but maybe the anonymous source is incorrect. The reporting is accurate. The source is not.<p>Basically, a lot of grey area. And with that comes the issue of anything not "fact checked" by a service like this could be dismissed. What harm does that do?<p>Not a dismissal of a service like this. Rather, just open questions.
The main fault I see is how we augment your "facts" with other "facts".<p>For instance, I clicked on Net Neutrality. It's tough to truly appreciate the "net neutrality" dynamic of the past few years without understanding the backstory of Reed Hastings battling the ILECs (<a href="https://www.dslreports.com/shownews/Netflix-CEO-Comcast-Wants-Whole-Internet-to-Pay-Them-129154" rel="nofollow">https://www.dslreports.com/shownews/Netflix-CEO-Comcast-Want...</a>), buddying up to Obama (<a href="https://www.businessinsider.com/house-of-cards-obama-2013-12" rel="nofollow">https://www.businessinsider.com/house-of-cards-obama-2013-12</a>), getting the law changed in his favor (<a href="https://www.nytimes.com/2015/03/13/technology/fcc-releases-net-neutrality-rules.html" rel="nofollow">https://www.nytimes.com/2015/03/13/technology/fcc-releases-n...</a>), and now rewarding Obama in kind: <a href="https://www.nytimes.com/2018/05/21/us/politics/barack-obama-netflix-show.html" rel="nofollow">https://www.nytimes.com/2018/05/21/us/politics/barack-obama-...</a>.<p>It's almost as if it had nothing to do with actual "net neutrality" at all, since none of the horrible things that were predicted have come to pass.<p>Broadband isn't as good as it could or should be, but it isn't getting worse.
Interesting work, YazIAm. Conceptually this reminds me a little bit of the idea behind Kialo. Down the road, there might be some interesting synergy potential between fact attestation and debate.<p>I've had a few related thoughts, on the off chance you find them useful:<p>1. It would be interesting, variously for journalists and the reading public, if it was possible for journalists to generate, disclose, and research cryptographic sourcing identifiers that enable them to figure out when they share a source. This could, critically, help journalists identify sources with a record of feeding other journalists bad information.<p>It'd be nice if the same work could help the rest of us unravel citation chains based on a small number of unique sources, but I'm not sure there's a way to achieve that without it being fairly simple to re-identify sources via brute-force.<p>2. If we can drill enough to find some bedrock, I think it might be useful to have virtuous-cycle user tools, like browser extensions that warn users when visiting an article by writers/publications (and potentially even sources, per above) with repeated sourcing/attestation problems. I don't think of this as purely a journalistic thing. In the sciences it could apply to methodology, data-collection/statistical integrity. With some bedrock, tooling in place, and established trust/community practice in place here, it might also be possible to expand the scope a little and address things like headlines that aren't at all moored to evidence.
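The sourcing-identifier idea above can be sketched with nothing but a salted hash: a deterministic function of the source's identity lets two journalists detect that they share a source without naming it. This is a toy illustration under my own assumptions (the function name, the salt scheme, the 16-character truncation); as the comment itself notes, any deterministic scheme like this is vulnerable to brute-force re-identification when the space of plausible sources is small:

```python
import hashlib

def sourcing_identifier(source_identity: str, context_salt: str = "global-v1") -> str:
    """Derive a pseudonymous identifier for a source.

    Deterministic: two journalists citing the same source (under the
    same salt) produce the same identifier, so overlap is detectable
    without disclosing who the source is. The salt scopes comparisons
    to a context; it does NOT prevent brute-forcing a small namespace
    of candidate identities.
    """
    digest = hashlib.sha256(f"{context_salt}:{source_identity}".encode())
    return digest.hexdigest()[:16]
```

A real design would likely need something stronger (e.g. a private-set-intersection protocol between journalists) precisely because of the brute-force problem flagged above.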
Would be nice to avoid having to open another tab to do fact checking - having some kind of a summary or something beyond "10 verifications" would at least let me know if opening the new tab is really worth it. Ideally the article and all verifications would live in one tab.
This reminds me a bit of an application I wanted to build many years ago: A platform for collections of arguments. For example, the question "are humans contributing to global warming?" Allow crowd-submitted answers. Collect information about facts cited (similar to how this works).<p>Mostly-rational people can be swayed by evidence and arguments, yet often disagree with each other because they doubt the truth of the other side's statements (fact checking), they believe the other side has missed important context in their argument, the other side has implicit assumptions that they disagree with (this is something that should be explored as a sub issue, then).<p>I'm very happy to see this. I'd love to see more of this kind of thing in the future.
But journalism is not only stating true facts; it is also choosing a limited set of facts to present.<p>You can pretty well do fake news by stating a list of true facts (and omitting a few other, fundamental, true facts).<p>Example:<p>Published news: person A killed person B by pushing a knife into his stomach, at a time when person B was unarmed.<p>Non-published statements: just before being killed, person B and a group of three friends armed with knives attacked person A, who was unarmed at the time, and started cutting him. During the struggle, person A managed to get hold of B's knife and fought back, harming B. The friends of B were scared and ran away. Person A ultimately survived his injuries, but person B didn't.<p>I suppose everybody agrees that publishing the first statement alone is a clear example of fake news.
This is reminiscent of how some scientific papers are written. You might be able to pick up some tips from those processes as well. For example:<p>- Write each sentence on a different line.<p>This allows for better change tracking, commentary etc in a number of tools including github.<p>- Inline sources and compile into a references section.<p>I want to see that a claim is backed by more than one piece of supporting work whenever possible.<p>- Have a compiled / formatted version for holistic evaluation.<p>After you've done the hard work of writing factual statements the piece also needs to be readable. This is an orthogonal view into the same information however and should be presented as such.
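The "inline sources, compile into a references section" step above could be automated with a small script. This is just a sketch: the `[src: ...]` marker syntax and the function name are my own invention, not anything the parent comment or the platform specifies:

```python
import re

def compile_references(lines):
    """Replace inline [src: URL] markers with numbered citations
    and append a deduplicated References section."""
    refs = []  # ordered list of unique sources, first-seen order
    out = []
    for line in lines:
        def number_marker(match):
            url = match.group(1).strip()
            if url not in refs:
                refs.append(url)
            return f"[{refs.index(url) + 1}]"
        out.append(re.sub(r"\[src:\s*([^\]]+)\]", number_marker, line))
    out.append("")
    out.append("References:")
    for i, url in enumerate(refs, 1):
        out.append(f"[{i}] {url}")
    return out
```

With one sentence per line, as suggested above, a diff then shows exactly which claims changed between revisions, and the compiled view stays readable for humans.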
This is a very good idea, but I'm afraid it won't play well with the human condition.<p>People construct reality out of things they want to believe. Most people aren't interested and/or skilled enough to be reasonable and think logically.<p>How could someone like that orange ape become POTUS? This is just one major indicator of the fall of reason.<p>The enlightenment has failed, and instead of a mass of mature citizens able to govern the world via consensus, as the democratic system needs, we are driven by emotions.<p>Keep on doing this, and I hope it may still have some impact.
A little feedback: nice. Please add a vertical splitter on the verify page so that the verification is not constrained to a small margin on the right, which makes it hard to read.<p>It would also be nice to have some color coding when reading articles, so you can know what the majority opinion is on the validity of the piece.<p>Hopefully once you have gathered enough data, you can get one of these NLP bots to check the facts in an even more neutral way.
Just a couple quick feedback points:<p>I'd prefer if the currently selected fact text remained highlighted when the mouse is not hovered over it. That way you know what the active "Investigate >" link is referring to.<p>Also, for accessibility and convenience, it would be nice if you could tab through the instances of the fact text blocks.
This is a really great idea. Today I don't trust journalists, but if they provide references there is at least a bit of scope for trusting a particular article. Of course, provided the references can stand up to scrutiny.
How would a reporter use this? Do you need to have an article already written/published? Does it have to be on your site?<p>It seems like it's less for fact checking, which would happen prior to publication, than for fact proof?
The hover state reminds me of the lyric annotation system Genius (<a href="https://genius.com/" rel="nofollow">https://genius.com/</a>) has, which is a good thing.
Cool platform. One suggestion - create a unique url/route for each article. I wanted to send a particular article to a friend, but realized there's no way to provide a unique link.
First, love the project. Hope it works and makes journalism better. Even if it doesn't it will be interesting to see how much the news industry sees value in crowd-sourced fact-checking.<p>Good luck!
This reminds me of Grasswire: <a href="https://news.ycombinator.com/item?id=7954327" rel="nofollow">https://news.ycombinator.com/item?id=7954327</a>
"open source" is not the right term and also is not a verb.<p>"transparency" is the word you are looking for. It's more general and predates "open source" by far.
See also <a href="https://fullfact.org" rel="nofollow">https://fullfact.org</a> which sponsored an Apache Solr hackathon a couple of years back.
What would you say if a well known, analytical journalist approached you to be able to perform fact checking, but you knew they were openly biased on many topics?
Sadly this "platform to open source factchecking" is not itself open source, as the site's source code is not available and the user content license restricts commercial use (see point 3 <a href="https://sourcedfact.com/terms_of_service" rel="nofollow">https://sourcedfact.com/terms_of_service</a> and <a href="https://blog.okfn.org/2010/06/24/why-share-alike-licenses-are-open-but-non-commercial-ones-arent/" rel="nofollow">https://blog.okfn.org/2010/06/24/why-share-alike-licenses-ar...</a>)
I'm surprised that FAGAM haven't got into the crowd-sourced fact checking space. Seems like a great way to build a knowledge engine / AI, by getting humans to "connect the facts" while motivating (herding) them with the idea they are doing it to "preserve truth in an era of fake narratives", when actually they are just helping build an AI.