Many of the comments in this thread seem to have nothing to do with the content of the article, or with the researchers' blog post cited within it. There also seem to be some mistaken assumptions about the purpose of this AI.<p>Here is what the model is actually (attempting) to do:<p>>It calls attention to questionable citations, allowing human editors to evaluate the cases most likely to be flawed without having to sift through thousands of properly cited statements. If a citation seems irrelevant, our model will suggest a more applicable source, even pointing to the specific passage that supports the claim. Eventually, our goal is to build a platform to help Wikipedia editors systematically spot citation issues and quickly fix the citation or correct the content of the corresponding article at scale.
Most of my life I have been told: "do not rely on Wikipedia, it is inaccurate". And I get it, it's true if you're going through academia...<p>But compare it to the rest of the internet. Compare it to every single propaganda website. Text on Wikipedia has one of the highest chances of being true by default. If a random website contradicts Wikipedia, one shouldn't trust that website.<p>I'm sick of people comparing Wikipedia to peer-reviewed journals... when instead people get their knowledge from tabloids, random newspapers, individuals getting mad in YouTube videos, and websites like Facebook.<p>If Facebook claims to know more about the world than Wikipedia, it should fact-check itself.
This is the science of citogenesis.<p>"Fact-check" is already a sullied, overloaded term. It would be better replaced by something like "citation-check".<p>There is a panoply of cited sources ("facts") that need to be properly vetted ("aligned") to see whether they actually contribute toward their premise ("fact").<p>This is why Wikipedia can often lay claim to being more scientific (through the sheer volume of citations containing "facts") assembled by its editors (citation scientists). It is also why many educators teach their students not to cite "Wikipedia", which is (a poor?) attempt to train students to root out misleading sources (often mistaken for "facts").<p>Mmmm, but it's SCIENCE! That doesn't necessarily mean it's a fact.<p>Meta ("Facebook") would be venturesome to claim the science of citogenesis, as there are money, prestige, and power to be gained by shaping the "science" of these citations. That is, by using artificial intelligence (AI).<p>Arguably, today's fact-checkers use a process that rarely establishes its "factual" claim (really a premise) in a clean, unarbitrary manner free of bias: always with unnecessary filler, aiming to sway readers with cemented anchors that keep them away from the basic but unwanted "fact". Call them opinionated citation checkers; they save readers from doing the work through the artificial power of a singular analysis, curating the premise ("fact").<p>Fact-checkers are basically wannabe citogenesis scientists who are merely pushing their points of view.
And readers (students) who failed their educators' lesson would claim it as "fact".<p>AI may or may not help, and it cuts both ways: toward and away from the desired premise.<p>How many different algorithms would it take to dislodge this badly abused citing of singular analyses by these seeming "fact-checkers"?<p>Who would be in control of this machine learning? Meta (Facebook)!<p>Who oversees these AI algorithms? Meta!<p>And who would be the one that watches the watchers? Meta?<p>And will today's educators teach future generations of discerning readers the much-needed distinction between citation checkers and today's "fact-checkers"? (<i>*crickets*</i>)
It would be more in their line to build a fact-checker for the sort of Metaverse nonsense Zuck seems to be heavily invested in.<p>The only reason for doing something like this is to ultimately subvert the Wikimedia editors, setting up Factbook/Meta as the sole arbiter of what's correct and true on Wikipedia.
The original blog post would be a better link to post here rather than the article: <a href="https://tech.fb.com/artificial-intelligence/2022/07/how-ai-could-help-make-wikipedia-entries-more-accurate/" rel="nofollow">https://tech.fb.com/artificial-intelligence/2022/07/how-ai-c...</a><p>Reading through it, I strongly disagree with FB's example for "Better citations in action".
I don't see an improvement in the wording and IMO they would be making it worse by switching from an official first party source to a third party one.<p>While their tool might be useful to find semantic (mis)matches, a much more important part of verifying citations is to verify that the source has any business to make claims about the matter in the first place. <a href="https://xkcd.com/978/" rel="nofollow">https://xkcd.com/978/</a><p><i>QUOTE
Better citations in action
Usually, to develop models like this, the input might be just a sentence or two. We trained our models with complicated statements from Wikipedia, accompanied by full websites that may or may not support the claims. As a result, our models have achieved a leap in performance in terms of detecting the accuracy of citations. For example, our system found a better source for a citation in the Wikipedia article “2017 in Classical Music.” The claim reads:<p>“The Los Angeles Philharmonic announces the appointment of Simon Woods as its next president and chief executive officer, effective 22 January 2018.”<p>The current Wikipedia footnote for this statement links to a press release from the Dallas Symphony Association announcing the appointment of its new president and CEO, also effective January 22, 2018. Despite their similarities, our evidence-ranking model deduced that the press release was not relevant to the claim. Our AI indices suggested another possible source, a blog post on the website Violinist.com, which notes,<p>“On Thursday Los Angeles Philharmonic announced the appointment of Simon Woods as its new Chief Executive Director, effective Jan. 22, 2018.”<p>The evidence-ranking model then correctly concluded that this was more relevant than Wikipedia’s existing citation for the claim.
/QUOTE</i>
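For what it's worth, the claim/passage matching the blog excerpt describes can be caricatured in a few lines. Meta's actual system is a trained neural evidence-ranking model over full web pages; the lexical-overlap score below is only a crude stand-in I'm using to show the shape of the problem, with the blog's own example paraphrased as test data:

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word multiset for crude lexical comparison."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overlap_score(claim, passage):
    """Fraction of the claim's tokens that also appear in the passage."""
    c, p = tokens(claim), tokens(passage)
    shared = sum((c & p).values())  # multiset intersection
    return shared / max(sum(c.values()), 1)

def rank_evidence(claim, passages):
    """Return candidate passages sorted best-match first."""
    return sorted(passages, key=lambda p: overlap_score(claim, p), reverse=True)

claim = ("The Los Angeles Philharmonic announces the appointment of Simon Woods "
         "as its next president and chief executive officer.")
passages = [
    # the Dallas press release currently cited
    "The Dallas Symphony Association announced the appointment of its new "
    "president and CEO.",
    # the Violinist.com-style passage the model preferred
    "On Thursday Los Angeles Philharmonic announced the appointment of Simon "
    "Woods as its new Chief Executive Director.",
]
best = rank_evidence(claim, passages)[0]
# even this toy score ranks the LA Phil passage above the Dallas one
```

Of course, a bag-of-words score like this is exactly what a real model must improve on: both passages share "announced the appointment of its new president" boilerplate, which is why FB trains on full websites rather than a sentence or two.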
I can't help but think of The Onion's take on Wikipedia [1].<p>It's more accurate to say that this AI is fact-checking citations. There are lots of ways you can skew or fabricate information on Wikipedia. One well-known way is to create some source for a claim and then cite it on Wikipedia. What will an AI do in this case?<p>3-4 years ago I stood in Menlo Park when Mark Zuckerberg got up and announced, in response to the misinformation issues of the 2016 election, that an effort would be made to fact-check articles. My immediate thought, which hasn't changed, was "that's never going to work". You will always find edge cases where reasonable people would disagree, but that's not even the big problem.<p>The big problem is that there are lots of people who aren't the slightest bit interested in the "truth". I've heard it said that whatever ridiculous claim you want to fabricate, you can find 30% of Americans who will believe it. As soon as you start trying to label content as truthful or not, you won't change the minds of most people. For many you will be contradicting their world view and you'll simply be dismissed as "biased" or "fake news".<p>I honestly don't know what the solution to this is. I do think sharing links on Facebook itself was probably a mistake for many reasons.<p>So this effort to fact-check citations just seems more of the same doomed policy.<p>Disclaimer: Ex-Facebooker.<p>[1]: <a href="https://www.theonion.com/wikipedia-celebrates-750-years-of-american-independence-1819568571" rel="nofollow">https://www.theonion.com/wikipedia-celebrates-750-years-of-a...</a>
Look, I understand that AI as a tech will evolve and get better, but right now when I hear something has switched to AI, I immediately assume low quality and bugs. Brace for false positives and an overall mediocre experience.
I wonder if this famous quote by Lenin is in the dataset:<p>"Humanity's biggest problem with quotes found on the Internet is that people tend to immediately believe in their authenticity."
Not scary at all that the company that censored news articles at the behest of the FBI in order to swing an election is going to do even more "fact checking".
Who guards the guardians?<p>Who fact-checks the fact-checkers?<p>Which authoritative AI determines the authoritativeness of the other AI's?<p>?<p>Or, perhaps phrased in "American Dad" terms... "Who's manning the Internet?" <g><p><a href="https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes" rel="nofollow">https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes</a>
This is clearly gonna be a failure as far as truth is concerned.<p>What Meta is gonna try here is to use AI to overwhelm the human editors, reshaping reality to meet whatever Mr Zuck believes it is.<p>IOW they're gonna DDoS Wikipedia to show Mark's reality.<p>Anyone who thinks that's not gonna happen (whether it happens intentionally or not) is just fooling themselves.
Fortunately I've got a two-year-old clone of Wikipedia in a ZIM file. 2.6 GB isn't all that much, and it has pictures.<p>I don't think I will trust Wikipedia ever again if they do this. It is already like an encyclopedia made of bar-bathroom graffiti. Now they want to turn it into 4chan.
On the one hand, good on them for tryin' to do somethin' properly innovative with "AI", but on the other hand, AHAHAHAHAHAHAHAAHAHAHAHAH! Meta "fact-checking"? Hahahahahah!
And then what is it going to do once it "grades" the information pulled from sources?<p>Edit the article? I don't expect it to be accurate enough to avoid getting quickly banned or auto-reverted.<p>Raise an issue for volunteers to manually review? Probably not accurate or important enough to be a priority, given the likely volume.<p>Honestly, a good portion of the sources I check in a Wikipedia article just 404 or have become paywalled, and that would be pretty trivial for a bot to detect, so there's obviously not a huge desire to have bots checking sources in the first place.
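To illustrate the point about triviality: the dead-link check described above really is a few lines of stdlib Python. Everything here is a sketch — the status-code buckets are my own rough guess at what counts as "dead" vs. "paywalled", not how any actual Wikipedia bot classifies links:

```python
import urllib.error
import urllib.request

def classify_status(code):
    """Map an HTTP status code to a rough citation-health label.

    The buckets are illustrative guesses, not an official taxonomy.
    """
    if code in (404, 410):
        return "dead"
    if code in (401, 402, 403):
        return "paywalled or blocked"
    if 200 <= code < 300:
        return "ok"
    return "needs review"

def check_citation(url, timeout=10):
    """HEAD-request a cited URL and report its rough health."""
    req = urllib.request.Request(
        url, method="HEAD",
        headers={"User-Agent": "citation-check-sketch/0.1"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except urllib.error.URLError:
        return "unreachable"
```

In practice a bot would also need to handle soft 404s (pages that return 200 but say "article not found"), which is where it stops being trivial and starts looking like the semantic-matching problem Meta is actually working on.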
I'm not religious anymore but this feels applicable:<p>> Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother's eye. - Matthew 7:5<p>(Sorry, I grew up on KJV; it's what I remember and what sounds right in my head)