Misinformation is everywhere, and fact-checking often feels like playing whack-a-mole. What if we flipped the script?<p>Imagine a blockchain where scientific papers get added, and key facts are extracted and linked back to their source. If a study turns out to be flawed—bad stats, misinterpretation, whatever—the linked facts get flagged automatically.<p>Now, let's take it further: news articles and reports can reference these facts to prove they're based on solid data. But if those facts later get debunked, the misinformation spreads in reverse—articles that relied on them also get flagged.<p>This way, truth isn't just a snapshot—it's something that evolves over time, transparently. No more endless debunking. Just a system that adapts as knowledge does.<p>Do you know if anything like that already exists and how it performs? I would be very interested in learning about it, thanks in advance :)
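The reverse-propagation part is really just transitive invalidation over a citation graph. A minimal sketch (all names and the data model are my own illustration, not any existing system):

```python
# Hypothetical sketch: claims link back to sources; articles link to claims.
# Retracting a source flags everything that (transitively) cites it.
from collections import defaultdict


class ClaimGraph:
    def __init__(self):
        self.supports = defaultdict(set)  # node -> nodes that cite it
        self.flagged = set()

    def link(self, citer, source):
        """Record that `citer` relies on `source`."""
        self.supports[source].add(citer)

    def retract(self, source):
        """Flag a source and propagate to every downstream citer."""
        stack = [source]
        while stack:
            node = stack.pop()
            if node in self.flagged:
                continue
            self.flagged.add(node)
            stack.extend(self.supports[node])


g = ClaimGraph()
g.link("claim:coffee-cures-x", "paper:2021-coffee-study")
g.link("article:news-piece", "claim:coffee-cures-x")
g.retract("paper:2021-coffee-study")
# g.flagged now contains the paper, the claim, and the article
```

The graph traversal is the easy part; as other comments point out, deciding what counts as a "claim" and who gets to call `retract` is where the actual problem lives.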
Compare:<p>* A system of computable logic to represent real-world science... which uses a blockchain.<p>* An interstellar probe... controlled by a smartphone app.<p>In each case, the first part is a huge unsolved problem of deep complexity and hard work, and the second part is a side bonus that can be added later.
There is a general rule: very few non-blockchain things benefit from being on a blockchain, and most people who propose blockchain-based solutions to real-world problems are either scammers or have not thought about the subject matter much.<p>"Key facts are extracted and linked back to their source" is where all the complexity is. Who/what is going to be extracting the facts? How are they linked to the source? What is a "source" anyway? Is a given sentence a "key fact" or not? What if there is an accidental mistake and the wrong source is linked? What if there is intentional misdirection from a paper's authors to avoid linking to a dubious study? What if there is intentional misdirection from a third party who wants to declare a paper "invalid" by incorrectly linking it to a flawed study?<p>To solve those, you will need some sort of trusted team to set policies, a community to work on them, and a few supporting tools (like a website, a forum, and maybe in-person/virtual conferences). A public (or semi-public) log of changes might be involved to increase trust, but which technology it uses does not really matter.
Sci-Hub with Claude Citations on top of it, plus an API to enrich public comms via a browser extension and community-notes-style labelers? Thoughts: who acts as the curators/editors? How do you make sure they continually operate in good faith? What do you do when people ignore facts? You're building a supercharged knowledge graph like Wikipedia, so you're going to need a lot of scaffolding around it.<p><a href="https://docs.anthropic.com/en/docs/build-with-claude/citations" rel="nofollow">https://docs.anthropic.com/en/docs/build-with-claude/citatio...</a><p><a href="https://docs.anthropic.com/en/prompt-library/cite-your-sources" rel="nofollow">https://docs.anthropic.com/en/prompt-library/cite-your-sourc...</a>