My pet idea for making science less wrong is to maintain a citation graph of all papers. Then, when a problem is found with any paper, every paper downstream of it is automatically flagged as at risk of being wrong. All of those authors (or others) then have to go back and re-evaluate how the citation was used and how it affects their result. Once they decide it's OK, they update their paper and the flag is removed. If it's not OK, their paper is retracted, flagged as wrong, or simply keeps its at-risk flag if nobody considers it important enough to re-check.<p>This way, people would be reluctant to pad their papers with courtesy citations for friends, and reluctant to depend on unreliable work. Work with good methodology would become more attractive to cite and build on.<p>Maybe peer review could even become optional this way. If other researchers trust your work enough to risk their own papers by citing it, that acts as peer approval in the long term. Actual peer review would only be needed to give an immediate indication of quality.<p>We would first need to solve the versioning problem: right now there's no good way to update a paper when mistakes are discovered, or simply to improve it.
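The flagging step itself would be cheap to implement: it's just a reverse reachability query over the citation graph. A minimal sketch in Python, with made-up papers and the (assumed) simplification that we only want the set of transitively affected papers:

```python
# Hypothetical sketch: propagate an "at risk" flag through a citation graph.
# Papers and edges are invented; "cites" maps a paper to the papers it cites.
from collections import defaultdict, deque

cites = {
    "B": ["A"],          # paper B cites paper A
    "C": ["A", "B"],
    "D": ["C"],
    "E": [],
}

# Invert to a "cited_by" index so we can walk downstream from a flawed paper.
cited_by = defaultdict(list)
for paper, refs in cites.items():
    for ref in refs:
        cited_by[ref].append(paper)

def flag_downstream(flawed_paper):
    """Return every paper that transitively depends on the flawed one."""
    at_risk, queue = set(), deque([flawed_paper])
    while queue:
        current = queue.popleft()
        for dependent in cited_by[current]:
            if dependent not in at_risk:
                at_risk.add(dependent)
                queue.append(dependent)
    return at_risk

print(flag_downstream("A"))  # {'B', 'C', 'D'} would all need re-evaluation
```

The hard part is everything the sketch leaves out: deciding when a flag can legitimately be cleared, paper versioning, and getting authors to participate at all.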
This is a nice summary of fivethirtyeight's science crisis work, but (as with most treatments of the subject) it leaves me fundamentally unsatisfied.<p>Apparently we've identified the problem enough to say "there's a problem". We can say "look at all the ways you can manipulate this analysis to achieve the desired outcome". We can say "gosh, science is harder than we thought". But it seems we're still far from a convincing solution.<p>The fact that statistical analysis is so liable to manipulation seems to call the entire thing into question. In the article they take comfort from the fact that many of the labs analyzing the red card/race data arrived at similar conclusions. One would assume this is because they made similar choices in the analysis. But what guarantees those were the right sorts of choices? Could it not simply be that the labs shared the same biases and errors, making that outcome more common? Is a proper analysis really determined by (essentially) democratic vote? If that's what we've arrived at, it gives me less rather than more confidence in the robustness of the scientific process.<p>It feels like something fundamental has to be reimagined. It's difficult to prove things about the world, but maybe it's actually near-impossible? Or maybe we need to get real about the cost of actually demonstrating anything reliably. Instead of individual labs running one-off experiments, it becomes researchers collaborating openly to build the perfect experiments, which are then run by many different labs, then analyzed collectively in the open for strengths and weaknesses, then reformulated, sent out again, and so on iteratively until, at the end of years of research, one little bit of probably-truth drips out the bottom of the system.<p>But that bit would be something we could build on.
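For what it's worth, the final aggregation step is the best-understood part of that pipeline. A rough sketch of how many labs' estimates could be pooled (standard fixed-effect, inverse-variance weighting; the numbers are invented, not from the article):

```python
# Toy sketch: pool one effect estimate from many labs using
# fixed-effect, inverse-variance weighting (invented numbers).
import math

# Each lab reports (effect_estimate, standard_error).
lab_results = [(0.21, 0.10), (0.15, 0.08), (0.30, 0.12), (0.18, 0.09)]

weights = [1 / se**2 for _, se in lab_results]
pooled_effect = sum(w * est for (est, _), w in zip(lab_results, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled_effect:.3f} +/- {pooled_se:.3f}")
```

The point of the iterative loop would be that each pass narrows the pooled uncertainty, rather than each lab publishing its own one-off answer.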
Science isn't broken... here's a bunch of ways that people cheat?<p>Science itself can never be broken, but when people cheat, for personal gain, the system that makes it effective at actually figuring out what is true and what isn't, then that system is broken.
I know tons of scientists, post-docs, lab heads.<p>It's <i>WAY</i> too hard to be a scientist. The salary and other sacrifices scientists are asked to make are unfair and surely drive many to leave science.<p>Science is the only thing moving everything forward, and if the funding strategy is to look for irrational people who will work insanely hard for next-to-nothing salaries... it doesn't sound like a great strategy.
Science has a huge PR problem, one so bad you might as well say science is broken.<p>The average person is losing faith in science because of high-profile failures, and the distrust is only increasing.<p>I fear Nassim Taleb is right.<p><a href="https://medium.com/incerto/the-intellectual-yet-idiot-13211e2d0577#.pbipdn1dg" rel="nofollow">https://medium.com/incerto/the-intellectual-yet-idiot-13211e...</a>
I think the most damning thing to come out of the replication crisis was when they asked a bunch of scientists to place bets on whether a given paper (with p < 0.05) would replicate, and it turned out these bets were right quite often (<a href="https://fivethirtyeight.com/features/how-to-tell-good-studies-from-bad-bet-on-them/" rel="nofollow">https://fivethirtyeight.com/features/how-to-tell-good-studie...</a>).<p>That shouldn't be possible! Science is supposed to be the best possible epistemological methodology, and here it is being beaten in "success rate of determining true from false" by guessing. What's immensely frustrating is that it's not a question of whether we're smart enough to tell true from false: we clearly have the power (the guesses were often right), we're just not using it. Whatever "truth compass" the guessers were using should be part of the scientific process somehow. That's what is "broken".
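Part of the explanation is just base rates: if only a small fraction of tested hypotheses are true, a p < 0.05 result is often a false positive, and anyone with a rough sense of which hypotheses are plausible can out-predict the threshold. A back-of-the-envelope sketch (the prior, power, and alpha values are assumptions, not numbers from the linked piece):

```python
# Back-of-the-envelope: how often is a p < 0.05 result actually a true effect?
# prior = fraction of tested hypotheses that are really true (assumed),
# power = chance of detecting a true effect, alpha = false-positive rate.
def ppv(prior, power=0.8, alpha=0.05):
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.2, 0.05):
    print(f"prior {prior:.2f} -> P(effect is real | p < 0.05) = {ppv(prior):.2f}")
# With a 5% prior, more "significant" findings are false than true, so a
# forecaster with a decent sense of which hypotheses are plausible can
# beat the p-value threshold on its own.
```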
If a person's mental state can impact their decisions and the quality of their work, why aren't we tracking the subjective states of those conducting research? And does <a href="http://eqradio.csail.mit.edu" rel="nofollow">http://eqradio.csail.mit.edu</a> provide a tool for doing so?
> It’s no accident that every good paper includes the phrase “more study[1] is needed” ...<p>[1] read: FUNDING<p>Let's distinguish "science" -- the scientific method, the general advancement of human understanding over the centuries, etc. -- from "pork": institutionalized government funding, the establishment pursuing that, and all of the mundane, bureaucratic processes that ensue, and then the resulting hype, pettiness, recriminations, and sacrificing of ideals that it inculcates.<p>There is nothing wrong with science, although it may be harder these days to recognize it.<p>Pork, on the other hand, is approaching a singularity.
Nobody is claiming that the basic principle isn't working. What is warped and bent is the pipeline that would allow science to proceed faster, results to be transferred to companies faster, companies to actually turn those results into buyable products, and that revenue to feed back into scientific endeavors.<p>That machine is broken, leaking, and in parts actually moving contrary to the scientific interests of humanity.<p>The quality issue within science itself could be remedied by replacing plain citations with partial, repeatable experimental coverage of the cited work, which would also end citation inflation.
This article is from August 2015. Previous discussion: <a href="https://news.ycombinator.com/item?id=10085698" rel="nofollow">https://news.ycombinator.com/item?id=10085698</a>
The answer is clear.<p>Hypothesis-driven experiments.<p>Drive it into the brains of everyone who might enter the scientific profession, and then, when someone is caught with an <i>experiment-driven hypothesis</i>, we don't have to speculate about whether it was fraud: by virtue of holding their credentials they will have known better, and we can safely revoke those credentials.
There are actually ways in which science can come up with the wrong answer. For example, if we lived much later in the universe's history, expansion would be so rapid that we could not see the stars around us; the night sky would be dark, and we would wrongly conclude that nothing is out there.<p>Sadly, what people call science these days often has nothing to do with the scientific method; it's just a bunch of idiots confusing correlation with causation.