This issue also concerns studies with non-results in all scientific fields. The pressure to produce "results" causes two types of problems:<p>1. Massive fudging of data to achieve (statistical) significance.<p>2. Inefficiency, because researchers repeat failing experiments that they can't learn about from others' unpublished non-results.<p>It's fundamentally a problem of human psychology (reputation, face saving) and of organizational design, which sets up the reward structure (universities, the tenure process, journals, etc.). The system is pretty outdated and broken for the modern pace of information production, imho.
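To make concrete how the pressure for "results" distorts the literature even before any deliberate fudging: here is a minimal Monte Carlo sketch (my own, in Python, with arbitrary sample sizes and effect size) showing that if only significant, positive findings get published, the effect sizes that appear in print end up inflated.

    # Monte Carlo sketch: selective publication alone inflates published effect sizes.
    # All numbers here are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.2                  # small true effect, in standard-deviation units
    n_per_group, n_studies = 30, 10_000

    published = []
    for _ in range(n_studies):
        treatment = rng.normal(true_effect, 1, n_per_group)
        control = rng.normal(0.0, 1, n_per_group)
        t, p = stats.ttest_ind(treatment, control)
        if p < 0.05 and t > 0:         # only "positive", significant results get written up
            published.append(treatment.mean() - control.mean())

    print(f"true effect: {true_effect}")
    print(f"mean published effect: {np.mean(published):.2f}")  # noticeably larger than 0.2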
If I had to pick the one thing in our society that worries me most these days, it would be the mentality that "what I believe is more important than the truth, so it's OK to bend the truth to fit my beliefs." I really began to notice it during this election season and even wrote a post about it on my blog a while back: <a href="http://www.tomstechblog.com/post/Why-I-Dont-Trust-Polls-(and-What-We-Should-Do-About-It).aspx" rel="nofollow">http://www.tomstechblog.com/post/Why-I-Dont-Trust-Polls-(and...</a><p>(Please excuse the inadvertent plug; I don't think there's any way to post images here.)<p>To me, this news about medical studies represents the same mentality, but at a much more dangerous level: people willing to twist medical facts in order to support the conclusion they set out to prove.<p>I think our culture needs to take a hard look at the value we place on "truth" and start judging those who try to hide it much more harshly.
First of all, the title is misleading. Negative results are not the same thing as unfavorable results. Second, as a person involved in biomedical research, I am very familiar with the bias toward publishing positive results, and leaving the negative results buried in a lab notebook somewhere. There are two root causes for this:<p>1. Funding agencies reward positive results. Of course, the biggest funding agency in the U.S. is the U.S. gov't. The gov't must answer to the people, and the people only want to hear about positive results. Show some interest or at least concern for negative findings (and learn, or teach kids in school, why negative findings are important), and you'll find more scientists publishing negative findings.<p>2. Funding, especially in the U.S., is a competition. Why would you tell your competitors all the things that didn't work? Why give them that strategic advantage? Would you expect Google to tell Yahoo which search algorithms don't work? Reward scientists based on consistent good work, and not based on their ability to beat out competitors, and you'll find more scientists publishing negative findings.
I can't trust this article. If they had done this study and found unfavorable results are just as likely to be published, their study would be much more boring, and it would not have been published. ;-)
Good steps, at least for pharma:<p>- Require better registration of clinical trials and automatic aggregation of results, as part of medical regulation.<p>- Make the clinical data submitted to the FDA (or equivalent agencies) public, or at least accessible to researchers. Currently, the data sent to journals is <i>not</i> the same set previously submitted to the FDA; it's been touched up to make it more suitable for publication. Another paper found that, overall, the published articles report slightly more positive results than the corresponding FDA data do. (Dunno how they managed to get that data set.) Authors have their own specific justifications for this, but the overall trend is bad.
Peter Norvig has a good article about the implications of this and other research problems.<p><a href="http://norvig.com/experiment-design.html" rel="nofollow">http://norvig.com/experiment-design.html</a>
Personally, I think the problem is the lack of raw data. We need a much more transparent scientific process. Imagine a website that allowed the following workflow:<p>1. Upload hypothesis
2. Describe experiment
3. Add datasets as they come in
4. Analyse data
5. Publish<p>This way, if there were a new, amazing result, the first thing you would do is go through the raw data and check it: re-run the statistics, and automatically screen for signs of fraudulent data (a rough sketch of such checks follows).
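As a rough illustration of what those automatic checks might look like (my own sketch; the function names, thresholds, and example data are all invented), the site could re-run the reported test against the uploaded raw data and screen the terminal digits of the measurements, which tend to be roughly uniform in real data but often aren't in fabricated data.

    # Sketch of automated checks a raw-data repository could run on an upload.
    # Function names, tolerances, and the example dataset are hypothetical.
    import numpy as np
    from scipy import stats

    def recheck_ttest(group_a, group_b, reported_p, tolerance=0.005):
        """Re-run a two-sample t-test on the raw data and flag a mismatch
        with the p-value reported in the write-up."""
        _, p = stats.ttest_ind(group_a, group_b)
        return {"recomputed_p": round(p, 4),
                "matches_reported": abs(p - reported_p) <= tolerance}

    def terminal_digit_screen(values):
        """Chi-square test on the third decimal digit: real measurements are
        usually close to uniform there; fabricated numbers often aren't."""
        digits = [int(f"{abs(v):.3f}"[-1]) for v in values]
        counts = np.bincount(digits, minlength=10)
        _, uniformity_p = stats.chisquare(counts)
        return {"digit_counts": counts.tolist(), "uniformity_p": round(uniformity_p, 3)}

    # Hypothetical uploaded dataset with a claimed p-value of 0.04
    rng = np.random.default_rng(1)
    group_a, group_b = rng.normal(0.3, 1, 40), rng.normal(0.0, 1, 40)
    print(recheck_ttest(group_a, group_b, reported_p=0.04))
    print(terminal_digit_screen(np.concatenate([group_a, group_b])))

Checks like these won't catch a determined fraudster, but combined with mandatory raw-data upload they would at least raise the cost of fudging.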
I wonder whether the double-blind model could be extended into the publishing stage. Or perhaps journals could adopt some requirement for a 50/50 split between positive and negative results.