The PubMed Commons initiative [1] by the National Institutes of Health, mentioned in the article kindly submitted here, is a start at addressing the important problems the article describes. One critique [2] of the PubMed Commons effort calls it a step in the right direction, but notes that it includes too few researchers so far. A blog post on PubMed Commons [3] explains the rationale for limiting, at first, the number of scientists who can comment on previous research, until the system develops further.

[1] http://www.ncbi.nlm.nih.gov/pubmedcommons/

[2] http://retractionwatch.wordpress.com/2013/10/22/pubmed-now-allows-comments-on-abstracts-but-only-by-a-select-few/

[3] http://www-stat.stanford.edu/~tibs/PubMedCommons.html

USING MY EDIT WINDOW:

Some of the other comments mention studies with data that are just plain made up. Fortunately, most human beings err systematically when they fabricate data, making it look too good to be true. An astute statistician who examines a published paper can therefore (as some have done) detect made-up data just by analyzing the numbers the paper reports; a rough sketch of this idea appears at the end of this comment. A researcher who does this often to find made-up data in psychology is Uri Simonsohn, who publishes papers about his methods so that other scientists can apply the same statistical tests.

http://opim.wharton.upenn.edu/~uws/

Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), offers a set of general suggestions on how to make the peer-review process in scientific publishing more reliable:

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012. doi: 10.3389/fncom.2012.00020

http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00020/full

Wicherts does a lot of research on this issue, aiming to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.

"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research.
We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."
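Here is the rough sketch promised above: a toy simulation, in Python, of the general idea that fabricated summary statistics often look "too consistent" to have come from real sampling. This is not Simonsohn's actual procedure, and every number in it is hypothetical; it only illustrates how one can ask how often honestly sampled groups would produce standard deviations as similar as the ones a paper reports.

    # Toy check: are the reported group SDs implausibly similar?
    # All inputs below are made-up examples, not data from any real paper.
    import numpy as np

    reported_sds = np.array([2.01, 2.03, 2.02])  # hypothetical SDs reported for 3 groups
    n_per_group = 15                             # hypothetical sample size per group
    pooled_sd = reported_sds.mean()              # treat the average SD as the "true" SD

    observed_spread = reported_sds.max() - reported_sds.min()

    rng = np.random.default_rng(0)
    n_sims = 100_000

    # Simulate honest sampling: draw each group from a normal distribution
    # with the pooled SD, then compute the spread of the sample SDs.
    sims = rng.normal(0.0, pooled_sd, size=(n_sims, len(reported_sds), n_per_group))
    sim_sds = sims.std(axis=2, ddof=1)
    sim_spread = sim_sds.max(axis=1) - sim_sds.min(axis=1)

    # How often does honest sampling give SDs at least this similar?
    p_similar = (sim_spread <= observed_spread).mean()
    print(f"Probability of SDs this similar under honest sampling: {p_similar:.4f}")

A very small probability does not prove fraud, but it is the kind of red flag that prompts statisticians like Simonsohn to ask for the raw data.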