The submission here is an interesting article by the founders of PubPeer, which has already been in the news quite a bit for surfacing examples of shoddy science papers that journal editors have had to withdraw. I learned about PubPeer on the group blog Retraction Watch (RW), and I just bopped over to Retraction Watch after reading the article kindly submitted here. RW reports in detail on the defamation suit against PubPeer that is mentioned in the parent article of this thread.[1] I hope the PubPeer experiment can continue and thrive and promote better scientific research practices.<p>Some of the other comments here suggest that anonymity of reviewers is dangerous in itself. That's why some researchers promote an open review process. Jelte Wicherts and his co-authors published a set of general suggestions for greater transparency in scientific research in an article in Frontiers in Computational Neuroscience (an open-access journal).[2]<p>"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research.
We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."<p>Wicherts has published another article, "Publish (Your Data) or (Let the Data) Perish! Why Not Publish Your Data Too?"[3] on how important it is to make data available to other researchers. Wicherts does a lot of research on this issue, trying to reduce the number of dubious publications in his main discipline, the psychology of human intelligence. When I see a new publication of primary research in that discipline, I don't take it seriously as a description of the facts of the world until I have read that independent researchers have examined the first author's data and found that they check out. Often the data are unavailable, or were misanalyzed in the first place.<p>[1] <a href="http://retractionwatch.com/2014/12/10/pubpeer-files-motion-dismiss-sarkar-defamation-case/" rel="nofollow">http://retractionwatch.com/2014/12/10/pubpeer-files-motion-d...</a><p>[2] Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker, and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012. doi: 10.3389/fncom.2012.00020<p><a href="http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00020/full" rel="nofollow">http://www.frontiersin.org/Computational_Neuroscience/10.338...</a><p>[3] Wicherts, J.M. & Bakker, M. (2012). Publish (your data) or (let the data) perish! Why not publish your data too? Intelligence, 40, 73-76.<p><a href="http://wicherts.socsci.uva.nl/Wichertsbakker2012.pdf" rel="nofollow">http://wicherts.socsci.uva.nl/Wichertsbakker2012.pdf</a>