An excellent article making good, clear points. If only more science writing were like this.

In summary, the authors got 29 teams of researchers to work on the same data set and answer the same research question (are football [soccer] referees more likely to give red cards to dark-skinned players?). The teams proposed different analytical/statistical approaches, discussed each other's approaches, and came up with a range of effect sizes based on the data.

Some key quotes:

"Most researchers would find this broad range of effect sizes disturbing. It means that taking any single analysis too seriously could be a mistake, yet this is encouraged by our current system of scientific publishing and media coverage."

"The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake. The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts."

"Scientists around the world are hungry for more-reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so. Our first project had a budget of zero, and we attracted scores of fellow scientists with two tweets and a Facebook post."

It would be great to see collaborative platforms for the scientific community grow in popularity and give rise to more valid, vetted research findings.
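To make the range-of-effect-sizes point concrete, here is a minimal sketch of how two defensible analyses of the same data can disagree. Everything in it is hypothetical and invented for illustration (the simulated variables `dark`, `league`, and `red_card` are assumptions, not the paper's actual data): a pooled odds ratio picks up a league confounder, while a stratified (Mantel-Haenszel) estimate does not.

    # Illustrative sketch only: hypothetical simulated data, not the paper's dataset.
    # Two defensible analytic choices yield different effect-size estimates.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical setup: two leagues with different red-card rates, and a
    # different mix of skin tones in each league. No true skin-tone effect.
    league = rng.integers(0, 2, n)               # league 0 or 1 (confounder)
    dark = rng.binomial(1, 0.3 + 0.2 * league)   # dark-skinned players concentrated in league 1
    red_card = rng.binomial(1, 0.02 + 0.03 * league)  # card rate depends on league only

    # Analysis A: pooled odds ratio, ignoring league entirely.
    def odds(p):
        return p / (1 - p)

    p1 = red_card[dark == 1].mean()
    p0 = red_card[dark == 0].mean()
    print("pooled OR:    ", odds(p1) / odds(p0))   # > 1, despite no true effect

    # Analysis B: Mantel-Haenszel odds ratio, stratified by league.
    num = den = 0.0
    for g in (0, 1):
        m = league == g
        a = ((dark == 1) & (red_card == 1) & m).sum()
        b = ((dark == 1) & (red_card == 0) & m).sum()
        c = ((dark == 0) & (red_card == 1) & m).sum()
        d = ((dark == 0) & (red_card == 0) & m).sum()
        t = m.sum()
        num += a * d / t
        den += b * c / t
    print("stratified OR:", num / den)             # ~1, the null recovered

Neither analysis is indefensible on its face, which is exactly why a single team's choice of specification can move the headline effect size, and why a crowd of analysts surfaces the spread.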
The research here is interesting, but the title seems odd for the content. The teams largely converged: they found either no evidence of racism (null results, which often go unpublished) or results pointing in the same direction (that race was a factor). In a research paper, unlike in media echo chambers, the methods are generally laid out clearly for the intended audience. To me, the findings indicate that research is functioning correctly.