I think it's situations like this that result in society having such a hard time with 'science'. The term has been so heavily co-opted by fields that just don't have sufficient rigor for the term to hold weight. Yet, on various topics, we have this publicized attack of "you're a science denier!". At the end of the day, there are two 'types' of science, one where I can take the results and make accurate predictions, and one where I can't. The latter just amounts to 'our best guess' where the accuracy is entirely unknown. If we want the general populace to 'trust science', we need to stop calling the latter science. In short, if you can't repeat and predict, stop calling it science.
"The paper received a great deal of attention, and was covered by over 80 media outlets including The Economist, the Boston Globe, the Los Angeles Times, and Scientific American."<p>And how many of these will cover the retraction? A dozen at most? And all those articles will be sitting out there, getting cited and read on occasion.
Releasing your data should be a requirement for publication. If the original author had wanted to keep this a secret he could've withheld his data and nobody would've been able to correct him, there simply would've been discrepant studies.
Alas, these kinds of problems are not restricted to the social sciences. Case in point, this retraction from a couple of days ago: <a href="https://retractionwatch.com/2019/09/25/nature-paper-on-ocean-warming-retracted/" rel="nofollow">https://retractionwatch.com/2019/09/25/nature-paper-on-ocean...</a> Very similar to this one really; the paper claimed to overturn our existing knowledge in a way that fitted a narrative people were inclined to believe (in that case: we're all doomed) and was immediately seized on by all the news sites because of it, except the statistics were mucked up and it couldn't show what it claimed to. The fact that it was so surprising should've been even more of a massive warning sign in that case though.
Conclusion of the new analysis:<p><i>In sum, Decety et al. [1] have amassed a large and valuable dataset, but our reanalyses provide different interpretations of the authors’ initial conclusions. Most of the associations they observed with religious affiliation appear to be artifacts of between-country differences, driven primarily by low levels of generosity in Turkey and South Africa. However, children from highly religious households do appear slightly less generous than those from moderately religious ones.</i><p><a href="https://www.sciencedirect.com/science/article/pii/S0960982216306704" rel="nofollow">https://www.sciencedirect.com/science/article/pii/S096098221...</a>
From the article:<p>Although Decety’s paper had reported that they had controlled for country, they had accidentally not controlled for each country, but just treated it as a single continuous variable so that, for example “Canada” (coded as 2) was twice the “United States” (coded as 1).<p>I mean I don't even understand how this seemed like a normal thing to do?
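In Python terms, the difference looks roughly like this; a minimal sketch with made-up data and hypothetical column names, not the authors' actual pipeline:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data in the shape of the study: one row per child.
    df = pd.DataFrame({
        "country":    [1, 1, 2, 2, 3, 3],  # 1 = US, 2 = Canada, 3 = ...
        "generosity": [5, 7, 6, 8, 4, 9],
    })

    # The mistake: country enters as a number, so the model assumes
    # generosity changes linearly with the country *code*.
    wrong = smf.ols("generosity ~ country", data=df).fit()

    # The intended fixed effects: C() treats country as categorical,
    # expanding it into per-country dummy variables (minus a baseline).
    right = smf.ols("generosity ~ C(country)", data=df).fit()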
Sometimes I feel weird coding zip codes as strings but this is a great example why. If my program ever treats a zip code like a number I would like it to throw an error. At least in this case the error looks like an accident.<p>On topic, from yesterday: <a href="https://news.ycombinator.com/item?id=21067764" rel="nofollow">https://news.ycombinator.com/item?id=21067764</a><p>It's another social sciences paper but in this case a co-author has requested a retraction over his strong belief that the paper includes fabricated data. The retraction request has been denied. It differs from this paper in that the data anomalies look intentional.
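The failure mode in miniature (made-up values):

    # A zip code stored as an int silently loses its leading zero:
    zip_int = int("02134")   # -> 2134, the leading zero is gone

    # Stored as a string, accidental arithmetic fails loudly instead
    # of quietly corrupting the data:
    zip_str = "02134"
    try:
        zip_str + 1          # raises TypeError
    except TypeError as err:
        print("refused:", err)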
I wrote a webpage in 2015 about the Decety paper, what it was and how it was presented in the media, which might be of interest. The paper seemed highly suspect in various ways even at the time. I added an update in 2017 that "new analysis shows that the original study failed to adequately control for the children’s nationality". On the (unfinished) page I'm using the paper as an excuse to teach myself basic statistics and research methods.<p><a href="http://www.adamponting.com/decety/" rel="nofollow">http://www.adamponting.com/decety/</a>
"But when they included their categorically-coded country (1 = US, 2 = Canada, and so on) in their models, it was entered not as fixed effects, with dummy variables for all of the countries except one, but as a continuous measure. This treats the variable as a measure of ‘country-ness’ (for example, Canada is twice as much a country as the US) instead of providing the fixed effects they explicitly intended"<p>How did this not get caught immediately? If I did a study and found out that kids in Zambia are 47 more times as generous as American kids that'd make me instantly suspicious.<p>Or maybe the reviewers were all Canadian /s
There were even news sites that published articles about the original paper AFTER the retraction was announced! The state of science reporting is very sad.
So a categorical variable got mixed up as a numerical one and produced misleading results.<p>To the credit of the authors, they released their data sets. However, I suspect that proper data exploration and visualisation would have prevented all this. Visual inspection would most likely have revealed that there is no visible effect, or even an effect in the opposite direction, and once you see this, all alarm bells should go off if your model predicts otherwise. So I suspect that the authors skipped some basic steps and got carried away by results that promised a nice headline.
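Something like the following quick look at per-country group means would likely have surfaced the problem; a minimal sketch assuming hypothetical column names, not the released dataset's actual schema:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical stand-in for the dataset: one row per child.
    df = pd.DataFrame({
        "country":    ["US", "US", "Canada", "Canada", "Turkey", "Turkey"],
        "religious":  [True, False, True, False, True, False],
        "generosity": [6.0, 6.2, 7.1, 6.9, 3.4, 5.0],
    })

    # If religiosity mattered, the gap should show up within most
    # countries, not just in a between-country aggregate.
    means = df.groupby(["country", "religious"])["generosity"].mean().unstack()
    means.plot.bar()
    plt.ylabel("mean generosity")
    plt.show()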
As much as I like spreadsheets and other general-purpose numerical analysis tools, I wish there were a restricted, more formally verifiable subset of their functionality, to prevent issues like this and Reinhart–Rogoff (see: <a href="https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Methodological_flaws" rel="nofollow">https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...</a> ) from being a common occurrence.
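Plain analysis scripts are one partial substitute: the aggregation below covers every row by construction, and the coverage is assertable after the fact, unlike a hand-dragged AVERAGE range, which is roughly how the Reinhart–Rogoff rows went missing. A sketch with made-up data:

    import pandas as pd

    # Hypothetical stand-in for the input: one row per country-year.
    df = pd.DataFrame({
        "country": ["AUS", "AUS", "BEL", "BEL", "DNK"],
        "growth":  [3.1, 2.4, 1.8, 2.0, 1.5],
    })

    # groupby cannot silently skip rows the way a spreadsheet range can,
    # and the assertion makes the coverage explicit and checkable.
    means = df.groupby("country")["growth"].mean()
    assert set(means.index) == set(df["country"]), "countries were dropped"
    print(means)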
If a researcher does not release the code used in their research, their papers should not be trusted. We wouldn't trust a paper where the researchers hid their experimental methods or analysis - it wouldn't even be allowed to be published. But if their experimental methods or analysis are done in code, those parts are allowed to be a black box and we're supposed to trust them. If we were willing to just trust them, there wouldn't even be a peer review process, and it certainly wouldn't deserve the imprimatur of 'science'. This has been a problem for years, and I will personally not be the least bit surprised if we eventually see a death toll attached to it.
Can we go back to basic principles and all agree that psychology and social studies are not true sciences anyways?<p>Science is to me something where you extract natural laws that predict phenomena will occur 100% of the time given certain conditions. Physics, chemistry, computer science, most branches of medicine operate this way.<p>A field like psychology that says "well sometimes people will..." or "we found in 60% of cases that..." is not science. It's a comment upon society maybe, but it does not produce broadly repeatable, predictable results.<p>Not true science.
I wonder if a religious education resulting in some positive effects might be akin to the "any diet is effective" thing, i.e. it's not the specific upbringing so much as being exposed to _any_ moral thinking in general.<p>I don't mean to imply that there aren't moral areligious educators, of course, just that it seems likely that a kid gets less discussion of ethics once the temple/church/mosque/whatever is removed.
This is fundamentally one of the many disastrous outcomes of scientific publishing being controlled by greedy conglomerates.<p>For the uninitiated: if you want to publish a scientific paper today, you basically sign over the rights to any publisher that's interested (please, someone, publish me). That publisher will then review the paper in a more often than not mostly undisclosed process and publish it. Everyone knows that to be a highly regarded publisher one must have one's very own typographic formatting: an unreadable font, a weird multi-column layout to prevent accessibility, and tables disregarding standards are a good start. Afterwards, the paper gets published on the publisher's website, which again follows as few agreed-upon standards as possible. The data is of course excluded, and the study itself is a PDF. Also, put up a fat paywall; we don't want those pesky poor people to be scientifically literate. Give a few cents of your $70 fee to the authors; it is not an unethical business you're running here!<p>THIS is the systemic error people seem so happy to ignore because they're neck deep in "social science isn't real science" memes.<p>This prevents interesting new startups for fact-checking or meta-analysis, which are e.g. happening in journalism, because that field has a lot of the things science is sadly lacking.<p>This creates a drift between extremely rich and rather poor countries/unis/humans in scientific ability.<p>This creates a tar pit for scientific progress.<p>This wastes billions in funds on people unaware of each other doing redundant studies (and not referencing/refuting/supporting each other in the process either, of course), because there are literally better search engines for finding Harry Potter fan fiction than there are for finding studies.<p>And finally, this of course allows anything from honest statistical mistakes to snake oil to slip through and do generations' worth of damage, because correcting, fact checking, re-researching, comparing, meta-research, <i>anything</i> is slowed to a crawl.<p>So please stop embracing scientific elitism and gatekeeping, for this is exactly what brought us here in the first place...
Wait a second. If the issue with the analysis is as the article says, that some countries were weighted far higher, that means that in some countries the original conclusion does hold (probably with a small sample size). Perhaps this is because some religions promote generosity and some do not? Would be interesting to look into.
And even though it was corrected and the result retracted, it was already cited by many papers, the media didn't report the retraction, etc. How much of scientific reporting gets skewed by media interests (controversy and sensationalism) and by funding and political interests?<p>Furthermore, this whole field, with its reliance on correlations, p-hacking, data dredging, and more, is what Feynman called cargo cult science:<p><a href="https://en.wikipedia.org/wiki/Multiple_comparisons_problem" rel="nofollow">https://en.wikipedia.org/wiki/Multiple_comparisons_problem</a><p>The best analysis I have seen of the corner we have painted ourselves into:<p><a href="https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/" rel="nofollow">https://slatestarcodex.com/2014/04/28/the-control-group-is-o...</a>
> Decety<p>What an apt name.<p>I wonder about the damage to the public this unintentional deceit will bring...<p>Go science! After all, a retraction is part of discovery.
> In fact, Decety’s paper has continued to be cited in media articles on religion. Just last month two such articles appeared (one on Buzzworthy and one on TruthTheory) citing Decety’s paper that religious children were less generous.<p>“Media articles” is carrying a lot of weight there
My default presumption is that all results from the "social sciences" are false if they contradict my intuition. Whatever these fields are producing, it's not science in the Popperian or practical sense. A ton of policy has been built on the bad research and wishful thinking of the past few decades, and it's going to take a long time to unwind it all.<p>The problem, really, has been accelerating for a while. Something really went off the rails after WWII.
Another, more consequential paper that popped up on HN recently has been retracted as well. The one about ocean warming. The retraction notice is a masterclass in weasel language, worth reading in its own right.<p><a href="https://www.nature.com/articles/s41586-019-1585-5" rel="nofollow">https://www.nature.com/articles/s41586-019-1585-5</a>
Psychology, sociology, and theology (and more -ologies?) were never meant to be sciences. We can blame the Enlightenment for that idea; let's revert them to Renaissance activities.