> Publishing your own weak papers is one thing, but citing other people's weak papers? This seemed implausible...

This is practically required by reviewers and editors. If you wade into a topic area, you need to review the field and explain where you fit in, even though you know full well that many of those key citations are garbage. You basically have to pay homage to the "groundbreakers" who claimed that turf first, even if they did it via fraud. They got there first, got cited by others, and so are now the establishment you operate under.

And making a negative reference to them is not a trivial alternative. For one thing, you need to be certain, not just deeply suspicious of the paper, which adds work; and taking a stand may bring a fight with reviewers that hurts you anyway.
There were some stunning claims being made on Twitter last month based on a recently published study. Instantly skeptical, I dug into the methodology section and found this gem:

"It should be noted that the results cannot be estimated using a physician fixed effect due to a numeric overflow problem in Stata 15 which cannot be overcome without changing the assumptions of the logit model."

... The sad part was they didn't even choose a reasonable model in the first place.
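For context on the quoted complaint: the textbook workaround when a logit with thousands of group dummies won't estimate is a conditional (fixed-effects) logit, which conditions the group intercepts out of the likelihood instead of estimating them. Below is a minimal sketch of that idea, assuming statsmodels >= 0.10 is available; the variable names and simulated data are made up for illustration, not taken from the study in question.

```python
# Sketch: fitting a logit with physician fixed effects without estimating
# one dummy per physician (the brute-force route that tends to blow up).
# Assumes statsmodels >= 0.10; the data below are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_physicians, patients_per = 500, 40
physician = np.repeat(np.arange(n_physicians), patients_per)
treatment = rng.integers(0, 2, size=physician.size)       # hypothetical covariate
phys_effect = rng.normal(0, 1, n_physicians)[physician]    # unobserved physician intercepts
p = 1 / (1 + np.exp(-(0.3 * treatment + phys_effect)))
outcome = rng.binomial(1, p)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment, "physician": physician})

# The conditional logit conditions the physician intercepts out of the
# likelihood, so only the coefficient on `treatment` is estimated.
model = ConditionalLogit(df["outcome"], df[["treatment"]], groups=df["physician"])
print(model.fit().summary())
```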
Is there a similar study done on the physical sciences? I'm getting a bit of a holier-than-thou feeling from this article.

Edit: from all this talk of reproducibility, I wonder what percentage of cutting-edge ML research is reproducible (given the lack of public training sets or of enough compute).
This guy overstates his case somewhat. Consider:

"If the original study says an intervention raises math scores by .5 standard deviations and the replication finds that the effect is .2 standard deviations (though still significant), that is considered a success that vindicates the original study!"

Why the exclamation point here? The replication study isn't magically more accurate than the original study. If the original paper finds a 0.5 standard deviation effect and the replication study finds a 0.2 standard deviation effect, that increases our confidence that a real effect was measured, but there's no reason to believe that the replication study is more accurate than the original study. Maybe the true effect is less than measured, but maybe not. So yes, it should be considered a success.
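One way to make the point concrete is to treat the two estimates symmetrically and pool them by inverse-variance weighting, as a meta-analysis would. A minimal sketch with invented standard errors (the article gives only the point estimates):

```python
# Sketch: combining an original estimate (0.5 SD) and a replication (0.2 SD)
# by inverse-variance weighting. The standard errors are invented for
# illustration; the point is that neither study is privileged.
import numpy as np

effects = np.array([0.5, 0.2])   # original, replication (in SD units)
ses = np.array([0.15, 0.10])     # hypothetical standard errors

weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect ~ {pooled:.2f} SD, SE ~ {pooled_se:.2f}")
# -> roughly 0.29 SD: consistent with a real, nonzero effect somewhere
#    between the two point estimates.
```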
The "social sciences" include a lot. Wrt Sociology, I'd say one problem is the overemphasis on quantitative methods - they try to be as serious as the big boys.<p>The best sociological research I've read was qualitative though. Questionable replicability is of course built-in in this type of research but the research dealt with relevant questions. Most quantitative sociology seems rather irrelevant to me.<p>Another problem is of course that most quantitative sociologists don't have a clue what they are doing. They don't even know the basics of statistics and then use some statistical methods they don't understand. It's some kind of overcompensation, I think. Although, psychologists are even worse in this respect. It's really fun to watch an psychologists torturing SAS.<p>I write this as someone who was originally trained as sociologist and over the years turned into a data scientist.
I've tried to understand this (obviously quite angry/ranty) article and cannot actually figure out what data it has.

It seems to not be based on *actual* replication results, but *predicted* replication results? But then the first chart isn't even predictions from the market, but just the *author's* predictions?

The author clearly has a real hatred for practices in the social sciences. But I don't see any actual *proof* of the magnitude of the problem; the article is mostly just a ton of the author's opinions.

Is there any actual "meat" here that I'm missing? Or is all this just opinions based on further opinions?
I try not to look down on social science; for the most part data is data, as long as you can reason about how it was collected and who by.

The only thing that worries me a little (or a lot sometimes) is that there doesn't seem to be much "bone" for the meat to hang off of - that is, in physics, if your theory doesn't match experiment it's wrong, whereas in social science you're never going to have a (mathematical) theory like that, so you have to start (in effect) guessing. The data is really muddy, but thanks to recent (good) political developments, whatever conclusions can be drawn from it may not be right *in their eyes*. For example, (apparently) merely commenting on the variability hypothesis can get you fired [https://en.wikipedia.org/wiki/Variability_hypothesis#Contemporary_controversies].
Summary tweet thread by the author:
<a href="https://mobile.twitter.com/AlvaroDeMenard/status/1304399437641461760" rel="nofollow">https://mobile.twitter.com/AlvaroDeMenard/status/13043994376...</a>
I am glad this topic is getting attention. There is significant bias in academia in social science even outside flaws in statistical techniques. The field has been weaponized to build foundational support for political stances, and blind institutional trust granted to academia is enabling it. This author mentions the implicit association test (IAT) as an example of a social science farce that is well known to be a farce, and notes that most social science work is undertaken in good faith.

However, the damage has been done, and it doesn't matter if MOST work is done in good faith if the bad work has big impact. As an example, IATs have been used to make claims about unconscious biases and form the academic basis of books like "White Fragility" by Robin DiAngelo. Quillette wrote about problems with White Fragility and IAT as early as 2018 (https://quillette.com/2018/08/24/the-problem-with-white-fragility-theory/), and others continue to write about it even recently in 2020 (https://newdiscourses.com/2020/06/flaws-white-fragility-theory-primer/). However, few people are exposed to these critical analyses, the flaws in the scientific/statistical underpinnings have not mattered, and they have not stopped books like White Fragility from circulating by the millions.

We need a drastic rethink of academia, the incentives within it, and the controls that regulate it to stop the problem. Until then, it's simply not worth taking fields like social science seriously.
Does anyone have links to the Replication Prediction Market results mentioned in the article? That sounds super interesting.

As an amusing nudge, I bet you could do some ML to predict the replicability of a paper (per the author's suggestion that it's laughably easy to predict) and release that as a tool for authors to do some introspection on their experimental design (assuming they're not maliciously publishing junk).
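A toy sketch of what such a tool might look like, assuming scikit-learn and some labelled corpus of abstracts with replication outcomes (e.g., the replication-market data the comment asks about); the example rows and labels below are placeholders, not real data:

```python
# Toy sketch: a bag-of-words classifier over abstracts, trained on labelled
# replication outcomes. The three example rows are placeholders; a real
# version would need a labelled dataset of papers and replication results.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "A large preregistered sample shows a modest effect of X on Y.",
    "A striking interaction effect (p = .049) in a sample of 23 undergraduates.",
    "We replicate prior work on Z across three countries with N = 12,000.",
]
replicated = [1, 0, 1]  # placeholder labels: did the finding replicate?

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, replicated)

# Predicted probability that a new (made-up) abstract would replicate.
print(model.predict_proba(["A surprising effect, p = .048, N = 40"])[:, 1])
```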
> Even if all the statistical, p-hacking, publication bias, etc. issues were fixed, we'd still be left with a ton of ad-hoc hypotheses based, at best, on (WEIRD) folk intuitions.

This is the quiet part which most social scientists, particularly psychologists, don't want to discuss or admit: WEIRD [0] selection bias massively distorts which effects are inherent to humans and which are socially learned. You'll hear people today crowing about how Big Five [1] is globally reproducible, but never explaining *why*, and never questioning whether personality traits are shaped by society; it's hard not to look at them as we look today at Freudians and Jungians, arrogantly wrong about how people think.

[0] https://en.wikipedia.org/wiki/Psychology#WEIRD_bias

[1] https://en.wikipedia.org/wiki/Big_Five_personality_traits
I guess I fall under the field of "Progress Studies", though I think I'm much less concerned with the replication crisis than most.

Most new social science research is wrong. But the research that survives over time will have a higher likelihood of being true. This is because a) it is more likely to have been replicated, b) it's more likely to have been incorporated into prevailing theory, or even better, to have survived a shift in theory, and c) it is more likely to have informed practical applications or policy, with noticeable effect.

Physics and other hard sciences have a quick turnaround from publication to "established knowledge". But good social science is Lindy. So skip all the Malcolm Gladwell books and fancy psych findings, and prioritize findings that are still in use after 10 or 20 years.
It's common to see this topic: what's "wrong" with social science. But there are always some things wrong with every science. If nothing was wrong, there wouldn't be any science left to do.

Social science asks more of us than any other science. Physics demands that we respect electricity and not increase the infrared opacity of the atmosphere. Chemistry requires that we not emit sulfur and nitrogen compounds into the air. But the social sciences not infrequently call for the *restructuring of the whole society*.

This is the "problem" with social science, or more properly, with the relationship between the social sciences and the society at large. When we call for "scientific" politics, it is a relatively small ask from the natural sciences, but it is a *revolution* -- even the social scientists themselves use this word -- when the social sciences are included in the list (Economics is no different). Psychology, as usual, falls somewhere in between.

So the relationship between the social scientists and the politicians may never be as cordial as the relationship between the natural scientists and the politicians. The "physics envy", where social scientists lament that they do not receive the kind of deference that natural scientists do, will have to be tempered by the understanding that the cost of such deference differs widely.

(All of this is ignoring that physics had a 200-year head start.)
> Stupidity: they can't tell which papers will replicate even though it's quite easy.

I am not familiar with this work. What exactly makes a paper predictably replicable?
Of malice vs stupidity, I'm pretty certain it's stupidity. Or more precisely, self-delusion.

The story of Millikan's oil drop experiment replications and also James Randi's (and CSICOP's) battle with pseudo-scientists convince me of this.
Shameless plug with the ten relevant problems I scooped from a very recent literature review: interculturalism, introspection, truth, authenticity, human enhancement, critical thinking, technocracy, privilege, ethics, higher education. Link to free intro: https://www.tenproblems.com/2020/08/01/ten-problems-for-social-sciences-in-the-2020s/
Most people just don't have a clue about what they are doing and have no passion for their research whatsoever. When you have money as the main driver for science, this kind of stuff is exactly what you should expect. There are homeless people and crackheads within a 3 km radius of the majority of social science schools around the world. It's a complete failure and a scam. Scientific development is analogous to social development, and nothing is going to change by appealing to scientists not to cite weak research lmao
Lots of social science is crap, for sure, no arguing about it; dunno how to make it better other than not to do it. Though some of it is interesting if you have the patience for it, e.g. linguistics, psychology and economics. Even things like critical theory are sort of useful - think of it like the abstract algebra of social science: just people pulling apart concepts to see if they can be put back together in another way to create something new. I know a lot of CS researchers and they do shit work and cite each other's excrement; honestly, CS is the sociology of STEM. There, I said it.