The problem I can see right now with the scientific community is that no one pays you to re-implement the same idea just to confirm it is correct. Every professor wants shiny new innovation from her Ph.D. students; no one wants you to test the ideas that have already been published.<p>I'm about to finish my master's thesis. I implemented a couple of ideas collected from multiple papers, and I can say that all of those absolutely stunning results were just intelligently crafted experiments that are not applicable elsewhere.<p>Basically, my whole thesis just shows that some paper was wrong and that its idea is not applicable in another experiment.
It's worth noting that this is talking about research that involves sampling groups of people, or other similar sampling approaches. It is not talking about mathematical results of the kind you often find in computer science, physics, and elsewhere. Nor is it necessarily true of all the physical sciences.<p>It also shouldn't be read as "science is broken and wrong, so therefore my opinion should be considered equally." There is definitely a problem with accuracy in many scientific fields that needs to be addressed, but the baby doesn't need to be thrown out with the bathwater.
This is why one has to read papers in context. You can't just read a paper: you have to then go off and read some reviews that cover the surrounding science. Then read some other papers on the topic of interest. Pay attention to dates on papers, as opinions change with time. Note the disagreements.<p>Reading the scientific literature is reading the moment to moment output of a noisy search algorithm. Any given publication is near meaningless on its own.<p>This business of reading the context for a specific item of research isn't that hard. If as a layperson you feel up to having an opinion on a specific paper, then you are certainly equipped to do more reading in the field.<p>Start with review papers, which tend to be a gentler uphill slope, and then fit other papers into what you see there. Take note of the disagreements between reviews, the different emphasis placed on different aspects of the topic. In most scientific fields review papers are usually pretty good at explicitly covering the unknowns and debates of interest. The subtexts and unwritten stuff, such as funding-driven conflicts of research strategy, take longer to figure out. But one has to start somewhere.<p><a href="https://www.fightaging.org/archives/2009/05/how-to-read-the-output-of-the-scientific-method/" rel="nofollow">https://www.fightaging.org/archives/2009/05/how-to-read-the-...</a><p>"The scientific community doesn't produce an output of nice, neat tablets of truth, pronouncements come down from the mountain. It produces theories that are then backed by varying weights of evidence: a theory with a lot of support stands until deposed by new results. But it's not that neat in practice either. The array of theories presently in the making is a vastly complex and shifting edifice of debate, contradictory research results, and opinion. 
You might compare the output of the scientific community in this sense with the output of a financial market: a staggeringly varied torrent of data that is confusing and overwhelming to the layperson, but which - when considered in aggregate - more clearly shows the way to someone who has learned to read the ticker tape."
Super important paper, the implications of which have been corroborated repeatedly within the most prestigious publications in psychology, social sciences/economics and cancer biology. If you'd like to read more about such issues and actually work on doing something about it, you may want to check out the Reddit community (<a href="https://www.reddit.com/r/metaresearch/" rel="nofollow">https://www.reddit.com/r/metaresearch/</a>) and a recent initiative at Stanford (<a href="http://reproduciblescience.stanford.edu/" rel="nofollow">http://reproduciblescience.stanford.edu/</a>).
The problem has a name: business.<p>If research is public, well funded by government, and universities are public entities, then research is accurate and effective; otherwise it's only a matter of making money quickly and moving on.
This only adds credence to the fact that social surveying is not a science. Most of it is glorified door-to-door sales work. It's not even research.<p>We have to draw a clear line soon to prevent well-intentioned people from being lumped in with shitty science.