> If you want me to read the vast literature, cite me two papers that are exemplars of that literature. I will read them. If these two papers are full of mistakes and bad reasoning, I will feel free to skip the rest of the vast literature. Because if that's the best you can do, I've seen enough.

This is one of the worst ideas I've ever read. Wouldn't you just be cheating yourself?

How is tying your own learning to someone else's ability to find the best papers in any way a smart thing to do? It would be much better to doubt the person who gave you the papers (perhaps they misjudged which two papers were the best) than to dismiss the entire field.
Claude Shannon's paper on information theory [1] is arguably such a paragon. No citations, 50 pages of pure awesome; it spawned a whole research field of its own and is just as valid 70 years on.

[1] A Mathematical Theory of Communication - http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
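To give a flavor of why the paper endures: its central quantity, the entropy H = -Σ p·log2(p), is simple enough to compute in a few lines. A minimal sketch in Python (my own illustration; the function name and examples are not from the paper):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy("aaaa"))         # 0.0 bits: no uncertainty at all
print(shannon_entropy("abab"))         # 1.0 bit per symbol: a fair coin
print(shannon_entropy("hello world"))  # somewhere in between
```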
On the psychology replication crisis:

> Overall, 36% of the replications yielded significant findings (p value below 0.05) compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies.

https://en.wikipedia.org/wiki/Replication_crisis#Psychology_replication_rates

Of course, if you point out that psychology research is strongly biased, the immediate response is that you must be an anti-intellectual. As if the system that produced such misleading results were representative of all intellectual pursuits, and not just a mistake. The same way you get called an anti-intellectual for questioning the postmodern analysis of whatever.

I'm really sick of people claiming that criticizing obviously bad science is "anti-intellectual". When you produce that many useless papers, you are obviously, as a field, lacking an understanding of why science is good and useful in the first place.

I presume that economics has similar problems with replication, and that you can only trust the most basic and obvious of its findings.
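The pattern in that quote (fewer significant replications, roughly halved effect sizes) is exactly what selection on significance predicts. A minimal simulation sketch, where all the numbers (true effect, sample size, study count) are made-up assumptions purely for illustration:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # assumed small true effect (standardized units)
N = 20              # small, underpowered samples
Z_CUTOFF = 1.96     # rough two-sided cutoff for p < 0.05

def run_study() -> tuple[float, bool]:
    """One study: observed mean effect, and whether it is 'significant'."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    z = mean / (1.0 / N**0.5)  # sd known to be 1, so se = 1/sqrt(N)
    return mean, abs(z) > Z_CUTOFF

published = [m for m, sig in (run_study() for _ in range(10_000)) if sig]
print(f"studies that reached significance: {len(published)} of 10000")
print(f"mean published effect: {statistics.fmean(published):.2f} "
      f"vs true effect {TRUE_EFFECT}")
# Selecting on significance inflates the published effects well above
# the true 0.2, so honest replications inevitably look "smaller".
```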
I would agree that "no paper is that good" -- on the first try, or even the second. Just as we constantly refactor code, and constantly re-edit a book, a paper that explains a scientific idea needs constant rewriting to become good. The sad reality is that scholars simply do not do that.

It is understandable, though. Once a paper is published, it is no longer novel, and there is obviously little reward or motivation to write up the same idea again -- just to write it better. ... unless you are actually writing a review paper, but then there seems to be little value unless the review is "complete". A good illustration of an idea needs to highlight the key idea while keeping minor ideas from obscuring it. Therefore, a "complete" review paper rarely makes a good read.

Compared to papers, a good textbook is often a much better read than all the original papers. It is not really because of the size; rather, it is because a good textbook is written from the reader's point of view and focuses on conveying the idea itself (vs. selling the idea). And a good textbook takes many rounds to develop.

There is no reason papers cannot be developed the same way textbooks are: once a ground-breaking paper is published, it should be constantly updated, each new edition reflecting what the author has newly learned and incorporating new developments from the entire community...

Alas, that is not the culture, and there is no incentive to do such a thing.
I assume this relates mostly to economics, or to the humanities (social science, if you insist) in general. These points apply to the sciences too, but are less debilitating in the long run.

*Fifth, most researchers' priors are heavily influenced by some extremely suspicious factors.*

Academics in the humanities identify themselves as "I am an X," where X can be a post-structuralist, rational materialist, classical liberal, or some other broad, hairy intellectual identity. This is bad news for objectivity. Everyone has a dog in the fight.
Well, Watson & Crick (Nature 1953) is that good. It does contain an error (the actual DNA structure is slightly wrong) And Avery (JEM 1944) <a href="http://jem.rupress.org/content/79/2/137" rel="nofollow">http://jem.rupress.org/content/79/2/137</a>) which "proved" that DNA is the molecule of heredity, is also there.<p>But to be qualified to read these papers and appreciate why they are points of quality within a sea of crap? That's hard.
I was sort of on board with the two-paper rule until I got to the arguments about reviews. Maybe in his mind integrating the literature into a cohesive summary (à la meta-analysis) is a novel contribution, but if not, the attitude behind the two-paper rule is part of why science is in such a crisis. The author is right that the interpretation of the literature should be based on an accumulated read, and not on one or two papers (unless they're reviews or meta-analyses).
Allow me to disagree. Chris Okasaki's thesis, "Purely Functional Data Structures", is That Good for the field of functional programming. There are other exemplary papers, like Gödel's incompleteness theorems, Cantor's diagonal argument, Einstein's statement of special relativity, or Satoshi's paper on Bitcoin.

These papers are not exhaustive summaries of a field. But a reader comes away understanding the type of problems the field is devoted to solving, and many of its existing ideas. And I believe each is a paragon of its field.
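For anyone who hasn't read Okasaki: the core idea is that purely functional structures are never mutated; an "update" builds a new version that shares most of the old one. A toy persistent stack in Python (my own sketch of the concept, not code from the thesis):

```python
from typing import Optional, Tuple

# A persistent stack as a chain of immutable (head, tail) cells.
# "Pushing" allocates one new cell; older versions stay intact and shared.
Stack = Optional[Tuple[int, "Stack"]]

def push(stack: Stack, value: int) -> Stack:
    return (value, stack)

def pop(stack: Stack) -> Tuple[int, Stack]:
    head, tail = stack
    return head, tail

empty: Stack = None
s1 = push(push(empty, 1), 2)    # version 1: [2, 1]
s2 = push(s1, 3)                # version 2: [3, 2, 1]; s1 still usable
top, rest = pop(s2)
assert top == 3 and rest is s1  # structural sharing: rest IS s1
```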
Papers aren't meant to be crystallized nuggets of truth; they are progress reports on an ongoing piece of living research. They aren't meant to be infallible.
On a slightly tangential note: how do you guys *read* research papers? Do you go through them word by word, or do you only skim them to get a general understanding?

I try to do the former, but the work seems so boring that I am hardly motivated to keep at it.
It seems like a lot of commenters here are under the impression that the "two paper rule" (if someone says you should read the literature in field X, ask them for the two best papers in that field they can think of, and if those aren't impressive then don't bother looking further) is a proposal of Bryan Caplan, who wrote the OP here.

That is incorrect. The two-paper rule is Noah Smith's, and Bryan Caplan is *disagreeing* with it.

(It's scarcely possible to read any of Caplan's post without realising that; I conclude that many commenters here have not bothered to read the OP before commenting.)
Following the link through to the original proposal, this seems like a misinterpretation of the suggestion. The problem it was supposed to "fix" was people invoking the existence of a "vast literature" to shut down arguments.

From that perspective, the suggestion seems fine. You should be able to dig out two examples that show your field isn't nonsense. I don't think it was meant to be a high bar:

> There are actual examples of vast literatures that contain zero knowledge: Astrology, for instance. People have written so much about astrology that I bet you could spend decades reading what they've written and not even come close to the end. But at the end of the day, the only thing you'd know more about is the mindset of people who write about astrology. Because astrology is total and utter bunk.
><i>The best papers get up to around .20. Again, No Paper Is That Good. If you demur, consider this: In twenty years, will you still hold up the best papers of today as “paragons” or “exemplars” of compelling empirical work?</i><p>If the field is not totally vague, then yes. We can still consider certain papers in physics, or chemistry, or medicine, computer science etc. as exemplary decades, or even centuries, later, even when they deal with empirical work.<p>Soft sciences need not apply.
I suspect this is limited to economics and the social sciences. There is absolutely no way it holds up in fields like math, physics, and the other natural sciences.

People forget that the original name for what we call economics was "political economy". That alone should tell you all you need to know about the dangers of treating the field like a science. If you ever wonder why expert economists can't seem to agree on things that happened 50-100 years ago, or make accurate predictions about the future, the original name is very telling. Wouldn't it be ridiculous if we were still debating the validity of F = ma or E = mc²? Wouldn't it be crazy for someone to claim general relativity is just flat-out wrong, even though GPS systems would not work properly if we didn't accept it?

Why is nothing remotely approaching reasonable standards of evidence applied in the social sciences before acceptance?
Bitcoin's paper was that good.

Simple.

Easy to read.

Easy to replicate (a proof-of-concept in Python is easy).

And it introduced a key recent innovation (the blockchain).
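To back up the "easy to replicate" claim, here is a minimal sketch of the core chain-of-hashes-plus-proof-of-work idea in Python. It's a toy illustrating only the data structure; the paper's networking, transactions, and difficulty adjustment are all omitted, and the field names are my own:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash: str, data: str, difficulty: int = 4) -> dict:
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    block = {"prev": prev_hash, "data": data, "time": time.time(), "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Each block commits to the previous block's hash, forming the chain;
# tampering with any block invalidates every block after it.
genesis = mine("0" * 64, "genesis")
b1 = mine(block_hash(genesis), "Alice pays Bob 1 coin")
b2 = mine(block_hash(b1), "Bob pays Carol 1 coin")

assert b2["prev"] == block_hash(b1)  # chain integrity check
print(block_hash(b2))
```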