I think they should have said more about the trust and quality-control issues with journals (especially Elsevier, which has hundreds of obscure journals). I'm just a layperson ... but in my brief experience working with academic researchers during my undergrad days (specifically in data mining / machine learning), I was surprised to learn that they did not trust research simply because it was published in a journal - they only trusted people they knew, people they could speak with through informal channels to get honest opinions. In data mining, the pattern of most papers is "here is a new mash-up of algorithms A, B and C, with modifications X and Y, and here are some prediction accuracy benchmarks that prove the supposed usefulness of our supposedly new method." Needless to say, the actual code and test data sets are rarely, if ever, shared. I recall coming across an outrageous number of data mining papers, appearing in Elsevier journals, which claimed to be able to predict stock markets (!!!), written by people who had no real understanding of finance and apparently did not grasp how unlikely it is to find exploitable predictive patterns in liquid financial markets.<p>Another thing missing from this discussion is the style in which scientific papers are written. Invariably, published scientific papers are unhelpfully dense and terse, difficult to understand, full of needlessly obfuscated mathematical notation, and in general severely lacking in clarity. There is a stark contrast between the style of these formal papers and the style in which real research is actually shared and understood - through talks, presentations, teaching, textbooks, consultations, etc., where you have some hope of efficiently comprehending what the author is trying to present.
I suspect that this obfuscation in papers is driven by publishers and referees who impose a rigid house style, combined with researchers who think that the less comprehensible their papers are to a wide audience, the smarter and more credible they will appear on the surface.<p>Another problem is that one can often identify small groups of researchers who publish papers on the same topic and cite each other's and their own papers, while nobody outside their little bubble cites their research. I think this phenomenon is due to a combination of the aforementioned lack of trust, lack of academic honesty, lack of transparency and deliberate lack of clarity.<p>One idea I had is that the Internet could be used to create public networks of trust, so that researchers can identify other researchers as trusted authorities on specific topics. Academic communities already have these implicit networks of trust, but to an outsider it is very difficult to figure out who the leading innovators are on some obscure topic. A trust network, combined with citation data, could provide a graph that would serve both as a useful research tool and as a kind of "GitHub" for researchers to build prestige and advance their positions.<p>Another tool for escaping the publishing doldrums is standard benchmarks. In data mining and machine learning, for example, there are some standard data sets and performance measurements, so that anyone who claims to have a better statistical predictor can test it against the existing data sets and compare results against other approaches. There are similar benchmarks in computer science for database query performance, such as the TPC suites. I think there should be more of these.<p>The real difficulty with all of this is incentivization, as the article points out. I think it goes beyond the issue of for-profit publishing companies and funders.
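To make the standard-benchmark idea above concrete, here is a toy sketch: a fixed dataset and a fixed accuracy metric, so any two "methods" can be compared under identical conditions. The dataset and both predictors are invented for illustration, not taken from any real benchmark.

```python
# Toy illustration of the "standard benchmark" idea: every method is
# scored on the same fixed dataset with the same metric, so results
# are directly comparable. Data and predictors are made up.

# A tiny fixed benchmark: (feature, label) pairs.
BENCHMARK = [(0.1, 0), (0.4, 0), (0.45, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

def accuracy(predict, data):
    """Fraction of examples the predictor labels correctly."""
    return sum(predict(x) == y for x, y in data) / len(data)

# Two competing "methods", evaluated under identical conditions.
def threshold_half(x):
    return 1 if x >= 0.5 else 0

def always_zero(x):
    return 0

scores = {
    "threshold_half": accuracy(threshold_half, BENCHMARK),
    "always_zero": accuracy(always_zero, BENCHMARK),
}
```

The point is not the predictors themselves but that the dataset and metric are public and fixed, so a claimed improvement can be checked by anyone.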
I suspect that there is a large contingent of researchers who are secretly "hacks" and who don't WANT the bright spotlight of transparency to shine on them, because they would be exposed and unable to sustain academic careers and tenures built on publishing worthless papers in obscure journals for "bubble communities." One example of a bubble community is "Fuzzy Logic," which has proven to be intellectually unsound and logically inconsistent, but which continues to fuel academic publishing careers, facilitated by companies like Elsevier, which maintain obscure, wacky journals with a for-profit motive. I think the article is entirely appropriate in describing academic publishing as "fraud-lite." Personally, I was permanently turned off from the idea of an academic career after seeing "how the sausage is made" and how worthless and suspect so many published papers are.
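The trust-network idea I mentioned could be sketched as a graph problem: researchers publicly endorse each other on a topic, and a PageRank-style iteration turns the endorsement graph into authority scores. This is a minimal pure-Python sketch with an invented toy graph; real systems would need to handle gaming, topic scoping, and much larger graphs.

```python
# Sketch of a "public trust network": endorsements form a directed
# graph, and a PageRank-like iteration yields authority scores.
# The researchers and endorsements here are entirely hypothetical.

ENDORSES = {
    "alice": ["carol"],
    "bob": ["carol", "alice"],
    "carol": ["alice"],
    "dave": [],  # endorses nobody; nobody endorses dave
}

def trust_scores(graph, damping=0.85, iterations=50):
    """PageRank-style scores over an endorsement graph."""
    nodes = list(graph)
    n = len(nodes)
    score = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                # Split v's current score among those it endorses.
                share = damping * score[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:
                # Dangling node: redistribute its score evenly.
                for w in nodes:
                    new[w] += damping * score[v] / n
        score = new
    return score

scores = trust_scores(ENDORSES)
# carol is endorsed by both alice and bob, so she should outrank
# dave, whom nobody endorses.
```

Combining scores like these with citation data is what could make the "who are the real authorities on this obscure topic" question answerable to outsiders.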