> Increasingly it will be seen as perverse to submit a paper to a journal and wait 12 months for comments from two scientists, instead of sharing it on a platform like Academia.edu and getting comments from hundreds of scientists in two weeks.<p>This seems absurdly far-fetched to me. It is easy and common for mathematicians to share their work (e.g. on the arXiv), and I consider myself fortunate to get two or three good comments, let alone hundreds. (Of course, this is a claim made by TC, not by academia.edu.) Whatever the virtues of academia.edu or any other technological solution, I do not expect it to multiply the audience for my work by an order of magnitude.<p>By way of comparison, check out Terry Tao's math blog. His reputation, his expositional ability, and the size and enthusiasm of his audience are surpassed by no one. And of course, the blog (and the ability to comment) is freely available to everyone with a web browser.<p><a href="http://terrytao.wordpress.com/" rel="nofollow">http://terrytao.wordpress.com/</a><p>Look at the comment counts. For many posts, they are in the single digits. And they stay this low even though Tao personally responds to many of the comments and questions.<p>I suggest that this is likely to be an upper bound for responses to math writing posted anywhere, in any format, with or without restrictions on who can read, write, or respond to what's posted.<p>I don't think peer review is broken, at least not in math. It can be annoying to do (which is why it takes so long), but we regard it as an obligation and that's why we do it.<p>The issue, in my mind, is to separate peer review from "publishing", or more to the point, from dissemination. The reason this problem is so thorny and obnoxious (again, in my mind) is that in principle it is trivial.
I'm missing the leap here:<p>> <i>The taxpayer ends up paying twice for the same research: once to fund it and a second time to read it. The heart of the problem lies in the reputation system, which encourages scientists to put their work behind paywalls. The way out of this mess is to build new reputation metrics.</i><p>The heart of the problem simply lies in closed-access journals, and a simpler solution is to move to open-access journals, which is already happening. Reputation is not an insurmountable barrier to this. In my field (artificial intelligence) the top two journals are now open-access. No fancy new metrics or removal of the concept of journal needed; problem solved.<p>I view the solutions proposed here as a bit more problematic. Download counts in particular are likely to just reward linkbaity research, rather than quality research, and exacerbate the already problematic race to put out misleading, hyped-up press releases. Citation counts are liable to gaming as well, and are commonly gamed both by citation rings and by people who consciously choose the subjects they publish on in an ADD-ish, citation-maximizing way, jumping in and out of hot areas to drop off a paper that'll turn up in searches. When it comes to judging the quality of AI research, I have a lot more trust in the editorial process of open-access journals like <i>JMLR</i> or <i>JAIR</i> than I do in gimmicky new metrics that try to reconceptualize scientific publishing as gamification, with all the baggage gamification brings (most notably that it is not merely <i>open to</i>, but positively <i>encourages</i>, treating it as a points-grabbing system to be gamed).<p>I don't have much trust in the supposedly "open" motives of this new batch of para-academic for-profit companies, either. Notice how academia.edu won't even let you download the PDFs of articles without registering for an account. I would guess the real purpose of these metrics is to set themselves up as new scientific gatekeepers in one form or another: to transition from a journal-oriented publication system to an "academic marketplace" oriented system where they own the marketplace. I have a lot more trust in the by-scientists, for-scientists model of <i>JMLR</i>.
The article mentions Stack Overflow reputation. I've always seen it as an extremely bad measure of reputation. In practice, it only measures a few things, none of which are very relevant:<p>- The length of time somebody has been using SO.<p>- That person's ability (which usually amounts to just having lots of free time) to repeatedly answer the very basic jQuery or .NET questions that are asked time and time again by new users.<p>- The number of other people who will blindly upvote an answer after merely seeing a picture of, say, Jon Skeet's face.<p>None of those are indicative of true knowledge, experience, or natural talent. They're more a measure of popularity than they are of reputation.
<i>The business models that will emerge in science will be as diverse as the ones on the web at large. There will be advertising businesses; freemium models; and enterprise sales models.</i><p>I'm not sure who the author thinks the customers for these services will be. Perhaps universities will pay for access, but I guarantee you faculty and students will not. I also can't imagine advertisers paying much to target academics, a small and not particularly unique demographic. In short, I just don't see how these sites hope to generate any revenue. Academia is an incredibly difficult market to get money out of, and even if you're going to sell at the university level you're talking about a massive sales effort that is going to take years to yield results (think Blackboard). As others have said here, journals are going to end up being open-access and free. Many of them will probably end up being run by well-endowed university presses (e.g. MIT Press).
The scientific publication business model is a work of "genius". In a nutshell:<p>- Outsource content production, get content for free<p>- Outsource the review process, get quality assurance for free<p>- Slap a brand name on it and charge (many customers being opted in on autopay plans)<p>I'm looking forward to one of the disruptors succeeding. I think it is reasonable to demand that any tax-funded research be openly available so that society can benefit.
I see the article kindly submitted here is a guest post by Richard Price, founder and CEO of the Academia.edu website. I have signed up for Academia.edu (and the confusingly similar site ResearchGate, run by a different group of founders), but so far haven't seen a lot of activity on these new sites.<p>The point is well taken that there needs to be reform in how scientists develop their reputation as researchers. Right now, reviewing submissions to scientific journals is anonymous and not well rewarded. Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal),<p>Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020<p><a href="http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00020/full" rel="nofollow">http://www.frontiersin.org/Computational_Neuroscience/10.338...</a><p>suggests new procedures for making the peer-review process in scientific publishing more rewarding and more reliable too. Wicherts does a lot of research on this issue to try to reduce the number of dubious publications in his main discipline, the psychology of human intelligence.<p>"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."
There are a number of aspects of publishing, and perhaps of reputation, that academics would like to reboot. We are frustrated with e.g. Elsevier, and there is a big boycott on. Presumably this boycott, and the reasons for it, will extend to Mendeley after it becomes part of Elsevier. Given that all these startups are probably heading for similar exits (if successful), this is just more of the same. When we reboot publishing, it will be in a way we control, not with Elsevier 2.0.
"A few years ago, Google Scholar started displaying inbound citation counts for papers... . Scientists have started to see these inbound citation counts as a way to demonstrate the impact of their work..."<p>Let's note that article citation counts have been commonly available at research libraries for about fifty years in the Science Citation Index. (Disclaimer/disclosure: I do not speak for the business that produces the Science Citation Index. That business is my current employer.)
I disagree with the author of this piece about a major premise underlying his thesis: that journals ultimately have no real contribution to the author or community that can't be done away with.<p>Top-tier journals are top-tier journals not because of marketing but because of their standards. Their standards are high because they enforce rigorous quality control (reviews) and solicit the best, most groundbreaking work in the field. They bring on the best editorial staff and set the field's direction rather than following it.<p>arXiv, for example, is not the same as PNAS. Anyone can put anything on arXiv; there is no quality control, no editorial board, no selection process.<p>I think that if you tried to demolish that, you'd wind up with a replacement, not a whole new model.<p>To outsiders this all seems capricious, fickle, arbitrary, and anachronistic. In fact, it's important because science - the forward progression of human knowledge and understanding - relies on ensuring that valid, repeatable results get published, not falsehoods (e.g. "vaccines cause autism" and such tripe) or outright plagiarisms. Journals work because they provide this. Top-tier journals are recognized as such because they publish the best quality work, which attracts more top-quality work.
There's one other function of journals that few people seem to be bringing up or addressing: that of filtering the flood of research. Hundreds of papers are published each week in "biology" across dozens of journals, but in most weeks, 0-1 of those will be interesting to me. By scanning the titles in Science, Nature, and Cell, I'm much likelier to find the really cool stuff than if I were just given a list of all papers with a given tag.<p>I'm not saying that this recommendation problem is intrinsically hard, just that it doesn't seem to be getting as much play as the other aspects of the open science revolution.
Richard's proposition is basically that metrics should be created and used because it's always better to have more data. This is only true if the data is good. The biggest problem I can see is that metrics are easy to fake. That's fine on sites like Stack Overflow, where the stakes are very low, but when metrics are accorded extremely high value, as is being proposed here, you immediately create a huge incentive to pollute the data.<p>The only way out of this is to create some kind of quality assurance of the metrics themselves. To me that seems like a monster problem that no startup can possibly handle.<p>If they restrict their scope a bit, rather than trying to liberate all of academia from the publishers at once, there are probably ways to add value.
> <i>and the primary reputation metric in science is being published in prestigious journals, such as Nature, Science, and The Lancet.</i><p>Absolutely not. The primary metric is citations.<p>Where do people get these ideas?
Richard, do you have any plans to build tools that make writing scientific articles easier? Right now, the tools and output are too rooted in the old ways of dead-tree publishing - after all, articles are still called "papers". The whole format is based on printing.<p>Since you're trying to get scientists to publish new types of media, don't you think you need to provide software that makes creating such media easier?<p>You can't move from books to blogs to tweets if all you have is a typewriter.
A more straightforward solution would be to pass this bill [1], which would require that taxpayer-funded research be made publicly available. Once this becomes mandatory, the publication/reputation system will be forced to adjust accordingly (and many journals may not survive the transition).<p>[1] <a href="http://www.govtrack.us/congress/bills/112/hr4004" rel="nofollow">http://www.govtrack.us/congress/bills/112/hr4004</a>
"Disrupt[ing] the scientific journal industry" is a massively ambitious goal. Good luck! I'd really like to know more about startups that are trying a less ambitious first step; something like "wordpress for marine biologists who want to start an open-access journal" or "reddit for geneticists." The issues in different disciplines seem different enough that <i>intimate</i> domain knowledge is almost mandatory and something that starts with the idea "I want to talk to these people about this subject in this way" could work out better than "kill the journals." Since it's fun to quote PG:<p>> The way to get startup ideas is not to try to think of startup ideas. It's to look for problems, preferably problems you have yourself.
I guess "I want to read other people's research for free" counts as a problem, but I'd actually be surprised if it were literally a problem that the founders of this startup (and the others mentioned in the article) were facing.<p>I mention this in a comment below [2], but Economics is <i>extremely</i> open by a lot of standards (maybe because of the policy relevance of a lot of the research, but that's speculation on my part). There is a large blogging community that includes very accomplished researchers, most working papers are available online well before publication (and will typically reflect at least one round of referee comments), there are comprehensive citation-based rankings, etc. Anyone interested in this stuff should check out <a href="http://repec.org" rel="nofollow">http://repec.org</a>; the HTML is dated, but it seems like an established unwalled-garden version of what this article, and others, describe (one that uses email and RSS feeds rather than direct messages and social-network-type friends; decide for yourself which you prefer).<p>Economists still publish any important research in peer-reviewed publications, and most of them are closed-access. So I find it unlikely that just the right infrastructure and "metrics" are going to kill peer-reviewed publication. I think it's <i>possible</i> that some set of tools could kill the crappy scientific journals in one or two fields, and certainly could kill all of the crappy closed-access journals in those fields, and then iterations and incremental improvements could lead to something that worked across fields and worked for higher-quality publications.<p>[1] <a href="http://paulgraham.com/startupideas.html" rel="nofollow">http://paulgraham.com/startupideas.html</a>
[2] <a href="https://news.ycombinator.com/item?id=5160633" rel="nofollow">https://news.ycombinator.com/item?id=5160633</a>
Seems like Pagerank would be an ideal reputation metric, following citations in papers instead of links in web pages. Except it's patented until 2018 or so.
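For anyone curious what "PageRank over citations" would look like concretely, here is a minimal sketch. It is not anyone's actual implementation: the citation graph, the function name, and the parameter values are invented for illustration, and the damping factor is just the common textbook default. Rank flows from a citing paper to the papers it cites, so heavily cited papers (and papers cited by influential papers) score higher.

```python
# Hypothetical sketch: PageRank applied to a citation graph instead of web links.
# The tiny graph, the function name, and the parameters are made up for this example.

def citation_pagerank(cites, damping=0.85, iterations=50):
    """cites maps each paper to the list of papers it cites (all keys of the dict)."""
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for paper, references in cites.items():
            if references:
                # A citation passes an equal share of the citing paper's rank.
                share = rank[paper] / len(references)
                for cited in references:
                    new_rank[cited] += damping * share
            else:
                # Papers that cite nothing ("dangling nodes") spread their rank evenly.
                for p in papers:
                    new_rank[p] += damping * rank[paper] / n
        rank = new_rank
    return rank

if __name__ == "__main__":
    citations = {
        "paper_a": ["paper_b", "paper_c"],
        "paper_b": ["paper_c"],
        "paper_c": [],
        "paper_d": ["paper_b", "paper_c"],
    }
    for paper, score in sorted(citation_pagerank(citations).items(),
                               key=lambda kv: -kv[1]):
        print(f"{paper}: {score:.3f}")
```

In this toy graph the scores sum to roughly 1, and the most-cited paper (paper_c) ends up with the highest rank, which is the behavior the comment above is alluding to.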