Yes, there is a time lag problem. However, instant distribution has been around for a long time (in the case of arXiv.org, since 1991). It's widely accepted in the physics community, but it hasn't gained much traction in most other scientific disciplines. I think there are two reasons for this: the chicken-and-egg problem, and the peer review problem.<p>The chicken-and-egg problem is that no one in these disciplines publishes unreviewed manuscripts because no one reads them. The corollary is that if you do something interesting and someone happens to read it, take your idea, and publish first, then as far as credit goes, you're fucked. This happens with any form of public presentation of ideas, not all that often but often enough that every scientist knows someone it has happened to. If you just sank a year of your life into a project, you want to make damn sure you get credit for it. At present, instant distribution is too risky. If the profile of instant distribution rises to the point where a manuscript is widely read enough to be acknowledged as the source of an idea, scientists in less competitive areas may become more open to it.<p>The bigger issue, I think, is that scientists actually appreciate peer review. Peer review ensures both quality and fairness in research. If I read a paper in a high-impact journal, I generally believe I can trust the results regardless of who wrote it. By contrast, any reputation-based metric will be strongly colored by the reputation of the lab the paper comes from. (I have a hunch this is already true of citation metrics.) Replacing peer review with reputation-based metrics may mean research gets out there faster, but it may also mean that a lot of valuable research gets ignored. The current system still sucks, and the replacement may suck more. Turning a paper into a startup that succeeds or fails depending on how well a scientist can market his or her findings would absolutely suck ass.
IMHO, scientific funding is already too concentrated in the hands of established labs, and these labs are often too large to make effective use of their personnel. Reputation-based metrics would only exacerbate this problem. They would also sow confusion in the popular press, which is already fairly bad at triaging important and unimportant scientific results. This matters far more in biomedical science than in theoretical physics, because the former has direct bearing on people's lives.<p>On top of this, citation metrics are simply not peer review. In his previous article, Richard Price pointed out that researchers need to spend a lot of time performing peer review. This is absolutely how it should be. Researchers should spend hours poring over new papers, suggesting improvements to the authors, and ultimately ensuring that whatever makes it into press is as high quality as possible. IMHO, the easiest way to get quality research out faster is to encourage journals to set shorter peer review deadlines and encourage researchers to meet them, not to throw out the entire system.<p>OTOH, I think open sharing of data sets among researchers would massively enhance scientific progress, and it has a reasonable chance of happening because the push is coming from funding agencies, not startups. As a scientist, the idea of being able to ask my own questions of other people's data excites me far more than being able to read their papers before release.