
The Future of Science

60 points by RichardPrice, about 13 years ago

12 comments

nkoren, about 13 years ago

On a related note, some years ago I read an academic paper -- alas, it was printed on a dead tree, and I can't find a link to a digital version of it -- which pointed out that the rate at which papers were cited was driven primarily by the rate at which papers were cited. This is not as much of a tautology as it sounds.

Think about the process of writing a paper: you do some keyword searches for recent articles on related subjects. You then look at the bibliographies of those articles, pick out whatever looks relevant to your topic, look at the bibliographies of *those* articles, and so on. What this means is that apart from your initial keyword search, the primary criterion for including an article in your research is: "has it been cited already?" Relevance is merely a secondary filter.

The paper pointed out the effects of this phenomenon: the vast majority of published scientific papers are *never* cited again; a moderate number are cited only a few times; and the remaining few -- having reached a bibliographic critical mass -- are cited thousands of times. The authors made a strong case that this is not a good reflection of the quality of the research. In many cases, reaching bibliographic critical mass came down to the almost random chance of acquiring those first few citations. The authors provided several examples of important scientific ideas that were lost for decades, arguably because they had not attracted a critical mass of citations in the years immediately after publication.

In other words, humans suck at PageRank.

Anyhow, it occurred to me that this is a problem which could be solved with technology. Imagine an online word processor which -- in a sidebar -- suggests potentially related articles from ArXiv and Google Scholar. This would be based not on crawling bibliographies, but rather on semantic analysis of the adjacent paragraphs.

I think this would create some real benefits. It would remove much of the problem of citation bias, ensuring that important ideas aren't lost and that prior research isn't unwittingly duplicated. Wish I had time to implement something like this!
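To make the sidebar idea concrete, here is a minimal sketch of the matching step, assuming the candidate abstracts have already been fetched from arXiv or Google Scholar. Plain TF-IDF similarity stands in for whatever semantic analysis a real tool would use, and the function and variable names are made up for illustration:

    # Sketch of the "related-work sidebar": rank candidate abstracts by
    # textual similarity to the paragraph currently being written.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def suggest_citations(paragraph, candidate_abstracts, top_n=3):
        """Return (index, score) pairs for the top_n abstracts most similar to the paragraph."""
        vectorizer = TfidfVectorizer(stop_words="english")
        # Fit one shared vocabulary over the paragraph and all candidates.
        matrix = vectorizer.fit_transform([paragraph] + candidate_abstracts)
        scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        ranked = scores.argsort()[::-1][:top_n]
        return [(int(i), float(scores[i])) for i in ranked]

A real implementation would swap TF-IDF for proper semantic embeddings and pull candidates from the arXiv API, but the ranking loop would look roughly the same.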
simonster, about 13 years ago

Yes, there is a time lag problem. However, instant distribution has been around for a long time (in the case of arXiv.org, since 1991). It's widely accepted in the physics community, but it hasn't gained much traction in most other scientific disciplines. I think there are two reasons for this: the chicken-and-egg problem, and the peer review problem.

The chicken-and-egg problem is that no one in these disciplines publishes unreviewed manuscripts because no one reads them. The corollary is that if you do something interesting and someone happens to read it, take your idea, and publish first, then as far as credit goes, you're fucked. This happens with any form of public presentation of ideas -- not all that often, but often enough that every scientist knows someone it has happened to. If you just sank a year of your life into a project, you want to make damn sure you're going to get credit for it. At present, instant distribution is too risky. If the profile of instant distribution can rise to the point where a manuscript is sufficiently widely read to be acknowledged as the source of an idea, scientists in less competitive areas may become more open to it.

The bigger issue, I think, is that scientists actually appreciate peer review. Peer review ensures both quality and fairness in research. If I read a paper in a high-impact journal, I generally believe I can trust the results regardless of who wrote it. By contrast, any reputation-based metric will be strongly colored by the reputation of the lab from which the paper originates. (I have a hunch that this is already true of citation metrics.) Replacing peer review with reputation-based metrics may mean research gets out there faster, but it may also mean that a lot of valuable research gets ignored. This still sucks, and it may suck more. Turning a paper into a startup that may succeed or fail depending on how well a scientist can market his or her findings would absolutely suck. IMHO, scientific funding is already too concentrated in the hands of established labs, and these labs are often too large to make effective use of their personnel. Reputation-based metrics would only contribute to this problem. They would also lead to confusion in the popular press, which is already somewhat incapable of triaging important and unimportant scientific results. This is a much bigger deal in biomedical science than in theoretical physics, because the former has direct bearing on individuals' lives.

On top of this, citation metrics are simply not peer review. In his previous article, Richard Price pointed out that researchers need to spend a lot of time performing peer review. This is absolutely the way it should be. Researchers should spend hours poring over new papers, suggest ways of improving them to the authors, and ultimately ensure that whatever makes it into print is as high quality as possible. IMHO, the easiest way to get quality research out faster is to encourage journals to set shorter peer review deadlines and encourage researchers to meet them, not to throw away the entire system.

OTOH, I think open sharing of data sets among researchers will massively enhance scientific progress, and it has a reasonable chance of happening because the push is coming from funding agencies, not startups. As a scientist, the idea of being able to ask my own questions with other people's data gets me far more excited than being able to read their papers before release.
3am, about 13 years ago

An admirable cause, but the author doesn't do themselves any favors by dramatically overstating the role of publication in knowledge sharing (informal channels and conferences exist; publication serves more of a recognition purpose), and with somewhat offensive, unsupported claims like:

"The stakes are high. If these inefficiencies can be removed, science would accelerate tremendously. A faster science would lead to faster innovation in medicine and technology. Cancer could be cured 2-3 years sooner than it otherwise would be, which would save millions of lives."
reasonattlm, about 13 years ago

This is a time for revolution in the methods and funding of science, long overdue and enabled by the internet. It will be a mix of removing barriers to entry, blurring the priesthood at the edges, publishing data openly and iteratively, and drawing crowdfunding directly from interested groups of the public rather than just talking to the traditional funding bodies.

Astronomy has long been heading in this direction, actually -- it's a leading indicator for where fields like medicine and biotechnology are going. People can today do useful and novel life science work for a few tens of thousands of dollars, and open biotechnology groups are starting to formalize (such as biocurious in the Bay Area).

There is a lot of good science, and good application of science, that can be parallelized, broken up into small fragments, and distributed among collaborative communities. The SENS Foundation's discovery process for finding bacterial species that might help in attacking the age-related buildup of lipofuscin, for example: cheap, and could be very parallel. In this, these forms of work are much like software development -- consider how that has shifted in the past few decades from varied enclosed towers to the open market squares below.

This greater process is very important to all of us, as it is necessary to speed up progress in fields with great potential, such as biotechnology. Without a great opening of funding, data, and the methodologies of getting the work done, only a fraction of what could be done will be done within our lifetimes.
timdellinger, about 13 years ago

The problem with distributed, grassroots peer review is that you get poor-quality reviewers. The current structure is slow and very "old media", but it is this way because it's the only way to guarantee quality peer reviews.

If journals cease to exist and a new publish-it-anywhere-then-publicize-it paradigm emerges, along with some associated metrics (kinda sorta like Reddit), then I predict that conference presentations will become the new metric of success. They have gatekeepers, and scarcity due to limited bandwidth (i.e. there are a limited number of time slots available). The whole journal publishing infrastructure will just shift over to conferences... along with the ecosystem of for-profit vs. trade-group publishers, etc., and the Slowness and Single Mode of Publication problems that the OP describes.
thisisnotmyname, about 13 years ago

"The norms don’t encourage the sharing of an interactive, full-color, 3 dimensional model of the protein, even if that would be a more suitable media format for the kind of knowledge that is being shared."

This is simply wrong -- when you solve a protein structure, it is mandatory that you submit it to the PDB (e.g. http://www.pdb.org/pdb/101/motm.do?momID=148), and nearly every journal I read has both color figures and extensive online supplementary materials.
archgoon, about 13 years ago

No 3D models for new proteins?

The protein databank exists precisely for that reason.

http://www.rcsb.org/pdb/home/home.do
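For anyone who hasn't poked at it, pulling one of those deposited structures down takes only a couple of lines. This is just an illustrative sketch -- the download endpoint and the example ID (4HHB, hemoglobin) reflect the current RCSB file service as I understand it, not anything from the article:

    # Fetch a deposited structure from the Protein Data Bank and save it locally.
    import urllib.request

    def fetch_pdb(pdb_id):
        """Download the PDB-format file for a 4-character structure ID."""
        url = f"https://files.rcsb.org/download/{pdb_id.upper()}.pdb"
        out_path = f"{pdb_id.upper()}.pdb"
        urllib.request.urlretrieve(url, out_path)
        return out_path

    # fetch_pdb("4hhb")  # open the result in PyMOL, Jmol, etc. for the interactive 3D view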
wiggins37, about 13 years ago

I'm glad that the author is thinking about ways to increase communication between scientific authors, but some of the statements he made -- specifically regarding "curing cancer 2-3 years sooner" -- make him sound ignorant of some of the challenges facing researchers. Not all scientific knowledge is presented only through journal articles. As others have already mentioned, conferences with "poster presentations" are pretty common in medicine for discussing ideas before the paper comes out. In addition, labs across the country working on similar problems often exchange ideas and substrates by email and mail, respectively. I agree that it would be great if there were a more centralized online repository of information. If anyone has experience with blogs, forums, or websites specifically addressing oncology (that are not just press releases), I would appreciate learning about them.
tel, about 13 years ago

*Imagine if all the stories in your Facebook News Feed were 12 months old. People would be storming the steps of Congress, demanding change.*

To play devil's advocate, the time lag forces your conversations to strive for a higher standard of quality, comprehensiveness, correctness, and context than Facebook updates could ever approach.

Then again, striving for that higher standard also invents pseudoscience, bad statistics, and outright fraud.

In short, I don't think the solution is to replace the paper with something instantaneous. I agree that instantaneous (public) communication could be better used in the academic community, but there's a trend that way already as blog posts begin to signal a certain kind of good advisor.

I especially don't agree that search engines have any business replacing peer review.
stephenhandley, about 13 years ago

Many of the new approaches to science publishing I've seen haven't done enough to directly address the silo problem or provide significantly improved alternatives. I suspect this is primarily because they're trying to create viable scientific publishing businesses of their own. I believe taking a different approach around free, distributed, open source publishing and aggregation software would be better suited to transforming scientific communication into a more open, continuous, efficient, and data-driven process.

More here: http://tldr.person.sh/on-the-future-of-science
kirk21, about 13 years ago

It always strikes me how much time it takes to finish a paper (e.g. fitting the conference template, correcting spelling mistakes, formatting figures, etc.) when you could outsource this; student assistants are an option.

Furthermore, it would be nice to discuss your ideas without having to spend months writing a paper. I guess there is a difference between the alpha and beta sciences here.

Finding relevant conferences is a challenge as well (since I'm still a junior researcher).
mukaiji, about 13 years ago

I recently had a chit-chat with a fifth-year PhD friend of mine in front of the Stanford bookstore. We both did academic research, and we both know all too well the incredible frustration of technology not having fully penetrated academic research. If you'd ever like to work toward making research faster, reply to this. We could think through a couple of things and start cranking out some solutions.