Less research is needed

90 points | by zootar | almost 11 years ago

8 comments

antognini · almost 11 years ago
Last year a professor in my department (astronomy) suggested that he and I write a similar tongue-in-cheek paper to be published on April 1. The idea was to promote a moratorium on new astronomical data for one year. This would give observers time to reduce all the data they've already collected and theorists time to catch up to the observers.

It's facetious, of course, but there was a serious point behind it all. There is a certain tendency in science for a researcher to perform the same study over and over again just using larger or slightly modified data sets simply because that's what he knows how to do. Most of the time these sorts of Version 2.0 studies just reduce the error bars on the result without telling anyone anything new.

Now, of course, sometimes interesting results do come from such things. But much more often interesting results come from studies that attack a radically different problem or use a radically different approach. Science is a manpower-limited, not data-limited endeavor. Scientists have a finite amount of time that they can devote to research and they have to choose what projects to work on. There is still a great deal of low-hanging fruit: projects that require relatively small amounts of funding, relatively small amounts of manpower, and have the potential to yield genuinely new results. There are, for example, some really excellent projects that are being done with a telescope that basically consists of putting a commercial camera lens on a telescope mount [1]. But the difficulty of these sorts of projects is that they require creativity, and that is hard to come by. I'm not faulting anyone, though; I'm not an especially creative researcher myself!

Part of the problem is that grant agencies have a strong bias towards funding incremental science. While they say that they are in favor of funding breakthrough science rather than incremental science, the projects that actually get funded tell a different story. And it's hard to blame them, because no one knows a good way to predict breakthrough results. It's an especially difficult problem to solve for theorists: in order to write a compelling theory proposal you basically have to have solved the problem already!

I've heard a number of solutions to these problems, but they're all about as compelling to me as a year-long data moratorium (which, to be fair, would indeed force the community to become more creative). Hmm, maybe I'll actually write up that paper for April 1, 2015.

[1] http://www.astronomy.ohio-state.edu/~assassin/index.shtml
danieltillett · almost 11 years ago
There are so many issues raised here that it's hard to know where to start.

1. One area where we could cut back is useless data-mined correlation studies that show statistical significance (assuming you ignore that data-mining has occurred) between action X and outcome Y - the sort where a retrospective study of 500,000 nurses finds that eating candied peanuts reduces prostate cancer by 15%. The rule of thumb in any of these studies is that unless the effect is 300% or greater (smoking and lung cancer is 1500%), the result is certain to be garbage (see the sketch after this comment).

2. We need less "novel" research and more replication of past results. The whole scientific system is set up to reward novelty over accuracy. It is so bad that unless I have seen two independent groups repeat something, I doubt it is real, no matter how famous the group.

3. We need to reward being right over being first. Right now groups rush papers out so they don't get scooped, and so they don't check their results as well as they should. I would personally like to remove the date from all scientific papers to stop these silly games - after all, if something is true, does it become less true just because it was published last year rather than last week?

4. We need to reward people who put the effort into replicating work. A simple proposal would be to give publication rights to every group that replicated (or could not replicate) a study in the same journal. If a study is published in Nature and you go to the effort of replicating it, then you should get an automatic Nature publication.

5. Stop scientists from holding on to raw data. In theory scientists are supposed to share their data, but in practice this doesn't happen very often. It should be possible to report groups that don't share data to the funding bodies, and if they are found not to be sharing (or to be sharing only some of the data), then the group should be banned from getting any new funding. It would only take a few bans to stop this immoral data hoarding.
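(Editor's note: a minimal simulation of the data-mining problem described in point 1 above. It tests many exposures that have no real effect on an outcome and counts how many cross the usual p < 0.05 bar by chance alone. All numbers - 500 subjects, 200 candidate exposures, a 10% base rate - are invented purely for illustration, not taken from any of the studies discussed.)

```python
import math
import random

random.seed(0)

N_SUBJECTS = 500
N_EXPOSURES = 200          # e.g. 200 different foods on a questionnaire
BASE_RATE = 0.10           # outcome occurs in 10% of subjects, regardless of exposure

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(N_EXPOSURES):
    exposed = [random.random() < 0.5 for _ in range(N_SUBJECTS)]
    # Outcome is generated independently of exposure: the true effect is zero.
    outcome = [random.random() < BASE_RATE for _ in range(N_SUBJECTS)]
    x1 = sum(o for e, o in zip(exposed, outcome) if e)
    n1 = sum(exposed)
    x2 = sum(o for e, o in zip(exposed, outcome) if not e)
    n2 = N_SUBJECTS - n1
    if two_proportion_p(x1, n1, x2, n2) < 0.05:
        false_positives += 1

print(f"{false_positives} of {N_EXPOSURES} null exposures look 'significant' at p < 0.05")
# Roughly 10 are expected purely by chance -- each one a potential candied-peanuts headline.
```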
ISL · almost 11 years ago
> On my first day in (laboratory) research, I was told that if there is a genuine and important phenomenon to be detected, it will become evident after taking no more than six readings from the instrument.

This is the reverse of a rule of thumb I find useful: that if you wish to measure something and get an approximate picture of your uncertainty, you should measure it 7-8 times.

The author's rule of thumb hinges delicately upon the definition of "readings", in particular upon the reach and precision of a given reading. I can look in the sky on dark nights and see Mercury, but even if I watch it through binoculars for years, I'll never resolve the "Genuine and Important" precession of its orbit [1], the first solid evidence for General Relativity.

Some important phenomena are subtle and rare. You can watch a liter of pure water for ~1500 years before you can expect a single neutrino from the Sun to interact and make a tiny flash of light [2].

[1] http://en.wikipedia.org/wiki/Tests_of_general_relativity#Classical_tests

[2] http://www.atlasobscura.com/places/super-kamiokande
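(Editor's note: a small sketch of the 7-8 readings rule of thumb mentioned above: repeat a measurement a handful of times and use the scatter to estimate both the value and its uncertainty. The "instrument" here is simulated Gaussian noise; the true value and noise level are arbitrary choices for illustration.)

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 9.81      # pretend we are measuring g in m/s^2
NOISE_SD = 0.05        # assumed instrument scatter

# Take 8 readings from the simulated instrument.
readings = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(8)]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)        # sample standard deviation
sem = sd / len(readings) ** 0.5        # standard error of the mean

print(f"readings : {[round(r, 3) for r in readings]}")
print(f"estimate : {mean:.3f} +/- {sem:.3f}")
# With ~8 readings the uncertainty estimate itself is only good to ~25%,
# but that is usually enough to separate a real effect from noise --
# provided, as the comment notes, the effect is within the instrument's reach at all.
```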
adamtj · almost 11 years ago
Perhaps this is one of those white lies we tell to justify doing the right thing. A dishonest means to an honest end.

Is this maybe how researchers publish negative results without having to admit failure? We often complain about the dearth of published negative results. We talk about pre-registering studies and so forth.

It seems better to me for researchers to recast a negative result as an inconclusive positive result "requiring more study" than to not publish it at all. Just because there is a call for further research doesn't mean we have to do it.
Pxtl · almost 11 years ago
> Despite consistent and repeated evidence that electronic patient record systems can be expensive, resource-hungry, failure-prone and unfit for purpose, we need more studies to 'prove' what we know to be the case: that replacing paper with technology will inevitably save money, improve health outcomes, assure safety and empower staff and patients.

Paper-based systems are also failure-prone and unfit for purpose. They just fail in familiar ways that the old guard have accepted as part of the business.
austinjp · almost 11 years ago
Perhaps less primary research is needed, and more secondary research, i.e. more reviews.

It strikes me that making the scientific literature machine-parsable and query-able may help a great deal.

Currently the literature is "scraped" to produce scientific metadata which is stored in databases such as PubMed. Of course, that's back to front. Experimental data, findings, methods, workflows, etc. should be stored in databases of some sort, and the "literature" produced by querying the data.

A pipe-dream, of course. But some steps have been taken towards something approaching this.

https://sdm.lbl.gov/sdmcenter/
http://authors.library.caltech.edu/28168/
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5476716
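(Editor's note: a toy sketch of the "database first, literature second" idea in the comment above: store findings as structured rows and let "review articles" become queries. The schema, field names, and example rows are all invented for illustration; a real system would need far richer provenance such as methods, raw data, and versions.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE findings (
        study       TEXT,
        exposure    TEXT,
        outcome     TEXT,
        effect_size REAL,   -- e.g. relative risk
        p_value     REAL,
        n_subjects  INTEGER
    )
""")

# Made-up example rows, loosely echoing the examples in this thread.
conn.executemany(
    "INSERT INTO findings VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("hypothetical cohort A",      "candied peanuts", "prostate cancer", 0.85, 0.04, 500000),
        ("hypothetical replication A", "candied peanuts", "prostate cancer", 1.01, 0.70, 120000),
        ("hypothetical cohort B",      "smoking",         "lung cancer",     15.0, 1e-9, 40000),
    ],
)

# A "review" as a query: which exposure/outcome pairs have more than one study behind them?
rows = conn.execute("""
    SELECT exposure, outcome, COUNT(*) AS studies, AVG(effect_size) AS mean_effect
    FROM findings
    GROUP BY exposure, outcome
    HAVING COUNT(*) > 1
""").fetchall()

for exposure, outcome, studies, mean_effect in rows:
    print(f"{exposure} -> {outcome}: {studies} studies, mean effect {mean_effect:.2f}")
```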
gwern · almost 11 years ago
I really disliked that post: http://slatestarcodex.com/2014/07/11/links-for-july-2014/#comment-120032
Joof · almost 11 years ago
Maybe smarter research is needed? It seems to me that the problem is similar to the one data science is trying to solve: how do we make sense of all this data?

Of course, more research is still needed in many areas anyway.