
AAAS: Machine learning 'causing science crisis'

136 points by adzicg over 6 years ago

24 comments

kevcampb over 6 years ago
Is machine learning really to blame for the reproducibility crisis? I'm not in academia, but it seemed to me that the problem was entirely present without machine learning being involved.

For example, Amgen reporting that of the 53 landmark cancer papers they reviewed, 47 could not be replicated [1]. I would have assumed that most of them didn't involve 'machine learning'.

[1] https://www.reuters.com/article/us-science-cancer/in-cancer-science-many-discoveries-dont-hold-up-idUSBRE82R12P20120328
harry8 over 6 years ago
Fails to touch on the perverse incentives in academia, "publish or perish" etc. Torturing a dataset to find a p value that a journal will like (or equivalent stat measure) is better for your career than not publishing a paper that will be discredited in time. You have no incentive at all to decide "my results are unconvincing at this point, I'm not going to submit them" and every reason to write them up as a useful contribution to human understanding even if you kind of know, deep down, it really isn't. Especially if you're not senior...
hobofan over 6 years ago
ML is not causing a reproducibility crisis, it just exposes one that is already there.

> If we had an additional dataset would we see the same scientific discovery or principle on the same dataset?

The same holds true for traditional science based on traditional statistics. It just seems that traditional datasets are under less scrutiny of reproducibility and are taken more easily at face value.
willj over 6 years ago
Curious (possibly naive) question: isn't there a fundamental difference between the goals behind creating models with ML vs the "old-fashioned" way? That is, in modern ML applications, you're creating a model with dozens/hundreds of potential variables, without a hypothesis of how they relate or contribute to the target (other than that they might, hence your including them in the modeling process). You're using the model for predictions more than for explainability (there is ongoing work on improving explainability, but it seems kind of post hoc to me). And there's an expectation that you will retrain, or at least tune, the model as its predictive accuracy decays over time.

By contrast, traditionally in science you're coming in with a hypothesis ahead of time about what variables predict what target. The goal is to come up with a model that is consistent with your hypothesis (and possibly some existing theory), which can be applied generally, and which should need no tuning. For example, the very simple model for Beer's Law: absorbance vs concentration. That is a law that will apply in every other circumstance, but if modern ML methods had been applied, the scientist might have chosen the model with a slightly better score but which includes nonsense variables in addition to concentration.

All that to say, it seems to me the problem stems from scientists' lack of hypotheses at the outset of a project, and/or the understandable desire to get the best bang for their buck out of an experiment by measuring dozens of variables at once and hoping the magic of ML can find a hypothesis for them.

Hope that made sense.
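A minimal sketch of that Beer's Law point, assuming NumPy and scikit-learn (the data and variable names are invented purely for illustration): a fit handed ten nonsense variables can score slightly better on the data it was fit to, while the one-variable law tends to hold up better on held-out data.

```python
# Hypothetical Beer's-law-style data: absorbance depends only on concentration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 40
concentration = rng.uniform(0, 1, size=(n, 1))
absorbance = 2.0 * concentration[:, 0] + rng.normal(0, 0.05, size=n)
nonsense = rng.normal(size=(n, 10))           # unrelated "measured" variables

X_simple = concentration
X_kitchen_sink = np.hstack([concentration, nonsense])

half = n // 2                                 # fit on one half, test on the other
for name, X in [("concentration only", X_simple), ("plus nonsense vars", X_kitchen_sink)]:
    model = LinearRegression().fit(X[:half], absorbance[:half])
    print(f"{name}: train R^2 = {model.score(X[:half], absorbance[:half]):.3f}, "
          f"test R^2 = {model.score(X[half:], absorbance[half:]):.3f}")
```

Extra variables can only raise the training score, which is exactly why a score-driven model search drifts away from the simple law.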
paraschopra over 6 years ago
Science works because it posits models first, and then data is sought to confirm or disconfirm them. The benefit of having a model first is that it is much more likely to be general (and hence reproducible).

ML does the complete opposite: data first, and then the model is discovered from the data. It's pretty easy to see why that would lead to non-reproducible models.
fock over 6 years ago
An undergrad to his supervisor in our office, talking about publishing a paper: "I've fixed the data, now the plots look ok." I (an undergrad too) am sitting there thinking: well, you are using ML as a regression black box to plot a line, I can do that too w/o ML if I'm fixing the data. Supervisor: "ok, that's really great." Me, cringing...

I'm not hammering the ML keyword onto my work (and thus am getting considerably less academic attention), but it's nice to hear from people who made it in academia that they support my theory. 50% of the people are just show-offs throwing buzzwords and positivity around while they produce a load of sh...
caramelsuit over 6 years ago
That was a terrible article. I didn't see even one concrete example of their complaint. Blaming the reproducibility crisis on machine learning methods is just a cheap dodge.
bitL over 6 years ago
My impression from the article was that the doctor stating those opinions has no idea how ML works and how to apply it properly, leading to statements like that. The "ML gap" is real, I guess...
boomskats over 6 years ago
Good read. It's also refreshing to see a mainstream article that talks about ML without once mentioning 'AI'.
evrydayhustling over 6 years ago
How does this article manage not to mention a single actual example of ML-related misconceptions?? I'm sure they exist, but there is literally nothing here except some assertions and a plug for a vaguely remedial research line.
dguest over 6 years ago
This is a misleading title. The researcher they quote is

> ... developing the next generation of machine learning and statistical techniques that can ... also report how uncertain their results are and their likely reproducibility.

So she's actually using machine learning to assess systematic uncertainties, i.e. to get better, more reproducible research. Of course, like all forms of automation, people tend to sensationalize progress as a crisis because it makes it too easy to shoot yourself in the foot.

But doing things "the old fashioned way" isn't any better. Early particle physics experiments would get armies of undergrads to classify photographs of collisions in bubble chambers. These results took thousands of researcher-hours to compile, which might seem all fine and dandy, until you realize that there may have been a systematic bias in your classification. Now what do you do?

Thanks to machine learning, there are a lot of things we can do: we can try to remove the bias and retrain the algorithm, or we can train with extreme examples of bias and use that to quote a systematic uncertainty. We can try a multitude of approaches to estimate uncertainties and rerun our entire analysis in a few hours. Good luck doing that with an army of undergrads.
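A minimal sketch of the "train with extreme examples of bias" idea, assuming scikit-learn (the toy dataset, the bias model, and the summary statistic are my own illustrative assumptions, not the actual LHC workflow): train one classifier on nominal labels and another on deliberately biased labels, and quote the spread between their predictions as a systematic uncertainty.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
signal = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # "true" event class

# Nominal labels: a few random labelling mistakes everywhere.
nominal = np.where(rng.random(n) < 0.05, 1 - signal, signal)
# Extreme-bias labels: mistakes concentrated in one region of feature space.
biased = np.where((X[:, 2] > 1.0) & (rng.random(n) < 0.5), 1 - signal, nominal)

clf_nominal = GradientBoostingClassifier(random_state=0).fit(X[:1000], nominal[:1000])
clf_biased = GradientBoostingClassifier(random_state=0).fit(X[:1000], biased[:1000])

p_nominal = clf_nominal.predict_proba(X[1000:])[:, 1]
p_biased = clf_biased.predict_proba(X[1000:])[:, 1]
print(f"mean signal probability (nominal): {p_nominal.mean():.3f}")
print(f"systematic shift from label bias:  {abs(p_nominal.mean() - p_biased.mean()):.3f}")
```

Retraining under a different bias assumption and re-deriving the result takes seconds here, which is the commenter's point about rerunning an entire analysis quickly.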
sgt101 over 6 years ago
Case in point: the LHC Higgs results - how many detections vs how many events? How were the detections determined... The answer is with a large booster [1].

I postulate that out of 12 billion random events it would be remarkable if a booster didn't extract 100 or so items that looked similar to a Higgs detection.

Well, let's give it 20 years and a new generation of PIs who aren't invested in this and have grad students who are keen to find something different in the data.

But ohh... all the data has been thrown away... oh! [2]

[1] https://indico.cern.ch/event/705941/contributions/2897000/attachments/1605280/2546655/mlhepAthens-Feb22-2018.pdf

[2] https://www.forbes.com/sites/startswithabang/2018/09/13/has-the-large-hadron-collider-accidentally-thrown-away-the-evidence-for-new-physics/#d1c86469270a
afabisch over 6 years ago
Overfitting is a well-known problem in the ML community. There are methods to avoid this: cross-validation, train-test splits, etc. There are also models that give you an estimate of the standard deviation of a prediction. What is the point? We don't need new algorithms, we just have to apply existing methods properly.
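A minimal sketch, assuming scikit-learn (the synthetic dataset and the decision tree are chosen only for illustration), of the guardrails the comment lists: a train-test split and cross-validation expose a model that merely memorizes its training data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with only a few informative features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))   # ~1.0: the tree memorizes
print("test accuracy: ", model.score(X_test, y_test))     # noticeably lower
print("5-fold CV mean:", cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean())
```

The gap between the training score and the held-out scores is the standard warning sign the comment is referring to.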
itg over 6 years ago
The title makes it sound as if the AAAS made this statement; it's a single researcher who is making this claim.
x3tm over 6 years ago
> Machine learning 'causing science crisis'

ML, or more generally mathematics, does not cause anything. People who misuse mathematics are to blame here. Some fields are simply using tools they don't understand, and this predates ML advances by decades. Thinking of stats use in psychology and medicine, for instance.

This trend of presenting ML as some kind of magic powder is ridiculous. I blame hyped presentations by influential ML scientists for this.
e_carra over 6 years ago
I wonder: don't machine learning frameworks' results come with a level of confidence?

PS: I have no experience with anything regarding ML.
anjc over 6 years ago
I can see there being issues with reproducibility, i.e. getting the exact same results, but has there ever been a time when science was more replicable? Data/techniques/findings/papers are under more scrutiny than ever. No positive results will be taken as sacrosanct in CS anymore. This is a complete 180 from 10+ years ago.
raverbashing over 6 years ago
Hopefully machine learning helps with confidence and with making predictions out of experiments, as opposed to the limited capability of "understanding" in the way things are done now (where an experiment with slightly higher p values is ignored, or one with smaller values might have hidden biases, etc.).
77pt77 over 6 years ago
I've often imagined how different Newtonian physics would be if we had gone the ML route from the beginning.
bayesian_horse over 6 years ago
The other day someone lamented that you can't get published as an honest ML researcher, because other scientists are rendering whole professions obsolete all the time...
daodedickinson over 6 years ago
It's not like teaching to the test works better for humans.
repolfx over 6 years ago
As other comments observe, the replication crisis predates the use of ML, so the causes are clearly deeper.

I think there's actually a very simple explanation for this which lots and lots of people hate, so they're sort of in denial about it. Academia is entirely government funded and has little or no accountability to the outside world. Academic incentives are a closed loop in which the same sorts of people who are producing papers are also reviewing them, publishing them, allocating funding, assessing each other's merits etc. It's a giant exercise in marking your own homework.

Just looked at in purely economic terms, academia is a massive planned economy. The central planners (grant bodies) decide that what matters is volume and novelty of results, so that's what they get, even though the resulting stream of papers is useless to the people actually trying to apply science in the real world ... biotech firms here, but the same problem crops up in many fields. It's exactly what we'd expect to see given historical precedent and the way the system works.

There's another huge elephant in the room here beyond the replication crisis ("to what extent are the outputs wrong"), which is the question of to what extent the outputs are even relevant to begin with. Whenever I sift through academic output I'm constantly amazed at the vast quantity of obviously useless research directions and papers that appear to be written for their cleverness rather than utility. The papers don't have to be wrong to be useless, they can just solve non-problems or make absurd tradeoffs that would never fly in any kind of applied science.

I read a lot of CS papers and I've noticed over time that the best and most impactful papers are almost always the ones coming out of corporate research teams. I think this is because corporate funded research has some kind of ultimate accountability and connection to reality that comes from senior executives asking hard questions about applicability. For instance, in the realm of PL research academia pumps out new programming languages all the time, but they rarely get any traction and the ideas they explore are frequently ignored by the industrial developers of mainstream languages because they're completely impractical. This problem is usually handwaved away by asserting that the ideas aren't bad ideas, they're just incredibly futuristic and 30 years from now we'll definitely be using them - but this kind of reasoning is unfalsifiable on any kind of sensible timescale, so it's the same as saying, "I shouldn't be held accountable within the span of my own career for how I spend tax and student money".

As time goes by I am getting more and more sympathetic to the idea of just drastically cutting academic funding and balancing the books by drastically reducing corporation tax. The amount of total research would fall significantly because corporations wouldn't invest all the newly available money in research, or even most of it, but it's unclear to me that this would be a bad thing - if 75% of research studies coming out of academic biotech are wrong, then it stands to reason that if standards were improved significantly, funding could be reduced by (say) 50% and still produce a similar quantity of accurate papers. It's possible the science crisis really just reflects a massive oversupply of scientists and a massive undersupply of accountability, and that in general research should be a much smaller social effort than it presently is.
stiff over 6 years ago
A dishonest scientist can mine a dataset for statistically significant hypotheses, and for a long time no institutional protection against this was in place:

https://en.wikipedia.org/wiki/Data_dredging

https://www.xkcd.com/882/

Machine learning makes it easier to test a great many hypotheses, but even going fully "by hand" it is very easy to deviate from what the statistical framework of hypothesis testing would demand. There are now some discussions about counter-measures, e.g. about preregistration of studies:

http://www.sciencemag.org/news/2018/09/more-and-more-scientists-are-preregistering-their-studies-should-you

You can see this as another chapter in the long debate about the correct way to test scientific hypotheses:

https://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Criticism
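A minimal sketch of the data dredging described above, assuming NumPy and SciPy (pure simulation in the spirit of the xkcd link, no real data): screen enough unrelated variables against an outcome and a few will look "significant" by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_candidates = 200, 50

outcome = rng.normal(size=n_subjects)                 # nothing real to find
candidates = rng.normal(size=(n_candidates, n_subjects))

hits = 0
for x in candidates:
    r, p = stats.pearsonr(x, outcome)                 # test each candidate in turn
    hits += p < 0.05
print(f"{hits} of {n_candidates} unrelated variables correlate with the outcome at p < 0.05")
# Roughly 2-3 spurious hits are expected; each looks publishable if reported alone.
```

Preregistration and corrections for multiple comparisons exist precisely to keep this kind of search from being reported as if it were a single clean test.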
maxander over 6 years ago
The issue talked about here is distinct from the larger "reproducibility crisis"; the latter is a result of shoddily designed (or simply fraudulent) *experimental* work, whereas the issue here is the aggregate effects of the huge amount of *computational* work that is being done - even when that work is being done correctly and honestly.

Testing a hypothesis against a pre-existing dataset is a valid thing to do, and it is also almost trivially simple (and completely free) for someone with a reasonable computational background. There are researchers who spend a decent portion of their careers performing these analyses. This is all well and good - we want people to spend time analyzing the highly complex data that modern science produces - but we run into problems with statistics.

Suppose an analyst can test a hundred hypotheses per month (this is probably a low estimate). Each analysis (simplifying slightly!) ends with a significance test, returning a p-value indicating the likelihood that the hypothesis is false. If p < 0.01, the researcher writes up the analysis and sends it off to a journal for publication, since the odds that this result was spurious are *literally* a hundred to one. But you see the problem; even if we assume that this researcher tests *no valid hypotheses at all* over the course of a year, we would expect them to send out one paper per month - and each of these papers would be entirely valid, with no methodological flaws for reviewers to complain about.

In reality, of course, researchers sometimes test true hypotheses, and the rate of true to false computational-analysis papers would depend on the ratio of "true hypotheses that analysis successfully catches" to "false hypotheses that squeak by under the p-value threshold" (i.e., the True Positive rate vs the False Positive rate). It's hard to guess what this ratio would be, but if AAAS is calling things a "crisis," it's clearly lower than we would like.

But there's a further problem, since the obvious solution - lower the p-value threshold for publication - would lower *both* the False Positive rate and the True Positive rate. The p-value that gets assigned to the results of an analysis of a *true* hypothesis is limited by the statistical power (essentially, size and quality) of the dataset being looked at; lower the p-value threshold too much, and analysts simply won't be able to make a sufficiently convincing case for any given true hypothesis. It's not a given that there is a p-value threshold for which the True Positive/False Positive ratio is much better than it is now.

"More data!" is the other commonly proposed solution, since we can safely lower the p-value threshold if we have the data to back up true hypotheses. But even if we can up the experimental throughput so much that we can produce True Positives at p < 0.0001, that simply means that computational researchers can explore more complicated hypotheses, until they're testing thousands or millions of hypotheses per month - and then we have the same problem. In a race between "bench work" and "human creativity plus computer science," I know which I'd bet on.
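A minimal sketch of the arithmetic above, assuming NumPy and SciPy (the hundred-tests-per-month figure comes from the comment; the sample size and effect size are my own illustrative choices): at p < 0.01 an analyst testing nothing but null hypotheses still "publishes" roughly one spurious result a month, and tightening the threshold also cuts the fraction of modest true effects that survive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tests_per_month, months, n = 100, 12, 50    # 50 samples per group in each test

def significant(effect, threshold):
    a = rng.normal(effect, 1, size=n)       # group with a (possibly zero) effect
    b = rng.normal(0, 1, size=n)
    _, p = stats.ttest_ind(a, b)
    return p < threshold

for threshold in (0.01, 0.0001):
    total = tests_per_month * months
    false_pos = sum(significant(0.0, threshold) for _ in range(total))
    true_pos = sum(significant(0.4, threshold) for _ in range(total))
    print(f"p < {threshold}: ~{false_pos / months:.1f} spurious papers/month, "
          f"modest true effects detected {true_pos / total:.0%} of the time")
```

This is the trade-off the comment describes: the stricter threshold suppresses the false positives but also discards many true effects, unless the datasets get much larger.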