
Unprovability comes to machine learning

218 points, by joker3, over 6 years ago

16 comments

nafizh, over 6 years ago
It's funny to note that the first author signed a boycott [0] against publishing research in Nature's new AI journal (Nature Machine Intelligence), but is one of the first to publish there.

[0] https://openaccess.engineering.oregonstate.edu/signatures
jerf, over 6 years ago
It sounds like we can indeed determine its practical effect, which is zero. No real-world machine-learning algorithm can use the real numbers. We cannot obtain them to feed in as input, our real-world machines could not process them if we could, and if we got real-number-based answers, we could not use them. Whether or not the universe itself is based on real numbers, we do not have access to them either way. Our real-world machines only have access to the computable numbers (and only a subset of them, at that).

I'm only speaking to the practical impact.

It's possible someone will find some way to work this down into something practical, but I'd be fairly surprised. Real machine learners get finite data, for which the continuum hypothesis is irrelevant.

(I assume the original authors fully understand this.)
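A minimal sketch of the point above (an editorial illustration, not from the thread): every "real-valued" input an ML system actually receives is one of finitely many IEEE-754 bit patterns, so the uncountable continuum that the unprovability result relies on never enters a real machine.

```python
import struct

# A 64-bit IEEE-754 float has at most 2**64 distinct bit patterns, so any
# "real-valued" feature a deployed model receives is drawn from a finite
# set -- not from the continuum the EMX construction needs.
DISTINCT_FLOAT64_PATTERNS = 2 ** 64

# 0.1 is not exactly representable; the machine stores the nearest member
# of that finite set instead.
stored = struct.pack(">d", 0.1)
print(stored.hex())        # the finite bit pattern actually stored
print(0.1 + 0.2 == 0.3)   # False: finite-precision arithmetic, not real arithmetic
```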
sampo, over 6 years ago
The actual paper: https://www.nature.com/articles/s42256-018-0002-3

(I find the actual research paper easier to understand than the news coverage.)
roywiggins, over 6 years ago
I much prefer Yedidia & Aaronson's paper, which explicitly builds a Turing machine whose halting behavior is independent of ZFC:

https://www.scottaaronson.com/busybeaver.pdf
goldenkey, over 6 years ago
Though I haven't analyzed the particulars of this article or the paper, I can say: we already know that compression is incomplete. That is, we cannot know the actual Kolmogorov complexity [1] of a piece of data, due to the halting problem. It shouldn't be surprising, then, that programs of a certain schema, neural networks, suffer from similar issues. One could suppose that neural networks that are less code-parameterized and more data-parameterized (weights) would be less prone to such divergences. Well, it's already established that the more data-like neural networks aren't Turing complete, and aren't powerful enough to solve the kind of problems we really want to solve (AI). We have to turn to Hopfield nets, Boltzmann machines, and RNNs for that. The learning/training process for these nets is pretty much encumbered by their capabilities. That is, exploring a field of numbers is one thing; exploring a field of code is another. Code<->Data is about the most non-linear function in the universe, one that cannot be described concisely by mathematics. It is, in Wolfram's terms, "computationally irreducible." The closer a neural network reflects an actual space of Turing-complete functions, the farther it is from actually being trainable. Alas, we will figure out middle grounds, as we already have.

[1] https://en.wikipedia.org/wiki/Kolmogorov_complexity
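The uncomputability point above can be made concrete (again an editorial sketch, not from the thread): the true Kolmogorov complexity of data can never be computed, but any compressor gives a computable upper bound on it, and that is the best any program can do.

```python
import os
import zlib

def kolmogorov_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`.

    This is an upper bound on the Kolmogorov complexity of `data`,
    up to an additive constant for the decompressor. The true K(data)
    is uncomputable (halting problem), so computable bounds like this
    are all we ever get.
    """
    return len(zlib.compress(data, level=9))

repetitive = b"ab" * 1000      # highly structured: compresses to a few dozen bytes
randomish = os.urandom(2000)   # incompressible with overwhelming probability

print(kolmogorov_upper_bound(repetitive))  # small
print(kolmogorov_upper_bound(randomish))   # close to (or above) 2000
```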
hexhex, over 6 years ago
The author puts Gödel's incompleteness and the continuum hypothesis on the same level, which is misleading. The continuum hypothesis is unprovable in our current mathematical foundation, ZFC, but there are extensions of ZFC that make the continuum hypothesis either true or false.

Incompleteness, by contrast, is a property of every sufficiently complex formal system and thus poses a general constraint in logic.

That a particular learning problem is not provable in ZFC is not that surprising. Connections between learning and incompleteness are far more interesting (and there is a lot of pseudo-research going on, "proving" that humans are not simulatable by computers, etc.)
yters, over 6 years ago
How would this affect the practical nature of the problem? Also, why is this novel? All machine learning algorithms can be implemented on a Turing machine, so they all are subject to the halting problem.
hitsthings, over 6 years ago
I bought into this article until "the field of machine learning, although seemingly distant from mathematical logic", and then I sold it all.
jeromebaek, over 6 years ago
I'm curious about the applications of this to machine learning in public policy. In papers like Big Data's Disparate Impact (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899) we see misguided and underbudgeted policymakers using very sketchy and very opaque ML algorithms to, for example, decide who stays in prison and who does not.

If we can prove that the "decision problem" of who stays in prison and who does not is *undecidable*, invariant of the specific implementation of the ML algorithm, this could make a case for stopping such overreaches of ML.
alhazen, over 6 years ago
As the authors say in their paper, "[a] closer look reveals that the source of the problem is in defining learnability as the existence of a learning function rather than the existence of a learning algorithm". Algorithmic decision-making under uncertainty is NOT the same thing as finding learning functions over infinite domains. Sometimes such a function does not exist. Why should it? Although the authors seem well aware of this fundamental shortcoming in their analysis, they nevertheless go ahead and publish a whole paper about learning functions. This has a name: intellectual fraud.
jcoffland, over 6 years ago
> Because EMX is a new model in machine learning, we do not yet know its usefulness for developing real-world algorithms. So these results might not turn out to have practical importance.

I don't think that's the correct conclusion. More likely, all learning models have unprovable problems, so the choice of model is not important with respect to the existence of unprovable problems.

Really, this is not a surprising result. All mathematical systems have unprovable problems; ML is a mathematical system and no exception.
gromit1987, over 6 years ago
Can someone explain the meaning of the sentence "All distributions in this text are finitely supported over the σ-algebra of all subsets of X"? On the one hand, it is impossible to define a probability measure on the σ-algebra of all subsets of [0,1]. On the other hand, if a distribution is finitely supported, then there is no point in using a σ-algebra different from the one generated by its support, right?
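One way to see why the power-set σ-algebra is harmless for finitely supported distributions (an editorial sketch, not from the thread): when all the mass sits on finitely many points, P(A) is just the sum of the weights of support points lying in A, which is well defined for every subset A, with no measurability restriction at all.

```python
from typing import Callable, Dict

class FinitelySupported:
    """A probability distribution with finite support on [0, 1].

    Since the mass sits on finitely many points, P(A) = sum of weights of
    support points in A is well defined for EVERY subset A of [0, 1] --
    here a subset is represented by an arbitrary membership predicate.
    This is why the paper can speak of "the sigma-algebra of all subsets"
    without running into non-measurable sets.
    """
    def __init__(self, weights: Dict[float, float]):
        assert abs(sum(weights.values()) - 1.0) < 1e-12, "weights must sum to 1"
        self.weights = weights

    def prob(self, indicator: Callable[[float], bool]) -> float:
        # Sum the point masses whose locations satisfy the predicate.
        return sum(w for x, w in self.weights.items() if indicator(x))

p = FinitelySupported({0.25: 0.5, 0.75: 0.5})
print(p.prob(lambda x: x < 0.5))  # 0.5
print(p.prob(lambda x: True))     # 1.0, even for "wild" subsets of [0, 1]
```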
DiegoDiazEspin, over 6 years ago
The conclusions emphasize that the issue lies not in any practical consequence for machine-learning algorithms, but in how learnability is defined and treated when mathematical models are used to describe it. Some of these mathematical models can lead to undecidable problems if the wrong model is chosen.
sgt101, over 6 years ago
The idea that machine learning is a subset of mathematics is very dangerous. In the real world the data is never prepped and clean, and the system is complex and interacts with humans and society at many levels.
dooglius, over 6 years ago
> Because EMX is a new model in machine learning, we do not yet know its usefulness for developing real-world algorithms. So these results might not turn out to have practical importance. But we do now know that we should be careful when introducing new models of learning. Moreover, we might need to look again at the many subtleties that can come up, even in established learning models.

In other words, "machine learning" here is a buzzword, without which the paper would probably get less attention.
Frenum, over 6 years ago
For me, the interesting conclusion is that there are things that humans are mathematically incapable of knowing.