
Post-human mathematics

175 points · by subnaught · over 9 years ago · 18 comments

whatshisface · over 9 years ago

I think one of the greatest difficulties in completely automated mathematics would be the discovery of mostly those things which are *interesting*. If computer mathematics is ever to diverge from human thinking, someone must pay to keep the lights on. Since nobody would pay for random points in theorem-space, it must have some way of figuring out what the human brains down at the breaker box actually want. This to me looks a lot like the problem of computer-automated storytelling, or even art.

Also, I don't think the alien-ness of computer proofs is a given. Perhaps someday some psychologist or philosopher will work out exactly what our cognition likes to work with, at which point you could write a proof compiler that outputs those things.

That raises an interesting question: can all interesting proofs be built from human-friendly steps? Maybe that's why we haven't worked out P vs. NP.
marcelluspye · over 9 years ago

> Is there a structure to mathematics which is independent of the human brain?

I would venture to say "no". I don't think humans will ever have no place in mathematics, because the problems we deem "important" are often relatively arbitrary. If a "post-human mathematician" starts spewing out thousands of pages of mathematics a second, all in a form only a computer can understand, no one will care. If a computer fells a tree in the woods, it doesn't make a sound.

I wouldn't discount the possibility, though, that a future "creative" computer manages to produce a proof, indecipherable to humans, of a theorem we care about, at which point I think there will be quite a perturbation in the mathematical community. If a computer proves the Riemann Hypothesis in such a way that no one can understand it, but it spits out a Coq document that everyone can load and verify, will people consider the problem solved?
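The "load and verify" step in that scenario is already routine for small results: a proof assistant's kernel mechanically checks a proof term without caring whether a human or a machine produced it. A toy illustration, sketched here in Lean 4 rather than Coq (the theorem name is invented for the example; the principle is the same in either system):

```lean
-- However this proof term was found (by a person or by a search
-- procedure), the kernel checks that it really proves the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The commenter's point is that such kernel-level trust says nothing about whether anyone *understands* the proof, only that it is correct.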
netcan · over 9 years ago

*Computers are used more and more but do not play a creative role*

It's always difficult to get concepts straight when talking about any of these AI-ish questions. The things we describe before the fact as "intelligence" are generally things we can't imagine mechanizing. Once something is mechanized and we can see the mechanism, we don't really like to call it AI. I think the same issue applies to "creative."

When humans do mathematics, they look for theorems that are intuitively interesting. We don't really understand what this intuition is. If a computer does it, say along the way to solving some other problem, we will be able to look into the mechanism, and it probably won't seem like intuition to us.
zitterbewegung · over 9 years ago

For mechanical theorem proving there is ACL2, which integrates machine learning into the system (see http://arxiv.org/abs/1404.3034). Coq also has machine-learning add-ons: http://arxiv.org/abs/1410.5467
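The systems cited above use statistical methods to, among other things, suggest which library lemmas are relevant to the current goal (premise selection). A deliberately naive sketch of that idea, with all names and the S-expression encoding invented for illustration (real tools use far richer features and learned models):

```python
# Toy premise selection: rank library lemmas by cosine similarity of
# bag-of-symbols feature vectors against the current goal.
from collections import Counter
from math import sqrt

def features(term: str) -> Counter:
    """Bag of symbol occurrences in a formula, as a crude feature vector."""
    return Counter(term.replace("(", " ").replace(")", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_premises(goal: str, lemmas: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k lemmas most similar to the goal."""
    g = features(goal)
    ranked = sorted(lemmas, key=lambda n: cosine(g, features(lemmas[n])),
                    reverse=True)
    return ranked[:k]

lemmas = {
    "add_comm":  "(= (+ a b) (+ b a))",
    "mul_comm":  "(= (* a b) (* b a))",
    "add_assoc": "(= (+ (+ a b) c) (+ a (+ b c)))",
}
# A goal about commutativity of + should rank the +-lemmas above mul_comm.
print(select_premises("(= (+ x y) (+ y x))", lemmas))
```

The point of the sketch is the pipeline shape (featurize goal, score library, hand the top candidates to the prover), not the scoring function itself.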
Strilanc · over 9 years ago

Almost all of this paper is dedicated to discussing whether or not computers can be "creative". Then it mentions in passing that computer-discovered theorems may be long and impenetrable (even if they are profound).

I guess I was expecting some attempt at concrete task definitions of creativity, such as "finds and proves statements with many useful implications and applications", and discussion of how well existing theorem provers do at those kinds of tasks. But instead of the "how might we achieve this, and what will change?" paper I was hoping for, this is more of an "are we special?" paper.
ChuckMcM · over 9 years ago

Loved this quote: *"Note, by the way, that a great mathematician is one who does something new, not one who is good at doing again things that have been done before."*

The same can be said of engineers.
nemo1618 · over 9 years ago

> The ability to speak was clearly favored by evolution, and the same might be said of the ability to count from 1 to 10.

Actually: https://en.wikipedia.org/wiki/Pirah%C3%A3_language#Pirah.C3.A3_and_linguistic_relativity
jessriedel · over 9 years ago

Can someone give a better summary of the author's points? The abstract gives an intro, but not much else.
spooningtamarin · over 9 years ago

https://agtb.wordpress.com/2012/04/01/automatic-proof-for-reimanns-hypothesis/

Love this one as a look into the future.
taber · over 9 years ago

The technology of mathematics is not theorems. Think of theorems like unit tests: no matter what framework you use to get a result, it should match the corresponding results that other approaches have yielded.

The technology of mathematics is words: words that define the boundaries between abstract objects and their different properties. The set of words a mathematician uses to approach a problem is where progress is made.

Until a computer can conceptualize a problem outside of the words used to describe it, it will never mimic this aspect of abstract thought.
tvural · over 9 years ago

How human mathematicians decide that a problem is "important" is not as arbitrary as people seem to think. The work mathematicians choose to do can be heavily influenced by fashion, but in general a problem is important if its solution would give insight into many other important, unsolved problems. Given this definition, it wouldn't be difficult to get a computer's notion of an interesting problem to align with ours.
api · over 9 years ago

They'd probably be more... computational:

http://www.amazon.com/dp/1579550088/
eveningcoffee · over 9 years ago

*you please enter digital certificate of virginity of grandmother, or some such nonsense.*

How did they get this kind of spam into arxiv.org?
tianlins · over 9 years ago

I think automated theorem proving replaces the "search function" of a mathematician. But there is another part, which seems more important: the creative insight, for example inferring and conjecturing a theorem from a few "data points". Machines do not yet have that capability.
golergka · over 9 years ago

While the theme seems interesting, I found the article itself pretty underwhelming. It just describes the current state of affairs and then goes into hypotheticals with less precision and imagination than many sci-fi authors before it.
underlings · over 9 years ago

So I guess this is it, then. If people were wondering what comes after "postmodern", it's probably:

Posthuman
cLeEOGPw · over 9 years ago

As Mike Tyson said, "a computer might win a chess match, but he would lose in a boxing ring".
sevzi7 · over 9 years ago

I'm already doing this with Java multicore. Is Java multicore human? No.