Isn't This a Category Error?

1 point by daly, almost 2 years ago
Lisp has been criticized as a language that cannot handle all of the set-theoretic type connectives. The essential claim, as I understand it, is that Lisp could not handle type union, intersection, and negation. I've been slowly working to better understand that criticism.

In this paper (p. 3), the author writes (https://www.irif.fr/~gc/papers/set-theoretic-types-2022.pdf): "The more precise is a type the fewer functions it types, the most precise type being one that, as (¬false→false)&(false→true), completely defines the semantics of a function."

While I agree with the first half of the sentence, about the precision of a type, it seems to me that "type" and "semantics" are categorically different. Trying to claim that types capture semantics seems to extend an essentially mechanical process (typing) to an essentially philosophical one (semantics).

So my question is: am I incorrect in claiming that types and semantics are categorically different? This matters for the kind (pun intended) of work I'm trying to do, especially in the struggle of defining specifications.

Like Gödel, I keep struggling with self-referential and self-modifying systems (self-reproducing robots being my actual domain of research). Systems with these properties have types and semantics that can "move" over time. A self-modifying system (e.g., a neural network) could start out properly classifying inputs (e.g., cat vs. dog) but over time, as more training occurs, add a third "type" of "don't know". That isn't a temporal logic or a fuzzy logic but more of a "morphing(?) logic": here time is not the subject of logical reasoning but the essence of the mechanism of reasoning.

The game gets even worse, as my neural nets can change structure, get feedback from their own output as input, split into and/or join with other nets, and accept their own structure as input and output (similar to genetic algorithms). The physical model is closer to yeast cells than to neural nets (https://www.wiley.com/en-us/Yeast%3A+Molecular+and+Cell+Biology%2C+2nd+Edition-p-9783527332526). (Lest you think I'm joking: yeast can find optimal paths in graphs and optimally solve mazes.) All of this nonsense sits on self-modifying hardware (FPGAs).

Gödel considered self-reference but not self-modification. Might a system that can't currently solve a problem slowly self-modify into one that could?
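To make the quoted claim concrete, here is a minimal sketch in TypeScript (my choice of language; the paper works in a set-theoretic calculus, not in TypeScript). TypeScript has union and intersection types but no negation types, so, restricted to the booleans, I read ¬false as the literal type `true`, and I write the intersection of the two arrow types as an overload set. Any function inhabiting this type must behave as boolean negation, which is the sense in which the paper says the type "completely defines the semantics" of the function:

```typescript
// (¬false → false) & (false → true), restricted to booleans.
// Within `boolean`, ¬false is the literal type `true`, and the
// intersection of the two arrow types is an overload set.
function not(x: true): false;
function not(x: false): true;
function not(x: boolean): boolean {
  return !x;
}

// The type alone pins down the behavior on booleans:
const a: false = not(true);  // checks: not(true) has type false
const b: true = not(false);  // checks: not(false) has type true

// The post's "morphing" worry, in the same vocabulary: a classifier
// whose output type widens over time (type names are mine, for
// illustration only).
type LabelV1 = "cat" | "dog";
type LabelV2 = LabelV1 | "don't know";
```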

2 comments

DemocracyFTW2, almost 2 years ago
> yeast can find optimal paths in graphs and optimally solve mazes

One should be amazed by this, but then one shouldn't. As the YouTube person Masakazu Matsumoto observes: "Water and milk can solve the maze faster than the slime mould. 水も迷路を解けます。しかも粘菌よりも速く。 It is a counter-example to the advocacy that the slime mould may be intellectual. [...] Note that this is NOT the evidence for the fact that water and milk are more intellectual than the slime mould, but showing that solving a maze is an easy (i.e. non-NP) problem."
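A small sketch of the comment's complexity point, in TypeScript (my own illustration; the grid encoding and function name are hypothetical, not from the video): shortest-path maze solving is breadth-first search, which visits each cell at most once and so runs in time linear in the size of the maze, far below NP-hardness.

```typescript
// Breadth-first search over a grid maze: O(rows * cols) time.
type Cell = [number, number];

function shortestPath(maze: string[], start: Cell, goal: Cell): number {
  const rows = maze.length;
  const cols = maze[0].length;
  const key = ([r, c]: Cell) => `${r},${c}`;
  const dist = new Map<string, number>([[key(start), 0]]);
  const queue: Cell[] = [start];

  while (queue.length > 0) {
    const [r, c] = queue.shift()!;
    const d = dist.get(key([r, c]))!;
    if (r === goal[0] && c === goal[1]) return d;
    // Explore the four orthogonal neighbors.
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr;
      const nc = c + dc;
      if (
        nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
        maze[nr][nc] !== "#" && !dist.has(key([nr, nc]))
      ) {
        dist.set(key([nr, nc]), d + 1);
        queue.push([nr, nc]);
      }
    }
  }
  return -1; // goal unreachable
}

// "#" is a wall, "." is open floor; moves are up/down/left/right.
const maze = ["....#", ".##.#", ".#..."];
console.log(shortestPath(maze, [0, 0], [2, 4])); // prints 6
```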
daly, almost 2 years ago
Language is a self-referencing, self-modifying system. What it lacks is agency. Embodied systems like robots fix that. Perhaps this is what the AI people like Hinton fear.