
科技回声

A technology-news platform built with Next.js, providing global tech news and discussion.

© 2025 科技回声. All rights reserved.

The Symbol Grounding Problem (1990) [pdf]

40 points by rfreytag, over 1 year ago

3 comments

aaroninsf, over 1 year ago
TL;DR: you ground symbols by connecting them to non-linguistic features in your network; specifically, to the sensorium, but also to utility.

Things have names, but things *are* what we perceive about them, and largely what we can do, and do do, with them.

This is what any agent embodied in an environment must do, and does do.

Many of the criticisms of LLMs will evaporate as models become multimodal and gain (or infer) agency.
jstrebel, over 1 year ago
I see this article as a piece of its time: it tries to bridge the divide between connectionists and symbolists. And that's exactly my problem with it. Of course you need a non-symbolic approach to sensor-data processing, fusion, aggregation, category discovery, etc., but that does not mean a neural network is the only option. In the end, an ANN implements a mathematical function, and as such you, the system designer, should be able to choose any mathematical framework for your lowest level.
AIorNot, over 1 year ago
From GPT:

The symbol grounding problem, as discussed in the paper snippet, addresses the challenge of how to connect the semantic interpretation of symbols within a formal symbol system to the real world, making it intrinsic to the system rather than relying on external interpretations. In the context of Large Language Models (LLMs) and AI, this problem is relevant as it raises questions about the meaning and understanding of symbols or tokens generated by these systems.

The paper suggests a potential solution to the symbol grounding problem: grounding symbolic representations in nonsymbolic representations of two kinds, "iconic representations" (analogous to sensory projections) and "categorical representations" (feature detectors for object and event categories). Elementary symbols are then the names of these categories, assigned based on their categorical representations.

Connectionism, a neural network-based approach, is proposed as a mechanism to learn the invariant features underlying categorical representations, thus connecting symbols to the sensory world. This hybrid model combines symbolic and connectionist elements to address the symbol grounding problem.

In the context of LLMs, they operate primarily based on symbolic manipulation of text, and while they can generate text that appears semantically meaningful, the challenge lies in grounding this meaning in real-world understanding. LLMs lack the sensory perceptions and cognitive mechanisms that humans have for grounding symbols. Addressing the symbol grounding problem in LLMs would require incorporating mechanisms to connect their symbolic outputs to real-world understanding, similar to the proposed hybrid model discussed in the paper.

In summary, the symbol grounding problem is a significant challenge in AI and LLMs, as it questions how symbolic representations can be tied to real-world semantics. Addressing this problem would require developing mechanisms to bridge the gap between symbolic manipulation and true understanding of the world.
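The hybrid model described above can be sketched as a toy program. This is a minimal illustration, not Harnad's actual proposal: the category names, sensory vectors, and the nearest-centroid "feature detector" are all invented for the example (the paper envisions a connectionist network learning invariant features, not a centroid rule).

```python
# Toy sketch of symbol grounding: symbols are names of categories, and
# categories are learned from nonsymbolic sensory examples.
# All data and the centroid-based detector are illustrative assumptions.
from math import dist


class GroundedLexicon:
    """Maps elementary symbols (category names) to categorical
    representations learned from sensory projections."""

    def __init__(self):
        self.centroids = {}  # symbol -> summary of its sensory examples

    def learn(self, symbol, sensory_examples):
        # "Categorical representation": reduce iconic projections to an
        # invariant feature summary (here, simply their centroid).
        n = len(sensory_examples)
        dims = len(sensory_examples[0])
        self.centroids[symbol] = tuple(
            sum(ex[d] for ex in sensory_examples) / n for d in range(dims)
        )

    def ground(self, sensory_input):
        # Name a new input with the symbol of the nearest category,
        # so the symbol's meaning is intrinsic to the sensory data.
        return min(self.centroids,
                   key=lambda s: dist(self.centroids[s], sensory_input))


lex = GroundedLexicon()
lex.learn("horse", [(1.0, 0.9), (0.9, 1.1)])
lex.learn("stripes", [(5.0, 5.2), (4.8, 5.1)])
print(lex.ground((1.05, 1.0)))  # a novel input is named via its category
```

The point of the sketch is the direction of the mapping: the symbol is assigned *from* the sensory side, rather than interpreted from outside the system, which is the intrinsic grounding the paper asks for.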