
A Multi-Level View of LLM Intentionality

102 points by zoltz, over 1 year ago

7 comments

lukev (over 1 year ago)
I'm not sure the definition of "intention" the article suggests is a useful one. He tries to make it sound like he's being conservative:

> That is, we should ascribe intentions to a system if and only if it helps to predict and explain the behaviour of the system. Whether it *really* has intentions beyond this is not a question I am attempting to answer (and I think that it is probably not determinate in any case).

And yet, I think there's room to argue that LLMs (as currently implemented) cannot have intentions. Not because of their capabilities or behaviors, but because we know how they work (mechanically at least) and it is incompatible with useful definitions of the word "intent."

Primarily, they are pure functions that accept a sequence of tokens and return the next token. The model itself is stateless, and it doesn't seem right to me to ascribe "intent" to a stateless function. Even if the function is capable of modeling certain aspects of chess.

Otherwise, we are in the somewhat absurd position of needing to argue that all mathematical functions "intend" to yield their result. Maybe you could go there, but it seems to be torturing language a bit, just like people who advocate definitions of "consciousness" wherein even rocks are a "little bit conscious."
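To make the "stateless pure function" framing concrete, here is a minimal sketch (Python; `next_token` is a hypothetical stand-in for a real model, not any particular library's API): all the "memory" lives in the growing token sequence, never inside the model itself.

```python
# Sketch of autoregressive generation with a stateless model.
# `next_token` maps a token sequence to the next token and keeps
# no state of its own between calls (same input -> same output).
from typing import Callable, List

Token = int

def generate(next_token: Callable[[List[Token]], Token],
             prompt: List[Token],
             max_new_tokens: int) -> List[Token]:
    """All context is carried in the token list, not in the model."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(next_token(tokens))
    return tokens
```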
og_kalu (over 1 year ago)
> Unless you think that there is some fundamental reason why LLMs will never be able to play chess competently, and I doubt there is, then it seems that we could with the right prompts implement some sort of chess AI using an LLM.

You can play a good game of chess (or poker for that matter) with GPT.

https://twitter.com/kenshinsamurai9/status/1662510532585291779

https://arxiv.org/abs/2308.12466

There's also some work going on in the Eleuther AI Discord training LLMs specifically for chess to see how they shape up. They're using the Pythia models. So far:

Pythia 70M, est. ELO 1050

Pythia 160M, est. ELO 1370
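One hedged sketch of what such a prompt-based chess setup might look like (Python; `complete` is a hypothetical text-completion call standing in for whatever model you use, and the move-list prompt is only one possible framing):

```python
# Rough sketch: ask an LLM for the next chess move by having it
# continue the list of moves played so far.

def ask_model_for_move(complete, moves: list[str]) -> str:
    """Ask the model for the next move in standard algebraic notation."""
    prompt = (
        "You are playing chess. Continue the game with one legal move "
        "in standard algebraic notation.\n"
        "Moves so far: " + " ".join(moves) + "\n"
        "Next move:"
    )
    reply = complete(prompt)
    return reply.strip().split()[0]  # e.g. "Nf3"

# In practice you would validate the returned move with a chess library
# (e.g. python-chess) and re-prompt if the model suggests an illegal move.
```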
labrador (over 1 year ago)
The authors of the text the model was trained on certainly had intentions. Many of those are going to be preserved in the output.
gwd (over 1 year ago)
My analogy for GPT-4 is this: GPT-4 is writing a novel, in which a human talks to a very smart AI. This helps me contextualize its hallucinations: if I were writing such a novel and I knew the answer to something, I would put in the correct answer; if I were writing such a novel and I didn't know the answer to something (and had no way to look it up), I would make up something plausible.

From that perspective, I think multi-intentionality also works. If I write a story about Bob, then Bob (in the story) has intentions, although he's just a figment of my imagination; and when we read characters in novels, we use the imputed intentions of the characters to understand their behavior, although we know they're fictional and don't actually exist.

So yes; on one level, I want to write an exciting story; on a second level, I'm simulating Bob in my head, who wants to execute the perfect robbery. On one level, GPT-4 wants to write a story about a smart AI; on a second level, the smart AI in GPT-4's story wants to win the chess game by moving the queen to put the king in check.
intended (over 1 year ago)
After having spent a ridiculous amount of effort to get LLMs to work, I am certain they are simply predicting the next token.

If LLMs actually could reason, there is a much, much wider set of applications where they would be actively used.

The term "hallucination" does us all an injustice by propagating the idea of an anthropomorphized LLM.

Everything an LLM does is a hallucination.

You and I can make out valid patterns from invalid patterns, because we have an idea of some reality.

(Incidentally, there are some very weird implications/perspectives deriving from these two positions. E.g.: if you had infinite data, would an LLM ever need to calculate?)

Point being: the more intimate the use of an LLM, the more its emergent properties are non-emergent.
mgraczyk (over 1 year ago)
The reason it isn't useful to ascribe intentions without a mechanistic explanation of intentionality is that you will incorrectly predict what the model will do in surprising ways.

I think it's true that current-generation LLMs could, in principle, have intentionality in the way described in the article. But they would have to be trained on many orders of magnitude more data than current models.

Also, AutoGPT does not work. I encourage the author to play around with it and try to get it to do something useful with high success probability.
prvc (over 1 year ago)
A Multi-Level View of a True Scotsman.