
What does Alan Kay think about LLMs?

184 points by agomez314, about 1 year ago

17 comments

ozten, about 1 year ago
> That humans also do this all the time is “interesting”, “dangerous” etc., but it is also why trying to move from *superstition* (this is actually what “reasoning by correlation” amounts to) to more scientific methods is critical for anything like civilization to be created.

"reasoning by correlation" as superstition is a brutal insight.
skadamat, about 1 year ago
This answer is amazing and classically Alan Kay! There's so much here to unpack because of all the different areas Alan draws from in his work (he's like a computing *philosopher*).

All I will say is that for people who want to understand his perspective, there's a large epistemological load to overcome. Sampling his talks is a good starting point though: https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay
devjab, about 1 year ago
I use LLMs quite a lot to help me in my work, but they are wrong so often that it's ridiculous. This isn't a major issue when you're an expert using the tools to be more efficient, because you'll spot and laugh at the errors they make. Sometimes it'll be things that anyone would notice, like how an LLM will simply "invent" a library function that has never existed. Even if you're not an expert, you're not going to get that PNP-whatever function to work in Powershell if it never existed in the module to begin with.

Where it becomes more dangerous, at least in my opinion, is when the LLM only gets it sort of wrong. Maybe the answer it gives you is old, maybe it's inefficient, maybe it's insecure, or a range of other things, and if you're new to programming, you're probably not going to notice. Hell, I've reviewed code from senior programmers that pulled in deprecated things with massive security vulnerabilities and never noticed, because they were too focused on fast delivery and "it worked". I can't imagine how that would work out for people trying to actually learn things.

I'm not sure what we can really do about it though. I work a side gig as an external examiner for CS students. A lot of the curriculum being taught (at least here in Denmark) is things I've seen the industry move "beyond" in the previous 20 years. Some of it is so dated that it really makes no sense at all. Which isn't exactly a great alternative to the LLMs, and it's only natural that a lot of people simply turn to these powerful tools.

I tend to tell people to ask their favorite LLM to help them solve a crossword. When you ask it to give you words ending in "ing", it'll give you words that don't end in "ing", because of how its tokens work. This tends to be an eye-opener for people in regards to how much they trust their LLM. At least until models get refined enough that they can also do these things.

Anyway, it's a good answer.
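One way to make that concrete is to check the suggestions mechanically rather than by eye. A minimal sketch follows; the word list is a made-up stand-in for a model's reply, not output from any particular LLM:

```python
def check_suffix(words: list[str], suffix: str = "ing") -> None:
    """Flag any suggested word that does not actually end with the requested suffix."""
    for word in words:
        verdict = "ok" if word.lower().endswith(suffix) else f"does NOT end in '{suffix}'"
        print(f"{word:>10}  {verdict}")

# Hypothetical model reply to "give me 8-letter words ending in 'ing'":
check_suffix(["steaming", "crossing", "puzzler", "wordplay"])
```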
slaymaker1907, about 1 year ago
> If we look at human anthropology, we see a species that treats solitary confinement and banishment from society as punishments — we are a society because we can cooperate and trust a little — but when we are safely back in society, we start competing like mad (and cheating like mad) as though the society is there to be strip-mined.

I really like this quote. We simultaneously value trust and community, yet so many people also treat it as just another resource to turn into money and power. Alan Kay is a real gem.
mempko, about 1 year ago
I see a lot of people here are missing his "big deal", which he talks about at the end where he references the "Spaceship Earth" problem.

What I believe he is getting at is that people are going to use LLMs to build systems at scale to further strip-mine society.

The "Spaceship Earth" problem is a reference to Limits to Growth. For those who haven't read "Limits to Growth", and the more recent recalibration of Limits to Growth, I implore you to do so.

https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.13442
dimal, about 1 year ago
Off topic, but I didn't realize Alan Kay was regularly answering questions on Quora. I can ask "What does Alan Kay think about X?" and get an answer from Alan Kay?!?
dpflan, about 1 year ago
There is a lot in here, various paths to venture off, but the bottom line seems to be trust is important when running commands on a machine, and LLMs are not trustable. What else?
keybored, about 1 year ago
What I like about programming (*real* programming) is that it is dumb and obtuse. You can see why things happen. Because the instruction languages are painfully literal. They will throw their hands up if you omit a step. Or crash.

You can understand it.

That's why I dread the AI future.
svieira, about 1 year ago
> By “help” I mean that — especially when *changes in epistemological points of view from one’s own common sense are required*, it can make a huge difference to be near a “special human” whose personality is strong enough to make us rethink what we think we know.

This is how I measure Ed-Tech companies. Do they have an awareness that you cannot replace the connection with other human beings that is an essential part of teaching with "facts", or not? If "yes, they have that awareness", how do they mitigate the problem?
ianbicking, about 1 year ago
This feels like a limited and perhaps naive perspective on LLMs. If you looked at computers as adding machines in the 60s/70s, then you'd be missing most of what was interesting about computers. And if you look at LLMs as a question-answering service now, you are also missing a lot.

It's hard to compare trust of LLMs to other computing, because many of the things that LLMs get wrong and right were previously intractable. You could ask a search engine, but it's certainly no more trustworthy than an LLM, gameable in its own way. The closest might be a knowledge graph or database, which can formally represent some portion of what an LLM represents.

To be fair, the relational systems can and will give "no answer", while an LLM (like a search engine) always gives some answer. Certainly an issue!

But this is all in the realm of coming up with answers in a closed system, hardly the only way LLMs can be used. LLMs can also come up with questions, for instance creating queries for a database. Are these trustworthy? Not entirely, but the closest alternative supportive tool is perhaps some query builder...? I have seen expert humans come up with untrustworthy queries as well... misinterpretation of data is easy and common.

That's just one example of how an LLM can be used. If you use an LLM for something that you can directly compare to a non-LLM system, such as speech recognition or intent parsing, it's clear that the LLM is more trustworthy. It can and does do real error correction! That is, you can get higher quality data out of an LLM than you put in. This is not unheard of in computing, but it is uncommon. Internet networking, which Kay refers to, might be an analog... creating reliable connections on top of unreliable connections.

What we don't have right now is systematic approaches to computing with LLMs.
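To make the database-query case concrete, here is a minimal sketch of treating the generated query as untrusted input. The `ask_llm` function is a hypothetical stand-in for whatever completion API is used, and the prompt, guard, and read-only policy are illustrative assumptions, not a prescribed design:

```python
import sqlite3

def ask_llm(prompt: str) -> str:
    """Placeholder for a real completion API call (an assumption, not a real library)."""
    raise NotImplementedError("plug in your LLM client here")

def generate_sql(question: str, schema: str) -> str:
    """Ask a model to draft a query against the given schema."""
    prompt = f"Schema:\n{schema}\n\nWrite one SQLite SELECT statement for: {question}"
    return ask_llm(prompt)  # assumed to return a SQL string

def run_untrusted_query(db_path: str, sql: str) -> list:
    """Treat the generated SQL as untrusted: allow a single SELECT only,
    open the database read-only, and let EXPLAIN validate the statement
    before any rows are fetched."""
    cleaned = sql.strip().rstrip(";")
    if not cleaned.lower().startswith("select") or ";" in cleaned:
        raise ValueError(f"refusing to run non-SELECT or multi-statement SQL: {sql!r}")
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        conn.execute("EXPLAIN " + cleaned)       # plan/syntax check, touches no data
        return conn.execute(cleaned).fetchall()  # only now run it
    finally:
        conn.close()
```

The point of the sketch is only that the guard, not the model, decides what actually gets executed.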
LAC-Tech, about 1 year ago
Typical Alan Kay. Very long, very thought provoking, full of historical references, and barely answered the question at all. :)
aaroninsf, about 1 year ago
Always a welcome and thoughtful opinion, but not correct about the potential of LLMs. Waving away their output as "BS" is awfully flip, and IMO is hence one more example of Ximm's Law (that every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon).

One might find ambiguity in his criticism though, that LLMs *alone* are insufficient... but that's what Ximm's Law is saying. It's not very interesting to (as I would say he does here) take on straw rather than steel.

(A steely defense of LLMs is to say that no one is particularly interested in scaling LLMs without other improvements, though scaling alone provides improvements; multi-modal, multi-language, long-context, and most of all augmented systems that integrate LLMs into larger systems rather than making them "systems on a chip", are where things look, and IMO will be, interesting.)
PCMPSTR, about 1 year ago
This part of his answer:

> A key part of their design was to *not allow direct sending of commands* — only bits could be sent. This means that (other) software inside each physical computer has the responsibility to *interpret* the bits, and the power to *do* (or not do) *some action*

seems, at least on a basic reading, to contradict this famous little argument (or maybe trolling?) he had on HN with Rich Hickey, where he seems to be suggesting that one shouldn't just send raw bits, but also a little interpreter along with the data: https://news.ycombinator.com/item?id=11945722

Maybe this is an inevitable consequence of always speaking so abstractly/vaguely, but it also makes it difficult to know what exactly he's suggesting the industry, which he is so routinely critical of, should concretely do next.
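The contrast Kay is pointing at can be sketched roughly like this (a toy illustration only; the message format and handler names are invented, not anything he proposed): the receiver treats arriving bytes as data to interpret with its own logic, and keeps for itself the power to do, or not do, anything in response.

```python
import json

# The receiver decides which interpretations it is willing to perform;
# the handler names here are invented for illustration.
HANDLERS = {
    "temperature_reading": lambda payload: print("logged:", payload),
}

def receive(raw_bytes: bytes) -> None:
    """Interpret incoming bits locally instead of executing them as commands."""
    try:
        message = json.loads(raw_bytes)          # the bits are just data...
    except ValueError:
        return                                   # ...and may simply be ignored
    if not isinstance(message, dict):
        return
    handler = HANDLERS.get(message.get("kind"))
    if handler is not None:                      # the power to do (or not do) some action
        handler(message.get("payload"))

# A "direct command" design would instead be something like exec(raw_bytes),
# handing the sender control of the receiving machine.
receive(b'{"kind": "temperature_reading", "payload": 21.5}')
```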
amelius, about 1 year ago
LLMs are horribly broken.

But we somehow like the idea of putting everything behind an API and then calling it a solved problem.
sebastianconcpt, about 1 year ago
It is by definition that the silicon automatons cannot escape their ontologically stochastic, imitative hallucinations.

All they can do is be observed by us, and with that expose us to being induced, by association, into ontologically human hallucinations, and to dealing with their outcome of real consequences.
sebastianconcpt, about 1 year ago
Questioning the trustworthiness of ontologically hallucinatory automatons. Nailed it!
better_sh, about 1 year ago
but what does Ja Rule think?