
科技回声

A tech news platform built with Next.js, providing global tech news and discussion.

Links: HackerNews API · Original HackerNews · Next.js

© 2025 科技回声. All rights reserved.

Richard Stallman's thoughts on ChatGPT, AI and their impact on humanity

156 points · by nixcraft · about 2 years ago

29 comments

leveraction · about 2 years ago
This sounds just like something my brother-in-law said. I think they are both technically correct and both missing the point. Does a calculator truly understand math when it spits out a correct answer? Of course not. And it doesn't matter. I have been really impressed with ChatGPT, and when it comes to shiny new tech I am usually in the poo poo camp. If tech does something useful then it is useful tech. The fact that it is not true intelligence doesn't matter at all. Besides, what's intelligence anyway? Aren't we still debating that ourselves?
lagrange77 · about 2 years ago
I think Stallman is right. It's really the term 'intelligence' that's the issue here.

We should stop using that term. I personally just use 'machine learning' or '(statistical/mathematical) model'. But then there's marketing, I know.
gjitcbcz · about 2 years ago
No context was given, but here was his actual statement:

> I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_.
st_goliath · about 2 years ago
Surprisingly not mentioned in the post: the whole Free Software movement aside, RMS actually spent a chunk of his academic career at the MIT AI Lab researching artificial intelligence, and co-authored some papers during his time there.

Granted, many of their research topics at the time are now no longer considered AI topics, part of what ultimately led to the AI winter of that era.

Particularly because of that connection, however, I think it could indeed be interesting to hear more from his perspective on developments in recent years.
jasfi · about 2 years ago
I would argue that ChatGPT has reached a certain level of understanding about what it's saying. That's because you can ask it questions about what it says, and it can continue to reason along the line of what it's previously said. It does sometimes make mistakes in this, but this is improving. It's just that people want understanding to look a certain way that seems more familiar to us.
xiphias2 · about 2 years ago
Richard was super sensitive about the power companies have over users with closed-source software, and had a great impact on our culture (just look at the controversy over the name OpenAI), but it seems like he's deeply underestimating the much, much bigger power AI has (and soon will have) over us, even though there have been countless books and movies predicting it, and we feel it coming.
eulers_secret · about 2 years ago
In his recent talk for the FSF in Boston, Stallman suggested that published weights are open source, I guess because they're modifiable and auditable. It's an interesting argument. So far, I've managed to modify llama myself, so I guess so?
usgroup · about 2 years ago
What I would just love to see is the outcome of the following:

1. Train ChatGPT on human stuff.
2. Make ChatGPT spit out libraries of knowledge by random walk.
3. Train ChatGPT on its own stuff.
4. Do this a few times.
5. Ask it some questions.
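The experiment above can be sketched in miniature. This is a toy sketch only: it swaps ChatGPT for a tiny bigram model (the `train_bigram`/`generate` helpers are invented for illustration), but it shows the same loop of retraining a model on its own random-walk output a few times.

```python
import random
from collections import defaultdict, Counter

def train_bigram(tokens):
    """Fit a next-token frequency table from a token list."""
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, n, rng):
    """Random-walk sample up to n tokens from the bigram model."""
    out = [start]
    for _ in range(n - 1):
        nxt = model.get(out[-1])
        if not nxt:  # dead end: token never had a successor
            break
        toks, weights = zip(*nxt.items())
        out.append(rng.choices(toks, weights=weights)[0])
    return out

rng = random.Random(0)
human = "the cat sat on the mat the dog sat on the rug".split()
corpus = human
for _ in range(4):  # steps 1-4: retrain on the model's own output
    model = train_bigram(corpus)
    corpus = generate(model, corpus[0], 200, rng)

# step 5: "ask it some questions" -- here, just see what vocabulary survives
print(sorted(set(corpus)))
```

Each generation can only ever recombine (or lose) tokens from the previous one, which is one concrete guess at why this loop tends to narrow rather than enrich the model.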
UniverseHacker · about 2 years ago
It's shocking to see so many famous intellectuals like Stallman and Chomsky get this so wrong. It seems like they have preconceptions so strongly held that no amount of hard evidence could convince them otherwise.

GPT-4 deeply understands what it is talking about. I was able to pose it difficult classical physics problems that would be hard for a physics undergrad to solve, and it could give correct answers consistently. These are things I made up that are nothing like existing examples. When asked to explain its work, or modify it for unusual scenarios, it could do so. I could also get it to invent new terms and concepts to describe the work, and then use those correctly. It could even reason out appropriate simplifying approximations when demands exceeded its calculating abilities. If GPT-4 isn't really understanding what it is saying, or isn't intelligent, then neither are any humans.
nixcraft · about 2 years ago
Mobile friendly URL: https://i.reddit.com/r/linux/comments/122gmm9/richard_stallmans_thoughts_on_chatgpt_artificial/ (sorry about that).
NotYourLawyer · about 2 years ago
Link should be to old.reddit.com
tim333 · about 2 years ago
Stallman's statement (as it's deleted on Reddit):

> I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_.

I asked ChatGPT Plus what it thought:

> do you understand what words mean?

and it replied:

> Yes, as an AI language model, I have been trained on vast amounts of data to understand the meanings of words and their usage in various contexts. I use this knowledge to generate responses to questions and statements that are as accurate and informative as possible. However, it is important to note that language is a complex and nuanced system, and there may be cases where my understanding of a word or its context may not be entirely accurate or complete.

Which I guess is not so different from humans, although we have other inputs from the physical world, I guess.
guilhas · about 2 years ago
This is just a snippet of text out of context, posted somewhere.

It states something pretty uncontroversial, repeated several times through HN.

And it does not contain an opinion about ChatGPT's utility or impact on humanity.

But we already know his opinion: opaque software is bad. And the widespread interest in llama and alpaca shows a lot of people agree, want to run it locally, learn how it works, reproduce it.
WinstonSmith84 · about 2 years ago
> It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false

Well, that could also be perfectly applied to any human. The difference being that humans may make false statements purposefully or not (i.e. by lying or by ignorance), whereas the AI will only make false statements by ignorance.

Also... "It can't avoid that because it doesn't know what the words _mean_." It's very subjective to understand what a word "means"; is that again about lies and ignorance? And do we need the AI to lie to declare it intelligent?

Maybe ChatGPT ironically has a modest answer to this claim, when asked what it thinks about Richard Stallman's statement:

```
Richard Stallman raises some valid concerns about the limitations of AI language models like ChatGPT. It is true that these models, while highly advanced and capable of generating human-like text, still have limitations in terms of understanding and reasoning.

ChatGPT, being a product of GPT-4 architecture, learns from vast amounts of text data and is designed to predict and generate plausible-sounding responses based on the input it receives. However, it lacks the capability to truly comprehend meaning or context in the same way a human can.

It is important to recognize these limitations and not overestimate the abilities of AI language models. Users should be aware that the information provided by ChatGPT might not always be accurate or reliable, and it is essential to cross-verify any critical information obtained from such models with trusted sources.

In summary, while Richard Stallman's statement highlights important concerns, it is worth noting that AI language models like ChatGPT can still provide valuable insights and assistance when used responsibly and with an understanding of their limitations.
```
crop_rotation · about 2 years ago
This seems to miss the forest for the trees. Whether LLMs will have an impact similar to the industrial revolution will not depend on whether they pass some arbitrary threshold where everyone is convinced that they are AGI and understand what words mean. It will depend on the utility they provide. And the utility is there right now. GPT-4 is so immensely useful for so many things. At even a reasonable pace of improvement, it is hard to see why LLMs would not be able to do more and more things.

Moreover, and slightly off topic, most humans also don't care what words mean or what numbers mean in a philosophical sense. If you talk of "The Axiom of Choice" in a big company software meeting, people will ask if it is the new ice cream flavour in the cafeteria. That doesn't prevent people from getting value out of both words and numbers.
version_five · about 2 years ago
On one hand, it would be more interesting to hear RMS's views on the implications, if any, for software freedom (personally I think there are many angles here).

On the other hand, the comment attributed to him is correct, if simple and pretty obvious.
tinyhouse · about 2 years ago
I actually think it shows incredible reasoning ability already. It can change its answers based on new content you provide. For example, you can show it a Java program and ask how it will behave, then show it release notes of a new Java version it has never seen before and ask how the functionality may change, and it will get it right. Most programmers won't, because our ability to attend to information is far inferior unless we really try hard. Focus is not something most humans excel at. Our brains are more capable but less utilized most of the time.
wonderingyogi · about 2 years ago
> It doesn't need intelligence to nullify humans' labour.
dgb23 · about 2 years ago
I propose a new term for this thing: AK, Artificial Knowledge.

The purpose of this tool is to compress, match, and combine text-based, _informal_ information.
pjio · about 2 years ago
> I can't foretell the future...

I like how he states the scope of his answer just like ChatGPT would.
muyuu · about 2 years ago
I think people tend to underestimate the concept of understanding, and also the concept of conveying things by just saying them.

Most people don't get most things most of the time.
braingenious · about 2 years ago
I wonder if Stallman has actually *used* GPT-4. His opinion seems like a conclusion that a person could arrive at just by reading the specifications.
orbital-decay · about 2 years ago
> It has no intelligence; it doesn't know anything and doesn't understand anything.

I don't like the word "intelligence". It's too arbitrary and depends on the language. In my native language, there are two synonyms which imply different thresholds for something to be considered intelligent. I'm sure in other languages it's also pretty arbitrary.

Instead, let's compare those models with complex biological systems we typically consider to be at least somewhat intelligent.

- Biological systems use spiking networks and are incredibly power efficient. This is more or less irrelevant for capabilities.

- Biological systems have a lot of surrounding neural and biochemical hardware: hardwired motorics, sense processing, internal regulators. Complex I/O is missing from these models, but is being added as we speak. The large downside of current models is that they cannot understand what drives humans: they have different hardware, are trained on human output, and have to "reverse engineer" the motivation. Which might or might not be possible, but it makes them *different*.

- Biological systems are *autonomous agents* in their world. They exist on an uninterrupted timeline, with input and output streams constantly present. These models don't exist on a timeline; they are activated by the user each time.

- Biological systems have some form of memory; they compress incoming data into higher-order concepts on the fly, and store them. This is a HUGE DEAL. The model has no equivalent of memory or neuroplasticity; it's a function that doesn't keep any state. LLMs have the context, which can be turned into a sliding window with an external feedback loop (chatbots do that), but it's not equivalent to biological memory at all, as it just stores tokens verbatim instead of trying to compress the incoming data.

- Biological systems exhibit highly complex emergent behavior. This also happens in LLMs and even simpler models.

- Biological systems are social. Birds compose songs from tokens and spread them through the population. Dogs, monkeys, and humans teach their kids. The mental capacity of a human isn't that great; every time you think you're smart, remember that you stand on the shoulders of giants. The model does have much more capacity than a human.

My own conclusion: sparks of "intelligence"? Undeniably; the emergent behavior alone is enough. They *do* understand things, in the conventional sense. However, they are also profoundly different from human intelligence, and still lack key elements like memory.
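The "sliding window with an external feedback loop" that chatbots use can be sketched roughly as follows. This is a minimal illustration, assuming a naive whitespace split as a stand-in for real tokenization (a production chatbot counts model tokens): keep only the most recent turns that fit a token budget, dropping older turns verbatim, with no compression.

```python
def window(history, budget):
    """Keep the most recent messages whose total 'token' count
    (naive whitespace split) fits within budget; older turns are
    dropped verbatim -- nothing is summarized or compressed."""
    kept, used = [], 0
    for msg in reversed(history):  # walk backward from the newest turn
        cost = len(msg.split())
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [
    "user: hello there",             # 3 "tokens"
    "bot: hi how can i help",        # 6 "tokens"
    "user: summarize this article",  # 4 "tokens"
]
print(window(history, 10))  # the oldest turn no longer fits and is forgotten
```

This makes the contrast with biological memory concrete: once a turn slides out of the budget, it is gone entirely, rather than being condensed into a higher-order concept.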
Eumenes · about 2 years ago
Stallman is always right
golf_mike · about 2 years ago
Great... and now prove this does not hold for yourself as well.
wslh · about 2 years ago
Richard Stallman is neglecting the impact of ChatGPT. It doesn't matter if ChatGPT is a magician or not; it absorbs our minds.
sinenomine · about 2 years ago
That's why open-source AI is trailing behind proprietary implementations. Sad!
wozer · about 2 years ago
I think his answer is driven by a preference for the status quo and a reluctance to face difficult changes.

ChatGPT, and especially GPT-4, seem to do much more than just play games with words. You can't overlook the "emergent" phenomena that manifest themselves when using them.
diego_moita · about 2 years ago
Man, this is so wrong in so many ways...

> it is important to realize that ChatGPT is not artificial intelligence.

The first mistake is to assume that there is a technical, precise, objective and clear definition of "Artificial Intelligence". There isn't. He should know that.

> it doesn't know anything and doesn't understand anything.

And what do "know" or "understand" mean in the context of a machine that doesn't even have self-consciousness?

Besides, are you implying that human beings know stuff? The overwhelming majority of people know very, very little. Most of the people I know are too lazy to think or do the hard work of studying. I'd suggest Kahneman's "Thinking, Fast and Slow" before putting any faith in people's "knowledge".

> It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false.

I think we can apply the same judgment to Stallman's argument itself, since his concepts are so badly defined.

And thank you for an open democratic society where every statement is liable to be false, regardless of whether it comes from a machine, Richard Stallman, or Putin and Xi Jinping.

I'll take Turing's test approach: if it looks intelligent to me, then it is certainly more intelligent than me.

Also, I'll take Dijkstra's approach: "the question of whether computers can think is as irrelevant as whether submarines can swim".

Edit: to all of Stallman's fanboys downvoting this: got any good arguments?