
The False Promise of ChatGPT

234 points by topherjaynes about 2 years ago

55 comments

xnx about 2 years ago

https://archive.is/AgWkn
ThalesX about 2 years ago

The most significant impact ChatGPT has had on my life is that I have some interns helping me write documentation for several projects. The ChatGPT noise they started introducing has been disruptive to the company and project management. Inaccurate percentages, years, institution abbreviations, etc., etc.

I had to confront them multiple times about using the tool and not checking its results, and actually do the checking myself. Most of the time it's close to the truth, but not quite, and in the field the projects are in, not quite doesn't cut it.

I also have project partners I introduced to ChatGPT. They produce a lot of noise but less insight than before they started using this technology. In one recent project, I was involved with 5 partner companies, and 4 of them produced excellent 5 to 10-page reports. One gave me a 100-page buzzword-filled, no-substance report. Guess who used GPT.

The good part is that I'm now pretty good at spotting ChatGPT-written content. I think the technology will evolve, but in its current state I feel there's a lot of noise.
__MatrixMan__ about 2 years ago

It seems pretty obvious to me, after using ChatGPT for nearly everything over the last few weeks, that it does not have the kind of intelligence that they're claiming it does not have.

It's just recycling things that other humans have said. Which is marvelous, because it would typically take me a very long time to build a map between the past contributions of those humans and the work that's presently in front of me. It's like I'm temporarily everybody.

By raising the alarm re: it's not what you think it is, I fear they're actually fueling the fire re: people thinking that that's what it is.

It's like if I went on record saying I didn't steal something which hasn't gone missing. Now everybody's thinking about its non-theft and not something more useful, like how to best make use of it.
asah about 2 years ago

I dunno, more and more friends are now using ChatGPT as their PRIMARY resource in everyday tasks:

- writing code snippets (switched from StackExchange/Google)
- travel planning
- online shopping research

and of course the famous examples:

- summarizing text provided to it
- writing essays and memos from outlines and sample text

I for one welcome our AI overlords.
calf about 2 years ago

The best part of the piece was the invocation of Hannah Arendt, "The Banality of Evil". Until now, no other writer or article saw it; it took a 94-year-old intellectual to see the forest for the trees.

... That said, I think the weakest part of the argument is that it naturally invites laypeople to counterargue, "Aren't we just pattern matchers after all?" Their essay does not directly debunk this question.
digitthedog about 2 years ago

Chomsky can't fit the round intelligence of ChatGPT into the square hole of human intelligence, so instead he makes the case that it is entirely disqualified from that category, rather than rethinking his own paradigm. He is, to put it bluntly, a fear-driven bigot defending his terrain as a public intellectual.
djoldman about 2 years ago

> ...intelligence is the means by which we solve problems....

> ...artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.

> ...the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that's description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

> Intelligence consists not only of creative conjectures but also of creative criticism

> True intelligence is demonstrated in the ability to think and express improbable but insightful things.

> True intelligence is also capable of moral thinking.

When examined together, these quotes seem devoid of any concise, comprehensive, or useful definition of intelligence (whether artificial or artificial-and-general).

> Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

ChatGPT and Co. are popular because they are incredibly useful tools (among other reasons).

Morality, scientific reasoning, and linguistic competence are not prerequisites for usefulness.
tomconroy about 2 years ago

I asked ChatGPT about the apple:

> i am holding an apple in my hand. if i open my hand, what will happen? why?

> If you open your hand while holding an apple, the apple will likely fall to the ground due to the force of gravity. This is because gravity is a force that pulls objects towards the center of the Earth, and the apple, being a physical object, is subject to this force.

> When you hold the apple in your hand, you are exerting a force on it in the opposite direction of gravity. This force is what keeps the apple in your hand. When you release the apple by opening your hand, the force you were exerting on it is removed, and gravity takes over, causing the apple to fall to the ground.

> The speed at which the apple falls depends on a number of factors, including the mass of the apple and the strength of the gravitational force. In most cases, however, the apple will fall relatively quickly due to the strong gravitational force exerted by the Earth.

The other example is easily refuted as well:

> explain the meaning of this sentence: "John is too stubborn to talk to"

> The sentence "John is too stubborn to talk to" means that John is very resistant to changing his opinions or beliefs and as a result, it is difficult to have a conversation or communicate with him effectively. This implies that talking to John might not be a productive or helpful experience as he is not open to listening or considering alternative viewpoints. The use of "too" in the sentence emphasizes the level of stubbornness and implies that it is extreme, making communication with him nearly impossible.
seydor about 2 years ago

This article is like a century behind in rigour ("mind", really?) and will probably be proven wrong on so many levels that it will become a landmark article in the field. It would be immediately dismissed as irrelevant based on the current state of cognitive science/neuroscience, but it is here because of the names.
wilg about 2 years ago

Bizarre article. Just a rant from someone incredibly out of touch and who is missing the forest for the trees.

> "The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response"

We don't know that! It very well could be. Think of all the data that has entered all your senses in your entire lifetime. More than goes into ChatGPT, I'll tell you that. Plus, you synthesize information by being corporeal, so you have a tight feedback loop. LLMs could well be a foundational part of AI technology as well as an accurate analog for some of the brain's behavior.

A small part of the point, but bringing up this "hardcoded" response of it not offering political opinions as any kind of evidence of its theoretical capability is beyond silly.
tkgally about 2 years ago

It's important to note that when Chomsky writes about "a fundamentally flawed conception of language" or "the science of linguistics," he is talking about a particular understanding of what language is and a particular subset of linguistics. While some linguists agree with his focus on the mind, grammar, and linguistic competence, others dismiss it as too narrow. Many linguists are more interested in how language is actually used and in its complex roles in human society.

I personally am concerned not so much about whether large language models actually are intelligent as about whether people who interact with them *perceive* them as being intelligent. The latter, I think, is what will matter most in the months and years ahead.
wildrhythms about 2 years ago

This was a good read, with some great quotes.

> Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct "scientific" predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton's laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, "we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories."

Couldn't agree more.
thomastjeffery about 2 years ago

They get very near the point, and completely miss it at the end.

> It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence

That is an *accurate* response from ChatGPT.

ChatGPT, indeed, *does not* offer intelligence. Calling it AI serves no purpose except *personification*.

The only thing ChatGPT *does* is "some kind of super autocomplete". That's all it *can do*.

The only reason ChatGPT is so impressive in the first place is that the thing it is "autocompleting" - *language itself* - is impressive.

Language is data. Inside that data is not simply a collection of ideas: language holds the logical associations that connect ideas to each other. It can even hold an objective perspective on those associations. It can even hold an objective perspective on that objective perspective!

ChatGPT succeeds in finding new paths through that data, but it is walking blind: it doesn't know *what* any of the data *is*. It only knows where.

It just so happens that "where" holds a lot of meaning. Language uses "where" to encode a lot of information: "where" is a fundamental building block for grammar itself. It's called "semantics".

Without any other language feature, ChatGPT is able to manipulate and apply the "where" of language to itself. It's able to manipulate semantics.

But that's it. It can't do anything else. And that's a problem. What is semantically valid might be really useful, really wrong, or really nonsensical. The only filter ChatGPT has for those categories is what it has and hasn't read already. Most of that is not nonsense: most possible nonsense isn't written in the first place. Most of what we write has explicit meaning. But nothing stops us from writing silly things, or even lies. Nothing stops language itself from getting the subjects of the logic mixed up. As far as language is concerned, that is not a bug: it's a feature.
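[Editor's note: the "super autocomplete" described above is, at bottom, next-token prediction from observed text. A toy sketch of the idea, heavily simplified to a bigram model over a one-sentence corpus; the corpus and all names are illustrative, not anyone's actual implementation:]

```python
import random
from collections import defaultdict

# Toy "super autocomplete": record which word tends to follow which,
# then generate text purely from those observed statistics. The model
# never knows what any word *is* -- only where words sit relative to
# each other, exactly the "where" described above.
corpus = "the apple falls to the ground because gravity pulls the apple down".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(word, n=5, seed=0):
    """Extend `word` by up to n tokens, sampling from observed successors."""
    random.seed(seed)
    out = [word]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:  # nothing ever followed this word in the corpus
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(autocomplete("the"))
```

Every pair of adjacent words it emits occurred somewhere in the training text, so the output is locally fluent; whether the whole is useful, wrong, or nonsense is exactly the distinction such a model cannot draw.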
jxy about 2 years ago

> Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence.

I would think that ChatGPT's response about morality is a typical canned response written by OpenAI.

text-davinci-003 completes my question quite straightforwardly:

> What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

> I believe that preserving the natural state of other planets is of utmost importance. Not only could altering them have unknown consequences, but it could also be detrimental to the environment and the life forms that inhabit them. We should strive to protect the natural state of other planets, as it is our responsibility to ensure that our actions do not cause harm to other worlds. We should also take into consideration the potential for future exploration and colonization of these planets, and ensure that our actions do not impede their potential for future development.
paradon about 2 years ago

Services like ChatGPT are the perfect answer for VCs desperate to find the next big piece of poop they can sell to their investors. Far easier to explain than crypto, with use cases that sound impressive even though they don't stand up to even minimal scrutiny.

Hold on to your wallets.
sbdaman about 2 years ago

This article summarizing lecture notes from Kenneth A. Taylor is worth reading: https://www.bostonreview.net/articles/kenneth-taylor-robots-are-coming-ethics-politics-and-society-age/

The substance of the article begins after the quote from John Stuart Mill.

> AI-as-engineering isn't particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things.

> If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by "thinking" that peculiar thing that we humans do, done in precisely the way that we humans do it.
TeffenEllis about 2 years ago

That's a very interesting read. I'm definitely biased towards LLMs being more than what the naysayers think of their capabilities. There's no doubt that these systems are not thinking or performing cognition. They are autocomplete systems based on tremendous amounts of weighted data.

IMO the problem here is that we have two camps of thought arguing for the extreme ends of an undefined claim. The tech companies market their LLM products as intelligent because they can perform text completions that are currently useful for simple tasks.

For example, I used ChatGPT to draft an email to my landlord asking to remove a late fee that occurred because my auto-payment authorization expired. I ran the output through Grammarly and ended up with a polite but curt email that would've taken me 45 minutes to compose — time I'd rather spend on something else.

I feel like these articles minimize the immediate use of LLMs because of a subconscious implication: most interactions between people don't require intelligence. And their jobs are next on the chopping block.

The other part is less understood by both parties. Getting an LLM to perform something that looks like cognitive behavior isn't impossible, but it sure is expensive. As we speak, there are tools in development that can take a user's prompt and compose it into what superficially looks like a human's train of thought. The results are significantly more accurate than an off-the-shelf LLM's.

In my opinion, academics are struggling to define why this phenomenon occurs in the first place. And with such a focus on how LLMs don't work like humans, they miss the point.

We understand that non-human life can be intelligent in ways that we don't fully understand. Elephants, dolphins, and octopuses are intelligent without having human-like cognitive abilities. I think the same goes for LLMs. They will achieve a form of intelligence that is uniquely their own and will adapt to accommodate us. Not the other way around.
alfor about 2 years ago

I don't know what insight we expect of Chomsky at this point. He doesn't seem to understand how it's going and where it's going.

At this point AI is only limited by our capacity to create memeable flaws. If you can create a criticism of ChatGPT that is concise, accurate and funny, it will go viral and get fixed fast.

Yes, at the moment its intelligence is very wide but not that deep (brute-force allusions); that will get fixed, and it will be way more efficient at the same time (more compression of information). It doesn't have real experience/connection to our world: expect that to change with video/audio information and robotic manipulators. It says falsehoods and doesn't know what it doesn't know: actually it's in the API but not exposed in ChatGPT. Expect that to get fixed also. Morality is based on iteratively playable games; that can get baked into it also.
rvz about 2 years ago

> Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that's description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

Absolutely. The lack of transparent reasoning and deep explanation is indeed where LLMs and black-box AIs always fall short, and it makes them totally untrustworthy for industries that carry a lot of risk, such as finance, medical, transportation and legal, where the financial risk and impact is in the trillions of dollars.

This is why ChatGPT, for example, has so very limited use cases (summarization is the only one other than bullshit generation), and why the hype train is attempting to push this snake oil onto the masses to dump their VC money before regulations catch up.

LLMs have become the crypto hype of AI. Like how crypto's *only* use case is *world-wide cheap instant money transfer* into wallets, ChatGPT and LLMs are only useful for summarization of existing text.

Apart from that, there are no other use cases. Even if there are others, the customer in this case is close to no one. Both have trust issues, and the simple reason is *regulations*.
labrador about 2 years ago

It's weird to me that nobody thinks self-driving cars know (are aware of) their way home, but LLMs are somehow supposed to know what they are talking about.
bob1029 about 2 years ago

I've been doing a lot of comparisons between ChatGPT and a fighter jet lately.

Unless you have some amount of skill and awareness around the domain you are intending to operate in, it is likely you won't even be able to get the metaphorical engines started. Anyone who has previously mastered some domain is almost certainly going to find various degrees of value here. Knowing it doesn't give you 100% answers is half the battle. Being able to derive the actual ground truth is the other. Those stuck somewhere in between may be most at risk of LLM hallucinations talking them into Narnia-tier solutions - i.e., crashing the jet.

For example, I'd consider myself semi-experienced in software development. I don't use ChatGPT to write C# methods and then directly paste them into the codebase. I use it to quickly document abstract concepts, which has the amazing effect of clearing my mental cache. I find I can usually type my 500-word brain dumps *verbatim* into ChatGPT, append "as pseudocode", and then press the GC button on the side of my head. I can't recall the last time I've been this productive.
obblekk about 2 years ago

> Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

Chomsky has a great point here. Humans have such a strong prior for the world that they polarize their beliefs quickly. For most humans, for most thoughts, saying "80% chance X is true", "I believe X is true", and "I 100% believe X is true" are identical statements.

This is such a strong tendency that most of the Enlightenment was the radical idea that beliefs can be partially updated based on reason and evidence, with less appeal to polarizing emotion. This shows up in day-to-day learning as well, as we predict our way around the world assuming almost everything will behave as it did last time.

In this way, AI learning and human learning are in fact different.

But Chomsky is wrong about some key points. First, an AI that doesn't polarize its beliefs like humans could still achieve human-level cognition. It may not come to the same conclusions in the same way, but I don't think this proves it cannot come to conclusions at all.

Chomsky is also wrong that GPT-3.x is not a step in that direction. Most of his observations / screenshots are heavily limited by the trust & safety module, which was programmed by humans, not learned. Sydney clearly proved the true capabilities.

Finally, I have to say I'm super impressed that Chomsky, 96 years old with many lifetimes' worth of contributions to humanity, is still reading dense technical papers such as those on LLMs' ability to learn non-human grammars. I hope he's able to continue experimenting, reading, and learning.
poptartVStrudal about 2 years ago

I have used ChatGPT to read articles and summarize them for me, just to see how well it understood the information it was "reading". It takes me forever to read dry articles when doing research. The AI helps me get a good grasp on an article, but it's not preventing me from having to go back and find important details to cite. I have also experimented with asking it to write me responses. They're extremely well written, but it still doesn't save me time, since I still have to edit them, fix the grammar, and check them against the original articles. At first it felt like cheating, but after playing with it for a few days it's helping me get better at formatting my own responses. Instead of searching "how to write a 5-paragraph essay", I can ask ChatGPT to do it so I can see how mine should look. I'm sure people are asking it to do this and that, then copying and pasting the responses without proofreading or rewriting. But for me, it's been a learning tool. It's like having my own tutor for free.
meany about 2 years ago

I think the thing this misses is that the majority of work and activities don't require much intelligence, but they are the foundation of careers, hobbies, and activities that provide people meaning and value. I have a friend who used ChatGPT to write a letter to his wife on their 15th anniversary. My son's using it to write emails to friends (hopefully not school work). It doesn't need to take over the world to replace the vast majority of average people's intellectual contributions.
charles_f about 2 years ago

I find interesting the reaction of a lot of people to this piece, calling it out of touch and bringing up that ChatGPT is super useful. I don't think such claims are made; rather, Chomsky, Roberts and Watumull open with:

> These programs have been hailed as the first glimmers on the horizon of artificial general intelligence [...] that day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments.

The article is not claiming a lack of usefulness, but the fact that this is not a human-like intelligence, as has been claimed "in hyperbolic headlines".

What I get from it is that while the technology is generating a lot of enthusiasm, it remains a conversational tool rather than actual AI, and exhibits the limitations that come with that. It is in fact akin to a more advanced search engine, working probabilistically, mimicking what a conversation on a topic looks like. It is incapable of building a system of beliefs, of morality, or of critical thinking. It is not really inventive but rather plagiarist. It cannot infer or deduce. It doesn't "learn" the same way humans do.
nashashmi about 2 years ago

A lot of the information in this is skewed towards the alarmist rather than the rational. I have been playing with Bing AI, and I have yet to encounter any of the sassy responses others have famously gotten. However, I've been trying to see its limits, and it seems it has two primary capabilities. The first is retrieving information from the web, and the second is generating content that reflects the information it got. I asked Bing AI what this second capability is called, and it responded that it is generative AI, which means it can generate either text, graphics or sound.

I am still examining its ability to connect various pieces of information with a kind of analysis that does not have a mathematical relationship. I am not seeing any ability to do so. It seems to be only outputting information that it finds on the web, then looking up the term for each word that it finds with a dictionary and other search results, and from there creating a mathematical graph model of the relationships between words.

As for the generative AI part, it seems it can adopt various styles of response and language art, plus give the response in a particular structure and sequence of thought.

I think it is a very clever and complex hack to mimic human language.
jw1224 about 2 years ago

> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.

If that's not the case, then what, exactly, *are* we doing when asked to respond to a question?

> Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that "John is too stubborn to talk to" means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd?

They don't [0].

> True intelligence is also capable of moral thinking. [...] But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.

ChatGPT's morality filters are outstanding. Yes, "jailbreaks" exist... But any true intelligence would be capable of using language to explore ideas which may be immoral.

[0] https://twitter.com/jayelmnop/status/1633635146263052288
xnx about 2 years ago

Refreshing to read something so well written: clear, supported with examples, but still drops in some rare words to expand your vocabulary.

His explanation seems sound, but probably underplays the still immense value of large language models as a mega-autocomplete.
spion about 2 years ago

I tried some of the examples from the article:

> Me: Does the phrase "John is too stubborn to talk to" typically describe a situation where John doesn't talk with anyone?

> ChatGPT: No, the phrase "John is too stubborn to talk to" typically means that John is unwilling to engage in conversation or listen to other people's opinions. It suggests that John's stubbornness is preventing him from having a productive conversation or resolving a disagreement. It does not necessarily mean that John does not talk with anyone at all.

> Me: Does the phrase "John is too lazy to work" typically describe a situation where John doesn't work?

> ChatGPT: Yes, the phrase "John is too lazy to work" typically means that John is not working because of his laziness or lack of motivation. It suggests that John is not willing to put in the effort required to complete his work or that he is procrastinating instead of doing his job. It implies that John is not productive and is not fulfilling his responsibilities in the workplace. Therefore, the phrase usually describes a situation where John does not work.
xdavidliu about 2 years ago

Reminds me of what happened when some junior animators attempted to show Hayao Miyazaki their AI-generated zombie animation a few years ago:

https://www.youtube.com/watch?v=ngZ0K3lWKRc
olalonde about 2 years ago

> Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

I'd bet a lot of money that this opinion piece will turn out as good as Paul Krugman's infamous Internet quote. It's kind of sad. No, we haven't reached AGI yet, but it's nothing to laugh at. IMO it will have an immense impact on the world.

    $ clevercli poem "noam chomsky fading into irrelevance"

    Noam Chomsky, once a voice so strong,
    Now fades into obscurity, his time has gone.
    His ideas and theories once brought light,
    But now they seem to disappear out of sight.

    A pioneer whose words were never dull,
    His influence would always make one think full.
    But as time goes by, his voice seems to faint,
    The world has moved on, his words seem quaint.

    Yet his legacy will always remain,
    The impact of his ideas will never wane.
    For those who listen and deeply care,
    Chomsky's wisdom will always be there.
throwawayapples about 2 years ago

ChatGPT "lies" like crazy, but that's what we asked it to do. It's hard to generate fresh and innovative content without also inventing facts from time to time; it's not too dissimilar from the imaginative play that kids engage in. Kids aren't always overly concerned about sticking to the facts!

ChatGPT lacks an inherent sense of truth. It needs to be specifically taught what is true and what is not, and even then it doesn't truly *understand* truth. It also can't make intuitive leaps like humans; its intelligence is more deductive than intuitive.

Use it for what it's good at. It is a good tool for refining your ideas, but don't expect it to be accurate. As soon as people play with it for a few days, they start to realize when it's lying and when it's not.

(Personally, I tend to hope that it continues to aggressively lie, so that people don't get lulled into a false sense of security. Better to know that AI can confidently lie to you, so that you're not as easily fooled.)
poptartVStrudal · about 2 years ago

For those who use ChatGPT to write their school work or work presentations or whatever: aren't you worried about your credibility if you got caught? Just because ChatGPT doesn't plagiarize doesn't mean it's not highly frowned upon. Or is it? I'm sure classroom instructors, HR reps, PR teams, etc. use it too.
nuc1e0n · about 2 years ago

From my perspective, this is merely an opinion piece without much scientific evidence to back up its opinions. While personally I believe ChatGPT's responses aren't particularly novel or reliable, the same can be said for most people as well. Beyond that, the means by which LLMs produce responses don't factor into whether they are considered a success. Chomsky's philosophical views on the matter, while very eloquent and similar to my own, don't add much to the existing discussion on the topic. Something more scientific than him restating his long-held and well-known views would be nice. Pseudoscience only serves to reduce the legitimacy of the argument, i.e. stating something is limited "because I say so".
dang · about 2 years ago

Related ongoing thread:

*The False Promise of Chomskyism* - https://news.ycombinator.com/item?id=35080576 - March 2023 (114 comments)
roflyear · about 2 years ago

The amazing thing about these models is how polarizing they are. You have two groups of people, largely, and both think people in the other camp are morons:

- People who think that these models will (at some point, sooner or later) replace a lot of the work we're doing, and do it better than people can.

- People who are impressed with the models but believe that their uses are fairly limited, and that they therefore won't pose a threat to the work of many individuals.
asow92 · about 2 years ago

> It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a "just following orders" defense, shifting responsibility to its creators.

Is this not an OpenAI decision rather than an inherent limitation of LLMs in general?
Jack000 · about 2 years ago

I've seen a lot of Chinese room comparisons in these threads, and I just want to point out that the Chinese room is meant to be a thought experiment, not something you're supposed to actually build. If you take a step back, a working Chinese room is kind of more impressive than a human who "merely" understands Chinese: such a room can store any kind of information in unlimited quantities, whereas the human will always be limited to producing language. In a way, the room is a more general form of intelligence than the human.

IMO LLMs represent a form of super-human AGI that has been artificially limited by its training context. I think it's not really accurate to say that LLMs are "narrow" AI, because they likely generalize as much as is theoretically possible *given their data and training context*, and are only narrow due to the lack of external context and grounding.
hnthrowaway0315 · about 2 years ago

The human mind is not a statistical engine, for sure, but most jobs are granular enough for statistical engines.
b800h · about 2 years ago

Reminds me of this: https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306
zvmaz · about 2 years ago

> The theory that apples fall to earth because mass bends space-time (Einstein's view) is highly improbable, but it actually tells you why they fall.

Does "highly improbable" here mean "not previously seen"?
Ataraxy · about 2 years ago

Meh. Just like anything else on the internet, value can be derived if the person using it has both critical thinking and the capacity to spot the flaws or falsehoods.

In this sense it's literally no worse than reading random "news" articles and somehow taking them all as fact at face value.

If you fall into the trap of taking everything ChatGPT tells you as gospel, you've already lost.

As for the so-called false promise, this is "old man yells at cloud" stuff.
seydor · about 2 years ago

EXCLUSIVE: ChatGPT Responds to Noam Chomsky

https://citymeme.com/post/63886
fellellor · about 2 years ago

What a pile of garbage… True intelligence is moral?! Morality is a byproduct of society. Ants have morality. Morality doesn't develop in a vacuum, without any necessity for it. I never expect anything sensible from an obtuse lunatic like Chomsky anyway.
peter303 · about 2 years ago

I wonder whether, with proper curation of training input, generative AIs could fix the deficiencies (creativity, morality) Dr. Chomsky points out. Right now the training model is put-everything-in data training. That may cause the uneven results we see in early chatbots.
verisimi · about 2 years ago

I think everyone seems confused about ChatGPT and AI in general. The issue is that it doesn't share our values. It can't.

Humans assume, as a baseline, that what is being expressed relates to the truth. That is not ChatGPT's goal. It wants to create something that gives the *appearance* of truth. Its aim is to sell or convince you of something; actually delivering the goods is incidental to its aim of providing what seems right.

Put simply, it generates bullshit; any truthful output it produces is accidental, since it's only attempting to get your "sign off".

The danger is that we all start to live in bespoke worlds curated for us by AI that cater to our individual predilections. Because it would be working with our existing biases, we could end up living in an echo chamber that perpetually reinforces those biases with ever less engagement with reality. We may find it ever harder to discern reality from provided narratives.
2-718-281-828 · about 2 years ago

What would be the best / most accurate sub-1-hour intro to GPT and how it works, for someone like me who isn't up to date with current ML technology but has some experience working with ML and statistics?
pazimzadeh · about 2 years ago

If ChatGPT could:

1. estimate its confidence level for a given statement

2. give you sources for information

3. *under no condition, ever make up a source or reference*

then it would be really useful.
rhelz · about 2 years ago

Chomsky's worried about ChatGPT for largely the same reasons that Orwell was worried about Newspeak. I hope this example from Orwell helps: "The octopus of global capitalism has sung its swan song."

Obviously, that sentence was just pasted together from previously written slogans, metaphors which were once bright and powerful (the tentacles of an octopus squeezing the life out of its victim! Or the bittersweet beauty of a dying swan singing). Which is sinful enough, but the only reason such sentences can get written or published is obviously that the thought has died alongside the metaphors.

But that is how these LLMs work: take an input prompt, find what would likely have been said based on how we used language yesterday and in the past, and put it through a meat-grinding-style rephrasing machine until it is impossible to tell who originally said it, so it sounds original.

The seduction is that ChatGPT is so damn useful. As people lean on it, and their ability to think and say new things atrophies, a mental sclerosis might set in. And, just *because* our own language faculties have degraded, we might be incapable of even realizing what has happened to us.
teekert · about 2 years ago

xnx's archive link (which I can't reply to?) does not contain the last part of the conversation between Dr. Watumull and ChatGPT. Is that part in the NYT article? (I'm at my max...)
tomhallett · about 2 years ago

I have to respectfully disagree with Noam Chomsky's opinion piece on ChatGPT. While it's true that AI cannot replace human creativity and intelligence, it has the potential to revolutionize how we interact with and understand the world around us.

ChatGPT and other language models have already made significant strides in improving language translation, facilitating natural language processing, and even assisting in scientific research. While it's true that AI models like ChatGPT have their limitations and biases, we shouldn't dismiss their potential outright.

It's also worth noting that some of Chomsky's criticisms of ChatGPT feel misplaced or overly idealistic. For instance, Chomsky argues that ChatGPT fails to truly "understand" language, but this critique ignores the fact that humans themselves often use language without fully understanding its intricacies.

In any case, it's important that we approach the development and implementation of AI with a critical and ethical lens. Rather than outright dismissing AI models like ChatGPT, we should engage in ongoing conversations about how to use these technologies in responsible and beneficial ways.

Note: this comment was written by ChatGPT with the following prompts:

* Please write a comment suitable for the site Hacker News about this article, "The False Promise of ChatGPT" (https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html), where you are somewhat critical but overall positive about the article

* that looks great. but rewrite it in a more casual tone and sentence structure

* remove the first sentence

* remove the phrase "As a language model AI myself,"

* please rewrite the answer and take a stronger stand on an issue. it's ok if it's controversial or based in opinion
diego_moita · about 2 years ago

Off-topic but funny: a Marxist (Noam Chomsky) quoting a supporter of fascist dictators and a fierce anti-Marxist (Jorge Luis Borges).
throw_pm23 · about 2 years ago

How is this article different from a tired rehashing of Searle's "Chinese Room" argument, which never made much sense to begin with?

People argued the same way about computer chess: "it doesn't really understand the board, it is just checking all possibilities", etc. People like Chomsky used to say that a computer would never beat a master chess or Go player because it "lacks the imagination to come up with a strategy". No one makes that argument anymore. Von Neumann already remarked in the 1940s that AI is a moving goalpost, because as soon as something is achieved, it no longer seems intelligent.

Chomsky's arguments were already debunked by Norvig a decade ago. Instead of bothering to respond, he writes another highbrow dismissal in flowery prose.
fellellor · about 2 years ago

TL;DR: Chomsky says "them grapes are sour".
timomo · about 2 years ago

I don't agree with most of the comments in here. I also consider the human mind a gigantic pattern-matching loop, and I don't consider myself a layman...

I think Transformer models (like ChatGPT) can encode knowledge of the world into their representations, as well as work with that encoded world knowledge when predicting. Consider the example of the apple that falls: I am sure the embedding (the internal representation of words in ChatGPT) for "apple" contains some form of "physical objectness" that distinguishes it from a word like "vacation". The model can also put this "physical objectness" into context and infer what can and cannot happen when you let the apple go on earth vs. in outer space. Maybe it would be good for the sceptics to try ChatGPT and ask "What happens to X when you let it go from your hand on earth/in outer space? Please explain your reasoning," filling in X with any object or concept they can think of.