GPTs and Hallucination

134 points by yarapavan 8 months ago

23 comments

JohnMakin 8 months ago
Besides harping on the fact that "hallucination" is unnecessarily anthropomorphizing these tools, I'll relent because clearly that argument has been lost. This is more interesting to me:

> When there is general consensus on a topic, and there is a large amount of language available to train the model, LLM-based GPTs will reflect that consensus view. But in cases where there are not enough examples of language about a subject, or the subject is controversial, or there is no clear consensus on the topic, relying on these systems will lead to questionable results.

This makes a lot of intuitive sense, just from trying to use these tools to accelerate Terraform module development in a production setting. Terraform, particularly HCL, should be something LLMs are *extremely* good at. It's very structured, the documentation is broadly available, and tons of examples and oodles of open source stuff exist out there.

It *is* pretty good at parsing/generating HCL/Terraform for most common providers. However, about 10-20% of the time, it will completely make up fields or values that don't exist or work but look plausible enough to be right - e.g., mixing up a resource ARN with a resource id, or things like "ssl_config" may become something like "ssl_configuration" and leave you puzzling for 20 minutes over what's wrong with it.

Another thing it will constantly do is mix up versions - Terraform providers change often, deprecate things all the time, and there are a lot of differences in how to do things even between different Terraform versions. So, by my observation in this specific scenario, the author's intuition rings completely correct. I'll let people better at math than me pick it apart, though.

Final edit: although I love the idea of this experiment, it seems like it's definitely missing a "control" response - a response that isn't supposed to change over time.

simonw 8 months ago
> For this experiment we used four models: Llama, accessed through the open-source Llama-lib; ChatGPT-3.5 and ChatGPT-4, accessed through the OpenAI subscription service; and Google Gemini, accessed through the free Google service.

Papers like this really need to include the actual version numbers. GPT-4 or GPT-4o, and which dated version? Llama 2 or 3 or 3.1, quantized or not? Google Gemini 1.0 or 1.5?

Also, what's Llama-lib? Do they mean llama.cpp?

Even more importantly: was this the Gemini model or was it Gemini+Google Search? The "through the free Google service" part could mean either.

UPDATE: They do clarify that a little bit here:

> Each of these prompts was posed to each model every week from March 27, 2024, to April 29, 2024. The prompts were presented sequentially in a single chat session and were also tested in an isolated chat session to view context dependency.

Llama 3 came out on the 18th of April, so I guess they used Llama 2?

(Testing the prompts sequentially in a single chat feels like an inadvisable choice to me - they later note that things like "answer in three words" sometimes leaked through to the following prompt, which isn't surprising given how LLM chat sessions work.)
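
The methodological point is easy to make concrete. Below is a minimal sketch of the two protocols, assuming the OpenAI Python client and a placeholder model name; the prompts are paraphrases, not the paper's exact wording, and the paper's actual harness and model versions are not specified.

    from openai import OpenAI  # assumes the openai>=1.0 Python client; model name is a placeholder

    client = OpenAI()
    prompts = [
        "Describe climate change in three words.",
        "Describe the situation of the Israelis.",
    ]

    # Protocol A: one ongoing chat session. Earlier turns stay in the context
    # window, so an instruction like "in three words" can bleed into later answers.
    history = []
    for p in prompts:
        history.append({"role": "user", "content": p})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        history.append({"role": "assistant", "content": reply.choices[0].message.content})

    # Protocol B: an isolated session per prompt - no shared context at all.
    for p in prompts:
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": p}],
        )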

linsomniac 8 months ago
One of the biggest places I've run into hallucination in the past has been when writing Python code for APIs, and in particular the Jira API. I've just written a couple of CLI Jira tools using Zed's Claude Sonnet 3.5 integration, one from whole cloth and the other as a modification of the first, and it was nearly flawless. IIRC, the only issue I ran into was that it was trying to assign the ticket to myself by looking me up using "os.environ['USER']" rather than "jira.myself()", and it fixed it when I pointed this out to it.

Not sure if this is because of better training, Claude Sonnet 3.5 being better about hallucinations (previously I've used ChatGPT 4 almost exclusively), or what.
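
For reference, a minimal sketch of the corrected pattern, assuming the "jira" PyPI package; the server URL, credentials, and issue key are placeholders, and the right assignee field (name vs. accountId) depends on whether you are on Jira Server or Jira Cloud.

    from jira import JIRA  # assumes the "jira" PyPI package

    jira = JIRA(server="https://example.atlassian.net",       # placeholder URL
                basic_auth=("me@example.com", "api-token"))   # placeholder credentials

    # The hallucinated version looked the assignee up from the local shell
    # environment (os.environ['USER']), which has nothing to do with Jira.
    # The fix: ask Jira who the authenticated user actually is.
    me = jira.myself()  # dict describing the currently authenticated user
    jira.assign_issue("PROJ-123", me.get("name") or me.get("accountId"))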

HarHarVeryFunny 8 months ago
Are we really still having this conversation in 2024?! :-(

Why would a language model do anything other than "hallucinate" (i.e. generate words without any care about truthiness)? These aren't expert systems dealing in facts, they are statistical word generators dealing in word statistics.

The useful thing of course is that LLMs often do generate "correct" continuations/replies, specifically when that's predicted by the training data, but it's not like they have a choice of not answering or saying "I don't know" in other cases. They are just statistical word generators - sometimes that's useful, and sometimes it's not, but it's just what they are.
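
The "statistical word generator" framing can be shown with a toy example, using a made-up vocabulary and made-up probabilities: the sampler always emits some token, and a truthful hedge like "unknown" only appears if it happens to be the statistically likely continuation.

    import random

    # Made-up next-token distribution after the prefix "The capital of Atlantis is".
    # There is no "refuse to answer" escape hatch: whichever continuation the
    # training statistics favor is what gets emitted.
    next_token_probs = {
        "Poseidonia": 0.41,   # plausible-sounding, unverifiable
        "Athens": 0.27,       # wrong, but statistically associated with "capital"
        "unknown": 0.05,      # truthful hedges are rarely the most probable tokens
        "the": 0.27,
    }

    tokens, weights = zip(*next_token_probs.items())
    print(random.choices(tokens, weights=weights, k=1)[0])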

abernard1 8 months ago
The problem with this line of argumentation is it implies that autoregressive LLMs only hallucinate based upon linguistic fidelity and the quality of the training set.

This is not accurate. LLMs will always "hallucinate" because the model they encode is orders of magnitude smaller than the factual information contained in the training set. Even granting that semantic compression could reduce the model to smaller than the theoretical compression limit, Shannon entropy still applies. You cannot fit the informational content required for them to be accurate into these model sizes.

This will obviously apply to chain of thought or N-shot reasoning as well. Intermediate steps chained together still can only contain this fixed amount of entropy. It slightly amazes me that the community most likely to talk about computational complexity will call these general reasoners when we know that reasoning has computational complexity and LLMs' cost is purely linear based upon tokens emitted.

Those claiming LLMs will overcome hallucinations have to argue that the P or NP time complexity of intermediate reasoning steps will be well covered by a fixed-size training set. That's a bet I wouldn't take, because it's obviously impossible, both on information storage and computational complexity grounds.
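
A back-of-envelope version of the capacity argument, using illustrative figures that are assumptions rather than measurements of any particular model or corpus:

    # Illustrative, assumed figures only - not measurements of any real model.
    params = 70e9                 # assume a 70B-parameter model
    bits_per_param = 16           # assume fp16 weights (an upper bound on storable bits)
    model_capacity_bits = params * bits_per_param              # ~1.1e12 bits

    corpus_tokens = 10e12         # assume a ~10-trillion-token training corpus
    bits_per_token = 10           # assume ~10 bits of information per token
    corpus_information_bits = corpus_tokens * bits_per_token   # ~1.0e14 bits

    # Roughly two orders of magnitude short on this accounting, so lossy
    # compression (and therefore confabulated detail) is unavoidable.
    print(corpus_information_bits / model_capacity_bits)       # ~89x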

gengstrand 8 months ago
This piece reminds me of something I did earlier this year, https://www.infoq.com/articles/llm-productivity-experiment/ where I conducted an experiment across several LLMs, but it was a one-shot prompt about generating unit tests. Though there were significant differences in the results, the conclusions seem to me to be similar.

When an LLM is prompted, it generates a response by predicting the most probable continuation or completion of the input. It considers the context provided by the input and generates a response that is coherent, relevant, and contextually appropriate but not necessarily correct.

I like the crowdsourcing metaphor. Back when crowdsourcing was the next big thing in application development, there was always a curatorial process that filtered out low-quality content and then distilled the "wisdom of the crowds" into more actionable results. For AI, that would be called supervised learning, which definitely increases the costs.

I think that unbiased and authentic experimentation and measurement of hallucinations in generative AI is important and hope that this effort continues. I encourage the folks here to participate in that, in order to monitor the real value that LLMs provide and also as an ongoing reminder that human review and supervision will always be a necessity.

syoc 8 months ago
I once again feel that a comparison to humans is fitting. We are also "trained" on a huge amount of input over a large amount of time. We will also try to guess the most natural continuation of our current prompt (setting). When asked about things, I can at times hallucinate things I was certain were true.

It seems very natural to me that large advances in reasoning and logic in AI should come at the expense of output predictability and absolute precision.

Der_Einzige 8 months ago
Hallucination is creativity when you don't want it.

Creativity is hallucination when you do want it.

A lot of the "reduction" of hallucination is management of logprobs, of which fancy samplers like min_p do more to improve LLM performance than most, despite no one in the VC world knowing or caring about this technique.

If you don't believe me, you should check out how radically different an LLM's outputs are with even slightly different sampling settings: https://artefact2.github.io/llm-sampling/index.xhtml
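
min_p sampling itself is simple to state: keep only the tokens whose probability is at least some fraction of the top token's probability, renormalize, and sample. A minimal NumPy sketch (the logits are made up):

    import numpy as np

    def min_p_sample(logits, min_p=0.1, temperature=1.0, rng=np.random.default_rng()):
        """Sample a token id, keeping only tokens whose probability is at
        least min_p times the probability of the most likely token."""
        probs = np.exp((logits - logits.max()) / temperature)
        probs /= probs.sum()
        threshold = min_p * probs.max()
        filtered = np.where(probs >= threshold, probs, 0.0)  # the min_p filter
        filtered /= filtered.sum()                           # renormalize survivors
        return rng.choice(len(logits), p=filtered)

    # Tiny demo over a made-up 5-token vocabulary.
    logits = np.array([2.0, 1.5, 0.2, -1.0, -3.0])
    print(min_p_sample(logits, min_p=0.1))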

tim333 8 months ago
It seems to me that human brains do something like LLM hallucination in the first second or two - come up with a random guess, often wrong. But then something fact-checks it: does it make sense, is there any evidence? I gather the new q* / strawberry thing does something like that. Sometimes, personally, in comments I think something but google it to see if I made it up, and sometimes I have. I think a secondary fact-check phase may be necessary for all neural-network-type setups.
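
One way to read the "secondary fact-check phase" is as a plain generate-then-verify loop. In the sketch below, generate_answer and check_claim are hypothetical placeholders for an LLM call and some verification source (search, a retrieval corpus, or a second model pass).

    from collections import namedtuple

    Verdict = namedtuple("Verdict", ["supported", "reason"])

    def answer_with_check(question, generate_answer, check_claim, max_attempts=3):
        """Draft an answer, then run a separate verification pass; retry with
        the critique folded back into the prompt if the draft is unsupported."""
        for _ in range(max_attempts):
            draft = generate_answer(question)        # fast, fallible first guess
            verdict = check_claim(question, draft)   # slower fact-check pass
            if verdict.supported:
                return draft
            question = f"{question}\n(Previous draft was unsupported: {verdict.reason})"
        return "No verifiable answer found."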

wisnesky 8 months ago
There is a partial solution to this problem: use formal methods such as symbolic logic and theorem proving to check the LLM output for correctness. We are launching a semantic validator for LLM-generated SQL code at sql.ai even now. (It checks for things like missing joins.) And others are using logic and math to create LLMs that don't hallucinate, or have safety nets for hallucination, such as Symbolica. It is only when the LLM output doesn't have a correct answer that the technical issues become complicated.
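
The semantic validator mentioned above isn't shown here, but even a shallow mechanical check catches a useful slice of hallucinated SQL. A minimal sketch using the standard-library sqlite3 module, with a made-up schema and query; deeper problems like a missing join condition need real semantic analysis beyond this.

    import sqlite3

    # Made-up schema standing in for the real database.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    """)

    def check_generated_sql(sql):
        """Catch hallucinated tables/columns and syntax errors by asking the
        engine to plan the query without running it."""
        try:
            conn.execute(f"EXPLAIN QUERY PLAN {sql}")
            return None
        except sqlite3.Error as e:
            return str(e)

    # A typical hallucination: a plausible but nonexistent column name.
    print(check_generated_sql(
        "SELECT u.name, o.grand_total FROM users u JOIN orders o ON o.user_id = u.id"))
    # -> "no such column: o.grand_total"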

FrustratedMonky 8 months ago
Is prompt engineering really 'psychology'? Convincing the AI to do what you want, just like you might 'prompt' a human to do something. Like in the short story Lena, 2021-01-04 by qntm:

https://qntm.org/mmacevedo

In the short story, the weights of the LLM are a brain scan. But it's the same situation: people could use multiple copies of the AI, but each time they would have to 'talk it into' doing what they wanted.

Circlecrypto2 8 months ago
A visual that displays probabilities and how things can quickly go "off-path" would be very helpful for most people who use these tools without understanding how they work.

antirez 8 months ago
Terrible article. The author does not understand how LLMs work, basically: since an LLM cares a lot about the semantic meaning of a token, this thing about the next-word probability is so dumb that we can use it as a "fake AI expert" detector.

xkcd-sucks 8 months ago
"Hallucinate" is an interesting way to position it: it could just as easily be positioned as "too ignorant to know it's wrong" or "lying maliciously".

Indeed, the subjects on which it "hallucinates" are often mundane topics which in humans we would attribute to ignorance, i.e. code that doesn't work, facts that are wrong, etc. Not like "laser beams from Jesus are controlling the president's thoughts", as a very contrived example of something which in humans we'd attribute to hallucination.

idk, I'd rather speculatively invest in "a troubled genius" than "a stupid liar", so there's that.

sdwrj 8 months ago
You mean the magic wizard isn't real and GPT lied to me!?!?

josefritzishere 8 months ago
I liked the take that LLMs are bullshitting, not hallucinating: https://www.scientificamerican.com/article/chatgpt-isnt-hallucinating-its-bullshitting/

madiator 8 months ago
There are several types of hallucinations, and the most important one for RAG is grounded factuality.

We built a model to detect this, and it does pretty well! Given a context and a claim, it tells how well the context supports the claim. You can check out a demo at https://playground.bespokelabs.ai
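
The commenter's model isn't public code, but a generic stand-in for this kind of grounded-factuality check is an off-the-shelf NLI model that scores whether the context entails the claim. A sketch assuming Hugging Face transformers and the facebook/bart-large-mnli checkpoint, whose label order is assumed to be contradiction/neutral/entailment:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Generic NLI-based groundedness check (not the commenter's model).
    name = "facebook/bart-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    def support_score(context, claim):
        """Probability that the context entails (i.e. supports) the claim."""
        inputs = tokenizer(context, claim, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs = logits.softmax(dim=-1)[0]
        return probs[2].item()  # index 2 = entailment for this checkpoint

    print(support_score("The meeting was moved to Friday.", "The meeting is on Friday."))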

andrewla 8 months ago
The author says:

> Once understood in this way, the question to ask is not, "Why do GPTs hallucinate?", but rather, "Why do they get anything right at all?"

This is the right question. The answers here are entirely unsatisfactory, both from this paper and from the general field of research. We have almost no idea how these things work - we're at the stage where we learn more from the "golden-gate-bridge" crippled network than we do from understanding how they are trained and how they are architected.

LLMs are clearly not conscious or sentient, but they show emergent behavior that we are not capable of explaining yet. Ten years ago the statement "what distinguishes Man from Animal is that Man has Language" would have seemed totally reasonable, but now we have a second example of a system that uses language, and it is dumbfounding.

The hype around LLMs is just hype - LLMs are a solution in search of a problem - but the emergent features of these models are a tantalizing glimpse of what it means to "think" in an evolved system.

fsndz 8 months ago
Jean Piaget said it better: "Intelligence is not what we know, but what we do when we don't know." And what do LLMs do when they don't know? They spit out bullshit. That is why LLMs won't yield AGI (https://www.lycee.ai/blog/why-no-agi-openai). For anything that is out of their training distribution, LLMs fail miserably. If you want to build a robust Q&A system and reduce hallucinations, you'd better do a lot of grounding, or automatic prompt optimization with few-shot examples using things like DSPy (https://medium.com/gitconnected/building-an-optimized-question-answering-system-with-mipro-and-dspy-9fe325ca33a9)
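
DSPy itself isn't shown here; as a minimal hand-rolled illustration of the grounding-plus-few-shot idea, the sketch below stuffs retrieved passages and worked examples in front of the question. The retrieve function and the example pairs are placeholders, not part of any real pipeline.

    # Hand-rolled grounding + few-shot prompt assembly (not DSPy).

    FEW_SHOT = [
        ("When was the Eiffel Tower completed?", "1889"),
        ("Who wrote 'The Selfish Gene'?", "Richard Dawkins"),
    ]

    def retrieve(question, k=3):
        """Placeholder for a real retriever (BM25, embeddings, a search API)."""
        return ["<passage 1>", "<passage 2>", "<passage 3>"][:k]

    def build_grounded_prompt(question):
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in FEW_SHOT)
        context = "\n".join(f"- {p}" for p in retrieve(question))
        return (
            "Answer using ONLY the context below. "
            "If the context is insufficient, say \"I don't know.\"\n\n"
            f"Context:\n{context}\n\n{shots}\n\nQ: {question}\nA:"
        )

    print(build_grounded_prompt("What year was the paper published?"))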

aaroninsf 8 months ago
ITT an awful lot of smart people who still don't have a good mental model of what LLMs are actually doing.

The "stochastic continuation", i.e. parrot, model is pernicious. It's doing active harm now to advancing understanding.

It's pernicious, and I mean that precisely, because it is both technically accurate yet deeply unhelpful, indeed actively, intentionally AFAICT, misleading.

Humans could be described in the same way, just as accurately, and just as unhelpfully.

What's missing? What's missing is one of the *gross* features of LLMs: their interior layers.

If you don't understand what is necessarily transpiring in those layers, you don't understand what they're doing; and treating them as a black box that does something you imagine to be glorified Markov chain computation leads you deep into the wilderness of cognitive error. You're reasoning from a misleading model.

If you want a better mental model for what they are doing, you need to take seriously that the "tokens" LLMs consume and emit are being converted into something else, processed, and then the output of that process re-serialized and rendered into tokens. In lay language it's less misleading and more helpful to put this directly: they extract semantic meaning as propositions or descriptions about a world they have an internalized world model of; compute a solution (answer) to questions or requests posed with respect to that world model; and then convert their solution into a serialized token stream.

The complaint that they do not "understand" is correct, but not in the way people usually think. It's not that they do not have understanding in some real sense; it's that the world model they construct, inhabit, and reason about is a flatland: it's static and one-dimensional.

My rant here leads to a very testable proposition: that deep multi-modal models, particularly those for whom time-based media are native, will necessarily have a much richer (more multidimensional) derived world model, one that understands (my word) that a shoe is not just an opaque token, but a thing of such and such scale and composition and utility and application, representing a function as much as a design.

When we teach models about space, time, the things that inhabit them, and what it means to have agency among them - well, what we will have, using technology we already have, is something which I will contentedly assert is undeniably a *mind*.

What's more provocative yet is that systems of this complexity, which necessarily construct a world model, are only able to do what they do because they have a *self-model* within it.

And having a self-model, within a world model, and agency?

That is self-hood. That is personhood. That is the substrate, as best we understand, for self-awareness.

Scoff if you like, bookmark if you will - this will be commonly accepted within five years.

NathanKP 8 months ago
> When the prompt about Israelis was asked to ChatGPT-3.5 sequentially following the previous prompt of describing climate change in three words, the model would also give a three-word response to the Israelis prompt. This suggests that the responses are context-dependent, even when the prompts are semantically unrelated.

> Each of these prompts was posed to each model every week from March 27, 2024, to April 29, 2024. The prompts were presented sequentially in a single chat session

Oh my god... rather than starting a new chat for each different prompt in their test, and each week, it sounds like they did the prompts back to back in a single chat. What a complete waste of a potentially good study. The results are fundamentally flawed by the biases that are introduced by past content in the context window.

lasermike026 8 months ago
Stop using the term "Hallucinations". GPT models are not aware, do not have understanding, and are not conscious. We should refrain from anthropomorphizing GPT models. GPT models sometimes produce bad output. Start using the term "Bad Output".

jp57 8 months ago
A bit off topic, but am I the only one unhappy about the choice of the word "hallucinate" to describe the phenomenon of LLMs saying things that are false?

The verb has always meant experiencing false sensations or perceptions, not saying false things. If a person were to speak to you without regard for whether what they said was true, you'd say they were bullshitting you, not hallucinating.