Apple puts "do not hallucinate" into prompts, it works

4 points by djhope99 | 9 months ago

4 comments
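
For context, the claim in the article is simply that Apple's system prompts contain a literal instruction along the lines of "Do not hallucinate." Below is a minimal sketch of what that kind of instruction looks like in an ordinary chat-completion call, using the OpenAI Python client purely as a stand-in; the model name and prompt wording are placeholders, not Apple's actual prompt or tooling.

```python
# Illustrative stand-in only: Apple Intelligence runs on Apple's own models,
# not this API, and the prompt wording here is hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Do not hallucinate. "
                "If you are unsure of a fact, say so instead of guessing."
            ),
        },
        {"role": "user", "content": "Summarize the key points of this email thread."},
    ],
)
print(response.choices[0].message.content)
```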

ggm | 9 months ago
I'd like a rational explanation of how the LLM interprets "don't hallucinate". Is it perhaps "translated" internally to the functional equivalent of a higher confidence check on output?

Otherwise, I think it's baloney. I know there is not a simple linear mapping from plain English to the ML, but the typed word clearly is capable of being parsed and processed; it's the "somehow" I'd like to understand better. What would this do to the interpretation of paths through the weights?

Pretty much 'citation needed'.
TillE | 9 months ago
Everything about prompt engineering is just the voodoo chicken.

https://wiki.c2.com/?VoodooChickenCoding
[Comment #41177492 not loaded]
unlisted7347 | 9 months ago
Interestingly, negative prompts for Stable Diffusion (like "deformed hands") have a similar effect. How does the LLM decide what counts as a hallucination? Mayhaps it double-checks itself? But probably it became self-aware.
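
In Stable Diffusion, the negative prompt is encoded like a second prompt and used as the "unconditional" input during classifier-free guidance, so sampling is steered away from those concepts rather than toward them. A minimal sketch with Hugging Face diffusers; the model ID and prompt text are arbitrary examples used to illustrate the mechanism, not anything from the article.

```python
# Sketch of the negative-prompt mechanism mentioned above, using Hugging Face
# diffusers; model ID and prompts are arbitrary examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a portrait photo of a person waving",
    # Encoded like a second prompt and substituted for the unconditional
    # embedding in classifier-free guidance, pushing samples away from it.
    negative_prompt="deformed hands, extra fingers",
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```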
sva_9 months ago
X doubt