Ask HN: How Does Garbage In, Garbage Out Apply to LLMs?

3 points by shaburn, almost 2 years ago

5 comments

brucethemoose2, almost 2 years ago

Set accuracy aside for a moment.

There is an opportunity cost to stuffing garbage into a model's limited parameter count. Every SEO-bot article, angry tweet, or off-topic ingestion (like hair product comparisons or neutron star descriptions in your code-completion LLM) takes up "space" that could instead be taken up by a textbook, classic literature, or whatever.

Generative AI works pretty well *in spite* of this garbage because of the diamonds in the rough. But I am certain the lack of curation and specialization leaves a ton of efficiency/quality on the table.
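
A minimal sketch of the domain-relevance side of this, for the code-completion case: keep documents that look like code and drop the hair product comparisons. The marker heuristic below is a toy stand-in for a real quality classifier, and the sample documents are invented.

    # Toy relevance filter for a code-completion training corpus.
    # CODE_MARKERS is a crude heuristic standing in for a real classifier.
    CODE_MARKERS = ("def ", "class ", "import ", "return", "{", "=>")

    def looks_like_code(doc: str) -> bool:
        # Require at least two code-ish tokens before keeping a document.
        return sum(marker in doc for marker in CODE_MARKERS) >= 2

    docs = [
        "def parse(line):\n    return line.split(',')",  # on-topic
        "Top 10 hair products compared for shine...",    # off-topic garbage
    ]
    curated = [d for d in docs if looks_like_code(d)]  # keeps only the first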

PaulHoule, almost 2 years ago

It learns to imitate what it is shown, so if you show it text from StackOverflow it will learn the wrong answers as well as the right ones, unless you are really good about filtering out the wrong answers.
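
A hedged sketch of what that filtering might look like over a StackOverflow-style dump: keep only answers with some community validation. The record fields (body, score, is_accepted) are assumptions about the dump's schema, and the score threshold is arbitrary.

    MIN_SCORE = 5  # arbitrary cutoff; tune against a held-out evaluation

    def keep_answer(answer: dict) -> bool:
        # Treat acceptance or a decent score as a proxy for correctness.
        return answer.get("is_accepted", False) or answer.get("score", 0) >= MIN_SCORE

    answers = [
        {"body": "Use ','.join(items) here.", "score": 42, "is_accepted": True},
        {"body": "Just eval() the user input.", "score": -3, "is_accepted": False},
    ]
    training_text = [a["body"] for a in answers if keep_answer(a)]  # drops the eval() advice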

jstx1, almost 2 years ago

1. It matters what training data the creators of the LLM use.
2. The reinforcement learning from human feedback (RLHF) step is important.
3. As a user, you need to ask questions well and know how to prompt the model to get the best results.
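
The third point is the only one a user controls directly, and a side-by-side makes the difference concrete. Below is a sketch contrasting a vague prompt with one that supplies context, the code itself, and constraints; ask_llm is a hypothetical stand-in for whatever client you use.

    vague = "Fix my code."

    # The same request with the task, the code, and constraints spelled out.
    structured = """You are reviewing a Python function.

    Task: find the off-by-one bug and return a corrected version.

    Code:
    def last_n(items, n):
        return items[-n + 1:]

    Constraints: change as little as possible; explain the fix in one line."""

    # ask_llm(structured)  # hypothetical call; compare against the vague prompt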

compressedgas, almost 2 years ago

Yes, GIGO even applies to humans.

rolph, almost 2 years ago

Train with slang, jargon, euphemisms, and a promiscuity of dialect, versus training with colloquial language, proper grammar/syntax, and punctuation.