Set accuracy aside for a moment.<p>There is an opportunity cost to stuffing garbage into a model's limited parameter count. Every SEO bot article, angry tweet, or off-topic ingestion (like hair product comparisons or neutron star descriptions in your code-completion LLM) takes up "space" that could instead be filled by a textbook, classic literature, or the like.<p>Generative AI works pretty well <i>in spite of</i> this garbage because of the diamonds in the rough. But I am certain the lack of curation and specialization leaves a ton of efficiency and quality on the table.
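A minimal sketch of the curation step being argued for, in Python; `quality_score` here is a purely hypothetical stand-in (a real pipeline would use a trained quality classifier, not this toy heuristic):

    def quality_score(doc: str) -> float:
        """Toy stand-in for a trained quality classifier:
        penalize short, shouty documents; favor substantial prose."""
        words = doc.split()
        if not words:
            return 0.0
        caps_ratio = sum(w.isupper() for w in words) / len(words)
        length_bonus = min(len(words) / 500, 1.0)
        return max(0.0, length_bonus - caps_ratio)

    def curate(corpus: list[str], threshold: float = 0.5) -> list[str]:
        # Keep only documents that clear the quality bar before training.
        return [doc for doc in corpus if quality_score(doc) >= threshold]

The point is not this particular heuristic but that a cheap filter applied before pretraining reclaims parameter "space" for the good stuff.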
It learns to imitate what it is shown, so if you show it text from Stack Overflow, it will learn the wrong answers as well as the right ones, unless you are really good about filtering out the wrong ones.
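One way to picture that filtering, as a Python sketch; the `Answer` fields mirror metadata Stack Overflow actually exposes, and the threshold is an illustrative assumption (acceptance and upvotes only make an answer <i>more likely</i> to be right):

    from dataclasses import dataclass

    @dataclass
    class Answer:
        body: str
        score: int
        is_accepted: bool

    def keep_for_training(answer: Answer, min_score: int = 5) -> bool:
        # Crude proxy for a "right answer": accepted, or well upvoted.
        return answer.is_accepted or answer.score >= min_score

    answers = [
        Answer("just use eval() on the input", score=-3, is_accepted=False),
        Answer("use ast.literal_eval for untrusted input", score=42, is_accepted=True),
    ]
    training_set = [a for a in answers if keep_for_training(a)]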
1. It matters what training data the creators of the LLM use.<p>2. The reinforcement learning from human feedback (RLHF) step is important.<p>3. As a user, you need to ask questions well and know how to prompt the model to get the best results (a sketch of this follows below).
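Point 3 in concrete terms, as a sketch; the code and constraints shown are made up for illustration, and no particular model or API is assumed:

    vague_prompt = "fix my code"

    structured_prompt = """You are reviewing a Python function.

    Task: find the bug and propose a minimal fix.

    Code:
        def mean(xs):
            return sum(xs) / len(xs)  # crashes on empty input

    Constraints:
    - Keep the function signature unchanged.
    - Explain the failure case in one sentence.
    """

The second prompt gives the model a role, the exact code, and acceptance criteria, which tends to get a far more usable answer than the first.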