
Overcoming the limits of current LLMs

119 points by sean_pedersen 10 months ago

21 comments

mitthrowaway2 10 months ago
LLMs don't only hallucinate because of mistaken statements in their training data. Hallucination comes hand-in-hand with the model's ability to remix, interpolate, and extrapolate answers to questions that aren't directly answered in the dataset. For example, if I ask ChatGPT a legal question, it might cite as precedent a case that doesn't exist at all (but which seems plausible, being interpolated from cases that do exist). That's not necessarily because it drew the case from a TV episode. It works the same way that GPT-3 wrote news releases that sounded convincing, matching the structure and flow of real articles.

Training only on factual data won't solve this.

Anyway, I can't help but feel saddened sometimes to see our talented people and investment resources being drawn into developing these AI chatbots. These problems are solvable, but are we really making a better world by solving them?
RodgerTheGreat 10 months ago
One of the main factors that makes LLMs popular today is that scaling up the models is a simple and (relatively) inexpensive matter of buying compute capacity and scraping together more raw text to train them on. Without large and highly diverse training datasets to construct base models, LLMs cannot produce even the superficial appearance of good results.

Manually curating "tidy", properly licensed, and verified datasets is immensely more difficult, expensive, and time-consuming than stealing whatever you can find on the open internet. Wolfram Alpha is one of the more successful attempts in that curation-based direction (using good old-fashioned heuristic techniques instead of opaque ML models), and while it is very useful and contains a great deal of factual information, it does not conjure appealing fantasies of magical capabilities springing up from thin air and hands-off exponential improvement.
nyrikki 10 months ago
> ...manually curate a high-quality (consistent) text corpus based on undisputed, well curated wikipedia articles and battle tested scientific literature.

This rests on the mistaken assumption that science is about objective truth. It confuses the map for the territory: scientific models are intended to be useful, not perfect.

Statistical learning versus symbolic learning is about existential quantification versus universal quantification, respectively.

All models are wrong, some are useful, and that applies even to the most unreasonably accurate ones like QFT and GR. Spherical cows, however useful, are hotly debated outside the didactic half-truths of low-level courses.

The corpus the article seeks doesn't exist in academic circles, only in popular science, where people don't see that practical, useful models are far more important than "correct" ones.
lsy 10 months ago
We can't develop a universally coherent data set because what we understand as "truth" is so intensely contextual that we can't hope to cover the amount of context needed to make these things work how we want, not to mention the numerous social situations where writing factual statements would be awkward or disastrous.

Here are a few examples of statements that are not "factual" in the sense of being derivable from a universally coherent data set, and that we would nevertheless expect a useful intelligence to be able to generate:

"There is a region called Hobbiton where someone named Frodo Baggins lives."

"We'd like to announce that Mr. Ousted is transitioning from his role as CEO to an advisory position while he looks for a new challenge. We are grateful to Mr. Ousted for his contributions and will be sad to see him go."

"The earth is round."

"Nebraska is flat."
darby_nine 10 months ago
Man, it seems like the ship has sailed on "hallucination", but it's such a terrible name for the phenomenon we see. It is a major mistake to imply the issue is with perception rather than structural incompetence. Why not just say "incoherent output"? It's actually descriptive and doesn't require bastardizing a word we already find meaningful to mean something completely different.
ainoobler 10 months ago
The article suggests a useful line of research: train an LLM to detect logical fallacies and then see whether that can be bootstrapped into something useful, because it's pretty clear that the issues with LLMs come down to a lack of logical capabilities. If an LLM were capable of logical reasoning, it would be obvious when it was generating made-up nonsense instead of referencing existing sources of consistent information.
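A minimal sketch of how that bootstrapping might look in practice, assuming nothing beyond a generic `llm(prompt)` completion helper (a hypothetical stand-in, not any specific client): the model's output is passed through a fallacy-checking prompt and regenerated when the check fails.

```python
# Sketch: use a second pass of the same (or another) model as a logical-consistency
# checker before returning an answer. `llm` is a hypothetical completion helper.
from typing import Callable

Llm = Callable[[str], str]

FALLACY_CHECK = (
    "You are a strict logic reviewer. List any logical fallacies, unsupported "
    "leaps, or fabricated references in the answer below. Reply 'OK' if none.\n\n"
    "Question: {question}\n\nAnswer: {answer}"
)

def answer_with_logic_check(llm: Llm, question: str, max_retries: int = 2) -> str:
    answer = llm(question)
    for _ in range(max_retries):
        verdict = llm(FALLACY_CHECK.format(question=question, answer=answer))
        if verdict.strip().upper().startswith("OK"):
            return answer
        # Feed the critique back and ask for a corrected answer.
        answer = llm(
            f"{question}\n\nYour previous answer had these problems:\n{verdict}\n"
            "Rewrite the answer, fixing them and citing only sources you are sure exist."
        )
    return answer
```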
RamblingCTO 10 months ago
My biggest problem with them is that I can't quite get them to behave the way I want. I built myself a "therapy/coaching" Telegram bot (I'm healthy, but I like to reflect a lot, no worries). I even built a self-reflecting memory component that generates insights (sometimes spot on, sometimes random af). But the more I use it, the more I notice that neither the memory nor the prompt matters much. I just can't get it to behave like a therapist would. In other words: I can't find the inputs that produce the desired prediction from the SOTA LLMs. And I think that's a big problem if they're going to be more than shallow hype.
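For concreteness, here is one way such a self-reflecting memory component might be wired up. The `llm` helper and the prompts are hypothetical illustrations of the setup described, not the commenter's actual bot.

```python
# Sketch: periodically distill "insights" from the conversation and prepend them
# to the system prompt of later turns. `llm` is a hypothetical completion helper.
from typing import Callable, List

Llm = Callable[[str], str]

class ReflectiveMemory:
    def __init__(self, llm: Llm, reflect_every: int = 5) -> None:
        self.llm = llm
        self.reflect_every = reflect_every
        self.history: List[str] = []
        self.insights: List[str] = []

    def add_turn(self, user: str, bot: str) -> None:
        self.history.append(f"User: {user}\nBot: {bot}")
        if len(self.history) % self.reflect_every == 0:
            # Self-reflection step: extract durable observations about the user.
            summary = self.llm(
                "From this coaching conversation, extract up to 3 durable insights "
                "about the user (values, recurring themes, goals):\n\n"
                + "\n\n".join(self.history[-self.reflect_every:])
            )
            self.insights.append(summary)

    def system_prompt(self) -> str:
        base = "You are a reflective coach. Ask open questions; avoid giving advice too early."
        if not self.insights:
            return base
        return base + "\n\nKnown insights about the user:\n" + "\n".join(self.insights)
```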
trte9343r4 10 months ago
> One could spin this idea even further and train several models with radically different world views by curating different training corpi that represent different sets of beliefs / world views.

You can get good results by combining different models in a chat, or even the same model with different parameters. A model usually gives up on a hallucination when challenged; sometimes it pushes back and provides an explanation with sources.

I have a script that puts models into a dialog, moderates the discussion, and takes notes. I run this stuff overnight, so getting multiple candidate answers speeds up iteration.
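A rough sketch of what such a moderated dialog can look like, assuming a hypothetical `llm(model, prompt)` helper in place of whatever clients the commenter's script actually uses.

```python
# Sketch: one model answers, another challenges the answer, and the loop keeps
# notes; the answer is revised until the challenger agrees or the budget runs out.
from typing import Callable, List, Tuple

Llm = Callable[[str, str], str]  # (model_name, prompt) -> completion

def debate(llm: Llm, question: str, answerer: str, challenger: str,
           rounds: int = 3) -> Tuple[str, List[str]]:
    answer = llm(answerer, question)
    notes: List[str] = []
    for _ in range(rounds):
        challenge = llm(
            challenger,
            f"Question: {question}\nProposed answer: {answer}\n"
            "Point out anything unsupported or likely hallucinated, or reply 'AGREED'.",
        )
        notes.append(challenge)  # the moderator's running notes
        if "AGREED" in challenge.upper():
            break
        # Let the answering model defend itself with sources or back down.
        answer = llm(
            answerer,
            f"Question: {question}\nYour answer: {answer}\nCritique: {challenge}\n"
            "Either defend your answer, citing sources, or produce a corrected answer.",
        )
    return answer, notes
```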
fatbird 10 months ago
In my mind, LLMs are already fatally compromised. Proximity matching via vector embeddings, which offers no guarantees of completeness or correctness, has already surrendered the essential advantage of technological advances.

Imagine a dictionary where the words are only mostly in alphabetical order. If you look up a word and don't find it, you can't be certain it's not in there. It's as useful as asking someone else, or several other people, but its value *as a reference* is zero, and there's no shortage of other people on the planet.
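To make the dictionary analogy concrete, here is a toy top-k embedding lookup, with random vectors standing in for a real encoder. Nothing in the procedure lets the caller distinguish "not in the corpus" from "not retrieved".

```python
# Sketch: cosine-similarity top-k retrieval. A relevant document may simply not
# rank in the top k, so a miss is not evidence of absence.
import numpy as np

rng = np.random.default_rng(0)
corpus = ["alpha", "beta", "gamma", "delta"]
corpus_vecs = rng.normal(size=(len(corpus), 8))                 # pretend document embeddings
corpus_vecs /= np.linalg.norm(corpus_vecs, axis=1, keepdims=True)

def top_k(query_vec, k=2):
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = corpus_vecs @ query_vec                            # cosine similarity
    best = np.argsort(-scores)[:k]
    return [corpus[i] for i in best]

print(top_k(rng.normal(size=8)))                                # e.g. ['gamma', 'alpha']
```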
FrameworkFred 10 months ago
I'm playing around with LangChain and LangGraph (https://www.langchain.com/), and it seems like they enable just the sort of mechanisms mentioned.
wokwokwok 10 months ago
Does anyone really believe that having a good corpus will remove hallucinations?

Is this article even written by a person? Hard to know; they have a real blog with real articles, but stuff like this reads strangely. Maybe it's just not a native English speaker?

> Hallucinations are certainly the toughest nut to crack and their negative impact is basically only slightly lessened by good confidence estimates and reliable citations (sources).

> The impact of contradictions in the training data.

(was this a prompt header you forgot to remove?)

> LLM are incapable of "self-inspection" on their training data to find logical inconsistencies in it but in the input context window they should be able to find logical inconsistencies.

Annnnyway...

Hallucinations cannot be fixed by a good corpus in a non-deterministic (i.e. temp > 0) LLM system where you've introduced a random factor.

Period. QED. If you think they can, do more reading.

The idea that a good corpus can *significantly improve* the error rate is an open question, but the research I've seen *tends* to fall on the side of "to some degree, but curating a 'perfect' dataset like that, of a sufficiently large size, is basically impossible".

So it's a pipe dream.

Yes, if you could have a perfect corpus, you would absolutely get a better model.

...but *how* do you plan to *get* that perfect corpus of training data?

If it were that easy, the people spending *millions and millions of dollars* making LLMs would, I guess, probably have come up with a solution for it. They're not stupid. If you could easily do it, it would already have been done.

My $0.02: this is a dead end of research, because it's impossible.

Using LLMs that are fine-tuned to evaluate the output of *other* LLMs, and using multi-sample / voting to reduce the incidence of hallucinations that make it past the API barrier, is both actively used and far, far more effective.

(i.e. it doesn't matter if your LLM hallucinates 1 time in 10 if you can reliably *detect* that 1 instance, sample again, and return a non-hallucination.)

Other solutions... I'm skeptical; most of the ones I've seen haven't worked when you actually try to use them.
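A minimal sketch of that detect-and-resample pattern, with hypothetical `generate` and `judge` helpers standing in for the sampling model and the fine-tuned evaluator.

```python
# Sketch: sample an answer, have a separate judge flag likely hallucinations,
# and resample until the judge passes it or the budget runs out.
from typing import Callable, Optional

def answer_with_resampling(
    generate: Callable[[str], str],        # draws a fresh sample (temperature > 0)
    judge: Callable[[str, str], bool],     # (question, answer) -> True if it looks grounded
    question: str,
    max_samples: int = 5,
) -> Optional[str]:
    for _ in range(max_samples):
        candidate = generate(question)
        if judge(question, candidate):
            return candidate
    # Better to return nothing than a detected hallucination.
    return None
```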
luke-stanley 10 months ago
As I understand it, the Phi models are trained on much more selective training data; the TinyStories research was one of the starting points for that. They used GPT-4 to generate stories, encyclopedia-like training data, and code for Phi to learn from, which probably helps with logical structuring too. I think they did add in some real web data as well, but it was fairly selective.

Maybe something between Cyc and Google's math and geometry LLMs could help.
thntk 10 months ago
We know that high-quality data can help, as evidenced by the Phi models. However, this alone can never eliminate hallucination, because data can never be both consistent and complete. Moreover, hallucination is an inherent flaw of intelligence in general if we think of intelligence as (lossy) compression.
xarope 10 months ago
I do feel like we've reached a local maximum with the current state of LLMs, and researchers need to find something completely different to hit a new maximum (whether that is the global maximum or not, we'll know when we hail our new AI overlords).
DolphinAsa 10 months ago
I'm surprised he didn't mention the way we are solving the issue at Amazon. It's not a secret at this point: giving LLMs hands, i.e. agentic systems that run code or do things that get feedback in a loop, DRAMATICALLY reduces hallucinations.
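A bare-bones sketch of that execute-and-feed-back-errors loop (not Amazon's actual system). The `llm` helper is hypothetical, and any real deployment would sandbox the execution.

```python
# Sketch: have the model write a script, run it, and feed any traceback back in
# for another attempt, grounding the next generation in real feedback.
import subprocess
import sys
import tempfile
from typing import Callable

def solve_with_execution(llm: Callable[[str], str], task: str, max_attempts: int = 3) -> str:
    prompt = f"Write a standalone Python script that solves:\n{task}\nOutput only code."
    code = ""
    for _ in range(max_attempts):
        code = llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # Note: a timeout here raises subprocess.TimeoutExpired; handle it in real use.
        result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code                       # the script ran; its claims were at least executable
        # Ground the next attempt in the actual error instead of letting the model guess.
        prompt = f"{task}\n\nYour previous script failed with:\n{result.stderr}\nFix it. Output only code."
    return code
```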
fsndz 10 months ago
The thing is, we probably can't build AGI: https://www.lycee.ai/blog/why-no-agi-openai
Carrok 10 months ago
I wish he had gone into how to improve confidence scores, though I guess training on better data to begin with should improve results and thus confidence.
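For reference, one crude but common way to derive a confidence score without retraining at all is to use the per-token log-probabilities that most serving stacks can expose. The numbers below are made up for illustration.

```python
# Sketch: turn token log-probabilities into simple confidence proxies.
import math
from typing import Dict, List

def confidence_from_logprobs(token_logprobs: List[float]) -> Dict[str, float]:
    probs = [math.exp(lp) for lp in token_logprobs]
    return {
        "mean_prob": sum(probs) / len(probs),   # overall confidence proxy
        "min_prob": min(probs),                 # the single shakiest token
        "perplexity": math.exp(-sum(token_logprobs) / len(token_logprobs)),
    }

print(confidence_from_logprobs([-0.05, -0.20, -2.30, -0.10]))
```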
MR4D 10 months ago
Q: is hallucination a milestone towards consciousness?

Given how inevitable it is, it seems to me that it might be.
jillesvangurp 10 months ago
There has been steady improvement since the release of ChatGPT into the wild, which is still less than two years ago (easy to forget). I've been getting a lot of value out of GPT-4o, like lots of other people. I find that with each model generation my dependence on this stuff for day-to-day work goes up as the soundness of its answers and reasoning improves.

There are still lots of issues and limitations, but it's a very different experience than with GPT-3 early on. A lot of the smaller OSS models are a bit of a mixed bag in terms of hallucinations and utility, but they can be useful if you apply some skill. Half the success is learning to prompt these things and learning to spot when they start to hallucinate.

One thing I find useful is to run ideas by it in a kind of Socratic mode, where I try to get it to flesh out brain farts I have for algorithms or other things. This can be coding related but also non-technical. It will get some things wrong, and when you spot that, you can often get a better answer simply by pointing it out and maybe nudging it in a different direction. A useful trick with code is to let it generate tests for its own code. When the tests fail to run, you can ask it to fix them, or you can ask for an alternative implementation of the same thing. Often you get something that is 95% of what you asked for, and then you can do the remaining few percent yourself.

Doing TDD with an LLM is a power move. Good tests are easy enough to understand, and once they pass, it's hard to argue with the results. You can also ask it to identify edge cases and add more tests for those. LLMs take a lot of the tediousness out of writing tests. I'm a big-picture kind of guy, and my weakness is skipping unit tests to fast-forward to having working code. Spelling out all the little assertions is mind-numbing work that I don't have to bother with anymore; I just let the AI generate good test cases. It's like having a really diligent junior pair programmer doing all the easy bits.

And if you apply SOLID principles to your own code (which is a good thing in any case), a lot of code is self-contained enough to fit in a file small enough for ChatGPT's context window (which is quite large these days). So a thing I often do is gather the relevant code, copy-paste it, and tell it to make reasonable assumptions about missing pieces and then make some modifications: add a function that does X; how would I need to modify this code to address Y; and so on. I also get it to iterate on its own code. A neat trick is to ask it to compare its solution to other solutions out there and then apply some of the same principles and optimizations.

One thing with RAG is that we're still under-utilizing LLMs for it. It's a lot easier to get an LLM to ask good questions than it is to get it to provide the right answers. With RAG, you can use good old information retrieval to answer the questions. IMHO, limiting RAG to just vector search is a big mistake: it doesn't actually work that well for structured data, and you could instead ask the model to query some API based on a specification, or use SQL, XPath, or whatever query language. And why ask just one question? Maybe engage in a dialog where it zooms in on the solution by querying and iteratively coming up with better questions until the context has all the data needed to produce the answer.

If you think about it, this is how most knowledge workers address problems themselves. They are not oracles of wisdom that know everything, but aggregators and filters of external knowledge. A good knowledge worker / researcher / engineer is one who knows how to ask the right questions in order to set up an iterative process that converges on a solution.

Once you stop using LLMs as one-shot oracles that give you an answer given a question, they become a lot more useful.

As for AGI, a human enhanced by AGI is a powerful combination. I kind of like the vision behind Neuralink, where the core idea is basically improving the bandwidth between our brains and external tools and intelligence. Using a chat bot is a low-bandwidth kind of thing; I actually find it tedious.
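As a sketch of that iterative, question-asking style of RAG, here is a loop with hypothetical `llm` and `search` helpers; the retriever could be SQL, an API, keyword search, or vectors.

```python
# Sketch: the model proposes the next query, a conventional retriever answers it,
# and the loop continues until the model says it has enough context.
from typing import Callable, List

def iterative_rag(
    llm: Callable[[str], str],
    search: Callable[[str], str],   # any retriever: SQL, keyword search, an API, ...
    question: str,
    max_rounds: int = 4,
) -> str:
    context: List[str] = []
    for _ in range(max_rounds):
        query = llm(
            "Goal: answer '" + question + "'.\nContext so far:\n" + "\n".join(context)
            + "\nWrite the single most useful next query, or 'DONE' if the context suffices."
        )
        if query.strip().upper() == "DONE":
            break
        context.append(query + " -> " + search(query))   # classic IR answers the model's question
    return llm("Using only this context:\n" + "\n".join(context) + "\n\nAnswer: " + question)
```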
Animats 10 months ago
Plausible idea which needs a big training budget. Was it funded?
simplysparsh 10 months ago
I came here thinking I would learn how to make LLMs better, but I'm leaving with more complicated questions:

1. Do I want LLMs to be trained on licensed data, which is arguably well curated? Or do I want LLMs to scrape the web because it is more democratic in its opinions?

2. If hallucination is not about the training data but about how the LLM uses that data to extrapolate information that isn't directly present in it, can we teach it to make better choices?

3. It's easy to define good data for facts. How do we define good data for subjective topics?

4. For subjective topics, is it better to have separate LLMs trained on each theme of opinions, or one big LLM with a mix of all opinions?

5. Is using an LLM to improve its own training data truly helpful, as the author claims? If yes, is this recursive method better, or is it better to use multiple LLMs together?

Dang! If I interview for a position that requires knowledge of AI, every question they ask will be answered with more questions. smh!