
Do "Runge Spikes" offer a better metaphor for hallucinations by LLMs?

8 points by alan-crowe about 1 month ago
Sometimes LLMs put on their novelist hat and insert fiction into contexts that demand an accurate account of the real world. For example, an LLM may invent a legal case that never happened, citing it as a precedent, to the peril of humans who never suspect that it could be merely made up.

We use anthropomorphic language: hallucination, confabulation, lies. But the behaviour of the LLM is weirdly inhuman. We are using language magic against ourselves by choosing our words unwisely. We persuade ourselves that artificial intelligence is like human intelligence, even as we describe how it differs from human intelligence.

We have a rough familiarity with fitting polynomials to evenly spaced data. We know that extrapolation works badly; higher order polynomial approximations break down especially badly. We know that interpolation with low order polynomials is fairly safe, and cough, mumble.

But high order polynomial interpolation may work well or badly, especially towards the ends of the interval. The interpolating polynomial may trick us with good accuracy at the middle of the range, but further out, even though we are still interpolating, the values swing wildly. The graph exhibits spikes between the data points. See https://www.johndcook.com/blog/2017/11/18/runge-phenomena/

This offers a metaphor for the unwanted insertion of fiction. We picture the LLM interpolating the training data. We picture it doing some kind of high order or clever interpolation, capable of impressive accuracy. And whoops! What happened there? There are surprises lurking. Surprising in the same way that Runge Spikes are surprising.

The name "Runge Spike" offers an escape from anthropomorphism. It invites us to view "hallucinations" as a technical issue in interpolation. We are not accidentally insinuating that the LLM will have metabolised its tab of LSD in a year or two and stop hallucinating, without any need for a breakthrough by researchers.
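For anyone who wants to see a Runge spike rather than take it on faith, here is a small NumPy sketch (my illustration, not part of the original post). It interpolates Runge's classic function 1/(1 + 25x²) with a degree-10 polynomial through 11 evenly spaced nodes on [-1, 1], then compares the error near the centre of the interval with the error near its ends. Everything stays strictly inside the data range, so it is interpolation throughout.

```python
import numpy as np

def runge(x):
    """Runge's function, the standard victim of equispaced interpolation."""
    return 1.0 / (1.0 + 25.0 * x**2)

# Degree-10 interpolating polynomial through 11 evenly spaced nodes on [-1, 1].
nodes = np.linspace(-1.0, 1.0, 11)
coeffs = np.polyfit(nodes, runge(nodes), deg=10)

# Evaluate on a dense grid inside the node range: interpolation, never extrapolation.
grid = np.linspace(-1.0, 1.0, 2001)
err = np.abs(np.polyval(coeffs, grid) - runge(grid))

center_err = err[np.abs(grid) < 0.2].max()  # mid-range: impressively accurate
edge_err = err[np.abs(grid) > 0.8].max()    # near the ends: the spikes appear

print(f"max error near the centre: {center_err:.4f}")
print(f"max error near the edges:  {edge_err:.4f}")
```

The mid-range error is tiny while the error between the outermost nodes is larger by orders of magnitude, despite both being interpolation of the same data by the same polynomial. That asymmetry, good in the middle and wild at the edges, is the spike the post uses as its metaphor.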

no comments