Sometimes LLMs put on their novelist hat and insert fiction into contexts that demand an accurate account of the real world. For example, an LLM may invent a legal case that never happened, citing it as a precedent, to the peril of humans who never suspect that it might simply be made up.

We use anthropomorphic language: hallucination, confabulation, lies. But the behaviour of the LLM is weirdly inhuman. We are using language magic against ourselves by choosing our words unwisely. We persuade ourselves that artificial intelligence is like human intelligence, even as we describe how it differs from human intelligence.

We have a rough familiarity with fitting polynomials to evenly spaced data. We know that extrapolation works badly; higher order polynomial approximations break down especially badly. We know that interpolation with low order polynomials is fairly safe, and cough, mumble.

But high order polynomial interpolation may work well or badly, especially towards the ends of the interval. The interpolating polynomial may trick us with good accuracy at the middle of the range, but further out, even though we are still interpolating, the values swing wildly. The graph exhibits spikes between the data points. See https://www.johndcook.com/blog/2017/11/18/runge-phenomena/

This offers a metaphor for the unwanted insertion of fiction. We picture the LLM interpolating the training data. We picture it doing some kind of high order or clever interpolation, capable of impressive accuracy. And whoops! What happened there? There are surprises lurking, surprising in the same way that Runge Spikes are surprising.

The name "Runge Spike" offers an escape from anthropomorphism. It invites us to view "hallucinations" as a technical issue in interpolation. We are not accidentally insinuating that the LLM will have metabolised its tab of LSD in a year or two and stopped hallucinating, without any need for a breakthrough by researchers.
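
To make the interpolation picture concrete, here is a minimal sketch of the Runge phenomenon itself (not of anything LLM-specific), using Python with NumPy and the textbook example f(x) = 1/(1 + 25x^2) on [-1, 1]; the degree and test points are illustrative choices.

    import numpy as np

    def runge(x):
        # The classic Runge example function.
        return 1.0 / (1.0 + 25.0 * x**2)

    deg = 12
    nodes = np.linspace(-1.0, 1.0, deg + 1)        # evenly spaced sample points
    coeffs = np.polyfit(nodes, runge(nodes), deg)  # degree-12 interpolating polynomial

    # Both test points lie strictly between the nodes, so this is interpolation,
    # not extrapolation.
    for x in (0.05, 0.93):
        exact = runge(x)
        approx = np.polyval(coeffs, x)
        print(f"x = {x:+.2f}  exact = {exact:.4f}  interpolant = {approx:.4f}  "
              f"|error| = {abs(approx - exact):.4f}")

    # Mid-range the interpolant tracks the function closely; near the end of the
    # interval the error is typically larger than the function's entire range.
    # That spike between data points, well inside the data, is the Runge spike.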