LLMs are calculators for language.

Just as the calculator cleaved computation from mathematical understanding, LLMs have cleaved language use from linguistic reasoning. We used to treat expression and comprehension as tightly entangled. Now they're demonstrably separable. We've built a machine that can "speak" without understanding, just as calculators can "solve" without knowing.
Maybe we could do with a new term. I mean, "general intelligence" is pretty vague and could apply to all sorts of stuff.

Re "momentous milestone, ... obvious when it has been built": personally, I think a major point is when AIs could keep running the world without us, including building energy plants, chip factories, and so on. AI independence, maybe?

I think they are wrong that "AGI won't be a shock to the economy because diffusion takes decades": ChatGPT reached 100 million users in two months. These things can happen quickly.
Great piece. I appreciate how you frame AGI as a continuous set of capabilities rather than a singular endpoint. At RunLLM, we've observed precisely this: generalized intelligence is just the starting line, and specialization is critical to delivering reliable, practical value. Curious about your views on specialization as a way to address common LLM issues like hallucinations?
With the release of OpenAI's latest model, o3, there is renewed debate about whether Artificial General Intelligence has already been achieved. The standard skeptic's response is that there is no consensus on the definition of AGI. That is true, but it misses the point: if AGI is such a momentous milestone, shouldn't it be obvious when it has been built?

In this essay, we argue that AGI is not a milestone. It does not represent a discontinuity in the properties or impacts of AI systems. If a company declares that it has built AGI, based on whatever definition, the announcement is not an actionable event. It will have no implications for businesses, developers, policymakers, or safety.