I'm not sure how one can compare scaling and algorithmic advances percentage-wise, per Dwarkesh's prediction that "70% scaling + 30% algorithmic advance" will get us to AGI?!

I think a clearer answer is that scaling alone will certainly NOT get us to AGI. There are some things that are just architecturally missing from current LLMs, and no amount of scaling or data cleaning or emergence will make them magically appear.

Some obvious architectural features from the top of my list would include:

1) Some sort of planning ahead (cf. tree-of-thought rollouts), which could be implemented in a variety of ways - a rough sketch of one is at the end of this comment. A single-pass feed-forward architecture, even a sophisticated one like a transformer, isn't enough. In humans this might be accomplished by some combination of short-term memory and the thalamo-cortical feedback loop - iterating on one's perception/reaction to something before "drawing conclusions" (i.e. making predictions) based on it.

2) Online/continual learning, so that the model/AGI can learn from its prediction mistakes via feedback from their consequences, even if that is initially limited to conversational feedback in a ChatGPT setting (second sketch below). To get closer to human-level AGI the model would really need some type of embodiment (either robotic or in a physical-simulation virtual world) so that its actions and feedback go beyond a world of words and let it learn via experimentation how the real world works and responds. You really don't understand the world unless you can touch/poke/feel it, see it, hear it, smell it, etc. Reading about it in a book/training set isn't the same.

I think any AGI would also benefit from a real short-term memory that can be updated and referred to continuously, although "recalculating" it on each token in a long context window does kind of work. In an LLM-based AGI this could just be an internal context, separate from the input context, but otherwise updated and addressed in the same way via attention (third sketch below).

It depends too on what one means by AGI - is this implicitly human-like (not just human-level) AGI? If so, then there are a host of other missing features too. Can we really call something AGI if it's missing animal capabilities such as emotion and empathy (roughly = predicting others' emotions, based on having learnt how we would feel in similar circumstances)? You can have some type of intelligence without emotion, but that intelligence won't extend to fully understanding humans and animals, and therefore to interacting with them in a way we'd consider intelligent and natural.

Really we're still a long way from this type of human-like intelligence. What we've got via pre-trained LLMs is more like IBM Watson on steroids - an expert system that does well on Jeopardy and increasingly well on IQ or SAT tests, and can fool people into thinking it's smarter and more human-like than it really is, just as much simpler systems like Eliza could. The Turing test of "can it fool a human" (in a limited Q&A setting) doesn't indicate any capability deeper than exactly that, and it's no indication of intelligence.
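
To make point 1 a bit more concrete, here's a rough sketch of "planning ahead" as a tree-of-thought style search over candidate reasoning steps. llm_generate and llm_score are hypothetical placeholders for whatever model API you happen to have; the point is the search loop wrapped around the single-pass model, not the API.

    # Rough sketch: beam search over partial "thoughts" before committing to an answer.
    # llm_generate() and llm_score() are hypothetical stand-ins, not a real API.

    def llm_generate(prompt, n_candidates):
        """Return n_candidates possible next reasoning steps (placeholder)."""
        raise NotImplementedError

    def llm_score(prompt, candidate):
        """Return a heuristic score for how promising a partial plan looks (placeholder)."""
        raise NotImplementedError

    def plan_ahead(prompt, depth=3, branch=4, beam=2):
        # Each frontier entry is (accumulated_text, score).
        frontier = [(prompt, 0.0)]
        for _ in range(depth):
            expanded = []
            for text, score in frontier:
                for step in llm_generate(text, branch):
                    expanded.append((text + "\n" + step, score + llm_score(text, step)))
            # Keep only the most promising partial plans before expanding further.
            frontier = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam]
        return frontier[0][0]  # best plan found within the search budget

The depth/branch/beam numbers are arbitrary; the property that matters is that the system gets to explore and discard partial plans rather than emitting the first token sequence it samples.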
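For point 2, the simplest (and admittedly naive) version of learning from conversational feedback is just an online gradient update on the corrected answer. This sketch assumes a standard Hugging Face causal LM ("gpt2" is only a stand-in) and glosses over catastrophic forgetting, which is a big part of why nobody ships it this way today.

    # Naive online learning from a user correction: one small gradient step per interaction.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in; any causal LM with the same interface works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def learn_from_feedback(prompt, corrected_answer):
        """Nudge the model toward the answer the user said was right."""
        ids = tok(prompt + corrected_answer, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss  # ordinary next-token loss on the correction
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

A real continual-learning setup would need replay buffers, regularisation or some modular memory to avoid overwriting what the model already knows, but the loop above is the core idea: the consequences of a prediction feed back into the weights.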
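And for the internal short-term memory idea, a toy PyTorch version: a small set of memory slots that the token stream reads from and writes to via ordinary attention, with the caller carrying the updated memory forward from step to step. All names and sizes here are made up for illustration.

    # Toy "internal context": memory slots read/written with the same attention mechanism
    # used for the input context. Purely illustrative, not a description of any real model.
    import torch
    import torch.nn as nn

    class ShortTermMemory(nn.Module):
        def __init__(self, d_model=512, n_slots=32, n_heads=8):
            super().__init__()
            self.init_slots = nn.Parameter(torch.randn(1, n_slots, d_model) * 0.02)
            self.read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.write = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, hidden, mem=None):
            if mem is None:
                mem = self.init_slots.expand(hidden.size(0), -1, -1)
            read_out, _ = self.read(hidden, mem, mem)    # tokens query the memory
            new_mem, _ = self.write(mem, hidden, hidden) # memory is refreshed from the new tokens
            return hidden + read_out, new_mem            # caller passes new_mem into the next step

Unlike recomputing everything over a long input context, the memory here persists independently of the prompt and is cheap to keep updating, which is closer to what I mean by a "real" short-term memory.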