It's all prediction. Wolfram has been saying this from the beginning, I think. It hasn't changed and it won't change.

But it could be argued that the human mind is *fundamentally* similar. That consciousness is the combination of a spatial-temporal sense with a future-oriented simulating function. Generally, instead of simulating words or tokens, the biological mind simulates physical concepts. (Needless to say, if you imagine and visualize a ball thrown through the air, you have simulated a physical and mathematical concept.) One's ability to internally form a representation of the world and one's place in it, coupled with a subjective and bounded idea of self in objective space and time, results in what is effectively a general predictive function which is capable of broad abstraction.

A large facet of what's called "intelligence" -- perhaps the largest facet -- is the strength and extensibility of the predictive function.

I really need to finish my book on this...
Here's an interpretability idea you may find interesting:

Let's turn an AI model into a place. The project aims to make AI interpretability research fun and widespread by converting a multimodal language model into a place, or a game like The Sims or GTA.

Imagine you have a giant trash pile: how would you make a language model out of it? First you remove duplicates of every item; you don't need a million banana peels, one will suffice. Now you have a grid with one item of trash in each square: a banana peel in one, a broken chair in another. Then you put related things close together and draw arrows between related items.

When a person "prompts" this place-AI, the player themselves runs from one item to another to compute the answer to the prompt.

For example, you stand near the monkey; that's your short prompt. Around you, you see a lot of items with arrows pointing toward them. The closest item is a pair of chewing lips, so you step toward it; now your prompt is "monkey chews". The next closest item is a banana, but there are plenty of other possibilities around, like an apple a bit farther away and an old tire far off on the horizon (monkeys rarely chew tires, so the tire is far away).

You are the time-like chooser and the language model is the space-like library, the game, the place. It's static and safe, while you're dynamic and dangerous.
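To make the mechanism concrete, here is a toy sketch in Python of the walk described above: tokens are laid out on a 2D map, distance stands in for how plausible a continuation is, and the "player" greedily steps to the nearest connected item. The items, coordinates, and edges are invented purely for illustration; a real model would derive them from learned embeddings and attention, not a hand-drawn grid.

```python
# Toy "model as a place": a greedy walk over a hand-made 2D layout of tokens.
import math

# Hypothetical layout: token -> (x, y) position on the map.
positions = {
    "monkey": (0.0, 0.0),
    "chews": (1.0, 0.0),
    "banana": (2.0, 0.5),
    "apple": (3.0, 1.5),
    "tire": (9.0, 9.0),   # far away: monkeys rarely chew tires
}

# Arrows between related items (directed edges the player may follow).
edges = {
    "monkey": ["chews"],
    "chews": ["banana", "apple", "tire"],
}

def walk(start, steps):
    """From the current item, repeatedly step to the closest connected item."""
    path = [start]
    current = start
    for _ in range(steps):
        options = edges.get(current, [])
        if not options:
            break
        # Nearest neighbor among the items the arrows point to.
        current = min(options, key=lambda t: math.dist(positions[current], positions[t]))
        path.append(current)
    return " ".join(path)

print(walk("monkey", 2))  # -> "monkey chews banana"
```

In this sketch the player's position plus the distances plays the role of the model's probability distribution: nearer items are the likelier next tokens, and the walk itself is the "computation" of the answer.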
Would love to see a similar explanation of how "reasoning" versions of LLMs are trained. I understand that OpenAI was mum about how it specifically trained o1/o3, and that people are having to reverse-engineer from the DeepSeek paper, which may or may not describe a different approach. Still, I'd like to see a coherent explanation that is not just a regurgitation of Chain of Thought or handwavy "special reasoning tokens give the model more time to think".
I'm not sure if I would call this "simple" but I appreciated the walk through. I understood a lot of it at a high level before reading, and this helped solidify my understanding a bit more. Though it also serves to highlight just how complex LLMs actually are.
While I appreciate the pictures, at the end of the day all you really have is a glossary and slightly more detailed arbitrary hand-waving.

What *specific* architecture is used to build a basic model?

Why is that *specific* combination of basic building blocks used?

Why does it work when other similar ones don't?

I generally approve of simplifications, but these LLM simplifications are too vague and broad to be useful or meaningful.

Here's my challenge: take that article and write an LLM.

No?

How about an article on raytracing?

Anyone can do a raytracer in a weekend.

Why does building an LLM take miles of explanation of concepts, with nothing concrete you can actually build?

Where's my "LLM in a weekend" that covers the theory *and* how to actually implement one?

The distinction between this and something like https://github.com/rasbt/LLMs-from-scratch is stark.

My hot take is: if you haven't built one, you don't *actually* understand how they work; you just have a vague, kind-of-heard-of-it understanding, which is not the same thing.

...maybe that's harsh, and unfair. I'll take it, maybe it is; but I've seen a lot of LLM explanations that conveniently stop before they get to the hard part of "and how do you actually do it?", and another one? Eh.
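For anyone wondering what the "specific architecture" question actually points at, here is a minimal, hedged sketch of the standard decoder-only Transformer that most current LLMs share: token and position embeddings, a stack of causally masked self-attention blocks with MLPs, and a linear head producing next-token logits. It uses PyTorch, the hyperparameters are arbitrary, and it is a teaching sketch rather than any particular model; tokenizer, training loop, and sampling are omitted.

```python
# Minimal decoder-only Transformer sketch (illustrative, not any specific model).
import torch
import torch.nn as nn

class Block(nn.Module):
    """One Transformer block: masked self-attention + MLP, with residuals."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each position may only attend to itself and earlier positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        x = x + self.mlp(self.ln2(x))
        return x

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=256, d_model=128, n_heads=4, n_layers=4, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.blocks = nn.ModuleList([Block(d_model, n_heads) for _ in range(n_layers)])
        self.ln_f = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, idx):
        # idx: (batch, seq) of token ids -> (batch, seq, vocab) next-token logits.
        T = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        for block in self.blocks:
            x = block(x)
        return self.head(self.ln_f(x))

model = TinyGPT()
logits = model(torch.randint(0, 256, (1, 16)))  # shape: (1, 16, 256)
```

Trained with cross-entropy on next-token prediction, this skeleton is roughly what the glossary-level explanations are gesturing at; resources like the linked LLMs-from-scratch repo fill in the parts omitted here (tokenization, training, sampling).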
Why don't you come on my podcast to explain LLMs?
I would love it.

https://www.youtube.com/@CouchX-SoftwareTechexplain-k9v