I'm quite enthusiastic about reading this. Watching the progress of the larger LLM labs, I've noticed that they're not making the material changes in model architecture that I think are necessary to move toward more refined and capable intelligence. They're adding tools and widgets to things we know don't think like a biological brain. Those are really useful from a commercial perspective, but I don't think LLMs will be an enduring paradigm, at least wrt genuine stabs at artificial intelligence. I've been surprised that there hasn't been more effort put toward transformative work like that in the linked article.

The things that hang me up about current progress toward intelligence are:

- there don't seem to be models that possess continuous thought. Models are alive during a forward pass on their way to producing a token, and brain-dead at any other time
- there don't seem to be many models that have neural memory
- there doesn't seem to be any form of continuous learning. To be fair, online training as a whole is pretty uncommon, as I understand it.

Reasoning in token space is handy for evals, but it's lossy: when you sample, you throw away everything else in the output distribution. I think Meta had a paper on continuous thought in latent space (Coconut, if I remember right), but I don't think that effort has continued into anything commercialised. (I've put a rough sketch of the distinction at the bottom of this comment.)

Somehow, our biological brains manage to do very intelligent things super efficiently. We have a known-good example, yet research toward mimicking that example is weirdly lacking?

All the magic happens in the neural net, right? But we keep wrapping nets in tools we've designed with our own inductive biases, rather than expanding the horizon of what a net can do and empowering it to do that.

Recently I've been looking into SNNs, which feel like a bit of a tech demo (second sketch below), as well as neuromorphic computing, which I think holds some promise for this sort of thing but doesn't get much press (or, presumably, budget?).

(Apologies for the ramble, writing on my phone)
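
To make the lossiness point concrete, here's a toy sketch of the difference as I see it. This is not Meta's actual method, just the core idea: the GRU cell is a stand-in for whatever backbone you like, and all names and sizes are made up.

  # Toy contrast between token-space and latent-space "reasoning".
  # Illustrative only: a GRU cell stands in for a real LM backbone.
  import torch
  import torch.nn as nn

  VOCAB, DIM = 100, 32
  embed = nn.Embedding(VOCAB, DIM)
  cell = nn.GRUCell(DIM, DIM)   # stand-in for the transformer stack
  head = nn.Linear(DIM, VOCAB)  # hidden state -> token logits

  def token_space_step(tok, h):
      # Standard decoding: collapse the whole distribution to one symbol.
      h = cell(embed(tok), h)
      probs = torch.softmax(head(h), dim=-1)
      tok = torch.multinomial(probs, 1).squeeze(-1)  # lossy sampling step
      return tok, h

  def latent_space_step(x, h):
      # Coconut-style idea: feed the continuous state back in, no sampling.
      h = cell(x, h)
      return h, h  # next "input" is the hidden state itself; nothing discarded

  h = torch.zeros(1, DIM)
  tok = torch.tensor([0])
  x = embed(tok)
  for _ in range(4):                 # a few silent "thought" steps in latent space
      x, h = latent_space_step(x, h)
  tok, h = token_space_step(tok, h)  # only sample when output is actually needed

The point being: every token_space_step discards a full probability distribution (and the continuous state that produced it), while the latent loop keeps all of it.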
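
And for anyone wondering what I mean about SNNs: the basic unit is something like a leaky integrate-and-fire neuron, which carries state between inputs and only emits anything when it spikes. That persistence is exactly what a stateless forward pass lacks. Rough sketch, with constants picked arbitrarily rather than taken from any particular paper:

  import numpy as np

  def lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
      # Leaky integrate-and-fire: membrane potential v integrates the input
      # over time and leaks toward zero; a spike fires when v crosses
      # threshold, then v resets.
      v, spikes = 0.0, []
      for i in input_current:
          v += dt / tau * (-v + i)  # leaky integration
          if v >= v_thresh:
              spikes.append(1)
              v = v_reset           # fire and reset
          else:
              spikes.append(0)
      return np.array(spikes)

  # A constant drive above threshold yields a regular spike train.
  print(lif(np.full(100, 1.5)))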