I think there's more to this than the skeptical commenters here think.<p>The closer you look at quantum mechanics, the more it seems to be entirely (in some sense) information-theoretic, although this nature is largely concealed from our vantage point on the inside. Measurement and quantization are due to the fact that we're part of any quantum system we measure, and we have to experience a single value for our measurement, drawn from a probability distribution over possible values (although why it takes the exact distribution it does is still mysterious, I believe?). Entropy and decoherence under time evolution result from the fact that if you let a bunch of states mix together for a while, they statistically tend toward more decohered configurations. Conservation laws seem to be slightly emergent rather than completely fundamental: internal to systems, some limited 'off-shell' interactions that don't conserve e.g. energy are allowed, but these are suppressed by the time we measure anything, so we only ever experience the laws holding exactly.<p>When all of this is peeled away, the resulting system looks like "a bunch of initial (discrete) data, plus rules for working out the implications of that data as it's allowed to interact with other data, in such a way that certain high-amplitude states amplify and low-amplitude states get destructively interfered away, allowing complex patterns to crystallize and replicate/evolve at increasingly macroscopic scales". Which, if you squint, looks a lot like a cellular automaton such as Conway's Game of Life. But it can also (under some squinting) look like how thinking works, or how neural networks work, -ish: start with some low-level bits, look for patterns in those, look for patterns in those patterns, etc., and then observe that certain resulting states have their amplitudes driven to 1 while others are driven to 0. Which reminds me, at least, of convolutional NNs.
I don't know much about how LLMs work, but I suspect they're a slightly weaker analogy, although perhaps the same structure is there, just more masked by the architecture.<p>I wouldn't, like, bet anything on the details, but I suspect that in the long run there will be some widely-held hypothesis (similar in status to the Church-Turing thesis) that the universe, the brain, neural networks, and cellular automata all share some essential symmetry in their structure. Something to do with the requirements a system has to meet if it's going to have emergent complexity.<p>(Incidentally, I think this is what Stephen Wolfram also sees, and it's the basis for his quirky "Wolfram Physics Project", although I suspect that in his overconfidence/immunity to critique he's pretending to see a lot more of it than he actually does.)
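To make the cellular-automaton side of the analogy concrete, here's a minimal Game of Life step in Python. The sparse-set representation and the `step`/`blinker` names are just my own illustration of "local rules applied everywhere at once, out of which stable and oscillating patterns emerge", not anything specific to the physics claims above:

```python
# Minimal Conway's Game of Life: one local rule, applied uniformly,
# from which stable/oscillating/replicating patterns emerge.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (row, col) live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 neighbours,
    # or exactly 2 and is currently alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row, which oscillates with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
```

The point of the toy: nothing in `step` mentions blinkers, gliders, or any macroscopic pattern, yet running it produces structures that persist and propagate, which is the flavor of emergence being gestured at.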