Since it is objectively true that LLMs are first and foremost predictive text engines, I am led to the hypothesis that the intelligence they display, and by association perhaps human intelligence as well, is in fact embedded in memetic structures themselves, in some kind of n-dimensional probability matrix.

In the same way that an arbitrarily detailed simulation could in theory be turned into a "choose your own adventure" lookup table, where the next "page" (screen bitmap) is determined by the "control" inputs, the underpinnings of reason could easily be contained in a mundane and deceptively static medium, such as a kind of multidimensionally linked list structure.

It could be that neural networks inherently gravitate towards the processing of symbolic grammar (sequences of "symbols"), and that the ordered complexity inherent in arbitrarily high-dimensional interrelations of these symbols in human memetic structures is sufficient to create the process we think of as reasoning, or even sentience.

While I definitely struggle to intuit this interpretation from an emotional standpoint, the sheer multitude of states possible inside such a system is sufficient to appear infinite, and therefore intrinsically dynamic, and I fail to find evidence that it could not instead be derived from a static data structure.

If there is a grain of truth to this hypothesis, it would fundamentally change the philosophical landscape not only around LLMs but also around intelligence itself. The implication would be not that LLMs might be intelligent, but rather that biological intelligence might in fact derive its behavior from iterating over multidimensional matrices of learned data, and that human intelligence owes much more to culture (a vastly expanded data set) than we may have previously imagined.
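To make the lookup-table analogy concrete, here is a minimal, purely illustrative Python sketch of my own (not how an actual LLM works): a second-order Markov model stored as a static dictionary, where "generation" is nothing more than repeated lookups into a fixed table of learned next-token frequencies. The names (build_table, generate, the toy corpus) are invented for illustration.

    import random
    from collections import defaultdict, Counter

    def build_table(tokens, context_len=2):
        # Count, for every fixed-length context, how often each next token follows it.
        table = defaultdict(Counter)
        for i in range(len(tokens) - context_len):
            context = tuple(tokens[i:i + context_len])
            table[context][tokens[i + context_len]] += 1
        return table

    def generate(table, seed, steps=10):
        # "Reasoning" here is nothing but repeated lookups into the static table.
        out = list(seed)
        for _ in range(steps):
            counts = table.get(tuple(out[-len(seed):]))
            if not counts:
                break  # unseen context: the static structure has no continuation
            next_tokens, weights = zip(*counts.items())
            out.append(random.choices(next_tokens, weights=weights)[0])
        return out

    corpus = "the cat sat on the mat and the cat ran after the dog".split()
    print(" ".join(generate(build_table(corpus), ("the", "cat"))))

A real model generalizes over contexts it has never seen rather than looking them up exactly, but the generative step itself is still an iteration over a static learned structure, which is the point of the analogy above.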