There is a fundamental fallacy here: equating mildly complex algorithms, whose inner workings are actually designed and well understood (even if not trivially explainable), with very complex biological and brain processes that are still only superficially understood.<p>This reveals at best a paucity of imagination. There is no reason to assume that all complex systems, irrespective of fundamental structure and spanning vastly different degrees of complexity, will "emerge" anything like the same behaviors.<p>The explainability of LLMs and so-called "deep learning" will be liberated when the field disentangles itself from vague and forced anthropomorphism, but the marketing allure is, alas, too strong.