AI models like today's LLMs are basically text generators and data retrievers. They're trained in a comparatively straightforward, linear fashion, unlike the complex, multi-dimensional way humans learn. That's why they struggle with math problems that require contextual reasoning but can handle straightforward language tasks pretty well.
>LLMs cannot truly adapt to novelty because they have no ability to basically take their knowledge and then do a fairly sophisticated recombination of that knowledge on the fly to adapt to new context.

I find the semantics to be very misleading and, I believe, intentionally so. The LLM does not have any "knowledge". It has data to which software applies statistical algorithms to predict the next token. However, the software is incapable of "understanding" in the sense that "knowledge" would require. The failure of these discussions to address this issue is troubling and is why I believe it is intentional (e.g. because billions of dollars have been invested in LLMs on the thesis that LLMs are indeed a form of AI - when in fact there is no "intelligence" whatsoever).

>>“I don’t think there’s anything particularly special about biological systems versus systems made of other materials that would, in principle, prevent non-biological systems from becoming intelligent.”

I don't understand how a scientist could make the above statement with a straight face. Science has been unable to explain human sentience and consciousness - yet this scientist wants me to believe they can build something to duplicate a brain they themselves don't yet understand? It's laughable.