For perspective, one might look at the Natural Number Game, an introduction to mathematical proof using the language Lean 4:

https://adam.math.hhu.de/#/g/leanprover-community/NNG4

I recall the feeling, being one of many who "rediscovered" Gröbner bases. This is why one studies history: to recognize the taste when one is a turn away from new history.

There was a leap needed to generalize Gaussian elimination and the Euclidean algorithm to their common ground, reduction in arbitrary polynomial rings. There is a further leap to noncommuting strings, and then to typed trees, the "algebraic datatypes" foundation of languages like Haskell and Lean 4. To see term rewriting in these settings as a continuum, one needs to incorporate recursion, to understand how Schützenberger's notions of rational and algebraic languages apply.

The idea of a term ordering is too gnarly here (well past the interstellar frontier of "decidability") for anything besides human intuition or deep learning.

Nevertheless, one sees that term rewriting on typed trees can serve as a foundation of human thought. Deep learning, while impressive, struggles to simulate recursion across multiple layers. We are discovering exactly how fundamental recursion is to thought. A version of neural nets more closely based on a probability substrate of typed trees might model thought more adeptly.
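To make "term rewriting on typed trees" concrete, here is a minimal sketch of my own (in Python, not drawn from any of the systems above): Peano-style terms as nested tuples, with the two usual rewrite rules for addition applied recursively until a normal form is reached. The constructor names `z`, `s`, `add` and the helper functions are my illustrative choices.

```python
# Minimal term rewriting on trees: Peano naturals as nested tuples.
# Terms: ("z",) is zero, ("s", t) is successor, ("add", a, b) is addition.
# Rules:  add(z, y)    -> y
#         add(s(x), y) -> s(add(x, y))

def rewrite(term):
    """Apply one rewrite rule at the root, if any rule matches."""
    if term[0] == "add":
        a, b = term[1], term[2]
        if a[0] == "z":
            return b
        if a[0] == "s":
            return ("s", ("add", a[1], b))
    return None  # no rule applies at the root

def normalize(term):
    """Rewrite innermost-first, recursively, until no rule applies."""
    if term[0] == "z":
        return term
    if term[0] == "s":
        return ("s", normalize(term[1]))
    if term[0] == "add":
        t = ("add", normalize(term[1]), normalize(term[2]))
        step = rewrite(t)
        return normalize(step) if step is not None else t
    raise ValueError(f"unknown constructor {term[0]!r}")

def church(n):
    """Build the numeral s(s(...(z)...)) with n successors."""
    t = ("z",)
    for _ in range(n):
        t = ("s", t)
    return t

def value(term):
    """Read a numeral back as a Python int."""
    n = 0
    while term[0] == "s":
        n, term = n + 1, term[1]
    return n

# 2 + 3 normalizes to 5
print(value(normalize(("add", church(2), church(3)))))  # -> 5
```

The recursion in `normalize` is the point: the rewrite relation is only well-behaved because the rules recurse structurally on the tree, which is exactly the ingredient a fixed stack of network layers struggles to simulate.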