Repeatedly through the history of humanity, huge advances have come from simple systems replacing complex ones. An example is the decimal system itself. It does essentially the same thing Roman numerals do, except much better, <i>precisely because it is simpler</i>. And that very fact opened doors for science to progress much faster, because we weren't busy spending evenings multiplying numbers. This is one of countless examples showing there is great benefit in pursuing the simplest alternative.<p>When it comes to programming languages, we haven't learned that lesson. Every other day someone develops a fancy new programming language with X more features, not realizing that all those features are already present in the simplest languages. The λ-calculus, a 70-year-old system, already does everything those languages do, more simply and faster (compiled to Haskell, it already runs an order of magnitude faster than the average Python program). We treat PLs like products, but in reality they're just mathematical objects. Imagine if every other day someone created a fancy new numeric system. <i>Why, instead of designing hundreds of fancy, complex programming languages, aren't we focusing on studying how those features manifest in the simplest languages, and on implementing faster evaluators for them?</i><p>Here is Euler's 1st problem solved in the Caramel syntax for the λ-calculus:<p><pre><code> euler1 = (sum (filter (either (mod 3) (mod 5)) (range 1000)))</code></pre>
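<p>For comparison, here is a minimal sketch of the same problem (the sum of all multiples of 3 or 5 below 1000) in plain Haskell, the language the comment mentions as a compilation target. This is my own rendering for illustration, not the Caramel compiler's output:<p><pre><code>-- Project Euler problem 1: sum of the multiples of 3 or 5 below 1000.
-- A direct Haskell analogue of the Caramel one-liner above.
euler1 :: Int
euler1 = sum (filter (\n -> n `mod` 3 == 0 || n `mod` 5 == 0) [1 .. 999])</code></pre><p>Evaluating euler1 in GHCi yields 233168, the well-known answer to the problem.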