Pierce's <i>Types and Programming Languages</i> is a phenomenal textbook. I'm almost done with my implementation of System F-omega (polymorphic lambda calculus with higher-kinded types and type operators), featuring a full handwritten lexer/parser with helpful diagnostics.<p>My end goal is to use it as the IR for one phase of a functional language compiler.
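<p>For readers who haven't met F-omega: here is a minimal sketch of what the kind and type layer can look like in OCaml. The names and structure are illustrative only, not taken from my actual implementation.<p>
    (* Kinds classify types: KStar is the kind of ordinary types;      *)
    (* KArrow is the kind of type operators such as list or pair.      *)
    type kind =
      | KStar
      | KArrow of kind * kind

    (* Types: variables, arrows, universal quantification over a kind, *)
    (* type-level lambdas (operators), and type-level application.     *)
    type ty =
      | TVar of string
      | TArrow of ty * ty
      | TForall of string * kind * ty   (* forall (a :: k). t *)
      | TAbs of string * kind * ty      (* lambda (a :: k). t *)
      | TApp of ty * ty
<p>Under this setup a type operator like list gets kind KArrow (KStar, KStar), and the kind checker rejects ill-formed type applications before the term-level type checker ever runs.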
Cool article.<p>While I cannot find the exact quote from Martin Odersky[0], I believe he once said something along the lines of, "it takes about ten years to make a complete typed language."<p>If anyone else recalls this and has a link to the quote, I would much appreciate a pointer to it.<p>0 - <a href="https://en.wikipedia.org/wiki/Martin_Odersky" rel="nofollow">https://en.wikipedia.org/wiki/Martin_Odersky</a>
Curious: from what I can see, a type system is really just an embedded declarative DSL for doing set algebra.<p>So is there a technical reason why education and implementations almost always intertwine it with a larger client language, rather than treating it as a complete, self-contained entity in its own right? Or is that lack of decomposition just an oversight?
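<p>To make the question concrete, here is a toy of what I mean by a self-contained "set algebra" of types: an OCaml module that knows nothing about any term language. It is deliberately naive (no distributivity, no recursion), and purely my own illustration.<p>
    (* Types read as sets of values; Union and Inter are set union *)
    (* and intersection.                                            *)
    type ty =
      | Int
      | Bool
      | Union of ty * ty   (* values in either set *)
      | Inter of ty * ty   (* values in both sets  *)

    (* subtype s t holds when every value of s is also a value of t. *)
    let rec subtype s t =
      match s, t with
      | _ when s = t -> true
      | Union (a, b), _ -> subtype a t && subtype b t
      | _, Union (a, b) -> subtype s a || subtype s b
      | Inter (a, b), _ -> subtype a t || subtype b t
      | _, Inter (a, b) -> subtype s a && subtype s b
      | _ -> false
<p>Even a toy like this stands on its own as set algebra; the question is why the pedagogy and the implementations never seem to carve the type layer out this cleanly.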
I started reading but stopped when I saw "succ n" and "pred n". Unary numbers are so academic and disconnected from reality that I lose interest in any paper that uses them. Lambda calculus makes me yawn too. Guess I'll be making a programming language on my own to see how far I get without reading TAPL or any CS papers :-)