That's fun and a bit surprising, but maybe it shouldn't be. I'm reminded that Dana Scott, with Christopher Strachey, showed that by using lattices or complete partial orders with a bit of topology to model the graphs of possible computations of a program, you can, just as in analysis, define least upper and lower bounds and a notion of continuity, and from these derive a construction of the limit of a computation analogous to a limit in analysis. They called this model a domain. That bit of math is the basis of denotational semantics of programming languages, and it's necessary because sets are largely sufficient as a basis for algebra and analysis but not for programs, which have evils like partial functions, side effects and variable assignment. I believe Strachey, with Scott, also introduced the formal notions of sum, product and recursive types, and they showed how definitions (models) of recursive types and functions can be well founded through their construction of limits on domains. An early tech report on it can be found here:<p><a href="https://www.cs.ox.ac.uk/files/3228/PRG06.pdf" rel="nofollow">https://www.cs.ox.ac.uk/files/3228/PRG06.pdf</a><p>and here's a more recent free book from David Schmidt on the topic:<p><a href="http://people.cs.ksu.edu/~schmidt/text/DenSem-full-book.pdf" rel="nofollow">http://people.cs.ksu.edu/~schmidt/text/DenSem-full-book.pdf</a>
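The least-fixed-point construction shows up quite directly in Haskell as `fix` from `Data.Function`. A minimal sketch (the `factorial` definition is my own illustration, not from Scott and Strachey): a recursive function is obtained as the fixed point of a non-recursive functional, each application of which approximates one more step of the computation, with the function itself as the limit of that chain.

```haskell
import Data.Function (fix)

-- A recursive function defined as the least fixed point of a
-- non-recursive functional. Each unfolding of `fix` supplies one
-- more level of recursion; the limit of these approximations is
-- the factorial function itself.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- 120
```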
This is almost identical to the study of combinatorial species (<a href="https://en.wikipedia.org/wiki/Combinatorial_species" rel="nofollow">https://en.wikipedia.org/wiki/Combinatorial_species</a>)
If you like this you'll want to look at "semiring programming" (just search on that phrase) and categorical programming, e.g. Conal Elliott's "Compiling to categories":
<a href="http://conal.net/papers/compiling-to-categories/" rel="nofollow">http://conal.net/papers/compiling-to-categories/</a>
Nat = Nat + 1 does end up being meaningful: since the numbers that fall out (e.g. Bool = 2) ultimately describe the sizes of sets, one interpretation of Nat is an infinite cardinal, aleph-null, the size of the set of natural numbers; and indeed cardinal arithmetic satisfies aleph-null = aleph-null + 1.
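The equation reads off directly from the usual Peano-style definition; a small sketch of how Nat ≅ 1 + Nat in Haskell:

```haskell
-- Nat ≅ 1 + Nat: a natural number is either Zero (the "1" summand)
-- or the successor of another natural number (the "Nat" summand).
data Nat = Zero | Succ Nat

-- Interpret a Nat as an ordinary Int by counting successors.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

main :: IO ()
main = print (toInt (Succ (Succ (Succ Zero))))  -- 3
```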
A cool aspect is how this algebra/calculus sheds light on why maps (dictionaries/associative arrays) have the same cardinality as functions: a total map from k to v, like a function k -> v, has |v|^|k| possible values, and the two are in many ways the same thing.
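A minimal sketch of that equivalence for a finite key type (the names `Dir`, `tabulate`, `lookupFn` and `steps` are my own illustrations): a function can be tabulated into a map, and the map turned back into a function, without losing anything.

```haskell
-- A small finite key type with |Dir| = 4 inhabitants.
data Dir = North | South | East | West
  deriving (Show, Eq, Enum, Bounded)

-- Tabulate a function into an association list ("map"): one entry
-- per key, so there are |Int-values|^4 possible such tables, the
-- same count as for functions Dir -> Int.
tabulate :: (Dir -> Int) -> [(Dir, Int)]
tabulate f = [(d, f d) | d <- [minBound .. maxBound]]

-- Turn a total table back into a function.
lookupFn :: [(Dir, Int)] -> (Dir -> Int)
lookupFn table d = maybe (error "missing key") id (lookup d table)

steps :: Dir -> Int
steps North = 1
steps South = 2
steps East  = 3
steps West  = 4

main :: IO ()
main = print (map (lookupFn (tabulate steps)) [minBound .. maxBound])  -- [1,2,3,4]
```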
I once tried to take the Taylor series idea further and define the "cosine" of a type as the Taylor expansion of cos(x), but didn't get anywhere with it.<p>Can you do anything else weird with ADTs?
One thing I’ve been wondering is how to interpret the function types `Void -> a` and `a -> Void`, where Void is the uninhabited type (unique up to isomorphism). The number of inhabitants of those types should be |a|^0 = 1 and 0^|a| = 0 respectively, but what does that mean? In the real world, functions of type a -> Void certainly exist, such functions being those that diverge (loop forever or halt via some non-local means).
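A sketch of both counts in Haskell, using `Data.Void` from base (`fromVoid`, `toVoid` and `unEither` are my own names): the unique inhabitant of `Void -> a` is the library's `absurd`, and `a -> Void` can only be "written" by diverging, matching the 0 count for total functions.

```haskell
import Data.Void (Void, absurd)

-- Void -> a has exactly one total inhabitant (|a|^0 = 1); base
-- even ships it, as `absurd`: with no possible input, there is
-- nothing to choose, and the function is vacuously total.
fromVoid :: Void -> Int
fromVoid = absurd

-- a -> Void has no total inhabitants (0^|a| = 0 for nonempty a);
-- the only way to give this type in Haskell is to never return.
toVoid :: Int -> Void
toVoid n = toVoid n

-- A cardinality consequence: Either a Void ≅ a, since a + 0 = a.
unEither :: Either Int Void -> Int
unEither = either id absurd

main :: IO ()
main = print (unEither (Left 7))  -- 7
```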
Very neat insights into data type differentiation! Minor nitpick: the article uses<p><pre><code> data Unit = Unit
</code></pre>
but the Haskell standard Prelude already has the empty tuple type (), with its single element also written (), which functions as the standard unit type.