The following is a comment from Frank Atanassow on a LtU discussion about the Curry-Howard correspondence:
(This is a bit off-topic, but it came to mind, so what the hell.)
You are correct in your observations that given most programming languages (even those such as Haskell), it is difficult to see exactly how Curry-Howard is useful.
I recently stumbled across someone mentioning something called "dependency injection". I didn't know what it was, so I googled (I guess this is lowercase nowadays!) it and read Martin Fowler's article on it. It is a bit on the long side, and I kept waiting for the punch-line; you know, the point at which the author hits you with the insight which justifies the preceding verbosity and the hi-tech-sounding name ("dependency injection" — I can't help but think of "fuel injection", and gleaming motor engine showcases), but it seemed indefinitely postponed. And in the end, it turned out that "dependency injection" just means "abstraction" specifically by parametrization, by update and by what I think amounts to type abstraction plus update. (Apparently these are called — I kid you not — type 3 IoC, type 2 IoC and type 1 IoC...!)
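To make that concrete, here is a minimal Haskell sketch of the parametrization case (the Logger type and the report function are made-up names, not anything from Fowler's article): what the article dresses up as "injecting" a dependency is just passing it in as an ordinary argument.

    -- A hypothetical dependency: something that knows how to log a message.
    newtype Logger = Logger { logMsg :: String -> IO () }

    -- "Constructor injection" (type 3 IoC) is just abstraction by
    -- parametrization: the dependency is an ordinary argument.
    report :: Logger -> String -> IO ()
    report logger s = logMsg logger ("report: " ++ s)

    -- The "container" wiring is just function application.
    main :: IO ()
    main = report (Logger putStrLn) "everything is fine"

The setter-injection and interface-injection variants layer update and a named interface on top of this, but the underlying move is the same abstraction.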
To me this all seemed rather obvious and it got me thinking about why it isn't obvious to the author or his readership.
In Haskell, if I am given some type B which I need to produce somehow, and I realize that the B-values I need depend on some other values of type A, the first thing I do is write down "f :: A -> B". Then I write down "f a =", and then I start writing stuff after the equals sign until I have what I need. I do that because I know, once I have the type, that if there is an inhabitant of the type "A -> B", it can be expressed as "\a -> b" for some b, so the "f a =" part is always part of my solution and I will never have to change that unless I want to. So once I've written that down I feel one step closer to my solution.
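A concrete (and entirely made-up) instance of that workflow, with hypothetical Config and Report types standing in for A and B:

    -- Made-up types: the Report I need depends on a Config.
    data Config = Config { title :: String, entries :: [String] }
    data Report = Report String Int

    -- Step 1: write down the type.
    buildReport :: Config -> Report
    -- Step 2: write "buildReport config =" (that part is settled) and keep
    -- filling in the right-hand side until it typechecks.
    buildReport config = Report (title config) (length (entries config))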
I know that for three reasons. First, because of my experience as a functional programmer. Second, because it is part of the universal property of exponentials ("factors uniquely"), that is, of function types. And third, because by the Curry-Howard correspondence with natural deduction, I can start any proof of B which depends on A by assuming A, that is, adding it as a hypothesis.
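Both of the last two points can be sketched in Haskell with nothing exotic: curry and uncurry (written out here under primed names) witness the "factors uniquely" correspondence for exponentials, and a lambda is exactly the implication-introduction step of assuming the antecedent.

    -- The universal property of exponentials: maps out of a product
    -- correspond one-to-one to maps into a function type.  These are the
    -- standard Prelude curry/uncurry, written out.
    curry'   :: ((a, b) -> c) -> (a -> b -> c)
    curry' f = \a b -> f (a, b)

    uncurry' :: (a -> b -> c) -> ((a, b) -> c)
    uncurry' g = \(a, b) -> g a b

    -- Curry-Howard: a proof of  a -> (b -> a)  starts by assuming a
    -- (binding the first argument), i.e. by adding it as a hypothesis.
    weaken :: a -> b -> a
    weaken a _ = a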
So, why is it so obscure in Java? I think part of the reason is that in Java you have update, so there are complications and additional solutions. But part of the reason is also that it largely lacks structural typing, and that makes it hard to see that a class('s interface) is a product of exponentials. (With nominal typing, you tend to think of a class by its name, rather than its structure.)
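One way to see the "product of exponentials" point is to translate a small, made-up Java interface into a structural type, say a Haskell record of functions:

    -- A Java interface such as
    --   interface Greeter { String greet(String name); String farewell(String name); }
    -- is, structurally, a product of exponentials: a record whose fields
    -- are function types.
    data Greeter = Greeter
      { greet    :: String -> String
      , farewell :: String -> String
      }

    -- One inhabitant of (String -> String) x (String -> String); the
    -- nominal view shows only the name "Greeter" and hides that structure.
    polite :: Greeter
    polite = Greeter { greet    = \name -> "Hello, " ++ name
                     , farewell = \name -> "Goodbye, " ++ name }

Written structurally, the arrow in each field is begging to be read as an implication; the Java signature String greet(String name) carries the same arrow, just in a word order that hides it.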
You could also blame the syntax of method signatures, which obscures the relationship with exponentials and implication. But is the syntax the cause or just a symptom? (You know what I think about syntax...) If CH could be readily applied to Java, perhaps Java's designers would have chosen a more suggestive syntax. But even if they had decided to stick with C-style syntax anyway, the idea of using abstraction to handle dependencies would have been more obvious.

More: http://lambda-the-ultimate.org/node/1532