I think the real failure of FP has been in demonstrating its advantages in practical terms. While deferring impurity is admittedly interesting (if you care about the theoretical underpinnings of programming at all), what do I gain as a programmer? I'll try to illustrate my point.

In programming we often have to do something more than once. In terms of mental complexity, I'd rank our options like so, from easy to hard:

Loop -> Recursion -> Fixed-Point Combinator

We can do the same thing with all three (the first sketch at the end of this comment shows one function written all three ways). Recursion can in some cases lead to a stack overflow, and fixed-point combinators have the disadvantage of, well, being f-ing hard to understand.

A beginner can get a `while` or a `for` loop in seconds. They're very, very intuitive. Once you figure out that solving problems in software often requires doing things repeatedly, loops open up a lot of power.

Recursion is more difficult once you throw in conditionals and return values; it's harder to reason about. Loops are simple. So why bother with recursion? I learned recursion because it exists and I love learning. Could I have written just as much software without it? Yes. Would it have been just as functionally correct, maintainable, and easy to read? Yes.

I know very little about fixed-point combinators. Why? Well, because I have loops and, if I'm feeling jaunty, recursion.

Having said that, I recently came across this article about the Y combinator:

http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization/

I recommend that article to everyone. But the most important part is the section "Exploiting the Y combinator":

> This formulation still has exponential complexity, but we can change it to linear time just by changing the fixed-point combinator. The memoizing Y combinator, Ymem, keeps a cache of computed results, and returns the pre-computed result if available:

The author is talking about his algorithm for calculating Fibonacci numbers. While the naive recursive approach is limited by its computational complexity, caching intermediate results lets us short-circuit the repeated calculations.

> The end result is that the 100th Fibonacci number is computed instantly, whereas the naive version (or the version using the ordinary Y combinator) would take well beyond the estimated lifetime of the universe.

I don't know enough about what's being done here to say that this optimization couldn't have been made with a loop (I believe Turing equivalence guarantees that it could have been; the second sketch at the end of this comment is one way), but at least I get to see some of the magic that fixed-point combinators give us.

Haskell, as a functional language, offers a lot (purity! type safety!), but the community hasn't really shown *how* these things are valuable. Type safety is great, but is Haskell materially better than Objective-C (my primary language) in that regard? Pure functions are trivial to test, but in a language with a reasonable amount of type safety, how many individual functions are getting tested anyway? Of course I can write pure functions in an imperative language, and often do. And I can do so without painfully abstract concepts like the dreaded monad when I want to do something as pedestrian as generate a random number.

The reality is that writing software, for most people, is about the platform they're targeting, the available libraries, ease of use, and techniques for dealing with long-term complexity.
I have yet to find a reason to use Haskell even though I would love to -- more accurately, I have yet to find a situation where I could justify using Haskell.
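To make that Loop -> Recursion -> Fixed-Point Combinator ranking concrete, here's the nth Fibonacci number written all three ways. This is my own sketch in TypeScript (the article's code is plain JavaScript, and names like fibLoop and fibZ are mine), and the combinator is Z, the strict-evaluation variant of Y, since JS/TS is eagerly evaluated:

  // 1. Loop: explicit state, graspable in seconds.
  function fibLoop(n: number): number {
    let a = 0, b = 1;
    for (let i = 0; i < n; i++) {
      [a, b] = [b, a + b];
    }
    return a;
  }

  // 2. Recursion: mirrors the mathematical definition, but it's
  //    exponential time, and deep enough inputs overflow the stack.
  function fibRec(n: number): number {
    return n < 2 ? n : fibRec(n - 1) + fibRec(n - 2);
  }

  // 3. Fixed-point combinator: recursion with no self-reference.
  //    Z hands the step function a "self" it can call.
  type Fib = (n: number) => number;
  const Z = (f: (self: Fib) => Fib): Fib =>
    ((x: any) => f((n: number) => x(x)(n)))
    ((x: any) => f((n: number) => x(x)(n)));
  const fibZ = Z(self => n => (n < 2 ? n : self(n - 1) + self(n - 2)));

  // fibLoop(10), fibRec(10) and fibZ(10) all return 55.

All three compute the same thing; the difference is how much machinery you have to hold in your head to believe they're correct.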
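And the caching trick doesn't need the combinator either. Here's the same memoization hand-rolled (again my own sketch, makeMemoFib is not the article's name; bigint because the 100th Fibonacci number doesn't fit in a 53-bit float):

  // A memo table closed over by an ordinary recursive function.
  function makeMemoFib(): (n: number) => bigint {
    const cache = new Map<number, bigint>();
    const fib = (n: number): bigint => {
      if (n < 2) return BigInt(n);
      const hit = cache.get(n);
      if (hit !== undefined) return hit; // short-circuit: already computed
      const result = fib(n - 1) + fib(n - 2);
      cache.set(n, result);
      return result;
    };
    return fib;
  }

  console.log(makeMemoFib()(100)); // 354224848179261915075n, instantly

Ymem is genuinely elegant (you swap the combinator and leave the recursive definition untouched), but the practical win it buys is one a working programmer already has.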