For a long time, I assumed that tail call optimization was a really smart transformation done by the compiler, turning a normal recursive call into a tail call, and I was trying to figure out how a compiler could do that.<p>Then I learned that the optimization part is really done by a human, who rewrites the recursive call into a tail call. The "optimization" done by the compiler is comparatively trivial: it reuses the stack frame for the final function call.
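To make the "human" half of the work concrete, here is a sketch in Python (a language chosen only for readability; note that CPython does not actually eliminate tail calls) showing the rewrite from a plain recursive function into tail-call form. The accumulator parameter `acc` is the illustrative device:

```python
def factorial(n):
    # NOT a tail call: the multiplication happens *after* the
    # recursive call returns, so each call needs its own frame.
    if n == 0:
        return 1
    return n * factorial(n - 1)

def factorial_tail(n, acc=1):
    # Tail call: the recursive call is the very last thing evaluated,
    # so a compiler with proper tail calls can reuse the current frame.
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)
```

The human does the interesting part, threading the pending work through `acc`; all the compiler then has to do is notice that nothing remains to execute after the call and overwrite the frame.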
As always SICP has the answers[1], and helped me shift from a "production rule that transforms slow code to be faster" mindset towards a "implementation detail of the compiler that ensures it doesn't create unnecessary state" one. It's no more an optimization than explicitly not filling code with NOP's is.<p>[1] <a href="http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-34.html" rel="nofollow">http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-34.html</a>
I think calling it an "optimization" is unfair. You don't call while loops that run in constant space an optimization, do you?<p>Without support for proper tail calls, a large range of otherwise valid recursive programs simply does not work. So, in a very real sense, supporting proper tail calls changes the semantics of the language, letting you write certain programs much more easily.
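The while-loop comparison can be made literal. As a sketch (in Python, for illustration only, since CPython has no proper tail calls), a tail-recursive function and a while loop are the same computation; with proper tail calls, both run in constant space:

```python
def gcd_rec(a, b):
    # Tail-recursive form: in a language with proper tail calls,
    # this runs in constant space, just like the loop below.
    if b == 0:
        return a
    return gcd_rec(b, a % b)

def gcd_loop(a, b):
    # The identical computation written as an explicit while loop.
    while b != 0:
        a, b = b, a % b
    return a
```

That equivalence is why proper tail calls feel less like a speed-up and more like a guarantee: the recursive form is the loop, just spelled differently.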
Neat, that top answer is mine. And yes, it does borrow quite generously from the first chapter of SICP, which is the canonical source for that kind of info. Still, it's nice to see something I wrote coming up again. I'm glad it's helping people.
Shameless advertisement: A new answer for those of us preferring imperative languages: <a href="http://stackoverflow.com/a/9814654/48015" rel="nofollow">http://stackoverflow.com/a/9814654/48015</a>