the "lambda the ultimate" papers and the birth of scheme was a loong time ago, so it grates on my ears to hear this topic presented as "an optimization". Yes, it is sometimes an optimization a compiler can make, but the idea is much better presented as a useful semantic of a language.<p>in the same way that passing parameters to a subfunction "creates" a special set of local variables for the subfunction, the tail recursion semantic updates this set of local variables in an especially clean way for loop semantics, allowing "simultaneous assignment" from old values to new ones.<p>(yes, it would be confusing with side effected C/C++ operators like ++ because then you'd need to know order of evaluation or know not to do that, but those are already issues in those languages quite apart from tail recursion)<p>because it's the way I learned it, I tend to call the semantic "tail recursion" and the optimization "tail call elimination", but since other people don't do the same it's somewhat pointless; but I do like to crusade for awareness of the semantic beyond the optimization. If it's an optimization, you can't rely on it because you could blow the stack on large loops. If it's a semantic, you can rely on it.<p>(the semantic is not entirely "clean" either. it's a bit of a subtle point that you need to return straightaway the return value of the tail call or it's not a tail call. fibonacci is the sum of the current with the next so it's not a tail call unless you somewhat carefully arrange the values you pass/keep around. also worth pointing out that all "tail calls" are up for consideration, not just recursive ones)
Every compiler should recognize and optimize tail recursion. It's not any harder than most other optimizations, and some algorithms are far better expressed recursively.

Why is this not done?
Very nice article demonstrating a neat use of ASM bytecode manipulation. The Java language devs are also working on Project Babylon (code reflection), which will bring additional techniques for manipulating the output of the Java compiler: https://openjdk.org/projects/babylon/articles/code-models
Scala has done this for years: the compiler rewrites self tail calls into loops, and the scala.annotation.tailrec annotation makes it a compile-time error if a method can't be rewritten that way. Regardless, it's cool to see this implemented as a bytecode pass.
It's been a long time since I've messed with Java bytecode [1], but shouldn't the private method call use INVOKESPECIAL?

In general I don't think you can do this to INVOKEVIRTUAL (or INVOKEINTERFACE), since those cover cases where the target is not statically resolved (virtual/interface calls). The transformation should be limited to INVOKESTATIC and INVOKESPECIAL.

You also need a lot more checks to make sure the transformation can be applied, like ensuring the call site is not covered by a try block; otherwise it is not semantics-preserving.

1: https://jauvm.blogspot.com/
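for what it's worth, here's a rough sketch of the kind of guard I mean, using ASM's visitor API (purely illustrative, the class and field names are mine); it only flags candidate call sites and leaves out the actual goto rewrite and the try-block check:

    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    class TailCallCandidateVisitor extends MethodVisitor {
        private final String selfOwner; // internal name of the class being transformed
        private final String selfName;  // name of the method being transformed
        private final String selfDesc;  // its descriptor

        TailCallCandidateVisitor(MethodVisitor mv, String owner, String name, String desc) {
            super(Opcodes.ASM9, mv);
            this.selfOwner = owner;
            this.selfName = name;
            this.selfDesc = desc;
        }

        @Override
        public void visitMethodInsn(int opcode, String owner, String name,
                                    String descriptor, boolean isInterface) {
            boolean selfCall = owner.equals(selfOwner)
                    && name.equals(selfName)
                    && descriptor.equals(selfDesc);
            // only statically bound calls are safe: INVOKESTATIC and INVOKESPECIAL
            // always hit this exact method, while INVOKEVIRTUAL / INVOKEINTERFACE
            // may dispatch to an override in a subclass at runtime
            boolean staticallyBound = opcode == Opcodes.INVOKESTATIC
                    || opcode == Opcodes.INVOKESPECIAL;
            if (selfCall && staticallyBound) {
                // candidate for the rewrite (store the args, GOTO the method start),
                // provided it's immediately followed by the matching *RETURN
                // and not covered by any exception handler
            }
            super.visitMethodInsn(opcode, owner, name, descriptor, isInterface);
        }
    }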
I never understood the need for tail recursion optimization in imperative languages. Sure, you need it in FP if you don't have loops and recursion is your only option, but what do you gain from the recursive algorithms that tail optimization helps with (i.e. recursive loops) in a language like Java?
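the only cases I can think of are mutual-recursion ones, where the "loop" jumps between states; something like this little sketch (names are mine) reads naturally as tail calls, but in plain Java every call still burns a stack frame, so I'd end up writing it as a loop with a state variable anyway:

    // a tiny word counter written as two mutually tail-calling states
    class Words {
        static int count(String s) {
            return inGap(s, 0, 0);
        }

        // state 1: currently between words
        private static int inGap(String s, int i, int words) {
            if (i == s.length()) return words;
            if (Character.isWhitespace(s.charAt(i))) return inGap(s, i + 1, words);
            return inWord(s, i + 1, words + 1); // tail call into the other state
        }

        // state 2: currently inside a word
        private static int inWord(String s, int i, int words) {
            if (i == s.length()) return words;
            if (Character.isWhitespace(s.charAt(i))) return inGap(s, i + 1, words);
            return inWord(s, i + 1, words);
        }
    }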