In Python, “Inline caching has been a huge success” [0]. I believe quickening has been proposed as part of a plan for further speed improvements [1].<p>[0] <a href="https://mobile.twitter.com/raymondh/status/1357478486647005187" rel="nofollow">https://mobile.twitter.com/raymondh/status/13574784866470051...</a><p>[1] <a href="https://github.com/markshannon/faster-cpython/blob/master/plan.md" rel="nofollow">https://github.com/markshannon/faster-cpython/blob/master/pl...</a>
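For anyone unfamiliar with the terms: roughly, inline caching remembers the result of an expensive lookup at the site that performed it, and quickening rewrites a generic bytecode instruction in place into a specialized one once the interpreter has seen what kinds of operands it actually gets. Below is a minimal toy sketch in C of the quickening idea; the opcode names, bytecode layout, and guard logic are made up for illustration and are not CPython's actual implementation.

    /* Toy sketch of quickening (illustrative only, not CPython):
     * a generic ADD opcode observes its operands at run time and
     * rewrites itself into a specialized ADD_INT with a cheap guard. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_PUSH_INT, OP_ADD, OP_ADD_INT, OP_PRINT, OP_HALT };

    typedef struct { int is_int; long i; double d; } Value;

    static double as_double(Value v) { return v.is_int ? (double)v.i : v.d; }

    static void run(uint8_t *code) {
        Value stack[64];
        int sp = 0;
        uint8_t *pc = code;
        for (;;) {
            switch (*pc) {
            case OP_PUSH_INT:
                stack[sp++] = (Value){ .is_int = 1, .i = (long)pc[1] };
                pc += 2;
                break;
            case OP_ADD: {                     /* generic, type-dispatching path */
                Value b = stack[--sp], a = stack[--sp];
                if (a.is_int && b.is_int) {
                    *pc = OP_ADD_INT;          /* quicken: rewrite this instruction */
                    stack[sp++] = (Value){ .is_int = 1, .i = a.i + b.i };
                } else {
                    stack[sp++] = (Value){ .is_int = 0,
                                           .d = as_double(a) + as_double(b) };
                }
                pc += 1;
                break;
            }
            case OP_ADD_INT: {                 /* specialized fast path with a guard */
                Value b = stack[--sp], a = stack[--sp];
                if (!(a.is_int && b.is_int)) { /* guard failed: de-specialize */
                    *pc = OP_ADD;
                    sp += 2;                   /* operands are still on the stack */
                    break;                     /* retry as the generic opcode */
                }
                stack[sp++] = (Value){ .is_int = 1, .i = a.i + b.i };
                pc += 1;
                break;
            }
            case OP_PRINT: {
                Value v = stack[--sp];
                if (v.is_int) printf("%ld\n", v.i); else printf("%g\n", v.d);
                pc += 1;
                break;
            }
            case OP_HALT:
                return;
            }
        }
    }

    int main(void) {
        uint8_t code[] = {
            OP_PUSH_INT, 2, OP_PUSH_INT, 3, OP_ADD, OP_PRINT,
            OP_HALT
        };
        run(code);   /* first run: OP_ADD rewrites itself to OP_ADD_INT */
        run(code);   /* second run: the same bytes execute the fast path */
        return 0;
    }

The guard in the specialized opcode is what keeps the rewrite safe: if the assumption baked into the specialization stops holding, it falls back to the generic instruction.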
> <i>improvements that could be made [...] Make a template interpreter like in the JVM. This will allow your specialized opcodes to directly make use of the call stack.</i><p>What is a “template interpreter”? How does it differ from a normal bytecode interpreter?