TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Inline Caching: Quickening

36 points | by r4um | over 4 years ago

2 comments

pansa2 | over 4 years ago
In Python, “Inline caching has been a huge success” [0]. I believe quickening has been proposed as part of a plan for further speed improvements [1].

[0] https://mobile.twitter.com/raymondh/status/1357478486647005187

[1] https://github.com/markshannon/faster-cpython/blob/master/plan.md
[Reply #26063836 not loaded]
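To make the technique in the article concrete: a toy sketch of quickening with an inline cache, where a generic ADD instruction rewrites itself into a type-specialized ADD_INT after observing its operands, and deoptimizes back if the specialization stops holding. All names here (`Frame`, `HANDLERS`, the opcode names) are invented for illustration; this is not CPython's actual implementation.

```python
def op_add_generic(frame, i):
    a, b = frame.stack.pop(), frame.stack.pop()
    # Observe operand types and quicken the instruction in place:
    # the rewritten opcode acts as an inline cache of what we saw.
    if type(a) is int and type(b) is int:
        frame.code[i] = ("ADD_INT",)
    frame.stack.append(b + a)

def op_add_int(frame, i):
    a, b = frame.stack.pop(), frame.stack.pop()
    if type(a) is int and type(b) is int:
        frame.stack.append(b + a)   # fast path: the cached guess held
    else:
        frame.code[i] = ("ADD",)    # deoptimize back to the generic opcode
        frame.stack.append(b + a)

class Frame:
    def __init__(self, code, stack):
        self.code = list(code)      # mutable so instructions can be rewritten
        self.stack = stack

HANDLERS = {"ADD": op_add_generic, "ADD_INT": op_add_int}

def run(frame):
    for i in range(len(frame.code)):
        op = frame.code[i][0]
        HANDLERS[op](frame, i)

frame = Frame([("ADD",)], [1, 2])
run(frame)
print(frame.code)   # the ADD instruction has been quickened to ADD_INT
```

On a second run of the same code object the quickened ADD_INT handler runs directly, skipping the generic type dispatch; a real interpreter does the rewrite inside its dispatch loop rather than via a dict of handler functions.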
pansa2 | over 4 years ago
> improvements that could be made [...] Make a template interpreter like in the JVM. This will allow your specialized opcodes to directly make use of the call stack.

What is a “template interpreter”? How does it differ from a normal bytecode interpreter?
[Reply #26063734 not loaded]

[Reply #26063881 not loaded]

[Reply #26067855 not loaded]

[Reply #26063888 not loaded]
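For contrast with the "template interpreter" asked about above, here is a minimal sketch of a normal switch-style bytecode interpreter: every opcode funnels through one generic dispatch loop and a software operand stack. A template interpreter (HotSpot's, for example) instead generates a small machine-code template per opcode at VM startup and threads execution through those templates, so operands can live in CPU registers and on the native call stack rather than in a heap-allocated stack like the list below. The opcode names are invented for illustration.

```python
def interpret(code):
    """Dispatch-loop interpreter over (opcode, argument) pairs."""
    stack = []  # software operand stack; a template interpreter
                # would use the native stack and registers instead
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "RET":
            return stack.pop()
        pc += 1

print(interpret([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("RET", None)]))  # 5
```

The per-opcode branch test and the loads/stores on `stack` are exactly the overhead the specialized opcodes in the article are trying to avoid, which is why the quoted comment suggests a template interpreter as the next step.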