
Inline Caching: Quickening

36 points | by r4um | over 4 years ago

2 comments

pansa2, over 4 years ago
In Python, “Inline caching has been a huge success” [0]. I believe quickening has been proposed as part of a plan for further speed improvements [1].

[0] https://mobile.twitter.com/raymondh/status/1357478486647005187

[1] https://github.com/markshannon/faster-cpython/blob/master/plan.md
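
To make the quickening idea concrete, here is a minimal Python sketch. It is not CPython's actual machinery; the opcode names and instruction layout are invented for illustration. A generic add instruction rewrites itself into a type-specialized form after its first execution, and falls back to the generic form if the cached type assumption stops holding.

    # Minimal sketch of "quickening" (hypothetical opcodes, not CPython's).
    # Each instruction is a mutable list so an opcode can rewrite itself in place.

    GENERIC_ADD = "BINARY_ADD"       # generic opcode
    ADD_INT = "BINARY_ADD_INT"       # hypothetical specialized form

    def run(code, stack):
        for cell in code:
            op = cell[0]
            if op == "PUSH":
                stack.append(cell[1])
            elif op == GENERIC_ADD:
                b, a = stack.pop(), stack.pop()
                # Quickening: after observing int operands once, rewrite this
                # instruction to the specialized opcode for later executions.
                if type(a) is int and type(b) is int:
                    cell[0] = ADD_INT
                stack.append(a + b)
            elif op == ADD_INT:
                b, a = stack.pop(), stack.pop()
                if type(a) is int and type(b) is int:
                    stack.append(a + b)      # fast path, no generic dispatch
                else:
                    cell[0] = GENERIC_ADD    # de-optimize when the guess fails
                    stack.append(a + b)
        return stack

    code = [["PUSH", 1], ["PUSH", 2], [GENERIC_ADD]]
    print(run(code, []))    # [3]
    print(code[2][0])       # BINARY_ADD_INT -- the instruction has been quickened

The rewrite happens at the bytecode level, so the win comes from skipping the generic type dispatch on every later execution of that instruction.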
pansa2, over 4 years ago
> improvements that could be made [...] Make a template interpreter like in the JVM. This will allow your specialized opcodes to directly make use of the call stack.

What is a “template interpreter”? How does it differ from a normal bytecode interpreter?