
Speeding up function calls with lru_cache in Python

125 points | by Immortal333 | almost 5 years ago

20 comments

jedberg · almost 5 years ago
This technique is called memoization [0]. Here is an implementation of a memoize decorator in Python that supports all inputs [1]. You'd have to modify it to strip out all the Pylons framework stuff.

[0] https://en.wikipedia.org/wiki/Memoization

[1] https://github.com/reddit-archive/reddit/blob/master/r2/r2/lib/memoize.py
Denvercoder9 · almost 5 years ago
Caching the result is not speeding up function calls.
initbar · almost 5 years ago
I worked on an open source project called 'safecache' along similar lines. As others have already commented, @lru_cache does not play well with mutable data structures. My implementation handles both immutable and mutable data structures, as well as multi-threaded operations.

https://github.com/Verizon/safecache
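For readers unfamiliar with the hazard being described: when a cached function returns a mutable object, lru_cache hands every caller the very same object, so one caller's mutation leaks into all later calls. A minimal sketch:

```python
import functools

@functools.lru_cache(maxsize=None)
def default_config():
    # Computed once; the same dict object is then shared by every caller.
    return {"retries": 3}

cfg = default_config()
cfg["retries"] = 0        # mutates the object stored inside the cache

print(default_config())   # {'retries': 0} -- later callers see the mutation
```

The usual workaround is to return immutable values (tuples, frozensets) or to copy the cached object before returning it.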
gammarator · almost 5 years ago
(Fibonacci numbers have a closed-form analytic solution that can be computed in constant time: https://www.evanmiller.org/mathematical-hacker.html)
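For the curious, a sketch of the closed form (Binet's formula). Note the "constant time" claim holds only while floating-point precision does, roughly up to n ≈ 70:

```python
import math

def fib_closed_form(n: int) -> int:
    # Binet's formula: fib(n) = (phi**n - psi**n) / sqrt(5). The psi term
    # is always smaller than 0.5 in magnitude, so rounding phi**n / sqrt(5)
    # gives the exact integer while double precision lasts.
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

print(fib_closed_form(10))  # 55
```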
bravura · almost 5 years ago
I recently discovered that joblib can do something similar, both on disk and in memory: https://joblib.readthedocs.io/en/latest/memory.html
anandoza · almost 5 years ago
> As we can see, the optimal cache size of the fib function is 5. Increasing the cache size will not result in much gain in terms of speedup.

Try it with fib(35), curious what you find.
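For anyone who wants to try the suggestion, a minimal sketch (assuming the article's fib is the usual naive recursion):

```python
import functools

@functools.lru_cache(maxsize=5)
def fib(n: int) -> int:
    # Naive recursion. Even with a tiny cache the recursion stays cheap here,
    # because fib(n - 2) is still resident by the time fib(n) asks for it.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(35))           # 9227465
print(fib.cache_info())  # hit/miss counters show how maxsize=5 holds up
```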
mac-chaffee · almost 5 years ago
Maybe I'm abusing lru_cache, but another use for it is debouncing.

We had a chatbot that polls a server and sends notifications, but due to clock skew it would sometimes send two notifications. So I just added the lru_cache decorator to the send(username, message) function to prevent that.
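A sketch of the debouncing trick described above (function and argument names are illustrative, not from the thread): identical (username, message) pairs hit the cache, so the body runs at most once per pair:

```python
import functools

@functools.lru_cache(maxsize=128)
def send(username: str, message: str) -> None:
    # Runs only on a cache miss; a repeated identical notification is a
    # cache hit, so it is silently dropped.
    print(f"notify {username}: {message}")

send("alice", "build failed")  # delivered
send("alice", "build failed")  # deduplicated (cache hit)
print(send.cache_info().hits)  # 1
```

Note the dedup window is the cache's lifetime (or until eviction), not a time interval, so this is debouncing only in a loose sense.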
drej · almost 5 years ago
functools' lru_cache also has good methods for getting more info about the cache's utilisation (.cache_info(), I think), which is quite helpful when surfaced in logs.
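The method is indeed .cache_info(); the decorator also exposes .cache_clear(). A quick sketch:

```python
import functools

@functools.lru_cache(maxsize=2)
def square(x: int) -> int:
    return x * x

square(2)                  # miss
square(2)                  # hit
square(3)                  # miss
info = square.cache_info()
print(info)                # CacheInfo(hits=1, misses=2, maxsize=2, currsize=2)

square.cache_clear()       # resets both the cache and its counters
```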
CyberDildonics · almost 5 years ago
One line summary: use the lru_cache decorator.

    @functools.lru_cache(maxsize=31)
    def fib(n):
andreareina · almost 5 years ago
lru_cache doesn't work with lists or dicts, or indeed any non-hashable data, so it's not quite a transparent change. About half the time I use a cache I end up implementing my own.
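A sketch of the limitation, plus the usual workaround of converting arguments to hashable equivalents before the call:

```python
import functools

@functools.lru_cache(maxsize=None)
def total(values: tuple) -> int:
    return sum(values)

try:
    total([1, 2, 3])        # lists are unhashable -> the wrapper raises
except TypeError as e:
    print(e)                # unhashable type: 'list'

print(total((1, 2, 3)))     # tuples are hashable -> 6
```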
intrepidhero · almost 5 years ago
This is neat and I learned something new. TL;DR: use the functools.lru_cache decorator to add caching to slow functions.

I must admit I was hoping for a general approach to speeding up all function calls in Python. Functions are the primary mechanism for abstraction, and yet calls are relatively heavy in their own right. It would be neat if Python had a way to do automatic inlining or some such optimization, so that I could have my abstractions but avoid the performance hit of a function call (even at the expense of more bytecode).
satyanash · almost 5 years ago
What's the Ruby equivalent of `functools`, and specifically `functools.lru_cache`?

Note that `lru_cache` doesn't just do caching; it also provides convenient methods on the original function to get cache stats etc. in a Pythonic way.
lalos · almost 5 years ago
The last bit about deterministic functions is a main selling point of functional programming (where it gets help from compilers), or of any design that promotes creating pure functions.
oefrha · almost 5 years ago
> One line summary: Use lru_cache decorator

Okay, at least it has the decency of providing a TL;DR. But if your summary is literally three words, why not put it in the title: "speed up Python function calls with functools.lru_cache"?

God I hate clickbait.
m4r35n357 · almost 5 years ago
Huh? He sped it up much more by rewriting it as a loop...
aresic · almost 5 years ago
dvic CV function on problems
helloxxx123 · almost 5 years ago
Cool
fastball · almost 5 years ago
tl;dr – memoization.
asicsp · almost 5 years ago
Instead of the last quote in the article, I prefer this one (got it from [0]):

> "There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors." – Martin Fowler

And there are plenty of similar articles, for example [1] [2].

[0] https://www.mediawiki.org/wiki/Naming_things

[1] https://dbader.org/blog/python-memoization

[2] https://mike.place/2016/memoization/
perfunctory · almost 5 years ago
> @functools.lru_cache

Avoid these tricks if you care about thread safety.
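Worth unpacking: the cache's own bookkeeping is documented as thread-safe, but nothing stops two threads from computing the same uncached value concurrently, so side effects in the body can run twice. A sketch of one workaround, serializing calls through a lock (the wrapper name is my own):

```python
import functools
import threading

calls = []

@functools.lru_cache(maxsize=None)
def compute(n: int) -> int:
    calls.append(n)          # records each time the body actually executes
    return n * n

_lock = threading.Lock()

def compute_locked(n: int) -> int:
    # With the lock, the body runs at most once per argument even when
    # several threads race on a cold cache (at the cost of contention).
    with _lock:
        return compute(n)

threads = [threading.Thread(target=compute_locked, args=(7,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(calls)  # [7] -- the body ran once despite four concurrent callers
```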