Every example seems to follow this pattern:

    import pymemcache.client
    import requests
    import ring

    client = pymemcache.client.Client(('127.0.0.1', 11211))  # create a client

    # save to memcache via the client, expire in 60 seconds
    @ring.memcache(client, expire=60)
    def get_url(url):
        return requests.get(url).content
How are you supposed to configure the client at 'runtime' instead of 'compile time' (i.e. when the code is executed rather than when it's imported)?

Careful placement of imports in order to configure something correctly just introduces delicate pain points. It'll work now, but an absent-minded import somewhere else later can easily lead to hours of debugging.
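One workaround is to hide client creation behind a lazy proxy, so the decorator binds at import time while the real connection is only made at runtime. A hedged sketch, assuming Ring only needs get/set/delete on the client object; `LazyClient` is hypothetical, not part of Ring:

    import pymemcache.client
    import requests
    import ring

    class LazyClient:
        """Defers creating the real pymemcache client until first use."""
        def __init__(self):
            self._client = None

        def _real(self):
            if self._client is None:
                # In practice, read host/port from runtime config here.
                self._client = pymemcache.client.Client(('127.0.0.1', 11211))
            return self._client

        def get(self, key):
            return self._real().get(key)

        def set(self, key, value, expire=0):
            return self._real().set(key, value, expire=expire)

        def delete(self, key):
            return self._real().delete(key)

    @ring.memcache(LazyClient(), expire=60)
    def get_url(url):
        return requests.get(url).content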
Extremely poor design:

* Not DRY. What if I want to use a cache in production but disable caching in development? And I have tens or even hundreds of functions that rely on the cache? Because the decorators contain implementation/client-specific parameters, I now have to add another entire layer of abstraction over this (see the sketch below).

* The implementation is tied to the decorator, e.g. `ring.memcache` -- seriously? Why should the calling code care which backend is used?

* What about setting application defaults, such as an encoding scheme, a key prefix/namespace, or a default timeout?

I'm sorry, but this is over-engineered garbage, and good luck to anyone who uses it.
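To make that concrete, a minimal sketch of the extra layer, assuming hypothetical app settings (`CACHE_ENABLED`, `CLIENT`); only the project-level `cached` decorator appears at call sites:

    import os
    import pymemcache.client
    import ring

    # Hypothetical app-level settings; a real project would load these
    # from its configuration module.
    CACHE_ENABLED = os.environ.get('APP_ENV') == 'production'
    CLIENT = pymemcache.client.Client(('127.0.0.1', 11211))

    def cached(expire=60):
        """Project-level decorator so call sites never name a backend."""
        if not CACHE_ENABLED:
            return lambda func: func  # development: caching is a no-op
        return ring.memcache(CLIENT, expire=expire)

    @cached(expire=60)
    def get_profile(user_id):
        ...

Note the leak, though: when caching is off, Ring's attached sub-functions (e.g. `.delete()`) disappear from the wrapped function, which is exactly the kind of friction described above.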
Is there a Python equivalent to PHP's APCu? APCu, in the PHP world, leverages mmap to provide a multi-process KV store with fast, built-in serialization. So it's simple and very fast for single-server, multi-process caching.
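As a rough illustration of the mmap idea, a toy sketch only: a single pickled blob in a file-backed mapping that any process on the machine can open. Unlike APCu it has no locking, so it is not safe under concurrent writers:

    import mmap
    import pickle

    PATH = '/tmp/kv.mmap'  # hypothetical backing file shared by processes
    SIZE = 1 << 20         # 1 MiB region

    def _open():
        # Create/size the backing file, then map it into this process.
        with open(PATH, 'a+b') as f:
            f.truncate(SIZE)
            return mmap.mmap(f.fileno(), SIZE)

    def save(d):
        m = _open()
        blob = pickle.dumps(d)
        m[:4] = len(blob).to_bytes(4, 'little')  # length header
        m[4:4 + len(blob)] = blob
        m.close()

    def load():
        m = _open()
        n = int.from_bytes(m[:4], 'little')
        d = pickle.loads(m[4:4 + n]) if n else {}
        m.close()
        return d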
Great project. There is only one angle that I feel is missing: multiple requests for the same resource could cause duplicated work, especially if the value-generating function is slow.

I wrote a sample solution to that problem; feel free to reach out if you ever consider adding a similar feature, I'd be happy to contribute. (FYI: the current implementation is in Go.)

https://github.com/kristoff-it/redis-memolock
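For anyone unfamiliar, this is the cache stampede (or "dogpile") problem. A minimal thread-level sketch of the fix, serializing concurrent generation of the same key behind a per-key lock; redis-memolock is the cross-process version of this idea, using Redis:

    import threading

    _cache = {}
    _locks = {}
    _locks_guard = threading.Lock()

    def get_or_compute(key, compute):
        if key in _cache:
            return _cache[key]
        with _locks_guard:
            lock = _locks.setdefault(key, threading.Lock())
        with lock:
            if key not in _cache:        # re-check after acquiring the lock
                _cache[key] = compute()  # only one thread does the slow work
            return _cache[key]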
Looks extensive and I'll likely try using the module at some point.

One thing: why not stash all the function methods under a "ring" or "cache" attribute, e.g.

    @ring.lru()
    def foo():
        ...

    foo.cache.update()
    foo.cache.delete()

This might be less likely to clash with any existing function attributes (if you're wrapping a 3rd-party function, say).
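A hedged sketch of how a decorator can do that, grouping its helpers behind a single `cache` attribute (built on stdlib functools.lru_cache here rather than on Ring):

    import functools
    from types import SimpleNamespace

    def lru(maxsize=128):
        def decorator(func):
            cached = functools.lru_cache(maxsize=maxsize)(func)
            # Only one name ('cache') is added to the function, so
            # clashes with existing attributes are unlikely.
            cached.cache = SimpleNamespace(
                clear=cached.cache_clear,  # drop all entries
                info=cached.cache_info,    # hit/miss statistics
            )
            return cached
        return decorator

    @lru()
    def foo(x):
        return x * 2

    foo(2)
    foo.cache.info()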
Like this a lot.

How could I invalidate only the entries related to a specific client/customer/account?

I wonder how they cascade these invalidations in bigger and more complex systems.
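One common answer, sketched here with a plain dict standing in for the real backend (this is not Ring functionality): namespace keys with a per-account version number, so bumping the version logically invalidates every key for that account at once, and the stale keys simply age out:

    _store = {}     # stand-in for memcached/redis
    _versions = {}  # current key-version per account

    def _key(account_id, name):
        v = _versions.get(account_id, 0)
        return f'acct:{account_id}:v{v}:{name}'

    def get(account_id, name):
        return _store.get(_key(account_id, name))

    def put(account_id, name, value):
        _store[_key(account_id, name)] = value

    def invalidate_account(account_id):
        # Old keys become unreachable and expire out of the real backend.
        _versions[account_id] = _versions.get(account_id, 0) + 1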
The API doesn't seem to be as fleshed out as dogpile.cache's yet.

Normally you don't want to pass a cache backend instance to decorators at module level.
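For contrast, dogpile.cache's pattern: the region object exists at import time, but the backend is only bound when configure() runs at application startup. A sketch from memory; check the dogpile.cache docs for the exact backend names and arguments:

    import requests
    from dogpile.cache import make_region

    region = make_region()  # module level, but no backend chosen yet

    @region.cache_on_arguments(expiration_time=60)
    def get_url(url):
        return requests.get(url).content

    def init_cache():
        # Called once at startup with runtime configuration.
        region.configure(
            'dogpile.cache.memcached',
            arguments={'url': '127.0.0.1:11211'},
        )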
> Cache is a popular concept widely spread on the broad range of computer science but its interface is not well developed yet.

This sentence is grammatically incorrect. Replace "Cache" with "Caching".
I needed something like this that allows access to, and manual manipulation of, the cache, and I ended up forking the functools.lru_cache code. This library definitely fits the bill.
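In case it helps anyone doing the same, a minimal sketch of that kind of fork: a memoizing decorator whose store is a plain dict you can inspect and edit (unbounded and positional-args-only, purely illustrative):

    import functools

    def inspectable_cache(func):
        store = {}

        @functools.wraps(func)
        def wrapper(*args):
            if args not in store:
                store[args] = func(*args)
            return store[args]

        wrapper.store = store  # direct access to the cached entries
        wrapper.evict = lambda *args: store.pop(args, None)
        return wrapper

    @inspectable_cache
    def square(x):
        return x * x

    square(3)
    square.store     # {(3,): 9}
    square.evict(3)  # manual, per-key invalidation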
> Memcached itself is out of the Python world

Don't know why this bothers me so much... but it's actually from Perl. It was born at LiveJournal, a well-known Perl shop.
To me, mocking of the caches for testing is super important and missing.

I searched the article, the linked "Why Ring?" page, and this page of responses for "mock", but found no results.

Maybe it's just me!
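In the absence of library support, a hedged sketch of the usual pattern: route cache traffic through one injectable object with the same get/set surface as the real client, then swap in a fake for tests. Nothing here is Ring API; the names are made up:

    class DictCache:
        """In-memory fake with a memcached-ish get/set surface."""
        def __init__(self):
            self.data = {}

        def get(self, key):
            return self.data.get(key)

        def set(self, key, value, expire=0):
            self.data[key] = value

    class NullCache(DictCache):
        """For tests that must exercise the uncached path: never stores."""
        def set(self, key, value, expire=0):
            pass

In tests, pass a DictCache() (to assert on stored values) or a NullCache() (to force recomputation) wherever production code passes the real client, assuming the caching layer only touches get/set.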