Is there a document that lists out Redis best practices like "Redis is very fast as long as you use O(1) and O(log N) commands"?

Sure, it's probably all obvious things, but it would be nice to have a checklist to skim over, to be sure I haven't forgotten any major consideration when designing a new system.
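As a concrete illustration of that complexity rule, here is a minimal sketch (assuming a local Redis server and the redis-py client; the key names are made up). O(1) lookups stay fast at any dataset size, while a blocking O(N) command like KEYS is the classic checklist item to avoid in favor of SCAN:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # O(1): constant time no matter how many keys exist.
    r.set("user:42", "alice")
    r.get("user:42")

    # O(N): KEYS walks the entire keyspace in one blocking call.
    # r.keys("user:*")  # avoid on large datasets

    # SCAN does the same walk in small, non-blocking increments.
    for key in r.scan_iter(match="user:*", count=1000):
        pass  # process each key incrementally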
As I was reading through, the back of my mind was saying “Nice heuristics, but now you’re adding uncertainty. The behavior of DEL could vary dramatically and users would not know why.”

I was relieved that antirez recognized this as a semantic change and gave it a new name. Very thoughtful.

Perhaps there is something to be learned from GC algorithms in this case?
> Everybody knows Redis is single threaded.

You would be surprised.

The whole point of Redis is to run it on machines with strong single-core performance. I've lost count of the times 'experts' threw it onto multi-core beasts of systems and were surprised when a single-core machine that cost half as much outperformed their provisioning.

At the end of the day, latency is king.
I am probably missing something here, but if the delete happens asynchronously, wouldn't the key still be available?
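For what it's worth, the post describes UNLINK as removing the key from the keyspace synchronously; only reclaiming the value's memory is deferred to the background. A small sketch of those semantics (assuming a local Redis >= 4.0 and redis-py):

    import redis

    r = redis.Redis()
    r.rpush("biglist", *range(100_000))  # build a large-ish value

    r.unlink("biglist")            # returns immediately
    print(r.exists("biglist"))     # 0 -- the key is already gone
    print(r.get("biglist"))        # None: no window where it is still readable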
What happens if you check whether that key still exists in a different operation while you're deleting it?

Also, how slow would it be to rename the key before you delete it?
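On the second question: RENAME is O(1), so the rename-first pattern is cheap. A hedged sketch (redis-py assumed; the gc: namespace is hypothetical): readers stop seeing the key the instant it is renamed, and the renamed copy can be freed whenever convenient.

    import uuid
    import redis

    r = redis.Redis()

    tmp = f"gc:{uuid.uuid4()}"   # hypothetical "to be deleted" namespace
    r.rename("biglist", tmp)     # O(1): readers stop seeing the key instantly
    r.unlink(tmp)                # free it in the background (or DEL off-peak)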
He kind of buried the lede: he intends to implement thread-per-connection:

"... it is finally possible to implement threaded I/O in Redis, so that different clients are served by different threads. This means that we’ll have a global lock only when accessing the database, but the clients read/write syscalls and even the parsing of the command the client is sending, can happen in different threads. This is a design similar to memcached, and one I look forward to implement and test."
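A toy sketch of that design (not Redis's actual implementation; plain Python sockets with a dict standing in for the database): each client gets its own thread for the read/parse/write work, and only the command execution against the shared store takes the global lock.

    import socket
    import threading

    DB = {}
    DB_LOCK = threading.Lock()   # the "global lock" from the quote

    def handle_client(conn):
        with conn:
            buf = conn.makefile("rwb")
            for line in buf:                      # read + parse in this thread
                parts = line.decode().split()
                if not parts:
                    continue
                cmd, *args = parts
                with DB_LOCK:                     # only DB access is serialized
                    if cmd == "SET" and len(args) == 2:
                        DB[args[0]] = args[1]
                        reply = "+OK\r\n"
                    elif cmd == "GET" and len(args) == 1:
                        reply = f"{DB.get(args[0], '(nil)')}\r\n"
                    else:
                        reply = "-ERR unknown command\r\n"
                buf.write(reply.encode())         # write back in this thread
                buf.flush()

    srv = socket.create_server(("127.0.0.1", 7379))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()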
It's a nightmare to do perf/scale testing and capacity planning with a backend whose performance characteristics change with the dataset and the operations being performed, and in a fuzzy way at that, due to the 'lazy' behavior. I like memcached way better than this, simply because its API operations are all deterministic.
I have a question. UNLINK will be more responsive than DEL, as it does not block and does the freeing in the background. But will subsequent requests be faster than with DEL? How will it behave if immediately followed by a GET?

During the background processing I think it keeps a list of elements to delete, so when a GET is received it would have to check against that list too. The overhead would remain while incoming queries are being processed and the deletion is not yet finished.

DEL followed by GET takes x sec (mostly due to DEL).
// GET time after DEL is not affected.

UNLINK followed by GET1, GET2, ... takes y1, y2, ... sec.
// Will y1 > y2 > ... until the deletion finishes?

Is this correct? It looks like a trade-off, improving current latency at the cost of later ones. (I feel it is worth it.)
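One way to test this hypothesis directly (assuming a local Redis >= 4.0 and redis-py; the numbers will vary with value size and server settings). Note that UNLINK removes the key from the keyspace up front, so a subsequent GET does not have to consult any pending-delete list; any slowdown in y1, y2, ... would come from the background free competing for CPU and allocator time rather than from the lookup itself.

    import time
    import redis

    r = redis.Redis()

    def fill(key, n=1_000_000):
        # Push a large list in chunks via a pipeline.
        pipe = r.pipeline()
        for i in range(0, n, 10_000):
            pipe.rpush(key, *range(i, i + 10_000))
        pipe.execute()

    def timed(label, fn):
        t0 = time.perf_counter()
        fn()
        print(f"{label}: {(time.perf_counter() - t0) * 1000:.2f} ms")

    fill("big")
    timed("DEL big", lambda: r.delete("big"))     # blocks until memory is freed

    fill("big")
    timed("UNLINK big", lambda: r.unlink("big"))  # returns almost immediately

    for i in range(5):
        timed(f"GET #{i}", lambda: r.get("big"))  # key already gone: returns None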