Lazy Redis is better Redis

205 points by mostafah over 9 years ago

7 comments

nxb over 9 years ago
Is there a document that lists out Redis best practices like "Redis is very fast as long as you use O(1) and O(log_N) commands"?

Sure, it's probably all obvious things, but it would be nice to have a checklist to skim over, to be sure I haven't forgotten any major consideration when designing a new system.
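
For illustration, one entry such a checklist would likely contain is preferring incremental SCAN over the O(N) KEYS command when iterating the keyspace. A minimal sketch using the redis-py client (the local connection and the "user:*" pattern are assumptions):

    # Sketch: prefer incremental SCAN over the O(N) KEYS command.
    # Assumes a local Redis server and the redis-py client (pip install redis).
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # KEYS walks the whole keyspace in one blocking call -- O(N) in key count:
    #   r.keys("user:*")

    # SCAN iterates in small batches, so each individual call stays cheap.
    matched = [key for key in r.scan_iter(match="user:*", count=100)]
    print(f"found {len(matched)} matching keys")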
mwsherman over 9 years ago
As I was reading through, the back of my mind was saying “Nice heuristics, but now you’re adding uncertainty. The behavior of DEL could vary dramatically and users would not know why.”

I was relieved that antirez recognized this as a semantic change and gave it a new name. Very thoughtful.

Perhaps there is something to be learned from GC algorithms in this case?
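
The new name referred to is UNLINK, which removes the key from the keyspace immediately and reclaims the memory in the background. A rough redis-py sketch of the difference, assuming a local Redis 4.0+ server (the key name and sizes are made up):

    # Sketch: DEL vs UNLINK on a large key.
    # Assumes redis-py (>= 3.5 for the mapping argument) and Redis >= 4.0.
    import redis

    r = redis.Redis()

    # Build a hash big enough that freeing it is noticeable.
    r.hset("big:hash", mapping={f"field:{i}": "x" * 64 for i in range(100_000)})

    # DEL frees the value synchronously, blocking the server until done:
    #   r.delete("big:hash")

    # UNLINK removes the key from the keyspace immediately and hands the
    # value to a background thread for freeing, so the call returns quickly.
    r.unlink("big:hash")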
korzun over 9 years ago
> Everybody knows Redis is single threaded.

You would be surprised.

The whole point of Redis is to run it on powerful single-threaded machines. I can count the number of times 'experts' threw it on multi-core beasts of systems and were surprised when a single-core machine costing half as much destroyed their provisioning.

At the end of the day, latency is king.
seivan over 9 years ago
I am probably missing something here, but if a delete happens asynchronously wouldn't that make the key still be available? What happens if you check if that key still exists in a different operation as you're deleting it?

Also, how slow is an operation to rename it before you delete it?
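
On the rename question: RENAME is an O(1) operation, so renaming the key out of the way makes it invisible to readers immediately, and the expensive reclamation then happens on a key nothing else looks up. (In the lazy-free design itself, the key is removed from the keyspace synchronously and only the memory reclamation happens in the background, so readers never see a half-deleted key.) A rough sketch of the rename pattern with redis-py; the gc: tombstone naming is hypothetical:

    # Sketch of the "rename, then delete" pattern asked about above.
    # RENAME is O(1), so readers stop seeing the key immediately; the
    # slow reclamation then happens on a key nothing else looks up.
    import uuid
    import redis

    r = redis.Redis()

    def delete_out_of_band(r: redis.Redis, key: str) -> None:
        tombstone = f"gc:{key}:{uuid.uuid4().hex}"   # hypothetical naming scheme
        try:
            r.rename(key, tombstone)     # O(1); key vanishes for readers
        except redis.ResponseError:
            return                       # key did not exist
        r.unlink(tombstone)              # or DEL, on servers without UNLINK

    delete_out_of_band(r, "big:hash")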
tedd4u over 9 years ago
He kind of buried the lede: he intends to implement thread-per-connection:

"... it is finally possible to implement threaded I/O in Redis, so that different clients are served by different threads. This means that we’ll have a global lock only when accessing the database, but the clients read/write syscalls and even the parsing of the command the client is sending, can happen in different threads. This is a design similar to memcached, and one I look forward to implement and test."
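
As a toy illustration of the design described in the quote (plain Python, not Redis code): each client connection gets its own thread for socket reads and command parsing, and only access to the shared dictionary is serialized behind a single global lock.

    # Toy thread-per-connection sketch of the memcached-style design:
    # each client thread does its own socket reads and command parsing,
    # and only the shared dictionary access takes the global lock.
    import socket
    import threading

    store = {}                       # the shared "database"
    store_lock = threading.Lock()    # the single global lock

    def handle_client(conn: socket.socket) -> None:
        with conn, conn.makefile("rb") as reader:
            for line in reader:               # read + parse in this thread
                parts = line.decode().split()
                if not parts:
                    continue
                cmd, *args = parts
                with store_lock:              # only data access is serialized
                    if cmd == "SET" and len(args) == 2:
                        store[args[0]] = args[1]
                        reply = "+OK\r\n"
                    elif cmd == "GET" and len(args) == 1:
                        reply = (store.get(args[0]) or "(nil)") + "\r\n"
                    else:
                        reply = "-ERR unknown command\r\n"
                conn.sendall(reply.encode())

    def serve(port: int = 7777) -> None:
        srv = socket.create_server(("127.0.0.1", port))
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()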
vinay_ys over 9 years ago
It's a nightmare to do perf/scale testing and capacity planning with a backend whose performance characteristics change with the dataset & operations being performed - and in a fuzzy way, due to 'lazy'. I like memcached way better than this simply because API operations are all deterministic.
bipin_nag over 9 years ago
I have a question. UNLINK will be more responsive than DELETE as it will not block and will run in the background. But will subsequent requests be faster than with DELETE? How will it behave if immediately followed by GET?

In background processing I think it keeps a list of elements to delete, so when a GET is received it will check against that list too. The overhead will remain while incoming queries are being processed and the deletion is not yet finished.

DELETE followed by GET takes x sec (mostly due to DELETE).
// In GET after DELETE, time is not affected.

UNLINK followed by GET1, GET2, ... takes y1, y2, ... sec.
// Will y1 > y2 > ... until the deletion finishes?

Is this correct? It looks like a trade-off, improving current latency at the cost of later ones. (I feel it is worth it.)
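
One way to check the trade-off described above is to time the commands directly. A small sketch with redis-py against a local Redis 4.0+ instance (key names and sizes are arbitrary); note that UNLINK removes the key from the keyspace right away, so a GET issued immediately afterwards already returns nil and only the memory reclamation runs in the background:

    # Sketch: time DEL vs UNLINK on a large key, plus a GET right after UNLINK.
    # Assumes redis-py and a local Redis >= 4.0 instance.
    import time
    import redis

    r = redis.Redis()

    def fill(key: str, fields: int = 100_000) -> None:
        r.hset(key, mapping={f"f{i}": "x" * 32 for i in range(fields)})

    def timed(label: str, fn) -> None:
        start = time.perf_counter()
        fn()
        print(f"{label}: {(time.perf_counter() - start) * 1000:.2f} ms")

    fill("big:del")
    timed("DEL (blocks while freeing)", lambda: r.delete("big:del"))

    fill("big:unlink")
    timed("UNLINK (frees in background)", lambda: r.unlink("big:unlink"))
    timed("GET right after UNLINK", lambda: r.get("big:unlink"))  # nil; key already gone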