
A proposal for more reliable locks using Redis

54 points by janerik, about 11 years ago

7 comments

mjb, about 11 years ago
The classic "Leases: an efficient fault-tolerant mechanism for distributed file cache consistency" (http://portal.acm.org/citation.cfm?id=74870), dating back to 1989, is a good read about these kinds of systems. It makes some interesting observations about the approach, and introduces the need for bounded drift.

I think antirez is saying "skew" here when "drift" would be more appropriate. The safety property appears to refer to the difference in rates between clocks, rather than the difference in absolute values. That's a much more reasonable assumption, and is likely to be true even with very bad clock hardware over short periods of time.

Obviously the bounded drift assumption,
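To make the drift-vs-skew distinction concrete, here is a minimal sketch (all constants and names are illustrative assumptions, not taken from the proposal) of how a client can budget for bounded drift when deciding whether a lock it acquired is still valid:

    import time

    LOCK_TTL_MS = 10_000        # lifetime granted by Redis (hypothetical value)
    CLOCK_DRIFT_FACTOR = 0.01   # assumed bound on the *rate* difference between clocks

    def remaining_validity_ms(acquire_started_ms, now_ms):
        # Subtract both the elapsed acquisition time and a drift allowance that
        # grows with the TTL: bounded drift keeps this allowance small and
        # predictable, whereas bounding absolute skew would be a far stronger claim.
        elapsed = now_ms - acquire_started_ms
        drift_allowance = LOCK_TTL_MS * CLOCK_DRIFT_FACTOR + 2  # +2 ms clock resolution
        return LOCK_TTL_MS - elapsed - drift_allowance

    start = time.monotonic() * 1000
    # ... acquire the lock here ...
    if remaining_validity_ms(start, time.monotonic() * 1000) <= 0:
        pass  # too much time (or drift) has passed; treat the lock as lost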
Glyptodon, about 11 years ago
Is this new? I feel like using Redis for locks is something that's been going around for a while. I've used Redis to make locks, and also used it to make counting semaphores. It's a fairly interesting use because it's frequently the simplest, lowest-overhead means to solve a problem that's *reliable enough* without actually being reliable.

The most obvious issue is that if Redis goes down you could end up with problems if the processes using the locks continue, particularly depending on when and in what state Redis restarts.

Another is that you have to take an approach of re-checking out your lock so as not to let it expire, if you can't guarantee strict time constraints. Once you do this, you run the risk of something not finishing but extending its lock indefinitely.

A final issue is that you can end up with a situation where there's no guarantee that a waiting task (or whatever you call something that wants a lock or in on a semaphore) will ever run.

I don't really buy those who talk about this being an insane violation of zero-state/share-nothing. When I've needed these kinds of primitives it has rarely had to do with the application state itself - for example, I've used the counting semaphores to control how many worker processes can be active. Likewise, I've used the plain locks (and lock-like structures) to do things like ensure atomic/ordered writes for user sessions (I suppose a session is stateful, but it's also not really shared application state).

In any case, there are some issues, but at the cost of a minimal amount of nursing, the ease of implementation and integration often makes Redis a go-to choice for these kinds of things, particularly in resource- (hardware-) constrained environments. On the other hand, if you're operating at a scale where you've got multiple datacenters and such, it's a different ballgame.
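As a rough illustration of the pattern described above (a single-instance Redis lock with an expiry that its holder periodically "re-checks out"), here is a minimal sketch assuming the redis-py client; the key name and TTL are made up:

    import uuid
    import redis

    r = redis.Redis()                  # assumes a local Redis; adjust host/port as needed
    LOCK_KEY = "lock:report-builder"   # hypothetical key name
    TTL_MS = 30_000

    def acquire():
        # SET key value NX PX ttl: succeeds only if the key does not already exist,
        # and the TTL guarantees the lock eventually frees itself.
        token = str(uuid.uuid4())
        return token if r.set(LOCK_KEY, token, nx=True, px=TTL_MS) else None

    EXTEND_LUA = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('pexpire', KEYS[1], ARGV[2])
    end
    return 0
    """

    def extend(token):
        # Re-check out the lock before it expires, but only if we still own it.
        return bool(r.eval(EXTEND_LUA, 1, LOCK_KEY, token, TTL_MS))

The failure mode mentioned in the comment shows up here directly: a worker that keeps calling extend() while stuck in a loop will hold the lock indefinitely.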
ThePhysicist, about 11 years ago
I wrote a very similar, Redis-based lock in Python a while ago; here it is:

https://gist.github.com/adewes/6103220

It uses Redis pipelines and watchers to make sure that no race conditions occur between two processes requiring the same lock, and uses "expire" keys to avoid deadlocks.
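This is not the linked gist, but for readers unfamiliar with the idiom it describes, here is a minimal sketch of a WATCH/MULTI check-then-set in redis-py (key and argument names are illustrative):

    import redis

    r = redis.Redis()

    def acquire_with_watch(lock_key, owner, ttl_ms):
        with r.pipeline() as pipe:
            while True:
                try:
                    pipe.watch(lock_key)       # the transaction aborts if the key changes
                    if pipe.get(lock_key) is not None:
                        pipe.unwatch()
                        return False           # someone else already holds the lock
                    pipe.multi()
                    pipe.set(lock_key, owner, px=ttl_ms)  # the expiry avoids deadlocks
                    pipe.execute()
                    return True
                except redis.WatchError:
                    continue                   # key changed between GET and EXEC; retry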
jcampbell1, about 11 years ago
> Step 2) It tries to acquire the lock in all the N instances sequentially, using the same key name and random value in all the instances.

> so ideally the client should try to send the SET commands to the N instances at the same time using multiplexing.

I am confused. Are the locks requested sequentially, or at the same time? It seems like if they are requested sequentially, then the random backoff time would need to be a large multiple of the combined latency.
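For what it's worth, the "multiplexing" reading would look roughly like the sketch below: fire the SET commands at all N instances concurrently and count the successes. The instance addresses and the thread-pool approach are assumptions for illustration, using redis-py:

    import uuid
    from concurrent.futures import ThreadPoolExecutor
    import redis

    # Hypothetical instance addresses.
    INSTANCES = [redis.Redis(host=h) for h in
                 ("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5")]
    QUORUM = len(INSTANCES) // 2 + 1

    def try_lock(resource, ttl_ms):
        token = str(uuid.uuid4())

        def one(client):
            try:
                # Same key and random value on every instance, sent in parallel.
                return bool(client.set(resource, token, nx=True, px=ttl_ms))
            except redis.RedisError:
                return False       # an unreachable instance simply counts as a miss

        with ThreadPoolExecutor(max_workers=len(INSTANCES)) as pool:
            acquired = sum(pool.map(one, INSTANCES))
        return acquired >= QUORUM  # a majority of the N instances must agree

Sent this way, the total acquisition time is roughly one slow round trip rather than the sum of all of them, which is what keeps the backoff arithmetic workable.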
nkozyra, about 11 years ago
It seems like a lock should be able to auto-release in a distributed environment if the acquirer is no longer available. Would this not be considered "safe"?

What about broadcasting acquire/release messages?
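That auto-release is essentially what a TTL already provides: if the acquirer disappears, the key simply expires and no broadcast is needed. A minimal sketch, assuming redis-py and made-up key/token names, of expiry-based release plus an ownership-checked early release:

    import redis

    r = redis.Redis()

    # PX gives the lock a lifetime: if the holder crashes or hangs, the key
    # disappears on its own once the TTL elapses.
    r.set("lock:job-runner", "holder-token-123", nx=True, px=15_000)

    # Releasing early must verify ownership, otherwise a client whose lock
    # already expired could delete a lock now held by someone else.
    RELEASE_LUA = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    end
    return 0
    """
    release = r.register_script(RELEASE_LUA)
    release(keys=["lock:job-runner"], args=["holder-token-123"])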
oivavoi, about 11 years ago
So do the locks in the distributed version work something like the NRW concept in Riak?
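The analogy holds at least for the counting: as in a write quorum (W > N/2), the proposal requires agreement from a majority of the N instances. A tiny worked example, with N chosen arbitrarily:

    N = 5                            # number of independent Redis instances (example value)
    quorum = N // 2 + 1              # 3: the smallest majority
    tolerated_failures = N - quorum  # 2 instances can be down without blocking acquisition
    print(quorum, tolerated_failures)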
eddd, about 11 years ago
I stopped reading after the first paragraph... When will people learn that using locks and shared memory does not work? It is just wrong. Things should be immutable; you should share nothing. And by nothing I mean nothing at all. It is just WRONG.