Is this new? I feel like using Redis for locks is something that's been going around for a while. I've used Redis to make locks, and also used it to make counting semaphores. It's a fairly interesting use because it's frequently the simplest, lowest-overhead way to solve a problem that needs to be *reliable enough* without actually being reliable. (Rough sketches of the patterns I mean are at the end of this comment.)

The most obvious issue is that if Redis goes down you can end up with problems if the processes holding the locks keep running, particularly depending on when Redis comes back and in what state.

Another is that, if you can't guarantee strict time bounds on the work, you have to periodically renew (re-check out) your lock so it doesn't expire mid-task. Once you do this, you run the risk of something that never finishes extending its lock indefinitely.

A final issue is that there's no fairness guarantee: a waiting task (or whatever you call something that wants the lock or a slot in the semaphore) may never get to run.

I don't really buy the argument that this is some insane violation of stateless/share-nothing design. When I've needed these kinds of primitives it has rarely had to do with the application state itself - for example, I've used the counting semaphores to control how many worker processes can be active at once. Likewise, I've used the plain locks (and lock-like structures) to do things like ensure atomic, ordered writes for user sessions (a session is arguably stateful, but it's not really shared application state).

In any case, there are some issues, but at the cost of a small amount of nursing, the ease of implementation and integration often makes Redis a go-to choice for these kinds of things, particularly in resource (hardware) constrained environments. On the other hand, if you're operating at a scale where you've got multiple datacenters and such, it's a different ballgame.
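For the basic lock, here's roughly what I mean - a minimal single-instance sketch using redis-py, with the key name and TTL made up for illustration:

    import uuid
    import redis

    r = redis.Redis()

    def acquire_lock(name, ttl_ms=10_000):
        """Try to take the lock; return a token on success, None on failure."""
        token = uuid.uuid4().hex
        # SET NX PX is atomic: it only succeeds if the key doesn't already exist,
        # and the TTL means a crashed holder eventually lets go.
        if r.set(f"lock:{name}", token, nx=True, px=ttl_ms):
            return token
        return None

    # Release has to check the token so you never delete a lock somebody else
    # acquired after yours expired; a Lua script keeps check-and-delete atomic.
    RELEASE = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    end
    return 0
    """

    def release_lock(name, token):
        return r.eval(RELEASE, 1, f"lock:{name}", token)

None of this survives Redis itself going away mid-flight, which is the first caveat above.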
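The renew-your-lock pattern is just an extend call you run on a timer while the work is in flight. Building on the snippet above, here's a sketch, with the renewal interval left to the caller (something like a third of the TTL is typical):

    EXTEND = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('pexpire', KEYS[1], ARGV[2])
    end
    return 0
    """

    def extend_lock(name, token, ttl_ms=10_000):
        # Returns 1 if we still held the lock and pushed the expiry out, 0 if not.
        return r.eval(EXTEND, 1, f"lock:{name}", token, ttl_ms)

The failure mode I mentioned falls straight out of this: a hung worker that keeps calling extend_lock holds the lock forever unless you also cap the total lease time.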
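And the counting semaphore I use for capping worker processes is roughly a sorted set of holders scored by timestamp - again building on the code above; the key names, limit, and timeout are made up, and it inherits the usual caveats about clock skew between clients:

    import time

    def acquire_semaphore(name, limit=5, timeout_s=30):
        token = uuid.uuid4().hex
        now = time.time()
        key = f"sem:{name}"
        pipe = r.pipeline(transaction=True)
        pipe.zremrangebyscore(key, 0, now - timeout_s)  # drop stale holders
        pipe.zadd(key, {token: now})                    # optimistically take a slot
        pipe.zrank(key, token)                          # our position among holders
        rank = pipe.execute()[-1]
        if rank is not None and rank < limit:
            return token
        r.zrem(key, token)                              # over the limit; back out
        return None

    def release_semaphore(name, token):
        return r.zrem(f"sem:{name}", token)

If a holder dies without releasing, the zremrangebyscore line eventually reclaims the slot, which is exactly the "reliable enough without actually being reliable" trade-off.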