So this is for read-through caching with distributed locking. There are a couple of scenarios where this is interesting:

1) Computing the item in the cache is expensive enough for some system, and requested concurrently often enough, that you'd rather block than duplicate the work.
2) There is something non-idempotent about the operation you are blocking on.

But, speaking from experience, be very, very careful with distributed locking, and put lots of logging, monitoring, and timeouts around it if you have to do it. Avoid it if possible. Try hard to use a different persistent data structure, store, or algorithm that makes the locking irrelevant.
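If you do end up needing the pattern, here is a minimal sketch of read-through caching guarded by a distributed lock, assuming Redis and the redis-py client. The key names, TTLs, timeouts, and the `compute_value` callable are illustrative placeholders, not a definitive implementation.

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    CACHE_TTL = 300        # seconds the cached value stays fresh
    LOCK_TIMEOUT = 30      # lock auto-expires so a crashed holder can't wedge everyone
    WAIT_FOR_LOCK = 10     # give up waiting instead of letting callers pile up forever

    def read_through(key, compute_value):
        # Fast path: someone already populated the cache.
        cached = r.get(key)
        if cached is not None:
            return cached

        # Slow path: take a distributed lock so only one caller recomputes.
        lock = r.lock(f"lock:{key}", timeout=LOCK_TIMEOUT,
                      blocking_timeout=WAIT_FOR_LOCK)
        if not lock.acquire():
            # Couldn't get the lock in time; fail loudly rather than hang.
            raise TimeoutError(f"timed out waiting for lock on {key}")
        try:
            # Re-check: another caller may have filled the cache while we waited.
            cached = r.get(key)
            if cached is not None:
                return cached
            value = compute_value()
            r.set(key, value, ex=CACHE_TTL)
            return value
        finally:
            lock.release()

Note the two things the comment above warns about: every lock has a timeout so a dead process can't hold it forever, and the acquire itself is bounded so a stuck computation surfaces as an error you can log and alert on rather than a silent pile-up.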