I had need for a special-purpose lock that solved this problem: there are `N` tasks sharing a single resource. It is incorrect for the `n`th task to acquire a lock on it until after the `n - 1`th task has released its lock, and when the `N - 1`th task releases its lock, the `0`th task may acquire the lock again.

I couldn't find any lock off the shelf that solved this, or a clean way to hack it with a fair mutex. So I wrote my own, and I'm pretty sure it's correct (1).

It's like a ticket lock, except the total number of tickets is known up front, so every task can be handed one ticket in advance. Acquiring the lock is a single CAS: the lock is acquired iff the "current" value of the lock matches the task's ticket value, in which case a magic LOCK value is swapped in. When the holder releases the lock, it writes (ticket + 1) % num_tickets back to "current" so the lock can be acquired by the next task in sequence. (There's a rough sketch of this at the bottom of the comment.)

There's a deadlock condition, though: if one task is cancelled while any other task is outstanding, the sequence eventually reaches the cancelled task's ticket and can never advance past it, so no other task may acquire the lock again, even if the cancelled task never acquired it!

To deal with that, a cancelled task poisons the lock by writing a magic POISON value into "current". When any other task then attempts to acquire the lock and finds POISON, an error is returned and those tasks may be cancelled accordingly.

It would be possible to support adding/removing tasks concurrently by replacing the tickets with a circularly linked list of atomic pointers, so that when one task is cancelled, the "next" pointer of its "prev" node can be atomically swapped to splice it out. But I don't have a use case for that, and it's annoying enough to write in Rust that I didn't want to bother with it.

(1) The code for this is here: https://github.com/m-hilgendorf/sequex/. I call it "sequex" for sequence-mutex. It could also be called a round-robin lock or something like that.

---

You could do this with just a fair ticket lock, where you take a ticket up front N times and then give one ticket to each task. When a task's ticket comes up it acquires the lock, but before releasing the lock it takes another ticket, so it re-enters the queue in order. That's where I started, but I found this implementation cleaner. (Sketched at the very end below.)
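For concreteness, here's a minimal sketch of the acquire/release/poison scheme described above. This is my own illustration, not necessarily the repo's actual API; the names (Sequex, try_acquire, Poisoned) and the particular magic values are assumptions:

    use std::sync::atomic::{AtomicUsize, Ordering};

    const LOCK: usize = usize::MAX;       // magic "held" value
    const POISON: usize = usize::MAX - 1; // magic "poisoned" value

    /// A sequence mutex: tasks acquire in fixed round-robin ticket order.
    pub struct Sequex {
        /// The ticket currently allowed to acquire, or LOCK / POISON.
        current: AtomicUsize,
        num_tickets: usize,
    }

    #[derive(Debug)]
    pub struct Poisoned;

    impl Sequex {
        pub fn new(num_tickets: usize) -> Self {
            // Tickets must not collide with the magic values.
            assert!(num_tickets < POISON);
            Self { current: AtomicUsize::new(0), num_tickets }
        }

        /// One CAS: succeeds iff "current" equals our ticket, swapping in LOCK.
        /// Ok(true) = acquired, Ok(false) = not our turn yet, Err = poisoned.
        pub fn try_acquire(&self, ticket: usize) -> Result<bool, Poisoned> {
            match self.current.compare_exchange(
                ticket, LOCK, Ordering::Acquire, Ordering::Relaxed,
            ) {
                Ok(_) => Ok(true),
                Err(POISON) => Err(Poisoned),
                Err(_) => Ok(false),
            }
        }

        /// Hand the lock to the next ticket in sequence.
        pub fn release(&self, ticket: usize) {
            self.current
                .store((ticket + 1) % self.num_tickets, Ordering::Release);
        }

        /// Called on cancellation so other tasks error out instead of
        /// deadlocking. (A real implementation has to decide how poisoning
        /// races with release; this sketch only covers the happy path.)
        pub fn poison(&self) {
            self.current.store(POISON, Ordering::Release);
        }
    }

Each task would loop on try_acquire (yielding or parking between attempts), do its work, then call release with its own ticket.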
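And here's a sketch of the plain ticket-lock variant from the last paragraph, again with assumed names (TicketLock, take_ticket, etc.) and spinning instead of parking for brevity:

    use std::sync::atomic::{AtomicUsize, Ordering};

    /// A classic fair ticket lock.
    pub struct TicketLock {
        next_ticket: AtomicUsize,
        now_serving: AtomicUsize,
    }

    impl TicketLock {
        pub fn new() -> Self {
            Self {
                next_ticket: AtomicUsize::new(0),
                now_serving: AtomicUsize::new(0),
            }
        }

        /// Tickets are served strictly in the order they are taken.
        pub fn take_ticket(&self) -> usize {
            self.next_ticket.fetch_add(1, Ordering::Relaxed)
        }

        /// Spin until our ticket comes up.
        pub fn wait(&self, ticket: usize) {
            while self.now_serving.load(Ordering::Acquire) != ticket {
                std::hint::spin_loop();
            }
        }

        /// Pass the lock to the next ticket holder.
        pub fn release(&self) {
            self.now_serving.fetch_add(1, Ordering::Release);
        }
    }

    // Take N tickets up front, one per task. Each task takes its *next*
    // ticket while it still holds the lock, so ticket takes are serialized
    // in lock order and the sequence stays 0, 1, ..., N-1, 0, 1, ...
    fn task_loop(lock: &TicketLock, mut ticket: usize) {
        loop {
            lock.wait(ticket);
            // ... use the shared resource ...
            ticket = lock.take_ticket(); // re-enqueue before releasing
            lock.release();
        }
    }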