I don't understand the motivation for this design. Almost immediately I ask, "Why have sentinels? Why not have an election process within the ring of master candidates themselves?" The second question I ask is, "Wouldn't Zookeeper also be an excellent way to coordinate this sort of operation?"

I know that part of the design goal of Redis is to create a system that is "Simple" and "Readable", and the redis server as it stands succeeds at this goal admirably. But the approach in this draft is neither particularly simple nor is it going to make the codebase more readable. It seems fairly awkward and introduces more of a burden on the operation and management of your product.

Can someone explain the value of this design?
I find this bit interesting:

"Modify clients configurations when a slave is elected."

(I assume elected == promoted.) This is an idea I haven't really seen in other servers/services before, and I'm curious how it will be implemented. I assume some sort of pub/sub subscription to each of the slaves so that your server is notified when one of them takes over? It sounds tricky, but really interesting. The document seems a bit scant on details for this part at the moment.

Regardless, I'm really thrilled about this project.

[Edit: Ah, I missed this part:

"client reconfiguration are performed running user-provided executables (for instance a shell script or a Python program) in a user setup specific way."]
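[Edit 2: to make that concrete, here is a rough sketch of what I imagine such a user-provided executable could look like. Everything here is my own guess, not from the draft: I'm assuming the sentinel invokes the script with the old and new master address as command-line arguments, and the config file path is made up.]

    #!/usr/bin/env python
    # Hypothetical client-reconfiguration hook, run by a sentinel after a
    # slave is promoted. The argument layout is assumed, not from the draft:
    #   old_host old_port new_host new_port
    import json
    import os
    import sys

    def main():
        old_host, old_port, new_host, new_port = sys.argv[1:5]

        # Write the new master address to a small config file that the
        # application re-reads before (re)connecting to Redis.
        config = {"redis_host": new_host, "redis_port": int(new_port)}

        tmp_path = "/etc/myapp/redis_master.json.tmp"    # made-up path
        final_path = "/etc/myapp/redis_master.json"
        with open(tmp_path, "w") as f:
            json.dump(config, f)
        # Atomic rename so readers never see a half-written file.
        os.rename(tmp_path, final_path)

    if __name__ == "__main__":
        main()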
As far as I can tell, the core mechanism here is that the sentinels keep an eye on your redis instances and agree on which slave should become master if your master dies. They then "inform the clients" of the configuration change.

One thing that worries me... normally, if I have a configuration setting that might change at runtime, I store it in redis! Does anyone have a good way of storing the location of the redis server itself in a way that can be updated at runtime (assuming a standard shared-nothing architecture), without putting it in redis?
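The best I've come up with so far is a tiny local file that the failover hook rewrites and that clients re-read when a connection fails. Just a sketch, with made-up paths, assuming the redis-py client:

    # Client side of the idea above: read the current master address from
    # a local file that the reconfiguration hook rewrites on failover.
    # Paths and keys here are illustrative only.
    import json

    import redis  # assumes redis-py is installed

    CONFIG_PATH = "/etc/myapp/redis_master.json"

    def connect_to_master():
        # Re-read the file each time so a failover is picked up on reconnect.
        with open(CONFIG_PATH) as f:
            config = json.load(f)
        return redis.StrictRedis(host=config["redis_host"],
                                 port=config["redis_port"])

    def get_value(key):
        # On a connection error, re-read the config (the master may have
        # moved) and retry once.
        try:
            return connect_to_master().get(key)
        except redis.ConnectionError:
            return connect_to_master().get(key)

It works, but it feels like I'm reinventing half of what the sentinels already do, which is why I'd love to hear how other people handle this.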