I'm working on something similar here - <a href="https://github.com/joerussbowman/Scale0" rel="nofollow">https://github.com/joerussbowman/Scale0</a><p>I'm still in the phase of documenting how I want it to work, trying to cover things like failover servers for the primary broker and a protocol for applications to communicate with the broker, so it can tell backends to move to other servers and such. I'm going for a fully service-agnostic approach, which, from what I gather from your docs, yours is too.<p>I'm also plugging in an LRU queue for workers and a queue of waiting replies so routing is a little more secure (if a response isn't in the wait queue, it gets discarded) - a rough sketch of what I mean is below.<p>I was just getting to the point of trying out some code to wrap my head around everything when my schedule changed and I ran out of time to work on it. I'm going on vacation soon and planning to spend some quality time on the project then. The vacation will include no internet access, so I need something to keep my brain busy :)<p>Being able to add/remove workers without having to change a config is also one of the pain points I'm trying to solve, and was one of my first motivating factors - glad to know I'm not the only one who wants to fix that.
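<p>The routing check I have in mind is roughly this shape (just a sketch to think with - none of these names are Scale0's actual API, and the real thing would sit on ZeroMQ sockets rather than plain method calls):
<pre><code>from collections import deque

class Broker:
    """Toy model: LRU worker queue plus a table of requests awaiting replies."""

    def __init__(self):
        self.idle_workers = deque()   # LRU: workers append when ready, popleft to assign
        self.pending = {}             # request_id -> client address waiting on a reply

    def worker_ready(self, worker_addr):
        self.idle_workers.append(worker_addr)

    def dispatch(self, request_id, client_addr, payload):
        if not self.idle_workers:
            raise RuntimeError("no idle workers")   # a real broker would queue the request
        worker = self.idle_workers.popleft()        # hand out the least recently used worker
        self.pending[request_id] = client_addr      # remember who is waiting on this reply
        return worker, payload                      # forward the payload to that worker

    def handle_reply(self, request_id, payload):
        client = self.pending.pop(request_id, None)
        if client is None:
            return None              # response not in the wait queue: discard it
        return client, payload       # otherwise route it back to the waiting client
</code></pre>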
HTTP is not a very expensive protocol. I still haven't seen a convincing rationale for imposing a process boundary between the application server and the HTTP stack.
Would something similar* be possible using Nginx's scripting engine? When a back-end comes online, it connects to an endpoint to notify Nginx, and Nginx adds it to the proxy configuration. I wish I had the time to find out for myself!<p>* similar to the title, that is, as I haven't investigated the project carefully...
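<p>Independent of whether Nginx's scripting hooks can do it, the registration handshake itself is simple enough - roughly this, as a toy Python sketch rather than Nginx configuration (all names made up):
<pre><code>import threading

class UpstreamRegistry:
    """Toy registry: back-ends announce themselves and the proxy round-robins
    across whatever is currently registered -- no static config to edit."""

    def __init__(self):
        self._lock = threading.Lock()
        self._backends = []

    def register(self, host, port):
        with self._lock:
            if (host, port) not in self._backends:
                self._backends.append((host, port))

    def unregister(self, host, port):
        with self._lock:
            if (host, port) in self._backends:
                self._backends.remove((host, port))

    def pick(self):
        # crude round-robin over whatever is currently alive
        with self._lock:
            if not self._backends:
                raise RuntimeError("no backends registered")
            backend = self._backends.pop(0)
            self._backends.append(backend)
            return backend
</code></pre>
<p>A back-end coming online would just hit a /register endpoint on the proxy with its host and port, and the proxy would call register() on something like this instead of anyone editing and reloading a config file.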
This is what Zed Shaw has been trying to achieve with his Mongrel2 project.<p><a href="http://mongrel2.org/home" rel="nofollow">http://mongrel2.org/home</a>