This is something the process manager should handle. With this approach, each language and program has to implement the fd passing and restart coordination itself. It also doesn't integrate well with systemd/upstart, because they want a stable pid.<p>That's why I wrote socketmaster[1]: it's simple enough that it doesn't need to change, and it's the one holding the socket and passing it to your program. I haven't had to touch it for years now.<p>For my current work I wrote crank[2], a refinement of that idea. It's a bit more complex but allows restarts to be coordinated. It implements a subset of the systemd socket activation conventions: all your program has to do is look for a LISTEN_FDS environment variable to find the bound file descriptor, then send a "READY" message on the NOTIFY_FD file descriptor when it's ready to accept new connections. Only then will crank shut down the old process.<p>* [1]: <a href="https://github.com/zimbatm/socketmaster" rel="nofollow">https://github.com/zimbatm/socketmaster</a>
* [2]: <a href="https://github.com/pusher/crank" rel="nofollow">https://github.com/pusher/crank</a>
exec.Command() is a more elegant approach; I wrote about this back in June, though my write-up was specific to an HTTP server: <a href="http://grisha.org/blog/2014/06/03/graceful-restart-in-golang/" rel="nofollow">http://grisha.org/blog/2014/06/03/graceful-restart-in-golang...</a><p>I think the article also misses an important step: you need to let the new process initialize itself (e.g. read its config files, connect to the db, etc.) and then signal the parent that it is ready to accept connections; only at that point should the parent stop accepting. The important point here is that the child may fail to init, in which case the parent should carry on as if nothing happened.
Process upgrades are a variant of fail-over, whether the trigger is hardware death or a bug. I recommend treating upgrades as a chance to test your failure recovery processes.<p>If you really can't afford someone getting a "connection refused", what happens when the machine's network connection dies?
I wonder if there's any library out there that uses the SO_REUSEPORT option (see <a href="http://lwn.net/Articles/542629/" rel="nofollow">http://lwn.net/Articles/542629/</a>). It allows multiple programs to bind and accept on the same port. So I guess it should be possible to just start a second new process and then gracefully terminate the old one. Any thoughts?
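No library is needed for a quick experiment: since Go 1.11, net.ListenConfig's Control hook lets you set socket options before bind. A Linux-only sketch (the constant is defined by hand because the frozen syscall package doesn't export SO_REUSEPORT; real code would use golang.org/x/sys/unix):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"syscall"
)

// soReusePort is SO_REUSEPORT on Linux (kernel >= 3.9).
const soReusePort = 0xf

// reusePortListen binds addr with SO_REUSEPORT set, so a second
// process (the new version) can bind the same port before the old
// one goes away.
func reusePortListen(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			if err := c.Control(func(fd uintptr) {
				serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, soReusePort, 1)
			}); err != nil {
				return err
			}
			return serr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	old, err := reusePortListen("127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	// Simulate the "new version" binding the same port while the old
	// listener is still open.
	neu, err := reusePortListen(old.Addr().String())
	if err != nil {
		panic(err)
	}
	fmt.Println("both bound:", old.Addr(), neu.Addr())
	neu.Close()
	old.Close()
}
```

One caveat worth hedging on: as I understand it, connections still sitting in the old listener's accept queue when it closes can be reset rather than handed over, so this alone isn't a fully graceful handoff.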
Interesting technique! I can see this being useful in applications that are single points of failure. In redundant systems, however, I have found it quite effective, and generally prefer, to solve this problem upstream of the application, in the load balancer, by routing traffic around machines during each machine's deployment.<p>First step of a deployment: shift traffic away from the machine while allowing outstanding requests to complete gracefully. Next you can install new software or undertake any upgrade actions in isolation; this way any costs involved in the deployment don't impair the performance of real traffic. Bring the new version up (and prewarm it if necessary). Finally, direct the load balancer to resume traffic. We call the general idea "bounce deployments", as a feature of the deployment engine.<p>Two advantages of having a general-purpose LB solution:<p>(1) You can apply it to any application or protocol, regardless of whether the server supports this type of socket handoff. To be fair, some protocols are more difficult to load balance than others, but most can be done with some elbow grease (even SSH).<p>(2) It's possible to run smoke tests and sanity tests against the new app instance, such that you can abort bad deployments with no impact. Our deployment system has a hook for sanity tests to be run against a service after it comes up. These can verify its function before the instance is put back into the LB, and are sometimes used to warm up caches. If you view defects and bad deployments as inevitable, then the ability to "reject" a new app version in production with no outage impact is a great safety net. With the socket handover, your new server must function perfectly, immediately, or else the service is impaired. (Unless you keep the old version running and can hand the socket back?)<p>(By LB I don't necessarily mean a hardware LB. A software load balancer suffices as well, or any layer acting as a reverse proxy with the ability to route traffic away from a server automatically.)<p>A technique like this would also be useful for implementing single points like load balancers or databases themselves, so that they can upgrade without an outage. Though failover or a DNS flip is usually also an option.
Questions from someone who doesn't use Go:<p>1. Won't this leave the parent process running until the child completes? And, if you do this again & again, won't that stack up a bunch of basically dead parent processes? Maybe I'm misunderstanding how parent/child relationships work with ForkExec.<p>2. What if you want the command-line arguments to change for the new process?<p>3. In addressing (2), would it generally be simpler to drop the parent-child relationship in favor of a wrapper program? The running (old) process can write its listener file descriptor to a file, similar to how it is done here, and the wrapper reads that file & sets an environment variable (or cmd-line argument) telling the new process which FD to reuse.<p>The wrapper could be used for any server process which adheres to a simple convention:<p>on startup, re-use a listener FD if provided (via env or cmd line ... or ./.listener)<p>once listening, write your listener FD to a well-known file (./.listener)<p>on SIGTERM, stop accepting new connections but don't close the listener (& exit after waiting for current connections to close, obvi)<p>4. Am I the only one who finds "Add(1)/Done()" to be an odd naming convention? I might go with "Add(1)/Add(-1)" instead, just for readability.
Here's the library that implements this pattern:<p><a href="https://github.com/gwatts/manners" rel="nofollow">https://github.com/gwatts/manners</a><p>And Mailgun's fork that supports passing file descriptors between processes:<p><a href="https://github.com/mailgun/manners" rel="nofollow">https://github.com/mailgun/manners</a>
Goagain by Richard Crowley is a great package that we are using for graceful restarts: <a href="https://github.com/rcrowley/goagain" rel="nofollow">https://github.com/rcrowley/goagain</a><p>EDIT: added author
I've played with Einhorn from Stripe, which works pretty nicely for graceful restarts too:<p><a href="https://stripe.com/blog/meet-einhorn" rel="nofollow">https://stripe.com/blog/meet-einhorn</a><p><a href="https://github.com/stripe/go-einhorn" rel="nofollow">https://github.com/stripe/go-einhorn</a>
So I wrote a golang application and it runs behind nginx. My server "restart" when I want to push new code is:<p>Re-run my program on a different port, point nginx at the new port, reload nginx, kill the old process.<p>Curious what is so bad about this approach? I admit it's hacky, but it works. Are there just too many moving parts?
Does anyone know what's going on with this line:<p>> file := os.NewFile(3, "/tmp/sock-go-graceful-restart")<p>What's with that filesystem path, which isn't referenced anywhere else, and which should be unnecessary because the file descriptor 3 is inherited when the process starts?
Here's the `grace` package from Facebook: <a href="https://github.com/facebookgo/grace" rel="nofollow">https://github.com/facebookgo/grace</a>
Here's a way to do it for any language: <a href="https://github.com/iffy/grace" rel="nofollow">https://github.com/iffy/grace</a>