Fun stuff!

If you like this kind of thing, we are building a very powerful and flexible reverse proxy with load balancing into Caddy 2: https://github.com/caddyserver/caddy/wiki/v2:-Documentation#httphandlersreverse_proxy

It's mostly "done", actually. It's already looking really promising, especially considering that it can do things other servers keep proprietary, if they do them at all (for example, NTLM proxying, or coordinated automation of TLS certs in a cluster).

If you want to get involved, now's a great time while we're still in beta! It's a fun project that the community is really coming together to help build.
Seems like most of the work is done by the standard library's `httputil.ReverseProxy` and this code is more about health checking.

Nice to see how simple it is now, though. Go is definitely a great choice for low-level networking, and .NET Core has recently become a great option as well.
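For reference, a bare-bones version of that glue, assuming a single placeholder backend on localhost:8081:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder backend; httputil.ReverseProxy handles the actual
        // request rewriting, forwarding, and response copying.
        target, err := url.Parse("http://localhost:8081")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }

Everything the article adds on top of this (backend pool, round-robin selection, health checks) is bookkeeping around that one handler.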
It's nice to see a walkthrough of what goes into a load balancer and how simple it is to build one in Go.

One nitpick is that the author reversed the meaning of active and passive health checks. Active generates new traffic to the backends just to determine their healthiness; passive judges this based on the responses to normal traffic.
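To make the distinction concrete, a rough sketch in Go (the probe and the markDown callback are illustrative, not the article's code):

    package healthcheck

    import (
        "net"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    // Active check: the balancer generates traffic of its own (here a
    // plain TCP dial) purely to probe whether the backend is up.
    func isAlive(u *url.URL) bool {
        conn, err := net.DialTimeout("tcp", u.Host, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    // Passive check: no extra traffic; the backend is judged by how real
    // proxied requests fare. markDown is a hypothetical callback that
    // flags the backend as unhealthy in the pool.
    func withPassiveCheck(proxy *httputil.ReverseProxy, markDown func()) {
        proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
            markDown()
            http.Error(w, "service unavailable", http.StatusServiceUnavailable)
        }
    }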
I found it more convenient to keep the field unnamed when using a mutex in a struct. So in the example that would be:

    type Backend struct {
        URL          *url.URL
        Alive        bool
        sync.RWMutex
        ReverseProxy *httputil.ReverseProxy
    }
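The payoff of embedding is that the RWMutex's methods are promoted onto the struct, so accessors along these lines can lock the backend directly:

    // Lock/RLock/Unlock/RUnlock are promoted from the embedded sync.RWMutex.
    func (b *Backend) SetAlive(alive bool) {
        b.Lock()
        b.Alive = alive
        b.Unlock()
    }

    func (b *Backend) IsAlive() bool {
        b.RLock()
        defer b.RUnlock()
        return b.Alive
    }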
With all this talk of proxies and Go, I'm surprised gobetween hasn't been mentioned yet: https://github.com/yyyar/gobetween. Last time I looked at the source it was very approachable.
Load balancers seem like one of those problems that engineers should be cutting their teeth on.

And yet we have only a handful, and one of the most popular charges money for cool features and does not appear to have an ABI for addons.
This is pretty cool. But I think an implementation that avoids the mutexes (mutices?) when allocating the backends and uses channels instead would probably perform better. Two channels are needed: one for available backends and one for broken ones.

On an incoming request, the front end selects an available backend from channel 1. On completion, the backend puts itself back onto channel 1 on success, or onto channel 2 on error.

Channel 2 is periodically drained to test the previously failed backends to see if they're ready to go back onto channel 1.
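A rough sketch of that shape, with made-up types and helpers (a design sketch, not a drop-in replacement for the article's code):

    package pool

    import "time"

    type Backend struct{ /* URL, ReverseProxy, ... */ }

    // The pool is just two buffered channels: backends ready to serve,
    // and backends that recently failed. Channel operations provide the
    // synchronization, so no mutex is needed.
    type Pool struct {
        healthy chan *Backend
        broken  chan *Backend
    }

    // Do takes a backend for one request and returns it to the channel
    // that matches the outcome.
    func (p *Pool) Do(serve func(*Backend) error) error {
        b := <-p.healthy
        if err := serve(b); err != nil {
            p.broken <- b
            return err
        }
        p.healthy <- b
        return nil
    }

    // Recover periodically drains the broken channel and re-probes each
    // backend, moving the ones that pass back into rotation.
    func (p *Pool) Recover(probe func(*Backend) bool, every time.Duration) {
        for range time.Tick(every) {
            for i := len(p.broken); i > 0; i-- {
                b := <-p.broken
                if probe(b) {
                    p.healthy <- b
                } else {
                    p.broken <- b
                }
            }
        }
    }

One side effect of this design: the channel capacity also acts as a concurrency limit, since a backend that is checked out can't be handed to another request until it's returned.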
> Multiple clients will connect to the load balancer and when each of them requests a next peer to pass the traffic on race conditions could occur.

I don't quite understand what this means. What race conditions? Can anybody explain? Thanks.
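For context, the hazard being described is the shared round-robin index: net/http runs each request in its own goroutine, so a plain counter increment is an unsynchronized read-modify-write. A minimal sketch of the usual atomic fix (names are illustrative):

    package rr

    import "sync/atomic"

    var current uint64

    // NextIndex picks the next backend round-robin. A bare current++
    // from two goroutines can lose an update or hand both requests the
    // same backend; atomic.AddUint64 makes the step indivisible.
    func NextIndex(numBackends int) int {
        return int(atomic.AddUint64(&current, 1) % uint64(numBackends))
    }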
The author states:

> After playing with professional Load Balancers like NGINX I tried creating a simple Load Balancer for fun.

And while nginx[0] certainly can perform in this role, another production-quality load balancer is HAProxy[1]. Both can do more than this, of course.

Reinventing solutions "for fun" certainly can be educational and help others learn key concepts, but the author should clearly state that what they are doing is not meant to replace production-quality solutions.

0 - https://www.nginx.com/

1 - https://www.haproxy.com/solutions/load-balancing/