Oh hey HN. This is in the Caddy 2.6 beta already: <a href="https://github.com/caddyserver/caddy/releases/tag/v2.6.0-beta.3" rel="nofollow">https://github.com/caddyserver/caddy/releases/tag/v2.6.0-bet...</a> - please try it out!<p>Thanks to Marten Seemann for maintaining the quic-go library we use. (I still haven't heard whether Go will add HTTP/3 to the standard library.)<p>Caddy 2.6 should be the first stable release of a general-purpose server to support and enable standardized HTTP/3 by default. HTTP versions can be toggled on or off. (Meaning you can serve <i>only</i> HTTP/3 if you're hard-core.)<p>PS. Caddy 2.6 will be our biggest release since 2.0. My draft release notes are about 23 KB. We're looking at huge performance improvements and powerful new features like events, virtual file systems, HTTP 103 Early Hints, and a lot of other enhancements I'm excited to show off on behalf of our collaborators!
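For example, toggling HTTP versions is done with a server option in the Caddyfile's global block, roughly like this (sketch based on the 2.6 docs; exact option names may differ in the beta, and the domain is a placeholder):

```
{
	servers {
		# serve only HTTP/3; list h1, h2, h3 in any combination
		protocols h3
	}
}

example.com {
	respond "hello over HTTP/3"
}
```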
I've been using Caddy in a 3-node HA cluster sharing an anycast BGP address for about 18 months now and it's been fantastic. Certs "just work" across the cluster once consul is wired up. I recently added IPv6, which also "just works."<p>greenpau/caddy-security is fantastic and "just works" for OIDC SSO.<p>mholt, thanks for recently adding the ability to bind to multiple specific IP addresses by default; this helps me conserve precious public IPv4 addresses.
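For anyone curious, the <i>bind</i> directive accepts multiple host addresses, so pinning a site to specific IPs looks roughly like this (addresses and upstream are placeholders):

```
example.com {
	bind 203.0.113.10 2001:db8::10
	reverse_proxy 127.0.0.1:8080
}
```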
Thanks for a great piece of software that I use every day in production and just works.<p>At first I was scared of how stupid simple it is. It feels like web servers are supposed to have giant config files with a hundred mysterious knobs to twiddle. Now I always default to Caddy, and have yet to find an instance where it didn't fit my needs. Congrats.
Weirdly enough, among the reasons given to switch from nginx to Caddy, I never see the lack of observability in nginx and its almost useless Prometheus exporter.<p>I see several posts discussing under which circumstances or use cases one might outperform the other, but they never seem to care about having decent metrics.<p>I might overrate the importance of this, who knows...
lucaslorentz/caddy-docker-proxy works like Traefik: container metadata labels are added to the reverse proxy configuration, which is reloaded upon container events that you can listen to by subscribing to a Docker/Podman v3 socket (which is unfortunately not read-only).<p>So, with Caddy or Traefik, a container label can enable HTTP/3 (QUIC over UDP, port 443 by default) for just that container.<p>"Labels to Caddyfile conversion"
<a href="https://github.com/lucaslorentz/caddy-docker-proxy#labels-to-caddyfile-conversion" rel="nofollow">https://github.com/lucaslorentz/caddy-docker-proxy#labels-to...</a><p>From
<a href="https://news.ycombinator.com/item?id=26127879" rel="nofollow">https://news.ycombinator.com/item?id=26127879</a> re: containersec :<p>> > <i>- [docker-socket-proxy] Creates a HAproxy container that proxies limited access to the [docker] socket</i>
Will have to give Caddy a try. Been a long-time nginx user, but they have been very slow to implement new features (including HTTP/3), unfortunately.
Tangential, but Google is serving HTTP/3 by default out of Google Cloud now: <a href="https://cloud.google.com/blog/products/networking/cloud-cdn-and-load-balancing-support-http3" rel="nofollow">https://cloud.google.com/blog/products/networking/cloud-cdn-...</a><p>Pretty cool stuff.
I have been using HAProxy, and it has been very performant, but the lack of documentation for its APIs may be the reason I will start playing with Caddy.
I'm curious about the implementation, and haven't looked at the source of quic-go yet: Does it use a single UDP socket to handle datagrams for all QUIC connections, does it use connected UDP sockets per connection, or does it use multiple UDP sockets, each handling a certain set of connections - where an external load balancer is required to direct traffic to the right socket?<p>Unfortunately there's no best answer for this: Using a single socket will allow for connection migration, but it will end up being a scalability bottleneck since it serializes access to a lot of kernel and driver data structures (just a single transmit/receive queue). Connected sockets avoid that, but don't allow for address migration. And doing external load balancing gets far more complex than just starting a binary - even the simplest solution requires running XDP code.
Beware of a performance hit (in terms of bps, not req/s) if you push big data with Caddy.<p>The Go implementation of HTTP/2 already takes a 5x hit versus HTTP/1.1 (Go's HTTP/2 implementation is 5x slower than its HTTP/1.1).<p>With HTTP/3, our early benchmarks indicate another 2-3x drop from HTTP/2 (so roughly 10x slower than HTTP/1.1).
For HTTP/3 support with python clients:<p>- aioquic now supports HTTP/3 <a href="https://github.com/aiortc/aioquic" rel="nofollow">https://github.com/aiortc/aioquic</a><p>- httpx is mostly requests-compatible, supports client-side caching and HTTP/1.1 & HTTP/2, and here's the issue for HTTP/3 support:
<a href="https://github.com/encode/httpx/issues/275" rel="nofollow">https://github.com/encode/httpx/issues/275</a>
Since address validation is about blocking senders who forge their IP address, I think the number of connection attempts where the client never eventually validates its address should factor into the decision to enable this feature. That should rarely happen for legitimate clients (e.g. connection loss/cancellation during the first round trip) but always happen for IP address forgers.<p>Or perhaps simply use the number of half-open/embryonic connections as the metric.
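The embryonic-connection heuristic could be as simple as a counter with a threshold; a stdlib-only sketch (type, method names, and threshold are all mine, not quic-go's, though quic-go has exposed a per-address validation hook in some versions that a policy like this could plug into):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// retryPolicy decides when to demand address validation (i.e. send
// Retry packets) based on the number of embryonic connections:
// attempts whose source address has not yet been validated.
// Spoofed-source floods inflate this count indefinitely, while
// legitimate clients validate quickly and decrement it.
type retryPolicy struct {
	embryonic int64 // current half-open, unvalidated attempts
	threshold int64 // above this, require validation up front
}

func (p *retryPolicy) onAttempt()   { atomic.AddInt64(&p.embryonic, 1) }
func (p *retryPolicy) onValidated() { atomic.AddInt64(&p.embryonic, -1) }

// requireValidation reports whether new attempts should get a Retry.
func (p *retryPolicy) requireValidation() bool {
	return atomic.LoadInt64(&p.embryonic) > p.threshold
}

func main() {
	p := &retryPolicy{threshold: 2}
	p.onAttempt()
	p.onAttempt()
	fmt.Println(p.requireValidation()) // two half-open: under threshold, false
	p.onAttempt()
	fmt.Println(p.requireValidation()) // three half-open: true, send Retry
	p.onValidated()
	fmt.Println(p.requireValidation()) // back under threshold, false
}
```

A production version would likely decay or bucket the counter per source prefix so one noisy /24 doesn't force Retry on everyone, but the single global counter matches the "half-open connections as the metric" idea above.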