shutdown() and half-closes are not "archaic" features.<p>You need them to get even basic connection teardown right (see <a href="https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable" rel="nofollow">https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-...</a>), and you need them even more to implement "modern" application-layer protocols like HTTP/2 (if you don't use them, you get data-loss bugs like this: <a href="https://trac.nginx.org/nginx/ticket/1250#comment:4" rel="nofollow">https://trac.nginx.org/nginx/ticket/1250#comment:4</a>).
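For illustration, a minimal sketch (C, blocking socket, error handling elided) of the "lingering close" pattern the first link describes: half-close your own side, drain whatever the peer still has in flight, and only then close(), since a bare close() with unread data can turn into an RST that discards data:

    #include <sys/socket.h>
    #include <unistd.h>

    static void reliable_close(int fd) {
        char buf[4096];

        /* Send our FIN: done writing, still able to read. */
        shutdown(fd, SHUT_WR);

        /* Drain until the peer closes its side (read() returns 0),
         * so nothing the peer sent gets silently thrown away. */
        while (read(fd, buf, sizeof buf) > 0)
            ;

        close(fd);
    }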
I applaud the effort to hate on "smart" middleware proxies!<p>That being said, the author gets no points for namedropping random distributed-systems algorithms and using TCP keepalives (which RFC 1122 says must default to no less than two hours!) as an argument against TLS-terminating proxies.<p>Is there a reason to (as he says) "fully implement the protocol" in the proxy? I battled with WebSockets through Pound last week, and it simply doesn't work, because Pound's author took a non-Postel stance on protocol specifics.<p>Switching to a protocol-agnostic proxy like hitch (previously stud) fixed that without losing functionality, and I expect it to age better as well.
I don't grok this: if TCP's model has fundamental problems, how come the Internet works? :)<p>The fact that a protocol is technically imperfect and causes grief for ISPs doesn't mean the application layer has to get involved.<p>I've been writing TCP-based apps for years, and the stream abstraction has never failed me. After reading this, I don't see why I should change that assumption. I have to rebuild connections occasionally, but that has never cost my application so much that an alternative, more complicated abstraction layer made sense.
I usually write request/response over TCP, an even more inaccurate abstraction; occasionally non-blocking code. Never have I wanted more complexity than NIO in my application layer.<p>Devs do know that "TCP is not a stream of bytes" but deliberately don't want to get app code involved.
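For what it's worth, the request/response layer is usually just a thin framing convention over the stream. A sketch of the sending side (C, assuming a hypothetical 4-byte big-endian length prefix; error handling kept minimal):

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <unistd.h>

    /* write() may accept fewer bytes than asked; loop until done. */
    static int write_all(int fd, const void *p, size_t n) {
        const char *buf = p;
        while (n > 0) {
            ssize_t w = write(fd, buf, n);
            if (w <= 0) return -1;
            buf += w;
            n -= (size_t)w;
        }
        return 0;
    }

    /* One message = 4-byte length prefix, then the payload. */
    static int send_msg(int fd, const void *msg, uint32_t len) {
        uint32_t hdr = htonl(len);
        if (write_all(fd, &hdr, sizeof hdr)) return -1;
        return write_all(fd, msg, len);
    }

The point stands either way: the framing lives in a library once, and application code above it never touches the byte stream.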
I suppose you could call it a two-node consensus algorithm, the same way plugging a flash drive into your laptop is. Even after reading, I don't see the benefit of viewing TCP this way.
The problem is in thinking of an HTTPS request-response through proxies as a single TCP connection. It isn't.<p>A TLS proxy is not a normal part of a layered TCP/IP connection. It's literally in the name: "terminating" proxy. It stops the connection right there. Anything after the TLS proxy is outside the scope of the initial connection. Applications have to be engineered to pass on data from one connection to another.<p>An example is stateful firewalls. Almost all stateful firewalls are NAT gateways with rules. NAT gateways are designed to pass certain things from one connection to another, but they are not simply unwrapping a layer from a connection and passing it on: they maintain separate connections. <i>edit</i> Apparently I'm wrong here: Netfilter only defragments packets and then rewrites addresses and ports, though firewall vendors do basically keep independent connections (for security reasons).<p>TCP is specified just fine <i>for consensus on a single TCP connection</i>. It isn't specified for an HTTPS connection through middleware. Hence, such middleware is complicated.
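To make the "two connections" point concrete, here is one direction of a terminating proxy's data path (a C sketch, blocking I/O, error handling and the reverse direction elided; relay is a hypothetical name). Nothing crosses between the two sockets unless the proxy copies it, including the FIN:

    #include <sys/socket.h>
    #include <unistd.h>

    static void relay(int client_fd, int upstream_fd) {
        char buf[4096];
        ssize_t n;

        /* Bytes move between the two TCP connections only
         * because this loop copies them. */
        while ((n = read(client_fd, buf, sizeof buf)) > 0)
            write(upstream_fd, buf, (size_t)n);  /* short writes ignored here */

        /* The client's FIN doesn't propagate by itself either: relay
         * it as a half-close so the upstream's pending response
         * isn't cut off. */
        shutdown(upstream_fd, SHUT_WR);
    }

Get that last step wrong (full close instead of half-close) and you're back to the data-loss bugs mentioned in the first comment.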