I see all the arguments for optimizing HTTP, but by going binary and adding this much complexity, I think we lose something valuable.

A huge part of what I know I owe to tinkering with protocols in the late '90s. Being able to send an email using telnet on port 25 was one hell of an eye-opener for me (first sketch below). And even today, being able to quickly debug an HTTP problem using telnet is incredibly handy (second sketch).

Yes, 1.1 will remain, and with it some of the debuggability. But then what you are quickly debugging is no longer what browsers actually see. Yes, you can add more tools to the mix to help you, but I still think we lose something (much like moving to a binary syslog format, by the way).

Is the speed gain from HTTP/2.0 really worth the loss of discoverability and the increase in complexity? My feeling is that connections are getting faster more quickly than optimizing HTTP will ever gain us.

If HTTP over TCP is inefficient, can't we try to "fix" TCP instead? Yes, that will be really hard, but so will getting the Upgrade header to work reliably enough to do HTTP/2.0 over port 80 (third sketch). Too much stuff (proxies, middleboxes) interferes with HTTP these days (maybe partly a result of how readable the current protocol is, I don't know).

I wonder whether these aspects are part of the discussion currently happening, or whether this feeling of mine is just an effect of me getting old.
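To make the telnet point concrete, here is roughly what a hand-typed SMTP session looks like (the hostnames and addresses are made up, and exact server replies vary; the lines starting with a three-digit code are the server talking):

    $ telnet mail.example.com 25
    220 mail.example.com ESMTP
    HELO client.example.com
    250 mail.example.com
    MAIL FROM:<alice@example.com>
    250 OK
    RCPT TO:<bob@example.com>
    250 OK
    DATA
    354 End data with <CR><LF>.<CR><LF>
    Subject: Hello

    Hi Bob!
    .
    250 OK: queued
    QUIT
    221 Bye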
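The HTTP/1.1 equivalent, still my go-to sanity check (example.com stands in for whatever you are debugging; the response headers will of course differ):

    $ telnet example.com 80
    GET / HTTP/1.1
    Host: example.com
    Connection: close

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Content-Length: 1256

    <!doctype html>
    ...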
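And for contrast, a sketch of the proposed cleartext upgrade on port 80 (the exact tokens have varied between drafts, and the HTTP2-Settings value is a base64-encoded settings block I've omitted here). Note that the exchange is only human-readable up to the 101; everything after that is binary frames, which is exactly the part I'd expect interfering intermediaries to mangle:

    GET / HTTP/1.1
    Host: example.com
    Connection: Upgrade, HTTP2-Settings
    Upgrade: h2c
    HTTP2-Settings: <base64url-encoded SETTINGS payload>

    HTTP/1.1 101 Switching Protocols
    Connection: Upgrade
    Upgrade: h2c

    ... binary HTTP/2 framing from here on ...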