Why didn't they use SRV[1] records in DNS to resolve HTTP/2 requests? They have so many advantages:<p><pre><code> * Permitted at the domain apex (yes really! unlike CNAMEs!)
 * Allows weighted round-robin
* Allows lower-priority fallback services
* Unusual port numbers no longer required in URIs
* Doesn't get confused with non-HTTP services located at the same FQDN.
</code></pre>
It's the modern way to federate services! And there's very wide DNS server support - everything from BIND to Active Directory.<p>Fortunately, neither the standard nor (as far as I can see) the normative references actually say you have to use an A-type record. Unfortunately, A records will remain the convention unless someone makes this easy but explicit change.<p>I'd get involved but I fear the politics. Would I have any chance of being able to advocate for this change?<p>[1] <a href="http://en.wikipedia.org/wiki/SRV_record" rel="nofollow">http://en.wikipedia.org/wiki/SRV_record</a>
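To make the idea concrete, client-side discovery could look roughly like this. This is only a sketch: it assumes the dnspython library, and the "_http2._tcp" service label is my invention, since no such convention is standardized.<p><pre><code># Hypothetical HTTP/2 endpoint discovery via SRV, using dnspython.
import dns.resolver

def resolve_http2_endpoints(domain):
    answers = dns.resolver.resolve('_http2._tcp.' + domain, 'SRV')
    # Lowest priority wins; weight spreads load within a priority class.
    # (A real client would do weighted random selection per RFC 2782;
    # plain sorting is a simplification.)
    records = sorted(answers, key=lambda r: (r.priority, -r.weight))
    return [(str(r.target).rstrip('.'), r.port) for r in records]

# e.g. [('server1.example.com', 8443), ('backup.example.com', 443)]
print(resolve_http2_endpoints('example.com'))
</code></pre>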
> Another new concept is the ability for either side to push data over an established connection. While the concept itself is hardly revolutionary — this is after all how TCP itself functions – bringing this capability to the widespread HTTP world will be no small improvement and may help marry the simplicity of an HTTP API with the fully-duplexed world of TCP. While this is also useful for a server-to-server internal APIs, this functionality will provide an alternative to web sockets, long polling, or simply repeated requests back to the server – the traditional three ways to emulate a server pushing live data in the web world.<p>As far as I know, this is not true. Server Push is server-initiated only, and a push can only happen in response to a client request, so it's not a WebSocket alternative.<p>Server Push means that when a client sends a request (GET /index.html), the server can respond with responses for multiple resources (e.g. /index.html, /style.css and /app.js can all be sent). The client then doesn't have to explicitly GET those resources, which saves round-trips and reduces latency.
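Here's a rough sketch of that flow using the Python h2 library's API; the library and the in-memory wiring are my choice of illustration, not anything from the draft itself:<p><pre><code># Sketch of HTTP/2 Server Push semantics with the "h2" library.
import h2.config
import h2.connection

client = h2.connection.H2Connection(
    config=h2.config.H2Configuration(client_side=True))
server = h2.connection.H2Connection(
    config=h2.config.H2Configuration(client_side=False))
client.initiate_connection()
server.initiate_connection()
server.receive_data(client.data_to_send())  # server reads the client preface

# 1. Client requests /index.html on stream 1 (client streams are odd-numbered).
client.send_headers(1, [
    (':method', 'GET'), (':path', '/index.html'),
    (':scheme', 'https'), (':authority', 'example.com'),
], end_stream=True)
server.receive_data(client.data_to_send())

# 2. Server promises /style.css on stream 2 before answering stream 1.
#    A push is always anchored to an in-flight client request (stream_id=1);
#    there is no way to initiate one on an idle connection.
server.push_stream(stream_id=1, promised_stream_id=2, request_headers=[
    (':method', 'GET'), (':path', '/style.css'),
    (':scheme', 'https'), (':authority', 'example.com'),
])

# 3. Server answers the original request, then the promised one.
server.send_headers(1, [(':status', '200')])
server.send_data(1, b'index page bytes', end_stream=True)
server.send_headers(2, [(':status', '200')])
server.send_data(2, b'stylesheet bytes', end_stream=True)
</code></pre>The asymmetry with WebSockets shows up in step 2: the push is pinned to an existing client request, so the server can never send data out of the blue the way a WebSocket frame can arrive.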
Microsoft has already released and open-sourced server code which (partially) supports HTTP/2.0:
<a href="http://blogs.msdn.com/b/interoperability/archive/2013/07/29/start-testing-with-first-implementation-of-ietf-http-2-0-draft-from-ms-open-tech.aspx" rel="nofollow">http://blogs.msdn.com/b/interoperability/archive/2013/07/29/...</a>
I think that the changes being made for "HTTP 2" are a terrible decision for HTTP. For SPDY, sure, make it as complex and as hard to work with as you want in the name of performance, but please keep my HTTP a nice, simple, text-based protocol that I can work with very easily.<p>I just feel that HTTP should not reïmplement TCP. SPDY/HTTP2 just seems much more complex than necessary.<p><a href="http://jimkeener.com/posts/http" rel="nofollow">http://jimkeener.com/posts/http</a> is a 90% complete post of what I would like to see as HTTP 1.2 and some other things I think would be beneficial.
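For what it's worth, the simplicity argument is easy to demonstrate: with nothing but a bare socket you can speak HTTP/1.1 by hand, which no binary-framed protocol will let you do:<p><pre><code># HTTP/1.1 is simple enough to speak over a bare socket: no framing layer,
# no compression state, just readable text on the wire.
import socket

sock = socket.create_connection(('example.com', 80))
sock.sendall(b'GET / HTTP/1.1\r\n'
             b'Host: example.com\r\n'
             b'Connection: close\r\n'
             b'\r\n')
print(sock.recv(4096).decode('latin-1'))  # the response starts as readable text
sock.close()
</code></pre>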
The draft was released earlier this month. There was an interesting discussion about it back then too:
<a href="https://news.ycombinator.com/item?id=6012525" rel="nofollow">https://news.ycombinator.com/item?id=6012525</a><p>At the same time I also submitted another article that I still think is interesting and relevant as of today:
<a href="https://news.ycombinator.com/item?id=6014976" rel="nofollow">https://news.ycombinator.com/item?id=6014976</a>
Do <a href="http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#section-4.1" rel="nofollow">http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#secti...</a>, <a href="http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#section-4.2" rel="nofollow">http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#secti...</a>, and <a href="http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#section-9.1" rel="nofollow">http://tools.ietf.org/html/draft-ietf-httpbis-http2-04#secti...</a> mean sendfile(2) can't be used with HTTP/2.0?
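My guess: not entirely. You lose a single sendfile() call for the whole response, but you could still sendfile() each DATA frame's payload and write only the small frame headers separately. A rough sketch (frame layout per draft-04; the helper and constants are mine):<p><pre><code># Interleave write() of the 8-octet frame header with sendfile(2)
# for each DATA frame's payload.
import os
import struct

DATA_FRAME = 0x0
END_STREAM = 0x1
MAX_PAYLOAD = 16383  # draft-04 caps frame payloads at 2**14 - 1 octets

def send_file_as_data_frames(sock, fd, stream_id):
    remaining = os.fstat(fd).st_size
    offset = 0
    while remaining:
        chunk = min(MAX_PAYLOAD, remaining)
        flags = END_STREAM if chunk == remaining else 0
        # 16-bit length, 8-bit type, 8-bit flags, 32-bit stream id
        sock.sendall(struct.pack('!HBBI', chunk, DATA_FRAME, flags, stream_id))
        sent = os.sendfile(sock.fileno(), fd, offset, chunk)  # zero-copy payload
        # (real code would loop until the full chunk is on the wire)
        offset += sent
        remaining -= sent
</code></pre>So the zero-copy path survives, at the cost of one extra small write per frame.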
I think HTTP/2.0 should break backward compatibility and take a more ambitious step than little improvements like these. Replacing TCP/IP completely with a more efficiently compressed, more government-resistant and more easily encryptable protocol would be very welcome, especially since adopting HTTP/2.0 even in its current state will take at least a decade.<p>Here's some material that backs my argument:<p>a) <a href="http://rina.tssg.org/docs/PSOC-MovingBeyondTCP.pdf" rel="nofollow">http://rina.tssg.org/docs/PSOC-MovingBeyondTCP.pdf</a><p>b) <a href="http://users.ece.cmu.edu/~adrian/630-f04/readings/bellovin-tcp-ip.pdf" rel="nofollow">http://users.ece.cmu.edu/~adrian/630-f04/readings/bellovin-t...</a><p>And here are more viable, real alternatives that not only promise large speed gains but also improve security and fit today's mobile networks:<p><a href="http://www.fujitsu.com/global/news/pr/archives/month/2013/20130129-02.html" rel="nofollow">http://www.fujitsu.com/global/news/pr/archives/month/2013/20...</a><p><a href="http://roland.grc.nasa.gov/nrg/local/sctp.net-computing.pdf" rel="nofollow">http://roland.grc.nasa.gov/nrg/local/sctp.net-computing.pdf</a> / <a href="http://tools.ietf.org/html/rfc4960" rel="nofollow">http://tools.ietf.org/html/rfc4960</a><p><a href="http://www.qualcomm.com/media/documents/why-raptor-codes-are-better-tcpip-file-transfer" rel="nofollow">http://www.qualcomm.com/media/documents/why-raptor-codes-are...</a><p>PS: I was initially afraid that HTTP/2.0 was optimized for advertisers... phew.