Why didn't they use SRV[1] records in DNS to resolve HTTP/2 requests? They have so many advantages:
* Permitted at the domain apex (yes really! unlike CNAMEs!)
* Allows weighted round-robin
* Allows lower-priority fallback services
* Unusual port numbers no longer required in URIs
* Doesn't get confused with non-HTTP services located at the same FQDN.
It's the modern way to federate services! And there's very wide DNS server support - everything from BIND to Active Directory.
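For illustration, SRV records advertising an HTTP service might look like this in a zone file (the hostnames, TTLs, and port are hypothetical):

```
; _service._proto.name     TTL  class type priority weight port target
_http._tcp.example.com.   3600  IN   SRV  10       60     8080 web1.example.com.
_http._tcp.example.com.   3600  IN   SRV  10       40     8080 web2.example.com.
_http._tcp.example.com.   3600  IN   SRV  20       0      8080 backup.example.com.
```

The two priority-10 records split traffic 60/40 between web1 and web2; the priority-20 record is only tried if neither priority-10 target is reachable.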
Fortunately, neither the standard nor (as far as I can see) the normative references actually says you have to use an A-type record. Unfortunately, that will remain the convention unless someone makes this easy but explicit change.
I'd get involved but I fear the politics. Would I have any chance of being able to advocate for this change?
> Another new concept is the ability for either side to push data over an established connection. While the concept itself is hardly revolutionary – this is, after all, how TCP itself functions – bringing this capability to the widespread HTTP world will be no small improvement and may help marry the simplicity of an HTTP API with the full-duplex world of TCP. While this is also useful for server-to-server internal APIs, this functionality will provide an alternative to web sockets, long polling, or simply repeated requests back to the server – the traditional three ways to emulate a server pushing live data in the web world.
As far as I know, this is not true. Server Push is only for the server and can only be done as a response to a request. It's not a WebSocket alternative.
Server Push means that when a client sends a request (GET /index.html), the server can respond with responses for multiple resources (e.g. /index.html, /style.css and /app.js can be sent). This means the client doesn't have to explicitly GET those resources which saves bandwidth and latency.
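To make those semantics concrete, here is a toy sketch in plain Python – not a real HTTP/2 implementation; the resource contents and the push map are made up for illustration. One request yields several responses, each extra one announced up front by a PUSH_PROMISE:

```python
# Toy model of HTTP/2 Server Push semantics (illustrative only).

RESOURCES = {
    "/index.html": b"<html>...</html>",
    "/style.css": b"body { margin: 0 }",
    "/app.js": b"console.log('hi');",
}

# Which extra resources the server chooses to push for a given request.
PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}

def handle_request(path):
    """Return (promises, responses).

    `promises` models the PUSH_PROMISE frames the server sends first,
    announcing each pushed resource; `responses` models the response
    bodies: one for the original request plus one per pushed resource.
    """
    promises = PUSH_MAP.get(path, [])
    responses = [(p, RESOURCES[p]) for p in [path] + promises]
    return promises, responses

promises, responses = handle_request("/index.html")
print(promises)                    # ['/style.css', '/app.js']
print([p for p, _ in responses])   # ['/index.html', '/style.css', '/app.js']
```

Note that the client's single GET drives everything; the server never initiates a push out of the blue, which is exactly why this is not a WebSocket replacement.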
>HTTP/2.0 provides the ability to multiplex multiple HTTP requests and responses onto a single connection. Multiple requests or responses can be sent concurrently on a connection using streams (Section 5).
This requires supporting client-initiated requests over an established connection.
I think that the changes being made for "HTTP 2" are a terrible decision for HTTP. For SPDY, sure, make it as complex and as hard to work with as you want in the name of performance, but please keep my HTTP a nice, simple, text-based protocol that I can work with very easily.
I just feel that HTTP should not reïmplement TCP. SPDY/HTTP2 just seems much more complex than necessary.
http://jimkeener.com/posts/http is a 90% complete post of what I would like to see as HTTP 1.2 and some other things I think would be beneficial.
I actually don't like a lot of what is on that page. For example: he says to remove the User-Agent header. Without that, https://www.dropbox.com/downloading wouldn't work (where they can give you the correct download and show you pictures of how to access/install it). Furthermore, the Date header is used very successfully for caching operations in many cases. Moreover, it points out problems that I also see (such as the cookie kludge) but no good replacement/solution for them. The solution given isn't adequate, because the problem cookies are trying to solve is maintaining stateless servers. However, servers cannot trust the clients and so have to resort to nasty things like HMAC'ing the cookies and other easy-to-mess-up security details.
> For example: he says to remove the User-Agent header. Without that, https://www.dropbox.com/downloading wouldn't work (where they can give you the correct download and show you pictures of how to access/install it).
There is no good reason to do UA sniffing. That page could simply offer you one of three (or more) options to select.
> Furthermore, the Date header is very successfully used for caching operations in many cases.
Date headers are not useful for that purpose. Expiration would be based on the UA's clock, not the time given by the server.
> Moreover, it suggests problems that I see (such as the cookie kludge) but not a good replacement/solution for it.
Use a session identifier, or use client-side storage until the data is needed. The session identifier is not the best solution, but it is a step towards a better system, I believe. Eventually I would like to see it removed.
> However, they cannot trust the clients and so have to resort to nasty things like hmac'ing the cookies and more easy to mess up security details.
You should never trust anything given to you from a client. If I send you a product list, that product list should be opaque ids. The session should be ephemeral and not matter anyway, so there is no reason for it to be signed.
> There is no good reason to do UA sniffing. That page could simply provide you one of 3 (or more) options to select.
So, giving people the correct file instead of making them know what they need (which many people don't... especially if it is browser-specific) is not a good reason? What if a server wants to provide a client with native-order endianness (for RPC, for example) - shouldn't that be allowed?
> Date headers are not useful for that purpose. Expiration would be based on the time of the UA, not the one given by a server.
Tell that to my browser, which countless times doesn't fetch a new file because it has a cached copy. Furthermore, if the headers are stored with the cached copy, there is no server/client time problem, because you can calculate the difference between server time and client time.
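The skew-tolerant calculation being described can be sketched as follows. This is a simplified form of the HTTP/1.1 cache age computation; the variable names are mine, and all times are in seconds:

```python
# Freshness check that never compares the two clocks directly, so a
# skewed server Date header cannot break expiration (simplified).

def current_age(date_value, response_time, now):
    """date_value: the server's Date header (server clock).
    response_time: client clock when the response was cached.
    now: current client clock."""
    apparent_age = max(0, response_time - date_value)  # absorbs skew
    resident_time = now - response_time                # client clock only
    return apparent_age + resident_time

def is_fresh(date_value, response_time, now, max_age):
    return current_age(date_value, response_time, now) < max_age

# Server clock 100 s ahead of the client; cached 200 s ago:
print(is_fresh(date_value=1100, response_time=1000, now=1200, max_age=300))  # True
print(is_fresh(date_value=1100, response_time=1000, now=1400, max_age=300))  # False
```

Because the age is accumulated in the client's own clock domain, the server's Date only ever makes a response look *older* (more conservative), never fresher.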
> The session should be ephemeral and not matter anyway, so there is no reason for it to be signed.
This is correct in idea, but not in practice. Oftentimes the server isn't a single server but rather a set of load-balanced servers. When this happens, it is hard to keep track of client state, because a client might get load-balanced to another machine on its next request. Therefore, client state is kept with the client (and signed to make sure that it is legitimate). This sort of behavior has become necessary, although it could be dealt with better through some sort of standard (esp. to ensure the protection of the cookie, et cetera).
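The signing kludge in question can be sketched with Python's standard library. The secret and the cookie format here are illustrative, not any particular framework's scheme; the only real requirement is that every load-balanced server shares the secret:

```python
import hashlib
import hmac

# Illustrative shared secret -- in practice this would be deployed to
# every server behind the load balancer.
SECRET = b"shared-by-every-load-balanced-server"

def sign_cookie(value: str) -> str:
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def verify_cookie(cookie: str):
    """Return the value if the MAC checks out, else None."""
    value, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the mismatch position via timing.
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign_cookie("user=42")
print(verify_cookie(cookie))  # user=42

# Tamper with the value but keep the original MAC:
tampered = "user=1" + cookie[len("user=42"):]
print(verify_cookie(tampered))  # None
```

This keeps the servers stateless while still detecting tampering, but it is exactly the kind of easy-to-get-wrong detail (timing-safe comparison, key rotation, replay) the comment above is complaining about.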
> So, giving people the correct file instead of making them know what they need (which many people don't... especially if it is browser-specific) is not a good reason?
No, it is not. Give the user the option of what to download. What if I want the Windows version even though I'm running Linux?
> What if a server wants to provide a client with native-order endianness (for RPC, for example) - shouldn't that be allowed?
RPC should have a standard byte order defined.
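That convention already exists for most wire protocols: network byte order (big-endian). In Python's struct notation, for instance, `!` pins the byte order regardless of the host CPU (the field layout below is just an example):

```python
import struct

# '!' = network (big-endian) byte order, independent of host endianness.
# Example layout: a 4-byte unsigned int followed by a 2-byte port.
wire = struct.pack("!IH", 0xDEADBEEF, 80)
print(wire.hex())  # deadbeef0050

value, port = struct.unpack("!IH", wire)
print((hex(value), port))  # ('0xdeadbeef', 80)
```

With a fixed wire order, a big-endian and a little-endian host interoperate without either side having to negotiate or sniff anything.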
> Tell that to my browser, which countless times doesn't fetch a new file because it has a cached copy.
That's based on cache control, which isn't affected by the Date header sent by the server.
> Oftentimes the server isn't a single server but rather a set of load-balanced servers.
I know. I've set these systems up before. There shouldn't be anything of consequence stored on the client. So what if someone changes the session ID? Does it really matter? If someone has someone else's ID, they probably have the signature too. Against random guessing, using random IDs goes a very long way. Also, there doesn't always need to be a session at all. There are now ways to store data on the client that don't require sending it back and forth to the server on every request.
That post both advocates for TLS-everywhere (which I support) and thinks it would be beneficial to drop HTTP Keep-Alive... Aren't you concerned about the latency hit? TCP takes 1 RTT to set up; TLS takes 1+ more RTTs on top of that.
Also, TCP's congestion window grows over time; with your proposed model, you'd continuously open connections with tiny congestion windows, rather than a few connections with growing congestion windows.
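A back-of-the-envelope sketch of that effect, with illustrative numbers (a 10-segment initial window as in RFC 6928, 1460-byte segments, and loss-free slow start that doubles the window every RTT):

```python
INIT_CWND = 10  # initial congestion window, in segments (RFC 6928)
MSS = 1460      # bytes per segment

def bytes_in_rtts(rtts, start_cwnd=INIT_CWND):
    """Bytes deliverable in `rtts` round trips under idealized
    slow start: the window doubles each RTT, no losses."""
    total, cwnd = 0, start_cwnd
    for _ in range(rtts):
        total += cwnd * MSS
        cwnd *= 2
    return total

# One warm connection used for 4 RTTs vs. four cold 1-RTT connections:
print(bytes_in_rtts(4))      # 219000
print(4 * bytes_in_rtts(1))  # 58400
```

Even in this crude model the reused connection moves several times more data in the same number of round trips, which is the whole argument for Keep-Alive (and for SPDY's single multiplexed connection).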
I think all it'd take to change your mind is to load Facebook or Twitter with SPDY and Keep-Alive turned off...
I understand why Keep-Alive exists, but I think HTTP is just the wrong place for it. I don't believe that the round-trip latency should be an issue. It simply creates too much complexity for what is supposed to be a simple protocol.
> I think all it'd take to change your mind is to load Facebook or Twitter with SPDY and Keep-Alive turned off...
I also believe that those sites are loading way too many resources. I'm also not against SPDY, but I don't think it should be HTTP. If someone wants to use SPDY, then so be it.
EDIT: Actually, I just loaded Twitter and Facebook with HTTP 1.0 (No Keep Alive). It was a bit slower, on the order of a handful of seconds, but nothing that I would consider terrible. These are also some of the heaviest sites a browser is going to load.
Not entirely. You have TCP_CORK to allow headers to be stuck in front; sendfile can also take ranges so you don't blow the frame limits. I would imagine that kind of setup is more trouble than it's worth, though. (Is sendfile(2) still the fastest way of doing things? I thought it had been superseded anyway...)
You'd context switch to/from kernel way more often with small ranges. sendfile on Solaris, Linux, BSD and TransmitFile on Windows allow much larger ranges in one call.
What's the replacement for sendfile(2)? Solaris has sendfilev which is still pretty much the same thing and sendfile(2) on Linux uses splice(2), vmsplice(2), tee(2) internally but I don't know of a replacement.
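For reference, the TCP_CORK-plus-sendfile pattern mentioned above looks roughly like this in Python (a sketch, and Linux-specific: `os.sendfile` wraps sendfile(2), and TCP_CORK is a Linux socket option):

```python
import os
import socket

def send_file_response(sock: socket.socket, header: bytes, path: str) -> None:
    """Send an in-memory header followed by a file, letting the kernel
    coalesce both into full segments (Linux TCP_CORK + sendfile(2))."""
    size = os.stat(path).st_size
    # Cork the socket: hold partial segments until uncorked or full.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    try:
        sock.sendall(header)
        with open(path, "rb") as f:
            offset = 0
            while offset < size:
                # sendfile(2): kernel copies file pages straight to the
                # socket, no round trip through userspace buffers.
                offset += os.sendfile(sock.fileno(), f.fileno(),
                                      offset, size - offset)
    finally:
        # Uncork: flush whatever partial segment is still buffered.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)
```

Without the cork, the header would typically go out as its own undersized segment before the file data starts.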
I think HTTP/2.0 should break backward compatibility and take a more advanced step than "little improvements like that". Killing TCP/IP completely and inventing a more efficiently compressed, more government-resistant, and more easily encryptable protocol would be highly anticipated. The reason is that even adopting HTTP/2.0 in that state would take at least a decade or more.
And here are more viable and real alternatives that not only increase speed by a factor of n but also improve security and compatibility for our mobile generation:
> Killing TCP/IP completely and inventing a more efficiently compressed, more government-resistant and more easily encryptable protocol would be highly anticipated.
You do realize HTTP and TCP/IP reside at very different OSI stack levels, right? Reïnventing TCP is not HTTP's job.
Eh, yes... now what? You do realize that the HTTP RFCs define which protocols are used?
I know that most TCP improvements mostly add new behaviour in specific situations, especially congestion, and yes, I have read those papers/links. I know that many improvements are UDP-based. So, I think you misread it. I said kill TCP/IP in order to replace it with something better, and wished that HTTP/2.0 would be that anticipated step. Did you even check the alternatives before going negative?
> Did you even check the alternatives, before going negative?
One does not need alternatives to dislike a system. Alternatives may affect whether the system is used, but they don't negate criticism of it.
I don't think HTTP 2 should replace TCP/IP or any other protocol that low in the OSI stack. That is not what HTTP was designed for and I believe throwing out the ideas behind its creation and still calling a new protocol HTTP is disingenuous.
What is so difficult about encrypting HTTP with an SSL Layer?
SSL is a key you keep and an unlocked padlock you give someone. They use the padlock to lock up a box and give it back to you, with no way of opening it themselves.
You configure a web server with a key and a padlock. It keeps the key and serves the padlock. How can this be improved? (Serious question, maybe it can - this concept still seems esoteric to many)
SSL has the concept of many central certificate authorities (many of which have been compromised or hand their private keys to government agencies); there is a proposal for a web of trust to counter that, but it's not there yet.
Hmm, do you suggest a new protocol/idea/improvement? A TCP/IP alternative also needs a security layer (not necessarily SSL, though).
No offense, but if it's so easy, then you may know something many scientists, researchers, and cryptographers haven't discovered. In any case, I'm happy and all ears if you are willing to contribute to a solution: http://en.wikipedia.org/wiki/Secure_Socket_Layer
Replacing SSL/TLS just because of the current CA system is absurd. Nothing in either requires CAs (indeed, in theory anyone can issue a root certificate, and that's very much a feature). The issue is how to convey which certificates are legitimate and which are not, and that's tangential to whether you allow trusted subtrees or not.
QUIC lives on top of UDP. If we reinvent TCP on top of UDP, we may as well just throw in the towel on IPv6 and embrace IPv4 + NAT. Why bother trying to fix layer 3 when we're content to fuck shit up on the higher layers for the sake of back compat? IPv6 is our chance to bring back end-to-end networking with stateless routers.
QUIC is the prototype; if you build an experiment that doesn't work in the short term, then you never learn from it and you can't fix the flaws you didn't learn about. Once it's finished, maybe it could become a real protocol instead of being layered on UDP.
[1] http://en.wikipedia.org/wiki/SRV_record