The next version of HTTP won’t be using TCP
In its continued efforts to make Web networking faster, Google has been working on an experimental network protocol named QUIC: "Quick UDP Internet Connections." QUIC abandons TCP, instead using its sibling protocol UDP (User Datagram Protocol). UDP is the "opposite" of TCP; it's unreliable (data that is sent from one end may never be received by the other end, and the other end has no way of knowing that something has gone missing), and it is unordered (data sent later can overtake data sent earlier, arriving jumbled up). UDP is, however, very simple, and new protocols are often built on top of UDP.
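UDP's fire-and-forget nature is easy to see at the socket level. The sketch below (plain Python standard library, using placeholder loopback addresses) sends a few datagrams with no connection setup and no acknowledgement; any reliability or ordering, of the kind QUIC layers on top, would have to be built by the application itself.

```python
import socket

# Receiver bound to an ephemeral localhost port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2)  # don't block forever if a datagram is lost
port = recv_sock.getsockname()[1]

# Sender: each sendto() is an independent datagram -- no handshake,
# no acknowledgement, no retransmission.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    send_sock.sendto(f"packet {i}".encode(), ("127.0.0.1", port))

# On the loopback interface these usually arrive intact and in order,
# but UDP itself promises neither: a datagram may be dropped or may
# overtake one sent earlier, and the receiver gets no notification.
for _ in range(3):
    data, _addr = recv_sock.recvfrom(1024)
    print(data.decode())

send_sock.close()
recv_sock.close()
```

On a real network path, a protocol built on UDP must detect the missing or reordered datagrams itself, which is exactly the machinery QUIC provides.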
QUIC reinstates the reliability and ordering that TCP has but without introducing the same number of round trips and latency. For example, if a client is reconnecting to a server, the client can send important encryption data with the very first packet, enabling the server to resurrect the old connection, using the same encryption as previously negotiated, without requiring any additional round trips.
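The latency saving is easiest to see as back-of-envelope arithmetic. The round-trip counts below are illustrative assumptions, not measurements: a fresh TCP connection costs one round trip for the handshake plus roughly two more for a TLS 1.2 negotiation before the first HTTP request can be sent, whereas a QUIC client resuming a known server can send request data in its very first packet.

```python
def setup_delay_ms(rtt_ms: float, round_trips: int) -> float:
    """Time spent on connection setup before the first request byte."""
    return rtt_ms * round_trips

rtt = 100  # assumed 100 ms round trip, e.g. on a mobile link

# TCP handshake (1 RTT) + TLS 1.2 handshake (~2 RTTs) = 3 RTTs.
print("TCP + TLS 1.2:", setup_delay_ms(rtt, 3), "ms")   # 300.0 ms

# QUIC resuming a previously seen server: 0 RTTs of setup.
print("QUIC 0-RTT resumption:", setup_delay_ms(rtt, 0), "ms")  # 0.0 ms
```

On a high-latency link, those saved round trips dominate the time-to-first-byte, which is the case QUIC was designed around.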
The Internet Engineering Task Force (IETF—the industry group that collaboratively designs network protocols) has been working to create a standardized version of QUIC, which currently deviates significantly from Google's original proposal. The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC or HTTP/QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC; it's a new, updated version of HTTP built for QUIC.
Accordingly, Mark Nottingham, chair of both the HTTP working group and the QUIC working group for IETF, proposed to rename HTTP-over-QUIC to HTTP/3, and the proposal seems to have been broadly accepted. The next version of HTTP will have QUIC as an essential, integral feature, such that HTTP/3 will always use QUIC as its network protocol.
(Score: 5, Insightful) by choose another one on Tuesday November 13 2018, @09:11AM (1 child)
> For human-readable web sites --- simply design them properly, without the mountains of javascript and ridiculously oversized images that have become the fashion of late.
The javascript and large images are not the biggest problem, IMO. I just did a quick test (without ad blockers) for kicks:
front page of this site: 17 requests, all from one hostname.
front page of major mainstream news site (stopped after 30secs loading): 738 requests, 35 different hostnames for the top page plus dozens of iframes using other hostnames that I couldn't be bothered to expand to count
The DNS request overhead alone must be a major part of the problem, especially the latency.
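That per-hostname cost is easy to measure. A rough sketch, using the standard-library resolver and placeholder hostnames (substitute the names a real page actually references): each distinct hostname needs its own lookup before the browser can even open a connection to it.

```python
import socket
import time

# Placeholder hostnames -- a heavy page may reference dozens of these.
hostnames = ["example.com", "example.org", "example.net"]

for host in hostnames:
    start = time.monotonic()
    try:
        socket.getaddrinfo(host, 443)  # resolve as a browser would
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"{host}: resolved in {elapsed_ms:.1f} ms")
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
```

Multiply an uncached lookup's latency by 35 hostnames and the DNS cost alone rivals the page's transfer time, before any TCP or TLS handshakes even start.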
(Score: 2) by Unixnut on Tuesday November 13 2018, @02:12PM
God that is awful, definitely would class that as a poorly designed web site (and insecure to boot: only one of those 35 different hostnames needs to be compromised for your site to be compromised too).
So the GPP's point is still valid, even if you remove the JS crap and images.