HTTP/3 is the next generation of the HTTP protocol. It is powered by QUIC, which replaces TCP at the transport layer and cuts the number of round trips a client needs to establish a connection.
What makes it better?
If you couldn’t tell from the name “QUIC”, HTTP/3 is much faster.
HTTP is only one part of the OSI model, which underpins the Internet as we know it. Each layer in the model serves a different purpose, from high-level APIs such as HTTP at the top (the application layer) all the way down to the physical wires and the connections between routers.
But there is a bottleneck in this model, and despite the new name, the HTTP standard itself is not the problem.
TCP (the transport layer) is the culprit here: it was designed back in the 1970s and was never built to handle real-time communication. HTTP over TCP has reached its limits, and Google and the rest of the tech industry have been working on a replacement for TCP.
In 2012, Google created SPDY, a protocol built on top of TCP that fixes many of its common problems. SPDY itself is now deprecated, but parts of it made their way into HTTP/2, which is currently used by 40% of the web.
QUIC is a new standard, much like SPDY, but it is built on top of UDP rather than TCP. UDP is much faster than TCP, but generally less reliable, because it lacks TCP’s error checking and loss recovery. It is commonly used by applications that do not need packets to arrive in exactly the right order but do care about latency (such as live video calls).
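To make the contrast concrete, here is a small sketch using only Python’s standard library: a UDP exchange on localhost. Unlike TCP, no handshake happens before data flows; the very first datagram already carries application data (which is exactly the property QUIC builds on).

```python
import socket

# Minimal UDP round trip on localhost: no handshake, no connection
# state -- the first datagram sent already carries application data.
# (With TCP, the SYN / SYN-ACK / ACK exchange must complete first.)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)        # fire and forget, no connect()

data, peer = server.recvfrom(1024)
print(data)                          # b'hello'

server.close()
client.close()
```

Note what is missing: there is no delivery guarantee and no ordering here, which is precisely the reliability layer QUIC adds back on top of UDP.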
QUIC is still reliable; it implements its own error checking and reliability on top of UDP, so it gets the best of both protocols. Note that the first time a user connects to a QUIC-enabled site, they still do so over TCP.
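One common way a server advertises QUIC support during that initial TCP connection is the Alt-Svc response header; a response might look something like the following (the exact version token depends on which QUIC draft the server speaks, so “h3-23” here is illustrative):

```http
HTTP/1.1 200 OK
Alt-Svc: h3-23=":443"; ma=86400
Content-Type: text/html
```

This tells the browser it may retry the same origin over HTTP/3 on UDP port 443, and may remember that for 86400 seconds (one day).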
The main TCP problem that QUIC fixes is head-of-line blocking. Once a connection has been established between server and client, the server sends data packets to the client. If the connection is poor and a packet is lost, the client holds back all the packets received after it until the server retransmits the lost one. HTTP/2 mitigates this somewhat by multiplexing several streams over the same TCP connection, but it is not perfect and can actually be slower than HTTP/1 on high-loss connections.
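The buffering behavior above can be sketched in a few lines of Python. This is a toy model of in-order delivery, not a real network stack: packets that arrive after a gap sit in a buffer, and nothing reaches the application until the missing packet shows up.

```python
# Toy illustration of TCP-style head-of-line blocking: data must be
# delivered to the application in sequence order, so one lost packet
# stalls everything received after it.

def deliver_in_order(arrivals):
    """Deliver packets in sequence order, buffering out-of-order ones.

    `arrivals` is the order packets reach the receiver; the result
    lists what the application sees after each arrival.
    """
    buffer = {}
    next_seq = 0
    delivered_per_arrival = []
    for seq, payload in arrivals:
        buffer[seq] = payload
        batch = []
        while next_seq in buffer:      # flush every contiguous packet
            batch.append(buffer.pop(next_seq))
            next_seq += 1
        delivered_per_arrival.append(batch)
    return delivered_per_arrival

# Packet 1 is "lost" and retransmitted last: packets 2 and 3 wait in
# the buffer, then everything flushes at once when 1 finally arrives.
print(deliver_in_order([(0, "a"), (2, "c"), (3, "d"), (1, "b")]))
# -> [['a'], [], [], ['b', 'c', 'd']]
```

QUIC avoids this by making streams independent at the transport level: a lost packet only stalls the one stream it belongs to, not every stream sharing the connection.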
QUIC fixes this problem and handles high-loss connections much better. Early tests from Google showed improvements of about 15% in high-latency scenarios and up to 30% less video buffering on mobile connections. And because QUIC reduces the number of handshakes required, there will be latency improvements across the board.
Is it difficult to implement?
While QUIC is a new standard, it is built on top of UDP, which is already supported almost everywhere. It does not require new kernel updates, which can be problematic for servers; QUIC should work out of the box on any system that supports UDP.
HTTP-over-QUIC should be a drop-in replacement for HTTP-over-TCP once it is widely available. At the time of writing, Chrome supports QUIC, but it is disabled by default. You can enable it for testing by turning on the “Experimental QUIC Protocol” flag. Firefox will add support later this fall, and once Edge moves to Chromium, it will get support soon as well.
On the server side, if you use CloudFlare as your CDN, you can already enable the option in the dashboard, though you will not have many clients actually using it until mobile browsers ship with it on by default. Fastly is actively working on support. If you want to enable it on your own web server, you will have to wait a bit: early QUIC support is planned for the nginx 1.17 development cycle, while Apache support is nowhere in sight yet.
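In the meantime, if your CDN terminates QUIC for you, the one thing your origin can do is advertise it. A minimal nginx sketch, assuming a build with the standard `add_header` directive; the `h3-23` token is a placeholder and must match whatever QUIC draft your CDN or server actually supports:

```nginx
# Advertise HTTP/3 availability on UDP port 443 for one day.
# "h3-23" is illustrative -- use the draft token your edge supports.
add_header Alt-Svc 'h3-23=":443"; ma=86400';
```

Browsers that do not understand the header simply ignore it, so this is safe to add ahead of full server-side support.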
Once nginx and Apache have been updated to support it, adding QUIC to your website or web app will be as simple as updating your web server and enabling the option. You will not need to make any changes to your app or code, as everything is handled at the infrastructure level. It’s not here yet, but it’s coming very soon, and you’ll definitely want to turn it on once it is supported by default.