HTTP/3 represents the most significant architectural change in web communication since the internet's inception. While previous versions of HTTP (the protocol that allows your browser to fetch web pages) made incremental improvements, HTTP/3 fundamentally reimagines how data travels across the internet. It replaces the 50-year-old foundation (TCP) with an entirely new transport system called QUIC, designed for the mobile-first, always-connected world we live in today.
Key takeaway: HTTP/3 is not just a faster version of HTTP/2. It is a ground-up redesign that solves problems the original internet architects never anticipated: smartphones switching between WiFi and cellular networks, video calls that cannot tolerate delays, and billions of simultaneous connections that must remain secure without sacrificing speed.
HTTP stands for Hypertext Transfer Protocol. Think of it as the language that web browsers and servers use to communicate. When you type a website address into your browser, HTTP is the system of rules that governs how your request is sent and how the server responds with the webpage you want.
Real-world analogy: HTTP is like the postal system for the internet. Just as the postal service has standardized formats for addresses, envelopes, and delivery procedures, HTTP provides standardized formats for requesting and delivering web content.
Created by Tim Berners-Lee, the inventor of the World Wide Web, the first version, HTTP/0.9 (1991), was remarkably simple. It could only request documents using a single command ("GET") and return plain text. No images, no formatting options, no error messages.
Analogy: Like sending a telegram that could only say "SEND DOCUMENT X" and receiving only the raw text back.
HTTP/1.0 (1996) introduced headers (metadata about requests and responses), status codes (like the famous "404 Not Found"), and support for different content types (images, audio, etc.).
Analogy: The postal system evolved to include tracking numbers, delivery confirmation, and the ability to ship different types of packages, not just letters.
HTTP/1.1 (1997) became the workhorse of the internet for nearly two decades. It introduced persistent connections (keeping the communication line open for multiple requests) and chunked transfers (sending data in pieces rather than all at once).
Persistent connections explained: Imagine calling a customer service line. In HTTP/1.0, you would hang up after each question and call back for the next one. HTTP/1.1 lets you stay on the line and ask multiple questions in sequence.
Developed from Google's experimental SPDY protocol, HTTP/2 (2015) introduced multiplexing: the ability to send multiple requests and receive multiple responses simultaneously over a single connection.
Multiplexing explained: Instead of a single-lane road where cars must travel one behind another, HTTP/2 created a multi-lane highway where many cars (data streams) could travel side by side.
However, HTTP/2 had a hidden weakness: it still relied on TCP for transport, which created a problem called "head-of-line blocking" (explained in detail in Part 2).
HTTP/3 abandons TCP entirely, building instead on QUIC, a new transport protocol that runs over UDP. This represents the most fundamental change in web communication infrastructure since the internet began.
The story of HTTP/3 begins with Jim Roskind, an American software engineer at Google. In 2012, Roskind designed what would become QUIC (originally an acronym for "Quick UDP Internet Connections," though it is now simply a name, not an acronym).
Roskind's background is remarkable: he earned his PhD from MIT in 1983, co-founded the early search engine Infoseek in 1994, wrote the Python profiler that ships with every Python installation, and was inducted into the National Cyber Security Hall of Fame in 2024 for his contributions to network security.
Google deployed QUIC experimentally in Chrome in 2012 and announced it publicly in 2013. The protocol was submitted to the IETF (Internet Engineering Task Force, the organization that creates internet standards) in 2015. After years of refinement, HTTP/3 was officially published as RFC 9114 in June 2022.
When TCP was designed in 1974, computers were stationary machines connected by wires. Today, billions of people browse the internet on smartphones while walking, commuting, and constantly switching between WiFi and cellular networks.
The issue: TCP connections are identified by four numbers: your device's IP address, the server's IP address, and two "port" numbers. When you switch from WiFi to cellular, your IP address changes, and TCP sees this as a completely new device. All connections break and must be re-established from scratch.
HTTP/3's solution: QUIC uses connection IDs instead of IP addresses to identify connections. Think of it like a customer loyalty number: no matter which store location you visit (which network you are on), the system recognizes you and your preferences.
This is perhaps the most important technical problem HTTP/3 solves. TCP guarantees that data arrives in order, which sounds good but creates a traffic jam when packets are lost.
Plain-language explanation: Imagine a highway with multiple lanes that suddenly merges into a single lane before reaching the destination. If one car in that single lane breaks down, every car behind it must stop, even if they are trying to reach completely different destinations. That is head-of-line blocking.
In HTTP/2, if you are downloading a web page with images, CSS files, and JavaScript, and one small piece of one image is lost in transmission, everything stops until that piece is re-sent and received. The CSS file that arrived perfectly fine? It must wait.
HTTP/3's solution: QUIC maintains separate "streams" for each resource. If a packet is lost for one stream (one image), only that stream pauses. Everything else continues flowing.
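The difference between the two delivery models can be sketched in a few lines. This is a toy simulation (not a protocol implementation): in the TCP model a single gap blocks every byte behind it, while in the per-stream model a gap blocks only its own stream.

```python
# Toy model contrasting TCP's single ordered byte stream with QUIC-style
# independent streams. One packet is "lost"; see what is still deliverable.

def tcp_deliverable(received_seqs, total):
    """TCP hands the application only the contiguous prefix:
    a gap blocks everything after it until retransmission."""
    delivered = []
    for seq in range(total):
        if seq not in received_seqs:
            break  # head-of-line blocking
        delivered.append(seq)
    return delivered

def quic_deliverable(packets):
    """Each stream is delivered independently; a gap pauses only that stream."""
    by_stream = {}
    for stream_id, seq in packets:
        by_stream.setdefault(stream_id, set()).add(seq)
    delivered = {}
    for stream_id, seqs in by_stream.items():
        prefix, seq = [], 0
        while seq in seqs:
            prefix.append(seq)
            seq += 1
        delivered[stream_id] = prefix
    return delivered

# Six packets sent on one TCP connection; packet 2 was lost in transit.
print(tcp_deliverable({0, 1, 3, 4, 5}, 6))   # [0, 1] — everything after the gap waits

# Same loss, but packets belong to three streams as (stream_id, seq-within-stream);
# only stream 2's first packet was lost.
packets = [(1, 0), (1, 1), (2, 1), (3, 0), (3, 1)]
print(quic_deliverable(packets))             # streams 1 and 3 deliver fully; only stream 2 waits
```

The CSS file from the example above corresponds to streams 1 and 3 here: its packets arrived intact, so it is handed to the browser immediately.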
Establishing a secure HTTP/2 connection requires multiple round trips between your device and the server:
- TCP handshake (establishing a reliable connection)
- TLS handshake (establishing encryption)
- HTTP request
Each "round trip" adds latency, which is especially painful on mobile networks where round trips might take 100 milliseconds or more.
HTTP/3's solution: QUIC combines the transport and encryption handshakes into a single round trip. For servers you have visited before, QUIC can even achieve "0-RTT" (zero round-trip time) resumption, sending your first request immediately with no handshake at all.
- HTTP is the protocol that enables web browsers and servers to communicate.
- HTTP has evolved through five major versions (0.9, 1.0, 1.1, 2, and 3), with each addressing limitations of its predecessor.
- HTTP/3 represents the most fundamental change, replacing TCP with QUIC.
- Jim Roskind at Google designed QUIC in 2012; it became an official standard in 2022.
- HTTP/3 solves three critical problems: connection breakage on mobile networks, head-of-line blocking, and slow connection setup.
To understand HTTP/3, you must first understand the two transport protocols that underpin internet communication.
What it does: TCP is like a meticulous postal service that guarantees delivery. When you send data via TCP, the protocol:
- Establishes a connection before sending anything
- Numbers every packet
- Requires acknowledgment of receipt for every packet
- Retransmits lost packets
- Delivers data to the application in the exact order it was sent
The guarantee: If you send "HELLO WORLD" via TCP, the receiver will get "HELLO WORLD" in that exact order, or they will get nothing at all (if the connection fails).
The cost: All these guarantees require time and communication overhead. The famous "three-way handshake" (SYN, SYN-ACK, ACK) just to establish a connection adds latency before any data can flow.
Analogy: TCP is like sending a package via registered mail with return receipt. You know for certain it arrived, you know who signed for it, and if there is a problem, you are notified. But it takes longer and costs more than dropping a letter in a mailbox.
What it does: UDP is a "fire and forget" protocol. It sends packets with no guarantees:
- No connection establishment
- No delivery confirmation
- No ordering guarantees
- No retransmission
The benefit: Blazing speed and minimal overhead.
The cost: Packets can be lost, arrive out of order, or arrive duplicated, and the sender will never know.
Analogy: UDP is like shouting a message across a crowded room. It is fast, but you have no guarantee the intended recipient heard you, understood you, or got the message in the right order if you shouted multiple things.
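The contrast between the two protocols is visible directly in the socket API. Below is a minimal loopback sketch: the TCP client must complete a handshake before sending, while the UDP client just fires a datagram. (On loopback both replies arrive; on a real network the UDP reply carries no guarantee, which is exactly the point.)

```python
# Loopback demo of the two transports' APIs: TCP needs a connection and an
# ordered byte stream; UDP sends standalone datagrams with no setup.
import socket
import threading

# Bind both servers first (port 0 = let the OS pick a free port).
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
tcp_port = tcp_srv.getsockname()[1]
udp_port = udp_srv.getsockname()[1]

def tcp_echo_once():
    conn, _ = tcp_srv.accept()         # completes the three-way handshake
    conn.sendall(conn.recv(1024))      # echo the ordered byte stream back
    conn.close()

def udp_echo_once():
    data, addr = udp_srv.recvfrom(1024)  # no handshake: the datagram just arrives
    udp_srv.sendto(data, addr)

threading.Thread(target=tcp_echo_once, daemon=True).start()
threading.Thread(target=udp_echo_once, daemon=True).start()

tcp = socket.create_connection(("127.0.0.1", tcp_port))  # blocks on SYN / SYN-ACK / ACK
tcp.sendall(b"HELLO WORLD")
tcp_reply = tcp.recv(1024)
print(tcp_reply)                       # b'HELLO WORLD' — guaranteed and in order

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"HELLO WORLD", ("127.0.0.1", udp_port))      # fire and forget
udp_reply = udp.recvfrom(1024)[0]
print(udp_reply)                       # works on loopback; no guarantee on a real network
```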
QUIC is built on top of UDP but implements reliability features similar to TCP. Think of it as using the fast, lightweight postal system of UDP but adding your own tracking, insurance, and guaranteed delivery at a higher level.
Why not just improve TCP? TCP is implemented in operating system kernels and network hardware around the world. Changing it requires updating billions of devices, operating systems, and routers, a process that takes decades. UDP, by contrast, is simple enough that you can build whatever features you need on top of it in application software, which can be updated in weeks.
Analogy: Instead of waiting for the city to rebuild the road (TCP), QUIC builds a smooth surface layer on top of the existing road (UDP) that can be updated independently.
The most important concept in QUIC is the stream. A stream is an independent, bidirectional flow of data within a QUIC connection.
Plain-language explanation: Think of a QUIC connection as an office building with many separate offices (streams). Each office can operate independently. If there is a problem in Office 5, work continues in Offices 1, 2, 3, 4, and so on. This is fundamentally different from TCP, where the entire building shuts down if any office has a problem.
In practical terms:
- When your browser loads a webpage, each resource (HTML, CSS, images, scripts) gets its own stream.
- If a packet for one image is lost, only that image's stream pauses for retransmission.
- All other streams continue unimpeded.
QUIC connections are identified by a connection ID (CID), not by IP addresses and ports.
Technical detail: The CID is a random number assigned when the connection is established. Every QUIC packet includes this identifier. When your phone switches from WiFi to cellular (changing your IP address), the server recognizes the CID and continues the connection seamlessly.
Security consideration: CIDs can be changed during a connection to prevent tracking. The client and server can negotiate new CIDs, making it harder for network observers to track connections over time.
Analogy: Think of the CID like a conversation thread ID in a messaging app. Even if you switch from your phone to your tablet (different device, different IP address), the conversation continues because both ends are tracking the thread ID, not your device identity.
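The key idea is that the server's lookup key is the CID in the packet, not the sender's address. A toy demultiplexer makes this concrete (illustrative only; a real QUIC server also validates the new path before trusting it):

```python
# Toy demultiplexer: connections are looked up by the connection ID carried
# in each packet, not by the sender's IP/port, so a client whose address
# changes keeps the same connection.
import os

connections = {}  # cid (bytes) -> connection state

def new_connection():
    cid = os.urandom(8)                 # random CID chosen at handshake time
    connections[cid] = {"packets": 0}
    return cid

def handle_packet(cid, source_addr):
    conn = connections[cid]             # the address is NOT part of the key
    conn["packets"] += 1
    conn["last_addr"] = source_addr     # the path may change; the connection survives
    return conn

cid = new_connection()
handle_packet(cid, ("203.0.113.7", 51000))            # phone on WiFi
state = handle_packet(cid, ("198.51.100.9", 44000))   # same phone, now on cellular
print(state["packets"])                 # 2 — one connection spanning both networks
```

Had the dictionary been keyed by `source_addr` instead (TCP's model), the second packet would have looked like a brand-new client.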
TLS (Transport Layer Security) is the encryption protocol that puts the "S" in HTTPS. QUIC mandates TLS 1.3, the most modern and secure version.
What TLS 1.3 provides:
- Forward secrecy: Even if encryption keys are stolen in the future, past communications cannot be decrypted. Each session uses unique keys that are never stored.
- Faster handshake: TLS 1.3 requires only one round trip to establish encryption, compared to two for TLS 1.2.
- Reduced complexity: TLS 1.3 removed outdated and vulnerable cryptographic options, simplifying implementation and reducing attack surface.
QUIC's unique integration: Unlike HTTP/1.1 and HTTP/2, where TLS is an optional layer added on top of TCP, QUIC always encrypts traffic. Even more notably, QUIC encrypts not just the data but also most of the protocol metadata (packet numbers, connection signals). This provides privacy benefits but creates challenges for network operators (discussed in Part 4).
Analogy: Older HTTP versions are like sending a postcard with the contents in a sealed envelope. Anyone handling the postcard can see who sent it, where it is going, and some handling information. QUIC is like a completely sealed, opaque package where even the tracking number is only visible to sender and recipient.
For servers you have visited before, QUIC offers 0-RTT (zero round-trip time) resumption.
How it works: During your first visit to a server, you receive a "session ticket" that is stored on your device. On subsequent visits, you send this ticket along with your first request. The server validates the ticket and can immediately process your request, no handshake delay required.
The tradeoff: 0-RTT data is vulnerable to "replay attacks," where an attacker captures your encrypted request and sends it again. For this reason, 0-RTT should only be used for idempotent requests (requests that have the same effect whether executed once or multiple times, like fetching a webpage) and not for sensitive operations like financial transactions.
Analogy: Think of 0-RTT like a VIP pass at a club. Your first visit requires ID check and paperwork (1-RTT). But you receive a VIP pass that, on subsequent visits, lets you walk straight in. The risk is that someone could steal your pass and impersonate you, so important transactions should still require fresh verification.
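The replay-safety rule above reduces to a simple policy check on the server side. A minimal sketch (the function name and method list are illustrative, not from any particular server):

```python
# Sketch of the 0-RTT policy decision: a resumed client may attach its first
# request to the session ticket, but the server should only execute it
# immediately if replaying the request would be harmless.
IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS"}

def accept_early_data(method):
    """Process a 0-RTT request only when it is idempotent (safe to replay)."""
    return method in IDEMPOTENT_METHODS

print(accept_early_data("GET"))    # True — fetching a page can be replayed safely
print(accept_early_data("POST"))   # False — defer until the full handshake completes
```

A rejected early request is not an error: the server simply processes it after the 1-RTT handshake finishes, costing one extra round trip only for the sensitive cases.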
HTTP headers contain metadata about requests and responses: what type of content you accept, your browser version, cookies, caching instructions, and more. These headers can be surprisingly large, sometimes exceeding the size of the actual content being requested.
HTTP/2 used a compression algorithm called HPACK that maintained a shared "dictionary" of common headers. Once a header was seen, future references could use a short code instead of the full text.
HPACK assumed that data arrives in order (as TCP guarantees). But QUIC's streams can arrive out of order. If stream 3 uses a dictionary entry that was defined in stream 2, but stream 3's packet arrives first, the receiver cannot decode it.
QPACK uses dedicated streams just for dictionary updates. Think of it like having a separate communication channel just for agreeing on abbreviations.
Components:
- Static table: A predefined list of common headers (like "content-type: text/html") that never changes.
- Dynamic table: A connection-specific dictionary built during the session.
- Encoder stream: A dedicated channel for the sender to announce new dictionary entries.
- Decoder stream: A dedicated channel for the receiver to confirm which entries it has received.
Plain-language summary: Before using an abbreviation, QPACK confirms that both parties have agreed on what that abbreviation means, even if messages arrive out of order.
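The ack-before-reference rule can be modeled in a few lines. This is a toy model of the idea, not the RFC 9204 wire encoding (which uses indexed instructions and insert counts); the class name and the two-entry static table are illustrative:

```python
# Toy QPACK-style encoder: a fixed static table of common headers plus a
# per-connection dynamic table whose entries are only referenced after the
# decoder has acknowledged them, so out-of-order packets can never point
# at an entry the receiver does not yet have.
STATIC_TABLE = {0: (":method", "GET"), 1: ("content-type", "text/html")}  # tiny excerpt

class QpackEncoderSketch:
    def __init__(self):
        self.dynamic = []    # entries announced on the encoder stream
        self.acked = set()   # indices the decoder has confirmed

    def announce(self, name, value):
        self.dynamic.append((name, value))
        return len(self.dynamic) - 1   # index sent on the encoder stream

    def ack(self, index):
        self.acked.add(index)          # confirmation arrives on the decoder stream

    def encode(self, name, value):
        for idx, entry in STATIC_TABLE.items():
            if entry == (name, value):
                return ("static", idx)
        for idx in self.acked:
            if self.dynamic[idx] == (name, value):
                return ("dynamic", idx)
        return ("literal", name, value)  # spell it out until the ack arrives

enc = QpackEncoderSketch()
idx = enc.announce("x-request-id", "abc123")
print(enc.encode("x-request-id", "abc123"))  # ('literal', ...) — not yet acknowledged
enc.ack(idx)
print(enc.encode("x-request-id", "abc123"))  # ('dynamic', 0) — now a short reference
```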
Here is what happens when your browser connects to an HTTP/3 server:
Step 1: Client Hello. Your browser sends a message containing:
- Supported cryptographic algorithms
- A randomly generated key share
- The connection ID it wants to use
Step 2: Server Hello + Encrypted Data. The server responds with:
- Its chosen cryptographic algorithm
- Its key share
- An encrypted certificate proving its identity
- An encrypted "finished" message proving the handshake was successful
- Encrypted application data (the webpage you requested), already included in this first reply
Total time: One round trip. The server can begin sending data in its very first response.
Comparison: HTTP/2 over TCP with TLS 1.2 requires three round trips before any application data can flow (two with TLS 1.3).
Analogy: Imagine ordering at a restaurant. The TCP+TLS way: You sit down, the waiter confirms you are at a table, you confirm you are ready to order, the waiter confirms they are ready to take your order, you order, the waiter confirms the order. The QUIC way: You sit down, immediately tell the waiter your order, and food starts arriving.
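The round-trip arithmetic is worth making concrete. Using the 100 ms mobile round trip mentioned earlier, a back-of-envelope sketch:

```python
# Time before the first byte of the response arrives, as a function of the
# number of round trips the handshake consumes. 100 ms is the illustrative
# mobile round-trip time used earlier in the text.
RTT_MS = 100

def time_to_first_byte(round_trips):
    return round_trips * RTT_MS

print(time_to_first_byte(3))  # HTTP/2 over TCP + TLS 1.2: 300 ms before data flows
print(time_to_first_byte(1))  # HTTP/3 full handshake: 100 ms
print(time_to_first_byte(0))  # HTTP/3 0-RTT resumption: the request rides the first packet
```

On a fast wired link the difference is a few milliseconds; at mobile latencies it is a visible fraction of a second on every new connection.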
Mental Model 1: The Highway Analogy
- TCP is a single-lane highway. Fast, reliable, but one accident stops everyone.
- HTTP/2 over TCP is like multiple lanes that merge into one before entering your town. Multiple streams, but one lost packet blocks everything.
- QUIC is like multiple independent roads to different destinations. A problem on one road does not affect the others.
Mental Model 2: The Phone Call Analogy
- TCP is a landline phone call. Reliable, but if you move to a different room, you cannot take the call with you.
- QUIC is a cell phone call. You can walk around the house, step outside, even drive away, and the call continues.
Rule of Thumb: HTTP/3 benefits increase with network instability. On a perfect, stable, low-latency connection, HTTP/3 might be marginally better than HTTP/2. On a mobile phone switching between WiFi and cellular, or on a congested network with packet loss, HTTP/3 can be dramatically faster.
- QUIC builds reliability features on top of UDP, combining UDP's speed with TCP's guarantees.
- Streams allow multiple independent data flows within one connection, eliminating head-of-line blocking.
- Connection IDs enable seamless network switching without breaking connections.
- TLS 1.3 is mandatory and deeply integrated, encrypting even protocol metadata.
- 0-RTT resumption enables instant connections to previously visited servers.
- QPACK provides header compression that works with QUIC's out-of-order delivery.
- The full handshake completes in one round trip, compared to three for HTTP/2 over TCP with TLS 1.2.
Google has been serving QUIC traffic since 2014. Their research paper reported:
- 8% average reduction in search result load times on desktop
- 3.6% reduction on mobile
- Up to 16% improvement for the slowest 1% of users
- 20% reduction in video buffering ("stalling") on YouTube in countries like India with challenging network conditions
Today, more than half of all Chrome connections to Google's servers use QUIC, and all Google mobile apps (YouTube, Gmail, Maps) support the protocol.
In October 2020, Meta announced that 75% of its internet traffic uses QUIC and HTTP/3. Their implementation, called "mvfst," delivered:
- 6% reduction in request errors
- 20% reduction in tail latency (the slowest requests)
- 5% reduction in response header size
Meta developed their own QUIC implementation specifically because they needed to test and iterate rapidly at their massive scale, serving billions of users globally.
Cloudflare, which handles a significant portion of global web traffic, has enabled HTTP/3 across their entire network. Their testing showed:
- Up to 33% improvement in connection setup times
- Up to 20% improvement in Largest Contentful Paint (LCP), a key web performance metric
- More than 250ms improvement in connection times for users in high-latency regions like the Philippines
As of 2024:
- Chrome, Edge, Firefox, Safari: All support HTTP/3 by default
- Over 95% of web users have browsers capable of HTTP/3
- Approximately 34% of the top 10 million websites support HTTP/3
HTTP/3 was designed for the mobile-first world. Benefits include:
- Seamless connection migration when switching between WiFi and cellular
- Better performance on high-latency cellular networks
- Reduced battery consumption due to fewer connection re-establishments
- More resilient video streaming on unreliable connections
Practical example: A user watching a YouTube video on their phone while walking from their apartment (WiFi) to their car (cellular) to their office (WiFi again). With HTTP/2, the video might buffer or restart at each network switch. With HTTP/3, the stream continues without interruption.
Low latency is critical for gaming, and HTTP/3's 0-RTT resumption and reduced connection overhead help. More importantly, the upcoming WebTransport protocol (built on HTTP/3) offers:
- Unreliable datagram support for game state updates where old data is worthless
- Multiple streams for separating game logic, voice chat, and other data
- Lower latency than WebSockets
HTTP/3's packet loss resilience is particularly valuable for video:
- Dropped packets affect only one stream, not the entire connection
- Adaptive bitrate streaming can adjust more smoothly
- Connection migration prevents interruptions during network changes
Modern web applications often make dozens of simultaneous API requests. HTTP/3's multiplexing without head-of-line blocking means:
- Failed requests do not slow down successful ones
- Connection setup overhead is amortized across many requests
- Mobile apps perform better on congested networks
HTTP/3 is not a magic bullet. Performance improvements depend on conditions:
Conditions where HTTP/3 excels:
- High-latency connections (intercontinental, satellite, congested cellular)
- Networks with packet loss (WiFi with interference, mobile networks)
- Scenarios with frequent network changes (mobile users on the move)
- Applications loading many resources in parallel
Conditions where improvements are marginal:
- Low-latency, stable data center connections
- Connections with minimal packet loss
- Scenarios where connection setup is amortized over long-lived connections
Measured results from various studies:
- Google: 8% average improvement, up to 16% for slow connections
- Wix: Up to 33% improvement in connection times
- Cloudflare: 1-4% average improvement in stable networks, significant improvements in challenging conditions
- Faster, more reliable web browsing, especially on mobile
- More seamless video streaming and gaming
- Better experience in challenging network conditions
- New tools for building real-time applications (WebTransport)
- Ability to design applications assuming reliable connection migration
- Understanding of when to leverage HTTP/3's strengths
- Infrastructure decisions about enabling HTTP/3
- Security implications of encrypted metadata
- Capacity planning for QUIC's different CPU characteristics
- Google, Meta, and major CDNs have deployed HTTP/3 at scale with measurable improvements.
- Over 95% of browsers and 34% of top websites support HTTP/3.
- HTTP/3 excels on mobile networks, high-latency connections, and scenarios with packet loss.
- Real-world improvements range from marginal (stable networks) to dramatic (challenging conditions).
- Understanding HTTP/3 matters for users, developers, and organizations making infrastructure decisions.
Understanding where HTTP/3 fits in the broader internet architecture helps clarify its significance.
The networking world often uses a layered model to describe how data moves:
- Application Layer (HTTP, SMTP, FTP): What applications see and use
- Transport Layer (TCP, UDP, QUIC): How data is reliably transmitted
- Network Layer (IP): How packets are routed across networks
- Link Layer (Ethernet, WiFi): How bits are transmitted on physical media
HTTP/3 changes the application and transport layers together. This is unusual, as most protocol upgrades only affect one layer.
QUIC's unique position: QUIC is technically a transport protocol but runs in user space (application software) rather than kernel space (the operating system core). This enables rapid iteration but means QUIC cannot take advantage of hardware optimizations built for TCP.
TLS has evolved separately from HTTP:
- TLS 1.0, 1.1: Now deprecated due to security vulnerabilities
- TLS 1.2: Still widely used, but considered legacy
- TLS 1.3: Required by QUIC, offers significant security and performance improvements
HTTP/3 mandating TLS 1.3 has accelerated TLS 1.3 adoption across the internet.
Before connecting via HTTP/3, your browser must resolve the domain name to an IP address (DNS lookup). Recent developments in DNS are complementary:
- DNS over HTTPS (DoH): DNS queries encrypted within HTTP/2 or HTTP/3
- DNS over QUIC (DoQ): DNS queries using QUIC directly
These ensure that the privacy benefits of HTTP/3's encryption are not undermined by unencrypted DNS lookups.
- Always encrypted: Unlike HTTP/1.1 and HTTP/2, there is no unencrypted variant of HTTP/3. Every connection is secured.
- Encrypted metadata: QUIC encrypts packet numbers, connection close signals, and other metadata that TCP exposes. This improves privacy but complicates network monitoring.
- Forward secrecy by default: TLS 1.3's mandatory forward secrecy means that even if long-term keys are compromised, past sessions cannot be decrypted.
- Amplification attack protection: QUIC includes mechanisms to prevent attackers from using servers to amplify traffic toward victims.
- Reduced visibility for network defenders: The same encryption that protects privacy from attackers also prevents legitimate security tools from inspecting traffic. Firewalls, intrusion detection systems, and malware scanners may not be able to analyze QUIC traffic.
- 0-RTT replay risks: The 0-RTT feature, while fast, allows attackers to capture and replay requests. Applications must be careful to only use 0-RTT for idempotent operations.
- New attack surface: As a new protocol, QUIC implementations may have undiscovered vulnerabilities. The attack surface is different from well-studied TCP.
Organizations face specific challenges adopting HTTP/3:
Many enterprise firewalls and security tools were designed for TCP. QUIC presents challenges:
- UDP port 443 traffic may be blocked or rate-limited
- Deep packet inspection tools may not understand QUIC
- TLS interception proxies may not support QUIC decryption
Current state: Many security vendors recommend blocking QUIC in the short term, allowing browsers to fall back to HTTP/2. This is a temporary measure while vendors develop QUIC-aware solutions.
Load balancers, reverse proxies, and web application firewalls may need updates:
- Connection routing must understand QUIC connection IDs
- SSL termination must support QUIC
- Rate limiting must account for QUIC's different traffic patterns
QUIC currently uses more CPU than TCP for equivalent throughput:
- UDP processing is not as hardware-optimized as TCP
- QUIC's encryption covers more data than TLS over TCP
- User-space implementation cannot leverage kernel optimizations
This is expected to improve as QUIC matures and hardware vendors add optimizations.
The most exciting near-term development is WebTransport, a new API for web applications built on HTTP/3. It offers:
- Unreliable datagrams: Send data without retransmission, perfect for real-time applications where old data is useless (game state, live sensor data)
- Multiple streams: Open many independent communication channels within one connection
- Lower latency than WebSockets: Direct use of QUIC's performance benefits
Use cases: Multiplayer gaming, live collaboration tools, IoT dashboards, video conferencing.
- Congestion control optimization: QUIC allows experimentation with congestion control algorithms (how fast to send data without overwhelming the network). Research continues on optimal approaches.
- Multipath QUIC: Using multiple network paths simultaneously (WiFi and cellular together) for redundancy and increased bandwidth.
- Non-web applications: QUIC is transport-agnostic. Research explores using QUIC for DNS, SSH, and other protocols currently using TCP.
- Hardware acceleration: As QUIC matures, hardware vendors are developing QUIC-aware network cards that can offload processing from CPUs.
- How will security infrastructure adapt? The tension between encryption and network visibility remains unresolved. Will new approaches to security emerge that do not require traffic inspection?
- Will QUIC's CPU overhead decrease enough for high-volume servers? Large-scale deployments currently use more compute resources than HTTP/2.
- How will the long tail of devices adopt HTTP/3? While major browsers and CDNs support HTTP/3, older devices, embedded systems, and legacy applications may take years or decades to upgrade.
5G networks promise lower latency and higher bandwidth, which might seem to reduce the need for HTTP/3's optimizations. However:
- 5G still has packet loss in practice
- Edge computing scenarios benefit from 0-RTT
- Connection migration becomes more relevant as devices move faster
Edge computing moves processing closer to users. HTTP/3's 0-RTT is particularly valuable here, as edge servers may see many short-lived connections.
Internet of Things devices often operate on unreliable networks with limited resources. HTTP/3's resilience to packet loss is valuable, though CPU requirements may be challenging for constrained devices.
- HTTP/3 changes both the application and transport layers, an unusual architectural shift.
- Security is enhanced through mandatory encryption but creates visibility challenges for network defenders.
- Enterprise adoption is slowed by firewall limitations, security tool compatibility, and CPU overhead.
- WebTransport represents the next frontier for real-time web applications.
- Open questions remain about security infrastructure adaptation, CPU efficiency, and legacy device support.
- HTTP/3's design complements emerging technologies like 5G, edge computing, and IoT.
HTTP/3 is not just an incremental improvement; it is a fundamental rearchitecting of how the web communicates. The core innovations are:
- QUIC replaces TCP: Moving from a 50-year-old protocol designed for stationary computers to one designed for mobile devices and unreliable networks.
- Streams eliminate head-of-line blocking: Lost packets affect only their own stream, not the entire connection.
- Connection IDs enable mobility: Network changes no longer break connections.
- Mandatory encryption enhances privacy: All traffic is encrypted, including most metadata.
- Reduced latency through combined handshakes: One round trip instead of three to establish secure connections.
- For everyday understanding: HTTP/3 makes the web faster and more reliable, especially on phones and poor networks. Pages load faster, videos buffer less, and switching between WiFi and cellular no longer disrupts your browsing.
- For technical understanding: HTTP/3 uses QUIC over UDP to avoid TCP's head-of-line blocking, integrates TLS 1.3 for security, uses connection IDs for mobility, and supports independent streams for parallel data transfer.
- For strategic understanding: HTTP/3 is the future of web communication. Major platforms have adopted it, most browsers support it, and the transition is underway. Understanding its implications helps in making infrastructure decisions, building applications, and anticipating the evolution of the internet.
The standardization of HTTP/3 in 2022 marked a milestone, not an endpoint. WebTransport, multipath QUIC, and hardware acceleration are all active areas of development. The principles underlying HTTP/3 (designing for mobility, assuming packet loss, prioritizing security) will continue to shape internet protocols for years to come.
- HTTP/3 - Wikipedia
- What Is HTTP/3? - Cloudflare
- HTTP/3 Explained - http.dev
- The Ultimate Guide To The HTTP/3 And QUIC Protocols - DebugBear
- QUIC - Wikipedia
- Jim Roskind - Wikipedia
- Evolution of HTTP - MDN Web Docs
- TCP Head of Line Blocking - HTTP/3 Explained
- Head-of-line blocking - Wikipedia
- Comparing HTTP/3 vs. HTTP/2 Performance - Cloudflare
- HTTP/3 vs HTTP/2 Performance - DebugBear
- How Facebook is Bringing QUIC to Billions - Meta Engineering
- TLS 1.3 Handshake Explained - The SSL Store
- What Happens in a TLS Handshake? - Cloudflare
- RFC 9204 - QPACK: Field Compression for HTTP/3
- HTTP/3 Protocol - Security Implications - CodiLime
- HTTP/3 Challenges to Security - Medium
- The Challenges Ahead for HTTP/3 - Internet Society
- The Future of WebSockets: HTTP/3 and WebTransport - WebSocket.org
- WebTransport over HTTP/3 - IETF