Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2
May 19, 2016

Jean Tunis
RootPerformance


This blog is the second in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

The Hypertext Transfer Protocol (HTTP) is the application-layer protocol in the TCP/IP stack used to carry web traffic. The current version ratified by the Internet Engineering Task Force (IETF) is HTTP/2, which was standardized in May 2015 (more on that later).

But the most widely used version is the previous version, HTTP/1.1.

According to the HTTP/2 Dashboard, only about 4% of the top 2 million Alexa sites truly support HTTP/2. So we still have a ways to go.
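If you want to check whether a particular site advertises HTTP/2, one way is to look at the protocol negotiated during the TLS handshake (ALPN). Below is a minimal Python sketch using only the standard library; the hostname is just an example to swap for any site you want to test.

```python
import socket
import ssl

def supports_http2(host, port=443):
    """Return True if the server negotiates HTTP/2 ("h2") via ALPN."""
    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol() == "h2"

if __name__ == "__main__":
    # Example host; replace with any site you want to test.
    print(supports_http2("www.google.com"))
```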

Ratified almost 20 years ago in 1997, HTTP/1.1 was meant to address two big limitations in the previous HTTP/1.0.

HTTP/1.0 Limitations

One limitation was a lack of persistent connections. With 1.0, every HTTP request required opening up a new TCP connection. As mentioned in my previous blog, this requires resources and introduces additional latency.
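To see why persistent connections matter, consider the difference between opening a fresh TCP connection for every request and reusing one connection for several requests. The sketch below uses Python's standard http.client module; the host and paths are placeholders, not a real page.

```python
import http.client

HOST = "example.com"                       # placeholder host
PATHS = ["/", "/style.css", "/app.js"]     # hypothetical resources

# HTTP/1.0 style: a new TCP (and possibly TLS) handshake for every request.
for path in PATHS:
    conn = http.client.HTTPSConnection(HOST)
    conn.request("GET", path)
    conn.getresponse().read()
    conn.close()

# HTTP/1.1 style: one persistent connection reused for all requests.
conn = http.client.HTTPSConnection(HOST)
for path in PATHS:
    conn.request("GET", path)
    conn.getresponse().read()   # must be fully read before the next request
conn.close()
```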

Another limitation was the inability to send multiple requests at one time without waiting for responses from the other side. The ability to pipeline requests in HTTP/1.1 was meant to address this.

But as the web continued to advance, it became clear that HTTP/1.1 still had many limitations that needed to be worked on.

HTTP/1.1 Limitations

HTTP/1.1 has a number of limitations, but I want to talk about three that have been issues over the years.

Many small requests make HTTP/1.1 latency-sensitive

With images, HTML files, CSS files, JS files, and many other resources, HTTP carries a lot of requests. Many of these requests are short-lived, for files that may be only tens of KB.

But the same process happens each time a new connection is made, and several steps occur for every new request on that connection: a DNS query, packet propagation from the browser to the server and back, encryption, compression, and so on. All of these take time across the network, no matter how small each request is.

So all these little requests introduce latency, thereby making HTTP latency-sensitive.
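A rough back-of-envelope calculation shows how quickly those round trips add up. The numbers below (object count, round-trip time, connections per host) are illustrative assumptions, not measurements.

```python
# Illustrative assumptions, not measurements.
objects = 80          # small resources on a typical page
rtt_ms = 50           # round-trip time between browser and server
parallel_conns = 6    # typical per-host connection limit in HTTP/1.1 browsers

# With one request in flight per connection, each object costs roughly one RTT,
# spread across the parallel connections.
serial_rounds = objects / parallel_conns
print(f"~{serial_rounds * rtt_ms:.0f} ms spent just waiting on round trips")
```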

Pipelining is not multiplexing

Pipelining was supposed to address a limitation of HTTP/1.0. But over the years, we've seen that in HTTP/1.1 it introduced limitations of its own.

For one, no matter how many requests were pipelined, the server was still required to respond to them in the order they were sent. So if a request arrived out of the order it was sent, the server could not respond to requests that arrived earlier but were sent after it. It had to wait for the out-of-order request before replying to the others.

Two, TCP segments and reassembles data in strict order. Because of how the protocol operates, a segment at the head of the stream has to be processed before anything behind it can be delivered, so if that leading segment is lost or delayed, everything behind it waits. This is TCP head-of-line blocking.

Because of these limitations, most modern browsers disabled pipelining, which obviously defeats the purpose of having it in the standard.
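The effect of in-order responses is easy to simulate. In the hypothetical sketch below, one slow response at the front of the pipeline delays every response behind it, even though the server could have produced the later ones much sooner.

```python
# Hypothetical per-request processing times (ms) for five pipelined requests.
processing_ms = [400, 20, 20, 20, 20]   # the first response is slow

# Responses must go out in request order, so each response cannot finish
# before the one ahead of it, even if the server generated it much earlier.
finish_times = []
for t in processing_ms:
    previous = finish_times[-1] if finish_times else 0
    finish_times.append(max(previous, t))

print(finish_times)   # [400, 400, 400, 400, 400]
```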

Short-lived requests affected by TCP slow start

As a connection-oriented protocol, TCP ensures delivery of each and every piece of data it sends. In the early days of the Internet, we didn't have a lot of bandwidth, by today's standards anyway. Remember 56K modems? TCP was designed before then.

To prevent senders from overwhelming the network, and jeopardizing TCP's operation, the concept of slow start was introduced in RFC 1122. It ensures that the sender starts with a little bit of data, initially one MSS, waits until it gets an ACK, and then gradually sends more data as the congestion window grows, until it reaches the maximum advertised window size.

Years ago, the default initial congestion window was 3 segments. With the default TCP maximum segment size (MSS) of 1,460 bytes, that means the maximum amount of data that could be sent in the first round trip was only about 4 KB.

HTTP transfers were small, but not that small. And since HTTP requests often don't last very long, many of them never got out of TCP slow start before the connection was no longer needed.

Since then, the initial congestion window has been increased to 10 segments, or almost 15 KB. A paper published by Google in 2010 showed that 10 segments is the sweet spot for maximizing throughput and minimizing response time, and this change became part of RFC 6928.
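To put those numbers in perspective, a short sketch can estimate how many round trips it takes to deliver a response under slow start, assuming an idealized model where the congestion window doubles every round trip and ignoring loss and receive-window limits. The 60 KB response size is just an illustrative assumption.

```python
def round_trips(response_bytes, init_cwnd_segments, mss=1460):
    """Estimate RTTs needed under idealized slow start (window doubles each RTT)."""
    cwnd = init_cwnd_segments * mss
    sent, rtts = 0, 0
    while sent < response_bytes:
        sent += cwnd
        cwnd *= 2      # exponential growth during slow start
        rtts += 1
    return rtts

# A hypothetical 60 KB response: old vs. new initial congestion window.
print(round_trips(60_000, 3))    # initcwnd = 3  -> 4 round trips
print(round_trips(60_000, 10))   # initcwnd = 10 -> 3 round trips
```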

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3, covering common HTTP/1.1 workarounds, SPDY and HTTP/2.

Jean Tunis is Principal Consultant and Founder of RootPerformance
