Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 2
May 19, 2016

Jean Tunis
RootPerformance


This blog is the second in a 5-part series on APMdigest where I discuss web application performance and how new protocols like SPDY, HTTP/2, and QUIC will hopefully improve it so we can have happy website users.

Start with Web Performance 101: The Bandwidth Myth

Start with Web Performance 101: 4 Recommendations to Improve Web Performance

Start with Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 1

The Hypertext Transfer Protocol (HTTP) is the application layer protocol in the TCP/IP stack used for the communication of web traffic. The current version ratified by the Internet Engineering Task Force (IETF) is HTTP/2 (more on that later), which was published in May 2015.

But the most widely used version is the previous version, HTTP/1.1.

According to the HTTP/2 Dashboard, only about 4% of the top 2 million Alexa sites truly support HTTP/2. So we still have a ways to go.

Ratified almost 20 years ago in 1997, HTTP/1.1 was meant to address two big limitations in the previous HTTP/1.0.

HTTP/1.0 Limitations

One limitation was a lack of persistent connections. With 1.0, every HTTP request required opening up a new TCP connection. As mentioned in my previous blog, this requires resources and introduces additional latency.
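To make this concrete, here is a minimal sketch, using Python's standard http.client module and a hypothetical example.com host with made-up resource paths, of the one-connection-per-request pattern versus a single persistent HTTP/1.1 connection being reused:

import http.client

# HTTP/1.0 style: a brand new TCP connection (and handshake) for every request.
for path in ("/", "/style.css", "/app.js"):
    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("GET", path, headers={"Connection": "close"})
    conn.getresponse().read()
    conn.close()  # the connection is torn down after a single exchange

# HTTP/1.1 style: one persistent connection reused for all three requests.
conn = http.client.HTTPConnection("example.com", 80)
for path in ("/", "/style.css", "/app.js"):
    conn.request("GET", path)  # HTTP/1.1 connections are persistent by default
    conn.getresponse().read()  # each response must be drained before the connection is reused
conn.close()

Even on the persistent connection, each response still has to be read in full before the next request goes out, which is the gap pipelining was later meant to close.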

Another limitation was the inability to send multiple requests at one time without waiting for responses from the other side. The ability to pipeline requests in HTTP/1.1 was meant to address this.

But as the web continued to advance, it became clear that HTTP/1.1 still had many limitations that needed to be worked on.

HTTP/1.1 Limitations

HTTP/1.1 has a number of limitations, but I want to talk about three of them that have been issues over the years.

Many small requests make HTTP/1.1 latency-sensitive

With images, HTML files, CSS files, JS files, and many other resources making up a page, HTTP ends up making a lot of requests. Many of these requests are short-lived, fetching files that are often only tens of KBs in size.

But the same setup process happens each time a new connection is made, and several steps occur every time a new request goes out on an existing connection: a DNS query, packet propagation from the browser to the server and back, encryption, compression, and so on. All of these take time across the network, no matter how small each request is.

So all these little requests introduce latency, thereby making HTTP latency-sensitive.
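As a rough way to see how much of a small request is pure setup overhead, the sketch below (plain Python sockets against a hypothetical example.com) times the DNS lookup and the TCP handshake separately, both of which are paid before a single byte of HTTP is exchanged:

import socket
import time

host = "example.com"  # hypothetical host, purely for illustration

# DNS lookup: at least one network round trip before any HTTP can happen.
t0 = time.perf_counter()
addr = socket.gethostbyname(host)
dns_ms = (time.perf_counter() - t0) * 1000

# TCP connection setup: the three-way handshake costs another full round trip.
t0 = time.perf_counter()
sock = socket.create_connection((addr, 80), timeout=5)
connect_ms = (time.perf_counter() - t0) * 1000
sock.close()

print(f"DNS lookup:  {dns_ms:.1f} ms")
print(f"TCP connect: {connect_ms:.1f} ms")
# TLS negotiation, the request itself, and server processing add further
# round trips on top of these before the first response byte arrives.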

Pipelining is not multiplexing

Pipelining was supposed to address a limitation of HTTP/1.0. But over the years, we've seen that in HTTP/1.1 it introduced limitations of its own.

For one, no matter how many requests were pipelined, the server was still required to respond to each request in the order it was sent. So if one of those requests was slow for the server to process, the server could not send back the responses to the later requests, even if they were already finished. It had to complete the slow response before replying to the others. (There is a minimal sketch of this ordering constraint at the end of this section.)

Two, TCP itself segments data and reassembles it strictly in order. If a segment at the head of the stream is lost or delayed, the segments behind it cannot be handed to the application until it arrives. This is TCP head-of-line blocking.

Because of these limitations, most modern browsers disabled pipelining, which obviously defeats the purpose of having it in the standard in the first place.
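Here is the minimal sketch of that ordering constraint, written with plain Python sockets against a hypothetical example.com with made-up paths (many servers ignore or reject pipelined requests, which is part of why browsers gave up on it):

import socket

HOST = "example.com"  # hypothetical host, purely for illustration

def get(path, last=False):
    # Bare HTTP/1.1 request; "Connection: close" on the last one so the
    # server closes the connection once every response has been sent.
    extra = "Connection: close\r\n" if last else ""
    return f"GET /{path} HTTP/1.1\r\nHost: {HOST}\r\n{extra}\r\n".encode("ascii")

sock = socket.create_connection((HOST, 80), timeout=10)

# Pipelining: all three requests are written back to back, without waiting
# for any response in between.
sock.sendall(get("slow-report") + get("logo.png") + get("style.css", last=True))

# The server must still reply in the order the requests were sent, so the
# responses for logo.png and style.css queue up behind slow-report even if
# they were ready first. That queueing is the application-level head-of-line
# blocking that pipelining never solved.
responses = b""
while chunk := sock.recv(4096):
    responses += chunk
sock.close()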

Short-lived requests affected by TCP slow start

As a connection-oriented protocol, TCP ensures delivery of each and every piece of data it sends. In the early days of the Internet, we didn't have a lot of bandwidth, by today's standards anyway. Remember 56K modems? TCP was designed at a time before even that.

To prevent senders from overwhelming the network and jeopardizing TCP's operation, the concept of slow start was introduced in RFC 1122. It ensures that the sending host starts by transmitting only a little bit of data, initially 1 MSS, waits until it gets an ACK, and then gradually sends more data as the congestion window grows, until it reaches the maximum advertised window size.

Years ago, the default initial congestion window was 3 segments. With the default TCP maximum segment size (MSS) of 1,460 bytes, this meant that the most data that could be sent in one round trip at the start of a connection was only about 4KB.

HTTP objects were small, but not that small. And since HTTP transfers often don't last very long, many of them never got out of TCP slow start before the connection was no longer required.

Since then, the initial congestion window has been increased to 10 segments, or almost 15KB. A paper published by Google in 2010 showed that 10 segments is the sweet spot for maximizing throughput while minimizing response time. This change became part of RFC 6928.
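As a back-of-the-envelope model (assuming a 1,460-byte MSS, an idealized window that doubles every round trip, and no packet loss), you can count how many round trips it takes to deliver a small web object with the old and new initial windows:

MSS = 1460  # typical TCP maximum segment size, in bytes

def round_trips(object_size, initial_window):
    """Round trips to deliver object_size bytes, assuming the congestion
    window starts at initial_window segments and doubles each round trip
    (idealized slow start: no loss, no delayed ACKs)."""
    cwnd, sent, rtts = initial_window, 0, 0
    while sent < object_size:
        sent += cwnd * MSS
        cwnd *= 2
        rtts += 1
    return rtts

for size_kb in (10, 50, 100):
    size = size_kb * 1024
    print(f"{size_kb:>3} KB object: IW=3 -> {round_trips(size, 3)} RTTs, "
          f"IW=10 -> {round_trips(size, 10)} RTTs")

On a high-latency path, shaving a round trip or two off every short transfer like this adds up quickly.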

Read Web Performance and the Impact of SPDY, HTTP/2 & QUIC - Part 3, covering common HTTP/1.1 workarounds, SPDY and HTTP/2.

Jean Tunis is Senior Consultant and Founder of RootPerformance.
