3 Key Metrics for Successful Web Application Performance - NBA Edition Part 2
November 09, 2016

Jean Tunis
RootPerformance


Start with 3 Key Metrics for Successful Web Application Performance - NBA Edition Part 1

In Part 1 of this 3-part blog series, I talked about using WebPageTest to look into why my favorite NBA team's website – the New York Knicks – is slow at times. I also covered the first metric you need to look at when running such a test – the number of HTTP requests.

In Part 2, I talk about the second metric ...

Metric #2: Number of Bytes Transferred

According to HTTP Archive, a repository of performance data for the world's top websites, the size of the average website across the globe is actually going up. Internet speeds are going up as well, but a heavier website with more data to send is not a good trend.


As of October 15, 2016, the average size of the top websites is about 2.5MB, compared to about 2.2MB a year earlier – an increase of more than 10%.

The heavier the website or web application, the more the client has to process. You want to keep processing time down as much as you can.

The main reasons for these heavier websites are the size of the images being used and, to some degree, the rapidly increasing use of video compared to years past. We humans are very visual. A message often comes across much more clearly when it's accompanied by an image. And in the days of Netflix and YouTube, people are accustomed to consuming a lot of video online.


According to HTTP Archive, images make up almost 65% of all bytes of data on an average web page.

In my test of the Knicks website, I got similar results – about 60% of the data that was downloaded was images.

If you add up all those bytes, you get roughly 8MB transferred to the WebPageTest machine. That's more than 3x the average website monitored by HTTP Archive!

I ran the test over both a 20Mbps FIOS connection and a 1.5Mbps DSL connection.

Clearly, not every connection is created equal. Some companies have customers all over the world. Some of those customers have fast Internet connections, while others have really slow ones, especially if they're on mobile.

I live in the US, and my Internet provider can only give me DSL. Others in the United States have FIOS and Cable connections that are much faster. So a heavy site, like the Knicks website, is bound to be a performance problem for me and others like me.

This was clearly evident when I compared the test over FIOS to the one over DSL. Response times over DSL were almost 7x longer than over FIOS.

Not good for me!
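To put that in perspective, here's a back-of-the-envelope sketch in Python, using the roughly 8MB page weight and the two link speeds from my test, of how long it takes just to move the bytes. It ignores latency, parallel connections and TCP slow start, so real-world numbers will differ.

# Back-of-the-envelope transfer times for the ~8MB page.
# Latency, parallel connections and TCP slow start are ignored,
# so real-world numbers will differ.
PAGE_SIZE_MB = 8
PAGE_SIZE_MEGABITS = PAGE_SIZE_MB * 8  # 1 byte = 8 bits

for name, mbps in [("FIOS", 20.0), ("DSL", 1.5)]:
    seconds = PAGE_SIZE_MEGABITS / mbps
    print(f"{name:>4} ({mbps:>4} Mbps): ~{seconds:.0f}s just to move the bytes")

Raw bandwidth alone suggests an even bigger gap than the 7x I measured; latency and connection overhead narrow it, but the takeaway is the same – on a slow link, page weight hurts.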

What can be done about that?

Compress and Conquer

Most web server software allows – by default – for the compression of files before they are sent to clients. If your server is not doing this, you should enable it. With compression enabled, the amount of data sent to the client is reduced.

However, in order for this to occur, both sides have to support compression. It is quite common these days to find that the client browser supports compression, but the server either does not support it or does not have it enabled.

All of the top web servers, like Apache HTTP Server, Nginx, Microsoft IIS and others, support compression, so there's really little reason not to enable it.
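If you want to spot-check whether a server actually compresses its responses, a quick sketch like the one below shows the negotiation: the client advertises gzip support in the Accept-Encoding request header, and a server with compression enabled answers with Content-Encoding: gzip. This uses only the Python standard library, and the host name is a placeholder.

# Minimal sketch: check whether a server compresses its responses.
# www.example.com is a placeholder; substitute the site you want to test.
import http.client

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
resp = conn.getresponse()
body = resp.read()

encoding = resp.getheader("Content-Encoding")
if encoding and "gzip" in encoding:
    print(f"Compression enabled: {len(body)} compressed bytes received")
else:
    print(f"No compression: {len(body)} uncompressed bytes received")
conn.close()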


In my test of the Knicks site, only 58% of the text from the site is compressed. That's a missed opportunity to reduce the amount of data the server sends to the browser by about 585 KB.
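To get a feel for how much a text asset shrinks under compression, here's a minimal sketch using Python's built-in gzip module; the file name is a placeholder for any HTML, CSS or JavaScript file you want to measure.

# Minimal sketch: measure how much a text asset shrinks under gzip.
# "index.html" is a placeholder for any HTML, CSS or JavaScript file.
import gzip

with open("index.html", "rb") as f:
    original = f.read()

compressed = gzip.compress(original)
saved = 1 - len(compressed) / len(original)
print(f"Original:   {len(original):,} bytes")
print(f"Compressed: {len(compressed):,} bytes ({saved:.0%} smaller)")

Text typically compresses very well, which is why leaving the other 42% uncompressed adds up quickly.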


For images, only 49% of the total image size is compressed. That means some 51% of the image bytes are traveling across the network at full size. This is another missed opportunity.

In the NBA, 49% is a decent field goal percentage. Knicks star Carmelo Anthony shoots about 45% for his career, and he's considered a good shooter.

But not with web performance. 49% is bad! There's a lot of room for performance improvements.

So compression should always be enabled if the server supports it.

No Send

Just like the number of requests, you can reduce the amount of data sent by doing one thing – not sending it. Caching can be utilized here as well.

New users get the compressed file the first time they request it. When those users return, their browser won't need to make the request again, thereby avoiding the download entirely – it pulls the file from its local cache instead.

To control how long that cached copy stays valid, you can use the HTTP Expires header and/or Cache-Control: max-age.
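Here's a minimal sketch of what those headers look like coming from a server; it uses only the Python standard library, and the one-year lifetime and tiny CSS payload are illustrative assumptions, not values taken from the Knicks site.

# Minimal sketch: serve a static asset with far-future cache headers.
# The one-year max-age and the CSS payload are illustrative assumptions.
import time
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer

ONE_YEAR = 365 * 24 * 60 * 60  # seconds

class StaticHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"body { margin: 0; }"  # stand-in for a static CSS file
        self.send_response(200)
        self.send_header("Content-Type", "text/css")
        self.send_header("Content-Length", str(len(body)))
        # Cache-Control: max-age is the modern header; Expires is the older
        # equivalent, kept for clients that only understand HTTP/1.0.
        self.send_header("Cache-Control", f"public, max-age={ONE_YEAR}")
        self.send_header("Expires", formatdate(time.time() + ONE_YEAR, usegmt=True))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StaticHandler).serve_forever()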

According to the WebPageTest results of my test of the Knicks site, only 8% of static text and image requests are being cached consistently. In the test, the HTTP cache headers for many of the requests are either fairly short or missing altogether. This means that the next time the browser needs one of these text or image files, it will make an HTTP request to find out whether the copy it has is still valid. If it is, the server's response should be an HTTP 304 Not Modified.

While some time may be saved by the server not having to process and transfer the file, the client still spends time on the network making that request.
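From the client's side, that revalidation round trip looks like the sketch below: a conditional request with If-Modified-Since comes back as a 304 with no body when the cached copy is still good. The host, path and date are placeholders.

# Minimal sketch: a conditional GET that revalidates a cached file.
# Host, path and date are placeholders; the date would normally come from
# the Last-Modified header of the response the browser cached earlier.
import http.client

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/images/logo.png",
             headers={"If-Modified-Since": "Mon, 07 Nov 2016 00:00:00 GMT"})
resp = conn.getresponse()
print(resp.status)        # 304 means "your cached copy is still valid"
print(len(resp.read()))   # a 304 carries no body, so 0 bytes for the file itself
conn.close()

Even that empty 304 costs a full network round trip, which is exactly what far-future cache headers avoid.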

The solution is to set cache headers with expiration dates far enough in the future, which helps reduce the number of bytes transferred to the user's browser and avoids that extra round trip altogether.

Image Reduction

As you saw in the HTTP Archive summary of the websites it monitors, the bulk of an average website's size is images – about 65% of the total transferred bytes. Some of these image files can be quite big.

Some sites are not using the most size-efficient image format, such as JPEG. Many are using PNG and GIF. There can certainly be good reasons for those formats – some images look better as PNG than as JPEG, or vice versa.


According to HTTP Archive, JPEG makes up 44% of images, while PNG and GIF together account for 53% of all image formats used by the top websites.

Depending on the type of image, and whether it looks acceptable with some loss of quality, there's an opportunity to convert PNG and GIF images to JPEG instead.
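Where that trade-off is acceptable, the conversion itself is straightforward. Here's a minimal sketch using the Pillow imaging library (my choice of tool, not something the Knicks site uses); the file names are placeholders, and the 85% quality setting is just a common starting point you'd tune by eye.

# Minimal sketch: convert a PNG to JPEG and compare the on-disk sizes.
# Requires the Pillow library (pip install Pillow); file names are placeholders.
import os
from PIL import Image

src, dst = "player.png", "player.jpg"

img = Image.open(src).convert("RGB")   # JPEG has no alpha channel
img.save(dst, "JPEG", quality=85)      # quality is the size/fidelity trade-off

before, after = os.path.getsize(src), os.path.getsize(dst)
print(f"{src}: {before:,} bytes -> {dst}: {after:,} bytes "
      f"({1 - after / before:.0%} smaller)")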

Look at the difference between this image I found, saved in all three formats ...


Photo credit: Jeanot (derivative work)

Can you see the difference? Probably not by a lot, if at all. But look at the file size difference between PNG and JPEG. An 81% reduction in size!

On the Knicks site, one of the biggest images is a PNG of Kristaps Porzingis. It's big – 760x442 pixels and 427 KB. Converting that file to JPEG brings it down to 68 KB. That's an 84% reduction in size! That's a lot less data being sent over the visitor's network connection.

Read 3 Key Metrics for Successful Web Application Performance - NBA Edition Part 3, the final blog of the series, where I finish up with the most important metric of all – the one visitors care about – response time.

Jean Tunis is Senior Consultant and Founder of RootPerformance.
