Can the Internet Handle the Expected 2014 World Cup Traffic Records?
June 11, 2014

Alon Girmonsky
BlazeMeter

The 2010 FIFA World Cup tested the Internet's limits like never before. News site traffic reached a blistering 12.1 million visitors per minute, a record that far exceeded the 8.5 million per minute set by Barack Obama's presidential election win back in 2008.

And this year, the Internet is being pushed one step further: the BBC plans to host a 24/7 World Cup feed across all of its television, radio and digital platforms. That's 50 percent more coverage than in 2010. With more than 160 hours of programming, including highlights and match replays across all of its online channels, you have to wonder: how is the BBC going to pull it off?

DevOps teams will be conducting some pretty rigorous testing to ensure their channels can hold up under what could be another record-breaking moment of traffic in Internet history. But will that be enough?

Simulating Traffic

The key to performance testing is being able to simulate peak traffic and verify that your website will hold up under load. But it's important to avoid the all-too-common mistake of testing only from within your corporate local area network (LAN).

Viewers of this year's World Cup will span continents, so testing traffic capacity only within your own network will not suffice. It's great if your site can sustain one million concurrent connections on your LAN, but when those connections come from other regions and put real strain on your bandwidth, performance becomes far less certain.

Simulating a load scenario in which traffic originates only from within the corporate LAN is like training for the Tour de France … on a stationary bike. Sure, you may be able to cover the 3,500 kilometers over 23 days, but that doesn't account for friction on the road, traffic from other cyclists or natural elements like wind, heat and rain.

That kind of training only tests your body's ability to perform under ideal conditions, and the same is true of testing website performance from within the corporate LAN. On the LAN, requests never pass through the firewall, cache, load balancer, network equipment, modem or routers, so you avoid packet collisions and retransmits entirely. Ideal? Yes. Realistic? Not a chance.
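
To make the idea concrete, here is a minimal sketch of what generating concurrent load against a site looks like at the script level. It uses only the Python standard library; the target URL, user count and timeout are placeholder assumptions, not details of any broadcaster's setup, and a real World Cup test would run this kind of logic from machines outside the LAN rather than from a single box.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://example.com/worldcup/live"  # placeholder URL, not a real endpoint
CONCURRENT_USERS = 200                        # scale this toward real peak estimates

def one_visit(_):
    """Issue a single request and record success plus response time."""
    start = time.time()
    try:
        with urlopen(TARGET, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_visit, range(CONCURRENT_USERS)))

errors = sum(1 for ok, _ in results if not ok)
latencies = sorted(t for _, t in results)
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"errors: {errors}/{len(results)}, 95th percentile latency: {p95:.2f}s")

Run from inside the LAN, numbers like the error count and 95th-percentile latency above will look far rosier than they will from across the Internet, which is exactly the trap described here.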

Cloud-Based Performance Testing

Cloud-based performance testing enables broadcasters to simulate the millions of real users coming directly from the Internet – just as they will be on June 12 when the World Cup kicks off.

The cloud is extremely well-suited to generating the peak demands required for website performance testing. Not only can you ensure that sufficient compute power is available to scale from 100,000 to 1,000,000 virtual users and beyond, but you can also do it on demand with automatic resource provisioning.

Gone are the performance-testing delays of deploying and verifying internally managed hardware. With the cloud, concerns over how many servers are on hand and whether idle servers are wasting valuable resources are a thing of the past. Performance testing can be run from anywhere with an Internet connection and a browser, without the risk of costly over-provisioning.
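
As a rough illustration of the kind of on-demand provisioning this implies, the sketch below uses the AWS boto3 SDK to request enough load-generator instances for a target number of virtual users. The AMI ID, instance type, region and per-generator capacity are all illustrative assumptions, not details of any broadcaster's or vendor's actual setup.

import math
import boto3  # AWS SDK, used here purely as an illustrative assumption

TARGET_VIRTUAL_USERS = 1_000_000
USERS_PER_GENERATOR = 5_000  # assumed capacity of a single load-generator VM

# How many load-generator instances are needed for the target load
instance_count = math.ceil(TARGET_VIRTUAL_USERS / USERS_PER_GENERATOR)

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image with the test tooling installed
    InstanceType="c5.xlarge",         # assumed instance size
    MinCount=instance_count,
    MaxCount=instance_count,
)
print(f"Requested {instance_count} load generators, "
      f"{len(response['Instances'])} launched")

When the test is finished, the same API can terminate those instances, which is what makes the pay-as-you-go model work.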

If broadcasters like ESPN, the BBC and ITV, all of which expect a surge in traffic from the World Cup, relied solely on an on-premise testing model, they would have to acquire enough hardware to simulate the tremendous capacity required for the event. Those resources could then sit unused for the rest of the year.

Matters are complicated further when you consider that viewers will expect to watch seamless coverage of the games on TV, tablets and smartphones, so traffic simulations should take multiple devices into account.
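
One simple way to approximate that device mix in a test script is to weight simulated requests by user agent. The user-agent strings and the 50/30/20 split below are illustrative assumptions, not measured audience figures.

import random
from urllib.request import Request, urlopen

# Illustrative device mix: (user-agent string, share of traffic)
DEVICE_PROFILES = {
    "desktop":    ("Mozilla/5.0 (Windows NT 6.1; Win64; x64)", 0.5),
    "smartphone": ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X)", 0.3),
    "tablet":     ("Mozilla/5.0 (iPad; CPU OS 7_1 like Mac OS X)", 0.2),
}

def simulated_visit(url):
    """Pick a device profile by weight and issue one request as that device."""
    device = random.choices(
        population=list(DEVICE_PROFILES),
        weights=[weight for _, weight in DEVICE_PROFILES.values()],
    )[0]
    user_agent, _ = DEVICE_PROFILES[device]
    req = Request(url, headers={"User-Agent": user_agent})
    with urlopen(req, timeout=10) as resp:
        return device, resp.status

print(simulated_visit("https://example.com/worldcup/live"))  # placeholder URL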

The elasticity of cloud resources means they can be scaled up or down as needed, and pay-as-you-go or utility-style pricing means you only pay for what you use. That makes the cloud an efficient and cost-effective platform for performance testing.

Handling Global Load

Performance tests for something as big as the World Cup need to go even further and simulate demand from most countries around the world. After all, soccer is one of the most widely watched sports there is, with a fan base extending far beyond this year's host country, Brazil.

The global nature of the cloud serves this requirement well. Load tests can easily be carried out across different geographies, since the cloud allows virtual users to be replicated in a variety of locations to test international performance. Cloud providers and test solutions can evaluate a website's global readiness without requiring you to stand up an expensive data center of your own in each location.
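
A simple way to express that geographic spread in a test plan is to split the virtual-user target across load-generation regions. The regions and audience shares in the sketch below are illustrative assumptions only, not real viewership data.

# Illustrative split of a global virtual-user target across load-generation regions
TOTAL_VIRTUAL_USERS = 1_000_000

REGION_SHARE = {
    "South America (Sao Paulo)": 0.30,
    "Europe (Ireland)":          0.30,
    "North America (Virginia)":  0.25,
    "Asia Pacific (Singapore)":  0.15,
}

plan = {region: round(TOTAL_VIRTUAL_USERS * share)
        for region, share in REGION_SHARE.items()}

for region, users in plan.items():
    print(f"{region}: generate load for {users:,} virtual users")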

All in all, it would appear that technology is saving the day once more. The ability to broadcast live international coverage over the Internet lets an ever-growing number of fans get connected and stay connected. With that, broadcasters open themselves up to a bottomless pit of demand for live viewing, which in turn drives increased revenue from advertisers. Without cloud-based performance simulations, chances are broadcasters would be getting yellow cards of dissatisfaction all around.

Alon Girmonsky is CEO of BlazeMeter.
