The End of Net Neutrality Highlights Importance of APM
April 30, 2014

Mike Heumann

In January, the US Court of Appeals for the DC Circuit effectively killed “Net Neutrality” for most US households by ruling that broadband providers such as cable companies, which are not “common carriers” under the FCC’s current classification, do not have to provide equal treatment of all Internet traffic.

While the FCC may revisit whether cable companies should be reclassified as common carriers, the question that millions of Netflix customers trying to stream Season 2 of House of Cards are likely asking is whether the end of Net Neutrality will mean spotty performance with lots of buffering, or whether they will still get a broadcast-like on-demand experience from their Netflix player.

Needless to say, this question was also on minds at Netflix, which recently struck a deal with Comcast whereby Netflix will place caches and other hardware within Comcast’s broadband network to improve the delivery of its streaming content to subscribers on Comcast cable and fiber broadband. Netflix may also be paying Comcast for this privileged access.

From a broader perspective, this approach represents one of the biggest public endorsements to date of the need for Application Performance Management (APM) across the Internet. Netflix is making a major investment in APM to ensure that its end users are able to use the application they are paying for — Netflix’s player and streamed content library — effectively.

While Netflix’s APM play is mostly about quality of service (QoS), the same APM pressure points apply to a variety of B2B applications as well: high frequency trading (HFT) platforms, transaction processing systems such as large e-commerce platforms and credit card authorization networks, or even large shared databases serving many applications. The core requirement for APM is to ensure that critical data is served and delivered in a consistent, ordered, and uncongested manner so that the target application functions correctly and reliably.

Historically, APM has been a focal point for large data centers with a few, typically massive applications. Examples include SAP, Oracle, and other large database platforms. The goal of APM systems in these “large platform” environments is to help identify issues impacting transactional performance, and ultimately to provide “alerting” that flags potential issues before performance becomes unacceptable.
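
As a concrete illustration of that “alerting” idea, here is a minimal sketch in plain Python (not any particular APM product) of one common approach: compare each transaction’s latency against a rolling baseline and flag sharp deviations before users see unacceptable performance. The window size and threshold are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

class LatencyAlerter:
    def __init__(self, window=100, sigma_threshold=3.0):
        self.samples = deque(maxlen=window)    # rolling baseline of recent latencies
        self.sigma_threshold = sigma_threshold

    def observe(self, latency_ms):
        """Record one transaction latency; return an alert message if it
        deviates sharply from the recent baseline, otherwise None."""
        alert = None
        if len(self.samples) >= 30:            # wait for enough history to form a baseline
            baseline = mean(self.samples)
            spread = stdev(self.samples) or 1.0
            if latency_ms > baseline + self.sigma_threshold * spread:
                alert = (f"latency {latency_ms:.1f} ms exceeds baseline "
                         f"{baseline:.1f} ms by more than {self.sigma_threshold} sigma")
        self.samples.append(latency_ms)
        return alert

# Example: feed latencies from a transaction log or an instrumentation hook.
alerter = LatencyAlerter()
for latency in [12.0] * 50 + [95.0]:
    message = alerter.observe(latency)
    if message:
        print(message)
```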

Application-Aware Network Performance Monitoring

Of particular interest has been the growth of “application-aware network performance monitoring” (AA-NPM) tools, which are blurring the line between APM and Network Performance Management (NPM). While it might seem obvious that the network can have a big impact on application performance, its criticality is highlighted most clearly by newer technologies such as Virtual Desktop Infrastructure (VDI) and Voice over IP (VoIP), where the network delivers mission-critical applications in real time.

As highlighted by the Netflix example above, the next frontier in AA-NPM is measuring performance of applications across the Internet. To be sure, this is more than simply a “consumer subscriber” issue affecting things like entertainment and communications. As enterprises embrace the private, hybrid, and public cloud models as a way of delivering critical services to their internal and external customers, the need to measure performance across the Internet becomes more critical.

This need also applies when enterprises use third-party services such as Salesforce.com. As enterprises of all sizes move toward colocation, outsourcing, and applications as a service, customer demand for performance information across the Internet will only increase. This will put additional pressure on Internet and Managed Service Providers (ISPs and MSPs) and other platform operators to put countermeasures and agreements in place to ensure that their applications are not choked off by traffic shaping, peak-time congestion, and other broad-spectrum throughput issues that can affect the steady, consistent flow of packets across the public Internet as well as the last-mile WAN connection. A two-tier Internet, it can be argued, is an unfortunate but ultimately necessary by-product of ensuring that Internet and web-based apps and services are not starved of the data flows they need to function.
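
To illustrate what “measuring performance across the Internet” can look like in practice, here is a minimal sketch using only the Python standard library: time a full request to a third-party service and record the result. The URL and timeout are illustrative assumptions rather than an official monitoring endpoint, and a real deployment would sample continuously from multiple vantage points.

```python
import time
import urllib.request

def sample_response_time(url, timeout=10.0):
    """Return (elapsed_seconds, http_status) for one full request to the service."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()                     # include transfer time, not just time to first byte
        status = response.status
    return time.monotonic() - start, status

if __name__ == "__main__":
    # Illustrative endpoint only; substitute the service your users actually depend on.
    elapsed, status = sample_response_time("https://login.salesforce.com/")
    print(f"status={status} response_time={elapsed * 1000:.0f} ms")
```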

Cutting Through the Noise with Network Visibility

One of the obvious challenges of performance monitoring in enterprises with high-density 10Gb Ethernet (10GbE) networks and hundreds or thousands of virtualized servers is “breaking through” all of the noise in the environment to find out what is going on across these networks, and how it is affecting performance. The aim of network visibility is to cut through the noise and multiple levels of virtualization and indirection to identify the actual traffic of interest so steps can be taken to make sure that traffic passes from A to B at the rate necessary to support the target service.
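
As one hedged example of “cutting through the noise,” the sketch below uses the third-party scapy library on an offline capture to reduce a busy network to its largest conversations, the kind of first pass a visibility tool would automate at much higher speed and scale. The capture file name is hypothetical.

```python
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP   # third-party: pip install scapy

def top_flows(pcap_path, n=10):
    """Return the n largest (src, dst, dport) flows in a capture, by byte count."""
    bytes_per_flow = Counter()
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(IP):
            continue
        if pkt.haslayer(TCP):
            dport = pkt[TCP].dport
        elif pkt.haslayer(UDP):
            dport = pkt[UDP].dport
        else:
            dport = 0
        bytes_per_flow[(pkt[IP].src, pkt[IP].dst, dport)] += len(pkt)
    return bytes_per_flow.most_common(n)

# Example: point it at a capture exported from a tap or visibility appliance (hypothetical file).
for (src, dst, dport), nbytes in top_flows("datacenter_tap.pcap"):
    print(f"{src} -> {dst}:{dport}  {nbytes} bytes")
```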

This technology is equally applicable in enterprises with multiple sites connected across the Internet. By deploying network visibility tools at multiple locations and “mining” the data from these tools centrally, metrics such as transit time between sites can be measured, profiled, and ultimately analyzed to identify performance bottlenecks or changes in network topology. While this data does not in and of itself improve the performance of applications across the Internet, it certainly provides the insight necessary to understand what is impacting performance, allowing corrective actions to be planned and implemented effectively.
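
A minimal sketch of that multi-site measurement idea follows: if the same transaction (or packet) can be identified at two capture points, differencing its timestamps yields the transit time between sites. The record format and transaction IDs here are hypothetical, standing in for whatever a visibility tool actually exports.

```python
def transit_times(site_a_records, site_b_records):
    """Each record is (transaction_id, epoch_seconds) as captured at that site.
    Returns the per-transaction transit time for IDs seen at both sites."""
    seen_at_a = {txid: ts for txid, ts in site_a_records}
    return {
        txid: ts_b - seen_at_a[txid]
        for txid, ts_b in site_b_records
        if txid in seen_at_a
    }

# Example with synthetic timestamps: three transactions, transit times of roughly 40-60 ms.
site_a = [("tx1", 100.000), ("tx2", 100.010), ("tx3", 100.020)]
site_b = [("tx1", 100.041), ("tx2", 100.052), ("tx3", 100.078)]
for txid, delta in sorted(transit_times(site_a, site_b).items()):
    print(f"{txid}: {delta * 1000:.1f} ms")
```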

Network visibility is one of the most powerful tools to emerge in the quest to identify issues affecting application and network performance in the enterprise. By applying network visibility tools across multiple sites, IT organizations now have the ability to “peer across the Internet” and monitor performance in new ways. As we transition to a multi-tier Internet and more enterprises start to measure how well their applications perform across the Internet in a manner similar to what they do on their own networks, look for them to use network visibility tools as a way to see through the noise to identify the causes of performance issues, especially in an age without Net Neutrality.

Mike Heumann is Sr. Director, Marketing (Endace) for Emulex.

Related Links:

www.emulex.com/
