Keeping Digital Business Running
Network Performance Management for Digital Operations
September 22, 2016

Jim Frey
Kentik


The importance of digital business operations is now a given, and for good reason. Recently, Pandora announced that it was launching a subscription service and lowering monthly fees, which means that the already huge percentage of its revenues driven by advertising is going to have to increase in order to maintain the top line. It goes without saying that streaming music, like many other ad-driven business models, relies critically on user experience, and user experience relies critically on network performance. So much so that streaming media, gaming and many other such digital service providers have built private CDNs to guarantee that app and ad bits make it to user eyes and ears in a very timely and reliable fashion.

Network performance monitoring (NPM) has been around a long time. Unlike APM, NPM is still catching up to cloud realities. In May of this year, Gartner analyst Sanjit Ganguli published a research note entitled Network Performance Monitoring Tools Leave Gaps in Cloud Monitoring. It's a fairly biting critique of the NPM space that says, essentially, that the vast majority of current NPM approaches were built for a pre-cloud era and are unable to adapt to the new complexities brought by decentralization and full-stack virtualization. As a result, network managers are left in the lurch when trying to adapt to the realities of digital operations.

NPM had its origins in open-source manual tools such as MRTG, Nagios, and Wireshark, which are still widely available and useful. On the commercial side, however, traditional NPM approaches came about during the rise of centralized, private enterprise data centers connected by networks that were built to reach campuses and branch offices across an outsourced, yet essentially private, IP/MPLS WAN. Applications of this era were developed in a relatively monolithic fashion. This overall architecture meant that there were a few well-defined traffic aggregation points, such as the juncture between LAN and WAN at major data centers and campuses. Enterprise switches and routers deployed in these environments offered span ports, and thus a generation of NPM packet capture (PCAP) appliances was born that could attach to those span ports directly or via a convenient tap or packet broker device. Appliances weren't the exclusive domain of NPM offerings – they were used for many network management and security products, and still are – but the majority of packet-centric NPM solutions leverage appliances to achieve scale and PCAP storage objectives.
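To make the span-port model concrete, here is a minimal sketch of the kind of packet-level accounting such an appliance performs, written in Python with the Scapy library. The interface name, packet count, and simple per-flow byte counter are illustrative assumptions for a host receiving mirrored traffic, not a description of any particular product.

```python
# Minimal sketch of span-port style packet accounting (illustrative only).
# Assumes a Linux host whose "eth0" interface receives mirrored traffic,
# that Scapy is installed, and that the script runs with capture privileges.
from collections import defaultdict

from scapy.all import IP, TCP, UDP, sniff

flow_bytes = defaultdict(int)  # (src, dst, proto, sport, dport) -> byte count

def account(pkt):
    """Accumulate byte counts per 5-tuple, like a basic NPM flow table."""
    if IP not in pkt:
        return
    ip = pkt[IP]
    sport = dport = 0
    if TCP in pkt:
        sport, dport = pkt[TCP].sport, pkt[TCP].dport
    elif UDP in pkt:
        sport, dport = pkt[UDP].sport, pkt[UDP].dport
    key = (ip.src, ip.dst, ip.proto, sport, dport)
    flow_bytes[key] += len(pkt)

if __name__ == "__main__":
    # Capture 1,000 packets from the mirrored interface, then report top talkers.
    sniff(iface="eth0", prn=account, store=False, count=1000)
    for key, nbytes in sorted(flow_bytes.items(), key=lambda kv: -kv[1])[:10]:
        print(key, nbytes)
```

An appliance does this at line rate, in hardware-assisted fashion, and keeps the raw packets for later drill-down – which is exactly where the scale and storage costs come from.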

A funny thing happened, though – the cloud. The rise of IaaS, PaaS, and SaaS meant that there was a new breed of alternative for just about every IT infrastructure and application component. Applications became more and more distributed and, increasingly, components started living not just in separate containers, VMs, and infrastructure clusters, but in separate data centers, spread out across networks and the Internet. This cloud way of developing, distributing, hosting, and communicating established a dramatically altered set of network traffic patterns.

Unfortunately, NPM appliances aren't nearly as helpful in this new reality. In many clouds you don't have a network interface to tap into for sniffing or capturing packets. The proliferation of application components multiplies the communication endpoints.

In addition, digital business means that users aren't necessarily reached across a private WAN, but rather across the Internet.

Finally, appliances are bedeviled by limited storage and compute power, so they can't offer much depth of analysis without extreme cost impact. With digital business and DevOps practices being so data-driven, being limited to summary reports and a small window of details isn't acceptable anymore, especially when scale-out computing and storage are so readily available.

This change in how the network and the Internet interact with and influence application performance requires a new approach to NPM. NPM for the digital operations era needs to be flexible and cost-effective enough to deploy broadly, so that comprehensive instrumentation can collect network performance metric data wherever applications and users actually are. In addition, the volume of data ingested, the depth of storage, and the sophistication of analysis all need to scale based on today's cloud economics. Fortunately, there are plenty of technology options available to build these capabilities. So while Gartner has rightly identified a gap in NPM, the good news is that the gap can be readily filled.
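As one hedged illustration of what lightweight, broadly deployable instrumentation can look like, the sketch below measures TCP connect latency from wherever it runs and ships the results as JSON to a collector over HTTP. The endpoint list and the collector URL are hypothetical placeholders; a production system would gather flow records and richer metrics and feed them into a real scale-out backend.

```python
# Illustrative sketch of a lightweight network performance probe.
# Assumptions: the target endpoints and the collector URL below are
# hypothetical placeholders, not real services.
import json
import socket
import time
import urllib.request

ENDPOINTS = [("example.com", 443), ("example.org", 443)]  # hypothetical targets
COLLECTOR_URL = "http://metrics.internal.example:8080/ingest"  # hypothetical

def tcp_connect_ms(host, port, timeout=3.0):
    """Return TCP connect latency in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def ship(samples):
    """POST a batch of latency samples as JSON to the collector."""
    body = json.dumps(samples).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5.0)

if __name__ == "__main__":
    samples = [
        {"host": h, "port": p, "connect_ms": tcp_connect_ms(h, p), "ts": time.time()}
        for h, p in ENDPOINTS
    ]
    ship(samples)
```

The point of the sketch is the deployment model, not the measurement itself: because the probe is just software, it can run in any VM, container, or branch office at negligible cost, and the heavy lifting of storage and analysis moves to a scale-out backend rather than a fixed-capacity appliance.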

Jim Frey is VP of Strategic Alliances at Kentik.
