Application Performance Management and Data Overload
April 18, 2012

Michael Azoff
Ovum


In a large data centre, an application performance management (APM) solution can generate thousands of metric data points per second. This data avalanche contains the health information for the entire IT environment, and the challenge is to manage its sheer scale.

One approach is to store the data in a warehouse and process it with sophisticated analytics tools designed to handle large volumes rapidly. An alternative approach is to treat the problem as Big Data and sample the data in real time. Both approaches have driven innovation in the APM marketplace.

Analytics Tools to Complement Existing APM Solutions

A number of technology initiatives have led to a growth in the data generated by APM solutions. The solutions now provide end-to-end application and service monitoring, covering every tier from the network to business transactions. Service-oriented architecture is not quite as dead as some would have us believe, and it results in an increase in the data traffic to be monitored. Virtualization and cloud computing are also creating a dynamic environment, increasing the complexity of the metrics to be monitored.

To keep pace with the growing sophistication of the IT environment, APM solutions have progressed from static threshold-based monitoring to event correlation, dynamic thresholding, and pattern matching, as the sketch below illustrates.
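
As a rough illustration of the difference, the following Python sketch compares a fixed threshold with a dynamic one derived from a rolling baseline. The 500ms limit, the 60-sample window, and the three-sigma rule are illustrative assumptions, not any vendor's actual settings.

    # Illustrative sketch: static versus dynamic thresholding on one metric stream.
    # The rolling mean +/- 3 standard deviations baseline is an assumption for
    # illustration; commercial APM tools use more elaborate baselining.
    from collections import deque
    from statistics import mean, stdev

    STATIC_LIMIT = 500.0   # fixed alert level, e.g. response time in milliseconds
    WINDOW = 60            # number of recent samples forming the rolling baseline
    history = deque(maxlen=WINDOW)

    def check(sample):
        """Return any alerts raised by a single metric sample."""
        alerts = []
        if sample > STATIC_LIMIT:
            alerts.append("static threshold breached")
        if len(history) >= 2:
            baseline, spread = mean(history), stdev(history)
            if spread > 0 and abs(sample - baseline) > 3 * spread:
                alerts.append("dynamic threshold breached (3 sigma from baseline)")
        history.append(sample)
        return alerts

The static rule fires at the same absolute level regardless of context, while the dynamic rule adapts to whatever the recent baseline happens to be.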

The next step in tackling the data overload problem has come from specialist APM vendors such as Netuitive and NEC Corporation, whose Predictive Analytics and MasterScope Invariant Analyzer respectively are designed to complement existing APM solutions. These tools are statistics-based and employ self-learning algorithms that process metric data in real time to make sense of the data avalanche and identify issues that need attention. They rely on multivariate correlation and regression, and can also forecast which issues are likely to escalate into problems.
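
The sketch below conveys the basic idea behind such correlation-and-regression approaches without claiming to reproduce either vendor's algorithm: learn the normal relationship between two metrics by least-squares regression, then flag samples whose residual is far outside the error seen during training. The metric pairing and the three-standard-deviation cut-off are assumptions for illustration only.

    # Illustrative sketch of correlation/regression-based anomaly detection.
    # This is not Netuitive's or NEC's algorithm: it is a toy least-squares model
    # relating two metrics (say request rate and CPU load) that flags samples
    # whose residual is far outside the error observed during training.
    from statistics import mean, stdev

    def fit(xs, ys):
        """Ordinary least-squares fit of y ~ a + b*x, returning residual spread."""
        mx, my = mean(xs), mean(ys)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
        return a, b, stdev(residuals)

    def is_anomalous(a, b, resid_sd, x, y, k=3.0):
        """Flag a sample whose residual exceeds k standard deviations."""
        return abs(y - (a + b * x)) > k * resid_sd

    # Train on a window of normal behaviour, then score live samples.
    a, b, sd = fit([10, 20, 30, 40, 50], [21, 39, 62, 79, 102])
    print(is_anomalous(a, b, sd, 60, 250))   # True: load far above what the rate predicts

In production the same idea is applied across many metrics at once, which is where the multivariate and self-learning aspects come in.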

An alternative approach is to deploy a network appliance such as ExtraHop's, which is passive and monitors data flows in real time without needing to store or index the data. With networks operating at 10Gbps and data collections in excess of 100TB per day, many solutions resort to sampling rather than inspecting all the data, whereas a network appliance can handle the full traffic.
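
As a rough sanity check on those figures, assuming decimal units and a link running at full capacity:

    # Back-of-the-envelope check: a fully utilised 10 Gbps link over one day
    # (decimal units assumed).
    bits_per_second = 10e9
    bytes_per_day = bits_per_second / 8 * 86_400   # seconds in a day
    print(bytes_per_day / 1e12)                    # ~108 TB/day, consistent with "in excess of 100TB"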

A start-up that went live with a new product this week is Boundary, which offers a SaaS solution for APM-related Big Data monitoring, charging users by hourly usage. Customers such as GitHub are discovering traffic patterns at a granularity not seen before, from true real-time snapshots to aggregates that help spot unusual behaviour. Boundary stores the data on Big Data databases as part of its hosted solution, making it well suited to monitoring public cloud environments where deploying an appliance is not possible.

Many of the mainstream APM vendors have also been scaling up their data analytics capabilities to deal with this new order of data. The issue is fast becoming a Big Data problem, and administrators of large data centres will need to assess whether their existing APM solutions can handle these volumes of data.

Michael Azoff is a Principal Analyst at Ovum.

Related Links:

www.ovum.com

www.netuitive.com

www.extrahop.com

www.boundary.com
