How to Detect (and Resolve) IT Ops/APM Issues Before Your Users Do
September 19, 2014

Kevin Conklin
Ipswitch


Few situations are more embarrassing for an application support team than first hearing about a critical performance issue from its users. With technology growing more complex and IT environments changing almost overnight, even the most experienced support teams are bound to miss a major problem with a critical application or service. One contributing factor is their continued reliance on traditional monitoring approaches.

Traditional tools limit us to monitoring for a combination of key performance indicator thresholds and failure modes that have been seen before. So when it comes to finding new problems, the best case is an alert that describes a symptom (slow response time, failed transactions, etc.). A very experienced IT professional will have seen many behaviors and can employ monitoring based on best practices and past experience. But even the most experienced professional will struggle to design rules and thresholds that catch new, unknown problems without generating a flood of noisy false alerts. Anomaly detection goes beyond the limits of traditional approaches because it sees and learns from everything in the data provided, whether it has happened before or not.

Anomaly detection works by identifying unusual behaviors in data generated by an application or service delivery environment. The technology uses machine learning and predictive analytics to establish baselines in the data and automatically learn what normal behavior is. It then identifies deviations that are unusually severe or that may be causal to other anomalies – a clear indication that something is wrong. And the best part? The technology works in real time as well as in troubleshooting mode, so it proactively monitors your IT environment. With this approach, real problems can be identified and acted upon faster than before.
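The article does not name a specific algorithm, but the core idea – learn a baseline from the data, then flag severe deviations from it – can be sketched with something as simple as a rolling mean and standard deviation. The function name, window size, and threshold below are illustrative assumptions, not any vendor's implementation:

```python
import math

def detect_anomalies(values, window=30, threshold=3.0):
    """Flag points that deviate sharply from a rolling baseline.

    Returns a list of (index, value, z_score) for points whose
    z-score against the trailing window exceeds the threshold.
    """
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean = sum(baseline) / window
        var = sum((x - mean) ** 2 for x in baseline) / window
        std = math.sqrt(var) or 1e-9  # avoid division by zero on flat data
        z = (values[i] - mean) / std
        if abs(z) > threshold:
            anomalies.append((i, values[i], z))
    return anomalies

# A steady response-time series with one severe spike (hypothetical data):
latencies = [100.0 + (i % 5) for i in range(60)]
latencies[50] = 400.0
print(detect_anomalies(latencies))  # only the spike at index 50 is flagged
```

Note that the normal 100–104 ms oscillation never trips the detector, because the baseline has learned that variation as normal – exactly the property that lets this approach avoid the fixed-threshold noise described above.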

More advanced anomaly detection technologies can run multiple analyses in parallel, analyzing multiple data sources simultaneously and identifying anomalous relationships within the system. When a chain of events causes a performance issue, the alert contains all the related anomalies, which helps support teams zero in on the cause of the problem immediately.
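One simple way to picture this cross-source correlation is to bundle anomalies that occur close together in time into a single alert. The grouping rule, window size, and event data below are assumptions for illustration only:

```python
def group_related_anomalies(anomalies, window_seconds=60):
    """Group anomalies from different sources that occur close in time.

    `anomalies` is a list of (timestamp, source, description) tuples.
    Anomalies within `window_seconds` of the previous one are bundled
    into one alert, so a causal chain surfaces as a single notification.
    """
    alerts = []
    for event in sorted(anomalies):
        if alerts and event[0] - alerts[-1][-1][0] <= window_seconds:
            alerts[-1].append(event)
        else:
            alerts.append([event])
    return alerts

# Hypothetical anomalies from three monitored sources:
events = [
    (1000, "db",  "query latency spike"),
    (1020, "app", "response time spike"),
    (1045, "web", "transaction failures"),
    (5000, "db",  "connection pool exhausted"),
]
for alert in group_related_anomalies(events):
    print([source for _, source, _ in alert])
```

Here the database, application, and web anomalies arrive within seconds of each other and land in one alert, so the support team sees the whole chain rather than three separate symptoms.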

Traditional approaches are also notorious for generating huge volumes of false alerts. Anomaly detection, by contrast, uses advanced statistical analysis to minimize them. The few alerts that are generated carry more data, which results in faster troubleshooting.

Anomaly detection looks for significant variations from the norm and ranks their severity by probability. Machine learning helps the system distinguish commonly occurring errors, spikes, and drops in metrics from the true anomalies that more accurately indicate a problem. This can mean the difference between tens of thousands of alerts a day, most of them false, and a dozen or so a week that are worth pursuing.
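Ranking severity by probability can be illustrated with a normal baseline: score each observation by how unlikely it is, so a routine spike scores as ordinary while a true anomaly scores as essentially impossible. The function and baseline figures below are illustrative assumptions:

```python
import math

def severity(value, mean, std):
    """Score an observation by how improbable it is under a normal
    baseline: the two-sided tail probability (smaller = more severe)."""
    z = abs(value - mean) / std
    return math.erfc(z / math.sqrt(2))  # P(|X - mean| >= |value - mean|)

# Under a hypothetical baseline of mean 100 ms, std 10 ms:
common_spike = severity(130, 100, 10)   # 3 std devs: happens routinely
true_anomaly = severity(200, 100, 10)   # 10 std devs: essentially never
print(common_spike, true_anomaly)
```

Sorting alerts by this probability is what separates the handful worth pursuing from the thousands of routine fluctuations.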

Anomaly detection can spot the early signs of a developing problem in massive volumes of data before it becomes a big one. By slashing troubleshooting time and cutting the noise from false alarms, it empowers IT teams to attack and resolve issues before they reach critical proportions.

If users do become aware of a problem, the IT team can respond "we're on it" instead of saying "thanks for letting us know."

Kevin Conklin is VP of Product Marketing at Ipswitch