Alerting Survival Strategies
July 24, 2014

Larry Haig
Intechnica


(aka – “If that monitoring system wakes me up at 3am one more time … !”)

In considering alerting, the core issue is not whether a given tool will generate alerts, as anything sensible certainly will. Rather, the central problem is what could be termed the actionability of the alerts generated. Failure to flag genuine performance issues is a clear no-no, but unfortunately over-alerting has much the same effect, as alerts that fire too often are rapidly ignored.

Effective alert definition hinges on the determination of “normal” performance. Simplistically, this can be established by testing across a business cycle (ideally, a minimum of 3-4 weeks). That is fine provided performance is reasonably stable. However, that is often not the case, particularly for applications experiencing large fluctuations in demand at different times of the day, week or year.

In such cases (which are extremely common), the difficulty becomes “on which point of the demand cycle should I base my alert threshold?” Set it too low, and your system is simply telling you that it’s lunchtime (or the weekend, or whenever greatest demand occurs). Set it too high, and you will miss issues occurring during periods of lower demand.

There are several approaches to this difficulty, of varying degrees of elegance:

■ Select tooling incorporating a sophisticated baseline algorithm – one capable of applying alert thresholds dynamically based on time of day/week/month etc. Surprisingly, many major tools use extremely simplistic baseline models, but some (e.g. AppDynamics APM) certainly have an approach that assists. When selecting tooling, this is definitely an area that repays investigation (a minimal sketch of the idea follows this list).

■ Set up independent parallel (active monitoring) tests separated by “maintenance windows”, with different alert thresholds applied depending upon when they are run. This is a messy approach which comes with its own problems.

■ Look for proxies other than pure performance as alert metrics. Using this approach, a “catchall” threshold is set for performance that is manifestly poor regardless of when it occurs. This is supplemented by alerting based on other factors that flag delivery issues – always provided that your monitoring system supports them. Examples include:

- Payload – error pages or partial downloads will have lower byte counts. Redirect failures (e.g. to mobile devices) will have higher than expected page weights.

- Number of objects

- Specific “flag” objects

■ Ensure confirmation before triggering an alert. Some tooling will automatically generate confirmatory repeat tests; others enable triggers to be based on a specified number or percentage of total node results.

■ Gotchas – take account of these. Good test design, for example controlling the bandwidth of end user testing to screen out results from low-connectivity tests, will improve the reliability of both alerts and results generally. More recently, the advent of long polling / server push content can severely distort synthetic external response times, especially if it is not consistently included. In this case, page load endpoints need to be defined and incorporated into test scripts to prevent false positive alerts.
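To make the first of these approaches concrete: assuming historical response times can be exported from the monitoring tool, a separate threshold can be derived for each hour of the week rather than relying on a single static figure. The sketch below is illustrative only – the function names, the 20-sample minimum and the catch-all fallback are assumptions, not any particular vendor’s implementation.

```python
from collections import defaultdict
from statistics import quantiles

def build_hourly_thresholds(samples, pct=95):
    """Derive a threshold for each hour of the week from baseline samples.

    samples: iterable of (timestamp, response_seconds) pairs, where
    timestamp is a datetime. Returns {hour_of_week: threshold_seconds}.
    """
    buckets = defaultdict(list)
    for ts, secs in samples:
        hour_of_week = ts.weekday() * 24 + ts.hour   # 0..167
        buckets[hour_of_week].append(secs)

    thresholds = {}
    for hour, values in buckets.items():
        if len(values) >= 20:                        # need enough history to be meaningful
            # use the 95th percentile of "normal" for that hour of the week
            thresholds[hour] = quantiles(values, n=100)[pct - 1]
    return thresholds

def breaches(ts, secs, thresholds, fallback=8.0):
    """True if a new measurement exceeds its hour-of-week threshold.

    fallback is the "catchall" ceiling applied when no baseline exists.
    """
    hour_of_week = ts.weekday() * 24 + ts.hour
    return secs > thresholds.get(hour_of_week, fallback)
```

The bucketing granularity (hour of week versus broader day parts) and the percentile chosen would need to be tuned against the business cycle discussed above.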

RUM-based alerting presents its own difficulties. Because it is based on visitor traffic, alert triggers defined as a certain percentage of outliers may become distorted in very low traffic conditions. For example, a single long delivery time in a 10 minute timeslot containing only 4 other “normal” visits represents 20% of total traffic, whereas the same outlier recorded during a peak business period with 200 normal results is less than 1% of the total. RUM tooling that enables alert thresholds to be modified based on traffic volume is therefore advantageous.
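One simple guard against that distortion is to require a minimum sample size before an outlier-percentage trigger is evaluated at all. A minimal sketch, with hypothetical names and figures:

```python
def rum_outlier_alert(timings, slow_threshold=10.0,
                      max_outlier_pct=5.0, min_samples=50):
    """Evaluate an outlier-percentage trigger for a single RUM timeslot.

    timings: page load times (in seconds) recorded in the slot.
    The percentage test only runs once traffic is high enough
    for it to be meaningful.
    """
    if len(timings) < min_samples:
        return False                      # too little traffic to judge
    outliers = sum(1 for t in timings if t > slow_threshold)
    return (outliers / len(timings)) * 100 > max_outlier_pct
```

With the figures from the example above, one slow visit among 4 normal ones is ignored as insufficient traffic, while the same outlier among 200 peak-period results contributes well under 1% and also stays below the trigger.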

Although it does not address the “normal variation” issue, replacing fixed trigger thresholds with dynamic ones (i.e. an alert state exists when the page/transaction slows by more than x% compared to its average over the recent past) can sometimes be useful.
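A minimal sketch of such a relative trigger, assuming measurements arrive at a regular interval and a rolling window of recent results can be held in memory (the window size and slowdown figure below are illustrative):

```python
from collections import deque

class RelativeTrigger:
    """Alert when the latest timing exceeds the rolling average by x%."""

    def __init__(self, slowdown_pct=50.0, window=288):
        self.slowdown_pct = slowdown_pct
        self.history = deque(maxlen=window)   # e.g. the last 24h of 5-minute tests

    def check(self, latest_seconds):
        fired = False
        if len(self.history) == self.history.maxlen:    # only judge once the window is full
            baseline = sum(self.history) / len(self.history)
            fired = latest_seconds > baseline * (1 + self.slowdown_pct / 100)
        self.history.append(latest_seconds)
        return fired
```

As noted, this still inherits whatever demand-cycle variation happens to be present in the rolling window.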

Some form of trend state messaging (that is, condition worsening/improving) subsequent to the initial alert can mitigate the physical and emotional energy expended on simple “fire alarm” alerting, particularly in the middle of the night.

An interesting (and long overdue) approach is to work directly on the source of the problem – download raw baseline data to a data warehouse and apply sophisticated pattern recognition analysis. These algorithms can be developed in-house if time and appropriate skills are available, although the mathematics is not necessarily trivial. Some standalone tooling exists, and more can be expected as this approach proves its worth – the baseline management of most APM vendors represents an open goal at present.

Incidentally, such analysis is valuable not only for alerting but also for demand projection and capacity planning.
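The in-house route need not start with anything exotic. As a first cut – sketched below with pandas, on the assumption that raw measurements have been exported with a timestamp index and a response_time column (a hypothetical format) – each point can be scored against the history for the same hour of the week and flagged when it deviates strongly.

```python
import pandas as pd

def flag_anomalies(df, z_cutoff=3.0):
    """Flag measurements that deviate strongly from their hour-of-week history.

    df: DataFrame with a DatetimeIndex and a 'response_time' column
    (hypothetical export format). Returns a copy with a boolean
    'anomaly' column added.
    """
    df = df.copy()
    df["hour_of_week"] = df.index.dayofweek * 24 + df.index.hour
    grouped = df.groupby("hour_of_week")["response_time"]
    mean = grouped.transform("mean")
    std = grouped.transform("std")
    df["anomaly"] = (df["response_time"] - mean).abs() > z_cutoff * std
    return df
```

More serious implementations would also allow for trend and multiple overlapping seasonalities, which is where the non-trivial mathematics mentioned above comes in; the same hour-of-week profile also doubles as a crude input to demand projection.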

A few final thoughts on alerts post-generation. The more evolved alert management systems will permit conditional escalation of alerts – that is, alert a primary group first, then inform group B if the condition persists or worsens. Systems allowing custom coding around alerts (such as Neustar) are useful here, as are the specific third-party alert handling systems available. If using tooling that only permits basic alerting, it is worth considering integration with external alerting, either of the “standalone service” type or (in larger corporates) integrated with central infrastructure management software.
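As an illustration of the escalation idea (a sketch only, not modelled on any particular vendor’s feature – the addresses and delays are invented):

```python
import time

# Hypothetical escalation policy: who is told, and after how long
ESCALATION_POLICY = [
    (0,    ["oncall-primary@example.com"]),        # immediately
    (900,  ["oncall-secondary@example.com"]),      # after 15 minutes
    (3600, ["ops-manager@example.com"]),           # after 1 hour
]

def recipients_for(alert_started_at, still_firing, now=None):
    """Return everyone who should have been notified by now.

    alert_started_at: epoch seconds when the condition first fired.
    still_firing: whether the condition persists at evaluation time.
    """
    if not still_firing:
        return []
    now = time.time() if now is None else now
    elapsed = now - alert_started_at
    notified = []
    for delay, group in ESCALATION_POLICY:
        if elapsed >= delay:
            notified.extend(group)
    return notified
```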

Lastly, delivery mode. Email is the basis for many systems. It is tempting to regard SMS texting as beneficial, particularly in extreme cases. However, as anyone who has been sent a text on New Year’s Eve only to have it show up 12 hours later knows, such store-and-forward systems can be false friends.

Larry Haig is Senior Consultant at Intechnica.
