Alert Thresholds: Aggravating Mess or Indispensable Friends?
September 18, 2013
Praveen Manohar

Setting up a network or application monitoring system involves creating alerts for critical parameters that need attention. Alerts are an integral part of monitoring: they should be easily understandable, provide actionable knowledge, and not make excessive noise. For an alert to be valuable and meet those criteria, the right set of thresholds is essential. The question, then, is: how do you find the right threshold values for your alerts?

To determine and set the right threshold value, you need a deep understanding of the application, the server that hosts it, and the environment where the servers reside. You also need an application monitoring system that simplifies the process of isolating abnormal performance patterns in your environment. In the best case, you also have tools that assist with automatic threshold determination based on your real-world environment.

The Challenge of Dynamic Environments

When an application behaves as expected, or there is no significant variation in its day-to-day behavior, setting an alert threshold is a cakewalk: you know what is normal and what is unexpected. But what if the application has no fixed baseline? For applications with dynamic behavior patterns, even Subject Matter Experts (SMEs) may find it challenging to set ideal thresholds, let alone have the patience to maintain and recalibrate them over time.

Let us look at some examples where alerting is difficult because finding the right threshold is challenging. Take page files: their usage depends on workload, kernel, and operating system parameters, so it differs from server to server.

Other examples are LDAP, Exchange, Lotus, and the like, whose behavior depends on organization size, deployment platform, and usage patterns. And then there is SQL Server, whose behavior changes based on the number of applications connected to the database.

These scenarios contribute to some major problems:

- False alerts flood your inbox, leading to a "crying wolf" situation. This happens when you set a threshold too low: your mailbox fills with alarms, leaving you no way to identify the truly important alerts.

- You set a threshold too high and receive almost no alerts. In that case, the first alert you get may be a critical ticket raised by a user about application performance.

- The right threshold varies from server to server and also over time. You need to constantly monitor your servers and adapt to changing usage patterns, which could mean investing time and resources in recalculating numerous threshold values far more often than you'd like. That is easier said than done.

Because thresholds change over time and vary from server to server, the time and resources spent pulling up multiple reports and recalculating thresholds for every server with every change can be huge. This is why it is imperative to use a monitoring tool that can automatically set your alert thresholds.

Your monitoring tool should be able to use the data it already collects for a monitored parameter and do the math to suggest the right threshold. Such a tool saves time because you don't have to constantly revisit hundreds or thousands of metrics with every change in the network or services environment, pull reports, and recalculate. Math should be reserved for more enjoyable leisure activities, like calculating subnets. With automation, you won't have to compute means and standard deviations to determine what you think is the right threshold. All of this reduces false alerts and lets you quickly cut through the clutter and identify critical issues before users call the helpdesk.
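To make the mean-and-standard-deviation idea concrete, here is a minimal sketch of how a baseline-derived threshold might be computed. This is not any particular vendor's algorithm, just the common statistical approach: treat values more than k standard deviations above the historical mean as abnormal. The function name, the k=3 default, and the sample data are all illustrative assumptions.

```python
import statistics

def suggest_threshold(samples, k=3):
    """Suggest an alert threshold as mean + k standard deviations.

    A minimal sketch of baseline-derived thresholding: readings beyond
    k sigma above the historical mean are treated as abnormal.
    """
    mean = statistics.mean(samples)
    # With fewer than 2 samples (or identical values), there is no spread,
    # so the threshold degenerates to the mean itself.
    stdev = statistics.stdev(samples) if len(samples) > 1 else 0.0
    return mean + k * stdev

# Example: hourly CPU readings (%) hovering around 40
cpu_history = [38, 41, 44, 39, 42, 40, 43, 37, 45, 41]
print(round(suggest_threshold(cpu_history), 1))  # → 48.7
```

In practice a tool would recompute this per server and per metric on a rolling window, which is exactly the repetitive work the article argues should be automated rather than done by hand.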

Because application uptime is critical, automatic threshold capability leaves you with enough time to deal with issues that really need your attention. Alert tuning shouldn't be one more item on your backlog; alerts can be dependable partners that may bring you bad news, but in the best possible way. Who ever thought you could enjoy alerts?

ABOUT Praveen Manohar

Praveen Manohar is a Head Geek at SolarWinds, a global IT management software provider based in Austin, Texas. He has 7 years of IT industry experience in roles such as Support Engineer, Product Trainer and Technical Consultant, and his expertise lies in technologies including NetFlow, Flexible NetFlow, Cisco NBAR, Cisco IPSLA, WMI and SNMP. Manohar gives strategic guidance for end users on applications, networks and performance monitoring tools.

Related Links:

www.solarwinds.com
