Ensuring application performance is a never-ending task that involves multiple products, features, and best practices. No single process, feature, or product does everything. A good place to start is pre-production and production monitoring with both an Application Performance Management (APM) tool and a Unified Monitoring tool.
The APM tool will trace and instrument your application and application server activity, and often the end user experience via synthetic transactions. The development team and DevOps folks need this.
The Unified Monitoring tool will monitor the supporting infrastructure. The IT Ops team needs this. DevOps likes it too because it helps make IT Ops more effective, which in turn helps assure application delivery.
More Cost Effective
APM tools do not specialize in infrastructure monitoring like unified monitoring solutions do, and unified monitoring solutions do not provide application monitoring depth and diagnostics like the APM tools do. And on top of that, the different audiences need different information.
The best approach is to buy APM for the most critical applications. Most organizations use APM for only 10% to 15% of their applications; it is too expensive to buy it for everything. Then, for second-tier applications that need some monitoring, they use the unified monitoring solution. It is much less expensive, and if you select one with synthetic transaction capability, you can get "good enough" end user experience monitoring to know whether the application is performing well.
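To make the "good enough" idea concrete, here is a minimal sketch of what a synthetic transaction boils down to: a scripted request against the application, a latency measurement, and a simple verdict. The URL, thresholds, and verdict labels are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal synthetic-transaction sketch (hypothetical URL and thresholds).
import time
import urllib.request


def probe(url, timeout=10):
    """Issue one scripted request; return (HTTP status or None, latency in ms)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        # Connection refused, DNS failure, timeout, etc. all count as no response.
        status = None
    latency_ms = (time.monotonic() - start) * 1000
    return status, latency_ms


def classify(status, latency_ms, slow_ms=2000):
    """'Good enough' verdict: is the application up, degraded, or down?"""
    if status is None or status >= 500:
        return "down"
    if latency_ms > slow_ms:
        return "degraded"
    return "ok"
```

A scheduler would call `probe("https://example.com/login")` every few minutes and alert when `classify` stops returning `"ok"`. This catches "is the user experience acceptable?" without the code-level depth of a full APM trace.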
Service-Centric is Key
When it comes to unified monitoring, it is important to understand that most unified monitoring vendors provide only endpoint monitoring. With endpoint monitoring alone, it is impossible to provide highly accurate root-cause isolation, to identify which service or application is impacted, or to gauge the extent of the impact. Is the service merely at risk without application delivery being affected yet, is it down, or is it somewhere in between?
Be sure the unified monitoring vendor is service-centric, models relationships between components, and identifies the root cause, the service or application impacted, and the extent of the impact. This can save hours during an outage.
Better yet, by identifying when services are at risk, such a solution helps you proactively identify and address issues before service or application delivery is impacted.
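The service-centric idea above can be sketched as a simple dependency model: map each service to the components it relies on, then derive per-service impact from component state. The service and component names, and the three-level verdict (ok / at risk / down), are hypothetical examples of the kind of model a service-centric tool maintains.

```python
# Hypothetical service model: each service maps to the components it depends on.
SERVICE_MODEL = {
    "checkout":  ["web-1", "app-1", "db-primary"],
    "reporting": ["app-2", "db-replica"],
}


def assess_impact(model, failed=(), degraded=()):
    """Derive the extent of impact on each service from component state.

    A service is "down" if any dependency has failed, "at risk" if any
    dependency is degraded, and "ok" otherwise.
    """
    impact = {}
    for service, deps in model.items():
        if any(c in failed for c in deps):
            impact[service] = "down"
        elif any(c in degraded for c in deps):
            impact[service] = "at risk"
        else:
            impact[service] = "ok"
    return impact
```

With this model, a single component alert ("db-primary is failing") immediately answers the questions endpoint monitoring cannot: which service is impacted (checkout, not reporting) and how badly (down versus merely at risk).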
Scott Hollis is Director of Product Marketing for Zenoss.