A recent APMdigest blog by Jean Tunis, The Evolving Needs of Application Performance Monitoring - Part 2, provided an excellent background on Application Performance Monitoring (APM) and what it does. The benefits of APM solutions are much better understood than in years past. An interesting data point from Gartner Inc. mentioned in the article confirms this, stating that IT departments plan to increase their use of APM solutions to monitor applications from 5% in 2018 to a projected 20% in 2021.
A further topic I want to touch on, though, is the need for good quality data. To get the most out of your APM solution, you need to feed it the best quality data possible. Irrelevant data, fragmented data, and corrupt data are all common culprits that either slow an APM solution's time to resolution or prevent problem resolution altogether.
There are two easy activities you can conduct to increase the quality of the input data to your APM tool. First, install taps to collect monitoring data. Taps can be installed anywhere across your network. This lets you collect ingress/egress traffic to your network, data to/from remote branch offices, and data from anywhere across the network that you think might be experiencing some sort of issue.
Taps deliver maximum flexibility. In contrast, SPAN and mirroring ports off of your Layer 2 and 3 switches do not offer the same flexibility. For instance, placing switches all over your network just to capture data is unnecessary and expensive. In addition, mirroring ports can drop data, especially in CPU overload situations. When it comes to troubleshooting and performance monitoring, you need every piece of relevant data, not just portions of it.
Second, you need to deploy a network packet broker (NPB) in your network. The function of the NPB is to aggregate monitoring data from across your network, filter that data based upon the criteria you are looking for, and remove unnecessary, duplicate copies of the data. Once this is accomplished, the NPB forwards the data on to your APM solution. The NPB may reduce the traffic sent to your APM solution by 50% or more, making your APM solution that much more effective and potentially reducing your future APM tool costs.
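To make the filter-and-deduplicate idea concrete, here is a minimal sketch of the kind of processing an NPB performs. This is an illustration only, not vendor code: the `broker` function, the dict-based packet representation, and the `dst_port` filter criterion are all hypothetical, chosen just to show how duplicate copies (the same packet captured at multiple taps) get dropped before data reaches the APM tool.

```python
import hashlib

def broker(packets, wanted_port):
    """Toy sketch of NPB-style processing: filter to the traffic of
    interest, drop duplicate copies, and return what would be
    forwarded to the APM solution."""
    seen = set()       # digests of payloads already forwarded
    forwarded = []
    for pkt in packets:
        if pkt["dst_port"] != wanted_port:
            continue   # filter: keep only traffic matching our criteria
        digest = hashlib.sha256(pkt["payload"]).hexdigest()
        if digest in seen:
            continue   # dedupe: same packet seen at another tap
        seen.add(digest)
        forwarded.append(pkt)
    return forwarded

# Two taps both captured the same HTTPS packet; only one copy survives.
captured = [
    {"dst_port": 443, "payload": b"GET /login"},
    {"dst_port": 53,  "payload": b"dns query"},      # filtered out
    {"dst_port": 443, "payload": b"GET /login"},     # duplicate copy
]
print(len(broker(captured, 443)))  # 1
```

In a real NPB this happens in hardware at line rate, and filtering criteria can include VLANs, subnets, and application signatures, but the principle is the same: less noise in means faster answers out.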
Something else to consider is that the tap and NPB concept can be used in cloud solutions as well. This means you can deploy the concept across both physical on-premises and virtual networks. This is especially important for hybrid cloud scenarios (a mixture of physical on-premises and public/private cloud) that are prevalent in today's enterprise networks. Visibility across this mixture of network types can be a significant problem, but it is easily remedied with a tap, virtual tap, and NPB approach.
In the end, APM solutions are a critical component of troubleshooting and performance monitoring, but you need to make sure your APM solution is getting the right data.