The quality of an end user's experience of an application is becoming an ever more important consideration in the APM world. It's not enough to draw a conclusion about the end user's experience based on an evaluation of how an individual application is performing. Increasingly, multiple applications and loosely coupled infrastructure components are coming together to contribute to the end user's experience. Understanding how all those applications and components are interacting at the point where the user is engaging them is crucial to an understanding of the user's experience.
So where do you start to gain this understanding? First, you must identify what constitutes a user's experience of an application: Response speed? Ease of information access? Depth of integration with other applications? Until you understand what constitutes a user's experience, you're not in a position to measure or quantify it.
Some of the elements that contribute to an end user's experience of an application will be inside the corporate firewall — servers, routers, database machines, and more.
Other elements contributing to the end user's experience will be outside the corporate firewall — data feeds from third parties, for example.
Organizations that want to know how well their applications perform for users, particularly customers interacting from outside the firewall, need monitoring tools that view the user's experience from both inside and outside.
Monitoring Application Response Times For Each Transaction
Today's application infrastructures involve many servers, routers, switches, load balancers, and more. In any given application, information moves among these different devices. To understand fully what is happening every time the data moves among application or network elements, you need tools that can track and capture transaction information in real time and at a very granular level.
You also need to monitor for patterns in user engagement. Response times for an online booking application, for example, may be consistent all week long, then spike suddenly on a Friday night when everyone leaves work for the weekend. The user experience of your applications on a Friday night may be poor, given the traffic that your systems are experiencing.
Without insight into the response times for each movement between application and infrastructure elements, though, you won't know where to make changes to improve the end user experience.
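As an illustration, here is a minimal Python sketch of the kind of granular, per-segment timing such a tool captures. The transaction ID, segment names, and in-memory store are hypothetical stand-ins for what a real APM agent would record and ship to a central collector.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# In-memory store of per-segment timings; a real APM agent would ship
# these measurements to a central collector instead.
timings = defaultdict(list)

@contextmanager
def track_segment(transaction_id, segment):
    """Time one hop of a transaction, e.g. 'web->app' or 'app->db'."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        timings[(transaction_id, segment)].append(elapsed_ms)

# Example: timing the database hop of a hypothetical booking transaction.
with track_segment("booking-42", "app->db"):
    time.sleep(0.05)  # stand-in for the real database call

print(dict(timings))
```

Collected at this level of detail, the same timings can also reveal the traffic patterns described above, such as a Friday-night spike in a particular segment.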
Monitoring Business Metrics Related to Application Performance
While the ability to monitor all the different aspects of the application and infrastructure that contribute to end user experience is critical, you also need a context in which the data you capture from that monitoring effort has relevance. You need to develop business metrics that identify desired transaction performance levels.
Without both the metrics and the ability to track transaction performance against them, you have information without context, and without that context it is impossible to know where or how to refine a user's experience.
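For example, a business metric can be as simple as a target response time per transaction type. The sketch below uses hypothetical transaction names and targets; the point is that a raw measurement only becomes actionable once it can be compared against a target like this.

```python
# Hypothetical business metrics: target response times, in milliseconds,
# for the transaction types that matter to the business.
SLA_TARGETS_MS = {
    "search_flights": 800,
    "complete_booking": 2000,
    "load_itinerary": 500,
}

def meets_target(transaction_type, measured_ms):
    """Return True/False if a target exists, or None when there is no
    business metric to give the measurement any context."""
    target = SLA_TARGETS_MS.get(transaction_type)
    if target is None:
        return None
    return measured_ms <= target

print(meets_target("complete_booking", 2300))  # False -> worth investigating
```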
Monitoring the Impact on End User Experience Across Infrastructure Tiers
Increasingly, today's applications are built from loosely coupled components that can exist in many different places and in many different infrastructure tiers — even within a single organization. Tracing root causes of end user experience problems is more complicated now, given the different infrastructure tiers in place.
In order to improve that end user experience, you need tools that can provide a comprehensive view of all those infrastructure elements — and show you how data and messages are moving between those elements.
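One common technique for building that comprehensive view is to propagate a correlation ID with each request as it crosses tiers. The sketch below, with hypothetical tier and operation names, shows the idea: every tier emits a timing record tagged with the same trace ID, so a collector can reassemble the end-to-end path and point to the tier where a problem originates.

```python
import uuid

def new_trace_context():
    """Create a correlation ID that every tier attaches to its records."""
    return {"trace_id": uuid.uuid4().hex}

def log_span(trace, tier, operation, elapsed_ms):
    """Emit one span; a collector can stitch spans sharing a trace_id
    back into a single end-to-end view of the transaction."""
    print(f"trace={trace['trace_id']} tier={tier} op={operation} ms={elapsed_ms}")

# Hypothetical flow: one trace_id follows a request through three tiers.
trace = new_trace_context()
log_span(trace, "web", "GET /booking", 42.0)
log_span(trace, "app", "price_lookup", 118.5)
log_span(trace, "db", "SELECT fares", 64.2)
```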
Generating Synthetic Transactions For Measuring End User Performance
Finally, the ability to monitor the end user experience and trace root causes of problems across different transactions and infrastructure elements is crucial when an end user calls to report a problem. With these tools, you can find and fix a problem quickly.
It would be better, however, to monitor the system proactively and find end user experience problems before end users report them. If you can do that, you can eliminate a large number of poor experiences before users ever encounter them.
Passive monitoring tools can provide insights into the end user experience from outside the firewall. They can monitor transactions, the transitions from page to page in a web application, and how long the user waits for a transaction to complete before moving on to the next step.
Active monitoring tools, in contrast, can create synthetic transactions that you can use to understand end user experience without the end user's involvement. They enable you to get a jump on end user experience management, because you can find and fix problems before the users do.
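At its simplest, a synthetic transaction is a scripted request that is timed and checked on a schedule. The sketch below, assuming a hypothetical URL and alert threshold, shows the basic shape; real active monitoring tools script full multi-step user journeys rather than a single request.

```python
import time
import urllib.request

# Hypothetical endpoint and threshold; substitute the pages your users actually hit.
SYNTHETIC_URL = "https://www.example.com/"
ALERT_THRESHOLD_MS = 1500

def run_synthetic_transaction(url=SYNTHETIC_URL):
    """Issue a scripted request and time it, with no real user involved."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except Exception as exc:
        return {"ok": False, "error": str(exc)}
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "ok": status == 200 and elapsed_ms <= ALERT_THRESHOLD_MS,
        "status": status,
        "elapsed_ms": round(elapsed_ms, 1),
    }

# A scheduler (cron or similar) would run this every few minutes and raise
# an alert when "ok" is False, before any end user calls to complain.
print(run_synthetic_transaction())
```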
Ultimately, when you're looking at APM, you need to pay particular attention to the tools that enable you to monitor and manage the experience of the end user. Traditional APM tools are powerful for managing traditional applications, but as newer applications veer away from traditional development and deployment models, you need tools focused on the end user experience in order to understand how best to use those APM tools to modify the application delivery environment.
Create the right user experience, and you will keep more customers. They will be engaged with the experience you have created — and that, ultimately, is the best measure of application performance.
About Raj Sabhlok and Suvish Viswanathan
Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and the maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.