In a recent post, Why Today's APM Solutions Aren't Optimized for DevOps, I discussed the odd contradiction I’ve been noticing lately in the APM marketplace. Fragmented approaches to APM are being promoted as solutions to support the DevOps ideal of continuous integration and delivery, but the stark lack of integrated tools in these APM arsenals isn’t likely to make communication and collaboration between dev and ops any easier or more efficient.
That’s why integrated, unified APM solutions — consisting of software tools and testing functions that can fluently speak to each other and look at the same information at the same time — are the only hope for APM in a streamlined DevOps world. Unfortunately, even the best attempts at tool integration won’t solve the deeper issues of performance management if they approach it completely backwards from the start.
The Varieties of Anti-User Experience
The problem is that most vendors in the APM arena are looking at what they do from the wrong end. Starting from the volumes of data their tools generate and record, they woo and immerse their customers in “analytics.” Eventually, somewhere down the line, they may stumble upon the issues that are actually impacting end users.
Lo and behold! There are humans on the other side of this matrix. And what kind of experience are those users of the application having? It’s hard to say, since we can only extrapolate from our data and try to imagine what the quality of the user experience might be. But wait a minute – how does that make any sense? Shouldn’t we be looking at application speed and response time from the perspective of the people to whom it ultimately matters? Whose idea was it, anyway, to privilege data analytics over what our end users actually experience and perceive?
Data: A Supporting Character in a Story Written By User Experience
These are obviously rhetorical questions, because there’s always been a better way to engage in APM, and it begins and ends with the end-user. If monitoring and optimizing performance to deliver a streamlined end-user experience is our goal, then it should be obvious that the right way to go about it is to start with our end-users’ experience and work our way back through the software architecture from there.
At the end of the day, no matter how many sources of performance lag you’ve caught and corrected, your efforts only make a difference if they improve the user experience of your software. Your work needs to become user-centric, both in theory and in practice, if customer experience has any connection to your business and revenue goals (which it almost certainly does).
Of course, monitoring server responses, stressing your system baselines with regular load tests, and analyzing the resulting data is essential to being able to manage the quality and reliability of your applications, day in and day out. I’m not arguing otherwise. Big-data analytics and code-level visibility are important concepts in the APM space, and I believe them to be critical components of any full-featured, end-to-end solution.
But the fact remains that user experience is actually bigger and more inclusive than data, because without a clear emphasis on your end-user experience, your deep-data dives may lose their meaning. Sometimes performance problems can’t even be found at the level of code and internal datacenters, but rather in more obvious user experience issues, like a slow web page caused by third-party content. As the old adage has it, if you focus only on the trees, you may lose sight of the forest ... and end up getting lost.
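To make the third-party example concrete, here is a minimal, hypothetical sketch of that user-first diagnosis: given per-resource load timings as a real-user monitoring agent might capture them in the browser, it flags when third-party content dominates page load. The host names, data shape, and 50% threshold are all illustrative assumptions, not any particular vendor's API.

```python
from urllib.parse import urlparse

def third_party_share(page_host, resource_timings):
    """Return the fraction of total load time spent on third-party resources.

    resource_timings: list of (url, duration_ms) pairs, shaped like the
    per-resource data a real-user monitoring agent might report (hypothetical).
    """
    total = sum(duration for _, duration in resource_timings)
    if total == 0:
        return 0.0
    third_party = sum(
        duration for url, duration in resource_timings
        if urlparse(url).hostname != page_host  # anything off the page's own host
    )
    return third_party / total

# Illustrative timings: the first-party code is fast; an ad tag is not.
timings = [
    ("https://shop.example.com/app.js", 120),
    ("https://shop.example.com/styles.css", 80),
    ("https://ads.partner-cdn.net/tag.js", 600),  # slow third-party tag
]
share = third_party_share("shop.example.com", timings)
if share > 0.5:
    print(f"Third-party content accounts for {share:.0%} of load time")
```

No amount of code-level tracing inside the datacenter would surface that ad tag; only measurement anchored at the user's browser does.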
But perhaps Steve said it best:
“You’ve got to start with the customer experience and work back toward the technology – not the other way around.”
-Steve Jobs
Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.