“A company missing its targets is an underperforming company. A company missing its forecast is a company out of control.”
This was a favorite saying of an old friend of mine in sales management, and while you could argue the merits of the first sentence, the second is hard to deny. And yet it seems to me that when it comes to APM, many organizations fail to qualify for either category: neither underperforming nor even out of control.
Does that sound absurd? Perhaps. But if you think for a minute about what these sentences imply about an organization, you’ll see where I’m coming from. They imply that the company has (financial) performance targets, that it forecasts its performance against those targets, that it monitors its achievement, and that it evaluates itself based on that achievement. This is almost a no-brainer for monitoring financial performance, but not many organizations can boast the same level of maturity in their application performance monitoring.
Just to carry the analogy a little further, if company financial performance were tracked and reported to shareholders in the same manner as APM, there would be a lot more senior management churn. Picture the report: “Overall costs are up, but we’re not sure exactly why; the two areas we track in detail were unchanged, so it must have been something else.”
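To make that concrete, here is a minimal, purely hypothetical sketch (the service names, latency figures and tolerance are invented for illustration) of what the same target/forecast/actual discipline might look like when applied to per-service application latency:

```python
# Illustrative sketch only: applying the "target vs. forecast vs. actual"
# discipline to application latency. The service names, thresholds and
# numbers are all invented for the example.

SERVICES = {
    # service: (target_ms, forecast_ms, actual_ms) for 99th-percentile latency
    "order-entry": (5.0, 4.5, 4.8),
    "market-data": (2.0, 1.8, 2.6),
    "risk-checks": (8.0, 9.0, 9.2),
}

# How far actuals may drift from the forecast before we call the process
# "out of control" (10% here, chosen arbitrarily for the illustration).
FORECAST_TOLERANCE = 0.10

for name, (target, forecast, actual) in SERVICES.items():
    missed_forecast = abs(actual - forecast) > forecast * FORECAST_TOLERANCE
    missed_target = actual > target

    if missed_forecast:
        status = "out of control (actual diverged from forecast)"
    elif missed_target:
        status = "underperforming (missed target, but as forecast)"
    else:
        status = "on track"

    print(f"{name:12s} target={target}ms forecast={forecast}ms "
          f"actual={actual}ms -> {status}")
```

The code itself is trivial; the point is the categories. A missed target that was forecast is a known, managed problem, while an actual that diverges from the forecast means your monitoring and planning cannot be trusted.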
And yet application performance today has a massive impact on business success or failure — from worker retention and satisfaction to IT infrastructure spend to customer experience — so it is surprising that the current state of play is allowed to persist.
There are numerous surveys and much anecdotal evidence to support the view that customer satisfaction with APM is low, and that APM does not seem to prevent application outages and poor performance.
Why Are These Poor Results Tolerated?
There are probably myriad reasons for this, and much disagreement about their relative significance, but a few come to mind:
- Ultimate responsibility for delivering application performance is split among many teams — network ops and engineering, application ops, software development, datacenter engineering — with no “Performance Tsar” to join the dots and, more importantly, to call the shots.
- An almost emotional attachment to outdated tools and work practices. The application delivery environment has changed dramatically, but many of the monitoring practices have not kept pace.
- A belief that what was “good enough” yesterday will be good enough today. Just because network availability was the principal metric for network performance 10 years ago doesn’t mean it still is today.
- Crucially, it’s not always easy to relate application performance to profit and loss, or, to put it another way, to justify investment in application performance by demonstrating increased revenue or reduced costs. So the temptation to muddle through and accept the current situation is strong.
Confronting APM Issues
One industry that has had to confront these issues and force through solutions is electronic trading. When application performance problems can quite literally put you out of business, application performance monitoring takes on a whole new significance.
These organizations have understood that their applications are directly in competition with their competitors’ applications. (The funny thing is that the same holds true in other industries, but that realization has not yet become widespread!)
They have taken the following steps:
- Appoint a trusted individual to own the application performance problem: someone who is technically competent across a broad range, who can lead cross-functional teams, and who gets things done. This individual usually reports straight to the management board for this function, irrespective of where they actually sit in the org chart.
- Decide on the outcomes they want, then make them happen. They don’t take the traditional route of evaluating the options available in the market and choosing the best one, since that might allow each team to settle on a solution back in its comfort zone. Instead, they force innovation by demanding business outcomes, whether those outcomes seem reasonable or not. (The story of Steve Jobs and the fan in the iMac comes to mind.)
- Make sure everyone in the organization understands the importance of the effort and is aligned behind making it happen. It’s a team, and the team is in a competition; if you don’t play together, you can’t expect to win.
- Set aggressive timelines, in phases, review results, then iterate.
It might all sound a bit obvious, which makes it all the more surprising that it isn’t happening in more organizations. Companies that have taken this approach have reaped benefits beyond improved application performance and fewer outages: they can plan and implement change with greater confidence and less risk, and they can allocate future IT spend to the areas where it will deliver the greatest impact.
How important is application performance to you?
Donal O'Sullivan is Vice President of Product Management for Corvil.