IT Operations Unsatisfied with APM and BSM, Survey Says
February 18, 2013

More than half (63%) of senior IT operations executives are dissatisfied with their Application Performance Monitoring (APM) solutions, and 75% are dissatisfied with their Business Service Monitoring (BSM) solutions, according to a new BlueStripe survey of Fortune 500 companies.

While reasons vary, a common theme is the inability of these tools to keep pace with the changing make-up of applications, both in the data center and in public and hybrid cloud environments.

Top reasons for dissatisfaction with APM tools, according to the survey, include an inability to support all applications or track all application components; metrics that are too developer-centric; difficult tool integration; and the simple fact that the tools do not actually help IT solve problems.

The problems cited with BSM tools include manpower requirements to keep service models up to date; lack of root cause analysis; too many alerts; difficult integration with other tools; and limited alerting for service level issues.

The survey highlighted three key trends in IT Operations:

- Current IT Operations processes for application monitoring and problem solving are both ineffective and manpower intensive

- IT Operations leaders are dissatisfied with their current set of performance monitoring and management tools

- Enterprise companies are hesitant to move mission-critical transactional applications to the cloud until processes and tools become more effective

“As companies continue to incorporate new technologies into their applications, the inability of conventional APM and BSM tools to keep up is taking its toll on IT Operations,” said Chris Neal, BlueStripe co-founder and CEO. “We were surprised to learn that in 2013, 81 percent of companies still have more than a quarter of their application issues go un-resolved, even with APM and BSM tools.”

Additional results from the survey:

- 68% of respondents reported failing to identify at least 1 in 10 business-impacting incidents before users did

- 36% of respondents reported learning about more than 25% of problems from end user complaints

- Only 8% of respondents have a monitoring framework that both aggregates alerts and provides appropriate application and service level context for interpreting and acting on those alerts

- 92% of respondents either have fragmented monitoring, using separate tools, or basic integrated monitoring, which does not correlate alerts to service level issues

- 52% of respondents reported that the standard process for fixing outages is a bridge call, which in large organizations can involve more than 50 individuals

- Companies using bridge calls as the primary approach reported the lowest success rates, with only 14% solving outages quickly

- Companies that used smaller teams for problem solving reported a greater success rate, with 29% able to solve outages quickly

Survey results also indicated a sharp contrast in attitudes regarding virtualization and private cloud versus public and hybrid cloud deployments for critical applications. In last year’s (January 2012) survey, IT Operations executives indicated that they viewed virtualization and private cloud as “just another technology” to be managed within their application architecture. The 2013 results build on this, showing widespread adoption of virtualization and private cloud.

In contrast, attitudes toward public and hybrid cloud among large company IT operations executives were distinctly skeptical. Despite the rapid growth of public cloud services like Amazon Web Services (AWS) and Microsoft Azure, large companies are explicitly avoiding critical application deployments using public and hybrid cloud, in part due to the limited ability of APM and BSM tools to monitor and manage new technologies.

About the Survey

BlueStripe Software surveyed senior IT Operations executives at 166 large US-based companies in early 2013.

Related Links:

www.bluestripe.com

