Flying Blind — The 2013 IT Operations Quotient Report
June 11, 2013

Sasha Gilenson
Evolven


IT Operations is now overwhelmed by the volume, velocity, and variety of change and configuration data. Lacking insight and actionable information, teams find change and configuration problems a chronic pain.

In recent surveys at the Gartner Data Center Summit and ServiceNow Knowledge13 conferences, Evolven asked over 300 IT Operations professionals questions critical to IT operations management. 84% of them said that they want to significantly improve their IT operations management.

The 2013 IT OQ (Operations Quotient) Report gives IT executives a good indication of whether their IT ops investments have yielded the desired results. It uses the IT Operations Quotient (OQ), a metric for evaluating an organization's operational ability to support existing business services and incoming business requirements.

When an Incident Occurs, Can You Quickly Know What Changed?

Only 7% of the professionals surveyed indicated that, using their current IT management tools, they could quickly identify what changed in order to respond to problems and incidents.

The first question IT operations teams ask themselves when an incident occurs is "What changed?" Given the complexity and dynamics of the modern data center, with overwhelming configuration data and frequent changes, answering that question has become quite formidable.

Across applications, environments, and individual instances, mistakes and unauthorized changes happen, forcing IT ops to spend significant amounts of time managing configuration values.

Traditional IT management tools were not designed to deal with the complexity and dynamics of the modern data center. These tools cannot automatically collect data down to the most granular details, analyze all changes, and consolidate the results to extract meaningful information from the sea of raw change and configuration data.

Without systems to manage and organize this growth, IT will drown in its own data.
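To make the "what changed?" question concrete, here is a minimal sketch of change detection: it diffs two configuration snapshots, modeled as flat dictionaries of parameter names to values, and reports what was added, removed, or modified. The parameter names and values are invented for illustration; a real tool would collect snapshots automatically and at far finer granularity.

```python
def diff_config(before: dict, after: dict) -> dict:
    """Compare two configuration snapshots and report what changed."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    modified = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return {"added": added, "removed": removed, "modified": modified}

# Hypothetical snapshots taken before and after the incident window
before = {"max_connections": "200", "heap_size": "2g", "debug": "off"}
after  = {"max_connections": "200", "heap_size": "1g", "cache_ttl": "60"}

changes = diff_config(before, after)
print(changes["modified"])  # {'heap_size': ('2g', '1g')}
print(changes["removed"])   # {'debug': 'off'}
```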

Can You Automatically Validate that Your Release Deployed Accurately?

Only 8% of the participants surveyed said that they could currently validate the accuracy of their deployments automatically. Available release management tools are unprepared for one-off changes or changes that do not follow policy.

IT organizations regularly transition changes to production environments, checking changes throughout a set of pre-production environments.

Now IT is under even more pressure. To meet business requirements, application deployments have accelerated, and compressed software release schedules have driven up the pace of change. The increasingly agile nature of application and infrastructure change requests leaves IT operations at a loss, inundated by change requests that run the gamut from critical and high priority to minor and unimportant.

With a typical environment having thousands of different system configuration parameters, even a small change can impact performance. So it’s not surprising to see many companies suffer painful stabilization periods after a release, as well as production outages.

Even when using automated tools for deployment, the lack of detailed visibility into the release means IT ops can’t ensure accurate, error-free deployments.
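As an illustration of what automated release validation can look like, the sketch below checks deployed files against a release manifest of expected SHA-256 hashes and reports anything missing or altered. The manifest entries and the /opt/myapp path are hypothetical placeholders, not part of any particular product.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a deployed artifact so it can be checked against the manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_release(manifest: dict, deploy_root: Path) -> list:
    """Return discrepancies between the manifest and what was actually deployed."""
    problems = []
    for rel_path, expected_hash in manifest.items():
        target = deploy_root / rel_path
        if not target.exists():
            problems.append(f"missing: {rel_path}")
        elif sha256_of(target) != expected_hash:
            problems.append(f"content mismatch: {rel_path}")
    return problems

# The manifest would be generated from the release package in pre-production;
# these hash values are placeholders for illustration only.
manifest = {"app/config.yaml": "ab12...", "app/service.jar": "cd34..."}
issues = validate_release(manifest, Path("/opt/myapp"))
if issues:
    print("Deployment validation failed:", issues)
```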

Can You Quickly Identify the Incident’s Root Cause?

As this survey shows, the vast majority of IT professionals surveyed concurred that they lack the capabilities to quickly identify an incident’s root cause. IT organizations find themselves challenged when assessing a system failure and tracking down its root cause, such as a patch that wasn't deployed or a server that failed.

A minute misconfiguration or the omission of a single configuration parameter can quickly lead to an incident with high impact. With countless configuration parameters in play when an environment incident hits, finding the root cause consumes both precious time and manpower, leaving MTTR woefully high in most organizations.

The root cause of downtime and incidents often starts at the most granular level of configuration change, where today's configuration management and change management tools don't provide visibility. Different groups in the organization, such as Development, Support, and Operations, tend to point the finger of blame at each other and fail to diagnose or deal with the root cause of the problem.

After a major incident, analysis should focus on the root cause of the failure, not only to resolve the incident but to head off a recurrence. Even when IT teams manage to suppress a failure and operations return to "normal", the true root cause may remain unresolved, leaving the organization exposed to further chaos.
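One simple way to narrow "what changed?" down to root-cause candidates is to correlate the incident time with recent change records. The sketch below, using invented change data, ranks changes that landed in a window before the incident, most recent first; a real analytics tool would also weigh the scope and risk of each change.

```python
from datetime import datetime, timedelta

# Hypothetical change records, e.g. pulled from a change log or CMDB
changes = [
    {"param": "heap_size", "host": "web01", "at": datetime(2013, 6, 10, 2, 15)},
    {"param": "cache_ttl", "host": "web02", "at": datetime(2013, 6, 9, 18, 0)},
]

def candidate_causes(changes, incident_at, window_hours=24):
    """Rank changes made in the window before the incident, most recent first."""
    window_start = incident_at - timedelta(hours=window_hours)
    in_window = [c for c in changes if window_start <= c["at"] <= incident_at]
    return sorted(in_window, key=lambda c: c["at"], reverse=True)

incident_at = datetime(2013, 6, 10, 3, 0)
for c in candidate_causes(changes, incident_at):
    print(f'{c["at"]}: {c["param"]} changed on {c["host"]}')
```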

Can You Automatically Verify the Consistency of Your Environments?

In our survey, only 5% of respondents felt that they can currently verify the consistency of their environments automatically. Doing so requires drilling into fine, granular details to identify the make-up of even minor changes, and processing enormous amounts of configuration data to verify consistency between servers and environments.

As IT organizations regularly transition changes to production environments, IT teams need to check changes throughout a set of pre-production environments that can include system test, performance test, UAT, staging, etc. (changes are also mirrored in a Disaster Recovery environment). IT has sought to diversify its workloads, spreading deployments over multiple IT environments to mitigate risk, yet this also multiplies complexity.

The high volume of changes means that not all changes consistently make their way to all environments (pre-prod, prod, DR). Configuration parameters must be validated for consistency in real time.
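A basic form of such consistency verification can be sketched as follows: collect a configuration snapshot per environment and flag every parameter whose value differs between them. The environment names and values here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical per-environment snapshots of the same application's configuration
envs = {
    "prod":    {"max_connections": "200", "heap_size": "2g"},
    "staging": {"max_connections": "200", "heap_size": "1g"},
    "dr":      {"max_connections": "100", "heap_size": "2g"},
}

def inconsistencies(envs: dict) -> dict:
    """Report every parameter whose value differs across environments."""
    values = defaultdict(dict)
    for env, config in envs.items():
        for param, value in config.items():
            values[param][env] = value
    return {p: by_env for p, by_env in values.items()
            if len(set(by_env.values())) > 1}

for param, by_env in inconsistencies(envs).items():
    print(f"{param} is inconsistent: {by_env}")
# heap_size is inconsistent: {'prod': '2g', 'staging': '1g', 'dr': '2g'}
```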

IT Operations Analytics Helps

With performance at risk from any disruptions to stability, IT teams need to know exactly what has changed in an environment.

Managing IT environments with intelligent automated analytics will drive more sophisticated proactive processes, like comparing environment states, validating releases, and verifying the consistency of changes, helping to prevent or identify critical issues. So rather than continuing to feed bloated system tools, IT Operations should strive to simplify, implement configuration management based on IT Operations Analytics, and turn the situation around from what can’t be managed to what can be done about performance and availability.

Sasha Gilenson is the Founder and CEO of Evolven Software.

