Change Management Part 2: Metrics, Best Practices and Pitfalls
August 25, 2015

Dennis Drogseth
EMA


This is Part 2 of a three-part series on change management. In Part 1, I addressed the question, “What is change management?” and examined change management from the perspectives of both process and use case. In this blog, I’ll look at what it takes to make change management initiatives succeed, including metrics and requirements, best practice concerns, and some of the more common pitfalls. Much of the content is derived from past EMA consulting experience as reflected in our book, CMDB Systems: Making Change Work in the Age of Cloud and Agile.

Start with Change Management Part 1

Metrics and Requirements

Whether you’re targeting lifecycle endpoint management, data center consolidation, or the move to cloud, it’s important to have some way to measure your progress. These measurements might address operational efficiencies, impacts on the infrastructure and its supported applications, and even impacts on your service consumers and business outcomes. Some of the high-level metrics EMA analysts recommend include:

■ Reduction in number of change collisions

■ Reduction in number of failed changes and re-dos

■ Reduced cycle time to review, approve, and implement changes

■ Reduced time to validate that changes made are not service disruptive

■ Reduction in number of changes that do not deliver expected results

In one consulting engagement in particular, we also saw the following:

■ Degree of conformance to current software licensing agreements

■ Exceptions detected during configuration audits (e.g., when actual state is not as authorized)

■ Cost savings for acquisition and retirement of assets

■ Faster delivery of services

Of course, these are just a few examples, and they are primarily starting points. They are not yet the fully fleshed-out requirements needed to define the specific, and hence more measurable, objectives you will need going forward.
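To make such metrics concrete, it helps to pin down exactly how each one is computed from your change records. Below is a minimal sketch of that idea; the record fields and the metrics chosen are illustrative assumptions rather than a prescribed schema, since real change data will come from your own service desk or CMDB tooling.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class ChangeRecord:
    # Illustrative fields only; actual change records from your
    # service desk or CMDB tooling will differ.
    change_id: str
    submitted: datetime
    implemented: datetime
    failed: bool    # change was backed out or redone
    collided: bool  # overlapped with another change on the same CI

def change_metrics(changes: List[ChangeRecord]) -> Dict[str, float]:
    """Compute a few of the high-level metrics listed above."""
    if not changes:
        return {}
    total = len(changes)
    failed = sum(1 for c in changes if c.failed)
    collisions = sum(1 for c in changes if c.collided)
    avg_cycle_days = sum(
        (c.implemented - c.submitted).days for c in changes
    ) / total
    return {
        "failed_change_rate": failed / total,
        "collision_rate": collisions / total,
        "avg_cycle_time_days": avg_cycle_days,
    }
```

Tracked quarter over quarter, ratios like these are what turn a starting-point metric into a measurable objective, for example, “reduce the failed-change rate from 8% to 5% by Q3” (the figures here are, again, purely illustrative).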

Going from high-level metrics, such as those above, to more detailed requirements typically means understanding ownership, process, and impact specifics. One example cited in our book involved documented costs in terms of phone time spent at the service desk trying to find the right individual in operations to handle incident-related issues, or what they called “mean time to find someone” (MTTFS). In this case, a CMDB-related initiative saved them nearly $100,000 per year in personnel costs for time spent on the phone alone. The same MTTFS metric might apply to requests involving changes, such as those made in response to service requests or when onboarding new end users, where a mixture of IT and non-IT stakeholders is often required for approval and review. Knowing who owns a specific problem for a specific configuration item (CI) is worth its weight in gold.
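The MTTFS arithmetic itself is simple, which is part of why the metric works so well in stakeholder conversations. As a back-of-the-envelope sketch, with all inputs below being illustrative assumptions rather than figures from the engagement (which reported only the outcome):

```python
# All inputs are illustrative assumptions; the engagement cited
# reported only the outcome (~$100K/year saved), not these values.
incidents_per_year = 12_000
minutes_to_find_someone = 10   # MTTFS before the CMDB initiative
loaded_cost_per_hour = 50.0    # fully loaded personnel cost, $/hour

annual_mttfs_cost = (
    incidents_per_year
    * (minutes_to_find_someone / 60)
    * loaded_cost_per_hour
)
print(f"${annual_mttfs_cost:,.0f} per year")  # $100,000 per year
```

Under assumptions like these, eliminating most of that search time yields savings on the order the engagement reported, which is one reason clear CI ownership pays off so directly.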

Some Common Change Management Issues

Developing an appropriate set of metrics and requirements typically involves dialog with relevant stakeholders and executives. While it might be nice to simply legislate your change management initiative with a few emails, EMA consulting experience consistently underscores the need for two-way dialog in which stakeholders are both informed and listened to. These dialogs or interviews not only help pave the way for new and better ways of managing change; they usually also shed light on other issues that, once documented, can help your IT organization move forward in any number of (sometimes surprising) ways.

Scope Creep: While you want enthusiasm for going forward, and in fact you’ll probably want to target your more enthusiastic stakeholders, many change management initiatives can get bogged down by trying to do too much at once. Two of my favorite quotes from our consulting reports along these lines are:

“The biggest issue now is scope creep. Trying to make everyone happy at this point is like trying to rebuild the Titanic from the bottom up.”

Another change management team was more prescriptive: “We’re managing scope creep by being incremental in how we’re driving our deployment—going forward with small steps on a regular schedule.”

Toolset Ownership: Managing changes well requires attention to technologies, both those already in use and new technology investments, as I’ll discuss in my next blog. But making the right technology choices can often become a political as well as a technology challenge. EMA consulting has seen literally hundreds of tools addressing monitoring, inventory, configuration, and change management in larger enterprises, each affiliated with its own determined set of owners. This can create problems when you’re trying to promote more cross-domain capabilities for discovery, automation, and configuration updates. So once again, dialog, leadership, and attention to consistent processes are key. Two quotes from EMA consulting serve to underscore this point:

“We are territorial and don’t want to replace our tools.”

“We have issues with toolset ownership. There is no confidence that others will do the work. So, you do it yourself.”

Issues Surrounding Standards and Best Practices: Whether you’re seeking to leverage processes defined in the IT Infrastructure Library (ITIL) or other formalized best practices, or you’re simply documenting your own, trying to establish good change management processes across a heterogeneous and often siloed set of stakeholders may well be your biggest single challenge. Even when good technology is in place, getting the necessary mix of players to use it well and consistently is rarely easy, especially without some level of executive sponsorship. Here are a few additional quotes from EMA consulting reports to provide you with some process-related examples:

“There are over 5000 change requests per year, and all of them are marked ‘high priority.’”

“Change control needs to hold people accountable if it is to be effective. No one questions why.”

“I believe in standards, as long as they’re mine.”

And finally, something positive: “We had an opportunity to reinvent change management in our organization and go from a project management approach that was very ambivalent when it came to execution to a much more enforceable approach that supported clear ownership and led to increased levels of automation.”

Read Change Management Part 3

Dennis Drogseth is VP at Enterprise Management Associates (EMA).

