This is the first of a three-part series on change management. In this blog, I’ll try to answer the question, “What is change management?” from both a process and a benefits (or use-case) perspective.
In the second installment, I’ll address best practices for both planning for and measuring the success of change management initiatives. I’ll also examine some of the issues that EMA has seen arise when IT organizations try to establish a more cohesive cross-domain approach to managing change. In part three, I’ll focus on the impacts of cloud, agile, and mobile, including the growing need for investments in automation and analytics to make change management more effective.
Change Management Processes
Like many words and concepts in the English language, especially when applied to technology, "change management" carries with it a wide variety of associations. In terms of the processes established in the IT Infrastructure Library (ITIL), change management is best understood as a strategic approach to planning for change.
ITIL defines change management succinctly as "the process responsible for controlling the lifecycle of all changes, enabling beneficial changes to be made with minimum disruption to IT Services." As such, change management is a logical system of governance that addresses a set of relevant questions, which include the following:
■ Who requested the change?
■ What is the reason for the change?
■ What is the desired result of the change?
■ What are the risks involved with making the change?
■ What resources are required to deliver the change?
■ Who is responsible for the build, test, and implementation of the change?
■ What is the relationship between this change and other changes?
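To make the governance questions above concrete, they can be modeled as a simple change-request record. This is only an illustrative sketch, not an ITIL-prescribed schema; the field names and the readiness check are my own assumptions about what a change advisory board would want answered before review.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    """One record capturing the governance questions a change must answer."""
    requester: str                    # Who requested the change?
    reason: str                       # What is the reason for the change?
    desired_result: str               # What is the desired result?
    risks: List[str]                  # What risks are involved in making it?
    resources: List[str]              # What resources are required to deliver it?
    owner: str                        # Who owns build, test, and implementation?
    related_changes: List[str] = field(default_factory=list)  # Links to other changes

    def is_ready_for_review(self) -> bool:
        """A change board can only assess a request with every question answered."""
        return all([self.requester, self.reason, self.desired_result,
                    self.owner]) and bool(self.risks) and bool(self.resources)
```

In practice this record would live in a service management tool rather than code, but the point stands: each unanswered question is a gap in governance.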
But this system of governance doesn’t stand alone. Actually implementing and managing changes requires attention to other ITIL processes. These include (but are not limited to):
■ Service asset and configuration management (SACM) – “The process responsible for maintaining information about configuration items required to deliver an IT Service, including their relationships.” SACM addresses how IT hardware and software assets (including applications) have been configured and, even more critically, identifies the relationships and interdependencies affecting infrastructure and application assets.
■ Release and deployment management – “The process responsible for planning, scheduling and controlling the build, test and deployment of releases, and for delivering new functionality required by the business while protecting the integrity of existing services.” As you can imagine, release management and automation should go hand in hand.
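Since SACM's central contribution is capturing the relationships and interdependencies among configuration items, a minimal way to picture it is as a dependency graph you can walk before approving a change. The CI names and relationship map below are hypothetical, purely for illustration:

```python
from collections import deque

# Hypothetical CI relationship map: each CI lists the CIs that depend on it.
dependents = {
    "db-server-01": ["app-server-01", "app-server-02"],
    "app-server-01": ["web-portal"],
    "app-server-02": ["web-portal", "reporting-service"],
}

def impacted_cis(ci: str) -> set:
    """Breadth-first walk of the relationship graph to find every CI
    potentially affected by a change to `ci`."""
    seen, queue = set(), deque([ci])
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# A change to the database server ripples out to both app servers,
# the web portal, and the reporting service.
print(impacted_cis("db-server-01"))
```

Real CMDBs hold far richer relationship types than this sketch, but the core change-impact question is the same graph traversal.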
There are other ITIL processes relevant to managing change effectively, including capacity management, problem management, availability management, and continual service improvement, just to name a few. From just this brief snapshot, you might get the (correct) impression that change management in the “big picture” is at the very heart of effective IT operations. If done correctly, change management touches all of IT—including the service desk, operational teams, development, the executive suite, and even non-IT service consumers. This central position makes change management both an opportunity and a challenge.
Change Management Use Cases
Probably the best way to understand the "change management opportunity" is to look at some of the use cases associated with it. Effective change management can empower a wide range of other initiatives, from lifecycle asset management to DevOps, service impact management, and improved service performance. EMA consultants have estimated that more than 60% of IT service disruptions come from the impacts of changes made across the application infrastructure—and this estimate is conservative compared to some of the other industry estimates I've seen. Having good change management processes and technologies in place is also a foundation for better automation, as well as for better optimization of both public and private cloud resources. And the list goes on.
Even the list below, derived in large part from CMDB Systems: Making Change Work in the Age of Cloud and Agile, is a partial one, but it should provide a useful departure point for your planning—as you seek to prioritize the use case(s) most relevant to you.
■ Governance and compliance: Managing change to conform with critical industry, security, and asset-related requirements for compliance, while minimizing change-related disruptions. This can provide significant financial benefits, including OpEx savings, superior service availability, improved security, and savings from avoiding the penalty costs incurred when changes are made poorly.
■ Data center consolidation—mergers and acquisitions: Data center consolidation initiatives are definitely on the rise, and mergers and acquisitions are a frequent trigger for them. Effective change management can shorten consolidation time, minimize costs, and improve the quality of the outcome.
■ Disaster recovery: Disaster recovery initiatives may be an extension of data center consolidation, or they may be independent. Automating change for disaster recovery is one of the more common drivers for a more systemic approach to change management.
■ The proverbial "move to cloud": The stunning rise of virtualization and the persistent move to assimilate both internal and public cloud options make change impact management and effective change automation essential.
■ Facilities management and Green IT: This use case requires dynamic insights into both configuration and "performance"-related attributes for configuration items (CIs), both internal to IT (servers, switches, desktops, etc.) and external to traditional IT boundaries (facilities, power, etc.).
■ Optimizing the end-user experience across heterogeneous endpoints: Meeting the challenges of unified endpoint management, including mobile endpoints, requires a flexible adoption of change management best practices and automation. But the benefits of doing this can be significant—impacting asset management, security, and financial optimization, while increasing end-user satisfaction with IT services.