AIOps is still relatively new compared to existing technologies such as enterprise data warehouses, and early on many AIOps projects suffered hiccups, the aftereffects of which are still felt today. That's why, for some IT Ops teams and leaders, the prospect of transforming their IT operations using AIOps is a cause for concern.
But at the same time, AIOps has matured to the point where a critical mass of enterprises today — including some of the largest companies in the world — have successfully deployed it, and have learned valuable lessons along the way. Mainly, it's a matter of setting clear expectations and following several guidelines that help you avoid common pitfalls when setting out on an AIOps journey.
Set Your Expectations Straight
Yes, we all know the saying "Aim for the moon, and even if you miss you'll land among the stars."
But unfortunately, this doesn't apply to AIOps adoption. As Gartner states in its recent Market Guide for AIOps Platforms, enterprises should "prioritize practical outcomes over aspirational goals by adopting an incremental approach…" when deploying AIOps platforms.
Biting off more than you can chew can delay your AIOps project — often by months or even years. Start small: begin where it hurts the most in your IT operations ecosystem, or with whatever causes the most delays in your incident management lifecycle. Do so by integrating one tool at a time, and testing one AIOps capability at a time. Once you are satisfied, you can incrementally add more tools to the AIOps platform, and then test more capabilities. In addition to making sure that your AIOps platform has proven itself before you begin to fully rely on it, this step-by-step approach also gives your team the chance to accumulate the skills and confidence they need, over time.
Additionally, remember that there is a tendency to think of AI as behaving in a human-like manner. It is often anthropomorphized and assumed to have unrealistic "superhuman" capabilities. The reality is that AI in IT operations is algorithmic: it relies on alert ingestion, normalization and enrichment (or tagging) before correlation patterns can be generated, tested and refined. That pipeline, sketched below, leads directly to the next items on our list.
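The sketch is a minimal illustration only: the alert schema, field names and inventory lookup are hypothetical and not tied to any specific AIOps product. The point is simply that correlation only ever sees alerts after they have been normalized into a common shape and enriched with context.

from dataclasses import dataclass, field

@dataclass
class Alert:
    """A normalized alert in a common schema (hypothetical fields)."""
    source: str                       # originating monitoring tool
    host: str
    severity: str
    message: str
    tags: dict = field(default_factory=dict)

def normalize(raw: dict) -> Alert:
    """Map tool-specific fields from a raw event onto the common schema."""
    return Alert(
        source=raw.get("tool", "unknown"),
        host=raw.get("hostname") or raw.get("node", "unknown"),
        severity=str(raw.get("severity", "info")).lower(),
        message=raw.get("msg") or raw.get("description", ""),
    )

def enrich(alert: Alert, inventory: dict) -> Alert:
    """Tag the alert with context (service, environment, owner) from an inventory lookup."""
    alert.tags.update(inventory.get(alert.host, {}))
    return alert

# Correlation should only run on normalized, enriched alerts:
# alerts = [enrich(normalize(event), inventory) for event in raw_events]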
Make Sure You Can Integrate with All Your Existing Tools
Every enterprise uses and depends on several different tools that span different domains such as observability and monitoring, change, topology, collaboration and remediation. In almost all cases, these tools reflect years of investment, development and customization. Often these tools are deeply embedded into critical IT operations workflows and processes — and reflect the organization's tribal knowledge.
Your chosen AIOps platform needs to be able to integrate with these tools and ingest their data. Otherwise, vital information and key capabilities needed for the AI to work properly will be missing. And that's besides the fact that a long and painful rip-and-replace project can easily derail your AIOps initiative through the sheer amount of effort involved and the long time to value.
You Need to Be Able to Adequately Prepare and Cleanse Your Data
"Garbage in, garbage out" is a well-known maxim in IT, and it applies to IT operations as well. As we just mentioned, it's critical to ingest all the alerts from all your tools. But it's not enough. Event normalization, enrichment and tagging (aka data preparation and cleansing) also have an outsized impact on the success of AIOps solutions.
Why? Because AIOps tools have to correlate the hundreds of thousands of ingested alerts into a small number of high-quality, actionable incidents. The ability of AI/ML to detect correlation patterns and "compress" alerts relies heavily on the quality of the data fed to it. Context-less data leads to limited, low-quality incidents as a result of weak correlation.
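As a rough illustration of why that context matters, consider the hypothetical grouping logic below: alerts that share an enriched service tag and fall within a short time window can be compressed into a single incident, while context-less alerts cannot be grouped at all. Real correlation engines are far more sophisticated than this sketch, but the dependency on good tags is the same.

from collections import defaultdict

def correlate(alerts, window_seconds=300):
    """Group alerts into incidents by shared service tag and time window (illustrative only)."""
    incidents = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        service = alert.get("tags", {}).get("service")
        if service is None:
            # No context: the alert stays uncorrelated, producing noise instead of compression.
            incidents[("uncorrelated", alert["id"])].append(alert)
            continue
        bucket = alert["timestamp"] // window_seconds
        incidents[(service, bucket)].append(alert)
    return incidents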
In a similar fashion, successful root cause analysis relies on the ability to understand and leverage the different dependencies between infrastructure and application components in modern environments. Some of this information is buried in incoming alert streams, and some of this information is contained in external data sources such as asset and inventory management systems, orchestration tools, APM service or flow maps, CMDBs and more.
Additionally — you need to be able to match incidents to problem changes (aka root cause changes) that are causing incidents and outages, and this information resides in a variety of tools such as CI/CD, Change Management, and more.
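A hedged sketch of that matching step might look like the snippet below, where change records pulled from CI/CD or change management tools (the field names are hypothetical) are filtered down to those that touched the affected service shortly before the incident began.

def candidate_changes(incident, changes, lookback_seconds=3600):
    """Return change records that touched the incident's service shortly before it started (illustrative)."""
    window_start = incident["started_at"] - lookback_seconds
    return [
        change for change in changes
        if change["service"] == incident["service"]
        and window_start <= change["completed_at"] <= incident["started_at"]
    ]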
Your AIOps platform must deliver built-in normalization, enrichment and tagging that can add all this much-needed context at scale, and be able to process millions of IT alerts every day.
Your AI/ML Needs to Be Explainable
Good data going into your AIOps platform will get you good results, and successfully leveraging your existing tribal knowledge to train and configure the AI will definitely benefit you. But, you also have to be able to see, understand and edit the correlation logic as the AI/ML trains itself. Unfortunately, some solutions still obscure it and do not provide adequate control and testability. This is one of the most common causes of AIOps failure.
Google spam filters are a good analogy. Google provides a baseline configuration that's very sophisticated at detecting spam. But it does give you the choice of classifying something as spam on your own, or removing the spam tag from a wrongly detected email. It provides an explanation of its decision, and then learns from your intervention moving forward.
The same is true for AI/ML in IT Ops. Your teams have to trust the results your AIOps tool is producing, and that trust comes from explainability. They need to understand why the AI correlated certain alerts together, and they must then have the ability to either accept or change the correlation pattern so it produces the desired result. Remember, you can have the best AI in the world, but if your teams don't understand why it's grouping certain alerts together (and why it's not grouping others), they are always going to be suspicious of the results even when they are correct, and eventually they will avoid using the ML.
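One way to picture that kind of explainability, purely as an illustration rather than any vendor's actual format, is a correlation pattern expressed as a readable rule with a rationale attached, which an operator can accept, edit or reject instead of trusting an opaque model:

# A learned correlation pattern, represented so humans can review and override it (hypothetical format).
correlation_pattern = {
    "name": "checkout-db-cascade",
    "match": {"service": "checkout", "tier": ["db", "app"]},   # which alerts it groups
    "time_window": "10m",                                      # how close in time they must be
    "rationale": (
        "Database and application alerts on the checkout service within 10 minutes "
        "have historically shared a single root cause."
    ),
    "status": "proposed",  # operators can set this to "accepted", "edited" or "rejected"
}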
Your AIOps Needs to Be Democratized
Today's enterprises are heterogeneous. Some have large, centralized IT Ops and NOC teams, whereas others have dozens or even hundreds of distributed DevOps and SRE teams. Some have "grown up" in the cloud, while others are mid-way or even just getting started with their modernization initiatives. In each of these enterprises there are many important stakeholders that can benefit from AIOps: from NOC Managers and L1 users to VPs of IT Ops to service owners to the heads of BUs and CIOs.
AIOps platforms must be accessible and able to present their data, views and dashboards to every persona in your organization, no matter which type of enterprise you belong to. Additionally, the platform cannot be reliant on data scientists, configuration cannot depend on third-party consultants and product experts, and the admin overhead needs to be minimal.
Only then can you realize the full potential of your AIOps investment.