Operational complexity in virtualized, scale-out, and cloud environments, as well as in composite Web-based applications, will drive demand for automated, analytics-based performance management and optimization tools that can quickly discover, filter, correlate, and remediate performance and availability problems, and ideally prevent slowdowns, outages, and other service-interrupting incidents.
The need to rapidly sort through tens of thousands, or even hundreds of thousands, of monitored variables, alerts, and events to discover problems and pinpoint root causes far exceeds the capabilities of manual methods.
To meet this growing need, IDC expects powerful performance management tools, based on sophisticated statistical analysis and modeling techniques, to emerge from niche status and become a recognized mainstream technology during the coming year. These analytics will be particularly important in driving increased demand for application performance management (APM) and end user experience monitoring tools that can provide a real-time end-to-end view of the health and business impact of the total environment.
Typically, IT infrastructure devices, applications, and IT-based business processes are monitored to track how they are performing. The resulting metrics are tested against thresholds (often adaptive ones) to determine whether they exceed defined limits or service objectives.
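As a minimal sketch of what an adaptive threshold can look like in practice (not drawn from any specific product described here), the limit below is recomputed from recent history as a mean plus a multiple of the standard deviation rather than fixed in advance. The metric, window size, and multiplier are illustrative assumptions.

```python
from statistics import mean, stdev

def adaptive_threshold_breaches(samples, window=30, k=3.0):
    """Flag samples that exceed a threshold derived from the preceding window."""
    breaches = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]          # recent behavior of the metric
        limit = mean(history) + k * stdev(history)  # adaptive limit, not a fixed number
        if samples[i] > limit:
            breaches.append((i, samples[i], limit))
    return breaches

# Example: response-time samples in milliseconds with a late spike.
response_ms = [120 + (i % 5) for i in range(60)] + [480]
for index, value, limit in adaptive_threshold_breaches(response_ms):
    print(f"sample {index}: {value} ms exceeds adaptive limit {limit:.1f} ms")
```

The same idea extends to service objectives: the comparison stays simple, while the limit itself tracks the metric's own recent baseline.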
With the proliferation of scale-out architectures, virtual machines, and public and private clouds for application deployment, the number of monitored elements grows rapidly, often producing a large stream of data with many variables that must be quickly scanned and analyzed to discover problems and find root causes. Multivariate statistical analysis and modeling are long-established mathematical techniques for analyzing large volumes of data, discovering meaningful relationships between variables, and building formulas that can be used to predict how related variables will behave in the future.
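To make the idea of a predictive formula concrete, the small illustration below (the metrics and numbers are made up for the example, not taken from the article) fits response time as a linear function of CPU utilization and request rate with ordinary least squares, then uses the fitted coefficients to predict response time at a new load level.

```python
import numpy as np

# Observed samples: [cpu_util_percent, requests_per_sec]
X = np.array([
    [35, 200],
    [50, 310],
    [62, 400],
    [70, 520],
    [85, 640],
], dtype=float)
y = np.array([110, 150, 185, 230, 300], dtype=float)  # response time in ms

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict response time for a hypothetical future load level.
new_load = np.array([1.0, 75, 560])  # intercept, cpu %, req/s
predicted_ms = new_load @ coeffs
print(f"predicted response time: {predicted_ms:.0f} ms")
```

Production tools apply far richer models than this, but the principle is the same: relationships learned from historical monitoring data are used to anticipate how related variables will behave.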
What is emerging is the wider application of this methodology, often called predictive analytics, to discovering, predicting, analyzing, and even preventing IT performance and availability problems. Key use cases include application performance management, virtualization management, and cloud management.
Given the challenges of managing today's large, complex, dynamic environments, IDC expects wider distribution and use of this technology from a growing number of vendors during the coming year.
This article originally appeared in "Worldwide System Infrastructure Software 2012 Top 10 Predictions," IDC Document #231593, December 2011, on www.idc.com.
About Tim Grieser
Tim Grieser is Program Vice President, Enterprise System Management Software, at IDC. He has an extensive background in system management software technology, including the use of predictive models for performance management and capacity planning.
Twitter: @TimGrieser