Operational complexity in virtualized, scale-out, and cloud environments, as well as in composite Web-based applications, will drive demand for automated analytic performance management and optimization tools that can quickly discover, filter, correlate, and remediate performance and availability problems, and ideally prevent slowdowns, outages, and other service-interrupting incidents.
The need to rapidly sort through tens of thousands, or even hundreds of thousands, of monitored variables, alerts, and events to discover problems and pinpoint root causes far exceeds the capabilities of manual methods.
To meet this growing need, IDC expects powerful performance management tools, based on sophisticated statistical analysis and modeling techniques, to emerge from niche status and become a recognized mainstream technology during the coming year. These analytics will be particularly important in driving increased demand for application performance management (APM) and end user experience monitoring tools that can provide a real-time end-to-end view of the health and business impact of the total environment.
Typically, IT infrastructure devices, applications, and IT-based business processes are monitored to track how they are performing. Monitored metrics are tested against thresholds (often adaptive ones) to determine whether they exceed defined limits or service objectives.
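As a minimal sketch of what an adaptive threshold can look like in practice (the window size and sigma multiplier below are illustrative assumptions, not values from the article), a metric can be tested against a baseline derived from its own recent history rather than a fixed limit:

```python
from collections import deque

def adaptive_threshold_monitor(samples, window=60, sigma=3.0):
    """Flag samples that exceed a rolling mean + sigma * stddev baseline.

    window and sigma are illustrative defaults, not values from the article.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(samples):
        if len(history) == window:  # only test once a full baseline exists
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            limit = mean + sigma * var ** 0.5
            if value > limit:
                alerts.append((i, value, limit))
        history.append(value)
    return alerts
```

Called on a per-metric time series, such as CPU utilization samples, this returns the indices where the metric broke out of its recent normal range, which is the basic building block a monitoring tool scales up across thousands of metrics.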
With the proliferation of scale-out architectures, virtual machines, and public and private clouds for application deployment, the number of monitored elements grows rapidly, often producing a large stream of data with many variables that must be quickly scanned and analyzed to discover problems and find root causes. Multivariate statistical analysis and modeling are long-established mathematical techniques for analyzing large volumes of data, discovering meaningful relationships between variables, and building formulas that can predict how related variables will behave in the future.
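To make that concrete, here is a small sketch, using NumPy and synthetic data since the article names no specific tooling, of fitting a multivariate linear model to historical monitoring samples and using it to predict a dependent metric such as response time. The metric names, shapes, and coefficients are illustrative assumptions:

```python
import numpy as np

# Hypothetical monitoring data: rows are time intervals, columns are
# observed drivers (e.g., request rate, CPU %, queue depth). These are
# synthetic, illustrative values, not data from the article.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 3))          # driver metrics
true_coef = np.array([0.8, 1.5, 0.3])
y = X @ true_coef + 20 + rng.normal(0, 5, 500)  # e.g., response time (ms)

# Fit a multivariate linear model by ordinary least squares.
A = np.column_stack([X, np.ones(len(X))])       # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict how the related variable will behave for a new observation.
new_sample = np.array([50.0, 70.0, 10.0, 1.0])
predicted = new_sample @ coef
print(f"predicted response time: {predicted:.1f} ms")
```

Commercial predictive-analytics tools use far richer models than this, but the principle is the same: learn the relationships among many monitored variables from history, then use the fitted model to forecast and flag abnormal behavior before it becomes an outage.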
What is emerging is the wider application of this methodology, often called predictive analytics, to discovering, predicting, analyzing, and even preventing IT performance and availability problems. Key use cases include application performance management, virtualization management, and cloud management.
Given the challenges of managing today's large, complex, dynamic environments, IDC expects a growing number of vendors to distribute this technology, and customers to adopt it, more widely during the coming year.
This article originally appeared in "Worldwide System Infrastructure Software 2012 Top 10 Predictions," IDC Document #231593, December 2011, on www.idc.com.
About Tim Grieser
Tim Grieser is Program Vice President, Enterprise System Management Software, at IDC. He has an extensive background in system management software technology, including the use of predictive models for performance management and capacity planning.
Twitter: @TimGrieser