Last Fall, I wrote two blogs on BSM Analytics: What Are They? And Why Should You Care? This Summer I’m taking that idea several giant steps forward and developing what at EMA we call a “Radar” – our equivalent, I suppose, to a “Magic Quadrant” – to look at advanced performance analytics in the context of vendor designs, deployment successes, and use-case-related strengths and weaknesses.
The goal is to provide IT adopters with a useful departure point for investing in “advanced performance analytics” from three core vantage points:
- Technical Performance Analytics: focused on optimizing the technical resiliency of critical application and business services, including VoIP and rich media, in Cloud (public/private) as well as non-Cloud environments
- Business Impact: including user experience, customer experience and customer management, business process impacts, business activity management, and data such as revenue per transaction, abandonment rates, competitive impact, and IT operational efficiency
- Change Impact and Capacity Optimization/Planning: which are admittedly two use cases combined into one, but which share requirements in terms of understanding interdependencies across the application/service infrastructure as volumes increase, changes are made, and configuration issues arise.
I might mention that while security is not singled out as a separate use case, there is a modest but growing overlap between advanced performance analytics and Security Information and Event Management (SIEM), and this Radar will include some insights there, as well.
Similarly, while DevOps requirements are not a major use case in themselves in this Radar, they are touched upon in several places in the research.
I would be lying if I claimed that any Radar was absolute science: the empirical equivalent of divine truth. I can’t speak for our competitors, but I suspect that their situation is similar. Rather, an EMA Radar provides a good way to accelerate the process of getting to know innovative solutions and how they position in a use-case context.
All radars address requirements such as time to deployment and administrative efficiencies, costs, breadth of services, scalability and other architectural strengths, key supported integrations, and vendor strengths (marketplace, financial, partners, etc.). This Radar will also address analytical breadth and depth across domains and types of services, as well as diagnostic, business impact, change impact and capacity-optimization-related outcomes.
One thing that’s key is dialog with actual deployments. Most vendors have unique ways of looking at their solutions, and talking to deployments helps to even out some of the inconsistencies. Without naming names, I will tell you that while some vendors may be inclined to overstate their case, a few will actually understate it. A careful Q&A with the participating vendors, as well as customer interviews, goes a long way toward leveling that playing field.
I don’t seem to be on the winning side of what’s chic in hot buzzwords. Several years ago I fought against the use of the term “cloud” for anything other than its original proper place: wide area networks. I have written extensively about my reservations about the APM market gorilla. And I recently showed that, at least according to recent EMA research data, “User Experience Management,” far from being a subset of APM, is transcendent to it, with strong business impact priorities.
Following this tradition, I don’t especially like the term “Big Data,” either. Does it mean moving a lot of data into one place (warehouse or now “big data storage”)? Or does it simply mean processing huge amounts of information from multiple sources for analysis? Or can it be a combination of both?
The solutions we’re looking at here are optimized to process huge amounts of information from multiple sources – and of course different vendors target different types of sources from a variety of perspectives. Some advanced performance analytics solutions can process tens of millions of metrics, events and other data in five minutes or less, which may come from third-party or other monitoring tools, or directly from device, flow, transaction, log file, or other types of data collection. In some cases, this may include CMDB/CMS-related configuration data, or even data accessed from warehouse-related data stores.
Conversely, some advanced performance analytic solutions have fully supported integrations with data warehousing capabilities for more historical, business-related, or other types of trending. So here a “Big Data” back end comes into play, sometimes in combination with business, financial or other data from other sources.
What I think still gets underserved in much of the “Big Data” hype is the need for efficiency and relevance. Moving data into one place can be tremendously helpful, but it’s certainly not the only answer for advanced analytics – especially when “real-time” or near real-time insights are required. These demand a different type of analytics investment than traditional big-data stores, one without all of the ETL and data normalization that data warehousing typically requires.
In the more advanced performance analytics solutions, relevance and context are “learned” dynamically as the data is acquired and as often unexpected patterns emerge. Moreover, advanced performance analytics can present an edge in populating back-end data stores with far more optimally relevant data sets than otherwise might be possible.
As for Cloud, suffice it to say that once again cloud is an emerging resource from a data processing and deployment perspective. Cloud is also an “accelerator,” as the need for dynamic insights into services delivered over changing infrastructures, shifting business and service provider interdependencies, and accelerated DevOps requirements all play to the need for more advanced and diverse performance analytics.
And finally, a word or two on the question of “rocket science.” While the phrase may apply here more than anywhere else in IT management, I’m not pretending to be a rocket scientist myself, nor to do a Radar for the rocket scientists among you. Sorry if you’re one of those.
We will look at heuristics and approaches, descriptively, and here’s a partial list: anomaly detection, case-based reasoning, chaos theory, comparators, correlators, data mining and OLAP, fuzzy logic, neural networks, object-based modeling, optimization algorithms, predictive algorithms, and application-transaction analysis.
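To make the first item on that list a little more concrete, here is a deliberately simple sketch of statistical anomaly detection: a metric stream is compared against a rolling baseline, and samples that deviate sharply are flagged. This is purely illustrative and not any vendor’s implementation; the class name, window size, and z-score threshold are all assumptions, and real products use far more sophisticated, often self-learning models.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Toy anomaly detector: flags samples far from a rolling baseline.

    Illustrative only -- window size and threshold are arbitrary choices,
    not drawn from any product discussed in the Radar.
    """

    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)  # recent samples form the baseline
        self.threshold = threshold          # z-score beyond which we flag

    def observe(self, value):
        is_anomaly = False
        if len(self.window) >= 10:  # require some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

# Steady latency readings around 100 ms, then a sudden spike to 400 ms
detector = RollingAnomalyDetector()
readings = [100 + (i % 5) for i in range(30)] + [400]
flags = [detector.observe(r) for r in readings]
# Only the final spike should be flagged as anomalous
```

Even a sketch this crude shows why context matters: the same 400 ms reading would be perfectly normal in a stream whose baseline sat near 400, which is the kind of dynamically learned relevance discussed above.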
I expect to see some new additions from past research like DNA Resequencing as well. But aside from a solid focus on predictive strengths, we won’t be going out of our way to recommend one approach over another. (If you’re looking for a mathematical treatise, you’ll have to look somewhere else.)
Instead, I’ll be digging in around more pragmatic values: efficiencies, costs, scope, value and relevance to different roles in IT and beyond. Hopefully that will be enough for most of you to form an effective initial take on one of the most progressive, and arguably most overlooked, areas in IT and SP service management innovation over at least the last five years.
Dennis Drogseth is VP at Enterprise Management Associates (EMA).