This blog is the second in a series to help you get a better sense of what kinds of analytic technologies are available in the market today and how they might best apply to your environment.
Last month I briefly described core analytic technologies, from "correlation" -- most often associated with events -- to "anomaly detection" and "data mining and OLAP," among others.
But how and where do they cluster and how might they best be applied to your needs today?
Well, as a starting point, EMA documented the following counts five years ago across 44 different vendors.
Comparators or signature-based analysis -- 20 vendors supported this
Correlators or correlating across events -- 32 vendors supported this
Anomaly detection -- 32 vendors supported this
Object-based modeling -- 21 vendors supported this
Predictive algorithms -- 25 vendors supported this
Optimization algorithms -- 20 vendors supported this
Data mining and OLAP -- 20 vendors supported this
Fuzzy logic -- 6 vendors supported this
Neural networks -- 5 vendors supported this
Case-based reasoning -- 8 vendors supported this
Chaos theory -- 4 vendors supported this
Application transaction analysis -- 25 vendors supported this
Interestingly, only 14 vendors had offerings that fell outside these categories, and most of those were various types of self-learning algorithms.
In the last five years, we've seen a real rise in analytics targeted at Application Transaction Analysis, along with serious growth in Predictive Algorithms (often linked to Anomaly Detection) and in Data Mining and Online Analytical Processing (OLAP).
On the other hand, some of the fancier pedigrees like Fuzzy Logic, Neural Networks, and Chaos Theory never seem to crop up any more in external marketing. No doubt, some of them are still there lurking in the background -- and I know a few cases where that's true at least for Chaos Theory -- but IT marketing has apparently decided that these terms create more fear in prospective buyers than credibility, I suspect for good reason.
And by the way, if you're confused about "Data Mining" -- which so many vendors casually check off regarding their reporting capabilities -- versus true OLAP, one good answer is "finding the dog." Let me explain:
Aside from the famous OLAP cube -- the architectural foundation for OLAP -- a good use-case explanation, taught to me many years ago by a true "data mining" vendor, was the following: Their client ran a home improvement magazine and wanted to find the best demographics to market to. Not surprisingly, the data mining query turned up young, upwardly mobile (yes, it was during the Yuppie era) married couples who owned a home. But it also turned up another critical, far less obvious variable. Those couples with dogs were far more likely to stay at home and invest in home improvement than those without. And so if you really need to find "the dog" that's disrupting your service delivery, you will need a true OLAP capability.
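If you want a concrete feel for that kind of slicing, here is a minimal sketch in Python using pandas on made-up survey data -- the column names and numbers are purely illustrative, not from the original study. The point is simply that once your data sits in a multidimensional structure, you can pivot on a dimension you did not plan for and let "the dog" surface on its own.

```python
# Minimal sketch of OLAP-style slicing with pandas on hypothetical survey data.
import pandas as pd

survey = pd.DataFrame({
    "owns_home":  [True, True, True, False, True, True, False, True],
    "married":    [True, True, False, True, True, True, False, True],
    "has_dog":    [True, False, True, False, True, True, False, False],
    "improvement_spend": [4200, 900, 1500, 300, 3800, 5100, 150, 1100],
})

# Slice the "cube": average spend by home ownership and dog ownership.
cube = pd.pivot_table(
    survey,
    values="improvement_spend",
    index="owns_home",
    columns="has_dog",
    aggfunc="mean",
)
print(cube)  # homeowners with dogs stand out -- the non-obvious variable
```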
On the other hand, "data mining" as it's associated with reporting brings a lot of value as well, by allowing you to create query patterns at will and visualize what good analytic technologies can bring to bear on whatever issues you seek to resolve.
And that raises the next question. How do analytics map to actual disciplines or processes? Well, five years ago, we saw the following counts across our 44 vendors.
Service accounting -- 11 vendors had analytics that supported this
Asset management -- 21 vendors
Financial planning -- 9 vendors
Business service management -- 26 vendors
Quality of Experience -- 26 vendors
Service Level Management -- 31 vendors
Optimization and capacity planning -- 28 vendors
Security -- 24 vendors
Performance -- 34 vendors
Availability -- 31 vendors
Multi-vendor configuration -- 25 vendors
Element management -- 24 vendors
In the last five years EMA has witnessed a huge uptake in Quality of Experience, now more generally called “User Experience Management,” and so that number today would be meaningfully higher.
Not unrelated to this, Performance Management, in particular as it relates to application performance from a cross-domain perspective, has also been hugely on the rise. And this category, APM, may include many analytic as well as discovery-specific technologies, such as application discovery and dependency mapping. Virtualization and cloud have upped the ante once again in driving these areas of investment forward, along with Multi-vendor Configuration Management.
Two other categories above -- Optimization and Capacity Planning, and Service Accounting -- seem inflated to me as far as November 2006 is concerned, but they've been getting a lot more attention in the last few years, once again in part due to cloud and virtualization. We are finally entering an era where insights into usage, capacity and performance are understood as a meaningful continuum, and investing in analytic technologies designed to support that continuum can bring you many advantages in terms of cost and value for your management, infrastructure and service investments.
Our data also showed a strong connection between analytic investments and automation capabilities. Multi-vendor configuration, listed above with 25 vendors, is hugely on the rise, thanks largely to virtualization. IT Process Automation, or runbook, tracked in a separate breakout, mapped to 24 instances of analytics across our 44 vendors and is now also on the rise. And these are just two of many examples.
And finally, back in 2006 we showed that only 4 of our 44 vendors had no support for topologies, while 10 could leverage four or more topological inputs. Similarly, 32 of our 44 vendors could leverage some CMDB-related information. Resident analytics, contextually informed by models of topological as well as other logical and physical interdependencies, can run at optimal performance -- like racecars given the added advantages and visibility of a superhighway. This is not the same as topologically dependent analytics (e.g., event correlation schemes that use topology to eliminate downstream, and hence superfluous, alarms).
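To make that distinction concrete, here is a rough sketch of the topology-dependent flavor -- in Python, with an entirely hypothetical dependency map and alarm list -- walking the topology to suppress alarms downstream of an already-failed root cause.

```python
# Rough sketch: suppress downstream alarms once a root-cause node has failed.
# The topology is a simple parent -> children map; all names are illustrative.
from collections import deque

topology = {
    "core-router": ["dist-switch-1", "dist-switch-2"],
    "dist-switch-1": ["app-server-1", "app-server-2"],
    "dist-switch-2": ["db-server-1"],
}

def downstream_of(node, topo):
    """Breadth-first walk collecting every CI that depends on `node`."""
    seen, queue = set(), deque(topo.get(node, []))
    while queue:
        child = queue.popleft()
        if child not in seen:
            seen.add(child)
            queue.extend(topo.get(child, []))
    return seen

alarms = ["core-router", "app-server-1", "db-server-1", "app-server-2"]
root_cause = "core-router"
suppressed = downstream_of(root_cause, topology)

actionable = [a for a in alarms if a not in suppressed]
print(actionable)  # ['core-router'] -- the downstream noise is filtered out
```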
Modeling can be used to prioritize “trusted sources” and recognize when many redundant inputs are coming from multiple tools impacting the same CI. It can also link owners, customers, maintenance contracts and change management histories with problems -- and do so dynamically in a growing number of cases -- in near real time.
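Here is an equally simplified sketch of that modeling idea -- trusted-source arbitration plus CMDB enrichment -- again with hypothetical tool names, trust rankings and CI records, just to show the shape of the logic rather than any particular product's behavior.

```python
# Sketch of "trusted source" arbitration: when several tools report against the
# same CI, keep the event from the most trusted source, then attach CI context
# (owner, open change) from a hypothetical CMDB record. All data is illustrative.
TRUST = {"apm-tool": 1, "net-monitor": 2, "log-scraper": 3}  # lower = more trusted

events = [
    {"ci": "db-server-1",  "source": "log-scraper", "msg": "high latency"},
    {"ci": "db-server-1",  "source": "apm-tool",    "msg": "query latency > 2s"},
    {"ci": "app-server-1", "source": "net-monitor", "msg": "packet loss"},
]

cmdb = {
    "db-server-1":  {"owner": "dba-team", "open_change": "CHG-1042"},
    "app-server-1": {"owner": "app-ops",  "open_change": None},
}

# Keep only the most trusted event per CI, collapsing redundant inputs.
best = {}
for event in events:
    current = best.get(event["ci"])
    if current is None or TRUST[event["source"]] < TRUST[current["source"]]:
        best[event["ci"]] = event

# Enrich each surviving event with ownership and change-management context.
for ci, event in best.items():
    print({**event, **cmdb.get(ci, {})})
```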
Up until recently, progress here has been minimal, as industry confusion over federated modeling and its relationship to a truly dynamic CMS, combined with still-evolving CMDB technologies, has slowed uptake, especially in the US. But 2011 has been a good year to stand up and pay attention once again to this confluence, with new announcements to watch especially in Q4 of this year -- and the prospects for the future, given a little patience, look bright.
As I write this, for instance, IBM has just made a strong analytics-centric announcement with the introduction of Tivoli Analytics for Service Performance, with more shoes to drop very soon from one of IBM's core competitors.
At the same time, vendors with a long-standing investment in advanced, self-learning analytics, such as Netuitive, continue to evolve and broaden their reach -- in Netuitive's case, beyond Operations toward more executive and application-centric users.
However, I would like to end on a practical note. Great design and great vision move markets forward and can empower IT organizations to survive in an increasingly turbulent and demanding world. But technologists and engineers too often fall down rabbit holes where the value of more advanced designs gets crushed by myopic attention to the wrong details, or by simple obliviousness to anything human and practical.
So do a little tire kicking around deployment, administration and integration before investing in any analytic technology. Good analytics should seriously minimize these issues. If they don’t -- even if they have bright and shiny names -- you’re probably better off looking elsewhere.