I recently attended one of IBM's analyst events, where fairly predictable themes like "cloud," "mobility," "analytics," and "dev/ops" were mixed with less industry-wide ones, such as "Smart Cities" and IBM's distinctive initiative in the verticals area. It struck me once again that IT organizations are going through not one but multiple revolutions, and that these revolutions don't often align in convenient or even logical ways.
Two in particular have taken center stage here: cloud, and what I am calling, for the sake of this column, User Experience Management (UEM) analytics.
Cloud is clearly the more visible of the two in late 2011. It promises a fluidity that was heretofore impossible in IT, although some of that fluidity comes at a price: commitments to standardized infrastructures, which, in any complete sense, tend not to exist anywhere. Maybe that is partly why so much of cloud beyond virtualized data centers is focused on the dev/ops equation; in other words, on cloud-related capabilities for provisioning new application services.
And in fact a number of vendors, including IBM, BMC, CA and others, have made significant advances through cloud in provisioning application-to-infrastructure marriages. This can be done by blueprinting Infrastructure-as-a-Service (IaaS) and layering Platform-as-a-Service (PaaS) on top, creating something like an "operating system as a service" platform that, in the ideal world, also benefits from insights across application dependency models and CMDB/CMS systems.
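To make the blueprinting idea concrete, here is a minimal Python sketch of how an IaaS layer and a PaaS layer might be composed into a single provisionable application service. Every class, field and value here is my own illustration, not any vendor's actual API.

```python
# Hypothetical sketch: composing IaaS and PaaS blueprints into one
# provisionable application service. Names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class IaaSBlueprint:
    """Standardized infrastructure layer: compute, storage, network."""
    vm_count: int
    vm_size: str          # e.g. "2vCPU/8GB"
    storage_gb: int
    network_tier: str

@dataclass
class PaaSBlueprint:
    """Platform layer pulled on top of IaaS: runtime, middleware, data services."""
    runtime: str          # e.g. "java", "nodejs"
    middleware: list = field(default_factory=list)
    db_service: str = "none"

@dataclass
class ApplicationService:
    """The 'operating system as a service' marriage of the two layers,
    ideally cross-checked against dependency models and a CMDB/CMS."""
    name: str
    infrastructure: IaaSBlueprint
    platform: PaaSBlueprint

    def provision(self):
        # In a real system this would call provider APIs and validate the
        # request against CMDB/CMS dependency data first.
        print(f"Provisioning {self.name}: "
              f"{self.infrastructure.vm_count} x {self.infrastructure.vm_size}, "
              f"runtime={self.platform.runtime}")

crm = ApplicationService(
    name="crm-portal",
    infrastructure=IaaSBlueprint(vm_count=4, vm_size="2vCPU/8GB",
                                 storage_gb=500, network_tier="standard"),
    platform=PaaSBlueprint(runtime="java", middleware=["mq", "cache"],
                           db_service="managed-sql"),
)
crm.provision()
```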
Yet even if the "race to provision" seems far from fully baked, the challenges of monitoring cloud environments, internal and mixed internal/external alike, seem even less well defined. Monitoring cloud services, including SLA planning and commitments, remains an active topic for discussion, but it hasn't received the serious attention it deserves from much of the industry, as if monitoring were doomed to second-class citizenship in a world where the data is too imperfect, too dynamic, and too divided among multiple constituencies to be taken seriously.
One exception, I'm happy to report, is User Experience Management. I've written fairly extensively in the past about the one set of metrics that ALWAYS applies to any service: metrics that capture the actual user experience. Responsiveness. Consistency. Navigability. Along with other factors such as security and appropriateness. And I'm glad to see that the industry overall is moving incrementally to support that point of view.
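As a rough illustration of what those "metrics that always apply" might look like in practice, consider the sketch below. The record shape, the thresholds, and the use of completion rate as a proxy for navigability are my own assumptions, not an industry standard.

```python
# Hedged sketch: rolling raw user-experience samples up into the metrics
# named above (responsiveness, consistency, navigability).

from dataclasses import dataclass

@dataclass
class UserExperienceSample:
    user_id: str
    transaction: str        # e.g. "checkout", "search"
    response_time_ms: float # responsiveness
    completed: bool         # did the user finish the task?

def summarize(samples):
    """Aggregate raw samples into service-level experience metrics."""
    times = [s.response_time_ms for s in samples]
    mean = sum(times) / len(times)
    # Consistency: how tightly response times cluster around the mean.
    variance = sum((t - mean) ** 2 for t in times) / len(times)
    # Navigability (proxy): share of transactions users actually complete.
    completion_rate = sum(s.completed for s in samples) / len(samples)
    return {"responsiveness_ms": mean,
            "consistency_stddev_ms": variance ** 0.5,
            "completion_rate": completion_rate}

samples = [
    UserExperienceSample("u1", "checkout", 420.0, True),
    UserExperienceSample("u2", "checkout", 2150.0, False),
    UserExperienceSample("u3", "checkout", 510.0, True),
]
print(summarize(samples))
```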
One of the corollaries of this UEM perspective is that monitoring and other operations-related activities, if effectively carried out, can provide insights that are critical to the design and planning of new application features and services. Combined with the right analytics, user experience software can show not only how applications "perform" but also how they are used and, in some cases, with what business impact. By looking at who is using an application, or part of an application, and how they're using it, it becomes possible to improve portfolio planning as well.
And multi-dimensional, usage-based insights can also evolve into an accounting system that shows value, impact and cost, with or without a formal chargeback system in place. Such a system is imperative for IT organizations seeking to optimize cloud: it shows where, why and how cloud services work well, where they don't, and what tradeoffs are therefore optimal for both IT efficiency and business effectiveness.
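A toy sketch of what such usage-based accounting, or "showback," might reduce to: attributing cost per service, per business unit, from usage records alone. The record shape and the unit costs below are invented purely for illustration.

```python
# Hypothetical showback sketch: cost attribution from usage records,
# no formal chargeback system required. All figures are illustrative.

from collections import defaultdict

# (service, business_unit, hours_used) -- in practice these records would
# come from the UEM/monitoring layer.
usage_records = [
    ("crm-portal", "sales", 120),
    ("crm-portal", "support", 80),
    ("reporting", "finance", 200),
]

hourly_cost = {"crm-portal": 3.50, "reporting": 1.25}  # assumed unit costs

def showback(records):
    totals = defaultdict(float)
    for service, unit, hours in records:
        totals[(service, unit)] += hours * hourly_cost[service]
    return dict(totals)

for (service, unit), cost in showback(usage_records).items():
    print(f"{unit:>8} consumed {service}: ${cost:,.2f}")
```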
Vendors such as Knoa Software, Centrix Software, Compuware and Keynote all have several oars in the water here, although the latter two skew more toward operations than toward portfolio and business planning.
All this brings me back to “closing the lifecycle feedback loop” on cloud and other services.
Much of the industry discussion to date around cloud tends to position operations as a tradition-bound organization trying to catch up with new requirements for delivering application services far more quickly than before. And while to some degree this may be true in many IT organizations, it completely misses the parallel revolution in UEM and analytics.
It also neglects the best-practice approach that there should be a feedback loop that goes through operations right back into application development. In other words, the "cloud blueprinting" revolution in provisioning is mostly missing the handshake with the "UEM and analytics" revolution in monitoring and service delivery. And with that missing handshake, the true dynamism of cloud -- in which IT assumes the role of being an "intelligent broker of services" including those services it develops in house -- dissolves once again into fragments of organizations, processes and technologies that still fundamentally fail to coalesce.
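In code terms, the missing handshake could in principle be as simple as the sketch below, in which a UEM summary adjusts a blueprint parameter ahead of the next provisioning cycle. The thresholds and the scale-out rule are my own illustration, not a reference implementation.

```python
# Hedged sketch of "closing the loop": UEM findings feed back into the
# provisioning blueprint. Thresholds and rules are illustrative assumptions.

def close_the_loop(blueprint_vm_count, uem_summary):
    """Adjust a blueprint sizing parameter based on observed user experience."""
    # Slow or failure-prone experience: scale the blueprint out before the
    # next provisioning cycle.
    if (uem_summary["responsiveness_ms"] > 1000
            or uem_summary["completion_rate"] < 0.9):
        return blueprint_vm_count + 1
    # Comfortably fast experience: reclaim capacity instead.
    if uem_summary["responsiveness_ms"] < 300 and blueprint_vm_count > 1:
        return blueprint_vm_count - 1
    return blueprint_vm_count

print(close_the_loop(4, {"responsiveness_ms": 1400,
                         "completion_rate": 0.82}))  # -> 5
```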
Of course, this handshake between UEM analytics and application service planning and development has more than technology standing in its way. One could argue that even with the ultimate in UEM analytics to work with, there are currently no skill sets in IT trained to make the most of them.
Who in IT is trained to look at patterns of application usage and relay them back to portfolio planners and application developers? Who is trained to look at the intersection of application interactions and business outcomes, once the data is available? And who can translate all of these things at once into IT performance and efficiencies on the one hand, and business goals and initiatives on the other, and assess the tradeoffs across both?
The answer, at best, is found in some critical ERP deployments, where Centers of Excellence and even trainers are associated with deploying a new application system. But these answers don't scale to the volume of data, options and services required to manage and optimize hundreds, possibly even thousands, of application tradeoffs across IT. They are nowhere near what's needed for IT as an organization to mature into its role of "intelligent service broker."
Which leaves me in a not unfamiliar place.
Once again the benefits of a powerful emerging technology are blocked by cultural factors, which in this case suggests a relatively easy solution: the creation of a new IT role or function. A new type of service portfolio planner, working with operations, development and business planners, and trained to read the "statistical tea leaves."
Combined with the right technology, this would mean that, for the first time in IT history, meaningful data about how IT's services are consumed would be assessed and fed back into service planning, as is already the case in virtually every manufacturing and service industry outside of IT. And with this, finally, the lifecycle feedback loop for cloud and other IT services would be addressed in a reasonably scalable and automated way.
I would welcome your thoughts on this one. Are you seeing individuals take on this kind of role in your organization? Do you view it as something many years out in the future? Or as something that, arguably, doesn't belong to IT at all?