If you are like most large enterprises, you have already begun your journey to “cloud computing.” This journey, which begins with server virtualization initiatives and ends with IT-as-a-service and dynamic resource provisioning, is uncharted territory for most enterprise IT departments.
Fortunately, pioneers who have built some of the world’s largest virtualization deployments have been willing to help develop a road map, so your expensive new infrastructure ends up running more than just print servers. These pioneers are successfully migrating Tier 1 applications to their private clouds while delivering better, more proactive service management.
Here are six key lessons to help you avoid common design mistakes, get key application owners on board, and deliver on performance promises by leveraging predictive analytics for IT:
Think strategically and broaden your vision of cloud computing beyond simple server consolidation, which some executives view as merely tactical. One global bank tied its cloud computing project to a corporate “Green IT” initiative: server consolidation means fewer idle resources, and flexible dynamic allocations consume less power on average, leading to better overall computing efficiency.
The ease of adding virtual machines to eliminate underutilized physical assets can often result in virtual sprawl, right under your nose. And there is an operational cost as well: one VP points out that “Managing 20 VMs that share a server requires the same amount of work as 20 physical servers.” So it is crucial to regularly eliminate the “idle” VMs polluting your virtualization environment.
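Hunting down idle VMs can be scripted. The sketch below is a minimal illustration of the idea, assuming you can export per-VM utilization samples from your monitoring tools; the thresholds and the sample data are invented for illustration, not vendor recommendations.

```python
# Flag candidate "idle" VMs from average CPU and network utilization.
# Thresholds below are illustrative assumptions, not best-practice values.

IDLE_CPU_PCT = 5.0      # average CPU below this suggests the VM is idle
IDLE_NET_KBPS = 10.0    # negligible network traffic

def find_idle_vms(samples):
    """samples: {vm_name: list of (cpu_pct, net_kbps) observations}."""
    idle = []
    for vm, obs in samples.items():
        avg_cpu = sum(c for c, _ in obs) / len(obs)
        avg_net = sum(n for _, n in obs) / len(obs)
        if avg_cpu < IDLE_CPU_PCT and avg_net < IDLE_NET_KBPS:
            idle.append(vm)
    return idle

# A week of hypothetical observations for two VMs.
week = {
    "print-01": [(1.2, 0.4), (0.8, 0.2), (1.0, 0.3)],
    "erp-db":   [(62.0, 900.0), (71.5, 1200.0), (58.3, 840.0)],
}
print(find_idle_vms(week))  # candidates for review and reclamation
```

In practice you would feed this from your hypervisor’s statistics API and treat the output as a review list, not an automatic delete list.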
Break down organizational silos and don’t invest in point tools alone. An isolated tool that manages only the capacity and performance of a virtualized environment will simply create another silo, a costly design error. Encourage collaboration among IT Operations, Application Support, and Engineering. Consider a solution that “brings it all together” with out-of-the-box integrations and an open API, so it can integrate with your existing monitoring tools and provide a holistic, end-to-end, cross-platform view of your entire environment.
Real-time capacity management is critical in a virtualized environment, where capacity and performance are intrinsically intertwined. It is natural to want to throw resources at perceived bottlenecks, but you may be treating a symptom rather than the real problem. You need a full understanding of the system’s behavior and its interdependencies to identify the true root cause of a performance problem.
The Tier 1 applications are the lifeblood of the business. These applications are also often the most complex and expensive to manage. While these app owners care about cost, they are much more concerned about performance. In order to win over Tier 1 applications, make sure you are addressing their top concerns: how you will guarantee SLAs, dynamically provision resources for a “pay-as-you-go” model, forecast performance issues to easily add or reduce computing capacity, and provide disaster recovery at a fraction of the cost.
One of the key promises of private or hybrid cloud computing is the ability to dynamically right size resources to meet periods of peak demand. In order to orchestrate this dynamic resource allocation, a new class of software has emerged called “Service Directors.” Service Director solutions which leverage resource controllers like VMware DRS (Dynamic Resource Scheduler), won’t work if they rely on manual rules and policies. In such complex environments you need automated intelligence.
As Gartner sees it, new “behavior learning technology” may be the answer.
Behavior learning technology uses mathematics and predictive analytics to automatically learn the behavior of your systems – virtual and physical. It helps you proactively detect potential performance bottlenecks so that you can suggest resource allocation changes to the Service Director in a closed-loop fashion.
Here is how these ideas come together in a solution:
While most enterprises share the same major goal of cloud computing, increased efficiency and greater agility, the ultimate sign of success is running top tier applications on the new infrastructure. In order to reach success you need to overcome the early challenges such as eliminating virtual sprawl and avoiding point tool proliferation, then you must be able to deliver on the promise of performance. Manual, policy-based approaches will not work with the complex dependencies of your new infrastructure.
Gartner believes that solutions like Netuitive “move IT Operations to a more proactive state where issues can be detected and addressed before affecting the business.”
Graham Gillen is the Sr. Product Marketing Manager for Netuitive. He currently leads Netuitive’s initiatives to market solutions in the areas of virtualization and capacity management, systems management, and applications performance management. Before joining Netuitive in 2008, Graham was a Sr. Solutions Marketing Manager at VeriSign, and Sr. Product Manager at webMethods. Graham recently completed his Certificate in Marketing from the University of Virginia Darden School of Business. He also received a Master of Science degree in Operations Research from Georgia Tech and a Bachelor of Science degree in Engineering from the University of Virginia.