New Data Reveals Widespread Downtime and Security Risks in 99% of Enterprise Private Cloud Environments
February 08, 2017

Doron Pinhas
Continuity Software


Industrial and technological revolutions happen because new manufacturing systems or technologies make life easier, less expensive, more convenient, or more efficient. It's been that way in every epoch – but Continuity Software's new study indicates that in the cloud era, there's still work to be done.

With the rise of cloud technology in recent years, Continuity Software conducted an analysis of live enterprise private cloud environments – and the results are not at all reassuring. According to configuration data gathered from over 100 enterprise environments over the past year, the study found widespread performance issues in 97% of them, putting those IT systems at significant risk of downtime. Although the participating enterprises ranked downtime as their greatest concern, downtime risks were present in every tested environment.

A deep dive into the study findings revealed numerous reasons for the increased operational risk in private cloud environments, ranging from lack of awareness of critical vendor recommendations, to inconsistent configuration across virtual infrastructure components, to incorrect alignment between different technology layers (such as virtual networks and physical resources, storage and compute layers, etc.).

The downtime risks were not specific to any particular configuration of hardware, software, or operating system. Indeed, the studied enterprises used a diverse technology stack: 48% of the organizations are pure Windows shops, compared to 7% that run primarily Linux; 46% use a mix of operating systems. Close to three quarters (73%) of the organizations use EMC data storage systems, 27% use replication for automated offsite data protection, and 12% use active-active failover for continuous availability.

The IT departments of these companies certainly include top engineers and administrators – yet nearly all of the companies in the study experienced at least some issues, and in a few cases many.

While the results are unsettling, they are certainly not surprising. The modern IT environment is extremely complex and volatile: changes are made daily by multiple teams in a rapidly evolving technology landscape. With daily patching, upgrades, capacity expansion, and the like, the slightest miscommunication between teams, or a small knowledge gap, could introduce hidden risks to the stability of the IT environment.

Unlike legacy systems, in which standard testing and auditing practices are employed regularly (typically once or twice a year), private cloud infrastructure is not regularly tested. Interestingly, this fact is not always fully realized, even by seasoned IT experts. Virtual infrastructure is often designed to be "self-healing," using features such as virtual machine High Availability and workload mobility. Indeed, some evidence is regularly provided to suggest these mechanisms are working; after all, IT executives may argue, "not a week goes by without some virtual machines failing over successfully."

This perception of safety can be misleading, since a chain is only as strong as its weakest link. Simply put, it's a numbers game. Over the course of any given week, only a minute fraction of the virtual machines will actually fail over – usually less than 1%. What about the other 99%? Is it realistic to expect they're also fully protected?
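The numbers game can be made concrete with a back-of-the-envelope calculation (the 1% weekly figure comes from the article; the independence assumption is ours): if failovers exercise a random 1% of VMs each week, how long until most of the fleet has been tested at least once?

```python
# Sketch: under the (simplifying) assumption that each week an independent,
# random 1% of VMs experiences a real failover, estimate how many weeks pass
# before 99% of VMs have been exercised at least once.
import math

weekly_fraction = 0.01   # share of VMs that actually fail over in a week
target_coverage = 0.99   # we want 99% of VMs exercised at least once

# P(a given VM is never exercised after w weeks) = (1 - weekly_fraction) ** w
# Solve (1 - weekly_fraction) ** w = 1 - target_coverage for w.
weeks = math.log(1 - target_coverage) / math.log(1 - weekly_fraction)
print(f"~{weeks:.0f} weeks to reach {target_coverage:.0%} coverage")  # ~458 weeks
```

Roughly nine years of routine operation – which is why observing occasional successful failovers says almost nothing about the protection of the remaining fleet.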

The only way to determine that the private cloud is truly resilient would be to prove that every possible permutation of failure could be successfully averted. Of course, this could not be accomplished with manual processes, which would be much too time consuming and potentially disruptive. The only sustainable and scalable approach is to automate private cloud configuration validation and testing.
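Automated configuration validation of this kind can be sketched as a set of rules run across inventory data, where each rule checks that one layer is aligned with the layer beneath it. The data model and rule set below are entirely hypothetical illustrations, not any vendor's API:

```python
# Minimal sketch of cross-layer configuration validation: collect findings
# for every VM whose compute-layer settings are misaligned with the storage
# or network layer beneath it. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    ha_enabled: bool    # virtual machine High Availability flag
    datastore: str      # backing storage volume
    network: str        # virtual network port group

def validate(vms, replicated_datastores, redundant_networks):
    """Return (vm_name, issue) findings instead of failing fast,
    so one pass surfaces every misalignment."""
    findings = []
    for vm in vms:
        if not vm.ha_enabled:
            findings.append((vm.name, "HA disabled"))
        if vm.datastore not in replicated_datastores:
            findings.append((vm.name, f"datastore {vm.datastore} is not replicated offsite"))
        if vm.network not in redundant_networks:
            findings.append((vm.name, f"network {vm.network} has no redundant uplink"))
    return findings

if __name__ == "__main__":
    fleet = [
        VM("db01", True, "ds-replicated", "pg-redundant"),
        VM("web02", False, "ds-local", "pg-redundant"),
    ]
    for name, issue in validate(fleet, {"ds-replicated"}, {"pg-redundant"}):
        print(f"{name}: {issue}")
```

Because the rules run against configuration data rather than live systems, such checks can execute continuously without the disruption that manual failover testing would cause.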

Individual vendors (for example, VMware, Microsoft, EMC and others) offer basic health measurements for their own solution stacks. While useful, this is far from a real solution, since, as the study shows, the majority of the issues occur due to incorrect alignment between the different layers. In recent years, more holistic solutions that offer vendor-agnostic, cross-domain validation have entered the market.

While such approaches come with a cost, it is far less expensive than the cost of experiencing a critical outage. The cost of a single hour of downtime, according to multiple industry studies, can easily reach hundreds of thousands of dollars (and, in some verticals, even millions).

Doron Pinhas is CTO of Continuity Software.
