New Data Reveals Widespread Downtime and Security Risks in 99% of Enterprise Private Cloud Environments
February 08, 2017

Doron Pinhas
Continuity Software


Industrial and technological revolutions happen because new manufacturing systems or technologies make life easier, less expensive, more convenient, or more efficient. It's been that way in every epoch – but Continuity Software's new study indicates that in the cloud era, there's still work to be done.

With the rise of cloud technology in recent years, Continuity Software conducted an analysis of live enterprise private cloud environments – and the results are not at all reassuring. According to configuration data gathered from over 100 enterprise environments over the past year, the study found widespread performance issues in 97% of them, putting IT systems at significant risk of downtime. Downtime risks – ranked by the participating enterprises as their greatest concern – were present in every environment tested.

A deep dive into the study findings revealed numerous reasons for the increased operational risk in private cloud environments, ranging from lack of awareness of critical vendor recommendations, to inconsistent configuration across virtual infrastructure components, to incorrect alignment between different technology layers (such as virtual networks and physical resources, or the storage and compute layers).

The downtime risks were not specific to any particular configuration of hardware, software, or operating system. Indeed, the studied enterprises used a diverse technology stack: 48% of the organizations are pure Windows shops, while 7% run primarily Linux and 46% use a mix of operating systems. Close to three quarters (73%) of the organizations use EMC data storage systems, 27% use replication for automated offsite data protection, and 12% employ active-active failover for continuous availability.

The IT departments of the companies in question certainly include top engineers and administrators – yet nearly all of the enterprises in the study experienced some issues, and in a few cases many.

While the results are unsettling, they are certainly not surprising. The modern IT environment is extremely complex and volatile: changes are made daily by multiple teams in a rapidly evolving technology landscape. With daily patching, upgrades, capacity expansion, and so on, the slightest miscommunication between teams, or a simple knowledge gap, can result in hidden risks to the stability of the IT environment.

Unlike legacy systems, on which standard testing and auditing practices are employed regularly (typically once or twice a year), private cloud infrastructure is not regularly tested. Interestingly, this fact is not always fully realized, even by seasoned IT experts. Virtual infrastructure is often designed to be "self-healing," using features such as virtual machine High Availability and workload mobility. Indeed, some evidence is regularly provided to demonstrate that these features are working; after all, IT executives may argue, "not a week goes by without some virtual machines failing over successfully."

This perception of safety can be misleading, since a chain is only as strong as its weakest link. Simply put, it's a numbers game. Over the course of any given week, only a minute fraction of the virtual machines – usually less than 1% – will actually fail over. What about the other 99%? Is it realistic to expect that they're fully protected as well?
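The numbers game is easy to quantify. As a back-of-the-envelope illustration – assuming, hypothetically, that routine failovers exercise a random ~1% of the VM fleet each week, independently – even a full year of "successful failovers" leaves the majority of virtual machines untested:

```python
def expected_coverage(weekly_fraction: float, weeks: int) -> float:
    """Fraction of VMs expected to have had at least one real
    failover exercised after the given number of weeks, assuming
    each week a random, independent sample of VMs fails over."""
    return 1 - (1 - weekly_fraction) ** weeks

# After 52 weeks at 1% per week, roughly 41% of VMs have ever
# been exercised -- nearly 60% have never had a real failover test.
print(f"{expected_coverage(0.01, 52):.0%}")
```

The exact percentages are illustrative, but the conclusion is robust: observed failovers say very little about the protection of the fleet as a whole.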

The only way to determine whether the private cloud is truly resilient would be to prove that every possible failure scenario can be successfully averted. Of course, this could never be accomplished with manual processes, which would be far too time-consuming and potentially disruptive. The only sustainable and scalable approach is to automate private cloud configuration validation and testing.
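To make the idea concrete, here is a minimal sketch of what automated, cross-layer configuration validation might look like. All of the names, inventory structure, and rules below are hypothetical simplifications; real validation tools pull inventory from platform APIs (vCenter, SCVMM, storage arrays) and apply far larger rule sets:

```python
def validate(vms, datastores):
    """Cross-check the compute and storage layers against simple
    availability rules and return a list of findings."""
    findings = []
    for vm in vms:
        ds = datastores.get(vm["datastore"])
        if ds is None:
            # Misalignment between layers: VM points at unknown storage.
            findings.append(f"{vm['name']}: datastore {vm['datastore']} not found")
        elif vm["tier"] == "critical" and not ds["replicated"]:
            findings.append(f"{vm['name']}: critical VM on unreplicated storage")
        if not vm["ha_enabled"]:
            findings.append(f"{vm['name']}: High Availability disabled")
    return findings

# Hypothetical inventory snapshot:
vms = [
    {"name": "db01", "datastore": "ds-a", "tier": "critical", "ha_enabled": True},
    {"name": "web02", "datastore": "ds-b", "tier": "standard", "ha_enabled": False},
]
datastores = {"ds-a": {"replicated": False}, "ds-b": {"replicated": True}}

for finding in validate(vms, datastores):
    print(finding)
```

The value of running such checks continuously, rather than during an annual audit, is that misalignments introduced by yesterday's change are surfaced before the next failure exposes them.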

Individual vendors offer basic health measurements for their own solution stacks (VMware, Microsoft, and EMC, among others). While useful, these are far from a complete solution since, as the study shows, the majority of issues stem from incorrect alignment between the different layers. In recent years, more holistic solutions that offer vendor-agnostic, cross-domain validation have entered the market.

While such approaches come at a cost, it is far less than the cost of experiencing a critical outage. According to multiple industry studies, a single hour of downtime can easily cost hundreds of thousands of dollars – and in some verticals, millions.

Doron Pinhas is CTO of Continuity Software.
