Clustrix announced its nResiliency feature, which ensures that the database, and hence the application, remains available in the event of multiple simultaneous server or instance failures.
Available now, nResiliency offers confidence that valuable data is safe and continuously available should two or more servers (nodes) fail at the same time. Companies can decide on the maximum number of nodes that could fail in the cluster without losing any data, and ClustrixDB then automatically creates the number of data replicas necessary to recover from a multi-node failure. In general, surviving n simultaneous node failures requires at least n+1 copies of each piece of data, so tolerating two concurrent failures means keeping three replicas.
“Too many companies rely on databases for OLTP applications that are susceptible to even single-node failure,” said Mike Azevedo, CEO, Clustrix. “By offering protection against multi-node failure, we’re offering peace of mind through an easy-to-use feature that would otherwise require IT resources that most companies don’t have and can’t afford. This is critically important for larger scale applications that typically service millions of users like in e-commerce, gaming, adtech and social.”
ClustrixDB was developed to address MySQL’s scale limitations, but its architecture is distinct from other MySQL replacements in that it is designed to “scale out” both writes and reads by adding server nodes. This lets it scale near-linearly: the number of simultaneous transactions it can handle grows with the cluster, while latency remains practically imperceptible to the end user.
Scale-out ability, combined with the new nResiliency protection against multi-node failure, means that companies can now easily scale to meet the demands placed on their applications by millions of concurrent users. E-commerce sites facing holiday shopping traffic, gaming companies launching a new title, and consumer web services and social applications can all freely match database capacity to demand, adding servers when they need them and scaling back when they don’t, paying only for the capacity they use.
ClustrixDB’s new nResiliency feature lets users define the number of servers in the cluster that can become unavailable simultaneously while the database remains continuously available, and it is easily configurable according to data sensitivity and criticality, as sketched in the example following the list below.
For example, users may:
- Set MAX_FAILURES to a high number for high-value data that is needed to keep mission-critical applications running through simultaneous failures
- Set MAX_FAILURES to a mid-range number for high-volume data that does not require multiple levels of redundancy
- Set MAX_FAILURES to a low number for high-throughput, ‘fast-lane’ data that can easily be replaced
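As a minimal illustration of such a policy, the sketch below sets the failure tolerance for a cluster. The ALTER CLUSTER statement follows the form Clustrix documents for this setting; the SHOW VARIABLES check is an assumption (modeled on MySQL behavior) and may differ by release.

```sql
-- Minimal sketch of an nResiliency policy, assuming the documented
-- ALTER CLUSTER syntax for the MAX_FAILURES setting.

-- Tolerate up to 2 simultaneous node failures: ClustrixDB then keeps
-- MAX_FAILURES + 1 = 3 replicas of each slice of data.
ALTER CLUSTER SET MAX_FAILURES = 2;

-- Assumption: the setting is visible as a global variable, as in MySQL.
SHOW VARIABLES LIKE 'max_failures';
```

The trade-off is storage and write amplification: each additional tolerated failure adds another full replica of every slice, so high MAX_FAILURES values are best reserved for the mission-critical data described above.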