Hard Lessons of Public Cloud: Designing for When AWS Goes Down
March 07, 2017

Eric Wright
Turbonomic


So, your site is down because AWS S3 went away. Too soon? It's never too soon to talk about why the responsibility for designing resilient infrastructure belongs in your camp. It's like Smokey the Bear's old warning that "only you can prevent forest fires." The difference is that it's Jeff Bezos saying it this time.

Thanks to a chat I had recently, we have some real insight into what designing for cloud resiliency actually means.

Cloud Goes Down, so Design for It

There is no fine print hiding this in the terms and conditions; these are hard facts. AWS designs its infrastructure to be as resilient as possible, but clearly tells you that you should design with the intention of surviving partial service outages. It isn't that AWS plans on being down a lot, but it has been hit by targeted DDoS attacks, and it has had to reboot EC2 hosts in order to patch security vulnerabilities.

At the time I was writing this, AWS S3 was fighting its way back to life in the us-east-1 region. This means that multiple Availability Zones were in the throes of recovery, and that potentially hundreds of thousands of websites and applications were experiencing issues retrieving objects from the widely used object storage platform.

So, how do we do this better? Let's ask someone who designs for the cloud and see how developers think about these things. With that, I wanted to share a great discussion that I had with former Disney lead architect and current Principal Software Architect at Turbonomic, Steve Haines.

Q&A: Understanding the Developer's Reaction to the AWS Outage

EW: What does it mean to think about designing across regions inside the public cloud?

SH: Designing an application to run across multiple AWS regions is not a trivial task. While you can deploy stateless services or microservices to multiple regions and then configure Route53 (Amazon's DNS service) to point to Elastic Load Balancers (ELBs) in each region, that doesn't completely solve the problem.
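
To make that first step concrete, here is a minimal boto3 sketch of latency-based Route53 records aliased to an ELB in each of two regions. The hosted zone ID, domain, and ELB names are hypothetical placeholders, not values from this discussion:

    import boto3

    # Hypothetical values for illustration only.
    HOSTED_ZONE_ID = "Z1EXAMPLE"
    ELBS = {
        # region -> (ELB alias hosted zone ID, ELB DNS name)
        "us-east-1": ("Z35SXDOTRQ7X7K", "myapp-east-123.us-east-1.elb.amazonaws.com"),
        "us-west-2": ("Z1H1FL5HABSF5", "myapp-west-456.us-west-2.elb.amazonaws.com"),
    }

    route53 = boto3.client("route53")

    changes = []
    for region, (alias_zone, dns_name) in ELBS.items():
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": region,           # one record per region
                "Region": region,                  # latency-based routing policy
                "AliasTarget": {
                    "HostedZoneId": alias_zone,
                    "DNSName": dns_name,
                    "EvaluateTargetHealth": True,  # skip a region whose ELB is unhealthy
                },
            },
        })

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "Two-region latency routing", "Changes": changes},
    )

With EvaluateTargetHealth set, DNS answers stop pointing at a region whose load balancer is failing its health checks, which is exactly the "point Route53 at ELBs in each region" pattern Steve describes; as he notes, though, that alone doesn't solve the whole problem.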

First, it's crucial to consider the cost of redundancy. How many regions, and how many Availability Zones (AZs) in each region, do we want to deploy to? Based on historical outages, you're probably safe with two regions, but you do not want to keep a full copy of your application deployed in another region just for disaster recovery: you want to use it and distribute workloads across those regions!

For some use cases this will be easy, but for others you will need to design your application so that it is close to the resources it needs to access. If you design your application with failure in mind and to run in multiple regions, then you can manage the cost because both regions will be running your workloads.

EW: That seems to be part of the cost of doing business for design and resiliency, but what is the impact below the presentation layers? That feels like the "low-hanging fruit" as we know it, but there is much more to the application architecture than that, right?

SH: Exactly! That leads to the next challenge: resources, such as databases and files. While AWS provides Multi-AZ database replication free of charge for databases running behind RDS, users are still paying for storage, IOPS, etc. However, this model changes if a user wants to replicate across regions. For example, Oracle provides a product called GoldenGate for performing cross-region replication, which is a great tool but can significantly impact your IT budget.
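
For reference, turning on that Multi-AZ standby is a single flag at instance creation time. A minimal boto3 sketch, with hypothetical identifiers and credentials:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Hypothetical identifiers and credentials for illustration only.
    rds.create_db_instance(
        DBInstanceIdentifier="myapp-db",
        Engine="mysql",
        DBInstanceClass="db.m4.large",
        AllocatedStorage=100,        # storage and IOPS are still billed
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        MultiAZ=True,                # synchronous standby in a second AZ within the region
    )

Note that Multi-AZ protects you within a single region; it does nothing for the cross-region scenario Steve raises next.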

Alternatively, you can consider one of Amazon's native offerings, Aurora, which supports cross-region replication out of the box, but that needs to be a design decision you make when you're building or refactoring your application. And if you store files in S3, be sure to enable cross-region replication; it will cost you more, but it will ensure that files stored in one region will be available in the event of a regional outage.
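
As an illustration of the S3 side, here is a minimal sketch of enabling cross-region replication with boto3. The bucket names and IAM role are hypothetical, and both buckets must already have versioning enabled before replication can be configured:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket names and IAM role; both buckets need
    # versioning enabled before replication can be configured.
    s3.put_bucket_replication(
        Bucket="myapp-assets-east",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
            "Rules": [{
                "ID": "replicate-all-objects",
                "Prefix": "",                # empty prefix matches every object
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::myapp-assets-west",
                },
            }],
        },
    )

Keep in mind replication is asynchronous and only applies to objects written after the rule is in place, so it is a design decision to make early, not a switch to flip mid-outage.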

EW: Sounds like we already have some challenges in front of us just porting our designs to cloud platforms. But when you're leaning into the cloud as a first-class destination for your apps, you have to think about big outages from the start. We do disaster recovery testing on-premises because that's something we can control. How do we do that type of testing out in the public cloud?

SH: Good question. It's important to remember that while designing an application to run in a cross-region capacity is one thing, having the confidence that it will work when you lose a region is another beast altogether!

This is where I'll defer to Netflix's practice of designing for failure and regularly testing failure scenarios. They have a "Simian Army" (https://github.com/Netflix/SimianArmy) that simulates various failure scenarios in production and ensures that everything continues to work. One member of the Simian Army is the Chaos Gorilla, which regularly kills a region and ensures that Netflix continues to function, which is one of the reasons they were able to ride out a previous full-region outage.
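
The Simian Army itself is a Java toolset, but the core idea fits in a few lines. Here is a toy Chaos Monkey-style sketch in Python, not Netflix's actual code, that terminates one random instance carrying a hypothetical app tag so you can watch whether the service degrades:

    import random

    import boto3

    # Toy Chaos Monkey-style sketch (not the Simian Army itself): kill one
    # running instance tagged for our hypothetical app and see what breaks.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:app", "Values": ["myapp"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instances:
        victim = random.choice(instances)
        print("Terminating %s - the application should survive this" % victim)
        ec2.terminate_instances(InstanceIds=[victim])

Chaos Gorilla does the same thing at the scale of an entire Availability Zone or region, which is a much bigger hammer and only safe when the architecture was designed for it.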

If you're serious about running across regions then you need to regularly validate that it works!
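
Validation can start as simply as probing each regional endpoint after you pull a region out from under the application. A minimal sketch, assuming hypothetical per-region health-check URLs:

    import requests

    # Hypothetical per-region endpoints; after failing (or simulating the
    # failure of) one region, confirm the survivors still answer.
    ENDPOINTS = {
        "us-east-1": "https://east.myapp.example.com/health",
        "us-west-2": "https://west.myapp.example.com/health",
    }

    for region, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=5)
            status = "OK" if resp.status_code == 200 else "HTTP %d" % resp.status_code
        except requests.RequestException as exc:
            status = "FAILED (%s)" % exc.__class__.__name__
        print("%s: %s" % (region, status))

Run something like this on a schedule, not just once, so a regression in your failover path shows up before the next real outage does.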

But maybe we should think bigger than cross-region – what if we could design across clouds for the ultimate protection?

EW: Thanks for the background and advice, Steve. Good food for thought for all of us in the IT industry. I'm sure there are a lot of people having this discussion in the coming weeks after the recent outage.

Eric Wright is Principal Solutions Engineer at Turbonomic.

