So, your site is down because AWS S3 went away. Too soon? It's never too soon to talk about why the responsibility for designing resilient infrastructure belongs in your camp. It's like when Smokey the Bear used to say that "only you can prevent forest fires." The difference is that it's Jeff Bezos saying it this time.
We have some real insight into what designing for cloud resiliency actually means, thanks to a chat I had recently.
Cloud Goes Down, So Design for It
There is no special text hidden in the terms and conditions; these are hard facts. AWS designs its infrastructure to be as resilient as possible, but it clearly tells you that you should design with the intention of surviving partial service outages. It isn't that AWS plans on being down a lot, but it has been hit by targeted DDoS attacks and has had to reboot EC2 hosts to patch security vulnerabilities.
At the time of writing, AWS S3 was fighting its way back to life in the US-East-1 region. That means multiple Availability Zones were in the throes of recovery, and potentially hundreds of thousands of websites and applications were having trouble retrieving objects from the widely used object storage platform.
So, how do we do this better? Let's ask someone who designs for a living and see how developers think about these things. With that, I wanted to share a great discussion I had with Steve Haines, former Disney lead architect and current Principal Software Architect at Turbonomic.
Q&A: Understanding the Developer's Reaction to the AWS Outage
EW: What does it mean to think about designing across regions inside the public cloud?
SH: Designing an application to run across multiple AWS regions is not a trivial task. While you can deploy stateless services or microservices to multiple regions and then configure Route53 (Amazon's DNS service) to point to Elastic Load Balancers (ELBs) in each region, that doesn't completely solve the problem.
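To make that concrete, here is a minimal boto3 sketch of the Route53 side of that setup: failover records pointing at ELBs in two regions. The hosted zone ID, domain, and ELB values are hypothetical placeholders, so treat this as a sketch of the pattern rather than a drop-in script.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values -- substitute your own hosted zone and load balancers.
HOSTED_ZONE_ID = "Z111111QQQQQQQ"
DOMAIN = "app.example.com."
ELBS = [
    # (failover role, ELB hosted zone ID, ELB DNS name)
    ("PRIMARY",   "Z35SXDOTRQ7X7K", "myapp-east.us-east-1.elb.amazonaws.com"),
    ("SECONDARY", "Z1H1FL5HABSF5",  "myapp-west.us-west-2.elb.amazonaws.com"),
]

changes = []
for role, elb_zone_id, elb_dns in ELBS:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "A",
            "SetIdentifier": role.lower() + "-region",
            "Failover": role,  # Route53 serves PRIMARY while it is healthy
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,  # fail over when the ELB goes dark
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Two-region failover records", "Changes": changes},
)
```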
First, it's crucial to consider the cost of redundancy: how many regions, and how many Availability Zones (AZs) in each region, do we want to deploy to? Based on historical outages, you're probably safe with two regions, but you do not want to keep a full copy of your application deployed in another region just for disaster recovery: you want to use it and distribute workloads across those regions!
For some use cases this will be easy, but for others you will need to design your application so that it sits close to the resources it needs to access. If you design your application with failure in mind and build it to run in multiple regions, then you can manage the cost, because both regions will be running your workloads.
EW: That seems to be a bit of the cost of doing business for design and resiliency, but what is the impact below the presentation layers? That feels like the "low-hanging fruit" as we know it, but there is much more to the application architecture than that, right?
SH: Exactly! That leads to the next challenge: resources, such as databases and files. While AWS provides Multi-AZ database replication free of charge for databases running on RDS, users are still paying for storage, IOPS, and so on. However, this model changes if you want to replicate across regions. For example, Oracle offers a product called GoldenGate for cross-region replication, which is a great tool but can significantly impact your IT budget.
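To make the Multi-AZ piece concrete, here is a minimal, hypothetical boto3 sketch; the replication described above is a single flag at provisioning time. The identifiers and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical instance settings. MultiAZ=True gives you a synchronous
# standby replica in another Availability Zone; as noted above, the
# replication itself is free of charge, but you still pay for storage,
# IOPS, and the standby's instance hours.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
)
```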
Alternatively, you can consider one of Amazon's native offerings, Aurora, which supports cross-region replication out of the box, but that needs to be a design decision you make when you're building or refactoring your application. And if you store files in S3, be sure to enable cross-region replication; it will cost you more, but it will ensure that files stored in one region remain available in the event of a regional outage.
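As an illustration of that last point, here is a hedged boto3 sketch of enabling S3 cross-region replication. The bucket names and IAM role ARN are hypothetical, and note that replication requires versioning on both the source and destination buckets.

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "myapp-assets-east"  # hypothetical, lives in us-east-1
DEST_BUCKET = "myapp-assets-west"    # hypothetical, lives in us-west-2
REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-crr-role"  # hypothetical

# Cross-region replication requires versioning on both buckets.
for bucket in (SOURCE_BUCKET, DEST_BUCKET):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new object in the source bucket to the other region.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",  # empty prefix matches all objects
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::" + DEST_BUCKET},
            }
        ],
    },
)
```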
EW: Sounds like we already have some challenges in front of us just porting our designs to cloud platforms, but when you're leaning into the cloud as a first-class destination for your apps, you have to think about big outages from the start. We do disaster recovery testing on-premises because that's something we can control. How do we do that type of testing out in the public cloud?
SH: Good question. It's important to remember that while designing an application to run in a cross-region capacity is one thing, having the confidence that it will work when you lose a region is another beast altogether!
This is where I'll defer to Netflix's practice of designing for failure and regularly testing failure scenarios. They have a "Simian Army" (https://github.com/Netflix/SimianArmy) that simulates various failure scenarios in production and ensures that everything continues to work. One member of the Simian Army is the Chaos Gorilla, which regularly kills a region and verifies that Netflix continues to function; that discipline is one of the reasons Netflix was able to ride out a previous full-region outage.
If you're serious about running across regions then you need to regularly validate that it works!
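The Simian Army itself is a Java toolset, but the idea behind a Chaos Gorilla region kill is simple enough to sketch in a few lines of Python. This toy simulation (entirely hypothetical, with no AWS calls) kills a random region and checks that traffic still lands somewhere healthy; real validation would, of course, exercise your actual DNS and load-balancing path.

```python
import random

# Toy model of an app deployed to two regions, both healthy to start.
REGIONS = {
    "us-east-1": {"healthy": True},
    "us-west-2": {"healthy": True},
}

def kill_random_region(regions):
    """Chaos-Gorilla-style step: mark one region as failed."""
    victim = random.choice(list(regions))
    regions[victim]["healthy"] = False
    return victim

def route_request(regions):
    """Simulate DNS/LB failover: pick any healthy region, or fail loudly."""
    healthy = [name for name, state in regions.items() if state["healthy"]]
    if not healthy:
        raise RuntimeError("No healthy regions -- total outage")
    return random.choice(healthy)

if __name__ == "__main__":
    victim = kill_random_region(REGIONS)
    survivor = route_request(REGIONS)
    print("Killed " + victim + "; traffic now served from " + survivor)
```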
But maybe we should think bigger than cross-region: what if we could design across clouds for the ultimate protection?
EW: Thanks for the background and advice, Steve. Good food for thought for all of us in the IT industry. I'm sure a lot of people will be having this discussion in the weeks following the recent outage.
Eric Wright is Principal Solutions Engineer at Turbonomic.