You ask a friend to "check" on your dog while you're away. Obliging, your friend goes to your house, rings the doorbell to listen for a bark and then returns to their car. However, when you made the request you really wanted your friend to go into the house for a bit, make sure there were no issues and immediately notify you if something was wrong. A perfect case of a poorly negotiated SLA!
What Are SLAs and Why Do We Have Them?
A Service Level Agreement (SLA) is a contract between a service provider and a customer regarding the level of service that will be provided. SLAs benefit both parties: they define what is being purchased, as well as the roles and responsibilities for remediating any issues. A well-constructed SLA strengthens the customer relationship by bridging the gap between the vendor's services and the customer's expectations. With software services, websites and applications becoming increasingly complex, negotiating and adhering to SLAs is more important than ever.
What Do SLAs Typically Cover?
It is very important to keep the SLA simple, measurable and realistic. SLAs typically cover:
■ Description of overall services
■ Service performance metrics
■ Financial aspects of service delivery
■ Responsibilities of service provider and customer
■ Disaster recovery process
■ Review process and frequency of review
■ Termination of agreement process
The specific performance metrics that govern compliance of service delivery are called Service Level Objectives (SLOs). In the context of web services, SLOs would cover availability, uptime and response time for the service; possibly accessibility by geography; and problem-resolution metrics such as mean time to answer (MTTA) and/or mean time to repair (MTTR).
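As a concrete sketch of what such objectives look like once written down, the snippet below expresses a small SLO set as data and checks measured values against it. The objective names and thresholds are purely illustrative, not from any particular tool or contract:

```python
# Illustrative SLO set for a web service (names and thresholds are hypothetical).
SLOS = {
    "availability_pct": 95.0,         # minimum fraction of successful checks
    "p95_response_time_ms": 1200.0,   # maximum 95th-percentile response time
    "mean_time_to_repair_min": 60.0,  # maximum average time to resolve an incident
}

def meets_slo(measured: dict, slos: dict = SLOS) -> dict:
    """Return a per-objective pass/fail map: availability is
    higher-is-better, the time-based objectives are lower-is-better."""
    return {
        "availability_pct":
            measured["availability_pct"] >= slos["availability_pct"],
        "p95_response_time_ms":
            measured["p95_response_time_ms"] <= slos["p95_response_time_ms"],
        "mean_time_to_repair_min":
            measured["mean_time_to_repair_min"] <= slos["mean_time_to_repair_min"],
    }
```

For example, `meets_slo({"availability_pct": 96.2, "p95_response_time_ms": 900.0, "mean_time_to_repair_min": 45.0})` reports every objective as met, while an 80% availability figure would flag the availability objective as violated.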
Is a service really available if the customer cannot use it? A well-constructed SLA should include a unit of measurement that defines availability in alignment with the customer's critical business process, and not just the availability of the server's URL/URI or login process.
Using our doorbell analogy in a web services context, a poorly negotiated SLA will ring the doorbell, the equivalent of looking for a 200 OK from the server. The 200 code, like the dog's bark, only tells you that someone is home, not the actual condition, i.e. the health, of the service. Checking a website or authenticating without validating the business process you rely on exposes you to downtime without financial leverage.
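To make the doorbell analogy concrete, here is a minimal sketch of the difference between the two kinds of checks. The `Response` object is a hypothetical stand-in for what a synthetic probe receives; a shallow check looks only at the status code, while a deep check also validates that the business-critical content actually rendered:

```python
from dataclasses import dataclass

@dataclass
class Response:
    """Hypothetical stand-in for an HTTP response seen by a synthetic probe."""
    status_code: int
    body: str

def shallow_check(resp: Response) -> bool:
    # "Ringing the doorbell": a 200 OK only proves someone is home.
    return resp.status_code == 200

def deep_check(resp: Response, must_contain: str) -> bool:
    # Validate the business process, not just reachability: the page must
    # return 200 AND contain the content the customer actually depends on.
    return resp.status_code == 200 and must_contain in resp.body

# A page that loads but whose business-critical widget failed to render:
broken = Response(200, "<html>Internal error loading campaign dashboard</html>")
print(shallow_check(broken))                     # True  -- looks healthy
print(deep_check(broken, "Campaign Dashboard"))  # False -- service is not usable
```

An SLA measured with `shallow_check` would count that broken page as "available"; one measured with `deep_check` would not, which is exactly the gap a well-negotiated availability definition should close.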
Step One: Measure What You Have
What can you, the service provider, do to get the most out of SLAs? Let's say you are providing a marketing automation system to an enterprise that will run its global web activities on your system. You have promised them 95% availability and suitable performance from the US east and west coasts, the UK, Germany and India.
Before you commit to an exact performance target, you should measure what you have now. You need to baseline the performance of your service in order to understand what you can offer. There is no sense promising 95% availability in India if your system is typically only available 80% of the time there. On the other hand, undercommitting can lead to lost business opportunities and lost revenue. You can use your SLA as a competitive advantage, but only if you know what you can and cannot deliver. Baselining performance will help you commit not too much, not too little, but just right!
Using a synthetic performance monitoring tool, you can baseline your services. For example, let's say you want to measure the performance of a user login transaction from the UK during business hours. You can record this multi-step user transaction and use that script to create a monitor. Next, you can create an SLA for that monitor by setting the desired response time and availability objectives. A quality synthetic tool will not only check whether the service is up and running but also measure response times and functional correctness from its global monitoring nodes, assuring SLA compliance by comparing actual performance with the SLA objectives.
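A monitoring tool will normally compute the baseline for you, but the arithmetic is simple enough to sketch. The sample data below (a hypothetical hour of UK login probes) and the nearest-rank percentile method are illustrative; the point is that a set of probes reduces to the two numbers you would commit to, availability and a response-time percentile:

```python
import math

def baseline(samples):
    """samples: list of (succeeded: bool, response_time_ms: float) probes
    from one location over the baselining window. Assumes at least one
    successful probe."""
    availability_pct = 100.0 * sum(ok for ok, _ in samples) / len(samples)
    # Percentile over successful probes only (failures have no real timing).
    times = sorted(ms for ok, ms in samples if ok)
    # Nearest-rank 95th percentile.
    p95 = times[math.ceil(0.95 * len(times)) - 1]
    return availability_pct, p95

# Hypothetical hour of UK login probes, one every 5 minutes:
probes = [(True, 850), (True, 900), (False, 0), (True, 870),
          (True, 910), (True, 880), (True, 860), (True, 905),
          (True, 890), (True, 875), (True, 920), (True, 865)]
avail, p95 = baseline(probes)
print(f"availability={avail:.1f}%  p95={p95}ms")  # availability=91.7%  p95=920ms
```

With a measured baseline of roughly 91.7% availability for this hour, committing to 95% in the SLA would clearly be premature; either the service improves first or the objective is negotiated down.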
By observing your monitors in real time, as well as reviewing the SLA summary, you get a realistic and complete picture of your performance.
Step Two: Include What Applies to Your Customer, Exclude the Rest
If your agreement states that you will provide a certain level of service for the east coast, west coast, UK, Germany and India, don't provide data for the Netherlands or Africa. You also need to account for your operational time: clearly describe your maintenance windows and/or upgrades. When building the service level agreement, keep in mind the operating periods as well as both ongoing and one-time events.
Customers are getting used to the multi-tenant nature of service providers, so be open to SLA negotiations; however, calculate the cost of any customization and make sure it aligns with your aggregate business interest in that customer. The customer, too, will often demand too much or too little. Baselining the customer's performance requirements will lead to more realistic SLAs and a win-win situation for both parties.
Step Three: Monitor Aggressively
In order to make realistic availability and performance goals and keep them, you have to take enough measurements so that a single failure doesn't skew the overall results.
This is where the law of large numbers, a principle of probability and statistics, comes in. It states that as a sample size grows, its mean gets closer and closer to the average of the whole population.
This is important context for monitoring and setting SLAs. If you run an availability test from 5 locations once, and one of those tests fails, your availability is down to 80 percent. If you run tests from 10 locations every 5 minutes for an hour, that is 120 tests, and if 1 fails your availability is still 99.2%! Less aggressive monitoring leaves you vulnerable to an SLA violation over a brief outage.
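The arithmetic above is easy to verify. A short sketch, with purely illustrative numbers:

```python
def availability_pct(total_checks: int, failures: int) -> float:
    """Availability as the percentage of checks that succeeded."""
    return 100.0 * (total_checks - failures) / total_checks

# 5 locations checking once each, 1 failure:
print(availability_pct(5, 1))                  # 80.0

# 10 locations, a check every 5 minutes for an hour = 10 * 12 = 120 checks:
print(round(availability_pct(120, 1), 1))      # 99.2
```

The same single transient failure costs you 20 points of measured availability in the sparse setup, but less than one point in the dense one; the larger sample reflects the service's true behavior far more faithfully.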
In conclusion, service level agreements are valuable for you and your customers. These three steps will help you look at SLAs as an opportunity rather than a restriction:
■ Make the right agreement based on baseline performance
■ Measure the correct things with the correct frequency
■ Take enough measurements to smooth out variability
John Lucania is Senior Sales Engineer at SmartBear Software.