
Cloud IaaS SLAs can be meaningless

By Lydia Leong | December 05, 2012 | 3 Comments


In infrastructure services, the purpose of an SLA (or, for that matter, the liability clause in the contract) is not “give the customer back money to compensate for the customer’s losses that resulted from this downtime”. Rather, the monetary guarantees involved are an expression of shared risk. They represent a vote of confidence — how sure is the provider of its ability to deliver to the SLA, and how much money is the provider willing to bet on that? At scale, there are plenty of good, logical reasons to fear the financial impact of mass outages — the nature of many cloud IaaS architectures creates a possibility of mass failure that only rarely occurs in other services like managed hosting or data center outsourcing. IaaS, like traditional infrastructure services, is vulnerable to catastrophes in a data center, but it is additionally vulnerable to logical and control-plane errors.

Unfortunately, cloud IaaS SLAs can readily be structured to make it unlikely that you’ll ever see a penny back — greatly reducing the provider’s financial risks in the event of an outage.

Amazon Web Services (AWS) is the poster-child for cloud IaaS, but the AWS SLA also has the dubious status of “worst SLA of any major cloud IaaS provider”. (It’s notable that, in several major outages, AWS did voluntary givebacks — for some outages, there were no applicable SLAs.)

HP has just launched its OpenStack-based Public Cloud Compute into general availability. Unfortunately, HP’s SLA is arguably even worse.

Both companies have chosen to express their SLAs in particularly complex terms. For the purposes of this post, I am simplifying the nuances; I’ve linked to the actual SLA text above for those who want to wade through the word salad.

To understand why these SLAs are practically useless, you need to understand a couple of terms. Both providers divide their infrastructure into “regions”, a grouping of data centers that are geographically relatively close to one another. Within each region are multiple “availability zones” (AZs); each AZ is a physically distinct data center (although a “data center” may comprise multiple physical buildings). Customers obtain compute in the form of virtual machines known as “instances”. Each instance has ephemeral local storage; there is also a block storage service that provides persistent storage (typically used for databases and anything else you want to keep). A block storage volume resides within a specific AZ, and can only be attached to a compute instance in that same AZ.
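To make the AZ scoping concrete, here is a minimal sketch using AWS’s Python SDK (boto3, which postdates this post; the region, AZ, and AMI ID are placeholder values, not real ones):

```python
import boto3

# All specifics here (region, AZ, AMI ID) are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
az = "us-east-1a"

# Launch an instance pinned to one availability zone.
instance = ec2.run_instances(
    ImageId="ami-12345678",  # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": az},
)["Instances"][0]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance["InstanceId"]])

# Block storage volumes are AZ-scoped: this volume exists only in us-east-1a...
volume = ec2.create_volume(AvailabilityZone=az, Size=8)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# ...so it can only be attached to an instance in that same AZ.
# Attaching it to an instance in us-east-1b would be rejected.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=instance["InstanceId"],
    Device="/dev/sdf",
)
```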

AWS measures availability over the course of a year, rather than monthly as other providers (including HP) do. This is AWS’s hedge against a single short outage in a month, especially since even a short availability-impacting event takes time to recover from. 99.95% monthly availability permits only about 21 minutes of downtime per month; 99.95% yearly availability permits nearly four and a half hours of downtime, cumulative over the course of the year.
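The arithmetic is worth making explicit. A quick sketch, assuming a 30-day month for the monthly figure:

```python
# Downtime allowed by a 99.95% availability SLA, monthly vs. yearly.
def downtime_budget_minutes(sla, period_minutes):
    return (1 - sla) * period_minutes

MONTH_MIN = 30 * 24 * 60   # ~43,200 minutes in a 30-day month
YEAR_MIN = 365 * 24 * 60   # 525,600 minutes in a year

print(downtime_budget_minutes(0.9995, MONTH_MIN))       # 21.6 minutes/month
print(downtime_budget_minutes(0.9995, YEAR_MIN) / 60)   # ~4.38 hours/year
```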

However, AWS and HP both define their SLAs not in terms of instance availability, or even AZ availability, but in terms of region availability. In the AWS case, a region is considered unavailable if you’re running instances in at least two AZs within that region, your instances in both of those AZs have no external network connectivity, and you can’t launch replacement instances in those AZs that do; this is metered in five-minute intervals. In the HP case, a region is considered unavailable if an instance within that region can’t respond to API or network requests, you are currently running in at least two AZs, and you cannot launch a replacement instance in any AZ within that region; the downtime clock doesn’t start ticking until there’s more than six minutes of unavailability.

(Update: HP provided some clarifications.)
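To see how restrictive the AWS-style definition is, here is my simplified reading of it expressed as code (the function name and data structure are mine, not the SLA’s, and this ignores the five-minute metering):

```python
# My simplified reading of the AWS condition, not the contractual text:
# the region counts as "unavailable" only if at least two AZs you run in
# have lost external connectivity AND can't launch a connected replacement.
def aws_region_unavailable(az_states):
    """az_states: dict mapping AZ name -> (has_connectivity, can_launch)."""
    dead = [
        az for az, (has_connectivity, can_launch) in az_states.items()
        if not has_connectivity and not can_launch
    ]
    return len(dead) >= 2

# One AZ fully down, the other healthy: no SLA event, no credit.
print(aws_region_unavailable({
    "az-1": (False, False),
    "az-2": (True, True),
}))  # False
```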

Every AZ that a customer chooses to run in effectively imposes a cost. An AZ, from an application architecture point of view, is basically a data center, so running in multiple AZs within a region is much like running in multiple data centers in the same metropolitan region. That’s close enough for synchronous replication, but it’s still a pain, and many apps don’t lend themselves well to a multi-data-center distributed architecture. It also means paying to store your data in every AZ that you need to run in; being able to launch an instance doesn’t do you much good if it doesn’t have the data it needs. The AWS SLA essentially forces you to replicate your data into two AZs; the HP one makes you do this for all the AZs within a region. Most people are reasonably familiar with the architectural patterns for two data centers; once you add a third or more, you’re departing further from people’s comfort zones. And all HP has to do is decide to add another AZ in order to essentially force you into another round of storage replication if you want to keep your SLA.

(I should caveat the above by saying that this applies if you want to be able to usefully run workloads within the context of the SLA. Obviously, you could just choose to put different workloads in different AZs, for instance, and not bother trying to replicate into other AZs at all. But HP’s “all AZs unavailable” is certainly worse than AWS’s “two AZs unavailable”.)

Amazon has a flat giveback of 10% of the customer’s monthly bill in the month in which the most recent outage occurred. HP starts its giveback at 5% and caps it at 30% (for less than 99% availability), but it applies only to the compute portion of the month’s bill.

HP has a fairly nonspecific claim process; Amazon requires that you provide the instance IDs and logs proving the outage. (In practice, Amazon does not seem to have actually required detailed documentation of outages.)

Neither HP nor Amazon SLA their management consoles; the create-and-launch instance APIs are implicitly part of their compute SLAs. More importantly, though, neither HP nor Amazon SLA their block storage services. Many workloads are dependent upon block storage. If the storage isn’t available, it doesn’t matter if the virtual machine is happily up and running — it can’t do anything useful. For an example of why this matters, you need look no further than the previous Amazon EBS outages, where the compute instances were running happily, but tons of sites were down because they were dependent on data stores on EBS (and used EBS-backed volumes to launch instances, etc.).

Contrast these messes with, say, the simplicity of the Dimension Data (OpSource) SLA. The compute SLA is calculated per VM (i.e., per instance). The availability SLA is 100%; credits start at 5% of the monthly bill for the region and go up to 100%, based on cumulative downtime over the course of the month (5% for every hour of downtime). One caveat: maintenance windows are excluded (although in practice, maintenance windows seem to affect the management console, not uptime for VMs). The norm among IaaS competitors is actually strong SLAs with decent givebacks that don’t require you to run in multiple data centers.
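Under that summary of the Dimension Data schedule, the credit math is simple enough to state directly. A sketch, assuming the credit accrues per full hour of cumulative downtime:

```python
# Sketch of the Dimension Data-style credit schedule as summarized above:
# 5% of the region's monthly bill per hour of cumulative downtime,
# capped at 100%. Hour granularity is my assumption.
def credit_percent(cumulative_downtime_hours):
    return min(100.0, 5.0 * int(cumulative_downtime_hours))

print(credit_percent(1))    # 5.0
print(credit_percent(10))   # 50.0
print(credit_percent(30))   # 100.0 (capped)
```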

Amazon’s SLA gives enterprises heartburn. HP had the opportunity to do significantly better here, and hasn’t. To me, it’s a toss-up which SLA is worse. HP has a monthly credit period and an easier claim process, but I think that’s totally offset by HP essentially defining an outage as something that impacts every AZ in a region — something that can happen if an AZ failure is coupled with a massive control-plane failure in the region, but is not otherwise likely.

Customers should expect that the likelihood of a meaningful giveback is basically nil. If a customer needs to, say, mitigate the fact that it’s losing a million dollars an hour when its e-commerce site is down, it should be buying cyber-risk insurance. The provider absorbs a certain amount of contractual liability, as well as the compensation given by the SLA, but this is pretty trivial — everything else is really the domain of the insurance companies. (A probably little-known fact: Amazon has started letting cyber-risk insurers inspect AWS operations so that they can estimate risk and write policies for AWS customers.)



3 Comments

  • Ken Larson says:

    I disagree with your SLA assessment and purpose. The best definition of an SLA is given by ITIL: essentially, an agreement on service level targets. “Service Level Targets are based on Service Level Requirements, and are needed to ensure that the IT Service design is Fit for Purpose.”

    I think this article questionably represents the value and “Fit for Purpose” aspect of the SLA. Despite the opening paragraph, it suggests that the cloud consumer is better served by a simple-language SLA that is generous in its refund policy. Further, it represents the Dimension Data SLA as a better situation than the Amazon or HP SLAs. I think this article supports a simplistic view of SLAs as a one-size-fits-all approach to cloud services, with generous refunds as the objective.

    A cloud provider would be unwise not to specify its limit of liability, and the cloud consumer would be unwise not to look under the covers and ensure that they buy resources scaled to their availability and disaster tolerance needs.

    A company the size of Amazon or HP would need to scale its resources to handle potential high-availability and disaster-tolerant systems. I suspect Dimension Data would have fewer sites and global choices. Who knows? It’s in the cloud!

    Speaking of which, did you actually read the Dimension Data SLA that seems so highly recommended in this article?

    It clearly notes the consumer’s responsibility:
    “Customer’s system is fully redundant at all tiers and is configured for failover operation;”
    and excludes from its liability: “Events of force majeure, including acts of war, god, earthquake, flood, embargo, riot, sabotage, labor dispute (outside of OpSource’s own employees), government act, or failure of the Internet.”

    So… if the consumer runs a business-critical system on OpSource, doesn’t configure it to be highly available, doesn’t pay for the redundancy, and any sort of “force majeure” occurs… NO REFUND. 0%. This is better than what?

    I have no issue with Dimension Data, Amazon, or HP, but caveat emptor when it comes to the public cloud.

  • Lydia Leong says:

    Two quick notes:

    Both HP and Amazon also exclude force majeure; this exclusion is actually the norm in contracts, not only in cloud, but in a wide range of other services (infrastructure, technical, and non-tech alike).

    I don’t think you read the Dimension Data SLA sufficiently carefully. The Dimension Data SLA is broken into two parts. One is an infrastructure SLA, and one is an application SLA. The SLA that is comparable to the HP and Amazon SLAs is the pure infrastructure one, which does not require you to build app redundancy, etc. Dimension Data will ALSO give you an application SLA if you buy their managed services and conform to that additional set of rules, but if you don’t do so, then you get just the infrastructure SLA as stated.

    This is not about refunds, actually, and I apparently did not make that sufficiently clear in my blog post. This is about shared risk. An SLA reflects a provider’s assessment of its risks. Obviously, large-scale providers have legitimate concerns about risk, and they need to carefully assess how much of an SLA they can actually take on. Customers, on the other hand, have to decide what the SLA says about the provider’s confidence in its engineering.

  • Blake Yeager says:

    Based on our dialogue regarding SLAs, I’ve posted a blog post (http://bit.ly/TUTvNh) to help clarify the HP Cloud Services approach to SLAs!