
AWS moves from ECU to vCPU

By Kyle Hilgendorf | April 16, 2014 | 4 Comments


In a quiet move yesterday, Amazon Web Services apparently abandoned its Elastic Compute Unit (ECU) approach to describing and selling EC2 instance types in favor of a more traditional vCPU approach. The change was made without any formal announcement, and I wonder what effect it will have, positively or negatively, on customers.

For existing AWS customers who have grown accustomed to ECU over the past several years, this could be a somewhat disruptive change. That is especially true for those at larger scale who have invested significant time and money optimizing instance size and horizontal scalability, based on their own performance testing of which kinds of EC2 instances, and how many, their use cases require. Initially, this may not matter much for existing deployments, but it will have an impact when scaling out or launching new use cases. Bottom line: these customers are pretty savvy and will find ways to adjust.

For new or prospective AWS customers, the ECU was always a gnarly concept that took time to grasp. More traditional deployments, such as those based on VMware, have always been described in terms of vCPUs. Bottom line: more traditional IT operations admins and new AWS customers will likely welcome this as a move toward familiarity and simplicity.

However, for all customers, one aspect of this could be problematic. AWS is a massive-scale cloud provider, with a widely varied mix of servers and processor architectures in its fleet. As a result, two instances, each with 2 vCPUs, will not necessarily be equivalent: one could reside on top of a 2012-era processor while the other resides on top of a 2014-era processor. Many people have written about the fact that EC2 processor architecture varies across instance types and across regions, even among instances described as having the "same specs". Some savvy organizations have therefore adopted a "deploy and ditch" strategy: they deploy many instances, interrogate them all for processor architecture, and then ditch the ones that do not meet the current or fastest specs.
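The "deploy and ditch" approach can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming the operator has already collected each instance's CPU model string (for example, by running `grep 'model name' /proc/cpuinfo` over SSH on every freshly launched instance); the instance IDs, model strings, and the target processor are invented for the example, not real AWS output.

```python
# "Deploy and ditch" sketch: launch a batch of instances, record each one's
# reported CPU model, keep those on the desired processor, and flag the rest
# for termination and relaunch. All data below is illustrative.

DESIRED = "E5-2670"  # example target: the Sandy Bridge part reported for M3 instances

def split_fleet(cpu_models):
    """Partition instance IDs into (keep, ditch) based on CPU model string."""
    keep, ditch = [], []
    for instance_id, model in cpu_models.items():
        (keep if DESIRED in model else ditch).append(instance_id)
    return keep, ditch

# Hypothetical fleet: instance ID -> "model name" line from /proc/cpuinfo
fleet = {
    "i-0001": "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
    "i-0002": "Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz",
    "i-0003": "Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
}

keep, ditch = split_fleet(fleet)
print(keep)   # instances to retain
print(ditch)  # instances to terminate and redeploy
```

In practice the ditch step would call the EC2 termination API and loop until enough instances land on the desired hardware, which is exactly the wasted effort that clearer processor disclosure per instance type would eliminate.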

This change escalates an important transparency issue for AWS. AWS will need to clarify its physical processor architecture strategy per instance type or instance family. As a customer, I will want to know which instance types are based on Sandy Bridge processor architectures, for example, because that tells me what a vCPU will equate to. I will want to know the similarities and differences in processor strategy between an m2 and an m3, or between an m3.medium and an m3.large. And if there are no differences, I will want to know that as well, and have something in writing stating as much. Customers wanted this before with ECU, but ECU gave AWS a way to deflect these customer questions.

ECU was a foreign concept to grasp initially, but it did provide one benefit: a standard of measure. Now that AWS has moved to a vCPU strategy, will customers applaud or complain? I'd love to hear your thoughts in the comments below.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.



  • Ryan Maple says:

    If you attempt to launch a new instance via the AWS Management Console, you'll still see the ECU values (alongside the new vCPU values), along with the information you seek for most of the newer instance types. For example:

    “For M3 instances, each vCPU is a hardware hyperthread from Intel Xeon E5-2670 processors.”

    “For CR1 instances, each vCPU is a hardware hyperthread from Intel Xeon E5-2670 processors.”

    .. and so on.

  • Mark Skilton says:

    This move is no doubt a commercial one, as the cloud market matures and consumers need more reliable benchmarks for cloud service providers. Presumably AWS has been disadvantaged by its proprietary attitude and has finally joined the rest of the world. I've seen a number of unit-of-measure comparisons in pilots that I call the "apples and oranges comparison problem" in cloud, where you are comparing, say, AMIs and VMIs. While the ECU model and the S/M/L scaling was a useful method for blocking capacity (I still have issues with their dynamic pricing, but that's another story), I suspect other factors may also be driving this, such as appliance sales from competitors trying to dislodge AWS; for example, EMC and others in this market seeking to shift kit. From their perspective, I am seeing more moves to containerize a total solution, but I have concerns and doubts that they can join up a complicated sales story of applications, appliances, and services when many customers may prefer a provider platform. As we move into the next phase of cloud, with advanced PaaS and multi-platform and device services, the plumbing layer of infrastructure just needs to be transparent and clear so the real added-value services can be provided on top of it.

  • @Ryan – Thank you very much for highlighting that. I will log into the console and see what it looks like. From what you pasted, that could be very helpful. If this is in the console, it would be great if AWS would also post this on the EC2 Instance Types pages for people evaluating/comparing AWS and not actually deploying yet.

  • @Mark – Thanks for the opinion and insights!