Gartner Blog Network


Can A Cloud Computing Provider Be Too Massive?

by Tom Bittman  |  September 15, 2008  |  4 Comments

It’s an interesting question. I’ve written about it in research, and I seem to have conversations about it with vendors every day. Two variables make it interesting: economies of scale, and continued technology innovation.

One of the benefits of being massive is the ability to leverage huge economies of scale – spread fixed costs as thinly as possible. But at what point do costs essentially become linear? I would question whether Google’s cost per compute unit went down much when it expanded from one datacenter to two – not to mention from two to several dozen.
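The intuition can be put in numbers. If a fixed cost F is spread over n compute units, each with variable cost v, cost per unit is v + F/n – and the scale benefit flattens quickly. A minimal sketch, with purely illustrative cost figures (nothing here reflects Google’s actual economics):

```python
# Illustrative economies-of-scale model: cost per compute unit flattens
# quickly as fixed costs are spread over more units.
def cost_per_unit(units, fixed_cost=10_000_000, variable_cost=100.0):
    """Fixed cost amortized over `units`, plus per-unit variable cost."""
    return variable_cost + fixed_cost / units

for datacenters in (1, 2, 24):
    units = datacenters * 50_000  # assume 50k compute units per datacenter
    print(f"{datacenters:>2} datacenter(s): ${cost_per_unit(units):,.2f}/unit")
```

Going from one datacenter to two cuts the amortized fixed cost in half, but by a few dozen datacenters the fixed-cost share is nearly gone – further scale buys almost nothing.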

But even more interesting, the assumption is that computing is a commodity. This is only true at a point in time. Computing technologies continue to get cheaper for the same performance, they shrink in space requirements, and perhaps most importantly in today’s world, they consume less power.

Clearly the ability to leverage new technology – to be agile – is an important capability. A megaprovider that cannot adopt new technology quickly can be undercut in price by smaller, agile providers. Or, at least, higher agility reduces the size a provider needs to attain to remain price-competitive with the megaproviders.
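One way to see the agility effect: if newer hardware cuts the variable cost per unit, a smaller provider needs only enough scale to amortize its fixed costs down to the megaprovider’s price. A rough sketch with hypothetical numbers:

```python
# Hypothetical break-even: how small can an agile provider be and still
# match a megaprovider's price, if newer hardware cuts variable cost?
def min_competitive_size(fixed_cost, mega_unit_cost, new_variable_cost):
    """Units needed so that new_variable_cost + fixed_cost/units
    equals mega_unit_cost."""
    return fixed_cost / (mega_unit_cost - new_variable_cost)

# Megaprovider on older gear: $100/unit variable, huge scale -> ~$101/unit.
# Agile provider on newer gear: $70/unit variable, same $10M fixed cost.
print(min_competitive_size(10_000_000, 101.0, 70.0))  # ~322,581 units
```

The bigger the technology-driven gap in variable cost, the smaller the agile provider can be and still undercut the giant – which is exactly why a megaprovider that adopts new technology slowly is exposed.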

How big is big enough? And could this issue lead to a megaprovider struggling to remain competitive with a host of smaller competitors? Can small providers federate in the cloud to take down a megaprovider? The picture in my mind is a huge apatosaurus being taken down by a group of velociraptors…


Category: agility  cloud  

Tags: cloud-computing  google  

Thomas J. Bittman
VP Distinguished Analyst
20 years at Gartner
31 years IT industry

Thomas Bittman is a vice president and distinguished analyst with Gartner Research. Mr. Bittman has led the industry in areas such as private cloud computing and virtualization. Mr. Bittman invented the term "real-time infrastructure," which has been adopted by major vendors and many…


Thoughts on Can A Cloud Computing Provider Be Too Massive?


  1. Dan Sholler says:

    Tom,
    Aren’t you looking at this from the wrong end?

    The reason cloud computing has superior economics is that providers aggregate demand, and use that aggregation to smooth out the variations, so that they can maintain less fixed cost per unit delivered than someone who is doing this on their own.

    The more demand you can get (and the more out-of-phase the individual demands of customers are) the better off you can be. I believe this is the essence of how many electricity provider contracts work today.

    In effect, what we are talking about with cloud computing is wholesaling compute capacity. (A wholesaler’s business value is that it aggregates demand and therefore can create smoother supply than the individual companies could do on their own dealing with the manufacturers.)

    Wholesaling works even in an environment where goods have imperfect substitutability. Why, then, do the differences in computers and their capabilities matter? After all, if I can achieve a given service level using two of the old computers or one of the new ones, the provider may be better off using one of the new ones, but that change only really affects the provider’s margin (or pricing) opportunities. Whether the provider chooses the old ones or the new ones does not matter to the customer relationship, since the customer is buying some compute power at a particular service level at a particular cost.

    Obviously this is not a commodity notion entirely, because some applications will run on some types of servers (or OSs or Databases) and not others.

    However, the question of scale is again one of opportunity. If there is more uneven demand to be matched, growing bigger will be valuable. If there is not, then no matter how big or small you are, it will be challenging.

    Size does matter, but only after you aggregate demand with different needs and characteristics.

    So the way the velociraptors in your example come to exist is by being better at finding and aggregating uneven demand – not because they are smaller.

  2. Tom Bittman says:

    Agree to a point, Dan. Since many workloads are either (1) relatively small/short-lived or (2) granular/componentized/parallelized, it doesn’t take many workloads to “smooth out” utilization variation. Clearly, providers would prefer very small workload chunks that can easily utilize capacity – that way, they can drive high utilization rates with smaller numbers of individual workloads. I don’t believe that will typically be millions of workloads – thousands probably would do it. That would be thousands of consumers, or a handful of medium-sized enterprises.

    Differences in computers don’t matter to users, but if it costs one provider twice as much (compute, power, space) as another provider, then differences certainly do matter to the providers.

  3. […] had a number of posts on the structure of the cloud computing market, (Can A Cloud Computing Provider Be Too Massive?, Is Google the Mainframe of Cloud Computing?, Partly or Mostly Cloudy?) and I’m getting more and […]

  4. Niraj says:

    Good point.
    Makes me wonder about the future of Amazon when innovation in real-time provisioning and pooled capacity becomes a feature for big-iron vendors (IBM, Sun, HP), and Amazon is just stuck with an opportunity to capitalize on economies of scale.

    To continue its success, Amazon will either have to go downstream (and potentially buy Sun, continuing to innovate at a different level of the HaaS picture) or go upstream (like Google and Microsoft) and focus on SaaS/PaaS – essentially capitalizing on “economies of scope” (Hal Varian).
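The aggregation argument running through these comments is essentially statistical multiplexing: independent, out-of-phase demands sum to a much smoother total, so the peak capacity a provider must provision per customer shrinks as customers are added. A minimal simulation sketch (uniform random demand per customer is an assumption for illustration, not a model of real workloads):

```python
import random

# Statistical-multiplexing sketch: out-of-phase demands aggregate into a
# smoother total, so provisioned peak capacity per customer shrinks.
def peak_to_mean(num_customers, periods=1000, seed=42):
    """Ratio of peak aggregate demand to mean aggregate demand."""
    rng = random.Random(seed)
    totals = []
    for _ in range(periods):
        # Each customer independently demands 0..100 units this period.
        totals.append(sum(rng.uniform(0, 100) for _ in range(num_customers)))
    return max(totals) / (sum(totals) / len(totals))

for n in (1, 10, 1000):
    print(f"{n:>4} customers: peak/mean = {peak_to_mean(n):.2f}")
```

The ratio falls steeply at first and then levels off – consistent with the point above that thousands of workloads, not millions, may be enough to smooth utilization, after which further scale adds little.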




Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.