It’s an interesting question, one I’ve written about in my research and seem to discuss with vendors every day. Two variables make it interesting: economies of scale, and continued technology innovation.
One of the benefits of being massive is the ability to leverage huge economies of scale: spread the fixed costs as thinly as possible. But at what point do costs become essentially linear? I would question whether Google’s cost per compute unit dropped much when it expanded from one datacenter to two, let alone when it went from two to several dozen.
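To make the intuition concrete, here is a back-of-the-envelope sketch. The fixed and variable cost figures below are made up purely for illustration and reflect no real provider’s economics; the point is simply that average cost per unit is fixed cost divided by scale plus a variable cost, so once the fixed share has been diluted enough, further scale barely moves the number.

```python
# Back-of-the-envelope sketch of economies of scale.
# All figures are hypothetical, chosen only to show the shape of the curve.

FIXED_COST = 50_000_000      # assumed annual fixed cost (engineering, ops tooling, etc.)
VARIABLE_COST_PER_UNIT = 40  # assumed annual cost per "compute unit" (power, hardware amortization)

def cost_per_unit(units: int) -> float:
    """Average cost per compute unit: the fixed share shrinks as scale grows."""
    return FIXED_COST / units + VARIABLE_COST_PER_UNIT

for units in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{units:>11,} units -> ${cost_per_unit(units):,.2f} per unit")

# Output:
#     100,000 units -> $540.00 per unit
#   1,000,000 units -> $90.00 per unit
#  10,000,000 units -> $45.00 per unit
# 100,000,000 units -> $40.50 per unit
```

At 100,000 units the fixed share dominates; by 10 million units the curve has essentially flattened, and the next order of magnitude buys almost nothing. That is the point at which cost becomes, for practical purposes, linear in capacity.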
Even more interesting is the assumption that computing is a commodity. That is only true at a point in time. Computing technologies keep getting cheaper for the same performance, they keep shrinking in space requirements, and, perhaps most importantly in today’s world, they keep consuming less power.
Clearly the ability to leverage new technology quickly – to be agile – is an important capability. A megaprovider that cannot adopt new technology quickly can be undercut on price by smaller, more agile providers. Or, at the least, higher agility reduces the size a provider needs to reach to remain price-competitive with the megaproviders.
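A toy model of what that agility is worth, assuming (purely for illustration) a 25% annual improvement in performance per dollar, and comparing providers that refresh hardware on different cadences:

```python
# Illustrative only: how refresh cadence interacts with price/performance gains.
# The 25% annual improvement and the cadences below are assumptions, not measurements.

IMPROVEMENT_PER_YEAR = 0.25

def avg_cost_index(refresh_years: int, horizon: int = 6) -> float:
    """Average relative cost per unit of compute over the horizon for a
    provider that refreshes hardware every `refresh_years` years.
    A cost index of 1.0 means year-0 technology."""
    total = 0.0
    for year in range(horizon):
        last_refresh = (year // refresh_years) * refresh_years  # generation in service
        total += 1.0 / (1.0 + IMPROVEMENT_PER_YEAR) ** last_refresh
    return total / horizon

for cadence in (1, 2, 3):
    print(f"refresh every {cadence} year(s): avg cost index {avg_cost_index(cadence):.2f}")

# Output:
# refresh every 1 year(s): avg cost index 0.61
# refresh every 2 year(s): avg cost index 0.68
# refresh every 3 year(s): avg cost index 0.76
```

Under those assumed numbers, the provider refreshing every year runs on compute that is roughly 20% cheaper on average than one refreshing every three years, a gap that economies of scale alone may not close.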
How big is big enough? And could this issue leave a megaprovider struggling to remain competitive with a host of smaller competitors? Can small providers federate in the cloud to take down a megaprovider? The picture in my mind is a huge apatosaurus being taken down by a pack of velociraptors…