Blog post

Cloud Elasticity Could Make You Go Broke

By Daryl Plummer | March 11, 2009 | 10 Comments

Service Orientation | Emerging Trends | Emerging Phenomena | Cloud

Ever had a mobile phone and gotten a bill that was way, way more than you expected? You know what I mean. The day that $700 bill comes in and your eyes bug out of your head because you could swear (and in fact you do swear – at the customer service rep) that you could not possibly have exceeded your plan minutes? Or maybe you “pay as you go” but are downright certain that you never talked for THAT long! Come on – admit it. You’ll feel better afterward. Well, don’t let that happen to your cloud computing initiative.

What is he rambling about now, you ask? Well, the gist of it is this. When you approach cloud computing, one of the things you might be seeking is elasticity of the cloud services and platform. You know: the ability to have capacity on demand and to release that capacity when you are done with it? And to pay only for what you use? It’s nice not to have all those fixed assets lying around when you want to reduce your reliance on them. But have you considered how On-Demand might affect your expenditures now that they are no longer fixed?

I call it On-Demand Overspending. Nice little name to go along with my previously named IT Overdraft Protection. That was about what some call “cloud bursting” – where you get more capacity from the cloud automatically when you run out, and you pay as you go. But the history is pretty clear on this. When we have variability of expenditures with fewer controls, we can run into overspending (BuddyTV had a nice video on this). Some research highlighted in a web article recently cited how mobile customers in the UK were overspending on their mobile plans because they had been sold plans smaller than their actual needs. These users either talked for more minutes than they realized, or talked beyond the number of planned minutes they had paid for. The same will happen, both intentionally and unintentionally, with cloud services.

As more workloads move to cloud-style deployment, there will be a corresponding, but not necessarily equal, movement of expenses from fixed capital outlay to operating outlay to cover variable use and supply. The problem is that this means a lot more variability in budgeting (which budget people don’t always like), and there may be fewer controls on who uses how much of what, and when. In fact, operating expenses already dwarf fixed capital outlay for most technology organizations, and operations maintenance costs are already high. But with a pay-as-you-go model, or a “plan” model, we will see some companies begin to overspend on compute resources simply because they use what they can use and may not have the controls to say when they should stop.

Let’s take an example. In a previous life, I delivered a system to allow legislators to print their own reports rather than relying on a developer to write them. What happened after a short time was that the legislators began printing reports multiple times in multiple formats and multiple copies. Now this was valuable to them so it was encouraged. However, it did require an increase in the capacity of the database platform, the servers, and the network bandwidth (not to mention printer ribbons and paper). The side-effect costs were higher than imagined. This is a trivial example but it illustrates a simple but important axiom. Call it Plummer’s 2nd and a half axiom of overspending – to wit, “a consumer of an unlimited capability will consume unexpected amounts”.

Now this is not to say that IT organizations will not try to put controls in place to meter and limit usage. And we know that capacity is not unlimited, even from the cloud. My colleague Lydia Leong pointed out that even Amazon limits the number of servers (née services) that you can spawn. This is only logical. But the axiom still stands, and there must first be metering in place on cloud services usage. It would be a good idea for that metering to be exposed to the ultimate consumers of the services, just as the electrical meter on your home is exposed to you. The cloud paradigm will eventually evolve a metric equivalent to the “kilowatt-hour” in order to communicate clearly and simply to business decision-makers how much service capability they are using.
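To make the “kilowatt-hour” idea concrete, here is a minimal sketch of a consumer-facing meter: several separately metered resources rolled up into a single dollar figure against a budget. All rates, resource names, and amounts are hypothetical illustrations for this post, not any provider’s actual pricing.

```python
# Hypothetical per-unit rates, in dollars. A real provider's price list would
# be far longer; these three resources are stand-ins for the idea.
HYPOTHETICAL_RATES = {
    "cpu_hours": 0.10,       # per instance-hour
    "gb_stored": 0.15,       # per GB-month of storage
    "gb_transferred": 0.17,  # per GB of outbound transfer
}

def usage_cost(usage: dict) -> float:
    """Translate raw metered usage into a single dollar figure."""
    return sum(HYPOTHETICAL_RATES[resource] * amount
               for resource, amount in usage.items())

def meter_report(usage: dict, monthly_budget: float) -> str:
    """The 'electrical meter on your home': show spend against budget."""
    cost = usage_cost(usage)
    pct = 100 * cost / monthly_budget
    return f"${cost:.2f} of ${monthly_budget:.2f} budget used ({pct:.0f}%)"

# One month of use: 720 instance-hours, 50 GB stored, 100 GB transferred.
report = meter_report(
    {"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 100}, 150.0)
# report → "$96.50 of $150.00 budget used (64%)"
```

The point of the single rolled-up number is exactly the kilowatt-hour point above: a business decision-maker can read one figure against one budget without doing the extended per-resource calculations.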

On the IT side, the metering is likely to be in place earlier, as developers build and use cloud-based resources. But again, history provides a warning. When we first began to build serious Web applications, the cost of testing – developers all hitting our web resources, and customers trying out everything they could – pushed us beyond established “plans” of use and changed our budget expectations. Developers especially are notorious for using whatever resources are within reach. They had better have limits, or your money will fly out with their ambition. And back to testing: when you use services owned by someone else, you have to test what you build, but the tests will often run against the live services (unless the provider gives you test images to use) – and you have to pay as you go. Is the cost of this testing built in? The days are gone when we could duplicate the provider’s environment in-house for testing purposes. Can your budget handle this variability as well?
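The “they had better have limits” point can be sketched as a budget guard wrapped around every billable test call: before the call runs, it is charged against a hard cap, and a runaway loop stops at the cap instead of at the end of the month. The cap and per-call price are hypothetical; amounts are in integer cents to keep the arithmetic exact.

```python
class BudgetExceeded(RuntimeError):
    """Raised when a billable call would push spending past the cap."""

class BudgetGuard:
    """Stop developers' test traffic before it blows past a spending limit."""

    def __init__(self, cap_cents: int, cost_per_call_cents: int):
        self.cap = cap_cents
        self.cost_per_call = cost_per_call_cents
        self.spent = 0

    def charge(self) -> None:
        """Charge one call against the cap, or refuse to let it run."""
        if self.spent + self.cost_per_call > self.cap:
            raise BudgetExceeded(
                f"call would exceed {self.cap} cent test budget "
                f"(spent {self.spent})")
        self.spent += self.cost_per_call

# A $1.00 test budget at a penny per call.
guard = BudgetGuard(cap_cents=100, cost_per_call_cents=1)
calls_made = 0
try:
    while True:             # a runaway test loop hitting a live, metered service
        guard.charge()
        calls_made += 1     # ...the billable request would go here
except BudgetExceeded:
    pass                    # the guard cuts the loop off at exactly 100 calls
```

It is a toy, but it captures the control that was missing in the scenario above: the limit is enforced where the spending happens, not discovered on the invoice.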

At the same time, when we go to limited plans for usage of cloud services, we will inevitably run into situations where we buy less than we actually need and have to start paying (perhaps at a premium) for usage beyond our plan. And just as with mobile phones, this will lead to a degree of variability in our costs that we cannot always predict or control. As cloud computing grows, there will be pay-as-you-go plans for services, limited plans, and unlimited plans. Not everyone will want a per-message or per-transfer price. But no matter which kind of usage plan you devise, you had better have a corresponding plan to scope usage and ensure that it stays within limited, or at least well-understood, bounds.
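The arithmetic behind “paying at a premium beyond our plan” is simple but worth seeing: a fixed fee covers an included allowance, and everything past it is billed per unit at the premium rate. The figures below are hypothetical, chosen only to show how quickly overage can dominate the bill.

```python
def plan_cost(used_units: float, base_fee: float,
              included_units: float, overage_rate: float) -> float:
    """Total monthly cost under a limited plan with premium overage pricing."""
    overage = max(0.0, used_units - included_units)
    return base_fee + overage * overage_rate

# Stay inside the plan: the cost is flat, and unused units are simply lost
# (no "rollover minutes").
in_plan = plan_cost(used_units=900, base_fee=100.0,
                    included_units=1000, overage_rate=0.25)   # 100.0

# Exceed the plan by 60%: 600 overage units at the premium rate, and the
# bill is 2.5x the base fee.
over_plan = plan_cost(used_units=1600, base_fee=100.0,
                      included_units=1000, overage_rate=0.25)  # 250.0
```

This is the mobile-phone trap in miniature: the flat fee makes the cost feel predictable right up until usage crosses the allowance, at which point every additional unit lands at the premium rate.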

If you do not, you may find there ain’t no “rollover minutes” left over to bail you out.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.



  • Daryl,

    Your thoughts are a great sanity check amidst a lot of noise proposing that cloud is the savior of IT (and its budgets).

    I am particularly intrigued by the mention that your “colleague Lydia Leong pointed out that even Amazon limits the number of servers (nee services) that you can spawn.” Assuming the cloud is “limitless” would be folly for enterprise customers and it sounds as if those limitations are far more damaging to scalability than the artificial financial ones imposed in the local data center.

    Thanks for keeping us grounded.


  • Daryl:

    You might be interested in something I wrote regarding this topic some months ago titled “EDoS : Economic Denial of Sustainability”


  • Daryl,

    Excellent analysis. We have spent some time trying to analyze the potential for just this type of problem. The Amazon model for EC2 seemed a bit misleading to us, as the unit price is so small and most consumers don’t bother to do the extended calculations. As such, we’re trying to be cognizant of user behavior patterns in our service, and to develop ways to notify customers when we think they may be unwittingly spending resources (idle time notifications, detailed usage reporting, etc.). Of course, we don’t want to be intrusive, so companies who choose to provide “cloud” or SaaS services, like ours, need to walk a fine line between reporting and nagging. Analyses like yours really help us understand this new world and how we can best serve the customer in it. Thanks very much.

  • Actually metering could already be exposed during development & test and tracked during production via activity based costing.

    A Unified Approach to Performance Management and Cost Management for Cloud Computing – An activity-based costing solution that assigns costs to applications, services and components based on resource consumption.

    ABC for Cloud Computing


  • Daryl,

    There is probably an inclination to overspend, but only if one is not disciplined. For instance, when cell phone minutes are metered, people are careful about using them. They watch how many they use and whether each call is necessary.

    Cloud elasticity is needed when it is difficult to predict what demand will be. Demand is a function of many things – but primarily price. Again, cell phone plans make perfect sense here, where there are price tiers.

    So it is possible to control spending if demand is predictable. Amazon’s new computing-power reservation scheme is a step in the right direction to address this very concern.

    Ranjit Nayak

  • Daryl Plummer says:

    Mark, thanks for the comment. This is an area that will evolve in interesting ways. One wonders if people will be careful with their metered use of services or if they will stay away from cloud services. I hope the former, not the latter.

  • Daryl Plummer says:

    Thanks, Ranjit. I do believe that some will be disciplined but I fear that many will not. Given the tendency to allocate budget in a fixed form today, there will be a transition period between that and a variable pricing model. I suspect that many will opt for a fixed pricing plan – which will lead to unused resources being paid for.

  • Daryl Plummer says:

    Thanks Lori.

  • Daryl Plummer says:

    Thanks, Christofer. I checked it out. Interesting.

  • Daryl Plummer says:

    Thanks, William, for the links. Good stuff.