Ever had a mobile phone and gotten a bill that was way, way more than you expected? You know what I mean. The day that bill for 700 dollars comes in and your eyes bug out of your head because you could swear (and in fact you do swear – at the customer service rep) that you could not possibly have exceeded your plan minutes? Or maybe you “pay as you go” but are downright certain that you never talked for THAT long! Come on – admit it. You’ll feel better afterward. Well, don’t let that happen to your cloud computing initiative.
What is he rambling about now, you ask? Well, the gist of it is this. When you approach cloud computing, one of the things you might be seeking is the elasticity of cloud services and platforms: the ability to have capacity on demand, to release that capacity when you are done with it, and to pay only for what you use. It’s nice not to have all those fixed assets lying around when you want to reduce your reliance on them. But have you considered how on-demand consumption might affect expenditures that are no longer fixed?
I call it On-Demand Overspending. Nice little name to go along with my previously coined IT Overdraft Protection. That one was about what some call “cloud bursting” – where you automatically get more capacity from the cloud when you run out, and you pay as you go. But history is pretty clear on this: when expenditures are variable and controls are few, we run into overspending (BuddyTV had a nice video on this). Research by Onecompare.com, highlighted in a recent web article, found that mobile customers in the UK were overspending because they had been sold plans smaller than their actual needs. These users either talked for more minutes than they thought they did, or talked beyond the planned minutes they had paid for. The same will happen, both intentionally and unintentionally, with cloud services.
As more workloads move to cloud-style deployment, there will be a corresponding, but not necessarily equal, movement of expenses from fixed capital outlay to operating outlay to cover variable use and supply. The problem is that this means a lot more variability in budgeting (which budget people don’t always like), and there may be fewer controls on who uses how much of what, when. In fact, operating expenses already dwarf fixed capital outlay for most technology organizations, and operations and maintenance costs are already high. But with a pay-as-you-go model, or a “plan” model, we will see some companies begin to overspend on compute resources simply because they use what they can use and may not have the controls to say when they should stop using.
Let’s take an example. In a previous life, I delivered a system that allowed legislators to print their own reports rather than relying on a developer to write them. What happened after a short time was that the legislators began printing reports multiple times, in multiple formats, and in multiple copies. Now, this was valuable to them, so it was encouraged. However, it did require an increase in the capacity of the database platform, the servers, and the network bandwidth (not to mention printer ribbons and paper). The side-effect costs were higher than imagined. This is a trivial example, but it illustrates a simple yet important axiom. Call it Plummer’s 2nd-and-a-half axiom of overspending – to wit, “a consumer of an unlimited capability will consume unexpected amounts.”
Now, this is not to say that IT organizations will not try to put controls in place to meter and limit usage. And we know that capacity is not unlimited, even from the cloud. My colleague Lydia Leong pointed out that even Amazon limits the number of servers (née services) that you can spawn. This is only logical. But the axiom still stands, and there must first be metering in place on cloud services usage. It would be a good idea for that metering to be exposed to the ultimate consumers of the services, just as the electrical meter on your home is exposed to you. The cloud paradigm will eventually evolve a metric equivalent to the “kilowatt-hour” in order to communicate clearly and simply to business decision-makers how much service capability they are using.
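To make the electrical-meter analogy concrete, here is a minimal sketch of what a consumer-facing usage meter might look like. Everything in it is an assumption for illustration: the “unit-hour” stands in for whatever kilowatt-hour-style metric eventually emerges, and the class name and threshold are made up.

```python
from dataclasses import dataclass

# Hypothetical sketch: a consumer-facing meter, analogous to the electrical
# meter on a house. "Unit-hours" is a stand-in for whatever kilowatt-hour-
# style metric the cloud paradigm eventually settles on.
@dataclass
class UsageMeter:
    alert_threshold: float  # unit-hours at which to warn the consumer
    consumed: float = 0.0   # running total, visible to the consumer

    def record(self, units: float, hours: float) -> float:
        """Record a usage event and return the new running total."""
        self.consumed += units * hours
        return self.consumed

    def over_threshold(self) -> bool:
        return self.consumed >= self.alert_threshold

meter = UsageMeter(alert_threshold=100.0)
meter.record(units=4, hours=10)  # 4 servers for 10 hours -> 40 unit-hours
meter.record(units=8, hours=10)  # a burst: 8 servers for 10 hours -> 120 total
print(meter.consumed, meter.over_threshold())  # 120.0 True
```

The point is not the arithmetic but the exposure: the running total and the threshold are visible to the consumer before the bill arrives, not after.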
On the IT side, the metering is likely to be in place earlier, as developers build and use cloud-based resources. But again, history provides a warning. When we first began to build serious Web applications, the cost of testing (developers all hitting our web resources, and customers trying out everything they could) pushed us beyond established “plans” of use and changed our budget expectations. Developers especially are notorious for using whatever resources are within reach. They had better have limits, or your money will fly out with their ambition. And back to testing. When you use services owned by someone else, you have to test what you build, but the tests will often run against the live services (unless the provider gives you test images to use) – and you pay as you go. Is the cost of this testing built in? The days are gone when we could simply duplicate the provider’s environment in-house for testing purposes. Can your budget handle this variability as well?
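A back-of-envelope calculation shows how fast pay-per-call testing adds up. All of the figures below are hypothetical, invented for illustration; plug in your own test-suite size and your provider’s actual per-call price.

```python
# Back-of-envelope sketch (all figures hypothetical): what a test suite
# costs when every run hits a live, pay-per-call cloud service.
def monthly_test_bill(calls_per_run: int, runs_per_day: int,
                      price_per_call: float, days: int = 30) -> float:
    """Total monthly charge for tests executed against live services."""
    return calls_per_run * runs_per_day * price_per_call * days

# 250 service calls per test run, 40 runs a day, a tenth of a cent per call.
bill = monthly_test_bill(calls_per_run=250, runs_per_day=40,
                         price_per_call=0.001)
print(bill)  # 300.0 per month just for testing
```

None of those numbers is large on its own, which is exactly why the line item sneaks past the budget until the bill arrives.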
At the same time, when we go to limited plans for usage of cloud services, we will inevitably run into situations where we buy less than we actually need and have to start paying (perhaps at a premium) for usage beyond our plan. And just as with mobile phones, this will lead to a degree of variability in our costs that we cannot always predict or control. As cloud computing grows, there will be pay-as-you-go plans for services, limited plans, and unlimited plans. Not everyone will want a per-message or per-transfer price. But no matter which kind of usage plan you devise, you had better have a corresponding plan to ensure that usage itself stays within limited, or at least well-understood, bounds.
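The mobile-bill dynamic described above can be sketched in a few lines. The plan size, base price, and overage premium here are invented numbers, not any real provider’s rates; the shape of the formula is the point.

```python
# Illustrative only: the plan hours, flat price, and overage rate below are
# made-up numbers, not any real provider's pricing.
def monthly_cost(used_hours: float, plan_hours: float,
                 plan_price: float, overage_rate: float) -> float:
    """Cost under a limited plan: a flat price up to plan_hours, then a
    premium per-hour rate for everything beyond the plan."""
    overage = max(0.0, used_hours - plan_hours)
    return plan_price + overage * overage_rate

# A plan sized below actual need: 1,000 plan hours bought, 1,400 used.
cost = monthly_cost(used_hours=1400, plan_hours=1000,
                    plan_price=500.00, overage_rate=0.90)
print(cost)  # 860.0 -- 72% over the 500 that was budgeted
```

That is the cloud version of the 700-dollar phone bill: the flat portion was budgeted, the overage was not.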
If you do not, you may find there ain’t no “rollover minutes” left over to bail you out.