by Gregor Petri | April 30, 2013 | 1 Comment
Even though today’s crowning ceremony in Amsterdam enjoyed some modest sunshine, temperatures across Europe are at an all-time low. A more reliable indication that spring has started is the annual publication of the Cool Vendor reports.
For the first time this series includes a note dedicated to cloud activity in Europe. The “Cool Vendors in the European Cloud Computing Market, 2013” note describes four European vendors making a difference in the local and global cloud market. The report also points to several other European cool vendors featured in other notes, such as “Cool Vendors in Cloud Services Brokerage Enablers, 2013”, “Cool Vendors in Cloud Services Brokerages, 2013” and “Cool Vendors in Cloud Management, 2013”.
One of the featured cool vendors is directly involved in the Amsterdam crowning ceremony by hosting the social crowd-control app that gives the many visitors real-time insight into the movements and current volumes of people at the different venues. A nice example of a nexus application that gathers social information from mobile devices, performs real-time analysis of that big-data-style information in the cloud and makes the results available again to the crowd via a mobile app.
Category: Uncategorized Tags:
by Gregor Petri | March 24, 2013 | 2 Comments
The ‘third’ in this cloud ménage à trois is the network, which is joining storage and compute as a software-defined resource that can be allocated on demand through a self-service API or portal. As a term, “software defined” is in a race to catch up with established but equally vague terms such as “on demand” and “as a service”, and is surfacing in all kinds of combinations.
The current frontrunner – Software Defined Networking (SDN) – might very well already be the most hyped term of 2013. All network technology providers are busy either building or acquiring SDN capabilities but the largest acquisition to date (for more than one billion dollars) was done by a virtualization platform provider. Meanwhile most network providers are looking to leverage SDN to increase the agility and reduce the cost of the services they provide.
An important reason for the interest in SDN is the size of the market it is promising to disrupt. The total revenue for network and communication services makes up a large part of total worldwide IT spending. Of the total $3.7 trillion in IT spending in 2013, 46 percent will be spent on telecom services (next to 8 percent on software, 22 percent on hardware and 25 percent on IT services). Any development that influences such a large proportion of total cost can count on great interest, also from the telecom industry itself. At the SDN World Congress in Darmstadt, no fewer than 13 of the largest telecommunications companies announced a joint initiative to promote ‘Network Function Virtualization’. This initiative encourages network technology providers to enable their network functions to run on (clouds of) industry-standard servers instead of on proprietary hardware appliances.
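For concreteness, those percentages can be turned into rough dollar figures. A minimal sketch (total and shares are the figures quoted above; note that the quoted shares sum to 101 percent due to rounding):

```python
# Rough dollar breakdown of 2013 worldwide IT spending,
# using the percentages quoted above (they sum to 101% due to rounding).
total_spend = 3.7e12  # USD

shares = {
    "Telecom Services": 0.46,
    "IT Services": 0.25,
    "Hardware": 0.22,
    "Software": 0.08,
}

for category, share in shares.items():
    # e.g. Telecom Services comes to roughly $1.7 trillion
    print(f"{category}: ${share * total_spend / 1e12:.2f} trillion")
```

Even with the rounding, the point stands: telecom services alone represent well over a trillion dollars a year, which is why a technology promising to disrupt that line item draws so much attention.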
The main advantage of a software-defined network – just like any other kind of software-defined infrastructure construct – is that it no longer consists of dedicated and proprietary boxes with names like firewall, load balancer, router, etc. If an organization suddenly needs twice as many firewalls as load balancers tomorrow (or vice versa), it can just provision different software on its existing hardware. In addition, everything that is controlled by software can be automated much more easily than something that is based on hardware. This offers benefits not just to providers but also to end users, as it can reduce the time needed to reconfigure a network to changing needs from several days or weeks to a few hours, or even less. As a result the network can become as dynamic a cloud resource as compute and storage already were.
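To illustrate that flexibility, here is a deliberately toy sketch – the `Server` class, the role names and the `assign_roles` helper are invented for this example, not any real SDN or NFV API – of how the firewall-to-load-balancer ratio could be changed purely in software on a fixed pool of commodity servers:

```python
# Hypothetical sketch: network functions as software roles on a pool
# of identical commodity servers, reassignable on demand.

class Server:
    def __init__(self, name):
        self.name = name
        self.role = None  # e.g. "firewall", "load_balancer", "router"

def assign_roles(pool, demand):
    """Assign roles to servers according to current demand,
    e.g. {"firewall": 4, "load_balancer": 2}."""
    servers = iter(pool)
    for role, count in demand.items():
        for _ in range(count):
            next(servers).role = role

pool = [Server(f"srv{i}") for i in range(6)]

# Today: equal numbers of firewalls and load balancers.
assign_roles(pool, {"firewall": 3, "load_balancer": 3})

# Tomorrow: twice as many firewalls as load balancers --
# same hardware, just different software provisioned onto it.
assign_roles(pool, {"firewall": 4, "load_balancer": 2})

print([s.role for s in pool])
```

The design point is simply that the role lives in software state rather than in the box: changing the mix is a reassignment, not a procurement cycle.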
But let’s not forget that for many organizations the transition to “as a service” and “on demand” began in the network area. Back in the eighties, companies started to give up their own in-house controlled and managed wide area networks in exchange for the use of shared packet-switched networks, often based on X.25 and commonly called the “public data network”. Maybe something to remember for those currently afraid of “public clouds”?
PS Press release March 25: Gartner Says Software Defined Networking Creates a New Approach to Delivering Business Agility
by Gregor Petri | February 8, 2013 | Comments Off
IT is an acronym-crazed world. So crazy that sometimes – when running out of three-letter ones – we simply recycle them or add sequence numbers. Remember MRP, which used to mean Material Requirements Planning, but then became Manufacturing Resource Planning (called MRP II to avoid confusion), only to resurface – a couple of years and a few trillion in investments later – as ERP.
In that era the term BPR also became popular. BPR stood for Business Process Re-engineering (see for example Hammer, M. and Champy, J. A.: (1993) Reengineering the Corporation: A Manifesto for Business Revolution), a movement that in my perspective* really gained steam from the realization that throwing new technology (ERP) at existing processes only offered limited improvement potential. As Business Process Re-engineering often resulted in massive redundancies and layoffs, BPR got kind of a bad rap, especially among employees and unions (remember those?). Today the term Business Process Redesign is a more socially acceptable reincarnation, and even though that could fly under the same acronym (e.g. BPR II), people seem to prefer to call it by its full name.

Nowadays all the buzz is around cloud. And here too people are finding that just throwing this new idea at existing processes is not bringing the benefits expected, not to mention that the cost of absorbing it into our existing ways of working is very high (remember how much we spent trying to force-fit ERP packages into our existing processes by modifying them beyond recognition, only to reimplement these packages a few years and a few releases later in a more off-the-shelf manner?).
And increasingly the topic of adapting processes to the cloud is becoming popular. Last month my note “Cloud Solution Providers Ignore Customer Processes at Their Own Peril” was published, and this month KPMG published a study called “The cloud takes shape”. In that study KPMG uses the term Business Process Redesign and states: “One of the most important lessons uncovered by this year’s research is that business process redesign must occur in tandem with cloud adoption if organizations hope to achieve the full potential of their cloud investments.” In my note I describe the need to challenge the status quo on processes using the term Cloud Process Re-engineering (the cloud generation is likely to have a less emotional response to re-engineering, while some of its disruptiveness still rings through).
Some IT departments I speak to seem to think of cloud computing largely as a way to move their applications to the cloud. But if an enterprise organization with thousands of applications running in its datacenter moves those apps to the cloud, it will still have thousands of apps. More importantly, those apps will do the same things they did before, just running somewhere else (so you might ask “why bother?”, as the savings on hardware, housing and electricity are small compared to the overall cost, and today’s management is much more interested in new (revenue) possibilities and agility than in efficiencies that in the end form a mere rounding error in the company’s overall bottom line). If organizations instead look at each process and decide how the cloud can optimally support or even completely eliminate that process (for example through Business Process as a Service (BPaaS) or by participating in multienterprise applications), the potential impact can become much more significant.
The idea of optimizing processes to new realities – for example through Business Process Management (BPM) – is not new and is covered broadly in Gartner’s existing body of research. Cloud practitioners – both consumers and providers of cloud services – should take an interest in this softer, more human, side of the cloud.*
PS. Whether CPR will turn out to become the acronym that replaces BPR is debatable, especially as it seems to be already firmly taken by the medical profession (for both the life-saving CardioPulmonary Resuscitation and for Computer-based Patient Records).
by Gregor Petri | January 2, 2013 | 1 Comment
Ecosystem could be “the word” of 2013, if only vendors, providers, ISVs and other technology conglomerates would stop acting in a “This Town Ain’t Big Enough for the Both of Us” way.
As an App user* I am increasingly amazed, affected and annoyed by what in my view can only be described as turf wars between various technology providers. Increasingly, cooperation – which originated in a desire for a quick time to market – is being replaced by outright competition driven by a desire to own the full stack. Some recent examples:
- Phone manufacturers replacing perfectly good map applications with in-house brews*
- Search engines wanting to become social networks*
- Social networks* and web retailers* wanting to become advertising specialists
- Photo filtering apps opting out of 140-character event timelines* and, vice versa, event timeline apps adding photo filtering*
- Email providers abandoning the use of third party sync to enterprise messaging apps*
- Providers replacing third party music and movie services with in-house variants limited to their stack*
- Just about everyone adding their own inline chat and messaging functionality*
- Not to mention the various patent wars companies are waging, trying to block each other out of their home markets*
Now, I am not against healthy competition (on the contrary), but as a consumer I fail to see how these developments benefit me. It seems many companies are answering the market’s desire for integration by forcing consumers into their own closed, single-stack shops.
With cloud computing rapidly breaking down the walls between traditional industry segments, times are confusing for providers. Where we used to buy hardware and software from different vendors and solicited help – to get the two to work together – from yet a third category of providers, these demarcation lines are now rapidly blurring. Hardware and software are merging into services, while at the same time we see phones behaving like cameras, tablets behaving like PCs and TVs behaving like tablets. Naturally companies are worried about where in that blurring supply chain the largest profits will fall, and as a result everyone seems determined to own the whole chain, wall to wall and soup to nuts.
But increasingly the limiting factor in market success is no longer the ability of providers to supply functionality; it is the capability of consumers to absorb functionality. And – at least at my age – once I have mastered the science of how to color my pictures, how to create a playlist, how to interact socially, how to access my email, etc., etc., I just want to be able to continue to do so, but in a seamlessly integrated fashion. I don’t want to replace it with a new app that does virtually the same thing, but in a different way.
Just a couple of years ago there was a lot of talk and enthusiasm about “Open Innovation”, where companies could make the market pie bigger by working together (instead of fighting over who got which piece of the existing pie). To some extent it is the old “single vendor” versus “best of breed” dilemma: do I concentrate on having a good-enough homogeneous product that does it all, or do I focus on building the best product for my functional area and work/integrate closely with others (at the risk that their area turns out to be more profitable – in market speak: has a better business model)? In other words, do I go integrated/closed/proprietary or more interoperable/open/standard?
My belief (or at least my hope) is that companies that act more from the perspective of consumers/customers than from their own financial/shareholder perspective will eventually come out better. Note however that in this context it is very important to understand exactly who the customer is: is it the user buying access to the service, or the advertiser buying access to the user (in which case the user is merely the product being sold)? If the app economy is to continue to grow, it will need to increasingly address the primary customer (the users). And if (granted, a big if) the market is a bit like me, it will prefer ecosystems of leading open apps over fully integrated closed stacks.
Traditionally, before the current trend towards exclusion instead of collaboration took hold, the Silicon Valley pressure cooker was the center of such collaboration. Maybe Europe – being a collaborative environment by nature – can step into its place and use this as a much-needed differentiator against the increasingly mega-large, mega-integrated and mega-closed conglomerates from Asia and North America.
by Gregor Petri | December 1, 2012 | Comments Off
Those of you who have followed my blog for a while know that the idea of applying manufacturing best practices to cloud computing is a favorite topic of mine*. This week the topic popped up in a fireside chat (the popular term for keynotes delivered from a set of armchairs, often with no fire in sight) between Amazon’s CTO Werner Vogels and CEO Jeff Bezos at re:Invent, the first Amazon Web Services customer conference.
I won’t cover the conference here – many blogs and media sites already did – but in the chat Bezos made a number of interesting points on how principles of lean manufacturing are guiding Amazon’s overall endeavors and how cloud computing both supports and benefits from this approach. He discussed how – for developers – this approach turns the cost of infrastructure operations from an abstract, overhead-like concept into a very visible direct cost they can directly influence. And how the cost of quality is always lower than the cost of non-quality, as fixing problems later – after the product has shipped to the customer – is many times more expensive than doing things right the first time. But also how cloud computing allows companies to continuously improve products and processes (similar to how factory workers at Toyota were empowered to stop the production line and jointly improve the process). He also stressed the importance of focusing on customers and their requirements (by continuously measuring and providing feedback loops) instead of focusing on competitors or on winning.
A 70/30 rule
In the chat Bezos also discussed his assertion from an earlier interview that – with customers being more connected through social media and through the transparency that the internet and big data are bringing to all markets – the effective way of doing business is flipping from spending 30% on creating a product and 70% on “shouting about it” (making sure – through marketing and sales activities – that people know about it and buy it) to the reverse. A new reality, where it makes more sense to spend the majority of energy, effort and cost on building the best possible product and a significantly smaller effort on communication and delivery. In other words, the more transparent markets become, the more product quality (being fit for purpose) will determine success.
This in turn makes working on product quality (through principles such as Lean) more important, but – taking this beyond what was said on stage – it is also likely to drive total costs down and result in lower overall prices. For the enterprise IT industry – where a lot of the product cost stems from the lengthy sales and implementation cycles that traditional complex enterprise products require – this may turn out to be a very disruptive development. In fact, earlier in the week Amazon had a panel of its partners discuss their experiences, and – although they all had created some impressive new cloud successes for their customers – you could sense they all realized that going forward the world was no longer going to be what it was before.
A valid question to ask is whether the whole enterprise IT industry will follow this trend. In other words: what percentage of large enterprise organizations will be interested (and able!) to adopt the self-service, supermarket model of the cloud (see also A Cloud That Cares? Or About Eating Your Cloud And Having It Too)? Will there be a large percentage that prefers ready-made meals (instead of home cooking using supermarket ingredients – one could think of SaaS solutions in this context), or will there be a significant number of organizations that – voluntarily or forced by a lack of in-house capabilities – continue to prefer a full-service restaurant model, where the provider does not just supply the ingredients but also does most of the day-to-day work?
Let’s end with a question you may want to ask yourself: When looking at your markets and customers, how fast are they moving from a 30/70 to a 70/30 model and how prepared is your organization for that?
* In one of my first Gartner blog posts I wrote about the Rise of IT-dustrialization; earlier publications include “LEAN and the Art of Cloud Computing Management” (2010) and my LeanITmanager blog.
by Gregor Petri | November 9, 2012 | Comments Off
Cloud computing was again a large topic at this year’s annual Gartner symposium in Barcelona. It shared the limelight with the other three forces of the Nexus (social, mobile and information) but managed to pop up in most conversations and presentations.
For those who missed it – and for attendees who did not manage to be in all parallel sessions at once – video recordings can now be accessed at gartnereventsondemand.com (highlights available after registering an email address, full sessions for registered attendees).
Personally I did a couple of dozen 1-on-1s, seven larger sessions (presentations, roundtables and clinics) and met with several more people over breakfast and dinner. With breakfasts at the time most Americans are accustomed to and dinners taking place according to local tradition, you can imagine these were pretty lengthy days. Not that we noticed, because the 1-on-1 rooms (unlike last year) had no daylight. Luckily we managed to get some fresh air as we popped over to the special CSP (Communication Service Provider) track that was running across the street.
Of course this special track also touched upon cloud computing as an opportunity for CSPs, using the recently published (and now publicly available) high-level overview of Gartner’s Advice for CSPs Becoming Cloud Service Providers.
On day two I popped into the IT Expo to present the new MQ for Cloud Infrastructure as a Service, the first Gartner MQ I got to participate in creating (note: this is not a public document, but there was some press coverage and a number of providers are offering access via their websites). Presenting at the expo is different, as the audience sits in a kind of fishbowl wearing headsets, because the noise from the surrounding stands, raffles and demos is at times deafening.
My conclusion from four days (in which I managed to speak to maybe 1% of the total attendees and a slightly larger percentage of the participating providers) is that participants are increasingly aware of the slower pace of cloud in Europe. Providers are pushing the cloud envelope harder than (most) end users, who – as we saw in the outsourcing and cloud clinics that we ran throughout the event – are often still looking more for a restaurant with cloud views than for self-service cloud supermarkets.
by Gregor Petri | October 26, 2012 | 1 Comment
At the end of last month the EU released its plans for “Unleashing the Potential of Cloud Computing in Europe”. But although the document(s) – just like EU commissioner Kroes in this video – do a good job of describing in non-technical terms what cloud is and why Europe should care about having a competitive cloud position, they kind of stop there.
Even though it defines three key actions – around standards, terms and the public sector taking a lead role – most of the described actions consist of softer items such as “promoting trust by coordinating with stakeholders”, “identifying best practices”, “promoting partnerships” and “investigating how to make use of other available instruments”. Now of course European cloud computing can benefit from funding reserved for other EU initiatives such as the Connecting Europe Facility, and from side initiatives such as the “Opinion on Cloud Computing” published by the Article 29 working party, which gives privacy-related contracting guidance. But in general the recently published plan seems to be more about what could and should be than about what is or will be.
Meanwhile, both regular and social media seem to be increasingly negative regarding the progress that Europe is making. With the North American continent clearly being the biggest cloud geo and Asia/Pacific – also thanks to its many emerging economies – claiming the position of fastest-growing cloud geo, that leaves only less desirable labels – such as slowest or most fragmented – to describe the state of cloud activities in Europe.
Continuing to look at why things are harder and slower in Europe will just further reinforce negative sentiments; it is better to focus on European examples that are showing success. In “Switch: How to Change Things When Change Is Hard” the brothers Dan and Chip Heath offer an engaging recipe for doing just that. In their book they describe how, by identifying “bright spots” (small pockets of positive exceptions), potential future success scenarios can be discovered. Next, they encourage promoting very specific actions instead of giving broad directions. For example: instead of asking people to eat healthier (too vague, too hard), they suggest healthcare activists promote a specific action such as “buying skimmed instead of full-fat milk” (simpler, easier, more actionable, more effective).
So in Europe, instead of pushing cloud as a concept (too vague, too hard), why not focus on identifying a few very specific and very simple scenarios, including their specific benefits? Next, Europe can concentrate on removing any (legal, fiscal, economic, cultural) barriers to these specific scenarios and promote these few clearly and broadly. And in doing so it is best to follow the Heath brothers’ advice to promote this both on a rational and on an emotional level (or as the brothers put it eloquently: both “Direct the Rider and Motivate the Elephant”).
PS What potential European cloud Bright Spots would you suggest (using the comment field below)?
by Gregor Petri | September 29, 2012 | 3 Comments
Although self-service – together with elasticity, pooling/sharing, etc. – is a defining attribute of cloud computing, many of the companies expressing an interest in cloud computing do not seem to be aware of that.
In fact, when asked who they expect to provision their services to the cloud, who will monitor those services’ performance and availability, and who they expect to take action if something goes wrong, a majority of the companies asked look somewhat surprised by the question, as they simply assumed that their service provider would do so.
This is a bit like going to a supermarket (a typical self-service facility), pointing to the ingredients you like and expecting the cashier to clean, cook and serve them for you. The name we generally use for such a service, however, is “restaurant”, and it comes with significantly different expectations and pricing, as demonstrated by the price of a bottle of wine in a restaurant versus that same bottle at a supermarket (which is one reason restaurants prefer to buy from exclusive wine merchants and not to put bottles on their wine list that are available in retail).
The supermarket versus restaurant analogy may sound like a silly comparison, but it is useful to further illustrate the difference between cloud computing and more traditional IT services. It is not just that the product is vastly different: a raw steak on styrofoam and a brown bag of vegetables versus a prepared steak – cooked to our liking – on a nice plate, brought to our table with a smile.
The much more telling difference lies in what we would reasonably expect to happen if something goes wrong. For example: if a supermarket burns down, we would expect the supermarket to – first and foremost – concentrate on building a new facility so it can restore its service. We would not expect the supermarket to call us to help us plan tonight’s meal or offer any alternatives on an individual basis. Likewise, expect cloud providers to focus primarily on getting their cloud back up in case of problems.
However, if a restaurant burns down, we would find it reasonable that it would call the people who have made reservations. And if we had booked a wedding there for next weekend, we would expect the restaurant to help us find a new facility, help us agree on the new menu with the new chef and reimburse any additional cost (unless we change the menu from steak to lobster).
Self-service in most cases means the provider is not aware of what the individual customer is using its product or service for. As a result it does not get involved with individual outcomes (as it simply does not know them). This separation is also not uncommon in self-service infrastructure and is even used as a line of defense in cases where that infrastructure turned out to be used in “less than legal” ways.
Self-service supermarkets have a lot of benefits that restaurant customers may also be interested in: choice, speed, price, no need to make a reservation, ample parking, to name just a few. So is there a way to eat our cake and have it too?
One option is the self-service restaurant, like the ones you may find along most European highways. Service tends to be fast, there is no need to make a reservation, and if the restaurant happens to be full, we just drive to the next one. Here self-service is the overriding attribute. It’s a supermarket with cooked foods, in most cases without the price advantage. And we probably would not plan to hold something important, like a wedding (or another mission-critical event), there.
An option closer to the desired experience may be eating at a full-service restaurant that sources from a supermarket. Such a restaurant could source its ingredients on demand (by simply walking across the street), it could offer enormous choice and it would in most cases not run out of ingredients. It would however likely have to pay retail prices for these ingredients (supermarket margins are thin and they do not have a lot of room for additional discounts, especially if you don’t buy in bulk). But retail prices might conceivably still be lower than what a restaurant would normally pay through its traditional channels (like the exclusive wine merchant).
As the prices of the underlying ingredients are now transparent, end-user pricing becomes an issue. Do we pay an all-in price for a cooked meal (including seat and cutlery), or do we pay separately for cooking, serving and use of the facilities? In North America paying separately for service is still customary, and although not too long ago – in the southern parts of Europe – you would be charged separately for the couvert and the service, European restaurants have now largely evolved to all-in pricing. The drawback of such fully inclusive prices is that people compare them to the publicly known ingredient prices (a problem not unfamiliar to many an IT manager who has tried to explain the difference between the TCO-based price of the fully managed PC their department offered and the price of that same PC in the local mall or – even more pronounced – from an online retailer).
But – just like cloud computing is not all about cost – eating out is not just about price; it is also about agility, productivity and resilience. By not having to spend time cooking, people can have more quality time or – more likely – spend more time at work: finishing that last assignment, winning that additional customer. This does however mean we are now dependent on the restaurant for a pretty essential part of our life: eating. So what happens if our restaurant – which now sources from a third-party self-service supermarket – runs into problems?
As we are not buying this as self-service, we would expect the restaurant to care about the outcome (us being hungry or fed) and take DR measures in case something goes wrong. But to what extent is it fair to expect that the restaurant can reasonably do this, as it is now dependent on the supermarket? Can the restaurant stay open if the supermarket closes for a holiday, or if I order something after supermarket closing hours? Or is it relegated to being a middleman, no longer able to control its own SLAs?
Having the supermarket and the restaurant under the same management (meaning the restaurant guys have keys to the supermarkets’ back door) can help. But only if the supermarket manager allows his restaurant colleagues to interfere in his operations and impact his targets and quality (something not very common in larger organizations).
A smart restaurant would likely source from two or more supermarkets – preferably from separate chains, located in different streets – so it will be able to serve its customers even if one of them closes or burns down. And maybe that should also be the conclusion if you are looking for a non-self-service cloud (even though the definition would argue there is no such thing). In other words, if we want to eat our cloud and have it too.
Disclosure: Before getting caught up in IT and clouds, the author worked as a manager at the largest restaurant in the Netherlands, which indeed did burn down during that period but managed to restore its services within a week and keep running throughout the reconstruction.
by Gregor Petri | August 26, 2012 | Comments Off
Innovations are commonly judged by how fast they reached 50 million users (radio, 38 years; TV, 13 years; the internet, 4 years; the iPod, 3 years; etc.). Another way to look at this is by time equivalents: if one dog year equals 7 human years, then how many years of traditional IT do we travel in one cloud year?
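A quick back-of-the-envelope on those figures (the adoption numbers are the ones quoted above; the 7x “cloud year” multiplier is just the playful dog-year assumption, not a measured value):

```python
# Years each innovation took to reach 50 million users (figures above).
years_to_50m = {"Radio": 38, "TV": 13, "Internet": 4, "iPod": 3}

# Speed-up of each innovation relative to radio.
for name, years in years_to_50m.items():
    print(f"{name}: {38 / years:.1f}x faster than radio")

# The playful dog-year analogy: if one cloud year equaled 7 traditional
# IT years, seven cloud years would span 49 years of traditional IT.
CLOUD_YEAR = 7  # assumed multiplier, by analogy with dog years
print(f"7 cloud years ~ {7 * CLOUD_YEAR} traditional IT years")
```

The iPod, by this crude measure, reached its audience more than twelve times faster than radio did, which gives a feel for how steep the acceleration curve already was before cloud.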
This cloud year we saw quite a lot of change – also from existing mega vendors entering the cloud market – but did it match 7 years of progress in traditional IT (taking us roughly from SOA until today)? And do we really expect the next three years to bring as much change as we have seen since the days of client-server, or the next seven years to be the equivalent of the journey from the days of the mainframe to today?
Now, Einstein pointed out that speed is relative to the point of view of the beholder. In that spirit, one of my former employers handed out gold watches after 10 years, instead of after the customary 20, because he felt “it all moves a little faster here”. I never made the 10-year mark there (nor would that have mattered, as they changed the policy in my seventh year), but I did make my first A-year last month (A as in Analyst). No watch here either, just some musings on time.
Talking about musings on time: several dog years (and a few employers) earlier, I wrote a small time perspective on the ERP market called “The Best Years” (named after the little rural town called Best, where we had our office until then). I did not keep a copy of that internal note, but the main theme was that in just a few years the way customers procured ERP had completely changed. From vendors leading the sales process, often doing custom demos that wowed prospects with fancy features (features that, by the way, seldom got implemented post-sale), to cookie-cutter selection cycles where third-party consultants fed vendor profiles and offerings through standardized spreadsheets generating normalized scores. Vendor offerings became more and more comparable and our RFP responses were demoted to column fodder for the Lotus 1-2-3 sheet (yes, that was some time ago, when there was still a market and not an oligopoly).
The question is whether the cloud market (or, more specifically, the Infrastructure as a Service market) has started on a comparable journey, and at what speed. At Gartner we are currently working in full swing on the next iteration of the Cloud IaaS Magic Quadrant, which gives us an upfront view of the convergence and comparability (or even compatibility) of the various offerings. For those of you interested in the MQ process, I suggest reading the recent blog posts of my colleague Lydia Leong, who shares some useful background and pointers.
Closing out my first A-year, I also got to write technology profiles for a few of the Hype Cycle reports (such as the ones on PaaS, on CSP infrastructure, on the Telecommunications Industry, and for the brand-new Hype Cycle dedicated to Cloud Services Brokerage, an increasingly popular topic, also for European CSPs). These Hype Cycle reports reflect our official take on the speed (years to mainstream adoption) and impact (low, moderate, high or even transformational) of such new developments. More on these later.
First, however, I am putting some focus on increasing my personal speed, as some deadlines (like the one for the upcoming Barcelona Symposium) are approaching rapidly.
by Gregor Petri | June 28, 2012 | 2 Comments
Not all major changes are visible to the naked eye. Standing next to a glacier, it is difficult to determine its direction (is it growing or shrinking across the seasons), and watching continents move requires even more stamina from the casual observer. Luckily, this is not the case for cloud computing.
Apart from the very noticeable cloud hype (more on the cycle of that soon), there is also very noticeable growth. Today, as the culmination of a deep and wide group effort, Gartner published its "Forecast: Public Cloud Services, Worldwide, 2010-2016, 2Q12 Update", accompanied by "Market Definitions and Methodology: Public Cloud Services". As I highlighted several years ago in "Can the Real Cloud Market Size Please Stand Up?", definitions are all-important when trying to compare the various cloud forecasts, and especially cloud forecast categories.
This year I was on the creating side of a cloud forecast, working on "Compute" services, which together with "Storage" and "Print" make up the "Cloud System Infrastructure Services (IaaS)" section. Other sections in today's published forecast are "Cloud Business Process Services", "Cloud Application Services (SaaS)", "Cloud Application Infrastructure Services (PaaS)" and the new "Cloud Management and Security Services". I am sure there will be public press announcements with numbers, percentages and stats later today, so I won't go into those here.
Most publications picking up on this will likely focus on the overall biggest number (the total of all cloud services in the furthest-out year). I am not sure that overall total has the granularity to be useful to anyone in particular, as it spans many different markets (IT and non-IT) and areas (from IaaS all the way to BPaaS). But at that level of granularity you could say that the public cloud services market is developing from being roughly the size of Luxembourg's GDP only a few years ago, via sizes comparable to countries like Oman, Angola, Vietnam, Hungary, New Zealand and Romania, into roughly the current size of the Irish economy by 2016. For reference, today's overall enterprise IT spend of $3.7 trillion per year is roughly equivalent to the size of the German economy, so there is still plenty of room for growth.
But that growth has to come from somewhere. One of the more interesting questions (more interesting than absolute size) is which traditional market glaciers are melting, or at least slowing their advance, as a result of the global warming caused by cloud computing. A lot of the cloud growth comes from enabling stuff (the technical term for new services and new markets) that simply was not possible before. But, with growth stagnating in some regions, some of the cloud growth will come from cannibalizing traditional markets. Scientists have not yet decided (at least the last time I looked) what actually killed the dinosaurs and how long it took them to become extinct, but for traditional IT it is safe to say the end won't be as sudden as a meteor hit, though it could come significantly faster than global warming. And just as with those phenomena, some people will be in denial (adapting too late) and others will adapt too early. Guess, as always, timing is everything, and, as usual, timing is the hardest thing to get right; something our forecast efforts aim to help with.