by Tom Bittman | February 9, 2011 | 4 Comments
We’re having an interesting discussion inside of Gartner (due credit to Neil MacDonald, Lydia Leong, Cameron Haight and David Cearley for the ideas in this post – I hope they post further on this). The concepts here aren’t new. For example, in 2004, I talked about “the walls coming down” between business, the data center and development. I wasn’t unique – others have discussed boundaries breaking down between different aspects of IT architecture for years. However, I’m not sure how many people are aware of how utterly pervasive this megatrend in IT really is, and how much it affects all of us. In a word, the megatrend is "blur." Think about it.
- Whatever happened to the market where there were distinct servers, storage, and networks? Fabric is blurring that.
- What the heck is an operating system any more, and what does it matter when I have a virtual pool of distributed resources I need to use?
- Whatever happened to the boundary between consumer technology and enterprise technology? Consumerization of IT. And not just personal technology devices – some IT services are given away for free (and subsidized by advertising). Which leads to boundaries disappearing in business models.
- Whatever happened to the boundary between outsourcing and insourcing? Now we have cloud computing: public, private, hybrid, and every other variation. Looking for a black and white definition of cloud computing? A waste of time – it’s gray!
- What about ownership of intellectual property? Open source, community collaboration. Is it plagiarism if you add value to existing content? In a society of information, can you afford not to build on what’s already out there? What should 21st century students do?
- What about the boundary between trusted enterprise data and untrusted data? Can we really afford to ignore any business information that might be useful? Isn’t it about what we do with the data, rather than whether the data is 100% trusted and owned by the enterprise? The boundaries of data used for business intelligence have been blown completely down. For that matter, we are entering a period of data overload – some we can trust, some we partially trust, some that is impartial, some that is partial. Successful people and businesses will be able to find value in that data. Unsuccessful people and businesses will drown in the data, or hide from it.
- Whatever happened to the boundary between IT and the business? In some cases it is being solidified in the form of service orientation (e.g., cloud computing); in other cases, the boundary simply does not exist. How many business people can afford to be laggards in leveraging the latest IT capabilities? How many IT personnel can ignore business strategy?
- What about the boundary between applications and operations – and security, for that matter? It used to be that developers threw their creations over the wall for operations to run, with a “good luck” kiss. New applications are being written based on operational models, with automated deployment, operations and optimization in mind. Security is being captured as policy that moves with the application.
Virtualization. Consumerization. Cloud. Instant connections and collaboration. I could go on.
An overall IT megatrend today is a complete and utter blurring of boundaries. We could handle that conceptually, but it directly affects people and market competition. It’s a lot harder to re-skill, re-organize, and react to partners that become competitors, competitors that become partners, and partners that are also competitors depending on the situation.
If there is one “skill” that is critical for an enterprise – and for every individual who uses and/or helps deliver IT capabilities (which, by the way, is everyone) – it’s “agility.” If you depend on the predictability of competition and the predictability of a job category, you’re not gonna make it. You or your company will become noncompetitive faster than you can say “blur.”
To use Neil MacDonald’s perfect phrase, success requires “Embracing the Blur.”
(By the way, Neil has pointed out an interesting book by Stan Davis, called – not surprisingly – “Blur.” I need to take a look!)
Category: Agility, Cloud, Education, Future of Infrastructure, Virtualization | Tags: cloud computing, Virtualization
by Tom Bittman | December 7, 2010 | 2 Comments
Interesting discussions here at Gartner’s Data Center Conference in Las Vegas. While discussing the importance of economies of scale to cloud providers, I pointed out that economies of scale are a double-edged sword.
While enterprises tend to provide many IT services (often hundreds, or even thousands), cloud providers tend to provide only one, or a handful, but at huge scale. Standardization makes automation much easier, and certainly makes economies of large scale very attractive. But what happens when a “service” suffers a decline in demand? For an enterprise, diversification makes this much less of an issue – usually, a decline in one “service” will be made up by growth in another. The capital expense risk is real, but not huge. But what about a cloud provider that focuses on just that service?
Economies of fail.
Megaproviders in the cloud are not immune to economic declines or changing demand. One of the benefits of cloud computing for end users is transferring their own capital risk to cloud providers. Doesn’t this sound an awful lot like the mortgage crisis in the U.S.?
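To make the diversification point concrete, here’s a trivial back-of-the-envelope sketch in Python (the numbers are purely illustrative, not from any survey or Gartner data):

```python
# Illustrative only: compare the revenue impact of one service losing half its
# demand for a diversified enterprise vs. a single-service cloud megaprovider.

def revenue_impact(service_revenues, shrinking_service, decline_pct):
    """Return the fraction of total revenue lost when one service declines."""
    total_before = sum(service_revenues.values())
    loss = service_revenues[shrinking_service] * decline_pct
    return loss / total_before

# An enterprise running 200 roughly equal internal services.
enterprise = {f"service-{i}": 1.0 for i in range(200)}

# A cloud megaprovider running one huge, standardized service.
provider = {"big-standard-service": 200.0}

print(revenue_impact(enterprise, "service-0", 0.5))           # 0.0025 -> a 0.25% hit
print(revenue_impact(provider, "big-standard-service", 0.5))  # 0.5    -> a 50% hit
```

A 50% demand drop in one of 200 enterprise services is a rounding error; the same drop in a provider’s only service is half the business.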
For cloud providers to be successful, they must protect themselves. As much as possible, they must find complementary markets for their services – markets that are not directly tied to their core market – without abandoning the simplification and standardization that enable automation and economies of scale.
Potential customers of cloud providers should be very aware of a cloud provider’s business risk, and protect themselves. Cloud provider resiliency, market diversification and stability should be selection criteria. Remember: a provider cannot be too big to fail – in fact, some providers might become so big and so focused that failure is inevitable.
Category: Cloud | Tags: cloud computing, GartnerDC
by Tom Bittman | October 18, 2010 | 18 Comments
My first presentation at Symposium 2010 was “Server Virtualization: From Virtual Machines to Private Clouds.” Attendance was crazy – the large room was packed, people were standing at the back, and apparently a few dozen were turned away at the door. This proves that server virtualization is not only a hot topic – it’s getting hotter right now (one stat I mentioned: more virtual machines will be deployed during 2011 than were deployed from 2001 through 2009 combined).
I started the presentation with some fundamental changes in server virtualization since I presented a year ago.
1) Virtual machine penetration has increased 50% in the last year. We believe that nearly 30% of all workloads running on x86 architecture servers are now running on virtual machines.
2) Midsized enterprises rule. For the first time, the penetration of virtualization in midsized enterprises (100-999 employees) now exceeds that of the global 1000 (or it will before year-end). There has been a HUGE uptake in the last year. Also, unlike large enterprises, midsized enterprises tend to deploy all at once – with outside help.
3) Hyper-V is underperforming. Maybe my expectations were too high, but Hyper-V has not grabbed as much market share as I was predicting. I especially thought that Microsoft would be the big beneficiary of midmarket virtualization. Surveys show otherwise – VMware is doing pretty well there. Here’s a theory. Clients repeatedly told us that live migration was a big hole in Microsoft’s offering – even for midmarket customers (to reduce planned downtime when managing the parent OS). Microsoft’s Hyper-V R2 (with live migration) came out in August 2009. Was that too late? Did the economy put pressure on midsized enterprises to virtualize early, before Hyper-V R2 was proven in the market? Or did VMware just have too much mindshare?
VMware’s competition is growing (especially Microsoft, Citrix and Oracle), but VMware is still capturing plenty of new customers.
4) Private clouds are the buzz. Every major vendor on the planet who sells infrastructure stuff has a private cloud story today. In the last year, the marketing, product announcements and acquisitions have been mind-numbing. Some of this is clearly cloudwashing (“old stuff, new name”), but we’ve seen a number of smart start-ups captured by big vendors, and important product rollouts (notably VMware’s vCloud Director). Now the question is – what will the market buy?
5) IaaS providers are shifting to commercial VMs. IaaS (infrastructure as a service) providers have focused on open source and internal technologies to deliver solutions at the lowest possible cost. But that’s changing. In the past year, there’s been a rapidly growing trend for IaaS providers to add support for major commercial VM formats – especially VMware, but also Hyper-V and XenServer. The reason? To create an easy on-ramp for enterprises. As enterprises virtualize (and in many cases, build private clouds), the IaaS providers know that they need to make interoperability, hybrid operation, overdrafting, and migration as easy as possible. The question is whether that will require commercial offerings (such as VMware’s vCloud Datacenter Services or Microsoft’s Dynamic Datacenter Alliance), or whether conversion tools will be good enough. I tend to think that service providers had better make the off-premises experience as identical to the on-premises experience as possible – and I’m not sure conversion will get them there.
Category: Cloud, Virtualization | Tags: Citrix, cloud computing, Microsoft, private cloud, symposium, Virtualization, VMware
by Tom Bittman | October 18, 2010 | 1 Comment
Gartner’s Symposium this year is a blowout – more than 7,500 attendees, and more than 1,600 CIOs. That means a very busy week of presentations and one-on-ones. As an analyst, what I always find interesting is “the buzz.” You get a really good sense of what’s hot based on one-on-one load and one-on-one topics. I was one of a few analysts fully booked a few weeks before Symposium, so my topics are hot. The questions? Continued interest in virtualization, but shifting heavily to cloud computing, both private and public.
Because of presentations, roundtables and so forth, I only have 35 one-on-one slots available. Of those, 11 are on virtualization (mostly VMware and Microsoft), 9 are about cloud computing (mainly what’s ready, which services, which providers, customer experiences), and 14 are about private cloud (how do I start, VMware’s vCloud, etc.).
The sense I get so far is that interest in cloud computing continues to grow, but there is more real activity and near-term spending on private cloud solutions. There’s a lot of interest in VMware’s vCloud – but attendees want some proof first.
At the end of the week, I’ll summarize what I learned. Should be a great week!
Category: Cloud, Virtualization | Tags: cloud computing, private cloud, symposium, Virtualization
by Tom Bittman | May 24, 2010 | 2 Comments
After spending the day discussing IT operations, here are some musings on the future of IT ops.
Traditionally, IT ops has been responsible for managing operationally “dumb” applications. These legacy applications are like infants – they need constant care and feeding. They can’t take care of themselves, and they rely entirely on others to survive. Actually, these dumb applications are even less capable than infants – at least infants cry when they’re hungry!
IT operations today is like day-care. Every infant is different, has different needs, and signals those needs in different ways. There aren’t many economies of scale here, and not a lot that can be automated. And new infants are being added daily!
There are three major paths for IT operations in the future – and each of them is very different:
(1) The Day-Care for Clones: Limit IT operations to the management of a single application (or a small number of applications). Knowing exactly how these applications work allows you to custom-design IT operations and automation to their needs. This is what cloud providers typically do today, as do application-centric environments (built around Oracle, for example).
(2) The Smart Day-Care: The effort for years has been to make the day-care smarter, more adaptive, more on-demand. This has been a huge challenge, and it will continue to be. One newer concept is the virtual machine, which can be used to encapsulate workloads – that doesn’t solve the problem, but it does enable more automation. Ideally, you still want metadata about what’s inside the virtual machine, describing service topology, security requirements, even service level requirements (see the sketch after this list).
(3) The University: Expect more from the applications. They need to manage themselves and describe their requirements. They don’t “trust” the infrastructure at all – if there are failures, the application is designed to be resilient and extremely self-reliant. On the other hand, IT operations still has a role. Even with “smart” applications, IT operations can’t necessarily trust them. The role of IT operations is to set constraints, manage the amount of resource that can be used, monitor behavior, and look for changes in behavior.
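For path (2), the metadata I have in mind might look something like the sketch below. This is purely illustrative Python with hypothetical field names – not OVF or any vendor’s actual format – just to show how operational automation could key off what the VM declares about itself.

```python
# Hypothetical, illustrative metadata that could travel with a virtual machine
# so that "smart day-care" tooling knows how to operate what's inside it.
vm_metadata = {
    "workload": "order-entry-web-tier",
    "service_topology": {
        "tier": "web",
        "depends_on": ["order-entry-app-tier", "order-entry-db"],
    },
    "security": {
        "zone": "dmz",
        "allowed_inbound_ports": [443],
        "patch_policy": "monthly",
    },
    "service_levels": {
        "availability_target": "99.9%",
        "max_restart_minutes": 5,
    },
}

def placement_ok(metadata, host_zone):
    """Trivial example of automation driven by the metadata: only place the VM
    on hosts in its declared security zone."""
    return metadata["security"]["zone"] == host_zone

print(placement_ok(vm_metadata, "dmz"))       # True
print(placement_ok(vm_metadata, "internal"))  # False
```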
The issue for IT operations is that these three paths are all viable, but each has very different skill, architecture, process, and management tool requirements. This confusion will play out inside enterprise IT – managing a mixed bag of “dumb” applications, “smart” applications, virtual machines, private clouds, and public clouds. Get ready for a bumpy ride!
Category: Cloud, Future of Infrastructure | Tags: Future of Infrastructure, private cloud, Virtualization
by Tom Bittman | May 18, 2010 | 30 Comments
I continue to talk with clients who understand the concept of private cloud computing – they think they know it when they see it – but they can’t quite explain it in words. A year ago I described The Spectrum of Private to Public Cloud Services, but I didn’t put that in the form of a definition. Here’s a shot.
Gartner’s official definition of cloud computing is “A style of computing where scalable and elastic IT-enabled capabilities are delivered as a service to customers using Internet technologies.” We also describe five defining attributes of cloud computing: service-based, scalable and elastic, shared, metered by use, and uses Internet technologies. A key to cloud computing is an opaque boundary between the customer and the provider.
When the customer does not see the implementation behind the boundary, and the provider doesn’t care who the customer is, you have a public cloud service. So what is private cloud?
Private cloud is “A form of cloud computing where service access is limited or the customer has some control/ownership of the service implementation.”
In terms of that boundary, private cloud means that either the provider tunnels through it and limits service access (e.g., to a specific set of people, a specific enterprise or enterprises), or the customer tunnels through it via ownership or control of the implementation (e.g., specifying implementation details, limiting hardware/software sharing). Note that control/ownership is not the same as setting service levels – control/ownership applies to implementation details that aren’t even visible through the service.
The ultimate example would be enterprise IT building a private cloud service used only by its own enterprise. But there are many other examples, such as a virtual private cloud (the same as the example above, except replace ‘enterprise IT’ with ‘third-party provider’) and community clouds (the same as a virtual private cloud, except opened up to a specific and limited set of different enterprises).
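One way to make the definition concrete is to treat limited access and customer control of the implementation as the two tests. Here’s a minimal, illustrative Python sketch (my own encoding, not an official Gartner taxonomy) that classifies the examples above:

```python
# Illustrative only: per the definition above, a cloud service is "private" if
# access is restricted or the customer has some control/ownership of the
# implementation; otherwise it is public.

def classify(access_restricted_to=None, customer_controls_implementation=False):
    if access_restricted_to or customer_controls_implementation:
        return "private cloud"
    return "public cloud"

# Enterprise IT builds and runs the service for its own enterprise only.
print(classify(access_restricted_to={"Acme Corp"},
               customer_controls_implementation=True))              # private cloud

# Virtual private cloud: third-party implementation, access limited to one enterprise.
print(classify(access_restricted_to={"Acme Corp"}))                 # private cloud

# Community cloud: limited to a specific set of enterprises.
print(classify(access_restricted_to={"Acme", "Globex", "Initech"})) # private cloud

# Unrestricted service where the provider doesn't care who the customer is.
print(classify())                                                   # public cloud
```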
Still “foggy”, or is it “clear”?
Category: Cloud | Tags: cloud computing, private cloud
by Tom Bittman | April 21, 2010 | 15 Comments
I’ve been looking for an excuse to use this cartoon – I finally found it!
I’m finishing a research note on some polls I took recently of data center executives, managers and decision-makers. Interesting results. Here’s a summary:
(1) The first poll was focused on the top three concerns that data center professionals have with public cloud computing. The weighted score for “Security and Privacy” was more than the score for the next three concerns combined. Sometimes, when it looks like a meteor, it is a meteor (see, I got the cartoon in here)!
(2) The next two polls focused on public cloud computing plans versus private cloud computing plans. Three-fourths said that they were or would be pursuing a private cloud computing strategy by 2012 (only 4% said they weren’t). Three-fourths said that they would invest more in private cloud computing than in public cloud computing through 2012. Hype plays a part here, but we continue to believe that IT organizations will spend more money on private than on public cloud computing through at least 2012.
(3) The final poll focused on challenges with private cloud computing. “Technology” was considered sixth out of seven challenges offered. “Management and Operational Processes” came in first, closely followed by “Funding/Chargeback Model.” Process, people and relationship changes will be bigger challenges with private cloud computing than technology.
Once again, thanks to Doug Savage for allowing me to use one of his cartoons (check out the others on his site).
Category: Cloud | Tags: cloud computing, private cloud
by Tom Bittman | April 16, 2010 | 9 Comments
Private cloud computing is rapidly moving up the Gartner hype cycle. In terms of raw market hype, I think we’ll peak late this year. VMware’s “Redwood” won’t be the only announcement – every major infrastructure vendor on the planet will likely put “private cloud” in their announcements, their marketing, and their product names.
So before we get too overwhelmed with private cloud computing mania, what’s going to be real, and what isn’t? How will private cloud computing be used?
Just like early virtualization deployments, development and test is the favorite starting point for private cloud computing. Take out the middleman, and provide a self-service portal for developers to acquire resources. Manage the life cycle of those resources, and return them to the pool when the developer is done. Dev/test is a perfect starting point, because there is a need for rapid provisioning and de-provisioning.
I think the next logical place will be the computing sandbox. This is a place for production workloads that need to be put up quickly – a stand-alone web server, a short-running computational task, a pilot project. “I need it NOW.”
The sandbox will especially be the place to put a workload that needs to go up fast before full internal production deployment – and when external deployment (in the “public cloud”) isn’t appropriate for one reason or another.
Sandboxes can have different operational rules than normal production workloads. For example, perhaps there is a short-term “lease” that expires after thirty days. Perhaps the software is never maintained or patched during that window. Perhaps there is no backup or disaster recovery in place for those workloads. Perhaps security coverage is limited.
While a workload is running in a sandbox, the administrivia required to get appropriate approvals and fulfill organizational process requirements can be finished in parallel.
Ideally, after some period of time (such as at the end of a thirty-day lease), there might be a way to move the workload from the sandbox to full production, with all of the service level requirements in place.
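To show how different those sandbox rules might be from normal production, here’s a rough, purely illustrative Python sketch – the policy values and names are hypothetical, not any product’s actual behavior:

```python
from datetime import date, timedelta

# Hypothetical sandbox policy: short lease, no patching, no backup, limited security.
SANDBOX_POLICY = {
    "lease_days": 30,
    "patched": False,
    "backed_up": False,
    "security_coverage": "limited",
}

class SandboxWorkload:
    def __init__(self, name, deployed_on):
        self.name = name
        self.deployed_on = deployed_on
        self.approved_for_production = False  # the paperwork finishes in parallel

    def lease_expired(self, today):
        return today > self.deployed_on + timedelta(days=SANDBOX_POLICY["lease_days"])

    def next_step(self, today):
        """At lease end, either promote to full production (if approvals landed
        in time) or reclaim the resources back into the pool."""
        if not self.lease_expired(today):
            return "keep running in sandbox"
        return "promote to production" if self.approved_for_production else "reclaim resources"

w = SandboxWorkload("pilot-web-server", deployed_on=date(2010, 4, 1))
w.approved_for_production = True
print(w.next_step(date(2010, 4, 20)))  # keep running in sandbox
print(w.next_step(date(2010, 5, 15)))  # promote to production
```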
Many large organizations will start with dev/test first, and build a sandbox next. I believe for many organizations the sandbox itself will mature and become a broader and more capable private cloud service. But there’s no rush.
Category: Cloud, Virtualization | Tags: cloud computing, private cloud, Virtualization, VMware
by Tom Bittman | March 13, 2010 | 5 Comments
Almost all large companies, and many small and midsized enterprises, are virtualizing. Based on surveys, the majority of large companies consider building a private cloud a core strategy. Surprisingly, that’s even true of midsized organizations – but slow down a bit. While the direction makes sense, be careful about getting too caught up in the hype of building a perfect private cloud. A cloud service requires a self-service (or otherwise non-manual) interface and some form of usage metering, or even chargeback. Behind the interface, the services are delivered automatically, on demand.
The fact is, not every IT organization needs a fully self-service interface, and many smaller organizations see no value in usage metering. They simply want to deliver services faster. For them, a 70% private cloud is absolutely good enough.
There is still value in virtualizing your resources, automating how resources are allocated to meet demand, and automating provisioning based on standard service offerings in a published service catalog. But you may want a person in the middle of the process. Or you may want to route the pure self-service requirements to your favorite external cloud provider rather than build your own. And that’s OK. It all comes down to business requirements, return on investment, and future strategy (including the potential to evolve to external cloud providers in the future). How far you go is your decision.
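As a sketch of what that “person in the middle” looks like, here’s an illustrative Python fragment (hypothetical names, not any particular product’s API): requests come from a standard service catalog and provisioning is automated, but a human approval gate sits in the middle instead of pure self-service.

```python
# Illustrative "70% private cloud": standard offerings and automated provisioning,
# but with a human approval step rather than pure self-service.

SERVICE_CATALOG = {
    "small-linux-vm": {"vcpus": 1, "ram_gb": 2},
    "medium-web-server": {"vcpus": 2, "ram_gb": 8},
}

def provision(spec):
    # Stand-in for the automated part (pool allocation, image deployment, etc.).
    return f"provisioned VM with {spec['vcpus']} vCPUs / {spec['ram_gb']} GB RAM"

def request_service(offering, requester, approver):
    if offering not in SERVICE_CATALOG:
        return "rejected: not a standard offering"
    # The person in the middle: provisioning waits on a human decision.
    if not approver(requester, offering):
        return "rejected by approver"
    return provision(SERVICE_CATALOG[offering])

# Example approver: only the app team gets medium web servers.
def approver(requester, offering):
    return not (offering == "medium-web-server" and requester != "app-team")

print(request_service("small-linux-vm", "test-team", approver))
print(request_service("medium-web-server", "test-team", approver))
```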
So while most enterprises may consider private cloud their goal, and vendor hype about how to reach that goal is going to skyrocket – my bet is that most organizations will find that a less-than-pure private cloud is good enough.
Category: Cloud, Virtualization | Tags: cloud computing, private cloud, Virtualization
by Tom Bittman | February 9, 2010 | 5 Comments
Do cloud computing providers understand customer requirements? Do customers understand their requirements? No and no, and this is a problem.
Today, most cloud computing providers offer one-size-fits-all services – with few options or service level alternatives. As the market matures, there will be thousands of providers, each trying to differentiate by focusing on specific market needs, and offering service level alternatives and options to attract specific types of customers.
This sounds great, except service providers don’t really know what options the market needs, and perhaps more importantly, potential customers don’t always understand their service level requirements.
Service providers will figure this out by experimenting, and making adjustments as they go. Some of these adjustments will be hard and expensive to make (and retrofit). Some providers will simply fail. The customers will figure this all out by getting burned.
No doubt, there are many service needs that can be fulfilled just fine by one-size-fits-all services – go for it. But the next stage of cloud computing use – more varied, business-critical services – will require something more.
A key to success for cloud computing is getting the interface right, and getting the service requirements right. The interface defines the service offering in detail for the customer, and the service options link directly to automation behind the interface. Building this automation isn’t easy – and many providers will focus on specific market needs rather than create a huge array of options. If success requires new options, that means new automation, and that might require fundamental architecture changes. Providers who guess right on market requirements before they build their service offering will have a definite advantage over those who need to make a (perhaps costly) mid-course correction.
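By “the service options link directly to automation behind the interface,” I mean something like the following illustrative sketch (made-up option names, not any provider’s real offering): every option a customer can select has to map to an automated back-end action, which is why adding options late can force architecture changes.

```python
# Illustrative only: each customer-facing service option must be wired to an
# automated back-end action. Options with no automation behind them are the
# expensive, retrofit-later kind.

AUTOMATION = {
    "daily_backup":       lambda vm: f"schedule nightly snapshot of {vm}",
    "dr_replication":     lambda vm: f"replicate {vm} to secondary site",
    "premium_monitoring": lambda vm: f"attach detailed monitoring agent to {vm}",
}

def fulfill(vm_name, selected_options):
    actions, unsupported = [], []
    for opt in selected_options:
        if opt in AUTOMATION:
            actions.append(AUTOMATION[opt](vm_name))
        else:
            unsupported.append(opt)  # no automation behind the interface -> manual cost
    return actions, unsupported

actions, unsupported = fulfill("crm-vm-01", ["daily_backup", "four_hour_rpo"])
print(actions)      # ['schedule nightly snapshot of crm-vm-01']
print(unsupported)  # ['four_hour_rpo'] -- an option the provider never automated
```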
Enterprise IT organizations building private cloud services will have a different issue. Do they limit their offerings to their service catalog alone, or do they allow exceptions and special requests – which will add overhead and cost? A key to their success is spending time early in the design process understanding current and future enterprise requirements, and ensuring their architecture gives them enough flexibility to adjust as needed. Bottom line: understand your customers and their service needs first!
But the real danger is to cloud computing customers – especially those bypassing enterprise IT to use an apparently attractive cloud service. In most cases, they’re used to an enterprise IT provider that reacts to custom requests and changes in requirements. There is someone to talk to. There are often implicit “service level” requirements that enterprise IT handles without the customer even knowing – like disaster recovery, security, regulatory compliance, availability, and legal requirements/risk. Enterprise IT often over-provisions services for users – giving them more than they asked for. Don’t expect that from a cloud service provider. Failure to understand your own requirements might lead you to choose the wrong provider, increase your costs, or cause any number of scarier problems. Bottom line: fully understand your service level needs before you take the leap.
Even if enterprises don’t expect to use many cloud services for years, now is the time to start re-shaping the relationship between IT and the business. Build rich, detailed service level agreements. Make explicit those things that are provided implicitly. Prepare for the time when external cloud service providers will be a viable choice. A center of competency for cloudsourcing within the enterprise (or outside service broker help) is a good idea.
Thanks to Doug Savage for allowing me to use his excellent cartoon – if you’ve got a minute, take a look at his site – very funny stuff!
Category: Cloud | Tags: cloud computing, private cloud