Thomas Bittman

A member of the Gartner Blog Network

Thomas J. Bittman
VP Distinguished Analyst
18 years at Gartner
29 years IT industry

Thomas Bittman is a vice president and distinguished analyst with Gartner Research. Mr. Bittman has led the industry in areas such as private cloud computing and virtualization. Mr. Bittman invented the term "real-time infrastructure," which has been adopted by major vendors and many…

Virtualization 3.0

by Tom Bittman  |  September 22, 2008  |  3 Comments

I had a conversation with some folks at Intel today, and they referred to research from IDC on what Virtualization 2.0 looked like. This got me thinking. There is a tendency to take a trend like virtualization and wrap every other trend inside it. That creates a semantic nightmare, in my opinion. Virtualization does not equal cloud, cloud does not equal SOA, virtualization does not encompass management automation, etc. These are all separate trends that cross paths – they may be catalysts, they may overlap, they may actually all take us to the same place – but they are different trends, different legs of the stool. Applying service-level automation to virtualization, for example, is not a virtualization trend – it is an automation trend leveraging virtualization. The real question is – what does virtualization mean to users? What does virtualization enable? How does that change – and does it change in identifiable stages? I say it does.

Virtualization 1.0: Consolidation and cost savings. No question – the reason people first virtualize their storage, their servers, their network capacity is to save money. It’s all about being more efficient. Almost all of the early adopters I talk to about virtual machines are in it to reduce server hardware and power spending, and sometimes floor space – often also to introduce disaster recovery (because it is so much cheaper!). But talk to these same people a year later, and they talk about…

Virtualization 2.0: Agility and speed. Saving money is great, but it often doesn’t impact the customers of IT directly – or, at least, it doesn’t help the business grow. Our clients tell us they can deploy virtual servers thirty times faster than physical ones. They also tell us that customer demand roughly doubles when they deliver faster – a lower barrier to entry leads to greater demand. Speed and agility also have significant impacts on management and operational processes. But after cost efficiency, and after speed, what’s next?

Virtualization 3.0: Alternate sourcing. I’d say the next big thing will be the ability to leverage the abstraction layer created by virtualization, in its many forms, to creatively source the function that was virtualized. In its earliest form, an example would be moving virtual machines between servers dynamically to meet demand. More interesting, though, will be moving off-premises – dynamically sourcing based on cost, quality of service and agility needs. Maybe a workload is spiking dramatically – move that workload to a cloud provider (temporarily, or for good).
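To make the idea concrete, here is a toy sketch of the kind of sourcing decision an automation layer might make for a virtualized workload. Every name, field and threshold below is hypothetical – it is not from any vendor’s product, just an illustration of cost- and QoS-driven placement:

```python
# Hypothetical sketch of "alternate sourcing": decide where a
# virtualized workload should run based on demand and QoS needs.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    demand: float          # current load as a multiple of normal (1.0 = normal)
    needs_low_latency: bool

def choose_placement(w: Workload, internal_capacity: float) -> str:
    """Return 'internal' or 'cloud' for this workload."""
    # Latency-sensitive workloads stay on-premises.
    if w.needs_low_latency:
        return "internal"
    # If a demand spike exceeds what internal capacity can absorb,
    # burst the workload out to a cloud provider.
    if w.demand > internal_capacity:
        return "cloud"
    return "internal"

print(choose_placement(Workload("web-frontend", 3.0, False), internal_capacity=1.5))  # cloud
print(choose_placement(Workload("trading-db", 3.0, True), internal_capacity=1.5))     # internal
```

A real broker would weigh cost, SLAs and compliance, not a single threshold – but the shape of the decision is the same.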

Virtualization 3.0 is not cloud computing – but virtualization 3.0 is certainly one important enabler of cloud computing, among many enablers.

So what does Virtualization 4.0 look like? Have to think about that for a future post…

Category: Cloud, Future of Infrastructure, Virtualization

3 responses so far ↓

  • 1 Steve Caughey   October 3, 2008 at 5:48 am

    Tom,

    I agree that alternate sourcing is enabled through virtualization. Initially, mapping a virtualized resource (machine, application, application component, etc.) onto the physical resources required to support it will occur within the IT department or datacenter. However, federating local infrastructure (a private cloud) with external infrastructure (public clouds) opens up the possibility of competition between public clouds on all aspects of service delivery. Indeed, the private cloud will itself be in direct competition with the public clouds and, as public clouds improve their service and reduce their costs, they could eventually replace much, if not all, of the local infrastructure. Here at Arjuna we think this form of arbitrage between clouds will be of great benefit to business, but delivering on this vision requires the ability to dynamically create and maintain effective service agreements between federated clouds. Our product, Agility, attempts to address these issues.

  • 2 Tom Bittman   October 9, 2008 at 2:33 am

    Two things – one, I believe that service brokers (the SIs of cloud computing) will appear to handle the arbitrage – either manually, or eventually programmatically. But also, I believe it is important to avoid conflict of interest in sourcing. Sourcing decisions will need to be made by organizations that span the current customer/provider gap. So, for example, I expect that a new dynamic sourcing team will exist between IT and the business, making decisions whether sourcing internally or externally makes sense. In a perfect world. In the real world, sourcing will be decided sometimes by developers (developing to the Microsoft cloud, for instance), or in some cases by operators (using VMware vCloud, for instance). It’s gonna be messy.

  • 3 a tale of 2 clouds « David’s blog   October 29, 2008 at 2:54 pm

    [...] and virtualization are not the same as my colleague Tom Bittman writes http://blogs.gartner.com/thomas_bittman/2008/09/22/virtualization-30/. While virtualization doesn’t mean you have a cloud, it doesn’t mean you don’t have one [...]
