Gartner Blog Network

Heterogeneous Virtualization Trends at Gartner Data Center

by Chris Wolf  |  December 11, 2012  |  2 Comments

Heterogeneous virtualization has been a hot topic among clients, and last week at the Gartner Data Center conference in Las Vegas I presented a session on the subject. During the session, I polled the audience on their heterogeneous virtualization plans. Fifty participants responded to each polling question.

The first question I asked was about the hypervisors currently deployed (note that the values are the number of respondents, not percentages).

[Poll chart: hypervisors currently deployed, by number of respondents]
As you can see, most participants used VMware vSphere, as expected, along with a good mix of Hyper-V and XenServer and some RHEV and Oracle VM.

It’s one thing to have multiple hypervisors, but not everyone is using them to run production server applications in their data centers. That’s why I asked attendees which hypervisors they were using specifically for production server applications.

[Poll chart: hypervisors used for production server applications]
Notice that the drop was pretty significant. In the first poll, 44 non-VMware hypervisors were used. In the second poll, that number dropped to 25. The drop is consistent with an important but often unreported multi-hypervisor trend – while most organizations are using multiple hypervisors, most are not using multiple hypervisors for their production server applications (Oracle VM is a common exception). The second or third hypervisors deployed within an organization are often used to support branch office or departmental deployments. The fact that the additional hypervisors are being used is important, but so is understanding the use cases.

With that in mind, I also asked attendees about their plans for a single hypervisor.

[Poll chart: plans for standardizing on a single hypervisor]
Most (57%) planned to use a single hypervisor for production server workloads that required DR, with DR simplicity being the primary driver behind that decision. Clients frequently tell me that they fear that multiple hypervisors will recreate some of the same DR challenges that they initially solved with server virtualization. In addition, the OPEX concerns are real. Clients doing heterogeneous virtualization today almost always have a separate management silo for each hypervisor. When political or geographical issues preserve IT silos, the per-hypervisor silos might not be too big of a deal. However, organizations looking to be more centralized and efficient should aim for higher degrees of standardization.

Does this data mean that VMware wins? Not necessarily. I’ve had many calls with clients that are considering switching to Hyper-V as their standard virtualization offering. That switch will take place over a 3-5 year period, with the end goal of having a homogeneous virtualization layer. If VMware is smart, it will focus on the OPEX and DR benefits of its homogeneous solution, while still offering heterogeneous management throughout its stack to give customers choice. It’s clear that there is plenty of interest in best-of-breed solutions as well, so opportunity exists for all vendors.

In 2013 we will spend a lot of time helping our clients unleash their inner service provider. That will involve taking some nontraditional approaches to data center standardization and optimization to help reduce TCO and improve efficiency and scalability. Stay tuned for more information on that subject.

What do you think? I’m curious to hear your plans around heterogeneous virtualization.


Category: cloud-computing  server-virtualization  

Tags: citrix  cloud  microsoft  oracle  redhat  virtualization  vmware  

Chris Wolf
Research VP
6 years at Gartner
19 years IT industry

Chris Wolf is a Research Vice President for the Gartner for Technical Professionals research team. He covers server and client virtualization and private cloud computing. Read Full Bio

Thoughts on Heterogeneous Virtualization Trends at Gartner Data Center

  1. Sai Mukundan says:

    The key challenge in simulating a complete disaster recovery is the application downtime that it requires. This puts business revenue at risk and causes customer inconvenience. Further, the tests are often scheduled over weekends to minimize the impact of downtime, which greatly inconveniences IT staff. These challenges dissuade companies from regularly testing their DR plans, thereby increasing the risk exposure of the DR strategy. Veritas Cluster Server (VCS) provides the ability to test the disaster recovery readiness of business-critical applications without any application downtime. This is done through a feature called VCS FireDrill. What’s more, VCS works with multiple hypervisors, thereby abstracting the challenges associated with a heterogeneous virtualization strategy.

  2. Jorge Vértiz says:

    I think that virtualization should be treated like many other things in IT, such as servers and databases. When you start dealing with complexity, it is not a connectivity or coexistence issue; it is a technical support and overall TCO issue. And if we can recycle our knowledge, one must design for KISS (keep it simple, stupid).


Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.