by Chris Wolf | August 19, 2013 | 10 Comments
Think of one of your favorite bands. Odds are that when they first hit the scene they were brash, unapologetic, and reached stardom at an unthinkable pace. Then what happened? If they’re like some of my favorite bands, they got rich, “matured,” and lost touch with what got them to their early success. They then spent their remaining days playing their early hits to a devoted audience. Or they broke up.
Remind you of VMware, or other disruptive technology vendors? I ask because here we are a week before VMworld, and I’m wondering if the predictable VMware will show up – you know the one that plays the hits and caters to its base – or will we see something brasher?
My money is on the older, richer, more conservative VMware. Wearing my customer hat, I’d love to be wrong. Ten years ago VMware didn’t care who it offended. Along the way, server hardware vendors had no choice but to partner with it even though VMware was screaming from the rooftops, “With us, you’ll need fewer servers!” Now think about VMware’s 2013 push around the software-defined data center (SDDC). You know what word isn’t in SDDC? Hardware.
If VMware really wants the SDDC to take off, it needs to rediscover its inner rebellious teenager – the one that got it to where it is in the first place. Consider successful public cloud service providers such as AWS. Amazon’s stack places a premium on software and treats hardware as a commodity. Yet VMware is pushing a software-defined data center mostly on top of enterprise-grade hardware from its partners. How do you get to be cost competitive with AWS when you place a premium on the entire stack while Amazon places a premium only on software? You don’t. And if VMware and its partners believe it’s possible, they’re fooling themselves. Take a look at the VMworld 2013 Global Diamond Partners. They have one thing in common (Hint: It starts with “hard” and ends with “ware”). So in the end, the graduation party for the SDDC is primarily sponsored by hardware vendors.
Don’t get me wrong. I’m not saying that you can get rid of the enterprise hardware in your data centers – certainly not yet. But there is increasingly less of a need to build a virtual and physical infrastructure around the greatest common denominator – the tier 1 workload. That’s great for the vendors but not so great for your bottom line. Down the road I expect several of our clients to look at alternative lower cost technologies for less critical workloads. VMware needs to look at offerings with lower price points and perhaps a lower SLA that clients can use for less critical workloads. This is an area where competitors will attack VMware and try to get a foothold in the enterprise data center. VMware needs to show greater flexibility in how it offers choice to customers. One size doesn’t fit all. VMware needs to be more outspoken about lower cost architectures, even if that offends some of its high-end enterprise hardware vendor partners. Ten years ago, VMware was aggressive and unapologetic. It was a company that was passionate about helping its customers save money while also thriving.
In the process of becoming a “big company,” VMware lost its inner voice. VMware needs to remember what got it to where it is today. It wasn’t just great technology, but also an attitude where it put its customers first. It can continue to grow by thinking beyond how a typical large company should act. Give customers greater flexibility. Hold their hand and help them make smarter choices regarding their data center investments. Show them how to build a private cloud and SDDC where all of the value is in software. If VMware truly wants its SDDC vision to succeed, it’s going to have to learn to make some enemies and remember that if it keeps the focus on its customers, it will thrive in the end.
So VMware, do you have another hit in you? Or will we hear the same old songs that will surely make your hardware partners happy? SDDC has plenty of potential, but only if you let it all out. Sing a great song about how your clients can truly build a software-defined data center. Tell them how they can build low cost solutions with VMware software. Show them all the benefits of software-defined infrastructure even if the chorus is something that your hardware partners don’t want to hear. Your clients didn’t come to VMworld to hear Nickelback. They deserve better.
Category: Cloud Server Virtualization Virtualization Tags: cloud, vmware, vmworld
by Chris Wolf | June 5, 2013 | 1 Comment
This year’s Gartner Catalyst conference is shaping up to be a memorable event. Catalyst has always been known for cutting edge sessions and this year is no different. Some of you may recall the Thrilla in California in which Citrix’s Simon Crosby debated VMware’s Scott Drummonds. The full debate was made available online and generated over 4,000 views in the first three days. Numerous bloggers weighed in, and this short post really captured how Simon and Scott felt about each other following the debate.
This year Simon is back with a new challenger – our very own Gunnar Berger. We also have VDI guru Ruben Spruijt onboard as the moderator.
The point of the debate is simple – should you even bother with VDI? Is it worth the expense and effort, or not? It’s a pressing question that many want answered, and we aim to answer it in what I hope will be an entertaining and informative way.
Here’s the full session description. We hope to see you there!
The Rumble in the Jungle — Debating the Necessity of Virtual Desktops
Organizations are transitioning to a Web and mobile world, yet many are investing in virtual desktop technology. Others question if the technology is worth the expense and effort. This hard-hitting debate aims to answer the question of whether or not organizations should invest in virtual desktops or if their IT dollars are better spent elsewhere. Key questions answered in the debate include:
- Are virtual desktops really worth the expense and effort?
- What use cases make sense for virtual desktops?
- If virtual desktops are not the answer, then what is?
Category: Client Virtualization Tags: GartnerCat
by Chris Wolf | March 13, 2013 | 11 Comments
Let’s face it. Sometimes being an “enabler” is admirable. However, if you’ve seen an episode of Intervention lately, you know that being an enabler is not always a good thing. VMware’s IaaS strategy was to enable its partners to offer vCloud services and give its customers near unlimited (>9,500 partners) choice of cloud providers. There was a big issue with this strategy – it assumed that VMware’s cloud partners would be A-OK with letting customers come and go. At the end of the day, that didn’t fit VMware’s provider partners’ business models. No one wants to race to the bottom of a commodity market, and providers are rightfully concerned with their ability to differentiate from competitors and show value while sharing a common VMware stack.
Today’s news shouldn’t come as too much of a surprise. Nearly two years ago I blogged that this day would eventually come. The market would force VMware to be a provider, and it has.
Forget about the talk of “open.” At the end of the day, every vendor and provider is in the business of doing whatever possible to lock customers in and make it tough for them to leave. Providers have always wanted high degrees of extensibility so that they can add value to a cloud offering and in the end offer enough customized services to make it tough for customers to leave.
If we look at today’s IaaS announcement, VMware is trying to take greater control of the “choice” its customers get. Choice will mean a VMware-hosted offering that in theory will make it easy for customers to move VMware-based workloads in and out of the public cloud. The aim is an “inside-out” approach where workloads move seamlessly between a private data center and a public cloud. The trick here, however, is how important mobility and choice will be to customers. Workloads that go straight to the cloud and have few traditional enterprise management needs can go to any cloud. Front-end web servers are a great example – static data, built to horizontally scale, and no backup requirements.
VMware’s challenge going forward will be to differentiate. If VMware is the “enterprise alternative” to Amazon, it had better launch its IaaS solution with enterprise features (AWS isn’t perfect, but it has tons of features that large enterprises now take for granted). Redundant data centers, enterprise storage, networking, backup, and security are a must. In addition, it must offer serious tools for developers; the time for VMware to show the results of its investment in Puppet Labs should be when the public IaaS offering launches. Otherwise, Amazon and other providers will continue to win on features and the ease of experience that developers have on their platforms. Granted, this can’t all happen overnight, but VMware needs to show value quickly in order to gain momentum.
VMware also needs to make customers understand that the VM is the easy part. Management has always been the challenge in hybrid cloud models – most organizations running hybrid clouds have at least two management silos – one for public cloud assets and one for private cloud assets. Failing workloads over to different infrastructure and different hypervisors isn’t a challenge because of converting a VM. The challenge exists because the operational software stack deployed to the VM (e.g., backup, security, performance management) may have hooks into a particular hypervisor’s APIs. So moving a workload often can entail considerable QA work to ensure that the production workload runs and is managed properly in the new environment. This is an opportunity where VMware can leverage its management assets both inside the data center and in the public cloud to allow customers to redeploy workloads and not have to worry about the infrastructure or management stack. That can significantly reduce complexity and opex overhead for organizations looking to operate seamlessly across both public and private clouds.
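The operational-stack problem above can be made concrete with a small sketch. This is purely illustrative – the agent names and support matrix are made up, not any vendor’s actual product data – but it shows why moving a VM is the easy part and the management stack is the blocker:

```python
# Illustrative sketch (hypothetical agent names and support matrix): before
# redeploying a workload to a different hypervisor, check whether each piece
# of its operational software stack supports the target platform.
SUPPORTED_PLATFORMS = {
    "backup-agent":   {"vsphere", "hyperv"},
    "security-agent": {"vsphere"},  # hooks into vSphere-specific APIs
    "perf-monitor":   {"vsphere", "hyperv", "xenserver"},
}

def migration_blockers(workload_agents, target_hypervisor):
    """Return the agents that would need replacement or re-QA on the target."""
    return [agent for agent in workload_agents
            if target_hypervisor not in SUPPORTED_PLATFORMS.get(agent, set())]

print(migration_blockers(["backup-agent", "security-agent"], "hyperv"))
# ['security-agent'] – this workload needs QA or replacement work before moving
```

Converting the VM format passes; the security agent’s hypervisor dependency is what actually holds up the move.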
That said, another bottleneck to VM mobility in the cloud is software licensing. VMware can’t control the licenses of the software that runs in its VMs, but VMware must make it easier for customers to license VMware management software in a hybrid cloud environment and allow licenses to move with VMs between cloud environments.
Finally, let’s not forget that VMware isn’t blazing new ground by being both a provider and an enabler. Microsoft is taking the same approach with Azure. In the end, there should be enough room for partners to differentiate by industry vertical or geography, for example.
In the end, becoming a cloud IaaS provider is a move that VMware had to make. VMware was losing cloud mindshare to its competitors, and its only choice for keeping up with their rapid pace of innovation was to become a provider itself. Entering the market late as an IaaS provider places VMware in the unenviable position of playing catch-up. VMware’s strength is its enterprise data center dominance, and its ability to make a compelling case to its clients for hybrid cloud will undoubtedly determine its success. Telling that story is about far more than the technology. Businesses don’t care about technology details – they want agility and value. VMware must carefully craft a message that speaks to business needs while convincing customers that a proprietary VMware stack is in their best interest. Many clients I speak with are trying to be more standardized and service provider-like in how they deliver services. That translates to fewer vendors in the data center, which leads to lower opex costs and often makes it easier to automate IT processes. There is certainly an opportunity, but VMware’s success is far from guaranteed.
What do you think about the news today? Is it too little too late or will you give the VMware IaaS offering a serious look?
Category: Cloud Tags: cloud, emcvmware, vmware
by Chris Wolf | February 4, 2013 | 4 Comments
Today I talked to a client about their private cloud architecture and pending investments. The talk hit on a lot of areas, ranging from software licensing, to vendor support, to orchestration, and finally to standardization. When we got to the topic of standardization and procurement, they couldn’t contain themselves. One member of the organization said:
We can’t even say we’re a Microsoft Exchange shop. As far as procurement is concerned, we can’t even have a standard for email.
If that sounds odd to you, then consider your investments for private cloud. Providers achieve tremendous economies of scale through high degrees of standardization, yet that approach is nearly impossible for many enterprises. The reason, for many, is the procurement group, whose job it is to save the company on capex costs. These folks have long prided themselves on getting a 15% discount by selecting one vendor’s product over another.
Once the discounted solution is procured, then it’s the job of IT Ops to run it. If that decision results in a 30% premium on opex, then so be it. At that point the procurement group is already focused on the next purchase.
It’s a story I hear a lot and in my opinion is an extremely shortsighted approach. Until procurement is retooled to place the emphasis on TCO instead of capex, I will continue to work with clients on stringing together a hodgepodge of point solutions at a ridiculously high cost.
Granted, not every vertical faces this issue to the same degree, but it is especially painful in the public sector. The finger often gets pointed at IT Ops for being too costly, but the real source is, ironically, a group that prides itself on saving money – the procurement group. Procurement is trying to save money in the best interest of the business, but an approach purely focused on capex often hurts the business.
Cloud computing is forcing one of the greatest collective IT modernization efforts in our history. It’s time that procurement processes join us in the 21st century as well.
Update: This morning (February 5th) I discussed this particular issue with a client. In their case, standardization was a mandate set at the VP level that impacts all business units. The mandate changed the role of procurement to one of standardization enforcement, with the expectation of getting better volume discounts by working with fewer vendors. He also mentioned the added benefits around opex costs. The procurement team no longer looks for the best deal in terms of upfront cost. Instead, it checks whether a product already exists within the approved vendor set and requires the business units to work off the approved list. It’s a significant shift that he said will take multiple years to complete, but they expect considerable benefits in terms of lower costs and better SLAs. Currently they are working with individual units to determine the standard for other infrastructure components such as networking and storage.
Category: Cloud Virtualization Tags: cloud
by Chris Wolf | December 11, 2012 | 2 Comments
Heterogeneous virtualization has been a hot topic among clients and last week at the Gartner Data Center conference in Las Vegas I presented a session on the subject. During the session, I polled the audience on their heterogeneous virtualization plans. Fifty participants responded to each polling question.
The first question I asked was about the current hypervisors that were deployed (note that the values are the number of respondents and not a percentage).
As you can see, most participants used VMware vSphere as expected, and there was a good mix of Hyper-V, XenServer, and some RHEV and Oracle VM.
It’s one thing to have multiple hypervisors, but not everyone is using multiple hypervisors to run production server applications in their data centers. That’s why I asked attendees which hypervisors they were using to run production server applications.
Notice that the drop was pretty significant. In the first poll, 44 non-VMware hypervisors were used. In the second poll, that number dropped to 25. The drop is consistent with an important but often unreported multi-hypervisor trend – while most organizations are using multiple hypervisors, most are not using multiple hypervisors for their production server applications (Oracle VM is a common exception). The second or third hypervisors deployed within an organization are often used to support branch office or departmental deployments. The fact that the additional hypervisors are being used is important, but so is understanding the use cases.
With that in mind, I also asked attendees about their plans for a single hypervisor.
Most (57%) planned to use a single hypervisor for production server workloads that required DR, with DR simplicity being the primary driver behind that decision. Clients frequently tell me that they fear that multiple hypervisors will recreate some of the same DR challenges that they initially solved with server virtualization. In addition, the OPEX concerns are real. Clients doing heterogeneous virtualization today almost always have a separate management silo for each hypervisor. When political or geographical issues preserve IT silos, the per-hypervisor silos might not be too big of a deal. However, organizations looking to be more centralized and efficient should aim for higher degrees of standardization.
Does this data mean that VMware wins? Not necessarily. I’ve had many calls with clients that are considering switching to Hyper-V as their standard virtualization offering. That switch would take place over a 3-5 year period, with the end goal of having a homogeneous virtualization layer. If VMware is smart, it will focus on the OPEX and DR benefits of its homogeneous solution, while still offering heterogeneous management throughout its stack to give customers choice. It’s clear that there is plenty of interest in best-of-breed solutions as well, so opportunity exists for all vendors.
In 2013 we will spend a lot of time helping our clients unleash their inner service provider. That will involve taking some nontraditional approaches to data center standardization and optimization to help reduce TCO and improve efficiency and scalability. Stay tuned for more information on that subject.
What do you think? I’m curious to hear your plans around heterogeneous virtualization.
Category: Cloud Server Virtualization Tags: citrix, cloud, microsoft, oracle, redhat, Virtualization, vmware
by Chris Wolf | December 10, 2012 | 5 Comments
At the Gartner Data Center conference in Las Vegas last week I asked several polling questions regarding desktop virtualization adoption plans and trends, and thought that they were worth sharing. Note that the poll was taken in my session on “Desktop Virtualization: Tales from the Trenches,” so the audience was already at least considering the technology.
The first question I asked was regarding business drivers.
As you can see above, the majority of respondents wanted to use the technology to reduce TCO, while giving users a “Follow-me desktop” was a close second. We have multiple clients that have been able to reduce TCO by 10% or more, so the expectations are legitimate.
The next polling question looked at virtual desktop adoption goals.
Note that 11-30% seemed to be the sweet spot, while other organizations had more aggressive targets, and some had less. We talk to many clients that are using virtual desktops for a variety of use cases, so the range of answers was expected. Some healthcare organizations see the technology reaching the majority of their doctors and clinicians. Other verticals are using virtual desktops for remote worker and remote office support. In fact, I spoke to several clients at the conference who were expanding to Eastern Europe and the Asia Pacific regions. They didn’t want to hire any IT staff to manage the remote offices, so the virtual desktop was a sound investment for them.
I often get asked about virtual desktop vendor preferences and the survey respondents pointed to a near even split between Citrix and VMware, along with growing interest in Microsoft.
We still see Citrix having a slight edge among Gartner clients that we speak with each day; however, Citrix should take note of the poll response that several organizations see VMware as a capable alternative. Note that the poll sample was from 105 conference attendees.
The last question I asked was about storage preferences. This question was a little more involved, and about half of the poll participants responded to it.
Attendees could select multiple options, and while the enterprise storage array features were expected, the interest in the native hypervisor features such as IntelliCache and View Accelerator was a bit of a surprise. However, virtual desktops are capex-sensitive and when native platform technologies can be used, it’s a logical first option. Still, oftentimes specialized storage is closely evaluated by organizations looking to reduce their storage capex, and that’s where vendors like Nutanix and Tintri get a look. Also, we often see vendors like Atlantis Computing and FusionIO brought in to address storage performance scalability challenges as the environment grows to 1,000 or more users.
Our average client spends anywhere from 40-60% of their desktop virtualization budget on storage, so it’s really important to take the time to get the storage architecture right the first time. The alternative often involves going to the CFO six months into the project, saying “My bad,” and requesting more budget.
What do you think? Any surprises in this year’s polls?
Category: Client Virtualization Tags: citrix, gartnerdc, microsoft, vmware
by Chris Wolf | August 3, 2012 | Comments Off
This year at Catalyst we are going out with a bang, with industry heavyweights Brad Anderson (Microsoft) and Simon Crosby (Bromium) offering keen insights into mobility, application, data, and endpoint futures. If that’s not enough, some of my Gartner colleagues (Tom Austin, Larry Cannell, and Ken Agress) will share their knowledge of mobility futures as well. If you’re at Catalyst, be sure to stay through the Thursday morning sessions or you’ll miss out on great perspectives regarding future planning considerations. If you haven’t signed up, it’s not too late! Besides, is there a better place to be in August than in San Diego?
Here’s more information on the sessions that we have in our Mobility Futures track.
Windows, SaaS, and Mobile: Bridging the Divide (Chris Wolf)
Workspace aggregators are an emerging technology that allows users to connect to applications and data from a variety of sources, including virtual desktops, server-based computing, software as a service (SaaS) and mobile. This session looks beyond the hype and provides specific insights into how organizations should leverage these solutions to be more people-centric in their approach to application delivery. Key Issues:
• Workspace aggregator use cases
• Vendor landscape and competitive differentiation
• Practical guidance for moving forward
Mobile Apps and the Contact Center: The Future of Customer Service (Ken Agress)
As more and more enterprises create applications to meet their customer needs, service challenges and opportunities arise. These apps offer the organization the opportunity to provide customers with better access to information and service, while gathering more useful data on customer behaviors and needs. Key Issues:
• What apps are available?
• What apps are on the horizon?
• How can apps change the contact center and customer service landscape?
The Everywhere Office: From Myth to Reality (Tom Austin, Larry Cannell)
No longer bound to a desktop computer while at home, employees expect to use their smartphone or tablet to connect with colleagues and access information. This session balances analysis with advice on how to prepare for these changes. Key Issues:
• The influence of mobility on the changing nature of work
• How mobility enables a "personal cloud," which redefines the role of the PC
• The impact of mobility on communication and collaboration technologies
• IT’s role in enabling a person-centric, mobile and social work environment
Industry Point of View: Forget the Desktop; It’s All About Me! (Simon Crosby)
The desktop revolution calls for a profound change in the trustworthiness of our infrastructure. We need systems that are inherently secure — by design. If such a thing already existed, then the mess of practices around VDI, PC configuration and life cycle management, data loss protection and endpoint protection would not exist. This Industry Point of View session will present a powerful new way to consider how technology can transform our own core principles, resulting in new ways that IT can pivot to support people, not devices.
Mastermind Interview With Brad Anderson, Corporate Vice President, Microsoft (Brad Anderson, Microsoft, with Chris Wolf)
Virtualization, SaaS, mobile apps, and cloud computing have ushered in a new era of possibilities and expectations for how we connect users to their applications and data. Microsoft is a company firmly rooted in the middle of this transition. But should you think of Microsoft as the gateway to your future, or the bridge to your IT past? Attend this enlightening session to hear Gartner challenge Microsoft Corporate VP, Brad Anderson, on your most pressing concerns.
Category: Client Virtualization Cloud Mobility Tags: GartnerCat
by Chris Wolf | June 21, 2012 | 8 Comments
Standardization Attention Deficit Disorder (ADD) or (SADD): (n) A condition in which one professes to support standardization, yet can’t help but be distracted by the newest, shiniest object – Opex costs be damned.
I’m a hypocrite. There. I said it. It’s almost therapeutic. Are you one too?
Here’s how I see it. We are all taking part in a great conspiracy. Many of us are both victors and victims in this circular history that we can’t help but repeat. End user organizations are spending way too much on IT services, and we are all at fault. Why? Let’s start with complexity. Every management vendor wants complexity. The more complex the environment, the more software and professional services they can sell. Hardware vendors? Ditto. Startups challenging incumbents? You got it. If you standardized, you wouldn’t buy their products and they’d be out of business. Again – the newest, shiniest object is better than what you already have. And it’s cheaper! What’s not to love? IT pros often like complexity too, because it lets us flex our intellectual muscle and show our value. Consultants and analysts? Check and check. Complexity equates to a greater need for advisory services.
Are we all part of one of the greatest con jobs in history? Sometimes it feels that way. We can always find a “business reason” to make things harder than they need to be. Or maybe we’re the victims? We’re being duped by a community that professes the values of standardization on one hand, but on the other goes to great lengths to justify anything but standardized approaches to IT challenges. Many of us suffer from SADD. So how on earth can a highly standardized approach to delivering IT services hold our attention? If a lack of standardization costs the business more money long term, then so be it.
We need to pit vendors against each other. Right? Groupthink often implies that it’s the better strategy. But who is it really better for? Vendors? Consultants? Analysts? The IT department? How often do we wonder if it’s best for the business? I’d argue not nearly enough. After all, in the quest to save 10-20% on one solution, what are you paying for new consulting, advisory services, and management products to deal with the added complexity?
A few years ago I blogged about emerging cloud technologies and what I called “the Wal-Martification of IT,” stating:
Think of public cloud providers as the neighborhood Wal-Mart. In many towns across the US, small businesses were swallowed by Wal-Mart. Many of these businesses were unwilling or unable to change their existing business processes or target markets in the wake of Wal-Mart’s entrance to their community. At the same time, Wal-Mart doesn’t exist in ghost towns. Look around most Wal-Marts and you’ll still see plenty of successful businesses.
In the Wal-Martification post I urged organizations to change their ways, but three years later I see history repeating itself. Look at your virtualization and private cloud initiatives. I frequently find myself as a minority voice holding the position that heterogeneous virtualization is a bad idea for production server workloads. In discussions with clients I often raise the following issues:
- DR complexity (capacity management is more complex when you need to ensure available capacity at each site on a per-hypervisor basis)
- Reconfiguring/replacing operational software
- VM conversion, driver replacement and scheduled downtime
- Vendor support
- Organizational processes and governance, and the creation of new management silos (many private cloud initiatives result in the collapse of management silos, while it’s possible that heterogeneity can create new ones)
- Quality assurance – oftentimes each hypervisor stack requires a new QA check due to the differences in performance validation, configuration, and operational management requirements
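The DR capacity point in the first bullet can be made concrete with a toy model. The numbers here are invented for illustration: with separate hypervisor silos, spare capacity in one silo cannot absorb failover demand from another, so each silo must be planned independently:

```python
# Toy capacity model (made-up numbers): per-hypervisor DR planning vs. a
# single pooled failover capacity at the recovery site.
def can_fail_over_pooled(demand_total, spare_total):
    """Single hypervisor: all spare capacity at the DR site is fungible."""
    return demand_total <= spare_total

def can_fail_over_siloed(demand_by_hv, spare_by_hv):
    """Multiple hypervisors: each silo must absorb its own failover demand."""
    return all(demand_by_hv[hv] <= spare_by_hv.get(hv, 0) for hv in demand_by_hv)

spare = {"vsphere": 30, "hyperv": 20}   # 50 units of spare capacity in total
demand = {"vsphere": 35, "hyperv": 10}  # 45 units of workload to fail over

print(can_fail_over_pooled(sum(demand.values()), sum(spare.values())))  # True
print(can_fail_over_siloed(demand, spare))  # False: the vSphere silo is short
```

The total spare capacity covers the total demand, yet the failover still fails in the siloed case – which is exactly why heterogeneous DR needs per-hypervisor headroom at every site.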
A multi-hypervisor strategy is an effective way to transition from one virtualization platform to another, but it has serious tradeoffs if it’s the end goal for the production server workloads in your data center. Adding hypervisors for one-off siloed initiatives is often practical, but becoming less standardized in your data centers is anything but efficient.
As you further build out your private clouds, will you follow the service providers who seem to have the SADD antidote and go with a highly standardized infrastructure stack? Or will you go the heterogeneous route? There’s a huge community hoping that you make your private cloud as non-standardized and complex as possible. Their profits depend on it. What are you going to do? Am I out of my mind on this one?
Category: Cloud Server Virtualization Virtualization Tags: cloud, Virtualization
by Chris Wolf | September 20, 2011 | 6 Comments
Remember the days of Windows NT Server? I was among the many who mocked it as a serious data center server operating system. Then came Windows 2000 Server, and perceptions began to change. With the release of Windows Server 2003, Microsoft turned the tide of server OS dominance in the data center, placing Microsoft on a path to where the majority of servers would run a Windows OS. What initially seemed like a pipe dream became reality, and I was among many who were wrong about Microsoft’s chances as a dominant server OS vendor.
That takes us to last week’s Microsoft Build conference, where Microsoft demonstrated several significant feature enhancements coming to the next generation of Hyper-V. If you compare Hyper-V maturity to Windows Server OS maturity, this release could be the equivalent of Windows Server 2003. Microsoft unveiled many new features that position Hyper-V as a serious enterprise-grade virtualization platform.
I was most impressed by the improved virtual switch architecture and extensibility features. For years, I had seen the lack of extensibility and monitoring capabilities in the Hyper-V virtual switch architecture as a barrier to supporting multitenant environments. While Hyper-V today can offer unicast isolation for traffic on shared virtual switches and support VLANs, it does not support any type of port spanning or promiscuous monitoring. That made it difficult to monitor and enforce network security in Hyper-V virtual networks, and made the hypervisor ill-suited for some large enterprise and many cloud IaaS scenarios. Those barriers are removed in Windows 8.
In addition to rich network monitoring and enforcement capabilities, Hyper-V’s extensible switch architecture opens the door for technology partners in the networking and security space to reside in the Hyper-V fabric. Cisco has already announced support for the Nexus 1000V on Windows 8 Hyper-V. I expect other leading players in the networking and security space to follow suit. Juniper, HP, Riverbed, and F5 are good candidates to also offer Hyper-V virtual network appliances. Citrix is already there (i.e., NetScaler VPX for Hyper-V).
One other architectural element of significance is that virtual networking and security requirements are embedded in each Hyper-V VM’s metadata file. So prior to any live migration job, for example, a VM’s underlying third-party dependencies are validated on the target host. Keeping relevant network and security metadata with the VM ensures that mobility constraints can always be validated easily before any migration job. These features are significant. Having an extensible network architecture, extensible VM metadata, and extensible management (i.e., via the System Center suite and third-party integration) isn’t Microsoft following VMware. It’s leadership. I have communicated extensibility requirements to VMware for years, and I’m happy to see Microsoft stepping up and addressing customer and partner extensibility requirements.
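To make the idea concrete, the metadata-driven pre-migration check described above can be sketched as follows. This is a hypothetical illustration only — the function, field names, and data structures are my own inventions, not the actual Hyper-V VM metadata format or any Microsoft management API:

```python
# Hypothetical sketch of metadata-driven pre-migration validation.
# Field names and structures are illustrative; they do not reflect
# the real Hyper-V VM metadata file or management interfaces.

def can_migrate(vm_metadata, target_host):
    """Check that every third-party dependency recorded in the VM's
    metadata (e.g., a virtual switch extension) is installed on the
    target host. Returns (ok, list_of_missing_dependencies)."""
    required = vm_metadata.get("network_extensions", [])
    available = set(target_host.get("installed_extensions", []))
    missing = [ext for ext in required if ext not in available]
    return (len(missing) == 0, missing)

vm = {"name": "web01", "network_extensions": ["Vendor vSwitch Forwarding"]}
host_a = {"installed_extensions": ["Vendor vSwitch Forwarding", "Capture Ext"]}
host_b = {"installed_extensions": []}

print(can_migrate(vm, host_a))  # (True, [])
print(can_migrate(vm, host_b))  # (False, ['Vendor vSwitch Forwarding'])
```

The design point is simply that because the constraints travel with the VM, a migration scheduler can refuse to place a VM on a host that lacks its network or security dependencies, rather than discovering the mismatch after the move.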
There are numerous feature enhancements in Windows 8 that address scalability, performance, security, storage, and management. Rather than offer a list of my own, these posts provide really good rundowns of the forthcoming improvements:
Finally, I think it’s important to consider the potential industry impact. Paul Maritz is intimately familiar with the Microsoft playbook. He knows exactly what Microsoft is doing in terms of strategy and execution. At this point, the question is whether Hyper-V can realize the same success Microsoft saw with the release of Windows Server 2003. Microsoft doesn’t have to match VMware feature-for-feature. It simply needs a good enough alternative with all of the features that enterprises care about. That being said, changing a hypervisor can be a very costly endeavor. The typical enterprise has invested in operational software (e.g., security, backup, orchestration, capacity management) that directly ties into the vSphere hypervisor. Replacing a hypervisor doesn’t simply mean converting a VM format. There are numerous potential costly implications for operational/management software updates or replacement, training, and process updates. So even if organizations are really excited by Windows 8 Hyper-V, I don’t expect wholesale migrations.
Instead, incremental deployments to a new Hyper-V infrastructure are more likely. That might begin with a refresh of Microsoft’s next generation server applications (e.g., Exchange, SQL Server, and SharePoint). That early success could lead to further migrations at the refresh interval of other applications. For VMware’s part, it will need to tout the benefits of staying with a single hypervisor as part of hybrid cloud architectures, and make the case for the value of homogeneity across its integrated product portfolio.
Does the hypervisor follow the way of the database server, where enterprises rely on both Oracle and Microsoft, for example, for different classes of workloads? Or do organizations stay mostly homogeneous? I think that a parallel to the database server market is a possibility, but to be clear, this is far from an apples-to-apples comparison. For example, multiple hypervisors also bring with them added complexity when it comes to supporting business continuity and disaster recovery. Resources may be bound to specific clusters by hypervisor association, making it impossible to simply move a resource to another hypervisor running on systems with spare capacity in order to resolve a performance spike (QA processes typically include the hypervisor, so while V2V conversions/migrations are technically possible, they’re typically not practical for dealing with real-time performance issues). For DR, organizations may need to pre-stage multiple hypervisors at a DR site, potentially adding to the cost of infrastructure required to support DR. The intricacies of multi-hypervisor support are a very long discussion, and definitely beyond the scope of this post.
Regardless of where you sit in terms of hypervisor loyalty, you will benefit when Windows 8 ships. Organizations that wish to remain homogeneous VMware shops will be able to put more price pressure on VMware during contract renewals. Organizations that wish to be more heterogeneous in their approach to virtualization will benefit from a lower-cost, robust Microsoft platform that, on paper, looks very promising today.
I said quite a bit and did a lot of thinking out loud in this post. I would love to hear your thoughts.
Category: Server Virtualization Tags: microsoft, vmware
by Chris Wolf | September 8, 2011 | 4 Comments
In a recent Gartner field research study, two early internal IaaS cloud adopters noted that if Amazon was the benchmark by which they are measured in terms of cost, then they had to make tough decisions regarding best-of-breed vs. good enough. In particular, the two clients cited whether deploying a third-party virtual switch (i.e., Cisco Nexus 1000V) was absolutely necessary, especially if the cost made the internal cloud less competitive with Amazon. These organizations weren’t doing apples-to-oranges comparisons either. They came up with a per-VM cost broken down by both infrastructure and management/operations software. The cost of operational software was added to the Amazon cost to create an apples-to-apples comparison.
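The cost model those clients described can be sketched in a few lines. The dollar figures below are made up purely for illustration — the point is the structure of the comparison, not the numbers:

```python
# Illustrative per-VM cost comparison of the kind described above.
# All dollar amounts are invented for this example.

def per_vm_cost(infrastructure, ops_software):
    """Total monthly per-VM cost = infrastructure + management/operations
    software, so both options carry the same cost categories."""
    return infrastructure + ops_software

# Internal cloud: infrastructure cost reflects premium components
# (e.g., a third-party virtual switch).
internal = per_vm_cost(infrastructure=120.0, ops_software=40.0)

# Amazon benchmark: the same operational-software cost is added to the
# raw instance price to keep the comparison apples-to-apples.
amazon = per_vm_cost(infrastructure=85.0, ops_software=40.0)

print(f"internal: ${internal:.2f}/mo  amazon: ${amazon:.2f}/mo")
# The delta is the premium that best-of-breed infrastructure choices
# must justify in delivered value.
print(f"premium to justify: ${internal - amazon:.2f}/mo")
```

Framed this way, the "best-of-breed vs. good enough" question becomes concrete: the premium components have to be worth the per-VM delta, or the internal cloud loses the cost comparison.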
Enterprises are having to make tough choices regarding virtualization technology and all associated infrastructure and management products. To deliver cloud services, the enterprise has to be able to provide services quickly, securely, and reliably. In other words, the cloud service should come with the expectation “that it just works.” That’s a tall order for increasingly complex data center infrastructures. At this point, you may be wondering what any of this has to do with VMworld. Let me explain.
VMware made numerous data center and cloud related announcements at VMworld, including:
I’m not here to dissect all of the announcements. For good perspectives on the vCloud Connector and Global Connect announcements, take a look at Lydia Leong’s and Kyle Hilgendorf’s posts. That being said, I wanted to comment on the body of work. VMware’s vCloud web site lists a growing number of provider partners, and many VMware customers I speak to about hybrid cloud voice concerns about the need for hypervisor parity. That’s because they include the hypervisor as part of their application QA processes. As a result, they see it as less costly to move a VM between instances of the same hypervisor type. I had blogged about this subject before. Bottom line – for many enterprises seeking mobility between data centers and cloud, VMware has a home court advantage. Other providers (e.g., Amazon) maintain the advantage for applications deployed straight to the cloud, with the enterprise having no intention to pull them back in.
VMware’s hybrid cloud strategy is quickly evolving, many customers are onboard with it, and at the same time, those customers are starting to question where they can save costs. Competitors such as Microsoft have their own thoughts on cost. Assuming organizations maintain a homogeneous VMware IaaS cloud, instead of trying to cut costs at the hypervisor/virtual infrastructure layer, they’ll look elsewhere. Again, if Amazon is the benchmark, the enterprise has to be sensitive to cost.
To VMware’s credit, they have been more transparent with partners regarding their strategic direction. There is no question that storage, networking, security, and management features that VMware considers essential to hybrid cloud infrastructure will be in the vSphere platform. I had lengthy discussions with two security vendors at the show, and they were comfortable with how they would innovate around vSphere moving forward. I got the same impression from the storage vendors I met with.
Today we have two general classes of cloud IaaS platforms: commodity “I don’t care” infrastructure, and enterprise “I do care” infrastructure. Enterprises use commodity infrastructure (e.g., AWS) for some workloads and enterprise (e.g., vSphere) for others (I know; it’s not that black-and-white. Stay with me). With an increasing number of features (VXLAN is the latest example) going into the hypervisor, one could say that VMware is creating a third tier – call it a “good enough enterprise tier,” or whatever you like. That tier, in my opinion, will try to compete with both the “I don’t care” and “I do care” infrastructure options. It will be lighter on third party value-adds and heavy on VMware products. This should concern some VMware technology partners. Their job is to convince customers that any “good enough” tier really isn’t good enough without their value-add.
If you’re a customer, you should be thrilled. Amazon has thrown down the gauntlet on cost, and the industry has to follow. VMware and other virtualization vendors (XenServer IntelliCache is a good example) are commoditizing select infrastructure features that previously had come at a premium. This means that infrastructure software and hardware vendors have to step up their game. They have no choice but to innovate. At the same time, they have to be increasingly cognizant of the fact that “good enough” is becoming a more serious competitor.
At VMworld, VMware showcased a vision for a highly robust IaaS platform. We’re in a significant state of transition, and there will be some major winners and losers. If we go forward 10 years and VMware is the winner, then who are the losers? Or is VMware heading down the wrong path? I’d love to hear your thoughts.
Category: Cloud Server Virtualization Tags: citrix, microsoft, vmware, vmworld