Disclaimer: Gartner analysts use their blogs to share their personal views and opinions on subjects close to their hearts.
In my previous blog post I discussed how virtual desktops are a growing market, and that I give most of the credit to an improved user experience (as opposed to Server Based Computing). What I didn’t mention is that there is a glass ceiling with virtual desktops: historically it just wasn’t possible to use virtual desktops under some conditions. Today, an announcement is being made that I believe will shatter that ceiling and also make waves in some unexpected areas. What’s today’s announcement? NVIDIA has created a new GPU, the VGX (Virtual GPU Experience), that can be shared among multiple VMs.
Having a GPU in a virtual desktop is not a new concept; technologies have been around for a while that allow a GPU to be passed directly through to a VM. These technologies have been restricted to a one-to-one relationship, i.e., one VM gets one physical GPU. This may work for a small business, but if you want to virtualize thousands or tens of thousands of desktops, getting servers that support multiple GPU slots is only the start of your problems; you still have to power them, cool them, and find space for them. What’s interesting about today’s announcement isn’t that you can get a GPU into a VM; it’s that you can share a single GPU with multiple VMs. Also, this GPU has been specifically designed to slide into many commonly used servers, which means power and cooling have already been considered by the manufacturer.
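To make the one-to-one vs. shared distinction concrete, here is a minimal Python sketch of the hardware math. It is purely illustrative (none of the names or numbers come from NVIDIA; the 25-VMs-per-GPU density is an assumption for scale):

```python
import math

# Purely illustrative model of the sizing problem -- not NVIDIA's API.
# With 1:1 passthrough, GPUs needed == number of VMs; with a shared
# GPU, GPUs needed == ceil(VMs / density).

def gpus_needed_passthrough(num_vms: int) -> int:
    # Classic GPU passthrough: every VM owns an entire physical GPU.
    return num_vms

def gpus_needed_shared(num_vms: int, vms_per_gpu: int) -> int:
    # Shared GPU: several VMs are multiplexed onto one physical GPU.
    return math.ceil(num_vms / vms_per_gpu)

if __name__ == "__main__":
    vms = 10_000       # "tens of thousands of desktops"
    density = 25       # assumed VMs per GPU, for illustration only
    print(gpus_needed_passthrough(vms))      # 10000 GPUs to rack, power, cool
    print(gpus_needed_shared(vms, density))  # 400 GPUs
```

Even with a generous density assumption, the difference is the whole story: the passthrough model scales linearly with users, while the shared model scales with users divided by density.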
Unfortunately, today’s announcement does not include every detail I got to see under NDA; specific specs will be released at a later date, when the manufacturer is ready. Needless to say, I asked a lot of questions about power, cooling, racking, and user density, and honestly, even at the pre-release stage I saw the board in, I was pleased with the numbers they were sharing. It was apparent to me that they were looking at this product as a solution that would scale, not just an adjunct product to the virtual desktop market.
So why do I think this product will make waves:
- Application Compatibility: The obvious answer is application compatibility. From medical imaging to high-end engineering workloads, there are applications out there that rely on a physical GPU (and that won’t work with a software-emulated one). Having a real GPU makes it possible for these applications to run on a virtual desktop. The next step will be to see how well application vendors take to this GPU and whether or not they will support it.
- Reduce CAPEX: There are very few use cases where virtual desktops actually reduce CAPEX; in just about every design I’ve worked on, the pitch for virtual desktops is an OPEX pitch (and there is nothing wrong with that). However, in these high-end workstation environments the pitch could potentially change. I know some environments spend anywhere from $2,000 to $10,000 per engineering desktop, and these desktops have none of the advantages of virtual desktops. So not only are they very expensive to purchase (CAPEX), they are expensive to maintain (OPEX). If these same environments move to a virtual desktop model, there is potential to save on CAPEX (see the back-of-envelope sketch after this list).
- Better User Experience: The user experience is the big win for virtual desktops; people genuinely like it (or genuinely have no idea that they are running on a virtual desktop). Having a GPU allows IT departments to deliver a better user experience for things like Aero or transparent windows.
- CPU Offload: One thing I find particularly interesting about this technology is the potential to offload video rendering from the x86 architecture. This interests me because the Teradici APEX2800 card does something similar for PCoIP, but it only works with VMware and only offloads PCoIP encode/decode. The shared GPU technology is cross-platform; Citrix, VMware, and even Microsoft will benefit from it. This removes some of the stickiness from a hardware choice while accomplishing the same task: reducing the amount of CPU required to render video.
- User Density: The next obvious step from CPU offload: if you can decrease the amount of CPU required per user, you can potentially get more VMs per core (more users per host). This is a standard pitch for increased user density. Now, I have a lot of arguments against this pitch, but it’s a pitch nonetheless (the sketch after this list shows the basic arithmetic).
- Protocol: I was really taken by surprise when I saw the mechanics behind the shared GPU. Not only does it do everything I’ve already stated, the GPU can actually send an H.264 stream directly to the endpoint device. This may not sound like much, but it could have a major impact on the remoting protocol. I’ve asked a lot of questions about this, and if I’m understanding correctly, the shared GPU can send a video stream down a separate channel using H.264, with the potential to improve as codecs improve. This means it could work outside the standard protocol and potentially open the floodgates for the future of protocols in the virtual desktop market. More importantly, there are bandwidth ramifications to this stream: it was shared with me that in testing, the H.264 stream was able to beat MMR from a bandwidth standpoint. That’s spectacular, as it means we could potentially get the benefit of the MMR experience (low bandwidth) without the drawback (i.e., the limited set of file types it supports).
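To put rough numbers on the Reduce CAPEX and User Density points above, here is a hedged back-of-envelope sketch. Every figure in it (server cost, board cost, users per host) is a placeholder assumption I made up for illustration; only the $2,000 to $10,000 workstation range comes from the post itself, and the sketch deliberately ignores storage, licensing, and endpoint devices, which can dominate real designs:

```python
# Back-of-envelope CAPEX comparison -- all numbers are assumptions
# for illustration, not vendor figures or NDA material. Storage,
# licensing, and endpoint costs are deliberately left out.

ENGINEERS = 200
WORKSTATION_COST = 6_000   # midpoint of the $2,000-$10,000 range above
SERVER_COST = 20_000       # hypothetical virtualization host
GPU_BOARD_COST = 5_000     # hypothetical shared-GPU board
USERS_PER_HOST = 50        # hypothetical density once video is offloaded

physical_capex = ENGINEERS * WORKSTATION_COST

hosts = -(-ENGINEERS // USERS_PER_HOST)   # ceiling division
virtual_capex = hosts * (SERVER_COST + GPU_BOARD_COST)

print(f"Physical workstations: ${physical_capex:,}")  # $1,200,000
print(f"Virtual desktop hosts: ${virtual_capex:,}")   # $100,000
```

The exact savings obviously hinge on the density number, which is exactly why the user-density pitch (and my skepticism of it) matters.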
When I think about a shared GPU, I normally think it’s a good idea because it enables IT departments to push more workloads into the virtual desktop space. However, now that I’ve seen the implementation, I think I’m seeing a potential shift in the virtual desktop market. If the product lives up to the hype (which, admittedly, I’m currently creating), I think it has the potential to make some big waves.
——————————————
Follow me on Twitter: @gunnarwb
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
4 Comments
Great write up Gunnar! I am hoping the price for a VGX board is not outrageous. It looks like The Verge released details around the specs here:
http://www.theverge.com/2012/5/15/3022346/nvidia-geforce-grid-cloud-virtual-gpu
“Each VGX board itself contains four GPUs, each with 192 CUDA cores and 4GB of frame buffer — it also has 16GB of onboard memory and uses the standard PCI Express interface.”
-Scott
Thanks for sharing Scott, I try to err on the side of caution when it comes to specs, as I’m told those in confidence by the manufacturer, so I’d rather they just release them. I’m glad I can now say that the goal is 100 VMs per card, or 25 per GPU.
Great write up. I think this is just the beginning of a new era. I hope Intel jumps into this market soon, which would be great for consumers and would lower the price of virtual GPU technology.
Don’t forget that the APEX card from Teradici only offloads the PCoIP compression and encryption, not the graphics. A server with BOTH the APEX card (doing PCoIP) and a VGX card (doing graphics) would be pretty awesome… without graphics and protocol to deal with, the CPU would once again be completely free to be just that.
Just needs VMware to support VGX too…