Gartner Blog Network

VM Templates Should Include at Least 2 Virtual Hard Disks

by Chris Wolf  |  August 5, 2010  |  14 Comments

Today I was working with a client on their next-generation data center architecture. They are building a highly virtualized data center with the goal of offering cloud IaaS to other departments within the organization. While talking about VM templates, we discussed a favorite topic of mine – virtual hard disk structure.

For several years, I have recommended to clients that they use at least two virtual hard disk files per VM. One virtual disk file is used for the OS and application files, and a second virtual hard disk is used for paging, swap, and temp files. Optionally, a third virtual disk can be created for data files.

The result for VMware environments would be at least two .vmdk files per VM. For Hyper-V, that would mean two .vhd files. On the surface, this may seem like an academic exercise, but it’s important.

Suppose that sometime down the road you wanted to begin leveraging asynchronous replication to replicate VM data to another site. If you have your transient data (e.g., pagefile or swap file) on a separate virtual disk, it’s much easier to filter that data so that it’s not replicated. You may say, “Well, if I need that functionality I can just configure it later,” and that’s true. However, remapping a pagefile to a new disk, for example, requires a reboot. The result is downtime.

If you create a separate virtual hard disk for the pagefile and include it in your default template, all newly created VMs will be able to take advantage of storage features such as asynchronous replication more elegantly and efficiently. Asynchronous replication is just one reason. The amount of storage intelligence and flexibility creeping into virtual infrastructure is rapidly expanding; VMware’s new vStorage APIs for Array Integration (VAAI) are a good example.

In the end, what’s the harm of having a second virtual disk in your default VM template? The result is one more file in the VM’s folder. If you don’t separate the pagefile and wish to manage it more intelligently at a later time, the result may be downtime. What do you think?
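
The replication-filtering idea is easy to sketch. This is a hypothetical illustration, not any vendor’s API: it assumes transient-data disks can be recognized by a naming convention (e.g., “pagefile” in the disk file name), which is only possible if the pagefile actually lives on its own virtual disk as recommended above.

```python
# Sketch: pick which of a VM's virtual disk files to replicate, skipping
# transient-data disks. The naming convention and file names are made up;
# a real setup would key off datastore placement or replication-group config.

TRANSIENT_MARKERS = ("pagefile", "swap", "temp")

def disks_to_replicate(vmdk_files):
    """Return the subset of a VM's virtual disk files worth replicating."""
    keep = []
    for disk in vmdk_files:
        name = disk.lower()
        if any(marker in name for marker in TRANSIENT_MARKERS):
            continue  # transient data is on its own disk, so it is easy to skip
        keep.append(disk)
    return keep

vm_disks = ["web01-os.vmdk", "web01-pagefile.vmdk", "web01-data.vmdk"]
print(disks_to_replicate(vm_disks))  # ['web01-os.vmdk', 'web01-data.vmdk']
```

If the pagefile shared the OS disk, no filter like this would be possible without first remapping the pagefile – which is exactly the reboot-and-downtime scenario described above.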


Category: cloud-computing  server-virtualization  

Tags: hyper-v  virtualization  vmware  

Chris Wolf
Research VP
6 years at Gartner
19 years IT industry

Chris Wolf is a Research Vice President for the Gartner for Technical Professionals research team. He covers server and client virtualization and private cloud computing. Read Full Bio

Thoughts on VM Templates Should Include at Least 2 Virtual Hard Disks

  1. Kevin says:

    While I generally like the idea, the more I think about it….I like it even better.

    First of all I don’t believe that using two disks adds either complexity or management overhead, so it’s just a matter of pros and cons.

    Some pros are replication as noted above, less fragmentation, and the option to use different storage pools. Also, Windows Server 2003 has restrictions on growing a system partition, and at one point VMware did not support paravirtualized drivers for a boot partition (which is no longer the case, I believe). And if the balloon driver kicks in, the I/O stress will be isolated to the swap drive rather than the OS drive.

    Cons? Well, you will still want a small page file on the system partition to support a minidump for support purposes, but you can engineer this into your templates.

    Another “con” could be storage consumption, but there are options here. We don’t follow the traditional rule of 1.5 times physical memory for our VMs. With features like TPS and now memory compression (vSphere 4.1), you’re much better off feeding the VM sufficient RAM and limiting the disk I/O and degradation of swapping. If you use the old 1.5 x physical rule, you’ll waste a lot of SAN space you don’t really need once you multiply across all your VMs. We use something significantly smaller based on the workload profile and just monitor for heavy paging in the guests.
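
Kevin’s sizing arithmetic is easy to illustrate. A rough sketch with made-up numbers (100 VMs with 4 GB RAM each, and a hypothetical 2 GB workload-based pagefile):

```python
# Sketch: SAN space consumed by pagefile disks across a VM farm, comparing the
# traditional "1.5 x physical RAM" rule to a smaller workload-based size.
# All numbers are illustrative, not taken from the post.

def total_pagefile_gb(vm_count, per_vm_pagefile_gb):
    """Aggregate pagefile storage across all VMs, in GB."""
    return vm_count * per_vm_pagefile_gb

vm_count = 100
ram_gb = 4
old_rule_gb = 1.5 * ram_gb   # 6 GB per VM under the 1.5x rule
workload_gb = 2              # sized from observed guest paging instead

print(total_pagefile_gb(vm_count, old_rule_gb))   # 600.0
print(total_pagefile_gb(vm_count, workload_gb))   # 200
# The 1.5x rule consumes an extra 400 GB of SAN space for the same 100 VMs.
```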

  2. Rich Newton says:

    Is it possible to replicate any Windows Server OS partition without replicating the paging file? I would assume the OS would not start at the remote site.

  3. Chris Wolf says:

    Hi Rich,

    Thanks for the comment. I should have completed the thought in the blog post. When I discuss asynchronous replication with clients, I point them to vendor documents that describe the page file issue and how to work around it (e.g., how to stage a pagefile .vmdk at the recovery site). NetApp, for example, describes this issue in the following doc: You’ll see specific guidance in Appendix A.

  4. Jasnoor says:

    @Chris: It actually sounds pretty useful, come to think of it. One can simply back up/replicate the file containing the data and everything is set.
    @Kevin: Good point regarding the small pagefile for minidump purposes. It can come in useful sometimes.
    @Rich: I think Windows usually creates a new page file if the old page file is deleted (when the system is powered off, of course).

  5. Jasnoor says:

    @Rich: Sorry, I missed it. Chris is correct. You would still need to stage a .vmdk file for the new pagefile to be created in if you didn’t want the one in the boot partition to expand or the OS to crash.

  6. Kevin says:

    @Jasnoor Not sure but I am thinking that DR products like VMware’s SRM could help you automate that VMDK creation in the event of a failover.

  7. Jasnoor says:

    @Kevin: Probably. Another checkbox to be noted as part of the process.

  8. Hi Chris,

    I strongly agree with your post; this is actually our reference design for customer builds. We find, especially in VDI environments, that Windows temp/paging activity and the cache for virtualized applications are usually the source of most disk writes and should be on a separate drive.

  9. Hi Chris,

    A few comments: in order to use array-based async replication to accomplish what you are suggesting, you would have to have these pagefile vdisks reside on a separate LUN as well, right? Because array-based replication replicates at the block level. I understand that the new storage APIs will allow you a bit more granularity, but will they allow you to separate out the vdisk with the pagefile on the same LUN?

    If they can’t do that, then you not only have to have a separate vdisk, but that vdisk has to physically reside on a separate datastore on a separate LUN. And that is fine, because that is the only way you can really get performance out of the pagefile anyway, as it is then serviced by separate spindles, etc.

    That does become a bit cumbersome, though, no?

    For VDI, sure, you have a pagefile locally, but you only need one vdisk in that case; I’m not sure why we would need two. And you don’t want to replicate the pagefile vdisk, so that is not a problem there.

    Just my 2 cents; I would love to get your feedback on this though.


  10. One last thing: if you are using host-based replication, then you can pretty much filter the pagefile regardless of where it is.

    I see your point of view; just throwing out some ideas for mind share.


  11. Oops, my first comment did not make it for some reason.

    I agree with you on this, but let me ask you a few questions:

    1- If you are doing array-based replication, the only way to filter out the vdisk for the paging file is to place the pagefile vdisk on a separate LUN and separate datastore. I know that the new storage APIs give you a bit more granularity, but it is still block-level replication, so is there any way to filter it out other than placing it on a different LUN?

    That is a bit cumbersome. However, it would be the right way of handling the pagefile, as the performance benefits are only gained when you separate the pagefile onto a separate LUN, etc.


  12. Chris Wolf says:

    Eli –

    You raise good points. You’re right about the LUN separation today. The NetApp link I posted in an earlier comment describes a solution to that. To me this issue is one of flexibility. I’d like to have the option to place a pagefile on a separate LUN, for example, without penalty (i.e., a reboot). Using a separate .vmdk/.vhd gives me that option. Thinking forward, VMware’s VAAI may provide greater granularity at the .vmdk level and having the virtual disk separation would make it easier to exploit those innovations if/when they arrive.

    For HVD (VDI), there’s the potential for future benefits with localization of the pagefile to a DAS or local SSD. Again, creating the logical separation (i.e., separate pagefile disk) as part of the template doesn’t have to imply immediate usage. It’s more about having an easy way to “turn on” such features as the opportunity presents.

    You’re absolutely right on host-based replication. That would circumvent the need for the logical pagefile separation since it’s filtered anyway. In that scenario, I’d still prefer to use a separate virtual hard disk for the pagefile for the sole purpose of having the option to more efficiently use other forms of array-level replication later down the road.

    This is a good discussion. Thanks for your thoughtful comments.

  13. Chris.

    Certainly agree with many of the points raised in the article and in the comments throughout the thread of replies.

    I approach the placement of the VMDKs in a similar way: O/S boot drives located within VMFS datastores dedicated for ‘boot’, and the same for ‘data’. My rationale has been that SAN snapshot and replication frequencies can be tailored to meet application SLAs. As an additional benefit, customers who introduce de-dupe technologies on ‘Day 2’ can reap the rewards immediately on specific tiers of storage – assuming the ‘boot’ VMDKs sit within different cabinets / vendor technologies.


  14. Chris Runte says:

    Don’t forget additional benefits: by moving transient data (pagefile, swap, etc.) to a second disk on a second volume or LUN, it can be excluded from both backups and deduplication and, depending on your storage architecture, more easily managed, expanded, or held on alternate storage if needed.

