
Server Virtualization Performance

by Cameron Haight  |  March 9, 2010  |  3 Comments

Every month I meet with a group of Gartner clients to discuss the challenges (and benefits) they are experiencing as a consequence of investing in server virtualization technology. At our February meeting, the item on the agenda was performance management. I took the liberty of presenting a polling question to the group:

From what part of the virtual server infrastructure have most of your performance problems originated (note: the numbers sum to less than 100% due to rounding)?

  1. Individual VMs – 42%
  2. Host server (i.e., ESX) – 21%
  3. Cluster – 0%
  4. SAN/storage – 26%
  5. Network – 0%
  6. Other – 0%
  7. We have had no performance problems – 10%

Now, only 19 firms responded, so let's not make any hasty strategy changes. Still, I was somewhat struck by the answers, as I had assumed that No. 4 (SAN/storage) would have been the primary issue, since that is often what I hear (and no, it's probably not due to the SAN itself, but to decisions relating to the number of VMs per HBA, VMs per LUN, VM swapping, etc.). Interestingly, several comments ran along the lines of: while the initial finger-pointing was directed at the virtual infrastructure, the problem often turned out to be in the application itself (e.g., a misbehaving Java program). It's harder to fix code than to change a configuration parameter, so the onus usually fell first on the virtualization administrator.

As I thought some more about this, it reminded me of what I saw when I covered Java management years back. A client would complain about poor response times and assume it was a database problem (since the database seemed to be the component that was overutilized), but it sometimes turned out to be a poorly coded application (what sticks in my mind is the example of a single URL request resulting in many SQL queries). So, perhaps a case of the more things change, the more they stay the same.
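For those who haven't bumped into it, this pattern is sometimes called the "N+1 query" problem: one query fetches a list, and then one additional query is issued per item in that list. Here is a minimal JDBC sketch of the anti-pattern and its application-side fix; the table and column names are hypothetical, not taken from any client's code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical order-report code, for illustration only.
public class OrderReport {

    // Anti-pattern: one query for the order list, then one more query
    // per order. A single URL request fans out into N+1 queries, so the
    // database looks like the overutilized component.
    static List<String> loadReportNPlusOne(Connection conn) throws SQLException {
        List<String> lines = new ArrayList<>();
        try (PreparedStatement orders = conn.prepareStatement(
                "SELECT id, customer_id FROM orders");
             ResultSet rs = orders.executeQuery()) {
            while (rs.next()) {
                // Executed once per row returned above.
                try (PreparedStatement cust = conn.prepareStatement(
                        "SELECT name FROM customers WHERE id = ?")) {
                    cust.setLong(1, rs.getLong("customer_id"));
                    try (ResultSet cr = cust.executeQuery()) {
                        if (cr.next()) {
                            lines.add(rs.getLong("id") + ": " + cr.getString("name"));
                        }
                    }
                }
            }
        }
        return lines;
    }

    // The fix lives in the application: a single JOIN replaces the N+1
    // round trips, with no changes to the virtual infrastructure.
    static List<String> loadReportJoined(Connection conn) throws SQLException {
        List<String> lines = new ArrayList<>();
        try (PreparedStatement stmt = conn.prepareStatement(
                "SELECT o.id, c.name FROM orders o " +
                "JOIN customers c ON c.id = o.customer_id");
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                lines.add(rs.getLong("id") + ": " + rs.getString("name"));
            }
        }
        return lines;
    }
}
```

No amount of tuning the hypervisor, HBA, or LUN layout will make the first version fast; as the clients above found, the fix has to happen in the code.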


3 responses so far

  • 1 Paul Hackett   March 11, 2010 at 10:43 am

    I have also found that when we took the time to discover the source of the performance problem, in most cases it was related to an app running wild.
    What I would find interesting is what people are using to monitor performance in a virtual environment, from the application down to the storage, as pinpointing the culprit can be very time-consuming.

  • 2 John Gannon   March 25, 2010 at 3:38 pm

    As I like to say, the VMware guys are the “new” network guys when it comes to the blame game…

  • 3 Jim Gochee   April 9, 2010 at 7:24 pm

    Applications consume infrastructure resources as they execute, and virtualized environments sometimes create challenges for apps. However, these challenges can almost always be solved in the app. For example, a virtualized database may be slower than a physical one, but changing a database query can be a cheaper solution than provisioning a faster database tier. And it's a solution that reduces long-term cost.

    To Paul’s question, you need good application visibility in addition to infrastructure visibility. There are many tools out there for this; just Google for “Application Performance Management”. Our company happens to have one as well.