Every month I meet with a group of Gartner clients to discuss the challenges (and benefits) that they are receiving as a consequence of investing in server virtualization technology. At our February meeting, the item on the agenda was performance management. I took the liberty of presenting to the group a polling question:
From what part of the virtual server infrastructure have most of your performance problems originated (note: the numbers sum to less than 100% due to rounding)?
- Individual VMs – 42%
- Host server (i.e., ESX) – 21%
- Cluster – 0%
- SAN/storage – 26%
- Network – 0%
- Other – 0%
- We have had no performance problems – 10%
Now, only 19 firms responded, so let's not make any hasty strategy changes. Still, I was somewhat struck by the answers, as I had assumed that SAN/storage would be the primary issue, since that is what I most often hear (and no, it's usually not the SAN itself, but decisions about the number of VMs per HBA, VMs per LUN, VM swapping, etc.). Interestingly, several attendees commented that while the initial finger-pointing was directed at the virtual infrastructure, the root cause often turned out to be an application problem (e.g., a misbehaving Java program). It's harder to fix code than to change a configuration parameter, though, so the onus usually fell on the virtualization administrator first.
As I thought some more about this, it reminded me of what I saw when I covered Java management years back. A client would complain about poor response times and assume it was a database problem (since the database seemed to be the component that was over-utilized), but it sometimes turned out to be a poorly coded application (what sticks in my mind is the example of a single URL request resulting in many SQL queries). So, perhaps a case of the more things change, the more they stay the same.
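That "one URL request, many SQL queries" pattern is commonly known as the N+1 query problem. Here is a minimal sketch of it using Python's built-in sqlite3 module; the `orders`/`items` schema and row counts are hypothetical, chosen only to make the query counts concrete:

```python
import sqlite3

# In-memory demo schema: 100 orders, each with one line item
# (table and column names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE items (order_id INTEGER, sku TEXT);
""")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"sku-{i}") for i in range(100)])

def n_plus_one():
    """Anti-pattern: one query for the orders, then one query per order.
    A single page request ends up issuing 1 + N database round trips."""
    queries = 1
    orders = conn.execute("SELECT id FROM orders").fetchall()
    for (oid,) in orders:
        conn.execute("SELECT sku FROM items WHERE order_id = ?", (oid,))
        queries += 1
    return queries

def single_join():
    """Fix in the application code: fetch everything with one JOIN,
    so the query count stays constant regardless of row count."""
    conn.execute(
        "SELECT o.id, i.sku FROM orders o JOIN items i ON i.order_id = o.id"
    ).fetchall()
    return 1

print(n_plus_one())   # 101 queries for 100 orders
print(single_join())  # 1 query
```

The point mirrors the anecdote above: the database looks over-utilized, but the fix lives in the application code, not in the database (or, in the virtualization case, the infrastructure) configuration.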