Benchmarking data is a powerful tool. But, like any tool, it can be misused. Benchmarking data is meaningless unless the comparison is against something very, very similar to a particular provider's operation. Comparing restoration times for servers against switches against PCs against consumer electronics is bogus. Even within these asset classes it is often inappropriate: taking the casing off a mid-range box may take significantly longer than ripping and replacing multiple components in a blade system. Unless you compare oranges against oranges, the best you can get is misleading results that hinder more than they help. Troubleshooting a simple software application can take a fraction of the time needed to get into the guts of a complex enterprise application with multiple integration points and system dependencies. Even benchmarking performance for the same product or solution can be problematic: solutions deployed in chaotic IT shops will fail more frequently, and be harder to diagnose and remediate, than comparable implementations in well-managed and controlled environments.
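To make the oranges-against-oranges point concrete, here is a minimal Python sketch that only ever compares restoration times within a single asset class. The field names and figures are illustrative assumptions, not real benchmark data.

```python
# A minimal sketch of like-for-like benchmarking: restoration times are only
# compared within the same asset class, never across a blended pool.
# Field names ("asset_class", "restore_minutes") and values are hypothetical.
from collections import defaultdict
from statistics import median

records = [
    {"asset_class": "server", "restore_minutes": 240},
    {"asset_class": "server", "restore_minutes": 300},
    {"asset_class": "switch", "restore_minutes": 45},
    {"asset_class": "blade",  "restore_minutes": 20},
]

by_class = defaultdict(list)
for rec in records:
    by_class[rec["asset_class"]].append(rec["restore_minutes"])

# One blended figure hides the fact that the classes are not comparable.
blended = median(r["restore_minutes"] for r in records)
print(f"blended median (misleading): {blended} min")

# Per-class medians are the only figures worth benchmarking against.
for asset_class, times in sorted(by_class.items()):
    print(f"{asset_class:>6}: median {median(times)} min over {len(times)} incidents")
```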
“Obvious” comparisons that can be done easily are just as risky, if not more so. Consider inbound call hold times for a service desk. Lower call hold times are good, right? Maybe. Maybe not. Always remember to look at it from your customer's perspective; that's the only perspective they use. Technical callers will often prefer to wait an additional 30 seconds to be connected if it means they can be routed to someone they have worked with before. The value of relationship continuity cannot, and should not, be ignored. How many IVR systems give people the option to remain on hold to get to the person they really want to talk to? Not enough!
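For what it's worth, the routing choice argued for above is trivial to express. The sketch below is purely hypothetical (no real IVR exposes this API); it just shows the decision: offer callers with an existing relationship the option to hold longer for the engineer they know.

```python
# Hypothetical sketch of the IVR choice argued for above: let a caller trade
# extra hold time for continuity with an engineer they already know.
# The data structures and routing strings are illustrative, not a real IVR API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Caller:
    id: str
    last_engineer: Optional[str]  # engineer from a previous case, if any

def route(caller: Caller, prefers_continuity: bool) -> str:
    if caller.last_engineer and prefers_continuity:
        # ~30s of extra hold is a price many technical callers will gladly pay
        return f"queue_for:{caller.last_engineer}"
    return "next_available_agent"

print(route(Caller("c-101", "alice"), prefers_continuity=True))  # queue_for:alice
print(route(Caller("c-102", None), prefers_continuity=True))     # next_available_agent
```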
So what do you suggest, Mr Smarty-Pants?
The best plan is to continually benchmark your delivery organization against itself. Are there any regions, teams or individuals that are performing significantly “better” than the average? Seek them out and look at what they do differently (remembering that not everything that is different is automatically better). Be mindful of the need to segment your customer base by their level of IT process maturity (historic failure rates are a good, simple indicator for this) to prevent chasing false trends and corner cases. Remember that “better” is often in the eye of the beholder. Dirk Gently teaches us that the fundamental interconnectedness of all things is such that any improvement in one area may have a corresponding positive or detrimental impact somewhere else, and a detrimental effect could be catastrophic if it were replicated across the entire organization.
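A rough sketch of that internal benchmarking loop, in Python: segment teams by a simple maturity proxy (historic failure rate) first, then flag anyone significantly beating their own segment's average. Every name, figure and threshold here is an illustrative assumption that would need local calibration.

```python
# Internal benchmarking sketch: segment by a maturity proxy before comparing,
# so a team serving chaotic shops is never measured against one serving
# well-controlled environments. All data and cut-offs below are illustrative.
from statistics import mean

teams = [
    # (team, historic failure rate per 100 assets, mean time to restore in hours)
    ("north",   2.1,  3.0),
    ("south",   2.4,  5.0),
    ("east",   11.8,  9.0),
    ("west",   12.5, 14.0),
]

def maturity_segment(failure_rate: float) -> str:
    """Crude two-bucket segmentation; real cut-offs need local calibration."""
    return "controlled" if failure_rate < 5.0 else "chaotic"

segments: dict[str, list[tuple[str, float]]] = {}
for name, fail_rate, mttr in teams:
    segments.setdefault(maturity_segment(fail_rate), []).append((name, mttr))

for segment, members in segments.items():
    avg = mean(mttr for _, mttr in members)
    for name, mttr in members:
        if mttr < avg * 0.8:  # arbitrary 20% margin for "significantly better"
            print(f"{name} beats the {segment} segment average "
                  f"({mttr:.1f}h vs {avg:.1f}h): worth studying")
```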
Is there any customer feedback on direct competitors that can be used? Why not ask the question? If you'd prefer not to open that particular can of worms, then a year-on-year improvement target of 10-15% for all key metrics is usually an appropriate starting position. But this insular focus should not be the only driver for improvement.
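Note that a year-on-year target compounds. A short sketch makes the trajectory explicit for a "lower is better" metric; the 12% rate and 6-hour baseline are arbitrary picks within the range above.

```python
# Compounded year-on-year improvement target for a "lower is better" metric,
# e.g. mean time to restore. Baseline and rate are illustrative assumptions.
baseline_hours = 6.0
annual_improvement = 0.12  # within the 10-15% starting range suggested above

for year in range(1, 6):
    target = baseline_hours * (1 - annual_improvement) ** year
    print(f"year {year}: target <= {target:.2f} hours")
```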
In addition to this continuous incremental improvement objective, we would also highly recommend deconstructing the end-to-end process occasionally (every couple of years or so) to take it back to first principles and create a blue-sky plan of what you could do if you were free from the constraints of the current setup. This is much harder than it sounds, as many organizations remain blinkered by what they have always done and what they think their customers want. (Note: very few service organizations ever really ask what their customers want, need or expect, remembering that the customer is often different from the service consumer.) They tend to assume that customer expectations continue to rise (which they may) and that the performance of the past was never good enough (which may also be true), but unless you ask the right questions of the right people you will never really know. Consider involving new employees who haven't yet been conditioned to your corporate culture. Better yet, involve new entrants to the workplace to avoid the norms of the industry tainting their view. If all else fails, find yourself a bright 6-8 year old (I am fortunate to have several on hand) and explain to them what you are trying to do. They will invariably ask the highly perceptive questions that you may be too “knowledgeable and intelligent” to consider.

Another good catalyst for this type of activity is to try applying parallel industry models to your circumstances. How would washing machine repair processes play in your environment? What do consumer electronics providers do? Go further than that. Look beyond the direct comparisons. What are the fast food retailers doing? How are the media companies delivering their customer experiences? What do the manufacturing, petrochemical and pharmaceutical industries do?
So that’s it? You torpedo conventional wisdom and run? Typical!
Yes. Yes it is. And let me be clear: I have not said that all benchmarking is bogus. Benchmarking can be a VERY useful tool, provided it is relevant and used appropriately. The problem is that relevant and useful benchmarking data in the product support space is very hard to come by. We are often asked for it. And we are sometimes criticized for not having it. But it remains my view that having data we know to be bogus would be more harmful than having none at all. If you want metrics for the sake of metrics, there are places you can find them. If you want meaningful data that is relevant and useful, you may have to pay for it. The choice, as they say, is yours…