
Metrics Provide Clues, Not Answers

By Hank Barnes | June 12, 2018


There are a few phrases that bug me. One of those is “if you can’t measure it, it doesn’t matter.” As an intuitive thinker, I firmly believe there are lots of things that matter greatly but can’t be measured, or at least can’t be measured easily. And I imagine that many of you, like me, can just feel when something is right; you don’t have to measure it.

That being said, there is a place for data and measurements, a very important place. But this post is a call for some balance and rational thought.

Photo by Kevin Ku from Pexels

I’m hearing more and more stories of metrics being the defining measure for many decisions, particularly about performance. I was speaking with a trainer at the place where I work out (OrangeTheory Fitness), and she told me that trainers are measured by the reviews that come back after class. If she gets a low rating because the studio is cold or someone’s heart rate monitor is wonky, it is all on her. I’ve heard similar stories about firms having to maintain a certain number of four-star ratings or they get “dinged” on some app exchanges.

Most metrics don’t provide answers; they provide clues. Ratings and reviews don’t provide answers, they provide clues. The number of dials or “engagements” someone creates is a clue. All are signs to dig deeper and understand whether something is going on that needs to be addressed. This is similar to organizations using the Magic Quadrant graphic to make decisions without reading the document for the details. The MQ graphic is a clue; it’s not the answer.

But here is the biggest issue: the farther you get from the source activity that drives a metric, the more likely you are to miss the context. The trainer’s manager can easily comb through the reviews, see that the sources of the low ratings were out of her control, and adjust accordingly. But as the scores roll up to the region or to corporate, it is less likely that that context will survive. They’ll just see a lower average score. And those who view it casually will quickly jump to conclusions, potentially the wrong conclusions.

And that is the thing to worry about most. I’m not sure of the answer to this problem other than reinforcing that metrics aren’t answers, they are clues. If you are someone who is “steps removed” from the source of a metric, don’t under-analyze it. Push toward the source to get the context and the whole story. Or set it aside and focus on data that is more relevant. Trusting metrics blindly may create more problems than value.

Don’t succumb to “Metric Madness.”  They are clues, not answers.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.

Comments


  • Goron Hogg says:

    This is why we read your posts. “Keep digging and never lose context.” It’s hard. That’s why you’re an excellent consultant.

  • Another great post, Hank. Thanks.

    While it’s almost always true that “you can’t manage what you can’t measure,” it’s almost never true that “if you can’t measure it, it doesn’t matter.” Complicating the situation: the tendency of IT and business people to base decisions on the metrics that are easiest to assess (e.g., the number of support requests, or the time required to resolve each one) instead of the metrics that matter more to users or the business (e.g., satisfaction levels and financial value).

    Here’s hoping that artificial intelligence (AI), machine learning (ML), advanced automation, and other developments will help more decision makers to surface more meaningful metrics. And that those decision makers will choose to use those metrics to make better decisions.