A member of the Gartner Blog Network

Paul Proctor
VP Distinguished Analyst
10 years at Gartner
28 years IT Industry

Paul Proctor is a vice president, distinguished analyst, and the chief of research for security and risk management. He helps organizations build mature risk and security programs that are aligned with business need.


No One Cares About Your Security Metrics and You are to Blame

by Paul Proctor  |  August 11, 2013  |  4 Comments

I live by the axiom “If risk management doesn’t influence better business decision making, why are you bothering?” This can be extended to all security metrics and dashboards to mean “If the metrics you deliver aren’t valuable to decision makers, why are you bothering?”

In my role as chief of research for risk and security at Gartner, I get to see dozens of examples of metrics programs and dashboards every year. Everyone is asking about them, and everyone is doing it wrong. I could sugarcoat this, but I don’t want to waste your time. The single biggest mistake is that no one cares about what you are delivering to them. Search your feelings, Luke, you know this to be true.

The reason no one cares is that you know nothing about them or what matters to them. They (the decision makers) ask you for these metrics because they have a fiduciary duty to do so, not because they care or even want to hear from you. They want this problem (security distracting them from their day jobs) to go away.

Very simply put, if you want them to care, you have to give them something that influences their decision making, something that matters to THEM. That means you have to know something about them AND be able to link your metrics to their issues. This also means that no one, NO ONE, can hand you a list of metrics that works well for everyone. More on that later.

First a quick model to understand the issue. If you’ve read any of my research, seen me speak, or spent any time drinking the finest Islay malts with me, then you know this model already, but let’s apply it to metrics specifically.

Above the line, below the line

As seen in the diagram, above the line are executive decision makers, and below the line are IT operations. Below the line are a number of operational metrics that benefit operational decision making, but everyone makes the same mistake. They try to identify the “best” below the line operational metrics to share with executive decision makers above the line. We now know this does not work…ever.

The correct approach is to abstract out all of the technology and operational metrics leaving only metrics with causal relationships to executive decision making. This is very, very difficult to do. Let me demonstrate with a very traditional example of vulnerability management.

  • Metric: Number of times we were “attacked” last month. <- Very common, completely worthless
  • Metric: Number of unpatched vulnerabilities. <- Very common. Has potential, but it is not specific enough to guide even operational decision making. If you report an improvement in this, you have only proven that you are doing your job.
  • Metric: Number of unpatched critical vulnerabilities against critical systems. Not as common as you think because most organizations do not have sufficient asset management to know which systems are critical. Potential for useful operational decision making when this number goes up or down, but the “number” of critical patches and even critical systems fluctuates, so a “number” can be meaningless.
  • Metric: Percentage of unpatched critical vulnerabilities against critical systems. Now we are getting somewhere. By normalizing the number we can compare it month over month. This is useful for operational decision making to guide resources. However, still worthless for executive decision makers.
  • Metric: Number of days it takes to patch critical systems with critical patches. This is even more useful because it further abstracts out the technology so it is understandable by a non-IT decision maker. It still doesn’t cross the threshold to be useful for non-IT decision making because it has no business context.
  • Metric: Number of days it takes to patch systems supporting the manufacturing line in Kuala Lumpur with critical patches. To appreciate this you need to know that the manufacturing line in KL has 3x the unscheduled outage time caused by IT of all the other facilities and it represents 40% of the company’s revenues based on new contracts in the last year. Reporting this to the CIO and the P&L business owner helps them address the unscheduled outage time and respond to a very pissed off CEO who wants to know why output dropped 3% last quarter at the most critical line of the company. Now THIS is a useful above the line metric because the technology is abstracted out, it has a business context, and it supports critical decision making all the way up to the CEO.
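The progression above — from a raw count, to a normalized percentage, to days-to-patch — can be sketched in a few lines of code. This is a minimal illustration, not anything from Gartner's methodology: the data, field layout, and system names are all hypothetical, invented only to show how each successive metric abstracts away more of the raw operational detail.

```python
from datetime import date

# Hypothetical vulnerability records:
# (system, critical_system, critical_vuln, disclosed, patched_on or None)
vulns = [
    ("erp-01",   True,  True,  date(2013, 7, 1), date(2013, 7, 9)),
    ("erp-01",   True,  False, date(2013, 7, 3), None),
    ("kiosk-7",  False, True,  date(2013, 7, 2), date(2013, 7, 20)),
    ("mfg-kl-1", True,  True,  date(2013, 7, 5), None),
]

# Scope down to critical vulnerabilities on critical systems only.
crit = [v for v in vulns if v[1] and v[2]]

# Raw count: fluctuates with discovery volume, so hard to compare month over month.
unpatched = [v for v in crit if v[4] is None]
count_metric = len(unpatched)

# Normalized percentage: comparable across reporting periods.
pct_unpatched = 100.0 * len(unpatched) / len(crit)

# Average days to patch (closed items only): the technology is abstracted
# out entirely, leaving a number a non-IT decision maker can read.
closed = [v for v in crit if v[4] is not None]
avg_days = sum((v[4] - v[3]).days for v in closed) / len(closed)

print(count_metric, pct_unpatched, avg_days)
```

Note that the last step toward an "above the line" metric is not more code — it is scoping the same days-to-patch calculation to a business-critical asset group (the KL manufacturing line in the example above) so the number carries a business context.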

Good metrics don’t have to be complex. They just have to support real problems. You still need operational metrics, but keep them where they belong… in operations.

As you can see, the metrics that matter are the ones with a business context for YOUR executives. It is not possible to pick up a published list from any source that will be anything more than a starting point for developing a good and unique set of metrics for your organization.

And this discussion is only the starting point for the real holy grail, which is the integration of risk and security metrics with corporate performance. Gartner clients can access our risk-adjusted value management methodology and our catalogs of more than 200 leading indicators of operational risk and business performance.

The Gartner Business Risk Model: A Framework for Integrating Risk and Performance (G00247513)
The Gartner Business Value Model: A Framework for Measuring Business Performance (G00249947)

We also just published workshop materials for clients to facilitate a conversation between IT and non-IT executives so they can determine the dependencies and performance linkages at their organization.

Toolkit: Risk-Adjusted Value Management Workshop (G00247503)

Finally, we also just published a formal methodology to tie risk metrics back to the corporate financials.

Implement Business Outcome Monetization as a Process for Increasing Project Success (G00249950)

You already know that creating meaningful metrics is hard. I’m sorry for not being the bluebird of happiness, but you aren’t going to find “the right set” of metrics sitting on the web. You need to roll up your sleeves, sit down with your decision makers, and figure out what matters to them.

Follow me on Twitter (@peproctor)


4 responses so far ↓

  • 1 Richard Steven Hack   August 15, 2013 at 3:15 am

    My opinion on metrics:

    Do you want your people measuring their mistakes – or correcting them?

  • 2 Paul Proctor   August 15, 2013 at 3:53 am

    @Richard,

Metrics are about transparency and visibility into your activities. How do you correct mistakes you can’t see? Would you be shocked to know that most organizations have no idea how many days it takes to apply critical patches to critical systems? Our research shows that number is climbing, but if you aren’t measuring it, you have no idea whether it is a problem or not. If you are measuring it, you can correct it when it stretches.

    If you are measuring it, you have metrics…

To your point, I’m a fan of dashboards and metrics that are green and always green: when they turn yellow or red, it means you need to act. By contrast, most people think a “risk” dashboard is red and always red because it is measuring risks. That is not productive.

  • 3 russ gallery   August 16, 2013 at 10:46 am

    Number of days it takes to patch critical systems with critical patches. This is even more useful because it further abstracts out the technology so it is understandable by a non-IT decision maker. It still doesn’t cross the threshold to be useful for non-IT decision making because it has no business context….

I always find this number to be interesting. Is it the number of days since a patch was available, or the number of days since your vulnerability scanner discovered a vulnerability? Now we are talking about scan rhythms and intervals between scans. Good luck to all those firms that are scanning once a month… you’re almost always a couple weeks behind the starting line.

  • 4 Mattias   August 27, 2013 at 3:16 am

    @Russ: maybe the metric should be “number of days it takes to mitigate critical vulnerabilities in critical systems”.

I’d say the clock starts the second the vulnerability is commonly known and you have a chance to learn about it through normal security intelligence gathering. That’s the time when the attacker’s window of opportunity opens.

How long it takes for vulnerability scanners or other processes to actually detect the issue, and for the vendor to create a patch, are interesting metrics in themselves for gauging the effectiveness of internal processes, IT operations and vendors (they’re not interesting for business decisions. :) ).