I live by the axiom “If risk management doesn’t influence better business decision making, why are you bothering?” This can be extended to all security metrics and dashboards to mean “If the metrics you deliver aren’t valuable to decision makers, why are you bothering?”
In my role as chief of research for risk and security at Gartner, I get to see dozens of examples of metrics programs and dashboards every year. Everyone is asking about them, and everyone is doing it wrong. I could sugarcoat this, but I don’t want to waste your time. The single biggest mistake is that no one cares about what you are delivering to them. Search your feelings, Luke; you know this to be true.
The reason no one cares is that you know nothing about them or what matters to them. They (the decision makers) ask you for these metrics because they have a fiduciary duty to do so. Not because they care or even want to hear from you. They want this problem (security distracting them from their day jobs) to go away.
Very simply put, if you want them to care, you have to give them something that influences their decision making, something that matters to THEM. That means you have to know something about them AND be able to link your metrics to their issues. This also means that no one, NO ONE, can hand you a list of metrics that works well for everyone. More on that later.
First, a quick model to understand the issue. If you’ve read any of my research, seen me speak, or spent any time drinking the finest Islay malts with me, then you know this model already, but let’s apply it to metrics specifically.
As seen in the diagram, above the line are executive decision makers, and below the line are IT operations. Below the line are a number of operational metrics that benefit operational decision making, but everyone makes the same mistake: they try to identify the “best” below-the-line operational metrics to share with the executive decision makers above the line. We now know this does not work…ever.
The correct approach is to abstract away all of the technology and operational metrics, leaving only metrics with causal relationships to executive decision making. This is very, very difficult to do. Let me demonstrate with a very traditional example: vulnerability management.
- Metric: Number of times we were “attacked” last month. Very common, and completely worthless.
- Metric: Number of unpatched vulnerabilities. Very common. It has potential, but it is not specific enough to guide even operational decision making. If you report an improvement in this, you have only proven that you are doing your job.
- Metric: Number of unpatched critical vulnerabilities on critical systems. Not as common as you would think, because most organizations do not have sufficient asset management to know which systems are critical. It has potential for useful operational decision making when the number goes up or down, but the number of critical patches, and even of critical systems, fluctuates, so a raw count can be meaningless.
- Metric: Percentage of unpatched critical vulnerabilities on critical systems. Now we are getting somewhere. By normalizing the number, we can compare it month over month. This is useful for operational decision making to guide resources. However, it is still worthless for executive decision makers.
- Metric: Number of days it takes to patch critical systems with critical patches. This is even more useful because it further abstracts away the technology, so it is understandable by a non-IT decision maker. It still doesn’t cross the threshold of being useful for non-IT decision making, because it has no business context.
- Metric: Number of days it takes to patch the systems supporting the manufacturing line in Kuala Lumpur with critical patches. To appreciate this, you need to know that the manufacturing line in KL has 3x the unscheduled IT-caused outage time of all the other facilities, and that it represents 40% of the company’s revenues based on new contracts in the last year. Reporting this to the CIO and the P&L business owner helps them address the unscheduled outage time and respond to a very pissed off CEO who wants to know why output dropped 3% last quarter at the most critical line in the company. Now THIS is a useful above-the-line metric: the technology is abstracted out, it has a business context, and it supports critical decision making all the way up to the CEO.
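To make the progression concrete, here is a minimal sketch of how the same raw patch records produce each tier of metric. All of the system names, dates, and records below are made up for illustration; the point is only that the raw count, the normalized percentage, and the days-to-patch figure all come from the same data, and only the last two are comparable over time.

```python
from datetime import date

# Hypothetical records, one per (vulnerability, system) pair:
# (system, critical_system, critical_vuln, published, patched_or_None)
vulns = [
    ("kl-line-plc-01", True,  True,  date(2013, 5, 1), None),
    ("kl-line-hmi-02", True,  True,  date(2013, 5, 3), date(2013, 5, 20)),
    ("hr-portal",      False, True,  date(2013, 5, 2), date(2013, 5, 9)),
    ("kl-line-db-01",  True,  False, date(2013, 5, 4), None),
]

# Below-the-line: raw count of unpatched vulnerabilities.
unpatched = sum(1 for *_, patched in vulns if patched is None)

# Better: percentage of critical vulnerabilities on critical systems
# still unpatched -- normalized, so months can be compared.
crit = [v for v in vulns if v[1] and v[2]]
pct_unpatched = 100.0 * sum(1 for v in crit if v[4] is None) / len(crit)

# Closer to above-the-line: mean days to patch critical vulnerabilities
# on critical systems (computed over the patched ones only).
patched_crit = [v for v in crit if v[4] is not None]
mean_days = sum((v[4] - v[3]).days for v in patched_crit) / len(patched_crit)

print(unpatched, pct_unpatched, mean_days)  # prints: 2 50.0 17.0
```

Note that the code can compute the number, but not the meaning: the last step, tying days-to-patch to a specific revenue-critical line, is a judgment call no script can make for you.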
Good metrics don’t have to be complex. They just have to support decisions about real problems. You still need operational metrics, but keep them where they belong… in operations.
As you can see, the metrics that matter are the ones with a business context for YOUR executives. No published list, from any source, will be anything more than a starting point for developing a good and unique set of metrics for your organization.
And this discussion is only the starting point for the real holy grail, which is the integration of risk and security metrics with corporate performance. Gartner clients can access our risk-adjusted value management methodology and our catalogs of more than 200 leading indicators of operational risk and business performance.
The Gartner Business Risk Model: A Framework for Integrating Risk and Performance (G00247513)
The Gartner Business Value Model: A Framework for Measuring Business Performance (G00249947)
We also just published workshop materials for clients to facilitate a conversation between IT and non-IT executives so they can determine the dependencies and performance linkages at their organization.
Toolkit: Risk-Adjusted Value Management Workshop (G00247503)
Finally, we also just published a formal methodology to tie risk metrics back to the corporate financials.
Implement Business Outcome Monetization as a Process for Increasing Project Success (G00249950)
You already know that creating meaningful metrics is hard. I’m sorry for not being the bluebird of happiness, but you aren’t going to find “the right set” of metrics sitting on the web. You need to roll up your sleeves, sit down with your decision makers, and figure out what matters to them.