This post addresses one of The Three Unanswered Board Questions That Drive Cybersecurity Investment.
Our research shows that one of the top board-driven asks of every CIO and CISO is peer-comparative measures of cybersecurity.
Gartner has been benchmarking cybersecurity spend data for years through our IT Key Metrics Database. We produce spending benchmarks across various security spend categories.
Gartner also collects and benchmarks cybersecurity program maturity data. Program maturity itself is the gold standard for cybersecurity professionals, CIOs, and boards to assess the readiness of a security program.
We are preparing to evolve the game again by benchmarking cybersecurity value delivery metrics. See Value is Missing in Executive Communication on Cybersecurity.
The two biggest challenges to that goal are:
- Your cybersecurity metrics are terrible.
- Everyone measures cybersecurity differently.
Challenge 1: Building a Better Cybersecurity Metric
We find that most organizations have terrible metrics that even they don’t think have much value. And they’ve been asking us for years to help them solve that problem. We finally did.
I address the first challenge in my post What Cybersecurity Metrics Should I Report To My Board? so I’ll only summarize by saying that your metrics should:
- Reflect value delivery
- Inform decision making for priorities and investments
- Align to business outcomes
Challenge 2: Creating Consistency in Value Delivery Measurements To Support Benchmarking
Even where a handful of our clients have some worthy metrics, everyone measures the same metric differently! We are stepping up to create a set of common definitions that organizations big and small, in every industry, globally, can adopt. Doing so enables us to create consistent data that informs a global comparison benchmark.
The biggest challenge has been to remove what I call “jitter” in the system. Jitter is the inconsistency in measuring similar metrics across different organizations.
For example, when you ask an organization to report how long it takes to close a security incident, we found wide variation in when organizations start the clock, when they stop the clock, and what they define as an incident. This significantly reduces the benchmark value of any data collected.
So we have to ask everyone to measure the same thing. We define where to start and stop the clock, the definition of an incident, and the calculation to use. Now the data will be consistent and extremely useful in peer comparisons.
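To make the "jitter" problem concrete, here is a minimal sketch of what a standardized calculation looks like once everyone agrees on the clock. The start/stop rule in the comments is a hypothetical illustration, not Gartner's official definition; the point is that every organization computes the same formula over the same events.

```python
from datetime import datetime

# Hypothetical standardized rule (illustration only): the clock starts when
# an incident is formally declared and stops when closure is confirmed.
# The metric is the mean time to close, expressed in hours.

def mean_time_to_close(incidents):
    """incidents: list of (declared_at, closed_at) datetime pairs."""
    durations_hours = [
        (closed - declared).total_seconds() / 3600
        for declared, closed in incidents
    ]
    return sum(durations_hours) / len(durations_hours)

incidents = [
    (datetime(2022, 3, 1, 9, 0), datetime(2022, 3, 1, 17, 0)),   # 8 hours
    (datetime(2022, 3, 2, 10, 0), datetime(2022, 3, 3, 10, 0)),  # 24 hours
]
print(mean_time_to_close(incidents))  # 16.0
```

With the start event, stop event, and formula fixed in advance, two organizations reporting "mean time to close" are finally reporting the same number.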
How Were These Metrics and Definitions Developed?
Gartner developed its research on outcome-driven metrics (ODMs) three years ago, and we now have a catalog of more than 100 ODMs across 20 different control classes.
We vetted our catalog with CISOs and CIOs from some of the world's largest, best-funded, and most capable organizations to prioritize the list by relevance and value.
We market tested the list with organizations of all sizes in all industries to settle on an initial list of 20.
The Benefit of a Global Cybersecurity Value Delivery Benchmark is Hard to Overstate
These defined benchmarks support more than just peer comparisons. They're recommended guidance. They directly inform cybersecurity business decisions and investments because each metric ties a direct investment to a delivered level of protection.
Imagine a future where you can do peer comparisons for cybersecurity outcomes like time to close incidents, speed to patch systems, third-party risk, endpoint protection, phishing scores, cloud security configuration drift, and zero-trust implementation.
Combining this data with maturity and spending benchmarks gives you absolute control over your cybersecurity investments and reporting for all your key stakeholders.
They will inform organizations on priorities and investments in all situations:
- No security program: where to get started
- Poor maturity: where to invest next
- Good maturity: fine tuning outcomes in a business context
The Elephants in the Room
Q. Do I have to be a Gartner client to participate?
A. Our current plan is to have a public version of the foundational survey. We will deliver a basic benchmark to everyone who participates.
Clients will get an expanded survey, expanded benchmark and custom data cuts.
Q. I’m not gathering the metrics you defined, can I participate?
A. The price of playing is that you need to gather the information as we define it. Best estimates are OK in the early implementations, because many organizations will not be instrumented to gather this data easily.
The benchmark will evolve over time with sharper definitions, the addition of new metrics, and the retirement of lower-value ones.
Q. Why should I invest to gather data the way Gartner defines it?
A. While you may need to invest some resources to collect the information, these metrics are also guidance.
These ARE the metrics you should be reporting to your board, so do it for them, not for this benchmark.
Q. I read this whole blog post and you didn’t give me the list. What gives?
A. There are 5 examples of the 20 in this blog post: What Cybersecurity Metrics Should I Report to My Board? We’re still finalizing the list and the details, but I have a date.
Webinar and Benchmark Release
April 20, 2022, 11:00 AM ET: Webinar Make Cybersecurity a Priority Business Investment addresses this topic and more. This webinar is open to everyone and will be available for replay.
June 10, 2022, 9:00 AM ET: At the Friday morning keynote of our Security and Risk Management Summit in National Harbor, MD, I will be releasing the benchmark definitions for our first-generation cybersecurity value delivery benchmark, along with early data gathered from a small number of organizations.
We will also release the survey to all conference participants to fill out their own data and get the earliest version of our official benchmark.
For my blog readers who are not Gartner clients, I will write a blog post with whatever version of the definitions and data we are releasing publicly. I expect a version of the survey to be generally available to gather data and give the respondents a foundation benchmark in return, but the details of that are still being worked out.
Follow me on Twitter