Gartner Blog Network


On Negative Pressure or Why NOT Objectively Test Security?

by Anton Chuvakin  |  January 22, 2018  |  6 Comments

A question came up as we ramp up our research projects on security testing and breach and attack simulation (BAS) tools. Just how motivated are organizations to test whether they have done a good job with security? Note that I think there is a subtle difference between:

  1. How secure are we?
  2. How good a job have we done securing ourselves?

Imagine the following scenario: a CSO who spent years or even decades in the field gaining experience (notably, this does not mean that he is a good CSO…or a bad one – just an experienced one!) and then perhaps years defining and building security at his current employer. Suddenly, a feisty vendor shows up at his door and says: we can now objectively test how good of a job you’ve done! [for the sake of this argument, let’s assume they really can]

Would you agree that it takes copious amounts of intellectual honesty – even courage! – to say “yes, let’s test it!”? For sure, a new CSO who inherited much of the security technology and many security policy decisions may be inclined to say “yes” [at the very least, he can be motivated by his desire to prove his predecessor wrong and deploy all different controls :–)] However, a CSO who perhaps spent years growing his baby … eh … his security program… may be inclined to say “why test?! we know what we are doing here!”

Look, luck-based security is alive and well. Brainless compliance is alive and well (PCI DSS before, NIST CSF now; some places effectively say “our security = our compliance + our hope for the best”). People’s cognitive biases are very much alive – and will always be alive.

Conclusion? If you show up with testing tools and/or methodologies, BE READY TO FIGHT!!

REMINDER: vendors who offer breach and attack simulation (BAS) tools, make sure to schedule the briefings with us soon, or be left in the footnotes of our report :-) Most of you have already done it, but some have not…

Category: security  simulation  testing  

Anton Chuvakin
Research VP and Distinguished Analyst
5+ years with Gartner
17 years IT industry

Anton Chuvakin is a Research VP and Distinguished Analyst at Gartner's GTP Security and Risk Management group. Before Mr. Chuvakin joined Gartner, his job responsibilities included security product management, evangelist… Read Full Bio


Thoughts on On Negative Pressure or Why NOT Objectively Test Security?


  1. Erik T Heidt says:

    If you don’t test… the script kiddies will!

    This is related to the problem of “everything is tested in production, what do you want your pre-production test coverage to be?”

    My question is: how regularly do I need to retest the application – not due to changes in the application, but due to changes in the capabilities of the security testing tools?

    While production acceptance testing could be a one-time event, security testing is an ongoing, operational behavior akin to availability monitoring.

    • Andre Gironda says:

      @ Erik: That’s a great question — when do the capabilities change? There are hundreds of questions like this one.

      That is — when does the common-operating model change at all? How does the model adjust due to these and future changes?

      “a feisty vendor shows up at his door and says: we can now objectively test how good of a job you’ve done […] [or how secure you are]”

      This is the crux of the issue. Nobody and no technology actually can do this unless they have domain-specific (also read: org-specific) knowledge about how to grade the work. In other words, stubborn/biased CSOs are sort-of right (N.B., they are actually wrong, though). The running-risk model has to adapt to the variables, including humans and their automation currently attacking and defending. There will be anomalies on both sides, and an org that terminates an entire team of DFIR pros (or engages a new vendor) is the surest way to tilt your model in the adversaries’ favor.

      Or as they say, if it ain’t broke, don’t try to fix it.

  2. Andrii says:

    Great point! Would you consider measuring SOC / detection capability maturity and coverage as part of BAS? This can be a fast and semi-automated process as long as we can connect the central SIEM (or the tech playing the SIEM role) with MITRE ATT&CK. “How would my network fare against the next Bad Rabbit? Or Fancy Bear?” This kind of benchmark will show exact gaps in log sources, security controls, and threat intel sources, and say, for example: “we will detect 30 out of 120 tools (actors, techniques – pick the one based on target audience) described publicly via ATT&CK. But if we deploy Sysmon or EDR, this will go to 101!” Then use MITRE Caldera https://www.youtube.com/watch?v=xjDrWStR68E or https://my.socprime.com/en/security-virtual-assistant/ I think it’s most important for the CISO & CSO: finding gaps in real time and displaying improvement metrics on a continuous basis will help win the arguments with other stakeholders for more FTEs / policy updates / budget, etc.
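    The coverage arithmetic described in this comment (“we will detect 30 out of 120 … this will go to 101”) can be sketched as a simple set intersection. This is a minimal illustrative sketch only: the ATT&CK-style technique IDs, the actor attribution, and the rule-to-technique mappings below are all hypothetical placeholders, not real threat data or a real SIEM integration.

    ```python
    # Hypothetical sketch: score a SIEM's detection coverage against the
    # ATT&CK techniques attributed to a threat actor. All IDs are made up.

    def coverage(actor_techniques, detectable_techniques):
        """Return (hits, total, percent) of actor techniques we can detect."""
        hits = actor_techniques & detectable_techniques
        pct = 100.0 * len(hits) / len(actor_techniques) if actor_techniques else 0.0
        return len(hits), len(actor_techniques), pct

    # Techniques attributed to a hypothetical actor (ATT&CK-style IDs).
    actor = {"T1059", "T1003", "T1021", "T1486", "T1547"}

    # Techniques our current log sources / SIEM rules can detect.
    siem_rules = {"T1059", "T1021"}

    # Extra visibility a host agent (e.g. Sysmon or EDR) might add.
    edr_extra = {"T1003", "T1547"}

    hits, total, pct = coverage(actor, siem_rules)
    print(f"baseline: {hits}/{total} ({pct:.0f}%)")   # baseline: 2/5 (40%)

    hits, total, pct = coverage(actor, siem_rules | edr_extra)
    print(f"with EDR: {hits}/{total} ({pct:.0f}%)")   # with EDR: 4/5 (80%)
    ```

    The same intersection, run continuously as rules and threat intel change, gives the trend line the commenter suggests showing to stakeholders.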

  3. pro4people says:

    Ha, that’s a good point about telling apart a good CSO and an experienced one ;)

  4. Carl Wright says:

    We saw this in the early 2000’s with IDS reporting. I have run into this once out of 100 times in 2017 (“The report needs to have more green on it, I need something I can show to my boss”).

    I strongly believe that effective security programs will spend upwards of 10% of the available security program dollars on automated system and process testing (continuous security validation). MITRE invented the CVE framework, and it is now used for standardized testing by most enterprises. The other side of that coin is simply known attacker “ATT&CK” TTPs. If you don’t find the gaps or protection failures before the adversary does, you lose. If the attacker finds them first… they win…

    I would also add that validation of various security controls and configurations is equally as important as attacker TTP simulation – specifically referring to the routinely misconfigured items such as Amazon VPC and S3 buckets.

    OffensiveDefense capabilities will be a game changer for security programs.

  5. […] that we are on a subject of testing security and breach/attack simulation tools, one more interesting questions arises: if you test security, […]




Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.