A question came up as we are ramping up our security testing and breach and attack simulation (BAS) tools research projects: just how motivated are organizations to test whether they have done a good job with security? Note that I think there is a subtle difference between:
- How secure are we?
- How good a job have we done securing ourselves?
Imagine the following scenario: a CSO has spent years or even decades in the field gaining experience (notably, this does not mean that he is a good CSO… or a bad one – just an experienced one!), and then perhaps years defining and building security at his current employer. Suddenly, a feisty vendor shows up at his door and says: we can now objectively test how good of a job you’ve done! [For the sake of this argument, let’s assume they really can.]
Would you agree that it takes copious amounts of intellectual honesty – even courage! – to say “yes, let’s test it!”? For sure, a new CSO who inherited much of the security technology and many of the security policy decisions may be inclined to say “yes” [at the very least, he can be motivated by a desire to prove his predecessor wrong and deploy all different controls :–)]. However, a CSO who perhaps spent years growing his baby… eh… his security program… may be inclined to say “why test?! we know what we are doing here!”
Look, luck-based security is alive and well. Brainless compliance is alive and well (PCI DSS before, NIST CSF now – at some places, “our security = our compliance + our hope for the best”). People’s cognitive biases are very much alive – and will always be alive.
Conclusion? If you show up with testing tools and/or methodologies, BE READY TO FIGHT!!
REMINDER: vendors who offer breach and attack simulation (BAS) tools, make sure to schedule your briefings with us soon, or be left in the footnotes of our report 🙂 Most of you have already done it, but some have not…
5 Comments
If you don’t test… the script kiddies will!
This is related to the problem of “everything is tested in production, what do you want your pre-production test coverage to be?”
My question is: how regularly do I need to retest the application – not due to changes in the application, but due to changes in the capabilities of the security testing tool?
While production acceptance testing could be a one-time event, security testing is an ongoing, operational behavior akin to availability monitoring.
@ Erik: That’s a great question — when do the capabilities change? There are hundreds of questions like this one.
That is — when does the common operating model change at all? How does the model adjust to these and future changes?
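A rough sketch of what that ongoing retest loop might look like – purely illustrative, with hypothetical run_scenario and get_tool_version hooks standing in for whatever a real BAS tool actually exposes (this is not any vendor’s API):

```python
from typing import Callable, Dict, List

def validate(scenarios: List[str],
             run_scenario: Callable[[str], bool],
             get_tool_version: Callable[[], str],
             last_version: str) -> Dict[str, bool]:
    # If the testing tool itself gained capabilities, previous "green" results are stale.
    version = get_tool_version()
    if version != last_version:
        print(f"Testing tool updated to {version}: re-baseline, previous results are stale")
    # Re-run the same attack scenarios; True means the control blocked/detected the scenario.
    results = {name: run_scenario(name) for name in scenarios}
    gaps = [name for name, detected in results.items() if not detected]
    if gaps:
        print(f"Scenarios no longer blocked/detected: {gaps}")
    return results

# Example wiring with dummy hooks; a real run would be scheduled (cron, CI job, etc.),
# which is what makes this monitoring rather than a one-time acceptance test.
validate(["credential-dumping", "lateral-movement-smb"],
         run_scenario=lambda name: True,        # placeholder for the BAS tool call
         get_tool_version=lambda: "2018.03",    # placeholder for the tool's scenario catalog version
         last_version="2018.02")
```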
“a feisty vendor shows up at his door and says: we can now objectively test how good of a job you’ve done […] [or how secure you are]”
This is the crux of the issue. Nobody and no technology can actually do this unless they have domain-specific (also read: org-specific) knowledge about how to grade the work. In other words, stubborn/biased CSOs are sort of right (N.B., they are actually wrong, though). The running risk model has to adapt to the variables, including the humans and the automation currently attacking and defending. There will be anomalies on both sides, and an org that terminates an entire team of DFIR pros (or engages a new vendor) is the surest way to tilt the model in favor of the adversaries.
Or as they say, if it ain’t broke, don’t try to fix it.
Great point! Would you consider measuring SOC / detection capabilities maturity and coverage as part of BAS? This can be a fast and semi-automated process as long as we can connect the central SIEM (or the tech playing the SIEM role) with MITRE ATT&CK. “How would my network fare against the next Bad Rabbit? Or Fancy Bear?” This kind of benchmark will show exact gaps in log sources, security controls, and threat intel sources, and say, for example, “we will detect 30 out of 120 tools (actors, techniques – pick the one based on target audience) described publicly via ATT&CK. But if we deploy Sysmon or EDR this will go up to 101!” Then use MITRE Caldera https://www.youtube.com/watch?v=xjDrWStR68E or https://my.socprime.com/en/security-virtual-assistant/. I think what matters most for the CISO & CSO is that finding gaps in real time and displaying improvement metrics on a continuous basis will help win the arguments with other stakeholders for more FTEs / policy updates / budget, etc.
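To make that coverage arithmetic concrete, here is a minimal sketch assuming you can export which ATT&CK technique IDs your SIEM detection rules map to; the technique sets and the log-source mapping below are invented for illustration, not real ATT&CK attribution data:

```python
# Techniques attributed to the threat actor of interest (illustrative IDs).
actor_techniques = {"T1003", "T1021", "T1047", "T1059", "T1071", "T1105"}

# Technique IDs currently covered by SIEM detection rules, grouped by log source.
coverage_by_source = {
    "windows-security-events": {"T1021", "T1059"},
    "proxy-logs": {"T1071"},
}
# What an additional log source (e.g. Sysmon or EDR telemetry) would add.
proposed_source = {"sysmon-or-edr": {"T1003", "T1047", "T1105"}}

def covered(sources: dict) -> set:
    # Union of all technique IDs any configured log source can detect.
    return set().union(*sources.values())

current = covered(coverage_by_source) & actor_techniques
improved = covered({**coverage_by_source, **proposed_source}) & actor_techniques

print(f"Current coverage: {len(current)} of {len(actor_techniques)} techniques")
print(f"With Sysmon/EDR telemetry: {len(improved)} of {len(actor_techniques)} techniques")
print(f"Remaining gaps: {sorted(actor_techniques - improved)}")
```

Running the same calculation before and after adding a log source yields exactly the kind of “we go from 30 to 101” improvement metric described above.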
Ha, that’s a good point about telling apart a good CSO and an experienced one ;)
We saw this in the early 2000s with IDS reporting. I have run into this once out of 100 times in 2017 (“The report needs to have more green on it, I need something I can show to my boss”).
I strongly believe that effective security programs will spend upwards of 10% of their available security program dollars on automated system and process testing (continuous security validation). MITRE invented the CVE framework, and it is now used for standardized testing by most enterprises. The other side of that coin is simply known attacker “ATT&CK” TTPs. If you don’t find the gaps or protection failures before the adversary does, you lose. If the attacker finds them first… they win…
I would also add that validation of various security controls and configurations is just as important as attacker TTP simulation – specifically referring to routinely misconfigured items such as Amazon VPCs and S3 buckets.
Offensive defense capabilities will be a game changer for security programs.