Now that we are truly done with SOAR, our Testing Security project continues in full force. This post is a bit contemplative, and related to the question of ‘why test security if we are >>oh-so-sure<< that we did things right here?’
From my very first days doing security, I’ve heard the mantra that “a good pentester always gets in.” Sure, of course, fine, OK. Perhaps this was true in 1998 and it is true in 2018. Along the same lines, an insightful BAS vendor [that shall remain nameless here as per my policy of not mentioning vendor names on this blog] shared that every time they POC their threat simulation (BAS) tool at a new prospect environment, they find glaring holes. Mind you, he didn’t say 76.3% or something – but every time. And, also, this applies to organizations with 9-digit … count ‘em, NINE … security budgets.
Now, you recall that I often lament that organizations blow the budget on the “boxes” and then have no money left to hire the good people to run them. This applies to SIEM, DLP, UEBA, NTA, EDR, etc – namely, to security technologies with ongoing operations process requirements, typically in my beloved detection, intelligence and monitoring domains.
However, this is NOT my point here. The point here is that a lot of preventative [as well as detection and other] security technologies are misconfigured, not configured optimally, left at default settings or deployed broken in a myriad of other ways. And it is rather the norm, not the exception!
From many sources, we hear stories that even “legacy” anti-malware configured by an expert is known to outperform the “next-gen” stuff running with default settings. Stories of DLP projects deployed for prevention – with prevention features never enabled. Stories of NIDS sitting on disconnected taps. Stories of SIEM with collectors crashed months ago.
People, this shit is all over the place! Beware!!
Rather, not beware but … TEST, and do NOT ASSUME!!!
P.S. This post is partially inspired by this bit of excellent reading.
Blog posts related to this project: