While I generally dislike abstract security debates like “how to be more proactive?”, “are we dynamic enough?” and “should we automate more?”, some recent experiences made me pick up the last one. So, in one ear I hear “we need to automate more” since we don’t have enough people or since our infrastructure is too fluid, while in the other ear I hear “automation breaks things”, “robots suck at security”, etc.
My conclusion? There is – at this stage of security technology development, at least – GOOD AUTOMATION and EVIL AUTOMATION. Longer term, we will certainly see more automation and more domains of information security (cybersecurity, if you have to) covered by automation, BUT I’d be willing to bet anything that the profession of a security analyst will never be fully automated (just like [IMHO] doctors and police officers will always use automated tools, but will never be fully replaceable by technologies, smart machines notwithstanding).
So, here is my informal attempt to separate the cases of “automate OR die” from the cases of “automate AND die”…
Automate to WIN:
- gather additional information from various sources
- fuse gathered information together
- enrich alert data for better alert triage
- share data with another system
- sandbox stuff and gather results
- process raw inputs with analytic algorithms, and present the results
- ask for approval when needed
- generate email notification
- open tickets for a human to act on some issue
Overall, this category of “good automation” covers ways to acquire more useful data to make a decision, to help a human make a decision, and to streamline routine, boring, and repetitive tasks.
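To make the “good automation” category concrete, here is a minimal sketch of an alert-enrichment step: automation fuses the raw alert with threat intel and asset context, then hands the result to a human. All data sources and field names here are hypothetical stand-ins (a real pipeline would call threat-intel feeds, an asset inventory, sandbox APIs, and a ticketing system).

```python
# Hypothetical lookup tables standing in for real intel feeds and a CMDB.
THREAT_INTEL = {"203.0.113.7": "known C2 node"}
ASSET_DB = {"srv-042": {"owner": "payments-team", "criticality": "high"}}

def enrich_alert(alert: dict) -> dict:
    """Fuse alert data with intel and asset context; never take action."""
    enriched = dict(alert)
    enriched["intel"] = THREAT_INTEL.get(alert.get("src_ip"), "no match")
    enriched["asset"] = ASSET_DB.get(alert.get("host"), {})
    # The automation stops at presenting results; a human decides what's next.
    enriched["recommended_step"] = "open ticket for analyst review"
    return enriched

alert = {"host": "srv-042", "src_ip": "203.0.113.7", "signature": "beaconing"}
print(enrich_alert(alert))
```

Note that the function only gathers and presents; the disruptive decision stays with the analyst, which is exactly what keeps this on the “good” side of the line.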
Automate to LOSE [at least, proceed VERY carefully!]:
- block network access
- disconnect system or device
- change configuration of something
- disable user
Overall, “evil automation” covers cases where the system is supposed to make a disruptive change to external systems and applications, and so has the potential to cause heavy damage to our fragile IT infrastructures…
To mitigate the “evil effects” while preserving the benefits, consider a “semi-automated” or assisted mode with a human in the loop, where the automation gathers all the information and then a human makes one simple call with all available data.
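The assisted mode above can be sketched as a simple approval gate: the automation assembles the evidence, but the disruptive action only fires after an explicit human decision. The `approve` callback here is a hypothetical hook; in practice it might be a ticket, a chat prompt, or an approval step in an orchestration tool.

```python
def propose_action(evidence: dict, approve) -> str:
    """Present gathered evidence; execute a disruptive step only on human approval."""
    summary = (f"Block {evidence['target']}? "
               f"Reasons: {', '.join(evidence['reasons'])}")
    if approve(summary):  # the human makes the one simple call
        return f"blocked {evidence['target']}"
    return "no action taken; logged for review"

evidence = {"target": "10.0.0.5", "reasons": ["C2 traffic", "failed logins"]}
# A cautious analyst declines the block:
print(propose_action(evidence, approve=lambda summary: False))
```

The design point is that the machine never owns the destructive decision; it only makes that decision cheap and well-informed for the human who does.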
Select blog posts tagged “philosophical”:
- On Tanks vs Tractors
- Enable the Business? Sometimes Security Must Say “NO”…
- Defeat The Casual Attacker First!!
- Critical Vulnerability Kills Again!!!
- Security Essentials? Basics? Fundamentals? Bare Minimum?
- On “Defender’s Advantage”
- Security And/Or/Vs/Not Compliance?
- Bye-bye, Compliance Thinking. Welcome, Military Thinking!
- Security Chasm Illustrated
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
I see a lot of people advocating the “no automatic denies” and “only deny with manual approval” approach. To me, that’s the equivalent of running an IPS (intrusion prevention system) in monitor-only mode, a.k.a. the IDDS mode (“I don’t do …”).
The reality is every company relies heavily on automated denial controls: anti-malware, anti-spam, proxy server category updates, DLP, etc. Even patching is an automatic deny type of control because it stops the exploit without human intervention.
And since these systems are vendor designed and applied, doesn’t that mean we trust our nameless and faceless vendors with our environment more than we trust our own employees? What kind of message are we sending to our people when we agree to allow our vendors to place automatic controls but we won’t allow our own staff to do so? That we trust our own people’s skills, knowledge and judgement fully? Probably not.
At the financial institution where I work we rely heavily on automatic controls, both the vendor controls and ones we’ve devised. Yes, both occasionally block traffic, transactions and users inappropriately, but the “wins” far outnumber the “losses”. And during pen tests our custom-designed controls always pick up the attacks before the high-priced vendor systems do, sometimes by a couple of days.
In an operating environment where the door needs to be pried open only a sliver, to rely on manual approvals is to sit by helplessly and watch as the data leaves the company or as the infection spreads. We would much rather apologize for the occasional inappropriate block than have a mess we need to disclose and clean up.
I’ve worked with large banks where their response time to a security event is measured in many hours and they think that’s great because it’s down from days. Our response time is in milliseconds and occasionally even that is not fast enough because we triggered on a secondary event rather than the primary.
Thanks a lot for the insightful comment. Indeed, you bring up a great point: why is AV and IPS (and WAF) vendor automation somehow “more OK” than internal IT automation? I suspect that AV et al. automate small tactical things (if you see a process that is a known virus, remove it … well, try to :-)) and so they are safer? Or maybe vendors who automate and then break things can be screamed at and beaten into submission? Still, it seems like a good question to think about…
This has been our theme for the last year both at RSA and Black Hat. Automation must be process based yet these processes must be well tested. Automating bad processes merely gets you bad results faster.
But once you have the process right, the manual tasks that automation handles well completely disappear from the labor budget.
Invotas.io Automate or Die. @invotascyber
The way I position the automation debate (very consistent with your view) is organizations need to do more to automate detection and investigations, but be very careful about automating remediation.
Thanks for the comment, Matthew. Indeed, we need to automate to save humans for better things, but not to cause too much mayhem on fragile IT…
Totally agree with the viewpoint on the benefits of automation and the need to scale the integration of various prevent/detect/respond processes. The blog does not talk about the foundational elements to get there, though. I do believe that APIs are going to be the foundation for any security automation. APIs provide a consistent way to interact, collect data and make real-time policy actions, and they eliminate errors due to manual processes. Similar to what SNMP did for network management in IT Operations, APIs will do wonders for Security and IT Operations and automate the heck out of the disparate tools and processes.
Thanks for the comment — you are right, I missed a lot of foundational elements [that would have been included in a real research piece, of course]
Anton, as always, very good article. And the title is definitely interesting. I think that there is an important dependency between the source (input) and the automated action, which can be dangerous if not correctly interpreted by the automated system. I doubt that Critical Infrastructure customers, for example, would rely on full and unattended automation. Automatically closing the network/user/whatever in an oil pipeline, for example, could generate more damage than solutions. Our current experience is that full automation is sexy for Security Operations Center people but potentially very dangerous for IT Risk Managers. A balance between the two standpoints hasn’t been reached yet.
Thanks for the comment. Exactly – input automation vs action automation seems to be the primary point of concern. Frankly, I am not even sure it is that sexy for SOC people…