Machine learning is relatively new to security. It first went mainstream a few years ago in domains such as user and entity behavior analytics (UEBA), network traffic analytics and endpoint protection. Several vendors earned strong brand recognition by pioneering ML in those spaces. (For examples, see Forecast Snapshot: User and Entity Behavior Analytics, Worldwide, 2017; Magic Quadrant for Endpoint Protection Platforms; Cylance SWOT.)
But when I speak with adopting users and ask if they know what kind of machine learning is under the hood of their acquired software, the typical response I receive is: “Not sure – we just know that it works.”
Is this response good enough to take machine learning mainstream in security? Will generally skeptical security professionals trust their vendors to ascertain risk? Will they rely on black-box ML software to kill processes or kick users off a system?
For most enterprises, the resounding and obvious answer to these questions is a FLAT NO.
Vendors can’t sell black boxes. Users need to understand what a machine learning model is doing and how they themselves can manage, control and tune the results as needed.
Welcome to Automatic Generation of Intelligent Rules
At least two fraud vendors – DataVisor and ThreatMetrix – have come up with an innovative approach to this black-box dilemma for their fraud management clients: their machine learning engines automatically generate human-readable rules from the attributes their models surface.
In the case of DataVisor, its unsupervised machine learning engine automatically identifies attributes of coordinated attack campaigns and creates new rules daily via its Automated Rules Engine. DataVisor continuously monitors these automatically generated rules for relevance and retires those that become obsolete.
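DataVisor does not publish the internals of its Automated Rules Engine, but the lifecycle described above (generate rules from model-surfaced attributes, then monitor and retire them) can be sketched in a few lines of Python. Everything here, including the `Rule` class, `retire_stale_rules` and the thresholds, is hypothetical and for illustration only:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Rule:
    """A human-readable rule built from attributes an ML model surfaced."""
    name: str
    conditions: dict                 # attribute -> value, e.g. {"device": "emulator"}
    created: date = field(default_factory=date.today)
    hits: int = 0                    # events the rule has matched so far
    confirmed_fraud: int = 0         # matches later confirmed as actual fraud

    def matches(self, event: dict) -> bool:
        # The rule fires only when every required attribute is present and equal.
        return all(event.get(k) == v for k, v in self.conditions.items())

    def precision(self) -> float:
        return self.confirmed_fraud / self.hits if self.hits else 0.0

def retire_stale_rules(rules, min_hits=50, min_precision=0.8):
    """Keep rules that are still unproven or still accurate; retire the rest.

    The thresholds are invented for illustration; a production engine would
    tune them per customer and per rule type.
    """
    return [r for r in rules if r.hits < min_hits or r.precision() >= min_precision]
```

The point of a scheme like this is that each generated rule stays legible: an analyst can read its conditions, watch its precision and override or retire it, which is exactly the control a black box denies them.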
ThreatMetrix’s Smart Learning engine works much the same way, albeit with a supervised (rather than unsupervised) machine learning model that gets its ‘truth’ data from the company’s Digital Identity Network.
This gives these vendors’ customers the ability to manage and tune machine-generated rules. This ‘clearbox’ approach – a term coined by ThreatMetrix – takes the mystery out of machine learning. At the end of the day, a machine learning model’s logic can be expressed as a set of rules.
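That claim is easy to demonstrate with a transparent model family. The sketch below uses scikit-learn (my choice for illustration; neither vendor has said what it uses) to train a small decision tree on synthetic data, then prints the fitted model as plain if/then rules an analyst could read, audit or tune. The feature names are invented:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for labeled fraud / not-fraud events.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A shallow tree keeps the extracted rule set short enough to review by hand.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if/then conditions.
# Feature names are hypothetical, chosen to look like fraud signals.
print(export_text(clf, feature_names=["velocity", "geo_distance",
                                      "account_age", "device_risk"]))
```

A deep neural net does not decompose this cleanly, which is precisely the gap the vendors’ rule-generation layers aim to close.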
Security vendors would be well advised to incorporate such a clearbox approach into their products. By doing so they would win broader adoption among justifiably skeptical security managers who need more control over security policies and actions.
Lessons from Google Search
We should all take a lesson from Google, which is slowly moving toward having AI manage its search engine.
Google executives have traditionally been resistant to using machine learning inside their search engine, for good reason: it is often difficult to understand why neural nets behave the way they do, which would make it much harder to manage and refine search behavior and results.
Nonetheless, as Google gains greater insight into its AI engines, it will rely on them more and more. Google’s Larry Page put it best:
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”
Security works much the same way. It would be ideal for AI to automatically ascertain our security postures and to immediately and continuously correct the errors and vulnerabilities that it finds.
Those days are far off, however. In the meantime, all we can do is try to understand what our AI and machine learning engines are doing, and learn to control them rather than the other way around.