Remember [some] NIDS of the 1990s? Specifically, those that were unable to show the packets that matched the rule triggering the alert! Remember how they were deeply hated by the intrusion detection literati? Security technology that is not transparent and auditable is … what’s the polite term for this? … BAD SHIT!
My research into security analytics and Gartner's recent forays into so-called “smart machines” research converge in this post. Hilarity ensues!
Today we are – for realz! – on the cusp of seeing some security tools that are based on non-deterministic logic (such as select types of machine learning) and thus are unable to ever explain their decisions to alert or block. Mind you, they cannot explain them not because their designers are sloppy, naïve or unethical, but because the tools are built on methods and algorithms that are inherently unexplainable [well, OK, a note for the data scientist set reading this: the overall logic may be explainable, but each individual decision is not].
For example, if you build a supervised learning system that looks at known benign network traffic and known attack traffic (as training data) and then extracts the dimensions it thinks are relevant for making the call on which is which in the future, it will NEVER be able to fully explain why a particular decision was made. [and, no, I don’t believe such a system would be practical for a host of other reasons, if you have to ask] Sure, it can show you the connection it flagged as “likely bad”, but it cannot explain WHY it flagged it, apart from some vague point like “it was 73% similar to some other bad traffic seen in the past.” Same with binaries: even if you amass the world’s largest collection of known good and known bad binaries, build a classifier, extract features, train it, etc. – the resulting system may not be able to explain why it flagged some future binary as bad [BTW, these examples do not match any particular vendors that I know of, and any matches are purely coincidental!]
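To make this concrete, here is a minimal toy sketch (scikit-learn, made-up flow features, not any vendor's product) of such a classifier: it happily emits a “likely bad” score, and nothing resembling a human-readable reason.

```python
# Purely illustrative sketch (not any vendor's product): a supervised
# classifier trained on labeled "benign" vs "attack" flow features.
# It can emit a score ("likely bad: 97%") but no per-decision WHY.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Made-up flow features: [bytes_out, duration_sec, distinct_ports]
benign = rng.normal([2_000, 30, 3], [500, 10, 1], size=(500, 3))
attack = rng.normal([90_000, 2, 40], [20_000, 1, 10], size=(500, 3))

X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)           # 0 = benign, 1 = attack

clf = RandomForestClassifier(n_estimators=200).fit(X, y)

new_flow = [[85_000, 3, 35]]                  # an unseen connection
score = clf.predict_proba(new_flow)[0][1]
print(f"likely bad: {score:.0%}")
# Feature importances describe the model overall, not this one verdict;
# there is no human-readable rationale attached to the decision.
```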
My dear security industry peers, are we OK with that? Frankly, I am totally fine with ML-based recommendation engines (what to buy on Amazon? what to watch on Netflix?) – occasionally they are funny, sometimes incorrect, but they are undoubtedly useful. Can the same be said about a non-deterministic security system? Do we want a security guard who shoots people based on random criteria, such as those he “dislikes based on past experiences”, rather than using a white list (let them pass) and a black list (shoot them!)?
One security data scientist recently reminded me that the “fast / correct / explainable – pick any two” wisdom applies to statistical models pretty well, and those very models are now creeping into the domain of security. Note that past heuristics and anomaly detection approaches, however complex, are substantially different from this coming wave of non-linear machine logic. You can still do those old anomaly detection computations “on paper” (however hard the math) and come to the same conclusion as the system – but not with today’s ensemble learning (ha-ha, my candidate model just beat up your champion model!), where the exact decision logic is machine-determined on each occasion.
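A toy contrast, purely illustrative (Python, made-up numbers): the first check below is the kind of “old” anomaly math you can redo on paper and reach the same verdict as the system; the second is an ensemble whose decision path is grown by the training process rather than written down by a human.

```python
# Toy contrast (my own stand-ins, not any particular product).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# 1) "Old-school" anomaly detection: a z-score threshold you can redo
#    on paper and always reach the same verdict.
logins_per_hour = np.array([4, 5, 3, 6, 4, 5, 90])        # made-up counts
mu, sigma = logins_per_hour[:-1].mean(), logins_per_hour[:-1].std()
z = (logins_per_hour[-1] - mu) / sigma
print("anomalous" if z > 3 else "normal")                   # reproducible by hand

# 2) Ensemble learning: hundreds of machine-grown trees vote; the exact
#    decision path comes out of the training process, not a human-written
#    rule, and is impractical to replay on paper.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)               # synthetic labels
ensemble = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print(ensemble.predict(X[:1]))                               # a verdict, no rationale
```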
By the way, my esteemed readers know that all of my work focuses on reality, not marketing pipe dreams and silly media proclamations (remember the idiot who said “Cyber security analytics isn’t particularly challenging from a technical perspective”?). I assure you that this is about to become a real concern!
When asked about this issue, designers of security tools that substantially rely on non-deterministic logic offer the following bit of advice: build trust over time by simply using the system. In essence, don’t push the system into any blocking [or “waking people up at 3AM”] mode until you trust it to be correct enough by whatever standard you hold dear. Do you think this is sufficient, in all honesty? Sure, some people will say “yes” – after all, most users of AV tools do not manually inspect all the anti-malware signatures, choosing to trust the vendor. But it is one thing to trust the ultimately-accountable vendor threat research team, and quite another to trust what is essentially a narrow AI.
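In practice, that advice usually amounts to something like the hypothetical toggle below (a sketch, not any product’s actual configuration): keep the model’s verdicts advisory until it has earned the right to block or page.

```python
# Hypothetical deployment sketch (not any product's real config):
# keep a non-deterministic detector in alert-only mode until its
# verdicts have been validated to whatever standard you hold dear.
BLOCKING_ENABLED = False          # flip only after the trust-building period

def handle_verdict(connection_id: str, score: float, threshold: float = 0.9) -> str:
    """Turn a model score into an action."""
    if score < threshold:
        return "allow"
    if BLOCKING_ENABLED:
        return "block"            # inline-blocking / 3AM-page territory
    return "alert-only"           # log for later human review, do not block

print(handle_verdict("conn-42", 0.97))   # -> "alert-only" while trust is built
```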
P.S. We are still hiring – read this and apply!
Blog posts on the security analytics topic:
- SIEM / DLP Add-on Brain
- Those Pesky Users: How To Catch Bad Usage of Good Accounts
- Security Analytics Lessons Learned — and Ignored!
- Security Analytics: Projects vs Boxes (Build vs Buy)?
- Do You Want “Security Analytics” Or Do You Just Hate Your SIEM?
- Security Analytics – Finally Emerging For Real?
- Why No Security Analytics Market?
- SIEM Real-time and Historical Analytics Collide?
- SIEM Analytics Histories and Lessons
- Big Data for Security Realities – Case 4: Big But Narrowly Used Data
- Big Data Analytics Mindset – What Is It?
- Big Data Analytics for Security: Having a Goal + Exploring
- More On Big Data Security Analytics Readiness
- Broadening Big Data Definition Leads to Security Idiotics!
- 9 Reasons Why Building A Big Data Security Analytics Tool Is Like Building a Flying Car
- “Big Analytics” for Security: A Harbinger or An Outlier?
8 Comments
Although I am happy to have been cited, I happen to agree only partly with your blog post.
A lot of what you are talking about here ties into one of the entries of my “Security Machine Learning Buyer’s Guide”, which is “Why are you building ML into this in the first place?”.
I agree 100% that replacing a deterministic process for a non-deterministic one for the sake of saying your product is now a cornerstone of the Security Singularity is a terrible idea. You are now replacing something that can be explained and reproduced by some “magic AI sauce” that, if implemented correctly, is intrinsically stochastic and mathematically proven to be error-prone.
But what about automating non-deterministic processes? By definition, they cannot be expressed as a deterministic rule breakdown. For instance, if you are trying to automate threat analysts’ hunting patterns on a log corpus, based on their knowledge of known threats that can affect your organization and their knowledge of what good communications look like, good luck configuring a deterministic tool, like your traditional SIEM, to make this happen. I am sure every single one of your analysts will give you a different answer every time you ask them.
There is a place for non-deterministic tools in security, especially when our traditional deterministic ones are falling so short. During my research in this field and by getting to know peers and their research, I have seen a lot of different approaches and experiments that are trying to bridge this gap, and honestly, a bunch of them suck badly. But this does not mean that the good ones are “shooting people at random”, because the successful models will be more carefully designed than that.
I honestly believe some of these experiments and emerging technologies are our best bet so far for giving us a fighting chance against novel threats. The explainability factor can be treated as a customer usability issue: providing additional context for each detection entry gives users the ability to validate whether the alerts are worth investigating or not.
Unless you are doing unsupervised learning, because friends don’t let friends put unsupervised machine learning algorithms in production. 😉
@Alex Thanks a lot for your super-insightful comment.
First, “replacing a deterministic process for a non-deterministic one for the sake of saying your product is now a cornerstone of the Security Singularity is a terrible idea.” makes for an awesome quote 🙂 Speed up the singularity – use AI to decide what coffee to drink and which ear to wash first 🙂
Also, note that I ask questions about the role of such tools, not call for their ban! Indeed, I agree with “There is a place for non-deterministic tools in security” and as the other commenter notes, they also work together with smart humans to reveal possible subtle signals.
Finally, a most excellent point about “automating non-deterministic processes.” You so totally win here 🙂
Here you are making the assumption that all the solutions using non-deterministic technologies can be used as a “silver bullet” without human intervention. IDS, SIEM, Threat Intelligence and more can be lumped into what I call “security decision support systems”, meaning they aid a human in making decisions about the effectiveness of a specific security control, or about whether or not an organization got compromised. They help process data at wire speed or make decisions on large amounts of “big data”. So they have to be used and interpreted to aid an analyst in comprehending security events.
Vendors would probably like them to be useful by themselves, without the aid of human intelligence, but they cannot be thought of in a vacuum.
@Michelangelo Thanks for the comment. I am happy and sad that you made the comment. Indeed [the happy part], if a smart human runs such a “security decision support system”, then vague indicators revealed by machine logic would be useful.
However, [the sad part] even for systems designed to be run this way, there will be HUGE CROWDS of people who would want this system to REPLACE an analyst. Their fail will be epic indeed – but they will (IMHO) blame the system designer and not themselves…..
First: Great post! Love the discussion here, too.
I see three different things happening here:
1. Deterministic vs. non-deterministic:
This insinuates that given the same input, the algorithm can produce different outputs. Users hate such surprises and data-scientists hate the difficulty this places on testing. This is why it’s fairly uncommon… Unless there’s a good reason for it, such as when dealing with very-big-data. For example, we have an unsupervised, non-deterministic clustering algorithm that uses randomness to allow processing without building immensely huge distance-matrices which would take more storage than the original input.
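A stand-in sketch of that trade-off (scikit-learn’s MiniBatchKMeans, not the commenter’s actual algorithm): random mini-batches avoid any pairwise distance matrix, at the price of run-to-run variation unless the random seed is pinned.

```python
# Stand-in for the trade-off described above (not the actual algorithm):
# randomized mini-batch clustering scales to large inputs without ever
# building a pairwise distance matrix, but it is non-deterministic.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.default_rng(7).normal(size=(200_000, 10))   # stand-in "big data"

# Each run draws random mini-batches, so two runs may disagree slightly
# unless random_state is fixed.
labels_a = MiniBatchKMeans(n_clusters=8, batch_size=10_000, n_init=3).fit_predict(X)
labels_b = MiniBatchKMeans(n_clusters=8, batch_size=10_000, n_init=3).fit_predict(X)
```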
2. Supervised vs. unsupervised:
“Supervised learning” can mean a lot of things — it is truly a very broad spectrum. The question, however, is whether there is value in “learning” and — equally important — what to learn, and from what teacher?
If you’re building an algorithm for a self-driving car, you probably do not want to be doing any active “learning” from the human in production.
However, if you’re building a security solution, it might be very helpful to learn from the user, but the trick is knowing what to learn. The biggest challenge in this case is over-fitting: learning so much from the user that you just scale up his benefits and, worse, his flaws:
“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.” — Bill Gates
3. Explainable vs. Inexplainable:
I lose much sleep over this… A great algorithm, good results, but the user isn’t convinced — he wants to know “how”; not to understand the algorithm, but how he could do it himself! It’s natural for us to fear what we do not understand…
IMHO, there is no magic answer here: the data scientist must be a domain expert in both the algorithm and the problem being solved. That is the only way, I found, that you can force, truly force, some explanation onto an otherwise inexplainable algorithm. The explanation doesn’t have to be about the algorithm, but it must make sense to the user.
“fast / correct / explainable – pick any two”
This is only partially true: if your algorithm is 100% “correct”, you rarely need to be “explainable” — you just have to automate it — call it a blacklist — you only have to be “fast (enough) & correct.”
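In other words (a trivial sketch with made-up, documentation-range IPs): once a verdict source is trusted to be correct, it degenerates into a lookup, and nobody asks it to explain itself.

```python
# Trivial sketch: a "100% correct" verdict source needs no explanation,
# only automation. IPs below are made up (documentation ranges).
BLACKLIST = {"203.0.113.7", "198.51.100.23"}

def verdict(src_ip: str) -> str:
    return "block" if src_ip in BLACKLIST else "allow"

print(verdict("203.0.113.7"))   # -> "block"; fast (enough) and correct
```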
The problem, in both machine-learning and the vague field of cyber-security, is that algorithms are rarely that correct (regardless of speed) and that’s where you need humans — to investigate. We, today, have so many “anomaly/behavior” detection tools that shoot alerts at various degrees of “confidence” (because “correctness” is hard on marketing) and we have humans do the post-analysis. Those detection tools are truly amazing and, I predict, we will have more of them, that are more specialized and, probably, less explainable — at least on their own. In parallel, we need a different set of solutions that help analysts take all those highly specialized, hard-to-understand alerts and make them understandable.
We need all three! 🙂
P.S.
Since this is long and unformatted, I also replied at https://www.linkedin.com/pulse/re-rise-non-deterministic-security-alex-vaystikh
@Alex Thanks for a truly awesome comment. I will touch on your #1 and #3 (#2 sort of naturally makes sense!)
Indeed, your point about deterministic (#1) and explainable (#3) is an excellent one. I sort of tied them together, and that was wrong. Very hard math may be hard to explain, but it will be totally deterministic. ML logic may be simple, yet non-deterministic.
The risk I sort of hinted at in the post was in the combination of non-deterministic and hard-to-explain methods. Is the only option “try it and if you like it, accept the results”? Presumably, the answer is ‘yes’….
> “try it and if you like it, accept the results?” Presumably, the answer is ‘yes’
I think it may come down to what type of “results” we’re talking about.
If we broke down the steps required for “good cyber-security,” e.g. a closed feedback loop ranging from Detection, Assessment, Action, Evaluation and back to Detection — then, and dare I guess _only then_, people would accept inexplainable “results.”
However, I suspect the thesis would fail the moment you insert a human into that loop: some results will be accepted; others never will, particularly when some human is held responsible for those “results.”
It would be an interesting thesis to test. If I may go philosophical for a moment, this is not very different from the insurance / responsibility debate around self-driving cars, or the debate in aviation about whether over-reliance on flight automation increases risk or not.
Perhaps one day we’ll have an MSSP that operates completely automatically, like an algo-trading company, and when mistakes happen — insurance.
Either way, it would be interesting to test exactly under what conditions different types of results would be accepted without explanation [in cyber security].
Wanna write a paper with me?
Hey Alex … thanks for another comment.
Re: human in the loop – reminds me of this piece the other Alex mentioned: https://hbr.org/2015/02/heres-why-people-trust-human-judgment-over-algorithms
Indeed, even explainable algorithms may not get trusted. I suspect if the algorithm is seen as useful/trusted/correct over time (what time?) some people will accept that they don’t get how it delivers what it does…
BTW, an MSSP with less/no human involvement – one that actually works – would be an amazing achievement.
Re: paper – while it sounds exciting, my employer will definitely object. But I suspect we will eventually write a paper each – and can definitely review each other’s findings.