This post is inspired by a few painful discussions on artificial intelligence (AI) that I had both in public (on Twitter) and internally.
Let’s start with a joke:
Q: How do you know that a security vendor REALLY uses AI in their product?
A: If they say they do it, then you know they don’t.
On a more serious note, where do you draw the line TODAY, in the real world of real life, not in some remote future where AGI is as common as smartphones are today (personally, I give it at least 50 years)? Today we are being flooded by “AI for security” marketing, as well as by dangerous levels of confusion between AI, deep learning, machine learning and even advanced analytics (the last sounding decidedly “oh so 2016”).
To save the world, Gartner analysts have crafted a lot of serious research on this topic, and more excellent pieces are coming. My favorite, by far, is “Questions to Ask Vendors That Say They Have ‘Artificial Intelligence’” that has such gems as “Ask the Vendor to Define AI and ML, and Then to Demonstrate How the Product Uses AI” and “Require the Vendor to Describe How the Product Improves in Outcomes [as it learns].”
Another great piece is “Hype Hurts: Steering Clear of Dangerous AI Myths” with gems like “Demand clear proof of the viability of, and solid reason for, an AI investment. Trust none of the myths and hype around AI.”
And then there is this: “There is No Intelligence in AI,” which says that anybody is free to label any amazing innovation “AI” for the simple reason that there is no intelligence in any machine today (and probably won’t be for decades, if ever). So, go and stick an “AI” sticker on anything cool!
It turns out that our AI analysts often use the phrase “AI” to mean “top techniques from the field of Artificial Intelligence,” which today means “deep neural networks” (DNNs, shorthanded to “deep learning” by some), natural language processing, image recognition, etc. (the latter probably uses DNNs anyway).
This helps, but it does not truly save those of us living in the realm of cyber. And one thought still haunts me: what is the actual RED LINE for “they use AI”? So, how do I do it? How do I tell almost-AI from lying-scum-AI?
I. First, if you consider AGI to be the only “real AI”, then relax. Nobody has it. Easy, huh?
II. Now, let’s get to “narrow AI” or “weak AI.” Sorry, marketers, not every machine learning (ML) method qualifies [especially given that people now consider linear regression to be an ML method]. To me, DNNs and other deep learning stuff do qualify as of today, as long as they solve hard problems, learn and deliver good results. Meanwhile, IMHO, simpler supervised and unsupervised ML is just that: ML, and claiming “AI” is … well… debatable.
III. So, does the vendor use deep learning for anything important in their product and does it work better than alternatives? If yes and yes, perhaps they are indeed an AI security vendor.
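To make point II concrete: here is the kind of “ML” that now shows up in vendor decks. A minimal sketch (pure Python, hypothetical toy data) of closed-form least-squares linear regression, arguably the simplest supervised ML method there is. If a product’s “AI” boils down to this, the label is marketing, not technology.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized,
    # since the normalization cancels in the slope ratio).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical "training data": e.g., hosts monitored vs. alerts per day.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

The point of the sketch: this “learns” parameters from data, so it technically is machine learning, yet nobody would seriously call a two-parameter line fit “artificial intelligence.”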
BTW, one of the vendors I spoke with had an intern download Tensorflow and then experiment with it. Are they now “an AI startup”? Hell no! This fails both the “used for important functions” and the “it works IRL” criteria.
So, better now? I do expect this to evolve and change as AI evolves and learns…. and we learn it. And it learns us 🙂
Some good related reading is here and here and of course here (sorry, Raffy!).
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
“If they say they do, you know they don’t” puts some fresh perspective on that subject;) Also thanks for the footnotes!
Robin Sommer and Vern Paxson wrote a paper quite a few years ago on their positions relating to machine learning as applied to network intrusion detection. I’ve always liked the position they took in the paper.
Hey Seth, thanks a lot for this – I remember reading it, but kinda forgot it existed. Thanks a lot for the reminder, now I need to reread it…
How exactly can you compare two different AI/ML solutions in a real-life situation?
How do you know whether they improve over time?
Sadly, the only answer I have on this is “real-world testing.” In the beginning, this means a longer POC vs. other tools, and overall it means aggressive metrics collection regarding their effectiveness.
Sorry, no easy way here
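To make “aggressive metrics collection” a bit more concrete, here is a minimal sketch of comparing two detection tools on precision and recall during a POC. All incident IDs and alert sets below are hypothetical; in practice, the ground truth would come from analyst-verified incidents observed during the test period.

```python
def precision_recall(alerted, actual):
    """Compute precision and recall from sets of alerted and true incident IDs."""
    true_positives = len(alerted & actual)
    precision = true_positives / len(alerted) if alerted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical POC data: which incident IDs each tool alerted on,
# versus the incidents analysts confirmed actually happened.
true_incidents = {1, 2, 3, 4}
tool_a_alerts = {1, 2, 3, 9}         # 3 real hits, 1 false positive
tool_b_alerts = {1, 2, 5, 6, 7, 8}   # 2 real hits, 4 false positives

for name, alerts in [("A", tool_a_alerts), ("B", tool_b_alerts)]:
    p, r = precision_recall(alerts, true_incidents)
    print(name, round(p, 2), round(r, 2))  # A: 0.75 0.75, B: 0.33 0.5
```

Tracking these numbers across successive time windows is also one way to answer the “does it improve over time” question: a tool that genuinely learns should show precision and recall trending up, not flat.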