This post is inspired by a few painful discussions on artificial intelligence (AI) that I had both in public (on Twitter) and internally.
Let’s start with a joke:
Q: How do you know that a security vendor REALLY uses AI in their product?
A: If they say they do it, then you know they don’t.
On a more serious note, where do you draw the line TODAY, in the real world of real life, not some remote future where AGI is as common as a smartphone is today (personally, I give it at least 50 years)? Today we are being flooded by “AI for security” marketing, as well as by dangerous levels of confusion between AI, deep learning, machine learning and even advanced analytics (the last one sounding decidedly “oh so 2016”).
To save the world, Gartner analysts have crafted a lot of serious research on this topic, and more excellent pieces are coming. My favorite, by far, is “Questions to Ask Vendors That Say They Have ‘Artificial Intelligence’”, which has such gems as “Ask the Vendor to Define AI and ML, and Then to Demonstrate How the Product Uses AI” and “Require the Vendor to Describe How the Product Improves in Outcomes [as it learns].”
Another great piece is “Hype Hurts: Steering Clear of Dangerous AI Myths” with gems like “Demand clear proof of the viability of, and solid reason for, an AI investment. Trust none of the myths and hype around AI.”
And then there is this: “There is No Intelligence in AI”, which says that anybody is free to label any amazing innovation “AI” for the simple reason that there is no intelligence in any machine today (and probably won’t be for decades, if ever). So, go and stick an “AI” sticker on anything cool!
It turns out that our AI analysts often use the phrase “AI” to mean “top techniques from the field of Artificial Intelligence,” which today means deep neural networks (DNNs, shorthanded to “deep learning” by some), natural language processing, image recognition, etc. (the latter two probably use DNNs anyway).
This helps, but does not truly save those of us living in the realm of cyber. And one thought still haunts me in relation to this: what is the actual RED LINE for “they use AI”? So, how do I do it? How do I tell almost-AI from lying-scum-AI?
I. First, if you consider AGI to be the only “real AI”, then relax. Nobody has it. Easy, huh?
II. Now, let’s get to “narrow AI” or “weak AI.” Sorry, marketers, not every machine learning (ML) method qualifies [especially given that people now consider linear regression to be an ML method]. To me, DNNs and other deep learning stuff do qualify as of today, as long as they solve hard problems, learn and deliver good results. Meanwhile, IMHO, simpler supervised and unsupervised ML is just that – ML – and claiming “AI” for it is … well … debatable.
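To see why calling linear regression “AI” feels like a stretch, here is a minimal sketch (toy numbers are made up for illustration): the entire “training” step is a one-line closed-form least-squares solve, with no learning loop, no representation learning, nothing that resembles intelligence.

```python
import numpy as np

# Linear regression, the simplest thing people now call "ML":
# toy feature (e.g., volume of log data) and toy target (e.g., alerts seen).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.0, 6.2, 7.9])

# Add a bias column and solve the least-squares problem in closed form.
A = np.hstack([X, np.ones_like(X)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

slope, intercept = w
print(f"learned model: y = {slope:.2f}*x + {intercept:.2f}")
```

That one `lstsq` call is the whole “learning” process – which is exactly why lumping it in with deep neural networks under a single “AI” label breeds the confusion this post complains about.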
III. So, does the vendor use deep learning for anything important in their product and does it work better than alternatives? If yes and yes, perhaps they are indeed an AI security vendor.
BTW, one of the vendors I spoke with had an intern download Tensorflow and then experiment with it. Are they now “an AI startup”? Hell no! That fails both the “used for important functions” and the “it works IRL” criteria.
So, better now? I do expect this to evolve and change as AI evolves and learns…. and we learn it. And it learns us 🙂