In November 2020, DeepMind, the company behind the victorious Go-playing program AlphaGo, announced a major breakthrough in medical science. The latest version of its deep learning AI system, AlphaFold, succeeded in predicting the folded structures of proteins to within roughly the width of an atom. This means it could now achieve, within a few days, the same accuracy that scientists would painstakingly reach only after years of trial and error in the lab.
This is a tremendous breakthrough because how a protein folds its amino acid chain into complex “ribbon” shapes determines how it behaves. The possibilities here are astonishing, and this opens a new and specialized branch of biophysics with the potential to alter modern medicine.
And yet Stephen Hawking warned that, “The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
So what is artificial intelligence (AI) really?
Is it friend or foe, savior or destroyer, or something in between? Why all the drama?
Artificial intelligence is, at its core, a marketing term.
It is a brilliant marketing term, as inaccurate as it is prevalent. But without that marketing brilliance we may not have had “The Terminator,” “I, Robot,” “A.I.,” “Ex Machina” and a host of other Hollywood hits. In fact, we may not have AI technology at all. The term has captured people’s imagination for over 60 years and has, to some degree, propelled the field forward. But it has also generated an unprecedented amount of destructive hype.
Artificial intelligence is often defined as:
- “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”
- “The capability of a machine to imitate intelligent human behavior”
- “Machines that can understand or learn anything that a human being can.”
The unifying theme behind most of the common definitions of AI is around imitating or simulating human intelligence. This is where the inaccuracy lies.
Just for fun, let’s try out an alternative and more accurate description of AI and assess how it would have played on the silver screen.
Artificial intelligence is a computer program that accepts inputs and infers desired outputs using rules-based or statistics-based algorithms.
So perhaps a more accurate label than “artificial intelligence” would be “inference programs.” Computer scientists often refer to AI programs as inference engines.
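Stripped of the marketing, the rules-based flavor of such an inference program is just a chain of explicit if/then rules mapping inputs to desired outputs. A minimal sketch (the rules and labels here are invented purely for illustration):

```python
# A minimal rules-based "inference engine": explicit, hand-written
# rules map an input to an output. Nothing here resembles thinking.
def classify_temperature(celsius):
    """Infer a label from a numeric input using fixed rules."""
    if celsius < 0:
        return "freezing"
    elif celsius < 15:
        return "cold"
    elif celsius < 25:
        return "mild"
    else:
        return "hot"

print(classify_temperature(-5))   # freezing
print(classify_temperature(20))   # mild
```

Statistics-based inference programs replace the hand-written rules with parameters estimated from data, but the input-to-output shape is the same.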
Which label and description do you like better, mine or the more common ones? I’m betting on the more common ones. The ones I offer are more accurate. But they are also more technical, more difficult to understand and not nearly as interesting. In other words, much less likely to start a Hollywood genre.
Not only has the AI term caught on but it has spawned a whole set of equally brilliant and inaccurate derivative marketing terms like machine learning, neural networks, model training, cognitive computing and natural language understanding. All of these terms feed misunderstanding and hype. This hype leads to inflated expectations, fears and poor executive decisions.
A better understanding of these terms helps combat the hype and focus on reality. Let’s have a little fun looking at a few of these terms. Let’s translate the marketing speak into more accurate tech speak and then into something more applicable to business.
A common definition of machine learning (per Wikipedia and Oxford) goes something like this:
“Machine learning is a computer program that does not follow explicit instruction but learns automatically through experience.”
More accurately, machine learning is a computer program based on mathematical algorithms that use an example data set to determine the variable input parameters that deliver a desired output.
In business terms, machine learning is a computer program that uses probability math to identify something based on similarity to documented examples (lots of them… preferably).
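The “probability math” at the heart of this can be surprisingly plain. As a sketch, here is the simplest possible case: using example input/output pairs to estimate the parameters of a model, here the slope and intercept of a line, via ordinary least squares. The data set is invented for illustration.

```python
# "Machine learning" stripped to its core: use example (input, output)
# pairs to estimate the variable parameters of a model -- here, the
# slope and intercept of a line, via ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Example data set: these outputs happen to follow y = 2x + 1 exactly,
# so the "learned" parameters recover that rule.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

No instructions about the line were ever written down; the parameters were determined entirely by the examples, which is all “learning automatically through experience” really means here.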
Artificial Neural Networks
Artificial neural networks are commonly described as computer programs that mimic the way the brain works.
However, they don’t. The similarity ends at the most rudimentary level, where one neuron passes information to the next. Comparing a neural-network program to the brain’s neural network is a little like comparing a single breeze to the earth’s weather systems.
More accurately, neural networks are arrays of interconnected simple mathematical formulas, with variable parameters, that pass outputs from the formulas in one layer as inputs to formulas in the next layer. The variables are adjusted in this array (often randomly) until the adjusted formulas arrive at the desired output. In short, it is a multilayer array of integrated simple equations.
In business terms, neural networks are computer programs that break big problems into small pieces and aggregate the results to find an acceptable solution.
Neural networks are often applied to detection problems such as computer vision, voice recognition and natural language processing where pictures, audio or natural language are parsed into smaller segments (pixel groups, wave segments, words and phrases) then aggregated by the neural network to recognize larger patterns.
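The “array of integrated simple equations” description can be made concrete. Below is a hedged sketch of a tiny two-layer network: each “neuron” is just a weighted sum pushed through a squashing function, and the outputs of one layer become the inputs of the next. The sizes and inputs are arbitrary, and the variable parameters are initialized randomly rather than trained.

```python
import math
import random

def layer(inputs, weights, biases):
    """One layer: each 'neuron' is a simple formula -- a weighted sum
    of the previous layer's outputs squashed through a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid squash
    return outputs

random.seed(0)
# A 2-input -> 3-neuron hidden layer -> 1-output network, with
# randomly initialized variable parameters (weights and biases).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
w2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]

hidden = layer([0.5, -0.2], w1, b1)   # outputs of layer 1 ...
output = layer(hidden, w2, b2)        # ... become inputs to layer 2
print(output)
```

Training would consist of repeatedly adjusting those weights and biases until the final layer’s output matches the desired one; no brain required.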
The term “model training” evokes the image of artificial intelligence programs going into a classroom to get some level of instruction on how to solve problems. But in reality there is no teaching or learning involved, at least not the teaching and learning we normally associate with human beings.
A more accurate and technical description of model training is a process in which the variable parameters of a designed mathematical model are estimated from a preexisting data set with known outcomes.
In business terms, model training is using numerous examples of a desired result, most often in the form of large data sets, to build a mathematical model that can match new items with the model’s desired result.
For example, a bank might want to profile the characteristics of their most profitable customers. So they take data on a large number of known profitable customers and apply AI to build a model based on relevant common characteristics of those profitable customers. They then run this “trained” model against other customers and prospects (or even portions of the general public) to identify people or businesses with high profitability potential. The bank can then adjust their product and marketing approach to try to acquire or nurture more profitable customers.
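One very simple way the bank example could work is a nearest-profile approach: average the features of known profitable customers into a “profile,” then score prospects by how closely they resemble it. The sketch below is hypothetical; the feature names and figures are invented for illustration, and a real bank would use a far richer model.

```python
import math

# Hypothetical sketch of the bank example: build a profile (centroid)
# from features of known profitable customers, then score prospects
# by similarity to that profile. All numbers are invented.
profitable_customers = [
    # (avg_balance_thousands, products_held, years_as_customer)
    (52.0, 4, 9),
    (48.0, 3, 7),
    (60.0, 5, 11),
]

def centroid(rows):
    """Average each feature across the example customers."""
    n = len(rows)
    return [sum(row[i] for row in rows) / n for i in range(len(rows[0]))]

def similarity_score(row, profile):
    """Higher score = closer to the profitable-customer profile."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, profile)))
    return 1 / (1 + dist)

profile = centroid(profitable_customers)
prospect_a = (55.0, 4, 8)    # resembles the profile
prospect_b = (5.0, 1, 1)     # does not
print(similarity_score(prospect_a, profile) >
      similarity_score(prospect_b, profile))   # True
```

The “trained model” here is nothing more than the averaged profile; running it against new prospects is just a similarity calculation.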
Natural Language Understanding
Whenever the word “understanding” is used with machine intelligence you should mentally replace it with recognition. With natural language understanding the machine doesn’t actually understand the concept. It merely recognizes, through matching, the letter, phrase or sound wave patterns. For example, if you speak or type the word “dog,” the algorithm will recognize the letters or sound waves associated with the word. The machine matches your “dog” input with the algorithm “dog” pattern. It doesn’t “understand” the concept. It doesn’t know that dogs are “man’s best friend” or that they bark or that they love treats or that you can’t teach old dogs new tricks.
So a more accurate term for “natural language understanding” would be “language pattern matching.”
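The “dog” example can be reduced to a few lines of code. This sketch (the patterns and labels are invented) shows recognition without any understanding: the program matches input text against stored patterns, and anything outside the patterns simply fails to match.

```python
# "Natural language understanding" as what it really is: pattern
# matching. The program matches input text against stored patterns;
# it has no concept of what a dog actually is.
patterns = {
    "dog": "ANIMAL_DOG",
    "cat": "ANIMAL_CAT",
}

def recognize(text):
    """Return the matched pattern label, or None.
    Recognition, not understanding."""
    return patterns.get(text.strip().lower())

print(recognize("Dog"))     # ANIMAL_DOG
print(recognize("puppy"))   # None -- no stored pattern, no match
```

Nothing in the program knows that dogs bark, love treats, or are man’s best friend; it knows only which character sequences it has been given to match.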
Why this post?
The reason I have spent the last few pages exploring marketing vs. reality is that it is critical for business leaders to understand that the hype around AI is extensive and dangerous. Business leaders must develop a good sense for what is real and what is hype. Those who get caught up in the hype develop unrealistic expectations of AI’s capabilities and tend to make poor decisions about how to apply it to their businesses.
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.