
Model Blending Prevents Artificial General Intelligence (AGI)

By Anthony J. Bradley | April 26, 2022

Many artificial intelligence (AI) and technology experts take the position that artificial general intelligence (AGI) is just a matter of time. But right now, there is no discernible path from current AI capabilities to AGI. I agree with the latter point and not the former. If there is no traceable path, then AGI is pure speculation.

 

Model blending is the holy grail of artificial general intelligence. And we have no idea how to get there.

 

I do agree with the AI pundits who believe that “learning” is at the heart of the AGI challenge. However, machine learning (ML) algorithms learn within the confines of a single model, while the human brain is a multi-model marvel. Our brains empower us to learn by absorbing disparate models and blending them to create entirely new ones. This difference is critical. And yet the concept of model blending almost never appears in the ML literature.

 

Let’s examine the basic scenario of watching a video of a cat playing the piano. We know we’re looking at a computer, so we have that model in our heads. We recognize the pixels as a cat. But we also recall our “cat” model, with all the robustness of what cats are. And then we blend in our models of a piano and of music. Our brain might also absorb a model of the room the piano is in. We blend all of these models together to form an understanding of the situation. And then we conclude that, although entertaining, the cat is not really playing the piano.

 

Human brains blend unknown multitudes of mental models into a tapestry of understanding.

 

We blend some models together to understand the central theme and others to put it in context. Our brains continuously blend a wide array of models so that we can understand the world in which we live. All of us are born with this multi-model marvel. And thankfully, it seems to come with a large number of predefined models.

 

Every human brain sits somewhere on the spectrum from common sense to rare wisdom. Its placement is determined by its ability to blend numerous models and form new, sometimes unique, ones. Until we crack this multi-model code, we can’t come close to building machines that learn the way humans do. And here’s the rub: we don’t even know how our brains do it. We certainly understand some of the mechanics at the neuron and synapse level. But that is a far cry from understanding how billions of neural connections work together to absorb, store, retrieve, blend, and build models.

 

Model Blending Fuels Abductive Reasoning

 

In Eric J. Larson’s excellent book, “The Myth of Artificial Intelligence,” he discusses abductive reasoning (vs. deductive and inductive). One of his main points is that AI, as it exists today, is completely devoid of abductive reasoning.

 

Abductive reasoning is very different from inductive and deductive reasoning, largely because its actual source is unknown. Abductive reasoning covers educated guesses, mental leaps, and even subconscious decision making. The overwhelming majority of decisions we humans make every day result from abductive reasoning. Mental model blending is the capability behind it.
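To make the contrast concrete, here is a toy Python sketch built around Peirce’s classic bean example (deduction: rule + case gives a certain result; induction: cases suggest a rule; abduction: a result plus a rule suggests a best-guess explanation). The functions and the hypothesis scores are my illustrative assumptions, not anything from Larson’s book:

```python
# Toy contrast of the three reasoning modes, using Peirce's bean example.
# Everything below is an illustrative sketch, not a real reasoning engine.

def deduce(rule_holds, from_bag):
    # Deduction: "all beans from this bag are white" + "these beans are
    # from this bag" -> certainly, these beans are white.
    return rule_holds and from_bag

def induce(samples):
    # Induction: observed cases -> a generalized rule (never certain):
    # "every bean we drew was white, so all beans in the bag are white."
    return all(color == "white" for color in samples)

def abduce(observation, hypotheses):
    # Abduction: an observed result + known rules -> the best-guess
    # explanation. Pick whichever hypothesis makes the observation
    # least surprising. It is a guess, not a proof.
    return max(hypotheses, key=lambda h: h["explains"](observation))

# Hypothetical candidate explanations with made-up plausibility scores.
hypotheses = [
    {"name": "beans are from this bag",
     "explains": lambda obs: 1.0 if obs == "white" else 0.0},
    {"name": "beans were painted",
     "explains": lambda obs: 0.1},
]

best = abduce("white", hypotheses)
```

Note that `abduce` merely ranks explanations and leaps to the most plausible one, which is exactly why abduction produces useful but fallible conclusions.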

 

Have you ever been lost in thought while driving your car and yet still arrived at your destination without recalling making any turns? Or have you ever had a great idea that seemed to come out of nowhere? That’s abductive reasoning. We can do this because our brains are designed, from before birth, to process the complexity of this world by consciously and subconsciously blending a multitude of models. 

 

ML can’t provide the multi-model blending capabilities required for AGI.

 

But despite what you may have heard, algorithms do not learn the way humans learn. There is a fundamental and enormous difference: ML algorithms don’t blend models. Instead, they “learn” within the boundaries of a single model. Whether it’s supervised, unsupervised, semi-supervised, self-supervised, super-supervised, or ultra-mega-supervised, it is still the optimization of a single model. Even “transfer learning” is essentially a cut-and-paste from one model to jumpstart a similar model.
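That cut-and-paste idea can be shown with a minimal sketch. The “model” below is a toy dictionary of weights, not any real library’s API; the point is only that transfer learning copies learned parameters verbatim and re-initializes the task-specific part, all still within one model:

```python
import random

random.seed(0)

def make_model(n_features):
    # A toy "model": a shared feature layer plus a task-specific head.
    return {
        "features": [random.uniform(-1, 1) for _ in range(n_features)],
        "head": random.uniform(-1, 1),
    }

def transfer(source):
    # "Cut and paste": copy the learned feature weights verbatim,
    # then re-initialize only the head for the new, similar task.
    return {"features": list(source["features"]), "head": 0.0}

source_model = make_model(4)   # stands in for a trained model
target_model = transfer(source_model)

# The feature weights are identical; only the head starts fresh.
# Nothing here blends two models into a new one -- it jumpstarts
# a single model with another single model's parameters.
```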

 

I’m being funny here. I don’t mean to downplay the significance of what has been done with machine learning algorithms. It is absolutely tremendous. It offers humanity a wide array of great opportunities. However, when compared to the learning capabilities of the human brain, ML is cripplingly limited.

 

The human brain’s model-blending equivalent, in ML terms, would be a vast and deep network of different model algorithms. These algorithms would dynamically pass parameters back and forth in a continuous, simultaneous state of contextual awareness, retraining, and execution. We don’t know how to do that 🙂
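For illustration only, here is a crude sketch of what such a network of cooperating models might look like mechanically, using the cat-video scenario from earlier. The model names and the shared-context mechanism are hypothetical assumptions; real blending would involve continuous retraining, not a single pass:

```python
# Hypothetical sketch: several single-purpose "models" read from and
# write to a shared context, and the blended conclusion draws on all
# of them at once. This is an illustration, not a real architecture.

def screen_model(ctx):
    ctx["medium"] = "video on a computer screen"

def cat_model(ctx):
    # A real cat model would carry everything we know about cats,
    # e.g. that cats lack the dexterity to actually play instruments.
    ctx["subject"] = "cat"
    ctx["subject_can_play_piano"] = False

def piano_model(ctx):
    ctx["instrument"] = "piano"

def blend(models):
    ctx = {}
    for model in models:
        model(ctx)  # each model contributes its slice of understanding
    playing = ctx.get("subject_can_play_piano", True)
    ctx["conclusion"] = (
        "entertaining, but the cat is not really playing the piano"
        if not playing
        else "the cat is playing the piano"
    )
    return ctx

understanding = blend([screen_model, cat_model, piano_model])
```

Even this trivial version shows the gap: the interesting work (deciding which models apply, and how their outputs combine) is hard-coded here, and nobody knows how to make a machine do it dynamically.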

 

So why did I take the time to write this post? Here’s where the broken record kicks in. Business leaders shouldn’t spend any time on artificial general intelligence and the prospect of machine intelligence replacing human intelligence. Don’t get sucked into the media hype trap. Instead, focus on machine intelligence and the benefits it can deliver to us humans.
