
Artificial General Intelligence (AGI) Is Impeding AI Machine Learning Success

By Anthony J. Bradley | November 20, 2019

I was at a social gathering a few weeks ago and one of the guests approached me saying, “I understand you know something about artificial intelligence.” He then went on to tell me how scary it is that within a few years we will have computers that can replace human beings. He was talking about what is referred to as artificial general intelligence (AGI).

One common definition of AGI is machine intelligence that can understand or learn anything that a human being can. 

I listened to him for quite a while to try to understand his viewpoint. Several of his statements began with something like, “Imagine when…” or “Imagine if…” Ah, there’s the rub. As human beings we can imagine all sorts of things that, in all likelihood, will never come to pass. Star Trek, Star Wars, Terminator and our current obsession with dragons and zombies all qualify in this category. Though some science fiction is closer to potential reality, AGI, à la “Ex Machina,” is more fantasy than science fiction.

Just because we can imagine machines behaving like humans doesn’t mean it will ever happen.

Still, AGI is a favorite topic for many artificial intelligence podcasts and other media outlets. They bring it up in discussions with very legit experts in the AI/ML field. And even though many of these luminaries believe that AGI is currently pure fiction and that we may never get there, the AGI conversations go on and on. We seem quite preoccupied with these imaginings. I’m not saying that the media obsession is completely unfounded.

Events like Watson beating Jeopardy champions and AlphaGo defeating the Go world champion feed the AGI beast.

People who don’t understand how machine learning really works, and that is by far most people, can easily make a mental leap to imagining a world where people are completely replaced by machines. When AlphaGo made unexpected moves, announcers immediately began talking about the software being creative, inventive, or genius, when in fact it was executing mathematical equations and acting on the resulting probabilities.
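To make that last point concrete, here is a minimal sketch of what “acting on resulting probabilities” means. This is not AlphaGo’s actual architecture (which combines deep neural networks with Monte Carlo tree search); the move names and scores below are invented for illustration. The point is that the “decision” is just arithmetic: raw scores are converted into a probability distribution, and the highest-probability option is selected.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a policy model might assign to four candidate moves.
move_scores = {"A": 1.2, "B": 3.4, "C": 0.3, "D": 2.1}

probs = softmax(list(move_scores.values()))
# "Choosing" a move is simply taking the one with the highest probability.
best_move = max(zip(move_scores, probs), key=lambda mp: mp[1])[0]
print(best_move)  # prints "B", the highest-scoring move
```

Nothing in this loop understands Go, creativity, or strategy; an unexpected move is just a high probability the commentators didn’t anticipate.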

It is this underlying lack of understanding that feeds the obsession. Look back at the industrial revolution, or almost any big historical technology breakthrough. Because those technologies were primarily mechanical, we could readily see their limitations. We could see and understand how a cotton gin worked. We knew it would have a major impact on that particular job, but we also didn’t think that cotton gins were going to threaten human dominance. AlphaGo is similar to the cotton gin in that it was built for one specific job. It can play the game of Go under a very strict set of rules. I heard one ML expert clearly articulate this specificity with the following (paraphrasing): “If you made a small change to the Go rules, like altering the shape of the board, the AlphaGo program would be lost.” It would no longer be able to play. However, any human who knows how to play Go could still play the game after changing the shape of the board from a square to a circle or a diamond.

Because of this “black box” invisibility and the ensuing lack of understanding, I believe it is important for people who do know how AI/ML really works to downplay AGI and suppress the obsession.

Continued discussions around AGI as if it is inevitable or achievable in the relevant future are destructive to the advancement of machine learning.

Here is my reasoning for the above statement.

1. Discussions on AGI can set unrealistic expectations for machine intelligence. This applies to everyone from business executives to individual consumers. Unrealistic expectations lead to disappointment, which slows adoption.

2. AGI unnecessarily scares people and hinders them from recognizing the tremendous benefits machine intelligence can offer to humanity. Again, this can lead to slower adoption and even sociopolitical fear and government regulation that can stifle progress. There are real concerns over the appropriate and ethical use of AI/machine learning, but AGI is not one of them.

3. Human intelligence and machine intelligence are fundamentally different. They are very complementary. AGI conflates the two and creates fear over one replacing the other rather than exploring the tremendous benefits of combining the two.

4. AGI is far less practical than artificial specific intelligence (I am using ASI instead of “narrow AI” because it parallels AGI and I just like it better), which is making real progress today. Why would we focus on AGI when it doesn’t deliver any greater value than ASI?

My team just produced a set of “Emerging Tech and Trends Impact Radar” reports for Gartner clients (an overall IT radar, a security radar and an AI radar). These reports profile the emerging technologies and trends we believe technology product and service providers should have on their product/service roadmaps. Guess what is not on the radars… AGI.

Thanks for reading. I’ll dive a bit deeper into these four reasons in subsequent posts. Respectful comments to further the discussion are always welcome. Others will be terminated 🙂

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.



  • Mark says:

AGI as its own discipline is held back by machine learning. People are convinced that ML is the most viable route, so little resource is given to other avenues. A common lack of understanding of what general intelligence is possibly holds back both fields.

  • Tom Austin says:

Well said, Anthony.
    You might want to look at this little piece I wrote in a related vein. It was inspired by a great new paper by Francois Chollet of Google, “The Measure of Intelligence,” November 6, 2019.

  • Anthony J. Bradley says:

    Thanks Tom, hope you are well.

  • Anthony J. Bradley says:

    Interesting perspective. I tend to agree that separating them out as different sub-disciplines might be one way to go but a lot of the drawbacks I mention from hype around AGI would continue and still have a negative impact on AI as a whole.

  • Daniel Allen says:

    I’m part of a project that has figured out a way to get to AGI results through human swarm intelligence. It’s called Project Voy (dot Com). You can ask it anything you can’t ask any AI today and get a creative intelligent answer.