Gartner Blog Network


Artificial General Intelligence (AGI) is Impeding AI Machine Learning Success

by Anthony J. Bradley  |  November 20, 2019  |  4 Comments

I was at a social gathering a few weeks ago and one of the guests approached me saying, “I understand you know something about artificial intelligence.” He then went on to tell me how scary it is that within a few years we will have computers that can replace human beings. He was talking about what is referred to as artificial general intelligence (AGI).

One common definition of AGI is machine intelligence that can understand or learn anything that a human being can. 

I listened to him for quite a while to try to understand his viewpoint. Several of his statements began with something like, “Imagine when…” or “Imagine if…” Ah, there’s the rub. As human beings we can imagine all sorts of things that, in all likelihood, will never come to pass. Star Trek, Star Wars, Terminator and our current obsession with dragons and zombies all qualify in this category. Though some science fiction is closer to potential reality, AGI, à la “Ex Machina,” is more fantasy than science fiction.

Just because we can imagine machines behaving like humans doesn’t mean it will ever happen.

Still, AGI is a favorite topic for many artificial intelligence podcasts and other media outlets. They bring it up in discussions with very legit experts in the AI/ML field. And even though many of these luminaries believe that AGI is currently pure fiction and that we may never get there, the AGI conversations go on and on. We seem quite preoccupied with these imaginings. I’m not saying that the media obsession is completely unfounded.

Events like Watson beating Jeopardy! champions and AlphaGo defeating the Go world champion feed the AGI beast.

People who don’t understand how machine learning really works, and that is by far most people, can easily make the mental leap to imagining a world where people are completely replaced by machines. When AlphaGo made unexpected moves, announcers immediately began talking about the software being creative, inventive, or genius, when, in fact, it was executing mathematical equations and acting on the resulting probabilities.
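To make that last point concrete, here is a minimal, hypothetical sketch of what “acting on resulting probabilities” can look like. This is not AlphaGo’s actual code (which combines deep neural networks with Monte Carlo tree search); the move names and scores below are invented purely for illustration:

```python
import math

def pick_move(move_scores):
    """Turn raw value estimates into probabilities (softmax) and
    pick the most probable move -- arithmetic, not 'creativity'."""
    exps = {move: math.exp(score) for move, score in move_scores.items()}
    total = sum(exps.values())
    probs = {move: e / total for move, e in exps.items()}
    # "Acting on resulting probabilities": choose the highest-probability move.
    best = max(probs, key=probs.get)
    return best, probs

# Invented board positions and value estimates for illustration only.
best, probs = pick_move({"D4": 1.2, "Q16": 0.8, "K10": 2.1})
```

A move that surprises human commentators falls out of the same computation as any other move; the program simply picked the option its equations scored highest.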

It is this underlying lack of understanding that feeds the obsession. Let’s look back at the industrial revolution or almost any big historical technology breakthrough. Because they were primarily mechanical we could readily see the limitations of the technology. We could see and understand how a cotton gin worked. We knew it would have a major impact on that particular job but we also didn’t think that cotton gins were going to threaten human dominance. AlphaGo is similar to the cotton gin in that it was built for one specific job. It can play the game of Go under a very strict set of rules. I heard one ML expert clearly articulate this specificity with the following (paraphrasing), “If you made a small change to the Go rules like altering the shape of the board, the AlphaGo program would be lost.” It would no longer be able to play. However, any human who knows how to play Go could still play the game after changing the shape of the board from a square to a circle or a diamond.

Because of this “black box” invisibility and the ensuing lack of understanding, I believe it is important for people who do know how AI/ML really works to downplay AGI and suppress the obsession.

Continued discussions around AGI as if it is inevitable or achievable in the foreseeable future are destructive to the advancement of machine learning.

Here is my reasoning for the above statement.

1. Discussions on AGI can set unrealistic expectations for machine intelligence. This applies to everyone from business executives to individual consumers. Unrealistic expectations lead to disappointment, which slows adoption.

2. AGI unnecessarily scares people and hinders them from recognizing the tremendous benefits machine intelligence can offer humanity. Again, this can lead to slower adoption and even to sociopolitical fear and government regulation that stifle progress. There are real concerns over the appropriate and ethical use of AI/machine learning, but AGI is not one of them.

3. Human intelligence and machine intelligence are fundamentally different. They are very complementary. AGI conflates the two and creates fear over one replacing the other rather than exploring the tremendous benefits of combining the two.

4. AGI is far less practical than artificial specific intelligence (I am using ASI instead of “narrow AI” because it parallels AGI and I just like it better), which is making progress today. Why would we focus on AGI when it doesn’t deliver any greater value than ASI?

My team just produced a set of “Emerging Tech and Trends Impact Radar” reports for Gartner clients (an overall IT radar, a security radar and an AI radar). These reports profile the emerging technologies and trends we believe technology product and service providers should have on their product/service roadmaps. Guess what is not on the radars… AGI.

Thanks for reading. I’ll dive a bit deeper into these four reasons in subsequent posts. Respectful comments to further the discussion are always welcome. Others will be terminated 🙂



Anthony J. Bradley
GVP
13 years at Gartner
30 years in IT

Anthony J. Bradley is a Group Vice President in Gartner Research. In this role he leads global teams of analysts who research the emerging technologies and trends that are changing today's world and shaping the future. Mr. Bradley's group strives to provide technology product and service leaders (Tech CEOs, General Managers, Chief Product Officers, Practice Leads, Product Managers and Product Marketers) with unique, high-value research and indispensable advice on leveraging emerging technologies and trends to create and deliver highly successful products and services. Information technology now impacts pretty much every business function in all companies, all industries, and all geographies. Technology providers are critical to the technology and business innovation that will define the world of tomorrow. Innovation depends on technology providers. By helping them, we help the world.


Thoughts on Artificial General Intelligence (AGI) is Impeding AI Machine Learning Success


  1. Mark says:

    AGI as its own discipline is held back by machine learning. People are convinced that ML is the most viable route, so little resourcing is given to other avenues. A common lack of understanding of what general intelligence is possibly holds back both fields.

  2. Tom Austin says:

    Well said, Anthony.
    You might want to look at this little piece I wrote in a related vein: https://www.thansyn.com/ai-lacks-animal-intelligence/ It was inspired by a great new paper by Francois Chollet of Google, “The Measure of Intelligence,” 6 November 2019, https://arxiv.org/pdf/1911.01547.pdf
    Cheers,

  3. Anthony J. Bradley says:

    Thanks Tom, hope you are well.

  4. Anthony J. Bradley says:

    Interesting perspective. I tend to agree that separating them out as different sub-disciplines might be one way to go but a lot of the drawbacks I mention from hype around AGI would continue and still have a negative impact on AI as a whole.




Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.