Gartner Blog Network


2030: Have They Social Engineered Your AI?!

by Anton Chuvakin  |  October 20, 2015  |  1 Comment

I am excited to report that our Maverick research paper has been published. Please welcome “Maverick* Research: Your Smart Machine Has Been Conned! Now What?”

Led by Patrick Hevesi (@PatrickHevesi), and together with Erik Heidt (@CyberHeidt), we have explored the much-discussed concept of an AI apocalypse. In the abstract we say: “Smart machines [defined here, BTW] and AI pose huge future risks that derive from malicious humans using or abusing them to achieve their goals. Here, we focus on identifying and reducing those risks. (Maverick research deliberately exposes unconventional thinking and may not agree with Gartner’s official positions.)” Our original idea was that much of the bleating about “coming AI horrors” misses the point: the risks are indeed there, but – for the foreseeable future – they are NOT about machine rebellion; they are about machine exploitation by people. So we built a smart machine attack taxonomy and outlined new defense approaches. We also include three fun scenarios of smart machines being deceived, ranging from “almost real” to remotely futuristic.

Some fun quotes from the paper:

  • “The entire history of security teaches us that controls will fail; thus, AI and smart machines need extensive monitoring, anomaly detection and auditing controls to expose logic tampering.”
  • “Smart machines can be enormously complex and opaque [and non-deterministic], making it difficult or impossible to determine why they failed.”
  • “Focus on the risks likely to emerge between now and 2035 (smart machines being duped or otherwise misused by humans) instead of on longer-term existential fantasies and fears.”
  • “Scientifically speaking, we still don’t know what morality in humans actually is — and so creating a [full] digital morality in software is essentially impossible.”
  • “Morality-based attacks [further in the future] will try to deceive the machine’s ethical programming into believing what was ‘wrong’ is now ‘right.’ Therefore, machine morality must be protected.”
  • “At this time, questions of AI safety and security engineering, resilience, and other risk considerations are barely understood. As the field develops, these questions will become critical in the design of future AI and smart machines.”

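As a purely illustrative aside on the first quote above (this sketch is mine, not from the paper): here is roughly what “monitoring, anomaly detection and auditing controls” around a smart machine’s decision loop might look like at their very simplest. All names (DecisionAuditor, record, drift_alert) and thresholds are hypothetical.

# Illustrative toy only, not from the Gartner paper: an append-only audit log
# around a "smart machine" decision loop, plus a crude drift check.
# DecisionAuditor, record(), drift_alert(), and the thresholds are all hypothetical.
from collections import deque
from statistics import mean

class DecisionAuditor:
    """Keep an independent record of decisions and flag simple anomalies."""

    def __init__(self, baseline_rate, window=100, tolerance=0.15):
        self.baseline_rate = baseline_rate   # expected share of "approve" decisions
        self.recent = deque(maxlen=window)   # rolling window for drift detection
        self.tolerance = tolerance           # allowed deviation before alerting
        self.log = []                        # audit trail to help expose logic tampering

    def record(self, inputs, decision, confidence):
        # Log every decision outside the machine itself, so tampering with its
        # logic still leaves a trail that auditors can compare against expectations.
        self.log.append({"inputs": inputs, "decision": decision, "confidence": confidence})
        self.recent.append(1.0 if decision == "approve" else 0.0)

    def drift_alert(self):
        # Very crude anomaly detection: has the approval rate drifted from baseline?
        if len(self.recent) < self.recent.maxlen:
            return None
        observed = mean(self.recent)
        if abs(observed - self.baseline_rate) > self.tolerance:
            return f"Approval rate drifted to {observed:.2f} (baseline {self.baseline_rate:.2f})"
        return None

A real deployment would need far more than a rolling approval rate, of course, but even this toy captures the point of that quote: keep records of what the machine decided independently of the machine itself, and watch for its behavior drifting away from what you expect.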
Enjoy the paper here – with any Gartner license. We had a lot of fun – and some frustrations! – creating it, so have fun reading it!

Category: announcement  future  security  

Anton Chuvakin
Research VP and Distinguished Analyst
5+ years with Gartner
17 years IT industry

Anton Chuvakin is a Research VP and Distinguished Analyst at Gartner's GTP Security and Risk Management group. Before Mr. Chuvakin joined Gartner, his job responsibilities included security product management, evangelist…


Thoughts on 2030: Have They Social Engineered Your AI?!


  1. I’m really enjoying the theme/design of your website. Do you ever run into any browser compatibility issues?

    A number of my blog visitors have complained about my blog not operating correctly in Explorer, but it looks great in Opera.
    Do you have any tips to help fix this problem?



Comments are closed
