
2030: Have They Social Engineered Your AI?!

By Anton Chuvakin | October 20, 2015

Tags: security, future, announcement

It is with much excitement that I report that our maverick research paper has been published. Please welcome “Maverick* Research: Your Smart Machine Has Been Conned! Now What?”

Led by Patrick Hevesi (@PatrickHevesi) and together with Erik Heidt (@CyberHeidt), we have explored the much-discussed concept of an AI apocalypse. In the abstract we say: “Smart machines [defined here, BTW] and AI pose huge future risks that derive from malicious humans using or abusing them to achieve their goals. Here, we focus on identifying and reducing those risks. (Maverick research deliberately exposes unconventional thinking and may not agree with Gartner’s official positions.)” Our original idea was that much of the bleating about “coming AI horrors” misses the point: the risks are indeed there, but they are NOT – for the foreseeable future – about machine rebellion; they are about machine exploitation by people. Thus, we have built a smart machine attack taxonomy and outlined new defense approaches. We also have three fun scenarios of smart machines being deceived, ranging from “almost real” to remotely futuristic.

Some fun quotes from the paper follow (with a toy illustration of the monitoring idea after the list):

  • “The entire history of security teaches us that controls will fail; thus, AI and smart machines need extensive monitoring, anomaly detection and auditing controls to expose logic tampering.”
  • “Smart machines can be enormously complex and opaque [and non-deterministic], making it difficult or impossible to determine why they failed.”
  • “Focus on the risks likely to emerge between now and 2035 (smart machines being duped or otherwise misused by humans) instead of on longer-term existential fantasies and fears.”
  • “Scientifically speaking, we still don’t know what morality in humans actually is — and so creating a [full] digital morality in software is essentially impossible.”
  • “Morality-based attacks [further in the future] will try to deceive the machine’s ethical programming into believing what was ‘wrong’ is now ‘right.’ Therefore, machine morality must be protected.”
  • “At this time, questions of AI safety and security engineering, resilience, and other risk considerations are barely understood. As the field develops, these questions will become critical in the design of future AI and smart machines.”

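To make the first quote a bit more concrete, here is a minimal, hypothetical sketch of the kind of monitoring it calls for: a rolling z-score detector over a smart machine’s decision scores that flags sudden drift as a possible sign of logic tampering. None of this code is from the paper; the class name, parameters, and threshold are illustrative assumptions only.

    # Hypothetical sketch (not from the paper): flag statistical drift in a
    # smart machine's decision stream as a possible sign of logic tampering.
    from collections import deque
    import math

    class DecisionMonitor:
        def __init__(self, window: int = 200, threshold: float = 3.0):
            self.window = deque(maxlen=window)  # recent numeric decision scores
            self.threshold = threshold          # z-score beyond which we alert

        def record(self, score: float) -> bool:
            """Add one decision score; return True if it looks anomalous."""
            if len(self.window) >= 30:  # need a baseline before alerting
                mean = sum(self.window) / len(self.window)
                var = sum((s - mean) ** 2 for s in self.window) / len(self.window)
                std = math.sqrt(var) or 1e-9  # avoid division by zero
                if abs(score - mean) / std > self.threshold:
                    self.window.append(score)
                    return True  # a real system would log inputs for auditing here
            self.window.append(score)
            return False

    # Usage: feed every decision through the monitor and audit the alerts.
    monitor = DecisionMonitor()
    for score in [0.91, 0.89, 0.92] * 20 + [0.12]:  # sudden drop = possible tampering
        if monitor.record(score):
            print("anomalous decision score:", score)

A real deployment would, of course, go further: correlating such alerts with audit logs of the machine’s inputs, model versions, and retraining events, since a single statistical alarm proves nothing by itself.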
Enjoy the paper here – accessible with any Gartner license. We had a lot of fun – and some frustrations! – creating it, so have fun reading it!



