Predicting What Technology Will Do Is Much Easier Than Predicting What Humans Will Do With It
by Craig Roth | October 5, 2017
Here at Gartner we make a lot of predictions. I jumped in feet first as co-lead author of our Software Adoption Predicts research, and I have been closely watching the predictions on AI.
I’ve noticed a single factor that divides these predictions: whether or not they depend on the human element. If the topic is purely technical, like predicting what self-driving cars will be able to do on a test track in ten years, the human element is fairly low. Sure, the brilliance of the engineers, the tenacity of the project managers, and the ability of the execs to keep the business afloat long enough to see it through all play a part. But mostly that’s a prediction about what the technology will be capable of.
Compare that to predicting how many self-driving cars will be on the road in ten years. Now you’ve introduced humans into the equation and certainty goes out the window. Technological feasibility is just the bar of entry to a morass of social, political, and psychological considerations – human issues – that greatly decrease the confidence interval of the prediction.
The easiest way to approach a prediction that includes humans is to “assume” them out. Assume the human response will be rational, intelligent (or at least informed), benevolent, and unimpeding of technological progress (RIBU? I’d come up with an acronym for NONLUDDITE, but I don’t have time for that).
Artificial intelligence (or automation, or robots) invites many predictions about what it will be able to do. And that’s a start. But if the technology turns out to be powerful, companies may be smart about it or stupid; governments may be smart or stupid. Predicting the effect isn’t just about what the tech will do – it’s a prediction of how humans will behave when thrown a powerful new toy.
Before there were predictions that AI would power the world, there were predictions that nuclear energy would power the world. How did those turn out? Wikipedia has a nice summary:
In 1973, concerning a flourishing nuclear power industry, the United States Atomic Energy Commission predicted that, by the turn of the 21st century, one thousand reactors would be producing electricity for homes and businesses across the U.S. However, the “nuclear dream” fell far short of what was promised because nuclear technology produced a range of social problems, from the nuclear arms race to nuclear meltdowns, and the unresolved difficulties of bomb plant cleanup and civilian plant waste disposal and decommissioning.
Nuclear energy is powerful and real. But those darned humans got in the way of those 1,000 reactors. In fact, according to the Nuclear Energy Institute, there are only 99. Predictions based on its technological capability to power the world greatly exceeded what actually happened. Once you throw humans into the picture, you have fears of Fukushima-like failures, terrorism and espionage, and NIMBYism regarding storage of nuclear waste.
Back in the AI world, all of those outcomes apply. From social blowback (could “buy local” movements become “buy human”?), to terrorism (what if AI benefits hackers or malevolent governments more quickly than business?), to fear of catastrophe (could a Chalk River / Three Mile Island / Fukushima-type AI disaster spark the same NIMBYism that nuclear experienced?), the same dynamics are in play. It’s a big world, and it takes just one flawed use of AI in a critical public function to set any predictions about its progress back a decade.
So the biggest assumption behind AI predictions may be “assuming that humans act rationally and don’t do anything stupid.”
I predict this is going to be interesting.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.