I enjoy writing research for Gartner, but it doesn’t let me indulge my literary side too often. Until now. I was recently part of a scenario-building effort around Future of Work Scenarios 2035, the documents of which have just been published. The main document, “How Will Leaders Manage in a Majority-Bot Workforce World?”, describes the four scenarios we explored. I was assigned to the most pessimistic scenario: “Bots Go Bad.”
There are so many factors and possibilities to explore in this scenario that I decided to create a backstory to help me envision a single thread through them. The scenario evolved from this starting point, so the story doesn’t represent exactly what we discuss in the document.
Without further ado I bring you …
Bots Go Bad … The Story
Robogeddon may have ended with the overthrow of the century-old democratic world order by a rogues’ gallery of autocrats and dictators, but it started – simply enough – at a candy factory. And how it all came about says a lot more about the weaknesses of the humans who rejected AI than the strength of the machines…
The accident at the small-town candy factory that killed nine workers and caused horrible, disfiguring injuries to two dozen more was quickly blamed (correctly or not) on an AI algorithm that had been entrusted with setting the pressure in the boilers and managing their check valves. Before AI, monitoring and adjusting the machinery had been an art more than a science. The scarcity of the workers who could do it, and the time it took them to adjust to new confections, was seen as an anchor on the cost and speed with which the ever-changing tastes of children could be met.
Automation was tried with success, and it quickly gave way to increasingly sophisticated AI as it learned from its human trainers. The AI was successful – too successful, in fact, since two years without incident encouraged more trust than it deserved. A switch to a new supplier for some key ingredients produced mixtures that did not react as anticipated, triggering the explosion.
Local first responders arrived quickly on the scene, followed closely by local media. Then national media. Photos of disfigured workers, framed by shots of an idyllic small town, caused a frenzy.
Pundits quickly drew a line connecting it with several other recent AI failures.
AI had become so capable that it was used and trusted in myriad places, and given its ubiquity, there were bound to be failures: a self-driving truck that rammed into the car of a vacationing family; a popular AI sentencing system found to deal its harshest sentences to minorities; corporations that laid off scores of workers whose jobs went to robots rather than retrain them; and an AI-driven cancer diagnosis service that had misdiagnosed over 200 patients due to a bad data set (several had already died).
Of course, there were many more industrial accidents, vehicle accidents, and medical misdiagnoses before AI came along, but it seems people were much more accepting of human failure than that of mindless machines that would not beg forgiveness and could not be brought to justice.
The cry of “do something!” grew louder and louder until it was impossible for elected officials to ignore. With elections looming, each outdid the other in promising how tough they would be in clamping down on AI.
Which they did – spectacularly. A new set of elected officials, in office on promises to put a lid on AI, passed labeling laws, restricted R&D budgets, trebled damages attributable to AI (payable by its developers), required expensive and onerous paperwork of any company using it, and heavily incentivized “AI-free” companies. AI researchers found themselves in academic limbo; no one wanted to be seen attending an AI conference.
If that had been the end, it would simply have been a disappointing turn on the road to progress – onto a slower, more winding path, but an acceptable one.
Instead, it led to a more dangerous path. The only countries to respond to the reactionary cries of the public were the democracies, responsive and answerable to their citizens. A collection of autocratic countries jumped at the chance to use a powerful technology sworn off by their adversaries. Before the incident, 80% of worldwide AI research funding originated in democratic countries. Eight years later, that figure had dropped to 15%.
The first volleys of the Robogeddon were financial plays, exploiting market trends and inefficiencies faster than the best fund managers could hope to, rapidly shifting profits into autocratic hands.
This funded even more AI development, which was then turned to cyber attacks. Sophisticated, rapidly evolving AI algorithms drove the attacks, and the humans managing the targeted systems were helpless against them. The attacks hobbled competitors and stole even more funding.
Once automated weapons (one of the first uses outlawed by the democracies of the world) were established, the financial and military dominance of the Robo-powers was set for the next two hundred years. The Second Dark Age had begun.