Davos Discussions: Taking Stock of AI
by Peter Sondergaard | January 25, 2018
It’s my second day at the 2018 World Economic Forum. The snow remains and the conversations continue to revolve around technology with artificial intelligence (AI) commanding one of the spotlight positions.
What I found most compelling were discussions focused on whether AI, in its broadest definition, will replace or augment work. The headlines might lead you to believe that only disastrous scenarios are possible — that the more organizations adopt AI, the more jobs will become irrelevant. But AI isn’t binary. Some jobs will be lost. However, there is not a one-to-one correlation between the use of AI and en masse job elimination or replacement.
In fact, the contrary is true. AI augments human capabilities. It increases our accuracy, speeds our decision making and boosts our productivity. And, as was true of the Industrial Revolution, technological advances as a result of AI will spur job creation. In 2020, AI will create 2.3 million jobs while eliminating 1.8 million — a net gain of half a million new positions. Organizations will realize an added benefit as well: in 2021, AI augmentation will generate $2.9 trillion in business value and save 6.2 billion hours of worker productivity.
As productivity increases and AI assumes some jobs, others will open up requiring new skills. Take, for instance, actuaries. In the future, technology could perform much of their work. Yet their skills and background make them perfect candidates to become data scientists. Organizations able to identify such candidates will be able to successfully “digitally upskill” their talent. After all, artificial intelligence requires natural intelligence — people.
Here’s the challenge: How do we develop the skills, knowledge and new ways of thinking that AI demands? And more importantly, who bears the responsibility for retraining workers? Is it businesses, governments or the individuals themselves?
The latter question sparked intense and lively conversation during one of my session breakouts. The answer varied depending on where one resides. In the U.S., it’s quite possible that we would look to big business to lead the charge. But in China, society and the government would likely assume responsibility, while in Europe the approach might be a trifecta of business, government and individuals.
When considering what the future holds for AI, it’s important to pause and take stock of where we are right now. Interest in AI technologies continues to soar, but the tech is still in its infancy. As such, adoption remains extremely limited. Few CIOs have deployed a large enterprise-wide project or initiative, although a large number are piloting AI or plan to do so in the near future. Mistakes will be made. The good news is, we can learn from them.
Discussing data biases and “data slaves”
Another spirited and interesting AI discussion at WEF was around data bias. As we talked about AI and what it could do, we began to tackle the notion that the data that is input into AI environments or stems from AI can inherit the biases of its programmers or its users. If the majority of programmers are not representative of the broader population, we run the risk of perpetuating and amplifying the same human biases that plague society.
It’s a valid concern, and one that yields some of the most divergent opinions. One thing is clear: we need to educate children much earlier on how to read and interpret data and how to recognize the potential bias of code. For example, students today are taught to read literature from various perspectives. In the future, schools could teach children to view data and code through different lenses. Or, we could make diversity a priority for AI — diversity in people, data sources and problem selection would then be key components of AI success.
In another session, we tackled the concept of data ownership. If we give our personal data away and it is used in AI-based environments, are we then “data slaves” to those who control our data — a modern form of slavery? It is a worthy discussion, as data, especially our personal data, is in the control of a select few yet is increasingly integral to the most critical parts of our lives. Health care. Finances. Employment.
We are all becoming more knowledgeable, protective and selective about how our data is collected and managed. In the next AI wave, perhaps we will own our data and have the power to treat it as we would any other personal asset. We could sell, exchange or barter it, driving up the value equation for ourselves.
If it sounds far-fetched, remember that not too long ago we only imagined a world where interactive robots existed and algorithms allowed us to speak commands and make purchases from a bot in our bedrooms.