I was reading a fascinating new paper by Richard Thaler titled “From Cashews to Nudges: The Evolution of Behavioral Economics” (and slide show) when it dawned on me that, in dialog with clients (and some analysts), there could be a misunderstanding about the differences between AI, machine learning and decision making. The article is not about decision making per se, but in exploring ideas related to choice and behavior, Mr. Thaler does bump into the process by which people take decisions. It was this that got me thinking: what exactly is a decision?
AI and machine learning are hot right now. Check out our Key Initiative Primer for AI. These technologies are appealing since they offer the promise of great benefit, and those benefits span a spectrum between two extremes. At one extreme is a (significant?) increase in the automation of tasks that would otherwise have been undertaken by people; at the other is the discovery of new patterns in data that help solve intractable problems or expose previously unseen opportunities. This is where we might do well to consider what a decision means in the context of these two extreme benefits.
For example, the benefit of automation, very frequently talked about in the press, rests on some rather obvious but important points. Not all work can be automated; if we can separate complex, cognitive work from work that is less cognitive or complex, then we might be able to automate the latter. These are not new points to you, since automation is not new. What is new is that AI is able to re-draw the line – what was thought of as too complex and not routine enough can now be automated with AI. AI can (so the promise goes) cope with more complex and more cognitive work than previous technologies could.
This new reality will only survive the light of day if the outcomes of the automated work left to AI make sense. If the new-fangled black box takes decisions and changes outcomes that humans don’t understand, those humans will likely turn off the box. So understanding the decision, at least to some degree, is very important.
However, understanding or interpreting a decision is quite different from understanding how the algorithm works. A human should be able to grasp the principles of inputs, choices, weights, and results, even if an algorithm combines so many of them that we cannot fully trace the process. If the gap between the outcome and the approximate inputs is too wide, trust in the algorithm will likely fail – that’s just human nature. What about AI nature?
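To make that distinction concrete, here is a minimal sketch in Python of a decision whose inputs, weights and result a human can inspect, even without knowing how the weights were arrived at. The loan-approval scenario, feature names, weights and threshold are all invented purely for illustration:

```python
# A minimal sketch (not any real product's logic): a decision whose inputs,
# weights and threshold are all visible, so a human can trace why a
# particular outcome was reached. Feature names and weights are hypothetical.

def approve_loan(applicant: dict) -> tuple:
    """Return the decision and the per-input contributions behind it."""
    weights = {
        "income_ratio": 0.5,      # income relative to requested amount
        "years_employed": 0.3,
        "prior_defaults": -0.8,
    }
    threshold = 0.6

    contributions = {name: weights[name] * applicant[name] for name in weights}
    score = sum(contributions.values())
    return score >= threshold, contributions

decision, why = approve_loan(
    {"income_ratio": 1.2, "years_employed": 0.5, "prior_defaults": 0.0}
)
print(decision, why)  # the "why" is what an opaque model often cannot hand back
```

The point is not the arithmetic; it is that the “why” behind the outcome can be handed back to a person – which is exactly where trust in a black box tends to break down.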
There is of course a corollary here – there are probably just as many decisions that cannot be explained or interpreted by humans. This makes things doubly complex. Do we just trust the AI if we have no hope of explaining the outcome? I think we might improve our chances here if we focus first on the decisions we can explain, and then build out from there with trusted AI-helpers.
At the other extreme of the benefit scale we see pattern discovery and advances in medicine, science and other fields. Here is the Hail Mary: an unexplained but rich discovery of an opportunity humans are simply unable to perceive. Here we literally seek opportunities that we can’t understand. But that’s not the point.
At this end of the benefits scale we are not asking AI to take a decision. We are asking, or employing, AI and ML to discover insights. We are not performing automation. Once the pattern has been discovered, humans will then decide how to employ those insights. We are not seeking to let the algorithm find the insight and then automate its application directly. That would be conflating two use cases into one.
Therefore we still need to understand a decision for both examples:
- For use cases related to automation, the decision itself is directly in focus, so plausibility, logic or predictable results will help increase confidence in the use of AI (see the sketch after this list)
- For use cases related to discovery, the decision itself is not in scope and is implied only as an external or environmental event. We may employ the AI-discovered insight, or we may not.
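The contrast might look like this in deliberately over-simplified code. Everything here is hypothetical – the fraud score, the pattern miner and the toy data are stand-ins – but it shows where the decision sits in each case:

```python
# A minimal sketch contrasting the two use cases. All functions and data
# are hypothetical stand-ins; the point is where the decision sits.

def automation_use_case(transaction, fraud_model):
    """Automation: the decision itself is in scope. The model's output
    directly triggers an action, so its result must be plausible to a human."""
    if fraud_model(transaction) > 0.9:   # assumed risk score in [0, 1]
        return "block_payment"           # the automated decision
    return "allow_payment"

def discovery_use_case(historical_data, pattern_miner):
    """Discovery: the decision is out of scope. The model only surfaces
    a pattern; a human decides later whether and how to act on it."""
    insights = pattern_miner(historical_data)
    return insights                      # handed to an analyst, not acted on

# Toy stand-ins for the models, purely for illustration
action = automation_use_case({"amount": 9500}, lambda t: 0.95)
findings = discovery_use_case([1, 2, 2, 3, 50], lambda d: {"largest_outlier": max(d)})
print(action, findings)
```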
It is as if we are comparing apples to oranges when we talk of AI (and related machine learning techniques) and decisions. Not until a new use case for AI emerges at the intersection of the two extremes will we have the opportunity to jettison all grounds for responsibility in decision making. Maybe we won’t jettison those responsibilities so quickly. My colleague Erick Brethenoux wonders if this intersection point is where user and machine come together: humans to discover how to apply decisions and measure their impact, and AI to automate part of the decision process. Time enough for that blog to be written.
Of interest: The Chess Master and the Machine – The Truth behind Kasparov versus Deep Blue.