The documentary film Coded Bias examines a number of moral issues that feel especially current, with particular attention to facial recognition, a technology that is problematic, emotionally charged, and visually compelling. In Gartner's AI RC, about six of us watched the movie and then discussed it.
Is AI really different from previous technologies?
In my opinion, most of the challenges that arise from AI already arose from prior generations of computing, including even pre-digital ones. But I admit that AI scales everything: personalization, surveillance, and creepiness, to name a few examples. In fact, I was interested to see the spread among the analysts who attended the meeting on the question of whether AI is more challenging than prior generations of technology.
As you can see in the graphic above, my colleagues had strong opinions. Most feel AI does present more moral challenges; a few feel it is no worse than prior generations; not many equivocated.
What the Coded Bias movie is good for
I do recommend watching the movie as a way of engaging with how laypersons experience AI. For example, to an analyst, a false positive or a false negative is an intriguing invitation to calibration. But to a person whom AI has told the police to investigate, it means that AI is wrong, and not just "wrong" but unacceptably wrong.
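That gap between the analyst's view and the citizen's view has a statistical root: when the people being searched for are rare, even a seemingly accurate system produces mostly false alerts. A back-of-envelope sketch, using entirely hypothetical numbers rather than any figures from the film, makes the point:

```python
# Hypothetical numbers for illustration only: a face-matching system
# with a 99% true-positive rate and a 0.1% false-positive rate,
# scanning a crowd of 10,000 people that contains 1 watchlisted person.
true_positive_rate = 0.99
false_positive_rate = 0.001
crowd_size = 10_000
watchlisted_in_crowd = 1

# Expected alerts from each source.
expected_true_alerts = watchlisted_in_crowd * true_positive_rate
expected_false_alerts = (crowd_size - watchlisted_in_crowd) * false_positive_rate

# Probability that any given alert actually points at the right person.
precision = expected_true_alerts / (expected_true_alerts + expected_false_alerts)

print(f"Expected false alerts: {expected_false_alerts:.1f}")   # about 10
print(f"Chance an alert is correct: {precision:.0%}")          # about 9%
```

With these made-up but plausible-looking rates, roughly nine out of ten police stops prompted by the system would target the wrong person, which is exactly the experience the film's subjects describe.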
Further, the fact that your grocery store has long known you better than your spouse does (does your spouse know about that candy bar you buy and eat in the car?) doesn't impress a citizen who now looks up and sees cameras everywhere. The film deftly captures the feeling of gazes always on you, gazes backed by narrow but disquieting "intelligence."
I'm trying to understand more about people's experience of AI, both to get a sense of what they think it could do for them and of what it could do to them. Consuming documentaries and fiction that feature AI is a good way to achieve that.