I recently read that somewhere between 10 and 50 percent of interactions with Siri and other smart assistants are abusive in nature. A peculiarly wide spread of percentages, but it got me thinking.
Are there ethical objections to using offensive language, to verbally abusing digital technology? One might say there are none. You can’t really offend technology, so it doesn’t matter. You could compare it to shouting at a wall: however loud you get, the wall won’t be impressed.
However, on second thought, it is a surprisingly complicated matter.
First of all, why would you want to shout at a smart assistant? There might be good reasons to do so. There are call center systems with voice recognition, and if you raise your voice, you can be transferred to an operator immediately, instead of having to go through an endless menu structure.
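The escalation idea is simple enough to sketch. The snippet below is a hypothetical illustration, not how any real call center system works: it assumes loudness is measured as the RMS amplitude of normalized audio samples, and the threshold value is invented for the example.

```python
# Hypothetical sketch: route a caller to a human operator once their
# voice level suggests frustration. The RMS measure and the threshold
# are assumptions made for illustration only.
import math

ESCALATION_RMS = 0.5  # assumed loudness threshold (normalized samples)

def rms(samples):
    """Root-mean-square amplitude of a chunk of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def route_call(audio_chunks):
    """Return 'operator' once any chunk exceeds the loudness
    threshold; otherwise keep the caller in the automated menu."""
    for chunk in audio_chunks:
        if rms(chunk) > ESCALATION_RMS:
            return "operator"
    return "menu"

# A calm chunk keeps you in the menu; a shouted one escalates.
calm = [0.1, -0.1, 0.05, -0.05]
shout = [0.9, -0.8, 0.85, -0.9]
print(route_call([calm]))         # menu
print(route_call([calm, shout]))  # operator
```

In practice such systems would look at far richer signals than raw volume (pitch, speech rate, keywords), but the routing logic follows the same shape: detect frustration, skip the menu.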
And many of us know the feeling of techno-anger when confronted with a badly designed system that keeps going around in circles. We feel as if we are running into a wall, facing a rigidity that would drive anyone insane.
Or maybe it is simply a fun thing to do. Finally something we can interact with and let ourselves go at, without any consequences. A good laugh! But does it really not matter?
I think it does say something about one’s inner civilization. As if you would only behave because others expect you to, or because misbehaving may have consequences. Isn’t good behavior self-evident, a reward in itself and part of building character?
Furthermore, your behavior could be offensive to other people who might witness it. And maybe you could start behaving badly toward others, because you got used to it in dealing with Siri. We can at least conclude that verbal abuse of technology doesn’t make you a better person.
Take the example of Microsoft’s Tay, a chatbot that learned from interactions on Twitter. A small group of trolls thought it was funny to feed it racist language, and Tay had to be taken offline within 24 hours because it started to repeat it.
How does verbal abuse work? Is it only bad behavior because the object of it is offended? That would mean that the attributes that make behavior good or bad lie entirely on the side of the receiver, and in the case of digital technology there would be no abuse. But if behavior is also judged by the intentions behind it and the general meaning of the bad language used, then there is such a thing as verbal abuse of technology.
It certainly does work the other way around: technology can offend people pretty quickly, even though technology has no intentions. For example, the face recognition software that classified people with dark skin as gorillas caused real consternation. And the man (an amateur chef and gardener) who ordered a specific scale and some fertilizer did not appreciate Amazon’s algorithms recommending the missing ingredients for a drug lab, inferred from all the big data.
This is not a case of artificial intelligence, but of artificial stupidity. We are at the beginning of a new technological era, dominated by machine learning and human/machine interactions. It will take a while before we all get it right.
Until then, we should not feel too insulted, and display some tolerance. And say “thank you” when Siri helps us. It would be becoming of us.
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
Hi Frank, at least now we have “something” to yell at! Yelling at Apple’s spinning wheel, at Windows, or even at Clippy, never resulted in anything. Siri and its brethren may not mind that you yell, but as you say, some systems are programmed to detect intense frustration and react accordingly.