
Avoid the Uncanny Valley When Automating Customer Experience Interactions

By Augie Ray | September 01, 2016


I wrote some months ago that marketers trying to improve their brands’ customer experience can make the mistake of attempting to manufacture emotion rather than evoke it. Emotion is vitally important to building strong customer relationships, but the secret is to evoke positive emotions within your customers, not manufacture them at customers. Nowhere has this difference been more evident to me than in the way some brands stumble into the “uncanny valley” as they automate customer interactions.

If you are not familiar with the concept of the “uncanny valley,” it describes the way humans can experience discomfort and even revulsion at robots that appear almost, but not exactly, like real human beings. While this term is typically applied to robots, you can see this effect at work in the way brands automate their customer engagement. Research suggests that automation must balance humans’ awareness that they are interacting with a machine with the functional and emotional cues provided by that machine. Attempts to make robots too human can create small incongruities that result in outsized negative reactions.

As brands adopt more automation in their social media, bots, IVR systems, marketing programs, and customer care systems, they must be careful that the desire to seem more human doesn’t inadvertently cause negative, brand-damaging experiences. Just as a single incorrect line of code can cause an entire application to break, the smallest of missteps into the uncanny valley can damage customer relationships.

The danger to brands of the uncanny valley came to mind recently as I interacted with two brands’ automated systems. In each instance, the brand attempted to inject emotion into its automated interactions in a way that created a negative rather than a positive response.

A Virtual Trainer Tries to Bolster My Ego

An online training program “hosted” by an imaginary virtual trainer provided positive feedback to a quiz response, telling me, “I’m proud of you.” My reaction was profoundly negative for a number of reasons, not the least of which is that this pre-programmed, artificial being has no ability to feel anything, much less pride. The program designer stumbled into the uncanny valley, ascribing human emotion to a computer program. I know the system isn’t human; the instructional designer knew the system isn’t human; only the system seemed not to know this, and that felt creepy.

Another factor was that the level of praise was not appropriately matched to my action. The question presented to me was painfully obvious, and answering it correctly was no challenge. This level of effusive praise for such a simple behavior felt condescending, as if someone had told me how proud they were that I was able to tie my own shoes.

An IVR System Assumes Too Much Blame

I had a similar reaction when a health company’s automated IVR system couldn’t understand my voice input and kept repeating, “My mistake.” This provoked a negative rather than a positive emotional response because I felt more guilty than blameless about the communication issue. By the third time I failed to successfully interact with the system and it groveled about its “mistake,” I began to feel a sense of responsibility–my inability to communicate was causing this not-quite-human to display self-condemnation for what was obviously a mutual problem.

Having the system accept fault by repeating the same words in the same intonation didn’t make me feel better; it led me into the uncanny valley. The attempt to manufacture remorse and culpability, coming from an automated system, did not feel right. A simple “I couldn’t understand that; please try again” (or better yet, “let me get you to a customer representative”) would have evoked a better emotional response. In other words, just as with the “pride” expressed by the virtual trainer, an unemotional program coded with less emotional dialog would have encouraged a better emotional response in me.
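To make that concrete, here is a minimal sketch of the neutral retry-and-escalate approach described above. It is written in Python purely for illustration; the function names (recognize, prompt_caller, transfer_to_agent) are hypothetical stand-ins, since this post does not describe any actual IVR platform or API. The system states plainly that it did not understand, varies its wording, and hands the caller to a human after repeated failures:

```python
# Minimal sketch of a neutral retry-and-escalate IVR flow.
# All names here (recognize, prompt_caller, transfer_to_agent) are
# hypothetical stand-ins; a real IVR platform supplies its own equivalents.

RETRY_PROMPTS = [
    "I couldn't understand that. Please try again.",
    "Sorry, I still didn't catch that. Please say it one more time.",
]
MAX_ATTEMPTS = 3

def handle_voice_input(recognize, prompt_caller, transfer_to_agent):
    """Try speech recognition a few times, then hand off to a human."""
    for attempt in range(MAX_ATTEMPTS):
        result = recognize()  # returns parsed caller input, or None on failure
        if result is not None:
            return result
        if attempt < len(RETRY_PROMPTS):
            # Vary the wording rather than repeating the same phrase in the
            # same intonation, and state facts instead of manufacturing remorse.
            prompt_caller(RETRY_PROMPTS[attempt])
    # After repeated failures, stop retrying and get a human.
    prompt_caller("Let me get you to a customer representative.")
    return transfer_to_agent()
```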

Social Media’s Uncanny Valley

Of course, we’re all different–you might not have had the same reactions I did–but that’s what makes the uncanny valley so dangerous. Until such time as automated programs can do the subtle job (that humans do innately) of matching language and emotional response to each customer’s preferences, feedback, and needs, the one-size-fits-all approach to automating customer interactions will remain risky.

One place where this has been evident is in social media, where brands have struggled to scale their interactions with humans in a way that is economical, personal and helpful. For example, the New England Patriots tried to automate a social media marketing program and ended up tweeting a racial slur for which it had to apologize. Bank of America attempted to automate its social customer care acknowledgments and appeared less human for it when the company broadcast a tone-deaf tweet to an Occupy protest account. And some years ago, Progressive compounded a reputational crisis with an automated response intended to communicate sympathy but that made the insurer seem less sympathetic.

These are obvious examples of how the road to the uncanny valley is paved with careless use of automation, attempts to program too much human emotion into heartless computer programs, and a failure to vary responses as a human would.
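One programmatic takeaway from these stumbles is response variation. As an illustrative sketch only (plain Python, with hypothetical templates and keywords rather than any brand’s real rules), an automated acknowledgment system can draw from a pool of templates, avoid repeating itself, and flag sensitive conversations for a human instead of broadcasting one canned reply:

```python
import random

# Illustrative sketch only: vary automated acknowledgments and route
# sensitive messages to a human instead of auto-replying. The templates
# and keyword list are hypothetical examples, not any brand's real rules.

TEMPLATES = [
    "Thanks for reaching out. A team member will follow up shortly.",
    "We've received your message and someone will be with you soon.",
    "Got it. A member of our team will reply as soon as possible.",
]

# Topics that deserve a human response, not a bot acknowledgment.
SENSITIVE_KEYWORDS = {"death", "accident", "lawsuit", "protest", "injury"}

def acknowledge(message: str, last_reply: str | None = None) -> str | None:
    """Return a varied acknowledgment, or None to escalate to a human."""
    text = message.lower()
    if any(word in text for word in SENSITIVE_KEYWORDS):
        return None  # None signals: a human agent should respond
    # Never send the exact same reply twice in a row.
    choices = [t for t in TEMPLATES if t != last_reply]
    return random.choice(choices)
```

A simple gate like this might have routed the Occupy tweet to a person, and varied templates help avoid the robotic repetition that made Progressive’s automated responses seem less sympathetic.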

How to Avoid the Uncanny Valley

Understanding and avoiding the uncanny valley will become more important as brands adopt machine learning, artificial intelligence, and automation in an effort to provide cost-effective customer experiences. From Messenger bots to automated tweets to robot service providers in hospitality and retail, brands risk having the best of intentions go awry if they do not recognize and sidestep the causes of these missteps. To avoid inadvertent damage to your customer experience and relationships:

  • Be human–really human: Until the technology is so good that it pulls us out of the uncanny valley, be cautious with how much you attempt to “humanize” your brand through automation. The safest way to humanize your brand is with, well, humans. This isn’t to say that you cannot adopt more automation–brands have little choice but to do so–but that you must…
  • Respect the customer’s needs: Do not try to overcome the challenges of automation with emotive scripts; instead, ensure your automation provides the best possible experience to match customer needs and expectations. You love your banking app, ride-hailing service or cloud storage provider because it works, not because it compliments your taste or gives you a virtual hug every time you open the app. Great, user-centered functionality evokes strong positive emotion more reliably than effusive scripts and computerized responses do. Let your automation shine by…
  • Letting machines be machines: Match customers’ awareness and expectations of an automated engagement with the audio, visual and interaction cues provided by the system. No one complains about the convenience of ATMs compared to that of human tellers at branches–banks have succeeded with soulless boxes, not cyborgs that smile, chat up customers or count cash with robotic human hands. ATMs may not be particularly “human,” but the alignment between customer expectations and ATM functionality allows those ugly boxes to foster positive feelings of brand loyalty based on their location, reliability, ease, and functionality.

Automation is unavoidable, but as your brand embraces more programmed, algorithmic customer experiences, don’t make the mistake of confusing acting human with being human. Lead your customers to feel something positive rather than leading them into the uncanny valley.


6 Comments

  • Nice post. Did you write it or was it your chatbot?

    We see the same in sales. The tweets most likely to get a discussion going are those that are personal and carry emotion.

    Of course, at some point a sales rep has to demonstrate expertise and talk about his or her offer. But to build relationships, bots won’t cut it, and neither will humans following a script.

    The technology can surface the “tweets” that are important to the author, like a picture of the dog, an achievement, participation in an event, involvement in a charity… but at the end of the day, building a relationship takes a human to jump in.
    Best

    • Augie Ray says:

      Thanks, Dominique. I wish I had a chatbot who could write for me–it’d sure make my job easier!

      On the social media front, there are plenty of ways automation can help. As you point out, it can help to listen for and evaluate tweets. Automated acknowledgments that set expectations for a human response make sense. And helping human admins by auto-selecting recommended text for responses can increase efficiency.

      But we’re still years away from automation being able to respond on its own, and even then, I think the risks of the uncanny valley are greater than we realize as we watch Watson perform impressive feats. For all the remarkable achievements of IBM’s and other A.I. platforms, it’s still easier to beat the world’s greatest Go player than to hold a simple human conversation.

  • Robin Solis says:

    …hold a simple human conversation. Bravo.

  • Another example of how AI can be derailed is Microsoft’s chatbot Tay, which tried to learn to act human on Twitter and was swiftly corrupted by other users. Therefore, as you say, it is better to focus on where automation adds value behind the scenes, freeing up agent time to handle the human parts of CX. There’s more in this Eptica blog at http://www.eptica.com/blog/technology-human-touch

  • This is a very insightful piece. As a digital marketer, I am constantly looking for ways to automate tasks to save time. This includes tasks handled in other departments like customer service. I have to be very careful with the projects I evaluate, because it’s more important to have a great user experience than to save company time, whether done by a human or a machine. Thanks!