AI shouldn't become Ignorance by Design as regards customer processes
by Michael Maoz | April 6, 2017
Sometimes you have to go out on a limb during the research process and write from your gut. Watching AI unfold for customer service and customer support has been a little like watching a three-year-old spinning around after their third sugar-frosted cupcake.
The basic premise of the piece (available to Gartner clients as Artificial Intelligence Requires IT Leadership to Use Genuine Empathy) is that every business application is inherently skewed by the biases of its creators. AI is no exception. The upshot is that application leaders working on AI for customer-centric processes will need to treat empathy as a guiding principle that governs the design and deployment of AI in customer-facing systems.
That might sound like a simple feat, but the distributed nature of customer engagement means that IT organizations working on CRM projects may be unaware of many other IT projects where AI and chatbots are deployed. IT is not one integrated group, and the nature of cloud computing and the fast pace of business change are accelerating the challenge. At the current pace of change, and with existing organizational structures in place, AI projects (and the use of algorithms in general) are likely to set customer experience projects back rather than advance them.
Organizations will need to test their assumptions about the 'neutrality' of AI and bots. Do your customer processes treat a customer who is poor, who is not technology savvy, or who never interacts with the AI the same way they treat a customer who engages frequently with AI and bots? Or do you supplement the information about the customer who does not engage with AI, so that you can show the same empathy and extend the same offers and levels of service?
While all applications exhibit some form of bias, the sophistication of the algorithm designers in filtering out bias toward (or away from) certain groups or customer segments will determine whether the customer feels well or poorly served. Said another way, the more impartial the algorithm, the more evenly satisfaction will spread across all affected customer segments.
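To make the neutrality test above concrete, here is a minimal sketch of how a team might audit outcomes across customer segments. The segment names, the offer-rate metric, and the 0.9 parity threshold are all illustrative assumptions, not anything prescribed by the research note; a real audit would use the organization's own segments and service metrics.

```python
# Hypothetical sketch: check whether service outcomes (here, "received an
# offer") diverge between customer segments, e.g. customers who engage
# with the bot vs. those who do not. All names and thresholds are
# illustrative assumptions.

def segment_offer_rates(records):
    """Compute the share of customers in each segment who received an offer."""
    totals, offers = {}, {}
    for segment, got_offer in records:
        totals[segment] = totals.get(segment, 0) + 1
        if got_offer:
            offers[segment] = offers.get(segment, 0) + 1
    return {seg: offers.get(seg, 0) / n for seg, n in totals.items()}

def flag_disparity(rates, min_ratio=0.9):
    """Flag segment pairs whose offer rates fall below a parity ratio."""
    flagged = []
    segments = sorted(rates)
    for i, a in enumerate(segments):
        for b in segments[i + 1:]:
            low, high = sorted((rates[a], rates[b]))
            if high > 0 and low / high < min_ratio:
                flagged.append((a, b))
    return flagged

# Illustrative data: 80% of bot users got an offer vs. 50% of non-users.
records = ([("ai_user", True)] * 80 + [("ai_user", False)] * 20
           + [("non_ai_user", True)] * 50 + [("non_ai_user", False)] * 50)
rates = segment_offer_rates(records)
print(rates)                  # {'ai_user': 0.8, 'non_ai_user': 0.5}
print(flag_disparity(rates))  # [('ai_user', 'non_ai_user')]
```

The point of a check like this is not the arithmetic but the discipline: it forces the question from the paragraph above into a number the CRM and AI teams can monitor over time.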
Most businesses will rely on software companies or external consultants. They should therefore create a means to monitor and assess what those external suppliers and partners are doing in AI design, and what biases they bring (or do not bring) to their work on AI and algorithms. In essence, the issue of ethics and empathy in AI may be about more than education and awareness; it is equally about talent sourcing and procurement.
So: do you think our concerns are overwrought or off base, or do you see that the AI revolution (the 60-year-old revolution...) could have a bit of the HAL 9000 dilemma baked in? Let us know.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.