Gartner Blog Network


AI Ethicists: Moral Grounding or Public Relations Trick?

by Craig Roth  |  June 5, 2019

A March 1, 2019 Wall St. Journal article declared that the “Need for AI Ethicists Becomes Clearer as Companies Admit Tech’s Flaws”. I’m all for ethics being applied to an uncharted technological domain that could have tremendous consequences. But what’s being described sounds more like “AI business risk mitigation” than “AI ethics” to me.

The start of the article points out the difference (carriage return and italics added for clarity):

The call for artificial intelligence ethics specialists is growing louder as technology leaders publicly acknowledge that their products may be flawed and harmful to employment, privacy and human rights.

Software giants Microsoft Corp. and Salesforce.com Inc. have already hired ethicists to vet data-sorting AI algorithms for racial bias, gender bias and other unintended consequences that could result in a public relations fiasco or a legal headache.

So the public call for AI ethics is growing louder since AI may be violating human rights. And the response is to uncover areas where AI can cause PR or legal problems. I sense a disconnect.

I’m glad companies recognize that doing the right thing can have a positive impact on the bottom line. This is a beneficial feature of capitalism in a society with the right to protest and freedom of the press. When buyers and the public care about human values, they can hold their suppliers to account.

But that still falls a bit short of ethical standards. Ethics is about good and bad, right and wrong – and the tricky work of debating what those terms mean. The set of issues that will cause PR, legal, or recruiting hassles (millennials are said to care about their employers’ ethical behavior) doesn’t entirely overlap with what is good or bad for society.

Instead of “ethics”, the model that fits this behavior more closely is “risk assessment”. Risk assessment weighs the potential costs against the potential benefits of a business activity and passes that analysis on to business decision makers. Indeed, Gartner has predicted that by 2023, over 75% of large organizations will hire AI behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk.

How do ethics and risk assessment differ? Take these examples:

  1. An AI risk assessment could show that a morally compromised AI model is very unlikely to be unmasked and is the foundation of an entire division’s profits (the morality of keeping everyone employed!), and is therefore worth continuing.
  2. An AI activity could be morally sound but easy for the public to misunderstand or for competitors to paint as evil, so a morality-free risk assessment may show that the potential damage to reputation exceeds the revenue of the product.
  3. A use of AI whose negative impacts are so far away or difficult to grasp that the public is unlikely to protest. For example, AI applied to “dark UX” (user interfaces designed to be addictive or to trick the user) is unlikely to create a public upswell that could harm a vendor’s reputation. But most of the public may consider it wrong.

These are cases that would have opposite recommendations when presented to an “AI ethicist” versus an “AI risk analyst”.
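
To make the divergence concrete, here is a deliberately toy sketch in Python of the two decision rules side by side. The function names and the numbers are invented for illustration (they come from neither the WSJ article nor any Gartner model); the point is simply that a risk analyst discounts harm by the probability of exposure, while an ethicist does not:

```python
# Illustrative only: invented names and numbers, not a real Gartner or vendor model.

def risk_assessment(annual_benefit, exposure_probability, damage_if_exposed):
    """Morality-free lens: proceed if expected benefit exceeds expected cost."""
    expected_cost = exposure_probability * damage_if_exposed
    return "proceed" if annual_benefit > expected_cost else "stop"

def ethics_review(morally_acceptable):
    """Ethics lens: the expected payoff is irrelevant if the practice is wrong."""
    return "proceed" if morally_acceptable else "stop"

# Example 1: a morally compromised model that is unlikely to be unmasked.
print(risk_assessment(annual_benefit=50e6, exposure_probability=0.01,
                      damage_if_exposed=100e6))  # proceed
print(ethics_review(morally_acceptable=False))   # stop

# Example 2: a morally sound product that is easy to paint as evil.
print(risk_assessment(annual_benefit=2e6, exposure_probability=0.5,
                      damage_if_exposed=20e6))   # stop
print(ethics_review(morally_acceptable=True))    # proceed
```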

Many vendors have been realistic about these positions, with job titles such as “Head of Investigations and Machine Learning, Trust and Safety” or “Compliance Analyst, Trust & Safety”. And the article quotes Microsoft’s 2018 annual report as stating “If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

I applaud that transparency, as long as everyone understands that the loud cry for AI ethicists is really a call to have someone inside the AI developers’ organizations acting as an angel on the shoulder, not just a bean counter nearby. To the extent that the risk of reputational harm guides good behavior, it is worth heeding. But with so much of the future of work and society (not just the vendor) at stake, there should be room for a voice of reason unbound by considerations of the visibility of bad outcomes.


Category: ai  

Craig Roth
Research VP, Tech and Service Providers
7 years at Gartner
28 years IT industry

Craig Roth is a Research Vice President focused on cloud office suites, collaboration tools, content management, and how they are being impacted by digital workplace and digital business trends...




