

Pitfalls of Algorithmic Decisions and How to Handle Them

by Jitendra Subramanyam  |  August 24, 2019

This post is by Veena Variyam, Director, Infrastructure & Operations Advisory and Research at Gartner.

Algorithmic Decisions (and Pitfalls) are Everywhere

Machine learning algorithms with access to large data sets now make myriad critical decisions traditionally made by humans, such as medical diagnoses, welfare eligibility determinations, and job recruitment. However, high-profile incidents of bias and discrimination perpetuated by such algorithms [1] are eroding people's confidence in these decisions. In response, policymakers and lawmakers are proposing [2] stringent regulations on complex algorithms to protect those affected by the decisions.

Anticipating and preparing for regulatory risk is a significant executive concern [3]; the more immediate need, however, is to address the public mistrust that motivates the widespread call for regulating algorithms. This growing lack of trust [4] can not only lead to harsher policies that impede innovation but can also cost organizations substantial revenue [5]. Four factors drive public distrust of algorithmic decisions.

  1. Amplification of Biases: Machine learning algorithms amplify biases, whether systemic or unintentional, that are present in the training data (the sketch after this list shows one simple way to quantify such bias).
  2. Opacity of Algorithms: Machine learning algorithms are black boxes for end users. This lack of transparency, whether intentional or intrinsic [6], heightens concerns about the basis on which decisions are made.
  3. Dehumanization of Processes: Machine learning algorithms increasingly require minimal-to-no human intervention to make decisions. The idea of autonomous machines making critical, life-changing decisions evokes highly polarized emotions.
  4. Accountability of Decisions: Most organizations struggle to report and justify the decisions their algorithms produce and fail to provide mitigation steps that address unfairness or other adverse outcomes. Consequently, end users are powerless to improve their probability of success in the future.
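
To make the first factor concrete, here is a minimal sketch of how a team might quantify bias in a model's decisions using the disparate-impact ratio, which compares favorable-outcome rates across groups. The data, group labels, and the 0.8 threshold convention below are illustrative assumptions, not a prescribed method.

    # Minimal sketch: quantifying bias with the disparate-impact ratio
    # (the "four-fifths rule" used in US employment-discrimination analysis).
    # All data below is hypothetical and for illustration only.
    from collections import defaultdict

    def selection_rates(decisions):
        """Favorable-outcome rate per group.

        `decisions` is a list of (group, outcome) pairs; outcome is 1 for a
        favorable decision (e.g., loan approved) and 0 otherwise.
        """
        favorable, total = defaultdict(int), defaultdict(int)
        for group, outcome in decisions:
            total[group] += 1
            favorable[group] += outcome
        return {g: favorable[g] / total[g] for g in total}

    def disparate_impact_ratio(decisions):
        """Lowest group selection rate divided by the highest.

        A ratio below 0.8 is a common red flag for adverse impact.
        """
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values())

    # Hypothetical model outputs: group A is approved twice as often as B.
    decisions = ([("A", 1)] * 60 + [("A", 0)] * 40 +
                 [("B", 1)] * 30 + [("B", 0)] * 70)
    print(selection_rates(decisions))         # {'A': 0.6, 'B': 0.3}
    print(disparate_impact_ratio(decisions))  # 0.5, well below 0.8

A check like this is deliberately simple: it flags disparity in outcomes but says nothing about why the disparity exists, which is where the interdisciplinary review described below comes in.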

What Chief Data and Analytics Officers Should Do

These are hard challenges that don’t have clear or easy solutions. Yet, organizations must act now to improve fairness, transparency, and accountability of their algorithms and get ahead of regulations. Here are three key areas to start with:

  • Increase awareness of AI: Educate business leaders, data scientists, employees, and customers about AI opportunities, limitations, and ethical concerns. Train employees to identify biases in data sets and models and encourage open discussions. Guide executives on when AI truly makes a difference and when traditional decision algorithms will do.
  • Create an ecosystem for self-regulation: Build interdisciplinary teams to review potential biases and ethical issues in algorithmic models. Institute multi-tier checks with human interventions [7] for algorithmic decisions (see the routing sketch after this list). Mandate review and certification from external entities for critical algorithms. Embed transparency in data models and give end users recourse to appeal the results of the algorithm.
  • Influence global regulations: Collaborate with government, private and public entities, think tanks, and industry associations to build policies that balance regulation with innovation.
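
The multi-tier checks mentioned in the second bullet can be thought of as a routing policy: routine, high-confidence decisions flow through automatically, while uncertain or high-impact ones are escalated to humans. The tiers, thresholds, and field names in this sketch are hypothetical assumptions, not a prescribed standard; real policies would be set by the interdisciplinary review team.

    # Minimal sketch of multi-tier human-in-the-loop routing for
    # algorithmic decisions. Tier names and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str        # e.g., "approve" / "deny"
        confidence: float   # model confidence in [0, 1]
        high_impact: bool   # life-changing decision (credit, hiring, health)?

    def route(decision: Decision) -> str:
        """Return which tier must sign off before the decision takes effect."""
        if decision.high_impact and decision.outcome == "deny":
            return "review-board"    # interdisciplinary human review
        if decision.confidence < 0.9:
            return "human-analyst"   # single-reviewer check
        return "auto-release"        # algorithm's decision stands, with logging

    print(route(Decision("deny", 0.95, high_impact=True)))     # review-board
    print(route(Decision("approve", 0.70, high_impact=False))) # human-analyst
    print(route(Decision("approve", 0.97, high_impact=False))) # auto-release

The design choice worth noting is that escalation is driven by the stakes of the decision as well as the model's confidence: a highly confident denial still deserves human eyes, because confidence is not the same as fairness.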

While regulations are necessary, organizations that proactively improve people’s confidence in algorithmic decisions can avoid revenue loss, shape fair regulations and policies, and future-proof AI investments against adverse regulatory impact.

_______________________________________

References

[1] "A Popular Algorithm Is No Better at Predicting Crimes Than Random People," The Atlantic, January 2018; "Amazon Faces Investor Pressure Over Facial Recognition," The New York Times, May 2019; "AI Perpetuating Human Bias in the Lending Space," Tech Times, April 2019

[2] "Algorithmic Accountability Act of 2019," 116th Congress (2019-2020), April 2019

[3] "Executive Perspectives on Top 10 Risks," Protiviti, February 2019

[4] B. Zhang, A. Dafoe, "Artificial Intelligence: American Attitudes and Trends"; T. H. Davenport, "Can We Solve AI's 'Trust Problem'?," MIT Sloan Management Review, November 2018

[5] "The Bottom Line on Trust," Accenture Competitive Agility, October 2018

[6] J. Burrell, "How the machine 'thinks': Understanding opacity in machine learning algorithms," Big Data & Society, 2016

[7] A. Etzioni, O. Etzioni, "Should Artificial Intelligence Be Regulated?," Issues in Science and Technology, Summer 2017

Other Key References

K. Hosanagar, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, Viking, March 2019

Digital Decisions, Center for Democracy & Technology (CDT)

A. Smith, "Public Attitudes Toward Computer Algorithms," Pew Research Center, November 2018

M. Goodman, Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, Anchor, April 2015


Category: data-and-analytics-leaders  data-and-analytics-strategies  security-and-risk-management-leaders  technology-innovation  

Tags: ai  algorithms  bias  

I work on practical case studies, templates, and tools that serve Data and Analytics teams worldwide.





