
Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework

By Avivah Litan | January 21, 2021 | 3 Comments


Data privacy and security are viewed as a primary barrier to AI implementations, according to a recent Gartner survey (see Survey Analysis: Moving AI Projects From Prototype to Production). Yet few organizations face these issues head on. Risk management is generally an afterthought in AI projects, much as it is across IT.

AI operates as a "black box" in most organizations. Gaining clarity about AI models is the first step organizations must take to establish the context needed for risk management. AI risk management poses new operational requirements that are not well understood, and conventional controls do not sufficiently ensure AI's trustworthiness, security and reliability.

After extensive consultations with practitioners across industry, my colleagues Jeremy D'Hoinne, Anthony Mullen and I just published Top 5 Priorities for Managing AI Risk Within Gartner's MOST Framework.

Our research recommends organizations adopt Gartner's MOST framework (see Figure 1 below). First, they must form cross-functional teams with a vested interest in AI outcomes, such as those in legal, compliance, data and analytics, security and privacy, to work together to:

  1. Capture the extent of exposure by inventorying AI used in the organization and ensure the right level of explainability.
  2. Drive staff awareness across the organization by leading a formal AI risk education campaign.
  3. Eliminate exposures of internal and shared AI data by adopting data protection and privacy programs.
  4. Support model reliability, trustworthiness and security by incorporating risk management into model operations.
  5. Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience.
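The first priority, inventorying AI in use and tracking its risk posture, can be made concrete with a lightweight model registry. The sketch below is a hypothetical illustration, not a Gartner artifact; the field names and risk checks (explainability status, adversarial testing) are assumptions chosen to mirror priorities 1, 4 and 5.

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in an organization-wide AI model inventory (illustrative)."""
    name: str
    owner: str                          # accountable business unit
    use_case: str
    data_sources: list = field(default_factory=list)
    explainability: str = "none"        # e.g. "none", "post-hoc", "interpretable"
    adversarial_tested: bool = False    # has the model been tested against attacks?

    def risk_flags(self) -> list:
        """Return the open risk items for this model."""
        flags = []
        if self.explainability == "none":
            flags.append("no explainability mechanism")
        if not self.adversarial_tested:
            flags.append("no adversarial robustness testing")
        return flags

# Example inventory with a single (fictional) fraud-scoring model
inventory = [
    AIModelRecord(
        name="fraud-scoring",
        owner="Payments",
        use_case="transaction fraud detection",
        data_sources=["card transactions"],
        explainability="post-hoc",
        adversarial_tested=False,
    ),
]

for record in inventory:
    print(record.name, "->", record.risk_flags())
```

Even a simple register like this gives the cross-functional team a shared view of exposure: each flagged item maps back to one of the priorities above and can be assigned an owner and a remediation date.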

We are all left to wonder why more alarms didn't ring during a long, widespread Russian incursion against U.S. government agencies and enterprises. There are certainly plenty of enterprise security systems installed that use AI to help detect abnormal behavior by users, networks and endpoints. Perhaps if some of these AI risk management measures had been applied to existing AI security systems, these organizations would not have been sitting ducks.

Just because AI risk management is difficult doesn't mean it should be delayed. Cooperation and goal setting across business lines is a prerequisite for success, and that is always a difficult proposition given pre-existing mandates and limited bandwidth. The alternative, not moving forward, is an even more perilous risk.
