
AI Security: The Dark Side of AI and How to Make it Trustworthy

By Avivah Litan | August 18, 2020


Repressive regimes are working hard to improve AI applications that control and micromanage populations and suppress dissent. If these Orwellian efforts succeed, protests such as those in Belarus will become a relic of the past: would-be protesters will be squashed well before they ever reach the streets.

Less dramatic but still serious threats face AI in the enterprise. Indeed, according to a recent Gartner survey (see Survey Analysis: Moving AI Projects from Prototype to Production), security and privacy concerns are the top barriers to enterprise AI adoption — and for good reason. Both benign and malicious actors threaten the performance, fairness, security and privacy of AI models and data.

We have just initiated coverage of AI security, adding to Gartner’s extensive coverage of all aspects of AI in the enterprise. This research is not about using AI to secure the enterprise; it is about using protective measures to secure AI models and data. Our first research deliverable, AI Security: How to Make AI Trustworthy, outlines specific threats against AI, as well as the security measures that must be taken to mitigate them.

Threats against AI are not new, but enterprise users have insufficiently addressed them. Malicious hackers have attacked AI systems for as long as AI has existed. The same can be said for benign actors who introduce mistakes and biases that undermine model performance and fairness.

Organizations are largely on their own

Business trust is key to enterprise AI success. Nonetheless, security and privacy standards that effectively protect organizations are still being developed, and the current machine learning and AI platform market has yet to produce consistent — let alone comprehensive — tooling.

This means most organizations are left largely on their own to defend against these threats.

Act Now: Solutions

The same security solutions that mitigate damage from malicious hackers also mitigate damage caused by benign users and processes that introduce mistakes and erroneous data into AI environments. They likewise protect sensitive data from being compromised or stolen, and guard against model bias caused by biased training data.
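As one concrete illustration of keeping erroneous data out of an AI environment, the sketch below gates training records with simple schema and range checks before they ever reach a model. The function name, fields and thresholds are illustrative assumptions for this post, not controls drawn from the Gartner research.

```python
# Hypothetical sketch: a minimal pre-training data gate.
# Field names and bounds below are illustrative assumptions.

def validate_training_rows(rows, expected_fields, numeric_bounds):
    """Split rows into (accepted, rejected) using basic integrity checks.

    rows            -- list of dicts, one per training example
    expected_fields -- set of field names every row must contain
    numeric_bounds  -- {field: (lo, hi)} plausible value ranges
    """
    accepted, rejected = [], []
    for row in rows:
        ok = expected_fields.issubset(row)  # reject rows with missing fields
        if ok:
            for field, (lo, hi) in numeric_bounds.items():
                value = row.get(field)
                # reject non-numeric or implausible (out-of-range) values
                if not isinstance(value, (int, float)) or not lo <= value <= hi:
                    ok = False
                    break
        (accepted if ok else rejected).append(row)
    return accepted, rejected


rows = [
    {"age": 35, "income": 50000},   # plausible
    {"age": -5, "income": 50000},   # implausible age
    {"income": 1},                  # missing field
]
good, bad = validate_training_rows(rows, {"age", "income"}, {"age": (0, 120)})
```

A gate like this catches only the crudest data errors; in practice it would sit alongside provenance tracking, access controls and anomaly detection on the training pipeline.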

Retrofitting security into any system is far more costly than building it in from the outset, and AI systems are no exception: security and integrity controls built in from the start cost much less than those bolted on after a breach or compromise. Don’t wait until the inevitable breach, compromise or mistake damages your company’s business, reputation or performance.

Secure your AI today.


The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
