Gartner Blog Network


AI Security: The Dark Side of AI and How to Make it Trustworthy

by Avivah Litan | August 18, 2020

Repressive regimes are working hard to improve AI applications that control and micromanage populations and suppress dissent. If these Orwellian efforts succeed, protests such as those in Belarus will become a relic of the past. Would-be protesters with rebellious intent will be quashed well before they ever reach the streets.

Less dramatic but still serious threats exist for AI in the enterprise. Indeed, according to a recent Gartner survey (see Survey Analysis: Moving AI Projects from Prototype to Production), security and privacy concerns are the top barriers to enterprise AI adoption, and for good reason: both benign and malicious actors threaten the performance, fairness, security and privacy of AI models and data.

We just initiated coverage of AI Security, adding to Gartner’s extensive coverage of all aspects of AI in the enterprise. This research is not about using AI to secure the enterprise; it is about the protective measures that secure AI models and data. Our first research deliverable, AI Security: How to make AI Trustworthy, outlines specific threats against AI as well as the security measures required to mitigate them.

Threats against artificial intelligence (AI) are not new, but enterprise users have not addressed them sufficiently. Malicious hackers have attacked AI systems for as long as AI has existed. The same can be said for benign actors who introduce mistakes and biases that undermine model performance and fairness.

Organizations are largely on their own

Business trust is key to enterprise AI success. Yet security and privacy standards that effectively protect organizations are still being developed, and the current machine learning and AI platform market has not yet produced consistent tooling, let alone a comprehensive set of tools.

This means most organizations are left largely on their own when it comes to defending against these threats.

Act Now: Solutions

The same security solutions that mitigate damage from malicious hackers also mitigate damage caused by benign users and processes that introduce mistakes and erroneous data into AI environments. These solutions also protect sensitive data from being compromised or stolen, and guard against model bias introduced through biased training data.

Retrofitting security into any system is far more costly than building it in from the outset, and AI is no exception: security and integrity controls built into AI systems from the start cost much less than those retrofitted after a breach or compromise. Don’t wait until the inevitable breach, compromise or mistake damages or undermines your company’s business, reputation or performance.

Secure your AI today.




Avivah Litan
VP Distinguished Analyst
19 years at Gartner
34 years IT industry

Avivah Litan is a Vice President and Distinguished Analyst in Gartner Research. Ms. Litan's areas of expertise include endpoint security, security analytics for cybersecurity and fraud, user and entity behavioral analytics, and insider threat detection. Read Full Bio


