My Gartner colleagues Farhan Choudhary, Jeremy D'Hoinne and I just launched coverage of the AI Trust, Risk & Security Management (AI TRiSM) market. AI TRiSM comprises multiple software segments that ensure AI model governance, trustworthiness, fairness, reliability, efficacy, security and data protection. See Market Guide for AI Trust, Risk & Security Management.
As users begin operationalizing their AI models, their attention quickly turns to managing model risk and ensuring their AI can be trusted to perform as designed. In fact, our latest Gartner enterprise AI survey found that 'security or privacy concerns' are the number one barrier to AI model implementation, so it's no wonder that's where user attention turns once models are put into production. See Survey Analysis: Moving AI Projects From Prototype to Production.
Today the AI TRiSM market is fragmented, and users must select solutions from multiple categories to manage the many inherent risks and issues. Most AI vendors and platforms do not provide all the required functionality, but over time we believe they will acquire these capabilities from existing niche vendors, as noted in Figure 1 below.
Third-Party AI Models
Further, today users are mainly concerned with managing the risks of their own homegrown models and have yet to apply TRiSM to third-party products that use AI, such as advanced cybersecurity products for endpoint and network protection. We believe these third-party models will increasingly be managed by end-user organizations, so that users understand the AI functions they have bought into and can ensure the models operate as expected.
Build TRiSM In, Not as an Afterthought
Security and risk concerns are almost always an afterthought in system development and deployment. When it comes to AI, this is an especially poor design choice, since there are so many moving parts and so many of them are opaque to most users.
There’s no need to rely on a black box running critical functions for your enterprise. There are in fact many solutions that bring transparency and trust, keep the bad guys out, prevent benign mistakes, protect sensitive data, and keep AI models functioning as intended. These solutions just need to be used.
The good news is that the same solutions that manage risk and security also help optimize AI model performance. Hopefully that will prompt users to deploy AI TRiSM, and once they do, the inherent risks will become readily apparent.
After all, if you aren’t looking, you won’t see. But frankly, this is the kind of elephant in the room that you are much better off staring down.