
How can you trust ChatGPT, or any other model for that matter? AI TRiSM

By Avivah Litan | January 18, 2023 | 0 Comments

AI Trust, Risk and Security Management (AI TRiSM) is achieved by applying cross-disciplinary practices and methodologies to AI models and their supporting data, and by adopting tool sets that support these practices.

We just published an update to our Market Guide for AI Trust, Risk and Security Management, which lays out a framework for managing model trust, risk and security, and lists sample vendors in the niche software categories that support that framework.

AI TRiSM methods and tools work with any model, ranging from open-source LLMs like ChatGPT to homegrown enterprise models that use a variety of AI techniques. Of course, with open-source models there are some differences – for example, with regard to protecting enterprise training data on the shared infrastructure used to update a model for enterprise use cases.

Additionally, enterprises have no discrete ability to govern open-source models using the ModelOps tools we write about in the Market Guide.

But explainability, model monitoring, and AI application security tools can all be used on any open-source or proprietary model to achieve the trustworthiness and reliability enterprise users need. In fact, they should be used on third-party models and products that embed AI, to keep solution providers honest and ensure products perform as advertised.

The AI TRiSM market is still new and fragmented, and most enterprises don’t apply TRiSM methodologies and tools until models are deployed. That’s shortsighted: building trustworthiness into models from the outset – during the design and development phase – leads to better model performance.

As AI models proliferate, we expect AI TRiSM methods and tools to be more widely adopted by the enterprise teams involved in AI. Here’s the roadmap we see for this market.

[Figure: AI TRiSM market roadmap]
Producing this Market Guide was a real team effort across Gartner that tapped into many pockets of expertise involved in managing AI model trust, risk and security. We are lucky to have so many AI experts working at Gartner, many of whom contributed to this research.

A big shout-out goes to co-authors Sumit Agarwal, Jeremy D’Hoinne and Bart Willemsen.

Next time you wonder whether you can trust ChatGPT or any other model, find a team that can implement AI TRiSM practices and tools to answer that critical question. There’s no reason to operate in the dark.

The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
