Blog post

AI Validator – A Rising Role for Managing AI Risk in the Enterprise

By Svetlana Sicular | September 22, 2020 | 1 Comment


AI is a newcomer in the enterprise. The current struggle in the AI space is to meet enterprise expectations by inventing AI equivalents of the software lifecycle – DevOps and quality assurance (QA). MLOps is emerging as an equivalent of DevOps (see Use 3 MLOps Organizational Practices to Successfully Deliver Machine Learning Results). I spelled out “quality assurance” because – believe it or not! – some Ph.D.s in ML have no idea what QA stands for.

A role of AI Model Validator is emerging as an equivalent of the QA engineer. It originated in financial services in 2011 (see regulation SR 11-7) and is now expanding to many verticals – the public sector, life sciences, high-tech manufacturing, etc. The role is also gaining much higher visibility in financial services, where it is being refined to accommodate an avalanche of new AI approaches and tasks.

An Emerging Role: AI Model Validator

AI validators are data scientists too, but they are independent from the data scientists who develop models. If model building takes on average 6-12 months, validators have 6-8 weeks to complete their work. According to Agus Sudjianto, EVP and head of corporate model risk at Wells Fargo, “The most important skills are fundamental mathematical skills, coding skills and communication skills — and getting the optimal combination can be a challenge.” Managing model risk and model bias is a key capability of Wells Fargo, and the company is recognized for it by the regulators (see the Gartner research Advanced Data and Analytics: What Do Leading Organizations Do?).

The job of AI validators is to review and challenge models – including model assumptions, methodologies, data and calculations – at all phases of the AI life cycle. They test model design, identify model weaknesses, assess and quantify model limitations, ensure ongoing monitoring and evaluate model controls. AI validators’ job is to break a model, find adversarial examples and ensure that they understand what the model does. The latter elevates explainability and helps not just with regulations, but also with data scientist attrition: if a validator understands a model, it is not a black box, and someone else can continue the job if the model’s creator is gone.
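To make the “break a model” task concrete, here is a minimal, hypothetical sketch of one validation probe: searching for nearby inputs that flip a model’s prediction. The dataset, model and perturbation budget are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical validator probe: random-perturbation search for adversarial
# examples. All choices (model, data, epsilon) are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def find_adversarial(model, x, epsilon=0.5, tries=100):
    """Search for an input near x that flips the model's prediction."""
    original = model.predict(x.reshape(1, -1))[0]
    for _ in range(tries):
        perturbed = x + rng.uniform(-epsilon, epsilon, size=x.shape)
        if model.predict(perturbed.reshape(1, -1))[0] != original:
            return perturbed  # prediction flipped: a candidate weakness
    return None  # stable under this perturbation budget

# How many of the first 50 training points are fragile under small noise?
flips = sum(find_adversarial(model, x) is not None for x in X[:50])
print(f"{flips}/50 points flipped under small perturbations")
```

A real validation function would use far more rigorous techniques, but even a crude probe like this quantifies one model limitation – sensitivity to small input changes – in a way that can be monitored over time.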

If, in addition to data science skills, you have curiosity, diligence and a healthy skepticism, perhaps this is your next career step? Or perhaps your next career step is creating a highly visible, enterprise-wide model validation function in your organization?

 

Follow Svetlana on Twitter @Sve_Sic


1 Comment

  • I couldn’t agree more with your point – especially with regard to the fact that AI validators have an impact throughout all phases of the AI lifecycle. In a way, this very much confirms what we are seeing with the virtuous cycle of insights gathered from monitoring models in production to better (re)train, validate, and debug/troubleshoot. Overall, AI practitioners are moving towards a more structured approach to AI operations: one that includes new positions and better processes for healthier models.