
Coronavirus upended marketing’s predictive models. It also raised a harder question: how do you rebuild your colleagues’ trust in them?
Because a predictive model is only as good as the data it is trained on, when the world changes, the model’s accuracy changes with it. As the model’s relevance and accuracy decline, the risk of making wrong decisions increases. Some of those decisions may make a brand’s triggered communications appear ‘tone deaf’; others may degrade the relevance of recommendations and personalization across the customer journey.
Doubt about missteps driven by changed or now-inaccurate data has ramifications: it can erode trust in modeling as a concept and cast doubt on its ability to make actionable predictions. To maintain momentum in spite of uncertainty, marketers need to take deliberate action to preserve trust. Preserving trust is not just about keeping models running through chaotic times; it also underpins readiness to reinstate paused models once the uncertainty passes.
Maintain trust in AI through an effective approach to governance covering:
- People: People decide which problems to solve, how to solve them and how to deliver the solutions; people also use those solutions. When reviewing the impact of major changes, include all team members and vendors involved in developing, training, managing or using AI models.
- Data: Define trust levels for your data sources by monitoring how recent changes have affected continuity and short- and long-term relevance. Identify problem areas affected by the external forces at play, such as product availability, customer conversion propensity or churn. Trace back to any models leveraging the data sources in question and communicate the impacts, especially when the decision is to pause a model and its associated activity. Monitoring trust in data and communicating over time will help prepare for any subsequent switch-on.
- Algorithms: Sources of algorithms and trained models vary. Check the source of your predictive models, re-engage external providers and connect with data science teams to scrutinize their application and data in today’s context.
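The data-trust monitoring described above can be sketched in code. The following is a minimal, illustrative drift check using the Population Stability Index (PSI); the bucket edges, sample values and the common 0.25 rule-of-thumb threshold are assumptions for illustration, not part of any specific methodology.

```python
# Illustrative sketch: a Population Stability Index (PSI) check to flag
# when a data source's recent distribution has drifted from the baseline
# the model was trained on. All names and numbers here are assumptions.
import math

def psi(baseline, recent, edges):
    """Population Stability Index between two samples over fixed buckets."""
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            i = sum(v > e for e in edges)  # index of the bucket containing v
            counts[i] += 1
        total = len(values)
        # A small floor avoids log(0) when a bucket is empty.
        return [max(c / total, 1e-6) for c in counts]

    p = proportions(baseline)
    q = proportions(recent)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Example: order values before and after a hypothetical demand shock.
baseline = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
shocked  = [5, 8, 10, 12, 60, 65, 70, 80, 90, 95]
edges = [30, 60]  # three buckets: <=30, 30-60, >60

score = psi(baseline, shocked, edges)
# Rule of thumb: PSI > 0.25 suggests significant drift, a signal to
# review, retrain or pause models fed by this source.
print(f"PSI = {score:.2f}, significant drift: {score > 0.25}")
```

A check like this, run on each source feeding a model, gives the "trust level" a concrete, trackable number to communicate to stakeholders over time.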
Appraise your predictive models in three key areas to support decisions in model management, including short-term pause and resume tactics:
- Training data, which contains the information to learn from.
  - How far can we trust the data sources we use?
  - Is the data skewed or biased?
  - How can we validate new data?
- A learner algorithm (or a set of learner algorithms), which interprets the training data.
  - Can we revert to an earlier version of the model?
  - What is the frequency of the model training?
  - Do we have a feedback loop to improve and train the model based on new conditions?
- An output, which is a prediction or insight derived from the data.
  - How can we demonstrate the trustworthiness of outcomes, especially when reinstating and re-training models after periods of instability?
  - Can we quantify how the outputs and decisions are affecting our marketing operation and customers?
  - Do the end users of the predictive models know how to provide feedback to improve the output?
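The output questions above, quantifying impact and demonstrating trustworthiness, imply some form of ongoing monitoring. As an illustrative sketch (the class, window size and tolerance are assumptions, not a prescribed tool), one could compare a model's recent predicted probabilities against observed outcomes and flag the model for review when they diverge:

```python
# Illustrative sketch: a rolling calibration check on model outputs,
# assuming each prediction (a probability) is logged alongside the
# observed outcome (0 or 1). Window and tolerance are assumptions.
from collections import deque

class OutputMonitor:
    """Tracks how far recent predictions drift from observed outcomes."""

    def __init__(self, window=100, tolerance=0.10):
        self.records = deque(maxlen=window)  # keeps only the last N records
        self.tolerance = tolerance

    def log(self, predicted_probability, observed_outcome):
        self.records.append((predicted_probability, observed_outcome))

    def calibration_gap(self):
        """Mean predicted probability minus observed rate over the window."""
        if not self.records:
            return 0.0
        mean_pred = sum(p for p, _ in self.records) / len(self.records)
        observed = sum(o for _, o in self.records) / len(self.records)
        return mean_pred - observed

    def needs_review(self):
        """Flag the model for review when the gap exceeds tolerance."""
        return abs(self.calibration_gap()) > self.tolerance

# Example: the model keeps predicting ~40% conversion, but after a
# disruption only ~10% of customers actually convert (made-up data).
monitor = OutputMonitor(window=50, tolerance=0.10)
for i in range(50):
    monitor.log(0.40, 1 if i % 10 == 0 else 0)  # 5 of 50 = 10% observed

print(monitor.calibration_gap(), monitor.needs_review())
```

A visible, quantified gap like this also gives end users a natural feedback channel: the same log of predictions versus outcomes that drives the alert can be surfaced to the teams acting on the model's recommendations.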
Finally, it is worth recognizing that to accelerate out of these times of uncertainty, experimentation will prove more vital than ever. Rapid cycles of experimentation will reveal what works, what doesn’t and where value lies in the new ‘normal’. To learn more about how to manage marketing experimentation, see our research on holdout testing (Gartner subscription required): Use Holdout Tests to Measure Marketing Channel ROI
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.