As marketers move from reacting to COVID-19 to planning for a new normal, it’s time to re-evaluate machine learning capabilities.
Obviously, digital engagement and retail consumption patterns have shifted in most major markets, and they will shift again. It is also becoming clear that those shifts will vary tremendously across markets and products (one set of product examples here). Classically, this is the type of complex problem where machine learning can help us better anticipate and meet consumers’ needs.
However, if you are an “algorithmic marketer” focused on the US, your machine learning models have likely never been “dumber” than they were in March, though April may unfortunately be a close second. Most of the machine learning deployed by marketers uses history to predict the future, but today’s scenario is unprecedented. Because of this, marketers looking to use machine learning moving forward must ask if their models are learning new consumer behavior fast enough. Most are not. Sample questions include:
- How long until your send time optimization algorithm catches up to changing digital engagement patterns (e.g. for one common vendor it is 90 days)?
- When will your product recommendation algorithms—from personalized search to basket building recommendations—adjust to new buying patterns (e.g. many brands use the last 12 months, especially for next best product algorithms)?
- How long until your triggered campaigns adjust to new signals and communicate to the (new) right people with the (likely new) right message (e.g. many digital journey tools use 30-day look-back windows)?
These three examples all lead to the same question: Can your machine learning model use less data and become more agile? And these questions are already being asked. A survey conducted last week in the US found that 37% of analytic teams were dealing with a “substantial number of requests related to COVID-19” or were already “focus[ed] exclusively on analytics related to COVID-19”.
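To make the training-window question concrete, here is a minimal sketch (with hypothetical engagement data and window lengths, not any vendor’s actual algorithm) showing why a model built on a long history adapts slowly when behavior shifts abruptly. The “model” is just a mean-rate estimate, but real propensity and send-time models exhibit the same lag:

```python
# Hypothetical illustration: after an abrupt behavior shift, an estimate
# built on 12 months of history stays anchored to the old pattern, while
# one built on the last two weeks tracks the new normal almost immediately.

def mean_rate(history, window_days):
    """Estimate tomorrow's engagement rate from the last `window_days` observations."""
    recent = history[-window_days:]
    return sum(recent) / len(recent)

# 330 days of stable behavior (~40% engagement), then 35 days of a new normal (~15%).
history = [0.40] * 330 + [0.15] * 35
actual_today = 0.15

long_window = mean_rate(history, 365)   # "use all 12 months"
short_window = mean_rate(history, 14)   # "use the last two weeks"

print(f"12-month model predicts {long_window:.2f} (error {abs(long_window - actual_today):.2f})")
print(f"2-week model predicts {short_window:.2f} (error {abs(short_window - actual_today):.2f})")
```

The trade-off, of course, is that the short window is noisier in stable times; the point of the questions below is to decide, model by model, when that trade is worth making.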
When I wrote the report A 4-Step Process for Marketers to Evaluate Predictive Models (Gartner subscription required) I never thought this framework would help marketers in a pandemic. Rather, I wanted to help marketers have productive conversations with data scientists and statisticians. Below is a snapshot of the 4 steps marketers should always take when evaluating predictive models.
Beyond the 15 questions outlined in that report, here are some additional questions marketers should ask during today’s pandemic:
- Can your models use less data and become more agile? The goal here is to shift the model from one that is precise during stable times to one that is more responsive in changing times. Ask your data scientists to assess the impact of shortening the training window for each of your models. Is one week enough data? This question is particularly important for your propensity or next best action models.
- Are you optimizing on the right criteria? Go through your product recommendation models and be sure you understand the optimization criteria. This is especially worthwhile if you were optimizing on multiple criteria. For example, organizations should lessen the impact of “available at your preferred store” when locations are closed, and more customers are ordering online.
- Should you change your targeting rules based on predicted segments? As brands look to move a greater percentage of purchasing behavior online, they should revisit historic messaging and inclusion rules that are based on machine learning. For example, a brand may have historically targeted consumers with both a high predicted likelihood to convert online and a high predicted spend. For some brands it may make sense to relax the second criterion to broaden the targeted audience while adding a free shipping threshold. Other brands will relax both criteria and add new tailored messaging to drive new behavior.
- Are you collecting the right data today to make you more successful tomorrow? If you are responsible for multiple markets, note the date COVID-19 started changing shopping patterns in each. Assess whether you can use shifts in consumer behavior in one region to help prepare for shifts in consumer behavior in another. Also ask your data scientist if any of your machine learning algorithms are sensitive to consumption anomalies (i.e. hoarding behavior) in the data. If some are, should that data be recoded or omitted when rebuilding the models? (Side note: do you need to update maximum purchase quantities to increase product availability?)
- Are you assessing your models frequently enough? Rapidly changing consumer patterns require revisiting your machine learning with more frequency than at any time in the last decade. Do you have a process in place to monitor machine learning health? Get a recurring (virtual) meeting on the calendar now. Start with a weekly cadence and adjust from there.
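A model-health check for that weekly meeting can be very simple. Here is a hedged sketch (the error values, threshold, and function name are all hypothetical) that flags a model for retraining once its recent prediction error drifts well past its historical baseline:

```python
# Hypothetical weekly model-health check: compare the model's recent
# prediction error against its pre-pandemic baseline and flag it for
# retraining when the error has drifted past a relative tolerance.

def needs_retraining(baseline_error, recent_errors, tolerance=0.5):
    """Flag the model if average recent error exceeds baseline by more than `tolerance` (relative)."""
    recent_avg = sum(recent_errors) / len(recent_errors)
    return recent_avg > baseline_error * (1 + tolerance)

# Example: a conversion model's error was ~5% historically; the last
# four weekly checks show it climbing as consumer behavior shifts.
weekly_errors = [0.051, 0.063, 0.082, 0.110]
print(needs_retraining(0.05, weekly_errors))  # → True
```

The specific metric and tolerance matter less than having an agreed trigger in place, so the weekly conversation starts from evidence rather than anecdotes.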
Has the pandemic changed your brand’s approach to machine learning? If so, I would be very interested to hear more on the changes you have made either below in the comments or message me here.