In a recent AdExchanger Talks podcast, Nate Woodman of IPONWEB surfaced the specter of what we might call the brand-gorithm: a proprietary algorithm unique to a brand and used to make decisions such as how much to bid on an impression or what message to serve.

Most brands already do some of this, some of the time. Whenever a personalization platform comes up with a next-best-offer model or a marketing automation system scores leads, they’re using some proprietary data and machine learning methods to do it. What the brand-gorithm brings is a more comprehensive, omnichannel and machine-driven master model.

It’s promising a kind of digital DNA for a brand that can size up any prospect – on an exchange, in your lead file, browsing on your site, anywhere – and determine whether they’re likely to want something you’re selling.

At that point, a brand can present them with exactly what they’re most likely to buy, using the message they’re most likely to hear. The benefits to a business are clear.

Amazon and Netflix are lighthouse examples of companies with vast inside data and predictive powers. They’re a window into what should be possible for other brands in the future, a world where, as Netflix’s chief product officer once said, “There are no bad shows, just shows with small audiences.” Meaning: Even your duds can find a buyer, if you know how to spot them.

But what does success require? The biggest obstacle is not the learning methods but – of course – the data itself. You need an accurate cross-device, cross-channel identity. You need a clean set of features, the things you know about the prospect or customer. All this is harder than it sounds.

And finally, you need a lot of data from a lot of observations to train a useful model. It’s possible your canine boutique may never have enough digital action to train a good brand-gorithm, no matter how smart you are.

More Data, More Problems

Assuming you’ve cleared these jumps, how would a brand-gorithm work? First, a data science team gathers all the first-, second- and third-party data it can get mapped to people, known or unknown. Then a set of goals is defined and mapped to those same people – things like sales or grooming appointments.

Then the neural networks and deep-learning pods start to sift through the data. When they’re done, the brand team starts acting on the output recommendations and seeing what happens. They feed results back into the system as new information.
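The loop above can be sketched in miniature. What follows is a toy propensity model, a logistic regression trained by gradient descent on hypothetical features (site visits, email opens), with new observed outcomes folded back in as they arrive. A real brand-gorithm would use far richer features and proper ML tooling; this only illustrates the train-act-feedback shape.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, weights, lr=0.1, epochs=200):
    """One round of logistic-regression training on (features, label) pairs."""
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
            grad = p - y  # error drives the weight update
            weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
    return weights

# Hypothetical feature vector: [1 (bias), site visits, email opens]
history = [
    ([1, 5, 3], 1),  # converted
    ([1, 0, 1], 0),
    ([1, 4, 4], 1),
    ([1, 1, 0], 0),
]
w = train(history, [0.0, 0.0, 0.0])

def propensity(features):
    """Score a prospect: probability-like estimate of conversion."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features)))

score = propensity([1, 3, 2])  # act on this recommendation...

# ...then feed the observed result back in as new training data.
new_results = [([1, 3, 2], 1)]
w = train(history + new_results, w)
```

The point is the cycle, not the model: score, act, observe, retrain.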

The end result is essentially a black box sitting on a cloud cluster somewhere that “defines” your brand, at least in terms of marketing.

There are already companies that claim to do something like this. Some of the most vocal include Adgorithms, Cognitiv, ReSci and Sentient. And there are dozens of tools available in the recommendation, lead scoring and propensity-to-buy category for retailers.

What’s different about the brand-gorithm is that it’s portable across channels and tactics. Algorithms could become something like apps, bought, sold and traded on exchanges. If I think I’ve cracked the code on the Bernese mountain dog fancier, that insight should be worth some bones to the less enlightened dog entrepreneur. And, in fact, nascent algorithm marketplaces already exist.

AppNexus brought something similar to the programmatic world when it launched its Programmable Bidder in 2015. Chief Data Scientist Catherine Williams explained that it computes an expected value for an impression based on historical data, such as domain, geo, frequency or recency, using AppNexus’ own decision-tree language, Bonsai. Brands can create their own Bonsais using 40 variables.

Demand-side platforms (DSPs) have been using such models for years. Rocket Fuel’s so-called “moment scoring,” for example, uses historical and exchange data to estimate the value of an impression and inform a bid. AppNexus’ Programmable Bidder offers brands a way to pull the algorithm outside the DSP. The company has claimed at least 20 users, and there’s unofficial chatter that it helps.
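To make the expected-value idea concrete, here is a minimal sketch of decision-tree bid valuation in the spirit of what's described above. The tree, the domains and the CPM values are all hypothetical, and this is plain Python, not actual Bonsai syntax (which the article doesn't show).

```python
def expected_value_cpm(imp):
    """Walk a hand-written decision tree over impression features
    (domain, geo, frequency) and return a bid value in CPM dollars.
    All thresholds and values here are made up for illustration."""
    if imp.get("domain") in {"dogfancier.example", "berner-life.example"}:
        if imp.get("frequency", 0) < 3:
            return 4.50   # engaged niche audience, not yet fatigued
        return 1.25       # this user has seen the ad too often
    if imp.get("geo") == "US":
        return 0.80       # broad prospecting tier
    return 0.10           # floor bid everywhere else

bid = expected_value_cpm({"domain": "dogfancier.example",
                          "geo": "US", "frequency": 1})
```

In a real system the tree would be learned from historical win/conversion data rather than hand-written, but the runtime shape, features in, dollar value out, is the same.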

Influence Engineering

Still, the barriers to a true brand-gorithm are daunting. Even after solving identity and data sparsity, the marketer faces the real problem of execution. Owned platforms, such as websites and email, should be able to act on the decisions. But anything outside the marketer’s walls will need APIs or other connections that are flexible, nuanced and fast.

The harder barrier is that more of the web is closed to brands. I don’t mean it can’t be bought; I mean it does not give back the precise person-level data brands need to populate their brand-gorithms. The information is captured, but it is held for the benefit of large walled systems. And because they see more of their own users’ lives than any brand, they’re in a position to build more accurate predictions.

As these predictions get more perceptive, marketing rather eerily becomes influence engineering. To take a trivial example, I love Bernese mountain dogs. Any marketer that puts one in its message to me gets my attention. We all have such triggers. I can’t think of any brands I buy that know this about me. Facebook and Google do, however. Oh boy, do they.

Brands are always hostage to their own data and what they can barter or buy. Creating a cleaner data set anchored in an accurate cross-platform identity is a good idea. Applying data science to improve decisions is just common sense.

But there’s a chance the comprehensive brand-gorithm will never really exist. It will always be a partial solution. The real brand gorillas will lurk behind the rising walls.

This post originally appeared in AdExchanger.

  1. March 24, 2017 at 12:18 pm
    Ketharaman Swaminathan - GTM360 Marketing Solutions says:

    The data quantum challenge highlighted in this post is very real. We realized this when an AI platform wanted a list of 200 existing customers to be able to predict our ideal customer. We’re in the B2B tech space where no one but the large vendors will have that many existing customers. So the platform can’t be used by small and midsized vendors. And, if a vendor is that big, probably every company in the world is its ideal customer, so the platform won’t be used by the large vendors either!

  2. March 25, 2017 at 11:22 am
    Simon James says:

    Hi Martin,

    Interesting point of view, thanks for sharing. So where do you see this all heading in the next 10 years? With the convergence of all these technologies (self-driving cars, smart homes, IoT), where should we focus our short-term goals, and where the long-term ones?


  3. March 25, 2017 at 7:00 pm
    Rex Briggs says:

    This blog puts its finger on something important. We use machine learning (AI, if you prefer) to decision the next message, and the entire media and marketing mix. We put our AI algorithm into a robot… here is the story:

    MONICA (marketing optimization neural intelligence computer algorithm) is a good example as is Albert and some of the other companies you cited. I think you may be on to a new category with its own magic quadrant. What do you think?

  4. April 6, 2017 at 7:20 pm
    Bill says:

    “The harder barrier is that more of the web is closed to brands. I don’t mean it can’t be bought; I mean it does not give back the precise person-level data brands need to populate their brand-gorithms. The information is captured, but it is held for the benefit of large walled systems. And because they see more of their own users’ lives than any brand, they’re in a position to build more accurate predictions.”

    Translation: Brands are being “intermediated” by search engines, online marketplaces, and social platforms. They’ve almost entirely lost the 1:1 connection they used to have with consumers. Instead, they’re at the mercy of Google’s algorithm, Amazon reviews, and Facebook likes/sharing. Brands best find a way to control the dialogue and customer journey through a 1:1 mechanism or shoppers will be buying “Amazon Soap” in the not-too-distant future. Sorry, Dial and Ivory.

Comments are closed.