A Lesson in Attribution for Digital Marketers

By Andrew Frank | January 23, 2013

Here’s a riddle.

A marketer is using an attribution modeling tool to analyze how her digital display, search, and social campaigns are working. The tool looks at all the conversions on her web site and, for each conversion, determines which touch points were present in the customer’s path to conversion. It then uses this data to estimate each touch point’s independent lift in conversion rates (a technique you may recognize as “algorithmic attribution”). As you might expect, some touch points had zero effect on conversion rates: it made no difference whether a customer saw the ad or not. A few were moderately effective, adding a point or two, but one in particular stood out: when users saw this ad, on-site conversion rates jumped from 6% to 10% (this is based on a true story). So the marketer is happy: she knows where to cut her media spending, where to hold, and where to double down.
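To make that concrete, here is a toy sketch in Python of the independent-lift idea: for each touch point, compare the conversion rate of paths that include it with the rate of paths that don’t. The paths and touch point names below are made up, and real attribution tools use more sophisticated models, but the basic comparison is the same.

```python
# Toy sketch of "independent lift" behind algorithmic attribution.
# Each path: (set of touch points present, did the visitor convert?)
paths = [
    ({"display_A", "search"}, True),
    ({"display_A"}, False),
    ({"search", "social"}, True),
    ({"social"}, False),
    ({"display_A", "social"}, True),
    ({"search"}, False),
]

def independent_lift(paths):
    """Conversion rate with each touch point minus the rate without it."""
    touchpoints = set().union(*(tps for tps, _ in paths))
    lift = {}
    for tp in sorted(touchpoints):
        with_tp = [conv for tps, conv in paths if tp in tps]
        without_tp = [conv for tps, conv in paths if tp not in tps]
        rate_with = sum(with_tp) / len(with_tp)
        rate_without = sum(without_tp) / len(without_tp) if without_tp else 0.0
        lift[tp] = rate_with - rate_without
    return lift

print(independent_lift(paths))
```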

Here’s where it gets tricky. She also uses an ad verification service (possibly from the same vendor) to ensure that all of the ads she buys are actually visible on the pages where they’re placed. (You might recall that the problem of marketers paying for ads that users never see, because they’re below the fold or fail to load, reached a fever pitch last fall.) To her great surprise, it turns out that most of the ads with the highest conversion rates were flagged by the ad verification service as never having actually been seen by users! How is this possible?

Was one of the systems simply mistaken? Were the higher conversion rates some kind of strange coincidence – or some sort of fraud? An anomaly with cookies perhaps? If you’ve guessed the answer then congratulations: you’re a bona fide student of marketing data science.

To understand how this is possible, we need to consider two things. First, the algorithmic attribution model as described doesn’t actually measure causality (although it seems like it should); it just measures a correlation between events. Second, the touch point it’s looking at isn’t actually an ad exposure. It’s an exposure to the page the ad is on, which causes the ad server to be called and the user’s cookie to be credited with a view, whether they saw the ad or not. So what the attribution model is actually telling us isn’t that the ad had any effect (it couldn’t have). It’s telling us that people who bought our product were likely to have visited a certain web page in the days leading up to the purchase, perhaps because the page was relevant to our product category and drew good search traffic from people who were in market, doing research, and likely to buy.
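One practical way to catch this is to cross-reference the attribution tool’s output with the verification service’s viewability report. The sketch below does that with hypothetical placement IDs, lift figures, and viewability rates: anything with high measured lift but near-zero viewability is really telling you about the page, not the ad.

```python
# Cross-check attribution lift against viewability (all figures hypothetical).
attribution_lift = {       # from the attribution model
    "placement_123": 0.04,    # +4 points of conversion lift
    "placement_456": 0.01,
}
viewability_rate = {       # from the ad verification service
    "placement_123": 0.02,    # only 2% of served impressions were viewable
    "placement_456": 0.70,
}

for placement, lift in attribution_lift.items():
    viewable = viewability_rate.get(placement, 0.0)
    if lift >= 0.02 and viewable < 0.10:
        print(f"{placement}: lift {lift:+.0%} but viewability {viewable:.0%}; "
              "the signal is page visitation, not ad exposure")
```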

But wait – there’s more.

Duly enlightened, our marketer kills the ad buy on the page where the ad was invisible and, lo and behold, her conversion rates drop! How can this be?

It turns out that page was such a good predictor of purchase intent that her intelligent bidding system had learned to use visits to it as a signal for bidding on ads shown to those same visitors on other sites. When she killed the ad buy, she also cut off an important source of predictive data, one that had been responsible for improving the effectiveness of those other ad buys.
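For intuition, here is a simulated sketch of what happens to a bidder’s predictions when a strong intent signal like that page visit disappears from its training data. The feature names are hypothetical and the bid model is a plain logistic regression, not any particular vendor’s system.

```python
# Simulated effect of removing a strong intent feature from a simple bid model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n = 20_000
visited_page_x = rng.integers(0, 2, n)   # strong (hypothetical) intent signal
other_signal = rng.integers(0, 2, n)     # weak signal
p_convert = 0.04 + 0.06 * visited_page_x + 0.01 * other_signal
converted = rng.random(n) < p_convert

X_full = np.column_stack([visited_page_x, other_signal])
X_cut = other_signal.reshape(-1, 1)      # page signal removed

full = LogisticRegression().fit(X_full, converted)
cut = LogisticRegression().fit(X_cut, converted)

print("log loss with page signal:   ",
      log_loss(converted, full.predict_proba(X_full)[:, 1]))
print("log loss without page signal:",
      log_loss(converted, cut.predict_proba(X_cut)[:, 1]))
```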

So, are we led to the absurd conclusion that marketers should happily pay for ads that people never see? No. First, attribution modeling generally works better than this example suggests. Second, the real conclusion is this: sometimes data is more valuable than media. Perhaps an ideal sponsorship arrangement between an advertiser and a publisher might consist of nothing more than placing an invisible pixel (a.k.a. beacon) on relevant pages, reducing clutter while increasing inventory! (Yes, privacy needs to be addressed, I hear you.) This is essentially what data brokers do, but their data is available to everybody in the market, which makes it harder to use for competitive advantage, and they take a cut. Maybe it’s time to consider some private, exclusive data arrangements….
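For what it’s worth, the publisher-side mechanics of such a beacon are simple. The sketch below is a hypothetical Flask endpoint (the route, query parameters, and logging are illustrative, not a reference implementation) that records the visit and returns an empty response; a production version would also need consent handling and a real data pipeline.

```python
# Hypothetical publisher-side beacon endpoint (Flask).
from flask import Flask, request

app = Flask(__name__)

@app.route("/beacon.gif")
def beacon():
    # Record which page fired the beacon and the (cookie-based) visitor ID.
    page = request.args.get("page", "unknown")
    visitor = request.cookies.get("uid", "anonymous")
    app.logger.info("beacon hit: page=%s visitor=%s", page, visitor)
    # Many beacons return a 1x1 transparent GIF; an empty 204 works too.
    return "", 204

if __name__ == "__main__":
    app.run()
```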

The moral of the story: don’t just measure the media; analyze the data too! And you might need to hire a data scientist.
