Last week was more than usually dramatic on the digital marketing M&A front, as not one but two leading, well-regarded stand-alone attribution platforms were acquired by the big boys, leaving only one substantial independent standing. Google bought Adometry (terms were not disclosed), and AOL took on more recent upstart Convertro for $101 million, mostly in cash.

And thus attribution begins to resemble the data management platform (DMP) space, with one preeminent stand-alone astride a shrinking landscape, and marketers left to wonder what it really means. (In the case of DMPs, the key survivor is [x+1]; in attribution, it’s VisualIQ. For now.)

Gartner will publish an official “First Take” on these acquisitions today, and interested clients will want to look for it to get our thoughts on implications for their business. Meanwhile, I offer three observations based on interactions with clients and others in the digital marketing sandbox:

  1. Attribution was the single most mentioned topic in my inquiries over the past year
  2. It is the single least understood concept in digital marketing
  3. The cost and effort required to execute a full-scale attribution program is often not worth the benefit — but marketers usually don’t realize this until they try it

As an academic concept, attribution has a lot of mojo. It also has that rarest of commodities in analytics: a descriptive name. It promises to attribute (see what I mean?) credit fairly across all the touchpoints in a user’s wending way toward the big conversion event (sale, sign-up, test drive). Our 100% accurate attribution model would take as inputs all possible ways a marketer reached all possible prospects and customers. Which brings us to our first logistical problem: attribution requires user-level tracking.
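
To make “user-level tracking” concrete, here is a minimal sketch in Python of the kind of per-user journey record an attribution model needs as input. The channels, fields and dates are invented for illustration; they are not any vendor’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Touchpoint:
    """One exposure or interaction: channel, timestamp, and whether the user acted."""
    channel: str          # e.g. "display", "search", "email"
    timestamp: datetime
    clicked: bool = False

# A user-level journey: every touchpoint for one person, in order,
# ending (or not) in the conversion event we care about.
journey = [
    Touchpoint("display", datetime(2014, 5, 1, 9, 15)),              # saw a banner, did nothing
    Touchpoint("email",   datetime(2014, 5, 3, 8, 2), clicked=True),
    Touchpoint("search",  datetime(2014, 5, 6, 20, 41), clicked=True),
]
converted = True  # the "big conversion event": sale, sign-up, test drive

# Aggregate channel totals can't tell you this sequence existed;
# attribution needs the journey itself, per user, per device.
```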

What does this mean? Our 100% model would have to follow each person individually through their journey. (“All models are wrong, but some are useful,” as the man said.) It would have to include any message the person received about the brand, whether she did anything at the time or not. The perfect model would include all search, email and display advertising touchpoints (easy); mobile, video, native and social (harder); and offline channels including TV, radio, newspapers and billboards by the side of the road (impossible).

So you can see the logistical challenge. It gets worse. Of course, the model wouldn’t be accurate unless it also tracked people across all their devices, so that it knew that Martin on this version of Explorer is the same Martin on that iPhone and on that tablet and booting up that app on his Xbox One. And I haven’t even mentioned the nontrivial (nerd-speak for “really, really hard”) problem of achieving statistical significance, or the fact that experts don’t agree on a formula or algorithm to use. (Google introduced one last year based on game theory. For a great overview of attribution models in general, see Avinash Kaushik’s blog post.)
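
For the curious, here is a rough sketch of the game-theory idea (Shapley values) in Python, with invented conversion rates for two channels. Real data-driven models are far more involved, but the credit-splitting logic is the same in spirit.

```python
from itertools import combinations
from math import factorial

# Toy "value function": observed conversion rate for users exposed to each
# subset of channels. These numbers are invented purely for illustration.
conv_rate = {
    frozenset():                      0.00,
    frozenset({"display"}):           0.01,
    frozenset({"search"}):            0.04,
    frozenset({"display", "search"}): 0.07,
}

def shapley(channels, v):
    """Split credit for the joint conversion rate across channels via Shapley values."""
    n = len(channels)
    credit = {}
    for ch in channels:
        others = [c for c in channels if c != ch]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v[s | {ch}] - v[s])   # channel's marginal contribution
        credit[ch] = total
    return credit

credits = shapley(["display", "search"], conv_rate)
print(credits)  # display gets ~0.02, search ~0.05; together they sum to the 0.07 joint rate
```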

None of this is to imply that attribution isn’t worth the headache; quite often, it is. When it’s applied against a specific goal in a controlled environment, it can yield strong insights. Typically, marketers will implement one of the attribution vendors, or build their own home-grown model using ad server data, in only the search, display and email channels, for a specific campaign. The output can help apportion budget, often toward touchpoints that come earlier in the user’s path, such as display, whose impact is otherwise undervalued.
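
As a toy illustration of how the credit shifts, here is a quick comparison, in Python with invented converting paths, of last-click versus an even-split rule. Note how display, which tends to sit early in the path, picks up credit under the multi-touch rule; dividing actual spend by these credited conversions gives the crude cost-per-conversion numbers that usually drive the budget reshuffle.

```python
from collections import defaultdict

# Invented converting paths from an imaginary ad server log:
# each list is one user's ordered sequence of channels before converting.
paths = [
    ["display", "email", "search"],
    ["display", "search"],
    ["search"],
    ["display", "email"],
]

def last_click(paths):
    credit = defaultdict(float)
    for p in paths:
        credit[p[-1]] += 1.0            # all credit to the final touch
    return dict(credit)

def even_split(paths):
    credit = defaultdict(float)
    for p in paths:
        for ch in p:
            credit[ch] += 1.0 / len(p)  # spread credit across every touch
    return dict(credit)

print(last_click(paths))   # {'search': 3.0, 'email': 1.0} -- display gets nothing
print(even_split(paths))   # display ~1.33, search ~1.83, email ~0.83
```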

Unfortunately, there is a long row to hoe between shuffling media dollars around and amping up ROMI (return on marketing investment). Here is where attribution exercises can underwhelm, through no fault of their own. As one Silicon Valley analytics type told me recently, “There’s very little swing in the thing.” What he meant is that in the real world, tweaking spend based on an attribution model’s output tends to yield a net financial benefit that is quite modest, in the single digits.
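
A back-of-the-envelope example of that modest swing, with invented numbers: suppose the model tells you to move 10% of a $1M budget from a channel returning $1.10 per dollar to one returning $1.40.

```python
budget = 1_000_000
moved = 0.10 * budget                      # 10% of spend reallocated on the model's advice
old_return = budget * 1.10                 # everything left in the weaker channel
new_return = (budget - moved) * 1.10 + moved * 1.40
lift = (new_return - old_return) / old_return
print(f"net lift: {lift:.1%}")             # about 2.7% -- real money, but hardly dramatic
```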

So is it worth it? Worst answer ever: It depends. Maybe, maybe not. The large providers who put solid attribution into their product suites will be met warmly, because they seem to make it easier for marketers to do it right. That’s what I mean when I say most marketers won’t miss stand-alone attribution platforms: it’s the setup time they won’t miss, not the platforms. If Google or AOL, say, can make life easier, that’s great.

Or is it? What some of us fear is that many marketers will treat a single vendor’s built-in attribution solution as a reason to keep their dollars in a closed system, out of inertia or a desire for accurate signals. In other words, if AdNetworkX sells me ad inventory and promises to tell me exactly how it’s doing and how to make it perform better, it may feel easier to keep all my money in their system.

However, it’s never a good idea to put all your marketing dollars into a single pocket. At the very least, spreading them around keeps the networks on their toes. Monopoly helps no one but the little guy with the monocle.

Finally, to recap:

  • Attribution is good — do not fear it
  • It is harder than you think, and may yield lower returns than you expect
  • Google and AOL are smart to plug it into their stack
  • We marketers should not use one-stop-shopism to lessen our vigilance

Do you think a world without independent attribution vendors is okay? Does it not matter? I’d be interested to hear from you below, or at @martykihn