In May 2014, an algorithm named Vital was appointed to the Board of Directors of a Hong Kong corporation called Deep Knowledge Ventures. Since then the company, which invests in biotechnology ventures, has credited Vital with saving it from bankruptcy. As AIs combine the analysis of big data with a newfound facility in natural language, it's only a matter of time before they make more frequent appearances in the boardrooms and war rooms of our corporations and organizations. After all, AIs have proven they can beat humans at the most complex of games, so their counsel will presumably become indispensable to the business world even if they're a long way from possessing anything like general intelligence.
Let's sidestep the temptation to indulge in future-shock speculation and examine what this might mean for the future of marketing strategy. As we consider the game-theory analysis of marketing decisions like pricing, targeting, and investments in product and service improvements, one crucial variable emerges from the fog: the degree to which you can depend on the cooperation of other players. To explain why, I'll need a brief digression into game theory.
Game theory discussions almost always start with the Prisoner's Dilemma (PD). You can read about it if you need a review, but recall the conclusion: defection always yields a better pay-off for either player, so it's called the dominant strategy, because it wins no matter what the other player does. To drive this home: there are only two possibilities, your partner either defects or doesn't. If he doesn't defect, you're better off defecting because you'll go free rather than serve time. If he does defect, you're still better off defecting because you'll serve less time than if he defects and you don't. In the real world, preventing defection requires a stronger incentive for cooperation, such as knowing that if either of you goes free while the other goes to jail, a worse punishment awaits the informant.
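The dominance argument above is easy to verify mechanically. Here's a minimal sketch in Python, using illustrative sentence lengths (the exact numbers are assumptions; any payoffs with the standard PD ordering give the same result):

```python
# Prisoner's Dilemma payoffs as prison sentences in years (lower is better).
# Illustrative numbers; the ordering, not the values, drives the conclusion.
# Key: (my_move, partner_move) -> my sentence
payoffs = {
    ("cooperate", "cooperate"): 1,  # both stay silent: light sentence
    ("cooperate", "defect"):    5,  # I stay silent, partner informs on me
    ("defect",    "cooperate"): 0,  # I inform, partner stays silent: I go free
    ("defect",    "defect"):    3,  # we both inform
}

def best_response(partner_move):
    """The move that minimizes my sentence, given the partner's move."""
    return min(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, partner_move)])

# Whatever the partner does, defecting is strictly better:
assert best_response("cooperate") == "defect"  # 0 years beats 1
assert best_response("defect") == "defect"     # 3 years beats 5
```

Because the best response is the same in both cases, neither player needs to predict the other; defection dominates.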
When generalized to group interactions, PD yields the Tragedy of the Commons. As in PD, the tragedy occurs when an individual sharing a communal resource calculates that her self-interest is better served by consuming more than her fair share, even when the community has agreed to a common limit on consumption. As in PD, she reasons that, if no one else defects, she'll do better by taking more. But the subtle part comes when she asks herself, "but what if everyone behaved this way?" and realizes that, in that situation too, she would be better off defecting first, before everyone else does. In fact, no matter who else defects, she's always better off defecting. So, again, defection is the dominant strategy and cooperation fails.
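The same mechanical check works for the group case. Below is a toy commons model, with entirely hypothetical numbers: ten herders share a pasture, each takes a fair or a greedy share, and grazing quality degrades linearly with the total taken. Defection dominates for each individual, yet universal defection leaves everyone poorer than universal restraint:

```python
# Toy commons: 10 herders, each takes FAIR (1 unit) or GREEDY (2 units).
# Grazing quality falls linearly as total consumption approaches CAPACITY.
# All constants are illustrative assumptions.
N, FAIR, GREEDY, CAPACITY = 10, 1, 2, 25

def payoff(my_take, others_takes):
    total = my_take + sum(others_takes)
    quality = max(0.0, 1 - total / CAPACITY)
    return my_take * quality

# No matter how many of the other 9 herders defect, taking more always pays:
for k in range(N):  # k = number of greedy neighbors
    others = [GREEDY] * k + [FAIR] * (N - 1 - k)
    assert payoff(GREEDY, others) > payoff(FAIR, others)

# ...yet if everyone follows that logic, everyone ends up worse off:
all_cooperate = payoff(FAIR, [FAIR] * (N - 1))      # 0.60 per herder
all_defect = payoff(GREEDY, [GREEDY] * (N - 1))     # 0.40 per herder
assert all_defect < all_cooperate
```

This is the tragedy in miniature: individually rational defection, collectively inferior outcome.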
While such models bode ill for issues like climate change, they're essential to the consumer benefits of free-market competition. Antitrust laws help ensure that companies which might otherwise maximize profits by cooperating (keeping prices high, keeping service levels low, dividing up markets) are instead incentivized to compete on price and quality, even though it brings the "tragedy" of lower profits for all. At least that's how it's worked since the days when Adam Smith described the formula. To make a profit under perfectly competitive conditions, companies must differentiate with branding, innovation, and unique customer experiences.
So now let's ask: what happens when AIs start playing this game? First, let's be clear that algorithms are not likely to do well at assessing the intangible value of branding, innovation, and unique customer experiences. Second, let's take secret collusion among competing algorithms off the table. The question then becomes: can an algorithm discover a pattern of moves that counters the incentive to "defect" on price and quality with stronger incentives to cooperate, to the benefit of its shareholders (whose value it is presumably programmed to maximize) and the detriment of its consumer market? This is apparently what nature has managed to do with species that cooperate in competitive situations; some would point to it as the source of tribalism. In the real world, cooperation often beats combat as a strategy.
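Game theory does offer one well-known mechanism by which such patterns can emerge without any collusion: repetition. In a one-shot PD defection dominates, but when the same players meet repeatedly, conditional strategies like tit-for-tat (cooperate first, then mirror the opponent's last move) can sustain cooperation. A minimal sketch, using the standard iterated-PD payoff values often cited in the literature:

```python
# Iterated Prisoner's Dilemma with the conventional payoffs:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

assert play(tit_for_tat, tit_for_tat) == (300, 300)      # sustained cooperation
assert play(always_defect, always_defect) == (100, 100)  # mutual defection pays far less
```

A pair of tit-for-tat players earns three times what a pair of defectors does, without ever communicating, which is the kind of pattern a shareholder-value-maximizing algorithm could plausibly discover on its own.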
Emergent cooperation among corporations might not amount to price-fixing and monopolies, which would still be prohibited by law. But the key lesson here is that there's no reason to doubt that cooperative strategies could emerge among marketing algorithms (and have likely emerged without them), and that reinforcing the assumption that everyone in the market game is playing to maximize shareholder value, to the exclusion of all other considerations, could have unintended consequences for consumers. Adding a wildcard, such as valuing customer satisfaction and loyalty beyond their actual economic worth, might re-level the playing field by changing the game, refocusing competition on the areas where humans still hold the edge.