Gartner Blog Network


The Inflated Price of Perfect Information

by Andrew White  |  May 9, 2017

If you think about this long enough, you realize that one of the cherished economic principles of markets and pricing rests on a fallacy.  Price is meant to signal the potential future value or economic benefit a thing or service may bestow upon its user or owner.  If the market as a whole believes that the future economic benefit of one thing is lower than that of another, its price will fall relative to that other thing.  This is full of holes, of course.

First, the market is not a “whole” or a singular thing.  It is an amalgamation of many smaller parts – some individual investors like you and me, and, far more significant, a small set of very large investor groups such as retirement vehicles.  So there is no “whole,” even though we talk about it as if there were.

Second, the idea of rational expectations has lived long and suggests that you and I tend to be rational actors.  This is a statement and an assumption about how we would behave given certain conditions.  Oftentimes game theory comes to hand as a tool to model such conditions.  For example, would you forsake a candy bar today at one price for a larger candy bar next week at half the price?  Economists like to think that we are a “whole” too, and that we all move in similar ways.  And yes, there are many situations where we do seem to behave the same way.  But as we all know by now, there are more and more situations where behavior deviates from the model, and so the rational expectations model breaks down.

Finally, there is the price mechanism itself.  Price is meant to signal the amalgam of all market knowledge.  In other words, everything that can be known is apparently known and “priced in.”  Thus, the only information missing from the market’s knowledge is that which should not be known – the kind that would get you ‘put away’ for insider trading.  Likewise, price is free to move.  In other words, as information is discovered, that information flows freely across the market so that prices can reflect all market information.  This is sometimes described as the theory of perfect information.  Again, all of this is not quite right, as we all know of examples of information asymmetry, whereby one party knows something about a thing or a price that the other does not.

But more specifically, price itself and how it is shared across a market is a problem.  Technically, price is supposed to be liquid – that is, there are no impediments to how price changes and is communicated.  It needs to change when new information emerges, and the communication and sharing of that change needs to be freely available to all.  The problem is that the market only functions if prices are actually sticky.  What I mean is that if prices were perfectly dynamic, there would never be a chance at arbitrage, and so never an opportunity for one party to gain an advantage by extracting a higher margin than another.  This, we know, is absurd.

Two articles in the last week show the fallacy of the idea of perfect information.  In the Economist this week there is “How price-bots can conspire against consumers – and how trustbusters might thwart them.“  This article reports on illicit cooperation between price bots setting gas prices in the US.  In today’s Wall Street Journal there is an article titled “To Set Prices, Stores Turn to Algorithms,” which reports on a similar story in Germany.  The point of both is that when algorithms are charged with maximizing certain objectives, they might accidentally “collude” (more precisely, behave as if they were colluding, since they cannot literally agree), and the result, so the observations state, has been a rise in prices. How can this be?

The reason is nicely captured in the Wall Street Journal article: it highlights how we all have different propensities to shop for and select gas stations based on price sensitivity and price levels.  The algorithms can, with enough data, determine how each of us will respond to certain prices.  For example (and I oversimplify), some of us might be well enough off that we tend not to shop around for the odd penny here or there.  As such, we do not react to, or even need, a discount on the gas price.  The result is that the price offered to us is a little higher than average.  Another motorist might be very sensitive to price, and the same price offered to them as to me might result in them driving on to a competitor station.  So the algorithm might offer that consumer a slightly lower price compared to average.  That is, of course, if the algorithm can “recognize” me or the time of day I tend to frequent that establishment.
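The segmentation described above can be sketched in a few lines of code. This is a minimal illustration, not the actual algorithm from either article; the segments, the base price, and the sensitivity figures are all invented for the example.

```python
# Hypothetical sketch: quote different prices to motorists segmented by
# estimated price sensitivity. All numbers below are illustrative.

BASE_PRICE = 2.50  # assumed base price in dollars per gallon

# Assumed likelihood a motorist drives on to a competitor per cent of markup.
SENSITIVITY = {
    "insensitive": 0.00,  # does not shop around for the odd penny
    "average": 0.02,
    "sensitive": 0.08,    # will drive on for a small saving
}

def quote(segment: str, markup_cents: int = 4) -> float:
    """Quote a price: mark up the insensitive, discount the sensitive."""
    s = SENSITIVITY[segment]
    # Scale the adjustment so the least sensitive pay the full markup
    # and the most sensitive receive an equal-sized discount.
    adjustment = markup_cents * (1 - 2 * s / max(SENSITIVITY.values()))
    return round(BASE_PRICE + adjustment / 100, 2)
```

With these made-up numbers, the insensitive motorist is quoted a few cents above base and the sensitive motorist a few cents below, while the average motorist lands in between – exactly the spread the article describes.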

So, quite logically, an algorithm can “up” its prices given certain conditions.  A competing algorithm might notice the price increases, or the price variation, and react by nudging its own prices up, then down.  So it is quite possible to construct logical arguments for why prices will rise with increased – i.e., more perfect – information.
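The ratchet dynamic is easy to demonstrate with a toy simulation. This is not any real pricing bot: the reaction rule, the starting prices, and the price ceiling are all assumptions made for the sketch. Each bot simply matches its rival’s last posted price plus a small nudge, capped at what it believes the market will bear – no communication, yet prices climb as if the bots had colluded.

```python
# Toy simulation of two price bots that never communicate, yet whose
# prices ratchet upward. The rule and all numbers are hypothetical.

CAP = 2.60  # assumed ceiling each bot believes customers will tolerate

def react(rival_price: float, nudge: float = 0.01) -> float:
    """Set my price a nudge above the rival's, never exceeding the cap."""
    return round(min(rival_price + nudge, CAP), 2)

def simulate(start_a: float, start_b: float, rounds: int = 20):
    """Alternate reactions for a number of rounds; return price history."""
    a, b = start_a, start_b
    history = [(a, b)]
    for _ in range(rounds):
        a = react(b)  # bot A reacts to B's posted price
        b = react(a)  # bot B reacts to A's new price
        history.append((a, b))
    return history
```

Starting both bots a few cents below the cap, every round lifts each price slightly until both sit at the ceiling – a rise in prices without any agreement between the algorithms.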

The Economist article looks into how regulation might try to tackle the issue.  But is it a real issue?  Why don’t we accept the idea that we are all different?  Why do we demand that we all pay the same price when in fact we are not all paid the same wages?  Surely price variability should be a logical, even valuable, technique to help allocate resources?  Could the idea extend to groceries?  To water?

It does, to a degree, with fashion and luxury items.  Only the rich or the indebted can afford a Rolex.  So even though prices are set the same for everyone, an allocation is going on already.  Why can we not accept the same principles with lower-priced items where supply is plentiful?  Perhaps those with higher average wages should pay just a little more for their cabbage.

Category: economics-of-information  information-asymmetry  information-theory  price  

Andrew White
Research VP
8 years at Gartner
22 years IT industry

Andrew White is a research vice president and agenda manager for MDM and Analytics at Gartner. His main research focus is master data management (MDM) and the drill-down topic of creating the "single view of the product" using MDM of product data. He was co-chair… Read Full Bio





Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.