Methods for Valuing Data
by Andrew White | March 9, 2020
Diane Coyle's new paper, The Value of Data, is most interesting. The paper's findings and recommendations seem broadly consistent, and they focus on the public policy implications for private markets, as does the recent EU Data Strategy white paper. After reading both and comparing their recommendations to how markets operate and how information drives competitive behaviour, I found questions and challenges in both papers and shared a few of them in a recent blog.
The first reference in Diane Coyle's paper is an Accenture white paper titled Putting a Value on Data. This short paper reads, in fact, like a summary of the first half of Diane Coyle's paper. The Accenture paper is therefore a good brief introduction to the 'why' and the 'how' of valuing data, without all the caveats and public policy implications that Ms. Coyle and the EU develop.
I particularly like the drivers of information value in the Accenture paper. I am not sure we have thought through these in the same way, though we have addressed the same dimensions in different ways. Because Accenture structures the dimensions into a single lens, I find it quite useful. However, there is at least one inconsistency hard-wired into the paper, and you can see its results manifest in Ms. Coyle's paper and the EU Data Strategy.
Data value, so Accenture muses, is driven by several factors, including:
- Exclusivity
- Usage restrictions
- Interoperability and accessibility
- Liabilities and risks
This is a good list overall. But here I believe there is an inconsistency that requires qualification, or that should be examined separately. Exclusivity and usage restrictions are two sides of the same coin. On the one hand, exclusivity may raise the value of data due to its unique properties; on the other, if usage restrictions were relaxed, the value of the data might increase due to increased share-ability. These pull in opposite directions, so the two dimensions should be re-defined so that they work together.
Secondly, interoperability and access are related but not interchangeable. Access is a simple concept: is the data accessible, yes or no? Accenture stretches accessibility to include "and meaningfully" too. But what would be the use of having access to data if it were uninterpretable?
But what makes data understandable or usable? That is the point of the other drivers: they all contribute to the meaningfulness of the data. If there is a gap here, it is that to be interoperable, semantics and context have to be included, and this is hard to do in such a simple model without making it messy. This is probably why Accenture fudged it and put the two concepts together.
In reality, and this is mostly glossed over by both Diane Coyle's paper and the EU Data Strategy, interoperability is no longer about technology. We have been able to connect systems technically, using standards for electronic interoperability, for years. We had this capability with EDI! The real challenge is not technology; it is semantic interoperability, which itself comprises data semantics and process (or use/context) semantics.
These high-level ideas are rare: most IT shops struggle to resolve them, since they find it hard to explain to business people what this means. Yet in my travels many business people intuitively know what this is about. You just need the right language and questions to help them understand the need. Look at the "success" of HealthIT interoperability in the US: you would think it would have been a huge success by now, especially since the US government has mandated interoperability. See Interoperability is Not A Problem for Technology – It is A Problem for Data and Outcomes. But I digress.
The other part I like about the Accenture paper is the simplest: the right method for valuing data. There are three approaches offered that are typical for valuing any asset:
- The income approach: value the data by the net income its use generates
- The market approach: value the data at the price it would fetch in a market
- The cost approach: value the data at what it would cost to replace it
The third method, the cost approach, is the easiest (so the paper says, and I agree), but it has hurdles to overcome. How much would it cost your organization to re-build your customer data file if it were corrupted? A gap is that this approach does not recognize the future use and value of the data, so the monetized data will be under-valued.
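The cost-approach arithmetic can be sketched in a few lines. This is a minimal illustration, not from the Accenture paper: the record counts and per-record costs are hypothetical assumptions.

```python
# Cost-approach sketch: value data at what it would cost to rebuild it.
# All figures below are illustrative assumptions, not from the paper.

def replacement_cost(num_records: int,
                     cost_to_recollect: float,
                     cost_to_validate: float) -> float:
    """Estimate the cost of rebuilding a data file from scratch."""
    return num_records * (cost_to_recollect + cost_to_validate)

# e.g. 500,000 customer records at $0.80 to re-collect
# and $0.20 to re-validate each record
value = replacement_cost(500_000, 0.80, 0.20)
print(f"Cost-approach value: ${value:,.0f}")  # Cost-approach value: $500,000
```

Note what the formula leaves out: nothing here captures the future income the data might generate, which is exactly the under-valuation gap described above.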
The first method, the income approach, is much harder but more intuitive to business people. How much revenue, net of the costs to clean and re-clean data, will a newly improved customer up-selling business process or behavior yield over six months compared to the old approach? Simulation, value stream mapping and economic/financial modeling can help here, and business leaders will grasp the delta improvement very easily. The weakness is that assumptions need to be made about the cost per event for handling and re-handling data.
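The income-approach delta described above can be made concrete with a small sketch. Again, every figure is an illustrative assumption, including the per-event handling cost that the method forces you to estimate.

```python
# Income-approach sketch: value data by the incremental net income a
# data-driven process yields over a baseline. Figures are illustrative.

def income_approach_value(new_revenue: float,
                          old_revenue: float,
                          cleaning_cost_per_event: float,
                          events: int) -> float:
    """Delta revenue over the period, net of the cost to clean and re-clean data."""
    return (new_revenue - old_revenue) - cleaning_cost_per_event * events

# Six months of an improved up-selling process vs. the old one,
# assuming $1.50 per data-handling event across 40,000 events.
delta = income_approach_value(new_revenue=1_200_000,
                              old_revenue=950_000,
                              cleaning_cost_per_event=1.50,
                              events=40_000)
print(f"Income-approach value over six months: ${delta:,.0f}")  # $190,000
```

The structure shows why the method is intuitive (a simple delta against a baseline) and where it is fragile: the result moves one-for-one with the assumed cost per handling event.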
The hardest of all might be the market method. This assumes that you can find a market for the data. If the data were made available to a wide set of users and uses, perhaps a price would emerge, at which point value could be determined. This sounds logical and even easy, and it is a big focus for the EU Data Strategy, which seems to envisage the EU legislating for such a market. But there are challenges. The very essence of what makes data valuable would be undermined if it were widely available. Look back at the tension between exclusivity and usage restrictions: data that is valuable might be exclusive, i.e. our organization gets competitive value from keeping it so. If we share it, our value drops, since other companies might be able to do what we do with the data. And this assumes those other firms can learn from the data too.
Some firms are developing and testing markets for data internally. These are less markets in the true sense of the word and more like private data exchanges, built on things like a catalog or inventory of data. These have merit, but they are not really markets, in which any buyer can publicly come and go and sellers have the choice to enter or leave. Even if compensation is offered to the holders of private data (the EU suggests this), it won't likely cover the cost or value of that data while it is private.
If you are still reading this blog, it's here I should mention that we have published a lot on this topic. We have been writing about data monetization methods for years, but even we still struggle to find all the answers to the challenges outlined above. There are answers, though, and modern best practices for data and analytics governance, data management, and analytics/data science can help too. Looking at what has worked in public markets is a good source. And understanding the context for how data is or might be used to drive unique competitive advantage, as well as collaborative benefits, is required. In fact, thinking less about data and technology, and more about the user and the use case, will help design the right kind of market or exchange.