If you’re only comparing your enterprise data to itself, how do you know if it’s any good?
Delivering high quality data is a goal of all MDM and data governance practitioners. In this pursuit of master data excellence, we’re required to determine the business rules that define exactly how we measure the quality of our data. These rules are a key component of any data governance initiative and are critical to establishing metrics which tie improvements in data to actual business outcomes. So far, so good.
Let’s assume you define the most robust data quality measures and supporting processes ever conceived. You’re providing compelling metrics to the business and you’re able to show how improvements in your data are driving the business forward. Mission accomplished.
However… how do you know how your data stacks up against your competition? You’re moving the needle – but are the improvements you’re making consistent with industry norms, or are you less efficient in your data management than your competitors? If you knew your data quality was already better than any of your competitors’, and that you didn’t need to invest as much as you currently do to maintain that competitive advantage, would you change your approach?
Gartner provides research that allows its clients to create benchmarks to determine what ‘good’ looks like for a myriad of business processes (here is a great example from an IT management perspective) – but data and analytics leaders are essentially flying blind when it comes to understanding how their data quality ranks compared to others in their industries. Data quality is most certainly subjective, but what if your definitions align with those of other companies? If you agree on how to measure quality, could meaningful comparisons be drawn?
Cloud-based MDM software vendors are in a unique position to potentially deliver creative solutions to this situation. Could MDM vendors somehow start producing aggregated metrics that allow their customers to understand how their data quality compares to other companies? Should they? If, as a customer of an MDM provider, you had to give the vendor access to portions of your data in a privacy- and regulation-compliant way so it could produce these aggregated quality insights, in exchange for access to those insights, would you do it?
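For illustration only, one privacy-conscious shape such a vendor offering could take is to publish cohort-level aggregates and suppress any cohort too small to anonymize. A minimal sketch in Python – the tenant IDs, scores, metric, and threshold are all hypothetical, not any real vendor's API:

```python
from statistics import mean, quantiles

# Hypothetical per-tenant completeness scores (0-1) for one shared data
# quality measure, keyed by an opaque tenant ID. No individual tenant's
# score is ever exposed; only cohort aggregates are published.
K_ANONYMITY_THRESHOLD = 5  # suppress cohorts with fewer than 5 tenants


def industry_benchmark(scores_by_tenant):
    """Return anonymized aggregate stats, or None if the cohort is too small."""
    scores = list(scores_by_tenant.values())
    if len(scores) < K_ANONYMITY_THRESHOLD:
        return None  # too few tenants to publish safely
    q1, median, q3 = quantiles(scores, n=4)  # quartile cut points
    return {"mean": mean(scores), "p25": q1, "median": median, "p75": q3}


cohort = {"t1": 0.91, "t2": 0.84, "t3": 0.97, "t4": 0.88, "t5": 0.79}
print(industry_benchmark(cohort))
```

A customer could then compare its own score against the cohort's quartiles without ever seeing another company's data. Real offerings would of course need far stronger guarantees than a simple size threshold.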
I look forward to hearing what you have to say!
The Gartner Blog Network provides an opportunity for Gartner analysts to test ideas and move research forward. Because the content posted by Gartner analysts on this site does not undergo our standard editorial review, all comments or opinions expressed hereunder are those of the individual contributors and do not represent the views of Gartner, Inc. or its management.
Benchmarking CDEs (critical data elements) within an organization is more critical than benchmarking across organizations. We have achieved this in a couple of different ways beyond what a DQ tool can support.
From a business perspective, it is important for us to benchmark the consumable customer population – we term this the non-suspect population.
Then we benchmark the CDEs for the customer entity.
Within an organization, we need semantic consistency and a common understanding of the CDEs.
It would really help if DQ tools focused on these aspects in addition to just reporting metrics, as metrics tied to business concepts are more insightful.
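As a hypothetical illustration of this two-step approach – first restricting to the consumable (non-suspect) population, then measuring each CDE – here is a minimal sketch; all field names, records, and the suspect flag are invented for the example:

```python
# Toy customer records; "suspect" marks records excluded from the
# consumable (non-suspect) population before any CDE is measured.
customers = [
    {"id": 1, "suspect": False, "email": "a@x.com", "tax_id": "123"},
    {"id": 2, "suspect": False, "email": None,      "tax_id": "456"},
    {"id": 3, "suspect": True,  "email": "c@x.com", "tax_id": None},
]

CDES = ["email", "tax_id"]  # critical data elements for the customer entity


def cde_completeness(records, cdes):
    """Fraction populated per CDE, measured only on non-suspect records."""
    consumable = [r for r in records if not r["suspect"]]
    return {
        cde: sum(r[cde] is not None for r in consumable) / len(consumable)
        for cde in cdes
    }


print(cde_completeness(customers, CDES))
```

Tying each CDE score to a business concept like "consumable customers" is what makes the metric actionable, rather than a raw field-level statistic.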
Across organizations, unless we have common taxonomies and a shared definition of who a customer is, aggregated metrics may not be useful. I have worked for two competitors in financial services; their definitions of a customer are quite different.
Thanks for providing insights on how you addressed some of these challenges!