Data quality is almost always worse than anyone imagined when starting a master data management project, and the issues this stirs up have a major knock-on effect for master data projects, which by definition are about arriving at single, trusted versions of key shared data such as customers, products, locations and assets. I was curious to see whether this state of affairs was merely anecdotal or indicative of a wider issue, so my firm recently completed a survey of 192 large organisations (about half of which were from the US) to look at the issue. The survey was sponsored by Informatica and Talend, but had no vendor-specific or vendor-supplied questions.

Firstly, it was interesting to see how organisations rated their own data quality: of the survey respondents, just 12 per cent rated it ‘good’ or better, while 39 per cent reckoned it was poor or worse. The specific data issues identified were inconsistency, inaccuracy and lack of completeness. So far, no real surprise. A third of the companies that responded (and bear in mind that these were companies sufficiently interested in data quality to take a detailed survey) had no data quality programme at all.

One encouraging aspect of the survey was that 54 per cent of respondents had data governance initiatives, which address business ownership of data and pull together the processes needed to resolve conflicting data. In my experience this is a key step, because IT departments typically lack the authority to get lines of business to change their ways when it comes to inconsistent data. At least with a data governance initiative your master data project has a fighting chance. Without one you are asking for trouble.

Data quality stretches across most master data types, with 74 per cent of respondents reckoning that customer data would be a focus of their data quality initiatives, 50 per cent also taking into account product data and 40 per cent financial data. Other types of data mentioned included supply chain, personnel and location data, but it is clear that the issue is by no means confined to customer names and addresses. This is relevant because a large proportion of data quality tools focus exclusively on these fairly well-understood and contained problems. Similarly, plenty of master data technologies were designed specifically for customer or product data. Organisations selecting data quality and master data tools would be wise to choose ones that can help with the broad variety of data that companies actually use. Fixing data quality at source, which will often require tools to enforce business rules, is the only way things will improve.

What I found intriguing was that half the respondents said it was “very difficult to present a business case” for data quality, and that 70 per cent made no attempt to measure the cost of poor data quality. The link is clear: if you do not measure the cost of poor data, you have no idea of the monetary benefit of fixing it, which is the key to any business case. Yet the benefits are out there, often in large denominations. One respondent told us that incorrect forecasting of manpower needs, due to poor quality data about contractors’ end dates, would have led them to recruit 1,000 more contractors than they actually needed. Spotting this problem saved $40m (£24.6m).
This is perhaps an extreme case, but I have seen large savings almost whenever data quality has been properly addressed. One pharmaceutical company was holding dramatically more spare parts inventory than it needed (enough to supply its factories for 90 years), and this insight saved over £2m.

Master data management projects would appear to have a mixed track record so far: only 24 per cent of respondents rated their own projects as ‘successful’ or better. I would suggest that many of those that fail do so for two key reasons:

– Firstly, they have failed to get buy-in for a data governance initiative that would allow them to address inconsistent data definitions at source.

– Secondly, they have failed to take data quality seriously as an integral part of their master data project, or to budget properly for it: an earlier survey of ours showed that data quality took up 30 per cent of master data project effort, yet had been budgeted at less than 10 per cent of project costs.

In summary, the survey confirms that while most firms regard data quality as key to the success of master data projects, a third have no data quality programme at all. Since only a minority of firms measure data quality, let alone what poor data quality is costing them, it is hardly surprising that they struggle to get senior management to take the problem seriously. Data quality is truly the forgotten child of master data management, and while it remains so, master data projects will continue to have the mediocre track record indicated by these results.

Andy Hayler is founder of research company The Information Difference. Previously, he founded data management firm Kalido after commercialising an in-house project at Shell.