Pitching for investment in data management projects should be straightforward – everybody agrees that data has a value. Demonstrating what that value really is and where the ROI is to be found is somewhat more difficult.
Do you agree with the following statement: data is a valuable asset for our business? As a DataIQ reader, there is a 99 per cent chance that you said yes. (Unless you work in a business where the data is in a complete mess and you are hoping to pick up some tips on how to sort it out.)
So how about this statement: we can put an accurate financial value on our data asset? There is a 99 per cent chance that this time, you said no. For the data industry, this gap between the concept and the reality is a significant one.
Data management is constantly pitching for budget to invest in new projects, to sustain existing data assets and to meet the data needs of a changing business. In doing so, it needs to prove to the finance director that expenditure will create value. One side of that argument is well established – arguing that ROI will arise either from cost savings or incremental revenue.
A recent survey by Experian QAS even put a headline-grabbing figure on this – £8 billion. That is what it estimates UK businesses could generate in incremental profit from improving the quality of customer and prospect data. The figure is an extrapolation from the average profits which companies in the survey calculated they had generated as a result of getting data quality right, ranging from £431,000 at SME level up to £9,262,000 for large organisations.
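As an illustration only – the survey's precise methodology is not published here, and the company counts below are hypothetical – this kind of extrapolation simply multiplies an average per-company profit figure by the number of companies in each size band:

```python
# Illustrative sketch of a survey-style extrapolation.
# Only the average profit figures come from the Experian QAS survey
# quoted above; the per-band company counts are hypothetical.
avg_profit = {"sme": 431_000, "large": 9_262_000}   # GBP, from the survey
uk_companies = {"sme": 15_000, "large": 160}        # hypothetical counts

total = sum(avg_profit[band] * uk_companies[band] for band in avg_profit)
print(f"Estimated UK-wide opportunity: £{total / 1e9:.1f} billion")
```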
The other side of the business case lags far behind – the ability to show that data in its own right, together with the systems and processes that support it, has an asset value just like a new factory, product patent or fleet of vehicles.
A recent post in the Chief Data Officer group on LinkedIn asked how data’s asset value could be determined and whether it could be applied right down to specific tables in the database. It attracted few responses, with most pointing back to attempts over the last decade that have generally failed to gain traction.
“Nobody disputes that data is an asset. That is a truism that is indisputably correct,” says Colin Rickard, managing director, EMEA, DataFlux. “It is seen as very important, but how many companies have you seen creating a data quality centre of excellence and investing a million pounds a year into it?”
His company has found success over the last year with the launch of its enterprise data management solution. But Rickard says clients are not investing because they recognise data’s core role within the business or see a need to develop its asset value.
“Where there is no regulatory driver, we have to produce very clear evidence of the benefits, which are usually financial. We have to be much more precise about our proposals and show the ROI,” he says. Arguing that investment is justified because call centre agents, for example, will save five minutes each per day does not cut it because, as he notes, directors will respond that this time will simply be spent on having another cup of tea.
Outside of regulated industries, there is no appetite for centralised data management functions. Instead, investment is based on understanding in detail the impact which product or SKU data is having within a business. At one Scottish brewer, data quality is now being tackled across the supply chain to identify where errors lead to over-ordering of raw materials or unnecessary stock holding. The data quality director reports directly to the finance director.
Rickard believes that the economic climate means investment will be made in data that supports core operating processes and can be shown to drive efficiencies. “Data quality does drive revenue, but it is hard to justify investment using an argument around improving customer data to get better cross- and up-sell,” he says. “In my view, that traditional marketing argument won’t cut it.”
He cites one client where the goal of the data management programme is to reduce the company’s tax bill. This will be achieved by ensuring goods being imported are categorised to attract the lowest rate of duty. Offering simple drop-down menus to logistics managers could result in multi-million pound savings.
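A minimal sketch of that mechanism, assuming hypothetical commodity codes and duty rates (the real tariff schedule is far larger), shows why constraining data entry to a validated list matters:

```python
# Hypothetical duty-rate lookup: forcing entry from a validated list
# of categories means goods cannot silently fall into a default,
# higher-duty classification.
DUTY_RATES = {            # hypothetical commodity codes and ad valorem rates
    "8471.30": 0.0,       # e.g. portable computers, duty-free
    "8528.72": 0.14,      # e.g. televisions, 14 per cent
}
DEFAULT_RATE = 0.14       # what misclassified goods might attract

def duty(value_gbp: float, code: str) -> float:
    """Duty payable on a consignment, given a validated commodity code."""
    return value_gbp * DUTY_RATES.get(code, DEFAULT_RATE)

# A £10m consignment misclassified at 14 per cent rather than zero
# attracts an avoidable £1.4m in duty.
print(duty(10_000_000, "8528.72") - duty(10_000_000, "8471.30"))
```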
That is the type of argument that finance directors understand. The accountancy profession is oriented towards optimising costs and taxation for the business. It is very nervous about investment cases built on future revenue growth (understandably so in the wake of WorldCom and Enron).
As a mindset, this explains why valuation of data as an asset on the balance sheet has been resisted. In the US, the Accounting Principles Board allows data which is purchased to be placed on the balance sheet, for example, but not data which is developed internally. Yet in most businesses, it is precisely this in-house data that is of greatest value, not least because it is often unique.
Other attempts to value data tend to look at downside risks as the drivers of value. Global Data Excellence has a Data Excellence Management System which it describes as “a data governance application which measures and governs non-compliant data in terms of dollars and Euros and pounds.” Insurance providers tasked with meeting data quality standards as part of Solvency II regulations can readily calculate the value of accurate data by offsetting the cost of data quality improvements against the capital reserves they would otherwise have to set aside, for example.
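A worked illustration of that offset, with entirely hypothetical figures, makes the shape of the calculation clear:

```python
# Hypothetical Solvency II-style offset: the value of accurate data is
# the capital it releases, net of what the quality programme costs.
reserve_without_dq = 120_000_000  # hypothetical required capital reserves
reserve_with_dq = 100_000_000     # hypothetical reserves with proven data quality
dq_programme_cost = 3_000_000     # hypothetical annual programme cost

capital_released = reserve_without_dq - reserve_with_dq
net_value = capital_released - dq_programme_cost
print(f"Net value of accurate data: £{net_value:,}")  # £17,000,000
```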
In other sectors, there may be a clear linear relationship between data quality and operational costs which moves data closer towards an asset-based valuation. Royal Mail is an obvious example. “The ROI on data is quite easy because the data we use creates efficiencies in the business,” says Keith Jones, head of data services. “We know the price to deliver a letter at address level and, when it goes wrong, how much it costs to bring it back and then take it out again to the correct address.”
The company spent some years establishing the core cost base of these services, so it has good visibility of how incorrect data generates unnecessary haulage miles, sortation staff and facilities. Those internal costs mean Royal Mail can also make the case externally to support other companies planning to invest in data quality. The organisation handles 200 million returned items of mail each year and estimates that each one generates £1.33 in costs.
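On those two figures alone, a back-of-the-envelope calculation puts a floor under the annual cost of bad address data:

```python
# Back-of-the-envelope cost of returned mail, using the figures
# Royal Mail quotes above (200 million items, £1.33 per item).
returned_items_per_year = 200_000_000
cost_per_item_gbp = 1.33

annual_cost = returned_items_per_year * cost_per_item_gbp
print(f"Annual cost of returned mail: £{annual_cost / 1e6:.0f} million")  # £266 million
```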
“Royal Mail is an unusual business, although it is not unique, in that it has to have a lot of data just to do business. It automatically captures a lot. My job is then to work out whether that data has external usage possibilities,” says Jones.
Business cases for companies buying this data may be driven by validation of target identity and location, fraud reduction or cost savings from data quality improvements. One notable dimension within this argument is that negative data may be as valuable as positive data – having a record of incorrect addresses or fake identities allows them to be identified and removed from a database, thereby eliminating risk and waste.
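A minimal sketch of how negative data earns its keep – the suppression file and mailing list here are hypothetical, and the £1.33 unit cost is borrowed from Royal Mail’s returned-mail figure above:

```python
# Hypothetical suppression run: records matching a file of known-bad
# addresses are removed before mailing, and the avoided cost counted.
COST_PER_UNDELIVERABLE_GBP = 1.33  # Royal Mail's per-item figure, quoted above

known_bad = {"1 Fake Street", "99 Gone Away Road"}   # hypothetical negative data
mailing_list = ["1 Fake Street", "10 Real Avenue", "99 Gone Away Road"]

cleaned = [addr for addr in mailing_list if addr not in known_bad]
saved = (len(mailing_list) - len(cleaned)) * COST_PER_UNDELIVERABLE_GBP
print(f"{len(cleaned)} record(s) mailed, £{saved:.2f} of waste avoided")
```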
Even so, Jones recognises that, “it is a big challenge for people explaining how data has value and to make a business case around that. Explaining how we see it having an impact internally can give a company confidence that data does have a value and can also drive value. The bigger the organisation, the bigger the scale of the problem and the potential loss.”
Essential as Royal Mail’s data is to virtually every business in the UK, there is unfortunately evidence that investment in data quality improvements may have stalled – Jones points out that the number of returned items of mail it handles has stayed the same for five years. It is also worth pointing out that Royal Mail does not have data on its balance sheet as an asset, despite having one of the clearest arguments for this to happen.
Further proof of this emerged in the recent study carried out by Experian QAS, “Data quality: the untapped revenue potential for UK businesses.” It found that around a fifth of the companies it surveyed have not invested in enhancing their data quality over the last two years. The need to cut back on anything considered non-essential just to stay afloat may help to explain this funding gap.
Another reason may be found in the fact that two-thirds of companies that have invested in data quality have not quantified the impact of the improvements. In other words, whatever business case was made to secure the necessary funds, no measurements have been taken to show whether it turned out as expected.
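A sketch of the kind of simple before-and-after check that would close that gap – all figures hypothetical:

```python
# Hypothetical post-investment check: compare error-driven costs before
# and after a data quality project, then express the result as ROI.
project_cost = 250_000                         # hypothetical project spend
errors_before, errors_after = 40_000, 12_000   # hypothetical annual error counts
cost_per_error = 9.50                          # hypothetical cost to fix or absorb one error

benefit = (errors_before - errors_after) * cost_per_error
roi = (benefit - project_cost) / project_cost
print(f"Annual benefit £{benefit:,.0f}; ROI {roi:.0%}")
```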
Joel Curry, UK managing director at Experian QAS, argues that this does not excuse failing to invest in data quality. “It is an issue that is probably never going to go away – it is endemic in data management, especially as databases will get ever larger.” Other factors driving this include the growing number of consumers shopping online – now up to 22 million – who can introduce data entry errors every time they register or transact.
Some of these data problems can remain invisible. Curry points to one US retailer which launched an online operation and was surprised that it did not receive a single product return in the first three months. The reason? “Its delivery rate was so poor because of data errors – we found a 63 per cent error rate,” he says.
The five-digit American ZIP code exacerbates these problems, as it covers an average of 3,500 households. As a result, 23 per cent of all US mail is misaddressed, 17 per cent so badly that it affects delivery, with 2.7 per cent undeliverable. UK rates do not hit those dizzying heights, but problems still persist.
Data quality is often used as a proxy for data value because its impact is easier to quantify, from the production costs of undelivered mail to the manual workarounds involved in fixing errors. At a higher level, the requirements of Basel II and Solvency II mandate a level of data quality in return for lower capital and liquidity requirements. “Stop-loss has a big impact,” says Curry.
Elsewhere, these arguments will get the business case for data management only so far. Large-scale projects tend to be expensive to maintain, so justifying them on the basis of productivity gains or incremental revenue may not work.
“There tend to be those few magical windfalls that data can bring to marketing, usually targeting-related, that can substantially reduce wastage as well as fuel revenue into the millions,” says Adam Williamson, head of data at Proximity London. “However, once achieved, it is about using data to refine and finesse, which still offers an upside, but it is harder to justify further large-scale data management costs. This shouldn’t be needed at this stage of data maturity anyway, unless there is a new product line or merger in the offing, for example, or a major operational benefit.”
He believes that investing in data management is still a “no brainer” even when directly-attributed benefits are hard to find. Much rests on taking a leap of faith. “There needs to be a measurement ethos in the organisation to prove this, otherwise the benefits will go undetected. But the fact that most organisations have rapidly expanded their investment in data and analysis shows that at least they have an instinct that it is the right thing to do, if not the facts to support this decision,” says Williamson.
Faith is not something that accountants are comfortable with. Yet the search for facts to prove the value of data remains relatively fruitless. Rather like the source of the Nile, there may be no single defining factor, but multiple tributaries feeding the undoubtedly powerful force of data in business. Companies that stop paddling soon discover they are facing the wrong way.