Accountancy standards and intangible assets
Accountants and investors alike face a simple problem when valuing businesses and creating a balance sheet – the value of physical assets, cash at the bank, creditors and debtors rarely adds up to the market valuation placed on the company. Historically, this gap has been covered by the term “goodwill”, a catch-all which is allowed under Generally Accepted Accounting Principles (GAAP).
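To make that gap concrete: in an acquisition, goodwill is simply the residual between the price paid and the fair value of the identifiable net assets acquired. A minimal sketch, with entirely hypothetical figures:

```python
# A minimal sketch of the gap described above: under GAAP, "goodwill" is the
# residual left when a purchase price exceeds the fair value of the
# identifiable net assets acquired. All figures are hypothetical.

purchase_price = 100_000_000   # price paid for the business (hypothetical)
tangible_assets = 45_000_000   # physical assets and cash at the bank
receivables = 20_000_000       # debtors
payables = -15_000_000         # creditors (a liability, so negative)

net_identifiable_assets = tangible_assets + receivables + payables
goodwill = purchase_price - net_identifiable_assets

print(f"Net identifiable assets: {net_identifiable_assets:,}")  # 50,000,000
print(f"Goodwill (the unexplained gap): {goodwill:,}")          # 50,000,000
```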
A combination of global trading and interlinked economies has been driving a need for more commonality of accounting practices between territories and the alignment of accountancy standards. At the same time, the rise of digital and technology companies which often have little in the shape of physical assets, yet command huge valuations, has made it ever more necessary to explain the difference.
The solution has been to account for this gap through intangible assets, covering customer relationships, patents, licences, brands and data. Two standards offer methods for arriving at a valuation for these: IAS 38 (the international standard on intangible assets) and FRS 102 (its UK counterpart).
Brands are valued using these rules, yet seem to emerge from the process with much higher valuations than data, despite the obvious contribution the latter makes to value creation. The process used for brand valuation is, however, less robust than it might seem, since it rests on assumptions and unknown variables. For example, it would not be possible for the most highly valued brand in the world – Apple – to buy in a replacement brand should its own suffer a catastrophic impairment, yet replacement cost is one of the ways intangible assets such as brands are assessed within these standards.
During the discussion it became clear that frustration with the accounting standards is widely shared, and that the reductive description of data as “customer lists” within them simply does not explain why data is now delivering significant value for those companies that invest in it.
Pick a methodology and get behind it
Behind the scenes, a number of initiatives are underway that use data and analytics, combined with econometrics, to create a new model for data valuation. These initiatives are not coming from within the accountancy profession; rather, they are being driven by people with direct knowledge of the role data plays in modern business management.
The term “value” is often used more loosely, without the rigour of accountancy standards or financial metrics. For example, one company that provides independent data on oil and gas reserves is considered to create value for its client base, such as industry analysts. Its parent company also manages credit card transaction data from which it looks to extract additional value beyond its primary purpose. Both of these use cases reflect what economists call an “externality”: a benefit created for a third party beyond what the first party intended. To capture this as formal value creation (since those third parties will pay for the benefit), it needs to be internalised by the first party, which is where a methodology becomes necessary.
One bank uses a “shadow P&L” approach – using inferred value and proxy measures to demonstrate the value data is driving for the business, without reaching a full accounting standard. It is also starting to offer its analytical services to external clients and wants to be able to measure the impact it is having.
Business impacts are widely used as proxy measures for data value: for example, the gaming company which reduced churn from 5% to 1.4% per month through the application of analytics. Its UK chief data officer is keen to identify how data value as a concept can be promulgated across five other global divisions. Some organisations are beginning to adopt a cross-charging model in which the data office is paid both for the cost of its input and a proportion of any incremental revenue earned or costs saved.
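As a rough illustration of how such a business impact can stand in for data value, the sketch below monetises the churn reduction quoted above; the subscriber count and revenue-per-user figures are hypothetical assumptions, not the gaming company's numbers.

```python
# A rough illustration of a business impact used as a proxy measure for data
# value: the revenue retained when analytics cuts monthly churn from 5% to
# 1.4%. Subscriber count and average revenue per user are hypothetical.

subscribers = 1_000_000   # active players at the start of the month (hypothetical)
monthly_arpu = 10.0       # average revenue per user, per month (hypothetical)

churn_before = 0.05       # 5% of players lost per month
churn_after = 0.014       # 1.4% after applying analytics

retained_players = subscribers * (churn_before - churn_after)
retained_revenue = retained_players * monthly_arpu

print(f"Players retained per month: {retained_players:,.0f}")   # 36,000
print(f"Revenue retained per month: £{retained_revenue:,.0f}")  # £360,000
```

A figure like this is exactly the kind of input a shadow P&L or cross-charging model can draw on, even though it falls short of an accounting standard.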
One financial services organisation uses the equation data value = volume × quality × use × inherent value. But this is very inexact compared with the ability of asset managers to calculate to the penny where their cash assets stand.
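The equation can at least be made mechanical. In the sketch below, each dimension is scored on a normalised 0–1 scale; the scale, the validation and the example scores are illustrative assumptions, not the organisation's actual method.

```python
# A sketch of the multiplicative equation quoted above. None of these
# dimensions has an agreed unit, so each is scored here on a 0-1 scale;
# the scale and the example scores are assumptions for illustration only.

def data_value(volume: float, quality: float, use: float, inherent: float) -> float:
    """Score a data set as volume x quality x use x inherent value.

    All inputs are normalised to [0, 1]; a zero on any dimension
    (e.g. data that is never used) drives the whole score to zero,
    which is the defining property of a multiplicative model.
    """
    for name, score in [("volume", volume), ("quality", quality),
                        ("use", use), ("inherent", inherent)]:
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {score}")
    return volume * quality * use * inherent

# A well-curated but under-used data set scores low despite its quality:
print(round(data_value(volume=0.8, quality=0.9, use=0.1, inherent=0.7), 4))  # 0.0504
```

One useful property of the multiplicative form is that a zero on any dimension (notably use) drives the whole score to zero, which anticipates the argument later in this section that data's value is derived from use.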
Gartner has proposed six dimensions of information value:
- intrinsic
- business
- performance
- cost
- market
- economic
Gartner claims these dimensions can be used to determine data value to a level of robustness that allows it to be used in cash flow forecasting and net present value discounting. It is worth noting, however, that there are no risk, quality or impairment dimensions in this model.
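Whatever one makes of the dimension scores, the discounting mechanics themselves are standard. A minimal sketch of net present value applied to cash flows attributed to a data asset, with hypothetical figures; note that, as observed above, nothing in it adjusts for risk, quality or impairment.

```python
# A minimal sketch of the net present value discounting Gartner points to,
# applied to estimated cash flows attributed to a data asset. The cash flows
# and discount rate are hypothetical, and the model's gap noted in the text
# applies here too: nothing adjusts for risk, quality or impairment.

def npv(discount_rate: float, cash_flows: list[float]) -> float:
    """Discount a series of annual cash flows (year 1 onwards) to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Incremental annual cash flows the data asset is estimated to generate:
data_cash_flows = [120_000, 150_000, 150_000, 100_000]  # hypothetical, in £
print(f"NPV at 8%: £{npv(0.08, data_cash_flows):,.0f}")
```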
Intrinsic value or derived value?
Data is unlike most other assets since it is not expended through use – in fact, its value can be increased through use and multiplied through combination with other data sets. The value of data also depends on the questions being asked by the organisation and the decisions being made. This implies that its value is not intrinsic – few organisations pay to capture data for its own sake – but entirely related to value derived from use.
Open banking has foregrounded the issue of data value for financial services since it appears to liberate customer data from the control of a single organisation. This could actually erode the value of customer data if large numbers of consumers seek to take advantage of the new transactional data portability rights enacted by this legislation.
Yet this shift in the balance of where value is created from data is attracting serious attention. In Japan, the government has backed databanks launched by Fujitsu and NTT which operate individual data stores through which consumers can manage and release their personal information to organisations they are transacting with. Ultimately, these are likely to be valued as assets which might create robust examples for accountancy to follow.
What is certain is that investors and analysts want to find additional methodologies that allow for asset valuation beyond what can be strictly observed. In the world of asset management, measures of environmental, social and corporate governance (ESG) are increasingly being used to improve the long-term risk-adjusted returns on investments by identifying assets where ESG is being positively addressed and risks are being mitigated as a result. These measures are not only intangible; they also require many assumptions and proxy measures.
Data ethics and data value
With the creation of the Centre for Data Ethics and Innovation, the UK government has made a clear statement about the link it sees between these two activities. It is not alone in wanting to foreground this aspect of the value chain – the University of Edinburgh, for example, has a vision to become a centre of excellence for data-related issues, including ethics.
Much of what is meant by data ethics is still in play and no commonly-agreed standard has yet emerged. The Open Data Institute has developed a data ethics canvas which is a useful tool, although it rather ignores the more complex dimensions of ethical considerations, such as self-denying or “break glass” situations, which will increasingly arise with the deployment of AI and automation.
It is also worth noting that the current focus on data ethics is a major shift from the historical emphasis on data monetisation, out of which the desire to develop data valuation emerged. If both aspects of this discussion can be harnessed to drive a methodology which not only identifies the true business impact of data, but also how ethical data use deepens customer relationships – one of the existing dimensions of intangible asset valuation – then the result could be of great value to everybody in the industry.