Background
Data is valuable. That is the one thing on which all parties can agree. After that statement, however, things quickly fall apart. How valuable? By what measure? As an asset class or as a component of GDP? And how to tax that value appropriately?
According to The Boston Consulting Group, the quantifiable benefits from personal data across Europe will reach €1 trillion annually by 2020, of which two-thirds will accrue to consumers and one-third to businesses. Domestically and at a more granular level, the Centre for Economics and Business Research has estimated that open banking may eventually boost UK GDP by more than £1 billion annually.
It is the scale of those numbers that gets economists interested as they seek to explain where this value arises, and this is where the difficulties start. Dig into the BCG forecast, for example, and it soon becomes apparent that much of it rests on lower costs to consumers – presumably offset by lower revenues for businesses, potentially a zero-sum transfer – or on access to services provided free by digital businesses in exchange for using data to target ads, which creates a cost base for those platforms that is not necessarily covered by revenues (cf, the loss-making Twitter, Uber, et al).
There is broad agreement that the use of data and analytics enhances company performance – according to the European Commission, “even limited use of big data analytics solutions by the top 100 EU manufacturers could boost EU economic growth by an additional 1.9% by 2020.” The most specific calculation attempted on this relates to Transport for London and its open data strategy. Deloitte estimated this to have contributed up to £130 million per year to the London economy, although the figure is driven as much by non-financial factors – time savings for travellers, the creation of high-value jobs – as by reduced costs for TfL.
But as HM Treasury acknowledged in “The economic value of data: Discussion paper”, issued in August 2018, “data analytics is still in its relative infancy and in some cases this failure to recognise the potential value of data may result from a lack of established use cases or a detailed evidence base.”
This whitepaper considers the current thinking among economists around the value of data at a macro and micro-level, looking at any implications for the specific valuation of data as a corporate asset and the potential for its future taxation.
Data as an intangible asset
“Data…is not generally understood to be property.” That statement by HM Treasury underlines the challenge facing organisations looking to invest in data and analytics (D&A) and then to quantify the resulting benefit. Invest in a new manufacturing plant that is more efficient and the company gets two benefits: increased production and a saleable asset. While the first will also derive from D&A, the second may not.
For accountants and auditors, data is classed as an intangible asset for which there is a specific accounting standard (IAS 38). It is also worth noting that valuation of these intangibles generally only occurs during the course of a “business combination” for which there is also a standard (IFRS 3) and that, if data undergoes valuation this way, it is then subject to an annual impairment assessment and any change in value has tax implications. As a result, few businesses have gone to the effort of achieving a valuation of their data which can be formally placed on the balance sheet.
Yet the importance of getting this right is clear from the 2017 Purchase Price Allocation Study, an analysis of merger and acquisition deals carried out by PPAnalyser. It found that up to 75% of deal value is accounted for by intangibles, split between goodwill and identified intangible assets (such as brands, patents and copyrights). While service companies typically account for 84% of their value through intangibles overall, manufacturing firms had the highest proportion of identified intangible assets, at 41%.
It might reasonably be expected that data assets now form a sizeable part of those intangibles, especially considering the market capitalisation put on companies which are primarily driven by these (cf, Facebook, Google). Yet when deal-makers and their accountants look to break down intangible assets into more granular categories, data in particular loses out badly compared to existing customer relationships (by which is meant contracts and order books), brands or patents.
This is because the relevant accounting standard treats data as “customer lists” and values it against market rental or sale prices. What this approach completely misses are the ways in which data can generate value both within the business that holds it and externally either through sharing, licensing or sale. (It is worth noting that there is no interest or appetite within the accounting profession to revisit or update this standard, not least because of the lack of pull-through from clients.)
Nonetheless, economists are looking at this issue and point to three properties through which data creates value, each of which also complicates its valuation:
1 – Non-rivalry: a single piece of data can be used in multiple algorithms and applications at the same time without necessarily suffering decay, erosion or being destroyed in the process. As a result, it is much harder to value than an asset which is only capable of being in one place at one time (such as manufacturing plant) or which is consumed during its processing (such as raw physical materials).
2 – Positive externalities: while data can reveal new findings and insights when it is aggregated, linked and analysed, the benefits may not be directly foreseeable and do not always accrue to the data creator or controller. Microsoft has been running tests to establish the marginal effect of new data on predictions generated by machine learning, in part to allow for better forecasting of this impact.
3 – Economies of scope: Merging two complementary datasets may produce more insight than keeping them separate. Again, this means that the potential value of data may not always be foreseeable to the data controller.
This third property in particular supports the market valuations being put on digital platforms where an intangible asset such as customer relationships has no reliable value (since the service is free and there is no contractual barrier to exit), whereas the ongoing insight gained from interactions may yield future innovations and even revenue streams.
Bases for taxing data
A key objective for economists is to develop theories that can inform political practices which capture value for social goals, primarily through taxation. In this respect, the current situation with data is problematic, either because companies that capture and use it in their processes are not capable of valuing it as a taxable asset or because consumers exchange it at no charge in return for free services.
Microsoft has been considering this issue, as discussed in the whitepaper “Should we treat data as Labour? Moving beyond ‘free’” by Imanol Arrieta Ibarra, Leonard Goff, Diego Jimenez Hernandez, Jaron Lanier and E. Glen Weyl. As they point out, this data-value exchange model “undermines market principles of evaluation, skews distribution of financial returns from the data economy and stops users from developing themselves into ‘first-class digital citizens’.”
One market in which this could have a significant impact is artificial intelligence, where users contribute significantly to the development of tools by building or evaluating models (through platforms like GitHub), grading outputs such as translations, or even acting as product reviewers. The authors note that “the free data model has made productivity-related data much less accessible than consumption-oriented data” because these inputs from users are locked into the platforms that capture them, rather than being shared.
In economists’ terms, the current approach to establishing either a specific valuation or to a broader economic value is known as “Data as Capital” (DaC). This treats data as “natural exhaust” from consumption to be collected by firms. Indeed, this is the very term used by one of the first large-scale studies in 2012 by McKinsey. The benefits from DaC almost entirely accrue to companies through their digital platforms and AI innovations, leading to the dominance in the digital and data economies of a few large players.
Yet it is evident that a truly competitive market for data would recognise the contribution made by users as significant. As a result, a growing number of economists are looking at the option of “Data as Labour” (DaL). In this model, the efforts made by users are recognised as creating marginal value in the same way as productive employees. Data is treated as the possession of users, just as labour is, with its benefits primarily accruing to them. Regulation and taxation policy would then limit the monopsony power of large-scale digital platforms (where monopsony is a market in which there is only one buyer).
A couple of issues arise from adopting the DaL model. The first is that users would demand compensation for their efforts, which could depress the profits of platforms. If those platforms refused to return a greater share of that value, it is conceivable that a “data labour union” could emerge to bargain collectively with the major firms or even call a strike.
How payments would be made and at what level of value they kick in is the subject of considerable effort, including at Microsoft, where regularised measures of the marginal value of data points are being designed to support transparent and efficient payments.
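One naive scheme for such payments – purely illustrative, and not Microsoft's actual design – would split a payment pool among users in proportion to the measured marginal value of the data each contributed; the user names, scores and pool size below are invented:

```python
# Hypothetical: allocate a payment pool pro rata to each user's
# measured marginal data value (however that value is estimated).
def allocate_payments(marginal_values, pool):
    total = sum(marginal_values.values())
    if total == 0:
        # No measurable contribution: nothing to pay out.
        return {user: 0.0 for user in marginal_values}
    return {user: pool * value / total
            for user, value in marginal_values.items()}

# Invented per-user marginal-value scores and a £1,000 pool.
contributions = {"user_a": 0.08, "user_b": 0.02, "user_c": 0.10}
payments = allocate_payments(contributions, pool=1000.0)
# Proportional split: roughly 400.0 / 100.0 / 500.0
```

A real scheme would need to settle harder questions the pro-rata split ignores – minimum payout thresholds, gaming of the value metric, and interactions between users' datasets – which is precisely why the measurement work matters.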
Conclusions
Where economists within and outside government start to tread, economic policy is sure to follow. For some years now, there has been concern about how to get the dominant digital platforms to contribute more to the economies in which they thrive by paying more tax. Existing strategies are clearly flawed and are either worked around through abuses of copyright (ie, local operations having to pay a substantial royalty to use a brand on which the parent holds the copyright) or subverted by local tax authorities (ie, Dublin, Luxembourg) granting sweetheart deals.
The value being derived from capturing, analysing and deploying data has become ever more visible, leading to attention being focused on how to capture some of that value for the public and social good. With GDPR, the new regulatory environment makes it clear that a rebalancing of rights is taking place. A similar rebalancing of the data-value equation will undoubtedly follow, although it may take longer to arrive than the Regulation did.
With the idea of taxing data as labour gaining favour, organisations could find that their own business intelligence will serve as a tool to determine tax liabilities, by measuring changes in the volume, accuracy and variety of data held from one year end to the next. Even the GDPR-mandated algorithm log keeping could serve to show growth in data value.
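The year-end comparison described above is, mechanically, trivial for any BI function to produce. As a sketch – the snapshot figures and metric names are invented, and real assessments would need far richer measures of accuracy and variety:

```python
# Invented year-end snapshots of simple data-holding metrics that a
# BI function could already report: record volume and field variety.
snapshot_2023 = {"records": 1_200_000, "distinct_fields": 85}
snapshot_2024 = {"records": 1_500_000, "distinct_fields": 102}

def yoy_growth(prev, curr):
    """Fractional year-on-year change for each tracked metric."""
    return {k: (curr[k] - prev[k]) / prev[k] for k in prev}

growth = yoy_growth(snapshot_2023, snapshot_2024)
# records grew 25%; distinct fields grew 20% – deltas of the kind a
# data-as-labour tax assessment might draw on.
```

The hard part is not the arithmetic but agreeing which metrics proxy for taxable data value, which is where the valuation debate above bites.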
While there will be plenty of skilled professional advisers to offer mitigating solutions, this is likely to be the moment when data and analytics grows up. And it will also find itself fighting for support with one hand tied behind its back, since it is unable easily to demonstrate the positive value of data as an asset to the level demanded by auditors. Advisers are working on that, too. The question is which will get over the line first – the asset valuation or the tax bill?