Further down the scale, however, organisations are currently left to make their own choices about the impact of their use of artificial intelligence, whether that involves machine learning or newer solutions like generative AI (genAI).
During the two roundtables on the subject, members discussed how to navigate the ethical landscape of AI, balancing the drive to leverage the potential of data with the continuing need to prioritise data governance and security.
Discussions started with a DataIQ member explaining the journey they had been on over recent years – even before the rise of genAI – highlighting the importance of balancing innovation with responsibility.
The value of creating an AI forum
The need to establish a forum, inviting diverse voices from various departments to discuss data, AI and analytics, was an approach most members endorsed and, in many cases, had already put in place. Typically, these forums included the traditional governance functions such as data protection (DP), information systems (IS) and information governance, but also welcomed perspectives advocating for the customer, such as those from marketing and sales. Vendor management was also actively involved, fostering a collaborative and inclusive environment.
A key principle emerged from these discussions: responsibility should not rest solely on one individual. Ethical decisions require diverse input and perspectives, so a collective approach is essential. But while a collective approach is needed, a common framework is less appropriate: with so many variables at play across different geographies and industries, there is no one-size-fits-all solution.
Members also discussed how to prioritise which use cases and initiatives the forum focused on, with medium- and high-risk scenarios brought before the group. For some, this was enabled by an AI inventory: business units and teams using AI were asked to bring their examples so these could be properly scrutinised, largely in the context of “what would the consumer feel about this initiative?” and “is this likely to be a Daily Mail front page?”. Some had used the OECD principles as the basis for their framework.
To enhance awareness and education, the industry is borrowing strategies from other sectors. For instance, a “data yes” checklist, inspired by the banking sector, consists of five questions for non-technical teams; any team unable to answer all five affirmatively is referred to the forum.
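As an illustration only, the sketch below shows how such a checklist-and-referral gate might be wired up. The question wording, the ChecklistResult structure and the triage function are all hypothetical, since the roundtable did not share the actual banking-inspired questions.

```python
# Hypothetical sketch of a "data yes" checklist gate: five yes/no questions
# for non-technical teams, with any answer that is not an unqualified "yes"
# routing the initiative to the AI forum for scrutiny. Question wording is
# invented for illustration; the actual checklist was not published.

from dataclasses import dataclass

QUESTIONS = [
    "Do we have a lawful basis for every data source this initiative uses?",
    "Can we explain the model's outputs to the affected customer?",
    "Has the data protection officer reviewed the processing involved?",
    "Would the average consumer be comfortable with this use of their data?",
    "Are we confident this would not embarrass the organisation publicly?",
]

@dataclass
class ChecklistResult:
    initiative: str
    answers: list[bool]  # one yes/no per question, in order

    def refer_to_forum(self) -> bool:
        # Any "no" (or "unsure", recorded as False) triggers referral.
        return not all(self.answers)

def triage(result: ChecklistResult) -> str:
    if result.refer_to_forum():
        return f"Refer '{result.initiative}' to the AI forum for review."
    return f"'{result.initiative}' may proceed under standard governance."

# Example: a marketing use case that cannot answer question four affirmatively.
print(triage(ChecklistResult("churn-propensity email targeting",
                             [True, True, True, False, True])))
```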
Responsible AI or ethical AI?
One notable point of discussion was the terminology used. Some suggested labelling it as responsible AI rather than ethical AI, emphasising adherence to the principles of fairness, accountability and transparency. For those committed to true ethical practices, the philosophy goes beyond mere compliance, urging individuals to go above and beyond legal requirements. Indeed, where projects involved vulnerable consumers, ethical considerations carried even more weight.
Whether the goal is ethical AI or responsible AI, it was felt that it must be led from the top of the organisation. But when it comes to regulating what is right or wrong, who decides? It was argued that every line of code is an ethical choice, and thus technology embodies the ethics of the people who write it.
Such is the noise surrounding responsible AI that BSI (the UK’s national standards body) has just launched an international standard, BS ISO/IEC 42001, intended to help organisations use AI responsibly, addressing considerations such as non-transparent automated decision-making, the use of machine learning (ML) rather than human-coded logic in system design, and continuous learning. It is an impact-based framework that sets out requirements for context-based AI risk assessments, with detail on risk treatments and controls for internal and external AI products and services. It aims to help organisations build a quality-centric culture and play their part responsibly in the design, development and provision of AI-enabled products and services that can benefit them and society.
AI for good
For some members, considerations of how AI can be used ethically to support sustainability and inform sustainable decisions in line with policy and regulations are already underway. Further, thought is being given to the ethical implications where the use of AI can be linked to increased CO2 emissions, and how this may need to be addressed within the organisation’s sustainability policy.
One member meanwhile addressed the ever-present sociological concern that AI threatens to replace human jobs, pointing to positive examples of AI complementing the role of the human in the workplace, such as in the automobile industry. In the right context, AI supports human activity with data and saves time, but there is a balancing act to be struck: keeping a human in the loop while leveraging the full potential of automation and AI, and complying with data governance and security requirements.
Training, training, training
Ensuring the business understands the importance of sense-checking its AI initiatives was also debated, with some members expressing frustration at the lack of understanding of AI and ML among key personnel in their respective companies. In some cases this extended to the data protection officer (DPO) and procurement teams: where they were not au fait with such initiatives and the data being processed through these tools, the result was a disjointed approach. Called out for their weaker understanding of data and technology were the marketing and customer experience functions, which are typically hotspots of AI.
Regulated industries
For heavily regulated industries, views differ, but there are as yet no direct regulations for AI in Europe and no long-term forecast of how compliance might look. When regulations do come, they will mean immediate compliance – or huge economic sanctions.
The main concern in these sectors is security: the risk of exposure of the data used within these industries and the potential for AI bias in customer analysis, decisions and marketing, especially where external data sources are used.
Pace of change and financial implications for ethical AI
While there are undoubtedly comparisons with the introduction of GDPR and the “fear, uncertainty and doubt” that ensued, part of the issue seems to be the speed at which many of these developments are progressing, with regulation struggling to keep up. With speed also come financial implications as business teams develop AI projects at pace. One member predicted that budgets could spiral out of control, emphasising the need for coordinated conversations to align data and technology platforms so issues can be solved efficiently.
The conclusion drawn from these industry insights is that many companies are still in the early stages of navigating ethical AI, or as we may now need to term it, responsible AI.
There is some belief that it will boil down to safety rather than ethics, with general AI security so far focusing more on societal-scale threats and risks from general-purpose AI than on how the tools might be applied in a commercial setting.
In essence, the ethical and responsible journey with AI involves ongoing dialogue, collective responsibility and a commitment to transparency and accountability. As the industry grapples with the challenges posed by rapidly evolving technology and slower-moving regulation, the imperative remains clear: first, do no harm; then, strive to do some good.