Developing and introducing an ethical AI framework

Data leaders looking to introduce an ethical AI framework can look back to the way GDPR was implemented for guidance on how to approach the task.

Ethical frameworks for AI are following a similar path to GDPR, and the lessons learnt from its implementation can serve as a guide for this new era of data.


Starting the framework  

The Organisation for Economic Co-operation and Development (OECD) produced its Principles for Ethical AI, which were adopted in May 2019. Alongside its series of values-based principles, the OECD published a list of recommendations for policy makers:

Values-based principles

  • Inclusive growth, sustainable development and well-being
  • Human-centred values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability


Recommendations for policy makers

  • Investing in AI research and development
  • Fostering a digital ecosystem for AI
  • Shaping an enabling policy environment for AI
  • Building human capacity and preparing for labour market transformation
  • International cooperation for trustworthy AI


In April 2021, the European Commission proposed the first EU regulatory framework for AI (the AI Act), stating that AI systems used in different applications should be analysed and classified according to the risk they pose to users. The different risk levels will then determine the degree of regulation applied.


Implementing the framework internally

Data leaders need to locate pockets of the enterprise where teams are already considering AI use cases and recruit them as early adopters of any framework being proposed. Establishing the importance of an ethical approach early improves the chances of successful implementation and sets the path for a data culture that continues to evolve around an ethical AI framework.

Organisations using AI should develop an AI risk impact assessment framework, similar in shape to data protection impact assessments. Software tools can support the process, which typically involves answering questions about the use case, with each answer carrying a high-, medium- or low-risk consequence.
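As a minimal sketch of how such an assessment might be represented, the Python example below scores a questionnaire where each answer maps to a low-, medium- or high-risk consequence. The questions, the risk mappings and the "highest answer wins" rule are illustrative assumptions, not taken from any specific assessment tool:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative questions only; a real assessment would mirror the
# organisation's data protection impact assessment templates.
answers = {
    "Does the use case process personal data?": Risk.HIGH,
    "Could outputs influence decisions about individuals?": Risk.MEDIUM,
    "Is the output reviewed by a human before use?": Risk.LOW,
}

def overall_risk(answers: dict[str, Risk]) -> Risk:
    # A conservative rule: the overall rating is the highest-risk
    # answer given, so a single high-risk consequence classifies
    # the whole use case as high risk.
    return max(answers.values())

print(overall_risk(answers).name)  # HIGH
```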

Data leaders must consider the degree of autonomous decision making a use case will have once the human is no longer in the loop. Naturally, if a decision would have legal impacts, the risks rise considerably. Likewise, where there is potential for considerable reputational damage, it is important to identify the best ways to mitigate that risk.
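These two factors can be layered on top of the questionnaire score. The sketch below extends the previous example; the specific escalation rules (autonomous decisions with legal effect are always high risk, and reputational exposure raises the rating one level) are assumptions for illustration, not a standard:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def escalate(base: Risk, autonomous: bool,
             legal_impact: bool, reputational_damage: bool) -> Risk:
    # Autonomous decisions with legal impact are treated as high
    # risk regardless of the questionnaire score.
    if autonomous and legal_impact:
        return Risk.HIGH
    # Potential reputational damage raises the rating one level,
    # flagging the use case for mitigation planning.
    if reputational_damage and base < Risk.HIGH:
        return Risk(base + 1)
    return base

# A medium-risk questionnaire score becomes high risk once the human
# is out of the loop and the decisions carry legal effect.
print(escalate(Risk.MEDIUM, autonomous=True, legal_impact=True,
               reputational_damage=False).name)  # HIGH
```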

It is important to keep revisiting risks once they have been identified – this is not a one-time exercise, as risks evolve. Data leaders should complete several risk impact assessments: while the team is building the product, and again once the ultimate use is defined.

Many companies are introducing AI forums with representatives from across the business, echoing the data protection officer committees that emerged in 2016. Procurement and IT are critical functions to address from an ethical framework point of view, as most AI applications require either internal technical resources or external vendor support. Data leaders should consider asking all vendors for their own ethical AI policies, which can then be written into vendor contracts.

For teams to comprehensively understand how to implement ethical AI, a degree of training is required, much like the training that accompanied the initial implementation of GDPR.

Ethical frameworks are nothing new. However, as more companies experiment with AI – which removes the human from the decision – the need to consider consequences has forced a review of the approach to ethical data processing. Those with strong ethics as part of their values have found this a key benefit when making AI implementation decisions.