
NHS England partners with Ada Lovelace Institute to combat AI bias in healthcare

NHS England has announced plans to pilot an algorithmic impact assessment, designed by the Ada Lovelace Institute, to combat the potential risks associated with AI in healthcare.

Innovation Minister Lord Kamall explained: “While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities. By allowing us to proactively address risks and biases in systems which will underpin the health and care of the future, we are ensuring we create a system of healthcare which works for everyone, no matter who you are or where you are from.”

As AI and ML integrate into society, the need for safeguards is becoming increasingly apparent. Last year, Twitter introduced a “bias bounty”, inviting developers to root out bias in its image-cropping algorithm, which would consistently crop out female and non-white faces. Facebook has ditched automatic facial recognition altogether. In policing, studies in the US have shown that racist feedback loops can arise in predictive tools trained on arrest data. Many US police departments are known to arrest more people in black neighbourhoods, which can lead algorithms to direct more policing to those areas, in turn leading to more arrests.
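
That self-reinforcing dynamic is easy to reproduce in a toy model. The sketch below is a minimal illustration, not any real predictive-policing system: it assumes two areas with identical true crime rates but a skewed historical arrest record, and shows that allocating patrols in proportion to recorded arrests locks the skew in place.

```python
# A minimal sketch of the feedback loop described above; not a real
# predictive-policing system. Two areas have identical true crime rates,
# but the historical arrest record is skewed. If patrols follow recorded
# arrests, and recorded arrests follow patrols, the skew never corrects.
true_crime_rate = {"area_a": 0.10, "area_b": 0.10}   # identical ground truth
recorded_arrests = {"area_a": 120, "area_b": 80}     # skewed historical data

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # The "predictive" step: allocate 100 patrols where past arrests were made.
    patrols = {a: 100 * n / total for a, n in recorded_arrests.items()}
    # Arrests reflect where police look, not only where crime happens.
    recorded_arrests = {a: patrols[a] * true_crime_rate[a] * 20
                        for a in recorded_arrests}
    print(f"year {year}:", {a: round(p) for a, p in patrols.items()})
# area_a keeps receiving 60% of patrols despite equal underlying crime rates.
```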

“A 2019 study found that an algorithm was less likely to allocate programmes to black people than equally sick white people.”

In health, a 2019 study found that an algorithm widely used in US hospitals to allocate care was less likely to refer black patients to relevant programmes than equally sick white patients. The algorithm assigned risk scores to patients based on average healthcare costs, but failed to account for the greater prevalence of conditions such as diabetes, high blood pressure and anaemia in black communities.
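
The mechanism is worth spelling out: cost is only a proxy for health need, and it is a proxy that carries historical inequity with it. The sketch below is illustrative rather than the algorithm from the study; the patients, costs and threshold are assumptions chosen to show how two equally sick people can end up with different scores.

```python
# Illustrative only; not the algorithm from the 2019 study. Shows how a
# cost-based risk score can rank two equally sick patients differently
# when one has historically generated lower healthcare costs.
def risk_score(avg_annual_cost, max_cost=50_000):
    """Proxy risk: predicted spend scaled to the 0-1 range (assumed scale)."""
    return min(avg_annual_cost / max_cost, 1.0)

ENROLMENT_THRESHOLD = 0.5  # hypothetical cut-off for programme places

patients = [  # equally sick, but unequal historical spend
    {"name": "patient_1", "chronic_conditions": 4, "avg_annual_cost": 30_000},
    {"name": "patient_2", "chronic_conditions": 4, "avg_annual_cost": 18_000},
]

for p in patients:
    score = risk_score(p["avg_annual_cost"])
    print(p["name"], f"score={score:.2f}",
          "enrolled" if score >= ENROLMENT_THRESHOLD else "not enrolled")
# Same sickness, different outcomes: the model measures spend, not health.
```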

Ensuring that the team behind an algorithm is representative of society can help to combat unintended biases. In a recent interview with DataIQ, Sathya Bala, CEO of True Change, a consultancy focused on DEI in data, explained that organisations could aim to achieve “equity by design” by considering the makeup of the teams designing and automating processes, as well as the genesis of any datasets used for analysis. “What perspectives have been excluded? Who is involved in testing for unintended negative or exclusionary outcomes? How are we testing once things are deployed?” But this approach can only go so far when working with historical, and potentially biased, datasets.
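
Bala's last question, how systems are tested once deployed, is the most amenable to tooling. The sketch below is one possible shape for such a check, not anything the article or True Change prescribes: it compares selection rates across groups and flags any group falling below the commonly used four-fifths threshold.

```python
# One possible post-deployment disparity check; illustrative assumptions
# throughout. Flags groups selected at under 80% of the best group's rate.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs."""
    selected, totals = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below threshold x the best rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)
rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.6, 'group_b': 0.35}
print(disparate_impact_flags(rates))  # {'group_a': False, 'group_b': True}
```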

The Ada Lovelace Institute’s report sets out a seven-step roadmap for implementing an algorithmic impact assessment (AIA): a reflective exercise; application filtering; participatory workshops; AIA synthesis; data access decisions; publication of the completed AIA; and further iteration. The trial complements ongoing work by the ethics team at the NHS AI Lab to ensure that datasets for training and testing AI systems are diverse and inclusive. Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, said: “The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”
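
Balaram's framing, completion of the assessment as a condition of data access, maps naturally onto the roadmap's fifth step. The sketch below is hypothetical: the step names follow the roadmap, but the gating function and its behaviour are assumptions, not NHS AI Lab tooling.

```python
# Hypothetical sketch of AIA-gated data access; not NHS AI Lab code.
# Step names follow the Institute's seven-step roadmap quoted above.
AIA_STEPS = [
    "reflective_exercise", "application_filtering", "participatory_workshops",
    "aia_synthesis", "data_access_decision", "publication", "iteration",
]

def may_access_data(completed_steps):
    """Grant access only once every step up to the access decision is done."""
    required = AIA_STEPS[:5]  # publication and iteration follow access
    missing = [s for s in required if s not in completed_steps]
    return not missing, missing

ok, missing = may_access_data({"reflective_exercise", "application_filtering"})
print(ok)       # False
print(missing)  # ['participatory_workshops', 'aia_synthesis', 'data_access_decision']
```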

To maximise public participation, the NHS will support researchers and developers in engaging patients and healthcare professionals in the early stages of AI development, when there is more scope to make adjustments and respond to concerns. “Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market,” explained Balaram.

“Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits.”

The NHS hopes that in the future, AIAs could increase the transparency, accountability and legitimacy of the wider use of AI in healthcare. Octavia Reeve, interim lead at the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups and maximise their potential for benefit. We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

The pilot will run across a number of associated NHS initiatives and will be used as part of the data access process for the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
