Artificial intelligence offers enormous promise for banking customers: more personalised services, faster outcomes, and proactive fraud prevention. Yet the risks, from bias and misinformation to privacy breaches, are equally significant. NatWest recognised that harnessing AI’s benefits required a governance framework that would protect customers, stay ahead of regulation, and embed ethical practices across the organisation.
Working with long-term partner Version 1, NatWest has delivered exactly that. Its new AI governance framework has moved the bank from fragmented experimentation to coordinated, risk-aware adoption. The judges recognised this as a model of best practice, naming it the winner of Best Data Governance with AI Initiative at the 2025 DataIQ Awards.
“NatWest’s AI governance programme stands out as a comprehensive, enterprise-wide transformation with clear structure, measurable impact, and lasting cultural change.” – Judges’ comments
The Challenge
NatWest had already invested heavily in AI, from handling 10.8 million customer conversations through its chatbot Cora, to deploying AI to identify vulnerable customers affected by rising costs. In March 2024, it became the first UK bank to partner with OpenAI.
But with over 100 AI initiatives underway and ownership scattered across the business, the risks of inconsistency and reputational damage were rising. NatWest needed a centralised, pragmatic way to manage AI safely without stifling innovation.
The Solution
With Version 1, the bank established a two-stage AI risk assessment process. Using a simple template, every project team completed an initial review covering purpose, data sources, and perceived risks. These assessments were then reviewed by a cross-functional panel of experts from privacy, legal, security, supply chain, and risk.
Projects were either approved, approved with caveats, or sent for further review. As initiatives moved closer to production, they underwent a second, more detailed assessment.
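The article does not describe NatWest’s actual tooling, but the first-stage triage can be sketched in outline. The Python fragment below is purely illustrative: the names (`InitialAssessment`, `Outcome`, `triage`) and the example values are hypothetical stand-ins for the simple template (purpose, data sources, perceived risks) and the three panel outcomes described above, not a representation of the bank’s real process.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    """Possible results of the first-stage panel review (as described above)."""
    APPROVED = "approved"
    APPROVED_WITH_CAVEATS = "approved with caveats"
    FURTHER_REVIEW = "sent for further review"


@dataclass
class InitialAssessment:
    """Hypothetical stand-in for the simple first-stage template."""
    project: str
    purpose: str
    data_sources: list[str]
    perceived_risks: list[str]


def triage(assessment: InitialAssessment,
           panel_concerns: list[str],
           caveats: list[str]) -> Outcome:
    """Illustrative panel decision: escalate open concerns for further
    review; otherwise approve, attaching any caveats the panel has raised."""
    if panel_concerns:
        return Outcome.FURTHER_REVIEW
    if caveats:
        return Outcome.APPROVED_WITH_CAVEATS
    return Outcome.APPROVED


# Invented example: no open concerns, but one caveat to resolve before launch.
review = InitialAssessment(
    project="Example chatbot enhancement",
    purpose="Improve routing of customer conversations",
    data_sources=["anonymised chat transcripts"],
    perceived_risks=["misclassification of customer intent"],
)
print(triage(review, panel_concerns=[], caveats=["privacy review before launch"]))
```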
Importantly, the process wasn’t just a compliance gate. The panel provided proactive support to project owners, many of whom had limited AI or regulatory experience. By guiding teams through the assessment and connecting them with functions like procurement and data governance, the framework became an enabler of success rather than a blocker.
Tangible Outcomes
The programme assessed more than 80 AI use cases in 2024. It has provided senior leaders with confidence that AI is being deployed ethically and responsibly, while allowing innovation to scale safely.
Unexpected benefits also emerged. Insights from AI assessments were shared with wider risk owners, helping them enhance existing governance processes. The initiative directly led to NatWest’s first ethical AI risk framework and prompted a simplification programme to address duplication in change processes.
Perhaps most tellingly, the governance initiative has now been embedded into business-as-usual operations. With processes firmly established, NatWest has created a dedicated AI and Data Ethics (AIDE) team to take this work forward.
A Springboard for Innovation
The judges highlighted that this was not governance as a brake, but governance as a springboard: reducing risk while building confidence, empowering teams, and accelerating responsible adoption. With benefits estimated at £50m over the next five years, the framework provides a repeatable model for banks navigating the same challenges.
For NatWest and Version 1, the award recognises not just effective governance, but a cultural shift: showing how responsible AI can underpin innovation in financial services.