Chances are you don’t think much about where that nuclear component is coming from. But if forced to, you’d probably assume that it was supplied by a business with a deep understanding of how to run a nuclear power station. You might expect there to be a regulator imposing stringent operational standards. Further, you would anticipate that such a critical piece of infrastructure would be protected by the state from bad actors.
All of which is absolutely true of nuclear power. But until the first week of November 2023, it was not true of artificial intelligence. That is despite the view, as given in the Bletchley Declaration, that “AI…poses significant risks, including in those domains of daily life.”
The comparison between AI and nuclear energy is not a spurious one. Both emerged from extensive academic research that established a robust theoretical model for their transformative capabilities, and huge sums have since been invested to bring those models to life as practical solutions.
While both have “the potential to transform and enhance human wellbeing, peace and prosperity,” as the same statement said of AI, they also carry significant risks of unintended consequences: nuclear from fallout or the theft of radioactive material; AI from its application to bioterrorism, fraud or hacking.
This is why governments around the world have moved fast to wrap safety measures around AI, in the wake of warnings that it could pose an “existential threat”, as the OECD reported on 26 January 2023. On 22 March, the Future of Life Institute published an open letter, signed by more than 1,000 AI experts and investors, calling for development to be paused for six months so that safety measures could be introduced. And in a blog post on 22 May, the co-founders of OpenAI called for urgent risk mitigation and governance of the very superintelligence they are developing.
Asked and answered. In a move that many saw as classic superpower manoeuvring, US President Biden issued an executive order on 30 October – one day before the Bletchley Summit on AI – which calls for powerful AI models to be submitted for safety testing and verification. There is much work to be done before this becomes a reality – for one thing, standards need to be developed for what safe AI actually looks like in the wild.
The order specifically concerns itself with the way AI tools could be turned towards the creation of malicious biological materials (AI models have already proved adept at tasks such as protein folding, for example), and it also covers the creation of malware and fraudulent impersonation using AI.
Somewhat overshadowed by this, the Bletchley Declaration is more modest in scope. It proposes sharing evidence of risks internationally, much as nuclear scientists did during that technology’s development, and developing standards-based policies ahead of specific regulation.
Its most notable win was having China as a signatory – a state many in the West fear could pursue an AI arms race outside internationally agreed standards. What is also clear is that the focus is on frontier AI – the point at which new models move towards artificial general intelligence and become smarter than their human parents.
So, what does all of this mean for the ordinary organisation poised to use some of its electricity supply to deploy a licensed AI model, allow its employees to engage with a public tool such as generative AI, or simply make use of the many AI co-pilots proliferating within existing productivity tools?
Principally, the benefit of both the White House and Bletchley Park actions is to provide comfort to third-tier risk and governance professionals, especially those operating in highly regulated sectors. These are the teams who interpret guidance and regulation, compare it with their organisation’s policies, standards and processes, and make corrections as necessary.
Consider one box ticked for this group – using an AI provider that has voluntarily submitted its model for safety testing is evidence of due diligence. That will provide a small amount of cover should a sector regulator investigate a consequential harm arising from the use of AI.
It’s a start. But remember – there is a long journey from starting school to passing exams and applying the knowledge gained.