
International collaborative AI safety agreement signed

The US and UK have signed a landmark deal to collaborate on advanced AI testing, but the reality may not add up to much.

The agreement follows up on the commitments made at the 2023 AI Safety Summit, where regulation was the key focus – but actions speak louder than words, and nothing in this agreement appears to be binding.

Regulation for AI safety 

Currently, regulators in the US and the UK have done nothing to curb what the leading AI companies are attempting to achieve, nor have they demanded access to information such as tool training data or the environmental costs of running AI tools.

There are growing concerns around the safety of AI tools and their rapid development, and various nations have acknowledged the issue – but legislation is often slow to catch up with burgeoning technologies. Leading academics have highlighted their concerns around the misuse of AI tools, likening them to nuclear or biological sciences, which can be weaponised.

In January 2024, a fake AI-generated call purporting to be from President Joe Biden urged New Hampshire voters not to vote in the state's US primary election. Additionally, AI developer OpenAI announced it would not release a voice cloning tool it had developed, due to the serious risks the technology presents.

“To quote Macbeth, this agreement is a lot of ‘sound and fury signifying nothing’,” said David Reed, Chief Knowledge Officer and Evangelist, DataIQ. “Both states have merely agreed to collaborate on developing AI safety tests, the outlines of which were already laid down in the White House Order last year.”

“Notably, this Order focused on models which are larger than anything currently operating by a factor of five and is not binding. Until regulators get tough, the current position is as meaningless as the mantra constantly repeated by tech developers that ‘AI will be for the benefit of everyone.’ Neither is likely.” 

Europe entering the AI regulation scene 

As things stand, AI businesses in the US and UK are essentially regulating themselves. The EU AI Act is due to become law in the near future, which may mark the beginning of the end for self-regulation in the AI industry.

In the same way that GDPR introduced data protection impact assessments, the EU AI Act proposes similar measures for high-risk AI systems. Users of high-risk AI systems will need to register on an EU database, and where such a system is used to recognise emotions, consumers must be notified of this fact.

The EU has adopted the OECD definition of AI, under which objectives and impact are considered when distinguishing high-risk AI systems from simpler software systems.

It should be noted that the EU AI Act aims to encourage innovation through a light-touch approach to regulating start-ups, as the scene is currently dominated by US-based businesses with no European equivalent.

There are legitimate concerns about the uses of AI and the nefarious ways in which it can impact people across the globe, so it will be interesting to see whether this latest agreement between the US and UK develops into something actionable or remains a series of hopeful wishes.
