
What the US Artificial Intelligence Safety Institute (USAISI) means for business and AI

With the launch of the new USAISI, what does it mean for businesses focused on developing AI tools and navigating AI regulation?
David Reed addressing an audience about AI at the 2023 DataIQ Conference

Kelly previously contributed to the Biden Administration’s efforts to regulate AI through the AI Executive Order that preceded the USAISI. The presidential executive order focuses on AI that may one day rival human intelligence: its requirements apply to models trained using an amount of computational power above a set threshold of 100 million billion billion operations.

Currently, no AI models have been trained using this much computing power. For example, OpenAI’s GPT-4 – the most capable publicly available AI model – is estimated to have been trained with around five times less than this amount. This means that users of GPT-4 (or GPT-5, when it lands) should have little to fear from the impact of this order. The intention of the US government is to get ahead of the theorised existential risks that supercomputing-driven models may one day present.

The USAISI is just one of numerous ways US policymakers are trying to mitigate the risks posed by AI. There are fears both about the impact AI could have on civil rights and about the detrimental effects of low AI adoption by government agencies, and the new measures will also require companies developing powerful AI models to report the results of any safety tests they carry out.

There is a heavy influence behind these moves that aim to support US primacy, such as pushing government adoption and ensuring relevant talent can be hired into the US. Set against current talent shortages in the data and analytics industry, this could create a risk of a brain drain from the UK and Europe, driven by the wages on offer in the US.

It should be noted that the EU AI Act will likely have a greater impact on classic AI and machine learning models. There have been numerous delays and compromises across Europe in the adoption of the EU AI Act, most notably from pro-business advocates in Germany, where there was a push to protect competitiveness and innovation for small and medium-sized businesses.

EU regulators are keen to avoid repeating the mistakes they made with social media, digital platforms and big data, where strict legislation (GDPR) was introduced only after the technology had rapidly evolved; by then it was too late, and US monopolies dominated the scene. As we look forward to a new AI era, the EU wants to foster European AI challengers to the US, and this means there will likely be more flexibility for current model types.

Meanwhile, within the DataIQ community, Chief Knowledge Officer and Evangelist, David Reed, has been speaking with data leaders in banking and insurance about the rise of AI. He noted that those at the coal face of AI innovation say they cannot get anything through that would come remotely close to attracting regulatory scrutiny. This means that if EU-based businesses can rival the developments hoped for in the US while operating under much stricter regulation, those based in the US should have nothing to fear from regulation (for now).
