SB 1047 receives amendments
California’s SB 1047 – also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – has always set out to apply a light, non-intrusive touch to AI regulation, protecting the public interest while ensuring developers can continue to innovate. Built on conversations across academia, industry, and open-source stakeholders, SB 1047 has nonetheless raised some eyebrows with its latest amendments, which sought to address the concerns of stakeholders looking to protect California’s thriving and competitive AI ecosystem while ensuring safety at the frontier of generative AI (genAI).
The issue is that many feel the amendments will hinder development and cause California to lose its edge in the burgeoning AI market. Global leaders such as Google and Meta have voiced their opposition to the amendments. On the other hand, SB 1047 has garnered support from prominent AI researchers emphasising the importance of balancing innovation with safety.
The updated SB 1047 draft addresses concerns from corporations by removing criminal penalties and replacing them with civil ones, which has led some critics to claim the bill has been sugar-coated for businesses and stripped of any real deterrent against breaking the rules.
“SB 1047 illustrates clearly the dilemma facing all lawmakers and regulators – how to balance protection with the freedom to innovate,” said Peter Galdies, Director, DataIQ. “On the face of it, an annual requirement to audit very large-scale applications does not seem too extreme; however, in the fast-paced and dynamic AI developer landscape of today, such processes could restrict the pace of innovation, allowing applications developed in less regulated regimes to gain dominance – albeit with the good intention of ensuring such developments are safe.”
One of the major concerns is that, as this would be one of the first major AI bills to pass, it would set a precedent and cause other US states (and perhaps countries) to follow suit. If the bill passes, then beginning January 1, 2028, the developer of a covered model must annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill.
SB 1047 looks to set standards for AI models with significant computational power (frontier AI systems) – models trained using more than 10^26 floating-point operations (FLOPs) and costing more than $100 million to train. These thresholds will of course not be met by the vast majority of organisations, but the heavy hitters in the industry are being vocal about how this may impact their ability to innovate.
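To make the thresholds concrete, here is a minimal sketch – not language from the bill itself, just an illustration of the two criteria described above – of how a developer might check whether a model would count as a covered frontier model. The function name and constants are hypothetical.

```python
# Hypothetical sketch of SB 1047's "covered model" thresholds as described
# in this article: more than 10^26 FLOPs of training compute AND more than
# $100 million in training cost.
COMPUTE_THRESHOLD_FLOPS = 1e26        # total training compute, in FLOPs
COST_THRESHOLD_USD = 100_000_000      # training cost, in US dollars

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds both thresholds described above."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A hypothetical model trained with 2e26 FLOPs at a cost of $150M:
print(is_covered_model(2e26, 150_000_000))   # True  -> covered
# A smaller run below the compute threshold is not covered:
print(is_covered_model(5e25, 150_000_000))   # False
```

As the example suggests, almost no organisation outside the largest labs trains at this scale, which is why the debate is concentrated among the industry’s biggest players.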
The bill is advancing to the Assembly floor and must be voted on by August 31. Will the amendments to SB 1047 create a scenario (which has been seen before) where the wellbeing of the corporate bottom line takes precedence over the safety of consumers? Or can those seeking to innovate in the world of AI truly be deterred from breaking the rules by these amendments? With each US government department being told it must employ a CDO, perhaps the basic fabric of data culture in the US will adapt to new legislation and be adopted by non-governmental organisations.
To get involved with upcoming exclusive DataIQ roundtable discussions where topics such as upcoming AI regulations are discussed, click here.