What the EU AI Act means for you

As the EU releases new information around the EU AI Act, David Reed, DataIQ Chief Knowledge Officer, examines what the new rules will mean for data leaders in the near future.

That gap of six years between a formal regulatory framework for the digital space and its accelerated reach is significant because of what happened during that time. Think Cambridge Analytica and you will be instantly reminded of the Wild West of digital and social media where personal information was a currency minted freely by platforms and spent unthinkingly by consumers. 

By contrast, it took ChatGPT just two months from its launch in November 2022 to having 100 million users (although it has hit something of a plateau with an estimated 180 million one year on). Similarly, the EU has moved fast – the draft AI Act of 2021 has undergone significant revision to reach the form agreed between the European Commission and Parliament that was announced on 9th December 2023. 

Notably, it took three days of talks, described as “marathon”, to get it over the line, with Germany and France both lobbying hard to build in protections for their nascent AI start-up sectors. But nobody in the EU was willing to be caught out in the way it was in the digital space, which is why agreement on an EU AI Act framework was accelerated.

Although the agreed text must still be finalised and formally adopted by the Council and Parliament before it takes legal force, and most provisions will only apply after a further two-year transition, the Act should be welcomed by everybody using or developing AI because of the certainty it introduces. Globally, other territories are likely to follow similar models.

 

What does the EU AI Act mean for you? 

Fundamental rights impact assessment 

Just as GDPR introduced data protection impact assessments, so the new Act proposes similar measures in the context of high-risk AI systems. Users of such systems will need to register on an EU database. Where the system is being used to recognise emotions, consumers will have to be notified of this fact.  

The key component is the distinction between high-risk AI systems and simpler software systems. For this purpose, the EU has adopted the OECD definition of AI, in which objectives and impact are considered.

This will leave simpler AI systems, such as classic machine learning, under much lighter-touch regulation, since their impact on fundamental rights is much less than that from foundational AI models and systems. 

For commercial organisations who are users, rather than developers, of high-risk models, it will be important to track any obligations that end up being encoded into the Act when it is passed. Much as GDPR extended obligations from data controllers to data processors, it is possible that any organisation which embeds an element of foundational AI into its processes, such as through the use of generative AI, could become exposed to the legislation.

Governance and penalties

With AI such a fast-moving domain of development, the Act proposes a scientific panel of independent experts working with an EU AI office to help evaluate foundational AI for its level of risk. A board and user forum will bring in extended perspectives. This is critical given the desire of the EU to balance controls with supporting innovation. 

It is certainly not holding back on granting powers to market surveillance authorities: use of a prohibited AI application would attract a fine of up to 7% of the previous year’s turnover, while other violations of the Act carry a 3% penalty. Even supplying incorrect information could result in a 1.5% fine, although more proportionate caps are proposed for SMEs and start-ups.
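To make the scale of those tiers concrete, here is a minimal illustrative calculation using the headline percentages reported above. The tier names, function, and figures are hypothetical labels for this sketch only; the final legal text also specifies fixed-sum alternatives and thresholds not modelled here.

```python
# Illustrative sketch of the reported EU AI Act penalty tiers.
# Percentages are the headline figures from the provisional agreement;
# fixed-sum minimums and SME caps in the final text are not modelled.

PENALTY_RATES = {
    "prohibited_ai_use": 0.07,       # 7% of previous year's turnover
    "other_violation": 0.03,         # 3%
    "incorrect_information": 0.015,  # 1.5%
}

def maximum_fine(turnover_eur: float, violation: str) -> float:
    """Return the headline maximum fine for a given annual turnover."""
    return turnover_eur * PENALTY_RATES[violation]

# A firm with EUR 500m turnover deploying a prohibited AI application:
print(maximum_fine(500_000_000, "prohibited_ai_use"))  # 35000000.0
```

Even at the lowest tier, the exposure for a mid-sized enterprise would run into the millions, which is why tracking obligations under the Act matters for users as well as developers.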

What this makes clear is that the EU does not intend to be found wanting in the face of egregious breaches. It has learned its lesson from being off the pace with digital and social media controls and toothless when it mattered. 

 

Support for innovation

One of the biggest concerns in Brussels has always been that the major global tech firms are all American – there are no European equivalents of Apple, Amazon, eBay, Facebook, Google, Netflix, or X. AI could present an opportunity to shift the balance, given the strength of European academic research, if only this can be spun out into commercial solutions.  

The proposed EU AI Act has been substantially modified to include measures that will support innovation. Although not spelled out, the goal is to enable start-ups to scale rapidly without excessive regulatory constraints in the hope that the EU becomes a global player in AI.

This is likely to lead to a flood of new solutions being offered to commercial organisations, especially those built on small language models, which do not require the hyper-scaled compute power that sits behind large language models; that requirement is one reason why large models are currently the sole domain of a small handful of players.

With the major lobbying done and negotiations concluded, the shape of AI regulation across the EU is becoming clear, even though much detail is still pending. For commercial organisations using AI tools, this is reassuring. By applying due diligence to providers and keeping data governance top of mind, they should now be able to progress at pace.
