AI regulation policy and the impact on organisations

Following the announcement of the latest government policy paper, "Establishing a pro-innovation approach to regulating AI", DataIQ’s Peter Galdies examines how these new policies may impact organisations and shares his thoughts on the real-world issues that may arise for DataIQ members as a result.

The contextual, multi-regulator and principle-led approach would appear to be an interesting and (theoretically, at least) effective general approach; however, it may leave organisations facing some significant issues, including: 

Complexity of regulation/guidance 

Relying on a contextual, multi-regulator, non-statutory governance approach means a potentially confusing and complex layering of AI guidance and regulation to manage and comprehend. I suspect centralised resources will eventually arrive to help, but if not, this could add heavily to compliance teams’ already busy workloads. In addition, the complexity could also mean a shortage of skilled individuals available to interpret the regulatory stack and implement the mechanisms required to maintain compliance. 

“Grey” Regulation    

As we have seen with some aspects of the GDPR, when regulation is not clear, overly conservative approaches to compliance can occur – look at the number of organisations that decided positive consent was required when, in fact, legitimate interest may have been appropriate. Without specific regulations, the potential for “grey” areas here is particularly strong; however, I fully appreciate the requirement for a flexible approach. It’s a dichotomy without an obvious middle ground, and one to watch. 

Rapidly changing landscape 

As the application of AI extends to a growing number of use cases, it is likely that regulatory guidance (if not eventually statute) will also evolve and change. This creates a tricky and dynamic landscape in which businesses are expected to operate; however, this would appear to be inevitable, and the trade-off for a “light touch”. The challenge for organisations will be to keep current with such changes and to be prepared to modify their approaches as new guidance arrives, which demands sufficient available resource to cope. 

Cross-border services 

While a relatively “light-touch” approach to AI regulation is desirable for innovation in the UK, it may well end up contradicting the more prescriptive approaches in other jurisdictions (such as the EU). For organisations using AI to provide services across borders, this will add complexity to the development approach. It is a situation we already see in data protection, where multinational organisations have had to develop expensive and complex systems to manage differing regulations and, in some cases, have defaulted to the most restrictive set of regulations to ease that complexity – losing business opportunity in the process. 

So, assuming that the overall intention of the policy is good (and I know many of my colleagues in the privacy world will have concerns with the light-touch, non-statutory nature of this policy), what approaches will organisations have to adopt? 

I think a good model is “Privacy by Design”: adapting this seven-principle approach to AI would make much sense. 

“AI Governance by Design” would ensure that all regulatory requirements are properly considered at all points through the lifecycle of AI applications – from concept, design, training and implementation through to decommission. Risk assessment would be wrapped into the process and compliance and AI development teams would work hand in hand. In many cases, the practical implementation of mandatory “Privacy-by-Design” and “AI Governance by Design” would overlap and could be accommodated within the same management system. 
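To make the lifecycle idea concrete, here is a minimal sketch of how governance gates could be attached to each stage of an AI application's lifecycle, so that a stage cannot complete until its checks have passed. The stage names, check descriptions and class structure are illustrative assumptions for this sketch, not something defined in the policy paper.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """Illustrative lifecycle stages at which governance checks could apply."""
    CONCEPT = auto()
    DESIGN = auto()
    TRAINING = auto()
    IMPLEMENTATION = auto()
    DECOMMISSION = auto()


@dataclass
class GovernanceGate:
    """A single sign-off required before a stage can complete."""
    stage: Stage
    check: str            # e.g. "risk assessment", "bias review" (examples only)
    passed: bool = False


@dataclass
class AIProject:
    name: str
    gates: list = field(default_factory=list)

    def outstanding(self, stage: Stage):
        """Checks still open for a given stage."""
        return [g.check for g in self.gates if g.stage is stage and not g.passed]

    def can_advance(self, stage: Stage) -> bool:
        """A stage may only complete once every gate attached to it has passed."""
        return not self.outstanding(stage)


# Hypothetical project with two gates; the compliance team marks them as passed.
project = AIProject("churn-model", gates=[
    GovernanceGate(Stage.CONCEPT, "risk assessment"),
    GovernanceGate(Stage.TRAINING, "training-data provenance review"),
])
project.gates[0].passed = True
print(project.can_advance(Stage.CONCEPT))   # True
print(project.outstanding(Stage.TRAINING))  # ['training-data provenance review']
```

The design choice here mirrors the point in the text: risk assessment is wrapped into the process itself rather than bolted on afterwards, and the same gate structure could carry both privacy and AI governance checks within one management system.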

Such “by Design” methods can be interwoven into Agile and other methodologies to get the important work done upfront (minimising rework and expense) but still rely on compliance and implementation teams having a strong understanding of the requirements, meaning AI regulation specific training for those teams will need to be expanded. 

It is also important to consider that the principle and risk-based approach outlined doesn’t automatically lead to increased risk to the rights and freedoms of individuals. The policy emphasises that legal accountability must be clear – and it is likely that the various regulators will require organisations to be able to demonstrate that they have considered the principles in their approaches to AI. These two factors, when combined with the ever-present customer trust perspective, should be enough to make responsible organisations “do the right thing”. Of course, there will always be those who, without a sufficiently strong deterrent, won’t stick to the rules – and this is where the lack of a statutory approach may ultimately fail, but in the meantime, I encourage the innovation led approach to the policy and hope that we can build the strong-but-loose framework it will require to make it work. 

As a data governance expert with a career in data and technology stretching over 35 years, Peter has been advising organisations on how best to manage and utilise their valuable data assets. In his role as director and head of advisory at DataIQ, Peter is tasked with ensuring our Members get the knowledge and value they need from their DataIQ Corporate Memberships.
