Scaling Agentic AI – Frameworks, Governance, and Real-World Impact

A panel at the 2026 DataIQ 100 Discussion examined frameworks, governance, and real-world impact in an environment where agentic AI is rapidly scaling.

This is a shorter version of the full article. The full learnings can be found on the members-only DataIQ Hub.

Across industries, leaders are confronting the same underlying challenge. Agentic AI cannot be seen as another technology layer; it reshapes operating models, governance, workforce design, and even who the “customer” might be. 

The early lessons suggest the path to scale is less about technical capability and more about organisational clarity. 

Gavin Goodland, Chief Data Officer, National Grid; Miryem Salah, Director of Digital, Data & Transformation, VodafoneThree; Perry Philipp, Chief Data Officer, Entain; Shreenivasa Rajanala, VP, Global Head of Data Science and Advanced Analytics – Markets & Customer Powerhouse, Bayer; and Viveca Pavon, Data & AI Lead for Public Sector, Accenture, joined a panel at the 2026 DataIQ 100 Discussion to dive into frameworks, governance, and real-world impact. 

 

Key takeaways: 

  • Identifying which pilots can scale is key to organisational success 
  • How people interact with AI is changing rapidly 
  • Governance does not mean centralised control 
  • The workforce will change 
  • Measuring value remains difficult 

 

From a thousand pilots to a curated strategy 

Many organisations started by letting innovation run loose. In several cases, the first phase of generative AI adoption resembled what one leader described as letting “a thousand flowers bloom”. Teams experimented with use cases across marketing, documentation, internal productivity and customer support. Some initiatives worked, others did not, but the aim was exploration rather than optimisation. 

With the exploratory phase now closing, leaders are shifting towards a more deliberate portfolio approach, identifying which use cases genuinely drive business value and concentrating resources there. Efficiency and effectiveness have become the two dominant lenses: where AI reduces cost, and where it improves outcomes. 

The shift is about structuring innovation. Experimentation is still happening across organisations, but the most promising ideas are increasingly pulled into a central strategy, turning scattered pilots into what one executive described as a “curated garden”. 

This change reflects a broader realisation that scaling AI requires governance of both risk and attention. 

 

Governance is no longer about control 

Perhaps the most visible tension in the agentic AI discussion is governance. Almost every organisation began with the same instinct: shut everything down. 

Early in the GenAI wave, many companies banned external tools outright while they assessed risks around data exposure and intellectual property. That lockdown phase was typically short-lived; two forces made it unsustainable. 

First, employees continued experimenting anyway, often using personal tools. Second, competitors were moving quickly, and a rigid ban risked leaving organisations behind. 

The emerging governance model therefore looks different. Instead of attempting to control experimentation, companies are focusing on three areas: 

  • Guardrails. Secure environments, approved tools, and policy frameworks that enable safe experimentation. 
  • Transparency. Mechanisms that show exactly how agents are built, including data sources, models, and prompts. 
  • Proportional oversight. Heavy governance for high-impact systems, but far lighter controls for individual productivity tools. 

Organisations are developing what can be described as an AI “control plane”: a way to observe and intervene across a distributed ecosystem of agents without centralising everything. 

The ambition resembles workforce management: if an agent behaves badly, it can be removed or corrected without shutting down the entire system. 

 

Value is emerging, but measurement remains difficult 

Where is the value? The answer is beginning to appear, but unevenly. 

Text-heavy processes, such as legal work, documentation, regulatory submissions, and marketing content, are producing the clearest early gains. In several cases, organisations report significant reductions in time-to-output and noticeable cost improvements. 

Knowledge management is also showing rapid returns. Companies are discovering that large language models expose weaknesses in their knowledge bases, including fragmented documentation, outdated policies, and poorly structured information. 

Improving these foundations delivers immediate productivity benefits, yet measuring the overall financial impact of AI remains difficult. Traditional business cases struggle when AI capabilities are embedded across multiple systems and processes simultaneously. 

Some organisations are responding by creating dedicated value realisation functions, designed to track AI’s contribution across the full operating model rather than within individual projects. 

AI rarely produces value in isolation; it amplifies existing processes, data assets, and organisational structures, which is why strong, clean foundations are so essential. 

 

Organisations are becoming AI-native 

Technology itself is rarely the limiting factor. The constraints lie in governance, data quality, workforce readiness, and organisational design. 

Scaling agentic AI requires a mindset focused on creating an environment where AI systems, employees, and data can operate together effectively. For many organisations, that transition has only just begun, but the trajectory is becoming clearer. What started as a wave of experimentation is slowly evolving into the redesign of how organisations operate in an AI-enabled world. 

 
