1. The need for always-on governance and risk monitoring
Traditional AI governance relies heavily on pre-deployment approval and static controls. Agentic AI makes that approach fragile.
Because AI agents can change behaviour over time and act continuously, leaders describe the need for ongoing monitoring, not just upfront sign-off. This includes tracking behavioural drift, unexpected actions and execution outside normal business hours.
Human-in-the-loop controls remain non-negotiable, especially for customer-facing or policy-relevant decisions. However, peers stressed that oversight only works when those accountable have the expertise to challenge agent behaviour. Without that capability, governance risks becoming symbolic rather than genuinely protective.
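As a minimal illustration of what always-on monitoring can look like in practice, the sketch below flags out-of-hours execution and holds policy-relevant actions for human sign-off. The action types, business-hours window and review queue are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime, time

BUSINESS_HOURS = (time(8, 0), time(18, 0))       # assumed monitoring window
HIGH_RISK_ACTIONS = {"refund", "policy_update"}  # hypothetical action types

@dataclass
class AgentAction:
    agent_id: str
    action_type: str
    timestamp: datetime

def out_of_hours(action: AgentAction) -> bool:
    """Flag execution outside the normal business-hours window."""
    start, end = BUSINESS_HOURS
    return not (start <= action.timestamp.time() <= end)

def requires_human_review(action: AgentAction) -> bool:
    """Route customer-facing or policy-relevant actions to a reviewer."""
    return action.action_type in HIGH_RISK_ACTIONS

def screen(action: AgentAction, review_queue: list) -> bool:
    """Return True only if the action may proceed unattended."""
    if requires_human_review(action) or out_of_hours(action):
        review_queue.append(action)  # held for human sign-off
        return False
    return True
```

In real deployments, equivalent checks would typically sit in the orchestration or logging layer rather than in application code, but the principle holds: every agent action passes a gate before it executes.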
2. Operational complexity is higher than many expect
While early agentic AI use cases often look simple, operational complexity rises quickly once agents move beyond narrow, human-triggered tasks.
Leaders highlight the need for orchestration, monitoring and tighter system integration when agents operate persistently, execute multi-step actions or interact with other systems that were never designed to work autonomously. Once deployed, these agents also require more specialist support than traditional IT systems, including prompt maintenance, behavioural analysis and escalation handling.
As a result, many organisations are deliberately avoiding multi-agent systems in early deployments. Instead, they are relying on chained or semi-agentic workflows that deliver some agentic benefits without introducing agent-to-agent delegation, thereby avoiding full orchestration layers, complex authorisation models and new escalation paths.
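To make the distinction concrete, the sketch below shows a chained workflow in this style: a fixed, human-defined sequence of steps with escalation to a person when confidence is low, and no agent ever delegating to another agent. The step names and confidence threshold are illustrative assumptions.

```python
from typing import Callable

# Each step returns a result and a confidence score. The chain is fixed,
# so there is no agent-to-agent delegation and no orchestration layer.
Step = Callable[[str], tuple[str, float]]

CONFIDENCE_THRESHOLD = 0.8  # assumed escalation cut-off

def escalate(partial_result: str, step_name: str) -> str:
    # Reuses the existing human queue rather than a new escalation path.
    return f"Escalated at {step_name}: {partial_result}"

def run_chain(task: str, steps: list[Step]) -> str:
    result = task
    for step in steps:
        result, confidence = step(result)
        if confidence < CONFIDENCE_THRESHOLD:
            return escalate(result, step.__name__)
    return result

# Hypothetical steps: classify -> draft -> check, each human-defined.
def classify(text: str) -> tuple[str, float]:
    return f"category(general): {text}", 0.95

def draft(text: str) -> tuple[str, float]:
    return f"draft response for {text}", 0.9

def check(text: str) -> tuple[str, float]:
    return text, 0.7  # low confidence triggers escalation

print(run_chain("customer query", [classify, draft, check]))
```

The appeal of this pattern is that each step is fixed and inspectable, so failures are traceable to a named step rather than to an opaque negotiation between agents.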
3. Data quality and hallucination risks show up fast
Hallucination is a genuine concern. Leaders shared examples where AI-generated enrichment data introduced errors that damaged trust in both the system and the wider data function.
These issues tend to surface quickly once agents are live, particularly where knowledge bases are fragmented or poorly governed. Several organisations are using failure cases to prioritise content clean-up and consolidation, rather than delaying deployment indefinitely.
That said, peers consistently advise starting with simple, constrained use cases and scaling gradually to limit the risk of reputational damage from incorrect outputs.
4. Cost and ROI are volatile and hard to predict
Agentic AI introduces a new kind of cost uncertainty. Small changes to system-level prompts or agent behaviour can trigger disproportionate compute usage, particularly where agents interact or chain actions together.
Leaders increasingly see cost visibility as a governance issue, not just a finance one. Real-time dashboards, automated alerts and restrictions on unsupervised agent chaining are becoming standard safeguards.
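One way to operationalise those safeguards is to meter spend per agent and cap unsupervised chain depth. The sketch below is a minimal version of that idea; the budget figure, depth limit and alert mechanism are illustrative assumptions, not recommended values.

```python
class CostGuard:
    """Tracks per-agent spend and limits unsupervised chaining.

    Budget and depth limits are illustrative; a real deployment would
    feed these events into a dashboard and alerting pipeline.
    """

    def __init__(self, daily_budget_usd: float = 50.0, max_chain_depth: int = 3):
        self.daily_budget_usd = daily_budget_usd
        self.max_chain_depth = max_chain_depth
        self.spend: dict[str, float] = {}

    def record(self, agent_id: str, cost_usd: float) -> None:
        """Accumulate spend and raise an alert once the budget is breached."""
        self.spend[agent_id] = self.spend.get(agent_id, 0.0) + cost_usd
        if self.spend[agent_id] > self.daily_budget_usd:
            self.alert(agent_id)

    def allow_chain(self, depth: int) -> bool:
        """Block unsupervised agent chaining beyond the configured depth."""
        return depth <= self.max_chain_depth

    def alert(self, agent_id: str) -> None:
        # Stand-in for an automated alert to the accountable team.
        print(f"ALERT: {agent_id} exceeded ${self.daily_budget_usd:.2f} today")
```

Wiring these events into existing alerting channels, rather than building new tooling, keeps the safeguard cheap enough to adopt from the first pilot.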
At the same time, many pilots struggle to scale because they are framed as innovation projects without clear business outcomes. Where initiatives progress, leaders tie them to visible, costly problems and model value conservatively rather than promising transformational ROI.
5. Culture and skills are slowing adoption more than technology
The challenge of adoption is often underestimated. Fear of job loss, mistrust of opaque systems and low decision literacy all limit uptake. In some cases, usage plateaued even when AI tools clearly saved time.
Leaders cautioned that human-in-the-loop design only works when organisations invest in the skills needed to make it meaningful. Teams also need to reframe agents not as black boxes but as collaborative tools designed to augment human expertise and productivity: systems whose outputs can be shaped, tested and challenged.
6. Traditional organisational designs and career models are not ready to absorb agentic AI
Peers consistently stated that AI and automation are reshaping work faster than they are reducing headcount. Manual and repetitive tasks are being reduced, while demand increases for judgement, oversight and escalation. This shift is already narrowing parts of middle management, particularly work centred on coordination, summarisation and workflow oversight.
As a result, leaders highlighted emerging pressure on traditional career paths, especially where progression has historically depended on managing information or workflow. As these activities are increasingly automated, organisations face a growing cohort of skilled employees with fewer clear advancement routes.
Rather than treating this as a future concern, peers described active workforce planning efforts focused on role redesign, differentiated training by seniority, reskilling, and alternative career models that recognise expertise and contribution rather than span of control. Several stressed that these changes sit alongside AI deployment and cannot be deferred without risking capability loss or stalled adoption.

These insights were drawn from confidential DataIQ peer exchanges among senior data and AI leaders.
Become a DataIQ client for full access to our exclusive peer intelligence platform.