1. Executives see data as a cost centre
Many organisations still frame data and AI spending as operational cost. Once that language takes hold, executive discussion shifts immediately towards scrutiny and reduction rather than opportunity.
To counteract this, leaders deliberately reframed the conversation using investment language tied directly to business return. One financial services leader explained:
“One of the challenges that I find is hearing data as a cost centre. That’s a worry for me because people are not going to see us as an investment. So I always try to change the narrative: it’s the money required rather than the cost required to do something. It’s basically the investment you need in order to realise the value.”
Changing the vocabulary changed the lens. Framing initiatives as investments anchored the conversation in ROI and strategic upside rather than operational expense.
2. Foundation work is ignored
Data quality improvements, pipelines and platform upgrades rarely excite executive stakeholders, even though AI initiatives depend on them.
Rather than pitch foundational work directly, peers embedded it inside visible AI use cases. A logistics-sector leader described the approach:
“We built a few proofs of concept to show what was possible but made it clear they were not built on any form of scalable architecture. That was the carrot. Then, once we got the excitement, we created the business case to deliver those use cases properly and included the foundations in there.”
As they put it:
“It’s like your plumber. You get one bill for the bathroom, whether or not you need new pipes.”
The result is that executives funded outcomes they could see. The underlying infrastructure travelled with those outcomes as part of the delivery package.
3. Approvals take too long
Lengthy funding approvals often signal a deeper problem: the business does not feel ownership of the proposal.
To change this dynamic, leaders began co-writing investment proposals with finance leaders and business sponsors from the outset. One peer described the difference:
“Instead of the business case coming from the data and analytics team, it was co-written with all of our CFOs. It came with their rubber stamp to say, yes, we’ve already approved this. It was much smoother than when the data team just came asking for more money to spend on shiny tech.”
Co-authorship turned potential sceptics into sponsors. Approval became confirmation rather than persuasion.
4. Bureaucracy slows delivery
Complex governance processes often duplicate conversations and delay projects without necessarily improving oversight.
To avoid these roadblocks, one organisation introduced “minimum acceptable governance” — keeping stage gates but allowing low-risk decisions to move forward without full committee review.
“Stage gates still exist, but we self-approve through some of them where the risk is low. Otherwise we’d lose weeks duplicating conversations.”
The approach preserved accountability while removing unnecessary friction that stalled delivery.
5. Ideas fail to land internally
Even well-argued proposals can stall if relationships are strained or if leadership has become resistant to internal messaging.
To recover momentum and authority, some organisations brought in external advisors to deliver the same recommendation from an impartial perspective.
As one leader admitted:
“We’ve done it before where we got consultants to come in completely impartially and make the same recommendation we’d been making. But they’ll listen to that.”
External voices created perceived neutrality and credibility, even when the message itself was unchanged.
6. Leaders are fatigued by AI hype
In organisations saturated with AI messaging, high-level strategies and vision decks can lose impact.
To build credibility, leaders focused on small demos and quick prototypes that showed concrete outcomes. Rather than waiting for full readiness, they delivered visible artefacts early in the journey.
As one leader noted:
“If you wait for everything to be ready, it’s never going to happen.”
Tangible demonstrations replaced abstract promise, helping executives see what AI could actually deliver.
7. ROI arguments fail to resonate
Projected ROI can feel speculative and unconvincing, particularly in regulated sectors.
To make the message resonate, leaders reframed AI initiatives around risk mitigation rather than growth alone.
One peer explained:
“If you frame use cases in terms of risk, that can resonate better, especially when you’re talking about compliance or data accuracy.”
Risk framing aligned AI initiatives with board-level priorities around resilience, compliance and operational stability.
8. Committees lack accountability and direction
AI steering committees often drift into passive oversight rather than active ownership.
To prevent this slide, one organisation rebranded its steering board as an AI Outcomes Group and gave business leaders explicit responsibility for use cases and benefits.
As the leader described:
“It’s senior leaders across the business, facilitated by data and AI, but not driven by us. They tell us what we should be working on. They sponsor the use cases, they own the benefits.”
The change shifted the focus from governance to delivery. Business sponsors defined success metrics and were accountable for outcomes.

These insights were drawn from a confidential DataIQ peer exchange among senior data and AI leaders.
Become a DataIQ client for full access to our exclusive peer intelligence platform.