The real barrier to scale
Most organisations can build sophisticated AI, but far fewer manage to turn it into something people genuinely want to use — something that integrates into daily life and delivers ROI. At DataIQ’s World Congress, Mara Pometti, Mastercard’s Vice President of Agentic Experience Strategy, argued that adoption, not accuracy or performance, is the real test of value.
“Every poor AI interaction is a broken relationship. When AI feels irrelevant, customers churn, and revenue drops. This isn’t a UX problem. It’s a P&L problem.” – Mara Pometti, Vice President Agentic Experience Strategy, Mastercard
Drawing on her experience building AI products at IBM, McKinsey QuantumBlack and now Mastercard, where she leads the design and development of a cutting-edge agentic platform, she outlined how designing the AI experience for users (the way systems interact with humans) determines whether pilots move beyond proof of concept.
“To scale AI, we don’t need products that work ‘well enough’; we need meaningful experiences people want to come back to.”
Four dimensions of AI experience
Pometti describes AI experience as the bridge between intelligence and outcome, the layer that makes capability usable. She breaks it down into four human dimensions that determine whether people stay engaged:
- Specificity: Does it understand me and my context?
- Attention: Does it prioritise what matters to my goals?
- Trust: Is it safe, explainable, and reliable?
- Agency: Does it return control to me for important decisions?
If any of these is weak, adoption decays even when the system performs technically well. An LLM can hit 95% accuracy on a benchmark, but if the output feels off, unhelpful or unspecific, the user won’t stay. Technical performance isn’t the same as human alignment, and adoption hinges on the latter.
Start with the outcome
Pometti’s team begins every build by first defining the golden outcome (what the user expects) and then working backwards to the technical design choices and architecture that guarantee it.
For example, a request like “plan next quarter’s cash flow” fails without persistent context and memory: which quarter, which previous conversation thread, which data? A good system retrieves earlier sessions and resolves “Q3” correctly because its memory and reasoning layers are designed for that specific user’s context.
The takeaway: start with what success looks like for the user, then engineer backwards.
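As a rough illustration only (the class and function names are invented for this sketch, not Pometti’s actual system), resolving a relative reference like “Q3” against stored session context rather than guessing from the prompt alone might look like:

```python
from datetime import date

class SessionMemory:
    """Durable facts recalled across a user's sessions."""
    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)

def resolve_quarter(reference, memory, today=None):
    """Resolve 'next quarter' or an explicit 'Q3' using stored context."""
    today = today or date.today()
    current_q = (today.month - 1) // 3 + 1
    if reference == "next quarter":
        q = current_q % 4 + 1
        year = today.year + (1 if current_q == 4 else 0)
    else:
        # Explicit quarter like "Q3": disambiguate the year from memory
        # instead of silently assuming the current year.
        q = int(reference[1])
        year = memory.recall("planning_year", today.year)
    return f"{year}-Q{q}"

memory = SessionMemory()
memory.remember("planning_year", 2026)  # set in an earlier session
print(resolve_quarter("Q3", memory, today=date(2025, 11, 1)))  # 2026-Q3
```

The point of the sketch is the design choice, not the code: ambiguity is resolved by a memory layer built for that user, so “Q3” means what the user meant last session.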
Building trust and control
Trust is not a statement but a system. Pometti’s team begins by mapping potential errors and risks to understand how to mitigate them and build the right controls and evaluation mechanisms. Without that feedback loop to ensure trust and safety, no pilot can scale.
As AI agents begin acting on behalf of users, organisations must also define how much autonomy they should have. Pometti developed a framework to identify agency tiers by use case: some agents can observe and propose; others can act only within strict limits. Clear boundaries and escalation paths are designed from the outset to preserve human agency. Pometti noted that delegation tiers, user intent recognition, and trust are also core components of Mastercard’s Agent Pay.
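A tiered-autonomy model like the one described could be sketched as follows (the tier names, spend limit, and routing logic are illustrative assumptions, not Mastercard’s actual framework):

```python
from enum import Enum

class AgencyTier(Enum):
    OBSERVE = 1       # may analyse and report only
    PROPOSE = 2       # may draft actions for human approval
    ACT_BOUNDED = 3   # may act autonomously within strict limits

def decide(tier, action_cost, limit=100.0):
    """Route an action based on the agent's tier and a hard spend limit."""
    if tier is AgencyTier.OBSERVE:
        return "report_only"
    if tier is AgencyTier.PROPOSE:
        return "await_approval"
    # ACT_BOUNDED: autonomous within the limit, escalate beyond it
    return "execute" if action_cost <= limit else "escalate_to_human"

print(decide(AgencyTier.ACT_BOUNDED, 40.0))   # execute
print(decide(AgencyTier.ACT_BOUNDED, 500.0))  # escalate_to_human
print(decide(AgencyTier.PROPOSE, 40.0))       # await_approval
```

The escalation path is encoded, not left to judgment at runtime: an agent that exceeds its boundary always returns control to a human.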
Leadership lessons
Pometti underscores that scaling AI isn’t only about better AI systems but about better experiences, built by:
- Designing for adoption from day one.
- Treating trust, context, and user control as product features.
- Measuring ROI through experience metrics like task completion, return rate, and satisfaction, not model performance.
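The experience metrics above could be computed from simple session logs, as in this sketch (the field names and sample data are invented for illustration):

```python
# Each record is one user session with an outcome and a satisfaction rating.
sessions = [
    {"user": "a", "task_done": True,  "rating": 5},
    {"user": "a", "task_done": True,  "rating": 4},
    {"user": "b", "task_done": False, "rating": 2},
]

users = {s["user"] for s in sessions}

# Task completion: share of sessions where the user's goal was achieved.
task_completion = sum(s["task_done"] for s in sessions) / len(sessions)

# Return rate: share of users who came back for more than one session.
returning = {u for u in users
             if sum(1 for s in sessions if s["user"] == u) > 1}
return_rate = len(returning) / len(users)

# Satisfaction: mean rating across all sessions.
avg_satisfaction = sum(s["rating"] for s in sessions) / len(sessions)

print(round(task_completion, 2), round(return_rate, 2),
      round(avg_satisfaction, 2))  # 0.67 0.5 3.67
```

None of these numbers mention model accuracy; they measure whether people succeed and come back, which is the adoption signal the article argues for.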
Join DataIQ to access the full article and all of Pometti’s insights.