Moderator: Justin Heller, Executive Advisor & Former Chief Data Officer, Synchrony Financial
- Tifini McCann, Vice President, Data and Analytics, Otsuka
- Sudarsan Kumar, Data and Analytics Data Delivery Director (Wholesale Chief Data Office), Truist
- Moataz Mahmoud, SVP, Enterprise Data Management, First Citizens Bank
The full article and learnings are available to DataIQ clients on our members-only Hub.
Data and AI leaders from banking, pharmaceuticals, and legal services explored how highly regulated organizations are balancing innovation with growing compliance complexity.
Rather than viewing regulation as a blocker, the discussion focused on the organizational realities underneath it: fragmented operating models, evolving AI governance, technical debt, and the challenge of scaling controls consistently.
The urgency is increasing as boards push aggressively on AI adoption while regulators begin signaling expectations around explainability, oversight, and human accountability.
- Is regulation slowing innovation, or are internal operating models the bigger constraint?
- How should organizations balance AI innovation with risk management?
- Which AI risks are proving hardest to anticipate and control?
- How do you scale governance without slowing delivery?
- How are boards and executives changing their risk appetite around AI?
- How should leaders evaluate AI investments where ROI is uncertain?
- How do you prevent risk culture becoming organizational paralysis?
Treat governance as infrastructure, not oversight
Several panelists argued that regulation itself is rarely the true inhibitor and that the bigger operational issues stem from fragmented accountability and inconsistent execution.
Moataz Mahmoud from First Citizens Bank pointed to “fragmented operating model, unclear ownership, or ambiguity in lineage and definitions” as the recurring cause of delivery friction. Teams repeatedly remediate the same issues because governance is disconnected across domains.
Sudarsan Kumar from Truist reframed controls as foundational engineering rather than bureaucracy, stating that “risk and regulatory controls are not necessarily a brake. They’re actually ending up being a competitive advantage and an accelerator as you build it from the ground up.” His analogy landed clearly: “The best brakes that you find in any car are in an F1 car.”
The strategic implication is significant: mature organizations are embedding governance into the delivery architecture itself, so that lineage, auditability, ownership, and controls become reusable operational assets rather than post-hoc review processes.
The biggest AI challenge is ambiguity
Tifini McCann from Otsuka Pharmaceuticals drew a distinction between established regulation and today’s evolving AI landscape. “If the regulations are clear and unambiguous, it’s much easier to work within those guardrails.”
The difficulty is that organizations are now operating in anticipatory mode. Legal and compliance teams are reacting to signals, emerging state-level AI laws (for those operating in the US), and early enforcement actions without fully established standards.
Tifini referenced what she described as the FDA’s first written warning related to AI overreliance, where AI-generated SOPs and specifications allegedly lacked sufficient human review, summarizing that “we don’t want to be the folks everybody else is learning from.”
That uncertainty is pushing organizations towards tiered governance models. As an example, Tifini described how Otsuka categorizes use cases into low-, moderate-, and high-risk segments, applying different levels of oversight depending on explainability, customer impact, auditability, and data provenance.
The broader lesson: blanket AI governance slows adoption. Risk-tiering creates room for experimentation while protecting genuinely sensitive workflows.
AI amplifies existing weaknesses faster than organizations can manage them
The panel largely agreed that AI is not introducing entirely new categories of risk. It is exposing weaknesses that already existed in enterprise data environments.
Moataz described AI as “amplifying existing risk rather than creating new risk”. Poor lineage, inconsistent governance, weak ownership, and siloed controls become more dangerous when AI systems consume data at scale across multiple business functions simultaneously.
Tifini added a further distinction between traditional systems and AI systems: “Traditional system risks are more deterministic… AI systems are more dynamic.” Models evolve with data, user interaction, and deployment context, meaning validation cannot remain a one-off activity.
This shifts governance from static approval towards continuous monitoring, meaning that the control problem becomes operational rather than procedural. To succeed, organizations now need observability, audit trails, human review mechanisms, and governance embedded directly into runtime environments.
Some AI investments cannot be justified with traditional ROI
The panel challenged conventional approaches to investment justification.
Tifini argued that some AI programs should be treated more like R&D portfolios than operational efficiency projects. In areas such as patient finding for rare diseases, the commercial value cannot be fully quantified upfront because discovery itself creates the opportunity.
The comment “you may not be able to anticipate what that ROI is going to be until you actually perform the experiment” changes governance conversations significantly. Instead of demanding immediate productivity metrics, organizations need staged investment models with checkpoints, portfolio thinking, and the willingness to pivot based on emerging evidence.
Moataz made a related point around infrastructure investments. Improving lineage, governance, and ownership may appear expensive initially, but it reduces long-term rework and accelerates delivery over time.
For many leaders, the challenge is now translating those trade-offs into language executives and P&L owners can operationalize. Translation was a recurring theme in the recent DataIQ reports The End of AI Theatrics and The Rise of Decision Intelligence.