Few technologies have generated as much excitement in financial services as artificial intelligence (AI).
In compliance and financial crime, the conversation has been relentless: AI as the answer to scale, cost pressure, talent shortages and ever more sophisticated criminal behaviour.
Yet as we move into 2026, a more honest picture is emerging. AI has delivered real gains, but not yet the breakthrough many expected.
The past year has largely played out as predicted. Banks and RegTechs have invested heavily in becoming “AI-ready”, strengthening cloud infrastructure, data foundations, security and operational maturity.
However, most of this momentum has centred on generative and agentic AI used to support analyst efficiency, rather than core financial crime detection.
These tools work. Drafting case notes, summarising alerts, supporting investigations and improving workflow productivity have delivered tangible time savings.
Bradley Elliott, CEO at RelyComply, highlights: “For stretched compliance teams, these early wins matter. But they have also revealed a more nuanced reality about where AI is and isn’t delivering the value anticipated. Ultimately, they have exposed the tension at the heart of today’s AI narrative.”
The gap between conversation and value
Despite the noise, most AI initiatives in banking are not delivering material impact at scale. Research from McKinsey and BCG shows that while the majority of Tier 1 banks are running AI pilots, only around 20% to 30% ever make it into full production or generate measurable ROI, particularly in risk and compliance.
Gartner estimates that up to 40% of agentic AI projects stall or underperform, often due to data quality, governance and explainability challenges, and it predicted that 30% of generative AI projects in 2025 would be abandoned after proof of concept.
Deloitte has singled out AML as one of the hardest domains for AI to move beyond support tooling because of prudential, regulatory and model-risk constraints.
“This is where the hype meets reality. Productivity tools are valuable, but they are not transformational on their own. They make existing processes faster; they don’t fundamentally change how financial crime is detected,” explains Elliott.
Incremental gains versus real breakthroughs
The real breakthrough still lies further down the stack, in transactional intelligence. This is where AI can move beyond supporting human workflows and into materially improving how financial crime is detected and prevented.
True progress means using AI to identify new and evolving criminal typologies across complex transaction networks, dynamically adjusting risk in near-real time, automatically closing clear false positives, and surfacing patterns that human analysts simply cannot detect at scale.
These capabilities go far beyond summarisation or workflow automation. They fundamentally change how risk is understood and managed.
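To make the false-positive point concrete, the sketch below shows one way “clear” false positives might be closed automatically: a calibrated classifier scores each alert, and only alerts below a very conservative probability threshold are closed without human review. The features, synthetic data and threshold are illustrative assumptions, not any vendor’s production logic.

```python
# Minimal sketch: auto-closing near-certain false positives with a
# calibrated classifier. All names, data and thresholds are illustrative.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic alert features (e.g. amount z-score, velocity, counterparty risk).
X = rng.normal(size=(5000, 4))
# Synthetic labels: 1 = genuinely suspicious, 0 = false positive.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate probabilities so the auto-close threshold is meaningful.
model = CalibratedClassifierCV(GradientBoostingClassifier(), cv=3)
model.fit(X_train, y_train)

p_suspicious = model.predict_proba(X_test)[:, 1]

AUTO_CLOSE_BELOW = 0.02  # conservative: close only near-certain false positives
auto_closed = p_suspicious < AUTO_CLOSE_BELOW
print(f"Auto-closed {auto_closed.mean():.1%} of alerts; "
      f"suspicious cases among them: {y_test[auto_closed].sum()}")
# Everything above the threshold stays in the human review queue.
```

The design choice here is deliberate asymmetry: the threshold is set for precision on the “close” decision, so the cost of the model’s uncertainty falls on the review queue rather than on missed risk.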
Yet this is also where adoption slows
The closer AI moves to core risk decisions, the higher the bar becomes. Data readiness, model governance, auditability and explainability are no longer nice-to-haves; they are prerequisites.
Regulators expect institutions to understand and justify how decisions are made, not simply that a model performs well in testing. Black-box approaches may look impressive in demos, but they struggle to survive real-world regulatory scrutiny.
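As a rough illustration of that explainability bar, the sketch below uses an inherently interpretable logistic-regression alert scorer, where each feature’s additive contribution to the log-odds can be read off directly for any single decision. The feature names and data are hypothetical.

```python
# Minimal sketch: a per-alert explanation from an inherently interpretable
# model. Feature names and data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "txn_velocity", "counterparty_risk", "geo_risk"]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = (X @ np.array([1.2, 0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one alert: each term is that feature's contribution to the log-odds.
alert = X[0]
contributions = model.coef_[0] * alert
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.2f}")
print(f"{'intercept':>20}: {model.intercept_[0]:+.2f}")
# An investigator, or a regulator, can trace exactly why this alert scored high.
```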
Elliott believes this is why “2026 will not be defined by ‘AI everywhere’, but by organisations that can responsibly operationalise AI where it matters most.”
Why humans still matter
This is why human oversight remains central to the next phase of AI in AML. Not as a brake on innovation, but as the mechanism that allows AI to scale responsibly.
While financial institutions broadly welcome the potential of AI in advanced AML and KYC use cases, adoption also compounds an existing skills challenge.
Organisations need specialist data and risk engineers alongside compliance teams that understand how to work with AI-driven systems day-to-day.
Compliance teams must be able to interrogate model outputs, understand how conclusions were reached, challenge errors and correct bias.
Feedback loops between human expertise and machine learning are essential to improving performance over time – particularly in environments where criminal behaviour constantly evolves.
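As one illustration of such a loop, the sketch below folds analyst dispositions back into the training set for the next model version. The Disposition structure and retrain function are assumptions for the example, not a real product’s API.

```python
# Minimal sketch of a human-in-the-loop feedback cycle: investigator
# dispositions become labels for the next model version. All names and
# structures are illustrative assumptions.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class Disposition:
    alert_id: str
    features: list        # the feature vector the model originally scored
    analyst_verdict: int  # 1 = confirmed suspicious, 0 = closed as false positive

def retrain(history_X, history_y, feedback):
    """Fold analyst outcomes into the training set and fit a new model version."""
    X = np.vstack([history_X] + [d.features for d in feedback])
    y = np.concatenate([history_y, [d.analyst_verdict for d in feedback]])
    return LogisticRegression().fit(X, y), X, y

# Example: two new dispositions arrive from the case-management queue.
rng = np.random.default_rng(2)
hist_X, hist_y = rng.normal(size=(500, 3)), rng.integers(0, 2, 500)
feedback = [
    Disposition("A-1001", rng.normal(size=3).tolist(), 1),
    Disposition("A-1002", rng.normal(size=3).tolist(), 0),
]
model, hist_X, hist_y = retrain(hist_X, hist_y, feedback)
```

In practice the retrained model would also pass validation and model-risk review before deployment; the point of the sketch is simply that analyst judgment is captured as data, not lost in case notes.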
The tension, then, is not between humans and machines, but between speed and trust. Moving fast is easy at the productivity layer.
Moving safely into decision-making is much harder. This will be a defining challenge of the next phase, separating organisations that remain stuck at productivity gains from those able to deploy AI confidently in core risk decisions.
This context matters. It reinforces an uncomfortable truth: AI productivity gains alone are not the end goal for compliance.
From hype to impact
The institutions that lead will be those willing to confront this tension honestly. AI will continue to deliver efficiency gains, and those gains should not be dismissed.
But the real value lies in pushing beyond surface-level wins and doing the harder work of embedding AI into the core of financial crime detection, without sacrificing transparency, accountability or regulatory confidence.
AI will not transform AML through hype or tooling alone. The breakthrough will come when organisations align technology, data, governance and human expertise to move AI from promise into practice. That is where the next chapter of compliance will be written.