A.I.’s strengths in pattern recognition, anomaly detection, and real-time monitoring can be undercut by A.I.-powered deepfake crimes.
While financial institutions are in the early stages of using artificial intelligence (A.I.) to strengthen know-your-customer (KYC) and anti-money laundering (AML) processes, cybercriminals are rapidly advancing their tactics to bypass these very measures.
One troubling development for compliance teams is the recent emergence of ProKYC, an A.I.-powered deepfake tool that hackers are weaponizing to circumvent KYC protocols on cryptocurrency exchanges. The technology enables malicious actors to generate fake identities, complete with convincing identification documents and deepfake videos designed to fool facial recognition systems. Such tactics have already proven effective: the defenses of the Dubai-based cryptocurrency exchange Bybit were breached in a heist orchestrated by hackers supported by the North Korean government, costing the exchange $1.5 billion in cryptocurrencies, according to F.B.I. officials.
Israel-based network security company Cato Networks issued a stark warning in its latest cybersecurity report, stating that this development marks a new battleground in the fight against cybercrime.
Simultaneously, however, efforts are underway across the industry to harness A.I. for good. Thanks to its strengths in pattern recognition, anomaly detection, real-time monitoring, and adaptability, A.I. has considerable potential in combating financial crime.
It can analyze massive volumes of transaction data far faster than a human, flagging unusual behavior that may signal fraud, money laundering, or insider trading. It could theoretically provide continuous monitoring and alert compliance teams to potential threats, while machine learning models could be used to keep pace with the changing tactics of criminals.
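To make the anomaly-detection point concrete, the sketch below shows how an unsupervised model might flag outlier transactions. It is a minimal illustration using scikit-learn's IsolationForest on made-up features (amount, hour of day, transaction count over the prior 24 hours), not a depiction of any vendor's system mentioned in this article.

```python
# Minimal sketch: flagging anomalous transactions with an unsupervised model.
# Features and values are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: amount, hour of day, transactions in the prior 24 hours.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1_000),  # typical amounts
    rng.integers(8, 20, size=1_000),                 # business hours
    rng.poisson(3, size=1_000),                      # a few transactions per day
])

# A couple of suspicious-looking transactions: large, late-night, high-frequency.
suspicious = np.array([
    [25_000.0, 3, 40],
    [18_500.0, 2, 35],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for points the model treats as anomalous, 1 otherwise.
print(model.predict(suspicious))   # likely [-1 -1]
print(model.predict(normal[:5]))   # mostly 1s
```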
A.I.-enhanced offerings from fintech providers are also emerging, with bold claims of redefining the space.
In April, officials at financial crime, risk, and compliance services provider NICE Actimize unveiled a revamped version of the company’s X-Sight platform, supercharged with machine learning, natural language processing, and generative A.I. The company says the overhaul marks a major leap forward in making financial crime prevention more automated and proactive.
In March, open-source intelligence provider Fivecast also integrated A.I.-powered insights into its platform in a bid to bolster financial crime investigations.

James Ferrarelli
For those on the buy side, the role of A.I. in financial crime prevention is multifaceted. James Ferrarelli, executive vice president and chief operating officer (COO) of $4.67 trillion investment manager State Street Global Advisors, currently views it as both a risk mitigant and a risk factor, depending on the circumstances.
From a risk mitigation perspective, SSGA believes A.I. will enhance the detection of suspicious behavior by processing vast amounts of data far more efficiently than traditional tools. This results in more accurate identification of financial crimes such as fraud and money laundering.
“A.I.-driven systems can reduce the number of false positives in transaction monitoring, making the process more efficient and allowing compliance teams to focus on genuine threats,” Ferrarelli says. Additionally, it empowers compliance teams to take a more proactive stance on risk, offering real-time alerts and recommendations that can “ensure the stability of portfolios,” he adds.
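As an illustration of the false-positive point, the following sketch scores alerts from a hypothetical rules engine by their likelihood of being genuine, so that analysts can review the riskiest items first. The features and labels are invented for the example and do not describe SSGA’s systems.

```python
# Minimal sketch: scoring rule-based alerts to cut false-positive workload.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Each row is a historical alert: [amount z-score, jurisdictions involved, prior alerts on account].
X = rng.normal(size=(500, 3))
# Analyst dispositions for those alerts: 1 = confirmed suspicious, 0 = false positive.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1.2).astype(int)

clf = LogisticRegression().fit(X, y)

# Score a new batch of alerts and surface the riskiest ones for review first.
new_alerts = rng.normal(size=(10, 3))
risk = clf.predict_proba(new_alerts)[:, 1]
for i in np.argsort(risk)[::-1][:3]:
    print(f"alert {i}: estimated probability of a true positive = {risk[i]:.2f}")
```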
Yet numerous risk factors are holding back the adoption of A.I. for fighting financial crime. First, the complexity of A.I. models can introduce risks if they are not properly governed. Ensuring the transparency, explainability, and robustness of A.I. models is crucial to avoiding unintended consequences. As such, firms and regulators are calling for effective controls and oversight mechanisms to manage the risks tied to A.I. models, such as validating and back-testing them regularly.
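By way of illustration, a periodic back-test of a deployed alert-scoring model might look like the sketch below, with hypothetical precision and recall thresholds standing in for whatever limits a firm’s own model-governance policy sets.

```python
# Minimal sketch: periodic back-test of a deployed alert-scoring model against
# recent, analyst-labeled alerts. Thresholds are hypothetical governance limits.
from sklearn.metrics import precision_score, recall_score

MIN_PRECISION = 0.70
MIN_RECALL = 0.80

def backtest(model, X_recent, y_recent):
    """Re-score recent labeled alerts and flag the model if performance has drifted."""
    preds = model.predict(X_recent)
    precision = precision_score(y_recent, preds, zero_division=0)
    recall = recall_score(y_recent, preds, zero_division=0)
    passed = precision >= MIN_PRECISION and recall >= MIN_RECALL
    print(f"precision={precision:.2f}, recall={recall:.2f}, within limits={passed}")
    return passed

# For example, backtest(clf, X_last_quarter, y_last_quarter) could be run each
# quarter against the model from the previous sketch, using freshly labeled alerts.
```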
Data quality is another critical factor. As Ferrarelli points out, “The effectiveness of AI in financial crime prevention heavily depends on the quality and availability of data. Poor data quality can lead to incorrect insights and decisions.”
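A simple illustration of the data-quality point: the checks below screen a transactions feed for missing, duplicated, or implausible values before it reaches a monitoring model. The column names are assumptions made for the example.

```python
# Minimal sketch: basic data-quality checks on a transactions feed.
# Column names are hypothetical.
import pandas as pd

def quality_report(txns: pd.DataFrame) -> dict:
    """Count basic defects in a transactions feed before it is used for monitoring."""
    return {
        "rows": len(txns),
        "missing_counterparty": int(txns["counterparty_id"].isna().sum()),
        "duplicate_txn_ids": int(txns["txn_id"].duplicated().sum()),
        "negative_amounts": int((txns["amount"] < 0).sum()),
        "future_dated": int((pd.to_datetime(txns["booked_at"]) > pd.Timestamp.now()).sum()),
    }

# Small example feed with one defect of each kind.
txns = pd.DataFrame({
    "txn_id": [1, 2, 2, 3],
    "counterparty_id": ["A", None, "B", "C"],
    "amount": [100.0, -50.0, 200.0, 75.0],
    "booked_at": ["2025-01-02", "2025-01-03", "2025-01-03", "2100-01-01"],
})
print(quality_report(txns))
```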
On top of all this, the evolving regulatory landscape poses challenges for firms integrating A.I. into their compliance frameworks. In the realm of financial crime, institutions must demonstrate that they have strong compliance programs aligned with both global and domestic KYC laws, helping to prevent money laundering, terrorist financing, and other related offenses.
A recent survey of risk and compliance officers conducted by LSEG Risk Intelligence found that “growing levels of financial crime and regulatory scrutiny are forcing organizations to step up spending on compliance, but they are less sold on AI as the standalone solution.”

Daniel Hartnett
Daniel Hartnett, director of Third-Party Risk Intelligence at LSEG, says that, in his experience working with large financial institutions as a provider of enhanced due diligence reports, firms are tackling financial crime along two, often coordinated, tracks:
- Increasingly expecting, or pushing, due diligence providers to adopt technology upgrades such as A.I.-based enhancements to improve the quality, speed, and cost-efficiency of reports;
- Asking more frequently about how A.I. is being used in the preparation of those reports.
“They don’t want A.I. simply for the sake of A.I. They want the outcome to be safe, reliable, and transparent. The trend we are seeing so far is that it’s okay for tested A.I. as an enabling tool for certain aspects of report production, but it’s not okay for unvetted or excessive use of A.I. in reports that omit ultimate human oversight of a report’s findings,” Hartnett says.
As such, there is a continued assumption that humans remain involved in the process, typically at the back end, reviewing A.I.-generated outputs to ensure that any potential errors are caught.
“Unfortunately, you wouldn’t always know, because a lot of those hallucinations are very convincing in the sense that they make logical sense when you’re reading them. But given the role of our clients, the compliance officers who buy our products, they’re just not comfortable yet. That’s why we haven’t gone down that route,” he adds.
Even LSEG itself places restrictions on the use of A.I. The company cannot roll out any product, whether a due diligence report or otherwise, without it first being reviewed by its A.I. Review Board. The board’s role is to thoroughly examine the product before it is released, ensuring there are no “hallucinations” or errors that could undermine the business and put clients at risk.
It’s clear that A.I. holds significant promise in financial crime prevention, yet without proper oversight, it could introduce new risks. Meanwhile, as financial institutions work to integrate these technologies at their own pace, cybercriminals are quick to exploit new innovations, creating an ever-growing challenge for compliance teams.