When Fintech News Network convened a panel of fraud and risk leaders from across Asia Pacific, one thing quickly became clear.
The fraud landscape is shifting at a pace that outstrips most financial institutions’ ability to respond.
From deepfakes to synthetic identities, and from fraud-as-a-service on the dark web to real-time manipulation of biometric data, criminals are deploying artificial intelligence with a level of speed and precision that few could have imagined even a few years ago.
Globally, over US$1 trillion was lost to scammers in 2024, and the growing use of AI is set to accelerate the problem. Some reports indicate a 400% increase in generative AI-enabled scams.
The warning is not abstract. With generative AI projected to drive US$40 billion in fraud losses by 2027 in the United States alone, the urgency for banks and fintechs across Asia Pacific could not be more pressing.
The New Face of Fraud
Gabby Tomas, Operations Group Head at Rizal Commercial Banking Corporation (RCBC), has been tracking this evolution for years.

For him, the concern is not just the sophistication of the tools but their accessibility. The barriers to entry, he noted, have fallen significantly: threat actors no longer need advanced skills to mount a fraud operation.
“You just have to know who to talk to,” said Gabby.
Albert Dela Cruz, Chief Information Security Officer at GoTyme Bank, echoed that sentiment but sharpened the point.
He described this as the industrialisation of fraud: attacks are now scalable and adaptive, moving beyond static impersonation to real-time manipulation that can shift dynamically mid-conversation.
That adaptability is one of the most unsettling aspects of the new wave of scams. It makes fraud feel less like isolated attempts and more like an organised industry.
Adding a regulatory perspective, Chen Jee Meng, Head of Financial Crime Compliance at CIMB Singapore, pointed to the sheer scale of what Singapore is already facing.
In 2024, Singapore recorded almost a 1,500% surge in deepfake fraud cases, with tests showing only a quarter of people could reliably distinguish fake videos from real ones.
The takeaway is clear: fraud has evolved into an organised industry, and even well-informed, educated consumers are struggling to tell fact from fabrication.
Biometrics in the Crosshairs
The panel then turned to authentication, once considered the strongest line of defence. For Lukas Bayer, who leads biometrics, user acquisition and experience at Jumio, the battlefield has shifted dramatically.
Lukas pointed out that the leading attack vectors right now are deepfakes and injection attacks.

With generative AI, criminals can now easily generate not just selfies but entirely synthetic ID images, making large-scale attacks possible.
We referenced a recent remark by OpenAI’s Sam Altman that “AI has fully defeated most of the ways that people authenticate currently,” and asked Lukas whether this was true.
His answer showed a glimmer of hope.
“Biometrics are far from defeated,” Lukas replied.
“That’s why you need a multi-layer defence strategy where you combine biometrics with device risk or identity intelligence.”
The idea of layering defences would emerge again and again in the conversation.
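The layered approach Lukas describes can be sketched as a decision that no single signal settles on its own. The following is a minimal, purely illustrative sketch; the signal names, thresholds, and outcomes are hypothetical, not any vendor’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    biometric_match: float  # 0.0-1.0 face-match confidence
    liveness: float         # 0.0-1.0 anti-deepfake / injection check
    device_risk: float      # 0.0-1.0, higher means riskier device
    identity_risk: float    # 0.0-1.0, higher means likelier synthetic ID

def layered_decision(s: VerificationSignals) -> str:
    """No single layer decides alone: a strong selfie match is still
    rejected or escalated if liveness, device, or identity intelligence
    disagrees."""
    if s.liveness < 0.5:            # possible deepfake or injection attack
        return "reject"
    if s.biometric_match < 0.8:     # biometric layer fails outright
        return "reject"
    if s.device_risk > 0.7 or s.identity_risk > 0.7:
        return "manual_review"      # layers disagree: escalate, don't auto-approve
    return "approve"
```

The point of the sketch is the structure, not the numbers: a deepfake that fools the face-match layer can still be caught by the liveness, device, or identity layers.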
The Human Weak Link
For all the talk of sophisticated attacks, the panellists were unanimous that the weakest link often remains the human being.
Gabby stated that the vulnerability lies simply with the user.
“No matter what kind of identification measures we put in, if fraudsters get into the psychology of the users, whatever measures you put in will be defeated,” he said.
We are often led to believe that the most targeted victims are the elderly or the less tech-literate. Recent news suggests otherwise. In Singapore, a company’s finance director was nearly tricked into transferring over US$499,000 to scammers who used deepfakes to impersonate the firm’s CEO. Just across the border, in Singapore’s closest neighbour, Malaysia, there was the infamous Maybank CFO incident.
Chen agreed. “The issue is not limited to the elderly or the less tech-savvy,” he added.
The panel referenced recent cases in Hong Kong and Singapore where executives and professionals fell victim to deepfake scams involving millions of dollars.
There is, it seems, an inescapably human element of vulnerability.
“Sometimes all they need is one moment of distraction,” he said.
Are Banks Ready?
This raised the inevitable question of whether incumbent banks, digital banks and fintech players are prepared. For Albert, the answer depends on the type of institution. He argued that cloud-native players have an edge.

Gabby, representing one of the Philippines’ largest incumbent banks, stressed that the response cannot be left to technology alone.
He pointed to the need for stronger collaboration with regulators and law enforcement, supported by new laws in the Philippines that empower authorities to pursue mule accounts and lift traditional bank secrecy restrictions in fraud investigations.
“We are moving where our clients need us to be,” Gabby added.
Security and Experience
If banks know they must strengthen defences, the challenge is how to do so without alienating customers.
Lukas said the goal is to “add as much friction as possible to make it harder for fraudsters” while making sure that end users do not feel it.
Gabby, however, acknowledged that some degree of inconvenience for end users is inevitable.
“For better or worse, there will be a bit more friction points with clients if we want to secure them. Even if you give them a really good experience, if they find they’ve been compromised, then the experience is terrible,” he said.
Albert reframed the solution as “intelligent friction”.
According to him, these are adaptive controls, such as passive biometrics or risk scoring, that activate only when needed.
The idea of intelligent friction may prove to be the middle ground, allowing institutions to harden their defences without overwhelming customers with unnecessary barriers.
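As a rough illustration of intelligent friction, a session-level risk score can decide how much verification a customer sees, so that low-risk sessions pass silently and only risky ones are challenged. The weights, thresholds, and step-up names below are made up for illustration:

```python
def session_risk(amount: float, new_payee: bool, unusual_device: bool) -> float:
    """Toy risk score from transaction context (illustrative weights only)."""
    score = 0.0
    score += min(amount / 10_000, 1.0) * 0.5  # larger transfers are riskier
    score += 0.3 if new_payee else 0.0        # first payment to a new recipient
    score += 0.2 if unusual_device else 0.0   # login from an unfamiliar device
    return score

def required_friction(risk: float) -> str:
    """Adaptive controls: friction scales with risk instead of being uniform."""
    if risk < 0.4:
        return "none"            # passive biometrics run in the background
    if risk < 0.7:
        return "otp"             # light step-up challenge
    return "liveness_check"      # strongest challenge, rarely triggered
```

A small transfer to a known payee on a familiar device triggers nothing, while a large transfer to a new payee from an unusual device gets the strongest check, which is exactly the trade-off the panel described.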
A Shared Battle
As the discussion wound down, the panellists looked ahead. They all agreed that the fight against AI-driven fraud is only beginning.
“We see the AI arms race accelerating, but there’s also the need to have more collaborative defence,” said Albert Dela Cruz, who called for continuous, context-aware authentication to become the norm in digital banking.
For Chen, the implications go beyond technology. He stressed that the response must also include people and policy.
“Banking and compliance practitioners will continue to challenge whether risk frameworks are adequate … there might also be a push towards hiring new talent and training employees,” he said.
That combination of better defences, smarter regulation, and deeper expertise may be what it takes to keep pace in the years ahead.
The consensus was clear: no single actor can win this fight alone. Banks, fintechs, regulators, telcos and consumers all have a role to play.
The balance will be delicate. Too much friction, and customers may resist the very measures designed to protect them. Too little, and fraudsters will exploit the gaps. Either way, the stakes are too high for complacency.
Tune in to the webinar on How AI is Transforming FSI’s Approach to Fraud below.
Featured image: Edited by Fintech News Singapore based on an image by Freepik.