According to Deloitte, the financial services sector is one of the largest adopters of artificial intelligence (AI), with over 60% of financial institutions leveraging AI-powered solutions for decision-making, risk assessment, and automation.
However, as AI systems become more integrated into financial services, regulators worldwide are scrambling to develop oversight frameworks and regulations that ensure responsible usage without stifling innovation.
The uncertainty surrounding AI regulations has left financial institutions grappling with compliance challenges, particularly in regions like the Asia-Pacific (APAC), where regulatory approaches vary widely.
Unlike the European Union’s (EU) comprehensive AI Act or the United States’ sectoral approach, APAC’s regulatory landscape remains highly fragmented.
So, how can the economies in this region harness the benefits of AI while ensuring ethical use, data security, and fairness?

The West and Its Wide Range of AI Regulations for Financial Services
Before turning to this region’s own frameworks and regulations, let’s first take a glance at what the global “superpowers” have in place and how their approaches differ from APAC’s.
The EU Approach to AI Regulation

The European Union (EU) has positioned itself as a global leader in AI regulation through the introduction of the EU AI Act. For starters, the AI Act is a comprehensive framework aimed at addressing AI risks while fostering innovation. It classifies AI systems into four categories: prohibited, high-risk, limited-risk, and minimal-risk.
AI applications in financial services, such as credit scoring and fraud detection, are considered high-risk due to their potential impact on individuals and market stability. As a result, they are subject to stringent compliance requirements to ensure fairness, security, and accountability.
The Act mandates that high-risk AI systems adhere to transparency obligations, undergo rigorous human oversight, and implement risk mitigation measures, all designed to prevent biased or discriminatory practices.
Additionally, the legislation emphasises consumer protection by requiring financial institutions to ensure that AI-driven decision-making does not lead to adverse or unjust outcomes.
The Act also aligns with broader EU regulations covering intellectual property, data privacy, and financial services, ensuring a holistic approach to AI governance. Furthermore, the EU is a signatory to the AI Convention, a legally binding international agreement designed to establish uniform AI regulatory principles globally.
This agreement reinforces key principles such as oversight, accountability, and safe innovation, setting a precedent for responsible AI development.
The U.S. and U.K. AI Regulatory Frameworks for Financial Services

The regulatory landscape in the United States is markedly different from that of the EU. Instead of a unified AI Act, the U.S. relies on existing regulatory agencies, such as the Securities and Exchange Commission (SEC) and the Consumer Financial Protection Bureau (CFPB), both of which oversee AI compliance within financial services.
Although there is no overarching AI-specific legislation at the federal level, various state-level initiatives, including California’s AI Bill, have emerged to address accountability and ethical concerns related to AI-driven decision-making in financial services.
Recent efforts by the U.S. government include a bipartisan Senate report that outlines key policy areas: privacy, liability, transparency, and safeguards against AI risks.
This initiative, along with the Biden Administration’s Executive Order 14110 on AI regulation, signals an increasing focus on AI governance at the federal level.
The Executive Order directs over 50 federal entities to implement more than 100 AI-related actions, including enhancing cybersecurity protections, mitigating AI bias, and ensuring that financial institutions use accurate and representative data in their AI models.
In contrast, the U.K. has taken a more principles-based approach to AI regulation. Rather than imposing a rigid legal framework, the U.K.’s Financial Conduct Authority (FCA) has issued non-binding guidelines for AI use in financial services, highlighting fairness, transparency, and accountability.
This flexible approach allows financial institutions to innovate while adhering to regulatory expectations.
The U.K. government has also signalled plans to introduce binding requirements for developers of highly capable AI models, a move that could align its strategy more closely with the EU’s in the future.
Overall, while the EU, U.S., and U.K. take different regulatory approaches to AI in financial services, they share common goals, most notably ensuring that AI systems are fair, transparent, and accountable.
The challenge for APAC regulators now is to determine which elements of these models best suit the region’s diverse and rapidly evolving financial landscape.

APAC’s Landscape Is Fragmented but Evolving
Unlike the EU’s harmonised framework, APAC’s AI regulations vary widely, reflecting the region’s diverse economies and regulatory maturity.
Some jurisdictions have introduced AI-specific legislation, such as China’s Internet Information Service Algorithmic Recommendation Management Provisions, which requires financial AI models to undergo government audits.
South Korea has introduced the Act on Fostering the AI Industry, classifying certain AI models as high-risk, similar to the EU framework. Vietnam has incorporated AI governance into its Law on Insurance Business, mandating risk assessments for AI-powered insurance decisions.
Other APAC countries have opted for guidelines and ethical frameworks rather than legislation. Singapore’s Monetary Authority of Singapore (MAS) promotes the FEAT (Fairness, Ethics, Accountability, and Transparency) principles, offering voluntary AI governance standards for financial firms.
Japan’s Ministry of Economy, Trade, and Industry has released the Governance Guidelines for AI, emphasising transparency and human oversight. Australia’s Artificial Intelligence Ethics Framework provides voluntary guidelines for responsible AI use in financial services.
Many APAC countries, including Indonesia, Malaysia, and the Philippines, are still developing national AI strategies but lack enforceable AI regulations for financial services. This regulatory gap creates uncertainty for financial institutions operating across borders.
With no unified APAC AI framework, multinational financial institutions face compliance challenges when navigating differing national regulations. Unlike the EU, where AI laws apply across all member states, APAC remains highly fragmented.
Many financial regulators in APAC lack the technical expertise to evaluate AI systems effectively.
Artificial intelligence models, particularly generative AI, introduce complexities such as black-box decision-making, making regulatory oversight difficult.

AI Regulations in APAC Financial Services Must Learn from Global Best Practices
AI in financial services relies on vast data pools, often spanning multiple jurisdictions, yet APAC countries have widely varying data protection laws. Contrast, for instance, China’s strict Cybersecurity Law with Singapore’s more flexible Personal Data Protection Act (PDPA).
Ensuring AI compliance with multiple privacy regimes will therefore be a significant challenge.
The lack of standardised AI definitions and regulatory expectations across different jurisdictions creates hurdles for companies trying to implement AI-based financial solutions.
Additionally, many financial institutions struggle to demonstrate how their AI models make decisions, particularly when AI systems operate as “black boxes” with opaque decision-making processes.
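One way institutions can begin to open the black box is with simple model-agnostic attribution techniques. The sketch below is purely illustrative (the scoring function, feature names, weights, and applicant values are hypothetical, not any real institution’s model), but it shows how perturbing one input at a time can reveal which features drive a decision:

```python
# Hypothetical toy credit-scoring model; weights and features are
# illustrative only, not drawn from any real system.
def credit_score(income, debt_ratio, years_employed):
    """A stand-in 'black box': a simple weighted score."""
    return 0.5 * income / 10_000 - 2.0 * debt_ratio + 0.3 * years_employed

def perturbation_importance(model, baseline, perturbed):
    """Estimate each feature's influence by swapping it for a
    perturbed value and measuring the change in the score."""
    names = ["income", "debt_ratio", "years_employed"]
    base = model(*baseline)
    importances = {}
    for i, name in enumerate(names):
        trial = list(baseline)
        trial[i] = perturbed[i]  # perturb one feature at a time
        importances[name] = abs(model(*trial) - base)
    return importances

applicant = (60_000, 0.4, 5)    # baseline applicant
alternative = (30_000, 0.8, 0)  # perturbed feature values
print(perturbation_importance(credit_score, applicant, alternative))
```

Production-grade explainability tools build on the same perturbation idea with stronger statistical grounding, but even this minimal version gives a regulator or auditor a per-feature account of why a score moved.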
One of the biggest challenges is balancing innovation with risk management. AI can deliver real benefits, such as improved operational efficiency, fraud detection, and tailored financial products.
But if not properly regulated, it can also lead to unintended biases, discriminatory practices, and financial instability.
Regulators need to ensure that AI systems used in financial services are transparent, accountable, and do not inadvertently harm consumers.
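Expectations like “do not inadvertently harm consumers” become enforceable only when they are measurable. As an illustrative sketch (the decision records and group labels below are made up), one common starting point is the demographic parity gap, the difference in approval rates between applicant groups:

```python
# Hypothetical loan decisions labelled by applicant group;
# the data below is illustrative only.
def approval_rate(decisions, group):
    """Fraction of applicants in `group` who were approved."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in members) / len(members)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups.
    A large gap flags potential disparate impact for review."""
    return abs(approval_rate(decisions, group_a)
               - approval_rate(decisions, group_b))

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
gap = demographic_parity_gap(decisions, "A", "B")
print(f"approval-rate gap: {gap:.2f}")  # prints "approval-rate gap: 0.50"
```

A single metric never settles a fairness question on its own, but checks like this give supervisors a concrete, auditable number to anchor a review.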
While APAC’s regulatory landscape is still evolving, the region can adopt best practices from other jurisdictions. A risk-based approach similar to the EU’s classification of AI risks and proportional oversight could help regulators establish clearer compliance requirements.
Following the U.S., APAC regulators can integrate AI governance into existing financial laws instead of drafting entirely new legislation. A principles-based guidance model, similar to the U.K.’s flexible framework, could encourage responsible AI innovation while minimising compliance burdens.
To bridge the regulatory gap, APAC financial institutions should move quickly to develop internal AI governance frameworks that align with global regulatory trends, and work together to establish common AI risk assessment methodologies.
Ensuring data privacy compliance and enhancing explainability in AI-driven decision-making are equally critical steps toward responsible AI adoption.
Striking a Balance Between Innovation and Oversight
As APAC economies increasingly integrate AI into financial services, regulators face the challenge of maintaining oversight while fostering innovation.
A regional AI governance framework under organisations like ASEAN or APEC could facilitate cross-border regulatory compliance, while governments must enhance regulatory agencies’ AI expertise to keep pace with evolving risks.
Collaboration between industry and regulators is crucial in shaping effective AI policies. Financial firms working alongside policymakers can establish best practices for AI risk management.
Transparency and explainability in AI-driven financial models should remain a priority to align with global regulatory standards.
Public-private partnerships could also drive AI governance forward. By developing AI testing environments, regulators, AI developers, and financial institutions can assess compliance measures before full deployment. This approach would help create industry-wide standards that balance technological advancement with consumer protection.
The regulation of AI in APAC’s financial sector remains in flux, with countries experimenting with different approaches, from rigid laws to loosely defined voluntary guidelines.
So how can we as a region make sure that AI is deployed ethically and securely?
Featured image credit: Edited from Freepik