Singapore has launched the Model AI Governance Framework for Agentic AI, billed as the world's first comprehensive guide for enterprises to deploy agentic artificial intelligence responsibly.
Minister for Digital Development and Information Josephine Teo announced the framework at the World Economic Forum, with the Infocomm Media Development Authority leading its development.
The new framework builds on Singapore’s original Model AI Governance Framework introduced in 2020 and focuses specifically on agentic AI systems that can reason and take actions on behalf of users.
Unlike traditional or generative AI, agentic systems can perform tasks with a higher degree of autonomy, such as updating databases, processing transactions or making payments.
While this creates opportunities to automate routine work and improve productivity, it also introduces new risks, particularly around access to sensitive data, unauthorised actions and challenges in maintaining effective human oversight.
IMDA said these risks include automation bias, where users may over-trust systems that have performed reliably in the past.
Managing risks and maintaining human accountability
The framework is designed to help organisations understand and manage these risks by combining technical and organisational measures.
It emphasises that humans remain ultimately accountable for the actions of AI agents and that meaningful human control should be maintained throughout the system’s lifecycle.
The framework targets organisations deploying agentic AI, whether through in-house development or third-party solutions, and outlines four main areas of focus.
These include: assessing and bounding risks upfront by selecting appropriate use cases and limiting system autonomy; defining checkpoints that require human approval, especially for irreversible actions such as making payments; implementing technical controls such as baseline testing and restricting access to whitelisted services; and enabling end-user responsibility through transparency, education and training.
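To illustrate how approval checkpoints and service whitelisting might fit together in practice, here is a minimal sketch in Python. This is not from the framework itself; every name, service and action here is a hypothetical example of gating an agent's irreversible actions behind human sign-off.

```python
# Illustrative sketch only (hypothetical names, not from the IMDA framework):
# an agent action gate combining a service whitelist with a human-approval
# checkpoint for irreversible actions.

ALLOWED_SERVICES = {"crm", "email"}                    # whitelisted services
IRREVERSIBLE_ACTIONS = {"make_payment", "delete_record"}

def execute_action(service: str, action: str, approver=None) -> str:
    """Run an agent action, enforcing whitelist and approval checkpoints."""
    if service not in ALLOWED_SERVICES:
        # Technical control: the agent may only touch whitelisted services.
        return f"blocked: {service} is not whitelisted"
    if action in IRREVERSIBLE_ACTIONS:
        # Checkpoint: irreversible actions require explicit human sign-off.
        if approver is None or not approver(service, action):
            return f"pending: {action} awaits human approval"
    return f"executed: {action} on {service}"
```

In a real deployment the `approver` callback would route to a human reviewer rather than return immediately; the point is that the approval step sits outside the agent's own control.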
IMDA noted the framework was developed with input from both government agencies and private sector organisations and is intended to function as a living document that will evolve as new use cases and risks emerge.
The authority has invited organisations to submit feedback and case studies on responsible deployments.
It also plans to release additional guidelines focused on testing agentic AI applications for safety and reliability, building on its existing Starter Kit for testing large language model-based applications.
The initiative forms part of Singapore’s broader efforts to build a trusted AI ecosystem.
The country is working with regional and international partners through platforms such as the AI Safety Institute and is leading the ASEAN Working Group on AI Governance, alongside domestic tools including AI Verify and related testing frameworks.
Featured image: Edited by Fintech News Singapore, based on images by Frolopiaton Palm and rskorzus via Freepik