When something goes wrong inside an organisation, the instinct is usually to ask a simple question.
Who approved this?
That question has anchored corporate accountability for decades. Decisions were made by people, systems executed them, and responsibility could usually be traced back to a role, a team, or a signature.
Even as automation expanded, that basic logic held. Agentic AI unsettles it.
As AI systems gain the ability to plan, decide, and take actions across enterprise systems, the familiar lines of responsibility begin to blur.
These systems can now trigger transactions, interact with other systems, and act at machine speed, often without a human watching every step.
Singapore's Model AI Governance Framework for Agentic AI, released last week, is a direct response to this shift.
Rather than debating whether AI should be allowed to act, the country’s framework starts from the assumption that it already is.
Its focus is more uncomfortable and more practical.
Singapore's starting point is that if AI systems are allowed to act, someone must remain meaningfully accountable for what they do. And that someone is us.
Agentic AI Changes the Nature of Responsibility

Traditional AI systems largely sat on the advisory side of decision-making. They flagged risks, generated insights, or recommended next steps, but humans retained direct control over execution.
Agentic AI is different. It changes that dynamic.
Agentic systems can decide which tools to invoke, chain multiple actions together, interact with other agents, and adapt their behaviour as situations evolve.
They can trigger transactions, update records, send communications, or take operational actions without waiting for a human prompt.
The framework is explicit that this shift creates new accountability challenges.
Outcomes may emerge from the interaction of multiple agents rather than a single decision.
Actions may be taken at machine speed, outside the visibility of any individual operator. Responsibility can easily become fragmented across developers, vendors, and deploying organisations.
Without deliberate governance, the result is not just technical risk, but organisational ambiguity.
When something goes wrong, it becomes harder to answer who was responsible, who had authority, and who should have intervened.
Making Humans Meaningfully Accountable
Singapore’s response is not to limit autonomy outright, but to remove any ambiguity about where accountability sits.
The Model AI Governance Framework for Agentic AI repeatedly stresses that delegating tasks to AI agents does not mean delegating responsibility.
Organisations remain accountable for the behaviour of the systems they deploy, even when those systems operate independently and even when outcomes arise from complex interactions.
What is notable is the emphasis on meaningful accountability. It is not enough for responsibility to exist on paper.
We are expected to be able to explain why an agent was given certain permissions, what boundaries were placed around its behaviour, and how its actions are monitored and reviewed.
Accountability, in this framing, is not retrospective. It begins at design and continues through deployment, operation, and intervention.
If humans cannot realistically understand or control an agent’s scope of action, the framework suggests that the deployment itself may be irresponsible.

Rethinking Human Oversight Beyond Slogans
The framework also confronts a long-standing weakness in AI governance discussions.
The idea of “human in the loop” has often been treated as a catch-all safeguard, even when it no longer reflects how systems operate in practice.
Singapore’s guidance acknowledges that continuous human oversight does not scale once AI agents operate at speed and volume.
Expecting humans to approve or review every action is unrealistic, and in some cases misleading. It can create a false sense of control while introducing automation bias, where people defer too readily to system behaviour.
Instead, the framework calls for risk-based oversight. Human intervention should be required at points where actions are high-impact, irreversible, or legally sensitive.
Examples include executing financial transactions, deleting critical data, or communicating externally on behalf of an organisation.
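To make the idea concrete, a minimal sketch of such a risk-based gate might look like the following. Everything here, the action types, the approval queue, the function names, is an illustrative assumption; the framework describes the principle, not an implementation.

```python
# Hypothetical sketch of a risk-based oversight gate. Action types and
# the approval queue are illustrative assumptions, not part of
# Singapore's framework.
from dataclasses import dataclass

# Actions the framework flags as high-impact, irreversible, or legally
# sensitive; everything else defaults to low risk in this sketch.
HIGH_RISK_ACTIONS = {
    "execute_financial_transaction",
    "delete_critical_data",
    "send_external_communication",
}

@dataclass
class AgentAction:
    agent_id: str       # which agent is proposing the action
    action_type: str    # e.g. "update_record", "delete_critical_data"
    payload: dict       # parameters of the proposed action

def requires_human_approval(action: AgentAction) -> bool:
    """Return True when the risk criteria demand a human gate."""
    return action.action_type in HIGH_RISK_ACTIONS

def dispatch(action: AgentAction, approval_queue: list) -> str:
    """Route high-risk actions to humans; execute low-risk ones directly."""
    if requires_human_approval(action):
        approval_queue.append(action)   # a human reviews before execution
        return "pending_human_approval"
    return "executed_autonomously"      # low risk: proceed at machine speed

# Example: a record update proceeds, a payment waits for a person.
queue: list = []
print(dispatch(AgentAction("agent-7", "update_record", {"id": 42}), queue))
print(dispatch(AgentAction("agent-7", "execute_financial_transaction",
                           {"amount": 10_000}), queue))
```

The point of the sketch is that autonomy and oversight are not opposites: low-stakes actions run at machine speed, while the narrow set of consequential ones is forced through a human checkpoint.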
At the same time, organisations are expected to regularly test whether their oversight mechanisms are actually working.
Oversight is treated as something that must be maintained and validated over time, not assumed to be effective once implemented.
Giving Agents Authority, But Not a Blank Cheque
One of the more practical aspects of the framework is how it treats AI agents within enterprise systems. Rather than viewing them as passive tools, it treats them as actors operating with delegated authority.
This has concrete governance implications.
Agents are expected to have defined identities, scoped permissions, and clear limits on what they are allowed to do. The principle of least privilege is applied directly to agentic systems.
An agent should only be able to access the tools and data necessary for its function, and nothing more.
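In code, a least-privilege scope for an agent identity could be sketched roughly as follows. The schema and tool names are hypothetical, chosen only to illustrate the deny-by-default principle.

```python
# Hypothetical sketch of least-privilege scoping for an agent identity.
# The framework states the principle, not a schema.
class AgentIdentity:
    def __init__(self, agent_id: str, allowed_tools: set[str]):
        self.agent_id = agent_id
        # Explicit allow-list: anything not granted here is denied.
        self.allowed_tools = frozenset(allowed_tools)

    def can_invoke(self, tool: str) -> bool:
        return tool in self.allowed_tools

# An invoicing agent gets exactly the tools its function needs, nothing more.
invoicing_agent = AgentIdentity(
    agent_id="invoicing-agent-01",
    allowed_tools={"read_invoices", "create_invoice_draft"},
)

assert invoicing_agent.can_invoke("read_invoices")
assert not invoicing_agent.can_invoke("delete_critical_data")  # denied by default
```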
Traceability is equally important. Organisations should be able to distinguish between actions taken by humans, actions initiated independently by agents, and actions carried out by agents on behalf of humans.
This distinction matters not just for audits, but for assigning responsibility when incidents occur.
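One plausible way to capture that distinction is an audit record that names the actor category explicitly. The field names below are assumptions for illustration, not a format prescribed by the framework.

```python
# Hypothetical sketch of an audit record distinguishing the three actor
# categories the framework describes. Field names are illustrative.
import json
from datetime import datetime, timezone

# Who actually initiated the action:
#   "human"            - a person acted directly
#   "agent_autonomous" - an agent acted on its own initiative
#   "agent_delegated"  - an agent acted on behalf of a named human
def audit_record(actor_type: str, actor_id: str, action: str,
                 on_behalf_of: str | None = None) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor_type": actor_type,
        "actor_id": actor_id,
        "on_behalf_of": on_behalf_of,  # set only for delegated agent actions
        "action": action,
    })

print(audit_record("agent_delegated", "invoicing-agent-01",
                   "create_invoice_draft", on_behalf_of="alice@example.com"))
```

Logged this way, an investigator can tell at a glance whether a person acted, an agent acted alone, or an agent acted under someone's delegated authority.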
The framework’s approach mirrors long-standing practices in regulated industries. Authority is delegated deliberately, access is controlled tightly, and actions are logged with accountability in mind.
Agentic AI is expected to operate under the same discipline.
Accountability Does Not Stop at the Organisation’s Boundary
The framework also recognises that agentic AI systems rarely exist in isolation. They often rely on models developed by one provider, agent platforms from another, and tools or APIs maintained by multiple third parties.
Singapore’s position is clear. Accountability cannot be outsourced.
Organisations deploying agentic AI are expected to understand the limitations of their vendors, assess the controls available to them, and ensure that any gaps in visibility or intervention are acceptable within their risk tolerance.
Where responsibilities are shared across parties, they should be explicitly defined through governance arrangements and contracts.
The framework acknowledges that perfect transparency is often unattainable, and that organisations cannot realistically maintain absolute control over every link in the AI supply chain.
What it requires is honesty about those limits and restraint in deploying agentic systems where accountability cannot be clearly established.
A Governance Question That Will Not Go Away
Singapore’s framework does not pretend that agentic AI can be made perfectly safe or entirely predictable. It assumes the opposite.
Systems will fail, behaviours will surprise, and not every outcome will be traceable to a single decision.
What it insists on is clarity. If AI agents are given authority, someone must be prepared to answer for their actions.
Accountability cannot disappear simply because a system acted autonomously or faster than a human could intervene.
In most organisations, mistakes made by people carry consequences. Decisions are reviewed, responsibility is assigned, and accountability follows.
As AI agents take on more responsibility inside enterprises, the harder question is whether organisations are prepared to apply the same standard.
If an outcome is unacceptable when a human causes it, should it become acceptable when an AI does?
And if the answer is no, who is prepared to be held accountable when the system acts on their behalf?
Featured image: Edited by Fintech News Singapore based on images by bunnyhop and f11photo via Freepik.