“FIs (financial institutions) are typically considered to be accountable for the behaviour of AI that they use, irrespective of whether it was built or procured.”
- MindForge AI Risk Management Operationalisation Handbook
Somewhere in your institution’s software stack, an AI model may be running on an update no one approved. Think of the patch to your AI product that was rolled out automatically last quarter: without your knowledge, it may be introducing cybersecurity, technology, and possibly even reputational risks.
Yet the contract with your vendor says nothing specific about it.
This challenge is one of the core problems that 24 leading institutions across the banking, insurance, and capital markets sectors, along with consulting and technology partners and industry associations, set out to address. Names on the list include DBS, HSBC, Standard Chartered, BlackRock, MSIG, AWS, Google Cloud and NVIDIA, all collaborating under Project MindForge.
Led by the Monetary Authority of Singapore (MAS), Project MindForge developed the MindForge AI Risk Management Operationalisation Handbook as a practical playbook for how financial institutions can get a handle on AI risk, including the risks that creep in when vendors update or modify the AI they supply.
When third-party vendors update their products, they may add AI features to non-AI products and services that didn’t previously have them, or change AI products or services that the financial institution is already using.
This can bring AI into the organisation’s ecosystem without proper oversight and controls in place to manage the risks that come with it. These unvetted additions are one of the ways shadow AI creeps into an organisation.
The crux of the problem is that the contracts financial institutions sign were written for software that is largely static, predictable, and tracked through release notes and change logs. AI behaves differently: it keeps updating and evolving after it has officially been deployed.
Accountability for that behaviour, however, sits firmly with the institution, including in scenarios like hallucinations or the exposure of customer data.
Yet many contract clauses for Singapore fintechs and financial institutions have yet to catch up with this reality. Their agreements contain no trigger requiring notification when, say, an AI component is added or modified.

According to the handbook, vendors tend to decline requests for training data disclosures, often because the data is commercially sensitive. Meanwhile, most organisations have no documented process for deciding whether to pursue the matter further.
Why Existing Vendor Frameworks Fall Short for AI
The principle of accountability in financial services has been around for quite a while. Financial institutions have always been responsible for the conduct of their vendors.
AI, however, throws a unique wildcard into the mix. It creates a specific version of exposure that existing vendor management frameworks aren’t built to handle. The handbook states,
“The use of AI products or services from third-party vendors, service providers, and contractors may introduce new AI-specific risks, especially as FIs shift towards using AI as SaaS.”
The issue has two roots, and they compound each other.
The first is opacity at the point of procurement. When you buy traditional software, you can test it and reasonably expect that what you evaluated is what you deployed. AI products, particularly foundation models accessed as a service, don’t work that way.
You often have no visibility into the underlying model and no reliable way to know how vendor updates will affect your specific use cases. The handbook is direct about this: some AI products or services “may not be fully transparent to the FI.”
The second problem is that AI’s opacity moves. A model you evaluated six months ago may behave differently today because the vendor retrained it or modified its guardrails. Unless your contract requires notification of material changes, you will not know this has happened until something goes wrong.
None of this implies bad intent on the vendor’s part; it simply reflects how AI products and services are built and maintained as live systems.
The combination of opacity and continuous change means that traditional procurement due diligence is structurally insufficient for AI. Assuming the product remains stable leaves a meaningful and growing gap in your institution’s risk posture.
How to Manage AI Risk in Third-Party Vendor Contracts
The handbook notes that institutions can consider several mitigating measures when disclosures by third-party vendors are incomplete.
Indemnification As A Limited Line of Defence
If a third-party vendor refuses to share details about how their AI model was trained, they may instead offer a contractual indemnification, a legal promise to cover costs if an Intellectual Property (IP) violation occurs.
This can push third parties to take risk more seriously and give institutions a way to recover financial losses if something goes wrong.
That said, indemnification only kicks in after the fact. It does little to stop problems from happening in the first place. Institutions should also keep in mind that some AI-related harms, like damage to customer relationships or reputation, cannot be undone by a payout.
Testing Third-Party AI Before You Buy
As part of their procurement process, institutions can test third-party AI products and services against key risk-related performance metrics. This is known as compensatory testing. It helps fill knowledge gaps about how an AI model or system actually behaves by putting it through a range of scenarios and observing where risks may emerge.
In short, it can be a practical way to learn what a vendor may not tell you upfront.
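To make this concrete, here is a minimal sketch of what a compensatory testing harness could look like in practice. The scenario design, the risk categories, and the query_vendor_model placeholder are illustrative assumptions, not prescriptions from the handbook.

```python
# Minimal sketch of compensatory testing: run a vendor's AI product
# against curated risk scenarios and score the responses per category.
# query_vendor_model and the scenario design are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str
    risk_category: str                    # e.g. "hallucination", "data_leakage"
    is_acceptable: Callable[[str], bool]  # checks whether a response passes

def query_vendor_model(prompt: str) -> str:
    """Placeholder for the call to the vendor's API."""
    raise NotImplementedError

def run_compensatory_tests(scenarios: list[Scenario]) -> dict[str, float]:
    """Return the pass rate for each risk category."""
    passed: dict[str, int] = {}
    total: dict[str, int] = {}
    for s in scenarios:
        total[s.risk_category] = total.get(s.risk_category, 0) + 1
        if s.is_acceptable(query_vendor_model(s.prompt)):
            passed[s.risk_category] = passed.get(s.risk_category, 0) + 1
    return {cat: passed.get(cat, 0) / n for cat, n in total.items()}
```

The pass rates per risk category then become documented inputs to the procurement decision, alongside whatever the vendor has disclosed.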
Getting an Outside Expert to Verify What Third Parties Won’t Show You
When a third party is unwilling to share details about how their AI system is controlled or safeguarded, institutions can opt to bring in a trusted external body like an auditor to independently review and verify those elements on the financial institution’s behalf.
This external attestation can confirm, for example, that the third party is meeting relevant regulatory requirements or has properly implemented recognised standards. While institutions may not get direct visibility into the third party’s systems, a credible independent sign-off can still provide meaningful assurance.
Embedding AI Risk Checks Into Every Procurement Stage
Managing third-party AI risk needs to be embedded across the entire lifecycle, from initial procurement through to ongoing use.
When evaluating AI products and services, institutions should identify and address gaps across their procurement and risk management processes. This can include information disclosures, legal review, vendor assessments, compensatory testing, and other relevant mitigating measures, including whether the third party’s AI aligns with the institution’s values and principles.

Once a product or service is in use, financial institutions should continuously monitor its performance and periodically reassess whether its risks are still being managed effectively.
If an AI product ends up being used in ways that go beyond its original scope, institutions should consider revisiting or supplementing the initial evaluation.
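As a rough illustration of what such a check could look like, an institution might register each product’s approved scope and flag observed usage that falls outside it. The structure and field names below are assumptions made for this sketch, not terms from the handbook.

```python
# Illustrative ongoing-use check: compare how an AI product is actually
# being used against the scope it was approved for, and flag anything
# that should trigger a re-evaluation. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ApprovedAIProduct:
    vendor: str
    approved_use_cases: set[str]
    last_reassessment: str  # ISO date of the last periodic risk review

def flag_scope_drift(product: ApprovedAIProduct,
                     observed_use_cases: set[str]) -> set[str]:
    """Return observed use cases that fall outside the approved scope."""
    return observed_use_cases - product.approved_use_cases

loan_bot = ApprovedAIProduct("ExampleVendor", {"loan_faq"}, "2025-01-15")
drift = flag_scope_drift(loan_bot, {"loan_faq", "credit_scoring"})
if drift:
    print(f"Re-evaluation needed for: {drift}")  # -> {'credit_scoring'}
```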
Building on Existing Cybersecurity and Procurement Frameworks
Financial institutions do not need to build their AI risk management approach from scratch. Most already have procurement and third-party risk management practices in place, including cybersecurity assessments, defined accountability structures, and processes for identifying, reviewing, mitigating and accepting risks.
These existing frameworks, including legal and cybersecurity reviews, can continue to be applied when assessing AI products and services. Before creating an entirely new AI-specific function, institutions should first ask where their current processes fall short and address those gaps through targeted improvements.
The Easy Starting Point
The handbook suggests a sequenced approach that begins with identification, building a clear picture of which vendors in your current stack supply AI products or services, including embedded AI in non-AI-primary products.
It recommends asking vendors directly to disclose whether a product includes or is connected to AI components, particularly at renewal or renegotiation points. This is a simple question that creates clarity and is an appropriate ask at any stage of a vendor relationship.
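As a lightweight sketch, such an inventory could start as little more than a list recording, per vendor product, whether AI is confirmed present, confirmed absent, or still undisclosed. All entries below are hypothetical.

```python
# Hypothetical first pass at an AI inventory across the vendor stack,
# including AI embedded in products that are not AI-first.

vendor_stack = [
    {"vendor": "ExampleCRM", "product": "CRM Suite",
     "ai_component": True, "note": "AI summarisation added in a recent update"},
    {"vendor": "ExamplePay", "product": "Payments Gateway",
     "ai_component": False, "note": "vendor confirmed no AI at last renewal"},
    {"vendor": "ExampleDocs", "product": "Document Processing",
     "ai_component": None, "note": "disclosure requested, awaiting response"},
]

# Follow up wherever AI is confirmed or the disclosure is still outstanding.
for p in vendor_stack:
    if p["ai_component"] is not False:
        print(p["vendor"], "/", p["product"], "->", p["note"])
```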
The second step is standardising disclosure requests. The MindForge consortium has published an AI Card template in the handbook’s appendix. This is a structured disclosure document covering model description, intended use, technical limitations, monitoring capabilities, and training data information, as showcased below:
MAS AI Card Template
The AI Card Template gives financial institutions a useful starting point for gathering information about AI products and services from their vendors. Institutions are encouraged to tailor it to suit their own needs and the specific context of their third-party relationships.
Using a standard template creates consistency and gives vendors a clear brief for what you need, making it more likely you will receive usable responses.
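As a rough sketch of how a completed card might be captured internally, the record below mirrors the disclosure categories listed above. The field names are shorthand for this sketch; the official template in the handbook’s appendix may name and group them differently.

```python
# Sketch of the AI Card's disclosure categories as a structured record.
# Field names are assumptions; see the handbook's appendix for the template.

from dataclasses import dataclass, field

@dataclass
class AICard:
    model_description: str               # what the model is and how it works
    intended_use: str                    # use cases the vendor supports
    technical_limitations: list[str]     # known failure modes and constraints
    monitoring_capabilities: list[str]   # logging, drift alerts, audit hooks
    training_data_summary: str           # provenance, cut-off date, known gaps
    undisclosed_items: list[str] = field(default_factory=list)  # declined items
```

Recording what a vendor declined to disclose alongside what it provided turns the gaps into explicit inputs to the decision process described next.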
The third step is developing a defined decision process for incomplete disclosures. Rather than making case-by-case calls without documented rationale, fintechs should establish a policy that sets out what information they require, what mitigations (indemnification, attestation, testing) they will accept in lieu of direct disclosure, and what the approval pathway looks like for products where disclosure is incomplete.
This does not need to be complex. A one-page decision framework, owned by risk and signed off by legal, creates far more defensibility than the current informal approach most fintechs are using.
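Such a framework can be simple enough to encode directly. The sketch below maps each disclosure item to the mitigations an institution might accept in its place; the specific mappings are purely illustrative.

```python
# One possible encoding of a one-page decision framework for incomplete
# disclosures: which mitigations are accepted in lieu of each missing
# item, and when to escalate. The mappings are illustrative only.

ACCEPTED_MITIGATIONS = {
    "training_data_summary": {"indemnification", "external_attestation"},
    "technical_limitations": {"compensatory_testing"},
    "monitoring_capabilities": {"external_attestation", "compensatory_testing"},
}

def route_procurement(missing: set[str], mitigations: set[str]) -> str:
    """Approve if every missing item is covered by an accepted mitigation;
    otherwise escalate for a documented risk decision."""
    uncovered = {item for item in missing
                 if not (ACCEPTED_MITIGATIONS.get(item, set()) & mitigations)}
    return "approve" if not uncovered else f"escalate: {sorted(uncovered)}"

print(route_procurement({"training_data_summary"}, {"indemnification"}))    # approve
print(route_procurement({"monitoring_capabilities"}, {"indemnification"}))  # escalate
```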
The final step is updating contracts with existing vendors to include AI notification provisions: obligations to notify the institution of material changes such as new AI features, significant model retraining, or changes to data handling practices.
This is a clause that most vendors will accept on request, and it provides the early warning system that currently does not exist.
Featured image edited by Fintech News Singapore, based on an image from Freepik