Swift, the global financial messaging cooperative, has announced two AI-driven experiments with member banks to combat fraud in cross-border payments, an effort that could save the industry billions in fraud-related costs.
The first pilot aims to enhance Swift’s existing Payment Controls service, which detects fraud indicators, by using an AI model to create a more accurate picture of potential fraud activity.
This enhancement will utilise historical patterns on the Swift network, refined with real-world data from Payment Controls customers.
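To make the idea concrete, the sketch below shows one way an anomaly score could be derived from historical payment patterns. It is purely illustrative: the choice of model (an isolation forest), the feature names, and the synthetic data are assumptions for this example, not details Swift has published.

```python
# Illustrative only: Swift has not published its model design.
# This sketch assumes a simple unsupervised detector (an isolation
# forest) trained on hypothetical per-payment features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical historical features: amount (log scale), hour of day,
# and days since the payment corridor was last used by this account.
historical = np.column_stack([
    rng.normal(8.0, 1.0, 10_000),   # log(amount)
    rng.integers(0, 24, 10_000),    # hour of day
    rng.exponential(5.0, 10_000),   # days since corridor last used
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# Score a new payment: negative scores indicate anomalous activity
# that a service like Payment Controls might flag for review.
new_payment = np.array([[12.5, 3, 180.0]])  # large, 3 a.m., dormant corridor
print(model.decision_function(new_payment))
```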
In a separate experiment, Swift has teamed up with 10 leading financial institutions, including BNY Mellon, Deutsche Bank, DNB, HSBC, Intesa Sanpaolo, and Standard Bank, to test AI technology for analysing anonymised data shared across institutions.
This initiative could transform confidential data sharing and improve global fraud detection.
Additionally, the tests could lead to wider use of information sharing in fraud detection, building on the success such sharing has already shown in assessing cybersecurity threats.
The group will employ secure data collaboration and federated learning technologies, enabling financial institutions to exchange relevant information while maintaining strong privacy controls.
Swift’s AI anomaly detection model will analyse this enriched dataset to identify potential fraud patterns.
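The sketch below illustrates the federated-averaging idea behind such a setup: each institution trains a model locally and shares only its parameters, never raw transaction data, with a coordinator that aggregates them. All names, the logistic-regression model, and the simple averaging scheme are assumptions for illustration; Swift has not published implementation details.

```python
# Minimal federated-averaging sketch (illustrative; not Swift's design).
# Each bank fits a local fraud model on private data and shares only
# its weights; a coordinator averages them into a global model.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One bank's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid
        w -= lr * X.T @ (preds - y) / len(y)  # gradient step
    return w

# Hypothetical private datasets at three banks (features, fraud labels).
banks = []
for _ in range(3):
    X = rng.normal(size=(500, 4))
    y = (X @ np.array([1.5, -2.0, 0.5, 1.0])
         + rng.normal(0, 0.5, 500) > 0).astype(float)
    banks.append((X, y))

global_w = np.zeros(4)
for _ in range(10):
    # Each bank trains locally; only weight vectors leave the bank.
    local_ws = [local_update(global_w, X, y) for X, y in banks]
    global_w = np.mean(local_ws, axis=0)  # coordinator averages updates

print("Global model weights after federated averaging:", global_w)
```

The privacy property follows from what crosses institutional boundaries: in this scheme only the weight vectors are exchanged, so each bank's transaction data stays in-house while the aggregated model still benefits from all participants' fraud patterns.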
Fraud cost the financial industry US$485 billion in 2023. AI has significant potential to reduce these costs and help achieve the G20’s goal of faster cross-border payments.
Tom Zschach, Chief Innovation Officer at Swift, said:
“AI has great potential to significantly reduce fraud in the financial industry. That’s an incredibly exciting prospect, but one that will require strong collaboration.
Swift has a unique ability to bring financial organisations together to harness the benefits of AI in the interests of the industry, and we’re excited by the potential of both of these pilots to help further strengthen the cross-border payments ecosystem.”
Swift has built an AI governance framework in collaboration with its community to ensure that accuracy, explainability, fairness, auditability, security, and privacy are integral to every aspect of its AI applications.
These pilots are rooted in the responsible use of AI and are aligned with emerging global standards and regulations, such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the EU AI Act.