Amazon Web Services (AWS) and OpenAI have entered a multi-year partnership to run and scale advanced artificial intelligence (AI) workloads on AWS infrastructure.
The US$38 billion commitment gives OpenAI access to hundreds of thousands of NVIDIA GPUs via Amazon EC2 UltraServers, with the ability to scale to tens of millions of CPUs.
The collaboration will support the development of next-generation AI models and help millions of users continue to get value from ChatGPT.
AWS said the partnership draws on its experience managing secure, large-scale AI systems.
The infrastructure clusters NVIDIA GB200 and GB300 GPUs on Amazon EC2 UltraServers to deliver low-latency performance and efficient processing across interconnected systems.

“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone,” said OpenAI co-founder and CEO Sam Altman.

“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrate why AWS is uniquely positioned to support OpenAI’s vast AI workloads,” said Matt Garman, CEO of AWS.

The collaboration builds on earlier work between the two companies.
OpenAI’s open-weight foundation models are available on Amazon Bedrock, giving AWS customers additional model options for tasks such as coding, data analysis, and scientific research.
Deployment of the new compute capacity is expected to be completed by the end of 2026, with further expansion planned through 2027.
Featured image: Edited by Fintech News Singapore, based on images by JASON REDMOND / AFP via Getty Images, and vart_dant via Freepik