OpenAI Secures $38 Billion Cloud and AI Infrastructure Deal with Amazon
OpenAI secures a $38 billion, seven-year deal with Amazon for cloud and AI infrastructure, utilizing Nvidia chips to train and run its AI models.
Overview
- OpenAI has entered a $38 billion, seven-year agreement with Amazon to access cloud services and AI infrastructure for its operations.
- The deal provides OpenAI with hundreds of thousands of Nvidia graphics processors, including GB200 and GB300 AI accelerators, for training and running its AI models.
- OpenAI plans to utilize Amazon Web Services (AWS) immediately, with full capacity expected to be deployed by the end of 2026.
- This agreement follows OpenAI's recent restructuring, which increased its operational and financial independence, reducing reliance on Microsoft.
- OpenAI CEO Sam Altman has committed to significant investment in computing resources, targeting 30 gigawatts of capacity, underscoring the scale of future AI infrastructure needs.
Analysis
Center-leaning sources frame this story with a cautionary tone, highlighting the immense scale of AI investments while emphasizing the significant systemic risks involved. They use evocative language to describe the "tangled web" of deals and the potential for an "AI bubble," suggesting a fragile market where a single failure could trigger widespread collapse.
FAQ
Previously, Microsoft was OpenAI's exclusive cloud provider following a $10 billion investment in 2023. However, after a recent restructuring, OpenAI is no longer required to get Microsoft's approval to source computing from other firms. This deal with Amazon signals a reduction in OpenAI’s reliance on Microsoft, enabling it to diversify and expand its cloud infrastructure independently[1].
The deal is a major win for Amazon Web Services (AWS), which had been perceived as lagging behind Microsoft and Google in the AI cloud race. By securing OpenAI as a high-profile client, AWS bolsters its reputation as a top infrastructure provider for AI, potentially reshaping market dynamics and intensifying competition among the three cloud giants[2].
The $38 billion, seven-year commitment represents a significant portion of OpenAI's broader strategy to secure massive computing resources for AI development. CEO Sam Altman has outlined plans to spend over $1.4 trillion on infrastructure, indicating both the scale of OpenAI's ambitions and the financial risks involved, as the company has not yet detailed how it will raise the necessary capital[2].
OpenAI will gain access to 'hundreds of thousands' of Nvidia GPUs, including advanced AI accelerators like the GB200 and GB300. These chips are critical for training large language models and generative AI systems, as they deliver the high-performance computing power required for complex parallel processing tasks fundamental to modern AI development[1].
OpenAI will begin using AWS capacity immediately, with the full planned infrastructure set to be operational by the end of 2026. The agreement also includes options for further expansion into 2027 and beyond, underscoring the growing and potentially open-ended demand for AI compute resources[1].