Recently, sources have revealed that OpenAI is gradually reducing its reliance on Microsoft for cloud computing, aiming to achieve greater independence and diversify its computing resources. This development follows OpenAI’s successful completion of a $6.6 billion funding round, as disclosed internally by CEO Sam Altman and CFO Sarah Friar.
Friar stated at a shareholder meeting that Microsoft’s current cloud computing capabilities do not meet OpenAI’s requirements for speed and efficiency. Consequently, OpenAI is proactively exploring alternative data center options to secure more efficient and reliable cloud services. According to the contract terms between OpenAI and Microsoft, such explorations are explicitly permitted.
Altman further noted that Microsoft cannot provision servers fast enough to support OpenAI’s rapid growth, potentially causing OpenAI to fall behind Elon Musk’s xAI. Musk reportedly plans to launch Grok 3, which he claims will be his most powerful AI model, by the end of this year, and xAI is currently building large-scale server infrastructure in Memphis.
To address this challenge, OpenAI is strengthening its partnership with Oracle. According to The Information, OpenAI announced its first collaboration agreement with Oracle in June this year, with Microsoft's involvement being relatively minor. However, this partnership will still contribute to Microsoft’s Azure revenue, as OpenAI will operate Azure infrastructure on Oracle servers.
Additionally, OpenAI is currently negotiating with Oracle to lease an entire data center in Abilene, Texas. It is estimated that by mid-2026, the Abilene data center could reach a capacity of nearly 1 gigawatt, accommodating hundreds of thousands of Nvidia AI chips. If energy supplies are sufficient, the data center's capacity could be expanded to 2 gigawatts.
Meanwhile, Microsoft also plans to grant OpenAI access to approximately 300,000 of Nvidia’s latest GB200 graphics processors at its data centers in Wisconsin and Atlanta. However, Altman has asked Microsoft to expedite the Wisconsin project to ensure OpenAI receives the necessary computing power in a timely manner.
To sustainably meet its growing computing demands and reduce costs, OpenAI also intends to rely more heavily on in-house AI chips in the future. To this end, it is collaborating with partners such as Broadcom and Marvell to design ASICs and has reserved capacity on TSMC’s next-generation A16 angstrom-class process. Mass production is expected to begin in the second half of 2026.
These initiatives demonstrate OpenAI's proactive efforts to achieve independence and diversify its cloud computing services. As artificial intelligence technology continues to develop and its applications expand, OpenAI’s demand for high-performance computing resources will keep increasing.