Sam Altman, CEO of OpenAI, has disclosed that the rollout of the latest model, GPT-4.5, is being phased in due to a current shortage of GPUs.
In a post on the X platform, Altman mentioned that GPT-4.5 is an exceptionally large and costly model. Before it can be made available to more ChatGPT users, the model requires "tens of thousands" of additional GPUs. According to the plan, GPT-4.5 will first be accessible to ChatGPT Pro subscribers this Thursday, followed by ChatGPT Plus users next week.
The enormous scale of GPT-4.5 may partly explain its high costs. OpenAI has set a steep price for using the model: $75 per million input tokens (roughly 750,000 words) and $150 per million tokens generated. These rates are 30 and 15 times the input and output rates of GPT-4o, respectively.
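To put those rates in concrete terms, here is a minimal sketch of the per-request arithmetic. The GPT-4.5 prices and the 30x/15x multipliers come from the figures above; the implied GPT-4o rates ($2.50 and $10 per million tokens) are derived from them, and the sample token counts are illustrative, not from the article.

```python
# Per-million-token rates quoted in the article for GPT-4.5;
# GPT-4o rates are implied by the stated 30x (input) and 15x (output) ratios.
GPT45_INPUT, GPT45_OUTPUT = 75.0, 150.0        # USD per 1M tokens
GPT4O_INPUT = GPT45_INPUT / 30                 # implied: $2.50 per 1M tokens
GPT4O_OUTPUT = GPT45_OUTPUT / 15               # implied: $10.00 per 1M tokens

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Dollar cost of one API call at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical call: 10,000 input tokens, 2,000 output tokens.
cost_45 = request_cost(10_000, 2_000, GPT45_INPUT, GPT45_OUTPUT)
cost_4o = request_cost(10_000, 2_000, GPT4O_INPUT, GPT4O_OUTPUT)
print(f"GPT-4.5: ${cost_45:.2f}  vs  GPT-4o: ${cost_4o:.2f}")
# The same request costs $1.05 on GPT-4.5 and under 5 cents on GPT-4o.
```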
Altman wrote in his post, "We've been growing rapidly, but now we're facing a GPU shortage. Next week, we'll add tens of thousands of GPUs and then roll out GPT-4.5 to Plus users... This isn't how we want to operate, but it's difficult to precisely predict growth spurts that lead to GPU shortages."
Altman has previously noted that a shortage of computing power is delaying the company's product launches. To address the issue in the coming years, OpenAI plans to develop its own AI chips and build out a vast network of data centers.