OpenAI has added an enhanced version of its "reasoning" AI model, o1, to its developer API under the name o1-pro.
Compared to o1, o1-pro utilizes significantly more computational resources to deliver "consistently superior responses." Currently, this model is accessible only to a limited group of developers—those who have spent at least $5 on OpenAI's API services—and it comes with a hefty price tag.
To be precise, OpenAI charges $150 for every million tokens input into the model (approximately 750,000 words) and $600 for every million tokens generated by the model. This pricing is twice the input cost of OpenAI's GPT-4.5 and ten times the standard output cost of the base o1 model.
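To put those rates in perspective, here is a rough, illustrative sketch of what a single request might cost. Only the per-million prices come from OpenAI's published pricing quoted above; the token counts in the example are hypothetical.

```python
# Rough cost estimate for a single o1-pro API request, using the rates
# quoted above: $150 per million input tokens, $600 per million output
# tokens. Token counts below are illustrative, not OpenAI figures.

INPUT_PRICE_PER_MILLION = 150.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_MILLION = 600.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one o1-pro request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION

# Example: a 10,000-token prompt that yields 5,000 tokens of output
# (for reasoning models, hidden reasoning tokens are billed as output too).
print(f"${estimate_cost(10_000, 5_000):.2f}")  # -> $4.50
```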
OpenAI hopes that the exceptional performance of o1-pro will justify its high cost and encourage developers to adopt it.
An OpenAI spokesperson stated that o1-pro is a variant of o1 designed to leverage additional computational power for deeper reasoning, offering better solutions to the most challenging problems. In response to numerous requests from the developer community, the company decided to integrate this model into the API to provide more reliable outputs.
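For developers who do want to try it, a request might look roughly like the sketch below. It assumes the OpenAI Python SDK, an API key in the environment, and an account that meets the $5 spend requirement; the exact method and parameter names may vary by SDK version.

```python
# Minimal sketch of calling o1-pro through the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set and the account has API access to o1-pro.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Prove that the square root of 2 is irrational.",
)

print(response.output_text)
```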
However, o1-pro has been available to ChatGPT Pro subscribers on OpenAI's chatbot platform, ChatGPT, since December of last year, and initial feedback has not been overwhelmingly positive. Users noted that the model struggles with Sudoku puzzles and can be stumped by simple optical-illusion jokes.
Furthermore, internal benchmark tests conducted by OpenAI at the end of last year revealed that o1-pro performs only marginally better than the standard o1 when tackling programming and math problems.