OpenAI Releases o3-mini Reasoning Model, Free for ChatGPT Users

2025-02-01

Following two weeks of anticipation stoked by OpenAI CEO Sam Altman, the AI giant today officially launched its new o3-mini reasoning model across ChatGPT and its API services. Notably, a rate-limited version of the model is available to free ChatGPT users for the first time, giving a much broader audience access to OpenAI's latest advancements.

The o3-mini delivers significant performance improvements over its predecessor, the o1-mini: response times are 24% faster, and answer accuracy has been further improved. Like the o1-mini, the o3-mini does not just return an answer; it also surfaces its reasoning process, giving users more insight into how the model arrives at its conclusions.

Developers can now use the o3-mini through OpenAI's API services, including the Chat Completions API, the Assistants API, and the Batch API. This gives them more flexibility in building and deploying applications on top of OpenAI's reasoning models.
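
For a concrete sense of what this looks like in practice, below is a minimal sketch of calling o3-mini through the Chat Completions API with OpenAI's official Python SDK. The prompt is illustrative only, and the snippet assumes an OPENAI_API_KEY environment variable is set.

```python
from openai import OpenAI

# The client picks up OPENAI_API_KEY from the environment by default.
client = OpenAI()

# Minimal Chat Completions request targeting the new o3-mini model.
response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "user", "content": "Explain why the sky is blue in two sentences."},
    ],
)

print(response.choices[0].message.content)
```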

In ChatGPT, the o3-mini runs at a medium reasoning effort by default, balancing response speed against answer accuracy. Paying customers also get access to a more capable variant, o3-mini-high, which reasons harder at the cost of somewhat longer response times. Pro users enjoy unrestricted access to both o3-mini and o3-mini-high.
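
On the API side, the knob that corresponds to these effort tiers appears to be the reasoning_effort parameter of the Chat Completions API, which accepts "low", "medium", or "high". The sketch below simply raises it to "high" for a harder prompt; it illustrates the parameter rather than how ChatGPT itself routes o3-mini-high requests.

```python
from openai import OpenAI

client = OpenAI()

# Ask for more deliberate reasoning: "high" trades extra latency for
# stronger answers on harder problems, analogous to o3-mini-high in ChatGPT.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # accepted values: "low", "medium", "high"
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."},
    ],
)

print(response.choices[0].message.content)
```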

Beyond Pro, ChatGPT Plus and Team subscribers can also use the o3-mini. To meet demand, OpenAI has tripled the daily message limit for Plus and Team users to 150 messages per day. Only Pro subscribers, who pay $200 per month, get unlimited use of the o3-mini.

This is the first time ChatGPT's free users can try one of OpenAI's reasoning models. They simply select the Reason option in the chat interface; although usage is rate-limited, much like free access to the existing GPT-4o, it is enough to give them a real taste of what reasoning models can do.