OpenAI’s o3-mini and o3-mini-high now available in response to Chinese AI models

OpenAI’s o3-mini offers faster performance and improved efficiency…

In response to the new wave of efficient AI models from China, such as DeepSeek and Alibaba’s offering, OpenAI has introduced OpenAI o3-mini, a highly cost-effective model that is available immediately. It is specifically intended for science, mathematics, and programming tasks where low latency and low cost are key.

The new model includes features aimed at STEM professionals: for each task, users can choose the level of reasoning effort and thereby trade deeper reasoning against latency, as sketched below. However, o3-mini does not support vision capabilities, so OpenAI o1 remains the recommended choice for visual reasoning tasks.
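As a minimal sketch of the reasoning-effort setting, the example below uses the OpenAI Python SDK’s `reasoning_effort` parameter with an illustrative prompt. The prompt and the chosen effort level are assumptions for demonstration; actual availability and defaults depend on your account tier and SDK version.

```python
# Minimal sketch: selecting the reasoning effort for an o3-mini request
# with the OpenAI Python SDK. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low" favors speed, "medium" balances, "high" favors depth
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)

print(response.choices[0].message.content)
```

For latency-sensitive workloads, setting `reasoning_effort` to "low" is the natural choice; "high" spends more tokens on internal reasoning for harder science, math, and coding problems.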

Users on ChatGPT Plus, Team, and Pro can access OpenAI o3-mini starting today, with Enterprise access slated for February. o3-mini replaces OpenAI o1-mini in the model selector, offering higher usage limits and reduced latency.

As part of this upgrade, the daily message limit for Plus and Team users will be tripled, rising from 50 messages with o1-mini to 150 with o3-mini. In addition, o3-mini now supports a search feature to provide up-to-date answers with links to relevant sources.

Free plan users can also try OpenAI o3-mini by selecting “Reason” in the message composer or regenerating a response. This marks the first time a reasoning model has been made available to free ChatGPT users.

On the safety front, OpenAI reports that, like OpenAI o1, o3-mini significantly outperforms GPT-4o on challenging safety and jailbreak evaluations.

In short, the launch of OpenAI’s new model addresses the need to maintain strong reasoning performance while driving down costs: according to OpenAI, per-token pricing has fallen by 95% since GPT-4’s launch.
