OpenAI o3
OpenAI o3 is a reflective generative pre-trained transformer (GPT) model developed by OpenAI as a successor to OpenAI o1. It is designed to devote additional deliberation time when addressing questions that require step-by-step logical reasoning. On January 31, 2025, OpenAI released a smaller model, o3-mini, followed on April 16, 2025, by o3 and o4-mini.
History
The OpenAI o3 model was announced on December 20, 2024, with the designation "o3" chosen to avoid a trademark conflict with the mobile carrier brand O2. OpenAI invited safety and security researchers to apply for early access to these models until January 10, 2025. As with o1, there are two models: o3 and o3-mini.
On January 31, 2025, OpenAI released o3-mini to all ChatGPT users (including the free tier) and some API users. OpenAI describes o3-mini as a "specialized alternative" to o1 for "technical domains requiring precision and speed". o3-mini offers three reasoning effort levels: low, medium, and high; the free tier uses medium. The variant that uses more compute, called o3-mini-high, is available to paid subscribers, and subscribers to ChatGPT's Pro tier have unlimited access to both o3-mini and o3-mini-high.
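The reasoning effort level is also exposed as an API setting. A minimal sketch, assuming the openai Python SDK and its reasoning_effort parameter for o-series models (the prompt is illustrative; availability of the model name depends on API access tier):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask o3-mini to use its highest reasoning effort level.
    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="high",  # one of "low", "medium", "high"
        messages=[
            {"role": "user", "content": "Prove that the square root of 2 is irrational."}
        ],
    )
    print(response.choices[0].message.content)

Higher effort levels let the model spend more hidden reasoning tokens before answering, trading cost and latency for accuracy.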
On February 2, OpenAI launched OpenAI Deep Research, a ChatGPT service that uses a version of o3 to compile comprehensive, web-search-based reports within 5 to 30 minutes.
On February 6, in response to pressure from rivals like DeepSeek, OpenAI announced an update aimed at enhancing the transparency of the thought process in its o3-mini model.
On February 12, OpenAI increased the rate limit for o3-mini-high from 50 requests per week to 50 requests per day for ChatGPT Plus subscribers, and added file and image upload support.
On April 16, 2025, OpenAI released o3 and o4-mini, the latter a successor to o3-mini.
Capabilities
Reinforcement learning was used to teach o3 to "think" before generating answers, using what OpenAI refers to as a "private chain of thought". This approach lets the model plan ahead and work through a series of intermediate reasoning steps before answering, at the cost of additional computing power and increased response latency.
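The private chain of thought itself is not returned to the user, but its length is reported in the API's token accounting. A minimal sketch, assuming the openai Python SDK's usage object with its completion_tokens_details.reasoning_tokens field (an illustration of the API surface, not of OpenAI's internal mechanism):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": "How many primes are there below 100?"}],
    )

    # The hidden reasoning steps are billed as output tokens and reported
    # separately from the tokens of the visible answer.
    details = response.usage.completion_tokens_details
    visible = response.usage.completion_tokens - details.reasoning_tokens
    print("visible answer tokens:", visible)
    print("hidden reasoning tokens:", details.reasoning_tokens)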
o3 demonstrates significantly better performance than o1 on complex tasks, including coding, mathematics, and science. OpenAI reported that o3 achieved a score of 87.7% on the GPQA Diamond benchmark, which contains expert-level science questions not publicly available online.
On SWE-bench Verified, a software engineering benchmark assessing the ability to solve real GitHub issues, o3 scored 71.7%, compared to 48.9% for o1. On Codeforces, o3 reached an Elo rating of 2727, whereas o1 scored 1891.
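For scale, Codeforces ratings follow the Elo convention, under which a rating gap translates into an expected head-to-head score. A small illustrative calculation using the standard Elo formula (standard rating arithmetic, not an OpenAI benchmark method):

    def elo_expected_score(rating_a: float, rating_b: float) -> float:
        """Expected score of player A against player B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    # o3 (2727) versus o1 (1891): a gap of 836 rating points.
    print(f"{elo_expected_score(2727, 1891):.3f}")  # ~0.992

That is, a rating gap of this size corresponds to the higher-rated player winning roughly 99% of the time.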
On the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) benchmark, which evaluates an AI system's ability to acquire new skills and solve novel logic problems, o3 attained three times the accuracy of o1.